\section{Introduction} \label{sec:intro} Deep learning has achieved significant progress in computer vision tasks. This success can be partly attributed to the availability of large-scale data~\cite{DBLP:journals/ijcv/RussakovskyDSKS15}. However, acquiring enough labeled data is infeasible in some situations due to event scarcity or expensive labor costs. Inspired by the human visual system, which can learn new classes from only a few samples, few-shot learning (FSL) has emerged as a promising paradigm that aims to equip a learner with fast adaptability to novel concepts. For FSL, there are usually a base set and a novel set that share no class overlap. A model is trained on the base set with sufficient labeled data, and is further adapted to categorize the unseen classes in the novel set with scarce labeled data. In particular, the performance of few-shot classification relies heavily on the model's generalization ability. Recent works~\cite{DBLP:conf/iclr/ChenLKWH19,DBLP:conf/eccv/TianWKTI20} have shown that a model trained on the base set with standard supervision can achieve impressive results on the novel set. However, we argue that the learned model tends to be overly discriminative to the training data in the base set, leading to sub-optimal performance when evaluated on unseen classes. This is because deep networks are likely to memorize specific training statistics, resulting in overconfident predictions and poor generalization ability~\cite{DBLP:conf/nips/ThulasidasanCBB19}. To solve this issue and make the model produce sufficiently discriminative representations of unseen test data, it is essential to introduce uncertainty into the input data and to regularize the model during training.
Data mixing methods introduce uncertainty and reduce the risk of over-memorizing input data by blending original images and labels, and have recently proven effective for training more general and transferable models for image classification~\cite{DBLP:conf/wacv/Mangla0SKBK20,DBLP:conf/iccv/YunHCOYC19,DBLP:conf/aaai/HuangWT21}. In this paper, we propose a simple yet effective regularization method called Semantically Proportional Patchmix (\textbf{SePPMix}) to improve the performance of FSL. In SePPMix, we divide the input images into $N\times N$ patches. The patches are randomly cut from one image and pasted onto another image at their respective original locations, which prevents the model from overfitting to specific structures of the training data. Furthermore, rather than directly using the area of the patches, the label of the mixed image is generated from the semantic proportion of the patches, which is estimated by class activation maps (CAMs)~\cite{DBLP:conf/cvpr/ZhouKLOT16}. To increase the number of mixed samples and learn more robust representations of the data, we rotate the mixed images and predict the rotation angle as an auxiliary task. We conduct extensive experiments on two widely used datasets, and the empirical results show that the proposed method can significantly improve few-shot image classification performance over the corresponding baseline methods. Remarkably, under the same experimental setting, the proposed method improves the 1-shot and 5-shot tasks by nearly \textbf{6\%} and \textbf{5\%}, respectively, on both miniImageNet and CUB. \section{RELATED WORKS} \label{sec:format} \textbf{Few-shot learning} aims to build a model from the available training data of base classes that can classify unseen novel classes given only a few labeled examples per class.
One popular paradigm to tackle few-shot classification is meta-learning, which can be roughly divided into two streams: optimization-based methods~\cite{DBLP:conf/icml/FinnAL17,DBLP:journals/corr/abs-1803-02999,DBLP:conf/iclr/RusuRSVPOH19} and metric-based methods~\cite{DBLP:conf/nips/VinyalsBLKW16, DBLP:conf/nips/SnellSZ17, luo2021rectifying, DBLP:conf/cvpr/ZhangCLS20}. The first class of methods, i.e., learning-to-learn methods, aims to learn a suitable model initialization so that the model can adapt quickly with a few labeled examples. The second category, learning-to-compare methods, aims to project input samples into an embedding space where data from different classes can be distinguished using distance metrics. Apart from these, recent works~\cite{DBLP:conf/iclr/ChenLKWH19,DBLP:conf/eccv/TianWKTI20} adopt a transfer learning strategy that trains the model on all the base classes and then uses it to extract feature representations of input data from the novel classes, showing competitive performance. Our work follows this school of thought and further improves the generalization of the model. \textbf{Data augmentation} has been widely used to train deep learning models. Regional-erasing methods~\cite{DBLP:conf/iccv/SinghL17,DBLP:journals/corr/abs-1708-04552} erase part of the training images, encouraging the networks to better utilize the entire context of the data.
Mixup~\cite{DBLP:conf/iclr/ZhangCDL18} generates images by linearly combining data and fusing their labels with the same coefficients, and demonstrates its generalization ability on large datasets; Manifold Mixup~\cite{DBLP:conf/icml/VermaLBNMLB19} applies Mixup to intermediate layers, leading to smoother decision boundaries that benefit feature representation; Cutmix~\cite{DBLP:conf/iccv/YunHCOYC19} combines the advantages of Cutout~\cite{DBLP:journals/corr/abs-1708-04552} and Mixup, producing a new image by cutting one random patch from an image and pasting it onto another. Different from Cutmix, Snapmix~\cite{DBLP:conf/aaai/HuangWT21} generates the target label of the mixed image by estimating its intrinsic semantic composition, achieving top-level performance on fine-grained data. However, these data mixing methods are not specifically designed for limited data, which may result in suboptimal performance on FSL tasks. \section{METHOD} \label{sec:pagestyle} \subsection{Problem Formulation} Few-shot learning usually involves two disjoint datasets $\mathcal D_b$ and $\mathcal D_n$. A model is first trained on the sufficiently labeled dataset $\mathcal D_b$, and then performs the FSL task on the novel dataset $\mathcal D_n$. The common way to build the FSL task is the $N$-way $K$-shot setting, which requires classifying $N$ classes sampled from $\mathcal D_n$ given $K$ labeled examples per class. The few labeled examples form the support set $\mathcal S = \{x_n^j, y_n^j\}_{j=1}^{N \times K}$, which contains $N$ classes with $K$ examples each. The performance is evaluated on the query set $\mathcal Q = \{x_n^j, y_n^j\}_{j=N \times K + 1}^{N \times K + H}$, where $H$ denotes the number of unlabeled examples and the data in $\mathcal Q$ is sampled from the same $N$ classes in each episode.
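To make the $N$-way $K$-shot episode construction above concrete, a minimal sketch of sampling one episode might look as follows (the class-to-examples data layout and the function name are illustrative assumptions, not from the paper):

```python
import random

def sample_episode(novel_set, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode from `novel_set`, a dict
    mapping class label -> list of examples (hypothetical layout)."""
    classes = random.sample(sorted(novel_set), n_way)
    support, query = [], []
    for label in classes:
        examples = random.sample(novel_set[label], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]  # K labeled shots
        query += [(x, label) for x in examples[k_shot:]]    # unlabeled queries
    return support, query
```

Here the query-set size $H$ equals `n_way * n_query`, and both sets draw from the same $N$ sampled classes, matching the episode definition above.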
Most meta-learning methods~\cite{DBLP:conf/nips/VinyalsBLKW16,DBLP:conf/icml/FinnAL17,DBLP:conf/nips/SnellSZ17} adopt the same episodic scheme for training on $\mathcal D_b$ as for evaluation, so that the training and evaluation stages are kept consistent. However, recent works~\cite{DBLP:conf/iclr/ChenLKWH19,DBLP:conf/iclr/DhillonCRS20,DBLP:conf/eccv/TianWKTI20} show that training the model on the whole $\mathcal D_b$ in a supervised manner is effective for the FSL task. We follow the same idea and utilize our proposed method to train a more generalized embedding network on $\mathcal D_b$, which performs better than the baseline when tested on $\mathcal D_n$. \begin{figure}[t] \centering \includegraphics[scale=0.46]{figure2} \caption{An overview of SePPMix. $S_a$ and $S_b$ are the semantic information maps of input images $x_a$ and $x_b$ respectively.} \label{fig3} \end{figure} \subsection{Framework Overview} In this work, we adopt the simple but effective procedure of \cite{DBLP:conf/eccv/TianWKTI20}. During the first stage, we train the embedding network $f_{\phi}$ on $\mathcal D_b$, \begin{equation} {\phi} = \mathop{\arg\min}_{\phi}{\mathcal L_{base}(\mathcal D_b;\phi)}, \end{equation} where $\mathcal L_{base}$ is the loss function and $\phi$ denotes the parameters of the embedding model. The pre-trained $f_{\phi}$ is then fixed and used as the feature extractor on $\mathcal D_n$. A linear classifier $g_{\theta}$ is first trained on the extracted features of $\mathcal S$, \begin{equation} {\theta} = \mathop{\arg\min}_{\theta}{\mathcal L_{base}(\mathcal S;\theta,\phi)}+R(\theta), \end{equation} where $\theta$ contains the weight and bias terms and $R$ is the regularization term. Then $g_{\theta}$ is used as the predictor on the features of $\mathcal Q$. \subsection{Sample Generation via SePPMix} In few-shot learning, SePPMix improves model generalization more effectively than other data mixing methods.
On the one hand, it increases the uncertainty and diversity of the training data; on the other hand, it generates more accurate labels for the mixed data without introducing severe noise. Fig.~\ref{fig3} illustrates the procedure of our proposed method. Formally, let $x\in{\mathbb R^{3 \times h \times w}}$ denote a training image and $y$ its corresponding label. SePPMix creates a new training sample $(\hat{x},\hat{y})$ given two distinct samples ($x_a$,$y_a$) and ($x_b$,$y_b$). The generation of the mixed image is defined as \begin{equation}\label{eq3} \hat{x} = M \odot x_a + (\mathbf 1-M) \odot x_b,\\ \end{equation} where $M \in\{ 0,1\}^{N\times N}$ denotes a binary mask indicating which patches belong to each of the two images, with each patch assigned to $x_a$ with probability 0.5. Here $\mathbf 1$ is a mask filled with ones, and $\odot$ denotes element-wise multiplication. When training on the original data of the base set, different patches of an image contain varying levels of information valid for the classification task. To generate a more accurate label, we calculate the class activation maps of the images as indicators of semantic information and use them to estimate how a patch correlates with its label, which has been proved useful for measuring the semantic relatedness of each original image patch to the corresponding label \cite{DBLP:conf/aaai/HuangWT21}. Given an image $x$, the class activation map can be calculated by \begin{equation} CAM(x) = {\Phi}(\sum_{l=1}^cw_y^lF_l(x)), \end{equation} where $\Phi(\cdot)$ is the operation that upsamples a feature map to match the dimensions of the input image, $F(\cdot)$ denotes the feature extractor, $F_l(x)$ denotes the $l^{th}$ feature map with input $x$, and $c$ is the number of feature maps. $w_y$ is the classifier weight of class $y$; for simplicity, the bias term is ignored. We obtain the semantic information map $S(x)$ by normalizing $CAM(x)$ to sum to one.
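The CAM computation and normalization just described can be sketched in NumPy as follows (a minimal sketch: the array shapes, the positive-clipping step, and the nearest-neighbor choice for $\Phi(\cdot)$ are assumptions for illustration):

```python
import numpy as np

def semantic_map(feature_maps, class_weights, out_hw):
    """Sketch of CAM(x) followed by normalization to S(x).
    feature_maps: (c, h, w) backbone activations; class_weights: (c,)
    weights w_y for the image's class (names are illustrative)."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # sum_l w_y^l F_l(x) -> (h, w)
    cam = np.maximum(cam, 0.0)             # keep positive evidence (a common CAM convention)
    sh, sw = out_hw[0] // cam.shape[0], out_hw[1] // cam.shape[1]
    cam = np.kron(cam, np.ones((sh, sw)))  # Phi(.): nearest-neighbor upsampling
    return cam / (cam.sum() + 1e-8)        # normalize so the map sums to one
```

The returned map plays the role of $S(x)$: a per-pixel estimate of how much each region contributes to the class evidence.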
The semantic information map of $x$, $S(x)$, is defined as \begin{equation} S(x) = \frac{CAM(x)}{sum(CAM(x))}. \end{equation} Finally, we can calculate the corresponding semantic label of the image produced by Eq.~\ref{eq3}, \begin{align} &\rho_a = sum(M \odot S(x_a)), \\ &\rho_b = sum((\mathbf 1-M) \odot S(x_b)),\\ &\hat{y} = \rho_a y_a+\rho_b y_b, \end{align} where $\rho_a$ and $\rho_b$ are the weights corresponding to labels $y_a$ and $y_b$ respectively. Fig.~\ref{fig3} depicts this process with the newly generated image label. In this way, the model can learn rich visual features while avoiding heavy noise in the augmented data, which is especially important in the few-shot situation. \subsection{Training SePPMix} Our SePPMix is applied in the training stage to learn better representations with generalization ability. Given a new sample $(\hat{x},\hat{y})$ generated by SePPMix, we rotate it and predict the angle as an auxiliary task to learn more generalizable features. $\hat{x}^r$ denotes the mixed image rotated by $r$ degrees, $r \in C_R=\{{0}^{\circ},{90}^{\circ},{180}^{\circ},{270}^{\circ}\}$. The loss $\mathcal L_m$ for training the image classification task on mixed images and the auxiliary loss $\mathcal L_r$ for predicting the rotation angle are defined as \begin{align} &\mathcal L_m = \sum_{x \in D_b}\sum_{r \in C_R}\mathcal L_{ce}[f_\phi(\hat{x}^r),\hat{y}],\\ &\mathcal L_r = \frac{1}{|C_R|}\sum_{x \in D_b}\sum_{r \in C_R}\mathcal L_{ce}[g_r(f_\phi(\hat{x}^r)),r], \end{align} where $\mathcal L_{ce}$ is the standard cross-entropy loss function, $g_r(\cdot)$ is a 4-way linear classifier, and $|C_R|$ denotes the cardinality of $C_R$. The overall loss in the training stage is \begin{equation} \mathcal L_{base} = \alpha \mathcal L_m + \beta \mathcal L_r, \end{equation} where $\alpha$ and $\beta$ are the weighting factors.
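Putting Eq.~\ref{eq3} and the semantic-label equations together, the SePPMix sample generation can be sketched as follows (a minimal NumPy sketch; the shapes, patch count, and class count are assumed, and $S_a$, $S_b$ are the normalized semantic maps from the previous step):

```python
import numpy as np

def seppmix(xa, ya, xb, yb, Sa, Sb, n=2, num_classes=64):
    """Generate one mixed sample (x_hat, y_hat) from two images.
    xa, xb: (3, h, w) images; Sa, Sb: (h, w) maps summing to one."""
    _, h, w = xa.shape
    # M: each of the n x n patches comes from xa with probability 0.5
    M = (np.random.rand(n, n) < 0.5).astype(xa.dtype)
    mask = np.kron(M, np.ones((h // n, w // n)))  # expand patches to pixels
    x_hat = mask * xa + (1.0 - mask) * xb         # Eq. (3)
    rho_a = float((mask * Sa).sum())              # semantic proportion from xa
    rho_b = float(((1.0 - mask) * Sb).sum())      # semantic proportion from xb
    y_hat = np.zeros(num_classes)
    y_hat[ya] += rho_a                            # soft, semantically
    y_hat[yb] += rho_b                            # proportional label
    return x_hat, y_hat
```

The soft label `y_hat` weights each source class by how much of its semantic content survives the mixing, rather than by raw patch area.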
For evaluation, the embedding network $f_\phi$ is fixed and serves as a feature extractor for FSL tasks consisting of $\mathcal S$ and $\mathcal Q$ sampled from $\mathcal D_n$, yielding embeddings of the images in $\mathcal S$ and $\mathcal Q$. We follow the implementation of~\cite{DBLP:conf/eccv/TianWKTI20}, which trains a logistic regression classifier on the embeddings of the images in $\mathcal S$ and their corresponding labels, and uses the classifier to predict the labels of the images in $\mathcal Q$. \section{EXPERIMENTS} \label{sec:typestyle} \subsection{Datasets} We perform our experiments on two widely used FSL datasets, i.e., miniImageNet~\cite{DBLP:conf/iclr/RaviL17} and CUB~\cite{WelinderEtal2010}. miniImageNet is a subset of ILSVRC-12~\cite{DBLP:journals/ijcv/RussakovskyDSKS15}, including 100 distinct classes, each of which contains 600 images of size 84 $\times$ 84. We adopt the common setup introduced by~\cite{DBLP:conf/iclr/RaviL17}, which splits the categories into 64, 16, and 20 classes for training, validation, and evaluation respectively. The CUB dataset consists of 11,788 images of size 84 $\times$ 84 from 200 bird classes. We follow the splits of~\cite{DBLP:conf/iclr/ChenLKWH19}, where 100 classes are used for training, 50 for validation, and 50 for testing.
\begin{table}[t] \centering \small \caption{\label{comp1}Average 5-way few-shot classification accuracies (\%) with 95{\%} confidence intervals on miniImageNet dataset.} \begin{tabular}{llll} \hline {\textbf{Method}} & \multicolumn{1}{c}{\textbf{Backbone}} & \multicolumn{1}{c}{\textbf{1-shot}} & \multicolumn{1}{c}{\textbf{5-shot}} \\ \hline \rule{-2pt}{10pt} MAML\cite{DBLP:conf/icml/FinnAL17}&32-32-32-32&48.70$\pm$1.84&63.11$\pm$0.92\\ MatchingNet\cite{DBLP:conf/nips/VinyalsBLKW16}&64-64-64-64&43.56$\pm$0.84&55.31$\pm$0.73\\ ProtoNet\cite{DBLP:conf/nips/SnellSZ17}&64-64-64-64&49.42$\pm$0.78&68.20$\pm$0.66\\ TADAM\cite{DBLP:conf/nips/OreshkinLL18}&ResNet-12&58.50$\pm$0.30& 76.70$\pm$0.30\\ MTL\cite{DBLP:conf/cvpr/SunLCS19}&ResNet-12&61.20$\pm$1.80& 75.50$\pm$0.80\\ MetaOptNet\cite{DBLP:conf/cvpr/LeeMRS19}&ResNet-12&62.64$\pm$0.61&78.63$\pm$0.64\\ Fine-tuning\cite{DBLP:conf/iclr/DhillonCRS20}&WRN-28-10&57.73$\pm$0.62&78.17$\pm$0.49\\ VLCL\cite{luo2021boosting}&WRN-28-10&61.75$\pm$0.43&76.32$\pm$0.49\\ S2M2\cite{DBLP:conf/wacv/Mangla0SKBK20}&WRN-28-10&64.93$\pm$0.18&83.18$\pm$0.11\\ FEAT\cite{DBLP:conf/cvpr/YeHZS20}&ResNet-12&66.78$\pm$0.20&82.05$\pm$0.14\\ DeepEMD\cite{DBLP:conf/cvpr/ZhangCLS20}&ResNet-12&65.91$\pm$0.82&82.41$\pm$0.56\\ RFS-distill\cite{DBLP:conf/eccv/TianWKTI20}&ResNet-12&64.82$\pm$0.60&82.14$\pm$0.43\\ \hline \rule{-2pt}{9pt} RFS-simple\cite{DBLP:conf/eccv/TianWKTI20}&ResNet-12&62.02$\pm$0.63&79.64$\pm$0.44\\ \textbf{Ours}&ResNet-12&\textbf{66.98$\pm$0.81}&\textbf{83.88$\pm$0.54} \\ \hline \end{tabular} \end{table} \subsection{Implementation Details} Following~\cite{DBLP:conf/eccv/TianWKTI20}, we use a ResNet-12 network as our backbone, which consists of 4 residual blocks with Dropblock as a regularizer. To generate the initial CAMs of the images, we first train the network from scratch on $\mathcal D_b$. In all experiments, we use Stochastic Gradient Descent (SGD) with a learning rate of 0.05, a momentum of 0.9, and a weight decay of $5e^{-4}$.
The model is trained for 65 epochs, and the learning rate is decayed by a factor of 0.1 at epochs 30, 45, and 60. The batch size is 64 on miniImageNet and 16 on CUB. We empirically set the number of patches $N=2$ and the coefficients $\alpha=1$ and $\beta=0.5$. We randomly sample 600 episodes to report the accuracies on both datasets. \begin{table}[t] \centering \small \caption{\label{comp2}Average 5-way few-shot classification accuracies (\%) with 95{\%} confidence intervals on CUB. RFS-simple$^*$ indicates the model re-implemented by us on the CUB dataset.} \begin{tabular}{llll} \hline {\textbf{Method}} & \multicolumn{1}{c}{\textbf{Backbone}} & \multicolumn{1}{c}{\textbf{1-shot}} & \multicolumn{1}{c}{\textbf{5-shot}} \\ \hline \rule{-2pt}{10pt} Baseline\cite{DBLP:conf/iclr/ChenLKWH19}&ResNet-18&65.51$\pm$0.87&82.85$\pm$0.55\\ Baseline++\cite{DBLP:conf/iclr/ChenLKWH19}&ResNet-18&67.02$\pm$0.90&83.58$\pm$0.54\\ MAML\cite{DBLP:conf/icml/FinnAL17}&ResNet-18&68.42$\pm$1.07&83.47$\pm$0.62\\ MatchingNet\cite{DBLP:conf/nips/VinyalsBLKW16}&ResNet-12&71.87$\pm$0.85&85.08$\pm$0.57\\ ProtoNet\cite{DBLP:conf/nips/SnellSZ17}&ResNet-12&66.09$\pm$0.92&82.50$\pm$0.58\\ Negative-Cosine\cite{DBLP:conf/eccv/LiuCLL0LH20}&ResNet-18&72.66$\pm$0.85&89.40$\pm$0.43\\ S2M2\cite{DBLP:conf/wacv/Mangla0SKBK20}&ResNet-18&71.43$\pm$0.28&85.55$\pm$0.52\\ DeepEMD\cite{DBLP:conf/cvpr/ZhangCLS20}&ResNet-12&75.65$\pm$0.83&88.69$\pm$0.50\\ \hline \rule{-2pt}{9pt} RFS-simple$^*$\cite{DBLP:conf/eccv/TianWKTI20}&ResNet-12&72.78$\pm$0.86&87.24$\pm$0.50\\ \textbf{Ours}&ResNet-12&\textbf{78.55$\pm$0.77}&\textbf{91.81$\pm$0.41} \\ \hline \end{tabular} \end{table} \subsection{Results} We present the results of SePPMix on two representative benchmarks of the few-shot learning task. As detailed in Table~\ref{comp1}, our method outperforms all previous approaches on miniImageNet.
Specifically, our approach achieves 4.96\% and 4.24\% improvement over our baseline RFS-simple\cite{DBLP:conf/eccv/TianWKTI20} for 5-way 1-shot and 5-way 5-shot tasks respectively. Table~\ref{comp2} illustrates the experimental results on CUB dataset. We re-implement the baseline RFS-simple\cite{DBLP:conf/eccv/TianWKTI20} on CUB. In the 5-way tasks, our method improves the accuracies by 5.77\% and 4.57\% over the baseline on 1/5-shot respectively. Besides, our approach outperforms all the strong competitors and sets new records on 5-way 1/5-shot tasks on CUB. \begin{table}[t] \centering \small \caption{\label{comp3}Performance comparison between different mix methods and ablation study of our model, r indicates rotation.} \begin{tabular}{l|ll} \hline \multicolumn{1}{c|}{} & \multicolumn{2}{c}{miniImageNet}\\ Method& \multicolumn{1}{c}{1-shot} & \multicolumn{1}{c}{5-shot}\\ \hline baseline&\multicolumn{1}{c}{61.06} & \multicolumn{1}{c}{78.83}\\ \hline Mixup&59.12 (-1.94) &79.01 (+0.18) \\ \hline Cutmix&61.61 (+0.55) & 80.35 (+1.52) \\ \hline Snapmix&62.52 (+1.46) &80.90 (+2.07) \\ \hline \hline Patchmix&62.75 (+1.69) &80.69 (+1.86) \\ \hline SePPMix (w/o r)&64.09 (\textbf{+3.03})&81.21 (\textbf{+2.38}) \\ \hline SePPMix (w/ r)&66.98 (\textbf{+5.92}) &83.88 (\textbf{+5.05}) \\ \hline \end{tabular} \end{table} \label{sec:majhead} \begin{figure}[htbp] \centering \begin{minipage}[t]{0.19\linewidth} \raggedright \includegraphics[width=0.95\textwidth]{test1.jpg} \includegraphics[width=0.95\textwidth]{test2.jpg} \centerline{(a)} \end{minipage}% \begin{minipage}[t]{0.19\linewidth} \raggedleft \includegraphics[width=0.95\textwidth]{CAM1_a.jpg} \includegraphics[width=0.95\textwidth]{CAM2_a.jpg} \centerline{(b)} \end{minipage}% \begin{minipage}[t]{0.19\linewidth} \raggedleft \includegraphics[width=0.95\textwidth]{CAM1_b.jpg} \includegraphics[width=0.95\textwidth]{CAM2_b.jpg} \centerline{(c)} \end{minipage}% \begin{minipage}[t]{0.19\linewidth} \raggedleft 
\includegraphics[width=0.95\textwidth]{CAM1_c.jpg} \includegraphics[width=0.95\textwidth]{CAM2_c.jpg} \centerline{(d)} \end{minipage} \begin{minipage}[t]{0.19\linewidth} \raggedright \includegraphics[width=0.95\textwidth]{CAM1_e.jpg} \includegraphics[width=0.95\textwidth]{CAM2_e.jpg} \centerline{(e)} \end{minipage} \centering \caption{\label{fig2}Class activation maps for different methods: (a) input, (b) baseline, (c) Mixup, (d) Cutmix, and (e) SePPMix. Here warmer colors indicate higher values.} \end{figure} \subsection{Ablations and Analysis} We conduct an ablation study to compare SePPMix with other data mixing methods and to assess the effects of the different components of SePPMix. We report the results on both 5-way 1/5-shot tasks. In Table~\ref{comp3}, we train a simple baseline model without any data mixing augmentation. Notably, Mixup leads to worse performance than the baseline on the 5-way 1-shot task, which indicates that linearly combining images and labels cannot improve the transfer and generalization abilities of the model in the FSL scenario. For a fair comparison without the auxiliary rotation task, our proposed SePPMix (w/o r) outperforms the baseline by 3.03\% and 2.38\% on the 1/5-shot tasks respectively, significantly surpassing the other mix-based methods. Fig.~\ref{fig2} shows class activation maps of samples obtained with different methods. The proposed SePPMix pays more attention to the object and less to the background, which intuitively demonstrates that our approach yields better results. To show that every component of our model makes a valid contribution, we use Patchmix to denote the variant in which the label is determined in proportion to the pixel area, as in Cutmix, while the image is generated in the same way as ours. Patchmix performs better than Cutmix, which demonstrates the effectiveness of our patch-mix strategy.
After combining with our semantic label, SePPMix (w/o r) improves accuracy by more than 1\% compared with Patchmix. Besides, with the auxiliary rotation task, SePPMix (w/ r) further improves the accuracies to 66.98\% and 83.88\% on the 1/5-shot tasks, respectively. \section{CONCLUSION} In this paper, we aim to better generalize the model to unseen classes in few-shot learning. To achieve this, we propose a simple but effective mixing method called SePPMix, which generates new training data with semantically proportional labels. Additionally, we rotate the generated samples and predict the angles as auxiliary signals. Extensive experiments on the miniImageNet and CUB datasets verify the effectiveness of our method. \vfill\pagebreak \bibliographystyle{IEEEbib} \small
\section{Introduction} \label{sec:intro} Understanding the story behind a visual object is an activity of broad interest. Whether it is determining the palette used to make a painting, the style of a sculptor, or the authenticity of an artwork, deriving the origin and composition of the object at hand has been a difficult but important task for many examiners. Subtle clues derived from the nature of works of art have long been used to provide answers to \textit{provenance} related questions~\cite{bbcprogram}. Off-white colors found in the painting \emph{Darby and Joan} by Laurence Stephen Lowry brought into question its authenticity \cite{fakeorfortune_2015}. Lead content in the paint of \emph{Danseuse Bleue et Contrebasses} and careful scrutiny of the painter's signature allowed experts to rightly restore the validity of Edgar Degas's most famous work \cite{fakeorfortune_2012}. Provenance analysis of this sort has helped historians, cultural analysts and art enthusiasts to analyze the origin, content and growth of works such as these. Although the techniques used to perform provenance analysis have evolved over time~\cite{douglas2010origins}, it is, in general, still an unsolved problem~\cite{provimpnews}. In the domain of art history, it is one of the most active and important areas of research~\cite{metprovproject} as there are still complicated cases where provenance has yet to be established (\textit{e.g.}, the painting \textit{Bords de la Seine \`{a} Argenteuil}~\cite{monetbords}) and new avenues for the interpretation of relationships between artworks. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{./figures/teaser3.pdf} \end{center} \caption{Example of an Image Provenance Graph (IPG) showing some common operations performed on images and how they are manifested in the case of provenance. 
The examples in this case are meme-style images similar to the ones from the \emph{photoshopbattles} community on the Reddit social media site~\cite{reddit2017photoshopbattles}. The transformations can be as simple as increasing the brightness or as complex as multi-composition. In this paper, we consider the incorporation of meta-data to improve the construction of such graphs.} \vspace{-10pt} \label{fig:teaser} \end{figure} The above case studies might lead one to believe that provenance analysis is a tool to decipher events far in the past. On the contrary, with the growth in popularity of online digital media, the need for provenance analysis has never been more timely. Current social sentiment can often only be fully understood within the context of online memes and other viral movements~\cite{fakenewsmexico}. Further, as the lines between real and fake images blur, the extent to which these types of online phenomena can be deployed towards the deception of the public has become deeply concerning~\cite{politifact}. With high quality cameras and image editing software at anybody's disposal, photographs have become easier to forge than paintings or sculptures. We have reached a point where digital forgeries can be produced with fine-grained detail, down to photographic style and sensor noise \cite{marra2014attacking,li2017anti}. These advancements in anti-forensics undermine the content's credibility, ownership, and authenticity. The current scale at which images and videos are shared requires an automated way of answering such questions. Image processing and computer vision techniques can be employed to detect correspondences between images or other digital art forms~\cite{Lowe:IJCV:2004,Bay:CVIU:2008,zagoruyko_2015}. This kind of correspondence can range from object matching in images~\cite{Lowe_1999} to comparing the style~\cite{gatys2015neural} and semantics~\cite{pan2004automatic} of the two. 
Provenance analysis can be thought of as ordering pairwise similarities across multiple sets of image pairs, and is therefore a natural extension of pairwise image comparison. These ordered pairings can be modeled as a graph, where each edge denotes a correspondence between a pair of images and the end vertices of the edge are the two respective images. An example of such a graph can be seen in Figure~\ref{fig:teaser}. This example shows that a provenance analysis algorithm may have to analyze multiple closely resembling, realistic versions of the same visual object. Complex scenarios like this can make content-based similarity metrics unreliable. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{./figures/DiffLoc.pdf} \end{center} \caption{\textit{Left}: Photo of the Eiffel Tower taken at night in Paris. \textit{Right}: Photo of the replica of the monument in Las Vegas taken at night. Note that both photos depict the same visual object --- only the image file metadata in this case can help us understand that they are completely different scenes. Photos and their metadata were obtained from Flickr~\cite{flickreiffel} and Wikimedia Commons~\cite{wikieiffel}.} \vspace{-10pt} \label{fig:diffloc} \end{figure} Due to the vast range of possible versions of a single original image, the metrics for quantifying the similarity between pairs of images can be noisy. Relying solely upon visual cues to order the different versions into a graph can result in poor provenance reconstructions~\cite{bharati2017uphy,moreira2018image}. Therefore, it becomes pertinent to utilize other sources of data to determine connections. For example, it is difficult to point out a semantic difference between the two images in Figure~\ref{fig:diffloc}, but the images can be differentiated by inspecting the metadata of the image files.
Such a pair of images can be termed \textit{semantically similar}, as they are related to each other in a semantic way but do not originate from the same source~\cite{oliveira2014multiple,bharati2017uphy}. Matching difficulty can also arise within sets of near-duplicate images, which are generated from a single origin having undergone a series of transformations (\textit{e.g.}, crop$\rightarrow$saturate$\rightarrow$desaturate). The pixel-level data within these image sets can exhibit ambiguous provenance directionality, so information beyond pixel-level data may be required to detect differences between such images. To handle scenarios where image content fails to explain image evolution, file metadata can be used to help fill in the gaps. In this work, we explore the use of commonly present file metadata tags to improve image provenance analysis. We compare these results against image content-based methods and highlight the advantages and disadvantages of both. \section{Related Work} \label{sec:related} Provenance analysis is a widely known and studied phenomenon in various data-based domains such as the semantic web and data warehousing~\cite{halaschek2006annotation, buneman2001and, anand2010provenance, simmhan2005survey}. However, provenance analysis for online multimedia has not been as extensively studied in the existing literature. The types of work most relevant and related to the problem of image provenance analysis come from three established concepts in the digital forensics literature: near-duplicate detection~\cite{chum2008near, ke2004efficient}, image splicing detection~\cite{cozzolino2015splicebuster, bahrami2015blurred, iuliani2015image, chen2017image, huh2018fighting,Brogan2017Spotting} and image phylogeny~\cite{dias2012image, dias2013toward, de2016multiple}.
Most of the proposed methods work towards classifying whether an image is a near-duplicate of the query image in a retrieval context and do not determine the original image among the set of near-duplicates. That particular problem, however, has been studied by the image phylogeny community. Image phylogeny solutions aim at finding kinship relations between different versions of an image~\cite{dias2012image}. Although related to provenance analysis, image phylogeny limits its representation to a single-root tree with the original image as the root, even though multiple original images may contribute to the creation of an image. The algorithm receives a query image and outputs an Image Phylogeny Tree (IPT). That method has also been extended to handle multiple (two) roots by taking spliced images into consideration~\cite{oliveira2014multiple}. An example of this multiple-parent scenario can be observed in Figure~\ref{fig:teaser}, where four images (donors) contribute to the content of the central composite image. A limitation of these image phylogeny approaches, which solve very specific cases of image provenance analysis, is that they have dealt with constrained datasets using a limited set of transformations and image formats~\cite{jegou2008hamming,de2016multiple}. In addition, most of them consider only two images to form a composite, thereby limiting their large-scale general applicability. Thus, new image provenance algorithms must generalize and be evaluated across different forgery datasets, image transformations, file formats and image resolutions to be applicable in real-world situations. As a step towards a more general framework for image provenance analysis inspired by the image phylogeny works, recent work on undirected provenance graph construction~\cite{bharati2017uphy} adopted a more general taxonomy and dataset proposed by the American National Institute of Standards and Technology (NIST)~\cite{nist2017plan}.
It offered the \textit{U-phylogeny} pipeline as a preliminary approach to provenance analysis that is restricted neither to a closed set of image transformations nor to a fixed number of donor images forming multi-parent composites. Results are presented for scenarios with and without the presence of distractors (images that are not related to the provenance history of the query image), showing the approach to be tolerant of irrelevant images. A limitation of the U-phylogeny approach is that it does not provide a directed provenance graph, which is required to understand the evolution of the media object. In order to overcome the direction limitation and propose a scalable approach, a more complete end-to-end pipeline for image provenance analysis was described in~\cite{moreira2018image}. That method for graph construction first builds dissimilarity matrices based on local image features, and then employs hierarchical clustering to group nodes and draw edges within the final provenance graph. As stated in Section~\ref{sec:intro}, relying solely on image content can lead to noisy edge inference. This is especially true for directed edges, which have been shown to be more difficult to derive than undirected edges~\cite{bharati2017uphy,moreira2018image}. One option for addressing this is the use of metadata related to the images. File metadata has been predominantly used for data and software provenance analysis~\cite{acar2010graph,anand2010provenance,halaschek2006annotation}, as such information reveals important clues about a file that cannot be directly derived from the data itself. Metadata attached to online posts that include images can also be utilized for this purpose. In the image domain, metadata often stores information regarding the device used to capture the image and the software used to process the image.
Information provided by these types of tags has been utilized to improve the effectiveness of tasks such as image grouping~\cite{iqbal1999applying, logan2009automatic}, content-based image retrieval~\cite{akgul2011content, yee2003faceted}, photo classification~\cite{boutell2004photo}, image annotation~\cite{johnson2015love} and copyright protection~\cite{huang2010metadata}. Among these, algorithms establishing semantic correspondences between images, such as automatic grouping or classification, may utilize tags such as date, location, content originator, camera type and scene type~\cite{huiskes2008mir} whereas those that detect tampering may rely on detecting inconsistencies within the values of these and other tags containing source and copyright information~\cite{choi2013estimation, huang2010metadata}. While metadata has been successfully used for forensics tasks in the past~\cite{birajdar2013digital, farid2009image,huh2018fighting, mahdian2010bibliography}, it has not been used for provenance analysis before. \vspace{-8pt} \section{Proposed Approach} \label{sec:algo} Image provenance analysis algorithms aim at constructing a provenance graph with related images, given a query image. The provenance graph~\cite{moreira2018image} is a Directed Acyclic Graph (DAG) where each node corresponds to an image in the set of related images and the edges stand for the relationship of sharing duplicate content. The direction of an edge denotes the direction of the flow of content between each pair of images and the overall provenance case. In this section, we explain in detail each of the three stages (as seen in Figure~\ref{fig:fullpipeline}) of image provenance analysis used in the proposed approach. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{pipeline_v3.png} \end{center} \caption{Stages of image provenance analysis. The proposed method starts with filtering images related to the provided query image $Q$.
The `$k$' most relevant images are selected for pairwise image comparison. This step is not present in an oracle scenario, where we assume that the perfectly correct set of `$k$' related images has been provided. The images are compared in terms of visual content and metadata, yielding two types of adjacency matrices. The obtained matrices are then combined in the graph construction step to form an IPG.} \vspace{-2pt} \label{fig:fullpipeline} \end{figure*} \subsection{{Filtering}} \label{subsec:filtering} The first step required to perform provenance analysis for a given query image involves collecting the set of top-$k$ relevant images. In this work, we follow the solution proposed in~\cite{moreira2018image}, which selects a subset of a large source of images (such as millions of images from the Internet), whose elements include samples sharing full content with the query with slight modifications (\textit{i.e.}, near-duplicate images), samples sharing partial content with the query in any form (single or multiple foreground objects, or background), and samples transitively related to the query (\textit{e.g.}, near duplicates of the images sharing content with the query). In summary, this solution utilizes Optimized Product Quantization (OPQ) to store local Speeded-Up Robust Features (SURF)~\cite{Bay:CVIU:2008} in an Inverted File index (IVF), with a large number (\textit{e.g.},~$\sim$400k) of representative centroids. Each image is described through at most 5k SURF features, which are used to build the IVF index. To search the index, multi-stage query expansion is utilized. The first stage mainly retrieves the hosts, while the second stage retrieves donors; further stages retrieve the images transitively related to the query, by replacing the original query with samples retrieved in the previous stages.
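The multi-stage query expansion described above can be illustrated with a toy, pure-Python sketch. It replaces the OPQ/IVF index and real SURF descriptors of the actual pipeline with brute-force L2 matching over made-up low-dimensional descriptors, so all names, descriptor values and thresholds here are illustrative assumptions rather than the production implementation:

```python
import math

def match_count(desc_a, desc_b, thresh=0.5):
    """Count descriptor pairs closer than `thresh` in L2 distance."""
    return sum(
        1
        for a in desc_a
        for b in desc_b
        if math.dist(a, b) < thresh
    )

def retrieve(query_descs, index, k):
    """Rank indexed images by number of descriptors matching the query."""
    scores = {name: match_count(query_descs, d) for name, d in index.items()}
    ranked = [n for n in sorted(scores, key=scores.get, reverse=True)
              if scores[n] > 0]
    return ranked[:k]

def multistage_retrieval(query_descs, index, k, stages=2):
    """Multi-stage query expansion: previously retrieved images become
    new queries, so images transitively related to Q are also found."""
    retrieved, queries = [], [query_descs]
    for _ in range(stages):
        next_queries = []
        for q in queries:
            for name in retrieve(q, index, k):
                if name not in retrieved:
                    retrieved.append(name)
                    next_queries.append(index[name])
        queries = next_queries
    return retrieved[:k]
```

With a host image sharing descriptors with the query and a donor sharing descriptors only with the host, the first stage returns the host and the second stage pulls in the donor, mirroring the host-then-donor behavior described above.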
\subsection{{Adjacency Matrix Computation}} \label{subsec:amc} Upon receiving the set $\mathcal{R}$ of top-$k$ images related to the query image $Q$, we build $N \times N$ (here, $N=|\mathcal{R}|+1$) adjacency matrices $\mathcal{D}$, in which each indexed value $\mathcal{D}[i,j]$ is the similarity (or dissimilarity) quotient between images $i$ and $j$. The full matrices are obtained by comparing $(N^2-N)/2$ pairs. Different from previous work, though, besides using a matrix that relies solely on visual content, we propose the employment of an additional metadata-based asymmetric adjacency matrix that is used to determine the orientation of the pairwise image relations. To the best of our knowledge, this is the first work proposing a way to leverage metadata to complement visual information for the problem of provenance analysis. For visual comparison, the images can be described using interest point detectors and descriptors (such as SURF~\cite{Bay:CVIU:2008}) or learned from data using a Convolutional Neural Network. Image description for provenance analysis typically avoids using computationally expensive methods such as deep learning because of scalability concerns~\cite{zagoruyko_2015}. An empirical evaluation we conducted comparing SURF~\cite{Bay:CVIU:2008} and ShuffleNet~\cite{zhang2018shufflenet}, one of the most efficient deep learning frameworks, highlights this. Ignoring training time, ShuffleNet took 3.5 minutes to describe 10k images using two Nvidia Quadro GPUs, while SURF (a C++ implementation) took 39 seconds for the same images using one GPU. This motivates the usage of SURF-based detection and description of keypoints for the visual comparison between images. Once the images are described, for each image pair, the $p$ most relevant interest points of each image are matched using brute-force pairwise comparison based on the L2 distance between the descriptors.
The best matched correspondences are filtered to retain the geometrically consistent ones, as described in \cite{bharati2017uphy}. As a consequence, a symmetric adjacency matrix is obtained, containing the number of matched interest points between each pair of images. Commonly, the value of mutual information is used as the degree of pairwise association between images, or as asymmetric weights of the edges in a complete graph among the $N$ images, with no self loops \cite{moreira2018image}. In this work, in order to incorporate metadata information at this stage, we introduce a heuristic-based normalized voting to attribute weights to each pairwise image relationship. The voting method is chosen as a complement to the similarity comparison in the visual domain. The heuristics used to obtain the scores for each pair are straightforward metadata-related assumptions in the context of image provenance and rely upon the content of the tags. They include: \vspace{0.1cm} \noindent \textbf{Date.} To check for the temporal order of content creation, we individually compare the date-related tags -- {\fontfamily{pcr}\selectfont DateTimeOriginal}, {\fontfamily{pcr}\selectfont ModifyDate} and {\fontfamily{pcr}\selectfont CreateDate}. Considering two images $i$ and $j$, for each one of the three dates, whenever available, the provenance relation $(i, j)$ gets one vote if the date of image $i$ is earlier than or equal to the respective date of image $j$. The relationship in the opposite direction $(j, i)$ is also analogously evaluated. \vspace{0.1cm} \noindent \textbf{Location.} Near-duplicates of an image (\textit{e.g.}, cropped versions) should have the same geographic location as the original one.
Hence, we cast one vote for the pairwise image relationship $(i, j)$, and one vote for the relationship $(j, i)$, if image $i$ shares with image $j$ exactly the same non-null values for the four location-related tags -- {\fontfamily{pcr}\selectfont GPSLatitude}, {\fontfamily{pcr}\selectfont GPSLatitudeRef}, {\fontfamily{pcr}\selectfont GPSLongitude}, {\fontfamily{pcr}\selectfont GPSLongitudeRef}. Although this does not help to define the direction of the provenance between images $i$ and $j$, since both $(i, j)$ and $(j, i)$ relationships get one vote, it does help to give them more weight than the other image pairs that do not share location content. In addition, in very complex image compositions where there is not a clear presence of a foreground donor, the location-related metadata tags might be null or missing, unlike those of the donors of the composition. Thus, we alternatively cast one vote to the relationship $(i, j)$, if image $i$ has non-null location information and image $j$ is missing it. \vspace{0.1cm} \noindent \textbf{Camera.} We propose to use camera-based metadata information in a way that is analogous to the location case. If image $i$ and image $j$ share the same non-null content for the camera's {\fontfamily{pcr}\selectfont Make}, {\fontfamily{pcr}\selectfont Model} and {\fontfamily{pcr}\selectfont Software} tags simultaneously, we cast one vote for both the $(i, j)$ and $(j, i)$ relationships, suggesting near-duplication that maintained image metadata. Similarly, we cast one vote to $(i, j)$ if image $i$ has camera information and image $j$ does not. \vspace{0.1cm} \noindent \textbf{Editing.} We use the editing-related metadata tags to figure out if either image $i$ or image $j$ were ever manipulated.
Given that the provenance direction might occur from a non-manipulated to a manipulated image, we give one vote to the relationship $(i, j)$ if image $j$ has information for any of the {\fontfamily{pcr}\selectfont ProcessingSoftware}, {\fontfamily{pcr}\selectfont Artist}, {\fontfamily{pcr}\selectfont HostComputer}, {\fontfamily{pcr}\selectfont ImageResources} tags. The relationship in the opposite direction $(j, i)$ is also evaluated in the same manner. \vspace{0.1cm} \noindent \textbf{Thumbnail.} We extract the respective thumbnails of images $i$ and $j$. If the thumbnails are exactly the same, both relationships $(i,j)$ and $(j, i)$ get one vote, since it means one image might be generated from the other. Alternatively, if image $i$ has a thumbnail and image $j$ does not have one, then the relationship $(i, j)$ gets one vote, indicating that image $i$ is probably the original one. \vspace{0.1cm} These heuristics are used to generate a metadata-based image pairwise adjacency matrix $M$. For instance, taking images $i$ and $j$ and the possible provenance relationship from $i$ to $j$, whenever a heuristic is satisfied, the respective value $M[i,j]$ is increased by one, meaning that one vote is cast for the $(i, j)$ relationship. Aiming to keep the solution as widely applicable as possible, the tags are selected based on their availability and relevance to the provenance problem. An example of such relevance has been shown in Figure~\ref{fig:algo}. It depicts an image pair example that is directionally ambiguous. After performing interest-point-based pairwise analysis between the two images in Figure~\ref{fig:algo}(a), a valid argument for either a splicing (left-to-right edge) or removal (right-to-left edge) operation between the two could be made. Utilizing the ``DateTimeOriginal'' tag from both images disambiguates the relationship, revealing that the lion was indeed spliced into the image at a later time.
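A minimal sketch of the vote-based construction of the metadata adjacency matrix $M$, assuming per-image metadata is available as Python dictionaries mapping tag names to values (`None` when absent); the thumbnail heuristic is omitted for brevity, and EXIF-formatted date strings (`YYYY:MM:DD HH:MM:SS`) are compared lexicographically, which preserves temporal order:

```python
def metadata_votes(meta_i, meta_j):
    """Votes for the provenance relationship (i, j), per the heuristics above."""
    votes = 0
    # Date: one vote per date tag where i is not later than j
    # (EXIF date strings sort lexicographically in temporal order).
    for tag in ("DateTimeOriginal", "ModifyDate", "CreateDate"):
        di, dj = meta_i.get(tag), meta_j.get(tag)
        if di is not None and dj is not None and di <= dj:
            votes += 1
    # Location: identical non-null GPS tags support both directions (only
    # i's side is computed here); GPS present on i but missing on j
    # supports (i, j).  Camera tags are treated analogously.
    for group in (("GPSLatitude", "GPSLatitudeRef",
                   "GPSLongitude", "GPSLongitudeRef"),
                  ("Make", "Model", "Software")):
        vi = [meta_i.get(t) for t in group]
        vj = [meta_j.get(t) for t in group]
        if all(v is not None for v in vi) and vi == vj:
            votes += 1
        elif all(v is not None for v in vi) and all(v is None for v in vj):
            votes += 1
    # Editing: one vote for (i, j) if j shows traces of manipulation.
    editing = ("ProcessingSoftware", "Artist", "HostComputer", "ImageResources")
    if any(meta_j.get(t) is not None for t in editing):
        votes += 1
    return votes

def metadata_matrix(metas):
    """Asymmetric adjacency matrix M of vote counts over all ordered pairs."""
    n = len(metas)
    return [[metadata_votes(metas[i], metas[j]) if i != j else 0
             for j in range(n)] for i in range(n)]
```

For an untouched original (early dates, GPS and camera tags present) paired with a later composite (GPS and camera stripped, `ProcessingSoftware` set), the votes strongly favor the original-to-composite direction, which is exactly the asymmetry the graph construction step exploits.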
While a large array of metadata tags is often present in many images, only a small subset of these tags provides pertinent information useful for discerning inter-image relationships. Furthermore, using tags provided by only specific camera firmware or only applicable to certain formats (\eg, JPEG) reduces the generalizability of the proposed approach. The tags mentioned here are EXIF tags (details provided in supplemental material) but the information provided by their values is what holds relevance to provenance. In case EXIF metadata is missing or tampered with, information provided by online image posts, such as date of submission, can also be utilized in a similar way. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{approach.png} \caption{Usage of metadata information for determining direction in image pairwise provenance relationships. In (a), the output of interest-point-based analysis between two images is shown. The operation can be either a splicing or removal of the male lion. In (b), according to the date-based metadata, the operation is revealed to be a splice, since the image on the left is older.} \vspace{-10pt} \label{fig:algo} \end{figure} \subsection{{Graph Construction}} \label{subsec:gc} Based on the values of the adjacency matrix, the final graph construction step chooses the most feasible set of directed edges (\ie, the set of edges that best represents the sequence of image operations). Each chosen directed edge denotes a parent-child relationship in the graph. Once the vision-based and metadata-based adjacency matrices are available, one can either individually use them to directly generate a provenance graph, through, for example, the application of Kruskal's Maximum Spanning Tree (MST) algorithm~\cite{Kruskal:AMS:1956}, or, as we are proposing, use a specialized algorithm for constructing a directed provenance graph, such as \emph{clustered provenance graph expansion}, proposed in~\cite{moreira2018image}.
In the latter case, we suggest using the metadata-based asymmetric adjacency matrix to determine the directions of the edges. In the experiments herein reported, we investigate both strategies. In the end, the output graph can be represented as a binary adjacency matrix (BAM). BAM$[i,j]$ is set to 1 whenever there is an edge between images $i$ and $j$, indicating $i \rightarrow j$ flow of content. Understandably, none of the proposed rules guarantee correct inference, as metadata can be manipulated, incorrect or missing. Using multiple tags reduces the impact of an incorrect inference and makes the process more robust. To mitigate circumstances where file metadata is unavailable, we demonstrate provenance in an online setting in our experiments using an alternative approach that can harvest metadata from website users' comments as opposed to the file itself. In both scenarios, the proposed approach is designed to tolerate events such as data tampering. Since the metadata-compliance score is cumulative, each rule and its corresponding tags contribute to the value used to make the edge decision. \vspace{-5pt} \section{Experimental Setup} \label{sec:exp} \vspace{-5pt} Here we detail the two evaluation scenarios and describe the characteristics of the corresponding datasets. \subsection{Provenance Analysis for Digital Forensics} \label{subsec:nistprov} NIST has recently released a dataset curated for the tasks of provenance image filtering and graph construction in a forensics context, which is devoid of most of the limitations of the existing datasets. Similar to the experimental setup described in~\cite{moreira2018image}, we rely on the \emph{development} partition of this dataset since it provides a full set of ground-truth graphs. Named \emph{NC2017-Dev1-Beta4}, the dataset contains 65 queries, and the ground-truth is released in the form of journals depicting provenance graphs.
The provenance graph journals were created manually with the help of a proprietary image-editing journaling tool. The graphs include links corresponding to simple image transformations such as cropping, scaling, sharpening, blurring, and rotation, to complex ones such as splicing from multiple sources and object removal. The total number of related images per case lies in the range $[2,81]$. In addition to the images relevant to the provenance of each of the query images, the dataset also contains distractors (\textit{i.e.}, images not related to any query). Following the protocol proposed by NIST~\cite{nist2017dataset}, we perform both \emph{end-to-end} and \emph{oracle-filter} provenance analysis over this dataset. End-to-end analysis requires performing provenance filtering prior to graph construction \cite{pintoFiltering}. In this case, for each query image, graphs are built upon a list of ranked images that might include distractors and miss genuinely related images due to imperfect image filtering. To obtain these filtered image rank lists, we employ the best solution proposed in~\cite{moreira2018image} and retrieve the top-100 ranked images to the query, which may contain unrelated distractors. Conversely, the oracle analysis does not require a filtering step, but instead starts with perfect ranks, \textit{i.e.}, ranks containing all the relevant images and no distractors. Orthogonal to the \emph{end-to-end} versus \emph{oracle} comparison, we also compare results for both \emph{metadata only} and \emph{visual + metadata} solutions. When using only metadata, we compute the vote-based metadata adjacency matrix, as explained in Section~\ref{subsec:amc}. We use ExifTool~\cite{exiftool} to perform file metadata extraction. A table listing the tags used and their details has been provided in the supplemental material. Once the adjacency matrix is computed, we apply Kruskal's maximum spanning tree algorithm~\cite{Kruskal:AMS:1956} to obtain the final provenance graph.
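The metadata-only solution can be sketched as a Kruskal-style maximum spanning tree over the symmetrized vote weights, with each retained edge oriented toward the direction holding more votes and the result stored as a BAM. This is a simplified illustration under those assumptions, not the MediScore or cluster-based implementation:

```python
def kruskal_provenance(M):
    """Maximum spanning tree over symmetrized vote weights; each kept edge
    is oriented toward the direction with more votes, yielding a BAM."""
    n = len(M)
    # Candidate undirected edges, strongest mutual support first.
    edges = sorted(
        ((M[i][j] + M[j][i], i, j) for i in range(n) for j in range(i + 1, n)),
        reverse=True,
    )
    parent = list(range(n))  # union-find forest for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    bam = [[0] * n for _ in range(n)]
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # adding (i, j) does not close a cycle
            parent[ri] = rj
            if M[i][j] >= M[j][i]:
                bam[i][j] = 1  # content flows i -> j
            else:
                bam[j][i] = 1
    return bam
```

On a toy three-image vote matrix where image 0 out-votes images 1 and 2 in both directions, the sketch keeps the two strongest edges and orients both away from image 0, marking it as the root of the provenance graph.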
For fused metadata and visual solutions, we start with visual content-based adjacency matrices, which are generated according to the method explained in Section~\ref{sec:algo}. We perform two different computations, one based on SURF~\cite{Bay:CVIU:2008} and the other based on Maximally Stable Extremal Regions (MSER)~\cite{Matas_2004}. Both solutions were proposed and evaluated in~\cite{moreira2018image}, hence we follow their pipeline: (1) extraction of at most $5k$ interest points (either with SURF or MSER), (2) computation of adjacency matrices based on the number of geometrically consistent interest-point matches, (3) computation of adjacency matrices based on mutual information, and (4) application of the cluster-based method for generating provenance graphs. For combining visual content and metadata, we proceed as suggested in Section~\ref{sec:algo}: within the cluster-based algorithm, in the step of establishing the directions of edges, instead of using the mutual-information-based adjacency matrix~\cite{moreira2018image}, we consider the metadata-based one and keep the directions with more votes. \begin{table*}[t] \renewcommand{\arraystretch}{1.2} \caption{Results of provenance graph construction over the NIST NC2017-Dev1-Beta4 dataset. We report the mean and the standard deviation for the metrics on the provided 65 queries. Visual results are from Moreira et al.~\cite{moreira2018image}. 
Best results are in bold.} \centering \footnotesize \vspace{0.3cm} \begin{tabular}{L{2.2cm}L{2.2cm}C{1.6cm}C{1.6cm}C{1.6cm}C{1.6cm}C{1.6cm}C{1.6cm}} \hline \multirow{2}{*}{\textbf{Data Modality}} & \multirow{2}{*}{\textbf{Solution}} & \multicolumn{3}{c}{\textbf{Oracle Filtering}} & \multicolumn{3}{c}{\textbf{End-to-End Analysis}}\\ \cmidrule(lr){3-5} \cmidrule(lr){6-8} & & \textbf{VO} & \textbf{EO} & \textbf{VEO} & \textbf{VO} & \textbf{EO} & \textbf{VEO} \\ \hline \multirow{2}{*}{Visual~\cite{moreira2018image}} & Cluster-SURF & 0.931$\pm$0.075 & 0.124$\pm$0.166 & 0.546$\pm$0.096 & 0.853$\pm$0.157 & 0.353$\pm$0.236 & 0.613$\pm$0.163\\ & Cluster-MSER & 0.892$\pm$0.154 & 0.123$\pm$0.161 & 0.525$\pm$0.129 & 0.835$\pm$0.180 & 0.312$\pm$0.252 & 0.585$\pm$0.177\\ \cmidrule(lr){1-8} \multirow{1}{*}{Metadata} & Kruskal & 0.999$\pm$0.003 & 0.117$\pm$0.099 & 0.577$\pm$0.053 & 0.249$\pm$0.115 & 0.009$\pm$0.016 & 0.130$\pm$0.057\\ \cmidrule(lr){1-8} \multirow{2}{*}{Visual + Metadata} & Cluster-SURF & \textbf{0.931$\pm$0.075} & \textbf{0.445$\pm$0.266} & \textbf{0.699$\pm$0.148} & \textbf{0.853$\pm$0.157} & \textbf{0.384$\pm$0.248} & \textbf{0.628$\pm$0.169} \\ & Cluster-MSER & 0.891$\pm$0.154 & 0.389$\pm$0.254 & 0.651$\pm$0.176 & 0.838$\pm$0.182 & 0.345$\pm$0.232 & 0.603$\pm$0.174\\ \hline \end{tabular} \vspace{-10pt} \label{tab:nist} \end{table*} The provenance graphs generated using the proposed approach for both oracle and end-to-end scenarios are evaluated using the metrics proposed by NIST for the provenance task~\cite{nist2017plan}. The metrics focus on comparing the nodes and edges from both ground-truth and candidate graphs. The corresponding measures of \textbf{Vertex Overlap (VO)} and \textbf{Edge Overlap (EO)} are the harmonic mean of precision and recall (F1 score) for the nodes and edges retrieved by our method. In addition to these, a unified metric representing one score for the graph overlap namely the \textbf{Vertex Edge Overlap (VEO)} is also reported. 
The VEO is the combined F1 score for nodes and edges. All the metrics are computed through the NIST \emph{MediScore} tool~\cite{mediscore}. The values of these metrics lie in the range $[0,1]$ where higher values are better. \subsection{Provenance Analysis for Cultural Analytics} \label{subsec:reddit} To include experiments with more realistic examples, we also evaluate the approaches from Section \ref{sec:algo} on the Reddit dataset introduced in~\cite{moreira2018image} and maintained at~\cite{redditdataset}. This dataset contains provenance cases created from images extracted from the \emph{photoshopbattles} community on the Reddit website~\cite{reddit2017photoshopbattles}. This community provides a platform for users to experiment with image manipulation in a friendly context. Each thread begins with a single image submitted by one user, which serves as the base image for the manipulations of others, whose contributions appear as comments on the original post. For the purpose of provenance, Moreira et al.~\cite{moreira2018image} utilize this comment structure to obtain 184 provenance graphs with an average graph order of 56. For the sake of fair comparison, we evaluate the variants of the proposed approach on the exact same set. The full set of images from Reddit does not contain distractors. This restricts our experiments for provenance analysis in this setting to \emph{oracle-filter} analysis only, in contrast to the NC2017-Dev1-Beta4 dataset. Since the images in the Reddit dataset are collected from the web, the availability of metadata is restricted by the policies of the Reddit website and image hosting services, such as \url{imgur.com}~\cite{imgur2018privacy}. For that reason, the metadata extraction through ExifTool~\cite{exiftool} does not deliver useful tags for provenance analysis.
As an alternative, we use the Reddit users' comments and posts to estimate the date and time of image uploads, thus treating them as \emph{DateTimeOriginal} values, making it possible to invoke the date-based heuristics. Here, an important remark concerns the restricted availability of metadata and the seemingly limited applicability of the present solution. Although metadata might not be available to the general public, image hosting websites might still store it, and would hence be able to apply the method in-house or under legal demand. Other image hosting websites such as Flickr and Picasa can be used as image sources that preserve metadata tags, but they do not provide structured information for provenance ground-truth extraction. This promotes Reddit as a choice for obtaining graphs and evaluating provenance in a cultural setting. To evaluate our experimental results on the Reddit dataset, we employ the same metrics and scorer used in the case of the NC2017-Dev1-Beta4 dataset. \begin{table*}[t] \renewcommand{\arraystretch}{1.1} \caption{Ablation results for oracle and end-to-end provenance. We repeat the experiments seven times for the best solution presented in Table~\ref{tab:nist} (Visual + metadata, Cluster-SURF) in both scenarios, keeping only a subset of heuristics activated at a time.
Best results in bold.} \centering \footnotesize \vspace{0.3cm} \begin{tabular}{R{2.5cm}C{2cm}C{2cm}C{2cm}C{2cm}C{2cm}C{2cm}} \hline \multirow{2}{*}{\textbf{Heuristic}} & \multicolumn{3}{c}{\textbf{Oracle Filtering}} & \multicolumn{3}{c}{\textbf{\RED{End-to-End Analysis}}}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & \textbf{VO} & \textbf{EO} & \textbf{VEO} & \RED{\textbf{VO}} & \RED{\textbf{EO}} & \RED{\textbf{VEO}} \\ \hline Date only & \textbf{0.931$\pm$0.075} & \textbf{0.446$\pm$0.265} & \textbf{0.700$\pm$0.147} & \RED{0.853$\pm$0.157} & \RED{0.389$\pm$0.244} & \RED{0.630$\pm$0.169} \\ Location only & 0.931$\pm$0.075 & 0.394$\pm$0.282 & 0.674$\pm$0.154 & \RED{0.853$\pm$0.157} & \RED{0.348$\pm$0.241} & \RED{0.611$\pm$0.164} \\ Camera only & 0.931$\pm$0.075 & 0.388$\pm$0.269 & 0.672$\pm$0.147 & \RED{0.853$\pm$0.157} & \RED{0.350$\pm$0.234} & \RED{0.612$\pm$0.164} \\ Editing only & 0.931$\pm$0.075 & 0.396$\pm$0.281 & 0.675$\pm$0.153 & \RED{0.853$\pm$0.157} & \RED{0.353$\pm$0.237} & \RED{0.613$\pm$0.163} \\ Thumbnail only & 0.931$\pm$0.075 & 0.411$\pm$0.285 & 0.683$\pm$0.155 & \RED{0.853$\pm$0.157} & \RED{0.363$\pm$0.238} & \RED{0.618$\pm$0.167} \\ \RED{ All but Date} & \RED{0.931$\pm$0.075} & \RED{0.394$\pm$0.280} & \RED{0.675$\pm$0.152} & \RED{0.853$\pm$0.157} & \RED{0.345$\pm$0.247} & \RED{0.610$\pm$0.168} \\ \RED{Date + Thumbnail} & \RED{0.931$\pm$0.075} & \RED{0.444$\pm$0.268} & \RED{0.699$\pm$0.148} & \RED{\textbf{0.853$\pm$0.157}} & \RED{\textbf{0.391$\pm$0.245}} & \RED{\textbf{0.632$\pm$0.169}} \\ \hline \end{tabular} \vspace{-8pt} \label{tab:ablat} \end{table*} \section{Experimental Results} \label{subsec:results} The experiments performed on both datasets show that utilizing knowledge from metadata helps in the process of edge inference for provenance. 
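Throughout the tables, VO, EO and VEO are F1-style overlaps between candidate and ground-truth graphs, as defined in Section 4.1. A minimal stdlib sketch of these measures, assuming set-based node and directed-edge representations (the official NIST MediScore tool remains the reference implementation):

```python
def f1(retrieved, truth):
    """Harmonic mean of precision and recall over two sets."""
    if not retrieved or not truth:
        return 0.0
    inter = len(retrieved & truth)
    p, r = inter / len(retrieved), inter / len(truth)
    return 2 * p * r / (p + r) if inter else 0.0

def provenance_overlap(nodes, edges, gt_nodes, gt_edges):
    """VO, EO, and a combined VEO for a candidate graph vs. ground truth.
    Edges are directed (parent, child) tuples."""
    vo = f1(nodes, gt_nodes)
    eo = f1(edges, gt_edges)
    # Combined F1 over the pooled node and edge sets.
    inter = len(nodes & gt_nodes) + len(edges & gt_edges)
    veo = 2 * inter / (len(nodes) + len(edges) + len(gt_nodes) + len(gt_edges))
    return vo, eo, veo
```

A candidate graph with all the right nodes but one of two edges reversed scores VO = 1.0 and EO = 0.5, illustrating why edge overlap lags vertex overlap in the tables: a flipped direction costs both a false positive and a false negative.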
As can be observed from the values reported in Table~\ref{tab:nist}, the proposed method significantly improves total edge overlap, and thereby total graph overlap, since it uses image-content-based information to initially establish connections between images, then relies on metadata to refine edge direction. The tags and checks used in this work yield an edge overlap of 44.5\% and graph overlap (VEO) of $\sim$70\% for provenance in the oracle scenario, improving notably over the current state of the art~\cite{moreira2018image} by $\sim$15 percentage points (pp). More notably, metadata fusion provides a $\sim$30pp increase in EO in the oracle cases, when compared to~\cite{moreira2018image}. In the end-to-end scenario, metadata usage also shows improvements in edge overlap by $\sim$3pp, aiding the overall graph overlap to reach $>$60\%. Provenance analysis solutions thus far have struggled at obtaining good edge reconstruction, as can be seen from the disparity between the vertex and edge overlap. Furthermore, the addition of distractors reduces performance by $\sim$5pp, implying that semantically similar images within the distractor sets can lead to high inter-image similarity between pairs that should not be related. This can negatively impact greedy graph construction approaches. Some success and failure provenance cases are presented in the supplemental material, including the graph visualizations. To understand the contribution of each type of metadata information, we conduct an ablation study on the oracle and end-to-end scenarios using the \emph{Visual+Metadata, Cluster-SURF} method from Section~\ref{subsec:nistprov}. We perform the experiment seven times, for each scenario, using only a subset of heuristics for each run. Results are presented in Table~\ref{tab:ablat}. In the oracle scenario, while all five tags individually benefit graph EO, the date-based one performs best, followed by thumbnail usage.
For that reason, we also present, in the last two rows of Table~\ref{tab:ablat}, the results of having all heuristics combined except for date (to assess the impact of avoiding the best one), as well as the combination of date and thumbnail (the two best ones). Indeed, the date-based heuristic alone slightly surpasses the combination of heuristics, in this particular dataset and scenario. In the end-to-end scenario, in turn, observations are somewhat different. Metadata tags alone do not improve the results of the visual solution, except for date and the date-thumbnail fusion, with the latter showing the best results. Again, this might be particular to the dataset, where the added distractors probably present more unreliable metadata (due to tampering or removal). This reveals the importance of combining tags, since it leads to a solution that is more robust to metadata tampering. \begin{table}[t] \renewcommand{\arraystretch}{1.1} \caption{Results of provenance graph construction over the Reddit dataset. We report the average values of the metrics over the 184 cases, as well as the standard deviations. This dataset only allows us to report oracle-filtering results. Visual results are from Moreira et al.~\cite{moreira2018image}. Best results are in bold.
} \centering \footnotesize \vspace{0.3cm} \begin{tabular}{R{2cm}C{1.5cm}C{1.5cm}C{1.5cm}} \hline \multicolumn{1}{c}{\textbf{Solution}} & \textbf{VO} & \textbf{EO} & \textbf{VEO}\\ \hline \multicolumn{4}{l}{Visual~\cite{moreira2018image}:} \\ \ \ \ \ Cluster-SURF & 0.757$\pm$0.341 & 0.037$\pm$0.034 & 0.401$\pm$0.181\\ \ \ \ \ Cluster-MSER & 0.509$\pm$0.388 & 0.027$\pm$0.034 & 0.271$\pm$0.207 \\ \cmidrule(lr){1-4} \multicolumn{4}{l}{Metadata:} \\ \ \ \ \ Kruskal & \textbf{0.969$\pm$0.073} & 0.034$\pm$0.086 & \textbf{0.506$\pm$0.056} \\ \cmidrule(lr){1-4} \multicolumn{4}{l}{Visual + Metadata:} \\ \ \ \ \ Cluster-SURF & 0.757$\pm$0.341 & \textbf{0.085$\pm$0.065} & 0.424$\pm$0.193 \\ \ \ \ \ Cluster-MSER & 0.509$\pm$0.388 & 0.061$\pm$0.063 & 0.288$\pm$0.220 \\ \hline \end{tabular} \label{tab:reddit} \end{table} Provenance analysis becomes significantly more difficult when dealing with real-world scenarios, such as those presented in the Reddit dataset. Although metadata doubles the number of correctly retrieved edges, as seen in Table~\ref{tab:reddit}, the edge overlap is still much lower than for the NC2017-Dev1-Beta4 dataset. In the Reddit cases, images can be connected by visual puns, inside jokes, and purely associative content without any direct visual correspondence between them. This is very common in meme-style imagery. Understanding the quirks and sentiments of human language can further help provenance analysis in these contexts, but it has not yet been explored. To perform complex relationship retrieval using image provenance analysis, input from other modalities, such as text comments, may be required. Since all experiments calculate initial correspondences using only visual image content, the purely visual method and visual + metadata based method perform identically with respect to VO. This metric is generally high with a low standard deviation whereas the EO has very high standard deviation. 
Due to the vast range of possible transformations, the provenance analysis approaches are not able to detect and map certain image relationships as well as others. The results of the experiments for both scenarios show that SURF detections for image matching are better than MSER detections, which is consistent with the results in~\cite{moreira2018image}. \section{Discussion} \label{sec:discussion} Image metadata is a valuable asset for improving results for vision-based problems such as image retrieval~\cite{sasikala2015efficient}, semantic segmentation~\cite{ardeshir2015geo}, and manipulation detection~\cite{huh2018fighting}. Our work demonstrates that the task of image provenance analysis also benefits from metadata. External context can corroborate evidence from purely visual techniques, creating an overall better solution to provenance graph reconstruction. In addition to utilizing information that cannot be derived from the images themselves, metadata-based approaches are computationally very cheap. Furthermore, unlike complex, data-driven, vision-based techniques that require large amounts of training resources, methods like ours require no training at all. Such methods can be deployed easily on a large scale, incurring very little performance overhead. Approaches that require large amounts of training data can suffer due to the relatively small sizes of currently available provenance datasets. And most datasets published in this field so far are indeed small. Even though external information can improve image-based approaches, provenance analysis is still far from being solved. This work only presents a preliminary exploration of utilizing metadata in provenance analysis. While our results show improvement, metadata-based approaches have higher chances of being rendered unreliable due to their absence or manipulation. Further advancements in solving the problem must focus on the examination of content-derived metadata as well. 
Future work could include estimating missing metadata information from the content and available tags~\cite{fan2013estimating,tsai2005extent}. For now, our findings suggest that image-content-based methods should be the fallback option, as metadata alone is more useful for determining edge directions than for edge selection. We surmise that, going forward, the best provenance approaches should rely primarily on image content, but utilize metadata analysis as a secondary refinement system in scenarios where it is present and provides ample evidence.
\section{Introduction} Stochastic growth equations play a central role in the understanding of surface growth phenomena and are used to classify the different universality classes~\cite{barabasi,meakin}. The Kardar-Parisi-Zhang (KPZ) universality class, introduced by the stochastic equation~\cite{KPZ} \begin{equation} \frac{\partial h}{\partial t} = \nu \nabla^{2} h + \frac{\lambda}{2} (\nabla h)^{2} + \xi, \label{eq:KPZ} \end{equation} is one of the most fundamental examples of nonequilibrium interface growth models~\cite{krugrev,SasaSpohnJsat,TakeJSP}. Here, $h(\mathbf{x},t)$ represents the interface height at position $\mathbf{x}$ and time $t$; the first term on the right-hand side accounts for the relaxation due to surface tension, the second for the local lateral growth in the direction normal to the surface, and the last is a white noise with zero mean and amplitude $\sqrt{D}$. The hallmark of the KPZ class is the lateral growth, the second term in Eq.~\eqref{eq:KPZ}, which leads to an excess velocity such that the interface envelope moves faster (or slower if $\lambda<0$) than the rate at which particles are added to the system. The interfaces generated by the KPZ equation obey the Family-Vicsek ansatz~\cite{FV} for the interface width, given by the standard deviation of the height profile, defined as $w=\sqrt{\lrangle{h^2}-\lrangle{h}^2}$. For a scale of observation $\ell$ and a growth time $t$, we have that $w(\ell,t)\sim t^{\beta}$ for $t\ll \ell^{\alpha/\beta}$ and $w(\ell,t)\sim \ell^\alpha$ for $t\gg \ell^{\alpha/\beta}$, where $\alpha$ and $\beta$ are the roughness and growth exponents, respectively~\cite{barabasi}. The scaling relation $\alpha + \alpha/\beta =2$, representing Galilean invariance, holds independently of the dimension~\cite{barabasi}. In $1+1$ dimensions the exponents are exactly known, $\beta=1/3$ and $\alpha=1/2$~\cite{KPZ}; in higher dimensions the exponents are obtained from simulations~\cite{odor2010,alves14,Kim2014,Marinari2002}.
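The width measurement entering the Family-Vicsek analysis can be illustrated with a short numerical sketch (not part of the original analysis; a minimal one-dimensional illustration assuming a height profile stored as a numpy array), which measures $w$ inside observation windows of size $\ell$:

```python
import numpy as np

def local_width(h, ell):
    """Interface width w = sqrt(<h^2> - <h>^2) averaged over
    observation windows of lateral size ell."""
    windows = np.lib.stride_tricks.sliding_window_view(h, ell)
    # variance inside each window, then average over window positions
    return np.sqrt(np.mean(windows.var(axis=1)))
```

Repeating this measurement along the growth would expose the crossover between the $w\sim t^{\beta}$ and $w\sim\ell^{\alpha}$ regimes described above.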
A thorough analysis of the KPZ class includes the nature of the underlying stochastic fluctuations~\cite{TakeJSP,SasaSpohnJsat}. Considering the non-stationary regime, the height at each surface point evolves as \begin{equation} h = v_\infty t + s_\lambda(\Gamma t)^{\beta} \chi+\eta+\ldots, \label{eq:htcorr} \end{equation} where $s_\lambda=\mbox{sgn}(\lambda)$ and $\chi$ is a stochastic variable, whose distribution is universal and depends on the growth geometry and boundary conditions~\cite{krug92,PraSpo1,TakeuchiSP,carrasco2014}. The constants $v_\infty$ and $\Gamma$ are non-universal and control, respectively, the asymptotic average velocity and the amplitude of the height fluctuations of the interface. The last term on the right-hand side of Eq.~\eqref{eq:htcorr} is a non-universal correction that plays an important role in finite-time analyses in simulations~\cite{Alves13,Oliveira13R} and experiments~\cite{TakeSano,TakeuchiSP}. It produces a shift in the distribution of the quantity \begin{equation} q = \frac{h-v_\infty t}{s_\lambda(\Gamma t)^{\beta}}, \label{eq:q} \end{equation} relative to the asymptotic distribution of $\chi$. Except for the very specific case where $\lrangle{\eta}=0$~\cite{Ferrari}, the shift vanishes as $\lrangle{q}-\lrangle{\chi}\sim t^{-\beta}$~\cite{TakeSano,TakeuchiSP,Alves13,Oliveira13R}. Despite the absence of exact results in higher dimensions, numerical results show that the KPZ ansatz remains valid up to $d=6+1$~\cite{healyPRL,healyPRE,Oliveira13R,bd_box2d,alves14}. Discrete growth models are valuable theoretical tools for the realization of universality classes in surface growth phenomena~\cite{barabasi,meakin} since they permit the flexible implementation of specific physical mechanisms. The ballistic deposition (BD) model is a paradigmatic interface growth process initially designed to investigate the formation of sediments by the aggregation of small particles from a colloid dispersion~\cite{vold}.
In the BD model, particles move ballistically and normally towards the substrate and are irreversibly attached at the first contact with the deposit, producing, therefore, the lateral growth that is a central characteristic of the KPZ universality class~\cite{KPZ}. However, the surface evolution exhibits strong corrections to the scaling, traditionally attributed to an intrinsic width~\cite{wolf,moro,evans,chavez}, that hamper the direct observation of the KPZ critical exponents in this model. Continuous (coarse-grained) limits of the BD model in $d=1+1$ yield the KPZ equation to leading order, but inconsistencies were found in higher dimensions~\cite{Katzav,Haselwandter}. Preceded by studies relying on finite-time and -size corrections~\cite{fabioBD,FabioPhysA2006} and on the intrinsic width~\cite{wolf,moro,evans,chavez,tiago2}, a direct observation of the KPZ universality class for the BD model in $d=1+1$ was obtained recently by means of thorough simulations of very large systems and very long growth times~\cite{vvdensky,vvdensky2}. Recently, a connection between the BD model and the KPZ class in 2+1 dimensions was made possible by unveiling the nature of the intrinsic width of the model~\cite{bd_box2d}. It was shown that the leading contribution to the intrinsic width comes from the short wavelength fluctuations in the height increments $\delta h$ along the deposition events. Besides, it was shown that these effects can be suppressed using a coarse-grained interface built from the original one~\cite{bd_box2d}; see Sec.~\ref{model_methods}. An important theoretical problem is the upper critical dimension $d_u$ above which fluctuations become negligible.
Analytically, there is no consensus on the value of $d_u$ (see discussions in Ref.~\cite{Pagnani13}), and an appealing recent non-perturbative renormalization group analysis rules out $d_u=3+1$, but the approach loses reliability for $d\gtrsim3.5+1$ within the approximations considered~\cite{Canet,Canet2}. Moreover, numerical simulations of models believed to belong to the KPZ class practically discard $d_u=4+1$~\cite{Pagnani13,Kim13,Schwartz2012,Perlsman2006,odor2010}, and evidence up to $d_u=11+1$ has been recently reported~\cite{Kim2014,Rodrigues15,alves14} in agreement with former conjectures~\cite{Marinari2002,Tu1994,Ala-Nissila1993,Ala-Nissila1998}. While in 2+1 dimensions the generalization of the KPZ ansatz was supported by several models~\cite{healyPRL,Oliveira13R,healyPRE}, its extension to $d>2$ was based on numerical simulations~\cite{alves14} of the restricted-solid-on-solid (RSOS) model~\cite{KK}. In the present work, we investigate the BD model, extending the analysis of Ref.~\cite{bd_box2d} to $3+1$ and $4+1$ dimensions. We verify the validity of the KPZ universality class, including the exponents and the ansatz. We also revisit the values of the cumulants of $\chi$ presented in Ref.~\cite{alves14} for the RSOS model, now using more accurate estimates of $\alpha$. The paper is organized as follows. In the next section the model details and the approach used are presented. In Sec.~\ref{results}, the results are presented and discussed. The conclusions are summarized in Sec.~\ref{conclusions}. \section{Model and methods} \label{model_methods} The ballistic deposition growth model is implemented on hypercubic lattices in $d+1$ dimensions with lateral size $L$ and periodic boundary conditions. Particles are deposited one at a time at a randomly chosen position of a $d$-dimensional substrate. Each particle is released perpendicularly to the substrate and becomes permanently stuck at its first contact with either the deposit or the substrate~\cite{barabasi}.
The original interface is defined by the highest position of a particle at each site of the substrate. A time unit corresponds to the aggregation of $L^d$ particles to the deposit. The simulations were carried out on substrates of sizes up to $L=1024$ with averages over up to $N=2000$ independent samples in $d=3+1$. For $d=4+1$, we consider systems of size up to $L=228$ and up to $N=1000$ samples. The smaller the size, the larger the number of samples. We also investigate surfaces using the prescription of Ref.~\cite{bd_box2d}. The procedure consists in dividing the original surface into bins of lateral size $\varepsilon$, the binning parameter, and using only the site of highest height inside each bin to build a coarse-grained interface, which is then used to compute statistics. The net effect is that the binned interface is smoother than the original one, the latter characterized by many narrow and deep valleys. In $d=2+1$, it was shown that the intrinsic width of the coarse-grained surfaces is strongly reduced and, consequently, the strong corrections to the scaling fall off~\cite{bd_box2d}. It was also shown that the binning does not change the non-universal constants $\Gamma$ and $v_\infty$. The non-universal constants in the KPZ equation, Eq.~\eqref{eq:KPZ}, and in its ansatz, Eq.~\eqref{eq:htcorr}, can be obtained using the approach hereafter called the Krug-Meakin (KM) method~\cite{krug90}, which is described as follows. From Eq.~\eqref{eq:htcorr}, the asymptotic velocity is given by \begin{equation} \frac{d\lrangle{h}}{dt}=v_\infty + \lrangle{g} t^{\beta-1}+\cdots, \label{eq:dhdt} \end{equation} where $\lrangle{g}=\beta s_\lambda \Gamma^\beta \lrangle{\chi}$. Thus, plotting $d\lrangle{h}/dt$ against $t^{\beta-1}$ renders a straight line at long times, whose intercept provides $v_\infty$ and whose slope provides $\lrangle{g}$.
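The deposition rule and the coarse-graining just described can be sketched in a few lines (an illustrative 1+1-dimensional sketch for readability, not the production code; the simulations in this work use $d$-dimensional substrates):

```python
import numpy as np

def bd_deposit(h, n_particles, rng):
    """Ballistic deposition on a periodic 1D substrate: a particle dropped
    on column i sticks at its first contact with the deposit, i.e. on top
    of column i or at the height of the tallest nearest neighbor."""
    L = len(h)
    for _ in range(n_particles):
        i = rng.integers(L)
        h[i] = max(h[i] + 1, h[(i - 1) % L], h[(i + 1) % L])
    return h

def binned_surface(h, eps):
    """Coarse-grained interface of Ref. [bd_box2d]: keep only the highest
    site inside each bin of lateral size eps (L must be divisible by eps)."""
    return np.asarray(h).reshape(-1, eps).max(axis=1)
```

The binned interface discards the narrow, deep valleys of the original one, which is the mechanism behind the suppression of the intrinsic width.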
The latter plays an important role in determining the cumulant ratio $R=\lranglec{\chi^2}/\lrangle{\chi}^2$, where $\lranglec{A^n}$ denotes the $n$th-order cumulant of $A$; see Sec.~\ref{sec:hds}. The parameter $\lambda$ is obtained by deposition on large tilted substrates with an overall slope $s$, for which a simple dependence between the asymptotic velocity and the slope, \begin{equation} v \simeq v_\infty+\frac{\lambda}{2}s^2, \label{eq:lambda} \end{equation} is expected for the KPZ equation~\cite{krug90}. We can use the relation~\cite{krug90} \begin{eqnarray} \Gamma=|\lambda| A^{1/\alpha}, \label{eq:Gamma} \end{eqnarray} where $\alpha$ is the roughness exponent of the KPZ class, to determine the amplitude of the fluctuations. The parameter $A$ is obtained from the asymptotic velocity $v_L$ of finite systems of size $L$~\cite{krug90} using the relation \begin{equation} \Delta v = v_L-v_\infty \simeq -\frac{A\lambda}{2}L^{2\alpha-2}. \label{eq:vl} \end{equation} The KM analysis requires prior accurate knowledge of both the growth and roughness exponents. In $d=3+1$, we adopt the growth exponent $\beta_{3+1}=0.184(5)$ reported by \'Odor \textit{et al.}~\cite{odor2010}, since it has a small uncertainty and was obtained for a model with small corrections to the scaling using large systems of size $L=1024$. In $d=4+1$, we adopt the recent estimate $\beta_{4+1}=0.158(6)$ determined by Kim and Kim~\cite{Kim13} using a RSOS model with an optimal height restriction parameter that reduces the corrections to the scaling. The determination of $\Gamma$ is extremely sensitive to the value of the roughness exponent, since the latter is used twice in the analysis, via Eqs.~\eqref{eq:Gamma} and \eqref{eq:vl}.
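The first step of the KM procedure can be checked on synthetic data obeying Eq.~\eqref{eq:dhdt} (a hypothetical numerical sketch; only the fit form is taken from the text, and the parameter values below are arbitrary):

```python
import numpy as np

def km_velocity_fit(t, h_mean, beta):
    """Krug-Meakin fit: d<h>/dt = v_inf + <g> t^(beta-1), so the velocity
    plotted against t^(beta-1) is a straight line whose intercept gives
    v_inf and whose slope gives <g>."""
    v = np.gradient(h_mean, t)                    # numerical growth velocity
    g_mean, v_inf = np.polyfit(t ** (beta - 1.0), v, 1)
    return v_inf, g_mean
```

The same linear-regression idea is reused below for $\lranglec{g^2}$ and for $\lrangle{\eta}$, changing only the plotted quantities.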
In Ref.~\cite{alves14}, the exponents of \'Odor \textit{et al}.~\cite{odor2010} were used, which in $d=4+1$ give $\alpha_{4+1}=0.245(5)$, and $\Gamma^\mathrm{(Odor)}=240(50)$ was found for the RSOS model, leading to $\lrangle{\chi}^\mathrm{(Odor)}_{4+1}=-1.00(5)$ and $\lrangle{\chi^2}_\mathrm{c,4+1}^\mathrm{(Odor)}=0.09(1)$ (see Ref.~\cite{alves14} or Sec.~\ref{sec:hds} for the procedure used to determine these cumulants). Here, we revisit the data of Ref.~\cite{alves14} using the more recent estimate of Pagnani and Parisi~\cite{Pagnani13}, $\alpha_{4+1}=0.2537(8)$, obtained through a thorough finite-size analysis, and we find a different value, $\Gamma^\mathrm{(Pagnani)}=105(8)$, which leads to $\lrangle{\chi}^\mathrm{(Pagnani)}_{4+1}=-1.14(2)$ and $\lrangle{\chi^2}_\mathrm{c,4+1}^\mathrm{(Pagnani)}=0.12(1)$; these are, in absolute values, 14\% and 30\%, respectively, above the estimates of Ref.~\cite{alves14}. Similarly, we revisit the data of Ref.~\cite{alves14} for RSOS in $d=3+1$ using an earlier estimate with smaller uncertainties, $\alpha_{3+1}=0.3135(15)$, by Marinari \textit{et al}.~\cite{Marinari}, obtained using the same method as Ref.~\cite{Pagnani13}, and we find $\Gamma^\mathrm{(Marinari)}_{3+1}=15.8(6)$, in contrast with $\Gamma^\mathrm{(Odor)}_{3+1}=38(3)$, which was found using $\alpha_{3+1}=0.29(1)$~\cite{odor2010}. This difference in $\Gamma$ leads to first and second cumulants approximately 20\% and 50\% larger than those found using $\alpha_{3+1}=0.29(1)$. The results are summarized in Table~\ref{tab:RSOS}. \begin{table}[] \centering \caption{Non-universal parameter $\Gamma$ and cumulants of $\chi$ for the RSOS model with height restriction parameter $m=2$ (data from Ref.~\cite{alves14}) obtained using two different values of the roughness exponent reported in the literature for each dimension.} \label{tab:RSOS} \begin{tabular}{ccccc} \hline\hline $d$ & \multicolumn{2}{c}{3+1} & \multicolumn{2}{c}{4+1} \\\hline Ref.
& ~~\'Odor~\cite{odor2010} & ~~Marinari~\cite{Marinari} & ~~\'Odor~\cite{odor2010} & ~~Pagnani~\cite{Pagnani13} \\\hline $\alpha$ & 0.29(1) & 0.3135(15) & 0.245(5) & 0.2537(8) \\ $\Gamma$ & 38(3) & 15.8(6) & 240(50) & 105(8) \\ $\lrangle{\chi}$ & $-0.86$ & $-1.06$ & $-1.00$ & $-1.14$ \\ $\lranglec{\chi^2}$ & 0.12 & 0.18 & 0.09 & 0.12 \\\hline\hline \end{tabular} \end{table} \section{Results and discussion} \label{results} \subsection{Scaling and intrinsic width} \label{sec:wint} \begin{figure}[ht] \centering \includegraphics*[width=0.85\linewidth,angle=90]{w2_ells3d_2.pdf} \caption{(a) Time evolution of the squared interface width of the BD model for both the original ($\varepsilon=1$) and the reconstructed surfaces in $d=3+1$. The dashed line is a power law with exponent $2\beta = 0.368$. Similar behavior is observed for $d=4+1$. Effective growth exponents, $\beta_\mathrm{eff}=d(\ln w)/d(\ln t)$, are shown in the bottom panels for (b) $d=3+1$ and (c) $4+1$. The dashed horizontal lines are the growth exponents found for other models in the KPZ class with small corrections to the scaling in the respective dimensions~\cite{odor2010,Kim13}.} \label{fig:w2_ells} \end{figure} Figure~\ref{fig:w2_ells}(a) shows the interface width evolution in $d=3+1$ considering the original surface of the BD model as well as those obtained with binning parameters $\varepsilon=2$, 4, and 8. The effective growth exponents, given by the local derivative of $\ln w$ versus $\ln t$, are shown in Figs.~\ref{fig:w2_ells}(b) and (c). As aforementioned, the time evolution of the interface width for the original BD surfaces exhibits strong corrections to the scaling, leading to a very low effective exponent $\beta_\mathrm{eff}$. In particular, $\beta_\mathrm{eff}$ becomes close to zero for $d=4+1$ in the investigated time interval, which would naively suggest an upper critical dimension $d_u=4$.
However, as in the previous $d=2+1$ analysis~\cite{bd_box2d}, a convergence to the KPZ growth exponent is observed for the coarse-grained surfaces with $\varepsilon>1$ in both $d=3+1$ and 4+1; see Fig.~\ref{fig:w2_ells}. Notice that there is an optimal interval of bin sizes where the convergence becomes faster. Indeed, if the bin size is very small, the reconstructed surface still has narrow and deep valleys and thus a high intrinsic width. On the other hand, if $\varepsilon$ is too large, only extremal heights are accessed in the statistics and the convergence slows down. The strong corrections observed in the interface width scaling can be accounted for by an additive term, the squared intrinsic width $w^2_i$~\cite{tiago2,evans,chavez,moro}, in the Family-Vicsek ansatz~\cite{FV} as \begin{equation} w^2(L,t) = L^{2\alpha}f\left(\frac{t}{L^{z}}\right) + w^2_i, \label{eq:FV} \end{equation} where the scaling function $f(x)$ behaves as $f(x)\sim x^{2\beta}$ if $x\ll 1$ and $f(x)\sim \mbox{constant}$ if $x\gg 1$. The intrinsic width can be expressed in terms of the KPZ ansatz, Eq.~\eqref{eq:htcorr}, as~\cite{bd_box2d} \begin{equation} w^2_i = \langle h^2\rangle_c - (\Gamma t)^{2\beta} \langle\chi^2\rangle_c. \label{eq:wi} \end{equation} According to Eq.~\eqref{eq:htcorr}, the second cumulant of the height is given by \begin{equation} \label{eq:h2cum} \langle h^2\rangle_c = (\Gamma t)^{2\beta} \langle\chi^2 \rangle_c + 2(\Gamma t)^{\beta} \mathrm{cov}(\chi,\eta) + \langle\eta^2\rangle_c + \ldots, \end{equation} where $\mathrm{cov}(\chi,\eta)=\langle\chi\eta\rangle-\langle\chi\rangle\langle\eta\rangle$.
The cumulant $\lranglec{g^2}=\Gamma^{2\beta} \lrangle{\chi^2}_c$~\cite{Alves13}, necessary to compute $w_i$, can be estimated considering the long-time limit \begin{equation} \lranglec{g^2} = \lim_{t \to \infty}\frac{\langle h^2\rangle_c}{t^{2\beta}}. \end{equation} Assuming that there is no statistical dependence between $\chi$ and $\eta$, i.e. $\mathrm{cov}(\chi,\eta)=0$, a linear extrapolation to $\lranglec{g^2}$ is expected in curves of $\langle h^2\rangle_c/t^{2\beta}$ against $t^{-2\beta}$, as confirmed in Fig.~\ref{fig:g2_e} in both dimensions for three values of the binning parameter. Propagating the uncertainties in the growth exponents, the estimated values are $\lranglec{g^2}^{(3+1)}=1.4(1)$ and $\lranglec{g^2}^{(4+1)}=0.93(8)$; see Table~\ref{tab:parBD}. \begin{figure}[ht] \centering \includegraphics*[width=0.85\linewidth]{g2andg1_34d.pdf} \caption{Determination of non-universal cumulants. Top: $\lranglec{g^2}=\Gamma^{2\beta} \lrangle{\chi^2}_c$ for $d=3+1$ (open symbols) and 4+1 (filled symbols) for BD using binned substrates with $\varepsilon=2$, 4, and 8 from top to bottom. The lines are linear regressions used to determine $\lranglec{g^2}$. Bottom: determination of $\lrangle{g}=\beta\Gamma^\beta\lrangle{\chi}$ for $d=3+1$ (open symbols) and 4+1 (filled symbols) for BD using binned substrates with $\varepsilon=2$, 3, and 4. Dashed lines are estimates of $\lrangle{g}$.} \label{fig:g2_e} \end{figure} The leading contribution to the intrinsic width in $d=2+1$ comes from the large fluctuations of the height increments in the deep valleys of the BD interfaces~\cite{bd_box2d}: $w^2_i\approx \langle (\delta h)^2\rangle_c$, where $\delta h(i,t) =h(i,t+dt)-h(i,t)$ is the increment at site $i$ in a time step $dt=1/L^d$. In the present work, we verify that this conjecture remains accurate for $d=3+1$ and 4+1. The upper inset of Fig.~\ref{fig:w2dh2}(a) shows the time evolution of the squared intrinsic width, Eq.~\eqref{eq:wi}, and of the second cumulant of $\delta h$.
We observe a very good agreement between these quantities. The squared intrinsic widths found at long times, propagating the uncertainties in both $\beta$ and $\lranglec{g^2}$, were $(w_i^{(3+1)})^2=21.1(1)$ and $(w_i^{(4+1)})^2=32.6(1)$, while for the height increments we found $\lranglec{(\delta h)^2}=21.13$ and 32.10 in $d=3+1$ and 4+1, respectively; see Table~\ref{tab:parBD}. This shows that the corrections to the scaling become more relevant at higher dimensions and explains why it is currently impossible to see the KPZ exponents in the high-dimensional BD model using a plain analysis. We also compared the third cumulant of $\delta h$ with $\langle h^3\rangle_c - (\Gamma t)^{3\beta} \langle\chi^3\rangle_c$, and a small but relevant difference was found, as in $d=2+1$~\cite{bd_box2d}, showing a non-trivial relation between the height increments and the correction terms in Eq.~\eqref{eq:htcorr}. \begin{table}[] \centering \caption{Non-universal parameters for the BD model.} \label{tab:parBD} \begin{tabular}{ccccc} \hline\hline $d$ & $\lrangle{g}$& $\lranglec{g^2}$ & $\lranglec{(\delta h)^2}$ & $w_i^2$ \\\hline 3+1 & 0.568(5) & 1.4(1) & 21.13 & 21.1(1) \\ 4+1 & 0.466(5) & 0.93(8) & 32.10 & 32.6(1) \\\hline\hline \end{tabular} \end{table} \begin{figure*}[ht] \centering \includegraphics[width=0.33\linewidth]{w2_sat_dh2.pdf} \includegraphics[width=0.33\linewidth]{wLs3d_delta2} \includegraphics[width=0.31\linewidth]{wLs3d_delta2_resc} \caption{\label{fig:w2dh2} Interface width analysis for BD in $d=3+1$ and $4+1$. (a) Main panel: squared interface width discounting the second cumulant of the height increments for systems of sizes $L=1024$ and $228$ in $d=3+1$ and 4+1, respectively. The lines are power laws with exponents $2\beta = 0.368$ and $0.290$. Top inset: time evolution of the intrinsic width and of the second cumulant of $\delta h$ for $d=3+1$. Bottom inset: effective growth exponent analysis.
(b) Main panel: squared interface width discounting the second cumulant of the height increments for $d=3+1$ and different sizes. Left inset: saturated squared interface width, discounting or not the second cumulant of $\delta h$, against lattice size for $d=3+1$. Right inset: effective roughness exponent analysis for $d=3+1$ and 4+1. (c) Squared interface width in $d=3+1$ scaled with the exponents found in our analysis.} \end{figure*} The evolution of the interface width discounting $\langle (\delta h)^2\rangle_c$ for the original BD interfaces is shown in the main panel of Fig.~\ref{fig:w2dh2}(a). Differently from the binning procedure, this method is free from adjustable parameters. The growth exponents found were $\beta_{3+1}=0.185(5)$ and $\beta_{4+1}=0.145(10)$, in sharp agreement with the exponent $\beta_{3+1}=0.184(5)$ of Ref.~\cite{odor2010} and in marginal agreement with the recent estimate $\beta_{4+1}=0.158(6)$ of Ref.~\cite{Kim2014}, as can be seen in the effective exponent analysis in the bottom inset of Fig.~\ref{fig:w2dh2}(a). Here, it is worth noting that the intrinsic width is slightly larger than $\lrangle{(\delta h)^2}_c$ in $d=4+1$, which, together with the finite times used, can explain the slightly smaller growth exponent found in this dimension. This strategy can be used to obtain the roughness exponent $\alpha$ as well. The squared interface width discounting $\lrangle{(\delta h)^2}_c$ is shown as a function of time for different sizes in $d=3+1$ in Fig.~\ref{fig:w2dh2}(b). The left inset compares the saturated values of $w^2$ and $w^2-\lrangle{(\delta h)^2}_c$. We see that the intrinsic width is still much larger than the long-wavelength interface width, obtained by discounting the intrinsic one, even for the largest investigated size, $L=256$. Note that $\lranglec{(\delta h)^2}$ has a small but non-negligible size dependence that was taken into account in our analysis.
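The parameter-free discounting procedure used here amounts to subtracting one constant from the measured squared width. A minimal sketch (illustrative only; it assumes the height profile and the single-deposition increments are available as numpy arrays):

```python
import numpy as np

def corrected_squared_width(h, dh):
    """Squared interface width with the intrinsic-width estimate
    <(delta h)^2>_c (second cumulant of the height increments)
    discounted, as in the modified Family-Vicsek analysis."""
    return np.var(h) - np.var(dh)
```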
The right inset of Fig.~\ref{fig:w2dh2} shows the effective roughness exponent analysis for $d=3+1$ and 4+1. The estimated values of the roughness exponents are $\alpha_{3+1}=0.312(2)$ and $\alpha_{4+1}=0.251(5)$, which, within uncertainties, agree very well with both the estimates $\alpha_{3+1}^\mathrm{(Marinari)}=0.3135(15)$~\cite{Marinari} and $\alpha_{4+1}^\mathrm{(Pagnani)}=0.2537(8)$~\cite{Pagnani13}. Considering our estimates for the growth exponent, we find $\alpha+\alpha/\beta=2.00(15)$ and 1.98(15) in $d=3+1$ and 4+1 dimensions, respectively, in agreement with the Galilean invariance scaling relation~\cite{KPZ}. In Fig.~\ref{fig:w2dh2}(c), we confirm the validity of the modified Family-Vicsek ansatz, Eq.~\eqref{eq:FV}, showing the collapse of $(w^2-\lrangle{(\delta h)^2}_c)/L^{2\alpha}$ against $t/L^{\alpha/\beta}$ in $d=3+1$ for different system sizes onto a universal curve. \subsection{Height Distribution} \label{sec:hds} Let us now focus on the random variable $\chi$ of the KPZ ansatz. An initial assessment involves dimensionless cumulant ratios, which can be determined without knowing the constants $\Gamma$ and $v_\infty$. The skewness $S$ and kurtosis $K$ are given by \begin{equation} S=\frac{\lranglec{\chi^3}}{\lranglec{\chi^2}^{3/2}}=\lim\limits_{t\rightarrow\infty} \frac{s_\lambda\lranglec{h^3}}{\lranglec{h^2}^{3/2}} \end{equation} and \begin{equation} K=\frac{\lranglec{\chi^4}}{\lranglec{\chi^2}^{2}}=\lim\limits_{t\rightarrow\infty} \frac{\lranglec{h^4}}{\lranglec{h^2}^{2}}, \end{equation} where the right-hand sides are obtained from Eq.~\eqref{eq:htcorr}. Another useful cumulant ratio is given by~\cite{Oliveira13R,Alves13} \begin{equation} R=\frac{\lranglec{\chi^2}}{\lrangle{\chi}^{2}} = \frac{\beta^2\lranglec{g^2}}{\lrangle{g}^{2}}, \end{equation} where $\lrangle{g}=\beta\Gamma^\beta\lrangle{\chi} = \lim\limits_{t\rightarrow\infty}(d\lrangle{h}/dt-v_\infty)t^{1-\beta}$; see Eq.~\eqref{eq:dhdt} and Fig.~\ref{fig:g2_e}.
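These ratios can be estimated directly from height samples at a fixed long time, with no knowledge of $\Gamma$ or $v_\infty$. A minimal sketch using sample moments (illustrative, not the original analysis code):

```python
import numpy as np

def skew_kurt(h, sgn_lambda=1):
    """Skewness S and (excess) kurtosis K from the cumulants of height
    samples h at a fixed time; the t -> infinity limits of these ratios
    give the corresponding ratios of chi."""
    d = h - np.mean(h)
    c2 = np.mean(d**2)                 # second cumulant
    c3 = np.mean(d**3)                 # third cumulant
    c4 = np.mean(d**4) - 3.0 * c2**2   # fourth cumulant
    return sgn_lambda * c3 / c2**1.5, c4 / c2**2
```

For a Gaussian height profile both ratios vanish, so nonzero plateaus of $S$ and $K$ are direct signatures of non-Gaussian KPZ fluctuations.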
The analysis of these cumulant ratios is shown in Fig.~\ref{fig:RSK}. We can see that the cumulant ratios are either very close to, or approaching, the values obtained for the RSOS model in Ref.~\cite{alves14},\footnote{Differently from the cumulants of $\chi$ (see Sec.~\ref{model_methods}), the cumulant ratios obtained for the RSOS model in Ref.~\cite{alves14} are reliable references because the model has small finite-time corrections and their determination does not depend on $\alpha$.} corroborating these KPZ signatures for BD in higher dimensions. \begin{figure}[ht] \centering \includegraphics*[width=0.9\linewidth]{skew_kurt.pdf} \caption{Determination of dimensionless cumulant ratios for BD in (a) $d=3+1$ and (b) $4+1$. Dashed lines represent the estimates of the cumulant ratios for the RSOS model taken from Ref.~\cite{alves14}. The BD results were obtained using a binning parameter $\varepsilon=4$.} \label{fig:RSK} \end{figure} Numerically determining the probability distribution function $\rho(\chi)$ requires accurate estimates of the non-universal constants $v_\infty$ and $\Gamma$. The determination of the asymptotic velocities for $d=3+1$ and 4+1 is shown in the main panel of Fig.~\ref{fig:vinf}. As observed in $d=2+1$~\cite{bd_box2d}, the asymptotic growth velocity is independent of $\varepsilon$ and converges to the same value as for the original surface. Our estimated values of the velocity are $v_{\infty,3+1}=4.49820(2)$ and $v_{\infty,4+1}=5.60615(5)$; see Table~\ref{tab:KMBD}. Notice that, since the asymptotic velocity does not depend on $\varepsilon$, the KM analysis also does not. The determination of $\lambda$ using Eq.~\eqref{eq:lambda}, shown in the left inset of Fig.~\ref{fig:vinf}, provides $\lambda_{3+1}=2.81(1)$ and $\lambda_{4+1}=3.17(4)$.
The KM curves used to determine the values of $\lambda A$, Eq.~\eqref{eq:vl}, with the roughness exponents $\alpha_{3+1}^\mathrm{(Marinari)}=0.3135(15)$ and $\alpha_{4+1}^\mathrm{(Pagnani)}=0.2537(8)$, are shown in the right inset of Fig.~\ref{fig:vinf}. The values of $\Gamma=\vert \lambda \vert A^{1/\alpha}$ found are $\Gamma_{3+1}^\mathrm{(Marinari)}=205(20)$ and $\Gamma_{4+1}^\mathrm{(Pagnani)}=730(30)$. Using our exponents, $\alpha_{3+1}=0.312(2)$ and $\alpha_{4+1}=0.251(5)$, we found $\Gamma_{3+1}=215(15)$ and $\Gamma_{4+1}=700(200)$. Using the exponents of Ref.~\cite{odor2010}, $\alpha_{3+1}=0.29(1)$ and $\alpha_{4+1}=0.245(5)$, we found $\Gamma_{3+1}^\mathrm{(Odor)}=500(200)$ and $\Gamma_{4+1}^\mathrm{(Odor)}=1200(250)$, both presenting large uncertainties and at odds with the previous estimates. In the remainder of the analysis we use $\Gamma_{3+1}=205(20)$ and $\Gamma_{4+1}=730(30)$, remarking that the estimates obtained with the exponents of Ref.~\cite{odor2010} lead to values consistent with our previous analysis of Ref.~\cite{alves14}. The KM parameters are summarized in Table~\ref{tab:KMBD}. \begin{table}[] \centering \caption{Non-universal KM parameters and cumulants of $\chi$ for the BD model.} \label{tab:KMBD} \begin{tabular}{cccccc} \hline\hline $d$& $v_\infty$ & $\lambda$ & $\Gamma$ & $\lrangle{\chi}$ & $\lranglec{\chi^2}$ \\\hline 3+1& 4.49820(2) & 2.81(1) & $205(20)$ & -1.15(3) & 0.197(7) \\ 4+1& 5.60615(5) & 3.17(4) & $730(30)$ & -1.04(1) & 0.115(3) \\ \hline\hline \end{tabular} \end{table} \begin{figure}[ht] \centering \includegraphics*[width=0.75\linewidth]{vinf_3e4d.pdf} \caption{Parameter determination using the KM method~\cite{krug90}. Main panel: interface growth velocity for BD in $d=3+1$ (bottom curves) and 4+1 (top curves), represented by open and filled symbols, respectively. We show the results for the original surface (squares) and for binning parameter $\varepsilon=4$ (circles).
Left inset: growth velocity against substrate slope for $d=3+1$ (open symbols) and 4+1 (filled symbols). The velocity in $d=4+1$ is shifted by $-1$ to improve visualization. Right inset: linear dependence of the velocity difference $\Delta v=v_L-v_\infty$ on the system size according to Eq.~\eqref{eq:vl}.} \label{fig:vinf} \end{figure} With the KM parameters at hand, the first and second cumulants of $\chi$ can be obtained directly from \begin{equation} \lrangle{\chi}=\frac{\lrangle{g}}{\beta \Gamma^\beta} \end{equation} and \begin{equation} \lranglec{\chi^2}=\frac{\lranglec{g^2}}{\Gamma^{2\beta}}, \end{equation} where $\lrangle{g}$ and $\lranglec{g^2}$ are defined in Sec.~\ref{model_methods} and shown in Table~\ref{tab:parBD}. The results are $\lrangle{\chi}_{3+1}=-1.15(3)$, $\lrangle{\chi}_{4+1}=-1.04(1)$, $\lrangle{\chi^2}_{c,3+1}=0.197(7)$, and $\lrangle{\chi^2}_{c,4+1}=0.115(3)$, which are in very good agreement with the corresponding cumulants for the RSOS model shown in Table~\ref{tab:RSOS}. These cumulants are summarized in Table~\ref{tab:KMBD}. Let us define the random variable \begin{equation} q'=\frac{h-v_\infty t-\lrangle{\eta}}{(\Gamma t)^\beta}, \label{eq:qprime2} \end{equation} whose probability distribution function converges to $\rho(\chi)$ as $t\rightarrow\infty$~\cite{Alves13,Oliveira13R}. To determine the parameter $\lrangle{\eta}$ we use \begin{equation} \frac{\lrangle{h}-v_\infty t}{t^\beta}=\Gamma^\beta \lrangle{\chi}+\lrangle{\eta}t^{-\beta}+\cdots, \end{equation} such that plotting the left-hand side against $t^{-\beta}$ extrapolates linearly to $\Gamma^\beta \lrangle{\chi}$, with the slope giving $\lrangle{\eta}$. Figure~\ref{fig:eta3e4} confirms the expected behavior and the existence of the correction. Since $\eta$ is a short-wavelength correction, the value of $\lrangle{\eta}$ depends on the binning parameter $\varepsilon$~\cite{bd_box2d}, as shown in Table~\ref{tab:eta}.
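The linear extrapolation for $\lrangle{\eta}$ can be checked on synthetic data generated from the ansatz itself (a hypothetical sketch; the fit form is the only ingredient taken from the text, and the parameter values below are arbitrary):

```python
import numpy as np

def eta_shift_fit(t, h_mean, v_inf, beta):
    """Fit (<h> - v_inf t)/t^beta = Gamma^beta <chi> + <eta> t^(-beta):
    the intercept estimates Gamma^beta <chi> and the slope estimates <eta>."""
    y = (h_mean - v_inf * t) / t**beta
    eta_mean, asymptote = np.polyfit(t ** (-beta), y, 1)
    return asymptote, eta_mean
```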
\begin{table}[] \centering \caption{Average value of the correction $\eta$ in the KPZ ansatz, Eq.~\eqref{eq:htcorr}; see Fig.~\ref{fig:eta3e4}.} \label{tab:eta} \begin{tabular}{ccccc} \hline\hline $d$ & $\varepsilon=1$ ~ & $\varepsilon=2$ ~ & $\varepsilon=4$ ~ & $\varepsilon=8$ \\\hline 3+1 & -2.3 & 1.9 & 3.9 & 5.7 \\ 4+1 & -3.9 & 1.9 & 4.1 & 6.1 \\\hline\hline \end{tabular} \end{table} \begin{figure}[th] \centering \includegraphics[width=0.9\linewidth]{eta3e4} \caption{Determination of the average shift $\lrangle{\eta}$ in (a) $d=3+1$ and (b) 4+1.} \label{fig:eta3e4} \end{figure} In Fig.~\ref{fig:pchi_ells}, the probability distribution functions for the binned surfaces in $d=3+1$ and 4+1 are compared with those of the original interface as well as with those of the RSOS model, the latter built using the estimates of $\Gamma$ of Table~\ref{tab:RSOS} while the other parameters are those reported in Ref.~\cite{alves14}. While the distributions for the original surfaces are not close to the RSOS ones, the binned surfaces show satisfactory agreement, with small deviations in the left and right tails for $d=3+1$ and 4+1, respectively. These deviations should shrink if much longer growth times are considered. \begin{figure}[ht] \centering \includegraphics*[width=7cm]{pchi_ells_silvio2.pdf} \caption{Comparison of the probability distribution function, Eq.~\eqref{eq:qprime2}, of the original and binned surfaces of the BD model in $d=3+1$ and 4+1 dimensions with that of the RSOS model. The growth times in the BD model are $t=190$ and 145 for $d=3+1$ and 4+1, respectively.} \label{fig:pchi_ells} \end{figure} \section{Conclusions} \label{conclusions} Ballistic deposition growth models are characterized by a prominent lateral growth and are therefore considered standards of KPZ growth~\cite{barabasi}.
However, strong finite-time and finite-size corrections make a direct realization of the KPZ exponents in higher dimensions extremely hard and, in practice, inaccessible with our current computer resources. Nevertheless, by eliciting the origin of the leading contributions to the corrections as being due to the fluctuations of the height increments along the deposition of particles, it was possible to establish a connection between ballistic deposition and the KPZ universality class in $d=2+1$ dimensions~\cite{bd_box2d}. Moreover, using a coarse-grained surface in which only the highest point inside each small bin of size $\epsilon\ll \xi$ is retained, where $\xi$ is the surface correlation length, it was possible to obtain the KPZ exponents as well as the universal underlying stochastic fluctuations of the KPZ class in $d=2+1$~\cite{bd_box2d}. In the present work, we show that the methodology of Ref.~\cite{bd_box2d} remains valid for ballistic deposition in $d=3+1$ and $4+1$ dimensions. We observe that the squared intrinsic width is given by $w_i^2\approx \lranglec{(\delta h)^2}$, where $\delta h$ is the height increment in a deposition step, and becomes more relevant in higher dimensions. Growth and roughness exponents in very good agreement with those reported for KPZ models with small corrections to scaling~\cite{odor2010,Kim13,Pagnani13,Marinari2002,alves14} were obtained when the intrinsic width was explicitly taken into account in the scaling analysis. Using the binned surface analysis, we also provide evidence that the underlying fluctuation $\chi$ of the height profiles belongs to the KPZ class, using both the dimensionless cumulant ratios and the probability distribution function itself. We also revisit the data for the RSOS deposition model reported in Ref.~\cite{alves14}, considering more accurate estimates of the roughness exponents.
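The coarse-graining used in this methodology keeps only the highest point inside each bin of lateral size $\epsilon$. A minimal sketch for a two-dimensional substrate (the toy surface and sizes are illustrative assumptions; the analysis in the text uses three- and four-dimensional substrates):

```python
import numpy as np

def binned_surface(h, eps):
    """Coarse-grain a surface by keeping the maximal height
    inside each eps x eps bin (the 'binned surface')."""
    L = h.shape[0]
    assert L % eps == 0, "bin size must divide the substrate size"
    # reshape into eps x eps blocks and take the maximum of each block
    blocks = h.reshape(L // eps, eps, L // eps, eps)
    return blocks.max(axis=(1, 3))

rng = np.random.default_rng(0)
h = rng.integers(0, 10, size=(8, 8))   # toy height profile (assumption)
hb = binned_surface(h, 2)              # 4 x 4 coarse-grained surface
print(hb.shape)
```

The same reshape-and-reduce idea extends to higher-dimensional substrates by adding one block axis per substrate dimension.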
We have found that the non-universal parameter $\Gamma$, representing the amplitude of the interface fluctuations, changes significantly, implying changes in the estimates of the cumulants of $\chi$. Finally, it is worth noticing that our results provide new numerical evidence that the upper critical dimension, if it exists, is larger than $d=4+1$, corroborating former~\cite{Tu1994,Ala-Nissila1993,Ala-Nissila1998,Marinari2002} and recent~\cite{Pagnani13,Kim2014,alves14,Rodrigues15} findings. \begin{acknowledgments} The authors acknowledge the support from CNPq and FAPEMIG (Brazilian agencies). \end{acknowledgments}
\section{Introduction} Let us start from two equations: \begin{equation} y=ax+b\label{eq:1} \end{equation} and \begin{equation} y^{2}+x^{2}=r^{2}\label{eq:2} \end{equation} To describe each of them, six different symbols were used. In the first case: $y,=,a,x,+,b$ and in the second: $y,{}^{2},+,x,=,r$. In the first case we have a linear object, a straight line; in the second, a nonlinear object, a circle. On these grounds, it is difficult to decide which object is easier or more complicated to handle. In fact, we have used here a certain convention which allows us to simplify the equations, which should be written as follows: \begin{equation} y=a\cdot x+b\label{eq:3} \end{equation} \begin{equation} \left(y\cdot y\right)+\left(x\cdot x\right)=\left(r\cdot r\right)\label{eq:4} \end{equation} Written this way, the first equation needs 7 different symbols and the second needs 8! In the simplest case, \begin{equation} y=x\label{eq:5} \end{equation} and \begin{equation} \left(y\cdot y\right)+\left(x\cdot x\right)=1\label{eq:6} \end{equation} we need 3 and 8 different symbols to describe the particular straight line and circle, respectively. If we wanted to have the same generality for the circle as in the case of the straight line, we would have to introduce 2 additional parameters (a total of 10 symbols). These simple examples show that the description of nonlinearity involves additional complexity which is absent from linear models. Other well-known examples are the homogeneous and inhomogeneous linear differential equations encountered in physics and engineering, for which one can find relatively easily particular, and in many cases general, solutions, or use effective approximation methods. These are arguments for trying to look for linear models, even if their original versions are nonlinear. It turns out that this can be done at the expense of \textbf{introducing an additional, infinite number of variables} (e.g.
correlation functions), in the hope that in this way the properties of linear systems can be effectively exploited. Unfortunately, the infinite number of variables needed to linearize the original nonlinear problem leads to at least two issues: we have too many solutions, which cannot all be related to reasonable physical conditions, see \cite{Han }, and we have difficulties in precisely defining a number of terms appearing in the equations for the \textbf{n-point information (n-pi)}. These two issues were addressed to a greater or lesser degree in the author's previous papers. In the present work we focus on the definition of certain operator-valued functions, and we introduce for the n-pi, instead of generating functionals, new generating structures leading to an algebraization of physics. In this paper we address these two problems taking into account the linear properties of the appropriate entities. At this point we would like to note in passing that, when talking about the linearity of the considered formulas or equations, we usually mean that they depend on the first power of a certain \textbf{set of dependent variables}. In this way the nonlinearity of the original theory enters the linearized theory in different ways, and in fact we are speaking about \textbf{\textit{relative linearity}}. In the case of formula (\ref{eq:3}), \textit{absolute linearity} would mean that \begin{equation} y=a+x+b\label{eq:7} \end{equation} which still describes a straight line, but one inclined at an angle of $45^{\circ}$ and translated differently with respect to the coordinate system. This example shows that, when building models of various phenomena, the requirement of linearity should be applied with care, and what we actually use is relative linearity, which de facto means the coexistence of nonlinearity with linearity. \section{Vector-valued functions (v-vf)} In mathematics one talks about linear spaces and linear mappings, to which the vector-valued functions (v-vf) considered below are closely related.
Given a vector space, its elements are represented as linear combinations of basis vectors $B$. The numbers used in these combinations are the components of the vectors. We can view this by means of the (relatively) linear mapping $f$: \begin{equation} \left\{ f(\varrho;B)\right\} _{\varrho\in R^{n}}\Longleftrightarrow V^{n}\label{eq:8} \end{equation} where $V^{n}$ is an n-dimensional linear vector space. In fact, the linear mapping (\ref{eq:8}) \begin{equation} f(\alpha'\varrho'+\alpha''\varrho'';B)=\alpha'f(\varrho';B)+\alpha''f(\varrho'';B)\label{eq:9} \end{equation} represents isomorphically the n-dimensional linear space formed by the tuples $\left(\varrho_{1},...,\varrho_{n}\right)$: \begin{equation} f(\varrho;B)\equiv\sum_{i=1}^{n}\varrho_{i}\bar{e}^{i}\Longleftrightarrow\left(\varrho_{1},...,\varrho_{n}\right)\label{eq:10} \end{equation} If a new base, $B'$, is chosen, then we should have: \begin{equation} f(\varrho;B)=f(\varrho';B')\label{eq:11} \end{equation} From the above it follows that for the description of linear vector spaces one can use v-vf depending on the variables $\varrho$ and the 'parameters' $B$. In the case of \begin{equation} f=f(\varrho;B(P))\label{eq:12-1} \end{equation} the base $B$ depends on the point $P$ and relation (\ref{eq:10}) has a local character: to every point $P$ a linear space of vectors is attached. In fact, we are dealing here with hidden nonlinearity and, as in the linearized theories, the nonlinearity does not let itself be forgotten. A similar intertwining of linearity and nonlinearity exists in the case of nonlinear manifolds, to which, at each point, a tangent space is attached, see also \cite{Han 2012'}.
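Relation (\ref{eq:11}) can be checked numerically: if the base vectors are mixed by an invertible matrix $T$, the components must transform with $T^{-1}$, leaving the vector $f(\varrho;B)$ itself unchanged. A small sketch (the matrices below are illustrative assumptions):

```python
import numpy as np

B = np.array([[1., 1., 0.],          # old base: columns are base vectors
              [0., 1., 0.],
              [0., 0., 2.]])
rho = np.array([2.0, -1.0, 0.5])     # components of f(rho; B)
v = B @ rho                          # the vector itself, base-independent

T = np.array([[2., 1., 0.],          # an invertible change-of-base matrix
              [0., 1., 0.],
              [1., 0., 1.]])
B_new = B @ T                        # new base B' (columns mixed by T)
rho_new = np.linalg.solve(T, rho)    # components transform with T^{-1}

# Eq. (11): f(rho; B) = f(rho'; B')
print(np.allclose(v, B_new @ rho_new))   # True
```

The nonlinearity hidden in a point-dependent base $B(P)$ would enter here through $B$ varying with $P$, while the relation above still holds point by point.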
\section*{Symmetry} If a certain symmetry takes place, like the permutation symmetry: \begin{equation} \varrho=S\varrho;\; S=S^{\star}\label{eq:12} \end{equation} then we have: \begin{equation} f(\varrho;B)=f(S\varrho;B)=f(\varrho;SB)=f(S\varrho;SB)\label{eq:13} \end{equation} This equality indicates that one can use a richer base than in the absence of symmetry, which may lead to left or right invertibility of a useful set of operators and makes it possible to introduce appropriate projectors. For a linear transformation $A$, \begin{equation} f(\varrho';B)=f(A\varrho;B)=f(\varrho;B')\Longleftrightarrow f'=\mathcal{A}f\label{eq:14} \end{equation} where finding $B'$ is left as an exercise. In all these formulas the base $B$ can be finite- or even uncountably infinite-dimensional, as in the case of the Fock space used for the description of linearized equations, see \cite{Han 2012}. In the latter case, a linear function $f$ depends on an uncountable number of parameters $B$. \section{Operator-valued functions} In many areas of science, however, nonlinear functions are used which, although they depend on only one or a few parameters, are really difficult to define meaningfully, because their arguments (variables) are operators. The \textit{functional calculus} is sometimes defined as the branch of mathematics concerned with inserting operators into functions so as to obtain meaningful, or at least formally correct, new operators, see, e.g., \cite{wiki 2012}, \cite{Hass 2007,Wiki 2010}, \cite{Han 2010,Han 2011}. In this section we try to identify the operator associated with the function \begin{equation} f(x)=a\frac{x}{1-x}\label{eq:15} \end{equation} using a slightly generalized functional calculus. First, we will try to determine the operator \begin{equation} f(\hat{M})=\frac{\hat{M}}{\hat{I}-\lambda_{2}\hat{M}}=?\label{eq:16} \end{equation} where $\hat{M}$ is a right invertible operator.
In other words, there is an operator $\hat{M}_{R}^{-1}$ such that \begin{equation} \hat{M}\hat{M}_{R}^{-1}=\hat{I}\label{eq:17} \end{equation} where $\hat{I}$ is the unit operator in the considered linear space $F$, see \cite{Przew 1988} and App.3. Then in $F$ there is a projector \begin{equation} \hat{P}=\hat{I}-\hat{M}_{R}^{-1}\hat{M}\equiv\hat{I}-\hat{Q}\label{eq:18} \end{equation} projecting onto the null space of the operator $\hat{M}$: \begin{equation} \hat{M}\hat{P}=0,\quad\hat{M}\hat{Q}=\hat{M}\label{eq:19} \end{equation} We would like to specify the operator (\ref{eq:16}) in such a way that the following equality takes place: \begin{equation} \lambda_{1}'\hat{M}\frac{1}{\hat{I}-\lambda_{2}\hat{M}}+\lambda_{1}''\frac{1}{\hat{I}-\lambda_{2}\hat{M}}\hat{M}=(\lambda_{1}'+\lambda_{1}'')\hat{B}\label{eq:20} \end{equation} where $\hat{B}$ is an operator to be given in a moment. The property (\ref{eq:20}) is weaker than the assumption that the two operators standing on the l.h.s. of Eq.\ref{eq:20} are identical. First, we have to specify the formal operator \begin{equation} \frac{1}{\hat{I}-\lambda_{2}\hat{M}}\equiv\hat{Y}=\left(\hat{I}-\lambda_{2}\hat{M}\right)_{R}^{-1} \end{equation} which we will treat as a right inverse of the operator $\hat{I}-\lambda_{2}\hat{M}$: \begin{equation} \left(\hat{I}-\lambda_{2}\hat{M}\right)\hat{Y}=\hat{I}\label{eq:22} \end{equation} Multiplying this equation from the left by $\lambda_{2}^{-1}\hat{M}_{R}^{-1}$ we get the equivalent equation \[ \left(\lambda_{2}^{-1}\hat{M}_{R}^{-1}-\hat{Q}\right)\hat{Y}=\lambda_{2}^{-1}\hat{M}_{R}^{-1} \] hence we get the following equation for $\hat{Y}$: \begin{equation} \left(\hat{I}-\lambda_{2}^{-1}\hat{M}_{R}^{-1}\right)\hat{Y}=\hat{P}\hat{Y}-\lambda_{2}^{-1}\hat{M}_{R}^{-1}\label{eq:23} \end{equation} in which the projection $\hat{P}\hat{Y}$ of the right inverse operator $\hat{Y}$ is an arbitrary element.
Assuming, for the sake of simplicity, that $\hat{I}-\lambda_{2}^{-1}\hat{M}_{R}^{-1}$ is a two-sided invertible operator, we get: \begin{equation} \hat{Y}=\left(\hat{I}-\lambda_{2}^{-1}\hat{M}_{R}^{-1}\right)^{-1}\left(\hat{P}\hat{Y}-\lambda_{2}^{-1}\hat{M}_{R}^{-1}\right)\equiv\frac{1}{\hat{I}-\lambda_{2}\hat{M}}\label{eq:24} \end{equation} This formula shows all the uncertainty of the expression $\frac{1}{\hat{I}-\lambda_{2}\hat{M}}$. Now it is easy to show that if \begin{equation} \hat{B}=\hat{B}\hat{Q}\label{eq:25} \end{equation} and if the arbitrary term \begin{equation} \hat{P}\hat{Y}=0\label{eq:26} \end{equation} then Eq.\ref{eq:20} is satisfied. In this case the operator (\ref{eq:16}) \begin{eqnarray} & f(\hat{M})=\frac{\hat{M}}{\hat{I}-\lambda_{2}\hat{M}}\equiv\hat{B}=\hat{B}\hat{Q}\nonumber \\ & =\left\{ \hat{M}\left(\hat{I}-\lambda_{2}^{-1}\hat{M}_{R}^{-1}\right)^{-1}\left(-\lambda_{2}^{-1}\hat{M}_{R}^{-1}\right)+\left(\hat{I}-\lambda_{2}^{-1}\hat{M}_{R}^{-1}\right)^{-1}\left(-\lambda_{2}^{-1}\hat{M}_{R}^{-1}\right)\hat{M}\right\} \hat{Q}\nonumber \\ & =\lambda_{2}^{-1}\left\{ \left(\lambda_{2}^{-1}\hat{M}_{R}^{-1}-\hat{I}\right)^{-1}+\left(\lambda_{2}^{-1}\hat{M}_{R}^{-1}-\hat{I}\right)^{-1}\hat{Q}\right\} \hat{Q}\nonumber \\ & =2\lambda_{2}^{-1}\left(\lambda_{2}^{-1}\hat{M}_{R}^{-1}-\hat{I}\right)^{-1}\hat{Q}\label{eq:27} \end{eqnarray} where the projector $\hat{Q}=\hat{M}_{R}^{-1}\hat{M}$. It is worth noting here that the property $\hat{B}=\hat{B}\hat{Q}$, which underlies formula (\ref{eq:27}), is consistent with the formal expression for the function $f$, which behaves as $f\simeq\hat{M}$ for $\lambda_{2}\simeq0$. We would not have obtained it had we not enforced the linear relationship (\ref{eq:20}) with the parameters $\lambda_{1}'$ and $\lambda_{1}''$. It is interesting, however, that $\hat{B}$ has one additional symmetry property, which has no formal prototype, namely that $\hat{B}=\hat{Q}\hat{B}$. Could this be a clue in defining operator-valued functions?
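The right-inverse machinery of Eqs.(\ref{eq:17})-(\ref{eq:19}), and the ambiguity in the choice of $\hat{M}_{R}^{-1}$, can be illustrated by a finite-dimensional caricature in which $\hat{M}$ is a rectangular matrix (the operators of the text act in a single space, where proper right invertibility requires infinite dimension; the matrices below are illustrative assumptions):

```python
import numpy as np

# A right invertible "operator": a 2x3 matrix M with independent rows
M = np.array([[1., 0., 0.],
              [0., 1., 0.]])

# Two DIFFERENT right inverses, M @ Mr = I (Eq. 17); the ambiguity
# discussed in the text is exactly this freedom in the last row
Mr1 = np.array([[1., 0.],
                [0., 1.],
                [0., 0.]])
Mr2 = np.array([[1., 0.],
                [0., 1.],
                [3., -2.]])

for Mr in (Mr1, Mr2):
    assert np.allclose(M @ Mr, np.eye(2))   # right inverse, Eq. (17)
    Q = Mr @ M                              # Q = Mr M
    P = np.eye(3) - Q                       # P projects on null(M), Eq. (18)
    assert np.allclose(M @ P, 0)            # Eq. (19): M P = 0
    assert np.allclose(P @ P, P)            # P is indeed a projector
print("projector identities verified for both right inverses")
```

Both choices satisfy the same identities, yet produce different projectors $\hat{P}$; additional conditions, as discussed next, are needed to reduce this ambiguity.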
The derived formula depends, however, on the choice of the operator $\hat{M}_{R}^{-1}$. Thus, additional conditions are required in order to reduce its ambiguity, see \cite{wiki 2012}; for example, we can demand that $\hat{M}_{R}^{-1}$ be of the same type as $\hat{M}$ (e.g. local), see \cite{Han 2012}, Sec.4. In the derivation of the above formula we were influenced mainly by the features (\ref{eq:20}) and (\ref{eq:25}), which can only be formally justified. What does this really mean? It means that, if the formula (\ref{eq:16}) made sense, it is those properties that we want the already correctly defined formula (\ref{eq:27}) to inherit. There is another aspect here, which is not insignificant when considering the equations for n-point correlation functions or, more generally, for the n-point information (\textbf{n-pi}): the correctly specified formula is in many cases a sum of diagonal and lower triangular operators, see the author's previous papers. This means that it does not lead to additional links (branches) with higher n-pi. Moreover, since for small $\lambda_{2}$ the formal (ill-defined) formula (\ref{eq:16}) in many cases formally describes a polynomial interaction, we can consider the correctly defined non-polynomial formula (\ref{eq:27}) as a candidate for the description of such a polynomial interaction. Taking into account that closed equations for the n-pi can be obtained by means of highly complicated nonlinear interactions which approximate much simpler, polynomial interactions, see \cite{Han' 2010 }, we are inclined to say that nonlinearity and linearity are more friends than enemies. But this last sentence would suggest that \textbf{perhaps searching for simple equations is more effective than seeking simple interactions!} \section{Linearity fetish} The popular belief is that linearity means more simplicity and effectiveness in the description of systems and phenomena.
The basic concepts of mechanics, such as the radius vector, force, momentum and angular momentum, depend linearly on the variables defining them. Cartesian reference systems embody the concept of linearity, both in themselves and in the relations binding them (the Galilean transformation). It is widely believed that it is easier to solve linear systems of equations than nonlinear systems. It is surprising that this is not always the case, even when a solution is presented in the form of formal functional integrals (see, e.g., quantum or stochastic field theories), because in that case the functional integral prevents one from obtaining an effective final result. Moreover, some people, including the author, believe that the functional integrals encountered in field theory are generally not computable; see the remarks about computability in \cite{Pen rose 2004}. It turns out that the linear equations satisfied by the n-pi generated by these functional integrals branch out to infinity. This means that it is not possible to write out a reasonable, closed set of equations for a given, finite set of n-pi. This is called the \textit{closure problem}, and it is usually related to the nonlinearity of the original (pre-linearization) equations. \section{A new paradigm?} These observations seem to indicate that in the existing approaches, which lead to the impasse named the closure problem, linearity and nonlinearity are closely related. We propose to move away from the primacy of the detailed description of the dynamic components of the system and to replace it with the primacy of less detailed descriptions. In this description, the first place will be given to the n-pi \begin{equation} <\varphi(\tilde{x}_{1})\cdots\varphi(\tilde{x}_{n})>\label{eq:28} \end{equation} with $n=1,2,...$, providing a less detailed description of the system than that supplied by the 'field' $\varphi$, see \cite{Han 2008}.
A detailed description will be secondary and should follow from the first one, in which rather global properties of the system are taken into account. Thinking of this kind is prompted by not-so-old discoveries in astronomy and by the still-threatening economic crises. Climate change also seems to suggest a different paradigm. Moving away from the detailed description is the basis of abstraction, and it allows one to cope with the extremely high complexity of the considered system. But the problem is to use it in a more fundamental way. In this approach, concepts such as local and global will play at least an equivalent role. But how can this be accomplished? \section{A 'new' approach. Free Fock space?} \begin{description} \item [{Motto:}]~ \end{description} \textit{'Our experience hitherto justifies us in trusting that nature is the realization of the simplest that is mathematically conceivable' }Albert Einstein. In the proposed new approach, we start with an equation for the \textit{generating vector} $|V>$ for the functions $V(\tilde{x}_{(n)});\; n=1,2,...,\infty$: \begin{equation} |V>=\sum_{n=1}\int d\tilde{x}_{(n)}V(\tilde{x}_{(n)})|\tilde{x}_{(n)}>+|0>_{info}\label{eq:29} \end{equation} The n-point functions $V(\tilde{x}_{(n)})$, which we call the \textit{n-point information} (n-pi) about the system, will have different interpretations in classical and quantum physics, see, e.g., \cite{Han' 2011}. The $|\tilde{x}_{(n)}>$, for $n=1,2,...$, are linearly independent orthonormal vectors, and the vector $|0>_{info}$ describes the so-called \textit{local information vacuum}, see \cite{Han 2012}. The generating vector $|V>$ satisfies the following \textbf{linear equation}: \begin{equation} \left(\hat{L}+\lambda\hat{N}+\hat{G}\right)|V>=|0>_{info}=\hat{P}_{0}|0>_{info}\label{eq:30} \end{equation} The operator $\hat{L}$ is a \textbf{right invertible operator}, which in the case of \textit{classical (e.g.
statistical) field theory} is a diagonal operator: \begin{equation} \hat{P}_{n}\hat{L}=\hat{L}\hat{P}_{n}\label{eq:31} \end{equation} with respect to the projectors $\hat{P}_{n};\, n=1,2,...,\infty$, where the projector $\hat{P}_{n}$ projects onto the n-th term in the expansion (\ref{eq:29}), see App.1. The projector $\hat{P}_{0}$ projects onto the subspace of the linear space $F$ constituted by the vectors (\ref{eq:29}). The subspace $\hat{P}_{0}F$ does not contain any local information about the system. In the papers \cite{Han 2012} and \cite{Han' 2011} we call a vector belonging to the subspace $\hat{P}_{0}F$ the local information vacuum. It is surprising that, both in the classical and in the quantum description of systems, such vectors lead to additional nonperturbative corrections. In the case of \textit{quantum field theory} the operator $\hat{L}$ is an invertible or right invertible diagonal operator, plus a lower triangular operator related to the commutation relations of the canonically conjugate operator variables, with respect to the same set of projectors $\hat{P}_{n}$. In the case of polynomial nonlinearity, the operator $\hat{N}$ is an upper triangular operator in classical as well as in quantum field theory: \begin{equation} \hat{P}_{n}\hat{N}=\sum_{n<m}\hat{P}_{n}\hat{N}\hat{P}_{m}\label{eq:32} \end{equation} see \cite{Han 2012}. The operator $\hat{G}$, in both cases, is a left invertible, lower triangular operator: \begin{equation} \hat{P}_{n}\hat{G}=\sum_{m<n}\hat{P}_{n}\hat{G}\hat{P}_{m}\label{eq:33} \end{equation} All these operators are linear operators acting in the \textbf{\textit{linear space}} $F$ constituted by the vectors (\ref{eq:29}). If the linearly independent orthonormal vectors are \begin{equation} |\tilde{x}_{(n)}>=\hat{\eta}^{\star}(\tilde{x}_{1})\cdots\hat{\eta}^{\star}(\tilde{x}_{n})|0>\label{eq:33-1} \end{equation} see Sec.7, then we call the space $F$ the \textit{free Fock space} (FFS). \textbf{Why the free Fock space} (FFS) $F$?
Because our experience hitherto justifies us in trusting that in such a space, constructed by means of operators satisfying the Cuntz relations, see, e.g., \cite{Han 2012}, it is easier to find the inverse operations to the various operators which occur in equations for the generating vectors such as Eq.\ref{eq:30}, see the author's previous papers. In some sense, this is similar to the difference between the construction of inverse matrices by Euler's and by Gauss's methods: the effectiveness of Gauss-type methods, in our opinion, is due to the fact that they effectively use the linearity of the matrices themselves. In fact, this is an FFS-type task. But that is not all. It turns out that in this space there are operators which lead to closed equations for the n-pi, see \cite{Han 2012,Han' 2010 ,Han 2010}, and which, for small values of the so-called minor coupling constant, at least formally approximate operators used in the \textbf{usual (not free) Fock space}. It is not excluded that in this way the entangled problems of nonlinearity and closure are significantly overcome. In the FFS it is also possible to introduce vectors describing local and global information, which allows the use of a single language to describe phenomena belonging to different fields of human activity such as physics, economics, complex systems, etc. \section{New generating structures describing classical and quantum physics; noncommutative rings and algebraization of physics} In fact, we can avoid the introduction of the vector space of generating vectors (\ref{eq:29}) by considering only (generating) operators acting in the FFS.
For this purpose, instead of the vectors (\ref{eq:29}) with the base vectors $|\tilde{x}_{(n)}>=\hat{\eta}^{\star}(\tilde{x}_{1})\cdots\hat{\eta}^{\star}(\tilde{x}_{n})|0>$, we introduce the lower triangular operator \begin{equation} \hat{V}_{0}\equiv|V><0|=\sum_{n=1}\int d\tilde{x}_{(n)}V(\tilde{x}_{(n)})\hat{\eta}^{\star}(\tilde{x}_{1})\cdots\hat{\eta}^{\star}(\tilde{x}_{n})\hat{P}_{0}+\hat{P}_{0}\label{eq:34} \end{equation} with $\tilde{x}_{(n)}\equiv\tilde{x}_{1},...,\tilde{x}_{n}$ and $\hat{P}_{0}=|0><0|$, where the operators $\hat{\eta}$ satisfy the Cuntz relations \begin{equation} \hat{\eta}(\tilde{y})\hat{\eta}^{\star}(\tilde{x})=\delta(\tilde{y}-\tilde{x})\cdot\hat{I}\label{eq:35} \end{equation} and the vectors $|0>,<0|$ describe the local information vacuum, see \cite{Han 2012}. The star denotes the involution of the operator $\hat{\eta}$, $\left(\hat{\eta}^{\star}\right)^{\star}=\hat{\eta}$, and the projector \begin{equation} \hat{P}_{0}\sim\hat{P}_{info}\label{eq:36} \end{equation} We also assume that, for arbitrary 'vectors' $\tilde{x}$, the projector $\hat{P}_{0}=\hat{P}_{0}^{\star}$ and the operators $\hat{\eta}$ have the following properties: \begin{equation} \hat{\eta}(\tilde{x})\hat{P}_{0}=\hat{P}_{0}\hat{\eta}^{\star}(\tilde{x})=0\label{eq:37} \end{equation} From the above, we have \begin{equation} \hat{P}_{0}\hat{\eta}(\tilde{y}_{1})\cdots\hat{\eta}(\tilde{y}_{n})\hat{V}_{0}=V(\tilde{y}_{(n)})\hat{P}_{0}\label{eq:37-1} \end{equation} We also have \begin{equation} \hat{V}_{0}=\hat{V}_{0}\hat{P}_{0},\;\hat{P}_{0}\hat{V}_{0}=\hat{P}_{0}\label{eq:37-2} \end{equation} The operators $\hat{V}_{0}$ satisfy an equation very similar to Eq.\ref{eq:30}: \begin{equation} \left(\hat{L}+\lambda\hat{N}+\hat{G}\right)\hat{V}_{0}=\hat{P}_{info}\sim\hat{P}_{0}\label{eq:38} \end{equation} One can introduce more general generating operators than the operators (\ref{eq:34}), with diagonal, lower and upper triangular elements: \begin{equation} \hat{V}=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\int
d\tilde{x}_{(m)}d\tilde{y}_{(n)}V_{m,n}(\tilde{x}_{(m)},\tilde{y}_{(n)})\hat{\eta}^{\star}(\tilde{x}_{1})\cdots\hat{\eta}^{\star}(\tilde{x}_{m})\hat{P}_{0}\hat{\eta}(\tilde{y}_{1})\cdots\hat{\eta}(\tilde{y}_{n})+\hat{P}_{info}\label{eq:39} \end{equation} where we agree that the subscript zero means that the corresponding variables are absent from the given expression. We have: \begin{equation} \hat{P}_{0}\hat{\eta}(\tilde{x}_{1})\cdots\hat{\eta}(\tilde{x}_{m})\hat{V}\hat{\eta}^{\star}(\tilde{y}_{1})\cdots\hat{\eta}^{\star}(\tilde{y}_{n})\hat{P}_{0}=V_{m,n}(\tilde{x}_{(m)},\tilde{y}_{(n)})\hat{P}_{0}\label{eq:40} \end{equation} for $m,n=0,1,...,\infty$. We postulate, for the operators $\hat{V}$, the following equation: \begin{equation} \left(\hat{L}+\lambda\hat{N}+\hat{G}\right)\hat{V}=\hat{\Phi}\label{eq:41} \end{equation} with a 'source' operator $\hat{\Phi}$. Imposing on $\hat{V}$ the condition \begin{equation} \hat{P}_{0}\hat{V}=\hat{P}_{0}\label{eq:42} \end{equation} and on the source term $\hat{\Phi}$ the condition \begin{equation} \hat{\Phi}\hat{P}_{0}=\hat{P}_{info}\label{eq:43} \end{equation} we see that the component $\hat{V}\hat{P}_{0}\equiv\hat{V}_{0}$ of the generating operator $\hat{V}$ satisfies exactly the same Eq.\ref{eq:38} as the generating operator $\hat{V}_{0}$. The algebraization of equations introduced here leads to a description in which the sought entities and the entities used to describe the equations belong to the same category of notions; for example, they are all operators. This allows new questions to be raised. To see this, let us assume that the generating operator $\hat{V}$ satisfies the more general equation \begin{equation} \hat{A}\hat{V}=\hat{\Phi}\label{eq:44} \end{equation} with given operators $\hat{A},\hat{\Phi}$. Now, in addition to Eq.\ref{eq:42}, we postulate that \begin{equation} \hat{\Phi}\hat{P}_{0}=\hat{\Phi}_{0}\label{eq:44-1} \end{equation} which may be different from Eq.\ref{eq:43}.
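The Cuntz relations (\ref{eq:35}) and the vacuum properties (\ref{eq:37}) can be mimicked in a finite truncation of the FFS, here over a two-letter alphabet with words of length at most two; the relations hold exactly only below the cutoff, and the whole construction is an illustrative toy model, not the infinite-dimensional space of the text:

```python
import numpy as np

# Truncated free Fock space: words over {0,1} of length <= 2
words = ["", "0", "1", "00", "01", "10", "11"]
idx = {w: k for k, w in enumerate(words)}
dim = len(words)

def creation(letter):
    """Matrix of eta*(letter): |w> -> |letter w> (lost at the cutoff)."""
    C = np.zeros((dim, dim))
    for w in words:
        if len(w) < 2:                  # cutoff: longest words are dropped
            C[idx[letter + w], idx[w]] = 1.0
    return C

eta_star = {l: creation(l) for l in "01"}
eta = {l: eta_star[l].T for l in "01"}  # involution realized as transpose

vac = np.zeros(dim); vac[idx[""]] = 1.0  # local information vacuum |0>

# Cuntz relations eta(y) eta*(x) = delta(y-x) I, exact below the cutoff
short = [idx[w] for w in words if len(w) < 2]
for a in "01":
    for b in "01":
        prod = eta[a] @ eta_star[b]
        expect = np.eye(dim) if a == b else np.zeros((dim, dim))
        assert np.allclose(prod[np.ix_(short, short)],
                           expect[np.ix_(short, short)])

# Eq. (37): eta(x)|0> = 0  (and <0|eta*(x) = 0 by transposition)
assert np.allclose(eta["0"] @ vac, 0) and np.allclose(eta["1"] @ vac, 0)
print("truncated Cuntz relations verified")
```

Note that, unlike bosonic creation/annihilation operators, the Cuntz isometries carry no symmetrization, which is precisely what distinguishes the free Fock space from the usual one.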
As in the case of Eq.\ref{eq:30}, let us assume that the operator $\hat{A}$ is right invertible. This means that a solution can be expressed as \begin{equation} \hat{V}=\hat{A}_{R}^{-1}\hat{\Phi}+\hat{P}_{A}\hat{V}\label{eq:45} \end{equation} with an arbitrary projection $\hat{P}_{A}\hat{V}$, where $\hat{P}_{A}=\hat{I}-\hat{A}_{R}^{-1}\hat{A}$ is a projector onto the null space of the operator $\hat{A}$ and $\hat{A}_{R}^{-1}$ is one of its right inverses. Now let us see what we would get if the generating operator $\hat{V}$ were also right invertible, with $\hat{V}_{R}^{-1}$ as its right inverse: $\hat{V}\hat{V}_{R}^{-1}=\hat{I}$. From Eq.\ref{eq:45} we get \begin{equation} \hat{I}=\hat{A}_{R}^{-1}\hat{\Phi}\hat{V}_{R}^{-1}+\hat{P}_{A}\label{eq:46} \end{equation} and this would mean that the product of operators \begin{equation} \hat{A}_{R}^{-1}\hat{\Phi}\hat{V}_{R}^{-1}=\hat{I}-\hat{P}_{A}\equiv\hat{Q}_{A}\label{eq:47} \end{equation} which is a projector, would not depend on the arbitrary source operator $\hat{\Phi}$. One can, however, look at the above limitation in a more positive way: in a more fundamental theory the 'sources' (including under this name the currents) and the interactions, described by the operators $\hat{\Phi}$ and $\hat{A}$, should somehow be related to each other. In fact, Eq.\ref{eq:47}, like the original Eq.\ref{eq:44}, relates three entities: $\hat{A}_{R}^{-1},\hat{\Phi}$ and $\hat{V}_{R}^{-1}$. But in the case of (\ref{eq:47}) this relation is more restrictive, indicating a certain 'entanglement' or unification of them. Multiplying Eq.\ref{eq:47} by $\hat{A}$ we get the equation \begin{equation} \hat{\Phi}\hat{V}_{R}^{-1}=\hat{A}\hat{Q}_{A}=\hat{A}\label{eq:48} \end{equation} which can be regarded as an equation for the right inverse $\hat{V}_{R}^{-1}$.
Having calculated $\hat{V}_{R}^{-1}$, we can calculate the operator $\hat{V}^{\star}$ by means of the equation \begin{equation} \left(\hat{V}_{R}^{-1}\right)^{\star}\hat{V}^{\star}=\hat{I}\label{eq:49} \end{equation} and then the generating operator $\hat{V}=\left(\hat{V}^{\star}\right)^{\star}$. Hence, finally, \begin{equation} |V>=\hat{V}|0>\label{eq:50} \end{equation} This shows how the algebraization of equations allows for a new approach to old problems. In the case of the transformed Eq.\ref{eq:30}, after its symmetrization, see, e.g., \cite{Han 2012}, \begin{eqnarray} & \left(\hat{I}+\lambda\left(\hat{I}+\hat{S}\hat{L}_{R}^{-1}\hat{G}\right)^{-1}\hat{S}\hat{L}_{R}^{-1}\hat{N}\right)|V>=\nonumber \\ & \left(\hat{I}+\hat{S}\hat{L}_{R}^{-1}\hat{G}\right)^{-1}\left(\hat{S}\hat{L}_{R}^{-1}|0>_{info}+\hat{S}\hat{P}_{L}|V>\right)\label{eq:51} \end{eqnarray} we can consider the operator equation \begin{equation} \left(\hat{I}+\lambda\left(\hat{I}+\hat{S}\hat{L}_{R}^{-1}\hat{G}\right)^{-1}\hat{S}\hat{L}_{R}^{-1}\hat{N}\right)\hat{V}=\left(\hat{I}+\hat{S}\hat{L}_{R}^{-1}\hat{G}\right)^{-1}\left(\hat{S}\hat{L}_{R}^{-1}\hat{P}_{info}+\hat{S}\hat{P}_{L}\hat{V}\right)\label{eq:52} \end{equation} This means that in Eq.\ref{eq:44} the operator \begin{equation} \hat{A}=\left(\hat{I}+\lambda\left(\hat{I}+\hat{S}\hat{L}_{R}^{-1}\hat{G}\right)^{-1}\hat{S}\hat{L}_{R}^{-1}\hat{N}\right)\label{eq:53} \end{equation} and the operator \begin{equation} \hat{\Phi}=\left(\hat{I}+\hat{S}\hat{L}_{R}^{-1}\hat{G}\right)^{-1}\left(\hat{S}\hat{L}_{R}^{-1}\hat{P}_{info}+\hat{S}\hat{P}_{L}\hat{V}\right)\label{eq:54} \end{equation} \section{Final remarks and comments on symmetrization of calculations; too much symmetry in science?} Operator-valued functions of right invertible operators, incorporating three properties of the formal formula, namely: i. linearity, see Eq.\ref{eq:20}; ii. commutativity with respect to the operator $\hat{M}$, see again Eq.\ref{eq:20}; and iii.
projection properties, see Eq.\ref{eq:27}, were constructed. To construct such operator-valued functions, we did not take into account the spectral properties of the operators used, as in the usual approach of the functional calculus, but used the most primitive method of defining such functions, namely, presenting them in the form of infinite power series. The specificity of the submitted approach is that in many cases, for interesting projections, only a finite number of terms of such series give contributions, see the author's previous papers. In this sense a new approach, illustrated by the motto of Sec.6, is possible. An important element of this paper is also a new generating structure (operator) for the n-pi $V(\tilde{x}_{(n)})$ that allows one to describe systems and the considered equations by means of a \textbf{noncommutative ring with unity}, see, e.g., \cite{Lidl 1983}. In this way, the obstacle associated with the use of one of the basic concepts of physics, namely vector spaces, in which no vector products are defined, has been removed. In the proposed approach the demarcation line between the description of equations (operators) and the description of physical systems (vectors) has been abolished. For both objects we use elements of a noncommutative ring with unity. Usually, the division into operators and vectors is justified \textbf{by the demand} that the action of an operator on a vector should give a vector. Such a demand is automatically realized in a ring in which vectors are represented by matrices with only one nonzero column, the first one. If, however, we abandon that demand, then the vectors can be replaced by operators, e.g. by diagonal matrices. In many cases such matrices representing vectors can be inverted, which is a useful property in many solution procedures. Similar reasoning lies behind the idea of replacing the generating functions or functionals by the generating vectors.
In this case, it was possible due to the fact that the generating functions or functionals do not have to be convergent. As a result, the obtained equations admit more general representations. And one more thing related to the paper title: the considered generating structures depend in a nonlinear way on the auxiliary field operators $\hat{\eta}$ and $\hat{\eta}^{\star}$, see, for example, formula (\ref{eq:34}). One can introduce equivalent generating structures which depend linearly on the infinite set of auxiliary n-point functions $\varrho(\tilde{x}_{(n)})$, see \cite{Han 2012}. This leads, as we think, to more complex formulas, especially in defining the operator-valued functions. As far as the algebraization idea of physics, and of science in general, is concerned, we would like to note that it is much less appreciated by the scientific community than the geometrization idea. In fact, we think that \textbf{there is too much symmetry in science}, which is reflected in the assumptions on the generating functions or functionals - entities having only an auxiliary character. The mere transfer of the symmetry of physical quantities onto the generating structures leads in general to divergent power series, which we call formal power series. An excess of symmetry is often masked in a natural way as a result of differences between the laws of nature (equations) and the initial or/and boundary conditions. We speak then about \textbf{spontaneous symmetry breaking}. One must also be aware of the fact that the very existence of a reference frame disturbs the symmetry of the described system, for instance the Universe, see \cite{Ros 2008}. See also \cite{Shea 2013}. Each symmetry is associated with some limitations. So if the auxiliary entities unnecessarily inherit restrictions that apply to the physical entities, then we needlessly deprive ourselves of \textbf{effective calculations}.
Since the multiscale, complex systems mostly deal with permutation symmetry, it is worth recalling \textbf{Cayley's theorem} that every group is isomorphic to a group of permutations. See also \textbf{Klein's Erlangen program} relating symmetry to geometry. What we are talking about here is similar to reductionism in science, which uses a quasi-invariance (quasi-isolation) of the system under study with respect to changes in the environment. In this analogy, the environment would be a generating vector or operator. \textbf{Reduction is symmetry}, see \cite{Ros 2008}. We believe that algebraization of the description of multiscale and complex systems will significantly improve the process of computing. It will also allow for a broader look at different areas of mathematics and physics, see \cite{Edwa 1993,Yad 2001,Han 2012'}, and especially \cite{Hell 2012}. See, however, \cite{Ros 2008}. See also \cite{Wall 2010}, where the algebraic approach to quantum field theory is criticized, although there it was mainly the problem of renormalization that was of concern. At the end of this work, we would like to draw attention to the fact that the \textbf{description of physical systems} based on moving away from details, and its algebraization leading to noncommutative rings, is similar to the way that leads to free probability and noncommutative geometry, see \cite{Hell 2012}. The difference lies in the fact that the approach proposed here is realized in a more transparent manner. We would also like to draw the reader's attention to another aspect of the generalization of this and previous works. In Eq.\ref{eq:30}, the term associated with the \textbf{linear nature of the phenomenon}, the so-called kinematic term, is described by the operator $\hat{L}$, which is a right invertible operator. In the simplest case, this would be a derivative of the first or higher order.
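A standard concrete example of right invertibility, added here purely as an illustration, is given by the shift operators on one-sided sequences:

```latex
% Editorial illustration: shift operators on one-sided sequences.
\[
\hat{S}_{L}(x_{1},x_{2},x_{3},\dots)=(x_{2},x_{3},\dots),\qquad
\hat{S}_{R}(x_{1},x_{2},\dots)=(0,x_{1},x_{2},\dots),
\]
\[
\hat{S}_{L}\hat{S}_{R}=\hat{I},\qquad
\hat{S}_{R}\hat{S}_{L}\neq\hat{I}.
\]
```

Here $\hat{S}_{L}$ is right invertible but not invertible, since $\hat{S}_{R}\hat{S}_{L}$ annihilates the first component; in the same spirit, a first-order derivative is right invertible, with an integral operator as a right inverse.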
Maybe this is the reason why right invertible operators are sometimes called \textit{derivatives}. Related to these operators are basic physical quantities such as velocity and acceleration, the existence of free waves, and the existence of physically interesting solutions to the considered equations. It is also interesting that these are mostly diagonal plus lower triangular operators. \textbf{Nonlinear phenomena} are described mostly by upper triangular operators, and external fields in which systems are submerged are described by lower triangular, left invertible operators. Quantum properties of systems are also described by lower triangular operators, see \cite{Han 2011-1}. For this reason, the definition of operator-valued functions considered in Sec.3 seems interesting, as it does not leave the above class of operators, at least for polynomial nonlinearity. Does this mean a greater unification of the linear and the nonlinear, or of classical and quantum phenomena? That is the question. \section*{App.1 Projectors $\hat{P}_{n}$.} These are projectors projecting on the n-th terms of the expansion (\ref{eq:34}). They are: \begin{equation} \hat{P}_{n}=\int\hat{\eta}^{\star}(\tilde{x}_{1})\cdots\hat{\eta}^{\star}(\tilde{x}_{n})\hat{P}_{0}\hat{\eta}(\tilde{x}_{n})\cdots\hat{\eta}(\tilde{x}_{1})d\tilde{x}_{(n)}\label{eq:55} \end{equation} for n=1,2,.... They can be expressed in another form as: \begin{equation} \hat{P}_{n}=\int\hat{\eta}^{\star}(\tilde{x}_{1})\cdots\hat{\eta}^{\star}(\tilde{x}_{n})\left(\hat{I}-\int\hat{\eta}^{\star}(\tilde{x})\hat{\eta}(\tilde{x})d\tilde{x}\right)\hat{\eta}(\tilde{x}_{n})\cdots\hat{\eta}(\tilde{x}_{1})d\tilde{x}_{(n)}\label{eq:56} \end{equation} in which the \textit{vacuum projector} $\hat{P}_{0}$ does not appear. The name of $\hat{P}_{0}$ comes from the interpretation of the functions $V(\tilde{x}_{(n)})$ as the n-p-i about the field $\varphi$, see (\ref{eq:28}).
We have, of course, the following resolution of the unit operator $\hat{I}$: \begin{equation} \hat{I}=\sum_{n=1}^{\infty}\hat{P}_{n}+\hat{P}_{0}\label{eq:57} \end{equation} \section*{App.2 Other projectors} The operator \begin{equation} \hat{R}=\sum_{n=1}\int d\tilde{x}_{(n)}\hat{\eta}^{\star}(\tilde{x}_{1})\cdots\hat{\eta}^{\star}(\tilde{x}_{n})\hat{P}_{0}+\hat{P}_{0}\equiv\sum_{n=1}\hat{R}_{n}+\hat{P}_{0}\label{eq:58} \end{equation} and the operators $\hat{R}_{n}$ are lower triangular operators with respect to the projectors $\hat{P}_{n}$. We can see that \begin{equation} \hat{R}_{n}^{\star}\hat{R}_{n}=\hat{P}_{0}vol^{n}\label{eq:59} \end{equation} and \begin{equation} \hat{R}_{n}\hat{R}_{n}^{\star}\cdot\hat{R}_{n}\hat{R}_{n}^{\star}=\hat{R}_{n}\hat{R}_{n}^{\star}\cdot vol^{n}\label{eq:60} \end{equation} In other words, the diagonal products $\hat{R}_{n}\hat{R}_{n}^{\star}$ behave like pseudo-projectors, which for the unit volume are projectors. In fact, the products become projectors after dividing each $\hat{R}_{n}$ by $vol^{n/2}$. In contrast to the orthogonal projectors $\hat{P}_{n}$: \begin{equation} \hat{P}_{m}\hat{P}_{n}=\delta_{mn}\hat{P}_{n}\label{eq:61} \end{equation} the projectors $\hat{R}_{n}$ are not orthogonal. \section*{App.3 Algebraic analysis?} In this as well as in the previous papers, we are using certain results of algebraic analysis, see \cite{Przew 1988}. Since the same name stands for two different branches of mathematics, we refer the reader to \cite{Przew 2000} to form an opinion on this terminological confusion.
\section{Proposed Method} In this section, we will first present our spatio-temporal scene graph representation (STSGR) for encoding the video sequences, following which we elaborate on our multi-modal shuffled Transformer architecture. \subsection{Overview of Spatio-Temporal Scene Graphs} Given a video sequence $V$, let $C$ denote the associated human-generated video caption, and let $(Q_i, A_i)$ represent the tuple of the text-based $i$-th question and its answer in the given human dialog about $V$ (see Fig.~\ref{fig:first_page}). We will succinctly represent the dialog history by $H = \langle(Q_1, A_1),\dots, (Q_{l-1}, A_{l-1})\rangle$. Further, let $Q_l$ represent the question under consideration. The audio-visual scene-aware dialog (AVSD) task requires the generation (or selection) of the answer denoted by $A_l$, corresponding to the question $Q_l$. Our proposed pipeline to solve this task is schematically illustrated in Fig.~\ref{fig:model}. It consists of four components: (1) a \emph{scene graph construction module}, which extracts objects and relation proposals from the video using pretrained neural network models, building a scene graph for every (temporally-sampled) video frame, (2) an \emph{intra-frame reasoning module}, which conducts node-level and edge-level graph reasoning, producing compact feature representations for each scene graph, (3) an \emph{inter-frame information aggregation module}, that aggregates these features within a temporal sliding window to produce a \emph{visual memory} for each frame's scene graph (at the center frame in that window), and (4) a \emph{semantics-controlled Transformer reasoning module}, which performs multi-modal reasoning and language modelling based on a semantic controller. In this module, we also use a newly-proposed shuffle-augmented co-attention to enable head interactions in order to boost performance. Below, we describe in detail each of these modules. 
\subsection{Scene Graph Representation of Video} Our approach to generate scene graphs for the video frames is loosely similar to the ones adopted in recent works such as~\cite{pan2020spatio,herzig2019spatio,Wang2018videos}, and has three components: (a) object detection, (b) visual-relation detection, and (c) region of interest (ROI) recrop on union bounding boxes. For (a), we train a Faster R-CNN model~\cite{ren2015faster} on the Visual Genome dataset~\cite{krishna2017visual} using the MMDetection repository \cite{mmdetection}. For a video frame $I$, this Faster R-CNN model produces: $\mathcal{F}_{I}, \mathcal{B}_I, \mathcal{S}_I = \mathrm{RCNN}(I)$, where $\mathcal{F}_{I} \in {\mathbb{R}^{N_o \times d_o}}$ denotes the $d_o$-dimensional object features, $\mathcal{B}_I \in {\mathbb{R}^{N_o \times 4}}$ are the object bounding boxes, and $\mathcal{S}_{I}$ is a list of semantic labels associated with each bounding box. The pair $(\mathcal{F}_{I}, \mathcal{S}_I)$ forms the nodes of our scene graph. Next, to find the graph edges, we train a relation model on the VG200 dataset~\cite{krishna2017visual}, which contains 150 objects and 50 predicates, and apply this learned model on the frames from the given video. The output of this model is a set of $\langle S, P, O \rangle$ triplets per frame, where $S$, $P$, and $O$ represent the \textit{subject}, \textit{predicate}, and \textit{object}, respectively. We keep the $\langle S,O \rangle$ pairs as relation proposals and discard the original predicate semantics, as the relation predicates of the model trained on VG200 are limited and fixed. Instead, we let our reasoning model learn implicit relation semantics during our end-to-end training. For the detected $\langle S,O \rangle$ pairs, we regard the union box of the bounding boxes for $S$ and $O$ as the predicate region of interest.
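For concreteness, the union-box step can be sketched in a few lines; this is a minimal stand-in for the pipeline, where the function name is ours and boxes are assumed to be in [x1, y1, x2, y2] format:

```python
def union_box(box_s, box_o):
    """Union of two [x1, y1, x2, y2] boxes: the smallest box covering both.

    A simplified sketch of how a predicate region of interest could be
    formed from a detected <subject, object> pair.
    """
    x1 = min(box_s[0], box_o[0])
    y1 = min(box_s[1], box_o[1])
    x2 = max(box_s[2], box_o[2])
    y2 = max(box_s[3], box_o[3])
    return [x1, y1, x2, y2]
```

The resulting box is then what the ROI-align operator would be applied to in the next step.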
Next, we apply the \emph{ROI-align} operator~\cite{ren2015faster} on the last layer of the backbone network using this union box and make the resultant feature an extra node in the scene graph. \subsection{Intra-Frame Reasoning} Representing videos directly as sequences of scene graphs leads to a complex graph reasoning problem that can be computationally challenging. To avoid this issue, we propose to hierarchically reduce this complexity by embedding these graphs into learned representation spaces. Specifically, we propose an intra-frame reasoning scheme that bifurcates a scene graph into two streams: (i) a \emph{visual scene graph} that generates a representation summarizing the visual cues captured in the graph nodes, and (ii) a \emph{semantic scene graph} that summarizes the graph edges. Formally, let us define a scene graph as $\mathcal{G} = \{(x_i, e_{ij}, x_j) \mid x_i, x_j \in \mathcal{V}, e_{ij} \in \mathcal{E} \}$, where $\mathcal{V}$ denotes the set of nodes consisting of single objects and $\mathcal{E}$ is the set of edges consisting of relations linking two objects. The triplet $(x_i, e_{ij}, x_j)$ indicates that the subject node $x_i$ and the object node $x_j$ are connected by the directed relation edge $e_{ij}$. We denote by $\mathcal{G}_v$ and $\mathcal{G}_s$ the visual scene graph and the semantic scene graph respectively: the former is a graph attention network \cite{velivckovic2017graph} which computes an attention coefficient for each edge and updates node features based on these coefficients; the latter is based on EdgeConv \cite{wang2019dynamic}, which computes extra edge features based on node features and then updates the node features by aggregating the edge features linked to a given node. Both networks are explained in detail next. We combine these two complementary graph neural networks in a cascade to conduct intra-frame reasoning. 
\noindent \textbf{Visual Scene Graph Reasoning:} For $M$ node features $\mathbf{X} = \{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_M\}$ in a scene graph, multi-head self-attention~\cite{vaswani2017attention} is first performed for each pair of linked nodes. In each head $k$, for two linked nodes $\mathbf{x}_i$ and $\mathbf{x}_j$, the attention coefficient $\alpha^k_{ij}$ indicating the importance of node $j$ to node $i$ is computed by \begin{equation} \alpha^k_{ij} = \frac{\exp \left(\sigma\left(\mathbf{\Theta}^\top_k [\mathbf{W}^k_{\!1} \mathbf{x}_{i} \parallel \mathbf{W}^k_{\!1} \mathbf{x}_{j}]\right)\right)}{\sum_{m \in \mathcal{N}_i}\exp \left(\sigma\left(\mathbf{\Theta}^\top_k [\mathbf{W}^k_{\!1} \mathbf{x}_{i} \parallel \mathbf{W}^k_{\!1} \mathbf{x}_{m}]\right)\right)}, \label{eq:1} \end{equation} where $\parallel$ denotes feature concatenation, $\sigma$ is a nonlinearity (Leaky ReLU), $\mathcal{N}_i$ indicates the neighboring graph nodes of object $i$ (including $i$), $\mathbf{W}^k_{\!1} \in \mathbb{R}^{d_{h} \times d_\text{in}}$ is a (learned) weight matrix transforming the original features to a shared latent space, and $\mathbf{\Theta}_k \in \mathbb{R}^{2d_{h}}$ is the (learned) attention weight vector. Using the attention weights $\alpha^k$ and a set of learned weight matrices $\mathbf{W}^k_{\!2}\in \mathbb{R}^{d_{h}/K \times d_\text{in}}$, we update the node features as: \begin{equation} \mathbf{x}'_{i} = \big\lVert_{k=1}^{K} \sigma \Big ( \sum_{j \in \mathcal{N}_i} \alpha_{ij}^k \mathbf{W}_{\!2}^{k} \mathbf{x}_{j} \Big ). \end{equation} Outputs of the $K$ heads are concatenated to produce $\mathbf{x}'_{i}\in\mathbb{R}^{d_h}$, which is used as input to the semantic graph network. \noindent \textbf{Semantic Scene Graph Reasoning:} This sub-module captures higher-order semantics between nodes in the scene graph.
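Before detailing it, the attention update of Eqs. (1)-(2) can be sketched in plain Python. This is a single-head toy version that takes the projection matrices $\mathbf{W}_1, \mathbf{W}_2$ to be the identity; all names are illustrative and not taken from the paper's implementation:

```python
import math

def leaky_relu(x, slope=0.2):
    """The LeakyReLU nonlinearity sigma used in Eq. (1)."""
    return x if x >= 0.0 else slope * x

def gat_update(x, neighbors, theta):
    """Single-head sketch of the visual-graph attention in Eqs. (1)-(2).

    x: dict node_id -> feature list; neighbors: dict node_id -> list of
    neighbor ids (including the node itself); theta: attention weight
    vector of length 2*d. Projections are identity to keep the sketch short.
    """
    def score(i, j):
        cat = x[i] + x[j]                        # concatenation [x_i || x_j]
        return leaky_relu(sum(t * c for t, c in zip(theta, cat)))

    out = {}
    for i, nbrs in neighbors.items():
        exps = {j: math.exp(score(i, j)) for j in nbrs}   # Eq. (1) softmax
        z = sum(exps.values())
        d = len(x[i])
        out[i] = [leaky_relu(sum(exps[j] / z * x[j][k] for j in nbrs))
                  for k in range(d)]                      # Eq. (2) update
    return out
```

In the full model this is repeated for $K$ heads with learned projections and the head outputs are concatenated.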
To this end, EdgeConv~\cite{wang2019dynamic}, which is a multi-layer fully-connected network $h_{\mathbf{\Lambda}}$, is employed to generate edge features $\mathbf{e}_{ij}$ from its two connected node features $(\mathbf{x}'_i, \mathbf{x}'_j)$: $ \mathbf{e}_{ij} = h_{\mathbf{\Lambda}}(\mathbf{x}'_i, \mathbf{x}'_j)$, where $h_{\mathbf{\Lambda}}: \mathbb{R}^{d_h} \times \mathbb{R}^{d_h} \rightarrow \mathbb{R}^{d_h}$ is a nonlinear transformation with learnable parameters $\mathbf{\Lambda}$. We then obtain the output node features $\mathbf{x}^{\star}_i$ by aggregating features from the edges that are directed to the object node $i$, i.e., \begin{equation} \mathbf{x}^{\star}_i = \max_{j:(j,i) \in \mathcal{E}_i} \mathbf{e}_{ji}, \label{eq:2} \end{equation} where $\mathcal{E}_i$ denotes the set of edges directed to node $i$. All object features inside the scene graph are updated by the above intra-frame feature aggregation. \noindent \textbf{Memory Generation with Graph Pooling:} After conducting intra-frame reasoning to obtain higher-level features for each node, we pool the updated graph into a memory for further temporal aggregation. Since different frame-level scene graphs have different numbers of nodes and edges, we adopt graph average pooling ($\mathrm{GAP}$) and graph max pooling ($\mathrm{GMP}$) \cite{lee2019self} to generate two graph memories and concatenate them to produce $V^{\star}$: \begin{equation} V^{\star} = \mathrm{GAP}(\mathbf{X}^{\star}, \mathcal{E}) \parallel \mathrm{GMP}(\mathbf{X}^{\star}, \mathcal{E}), \label{eq:3} \end{equation} where $\mathcal{E}$ denotes the scene graph connection structure, and $\mathbf{X}^{\star}$ the $M$ node features $\{\mathbf{x}^{\star}_1, \mathbf{x}^{\star}_2,\dots, \mathbf{x}^{\star}_M\}$ from~\eqref{eq:2}. \subsection{Inter-Frame Information Aggregation} Apart from the spatial graph representations described above, there is a temporal continuity of visual cues in the video frames that needs to be captured as well. 
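A minimal sketch of the edge aggregation in Eq. (2) and the pooled graph memory of Eq. (3), with illustrative names and plain Python lists standing in for tensors:

```python
def edge_aggregate(edge_feats, num_nodes):
    """Sketch of Eq. (2): per-node max over the features of incoming edges.

    edge_feats: non-empty dict mapping (j, i) -> feature list for the edge
    directed j -> i. Nodes with no incoming edge keep a zero vector here,
    a choice made for this sketch only.
    """
    d = len(next(iter(edge_feats.values())))
    out = [[0.0] * d for _ in range(num_nodes)]
    for i in range(num_nodes):
        incoming = [f for (j, t), f in edge_feats.items() if t == i]
        if incoming:
            out[i] = [max(f[k] for f in incoming) for k in range(d)]
    return out

def graph_memory(node_feats):
    """Sketch of Eq. (3): concatenate graph average and max pooling."""
    n, d = len(node_feats), len(node_feats[0])
    gap = [sum(f[k] for f in node_feats) / n for k in range(d)]
    gmp = [max(f[k] for f in node_feats) for k in range(d)]
    return gap + gmp  # the 2*d-dimensional memory V*
```

In the paper the edge features themselves come from the learned network $h_{\mathbf{\Lambda}}$; here they are simply given as input.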
To this end, we propose an inter-frame aggregation scheme that operates on the spatial graph embeddings. Specifically, for a sequence of scene graph memories $\langle v^{\star}_1, v^{\star}_2,\dots, v^{\star}_L\rangle$ of length $L$ produced using~\eqref{eq:3} on a sequence of $L$ frames, we use temporal sliding windows of size $\tau$ to update the graph memory of the center frame in each window by aggregating the graph memories of its neighboring frames in that window, both in the past and in the future. Let $F \in \mathbb{R}^{2d_{h} \times \tau}$ denote the matrix of graph embeddings within this window of length $\tau$; then we perform window-level summarization over all frame memories within $F$ as: $\beta = \mathrm{softmax}(\Gamma^\top\tanh(\mathbf{W}_{\!\tau} F))$, where $\mathbf{W}_{\!\tau} \in \mathbb{R}^{2d_h \times 2d_h}$ is a learned weight matrix, $\Gamma \in \mathbb{R}^{2d_h}$ is a weight vector, and $\beta$ denotes the attention weights. We then use $\beta$ to update the memory $v_c$ of the center frame (in this window) by aggregating information across this window, as: $v'_c = F\beta^\top$. Repeating this step for all sliding windows, we get the final visual graph memory $V'=\left\langle v'_1, v'_2,\dots, v'_L\right\rangle$ aggregating both spatial and temporal information. We also augment these visual features with their temporally-aligned audio embeddings $\left\langle s_1, s_2, \cdots, s_L\right\rangle$ produced using an AudioSet VGGish network~\cite{hershey2017cnn}. \subsection{Semantics-Controlled Transformer Reasoning} \label{sec:Transformer} The above modules encode a video into a sequence of graph memories via reasoning on visual and semantic scene graphs. Besides encoding audio-visual information, we also need to encode the text information available in the AVSD task.
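The window-level summarization above can be sketched as follows, with the $\tau$ frame memories stored as plain lists (the columns of $F$) and the parameter names purely illustrative:

```python
import math

def window_attention(frames, W, gamma):
    """Sketch of the inter-frame aggregation for one sliding window.

    Computes beta = softmax(gamma^T tanh(W F)) and returns v'_c = F beta^T.
    frames: list of tau graph memories; W: weight matrix as a list of rows;
    gamma: weight vector. Names are ours, not from the paper's code.
    """
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    scores = [sum(g * math.tanh(u) for g, u in zip(gamma, matvec(W, f)))
              for f in frames]
    mx = max(scores)                       # numerically stable softmax
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    beta = [e / z for e in exps]
    d = len(frames[0])
    return [sum(b * f[k] for b, f in zip(beta, frames)) for k in range(d)]
```

Sliding this over all windows yields the aggregated memory sequence $V'$.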
For the sentence generation task, we propose to generate the answer autoregressively~\cite{anderson2018bottom,hori2018multimodal}, i.e., predict the next word in the answer from the vocabulary based on source sequences including the visual memory, query $Q_l$, caption $C$, the dialog history $H = \langle(Q_1, A_1),\dots, (Q_{l-1}, A_{l-1})\rangle$, and the partially generated answer so far, denoted $A_l^\text{in}$ (see Fig.~\ref{fig:model} and Fig.~\ref{fig:Transformer}). This sub-answer $A_l^\text{in}$ forms the semantics that control the attention on the various modalities to generate the next word. As shown in Fig.~\ref{fig:Transformer}, our semantics-controlled Transformer module consists of a graph encoder, a text encoder, and a multi-modal decoder. It takes in source sequences and outputs the probability distribution of the next token for all tokens in the vocabulary. We detail the steps in this module next. \begin{figure}[t] \centering \includegraphics[width=9.1cm,trim={0cm 0cm 0cm 0cm},clip]{figs/model.png} \caption{An illustration of the sequential attention flow in our semantics-controlled Transformer. MHA stands for multi-head attention. FFN is short for feed-forward networks. The acronyms A, V, H, and Q stand for the answers, visual memory, caption/dialog history, and the question, respectively.} \label{fig:Transformer} \vspace{-0.5cm} \end{figure} \noindent \textbf{Encoder:} We first use a Transformer to embed all text sources ($H$, $C$, $Q_l$, $A_l^\text{in}$) using token and positional embeddings, generating feature matrices $e_h, e_c, e_{q}$, and $e_a$, each of the same feature dimensionality $d_h$. We also use a single-layer fully-connected network to transfer the audio-augmented visual memories in $V'$ to $d_h$-dimensional features $e_v$ that match the dimension of the text sources.
Next, for the answer generation task, the input sub-answer (generated so far) $e_a$ is encoded with a Transformer consisting of multi-head self-attention to get hidden representations $h^a_\text{enc}$: \begin{equation} h^a_\text{enc} = \mathrm{FFN}^a(\mathrm{Attention}(\mathbf{W}^a_{\att{Q}} e_a, \mathbf{W}^a_\att{K} e_a, \mathbf{W}^a_\att{V} e_a)), \label{eq:6} \end{equation} \noindent where $\mathbf{W}^a_{\att{Q}}$, $\mathbf{W}^a_{\att{K}}$, $\mathbf{W}^a_{\att{V}}$ are weight matrices for query, key, and value respectively~\cite{vaswani2017attention}, $\mathrm{FFN}^a$ is a feed-forward module consisting of two fully-connected layers with ReLU in between. The $\mathrm{Attention}$ function is defined as in~\cite{vaswani2017attention}: \begin{equation} \mathrm{Attention}(\mathbf{Q},~\mathbf{K},~\mathbf{V}) = \mathrm{softmax}(\frac{\mathbf{Q}\mathbf{K}^\top}{\sqrt{d_h}})\mathbf{V}, \end{equation} \noindent with a scaling factor $\sqrt{d_h}$ that maintains the order of magnitude in features, and $\mathbf{Q},\mathbf{K},\mathbf{V}$ represent the query, key, and value triplets as described in~\cite{vaswani2017attention}. After encoding the input sub-answer, we conduct co-attention in turn for each of the other text and visual embeddings $e_j$, where $j \in \{v, c, h, q\}$, with a similar Transformer architecture. That is, the encoding $h^j_{\text{enc}}$ for a given embedding type $e_j$ is obtained by using the encoding $h^{j'}_{\text{enc}}$ for the previous embedding type $e_{j'}$ as query (Fig.~\ref{fig:Transformer}): \begin{equation} h^j_{\text{enc}} = \mathrm{FFN}^j(\mathrm{Attention}(\mathbf{W}^j_{\att{Q}} h^{j'}_\text{enc}, \mathbf{W}^j_{\att{K}} e_j, \mathbf{W}^j_{\att{V}} e_j)). \end{equation} In our implementation, the embeddings for history and caption are concatenated as $e_{c+h}=e_c||e_h$. Processing occurs in the following order: starting from $h^a_\text{enc}$, we compute $h^v_{\text{enc}}$, then $h^{c+h}_{\text{enc}}$, and later $h^q_{\text{enc}}$. 
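The $\mathrm{Attention}$ function can be sketched directly from its definition; here $Q$, $K$, and $V$ are lists of row vectors, and the code is a toy stand-in rather than the paper's implementation:

```python
import math

def attention(Q, K, V):
    """Sketch of scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are lists of row vectors; d is the key dimension, and the
    1/sqrt(d) factor keeps the logits at a moderate magnitude.
    """
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in K]
        mx = max(scores)                   # numerically stable softmax
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        w = [e / z for e in exps]
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

In the model, $Q$, $K$, and $V$ are first produced by the learned projections $\mathbf{W}_{\att{Q}}, \mathbf{W}_{\att{K}}, \mathbf{W}_{\att{V}}$ applied to the relevant embeddings.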
Finally, we get a feature vector $h^{\star}_\text{enc}$ that fuses all the information from the text and audio-visual sources by concatenating these multi-modal features. \noindent\textbf{Multi-head Shuffled Transformer:} In this paper, we also propose to utilize head shuffling to further improve the performance of the Transformer structure as shown in Fig.~\ref{fig:shuffle-txr}. In the original Transformer~\cite{vaswani2017attention}, the feature vectors of all heads are directly concatenated before being fed into the last fully-connected layer. Thus, there is no interaction between those heads from the start to the end. To enable the interactions across heads, we propose to divide each head and shuffle all head vectors before passing them on to separate fully-connected layers. The outputs are finally concatenated in a late fusion style. This scheme is similar to ShuffleNet~\cite{zhang2018shufflenet}, with the key difference that here we conduct shuffling between different heads within the multi-head attention, while in ShuffleNet the shuffling is across channels. Our empirical results show that our shuffling operation results in better generalization of the model. \noindent \textbf{Decoder:} For the generation setting, with the final encoded feature $h^{\star}_\text{enc}$, we use a feed-forward network with softmax to predict the next token probability distribution $P$ over all tokens in the vocabulary $\mathcal{V}$; i.e., $P = \mathrm{softmax}(\mathrm{FFN}(h^{\star}_\text{enc}))$. In the testing stage, we conduct beam search with $b$ beams to generate an answer sentence. 
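The head-shuffling described above can be sketched as follows; the chunking into groups mirrors channel shuffle, and the function name and grouping scheme are our assumptions rather than the paper's exact implementation:

```python
def shuffle_heads(head_outputs, groups):
    """Sketch of the head-shuffle idea (analogous to channel shuffle in
    ShuffleNet): split each head's vector into `groups` chunks and gather
    the g-th chunk of every head into output group g, so that each of the
    subsequent fully-connected layers sees a mix of all heads.
    """
    K = len(head_outputs)
    chunk = len(head_outputs[0]) // groups
    split = [[h[g * chunk:(g + 1) * chunk] for g in range(groups)]
             for h in head_outputs]
    return [sum((split[k][g] for k in range(K)), []) for g in range(groups)]
```

Each returned group would then be passed through its own fully-connected layer, with the results concatenated in a late-fusion style as described above.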
\begin{figure}[t] \centering \includegraphics[width=7.5cm,trim={0cm 0cm 0cm 0cm},clip]{figs/shuffled_transformer.pdf} \vspace{-0.2cm} \caption{An illustration of our multi-head shuffled Transformer, where we shuffle the output of each head before passing it on to the FFN module.} \label{fig:shuffle-txr} \vspace{-0.55cm} \end{figure} \noindent \textbf{Loss Function:} Let $\mathcal{P}$ denote the collection of all next-token probability distributions $P_j\in\mathbb{R}^{|\mathcal{V}|}$, $j=1,\dots,N$ for batch size $N$, and let $\mathcal{G}$ be the collection of respective distributions $G_j$ for the ground truth answer tokens. For the generation setting, we apply label smoothing~\cite{muller2019does} to account for the sparsity of the token distributions, leading to $\tilde{\mathcal{G}}$. We use the cross-entropy ($\mathrm{CE}$) loss between the predicted and the smoothed ground truth distributions to train our model end-to-end: \begin{equation} \mathcal{L} = \mathrm{CE}(\mathcal{P} | \tilde{\mathcal{G}}) = -\frac{1}{N}\sum_{j=1}^{N} \sum_{u\in \mathcal{V}} \tilde{G}_j(u)\log P_j(u). \label{eq:loss} \end{equation} For the retrieval setting, we first concatenate the feature embeddings of the query and the various input modalities obtained from the Encoder module of our network ($e_h, e_c, e_q, e_v$). Next, the candidate answers are embedded into this joint space using LSTMs, and a dot product is taken between the concatenated inputs and embeddings of each of the answer candidates. We then train this model with the binary cross-entropy loss. \section{Conclusions} We presented a novel hierarchical graph representation learning and Transformer reasoning framework for the problem of audio-visual scene-aware dialog. Specifically, our model generates object, frame, and video-level representations that are systematically integrated to produce visual memories, which are sequentially fused to the encodings of other modalities (dialog history, etc.) 
conditioned on the input question using a multi-head shuffled Transformer. Experiments demonstrate the benefits of our framework for both generation/selection tasks on the AVSD benchmark. Going forward, we plan to explore the use of richer text embeddings (such as GPT~\cite{radford2019language} and BERT~\cite{devlin2018bert}) within our framework. \section{Experiments} In this section, we detail our experimental setup, datasets, and evaluation protocols, before furnishing our results. \noindent\textbf{Dataset and Evaluation:} We use the audio-visual scene-aware dialog (AVSD) dataset~\cite{alamri2019audio} for our experiments, which is the benchmark dataset for this task. This dataset emulates a real-world human-human natural conversation scenario about an audio-visual clip. See~\cite{alamri2019audio} for details of this task and the dataset. We evaluate on two variants of this dataset corresponding to annotations available for the DSTC-7 and DSTC-8 challenges,\footnote{\url{https://sites.google.com/dstc.community/dstc8/home}} consisting of 7,659, 1,787, 1,710, and 1,710 dialogs for training, validation, DSTC-7 testing, and DSTC-8 testing, respectively for the answer generation task. The quality of the generated answers is evaluated using the standard MS COCO evaluation metrics~\cite{chen2015microsoft}, such as BLEU, METEOR, ROUGE-L, and CIDEr. Apart from the answer generation task~\cite{hori2018multimodal}, we also report experiments on the answer selection task, described in~\cite{alamri2019audio} using their annotations and ground truth answers. This task requires selecting the answer to a question from a set of 100 answers. Specifically, in this task, an algorithm is to present a ranking over a set of 100 provided answers, with ideally the correct answer ranked as the first. The evaluation is then based on the mean retrieval rank over the test set. 
\noindent \textbf{Data Processing:} We follow~\cite{le2019multimodal} to perform text preprocessing, which includes lowercasing, tokenization, and building a vocabulary by only selecting tokens that occur at least five times. Thus, we use a vocabulary with 3,254 words, both for the generation and retrieval tasks. \noindent \textbf{Feature Extraction:} Motivated by~\cite{anderson2018bottom}, we train a detector on Visual Genome with 1601 classes and 401 attributes, which incorporates a ``background'' label and a ``no-attribute'' label. We use ResNext-101 as the neural backbone with a multiscale feature pyramid network. We further use fine-grained ROI-alignment instead of ROI-pooling for better feature representation. We extract the 1024-D features for the 36 highest scoring regions, their class labels, and attributes. After extracting the region features, we apply a pretrained relationship detector~\cite{zhang2019vrd} to find visually-related regions. We calculate the minimal bounding box which covers two visually-related regions and perform ROI-alignment to get compact representations for relationship regions. In order to incorporate audio into the STSGR framework, we extract AudioSet VGG-ish features~\cite{hershey2017cnn} from the audio stream for every video. These are 128-D features obtained from the AudioSet VGG-ish CNN, pretrained on 0.96s Mel Spectrogram patches on the AudioSet data~\cite{gemmeke2017audio}.
\begin{table}[t] \centering \resizebox{0.435\textwidth}{!}{ \begin{tabular}{l|cccc} \hline Method & B4 & MET & ROUGE & CIDEr \\ \hline STSGR full model & \textbf{0.133} & \textbf{0.165} & \textbf{0.361} & \textbf{1.265}\\ STSGR w/o shuffle & 0.127 & 0.161 & 0.354 & 1.208 \\ STSGR w/o GAT & 0.118 & 0.160 & 0.347 & 1.125 \\ STSGR w/o EdgeConv & 0.131 & 0.162 & 0.356 & 1.244 \\ STSGR w/o union box features & 0.124 & 0.163 & 0.352 & 1.175 \\ STSGR w/o visual features & 0.127 & 0.160 & 0.356 & 1.203 \\ STSGR w/o temporal & 0.125 & 0.164 & 0.357 & 1.212 \\ \hline STSGR + audio & \textbf{0.133} & \textbf{0.165} & \textbf{0.362} & \textbf{1.272}\\ \hline \end{tabular} } \caption{Ablation study using AVSD@DSTC7 dataset.} \label{tab:ablation} \vspace{-0.6cm} \end{table} \noindent \textbf{Model Training:} We set our Transformer hyperparameters following \cite{vaswani2017attention}. The feature dimension is 512, while the inner-layer dimension of the feed-forward network is set to 2048. For multi-head attention, we maintain $h=8$ parallel attention heads and apply shuffling to boost performance. For the semantic labels, we build a 300-D embedding layer for the 1651 words in the vocabulary (which is available with the dataset), and initialize the embeddings using GloVe word vectors~\cite{pennington2014glove}. For semantic labels consisting of more than one word, we use the average word embedding as the label embedding. Our model is trained on one Nvidia Titan XP GPU with Adam optimizer~\cite{kingma2015adam} with $\beta_1 = 0.9$, $\beta_2 = 0.98$. The batch size is set to 16 and we adopt the warm-up strategy as suggested in~\cite{vaswani2017attention} for learning rate adjustment with about 10,000 steps. 
\noindent \textbf{Baselines:} We consider the following four baselines on the generation task: (i) \textit{Baseline}~\cite{hori2019end}, (ii) \textit{Multi-modal Attention}~\cite{hori2019end}, that uses attention over concatenated features, (iii) \textit{Simple}~\cite{schwartz2019factor} that uses factor-graph attention on the modalities, and (iv) \textit{MTN}~\cite{le2019multimodal} that applies self-attention and co-attention to aggregate multi-modal information. For the retrieval task, we compare our method against the state-of-the-art method of ~\cite{alamri2019audio} on the DSTC-7 split. \begin{figure*}[ht] \centering \includegraphics[width=16.9cm]{figs/qual-gen-retri-long.pdf} \caption{Qualitative results from our model on both generation and retrieval tasks of AVSD. Left: input video frames, Top-right: caption and dialog history, Bottom-middle: top-3 generated answers with confidence scores. Bottom-right: top-5 ranked candidate answers with confidence scores.} \vspace{-.35cm} \label{fig:quals} \end{figure*} \noindent\textbf{Ablation Study:} To understand the importance of each component in our model, Table~\ref{tab:ablation} details an ablation study. We analyze several key components: (i) shuffling in the Transformer structure, (ii) visual and semantic graph, (iii) ROI Recrop on the union bounding boxes, and (iv) temporal aggregation. From the table, we see that Graph Attention Network (GAT), which is used to produce the visual scene graph, is important to aggregate information from neighboring nodes (e.g., improving CIDEr from 1.125 to 1.265), while EdgeConv, used in the semantic graph, offers some improvement (e.g., CIDEr from 1.244 to 1.265). Moreover, the use of shuffling in the multi-head Transformer architecture boosts the performance significantly (from 1.208 to 1.265 for CIDEr). We can also conclude that union bounding boxes, semantic labels, and inter-frame aggregation contribute to stabilize the generation performance. 
Overall, by adopting all these key components, the full model outperforms all the ablations. From Tables~\ref{tab:ablation} and ~\ref{tab:audio_ret}, we notice that incorporation of audio helps improve the performance of our model. For instance, on the retrieval setting we observe that incorporating audio lowers the Mean-Retrieval Rank noticeably down to 4.08 from 4.33 for the full model and to 5.91 from 6.54 when no language context is available. \begin{table}[t] \centering \resizebox{0.5\textwidth}{!}{ \begin{tabular}{l|cccc} \hline \multicolumn{5}{c}{AVSD@DSTC7}\\ \hline Method & B4 & MET & ROUGE & CIDEr\\ \hline Baseline~\cite{hori2019end} & 0.075 & 0.110 & 0.275 & 0.701 \\ Multi-modal Attention~\cite{hori2019end} & 0.078 & 0.113 & 0.277 & 0.727 \\ Simple~\cite{schwartz2019simple} & 0.091 & 0.125 & 0.307 & 0.883\\ MTN~\cite{le2019multimodal} & 0.128 & 0.162 & 0.355 & 1.249\\ Ours & \textbf{0.133} & \textbf{0.165} & \textbf{0.362} & \textbf{1.272}\\ \hline \multicolumn{5}{c}{AVSD@DSTC8}\\ \hline Baseline~\cite{hori2019end} & 0.289 & 0.210 & 0.480 & 0.651 \\ Multi-modal Attention~\cite{hori2019end} & 0.293 & 0.212 & 0.483 & 0.679 \\ Simple~\cite{schwartz2019simple} & 0.311 & 0.224 & 0.502 & 0.766\\ MTN~\cite{le2019multimodal} & 0.352 & 0.263 & 0.547 & 0.978\\ Ours & \textbf{0.357} & \textbf{0.267} & \textbf{0.553} & \textbf{1.004} \\ \hline \end{tabular} } \caption{Comparisons of our method against the state of the art on the AVSD test splits for DSTC7 and DSTC8.} \label{tab:dstc7} \vspace*{-.2cm} \end{table} \begin{table}[t!] \centering \resizebox{0.469\textwidth}{!}{ \begin{tabular}{l|ccc} \hline Method & Full model & w/o caption & w/o cap. Diag. 
Hist.\\ \hline \citet{alamri2019audio} & 5.88 & N/A & 7.41 \\ \citet{hori2019end} & 5.60 & N/A & 7.23 \\ MTN w/o audio & 4.51 & 4.90 & 6.85 \\ MTN w/ audio & 4.29 & 4.78 & 6.46 \\ STSGR & 4.33 & 4.67 & 6.54 \\ STSGR w/ audio & \textbf{4.08} & \textbf{4.55} & \textbf{5.91} \\ \hline \end{tabular} } \caption{State-of-the-art comparisons on answer selection as measured by Mean Retrieval Rank (lower the better).} \label{tab:audio_ret} \vspace*{-.15cm} \end{table} \noindent\textbf{Comparisons to the State of the Art:} In Table~\ref{tab:dstc7}, we compare STSGR against baseline methods on various quality metrics based on ground-truth answers. As is clear, our approach achieves better performance against all the baselines. The performance on the answer selection task (mean retrieval rank) is provided in Table~\ref{tab:audio_ret}, demonstrating clearly state-of-the-art results against the baseline in~\cite{alamri2019audio}. We also show that including audio into the STSGR representation helps improve the mean retrieval rank. \begin{table}[t!] \centering \resizebox{0.43\textwidth}{!}{ \begin{tabular}{c|c|cccc} \hline Method & Feature & B4 & M & R & C \\ \hline Simple & i3d & 0.091 & 0.125 & 0.307 & 0.883 \\ Simple & VGG & 0.095 & 0.126 & 0.309 & 0.919 \\ MTN & N/A & 0.114 & 0.147 & 0.332 & 1.106 \\ MTN & i3d & 0.118 & 0.160 & 0.348 & 1.151 \\ STSGR (Ours) & N/A & 0.121 & 0.152 & 0.350 & 1.186 \\ STSGR (Ours) & i3d & 0.122 & 0.152 & 0.353 & 1.223 \\ STSGR (Ours) & Scene Graphs & \textbf{0.133} & \textbf{0.165} & \textbf{0.361} & \textbf{1.265} \\ \hline \end{tabular} } \caption{Comparison of visual representations on DSTC7.} \label{tab:avsd} \vspace*{-.5cm} \end{table} \noindent\textbf{Qualitative Results and Discussion:} In Fig.~\ref{fig:quals}, we provide two qualitative results from our STSGR model. 
For the first case, our model consistently detects the woman in the frames and finds that she maintains many connections with other objects inside the scene throughout the whole video; thus, our model makes/selects the correct answer with high confidence. For the second case, the cluttered background poses a challenge to our model. However, STSGR can still generate/rank the correct answer within the top-2. In general, we find that STSGR can answer spatial and temporal questions very well. This is quantitatively evidenced by observing that while both STSGR and MTN~\cite{le2019multimodal} use similar backends, they differ in the input representations (I3D in~\cite{le2019multimodal} vs. scene graphs in ours), and our model outperforms MTN noticeably (1.272 vs 1.249 on CIDEr, Table~\ref{tab:dstc7}), substantiating the importance of our STSGR representation. In Table~\ref{tab:avsd}, we further compare the STSGR representation against other visual representations (I3D, VGG) and different methods (Simple~\cite{schwartz2019simple}, MTN~\cite{le2019multimodal}) on the generation task, and demonstrate that our proposed scene graph representation by itself is a better way to characterize visual content. \section{Introduction} The success of deep learning in producing effective solutions to several fundamental problems in computer vision, natural language processing, and speech/audio understanding has provided an impetus to explore more complex multi-modal problems at the intersections of these domains, attracting wide interest recently~\cite{zhu2020deep}.
A few notable ones include: (i) visual question answering (VQA)~\cite{antol2015vqa,yang2003videoqa}, the goal of which is to build an agent that can generate correct answers to free-form questions about visual content, (ii) audio/visual captioning~\cite{hori2017attention,venugopalan2015sequence,xu2015show,drossos2019clotho}, in which the agent needs to generate a sentence in natural language describing the audio/visual content, (iii) visual dialog~\cite{das2017visual}, in which the agent needs to engage in a natural conversation with a human about a static image, and (iv) audio-visual scene-aware dialog (AVSD)~\cite{alamri2019audio,hori2019end} -- that generalizes (i), (ii), and (iii) -- in which the agent needs to produce a natural answer to a question about a given audio-visual clip, in a conversation setting or select the correct answer from a set of candidates. The AVSD task\footnote{\url{https://video-dialog.com/}} emulates a real-world human-machine conversation setting that is potentially useful in a variety of practical applications, such as building virtual assistants~\cite{deruyttere2019talk2car} or in human-robot interactions~\cite{thomason2019improving}. See Figure~\ref{fig:first_page} for an illustration of this task. \begin{figure}[t] \centering \includegraphics[width=12cm,trim={1.2cm 8.9cm 2.5cm 0.5cm},clip]{figs/first-page_v2.pdf} \caption{A result from our proposed model for the AVSD task. Given a video clip, its caption, dialog history, and a question, the AVSD generation task aims to generate the answer in natural language form.} \label{fig:first_page} \vspace{-5pt} \end{figure} The generality of the AVSD task, however, poses a challenging multi-modal representation learning and reasoning problem. 
Specifically, some of the input modalities to this task may offer complementary information (such as video and audio), while a few others may be independent (audio and captions), or even conflict with each other, e.g., the provided text (captions/dialogs) may include details from human experience that are absent in the video (e.g., ``I think...''), or may include abstract responses (``happy'', ``bored'', etc.) that may be subjective. Thus, the main quest in this task is to represent these modalities such that inference on them is efficient and effective. Previous approaches to this problem used holistic video features produced by a generic 3D convolutional neural network~\cite{carreira2017quo}, and either focused on extending attention models on these features to include additional modalities~\cite{alamri2019audio,hori2019end,schwartz2019simple}, or used vanilla Transformer networks~\cite{vaswani2017attention} to produce effective multi-modal embeddings~\cite{le2019multimodal}. These off-the-shelf visual representations or Transformer architectures are not attuned to the task, potentially leading to sub-optimal performance. In this paper, we present a neural inference algorithm that hierarchically reduces the complexity of the AVSD task using the machinery of graph neural networks and sequential multi-head Transformers. Specifically, we first present a spatio-temporal scene graph representation (STSGR) for encoding the video compactly while capturing its semantics.
In particular, our scheme builds on visual scene graphs~\cite{johnson2015image} towards video representation learning by introducing two novel modules: (i) an intra-frame reasoning module that combines graph-attention~\cite{velivckovic2017graph} and edge-convolutions~\cite{wang2019dynamic} to produce a semantic visual representation for every frame, and (ii) an inter-frame aggregation module that subsequently uses these representations and updates them using information from temporal neighborhoods, thereby producing compact spatio-temporal visual memories. We then couple these memories with temporally aligned audio features. Next, multi-head Transformers~\cite{vaswani2017attention} encode each of the other data modalities (dialog history, captions, and the pertinent question) separately alongside these audio-visual memories and fuse them sequentially using Transformer decoders. These fused features are then used to select or generate the \emph{answers} auto-regressively. We also present a novel extension of the standard multi-head Transformer network in which the outputs of the heads are mixed. We call this variant a \emph{shuffled Transformer}. Such random shuffling avoids overfitting of the heads to their inputs, thereby regularizing them, leading to better generalization. To empirically evaluate our architecture, we present experiments on two variants of the AVSD dataset available as part of the 7th and 8th Dialog System Technology Challenges (DSTC). We provide experiments on both the answer generation and the answer selection tasks -- the former requiring the algorithm to produce free-form sentences as answers, while the latter selects an answer from 100 choices for each question. Our results reveal that using the proposed STSGR and our shuffled Transformer leads to significant improvements on both tasks against state-of-the-art methods on all metrics.
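As a concrete illustration of the head-mixing idea, the sketch below permutes which head's output lands in which slot before the usual concatenation and output projection. The exact mixing scheme of the shuffled Transformer is not pinned down in this passage, so this slot permutation is one plausible reading; the function names are ours.

```python
import random

def shuffle_heads(head_outputs, rng=None):
    """Randomly permute the per-head attention outputs before they are
    concatenated and passed to the output projection.  This permutation
    of head slots is an assumed reading of the shuffling operation."""
    rng = rng or random.Random()
    order = list(range(len(head_outputs)))
    rng.shuffle(order)
    return [head_outputs[i] for i in order]

def concat_heads(head_outputs):
    """Concatenate h per-head vectors of size d_head into one vector of
    size h * d_head, as in standard multi-head attention."""
    return [x for head in head_outputs for x in head]

# With h = 8 heads of dimension 64, concat_heads(shuffle_heads(...)) yields
# the 512-dimensional feature used elsewhere in the model.
```

Because shuffling only reorders (never drops) head outputs, the mixed tensor carries the same information while breaking any fixed head-to-slot association.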
The key contributions of this paper are: \begin{itemize} \itemsep0em \item We propose to represent videos as spatio-temporal scene graphs capturing key audio-visual cues and semantic structure. To the best of our knowledge, the combination of our intra/inter-frame reasoning modules is novel. \item We introduce a sequential Transformer architecture that uses shuffled multi-head attention, yielding question-aware representations of each modality while generating answers (or their embeddings) auto-regressively. \item Extensive experiments on the AVSD answer generation and selection tasks demonstrate the superiority of our approach over several challenging recent baselines. \end{itemize} \begin{figure*}[t!] \centering \includegraphics[width=16.6cm, height=6.7cm]{figs/stsgr_camera_ready.png} \caption{A schematic illustration of our overall pipeline for dialog response generation/retrieval.} \label{fig:model} \vspace*{-.35cm} \end{figure*} \section{Related Work} Our proposed framework has similarities with prior works along three different axes, viz. (i) graph-based reasoning, (ii) multi-modal attention, and (iii) visual dialog methods. \noindent \textbf{Scene Graphs:}~\cite{johnson2015image} combine objects detected in static images, their attributes, and object-object relationships~\cite{lu2016visual} to form a directed graph that not only provides an explicit and interpretable representation of the image, but is also seen to be beneficial for higher-order reasoning tasks such as image captioning~\cite{li2019know,yang2019auto}, and visual question answering~\cite{ghosh2019generating,norcliffe2018learning,geng20192nd,geng2020character}. There have been efforts~\cite{wang2018non,girdhar2019video,jain2016structural,herzig2019spatio,jang2017tgif,tsai2019video} at capturing spatio-temporal evolution of localized visual cues. In~\cite{Wang2018videos}, a space-time graph reasoning framework is proposed for action recognition. 
Similarly, the efficacy of manually-labeled video scene graphs is explored in~\cite{ji2019action}. Similar to ours, they use object detections per video frame, and construct a spatio-temporal graph based on the affinities of the features from the detected objects. Spatio-temporal graphs using knowledge distillation are explored for video captioning in~\cite{pan2020spatio}. In contrast, our task involves several diverse modalities, demanding richer architectural choices. Specifically, we present a hierarchically organized intra/inter-frame reasoning pipeline for generating visual memories, trained via neural message passing, offering a powerful inference engine. Our ablation studies demonstrate the usefulness of these modules. \noindent\textbf{Multi-modal Fusion/Attention:} has been explored in several prior works~\cite{hori2017attention,hori2018multimodal,hori2019end,shi2020multi}, however these works do not use the power of Transformers. Self-attention and feature embeddings using Transformers have been attempted in multi-modal settings~\cite{Gao_2019_CVPR,Gao_2019_ICCV,shi2020contrastive}, however only on static images. Bilinear fusion methods~\cite{ben2019block,fukui2016multimodal} have been explored towards inter-modality semantic alignment, however they often result in high-dimensional interaction tensors that are computationally expensive during inference. In contrast, our pipeline is the first to leverage the power of Transformers in a hierarchical graph reasoning setup for video dialogs and is cheap to compute. \noindent\textbf{Multi-modal Dialogs:} have been explored in various ways before. Free-form human-like answers were first considered in~\cite{das2017visual}, which also proposed the VisDial dataset, however on static images. A difficulty in designing algorithms on multi-modal data is in deriving effective attention mechanisms that can divulge information from disparate cues.
To tackle this challenge,~\cite{wu2018you} proposed a sequential co-attention scheme in which the neural embeddings of various modalities are co-attended with visual embeddings in a specific order. ~\cite{schwartz2019factor} generalized the co-attention problem by treating the modalities as nodes of a graph, aggregating them as \emph{factors}, using neural message passing. We use a combination of these two approaches; specifically we use Transformer encoders for embedding each modality, and attend on these multi-modal embeddings sequentially to generate the answer. Further, in contrast to~\cite{schwartz2019factor,wu2018you}, that tackle solely the answer generation problem, we consider the answer selection task on AVSD as well. ~\cite{yeh2019reactive} also proposed using Transformers~\cite{vaswani2017attention} for fusing audio-visual features on the AVSD task. A multi-step reasoning scheme is proposed in~\cite{gan2019multi} using joint attention via an RNN for generating a multi-modal representation. The Simple baseline~\cite{schwartz2019simple} extends factor graphs~\cite{schwartz2019factor} for the AVSD problem demonstrating promising results. A multi-modal Transformer for embedding various modalities and a query-aware attention is introduced in~\cite{le2019multimodal}. ~\cite{le2020video} fine-tunes pretrained GPT-2 to obtain improved performance. However, these works neither consider richer visual representations using scene graphs, nor variations of the standard Transformers, like the shuffling scheme we present.
\section{Introduction} \IEEEPARstart{D}{eveloping} vehicle component models and building physics-based vehicle models is a common approach among engineers in the automotive industry. Car manufacturers and researchers in the field have been able to develop a series of tools and simulation-based processes that evaluate the effects of advanced vehicle technologies on energy consumption. For more than two decades, Argonne National Laboratory has supported the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) Analysis Program by estimating the impact of new technologies on the energy consumption and cost of several thousand vehicles \cite{moawad_assessment_2016}, \cite{islam_extensive_2018}. To estimate the overall impact, the VTO's Analysis group sponsors different vehicle market penetration tools that rely on Argonne’s vehicle energy efficiency and cost estimates. Although vehicle energy models have been continuously developed and validated with test data, the uncertainty surrounding vehicle cost estimation has been increasing, with the latest studies being several years old \cite{ricardo_autonomie_2010}.\par Vehicle pricing depends directly on the vehicle's attributes, the powertrain-related components' power and size, and the materials used, as well as the manufacturing complexity, volume, manufacturer's reputation and marketing strategies. As the name suggests, the manufacturer's suggested retail price (MSRP) is the recommended selling price calculated by the manufacturer's financial experts in order to earn a competitive rate of return on its investments in technology. It covers not only direct costs, such as the costs of materials and labor, but also indirect costs, such as costs associated with R\&D, pensions and other employee benefits, warranties, advertising, and manufacturer and dealer profits. Thus, the MSRP is an appropriate measure for understanding vehicle price evolution over time as well as the distribution of vehicle price over technology.
It also reflects the price paid by consumers in competitive market conditions, which is relevant for evaluating the costs and benefits of fuel economy and the resulting market penetration impacts, and can also be used to calculate the per-vehicle cost increase of Corporate Average Fuel Economy (CAFE) rules.\par Previous methodologies tended to tackle vehicle price estimation with a bottom-up approach \cite{ricardo_autonomie_2010}, \cite{hill_improving_2016}, \cite{lutsey_efficiency_2017}. Essentially, a vehicle teardown analysis is performed based on a limited number of high-volume/high-sales vehicle data points, from which a series of technology cost curves are developed. These cost equations are then used in aggregation to estimate the total vehicle manufacturing cost, and a fixed retail price equivalent (RPE) methodology is used to mark up direct manufacturing costs to MSRP. \par When the model year (MY) 2012-2016 greenhouse gas and CAFE standards were developed for the 2011 Average Fuel Economy Standards Passenger Cars and Light Trucks Model Final Rule \cite{nhtsa_average_2009}, \cite{rogozhin_automobile_2009}, DOE used an RPE of 1.5 in conjunction with other indirect cost multipliers (ICMs), which resulted in an average markup of 1.25 \cite{whinihan_retail_2012}. Ricardo, an environmental consulting services company with which Argonne worked when developing the standards \cite{ricardo_autonomie_2010}, suggested that indirect cost ``must be contained within an external markup factor, either an RPE factor, typically 1.5, or an ICM, which varies from 1.02 to 1.45 depending on the technology complexity and time frame of interest.'' The National Research Council, on the other hand, found that an RPE of 2.0 is more adequate.
That being said, this flat approach has also been open to criticism as different levels of profitability across vehicle classes or product lines are recognized among cost analysts.\par Ricardo's work in supporting the development of component cost models for Argonne was based on transactional component prices from an independent supplier to a vehicle manufacturer and includes costs associated with the manufacture and development of the component, system integration costs, vehicle assembly costs, vehicle manufacturer and dealer selling expense, and margins. This effort led to the development of 10 technology module cost models: low voltage system, engine system, engine accessories, transmission system, vehicle drivetrain system, energy storage system, e-drive system, fuel cell system, hydraulic propulsion system, and glider system. Costs were valued in 2010 dollars.\par The authors are not aware of any recent publicly available study that attempts to update the acquired cost curves based on up-to-date vehicle data. In addition, there are concerns about the limited number of data points available to the entities developing the cost estimates. The collected vehicle data fails to reflect many of the novel technologies implemented today (e.g., cylinder deactivation or Skip-Fire engines, 10-speed transmissions, etc.). It is also important to consider the inherent interactions between vehicle components and their effect on vehicle price. For example, although individual components may have fixed manufacturing and labor costs, a combination of several advanced technologies can potentially be packaged by the manufacturer at a different price point not necessarily related to the cost of manufacturing. The studies described earlier fail to address this kind of interaction, as does the application of a fixed RPE estimate.
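The flat markup conventions discussed above reduce to a one-line calculation. The sketch below uses the RPE factors from the text (1.5 and 2.0); the \$20,000 of direct manufacturing cost is purely hypothetical:

```python
def msrp_from_direct_cost(direct_cost, rpe=1.5):
    """Mark up direct manufacturing cost to an MSRP estimate using a flat
    retail price equivalent (RPE) factor, as in the bottom-up studies
    discussed in the text."""
    return direct_cost * rpe

# The same hypothetical $20,000 of direct cost maps to very different price
# estimates depending on the markup convention chosen:
doe_estimate = msrp_from_direct_cost(20000, rpe=1.5)  # -> 30000.0
nrc_estimate = msrp_from_direct_cost(20000, rpe=2.0)  # -> 40000.0
```

The spread between the two estimates illustrates why a single flat RPE is contested: the markup factor alone swings the implied price by a third.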
Earlier studies also failed to address the issue of correlated features within a vehicle: The presence of a certain advanced technology in a sub-part of the vehicle increases the chances of including advanced technologies in other parts of the vehicle. For example, advanced turbo engines are likely to be found in vehicles with advanced transmissions with a high number of gears, and the higher the engine power of the vehicle, the less likely it is to find basic/elementary technologies or attributes in other aspects of the vehicle. Cost curves developed in isolation run the risk of misrepresenting the resulting aggregated total vehicle price. \section{Contribution} As described in the introduction, in general the basic method of cost estimation is to tear down technologies within a carefully selected series of vehicles and construct a bottom-up estimate by costing out material, labor and manufacturing processes. An alternative method is to acquire estimates of selling prices of manufactured components. Both methods are rather tedious and expensive, and they rely on a certain level of expertise on the part of the estimator (who carries some level of bias). In addition, many original equipment manufacturers (OEMs) manufacture their own components and maintain a strict level of confidentiality. Information about the prices of component parts acquired by third-party suppliers is also not easily accessible, although there have been several useful reports \cite{nrc_effectiveness_2002}, \cite{nescaf_reducing_2004}. Information can also be obtained from the confidential data manufacturers submit and share with governmental institutions or from discussions with OEMs or suppliers.
Other attempts have been made in the past to estimate component price by comparing the prices of vehicles with and without the technology or component of interest \cite{duleep_analysis_2008}.\par In this article, we propose to take a top-down approach, from vehicle retail price (MSRP) estimation using machine learning techniques down to component price attributional effects on MSRP, by leveraging game theory concepts of coalition and cooperation \cite{shapley_value_1953}. At the very least, the authors are seeking to popularize the use of a novel and delightful alternative methodology within the community and encourage all to improve upon it.\par In the following sections we detail the different efforts undertaken during this study. Significant efforts have been made to: \begin{itemize} \item Collect a large and reliable amount of vehicle data, with a detailed level of specification and technology breakdown. As a result, Argonne has exclusively developed an internal Vehicle Attribute Database (ArVAD) that includes more than 64,000 vehicles from MY1990 to MY2020. ArVAD contains hundreds of vehicle features: vehicle MSRP, color, front and rear seat details, vehicle measurement details, drivetrain information, fuel-related information, engine specs and technologies, power feature details (such as power or heated mirrors, remote keyless power, etc.), vehicle comfort details, instrumentation information, vehicle entertainment packages, tire and wheel specifications, suspension technologies, etc. \item Cluster the vehicles, to automate the categorization of vehicles into baseline, performance, luxury, and prestige\footnote{Unique and specially manufactured vehicles.} regardless of the manufacturer's name or reputation, the model, or the trim level. This clustering step has been found vital to vehicle price modeling, as it reduces model variance and increases model accuracy.
In fact, in a model based on powertrain attributes only, for example, price variability for vehicles with similar powertrain specifications can be large, depending on the manufacturer's car line category (standard vs. luxury) as well as non-powertrain-related specifications such as the presence of other advanced options. A classic example of this kind is the price discrepancy between some Honda and Acura vehicles when a very limited number of differences (if any) can be observed. \item Analyze outcome interpretability, and fine-tune several machine learning models, settling on CatBoost, a state-of-the-art algorithm for gradient boosting on decision trees \cite{prokhorenkova_catboost:_2019}, \cite{dorogush_catboost_2018}. \item Understand, interpret, and explain model outcomes and predictions and how they relate to the vehicle features/technologies input. This article will discuss and detail the use of a feature attribution method based on the computation of Shapley values, a method from coalitional game theory \cite{shapley_value_1953}. In particular, this article attempts to popularize a framework for optimal credit allocation and the explanation of individual predictions \cite{lundberg_unified_2017}, \cite{lundberg_explainable_2019}. \item Describe and analyze the marginal effects that vehicle components have on the total vehicle price. As a result of this analysis, the authors develop and suggest a ``non-equation'' based method for vehicle price estimation and component price attribution. We call the proposed methodology Shapley-based credit/penalty component pricing (SCP). We will show that this penalty approach can be used to assess the costs and benefits of fuel economy, including such activities performed for U.S. regulatory analysis. In particular, this novel methodology can help regulatory entities evaluate the incremental cost of increasing vehicle efficiency.
\end{itemize} \section{Purpose and Potential Beneficiaries} The primary, direct purpose of this research activity was to support the U.S. DOE VTO Analysis Program and explore a potential novel approach to update current vehicle and component price estimation methods involved in the various benefits analysis studies conducted. VTO relies on the Argonne-developed software environment Autonomie\footnote{Autonomie is a MATLAB-based software environment and framework for automotive control system design, simulation, and analysis. The tool is designed for rapid and easy integration of models with varying levels of detail and abstraction as well as processes. Developed by Argonne National Laboratory in collaboration with General Motors, Autonomie was designed to serve as a single tool that can be used to meet the requirements of automotive engineering throughout the development process, from modeling to control. Autonomie is used to evaluate the energy consumption and cost of advanced powertrain technologies. It has been validated for several powertrain configurations and vehicle classes using Argonne's Advanced Powertrain Research Facility vehicle test data. Autonomie is the primary vehicle simulation tool selected by U.S. DOE to support its U.S. Drive Program and Vehicle Technologies Office. It has been used in numerous studies to provide the U.S. government with guidance for future research. More than 175 companies and research entities, including major automotive companies and suppliers, use Autonomie to support their advanced vehicle development programs.} to handle vehicle energy and price estimation efforts that feed into subsequent market penetration tools. Other entities can benefit from the outcome of this work, particularly other governmental and regulatory entities that evaluate the incremental cost of increasing vehicle efficiency, or manufacturers that perhaps would like to advance their understanding of product and component pricing among competitors. 
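The coalitional game machinery underlying the SCP methodology introduced above can be illustrated with the textbook Shapley formula on a toy two-technology pricing game. Note the actual study approximates these values for a trained model using the SHAP framework of \cite{lundberg_unified_2017}; all dollar figures below are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: phi_i = sum over subsets S of
    |S|!(n-|S|-1)!/n! * (v(S u {i}) - v(S)).  This is the textbook
    definition behind the SCP idea; v maps a frozenset coalition to its
    value."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (v(s | {i}) - v(s))
        phi[i] = total
    return phi

# Toy game with hypothetical dollar figures: turbocharging alone adds
# $2,000 to the price, an advanced transmission alone $1,500, and the
# pair is packaged for $4,000 (a $500 synergy).
toy = {frozenset(): 0.0,
       frozenset({"turbo"}): 2000.0,
       frozenset({"transmission"}): 1500.0,
       frozenset({"turbo", "transmission"}): 4000.0}
attribution = shapley_values(["turbo", "transmission"], lambda s: toy[s])
# The $500 synergy is split evenly: turbo -> 2250.0, transmission -> 1750.0
```

By construction the attributions sum to the full coalition's value, which is exactly the property that lets component-level credits/penalties be reconciled with the total vehicle price.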
\section{Collection Process and Data} This project takes a data-driven approach, and therefore its success depends on the richness and quality of the data in hand. For that reason, Argonne has expended significant work to develop an internal vehicle attribute database by leveraging web-scraping techniques to collect publicly available data. The research team focused especially on developing a general, automated data collection and web-scraping process to collect vehicle data. The process allows researchers to efficiently crawl the web by deploying a web spider that targets car and OEM websites. The web-scraping framework contains four modules that control the process: \begin{enumerate} \item \textbf{The spider module.} Defines what we want to extract from the web page of interest. \item \textbf{The request/response module.} Handles the request sent to the website and the content of that request through the injection of custom headers and assignment of proxies, then manages the download of the data received from the website response. \item \textbf{The processing module.} Takes care of cleaning the data, removing duplication, and storing it in the appropriate form and data structure. \item \textbf{The manager module.} Responsible for preserving operation orders and the priorities of scheduled requests. It coordinates among all the pieces for consistent operation while accounting for website response delays, lagging, and multiple simultaneous requests. \end{enumerate} Images, vehicle specifications, and other publicly available information (including vehicle MSRP) are fetched and stored in a non-relational database (MongoDB), resulting in an exhaustive dataset that can be used to build a precise vehicle MSRP estimation model.\par Argonne completed several data processing steps in building the database: \begin{itemize} \item \textbf{Cleaning.} Data have been checked for missing values and inconsistencies.
\item \textbf{Integration.} Data from various sources have been successfully integrated into a large dataset. \item \textbf{Modification.} Outliers have been identified and fixed using cross-references of the different sources and imputation methods available. \item \textbf{Transformation and feature engineering.} Several additional calculated fields were created. \item \textbf{Analysis and interpretation.} Several rounds of data analysis were performed. \end{itemize} The database contains an extensive list of vehicle features: power and energy specifications, drivetrain information, measurements, instrumentation, interior and exterior options, entertainment components (such as sound systems/speakers, screens, and other things that can affect vehicle pricing), and detailed information about tires and wheel specifications (type, width, aspect ratio, diameter, load index, speed rating, etc.). VTO's objective is to construct a model in which the MSRP estimation is driven primarily by powertrain components rather than luxury features. However, to reduce model variance and uncertainty, some non-powertrain features are included in the modeling, and basic/standard attributes will be used as input for predictions to reflect Autonomie's standard/average vehicle segments.\par The dataset currently includes some 64,000 vehicles, from 1990 to 2020,\footnote{As of 12/2019. Web crawling is performed on a monthly basis to update the database with newly appearing models.} of various makes, models, and trim levels with hundreds of variables/specs. The data exhaustively cover many vehicle manufacturers from 1990 to the present (figure \ref{fig:chord}), and we note a generally uniform distribution of makes over the years for large, established OEMs. Some newer companies, such as Tesla, will need special treatment during modeling due to the unique technologies they exhibit in terms of powertrain type, specs, and other attributes (e.g., electric powertrain, navigation systems, etc.).
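The spider and processing modules of the scraping framework described above can be sketched in a few lines of plain Python. The page layout, CSS class names, and spec values below are entirely hypothetical and stand in for whatever a real OEM page uses:

```python
from html.parser import HTMLParser

class SpecSpider(HTMLParser):
    """Spider-module sketch: declares what to extract from a vehicle page.
    The <td class="spec-name">/<td class="spec-value"> layout is assumed
    for illustration only."""
    def __init__(self):
        super().__init__()
        self._field = None   # which cell type we are currently inside
        self._key = None     # last spec name seen
        self.record = {}     # extracted name -> value pairs
    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "td" and cls in ("spec-name", "spec-value"):
            self._field = cls
    def handle_data(self, data):
        if self._field == "spec-name":
            self._key = data.strip()
        elif self._field == "spec-value" and self._key:
            self.record[self._key] = data.strip()
            self._key = None
        self._field = None

def deduplicate(records):
    """Processing-module sketch: drop exact duplicate records, keeping the
    first occurrence of each."""
    seen, out = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

page = """<table>
<tr><td class="spec-name">Engine Power (hp)</td><td class="spec-value">252</td></tr>
<tr><td class="spec-name">MSRP</td><td class="spec-value">$34,000</td></tr>
</table>"""
spider = SpecSpider()
spider.feed(page)  # spider.record now maps spec names to values
```

In the real pipeline the request/response and manager modules would feed pages to such a spider and pass the cleaned records on to MongoDB; both are omitted here.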
At this writing, the data collected on MY2020 vehicles were still limited, and many models were not yet released. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{chord.jpg} \caption{Chord plot showing count relationship of vehicle make and vehicle year.} \label{fig:chord} \end{figure} About 152 variables have been selected for analysis. A variable selection study has been performed to carefully select features, assess their importance, understand their degrees of correlation with MSRP, and assess the explanatory power of each variable. Note that the dataset displays a mix of variable types: Some variables are numeric (e.g., engine power), others are categorical (e.g., transmission type), or more specifically Boolean (e.g., engine has turbocharging technology: T/F).\par Figures \ref{fig:snapshot} and \ref{fig:corr} show a glimpse of the underlying distributions and existing correlations for some of the variables. We note that there is an increase in the number of models appearing every year. Several other interesting and not unexpected facts arise: The vehicle MSRP distribution has a clear heavy right tail, with most vehicle prices in the \$0-100,000 range. The mean and median for this distribution are respectively $\sim$\$34,000 and \$29,000, and the data exhibit quite a large vehicle price variance as well. Engine power shows an apparently multimodal distribution; this information, coupled with vehicle curb weight, gives us an idea of the different clusters of power density values existing in the data, and this, along with vehicle dimensions, can be used as a proxy for vehicle classification. The next section will discuss the creation of vehicle clusters to reduce model variance during the modeling phase. Finally, we note from the correlogram certain groups of variables with strong positive or negative correlations. For example, engine power and acceleration are strongly positively correlated.
\begin{figure}[!t] \centering \includegraphics[width=2.5in]{snapshot} \caption{Snapshot of distribution exhibited by some of the variables in the data.} \label{fig:snapshot} \end{figure} Keeping in mind that the purpose of the current modeling is to ``model'' vehicle prices and extract component price values as well, we considered, as part of the variable selection process, the removal of systemic non-causal variables. The causal impact of certain variables has therefore been carefully investigated. For example, acceleration (vehicle performance) is a causal descendant of other system-related variables (power, weight, etc.), and consequently its correlation with MSRP can be largely explained by those parent variables. \begin{figure*}[!t] \centering \subfloat[Correlation heatmap of numerical features.]{\includegraphics[width=2.5in]{corr}% \label{fig:corra}} \hfil \subfloat[Subset (zoom)]{\includegraphics[width=2.5in]{corrsub}% \label{fig:corrb}} \caption{Correlogram.} \label{fig:corr} \end{figure*} \section{Make and Model Agnostic Clustering} As noted above, DOE is interested in estimating vehicle segments related to the baseline segment. This is in line with Autonomie practice and its vehicle models, which represent the average market vehicle for each powertrain. To segregate base, luxury, performance, and prestige vehicles for proper modeling without knowledge of the make, model or trim level, a clustering approach is needed.\par Several clustering algorithms were considered, but the \textit{interpretable} hierarchical clustering method gave good results. The hierarchical clustering method groups data points using a bottom-up (agglomerative) approach based on selected features as a measure of similarity.
The agglomerative approach in hierarchical clustering is an important and well-established technique in unsupervised machine learning, where the clustering algorithm starts from singleton nodes (vehicles) and iteratively merges pairs based on mutual closeness. The process is repeated until all vehicles have been aggregated into one mega-cluster. Throughout the process, the merging blueprint is recorded and later revealed in the form of a dendrogram, from which we have the flexibility to select an adequate number of clusters, segregating the vehicles according to our needs. This clustering approach requires careful selection of a distance metric as well as a measure of inter-cluster dissimilarity. For more detail, there is extensive literature on the subject \cite{murtagh_survey_1983}, \cite{reddy_survey_2018}.\par In our setting, the main assumption driving our clustering is that vehicles of comparable size, performance, and other carefully selected specifications (e.g., vehicle weight, wheel radius) should be comparable in price, and therefore should be clustered together. The effect of this assumption is that inter-cluster vehicles with significant price differences represent different car lines (e.g., luxury).\par The advantage of the hierarchical clustering method is the ability to visualize the resulting tree-based division using a dendrogram to facilitate interpretation. In addition, there is some theoretical support for choosing an optimal number of clusters, a task that is always difficult in unsupervised clustering algorithms. Figure \ref{fig:clustering} shows the resulting clustering projected onto a three-dimensional space. This is achieved by using the t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction technique (top).
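The agglomerative procedure just described can be sketched in a few lines of pure Python using complete linkage. This is a didactic illustration on raw index sets only; the actual analysis additionally records the full merge history to produce the dendrogram.

```python
def agglomerative(points, k):
    """Toy agglomerative clustering with complete linkage: start from
    singleton clusters and merge the closest pair until k clusters remain.
    A sketch of the approach, not the production implementation."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def linkage(c1, c2):
        # Complete linkage: the largest pairwise distance between clusters.
        return max(dist(points[i], points[j]) for i in c1 for j in c2)

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = linkage(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return clusters
```

Running this to completion (\texttt{k=1}) while logging each merge yields exactly the blueprint that a dendrogram visualizes.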
t-SNE is considered the current state-of-the-art dimension reduction technique that can produce a low-dimensional representation of high-dimensional data while preserving local distance structures \cite{maaten_visualizing_2008}, \cite{van_der_maaten_accelerating_2014}. This visualization allows us to cross-check the behavior of the resulting clustering. In fact, the projection shown provides clues about the interpretation of the results. The yellow axes describe the authors' best guess of the clustering interpretation after a quite extensive analysis. The green cluster is separated due to a clear differentiation in vehicle dimension specifications. The red cluster seems to represent luxury car lines, while the black cluster suggests baseline vehicles. A few vehicles, in light and dark blue, distinctly belong to more prestigious categories.\par \begin{figure}[!t] \centering \includegraphics[width=2.5in]{clustering} \caption{Vehicle clusters (Clust): t-SNE 3D projection (top) and a per-class interpretation against the associated MSRP (bottom).} \label{fig:clustering} \end{figure} The bottom plot in figure \ref{fig:clustering} identifies clusters by vehicle class, which reveals additional details. Within each class, there is a clear separation between base and luxury vehicles (cluster 1 versus cluster 2). Interestingly, a third cluster, for larger vehicles (pickups), emerges. No within-class discrimination is apparent for pickups, which supports the fact that all pickups usually belong to one car line (a clearly small variance in MSRP for pickups is also seen). Clusters 4 and 5 represent the most expensive vehicles, which are eliminated from the analysis and dataset; they can be considered exceptional outlier vehicles skewing the data (roughly above \$250,000). Vehicles from clusters 1 and 3 are combined into one to represent base vehicles ($\sim$\$0–\$80,000) of all class types. Cluster 2 represents the luxury car line ($\sim$\$30,000–\$240,000).
This method allows a soft price margin for vehicle segment segregation, so there is an overlap.\par As noted earlier, the clustering preparation provides additional information to the modeling phase to reduce variance and increase explainability. \section{Vehicle Price Model} The vehicle price modeling approach taken in this work falls into a typical discriminative supervised learning setting. Given a dataset $\mathscr{D}=\{(\boldsymbol{X_i},y_i)\}_{i=1,\ldots n}$ of $n$ pairs of examples, each consisting of a vector of explanatory variables $\boldsymbol{X_i} \in \mathbb{R}^m$ and a response or output variable $y$, we want to learn a function $f:\boldsymbol{X} \mapsto y$ that can predict $y^*$ for new, unobserved, or future inputs $\boldsymbol{X^*}$. In the following, $\boldsymbol{X}$ will refer to a carefully selected set of vehicle attributes, chosen for their explanatory power, as described in the previous sections, but also conforming to engineering sense and the domain knowledge of the authors. The variable $y$ will refer to the vehicle price output (MSRP).\par While it is assumed that the data $\mathscr{D}$ are sampled from some unknown distribution $p(\boldsymbol{X},y)$, we are not concerned with learning the distribution from available data. In the following we will detail how $f \in \mathscr{F}$ is chosen from the function space $\mathscr{F}$ of decision trees ― the hypothesis space ― to minimize $\mathbb{E}_{(X,y) \sim p}L(f(X),y)$ for the typical squared loss $L$. \subsection{Catboost Model} The Catboost model is a state-of-the-art machine learning model, based on gradient boosting, with a novel and successful handling of categorical features. Gradient boosting on decision trees is very popular for problems with heterogeneous features in tabular form.
Those algorithms are designed to achieve competitive results in the presence of complex, noisy, and highly feature-dependent data \cite{chen_xgboost_2016}, \cite{ke_lightgbm_2017}.\par The Catboost algorithm has the advantage of overcoming categorical data pre-processing, which typically involves some form of naive transformation of the data. One-hot encoding, i.e., adding a binary feature as an indicator for each category, is one approach \cite{miccibarreca_preprocessing_2001}, but high cardinality leads to infeasible processing and training. Other approaches have been considered to limit the number of features generated, such as grouping categories by target statistics (TS) \cite{miccibarreca_preprocessing_2001}, which estimate the target expected value in each category. That is, if we are given a dataset $\mathscr{D}=\{(\boldsymbol{X_i},y_i)\}_{i=1,\ldots n}$, where $\boldsymbol{X_i}=(x_{i,1},\ldots,x_{i,m}) \in \mathbb{R}^m$ is a vector of $m$ features, possibly categorical, with $y_i \in \mathbb{R}$, then $x_{i,k}$ is substituted by: $$\hat x_{i,k}=\frac{\sum_{j=1}^n \mathbb{1}_{\{x_{j,k}=x_{i,k}\}}\cdot y_j}{\sum_{j=1}^n \mathbb{1}_{\{x_{j,k}=x_{i,k}\}}}$$ Other approaches convert categorical variables into gradient numerical statistics \cite{noauthor_features_nodate}.\par The estimation just described can be noisy, especially for low-frequency categories. Catboost instead performs a random permutation of the dataset and, for each example, calculates the average target value over the examples with the same categorical value that are placed before it in the permutation. That is, if $\sigma = (\sigma_1, \ldots, \sigma_n)$ is a permutation, then $x_{\sigma_p,k}$ is replaced with: $$\hat x_{\sigma_p,k}=\frac{\sum_{j=1}^{p-1} \mathbb{1}_{\{x_{\sigma_j,k}=x_{\sigma_p,k}\}}\cdot y_{\sigma_j}+a\cdot P}{\sum_{j=1}^{p-1} \mathbb{1}_{\{x_{\sigma_j,k}=x_{\sigma_p,k}\}}+a}$$ where $P$ is a prior value and $a$ a weight parameter imposed on the prior value \cite{cestnik_estimating_1990}.
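The ordered target-statistic formula above can be sketched for a single categorical column as follows. The function name and signature are illustrative only, not Catboost's API; each example is encoded using only the targets of same-category examples that precede it in a random permutation, smoothed toward the prior $P$ with weight $a$.

```python
import random

def ordered_target_statistics(categories, targets, prior, a=1.0, seed=0):
    """Sketch of Catboost-style ordered target statistics for one
    categorical column. Each value is encoded using only the targets of
    preceding examples (in a random permutation) that share its category,
    smoothed toward `prior` with weight `a`."""
    rng = random.Random(seed)
    order = list(range(len(categories)))
    rng.shuffle(order)  # the random permutation sigma
    encoded = [0.0] * len(categories)
    for pos, idx in enumerate(order):
        prev = order[:pos]  # examples placed before idx in the permutation
        same = [j for j in prev if categories[j] == categories[idx]]
        num = sum(targets[j] for j in same) + a * prior
        den = len(same) + a
        encoded[idx] = num / den
    return encoded
```

Note that an example with no same-category predecessors falls back to the prior, so no target value ever leaks into its own encoding.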
\par The $P$ value can simply be set as the average response value of the dataset. This smoothing manipulation allows Catboost to overcome overfitting problems, and it also allows the use of the whole dataset for training in an efficient online manner. The introduction of random permutations for calculating target statistics is a strategy against target leakage, in which a new feature $x_{i,k}$ is computed using its own target $y_i$; it also overcomes the conditional shift \cite{zhang_domain_2013} that arises when the distribution of $\boldsymbol{X_i}|y$ in the training set differs from that in the test set. This is a typical source of learner generalization error that Catboost addresses innovatively.\par Gradient boosting models assume that the data $\mathscr{D}$ are sampled from some unknown distribution $p(\boldsymbol{X},y)$. Given a loss function $L:\mathbb{R}^2 \rightarrow \mathbb{R}_+$, the goal is to find a function $F: X \rightarrow \mathbb{R}$ that minimizes the empirical risk: $$\mathscr{L(F)}=\sum_{i=1}^n L(F(\boldsymbol{X_i}),y_i)$$ such that: $$F(\boldsymbol{X})=\sum_{k=1}^t \alpha f_k(\boldsymbol{X})$$ where $t$ is the number of iterations and $\alpha$ is a step size (learning rate). Typically, for best results $f_k \in \mathscr{F}$ is chosen from the space $\mathscr{F}$ of decision tree functions \cite{breiman_classification_1984}, \cite{friedman_additive_2000}.
In other words, each of the $t$ functions $f_k$ is an independent tree structure separating the feature space $\mathbb{R}^m$ into several disjoint regions based on the value of a splitting feature.\footnote{Catboost makes use of oblivious trees \cite{langley_oblivious_1994}, \cite{kohavi_oblivious_1995}, \cite{ferov_enhancing_2016}, \cite{lou_bdt_2017}} Those $f_k$ functions are called base or weak learners and are learned sequentially by constructing the sequence $f_1,\ldots,f_t$ such that $$f_t = \argmin_{f \in \mathscr{F}} \mathscr{L}(F_{t-1}+f)=\argmin_{f \in \mathscr{F}} \sum_{i=1}^n L(F_{t-1}(\boldsymbol{X_i})+f(\boldsymbol{X_i}),y_i)$$ There are several ways to approach this optimization problem. Some are based on first-order derivative calculations of $\mathscr{L(F)}$ at point $F_{t-1}$ and use the gradient as the step of minimization in a gradient descent type of optimization setting (i.e., fitting the weak learner to the negative gradient by least squares): \begin{equation}\label{eq:gb} f_t = \argmin_{f \in \mathscr{F}} \sum_{i=1}^n \Big(f(\boldsymbol{X_i})+\frac{\partial L(\hat y_i,y_i)}{\partial y_i}\Bigr|_{\substack{\hat y_i=F_{t-1}(\boldsymbol{X_i})}}\Big)^2 \end{equation} We have presented only a quick description of the most basic gradient-based models.
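The sequential least-squares fit in (\ref{eq:gb}) can be made concrete with a toy one-dimensional boosting loop over decision stumps. For the squared loss, the negative gradient is simply the current residual, so each stump is fit to the residuals of the ensemble so far. This is a didactic sketch of generic gradient boosting, not the Catboost algorithm.

```python
def fit_stump(xs, residuals):
    """Best single-split decision stump (1-D) minimizing squared error.
    Assumes xs contains at least two distinct values."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    return best[1:]  # (threshold, left mean, right mean)

def gradient_boost(xs, ys, rounds=50, lr=0.1):
    """Least-squares gradient boosting: each stump is fit to the negative
    gradient of the squared loss, i.e. the current residuals."""
    pred = [0.0] * len(xs)
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]  # -dL/dF for squared loss
        thr, lm, rm = fit_stump(xs, resid)
        pred = [p + lr * (lm if x <= thr else rm)
                for p, x in zip(pred, xs)]
    return pred
```

With a small learning rate \texttt{lr}, the residuals shrink geometrically round by round, which is the behavior the shrinkage parameter $\alpha$ controls in the formulas above.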
Additional details can be found in other references, e.g., \cite{friedman_greedy_2001}, in which the learning objective can be regularized to treat overfitting problems, and stochastic gradient boosting can be applied \cite{friedman_stochastic_2002} to improve the quality of the learner.\par Catboost builds on this gradient boosting framework; however, it integrates several tricks to address an apparent weakness in gradient estimation \cite{breiman_outbag_1996}, \cite{breiman_using_2001}: The quantity in (\ref{eq:gb}) is biased due to the bias of the point-wise gradient estimates, because the conditional distribution of the gradient $\frac{\partial L(\hat y_i,y_i)}{\partial y_i}\Bigr|_{\substack{\hat y_i=F_{t-1}(\boldsymbol{X_i})}}$ for a given $\boldsymbol{X_i}$ is shifted from that of a test set. To overcome this problem, Catboost proposes an ordered boosting algorithm, detailed in \cite{prokhorenkova_catboost:_2019} and \cite{dorogush_catboost_2018}, that does not suffer from prediction shift.\par \subsection{Model Performance} It is worth mentioning that before settling on the Catboost algorithm, we tested several other ensemble learning models: AdaBoost, XgBoost, LightGBM, and Random Forest. We also used and analyzed standard (multilayer perceptron) neural networks, support vector machine regression models, and Bayesian networks \cite{fernandez_extension_2008} and assessed them for out-of-sample performance. Simpler, interpretable linear-regression-based models (fine-tuned with added complexity, interactions, and regularization) have also been developed for optimal prediction performance while attempting to preserve explainability. Careful hyperparameter tuning and configuration selection (when applicable) with distributed grid search has been carried out throughout each exercise.
In the end, Catboost outperformed all the models considered.\par Catboost training was performed on NVIDIA TITAN Xp and NVIDIA Quadro P2000 graphics processing units (GPUs). The training time was on the order of several hours ($\sim 10^3$ minutes). The final ensemble model consists of $\sim 4000$ trees of depth 4, i.e., allowing for fourth-order interactions. All model parameters and hyperparameters were tuned in line with nested cross-validation methods for training and testing, using a validation set of vehicles separate from the test set for final performance assessment. The data was randomly split into five 80/20 folds for training and testing. Out-of-sample prediction performance was evaluated on all outer test sets. An inner loop then addresses, for each training set, an 80/20 split for hyperparameter tuning and calibration. For result stability, we bootstrapped over 100 iterations.\par The model performance summary in figure \ref{fig:perf} shows a comprehensive recap of some of the five-fold cross-validated performance metrics. \begin{figure}[!t] \centering \subfloat[Root mean squared error, mean absolute percentage error, explained variance and residuals of averaged five-fold cross-validation.]{\includegraphics[width=2in]{perf}% \label{fig:perf}} \\ \subfloat[Prediction vs. true vehicle MSRP for \$0-100,000 vehicles.]{\includegraphics[width=2.5in]{pred}% \label{fig:pred}} \caption{Model Performance.} \end{figure} The root mean squared error (RMSE) $:= \sqrt{\frac{1}{n}\sum_i (y_i-\hat y_i)^2}$ shows an average vehicle MSRP prediction error of a little less than \$1,000, corresponding to a 2.2\% average error computed by the mean absolute percentage error (MAPE) $:= \frac{1}{n}\sum_i |\frac{y_i-\hat y_i}{y_i}|$ on the test sets. It can be said that vehicle MSRPs are predicted with very reasonable precision. The normal shape of the residuals, centered at zero, suggests that most vehicle price predictions are within a few hundred dollars of the MSRP.
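For reference, the two headline metrics can be computed directly from their definitions above; this is a straightforward sketch, not tied to any particular evaluation library.

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: sqrt of mean squared residual."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def mape(y_true, y_pred):
    """Mean absolute percentage error (as a fraction, e.g. 0.022 = 2.2%)."""
    n = len(y_true)
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
```

Because MAPE normalizes by the true price, it is the more interpretable of the two across the wide MSRP range in the data, while RMSE penalizes large absolute misses on expensive vehicles more heavily.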
Figure \ref{fig:pred} shows the predicted vs. actual vehicle price for \$0-100,000 vehicles, where there is a notably thin cloud of prediction points around the line $y=x$, hence the high $R^2$ value. \subsection{Residual Analysis} From an engineering perspective, uniform prediction accuracy is required. For generalization purposes, the modeling ignores manufacturer specificity. As a result, it is necessary to ensure that the resulting model accuracy is not biased towards a certain group, type, or class of vehicle.\par We produced a series of residual plots that analyze the behavior of the prediction error for a selected attribute of interest. We paid special attention to the distributions of residuals over manufacturers, vehicle type, and classification. Also, because of the temporal dependency present in the data, we tested for correlated errors through a combination of plots of successive pairs of residuals and Durbin-Watson statistic tests. We also considered more complex dependencies, although these are not likely in the vehicle pricing setting: no short or long time series runs of residuals were identified. Normality was checked, and residual behavior against specifically selected predictor variables was inspected carefully. For example, we confirmed homoscedasticity of the residuals over vehicle engine power and weight to guarantee that the model performs well across a wide range of vehicle sizes and performance levels. Overall, we ensured that the residual properties analysts would want to see in such diagnostics show satisfactory behavior, suggesting a good overall fit. \section{Vehicle Component Price Estimation} As we have shown, the Catboost model developed has excellent prediction accuracy, but unfortunately it is very opaque due to the complexity of the underlying gradient-boosting-based structure.
Catboost, like many other complex machine learning models, is very flexible, accounting for many layers of nonlinear and interactional structures. This makes statistical inference, a crucial requirement for the current application, challenging. For the purpose of extracting component price estimates from the predicted total vehicle price values, a certain level of model interpretability is required. Given the model output $f(x_1,\ldots,x_m)$, one would want to quantify the extent to which each $x_j$ is responsible for the output value. In other words, with such a complex model, the challenge is to find a way to account for how the input features relate to the predictions without loss of model accuracy. Much good work has been done on the inverse exercise, which attempts to build simpler (although sometimes complex and sophisticated) but carefully designed models that enable us to explain as well as estimate the effect that each of the input components has on the response. Typically, for example, in linear regression models, model coefficients describe how a unit change in an explanatory variable affects the model response, while holding other variables constant ― a sometimes impossible task.\par Other methods employ post-hoc model-agnostic interpretation methods, such as the partial dependence plots (PDP) proposed in \cite{friedman_greedy_2001} or individual conditional expectation (ICE) \cite{goldstein_peeking_2013}, to explain complex models, but they can produce misleading results. PDP and ICE can give biased results when high degrees of feature codependence exist, a very common situation, and interactional behaviors are not well captured or quantified.\par We turned to a game theory method to quantify to what extent each component contributes to vehicle price prediction and to retrieve individual component pricing.
\subsection{Shapley Method} \label{sec:shapmeth} A promising recent contribution to interpretable machine learning has emerged for proper feature attribution in non-linear complex settings \cite{lundberg_unified_2017}, \cite{lundberg_explainable_2019}, \cite{lundberg_consistent_2019}. The work presented here is based on coalitional game theory methods using the computation of Shapley values \cite{shapley_value_1953}. The basic idea was originally developed by the economist Lloyd Shapley while he was working on the problem of fairly allocating credit among players in a game of cooperating players. The method has been adapted from the original purpose ― fairly allocating credit for the outcome of a game to collaborating players who may have contributed unequally ― to the purpose of fairly allocating credit to the features of a model for the output of that model. The Shapley approach has the advantage of strong theoretical support ensuring a fair feature attribution and consequently a fair distribution of the total prediction value among the features and their individual contributions.\par The explanation of complex models via Shapley values starts by defining a class of additive feature attribution methods that will be used as a surrogate explanation model for the original one.
If $f$ is the original prediction model and $g$ is an explanation model, then an additive feature attribution model is a linear function of binary variables of the form: $$g(z')=\phi_0+\sum_{i=1}^M \phi_i z_i'$$ where $M$ is the number of input features, $z'\in \{0,1\}^M$ indicates whether each feature is observed ($z_i'=1$) or unobserved ($z_i'=0$), and $\phi_i \in \mathbb{R}$ are the feature attribution values.\par Given a model prediction $f(x)$, by assigning a feature mapping function $h_x(z')$ that maps binary inputs to the original feature space such that $x=h_x(z')$, we can evaluate $f(h_x(z'))$, calculate the effect of observing or not observing a feature, and seek to enforce $f(x) = f(h_x(z')) \approx g(z')$ through a special selection of $\phi_i$. This is one obvious desirable property, requiring that the explanation model output match the original model output. Shapley, through his work, described other desirable properties constraining the space of solutions for $\phi_i$: \begin{itemize} \item \textbf{Local accuracy/additivity/efficiency.} The sum of the feature attributions needs to match the original model output. \item \textbf{Missingness.} If a feature is missing, it receives zero attribution. \item \textbf{Consistency/monotonicity.} For two different models $f_1$ and $f_2$ in the same feature space, if the contribution of a feature $i$ increases for $f_2$ vs. $f_1$, then the attribution given to feature $i$ should not decrease for $f_2$. \item \textbf{Symmetry.} If $i$ and $j$ are two features that contribute equally, their attributions should be equal. \item \textbf{Linearity.} The attribution of the sum of two functions $f_1$ and $f_2$ expands to the sum of the attributions for each of the two functions. \end{itemize} Those mathematically axiomatized properties (see \cite{lundberg_unified_2017} for details) describe a \emph{fairness} context of attribution. \par Let $S \subseteq \mathcal{M}= \{1,\ldots,M\}$ be a subset of non-zero indexes.
By defining $f_x(S) = f(h_x(z')) = \mathbb{E}[f(x)|\boldsymbol{\operatorname{do}}(x_S)]$, the only set of values \cite{shapley_value_1953}, \cite{young_monotonic_1985} for the explanation model satisfying the above properties can be proven to be: \begin{equation}\label{eq:shap} \phi_i(f,x)=\sum_{S \subseteq \mathcal{M} \backslash \{i\}} \frac{|S|!(M-|S|-1)!}{M!}\Big[f_x(S \cup \{i\})-f_x(S)\Big] \end{equation} The above quantity represents a form of weighted average of the assigned attributions, calculated from the difference in model evaluations with and without the feature of interest, over all possible subsets of features $S$.\par There has been some confusion in the literature over the proper evaluation function to be used to compute the feature contribution from the model \cite{aas_explaining_2019}, \cite{sundararajan_many_2019}, \cite{janzing_feature_2019}. This confusion is due to ambiguity about which probability distribution the unconditioned variables should be averaged over, i.e., $\mathbb{E}[f(x)|x_S]$ vs. $\mathbb{E}_{x_{\bar{S}}}[f(x)]$. At first glance, as the minimizer of the squared loss, the former seems an appropriate and commonly used estimator, since the conditional expectation summarizes the whole probability distribution. However, several carefully designed counter-examples can be constructed (see \cite{sundararajan_many_2019}) to show that if the conditional expectation is used as the basis for calculating Shapley values, then $\phi_i \neq 0 \centernot\implies f \hbox{ depends on } x_i$, which violates the contrapositive of the missingness property described earlier: if feature $i$ is missing, it must receive an attribution of zero. In other words, if a feature exhibits no information with respect to the total outcome, it should not be influential.
In this paper, we chose the computation of the marginal expectation rather than the conditional; hence the presence of Pearl's $\boldsymbol{\operatorname{do}}$ operator from causal inference calculus \cite{pearl_causality_2000}. Therefore the remaining variables $x_{\bar{S}}$ are left untouched and are sampled from their natural distribution with no conditioning, as follows: $$\mathbb{E}[f(x)|\boldsymbol{\operatorname{do}}(x_S)]=\mathbb{E}_{x_{\bar{S}}}[f(x)]=\int \mathbb{E}[f(x)|x_S,x_{\bar{S}}] d\mathbb{P}(x_{\bar{S}})$$ where we note that by \cite{pearl_causality_2000}: $$\mathbb{P}[f(x)|\boldsymbol{\operatorname{do}}(x_S)]=\int \mathbb{P}[f(x)|x_S,x_{\bar{S}}] d\mathbb{P}(x_{\bar{S}})$$ denotes the distribution of $f(x)$ under intervention on $X_s=x_s$.\par For clarity, given the graph structure shown in figure \ref{fig:sampling}, to evaluate the influence of a feature $X_1$ on the output $f(x)$ after observing $X_1=x_1$, we sample from the joint distribution of the remaining feature variables $\mathbb{P}_{X_2,\ldots,X_M}$. \begin{figure}[!t] \centering \tikz{ \tikzstyle{main}=[circle, minimum size = 10mm, thick, draw =black!80,node distance = 16mm] \node[main,fill = black!10] (Y) {$Y$};% \node[main,above=of Y,xshift=-3cm,fill = black!10] (X1) {$\boldsymbol{x_1}$}; % \node[main,above=of Y,xshift=-1.5cm,fill = black!10] (X2) {$X_2$}; % \node[main,above=of Y,xshift=0cm,fill = black!10] (X3) {$X_3$}; % \node[main,above=of Y,xshift=1.5cm,fill = black!10] (X4) {...}; % \node[main,above=of Y,xshift=3cm,fill = black!10] (XM) {$X_M$}; % \edge {X1,X2,X3,X4,XM} {Y}} \caption{To assess the influence of the intervention $X_1=x_1$, the unaffected variables are sampled from their joint distribution $\mathbb{P}_{X_2,\ldots,X_M}$, following the rules of causal inference.} \label{fig:sampling} \end{figure} It is clear from expression (\ref{eq:shap}) that the summation contains $\mathcal{O}(2^M)$ terms, far too many to evaluate completely.
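For small $M$, equation (\ref{eq:shap}) can be computed exactly by brute force, which makes the combinatorial weighting concrete. In the sketch below the marginal expectation is crudely approximated by a single baseline vector rather than an average over $\mathbb{P}(x_{\bar S})$; the function is illustrative, not the implementation used in this work.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution for a model f over M features.
    O(2^M) model evaluations, so only feasible for small M. Features
    outside the coalition S are set to a single baseline value, a crude
    stand-in for averaging over their marginal distribution."""
    M = len(x)

    def f_S(S):
        # Evaluate f with features in S observed, the rest at baseline.
        z = [x[i] if i in S else baseline[i] for i in range(M)]
        return f(z)

    phi = [0.0] * M
    for i in range(M):
        others = [j for j in range(M) if j != i]
        for size in range(M):
            for S in combinations(others, size):
                # Combinatorial weight |S|!(M-|S|-1)!/M! from eq. (shap).
                w = (factorial(len(S)) * factorial(M - len(S) - 1)
                     / factorial(M))
                phi[i] += w * (f_S(set(S) | {i}) - f_S(set(S)))
    return phi
```

By the local accuracy property, the attributions sum to $f(x)-f(\text{baseline})$, which can be used as a sanity check on any implementation.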
As the number of features $M$ increases, the number of possible subsets increases exponentially. Computational efficiency is therefore critical for feasible and timely generation of attribution values. Lundberg \cite{lundberg_consistent_2019} managed to apply a series of tricks and derive an algorithm for tree ensemble structures that reduces the complexity of exact computation of Shapley values from $\mathcal{O}(TL2^M)$ to $\mathcal{O}(TLD^2)$, where $T$ is the number of trees considered in the ensemble model, $L$ is the number of leaves, and $D$ is the depth of the trees.\par Figure \ref{fig:shapwalk} shows a single-path walk-through of how attribution values are retrieved from successive model inquiries. The Shapley method ensures fair $\phi_i$ values for all $i$ by considering all possible combinations of sequences and orders. Shapley values are computed in relation to the reference baseline $\mathbb{E}[f(x)]$, which is assigned the attribution $\phi_0$. Therefore, the sum of the remaining attributions $i=\{1,\ldots,M\}$ captures the difference between the baseline value and the prediction. In our particular setting, Shapley values represent the change in price a certain component causes from the reference average price value of all the vehicles in the market (assuming the database is exhaustive). Although this is certainly valuable information, as we will show later, the true value of this exercise emerges when selected vehicles are compared with one another and component prices can be recovered. Also, attribution values can be aggregated over all vehicles with a focus on one component at a time, so that component dependencies and their relationship to price can be better understood. \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{shapwalk} \caption{A single path walk-through of how attribution values are retrieved from successive model inquiries to sum to the model output prediction $f(x)$.
The Shapley solution ensures that attributions are computed from an averaging over $M!$ possible orderings so that component boosting effects and interactions are taken into account.} \label{fig:shapwalk} \end{figure*} This theoretically supported and fair feature attribution method gives us the ability to better understand the contribution of each component to the vehicle's price. The next section presents a series of analyses at the vehicle level, where a single vehicle output can be broken down by giving each component a contribution to the outcome (figure \ref{fig:shapbox}), and shows how aggregation is achieved to get vehicle component prices from the total MSRP. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{shapbox} \caption{Example of how a vehicle output can be broken down by giving each component a contribution to the outcome.} \label{fig:shapbox} \end{figure} \subsection{Results} In this section, we show a series of component price estimation examples leveraging the Shapley attribution method. At the unique vehicle level, figure \ref{fig:oneveh} presents an example of vehicle MSRP prediction with the contribution values of individual components towards the price. As explained earlier, Shapley values are computed in relation to a reference baseline vehicle represented by the market average vehicle, although this hypothetical vehicle is not necessarily useful in itself. \begin{figure}[!t] \centering \includegraphics[width=3in]{oneveh} \caption{Example of vehicle MSRP prediction with the contribution values of individual components towards the pricing for the 2019 Honda Civic LX 4dr Sedan 2.0L 4 cyl CVT. The true MSRP is \$20,350; the predicted value is \$20,717.} \label{fig:oneveh} \end{figure} Figure \ref{fig:twoveh} shows a comparison of two trim levels of the same make and model vehicle for the same year (2019 Honda Civic).
This direct trim level comparison allows us to better understand and quantify the components involved in the price difference. In this particular example, the higher trim level includes additional features that explain the price difference, and the additional price by component is computed through Shapley attribution. The turbo engine technology in the advanced trim vehicle explains an additional $\approx \$ 1500$ compared to the base trim level with no turbo technology, the alloy wheels contribute $\approx \$ 850$ compared to steel wheels in the base trim level, and so on. \begin{figure}[!t] \centering \includegraphics[width=3in]{twoveh} \caption{Example of trim level comparison and price difference explanation at the component level for two 2019 Honda Civic trim levels. The True MSRP for the EX-L 4dr Sedan trim is \$24,700, the predicted value is \$25,368.} \label{fig:twoveh} \end{figure} Increasing the number of vehicles in the comparison allows us to better understand the effect of some key vehicle components on pricing. Figure \ref{fig:manyveh} shows several trim levels of the same vehicle for two classes of vehicle (compact and SUV). The set of Honda Civic vehicles represents a typical compact class vehicle, while the Toyota Highlander represents a typical SUV class vehicle. We first note on the Civic graph that trim levels branch out in price with the inclusion of certain technologies. For example, the base trim level is the only trim that has a 6-speed manual transmission, while the others have continuously variable transmission (CVT) technology. This is clearly presented in the graph at the level of the transmission type, where all the slopes are parallel except for one that indicates a decrease in pricing. On the other side, SUV trims seem to branch out for different reasons: the drivetrain type (all-wheel drive, 4-wheel drive, etc.) has a big impact on price, and the low engine power of the base trim level seems to significantly decrease the price. 
It is worth noting as well that component technologies do not have the same effect on the two classes presented. For example, the vehicle height has a positive price impact on compact class vehicles (represented here by the Honda Civic) while the SUVs (represented here by the Toyota Highlander) show the reverse. These effects are in comparison to the hypothetical reference and clearly depend on the value of the component feature. \begin{figure}[!t] \centering \subfloat[Comparison of all trim levels of 2019 Honda Civic predictions. ]{\includegraphics[width=3in]{manyveh1}% \label{fig:manyveh1}} \\ \subfloat[Comparison of all trim levels of 2019 Toyota Highlander predictions.]{\includegraphics[width=3in]{manyveh2}% \label{fig:manyveh2}} \caption{Example of several trim level comparisons and how component technologies affect the prediction path.} \label{fig:manyveh} \end{figure} Through the computation of Shapley attribution values for all the vehicles, and because every vehicle has a distinct attribution value for each of its components, we can aggregate over all vehicles, focus on one component at a time, and understand at a global level the overall effect that components have on prices. In figure \ref{fig:shapagg}, we show how individual component dependency plots can extract component price relationships by looking at the attributed Shapley value against the value of the feature of interest. This relationship shows how a feature attribution changes as the feature value varies. The left plot shows the total engine power dependency (the sum of the marginal and interactional effects), while the plots on the right decompose it into the marginal effect and the second-order interactions that engine power has with other components.\par Retrieving the interactional effect gives valuable additional insights. We first recognize the complexity involved in component pricing with and without the presence of other specific components. 
Pricing is clearly performed in a ``packaged'' way, and this approach allows us to reverse-engineer the pricing strategies involved in this exercise. The variance displayed in the dependence plot for a given vertical slice is explained by the complex levels of interactions. To be clear, for example, a turbo system may be given a different price tag for a minivan than for a performance car, or the price of a navigation system may be different if bundled with other advanced options than if purchased by itself. The marginal effect plot in the top right corner shows some vertical dispersion that accounts for beyond-second-order interactions.\par Shapley interaction values are computed as follows: $$\phi_{i,j} = \sum_{S \subseteq \mathcal{M} \backslash \{i,j\}} \frac{|S|!(M-|S|-2)!}{2(M-1)!} \nabla_{ij}(S)$$ where $\nabla_{ij}(S) = f_x(S \cup \{i,j\})-f_x(S \cup \{i\})-f_x(S \cup \{j\})+f_x(S)$, the interaction is divided equally between features $i$ and $j$, and $\phi_{i,j}=\phi_{j,i}$. The total interaction is given by $\phi_{i,j}+\phi_{j,i}$. The marginal effect can be extracted through $\phi_{i,i}=\phi_{i}-\sum_{j \neq i} \phi_{i,j}$, where we also note by additivity that $\sum_i \sum_j \phi_{i,j}= f(x)$. More details can be found in \cite{fujimoto_axiomatic_2006}. \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{shapagg} \caption{Dependency plot of the relationship between engine power attributed pricing and the feature value (left). The main/marginal effect (top right) removes the second order interactional effects. Interactional effects (bottom right) provide information about how vertical separation occurs due to strong interactions; here, engine power price attribution interacts quite strongly with the presence of technology options in the vehicle (Bluetooth, navigation systems).} \label{fig:shapagg} \end{figure*} We present the total effect dependencies of some top influential features in a series of plots in figure \ref{fig:series}. 
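The interaction formula above can likewise be checked by brute force on a toy pricing function (a hypothetical sketch; the feature names and prices are invented, and a real analysis would use the polynomial-time tree algorithm rather than this exponential enumeration):

```python
from itertools import combinations, permutations
from math import factorial

# Toy pricing function with one explicit interaction (turbo x nav).
# Hypothetical illustration only.
def price(turbo, leather, nav):
    p = 20000.0 + 1500.0 * turbo + 800.0 * leather + 600.0 * nav
    if turbo and nav:
        p += 200.0  # bundled technology package
    return p

FEATURES = ["turbo", "leather", "nav"]
M = len(FEATURES)
X = {f: 1 for f in FEATURES}          # vehicle being explained
BASELINE = {f: 0 for f in FEATURES}   # reference "average" vehicle

def f_S(present):
    """Model evaluated with features in `present` at X's values,
    the rest held at the baseline."""
    return price(**{f: (X[f] if f in present else BASELINE[f])
                    for f in FEATURES})

def phi_pair(i, j):
    """Shapley interaction value phi_{i,j} from the formula above."""
    rest = [f for f in FEATURES if f not in (i, j)]
    total = 0.0
    for r in range(len(rest) + 1):
        for S in combinations(rest, r):
            S = set(S)
            w = (factorial(len(S)) * factorial(M - len(S) - 2)
                 / (2.0 * factorial(M - 1)))
            total += w * (f_S(S | {i, j}) - f_S(S | {i})
                          - f_S(S | {j}) + f_S(S))
    return total

def phi_single(i):
    """Plain Shapley value of feature i, averaged over orderings."""
    acc = 0.0
    orders = list(permutations(FEATURES))
    for order in orders:
        present = set()
        for f in order:
            if f == i:
                acc += f_S(present | {f}) - f_S(present)
            present.add(f)
    return acc / len(orders)

phi_tn = phi_pair("turbo", "nav")       # half of the $200 bundle effect
phi_tl = phi_pair("turbo", "leather")   # no interaction -> 0
# Marginal (main) effect: phi_{i,i} = phi_i - sum_{j != i} phi_{i,j}
phi_tt = phi_single("turbo") - (phi_tn + phi_tl)
```

The total turbo--navigation interaction $\phi_{i,j}+\phi_{j,i}$ recovers the \$200 bundle term, and subtracting the pairwise interactions from the plain Shapley value leaves the \$1500 standalone turbo contribution as the main effect.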
The marginal and interactional effect plots are omitted for a more concise analysis. We found strong non-linear dependencies (in vehicle curb weight in \ref{fig:a}, vehicle model year in \ref{fig:b}, vehicle height in \ref{fig:d}) and quite complex dependencies (vehicle length in \ref{fig:e}, and vehicle width in \ref{fig:f}), while some components, like the effect of wheel diameter on price, could reasonably be approximated with a linear relationship (\ref{fig:c}). However, the presence of large vertical dispersion reveals the complex interactional effects involved in the pricing. For example, figure \ref{fig:b} shows the effect of the year on vehicle price, where we note a clear distinction in the trend between vehicles of larger curb weights vs. smaller curb weights. Heavier (ergo larger) vehicles seem to exhibit a sharper and more aggressive price increase over the years. In figure \ref{fig:a}, the curves of the curb weight--price relationship show a separation between vehicles with and without Bluetooth. We underline that Bluetooth may not be the causal factor for this separation, as we explained in the causal graph in figure \ref{fig:sampling}, due to variable codependencies. The colors highlight the strongest computed interacting features. \begin{figure*}[!t] \centering \subfloat[Curb Weight (lbs)]{\includegraphics[width=0.33\textwidth]{Curb_Wght}% \label{fig:a}} \hfil \subfloat[Model Year]{\includegraphics[width=0.33\textwidth]{Year}% \label{fig:b}} \hfil \subfloat[Wheel Diameter (in.)]{\includegraphics[width=0.33\textwidth]{WheelDiam}% \label{fig:c}} \\ \subfloat[Veh. Height (in.)]{\includegraphics[width=0.33\textwidth]{Veh_Height}% \label{fig:d}} \hfil \subfloat[Veh. Length (in.)]{\includegraphics[width=0.33\textwidth]{Veh_Width}% \label{fig:e}} \hfil \subfloat[Veh. 
Width (in.)]{\includegraphics[width=0.33\textwidth]{Veh_Length}% \label{fig:f}} \caption{Dependency plots for a selected subset of key influential vehicle features.} \label{fig:series} \end{figure*} We also observe other interesting trends and values for other components, such as the effect on pricing of the number of transmission gears, the type of wheels, tire characteristics, cylinder deactivation or other advanced engine technologies (direct injection, variable valve timing, variable valve lift, etc.), front seat material, etc. For a succinct paper, we omit them from the current analysis, and we plan to provide further analysis in a separate paper in the future.\par \subsubsection*{Engine Displacement} Certain component price trends require special attention in their interpretation. In figure \ref{fig:engsize}, we consider the case of engine displacement's dependency on pricing. A remarkable, perhaps not so surprising, relationship emerges where we note an overall downward price trend with augmented engine size (displacement in liters). A marginal increase in engine size has the effect of reducing price. By "marginal" we mean that other components are controlled for, and therefore the pure isolated effect of an increased engine size is contributing to the downward tendency. In figure \ref{fig:aa}, we show the total effect of the engine size feature's value on price, and the large vertical dispersion due to high levels of interactions obscures the small movements in the trend. In figure \ref{fig:bb}, the true effect is shown when second order interaction levels are removed for a clearer picture. We observe a more detailed change in direction at several key engine size levels, particularly at around 1.8-2 L, at 3.5-4 L and $\sim$ 5 L. From 1 L to 1.8-2 L engines, the tendency is upward; that is, an increase in engine size does contribute to an increase in the technology price. 
In this case of small cars, the higher price is possibly justified by the better fuel economy seen in these vehicles. However, starting from 1.8-2 L to 3.5-4 L we see a decrease in price. The higher price of smaller engines in this portion of the graph can be explained by the fact that the reduction in engine size is made possible by better engine technology. Turbocharged 4-cylinder engines have replaced 6-cylinder engines in many applications. While the displacement of the turbocharged engines is smaller than the naturally aspirated ones, the use of additional components for turbocharging will increase the price of the engine. Assuming the power output needed from the engine remains the same, getting it from a smaller displacement requires technologies such as turbocharging or higher compression ratios, which are all likely to cost more than the base engine. The larger engines in that list are all likely to be older, naturally aspirated engines in minivans or pickups. In the case of minivans, people may not pay for more power or displacement; hence the wiggling of the pricing curve between 4 L and 6 L engines. For pickups, the turbocharged engines with a lower displacement and higher power can command a higher price than a larger naturally aspirated engine. \begin{figure}[!t] \centering \subfloat[Total effect: engine displacement (L). ]{\includegraphics[width=3in]{Eng_Size}% \label{fig:aa}} \\ \subfloat[Main effect: engine displacement (L).]{\includegraphics[width=3in]{Eng_SizeEng_Size}% \label{fig:bb}} \caption{Total effect (a) and main effect (b) of engine size (in liters) value on pricing.} \label{fig:engsize} \end{figure} Individual technology prices can be assessed and studied over time. For example, figure \ref{fig:turbotime} shows the effect of time on turbo technology pricing, where after a period of slight increase (around the late-2000s economic recession), there appears to be a drop in turbo pricing. 
This is a clear example of how component prices evolve as they become more popular. Figure \ref{fig:navtime} gives another non-powertrain example of such behavior: navigation system price evolution. We observe here fairly stable pricing since 2010, with perhaps a slight, steady, but almost indistinguishable decrease. Analogous examples can be presented to follow pricing changes over time when a certain technology (e.g., a rear camera) is made compulsory through regulation (not shown here). \begin{figure}[!t] \centering \includegraphics[width=3.5in]{turbotime} \caption{Effect of time on specific component pricing: turbo technology} \label{fig:turbotime} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.5in]{navtime} \caption{Effect of time on specific component pricing: navigation system} \label{fig:navtime} \end{figure} In a similar fashion, the effect of vehicle class on component prices can be assessed. Figure \ref{fig:turboclass} shows clear evidence of how turbo pricing is affected by the vehicle segment. We note an overpricing of the technology for trucks and vans, while minivans display the lowest price, manifestly due to a type of customer who is not necessarily seeking efficiency and is not willing to pay for it. On the flip side, vans are typically used as the primary carrier for delivery operations. In this category, the benefit of turbocharging is worthwhile and in high demand and is therefore priced differently. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{turboclass} \caption{Effect of class on specific component pricing: turbo technology} \label{fig:turboclass} \end{figure} \section{Influential Vehicle Features} The Shapley values can be used to identify the importance of features to the model output (i.e., to the vehicle price prediction). Features have the most impact when the change in the model output is greatly affected by the feature value. 
For linear models $f(x)=x^T\beta$, the coefficients of the covariates provide some clues. In a typical setting, the importance of a feature is given in a global form; that is, the importance is measured by looking at its effect on model error. For example, the permutation feature importance method consists of measuring the model prediction error after permuting the values of a specific feature. The feature whose permutation most degrades the model accuracy is then attributed a high importance: a global measure of how the model reliability depends on that feature. Conversely, a feature is not important if the permutation did not affect the model error (see \cite{statistics_random_2001} or \cite{fisher_all_2019} for an exploration of other methods).\par Alternatively, because of its natural local property, the attribution provided by the Shapley derivation gives an individualized feature importance measure for each prediction and each feature. Their aggregation can ultimately provide an equivalent global importance measure, but the natural decomposition produces a richer view of importance. In fact, typical feature importance plots are bar charts showing the general effect a feature has on the prediction, while the Shapley approach, endowed with localizable importance values, delivers higher resolution plots. In addition, Shapley solutions ensure consistency in the sense stated above in section \ref{sec:shapmeth}. \begin{figure}[!t] \centering \subfloat[]{\includegraphics[width=3in]{summary}% \label{fig:summary}} \\ \subfloat[]{\includegraphics[width=3in]{summarybar}% \label{fig:summarybar}} \caption{(a) Individual (one dot per vehicle) Shapley attribution values for a subset of features of the Catboost MSRP predictive model. High Shapley values mean a high price attribution, which depends on the feature value shown by the color code. 
(b) Standard feature importance bar chart.} \end{figure} Figure \ref{fig:summary} shows the individual Shapley attribution values for a subset of features of the Catboost MSRP predictive model. High Shapley values mean a high price attribution, which depends on the feature value shown by the color code. The plot gives a high resolution feel for feature importance, as each dot is a vehicle feature attribution value. In this type of plotting, the amplitude provides a general idea of the overall distribution over Shapley values that each feature has. The features are ordered by order of importance by summing over the $N$ vehicle examples $j$, i.e., $\frac{1}{N}\sum_{j=1}^N |\phi_i(f,x_j)|$ for each feature $i \in \mathcal{M}$. Figure \ref{fig:summarybar} shows the standard feature importance bar chart computed by the formula just described, which measures global importance through summation over all vehicles. For example, with the vehicle curb weight topping the list, the plot shows that the vehicle curb weight is the most influential variable affecting vehicle pricing. The higher the Shapley value, the bigger the contribution to the total price the curb weight has, and from the colors we see clearly how the higher vehicle curb weight feature value increases the pricing (unsurprisingly). The large variance also provides information on the spread of the vehicle curb weights in the dataset, and the density shows how common each is. We see roughly five density groupings, most likely corresponding to the five main standard U.S. Environmental Protection Agency (EPA) vehicle classifications.\par It is worth noting that no causal conclusion can be drawn from the above analysis, and that vehicle curb weight increase or decrease is not causing vehicle price to change. Vehicle curb weight as encoded in the current model should be interpreted as a proxy for other latent parent variables. 
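The importance ordering used in these plots reduces to a short computation once per-vehicle attribution values are available. A sketch with made-up attribution numbers (not values from the actual model):

```python
# Ranking features by global importance as the mean absolute Shapley
# value over N examples, i.e., (1/N) * sum_j |phi_i(f, x_j)| per
# feature i. The phi values below are illustrative, not model output.
shap_values = {
    "curb_weight":  [3200.0, -1100.0, 2500.0],
    "engine_power": [900.0, 1400.0, -700.0],
    "bluetooth":    [150.0, -120.0, 130.0],
}

def global_importance(phi_by_feature):
    """Return (feature, mean |phi|) pairs sorted by importance."""
    n = len(next(iter(phi_by_feature.values())))
    scores = {f: sum(abs(v) for v in vals) / n
              for f, vals in phi_by_feature.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = global_importance(shap_values)
```

Here curb weight tops the ranking, mirroring the ordering of the summary plots; unlike a bar chart, the underlying per-vehicle values are retained and can still be plotted individually.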
Observing one of those variables will change the distribution over curb weight due to the dependencies and therefore its overall influence. In other words, this is a property of the built system and model, but not of the external world. \section{Discussion} \subsection{Methodology Implementation} Current vehicle pricing methods rely on fixed equations to calculate each component cost or technological incremental cost and, ultimately, vehicle manufacturing cost. The vehicle's MSRP is then computed using a constant 1.5 multiplier for retail price equivalent (RPE). In reality, as we showed, OEMs have different margin levels based on the vehicle class, vehicle technology, or other criteria.\par In this paper we proposed an alternative no-teardown top-down approach for component and vehicle price estimation, but improvement upon the proposed work and an extension of its application is possible. Promising new studies can build on the Argonne vehicle attribute database and the developed technique. In particular, further studies can be performed to estimate the increased efficiency per unit of price (\$/mpg) to increase the reliability of overall VTO benefits. This opens the door to deriving \$/mile estimates at the vehicle technology level and also deriving component-level \$/mile estimates to explore the tradeoffs between more efficient vehicle technologies (powertrain level or component) and the added price. Connecting those estimates with sales data will enable an understanding of technology's value to the customer.\par This novel proposed methodology shows some advantage over current Autonomie vehicle pricing methods. Profiting from this novel methodology for future VTO-related benefits analysis efforts would require significant Autonomie process changes. 
There are two options for direct implementation of this novel methodology into the Autonomie framework: \begin{enumerate} \item \textbf{Equation based.} Preserve the current Autonomie methodology and derive parametric equations (or non-parametric relationships, e.g., kernel smoother methods) for each component and implement independent component prices at the MSRP level (including direct and indirect costs). There will be no need for post-hoc RPE or ICM adjustment. However, due to the high degree of interactional effects, this approach is not recommended. \item \textbf{Shapley-based credit/penalty component pricing.} This approach would rely on the use of the predictive model to estimate vehicle price and then generate the Shapley values to extract a price contribution for each component. Starting from a baseline vehicle and component value, a price credit/penalty is applied via the Shapley attributed score: through the complexity of interactional effects, the price of a component will be dependent on the presence of other vehicle components and their feature values. This approach is closest to what has been observed in the data, and therefore will provide individually tailored pricing, and hence more accuracy. No post-hoc RPE or ICM adjustment will be needed. \end{enumerate} \subsection{Expert Evaluation} In addition to the traditional data-driven model validation, the authors attempted to compare the resulting Shapley attribution component price values with existing literature. While the comparison and validation exercise was fruitful and encouraging, literature-available component \textit{cost} (not price) data is at the manufacturing cost level, so the difficulty of a fair comparison was threefold: \begin{enumerate} \item Component cost is mapped to component pricing through often unreliable RPE and ICM adjustments. \item Component cost at the manufacturing level fails to account for interactions and component packaging. 
This can dramatically affect final pricing. \item Component cost values usually neglect to differentiate costing of components by vehicle size, class or powertrain. \end{enumerate} The authors performed component price \emph{pseudo} validation in an honest attempt, but were limited as to data availability, data comparability, and knowledge of the field of vehicle/component pricing and the marketing strategies involved. The results of the validation process will be published in a separate article in the future. Meanwhile, we plan to engage in further literature investigation, complete additional analysis and comparisons, attempt to gather more market level component data, reach out to marketing and financial experts, present current method and results outcomes to stakeholders, and, as a result, produce a more comprehensive, engineer-based, validation. We also encourage interested parties and experts to reach out, adopt the methodology and the resulting outcomes and provide suggestions or directions. \subsection{Model Improvement} While efforts have been made in the vehicle price modeling to reach $\sim~\$1000$ of average prediction error (equivalent to an average error of 2.2\% of predicted vehicle price), more work can be done towards model error improvement. From the current top-down approach suggested, total vehicle price estimation is used as a basis from which to derive component level prices, i.e., component prices may be affected by vehicles with low prediction accuracy, especially for low-price components. In this spirit, we are encouraged to maintain and continue the modeling exercise. \subsection{Uncertainty Estimation} The current suggested approach relies on the fair decomposition of a total vehicle price onto the different component parts using additive feature attribution methods. While the method has certain theoretical guarantees for fairness and optimality, it does not address the uncertainty implicit in the method's outputs. 
We suggest further investigations of attributional outcome uncertainties, i.e., introducing confidence intervals to quantify the uncertainty in estimated attributions. In return, on the one hand, this will allow us to better quantify how confident one should be about a certain attributed component price for a particular vehicle, and, on the other hand, this will also allow us to exclude or identify uncertain decompositions when deriving global component trends (overall \$/technology feature value). \section{Data and Code Availability} The data used in this study was retrieved from several sources of publicly available data. After a substantial effort of data collection process development, data cleaning, data integration and data analysis, the resulting processed \emph{aggregated} data are Argonne property. The database is managed by a MongoDB database management system. The code implementation of the web-scraping process, the clustering, the vehicle price modeling, the Shapley attribution and the data analysis are done in a combination of R, Python, Tableau and Gephi software. For questions and inquiries please contact Ayman Moawad amoawad@anl.gov. \section*{Acknowledgment} The authors would like to acknowledge the financial support of Jacob Ward and Heather Croteau (Office of Vehicle Technologies, U.S. Department of Energy) for this work. The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (Argonne). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. 
Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government.\par The views and opinions of the authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} Production software packages have become increasingly complex. They comprise large amounts of source code, sophisticated control and data flow, hierarchies of component libraries, and growing levels of abstraction. This complexity often introduces inefficiencies across the software stacks, leading to resource wastage, performance degradation, and energy dissipation~\cite{Molyneaux:2009:AAP:1550832, Bryant:2010:CSP:1841497}. Such inefficiencies usually take the form of useless or redundant operations, such as computations whose results may not be used~\cite{Butts:2002:DDD:605397.605419, journals/jilp/SengT05}, re-computation of already computed values~\cite{RVN}, unnecessary data movement~\cite{chabbi2012deadspy, 854389, 6557169, Marin-sweep3d, redspy}, and excessive synchronization~\cite{Chabbi:2015:BEP:2688500.2688502, Tallent:2010:ALC:1837853.1693489}. The sources of these inefficiencies are many: rigid abstraction boundaries, missed opportunities to optimize common cases, suboptimal algorithm design, inappropriate data structure selection, and poor compiler code generation. There is a long history of compiler optimizations aimed at statically analyzing and eliminating redundant operations with techniques such as common sub-expression elimination~\cite{deitz2001eliminating}, value numbering~\cite{gvn}, and constant propagation~\cite{Wegman:1991:CPC:103135.103136}, to name a few. However, compilers have a myopic view of the program, which limits their analysis to a small scope---individual functions or files. Layers of abstractions, dynamically loaded libraries, multi-lingual components, aggregate types, aliasing, sophisticated flows of control, input- and path-specific redundancies, and the combinatorial explosion of execution paths make it practically impossible for compilers to obtain a holistic view of an application to eliminate all redundancies. 
Link-time optimization~\cite{Fernandez:1995:SEL:207110.207121} can offer better visibility; however, the analysis is still conservative and may err on the side of being less exhaustive to reduce prohibitive analysis cost. Whole-program link-time optimizations~\cite{Johnson:2017:TSI:3049832.3049845, citeulike:481261} have provided less than 5\% average speedup, although a lot more headroom exists, as we show in our work. Thus, despite their best efforts, compilers often fall short of eliminating runtime inefficiencies. Execution profiling aims to understand the runtime behavior of a program. Performance analysis tools such as HPCToolkit~\cite{adhianto2010hpctoolkit}, VTune~\cite{vtune}, perf~\cite{perf}, gprof~\cite{Graham-etal:1982:PLDI-gprof}, OProfile~\cite{Levon:OProfile}, and CrayPAT~\cite{DeRose-etal:2008:CrayPAT} monitor code execution to identify hot code regions, idle CPU cycles, arithmetic intensity, cache misses, and the like. These tools can recognize the utilization (saturation or underutilization) of hardware resources, but they cannot inform whether a resource is being used in a \emph{fruitful} manner that contributes to the overall efficiency of a program. A hotspot need not mean inefficient code, and conversely, the lack of a hotspot need not mean better code. Coarse-grained profilers usually cannot distinguish efficient vs. inefficient code; for example, they cannot identify that repeated memory loads of the same value or result-equivalent computations waste both memory bandwidth and processor functional units. \emph{Whole-program fine-grained monitoring} is a means to monitor execution at microscopic detail: it monitors each binary instruction instance, including its operator, operands, and runtime values in registers and memory. A key advantage of microscopic program-wide monitoring is that it can identify redundancies irrespective of the user-level program abstractions. 
Prior work~\cite{chabbi2012deadspy, RVN, redspy, toddler} has shown that fine-grained profiling techniques can identify many forms of software inefficiencies and offer detailed guidance to tune code. Existing fine-grained profilers pinpoint inefficiencies in a subset of individual operations, such as operations with symbolic equivalence~\cite{RVN}, dead memory stores~\cite{chabbi2012deadspy}, and operations writing the same values to target registers or memory locations~\cite{redspy}. They have, however, overlooked an important category, \emph{temporal load redundancy}---loading the same value from the same memory location. For instance, the code on the left of Listing~\ref{lst:example} shows redundant operations that are invisible to existing fine-grained profilers. In this code, suppose all the scalars are in registers and the vectors are in memory. Because there are no ``dead store'' operations (a store followed by another store to the same location without an intervening load), DeadSpy~\cite{chabbi2012deadspy} does not identify any inefficiency. Since the values written to $t$ and $delta$ always change, RedSpy~\cite{redspy} does not report any ``silent store'' operations~\cite{854389}. Finally, since there is no symbolically equivalent computation, RVN~\cite{RVN} does not report any inefficiency. Furthermore, because the optimization involves a mathematically equivalent transformation, as shown on the right of Listing~\ref{lst:example}, it is difficult to achieve with other compiler techniques such as polyhedral optimization~\cite{POP2006GRAPHITE}. 
\begin{figure} \begin{minipage}[t]{.49\linewidth} \begin{lstlisting}[firstnumber=1,language=c]
while (t < threshold) {
  t = 0;
  for(i = 0; i < N; i++)
@$\blacktriangleright$@   t += A[i] + B[i]*delta;
  delta -= 0.1 * t;
}
\end{lstlisting} \end{minipage}\hfill \begin{minipage}[t]{.47\linewidth} \begin{lstlisting}[firstnumber=1,language=c]
for (i = 0; i < N; i++) {
  a += A[i];
  b += B[i];
}
while (t < threshold) {
  t = a + b * delta;
  delta -= 0.1 * t;
}
\end{lstlisting} \end{minipage} \vspace{-0.3in} \captionof{lstlisting}{An example code (on the left) with temporal inefficiencies that cannot be identified by existing fine-grained profilers. Because arrays $A$ and $B$ are immutable in the loop nest, computing on these loop invariants introduces many redundancies. One can hoist the redundant computation outside of the loop (on the right) for optimization.} \vspace{-0.1in} \label{lst:example} \end{figure} The code on the left of Listing~\ref{lst:exampleSpatial} shows another kind of load redundancy, which loads the same value from \emph{nearby} memory locations. Even though each element of array $A$ is loaded only once, adjacent elements with the same values result in loading the same value and subsequent redundant computation. We refer to this type of redundancy as \emph{spatial load redundancy}. As a practical example, a sparse matrix in a dense format can yield many spatial load redundancies. 
\begin{figure}
\begin{minipage}[t]{.48\linewidth}
\begin{lstlisting}[firstnumber=1,language=c]
int A[N] = {1, 1, 1, 15};
for(i = 0; i < N; i++) {
@$\blacktriangleright$@    t += func(A[i]);
}
\end{lstlisting}
\end{minipage}\hfill
\begin{minipage}[t]{.48\linewidth}
\begin{lstlisting}[firstnumber=1,language=c]
int A[N] = {1, 1, 1, 15};
a = func(A[0]);
t += a;
for(i = 1; i < N; i++) {
  if (A[i] != A[i-1])
    a = func(A[i]);
  t += a;
}
\end{lstlisting}
\end{minipage}
\vspace{-0.3in}
\captionof{lstlisting}{An example code (on the left) with spatial inefficiencies that cannot be identified by existing fine-grained profilers. The load redundancy happens at the marked line, where the program reads the same value from nearby memory locations since some adjacent elements of array $A$ have the same value. Such redundancy further results in redundant computation inside the function $func$, because $func$ always returns the same value for the same input. One can check whether adjacent elements of array $A$ are equal to eliminate the redundant computation (on the right): if they are equal, one can reuse the return value of $func$ from the previous iteration.
}
\vspace{-.2in}
\label{lst:exampleSpatial}
\end{figure}
Listings~\ref{lst:example} and~\ref{lst:exampleSpatial} show the tip of the iceberg of the inefficiencies we target in this paper to complement existing tools. From our observation, a variety of inefficiencies exhibit \emph{substantial} redundant loads; conversely, the presence of a large fraction of redundant loads in an execution is a symptom of some kind of inefficiency \emph{in the code regions} that exhibit such redundancy. Furthermore, the subsequent operations based on redundant loads are potentially redundant. We have designed and implemented a developer tool---\loadspy{}---aimed at profiling an execution and quantifying load redundancy in the execution.
\loadspy{} highlights precise source code in its full calling contexts and the two parties involved in a redundant load. Additionally, \loadspy{} narrows down the investigation scope to help developers focus on the provenance of inefficiencies. A thorough evaluation on a suite of benchmarks and real-world applications shows that looking for redundant loads in a program offers an easy avenue for performance enhancement in many programs. In this paper~\footnote{This is the full version of our ICSE paper~\cite{loadspy}.}, we make the following contributions: \begin{itemize}[leftmargin=*] \item Show that redundant loads are a common indicator of various forms of software inefficiencies. This finding serves as the foundation of \loadspy{}. \item Describe the design of \loadspy{}---a whole-program fine-grained profiler to pinpoint redundant loads. \item Develop strategies for analyzing a large volume of profiling data by attributing redundancy to runtime contexts, objects, and scopes. \item Enable rich visualization for a large volume of profiling data coming from different threads/processes with a user-friendly GUI, which improves the usability for non-experts. \item Apply \loadspy{} to pinpoint inefficiencies in well-known benchmarks and real-world applications that were the subjects of study and optimization for years, and eliminate \loadspy{}-found inefficiencies by avoiding redundant loads, which yields nontrivial speedups. \end{itemize} \begin{comment} We organize this paper as follows: \S~\ref{sec:related} shows related work and distinguishes our work. \S~\ref{sec:motivation} describes our findings of inefficiencies in various forms with a common symptom---load redundancy. \S~\ref{sec:methodology} and \S~\ref{sec:implementation} describe the design and implementation details of \loadspy{}. \S~\ref{sec:experiment} evaluates \loadspy{}, and \S~\ref{sec:use} shows several case studies. \S~\ref{sec:conclusion} presents our conclusions.
\end{comment} \section{Related Work} \label{sec:related} There exist many compiler and static analysis techniques~\cite{cooper2008redundancy,deitz2001eliminating,Luo:2014:OSC:2628071.2628121,hundt2011mao} to identify redundant computation. However, these static approaches suffer from limitations related to the precision of alias information, optimization scope, and insensitivity to inputs and execution contexts. To address these issues, recent approaches convert the source code to specific notations for redundancy detection and removal~\cite{Ding:2017:GGL:3152284.3133898}, or target specific algorithms for optimization~\cite{Ding:2017:GTD:3062341.3062377}. However, these approaches require substantial prior knowledge to identify whether a program suffers from redundancies that are worthy of optimization. In contrast, \loadspy{} monitors execution, avoids inaccuracies associated with compile-time analysis, and needs no prior knowledge of the measured programs. There exist many hardware-based approaches~\cite{Lipasti:1996:VLL:237090.237173,Lipasti:1996:EDL:243846.243889,854389,Lepak:2000:SSF:360128.360133,miguel2014load, miguel2015doppelganger,yazdanbakhsh2016rfvp,Butts:2002:DDD:605397.605419} that optimize redundant operations during program execution. However, these approaches require hardware extensions, which are unavailable in commodity processors. Instead, \loadspy{} is a pure software approach and does not need any hardware changes. The remainder of this section reviews only other profiling techniques. \subsection{Value profiling} \loadspy{} is a value-aware profiler; value profiling techniques are closely related to our work. Calder et al.~\cite{Calder:1997:VP:266800.266825,Calder99valueprofiling,Feller98valueprofiling} proposed probably the first value profiler on DEC Alpha processors. They instrumented the program code and recorded the top N values to pinpoint invariant or semi-invariant variables stored in registers or memory.
A variant of this value profiler was proposed in later research~\cite{Watterson:2001:GVP:647477.760386}. Burrows et al.~\cite{Burrows:2000:EFV:378993.379236} used hardware performance counters to sample values in the Digital Continuous Profiling Infrastructure~\cite{DCPI}. Wen et al.~\cite{witch} combined performance monitoring units and debug registers available in x86 to identify redundant memory operations. These approaches do not explore whole-program load redundancy in depth. Moreover, none of them detect spatial redundancy. Some code specialization work depends on value profiling. However, these approaches limit themselves to only analyzing registers~\cite{Muth:2000:CSB:647169.718148}, static instructions~\cite{Oh:2013:PAL:2451116.2451161}, memory store operations~\cite{redspy}, or functions~\cite{Chung00energyefficient,Kamio04avalue,vprof}. They omit many optimization opportunities and require significant manual effort to reason about the root causes of inefficiencies. Unlike existing value profilers, \loadspy{} has four distinct features. First, \loadspy{} is the first value profiler that tracks the \emph{history of loaded values} from individual \emph{memory locations}, rather than the values produced by \emph{individual instructions}. Second, \loadspy{} identifies both \emph{temporal and spatial} redundancies in load operations. Third, \loadspy{} provides novel redundancy scopes and metrics to guide optimization in terms of both contexts and semantics. Fourth, \loadspy{} not only identifies redundancy arising from exactly the same values but also identifies redundancy due to approximately equal values, which offers opportunities for \emph{approximate computing}. \subsection{Value-agnostic profiling} RVN~\cite{RVN} assigns symbolic values to dynamic instructions and identifies redundancy on the fly. DeadSpy~\cite{chabbi2012deadspy} tracks every memory operation to pinpoint a store operation that is not loaded before a subsequent store to the same location.
MemoizeIt~\cite{DellaToffola:2015:PPY:2814270.2814290} detects Java methods that perform identical computations. Travioli~\cite{Padhye:2017:TRD:3097368.3097425} detects redundant data structure traversals. These approaches miss out on certain opportunities that \loadspy{} can detect by explicitly inspecting values generated at runtime. Toddler~\cite{toddler} requires manually adding loop events to instrument loops in a C code base and only identifies repetitive memory loads across loop iterations. The follow-on work LDoctor~\cite{ldoctor} reduces Toddler's overhead using a combination of ad-hoc sampling and static analysis techniques. LDoctor instruments a small number of suspicious loops at compile time. This technique can miss redundant loads in different loops. In contrast, \loadspy{} works on fully optimized binaries, is independent of any compiler, and performs whole-program profiling instead of limiting itself to only profiling loops. \section{Redundant Loads: An Inefficiency Symptom} \label{sec:motivation} While there are several ways to identify inefficiencies, \loadspy{} focuses on memory load operations. If two consecutive load operations performed on the same memory location load the same value, the second load operation can be deemed useless and could potentially be elided. Our study aims to quantify redundant loads and attribute them to the code regions that cause them. \emph{A single instance of a redundant load is uninteresting; highly frequent redundant loads occurring in the same code location demand attention.} It is easy to imagine how redundant loads happen: code repeatedly accesses immutable data structures, or algorithms employ memoization.
It is equally easy to see how inefficient code sequences show up as redundant loads: missed inlining appears as repeatedly loading the same values in a callee; imperfect alias information shows up as loading the same values from the same location via two different pointers; redundant computations show up as the same computations being performed on unchanged loaded values; and algorithmic defects, e.g., frequent linear searches or hash collisions, appear as repeatedly loading unchanged values from the same locations. \begin{definition}[Temporal Load Redundancy] A memory load operation $L_2$, loading value $V_2$ from location $M$, is redundant $iff$ the previous load operation $L_1$, performed on $M$, loaded a value $V_1$ and $V_1 = V_2$. If $V_1 \approx V_2$, we call it approximate temporal load redundancy. \end{definition} \begin{definition}[Spatial Load Redundancy] A memory load operation $L_2$, loading a value $V_2$ from location $M_2$, is redundant $iff$ the previous load operation $L_1$, performed on location $M_1$, loaded a value $V_1$ and $V_1 = V_2$, and $M_1$ and $M_2$ belong to the address span of the same data object. If $V_1 \approx V_2$, we call it approximate spatial load redundancy.
\end{definition} \begin{definition}[Redundancy Fraction] We define the \emph{redundancy fraction} ${\mathcal R}$ in an execution as the ratio of bytes redundantly loaded to the total bytes loaded in the entire execution. \end{definition} We emphasize that the redundancy is defined for instruction instances, \emph{not} static instructions. Deleting an instruction involved in one instance of a redundant load can be unsafe. \begin{observation} A large redundancy fraction ($\mathcal R$) in the execution profile of a program is a symptom of some kind of software inefficiency. \end{observation} Redundant loads are neither a necessary nor a sufficient condition to capture all kinds of software inefficiencies. However, we show, with many illustrative case studies, that \emph{a large fraction of redundant loads in the same code region} is often a symptom of a serious inefficiency. We notice frequent redundant loads across the board in many programs irrespective of optimization levels, raising a warning alarm of potential inefficiency. Although not all redundant loads demand optimization, in our experience, investigating the top few contributors in a profile offers a high potential to tune and optimize code. Looking for load redundancy thus potentially opens an easy avenue for code optimization---manual or automatic. We measure the redundancy fraction in a number of benchmark suites: SPEC CPU2006~\cite{SPEC:CPU2006}, PARSEC-2.1~\cite{parsec}, Rodinia-3.1~\cite{rodinia}, and NERSC-8~\cite{TRINITY-WWW}. We compile these benchmarks with \texttt{gcc-4.8.5 -O3}, link-time optimization (LTO), and profile-guided optimization (PGO), which together constitute one of the highest optimization levels. In practice, most packages do not use this level of optimization. We observe that a large load redundancy fraction correlates with some kind of inefficiency. Furthermore, the code that generates many redundant loads is responsible for the inefficiencies in the program.
We classify the causes of redundant loads according to their provenance: input-sensitive redundant loads, inefficient data structure/algorithm designs, or missing compiler optimizations. Different kinds of inefficiencies require different optimization strategies. \subsection{Input-sensitive Redundant Loads} \label{subsec:input} This section describes inefficiencies caused by program inputs. Rodinia-3.1 backprop~\cite{rodinia}, a supervised machine learning algorithm, trains the weights of connections in a neural network. The redundancy fraction of this program is 64\%. It is common knowledge that as the training progresses, many weights stabilize and do not change. Hence, their gradients become and remain zero. Listing~\ref{lst:backprop} shows the inefficiency at line 3, where the majority of elements in arrays \texttt{delta} and \texttt{oldw} are zeros. Computations at lines 3-5 can be bypassed when \texttt{delta[j]} and \texttt{oldw[k][j]} are zeros. Repeatedly loading the zero value from \texttt{delta[j]} and \texttt{oldw[k][j]} shows up as spatial load redundancy. It is easy to eliminate the input-sensitive redundant loads by predicating the subsequent computation on the values of \texttt{delta[j]} and \texttt{oldw[k][j]} being non-zero.
\begin{figure}[t]
\begin{lstlisting}[firstnumber=1,language=c, caption= {Spatial load redundancy in Rodinia-3.1 backprop. Arrays \texttt{delta} and \texttt{oldw} are repeatedly loaded from memory whereas most array elements are zero.}\vspace{-1.5em}, label=lst:backprop]
for (j = 1; j <= ndelta; j++) {
  for (k = 0; k <= nly; k++) {
@$\blacktriangleright$@    new_dw = ((ETA*delta[j]*ly[k])+(MOMENTUM*oldw[k][j]));
    w[k][j] += new_dw;
    oldw[k][j] = new_dw;
}}
\end{lstlisting}
\end{figure}
\subsection{Redundant Loads due to Suboptimal Data Structures and Algorithms} \label{subsec:algorithm} Inefficiencies of this category require semantics to identify and optimize. These inefficiencies also incur a significant number of redundant loads.
We illustrate some algorithms that introduce inefficiencies in a few well-known benchmarks. \paragraph{\textbf{Linear search}} Rodinia-3.1 particlefilter~\cite{rodinia} is used to estimate the location of a target object in signal processing and neuroscience. The redundancy fraction of this program is 99\%. Listing~\ref{lst:particlefilter} shows the inefficiency in function \texttt{findIndex}, which performs a linear search (line 3) over a sorted array \texttt{CDF} to determine the location of a given particle. This linear search is invoked many times in a loop and becomes the bottleneck of the program. The symptom of this inefficiency is many redundant loads, which are caused by the repeated loads of immutable array \texttt{CDF} elements in different invocation instances of function \texttt{findIndex}. To fix this problem, one can replace the linear search with a binary search, which reduces the volume of redundant loads.
\begin{figure}
\begin{lstlisting}[firstnumber=1,language=c, caption= Temporal load redundancy in Rodinia-3.1 particlefilter. A linear search loads the same values from the same memory locations.\vspace{-1em}, label=lst:particlefilter]
int findIndex(double *CDF, int lengthCDF, double value) {
  for(x = 0; x < lengthCDF; x++) {
@$\blacktriangleright$@    if (CDF[x] >= value) {
      index = x;
      break;
  }}
  ...
  return index;
}
...
for(j = 0; j < Nparticles; j++)
  i = findIndex(CDF, Nparticles, u[j]);
\end{lstlisting}
\end{figure}
\paragraph{\textbf{Hash table}} Parsec-2.1 dedup~\cite{parsec} compresses data via deduplication. The redundancy fraction of this program is 75\%. Listing~\ref{lst:dedup} shows the inefficiency in the program, which searches for an item in a linked list associated with a hash table entry. The inefficiency comes from the frequent execution of the slow path due to hash collisions. We noticed that only $\sim$2\% of hash buckets are occupied, and the slow path is frequently taken.
The linked list traversal on the slow path loads the same values from the same locations (line 8), which results in redundant loads. One can improve the hash function to make hash keys uniformly distributed among buckets, which will reduce the redundancy and hence the inefficiency. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption= Temporal load redundancy in Parsec-2.1 dedup. Excessive hash collisions in linear hashing result in long linked lists.\vspace{-1.5em}, label=lst:dedup] struct hash_entry *hashtable_search(struct hashtable *h, void *k) { struct hash_entry *e; unsigned int hashvalue, index; hashvalue = hash(h,k); index = indexFor(h->tablelength,hashvalue); e = h->table[index]; while (NULL != e) { @$\blacktriangleright$@ if ((hashvalue == e->h) && (h->eqfn(k, e->k))) return e; e = e->next; } ...} \end{lstlisting} \end{figure} \begin{comment} \paragraph{\textbf{Heap sort}} SPEC OMP2012 376.kdtree~\cite{SPEC:OMP2012} uses random coordinate points to build a k-d tree. The redundancy fraction of this program is 80\%. Listing~\ref{lst:kdtree} shows an inefficiency in the function \texttt{downheap}, which uses a binary max heap to sort array \texttt{x}. With \texttt{n} elements in \texttt{x}, \texttt{downheap} is called $n-1$ times to complete the sorting. The heap sort is a comparison-based sort algorithm, which updates the heap root every time according to the comparison result in each iteration. The heap sort has a lower bound of $\Omega(n\log n)$ comparison operations, which is the cause for the redundant loads. With further investigation, we find that elements in array \texttt{x} have uniformly distributed values in a small integer interval $[0, 32767]$ and the array size $n \gg 32767$. In this case, non-comparison sort algorithms are superior choices, e.g., bucket sort, whose time complexity is only $O(n+32768)$, which reduce the redundant loads and yield better performance. 
\begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption= Temporal load redundancy in SPEC OMP2012 376.kdtree. Heap sort is an unwise choice of algorithm when unsorted data are uniformly distributed in a small interval., label=lst:kdtree] void heapsort(long long *a, long long n, int **x, int p) { long long k, v; for (k = n / 2; k >= 1; k--) downheap(a, n, k, x, p); while (n > 1) { ... @$\blacktriangleright$@ downheap(a, --n, 1, x, p); }} \end{lstlisting} \end{figure} \paragraph{\textbf{Redundant function calls}} Stamp-0.9.10 vacation~\cite{Nakaike:2015:QCH:2749469.2750403} is a travel reservation system. The redundancy fraction of this program is 85\%. Listing~\ref{lst:vacation} shows the inefficiencies at lines 5-6. The macros \texttt{MANAGER\_QUERY\_CAR} and \texttt{MANAGER\_QUERY\_CAR\_PRICE} call the same function to check the existence and price for a given item. Such redundant function calls also introduce significant redundant loads when reading the same database entry. To fix the problem, one can memoize the first search result and reuse it to avoid redundant calls. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption= Temporal load redundancy in Stamp-0.9.10 vacation due to redundant function calls., label=lst:vacation] for (n = 0; n < numQuery; n++) { ... switch (t) { case RESERVATION_CAR: @$\blacktriangleright$@ if (MANAGER_QUERY_CAR(managerPtr, id) >= 0) { @$\blacktriangleright$@ price = MANAGER_QUERY_CAR_PRICE(managerPtr, id); } ...}} \end{lstlisting} \end{figure} \end{comment} \subsection{Redundant Loads due to Missing Compiler Optimizations} \label{subsec:compiler} Inefficiencies of this category occur in small scopes---loop nests or procedure calls. One needs to either curate the code or manually apply transformations to eliminate these inefficiencies. The following three examples illustrate our findings. 
\paragraph{\textbf{Missing scalar replacement}} Rodinia-3.1 hotspot 3D~\cite{rodinia} is a thermal simulation program that estimates processor temperature. The redundancy fraction of this program is 95\%. Listing~\ref{lst:hotspot3D} shows a loop nest that performs a stencil computation. At line 8, \texttt{tOut\_t[c]} is updated with the values in nearby \texttt{tIn\_t[]}. Typically, \texttt{w} $=$ \texttt{c} - \texttt{1} and \texttt{e} $=$ \texttt{c} + \texttt{1}. As a result, the value of \texttt{tIn\_t[e]} in the current iteration equals the value of \texttt{tIn\_t[c]} in the next iteration and further equals the value of \texttt{tIn\_t[w]} in the iteration after the next. However, the compiler does not perform register promotion of \texttt{tIn\_t[e]}. Hence, many \emph{redundant loads} occur in this loop nest. To fix this inefficiency, we employ scalar replacement to eliminate inter-iteration redundant loads from memory. Specifically, we store the value of \texttt{tIn\_t[e]} in a local variable in the current iteration to be reused by \texttt{tIn\_t[c]} in the next iteration and by \texttt{tIn\_t[w]} in the iteration after the next.
\begin{figure}[t]
\begin{lstlisting}[firstnumber=1,language=c, caption= Temporal load redundancy in Rodinia-3.1 hotspot3D. Array \texttt{tIn\_t} is repeatedly loaded from memory while the values remain unchanged.\vspace{-1em}, label=lst:hotspot3D]
for(y = 0; y < ny; y++) {
  for(x = 0; x < nx; x++) {
    int c, w, e, n, s, b, t;
    c = x + y * nx + z * nx * ny;
    w = (x == 0) ? c : c - 1;
    e = (x == nx - 1) ? c : c + 1;
    ...
@$\blacktriangleright$@    tOut_t[c] = cc*tIn_t[c]+cw*tIn_t[w]+ce*tIn_t[e]+...
}}
\end{lstlisting}
\end{figure}
\paragraph{\textbf{Missing constant propagation}} NERSC-8 msgrate~\cite{TRINITY-WWW} measures the message passing rate via the MPI interface. The redundancy fraction of this program is 97\%.
Listing~\ref{lst:msgrate} shows a procedure \texttt{cache\_invalidate}, which sets all the elements in array \texttt{cache\_buf} to 1. This code adopts a suboptimal forward propagation that loads the value of \texttt{cache\_buf[i-1]} and assigns it to \texttt{cache\_buf[i]}. Although there is no redundant load in one invocation of this function, procedure \texttt{cache\_invalidate} is called in a loop (not shown in the listing), resulting in excessive redundant loads from array \texttt{cache\_buf}. The compiler does not replace the assignment with a constant, possibly due to its inability to prove the safety of assigning to a global array in the presence of concurrent threads of execution.
\begin{figure}[t]
\begin{lstlisting}[firstnumber=1,language=c, caption= Temporal load redundancy in NERSC-8 msgrate. The program repeatedly loads a constant ``1'' from array \texttt{cache\_buf}.\vspace{-1.5em}, label=lst:msgrate]
int *cache_buf;
...
static void cache_invalidate(void) {
  int i;
  cache_buf[0] = 1;
  for (i = 1; i < cache_size; ++i)
@$\blacktriangleright$@    cache_buf[i] = cache_buf[i-1];
}
\end{lstlisting}
\end{figure}
\begin{figure}[t]
\begin{lstlisting}[firstnumber=1, language=c, caption= Temporal load redundancy in SPEC CPU2006 464.h264ref due to missing function inlining.\vspace{-1.5em}, label=lst:h264ref]
for (pos = 0; pos < max_pos; pos++) {
  ...
  if(abs_y >= 0 && abs_y <= max_height && ...)
    PelYline_11 = FastLine16Y_11;
  else PelYline_11 = UMVLine16Y_11;
  for (blky = 0; blky < 4; blky++) {
    for (y = 0; y < 4; y++) {
@$\blacktriangleright$@      refptr = PelYline_11(ref_pic, abs_y++, abs_x, img_height, img_width);
      ...
    }
  ...}}
\end{lstlisting}
\end{figure}
\paragraph{\textbf{Missing inline substitution}} SPEC CPU2006 464.h264ref~\cite{SPEC:CPU2006} is a reference implementation of H.264, a standard of video compression. The redundancy fraction of this program is 84\%.
The compiler fails to inline the frequently called function \texttt{PelYline\_11} at line 8 shown in Listing~\ref{lst:h264ref}, because it is invoked via a function pointer and the callee routines are not present in the same file. The parameters of \texttt{PelYline\_11}---\texttt{abs\_x}, \texttt{img\_height}, and \texttt{img\_width}---are unmodified across multiple successive invocations. In each invocation, the caller pushes the same parameter values onto the stack, and then the callee loads the same values from the same locations, which show up as redundant loads. To fix the problem, we need to manually inline the function~\cite{redspy}. \paragraph{\textbf{Discussion}} We have explored other compiler flags that enable advanced optimizations such as polyhedral optimization~\cite{graphite-www} in \texttt{GCC}. Unfortunately, the polyhedral optimization was unsuccessful in optimizing any of the aforementioned scenarios. Furthermore, we observed that using LTO and PGO together with the polyhedral optimization made compilation time extremely high in some cases. For example, it took over two hours to compile hotspot 3D, a 30,000$\times$ slowdown compared to simply using \texttt{-O3}. As a result, our later evaluation section does not use LTO and polyhedral optimization, but only uses \texttt{-O3} with PGO. We leave the effectiveness of other compilers, such as LLVM~\cite{Lattner:2004:LCF:977395.977673} and ICC~\cite{ICC-WWW}, on the same set of programs for a future study. \section{\loadspy{} Implementation} \label{sec:methodology} \loadspy{} employs Intel \texttt{Pin}~\cite{Luk:2005:PBC:1065010.1065034} to intercept every memory load operation. The instrumentation obtains the effective address $M$ accessed by the instruction and the access length $\delta$, and offers the pair to a runtime analysis routine. In the rest of this section, we discuss how \loadspy{} identifies \emph{temporal} and \emph{spatial} load redundancies, respectively.
\subsection{Detecting Temporal Load Redundancy} \label{subsec:temporalRed} Detecting temporal load redundancy requires two pieces of information: the current value $v_{new}$ at the target location and the last-time loaded value $v_{old}$ from the same location. The runtime analysis routine, run just before the execution of the original program's load instruction, fetches the current value $v_{new}$ at the memory range $[M:M+\delta)$. \loadspy{} employs a shadow memory $S$ for maintaining the last-time loaded value at the same location. $S[M]$ maintains the value last loaded by the program at location $M$. \loadspy{} utilizes the page-table-based scheme~\cite{chabbi2012deadspy} to efficiently manage its shadow memory. At runtime, the analysis routine fetches $v_{old}$ from $S[M:M+\delta)$ and $v_{new}$ from $[M:M+\delta)$. \loadspy{} records an instance of a \emph{redundant} load if $v_{old} = v_{new}$. All bytes must match to qualify a load as redundant. Intuitively, sub-read-size redundancy is not actionable by the programmer. Note, however, that $v_{old}$ might have been generated by multiple shorter reads, a single longer read, or more commonly a single read of the same size. If not redundant, \loadspy{} updates the shadow memory with the newly loaded value. Also, \loadspy{} records an instance of a \emph{non-redundant} load if $v_{old} \ne v_{new}$. \loadspy{} provisions for approximate computation by allowing the new value generated in a floating-point (FP) operation to \emph{approximately} match the previously present value. If the two values are within a threshold of difference, \loadspy{} considers them approximately equal and records an instance of a redundant load. The threshold is tunable; we use 1\% in our experiments. Accordingly, \loadspy{} decomposes the load redundancy into \emph{precise} and \emph{approximate}. 
\loadspy{} attributes each instance of redundant loads (and non-redundant loads) to two parties $\langle C_{old}, C_{new}\rangle$ involved in two operations, where $C_{old}$ is the calling context of the previous load operation on $M$ and $C_{new}$ is the calling context of the current load operation on $M$. The following equations compute the fraction of temporal load redundancy in an execution: \begin{eqnarray} \scriptsize \begin{aligned} {\mathcal R}_{prog}^{precise} =& {\sum_i\sum_j\text{Redundant non-FP bytes loaded in } \langle C_{i}, C_{j}\rangle \over \sum_i\sum_j\text{non-FP bytes loaded in } \langle C_{i}, C_{j}\rangle} \\ {\mathcal R}_{prog}^{approx} =& {\sum_i\sum_j\text{Redundant FP bytes loaded in } \langle C_{i}, C_{j}\rangle \over \sum_i\sum_j\text{FP bytes loaded in } \langle C_{i}, C_{j}\rangle} \\ \end{aligned} \end{eqnarray} Load redundancy between a pair of calling contexts is given by the following equations: \begin{eqnarray} \scriptsize \begin{aligned} {\mathcal R}_{\langle C_{old}, C_{new}\rangle}^{precise} =& {\text{Redundant non-FP bytes loaded in } \langle C_{old}, C_{new}\rangle \over \sum_i\sum_j\text{non-FP bytes loaded in } \langle C_{i}, C_{j}\rangle} \\ {\mathcal R}_{\langle C_{old}, C_{new}\rangle}^{approx} =& {\text{Redundant FP bytes loaded in } \langle C_{old}, C_{new}\rangle \over \sum_i\sum_j\text{FP bytes loaded in } \langle C_{i}, C_{j}\rangle} \\ \end{aligned} \end{eqnarray} The metrics help identify code regions (pairs of calling contexts) where the highest amount of redundancy is observed. \begin{comment} \blue{A calling context pair $\langle C_{old}, C_{new}\rangle$ may be involved in a redundant load on one occasion and may not be involved in the redundant load on some other occasion. It is important to segregate these two situations, which help developers identify how to optimize; always redundant (in an execution) vs. sometimes redundant may demand different kinds of optimization strategies. 
This information is readily available since each load is classified and recorded as either redundant or non-redundant. The metric of non-redundancy \emph{usefulness} in context $\langle C_{old}, C_{new}\rangle$, represented by ${\mathcal U}_{\langle C_{old}, C_{new}\rangle}$, is defined as follows: \begin{eqnarray} \scriptsize \begin{aligned} {\mathcal U}_{\langle C_{old}, C_{new}\rangle}^{precise} =& {\text{Non-redundant non-FP bytes loaded in } \langle C_{old}, C_{new}\rangle \over \sum_i\sum_j\text{non-FP bytes loaded in } \langle C_{i}, C_{j}\rangle} \\ {\mathcal U}_{\langle C_{old}, C_{new}\rangle}^{approx} =& {\text{Non-redundant FP bytes loaded in } \langle C_{old}, C_{new}\rangle \over \sum_i\sum_j\text{FP bytes loaded loaded in } \langle C_{i}, C_{j}\rangle} \\ \end{aligned} \end{eqnarray} } \blue{If the ratio $\frac{{\mathcal R}_{\langle C_{old}, C_{new}\rangle}}{{\mathcal U}_{\langle C_{old}, C_{new}\rangle} + {\mathcal R}_{\langle C_{old}, C_{new}\rangle}}$ is $1.0$, it means all loads in a given calling context pair were redundant. If the ratio is $0$, no loads in the given calling context pair were redundant. Any other value indicates the execution had a mix of redundant and non-redundant loads in the same calling context pair. } \end{comment} \textbf{\textit{Obtaining the Runtime Calling Context of an Instruction:}} \label{subsec:programCtxt} Attributing runtime statistics to a flat profile (just an instruction pointer) does not offer full insights for developers. For example, attributing redundant loads to a common library function, e.g., \texttt{strcmp}, offers little insight since \texttt{strcmp} can be invoked from several places in a large code base; some invocations may not even be obvious to the user code. A detailed attribution demands associating profiles to the full calling context: \texttt{main():line->A():line->...}\texttt{->strcmp():line}. 
\loadspy{} obtains the calling context on each load operation, since each load---redundant or not---must be attributed to its context. \loadspy{} employs CCTLib~\cite{Chabbi:2014:CPP:2581122.2544164}, which efficiently maintains calling contexts as a calling context tree (CCT)~\cite{Ammons:1997:EHP:258915.258924}, handling complex control flow through \texttt{longjmp}, tail calls, and exceptions. The calling context, which is provided as a unique 32-bit integer, is recorded (in addition to the last-time loaded value) in the shadow memory. \subsection{Detecting Spatial Load Redundancy} \label{subsec:spatialRed} \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{spatial_redundancy.pdf} \end{center} \vspace{-0.15in} \caption{Detecting spatial load redundancy. \textcircled{1} \loadspy{} monitors a load operation and associates its effective address with the data object. In a map, each data object is associated with the value and context of the previous load from this data object. \textcircled{2} \loadspy{} compares the previous and current load values; if they are (approximately) the same, an instance of (approximate) spatial load redundancy is reported. \textcircled{3} The value and context associated with the data object are updated with the ones from the current load.} \vspace{-1.5em} \label{fig:spatialRed} \end{figure} For arrays and aggregate objects, \loadspy{} checks whether two consecutive loads from any element of the same object load (approximately) the same value. For example, if two consecutive loads from an array \texttt{a}, say \texttt{a[i]} and \texttt{a[j]}, load the same value, \loadspy{} flags it as an instance of \emph{spatial} load redundancy and attributes it to the same data object, as shown in Figure~\ref{fig:spatialRed}. To facilitate spatial load redundancy detection, \loadspy{} maintains a mapping from address ranges to active data objects in a shadow memory.
Associated with each data object $\mathcal{O}$ are two additional pieces of information: a singleton value $v_{old}$ loaded by the previous load operation performed on $\mathcal{O}$ and the calling context $C_{old}$ associated with that load operation. Upon each memory load, \loadspy{} uses the effective address of the load operation to look up the data object it belongs to in the map. If the value of the current load matches the one recorded with the previous load on the same object, \loadspy{} records an instance of spatial load redundancy. The redundancy is hierarchically attributed first to the data object involved and then to the two calling contexts involved in the redundancy. \loadspy{} provides similar whole-program and per-redundancy-pair metrics for spatial redundancy. Moreover, \loadspy{} computes the per-data-object metrics with the following equations, where ${\mathcal O}$ is a data object. \begin{eqnarray} \scriptsize \begin{aligned} {\mathcal R}_{\mathcal O}^{precise} =& {\text{Redundant non-FP bytes in object } {\mathcal O} \over \sum_i\text{non-FP bytes in object } i} \\ {\mathcal R}_{\mathcal O}^{approx} =& {\text{Redundant FP bytes in object } {\mathcal O} \over \sum_i\text{FP bytes in object } i} \\ \end{aligned} \end{eqnarray} \textbf{\textit{Obtaining Data-object Addresses at Runtime:}} \loadspy{} monitors static and dynamic data objects but excludes stack objects from spatial redundancy detection. Data allocated in the \texttt{.bss} section of a load module are static objects. Each static object has a named entry in the symbol table that identifies the memory range for the object with an offset from the beginning of the load module. The lifetime of static objects begins when the enclosing load module (executable or dynamic library) is loaded into memory and ends when the load module is unloaded.
\loadspy{} intercepts the loading and unloading of load modules to monitor the lifetime of static data objects and establishes a mapping from an object's address range to the corresponding data object. Dynamic objects are allocated via one of the \texttt{malloc} family of functions (\texttt{malloc}, \texttt{calloc}, \texttt{realloc}) and \texttt{mmap}~\cite{Liu:2013:DPP:2503210.2503297}. The memory of dynamic objects is reclaimed via \texttt{free} and \texttt{munmap}. \loadspy{} intercepts these functions to establish a mapping from an object's address range to the corresponding data object. Querying an address at runtime obtains a handle to the corresponding static or dynamic object. The handle is a unique identifier representing the object name for a static object or the allocation calling context for a dynamic object. \subsection{Identifying the Redundancy Scope} \label{subsec:redScope} When the redundancy happens in the same calling context, that is, $C_{old} = C_{new}$, there is guaranteed to be a loop\footnote{We consider natural loops~\cite{Torczon:2007:EC:1526330} only.} around the redundancy location. However, in code with nested loops, it is unclear whether the redundancy occurred between iterations of an inner loop, between iterations of an outer loop, or between iterations of some loop in between. Hence, it is necessary to identify the syntactic scope enclosing a redundancy pair. We illustrate the need for scope information using a real-world application, \texttt{MASNUM-2.2}~\cite{Qiao:2016:HEG:3014904.3014911}, shown on the left of Listing~\ref{lst:motivationExample}. \loadspy{} identifies that 91\% of memory loads are redundant and that the top contributor is at line 6. It is tempting to infer that \texttt{x(iii+1)} loaded in one iteration of the inner \texttt{do} loop (line 5) is loaded again as \texttt{x(iii)} in the next iteration.
An obvious optimization is to perform scalar replacement to retain \texttt{x(iii+1)} across iterations of the inner \texttt{do} loop (on the right of Listing~\ref{lst:motivationExample}). However, this optimization does not eliminate many redundant loads. In fact, the outer \texttt{do} loop at line 1 repeatedly searches for an item \texttt{xx}, and the inner \texttt{do} loop performs a linear search. As a result, the inner loop repeatedly loads the same set of elements across two trips of the outer loop. Thus, the load redundancy exists not only between iterations of the inner loop but also between iterations of the outer loop. The load redundancy at the outer loop highlights an algorithm-level inefficiency---repeated linear searches. With this knowledge, we can replace the linear search with a binary search. More details are shown in \S~\ref{subsec:masnum}. To help developers focus on the \emph{scope} where load redundancy occurs, we have incorporated a \emph{redundancy scope} feature in \loadspy{}. We denote redundancy scope with the symbol $\mathcal{S}$. In Listing~\ref{lst:motivationExample}, the redundancy scope is the \emph{outer} \texttt{do} loop. Below we detail how redundancy scope is computed. \begin{figure} \begin{minipage}[t]{.48\linewidth} \begin{lstlisting}[firstnumber=1,language=c]
do 500 k=1, kl
  ...
  xx=x0-deltt*(cgx+ux(ia,ic))/rslat(ic)*180./pi
  ...
  do iii = ixs, ixl-1
@$\blacktriangleright$@   if(xx >= x(iii) .and. xx <= x(iii+1)) then
      ixx = iii; exit
    endif
  enddo
  ...
500 continue
\end{lstlisting} \end{minipage}\hfill \begin{minipage}[t]{.48\linewidth} \begin{lstlisting}[firstnumber=1,language=c]
do 500 k=1, kl
  scalar = x(ixs)
  do iii = ixs, ixl-1
    if(xx >= scalar) then
      scalar = x(iii+1)
      if (xx <= scalar) then
        ixx = iii; exit
      endif
    else
      scalar = x(iii+1)
    endif
  enddo
  ...
500 continue
\end{lstlisting} \end{minipage} \vspace{-0.3in} \captionof{lstlisting}{A code example (on the left) from MASNUM-2.2~\cite{Qiao:2016:HEG:3014904.3014911} that requires additional information for disambiguating the scope of load redundancy. Many redundant loads occur at line 6, where the array \texttt{x} is repeatedly loaded from memory. If we focused only on the inner loop, we would be misled into believing that the stencil computation, which loads \texttt{x(iii+1)} and \texttt{x(iii)}, causes many redundant loads across iterations of the inner loop. However, performing scalar replacement (on the right) does not yield much speedup. An algorithm-level redundancy occurs in the outer \texttt{do} loop, which repeatedly performs linear searches over a sorted array of elements. } \vspace{-1em} \label{lst:motivationExample} \end{figure} We first extend calling contexts to incorporate loop information. Thus, the calling context of a load operation looks as follows: $main()\to loop_{1}\to f()\to ...\to loop_{n} \to load_{old}$. Additionally, \loadspy{} maintains a 64-bit global timestamp counter $\mathcal{T}$ that is incremented when passing through each loop header and also through each load operation. Thus, the calling context snapshot may appear as follows: $C_{old} = main()\to loop_{1}[\mathcal{T}=1]\to f()\to ...\to loop_{n}[\mathcal{T}=9] \to load_{old}$. We extend the calling context to a tuple $E$, that is, $E_{old}= \langle \text{pointer to old context}, \mathcal{T}_{old}\rangle = \langle C_{old},10\rangle$.
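The timestamp bookkeeping described above can be sketched in C as follows (hypothetical hook names; in \loadspy{} these are analysis routines attached to instrumented loop headers and load instructions):

```c
#include <assert.h>
#include <stdint.h>

/* Global timestamp, incremented at every loop header and every load. */
static uint64_t T = 0;

/* One entry per loop on the extended calling context. */
typedef struct {
    uint64_t T; /* timestamp of the most recent pass through this header */
} LoopNode;

/* Analysis routine run at each instrumented loop header. */
static void on_loop_header(LoopNode *loop) { loop->T = ++T; }

/* Analysis routine run at each load; the returned timestamp is stored
 * in shadow memory along with the loaded value and context handle. */
static uint64_t on_load(void) { return ++T; }
```

With this bookkeeping, comparing the loop timestamps recorded in $C_{old}$ against $\mathcal{T}_{old}$ and $\mathcal{T}_{new}$ suffices to locate the redundancy scope.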
\begin{figure}[t] \begin{minipage}[t]{.48\linewidth} \begin{lstlisting}[firstnumber=1,language=c, caption=Redundancy in the inner loop scope., label=lst:scope1]
main () {
  // loop1
  for (i=0; i<M; i++) {
    // loop2
    for (k=0; k<N; k++) {
      // load from B[i]
      t += B[i];
}}}
\end{lstlisting} \end{minipage}\hfill \begin{minipage}[t]{.48\linewidth} \begin{lstlisting}[firstnumber=1,language=c, caption=Redundancy in the outer loop scope., label=lst:scope2]
main () {
  // loop1
  for (i=0; i<M; i++) {
    // loop2
    for (k=0; k<N; k++) {
      // load from A[k]
      t += A[k];
}}}
\end{lstlisting} \end{minipage} \vspace{-1.5em} \end{figure} Listing~\ref{lst:scope1} shows a simplified example, where the redundancy happens in the inner loop (scope is $loop_2$). In this setting, consider the following pair of calling context snapshots: \begin{eqnarray*} \scriptsize \begin{aligned} E_{old} =&\langle main()\to loop_{1}[\mathcal{T}=1] \to loop_{2}[\mathcal{T}=2] \to load_{old}, \mathcal{T}_{old}=3 \rangle \\ E_{new} =& \langle main()\to loop_{1}[\mathcal{T}=1] \to loop_{2}[\mathcal{T}=4] \to load_{new}, \mathcal{T}_{new}=5 \rangle \end{aligned} \end{eqnarray*} Notice that the counter associated with $loop_{1}$ has remained unchanged whereas the counter associated with $loop_{2}$ has changed. Each load maintains a \emph{pointer} to the calling context, not the entire calling context snapshot. Hence, by the time the redundancy is detected, that is, $load_{new}$ is executed, $loop_{2}[\mathcal{T}=2]$ will have been updated to $loop_{2}[\mathcal{T}=4]$; traversing $C_{old}$ would find $\mathcal{T}_{loop_{2}} = 4$. Observe that $\mathcal{T}_{old} < \mathcal{T}_{loop_{2}} < \mathcal{T}_{new}$. This invariant indicates that $loop_{2}$ is the scope within which the redundancy occurs. The same invariant does not hold for $\mathcal{T}_{loop_{1}}$. Now, consider a simplified example in Listing~\ref{lst:scope2}, where redundancy happens in the outer loop (scope is $loop_{1}$).
In this setting, consider the following pair of calling context snapshots: \begin{eqnarray*} \scriptsize \begin{aligned} E_{old} =& \langle main()\to loop_{1}[\mathcal{T}=1] \to loop_{2}[\mathcal{T}=2] \to load_{old}, \mathcal{T}_{old}=3 \rangle \\ E_{new} =& \langle main()\to loop_{1}[\mathcal{T}=8] \to loop_{2}[\mathcal{T}=9] \to load_{new}, \mathcal{T}_{new}=10 \rangle \end{aligned} \end{eqnarray*} Notice that the counters associated with both $loop_{1}$ and $loop_{2}$ have changed. Hence, by the time $load_{new}$ is executed, $loop_{1}[\mathcal{T}=1]$ and $loop_{2}[\mathcal{T}=2]$ will have been updated to $loop_{1}[\mathcal{T}=8]$ and $loop_{2}[\mathcal{T}=9]$, respectively; traversing $C_{old}$ would find $\mathcal{T}_{loop_{1}} = 8$ and $\mathcal{T}_{loop_{2}} = 9$. Observe that $\mathcal{T}_{old} < \mathcal{T}_{loop_{1}} < \mathcal{T}_{loop_{2}} < \mathcal{T}_{new}$. The loop with the smallest $\mathcal{T}$ value obeying this invariant, that is, $loop_{1}$, is the redundancy scope. If there were another enclosing loop, say $loop_{0}$, its counter would not have obeyed this invariant. \begin{claim} \label{claim:scope} Given a redundancy context pair $\langle \langle C, \mathcal{T}_{old}\rangle , \langle C, \mathcal{T}_{new} \rangle \rangle$, the redundancy scope $\mathcal{S}$ is the outermost enclosing loop $i$ in $C$ such that $\mathcal{T}_{old} < \mathcal{T}_{loop_i} < \mathcal{T}_{new}$. \end{claim} \begin{proof} First, $\mathcal{T}_{loop_{i}}$ must be in the range of $(\mathcal{T}_{old}, \mathcal{T}_{new})$ because loop $i$ is the redundancy scope; otherwise, loop $i$ cannot enclose the redundant load instances. Next, assume there exists another loop $j$ in $C$ such that $\mathcal{T}_{old} < \mathcal{T}_{loop_{j}} < \mathcal{T}_{loop_i}<\mathcal{T}_{new}$ but loop $j$ is not the redundancy scope. Loops $i$ and $j$ cannot be peer loops because they both appear in the same context $C$. Then one loop must enclose the other.
(1) If loop $i$ encloses loop $j$, $\mathcal{T}_{loop_i} < \mathcal{T}_{loop_{j}}$ because loop $j$'s counter is incremented at least once after loop $i$'s counter is incremented, which contradicts the assumption that $\mathcal{T}_{loop_{j}} < \mathcal{T}_{loop_i}$. Hence, loop $j$ cannot be nested inside loop $i$. (2) If loop $j$ encloses loop $i$, then loop $i$ is no longer the outermost loop with $\mathcal{T}_{old} < \mathcal{T}_{loop_i} < \mathcal{T}_{new}$. Hence, loop $j$ cannot be enclosing loop $i$. Since loops $i$ and $j$ can be neither peer loops nor nested within one another, the assumption is void. Thus, Claim~\ref{claim:scope} holds. \end{proof} \textbf{\textit{Implementing Redundancy Scope:}} \loadspy{} combines static and dynamic analysis to compute the redundancy scope $\mathcal{S}$ for each redundancy pair. First, \loadspy{} instruments each loop header in the binary (in addition to procedures) to produce calling contexts with augmented loop information. It identifies an instruction as a loop header by performing an interval analysis~\cite{Havlak:1997:irreducible} on the binary code and integrates the information into the procedure call path. We refer to the calling contexts with loop information as \emph{extended contexts}. A runtime analysis routine, run as a part of each loop header, increments the 64-bit timestamp counter $\mathcal{T}$; the analysis routine run as a part of each load instruction also increments $\mathcal{T}$. Also, the shadow memory for each byte of the original program is extended to hold the counter $\mathcal{T}$ (in addition to the 32-bit calling context handle and the 8-bit old value). On each detected load redundancy, where $C_{old} = C_{new}$, \loadspy{} searches the call path from the root (\texttt{main}) toward the leaf (the \texttt{load} instruction) to look for the first loop node where Claim~\ref{claim:scope} holds.
Such a loop is the redundancy scope $\mathcal{S}$ for the current instance of load redundancy. Each redundancy instance records the triplet $\langle C_{old}, C_{new}, \mathcal{S}\rangle$. If $C_{old} \ne C_{new}$, \loadspy{} first finds the lowest common ancestor (LCA) function or loop enclosing $C_{old}$ and $C_{new}$, and then searches their common call path from the root (\texttt{main}) toward the LCA to obtain $\mathcal{S}$ based on Claim~\ref{claim:scope}. Computing the redundancy scope for each redundancy instance introduces heavy runtime overhead. Hence, we compute the redundancy scope for a given calling context pair only a threshold number of times (one in our experiments), which suffices for most programs. \subsection{Handling Threaded Programs} \loadspy{} maintains per-thread data structures---calling context trees, redundancy profiles, and $\mathcal{T}$, among others---and hence needs no concurrency control for multi-threaded programs. The runtime object map is maintained as a lock-free map allowing concurrent lookups. \loadspy{} detects only intra-thread redundancy and ignores inter-thread redundancy, if any. \subsection{Reducing Profiling Overhead} \loadspy{} can introduce relatively high runtime overhead, $\sim$40--150$\times$. \loadspy{} adopts a bursty sampling mechanism to control its overhead~\cite{Zhong:2008:SPL:1375634.1375648}. Bursty sampling involves continuous monitoring for a certain number of instructions (\texttt{WINDOW\_ENABLE}) followed by not monitoring for a certain (larger) number of instructions (\texttt{WINDOW\_DISABLE}), repeating this pattern over time. These two thresholds are tunable. From our experiments, a 1\% sampling rate with \texttt{WINDOW\_ENABLE}=1 million and \texttt{WINDOW\_DISABLE}=99 million yields a good tradeoff between overhead and accuracy.
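The sampling decision can be sketched as follows (a simplified model; the actual implementation toggles instrumentation per window rather than testing a counter on every instruction):

```c
#include <assert.h>
#include <stdint.h>

#define WINDOW_ENABLE   1000000ULL  /* instructions monitored per period */
#define WINDOW_DISABLE 99000000ULL  /* instructions skipped per period   */

static uint64_t ins_count = 0;      /* instructions executed so far */

/* Returns 1 iff the current instruction falls inside the monitored burst. */
static int should_monitor(void) {
    uint64_t pos = ins_count++ % (WINDOW_ENABLE + WINDOW_DISABLE);
    return pos < WINDOW_ENABLE;
}
```

With these defaults, 1 out of every 100 instructions is monitored on average, matching the 1\% sampling rate used in our experiments.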
\subsection{Discussion} It is worth noting that there is no one-to-one relationship between the redundancy fraction and potential performance gains because of pipelining, caching, and prefetching in hardware. \loadspy{} does not distinguish actionable vs. non-actionable redundancies, which is a topic of our future work. \section{\loadspy{} Workflow} \label{sec:implementation} \loadspy{} consists of three components: a runtime profiler (detailed previously in \S~\ref{sec:methodology}), an analyzer, and a GUI. \loadspy{} accepts fully optimized binary executables and collects runtime profiles via its online profiler. The analyzer and GUI run in a postmortem fashion; they consume the runtime profiles and associate them with the application source code. The rest of this section discusses the analyzer and GUI. \begin{comment} \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{loadspy_design.pdf} \end{center} \vspace{-0.1in} \caption{Workflow of \loadspy{}.} \label{fig:loadspyDesign} \end{figure} \end{comment} \subsection{\loadspy{}'s Analyzer} \label{subsec:analyzer} \loadspy{}'s analyzer associates the runtime profiles with source code based on the DWARF~\cite{dwarf} information produced by compilers. As the profiler produces per-thread profiles, the analyzer needs to coalesce the profiles for the whole execution. The per-thread calling context profiles allow the analysis to scale to executions on a large number of cores. The coalescing procedure follows this rule: two redundancy pairs from different threads are merged \emph{iff} they have the same redundant loads in the same contexts with the same redundancy scope. All the metrics are also merged to compute unified ones across threads. The scheme is similar for profiles from different processes. It is worth noting that the profile coalescing overhead grows linearly with the number of threads and processes used by the monitored program.
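The coalescing rule can be sketched as follows (hypothetical record layout; the real profiles carry more metrics than shown):

```c
#include <assert.h>
#include <stdint.h>

/* A per-thread redundancy record: a calling-context pair, its scope,
 * and the associated byte counts. */
typedef struct {
    uint32_t ctx_old, ctx_new; /* calling-context handles */
    uint32_t scope;            /* redundancy-scope handle */
    uint64_t red_bytes;        /* redundant bytes loaded  */
    uint64_t tot_bytes;        /* total bytes loaded      */
} RedPair;

/* Merge src into dst iff both denote the same loads in the same
 * contexts with the same redundancy scope; returns 1 on success. */
static int try_merge(RedPair *dst, const RedPair *src) {
    if (dst->ctx_old != src->ctx_old || dst->ctx_new != src->ctx_new ||
        dst->scope != src->scope)
        return 0;
    dst->red_bytes += src->red_bytes;
    dst->tot_bytes += src->tot_bytes;
    return 1;
}
```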
\loadspy{} leverages the reduction tree technique~\cite{Tallent:2010:SIL:1884643.1884683} to parallelize the merging process. Typically, \loadspy{} takes less than one minute to produce the aggregate profiles in all of our case studies. \subsection{\loadspy{}'s GUI} \label{subsec:gui} \loadspy{}'s GUI inherits the design of an existing Java-based graphical interface~\cite{adhianto2010hpctoolkit}, which enables navigating the calling contexts and the corresponding source code ordered by the monitored metrics. A top-down view shows a call path $C$ starting from \texttt{main} to a leaf function with the breakdown of metrics at each level. Merely attributing a metric to two independent contexts loses the association between two related contexts during postmortem inspection. To correlate the source with the target, \loadspy{} allows appending a copy of the target calling context to the source calling context. For example, if a load in context \texttt{main->A->B} is redundant with another load in context \texttt{main->C->D}, \loadspy{} constructs a synthetic calling context: \texttt{main->A->B->main->C->D}. The redundancy metrics will be attributed to the leaf of this call chain. These synthetic call chains make it easy to visually navigate profiles and focus on top redundancy pairs. Figure~\ref{fig:avro} in \S~\ref{subsec:avro} shows an example of the GUI, and we postpone the explanation of the GUI details to that section. 
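The synthetic call-chain construction can be sketched as follows (a string-based sketch; \loadspy{} actually splices CCT nodes rather than strings):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Append the target calling context to the source calling context so a
 * redundancy pair appears as one navigable chain in the GUI. */
static void synthetic_chain(char *out, size_t n,
                            const char *src_ctx, const char *tgt_ctx) {
    snprintf(out, n, "%s->%s", src_ctx, tgt_ctx);
}
```

For the example above, this turns \texttt{main->A->B} and \texttt{main->C->D} into \texttt{main->A->B->main->C->D}, and the redundancy metrics are attributed to the leaf of the resulting chain.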
\begin{figure*}[t] \begin{center} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=.95\textwidth]{temporal_redundancy_breakdown.pdf} \caption{Temporal redundancies.} \label{fig:temporal_breakdown} \end{subfigure} ~~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=.95\textwidth]{spatial_redundancy_breakdown.pdf} \caption{Spatial redundancies.} \label{fig:spatial_breakdown} \end{subfigure} \end{center} \vspace{-0.1in} \caption{Fraction of temporal and spatial load redundancies on SPEC CPU2006.} \vspace{-.8em} \label{fig:breakdown} \end{figure*} \section{Evaluation} \label{sec:experiment} We evaluate \loadspy{} on a 12-core Intel Xeon E5-2650 v4 CPU (Broadwell) of 2.20GHz frequency running Linux 4.8.0. The machine has 256GB main memory. We evaluate \loadspy{} with well-known benchmarks, such as SPEC CPU2006~\cite{SPEC:CPU2006}, SPEC OMP2012~\cite{SPEC:OMP2012}, SPEC CPU2017~\cite{SPEC:CPU2017}, Parsec-2.1~\cite{parsec}, Rodinia-3.1~\cite{rodinia}, NERSC-8~\cite{TRINITY-WWW}, and Stamp-0.9.10~\cite{4636089}, as well as several real-world applications, such as Apache Avro-1.8.2~\cite{Apache-Avro}, Hoard-3.12~\cite{Berger:2000:HSM:378993.379232}, MASNUM-2.2~\cite{Qiao:2016:HEG:3014904.3014911}, Shogun-6.0~\cite{soeren_sonnenburg_2017_556748}, USQCD Chroma-3.43~\cite{Edwards:2004sx}, Stack RNN~\cite{2015arXiv150301007J}, Binutils-2.27~\cite{binutils}, and Kallisto-0.43~\cite{kallisto-WWW}. All the programs are compiled with \texttt{gcc-4.8.5 -O3 PGO} except Hoard-3.12 and MASNUM-2.2. For Hoard-3.12 we use \texttt{clang-5.0.0 -O3 PGO} and for MASNUM-2.2 we use \texttt{icc-17.0.4 -O3 PGO}. We apply the \texttt{ref} inputs for SPEC CPU2006, OMP2012 and CPU2017 benchmarks, the native inputs for Parsec-2.1 benchmarks, and the default inputs released with the remaining benchmarks and applications if not specified. We run all the parallel programs with four threads with simultaneous multi-threading (SMT) disabled. 
In the rest of this section, we first show the fraction of temporal and spatial redundancies obtained from SPEC CPU2006. We then evaluate the accuracy and overhead of \loadspy{} with bursty sampling enabled. We exclude three benchmarks---gobmk, sjeng, and xalancbmk---from monitoring because their deep call recursion causes \loadspy{} to run out of memory. \paragraph{\textbf{Load redundancy in macro benchmarks}} Figure~\ref{fig:breakdown} shows the fraction of temporal and spatial load redundancies on SPEC CPU2006. We can see that (1) load redundancy, especially temporal redundancy, exists pervasively, and (2) integer benchmarks show a high proportion of precise redundant loads whereas floating-point benchmarks show a high proportion of approximate redundant loads, as expected. \begin{table}[t] \centering \scriptsize \begin{adjustbox}{width=0.43\textwidth} \begin{tabular} {|c||c|c||c|c|} \hline \multirow{2}{*}{Benchmarks} & \multicolumn{2}{c||}{Detecting Temporal Redundancy} & \multicolumn{2}{c|}{Detecting Spatial Redundancy}\\ \cline{2-5} & Runtime Slowdown & Memory Bloat & Runtime Slowdown & Memory Bloat \\ \hline\hline perlbench & 38$\times$ & 11$\times$ & 51$\times$ & 7$\times$ \\ bzip2 & 13$\times$ & 2$\times$ & 13$\times$ & 1.09$\times$ \\ gcc & 19$\times$ & 26$\times$ & 19$\times$ & 25$\times$ \\ mcf & 6$\times$ & 14$\times$ & 6$\times$ & 1.04$\times$ \\ hmmer & 12$\times$ & 35$\times$ & 11$\times$ & 20$\times$ \\ libquantum & 12$\times$ & 18$\times$ & 13$\times$ & 2$\times$ \\ h264ref & 21$\times$ & 20$\times$ & 21$\times$ & 2$\times$ \\ omnetpp & 10$\times$ & 16$\times$ & 14$\times$ & 25$\times$ \\ astar & 11$\times$ & 13$\times$ & 11$\times$ & 18$\times$ \\ bwaves & 17$\times$ & 14$\times$ & 15$\times$ & 1.16$\times$ \\ gamess & 24$\times$ & 25$\times$ & 24$\times$ & 24$\times$ \\ milc & 4$\times$ & 10$\times$ & 4$\times$ & 1.18$\times$ \\ zeusmp & 8$\times$ & 14$\times$ & 7$\times$ & 1.42$\times$ \\ gromacs & 10$\times$ & 23$\times$ & 9$\times$ &
15$\times$ \\ cactusADM & 7$\times$ & 10$\times$ & 7$\times$ & 1.36$\times$ \\ leslie3d & 9$\times$ & 10$\times$ & 8$\times$ & 2$\times$ \\ namd & 10$\times$ & 11$\times$ & 10$\times$ & 9$\times$ \\ dealII & 21$\times$ & 30$\times$ & 22$\times$ & 19$\times$ \\ soplex & 13$\times$ & 13$\times$ & 13$\times$ & 2$\times$ \\ povray & 29$\times$ & 216$\times$ & 28$\times$ & 70$\times$ \\ calculix & 21$\times$ & 18$\times$ & 20$\times$ & 19$\times$ \\ GemsFDTD & 8$\times$ & 14$\times$ & 8$\times$ & 1.42$\times$ \\ tonto & 22$\times$ & 49$\times$ & 24$\times$ & 30$\times$ \\ lbm & 4$\times$ & 14$\times$ & 3$\times$ & 1.15$\times$ \\ wrf & 15$\times$ & 10$\times$ & 16$\times$ & 3$\times$ \\ sphinx3 & 13$\times$ & 16$\times$ & 13$\times$ & 7$\times$ \\ \hline \textbf{Median} & \textbf{12.5$\times$} & \textbf{14$\times$} & \textbf{13$\times$} & \textbf{5$\times$} \\ \textbf{GeoMean} & \textbf{13$\times$} & \textbf{17$\times$} & \textbf{13$\times$} & \textbf{5$\times$} \\ \hline \end{tabular} \end{adjustbox} \caption{\loadspy{}'s runtime slowdown and memory bloat over native execution on SPEC CPU2006.} \vspace{-1.5em} \label{tab:overhead} \end{table} \begin{figure*}[t] \begin{center} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{temporal_redundancy_accuracy.pdf} \caption{Temporal redundancies.} \end{subfigure} ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{spatial_redundancy_accuracy.pdf} \caption{Spatial redundancies.} \end{subfigure} \end{center} \vspace{-0.15in} \caption{Comparing temporal and spatial load redundancies with bursty sampling disabled and enabled. The sampling rate is 1\%.} \label{fig:accuracy} \end{figure*} \paragraph{\textbf{Accuracy}} \loadspy{} offers bursty sampling as an optional feature for users willing to trade off measurement accuracy for performance. Figure~\ref{fig:accuracy} evaluates the accuracy of \loadspy{} with bursty sampling enabled.
The geo-means of spatial load redundancy fractions \loadspy{} measures with sampling enabled and disabled are nearly the same---10\%. The geo-means of temporal load redundancy fractions \loadspy{} measures with sampling enabled and disabled are similar---76\% and 82\%, respectively. However, \texttt{libquantum} is an outlier: its temporal redundancy fractions are 15\% and 68\% with sampling enabled and disabled, respectively. With further investigation, we find that the average number of instructions executed between the source and target load operations of most redundancy pairs is more than 10 million, which is greater than the default \texttt{WINDOW\_ENABLE} (= 1 million). In such a case, one can enlarge \texttt{WINDOW\_ENABLE} to improve the accuracy. For instance, when we set \texttt{WINDOW\_ENABLE} = 10 million and 50 million (\texttt{WINDOW\_DISABLE} remains unchanged), the temporal load redundancy fraction of \texttt{libquantum} increases to 30\% and 60\%, respectively. \paragraph{\textbf{Overhead}} Table~\ref{tab:overhead} shows the runtime slowdown and memory bloat of \loadspy{} on SPEC CPU2006. The runtime slowdown (memory bloat) is measured as the ratio of the runtime (peak memory usage) of a benchmark with \loadspy{} enabled to the runtime (peak memory usage) of its native execution. The geo-means of runtime slowdown for detecting temporal and spatial redundancies are both 13$\times$, and the geo-means of memory bloat for detecting temporal and spatial redundancies are 17$\times$ and 5$\times$, respectively. A few benchmarks such as \texttt{tonto} and \texttt{povray} show excessive memory bloat for two reasons: (1) \texttt{tonto} has a deep call stack, which demands excessive space to maintain its calling context tree, and (2) \texttt{povray} has a small ($\sim$6MB) memory footprint, so some preallocated data structures in \loadspy{} overshadow this baseline memory footprint.
\begin{table*} \centering \begin{adjustbox}{width=0.88\textwidth} \scriptsize \begin{tabular}{|c|c|c||c|c||c|c|} \hline \multicolumn{3}{|c||}{Program Information} & \multicolumn{2}{c||} {\loadspy{}} & \multicolumn{2}{c|} {Optimization} \\ \cline{1-7} \multicolumn{2}{|c|}{Programs} & Problematic Code & Redundancy Types & Inefficiencies & Approaches & WS$^*$ \\ \hline \hline \multirow{15}{*}{\rot{Macro Benchmarks}} & 359.botsspar & sparselu.c:loop(191) & Temporal & Inefficient register usage & Scalar replacement & 1.77$\times$\\ \cline{2-7} & 453.povray & csg.cpp(250) & Temporal & Missing inline substitution & Function inlining & 1.05$\times$\\ & 464.h264ref & mv-search.c:loop(394) & Temporal & Missing inline substitution & Function inlining & 1.28$\times$\\ & \ding{51} 470.lbm & lbm.c:LBM\_performStreamCollide & Spatial & Redundant computation & Approximate computing & 1.25$\times$\\ \cline{2-7} & \ding{51} 538.imagick\_r & morphology.c:loop(2982) & Spatial & Redundant computation & Conditional computation & 1.25$\times$\\ \cline{2-7} & \ding{51} backprop & backprop.c:loop(322) & Spatial& Input-sensitive redundancy & Conditional computation & 1.13$\times$\\ & \ding{51} hotspot3D & 3D.c:loop(98, 166) & Temporal& Inefficient register usage & Scalar replacement & 1.13$\times$\\ & \ding{51} lavaMD & kernel\_cpu.c(175) & Temporal & Redundant function calls & Reusing the previous result & 1.39$\times$\\ & \ding{51} srad\_v1 & main.c:loop(256) & Temporal& Inefficient register usage & Scalar replacement & 1.11$\times$\\ & \ding{51} srad\_v2 & srad.cpp:loop(131) & Temporal& Inefficient register usage & Scalar replacement & 1.12$\times$\\ & \ding{51} particlefilter & ex\_particle\_OPENMP\_seq.c:findIndex & Temporal & Linear search & Binary search & 9.8$\times$\\ \cline{2-7} & vacation & client.c:loop(198) & Temporal & Redundant function calls & Reusing the previous result & 1.23$\times$\\ \cline{2-7} & dedup & hashtable.c:hashtable\_search & Temporal & Poor hashing & Reducing 
hash collisions & 1.11$\times$\\ \cline{2-7} & msgrate & msgrate.c:cache\_invalidate & Temporal & Missing constant propagation & Copy propagation & 3.03$\times$\\ \hline \multirow{10}{*}{\rot{Real Applications}} & \ding{51} Apache Avro-1.8.2 & Specific.hh(110, 117) & Temporal & Missing inline substitution & Function inlining & 1.19$\times$\\ \cline{2-7} & \ding{51} Hoard-3.12 & libhoard.cpp:xxmalloc & Temporal & Redundant computation & Reusing the previous result & 1.14$\times$\\ \cline{2-7} & \ding{51} MASNUM-2.2 & propagat.inc:loop(130, 140) & Temporal& Linear search & Locality-friendly search & 1.79$\times$\\ \cline{2-7} & \ding{51} USQCD Chroma-3.43 & qdp\_random.h(56) & Temporal & Missing inline substitution & Function inlining & 1.06$\times$\\ \cline{2-7} & \ding{51} Shogun-6.0 & \makecell{DenseFeatures.cpp(505) \\Distance.cpp(185)} & Temporal& Missing inline substitution & Function inlining & 1.06$\times$\\ \cline{2-7} & \ding{51} Stack RNN & StackRNN.h:loop(350, 355, 363, 367) & \makecell{Temporal \\Spatial} & \makecell{Poor choice of algorithm \\Redundant computation} & \makecell{Loop fusion \\Conditional computation} & 1.09$\times$\\ \cline{2-7} & Kallisto-0.43 & KmerHashTable.h(131) & Temporal & Poor hashing & Reducing hash collisions & 4.1$\times$\\ \cline{2-7} & Binutils-2.27 & dwarf2.c:loop(2166) & Temporal & Linear search & Binary search & 3.29$\times$\\ \hline \multicolumn{4}{l}{{\vbox to 2ex{\vfil}}\scriptsize \ding{51}: newfound performance bugs via \loadspy{}. } \\ \multicolumn{4}{l}{{\vbox to 2ex{\vfil}}\scriptsize WS$^*$: whole-program speedup after problem elimination.} \\ \end{tabular} \end{adjustbox} \caption{Overview of performance improvement guided by \loadspy{}.} \label{tab:perf} \end{table*} \section{Case Studies} \label{sec:use} We evaluate the load redundancies found in some benchmarks and real-world applications. Table~\ref{tab:perf} summarizes the inefficiencies found and the speedups obtained by eliminating them. 
We quantify the performance of all programs by execution time, except Hoard, whose performance we quantify by throughput. In the rest of this section, we analyze each of the newfound performance bugs. \subsection{Apache Avro-1.8.2} \label{subsec:avro} Avro~\cite{Apache-Avro} is a remote procedure call (RPC) and data serialization system. We apply \loadspy{} to evaluate the C++ version of Avro with benchmarks developed by Sorokin~\cite{avro-benchmark-WWW}. \loadspy{} reports a temporal redundancy fraction $\mathcal{R}_{prog}^{precise}$ of 79\% for the entire program. Figure~\ref{fig:avro} shows the full calling contexts of the top redundancy pair visualized through \loadspy{}'s GUI. \loadspy{}'s GUI consists of three panes: the top pane shows the program source code, the bottom left pane shows the full calling contexts of each redundancy pair, and the bottom right pane shows the metrics associated with each redundancy pair. In this figure, the GUI shows two metrics: the number of redundant loads for a given redundancy pair, and the percentage of redundant instances for that pair; a value of 100\% means every instance of the pair is redundant. From the figure, we can see that the redundant loads in function \texttt{doEncodeLong} account for 25\% of the total redundant loads in the program. Moreover, all instances of this pair are redundant. The redundancy scope of this pair is the loop at lines 229--233 in the file \texttt{Specific.hh} enclosing the call site of function \texttt{encode}. Function \texttt{encode} is the caller of function \texttt{doEncodeLong}. With further analysis, we find that the epilog of function \texttt{doEncodeLong} consistently pops the same values from the same stack locations to restore register values. To eliminate the redundant loads in the function epilog, we inline \texttt{doEncodeLong} into its caller. \loadspy{} further identifies another problematic function (not shown) and guides the same inlining optimization.
Together, these optimizations eliminate 31\% of the memory loads and 37\% of the redundant memory loads, yielding a 1.19$\times$ speedup for the whole program. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{Avro_GUI.pdf} \end{center} \vspace{-0.15in} \caption{The top redundancy pair in \texttt{Avro} with full calling contexts reported by \loadspy{}. Along the calling contexts shown in the bottom left pane, a procedure name following a symbol \texttt{[I]} means it is inlined. We can see that most procedures on the path are inlined, except \texttt{doEncodeLong}. Many redundant loads are from calling \texttt{doEncodeLong}, which can be removed by function inlining.} \vspace{-1.2em} \label{fig:avro} \end{figure} \subsection{MASNUM-2.2} \label{subsec:masnum} MASNUM~\cite{Qiao:2016:HEG:3014904.3014911}, one of the 2016 ACM Gordon Bell Prize finalists, forecasts ocean surface waves and climate change. It is written in Fortran and parallelized with MPI. \loadspy{} identifies that 91\% of memory loads are redundant, of which 15\% are attributed to the array \texttt{x} at line 6 on the left of Listing~\ref{lst:motivationExample}. \loadspy{} also pinpoints the redundancy scope as the outermost loop at line 1. We find that the innermost loop (line 5) performs a linear search over the non-decreasing array \texttt{x} for a given input \texttt{xx}. With multiple iterations, elements of array \texttt{x} are frequently loaded from memory for comparison, leading to the redundancy. Changing the linear search to a binary search reduces redundant loads and yields a 1.32$\times$ speedup for the entire program. It is worth noting that the binary search still incurs a high load redundancy fraction because of the intensive search requests in the program. To further improve the search algorithm, we analyze the values of \texttt{xx} across iterations. We find that \texttt{xx} has good value locality, that is, the values are similar in adjacent iterations of the outermost loop.
Thus, we replace the binary search with a locality-friendly search. We memoize the location index \texttt{iii} when the current search finishes; in the next search, we begin at the recorded \texttt{iii} and alternate the linear search in both directions, toward the array start and the array end. This optimization eliminates 33\% of the memory loads and 36\% of the redundant memory loads, yielding a 1.79$\times$ speedup for the entire program. \subsection{Hoard-3.12} \label{subsec:hoard} Hoard~\cite{Berger:2000:HSM:378993.379232}, a high-performance cross-platform C++-based memory allocator, has been integrated into an array of applications and programming languages such as GNU Bayonne and the Cilk programming language. It has 20K lines of code and is parallelized with the \texttt{PThreads} library. \loadspy{} identifies that 58\% of memory loads are redundant when profiling Hoard's built-in benchmark \texttt{larson}. The top redundancy pair is associated with lines 4 and 7 shown in Listing~\ref{lst:hoard}, which accounts for 11\% of the total redundant loads. The cause of such redundancy is that the program repeatedly checks whether \texttt{theTLAB} is a null pointer. More specifically, function \texttt{isCustomHeapInitialized} at line 15 and function \texttt{getCustomHeap} at line 16 both include code to check whether \texttt{theTLAB} is equal to \texttt{nullptr}. Hence, the second check at lines 8-11 in \texttt{getCustomHeap} is redundant. To eliminate such redundant loads, we inline these two functions into their caller \texttt{xxmalloc} and remove the redundant check. This optimization eliminates 3\% of the memory loads and 2\% of the redundant memory loads, which improves the throughput (i.e., the number of memory operations per second) of Hoard by 1.14$\times$. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption=Temporal load redundancy in Hoard-3.12.
The program repeatedly checks whether the pointer variable \texttt{theTLAB} is null., label=lst:hoard] static __thread TheCustomHeapType * theTLAB INITIAL_EXEC_ATTR = nullptr; ... bool isCustomHeapInitialized() { @$\blacktriangleright$@ return (theTLAB != nullptr); } TheCustomHeapType * getCustomHeap() { @$\blacktriangleright$@ auto tlab = theTLAB; if (tlab == nullptr) { tlab = initializeCustomHeap(); theTLAB = tlab; } return tlab; } void * xxmalloc (size_t sz) { if (isCustomHeapInitialized()) { void * ptr = getCustomHeap()->malloc(sz); ... } } \end{lstlisting} \end{figure} \subsection{USQCD Chroma-3.43} \label{subsec:chroma} Chroma~\cite{Edwards:2004sx} is a complex toolbox for performing quantum chromodynamics lattice computations, which has more than 200K lines of code. We evaluate it using the built-in benchmark \texttt{t\_mesplq}. \loadspy{} reports a temporal redundancy fraction of 61\%. The top redundancy pair is attributed to the function \texttt{sranf} at line 3 shown in Listing~\ref{lst:chroma}. With further investigation, we notice that Chroma has a similar performance bug to the one in \texttt{Apache Avro}: the epilog of function \texttt{sranf} repeatedly pops the same values from the same stack location to restore the register values. To eliminate such redundant loads, we manually inline the callee into its caller. This optimization eliminates 6\% of the memory loads and 7\% of the redundant memory loads, yielding a 1.06$\times$ speedup for the whole program. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=fortran, caption=Temporal load redundancy in USQCD Chroma-3.43. 
The epilog of function \texttt{sranf} often pops the same values from the same stack location to restore the register values., label=lst:chroma] template<class T1, class T2> inline void fill_random(float& d, T1& seed, T2& skewed_seed, const T1& seed_mult) { @$\blacktriangleright$@ d = float(RNG::sranf(seed, skewed_seed, seed_mult)); } \end{lstlisting} \end{figure} \subsection{Shogun-6.0} \label{subsec:shogun} Shogun~\cite{soeren_sonnenburg_2017_556748} is an efficient machine learning toolbox. \loadspy{} reports a temporal redundancy fraction of 71\% on profiling its built-in benchmark \texttt{kernel\_matrix\_sum\_benchmark}. Listing~\ref{lst:shogun} shows one of the top redundancy pairs at line 6. The cause of such redundancy is similar to that in \texttt{Apache Avro}: the epilog of function \texttt{get\_feature\_vector} repeatedly pops the same values from the same stack location to restore the register values. We manually inline the callee into its caller to eliminate these redundant loads. Additionally, we perform the same optimization for other function invocations that have the same performance issue. These optimizations eliminate 7\% of the memory loads and 2\% of the redundant memory loads, yielding a $1.06\times$ speedup for the whole program. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption= {Temporal load redundancy in Shogun-6.0. The epilog of function \texttt{get\_feature\_vector} often pops the same values from the same stack location to restore the register values.}, label=lst:shogun] template<class ST> float64_t CDenseFeatures<ST>::dot(int32_t vec_idx1, CDotFeatures* df, int32_t vec_idx2) { ... CDenseFeatures<ST>* sf = (CDenseFeatures<ST>*) df; int32_t len1, len2; bool free1, free2; @$\blacktriangleright$@ ST* vec1 = get_feature_vector(vec_idx1, len1, free1); ...
} \end{lstlisting} \end{figure} \subsection{Stack RNN} \label{subsec:stackRnn} Stack RNN~\cite{2015arXiv150301007J} is a C++-based project originating from Facebook AI Research, which applies a memory stack to optimize and extend a recurrent neural network. We evaluate Stack RNN by profiling its built-in application \texttt{train\_add} with \loadspy{}. \loadspy{} quantifies a redundancy fraction of 81\%, and pinpoints that the top temporal and spatial load redundancy pairs are associated with the four loops shown in Listing~\ref{lst:stackRnn}. The cause of the temporal load redundancy is that each of the four loops accesses array \texttt{\_err\_stack}. However, the compiler cannot keep all elements of array \texttt{\_err\_stack} in CPU registers across these loops. Thus, the elements of array \texttt{\_err\_stack} are repeatedly loaded from memory into registers. We eliminate the temporal redundant loads by loop fusion, which fuses the four loops into one so that array \texttt{\_err\_stack} is only loaded once. The cause of the spatial load redundancy is that most elements of array \texttt{\_err\_stack} are zeros, resulting in identity computation at lines 2, 5, 9 and 12 shown in Listing~\ref{lst:stackRnn}. We employ a conditional check to avoid the computation on identities. These two optimizations together eliminate 10\% of the memory loads and 15\% of the redundant memory loads, yielding a 1.09$\times$ speedup for the whole program. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption={Temporal and spatial load redundancies in Stack RNN. Array \texttt{\_err\_stack} is loaded from memory by each of the four loops, resulting in temporal load redundancy.
Besides, most elements of array \texttt{\_err\_stack} equal zero, resulting in spatial load redundancy.}, label=lst:stackRnn] for (my_int i = _TOP_OF_STACK; i < _TOP_OF_STACK + _STACK_SIZE - 1; i++) { @$\blacktriangleright$@ _pred_err_stack[s][i+1] += _err_stack[s][i] * _act[s][itm][pop]; } for (my_int i = _TOP_OF_STACK; i < _TOP_OF_STACK + _STACK_SIZE - 1; i++) { @$\blacktriangleright$@ _err_act[s][pop] += _err_stack[s][i] * _stack[s][old_it][i+1]; } _err_act[s][pop] += _err_stack[s][_TOP_OF_STACK + _STACK_SIZE - 1] * EMPTY_STACK_VALUE; for (my_int i = _TOP_OF_STACK + 1; i < _TOP_OF_STACK + _STACK_SIZE; i++) { @$\blacktriangleright$@ _pred_err_stack[s][i-1] += _err_stack[s][i] * _act[s][itm][push]; } for (my_int i = _TOP_OF_STACK + 1; i < _TOP_OF_STACK + _STACK_SIZE; i++) { @$\blacktriangleright$@ _err_act[s][push] += _err_stack[s][i] * _stack[s][old_it][i-1]; } \end{lstlisting} \end{figure} \subsection{SPEC CPU2006 470.lbm} \label{subsec:lbm} 470.lbm~\cite{SPEC:CPU2006} employs the lattice Boltzmann method to simulate incompressible fluids in three-dimensional space. \loadspy{} reports that spatial redundant loads account for 55\% of the total memory loads, of which more than 30\% are attributed to the array \texttt{srcGrid} at lines 11-54 shown in Listing~\ref{lst:lbm}. With further investigation, we find that array \texttt{srcGrid} is traversed across loop iterations and most of its elements are identical, resulting in many redundant loads. To optimize this inefficiency, we apply loop perforation~\cite{Sidiroglou-Douskos:2011:MPV:2025113.2025133} to reduce the number of iterations at the cost of accuracy. With this optimization, the memory loads and redundant memory loads are reduced by 26\% and 60\%, and the whole program gains a 1.25$\times$ speedup with trivial accuracy loss (7.7e-5\%). \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption= Spatial load redundancy in SPEC CPU2006 470.lbm.
Array \texttt{srcGrid} is frequently loaded from memory while most array elements have the same values., label=lst:lbm] #define SWEEP_START(x1,y1,z1,x2,y2,z2) \ for( i = CALC_INDEX(x1, y1, z1, 0); \ i < CALC_INDEX(x2, y2, z2, 0); \ i += N_CELL_ENTRIES ) { #define SWEEP_END } ... static double srcGrid[SIZE_Z*SIZE_Y*SIZE_X*N_CELL_ENTRIES]; ... SWEEP_START( 0, 0, 0, 0, 0, SIZE_Z ) // loop entry ... @$\blacktriangleright$@ rho = + SRC_C ( srcGrid ) + SRC_N ( srcGrid ) @$\blacktriangleright$@ + SRC_S ( srcGrid ) + SRC_E ( srcGrid ) @$\blacktriangleright$@ + SRC_W ( srcGrid ) + SRC_T ( srcGrid ) @$\blacktriangleright$@ + SRC_B ( srcGrid ) + SRC_NE( srcGrid ) @$\blacktriangleright$@ + SRC_NW( srcGrid ) + SRC_SE( srcGrid ) @$\blacktriangleright$@ + SRC_SW( srcGrid ) + SRC_NT( srcGrid ) @$\blacktriangleright$@ + SRC_NB( srcGrid ) + SRC_ST( srcGrid ) @$\blacktriangleright$@ + SRC_SB( srcGrid ) + SRC_ET( srcGrid ) @$\blacktriangleright$@ + SRC_EB( srcGrid ) + SRC_WT( srcGrid ) @$\blacktriangleright$@ + SRC_WB( srcGrid ); @$\blacktriangleright$@ ux = + SRC_E ( srcGrid ) - SRC_W ( srcGrid ) @$\blacktriangleright$@ + SRC_NE( srcGrid ) - SRC_NW( srcGrid ) @$\blacktriangleright$@ + SRC_SE( srcGrid ) - SRC_SW( srcGrid ) @$\blacktriangleright$@ + SRC_ET( srcGrid ) + SRC_EB( srcGrid ) @$\blacktriangleright$@ -SRC_WT( srcGrid ) - SRC_WB( srcGrid ); @$\blacktriangleright$@ uy = + SRC_N ( srcGrid ) - SRC_S ( srcGrid ) @$\blacktriangleright$@ + SRC_NE( srcGrid ) + SRC_NW( srcGrid ) @$\blacktriangleright$@ - SRC_SE( srcGrid ) - SRC_SW( srcGrid ) @$\blacktriangleright$@ + SRC_NT( srcGrid ) + SRC_NB( srcGrid ) @$\blacktriangleright$@ - SRC_ST( srcGrid ) - SRC_SB( srcGrid ); @$\blacktriangleright$@ uz = + SRC_T ( srcGrid ) - SRC_B ( srcGrid ) @$\blacktriangleright$@ + SRC_NT( srcGrid ) - SRC_NB( srcGrid ) @$\blacktriangleright$@ + SRC_ST( srcGrid ) - SRC_SB( srcGrid ) @$\blacktriangleright$@ + SRC_ET( srcGrid ) - SRC_EB( srcGrid ) @$\blacktriangleright$@ + SRC_WT( srcGrid ) - SRC_WB( srcGrid 
); ... @$\blacktriangleright$@ DST_C ( dstGrid ) = (1.0-OMEGA)*SRC_C ( srcGrid ) + ... @$\blacktriangleright$@ DST_N ( dstGrid ) = (1.0-OMEGA)*SRC_N ( srcGrid ) + ... @$\blacktriangleright$@ DST_E ( dstGrid ) = (1.0-OMEGA)*SRC_E ( srcGrid ) + ... @$\blacktriangleright$@ DST_W ( dstGrid ) = (1.0-OMEGA)*SRC_W ( srcGrid ) + ... @$\blacktriangleright$@ DST_T ( dstGrid ) = (1.0-OMEGA)*SRC_T ( srcGrid ) + ... @$\blacktriangleright$@ DST_B ( dstGrid ) = (1.0-OMEGA)*SRC_B ( srcGrid ) + ... @$\blacktriangleright$@ DST_NE( dstGrid ) = (1.0-OMEGA)*SRC_NE( srcGrid ) + ... @$\blacktriangleright$@ DST_NW( dstGrid ) = (1.0-OMEGA)*SRC_NW( srcGrid ) + ... @$\blacktriangleright$@ DST_SE( dstGrid ) = (1.0-OMEGA)*SRC_SE( srcGrid ) + ... @$\blacktriangleright$@ DST_SW( dstGrid ) = (1.0-OMEGA)*SRC_SW( srcGrid ) + ... @$\blacktriangleright$@ DST_NT( dstGrid ) = (1.0-OMEGA)*SRC_NT( srcGrid ) + ... @$\blacktriangleright$@ DST_NB( dstGrid ) = (1.0-OMEGA)*SRC_NB( srcGrid ) + ... @$\blacktriangleright$@ DST_ST( dstGrid ) = (1.0-OMEGA)*SRC_ST( srcGrid ) + ... @$\blacktriangleright$@ DST_SB( dstGrid ) = (1.0-OMEGA)*SRC_SB( srcGrid ) + ... @$\blacktriangleright$@ DST_ET( dstGrid ) = (1.0-OMEGA)*SRC_ET( srcGrid ) + ... @$\blacktriangleright$@ DST_EB( dstGrid ) = (1.0-OMEGA)*SRC_EB( srcGrid ) + ... @$\blacktriangleright$@ DST_WT( dstGrid ) = (1.0-OMEGA)*SRC_WT( srcGrid ) + ... @$\blacktriangleright$@ DST_WB( dstGrid ) = (1.0-OMEGA)*SRC_WB( srcGrid ) + ... ... SWEEP_END // loop exit \end{lstlisting} \end{figure} \subsection{SPEC CPU2017 538.imagick\_r} \label{subsec:imagick_r} 538.imagick\_r~\cite{SPEC:CPU2017} is used to create, edit, compose, or convert bitmap images. \loadspy{} reports that spatial redundant loads account for 13\% of the total memory loads, of which 24\% are attributed to the variable \texttt{k} at lines 6-9 shown in Listing~\ref{lst:imagick_r}. \texttt{k} is a pointer to the floating-point array \texttt{values} at line 2 and decrements by one in each iteration.
We find that most elements of this array equal zero, causing \texttt{*k} to equal zero in most of the loop iterations. To remove the identity computation on \texttt{*k}, we introduce a conditional check to filter out all zero values. With this optimization, the memory loads and redundant memory loads are reduced by 19\% and 51\%, and the whole program achieves a 1.25$\times$ speedup. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption= {Spatial load redundancy in SPEC CPU2017 538.imagick\_r. Array \texttt{values} is frequently loaded from memory. However, most array elements equal zero.}, label=lst:imagick_r] register const double *restrict k; k = &kernel->values[kernel->width*kernel->height-1] ... for (u=0; u < (ssize_t) kernel->width; u++, k--) { if (IsNaN(*k)) continue; @$\blacktriangleright$@ result.red += (*k)*k_pixels[u].red; @$\blacktriangleright$@ result.green += (*k)*k_pixels[u].green; @$\blacktriangleright$@ result.blue += (*k)*k_pixels[u].blue; @$\blacktriangleright$@ result.opacity += (*k)*k_pixels[u].opacity; ... } \end{lstlisting} \end{figure} \subsection{Rodinia-3.1 Srad} \label{subsec:srad} Srad~\cite{rodinia} applies partial differential equations to filter noise in images, and is widely used in ultrasonic and radar imaging applications. We profile the OpenMP version of srad\_v1. \loadspy{} reports a temporal redundancy fraction of 99\%, of which 8\% is attributed to the array \texttt{image} at lines 12-14 shown in Listing~\ref{lst:srad}. We notice that when 0 $<$ \texttt{i} $<$ \texttt{Nr} - 1, the value of \texttt{image[iS[i] + Nr*j]} in one iteration equals the value of \texttt{image[k]} in the next iteration and further equals the value of \texttt{image[iN[i] + Nr*j]} in the iteration after next. To fix this problem, we adopt scalar replacement to avoid redundant loads across iterations, which eliminates 33\% of the memory loads and yields a 1.11$\times$ speedup for the whole program.
It is worth noting that the indirect accesses in this inefficient code snippet introduce challenges in the compiler's static analysis and optimization. Additionally, \loadspy{} identifies the same inefficiency occurring in srad\_v2. With the same optimization, srad\_v2 achieves a 1.12$\times$ speedup. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption= Temporal load redundancy in Rodinia-3.1 srad\_v1. Array \texttt{image} is repeatedly loaded from memory while the values remain unchanged., label=lst:srad] for (i=0; i<Nr; i++) { iN[i] = i-1; iS[i] = i+1; } ... iN[0] = 0; iS[Nr-1] = Nr-1; ... for (j=0; j<Nc; j++) { for (i=0; i<Nr; i++) { k = i + Nr*j; @$\blacktriangleright$@ Jc = image[k]; @$\blacktriangleright$@ dN[k] = image[iN[i] + Nr*j] - Jc; @$\blacktriangleright$@ dS[k] = image[iS[i] + Nr*j] - Jc; } } \end{lstlisting} \end{figure} \subsection{Rodinia-3.1 LavaMD} \label{subsec:lavaMD} LavaMD~\cite{rodinia} calculates particle potential and relocation among particles. We apply \loadspy{} to evaluate its OpenMP version. \loadspy{} reports that 87\% of memory loads are redundant, and the top contributor is the \texttt{glibc} function \texttt{exp} at line 7 shown in Listing~\ref{lst:lavaMD}. We notice that the value of \texttt{u2} often remains unchanged across iterations. As a result, a number of redundant loads and computations occur inside \texttt{exp} due to redundant function calls. With further analysis, we find that \texttt{a2} is a loop invariant, and \texttt{u2} is derived from \texttt{a2} and \texttt{r2}. Thus, we infer that \texttt{r2} often has the same value across iterations. To optimize this inefficiency, we introduce a conditional check on \texttt{r2} such that the program can reuse the return value of function \texttt{exp} from the previous iteration if the value of \texttt{r2} has not changed.
This optimization eliminates 76\% of the memory loads and 93\% of the redundant memory loads, yielding a 1.39$\times$ speedup for the entire program. \begin{figure}[t] \begin{lstlisting}[firstnumber=1,language=c, caption=Temporal load redundancy in Rodinia-3.1 lavaMD due to redundant function calls., label=lst:lavaMD] for (k=0; k<(1+box[l].nn); k++) { ... for (i=0; i<NUMBER_PAR_PER_BOX; i=i+1) { for (j=0; j<NUMBER_PAR_PER_BOX; j=j+1) { r2 = rA[i].v + rB[j].v - DOT(rA[i],rB[j]); u2 = a2*r2; @$\blacktriangleright$@ vij= exp(-u2); fs = 2.*vij; ... } } } \end{lstlisting} \end{figure} \section{Threats to Validity} \label{subsec:threat} The main threats to validity lie in applying \loadspy{} for code optimization. The same optimization for one application may show different speedups on different computer architectures. A given load redundancy fraction may not help estimate the potential speedup. Some optimizations are input-specific, and a different profile may demand a different optimization. Based on the reported inefficiencies, programmers need to devise an optimization that is safe in any execution. \section{Conclusions} \label{sec:conclusion} In this paper, we presented a study of identifying program inefficiencies by focusing on whole-program load redundancy. We demonstrated that redundant load operations are often a symptom of various inefficiencies arising from inputs, suboptimal data structure and algorithm choices, and missed compiler optimizations. To pinpoint these inefficiencies in complex software code bases, we have developed \loadspy{}, a fine-grained profiler that profiles load redundancy. The \loadspy{} toolchain provides valuable guidance to developers for code tuning---the calling contexts of the two parties involved in a redundancy, narrowed-down redundancy scopes on which to focus optimization, metrics to understand the relative significance of each redundancy, and a GUI for source code attribution. We evaluated \loadspy{} using several benchmarks and real-world applications.
Guided by \loadspy{}, we were able to optimize both previously known and newly found inefficiencies in several programs. Eliminating temporal and spatial load redundancies resulted in nontrivial speedups. \section*{Acknowledgment} We thank the reviewers for their valuable comments. This work is supported by a Google Faculty Research Award and the National Natural Science Foundation of China (No. 61502019). \balance{}
\section{Introduction} In coding theory, an interesting and important question is to construct codes from smaller ones and to explore their properties via those of the smaller ones. There have been many such constructions, for example, the $(u|u + v)$-construction and the $(a+x|b+x|a+b+x)$-construction. It was shown in \cite{LS-I} that quasi-cyclic codes over finite fields with co-index coprime to the characteristic of the finite fields can be constructed from linear codes of lower dimension in a similar way, and the $(a+x|b+x|a+b+x)$-construction is one such special case. A more general construction, called the {\em matrix product code}, which is formed by $m$ codes of length $n$ over a finite field and an $m\times l$ matrix over the finite field, was proposed and studied in \cite{Blackmore-Norton}. Many, though not all, quasi-cyclic codes can be rewritten as matrix product codes, for suitably chosen matrices. It was further shown in \cite{Ozbudak-Stichtenoth} that the codes constructed by algebraic geometry in \cite{Niederreiter-Xing} are in fact matrix product codes. In~\cite{Blackmore-Norton}, a class of matrices, called {\em non-singular by columns} matrices, was introduced, and some lower bounds were obtained for the minimum distance of the matrix product codes constructed with such matrices. However, most matrices for quasi-cyclic codes, including the matrix for the $(a+x|b+x|a+b+x)$-construction, are not non-singular by columns. For general matrix product codes over finite fields, a lower bound for the minimum distance was obtained in~\cite{Ozbudak-Stichtenoth}. Decoding methods for some matrix product codes were also discussed in \cite{H-H-R}, \cite{H-L-R} and \cite{H-R-11}. Other related work may be found in \cite{H-R-10}, \cite{M-M} and \cite{Ould-Mamoun}. On the other hand, coding over finite rings has attracted much attention since the seminal work in \cite{Hammons}. 
It was pointed out in the important works \cite{W-99} and \cite{W-08} that only finite Frobenius rings are suitable for coding alphabets, in the sense that several fundamental properties of codes over finite fields still hold for codes over such rings. For example, the double dual property, which says that the double dual coincides with the original linear code, holds for linear codes over finite Frobenius rings. A special class of finite Frobenius rings consists of the finite chain rings, and codes over finite chain rings have been investigated from many perspectives. Recently, in \cite{Van-Asch}, matrix product codes over finite chain rings were studied and the lower bound on the minimum distance of matrix product codes by non-singular by columns matrices in \cite{Blackmore-Norton} was extended to the minimum homogeneous distance. Some quasi-cyclic codes over finite chain rings have also been decomposed into matrix product codes in \cite{LS-II}, though the terminology ``matrix product code'' was not used. In this paper, we extend previous works on matrix product codes in two directions. First, we formulate matrix product codes over finite commutative Frobenius rings, and explore their general properties, mainly, the minimum distance and the structure of the duals. Second, we consider new classes of matrices, which contain the class of non-singular by columns matrices as a special case, for which we can bound the minimum distance of matrix product codes thus constructed more precisely and more tightly, and for which self-dual matrix product codes can be constructed efficiently. The understanding of dual codes, as well as self-orthogonality and self-duality of codes, is a natural and important question in coding theory. The organization of the paper is as follows. Section 2 contains facts on matrices over finite commutative rings which are needed for later sections, but which may not be readily available in the literature. 
In Section 3, we formulate matrix product codes over finite commutative Frobenius rings, and give two lower bounds for the minimum distance of such codes. We also prove that the dual code of a matrix product code is also a matrix product code whose structure is described precisely. Not only does this extend earlier results in \cite{Blackmore-Norton} and \cite{Van-Asch}, but it also does not require the matrix to be square. In Section 4, we introduce a class of matrices, called {\em strongly full-row-rank (SFRR) matrices} (see Definition \ref{Def4.1}), which is larger than the class of non-singular by columns matrices and also contains certain matrices associated to quasi-cyclic codes. We exhibit more precise lower bounds for the minimum distance of matrix product codes constructed with these matrices, as well as for their dual codes. Besides extending corresponding results in~\cite{Blackmore-Norton}, conditions for which these lower bounds are attained are also given. Inspired by the matrix for the $(a+x|b+x|a+b+x)$-construction, in Section~5 we consider special matrices, named {\em two-way $(m')$-SFRR matrices} (see Definition \ref{Def5.1}), and obtain lower and upper bounds for the minimum distance of matrix product codes constructed with these matrices. These bounds cover some known bounds for the minimum distance of codes obtained from the $(a+x|b+x|a+b+x)$-construction as special cases. For such matrices, we also show a condition (see Definition \ref{Def5.2}) which is useful for the construction of self-orthogonal matrix product codes.
Writing the identity element~$1$ of the ring $R$ as the sum of the primitive idempotents of $R$, we obtain an isomorphism \begin{equation}\label{eq2.1} R\mathop{\longrightarrow}^{\cong}_\varphi R_1\oplus\cdots\oplus R_s,\quad r\longmapsto(r^{(1)},\cdots,r^{(s)}), \end{equation} where $R_1$, $\cdots$, $R_s$ are local commutative rings. With the isomorphism (\ref{eq2.1}), in the following we usually identify $R$ with $R_1\oplus\cdots\oplus R_s$ and just write $r=(r^{(1)},\cdots,r^{(s)})$. The finite commutative ring $R$ is called a {\em Frobenius ring} if $R$ is self-injective (i.e., the regular module is injective), or equivalently, $(C^{\bot})^\bot=C$ for any submodule $C$ of any free $R$-module $R^n$, where $C^\bot$ denotes the orthogonal submodule of $C$ with respect to the usual Euclidean inner product on $R^n$. Moreover, in this case, $|C^\bot||C|=|R|^n$ for any submodule $C$ of $R^n$, where $|C|$ denotes the cardinality of $C$. This is one of the reasons why only finite Frobenius rings are suitable for coding alphabets. With the isomorphism \eqref{eq2.1}, $R$ is Frobenius if and only if every local component $R_i$ is Frobenius, and the finite local commutative ring $R_i$ is Frobenius if and only if $R_i$ has a unique minimal ideal. Note that, in the non-commutative case, a self-injective ring is called a {\em quasi-Frobenius ring}, while one more condition is required for it to become a Frobenius ring. However, in the commutative case, a finite quasi-Frobenius ring is exactly a finite Frobenius ring. The reader may refer to \cite{W-99} for more details on Frobenius rings. By ${\rm M}_{m\times l}(R)$, we mean the set of all $m\times l$ matrices over $R$. For $A\in{\rm M}_{m\times l}(R)$, we denote the transpose of the matrix $A$ by $A^T$. Given matrices $A$ of size $m \times l$ and $B$ of size $m \times l'$, we use $(A|B)$ to denote the matrix of size $m \times (l + l')$ formed by concatenating $A$ and $B$. 
If $C$ is another matrix of size $m' \times l$, the $(m + m') \times l$ matrix $\left( \begin{array}{c} A \\ \hline C \end{array} \right)$ is similarly defined (by concatenating vertically). We also let $0$ denote the zero matrix, where the size will either be obvious from the context or specified whenever necessary. Similarly, we denote the $m \times m$ identity matrix by $I_m$, or simply $I$ if the size is clear from the context. Any matrix $A=(a_{ij})_{m\times l}\in{\rm M}_{m\times l}(R)$ can be written as \begin{equation}\label{eq2.2} A=\left(A^{(1)},\cdots,A^{(s)}\right)\,,\qquad A^{(k)}=\left(a_{ij}^{(k)}\right)_{m\times l}\in{\rm M}_{m\times l}(R_k), \quad 1\le k\le s, \end{equation} where the matrix addition and product are the coordinate-wise addition and product, respectively. Consider the free $R$-module $R^n$ of rank $n$. Any element ${\bf a}=(a_1,\cdots,a_n)^T$ (written as a column vector) of $R^n$ is also called a vector, and we let ${\bf 0}$ denote the zero vector. With the identification in (\ref{eq2.1}), we can write $$ R^n=R_1^n\oplus\cdots\oplus R_s^n,\quad {\bf a} = \big({\bf a}^{(1)},\cdots,{\bf a}^{(s)}\big), $$ where ${\bf a}^{(k)}=(a_1^{(k)},\cdots,a_n^{(k)})^T$, for $1\le k\le s$, is a column vector in $R_k^n$. \begin{definition}\label{Def2.1} For any integer $t \ge 1$, let ${\bf a}_i=(a_{i1},\cdots,a_{in})\in R^n$, where $i=1,\cdots,t$. The vectors ${\bf a}_1$,~$\cdots$,~${\bf a}_t$ are said to be {\em linearly dependent} if there exists $(b_1,\cdots,b_t)$ in the set difference $R^t \setminus \{ {\bf 0} \}$ such that $b_1{\bf a}_1+\cdots+b_t{\bf a}_t={\bf 0}$; otherwise, ${\bf a}_1$, $\cdots$, ${\bf a}_t$ are said to be {\em linearly independent}. \end{definition} If an $R$-submodule of $R^n$ is generated by vectors ${\bf a}_1$, $\cdots$, ${\bf a}_t$ which are linearly independent, then it is a free $R$-module of rank $t$ and we say that ${\bf a}_1$, $\cdots$, ${\bf a}_t$ form a {\em basis} of the free submodule. 
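For example, over $R=\mathbb{Z}_4$ even a single nonzero vector can be linearly dependent: taking ${\bf a}_1=(2,2)$, we have
\[
2\cdot(2,2)=(0,0) \quad\text{in } \mathbb{Z}_4^2,
\]
with the coefficient $2\ne 0$, whereas over a field any single nonzero vector is linearly independent. Such zero-divisor phenomena are exactly what the element $\delta\in J^{e-1}$ exploits in the proof of Lemma \ref{Lem2.2} below.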
The proof of the following result is straightforward, so we omit it here. \begin{lemma}\label{Lem2.1} The vectors ${\bf a}_1,\cdots,{\bf a}_t\in R^n$ are linearly dependent if and only if there is an index $k$, with $1\le k\le s$, such that ${\bf a}_1^{(k)},\cdots,{\bf a}_t^{(k)}\in R_k^n$ are linearly dependent. \end{lemma} \begin{remark}\label{Rem-p4} {\rm The following is an equivalent formulation of Lemma \ref{Lem2.1}:} \noindent ``The vectors ${\bf a}_1,\cdots,{\bf a}_t\in R^n$ are linearly independent if and only if, for all $k$ with $1\le k\le s$, the vectors ${\bf a}_1^{(k)},\cdots,{\bf a}_t^{(k)}\in R_k^n$ are linearly independent.'' \end{remark} \begin{definition}\label{Def2.2} Let $A=(a_{ij})_{m\times l}$ be a matrix over $R$. \begin{itemize} \item[(i)] If the rows of $A$ are linearly independent, then we say that $A$ is a {\em full-row-rank (FRR)} matrix. \item[(ii)] If there is an $l\times m$ matrix $B$ over $R$ such that $AB=I$, then we say that $A$ is {\em right-invertible} and $B$ is a {\em right inverse} of $A$. \item[(iii)] If $m=l$ and the determinant $\det A$ is a unit of $R$, then we say that $A$ is {\em non-singular}. \item[(iv)] If, for every $t$ with $1\le t\le m$, any $t\times t$ submatrix of the first (resp., last) $t$ rows of $A$ is non-singular, then we say that $A$ is {\em non-singular by columns} (resp., {\em reversely non-singular by columns}). \end{itemize} \end{definition} \begin{remark}\label{rem-p4-new} {\rm \begin{itemize} \item[(i)] It is obvious that, if $A$ is a matrix over $R$ of size $m\times l$, and $P$, $Q$ are invertible matrices over $R$ of size $l\times l$ and $m\times m$, respectively, then $A$, $AP$ and $QA$ are all FRR provided one of them is FRR. \item[(ii)] By Remark \ref{Rem-p4}, a matrix $A$ over $R$ is FRR if and only if the matrices $A^{(k)}$ over $R_k$ in (\ref{eq2.2}), for $k=1,\cdots,s$, are all FRR.
\end{itemize} } \end{remark} As in usual linear algebra, the following two types of operations are called elementary row (or column) operations on matrices over $R$: \begin{itemize} \item adding a multiple of a row (column) to another row (column), \item multiplying a row (column) by a unit of $R$. \end{itemize} \begin{lemma}\label{Lem2.2} Assume that $R$ is a finite local ring and~$A=\left(a_{ij}\right)_{m\times l}$ is a matrix over $R$. Then $A$ is FRR if and only if $m\le l$ and there is an invertible $l\times l$ matrix $P$ over $R$ such that $AP=\left(\,I\mid 0\,\right)_{m\times l}$. In particular, $A$ is FRR if and only if $A$ is right invertible. \end{lemma} \noindent{\bf Proof.}~ Note that $R$ has a unique maximal ideal $J$ such that the set difference $R \setminus J$ is just the set of all units of $R$. Since $R$ is finite, there is an integer $e>0$ such that $J^e=0$ but $J^{e-1}\ne 0$ ($e$ is called the {\em nilpotency index} of $J$, and we adopt the convention that $e=1$ if $R$ is a field). Thus we can pick a $\delta\in J^{e-1}$ with $\delta\ne 0$. For any row $(a_{i1},\cdots,a_{il})$ of $A$, we claim that $\bullet$~ {\it There is an entry $a_{ij}$ which is a unit of $R$.} \noindent For, otherwise, all $a_{i1},\cdots,a_{il}$ belong to $J$ and hence all $\delta a_{i1},\cdots,\delta a_{il}$ belong to $J^e=\{0\}$, that is, $\delta\cdot\left(a_{i1},\cdots,a_{il}\right)= {\bf 0}$, and the row $(a_{i1},\cdots,a_{il})$ of $A$ is linearly dependent, which contradicts the assumption that $A$ is FRR. Therefore, in the first row of $A$, we can find a unit. After some suitable permutation of the columns, we can assume that $a_{11}$ is a unit. 
With appropriate elementary operations on the columns, we can transform $A$ into an FRR matrix as follows: $$ \begin{pmatrix} 1 & 0 & \cdots & 0 \\ a'_{21} & a'_{22} & \cdots & a'_{2l} \\ \cdots & \cdots & \cdots & \cdots \\ a'_{m1}& a'_{m2} & \cdots & a'_{ml}\end{pmatrix}.$$ Next we assert that $\bullet$~ {\it Some $a'_{2j}$, for $2\le j\le l$, is a unit of $R$.} \noindent Assume the contrary; then $\delta a'_{21}\cdot(1,0,\cdots,0)-\delta(a'_{21},a'_{22},\cdots,a'_{2l})={\bf 0}$, which contradicts the assumption that the above matrix is FRR. One can continue with elementary operations on the columns in the same manner, until the desired form $\left(\,I\mid 0\,\right)$ is obtained. \qed \medskip Now we return to the general case where $R$ may not be local, and we identify $R$ with the direct sum $R_1\oplus\cdots\oplus R_s$ of local Frobenius rings $R_k$, $k=1,\cdots,s$, by the isomorphism (\ref{eq2.1}). Then we obtain the following: \begin{corollary}\label{Cor2.3} $A\in{\rm M}_{m\times l}(R)$ is FRR if and only if $A$ is right-invertible. \end{corollary} \noindent{\bf Proof.}~ By Lemma \ref{Lem2.1}, the matrix $A$ over $R$ is FRR if and only if every $A^{(k)}$ over $R_k$, for $k=1,\cdots,s$, is FRR (see (\ref{eq2.2})). Further, by Lemma \ref{Lem2.2}, for $k=1,\cdots,s$, every $A^{(k)}\in{\rm M}_{m\times l}(R_k)$ is FRR if and only if there is $B^{(k)}\in{\rm M}_{l\times m}(R_k)$ such that $A^{(k)}B^{(k)}=I$. Setting $B=\left(B^{(1)},\cdots,B^{(s)}\right)\in{\rm M}_{l\times m}(R)$, we obtain $AB=I$. \qed The following corollary follows from a typical linear algebra argument. \begin{corollary}\label{Cor2.4} Let $A$ be in ${\rm M}_{m\times m}(R)$. The following statements are equivalent: \begin{itemize} \item[(i)] $A$ is invertible. \item[(ii)] $A$ is non-singular. \item[(iii)] $A$ is FRR. \end{itemize} \end{corollary} \begin{proposition}\label{Prop2.5} Let $A\in{\rm M}_{m\times l}(R)$ be FRR and let $X=(x_1,\cdots,x_l)^T$, where $x_i$'s are variables. 
Then the set of solutions of the linear equation system $AX={\bf 0}$ is a free submodule in $R^l$ of rank $l-m$ and we have an FRR $(l-m)\times l$ matrix $G$ over $R$ whose rows form a basis of this free submodule. \end{proposition} \noindent{\bf Proof.}~ First, assume that $R$ is local. By Lemma \ref{Lem2.2}, we have an invertible matrix $P$ of size $l\times l$ such that $AP=\left(\,I\mid 0\,\right)_{m\times l}$. The set of solutions of the linear equation system $(AP)Y={\bf 0}$ in variables $Y=(y_1,\cdots,y_l)^T$ is clearly a free submodule of $R^l$ of rank $l-m$ with the rows of the matrix $\left(\,0\mid I\,\right)_{(l-m)\times l}$ as a basis. Rewriting $AX={\bf 0}$ as $(AP)(P^{-1}X)={\bf 0}$, we see that the set of solutions of $AX={\bf 0}$ is a free submodule of $R^l$ of rank $l-m$ with the rows of the matrix $G=\left(\,0\mid I\,\right)_{(l-m)\times l}P^T$ as a basis. Returning to the general case where $R$ is a commutative Frobenius ring, we have the identification in (\ref{eq2.1}). For each index $1\le k\le s$, we have a linear equation system $A^{(k)}X^{(k)}={\bf 0}$ with the matrix $A^{(k)}$ over the local ring $R_k$ being FRR (see Lemma \ref{Lem2.1}), so we have an FRR matrix $G^{(k)}$ over $R_k$ of size $(l-m)\times l$ such that the rows of $G^{(k)}$ form a basis of the free submodule of $R_k^l$ of the solutions of the system $A^{(k)}X^{(k)}={\bf 0}$. With the identification (\ref{eq2.2}), we can construct a matrix $G=\left(G^{(1)},\cdots,G^{(s)}\right)$ over $R$ of size $(l-m)\times l$ which is FRR too, and any vector ${\bf a}\in R^l$ is a solution of the system $AX={\bf 0}$ if and only if ${\bf a}$ is a combination of the rows of $G$. In other words, the set of solutions of the system $AX={\bf 0}$ is a free submodule of $R^l$ of rank $l-m$ with the rows of $G$ as a basis. 
\qed \begin{remark}\label{Rem-p6} {\rm With $A,G$ as in Proposition \ref{Prop2.5}, denote by $L_G$ and $L_A$ the free submodules of $R^l$ generated by the rows of $G$ and $A$, respectively. With the usual Euclidean inner product $\langle -,-\rangle$ on~$R^l$, Proposition \ref{Prop2.5} says that $(L_A)^\bot=L_G$. As a consequence, we see that} \begin{itemize} \item If $R$ is a finite commutative Frobenius ring, then a submodule $V$ of $R^l$ is free if and only if its orthogonal submodule $V^\bot$ is free. \end{itemize} {\rm The ``only if'' part is just Proposition \ref{Prop2.5}. For the ``if'' part, taking a generator matrix~$A$ of~$V^\bot$ (i.e., $A$ is FRR and $V^\bot=L_A$), since $R$ is a Frobenius ring, we have that $V=(V^\bot)^\bot=(L_A)^\bot=L_G$ is free. } \end{remark} \begin{proposition}\label{Prop2.6} Any FRR $m\times l$ matrix $A$ over $R$ can be, by appending rows, extended to an invertible $l\times l$ matrix $\tilde A=\left(\begin{array}{c}A\\ \hline A'\end{array}\right)$ (equivalently, any set of linearly independent vectors of $R^l$ can be extended to a basis of $R^l$). Furthermore, for any such extension $\tilde A=\left(\begin{array}{c}A\\ \hline A'\end{array}\right)$, partitioning $\tilde A^{-1}=(B\,|\,B')$ into an $l\times m$ submatrix $B$ and an $l\times(l-m)$ submatrix $B'$, we have that $B$ is a right inverse of $A$ and $B'^T$ is a generator matrix of the submodule of solutions of the linear equation system $AX={\bf 0}$. \end{proposition} \noindent{\bf Proof.}~ By Lemma \ref{Lem2.2}, we have a right inverse $B$ of $A$, and we denote by $B_1,\cdots,B_m$ the columns of $B$. By Proposition \ref{Prop2.5}, we have an $(l-m)\times l$ matrix $G$ whose rows form a basis of the free submodule of solutions of the linear equation system $AX={\bf 0}$, and we denote by $G_1^T,\cdots,G_{l-m}^T$ the columns of $G^T$. Then we form an $l\times l$ matrix $\tilde B=(B\,|\,G^T)$. 
Suppose $d_1,\cdots,d_m,e_1,\cdots,e_{l-m}\in R$ such that \begin{equation}\label{eq-p6} d_1B_1+\cdots+d_m B_m +e_1G_1^T+\cdots+e_{l-m}G_{l-m}^T={\bf 0}. \end{equation} Then, since $G_i^T$'s are solutions of $AX={\bf 0}$, we have $$ {\bf 0}=d_1AB_1+\cdots+d_m AB_m +e_1AG_1^T+\cdots+e_{l-m}AG_{l-m}^T =d_1AB_1+\cdots+d_m AB_m. $$ However, since $AB=I$, we get that $d_1=\cdots=d_m=0$. Returning to (\ref{eq-p6}), we have that $e_1G_1^T+\cdots+e_{l-m}G_{l-m}^T={\bf 0}$, hence $e_1=\cdots=e_{l-m}=0$ since $G$ is FRR. Thus, $\tilde B$ is a square matrix with linearly independent columns and it is hence invertible. Expressing $\tilde B^{-1}$ as $\tilde B^{-1}=\left(\begin{array}{c}A''\\\hline A'\end{array}\right)$, where $A''$ and $A'$ are formed by the first $m$ and the last $l-m$ rows, respectively, of $\tilde B^{-1}$, we can rewrite $\tilde B^{-1}\tilde B=I$ as $$\left(\begin{array}{c}A''\\\hline A'\end{array}\right)\cdot (B\,|\,G^T)=\begin{pmatrix}A''B& A''G^T\\ A'B&A'G^T \end{pmatrix} =\begin{pmatrix}I&0\\0&I\end{pmatrix}.$$ In particular, $A'\cdot(B\,|\,G^T)=(0\,|\,I)$. On the other hand, it follows from our choices of $B$ and $G$ that $A\cdot(B\,|\,G^T)=(I\,|\,0)$. Therefore, \begin{equation}\label{eq-p7} \left(\begin{array}{c}A\\\hline A'\end{array}\right)\cdot (B\,|\,G^T)=\begin{pmatrix}AB& AG^T\\ A'B&A'G^T \end{pmatrix} =\begin{pmatrix}I&0\\0&I\end{pmatrix}.\end{equation} Thus $\left(\begin{array}{c}A\\\hline A'\end{array}\right)$ is right invertible, which, by Corollary \ref{Cor2.4}, means that it is invertible and $(B\,|\,G^T)$ is an inverse of it. Similar to the equality (\ref{eq-p7}), for any $(l-m)\times l$ matrix $A'$, $l\times m$ matrix $B$ and $l\times(l-m)$ matrix~$B'$, the equality $\left(\begin{array}{c}A\\\hline A'\end{array}\right)\cdot (B\,|\,B')=\begin{pmatrix}I&0\\0&I\end{pmatrix}$ implies that $AB=I$ and $AB'=0$. 
\qed \section{Matrix Product Codes over Frobenius Rings} From this section until the end of this paper, we assume that $R$ is a finite commutative Frobenius ring as in (\ref{eq2.1}). Any non-empty subset $C$ of $R^n$ is called a code over $R$ of length $n$ and any vector in $C$ is called a codeword. Let $M$ denote the cardinality of $C$, i.e., $M=|C|$. Then $C$ is said to be an $(n,M)$ code over $R$. If $C$ is an $R$-submodule of $R^n$, then $C$ is called a linear code. With respect to the usual Euclidean inner product, we have the dual code $C^\bot$, which is always linear. When $C \subseteq C^\bot$ (resp., $C = C^\bot$), we say that $C$ is self-orthogonal (resp., self-dual). If $C$ is linear, then $(C^{\bot})^{\bot}=C$ and $|C|\cdot|C^\bot|=|R|^n$, as we have noted in Section 2. Let $A=(a_{ij})_{m\times l}\in{\rm M}_{m\times l}(R)$. For any index $1\le k\le m$, we denote by $U_A(k)$ the linear code over $R$ of length $l$ generated by the $i$th rows of $A$, for $i=1,2,\cdots,k$, and denote by $L_A(k)$ the linear code over $R$ of length $l$ generated by the $i$th rows of $A$, for $i=k,k+1,\cdots,m$. In particular, $U_A(m)=L_A(1)$ is the linear code over $R$ of length $l$ generated by all the rows of $A$. Thus, the set of solutions of the linear equation system $AX={\bf 0}$ is just the dual code $L_A(1)^\bot$ of the code $L_A(1)$. If $A$ is FRR, then $L_A(1)$ is a free submodule of $R^l$ of rank $m$, while its dual $L_A(1)^\bot$ is a free submodule of $R^l$ of rank $l-m$, and the matrix $G$ in Proposition \ref{Prop2.5} is a generator matrix of $L_A(1)^\bot$, i.e., $L_A(1)^\bot=L_G(1)$. For convenience, we also define $U_A(0)$ and $L_A(m+1)$ to be the zero code. Any $n\times m$ matrix can be viewed as a word over $R$ of length $nm$, so any non-empty subset $D$ of ${\rm M}_{n\times m}(R)$ can be viewed as a code over $R$ of length $nm$. 
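The size relation $|C|\cdot|C^\bot|=|R|^n$ and the double-dual property $(C^\bot)^\bot=C$ can be verified directly on a small example. The Python sketch below is our own illustration (the ring $\mathbb{Z}_4$, the length $2$ and the generator $(1,2)$ are hypothetical choices, not taken from the text):

```python
from itertools import product

MOD, n = 4, 2  # a toy linear code of length 2 over the Frobenius ring Z_4

def dual(code, mod, length):
    """Euclidean dual: all words orthogonal (mod `mod`) to every codeword."""
    return {v for v in product(range(mod), repeat=length)
            if all(sum(vi * ci for vi, ci in zip(v, c)) % mod == 0
                   for c in code)}

# C = linear code generated by the single row (1, 2) over Z_4.
gen = (1, 2)
C = {tuple(k * g % MOD for g in gen) for k in range(MOD)}

C_perp = dual(C, MOD, n)            # |C| * |C_perp| should equal |R|^n = 16
C_perp_perp = dual(C_perp, MOD, n)  # and (C^perp)^perp should recover C
```

Here $C=\{(0,0),(1,2),(2,0),(3,2)\}$ and $C^\bot=\{(0,0),(2,1),(0,2),(2,3)\}$, each of cardinality $4$, so $|C|\cdot|C^\bot|=16=|\mathbb{Z}_4|^2$, as guaranteed for Frobenius rings.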
From this point of view, for any two words ${\bf w},{\bf v}\in{\rm M}_{n\times m}(R)$, the Euclidean inner product can be computed as follows \begin{equation}\label{eq3.1} \langle{\bf w},{\bf v}\rangle={\rm tr}({\bf w}{\bf v}^T) , \end{equation} where ${\rm tr}({\bf w}{\bf v}^T)$ denotes the trace of the $n\times n$ matrix ${\bf w}{\bf v}^T$. For: writing ${\bf w}=(w_{ij})_{n\times m}$, ${\bf v}=(v_{ij})_{n\times m}$, then ${\rm tr}({\bf w}{\bf v}^T)=\sum_{i=1}^n\sum_{j=1}^m w_{ij}v_{ij}$, which is just the Euclidean inner product of ${\bf w}$ and ${\bf v}$. Note that \eqref{eq3.1} holds for any matrix size, including the usual words written in the form of row or column vectors. Let $A$ be an FRR $m\times l$ matrix over $R$, then the map $$ {\rm M}_{n\times m}(R) \longrightarrow {\rm M}_{n\times l}(R),\quad {\bf v}\longmapsto {\bf v}A $$ is an injective linear map, for: $A$ has a right inverse $B$, so that, if ${\bf d}A={\bf d'}A$, then ${\bf d}={\bf d}I={\bf d}AB={\bf d'}AB={\bf d'}$. Therefore, if the subset $D$ of ${\rm M}_{n\times m}(R)$ is an $(nm,M)$ code over $R$, then $DA=\left\{{\bf d}A\mid{\bf d}\in D\right\}$ is an $(nl, M)$ code over $R$, and $DA$ is linear if and only if $D$ is linear. Let $C_j$ be an $(n,M_j)$ code over $R$, for $j=1,\cdots,m$. For ${\bf c}_1\in C_1,$ $\cdots$, ${\bf c}_m\in C_m$, we have an $n\times m$ matrix $({\bf c}_1,\cdots,{\bf c}_m)$, where each ${\bf c}_j$ is written as a column vector. Hence, we have a subset of ${\rm M}_{n\times m}(R)$ as follows: $$ D=[C_1,\cdots,C_m]=\left\{({\bf c}_1,\cdots,{\bf c}_m)\mid {\bf c}_1\in C_1,\cdots,{\bf c}_m\in C_m \right\}. $$ Obviously, $[C_1,\cdots,C_m]$ is an $\left(nm,\prod_{j=1}^mM_j\right)$ code over $R$, and the code $[C_1,\cdots,C_m]$ is linear if and only if all $C_1,\cdots,C_m$ are linear. Let $A$ be an FRR $m\times l$ matrix over $R$. 
We have an $\left(n l,\prod_{j=1}^mM_j\right)$ code over $R$, called a {\em matrix product code} over $R$ (see \cite{Blackmore-Norton}), as follows: \begin{equation}\label{eq3.2} [C_1,\cdots,C_m]A=\left\{({\bf c}_1,\cdots,{\bf c}_m)A\mid {\bf c}_1\in C_1,\cdots,{\bf c}_m\in C_m \right\}, \end{equation} which is linear if all $C_1,\cdots,C_m$ are linear. It is easy to check that $[C_1,\cdots,C_m]A=[C_1,\cdots,C_m]$ if $C_1,\cdots,C_m$ are all linear, $A$ is square, and one of the following holds: \begin{itemize} \item $A$ is a diagonal matrix, \item $C_1\supseteq C_2\supseteq\cdots\supseteq C_m$ and $A$ is a lower triangular matrix, \item $C_1=C_2=\cdots=C_m$. \end{itemize} Any weight $w$ on $R$ can be extended to a weight on $R^n$ in the obvious way, hence the distance $d_w$ on $R^n$ with respect to the weight $w$ is defined by $d_w({\bf c},{\bf c'})=w({\bf c}-{\bf c'})$ for ${\bf c},{\bf c'}\in R^n$. The minimum distance of any code $C$ with respect to the weight $w$, denoted by $d_w(C)$, is defined to be the minimum distance with respect to the weight $w$ between any two distinct codewords in $C$; and we adopt the convention that $d_w(0)=n+1$ for the zero code $0=\{{\bf 0}\} \subseteq R^n$. In particular, we denote the Hamming weight by $w_H$ and the Hamming distance by $d_H$, hence $d_H(C)$ denotes the minimum Hamming distance of $C$. The following is a generalization of the main result of \cite{Ozbudak-Stichtenoth} to matrix product codes over finite Frobenius rings. \begin{theorem}\label{Thm3.1} Let $C_j$ be an $(n,M_j)$ code over $R$, for $j=1,\cdots,m$, and let $A=(a_{ij})_{m\times l}$ be an FRR matrix over $R$. Let $w$ be a weight on $R$. 
Then $C=[C_1,\cdots,C_m]A$ is an $\left(nl, \prod_{j=1}^m M_j\right)$ code over $R$ with minimum distance $d_w(C)$ satisfying \stepcounter{equation} \begin{equation}\tag{\theequation U}\label{eq3.3U} d_w(C)\ge \min\left\{d_H(C_k)d_w\big(U_A(k)\big)\mid k=1,\cdots,m\right\}, \end{equation} \begin{equation}\tag{\theequation L}\label{eq3.3L} d_w(C)\ge \min\left\{d_H(C_k)d_w\big(L_A(k)\big)\mid k=1,\cdots,m\right\}. \end{equation} \end{theorem} \noindent{\bf Proof.}~ Since $A$ is FRR, by \eqref{eq3.2} we have that $C$ is an $\big(nl, \prod_{j=1}^m M_j\big)$ code over $R$. For any two distinct codewords ${\bf c}=({\bf c}_1,\cdots,{\bf c}_m)A$, ${\bf c}'=({\bf c}'_1,\cdots,{\bf c}'_m)A$ of $C$, let ${\bf c}_j-{\bf c}'_j={\bf b}_j$, for $j=1,\cdots,m$. Then ${\bf c}-{\bf c}'=({\bf b}_1,\cdots,{\bf b}_m)A$ and $d_w({\bf c},{\bf c}')=w({\bf c}-{\bf c}')=w\big(({\bf b}_1,\cdots,{\bf b}_m)A\big)$. Note that there is an index $k$ such that ${\bf b}_j={\bf 0}$ for all $j<k$ but ${\bf b}_k\ne{\bf 0}$. Let $A_i$ denote the $i$th row of $A$. Then the word ${\bf c}-{\bf c}'$ which is an $n\times l$ matrix over $R$ is as follows: $$ {\bf c}-{\bf c}'=({\bf 0},\cdots,{\bf 0}, {\bf b}_k,\cdots,{\bf b}_m)A = ({\bf b}_k,\cdots,{\bf b}_m)\begin{pmatrix}A_{k}\\ \vdots\\ A_m\end{pmatrix}, $$ where ${\bf b}_k=(b_{1k},\cdots,b_{ik},\cdots,b_{nk})^T$ with $b_{ik}\in R$. For each non-zero $b_{ik}$, we get the $i$th row of the matrix ${\bf c}-{\bf c}'$ as follows: $$ b_{ik} A_k + b_{i,k+1}A_{k+1}+\cdots+b_{im}A_m, $$ which is a non-zero codeword of the code $L_A(k)$. Therefore, the contribution to $d_w({\bf c},{\bf c}')$ of the $i$th row of ${\bf c}-{\bf c}'$ is $w(b_{ik} A_k + b_{i,k+1}A_{k+1}+\cdots+b_{im}A_m)\ge d_w(L_A(k))$. Since $w_H({\bf b}_k)=d_H({\bf c}_k,{\bf c}'_k)$, the number of non-zero $b_{ik}$ is at least $d_H(C_k)$. In conclusion, $d_w({\bf c},{\bf c}')\ge d_H(C_k)d_w(L_A(k))$. Thus the inequality \eqref{eq3.3L} holds. 
Similarly, for ${\bf c}, {\bf c}'$ above, there is an index $k'$ such that ${\bf b}_j={\bf 0}$ for all $j>k'$ but ${\bf b}_{k'}\ne{\bf 0}$, so we can write ${\bf c}-{\bf c}'$ as follows: $$ {\bf c}-{\bf c}'=({\bf b}_1,\cdots,{\bf b}_{k'},{\bf 0},\cdots,{\bf 0})A = ({\bf b}_1,\cdots,{\bf b}_{k'})\begin{pmatrix}A_{1}\\ \vdots\\ A_{k'}\end{pmatrix}, $$ and obtain that $d_w({\bf c},{\bf c}')\ge d_H(C_{k'})d_w(U_A(k'))$. We are done for the inequality \eqref{eq3.3U}. \qed \begin{remark}\label{Rem-p9} {\rm \begin{itemize} \item[(i)] Though, in the above proof, it is stated: ``there is an index $k$ such that ...'', in fact any index $k$ can appear when ${\bf c},{\bf c}'$ run over the choices of two distinct codewords of~$C$, since we can choose ${\bf c}_j={\bf c}'_j$, for $j\ne k$, and ${\bf c}_k\ne{\bf c}'_k$. \item[(ii)] In general, the right hand sides of \eqref{eq3.3U} and \eqref{eq3.3L} are not strict lower bounds of the minimum distance (see Section 5). \item[(iii)] The two lower bounds in \eqref{eq3.3U} and \eqref{eq3.3L} cannot be directly compared in general: sometimes \eqref{eq3.3U} is better than \eqref{eq3.3L}, while some other times the opposite is true. \end{itemize} } \end{remark} The following result describes the dual of a matrix product code constructed with an FRR matrix. It may be regarded as a generalization of \cite[Theorem 6.6]{Blackmore-Norton} and \cite[Proposition 3]{Van-Asch}, but here we do not require the matrix to be square. \begin{theorem}\label{Thm3.2} Let $C_1, \cdots , C_m$ be codes over $R$ of length $n$, and let $A\in{\rm M}_{m\times l}(R)$ be FRR. Assume that $B\in{\rm M}_{l\times m}(R)$ is a right inverse of $A$ and $G\in{\rm M}_{(l-m)\times l}(R)$ is a generator matrix of the dual code $L_A(1)^\bot$ of $L_A(1)$. Set $\tilde B=\big(B\,|\,G^T\big)$. 
Then the dual code of $C=[C_1,\cdots,C_m]A$ is \begin{equation}\label{eq3.4} C^\bot=[\,C_1^\bot,\cdots,C_m^\bot,\,\underbrace{R^n,\cdots,R^n}_{l-m}\,]\tilde B^T =[C_1^\bot,\cdots,C_m^\bot]B^T+{\rm M}_{n\times(l-m)}(R)G. \end{equation} \end{theorem} \noindent{\bf Proof.}~ We denote by $\hat C_j$ the linear code generated by the vectors in $C_j$, and by $\hat C$ the linear code generated by the vectors in $C$. It is then easy to check that $C_j^\bot=\hat C_j^\bot$, $\hat C=[\hat C_1,\cdots,\hat C_m]A$, and $C^\bot=\hat C^\bot$. Thus, without loss of generality, in the following we assume that $C_1,\cdots,C_m$ are all linear codes. In the equality (\ref{eq-p7}) within the proof of Proposition \ref{Prop2.6}, we have seen that $\tilde B=\big(B\,|\,G^T\big)$ is an invertible $l\times l$ matrix such that~$A$ is the $m\times l$ submatrix of $\tilde A=\tilde B^{-1}$ formed by the first $m$ rows of $\tilde A$, i.e., $\tilde A=\tilde B^{-1}$ is partitioned as $\tilde A=\left(\begin{array}{c} A\\\hline A'\end{array}\right)$. It is obvious that \begin{equation}\label{eq3.5} C=[C_1,\cdots,C_m]A=[C_1,\cdots,C_m,\,\underbrace{0,\cdots,0}_{l-m}\,]\tilde A . \end{equation} Now we show that \begin{equation}\label{eq3.6} [C_1^\bot,\cdots,C_m^\bot,\,\underbrace{R^n,\cdots,R^n}_{l-m}\,]\tilde B^T ~\subseteq~ C^\bot. \end{equation} Let ${\bf c}=({\bf c}_1,\cdots,{\bf c}_m,{\bf 0},\cdots,{\bf 0})\tilde A\in C$ with ${\bf c}_j\in C_j$, and let ${\bf d}=({\bf d}_1,\cdots,{\bf d}_m,{\bf w}_{m+1},\cdots,{\bf w}_l)\tilde B^T$ with ${\bf d}_j\in C_j^\bot$ ($1 \le j \le m$) and ${\bf w}_j\in R^n$ ($m+1 \le j \le l$). 
By \eqref{eq3.1}, we have \begin{eqnarray*} \langle{\bf c},{\bf d}\rangle&=&{\rm tr} \left(({\bf c}_1,\cdots,{\bf c}_m,{\bf 0},\cdots,{\bf 0})\tilde A\cdot \big(({\bf d}_1,\cdots,{\bf d}_m,{\bf w}_{m+1},\cdots,{\bf w}_l)\tilde B^T\big)^T\right)\\ &=&{\rm tr}\left(({\bf c}_1,\cdots,{\bf c}_m,{\bf 0},\cdots,{\bf 0})\tilde A \tilde B\begin{pmatrix}{\bf d}_1^T\\ \vdots\\{\bf d}_m^T\\ {\bf w}_{m+1}^T\\\vdots\\{\bf w}_{l}^T \end{pmatrix} \right). \end{eqnarray*} Since $\tilde A\tilde B=I$ is the identity matrix, we obtain $$\langle{\bf c},{\bf d}\rangle ~=~{\rm tr}\left(({\bf c}_1,\cdots,{\bf c}_m) \begin{pmatrix}{\bf d}_1^T\\ \vdots\\{\bf d}_m^T\end{pmatrix}\right). $$ By the linearity of trace, we have $$\langle{\bf c},{\bf d}\rangle ={\rm tr}\left({\bf c}_1{\bf d}_1^T+\cdots+{\bf c}_m{\bf d}_m^T\right) ={\rm tr}\left({\bf c}_1{\bf d}_1^T\right)+\cdots +{\rm tr}\left({\bf c}_m{\bf d}_m^T\right). $$ By \eqref{eq3.1} again, we obtain $$\langle{\bf c},{\bf d}\rangle= \langle{\bf c}_1,{\bf d}_1\rangle+\cdots+\langle{\bf c}_m,{\bf d}_m\rangle=0. $$ Thus \eqref{eq3.6} is proved. Since $R$ is a Frobenius ring, $|C_j^\bot|=\frac{|R|^n}{|C_j|}$, for $j=1,\cdots,m$, and $|C^\bot|=\frac{|R|^{nl}}{|C|}$. It follows from \eqref{eq3.2} that \begin{eqnarray*} \big|[C_1^\bot,\cdots,C_m^\bot,\,\underbrace{R^n,\cdots,R^n}_{l-m}\,]\tilde B^T\big| &=&|C_1^\bot|\cdots|C_m^\bot|\cdot\,\underbrace{|R^n|\cdots|R^n|}_{l-m}\\ &=&\frac{|R|^n}{|C_1|}\cdots\frac{|R|^n}{|C_m|}\cdot|R|^{n(l-m)}=\frac{|R|^{nl}}{|C|} =|C^\bot|\,. \end{eqnarray*} Therefore, the equality in \eqref{eq3.6} must hold. In other words, we obtain $$ C^\bot=[\,C_1^\bot,\cdots,C_m^\bot,\,\underbrace{R^n,\cdots,R^n}_{l-m}\,]\tilde B^T, $$ which is the first equality in \eqref{eq3.4}. 
Further, since $\tilde B^T$ has the partitioned form $\tilde B^T=\left(\begin{array}{c} B^T\\\hline G\end{array}\right)$, \begin{eqnarray*} [C_1^\bot,\cdots,C_m^\bot,\,\underbrace{R^n,\cdots,R^n}_{l-m}\,]\tilde B^T &=&[C_1^\bot,\cdots,C_m^\bot]B^T+[\,\underbrace{R^n,\cdots,R^n}_{l-m}\,]G\\ &=&[C_1^\bot,\cdots,C_m^\bot]B^T+{\rm M}_{n\times(l-m)}(R)G , \end{eqnarray*} i.e., the second equality in \eqref{eq3.4} holds. \qed \begin{remark}\label{Rem-p10} {\rm By Proposition \ref{Prop2.6}, the conclusion of Theorem \ref{Thm3.2} can be rewritten as follows: for any $l\times l$ matrix $\tilde A=\left(\begin{array}{c} A\\\hline A'\end{array}\right)$ and $\tilde A^{-1}=\big(B\,|\,B'\big)$, we have that $$ C^\bot=[\,C_1^\bot,\cdots,C_m^\bot,\,\underbrace{R^n,\cdots,R^n}_{l-m}\,] (\tilde{A}^{-1})^T =[C_1^\bot,\cdots,C_m^\bot]B^T+{\rm M}_{n\times(l-m)}(R)B'^T. $$ } \end{remark} \medskip An $m\times l$ matrix $A$ over $R$, where $m\le l$, is said to be {\em quasi-orthogonal} if $AA^T$ is a diagonal square matrix where all the diagonal entries are units of $R$. For example, the matrix $\begin{pmatrix}1&1&1&0\\0&1&1&1\end{pmatrix}$ is quasi-orthogonal if the characteristic of $R$ is $2$, while the matrix $\begin{pmatrix}1&1&0\\0&0&1\end{pmatrix}$ is quasi-orthogonal if the characteristic of $R$ is $3$. \begin{theorem}\label{Thm3.3} Let $C_1,\cdots,C_m$ be self-orthogonal linear codes over $R$ of length $n$, let $A$ be a quasi-orthogonal $m\times l$ matrix over $R$ and let $G$ be a generator matrix of the dual code $L_A(1)^\bot$ of $L_A(1)$. Then the dual code of $C=[C_1,\cdots,C_m]A$ is $C^\bot=[C_1^\bot,\cdots,C_m^\bot]A+{\rm M}_{n\times(l-m)}(R)G$. In particular, $C$ is a self-orthogonal code. \end{theorem} \noindent{\bf Proof.}~ Assume that $AA^T=D=\begin{pmatrix}u_1\\ &\ddots\\&& u_m\end{pmatrix}$, with $u_i$ being units of $R$. Then $A^TD^{-1}$ is a right inverse of $A$. 
By the equality (\ref{eq-p7}) in the proof of Proposition \ref{Prop2.6}, the matrix $\left( A^TD^{-1} \vert G^T\right)$ is invertible, hence $\left( A^T \vert G^T\right)=\left( A^TD^{-1} \vert G^T\right) \begin{pmatrix}D \\ & I\end{pmatrix}$ is invertible. Thus, $\left(\begin{array}{c} A\\ \hline G \end{array}\right) =\left( A^T \vert G^T\right)^T$ and the product \begin{equation}\label{eq3.7} \left(\begin{array}{c} A\\ \hline G \end{array}\right) \left( A^T \vert G^T\right)=\begin{pmatrix} D\\ & GG^T \end{pmatrix} \end{equation} are invertible; hence $GG^T$ is an invertible $(l-m)\times(l-m)$ matrix, and $\left( A^T \vert G^T\right)\begin{pmatrix} D^{-1}\\ & (GG^T)^{-1}\end{pmatrix}$ is the inverse of $\left(\begin{array}{c} A\\ \hline G \end{array}\right)$. Note that $$ \left(\left( A^T \vert G^T\right)\begin{pmatrix} D^{-1}\\ & (GG^T)^{-1}\end{pmatrix}\right)^T =\begin{pmatrix} D^{-1}\\ & (GG^T)^{-1}\end{pmatrix} \left(\begin{array}{c} A\\ \hline G\end{array}\right), $$ and that $[R^n,\cdots,R^n](GG^T)^{-1}=[R^n,\cdots,R^n]$. By Theorem \ref{Thm3.2}, the dual code $C^\bot$ is as follows: $$ C^\bot=[C_1^\bot,\cdots,C_m^\bot,R^n,\cdots,R^n]\cdot \begin{pmatrix} D^{-1}\\ & (GG^T)^{-1}\end{pmatrix} \left(\begin{array}{c} A\\ \hline G\end{array}\right). $$ Since $D^{-1}=\begin{pmatrix}u_1^{-1}\\&\ddots\\&&u_m^{-1}\end{pmatrix}$ and clearly $u_j^{-1}C_j^\bot=C_j^\bot$, for $j=1, \cdots , m$, we have $$\begin{array}{rl} \, & [C_1^\bot,\cdots,C_m^\bot,R^n,\cdots,R^n]\cdot \begin{pmatrix} D^{-1}\\ & (GG^T)^{-1}\end{pmatrix}\\ = & \big[u_1^{-1}C_1^\bot,\,\cdots,u_m^{-1}C_m^\bot, R^n,\cdots,R^n] \\ = & [C_1^\bot,\cdots,C_m^\bot,R^n,\cdots,R^n], \end{array}$$ so \begin{eqnarray*} C^\bot&=&[C_1^\bot,\cdots,C_m^\bot,R^n,\cdots,R^n]\cdot \left(\begin{array}{c} A\\ \hline G\end{array}\right)\\ &=&[C_1^\bot,\cdots,C_m^\bot]A+{\rm M}_{n\times(l-m)}(R)G\\ &\supseteq&[C_1^\bot,\cdots,C_m^\bot]A\supseteq[C_1,\cdots,C_m]A=C. \end{eqnarray*} The proof is now complete. 
\qed \medskip The following corollary follows immediately from Theorem \ref{Thm3.3}: \begin{corollary}\label{Cor-p12} Let $C_1,\cdots,C_m$ be self-dual linear codes over $R$ of length $n$ and let $A$ be a quasi-orthogonal $m\times m$ matrix over $R$. Then $C=[C_1,\cdots,C_m]A$ is a self-dual code. \end{corollary} \section{Strongly Full-Row-Rank Matrices} Let $C$ be a non-zero code over $R$ of length $n$ and set $M=|C|$ to be the cardinality of $C$. If $d_H(C)=1$, then $M\le |R|^{n-d_H(C)+1}$. In particular, we have $M\le |R|^{n-d_H(C)+1}$ when $n=1$. If $n>1$ and $d_H(C)>1$, by puncturing at the last coordinate, we get an $(n-1,M, \ge d-1)$ code $C'$, where $d=d_H(C)$, and by induction, we obtain that $M\le|R|^{(n-1)-(d-1)+1}=|R|^{n-d+1}$. By this well-known argument (e.g., see \cite{NS}), we have the following {\em Singleton bound} for codes over the Frobenius ring $R$: \begin{equation}\label{eq4.1} d_H(C)\le n-\log_{|R|}|C|+1. \end{equation} If a code $C$ over $R$ of length $n$ attains the Singleton bound, i.e., the equality holds in (\ref{eq4.1}), then we say that $C$ is a {\em maximum distance separable} code over $R$, or an {\em MDS} code over $R$ for short. Note, in particular, that $C = R^n$ is an MDS code. We also adopt the convention that the zero code is an MDS code (this is consistent with the convention that $d_H(0) = n+1$). Note that, if $C$ is a free code over $R$ of length $l$, then (\ref{eq4.1}) becomes $$ d_H(C)\le l-{\rm rank}(C)+1, $$ and $C$ is MDS if and only if, for any non-zero codeword ${\bf c}\in C$, we have $w_H({\bf c})>l-{\rm rank}(C)$. Moreover, a free code of length $l$ and rank $m$, which we shall also call an $[l,m]$ code (over $R$), has FRR generator matrices of size $m\times l$. The following result is well known for codes over finite fields (see, for example, \cite[Theorems 5.3.2 and 5.3.3]{R}). 
\begin{lemma}\label{Lem4.1} Let $A\in{\rm M}_{m\times l}(R)$ be FRR and let $C=U_A(m)$ (i.e., $C$ is the free code over $R$ of length $l$ generated by the rows of $A$). Then the following statements are equivalent: \begin{itemize} \item[(i)] $C$ is an $[l,m]$ MDS code. \item[(ii)] Any $m\times m$ submatrix of $A$ is non-singular. \item[(iii)] The dual code $C ^\bot$ of $C$ is an $[l,l-m]$ MDS code. \end{itemize} \end{lemma} The proof of Lemma \ref{Lem4.1} is similar to that of \cite[Theorems 5.3.2 and 5.3.3]{R}. The analogous ingredients needed for our setting (over a finite commutative Frobenius ring) are found in Proposition \ref{Prop2.5} and Corollary \ref{Cor2.4}. \begin{remark}\label{Rem-p13} {\rm \begin{itemize} \item[(i)] Note that there is another statement $\bullet$~ {\it ``Any $(l-m)\times(l-m)$ submatrix of a check matrix of $C$ is non-singular''} \noindent which is equivalent to any of the three statements in Lemma \ref{Lem4.1}, but it is already indirectly covered by Lemma \ref{Lem4.1}. \item[(ii)] If $C=R^l$, then $A$ is invertible and $C^\bot=0$. In this case, we adopt the convention that the zero code is an MDS code with zero as a generator matrix. Recall that we have also adopted the convention that $L_Q(l+1)=0$, for any $l\times l$ matrix $Q$. \end{itemize} } \end{remark} In view of Lemma \ref{Lem4.1}, we introduce the following terminologies. \begin{definition}\label{Def4.1} Let $A$ be an FRR $m\times l$ matrix over $R$. \begin{itemize} \item[(i)] If $U_A(m)=L_A(1)$ is an $[l,m]$ MDS code, then we say that $A$ is a {\em strongly full-row-rank (SFRR) matrix}. \item[(ii)] For $t \ge 2$, if there is a sequence of indices $0 = i_0 <i_1<\cdots<i_t=m$ such that $U_A(i_h)$, for $h=0, 1,\cdots,t$, are MDS codes, then we say that $A$ is an {\em $(i_1,\cdots,i_{t-1})$-SFRR matrix}. (When $t=1$, $A$ is just an SFRR matrix.) 
\item[(iii)] For $t \ge 2$, if there is a sequence of indices $1=i_0 < i_1 < \cdots<i_{t-1} < i_t = m+1$ such that $L_A(i_h)$, for $h=0, 1,\cdots,t$, are MDS codes, then we say that $A$ is a {\em reversely $(i_1,\cdots,i_{t-1})$-SFRR matrix}. (When $t=1$, $A$ is just an SFRR matrix.) \end{itemize} \end{definition} \begin{proposition}\label{Prop4.2} Let $A\in{\rm M}_{m\times l}(R)$ be FRR and let $0= i_0 <i_1<\cdots<i_t=m$. Assume that $\tilde A\in{\rm M}_{l\times l}(R)$ is an invertible matrix with $A$ as the submatrix consisting of its first $m$ rows. Then $A$ is an $(i_1,\cdots,i_{t-1})$-SFRR matrix if and only if $(\tilde A^{-1})^T$ is a reversely $(i_1+1,\cdots,i_{t-1}+1, m+1)$-SFRR matrix (or, if $m=l$, a reversely $(i_1+1,\cdots,i_{t-1}+1)$-SFRR matrix). \end{proposition} \noindent{\bf Proof.}~ Since $(\tilde A^{-1})^T$ is invertible, $L_{(\tilde A^{-1})^T}(1) = R^l$. Hence, $U_A(0)=0$, $L_{(\tilde A^{-1})^T}(1)$ and $L_{(\tilde A^{-1})^T}(l+1)=0$ are all MDS codes. Let $k=i_h$ with $1\le h\le t$. It is enough to show that $U_A(k)=U_{\tilde A}(k)$ is an MDS code if and only if $L_{(\tilde A^{-1})^T}(k+1)$ is an MDS code. According to Proposition \ref{Prop2.6}, we write $\tilde A=\left(\begin{array}{c}A'\\ \hline A''\end{array}\right)$, where $A'$ is the submatrix consisting of the first $k$ rows of $A$, and write $\tilde A^{-1}=(B'\,|\,B'')$ correspondingly; then $U_A(k)=U_{A'}(k)$ and $U_{A'}(k)^\bot=L_{B''^T}(1)=L_{(\tilde A^{-1})^T}(k+1)$. Therefore, the proposition follows from Lemma \ref{Lem4.1} at once. \qed \medskip Recall that a matrix $A=(a_{ij})_{m\times l}$ over $R$ is said to be {\em non-singular by columns} if, for every $t$ with $1\le t\le m$, any $t\times t$ submatrix of the first $t$ rows of $A$ is non-singular. From Lemma \ref{Lem4.1} and Proposition \ref{Prop4.2}, we have the following obvious consequence which is a generalization of \cite[Proposition~7.2 and Theorem~6.6(i)]{Blackmore-Norton}. 
\begin{corollary}\label{Cor4.3} Let $A\in{\rm M}_{m\times l}(R)$ be FRR. Assume that $\tilde A\in{\rm M}_{l\times l}(R)$ is an invertible matrix that has $A$ as the submatrix of its first $m$ rows. Then the following statements are equivalent: \begin{itemize} \item[(i)] $A$ is non-singular by columns. \item[(ii)] $A$ is a $(1,2,\cdots,m-1)$-SFRR matrix. \item[(iii)] $(\tilde A^{-1})^T$ is a reversely $(2,\cdots,m,m+1)$-SFRR matrix. (When $m=l$, $(\tilde A^{-1})^T$ is a reversely $(2,\cdots,m)$-SFRR matrix.) \end{itemize} In particular, when $m=l$, the square matrix $A$ is non-singular by columns if and only if $(A^{-1})^T$ is reversely non-singular by columns. \end{corollary} \begin{example}\label{Ex4.1} {\rm Let $T=\begin{pmatrix}1&0&1\\ 0&1&1\\ 1&1&1\end{pmatrix}$, which is the matrix for the $(a+x|b+x|a+b+x)$-construction. Then $T$ is a $(2)$-SFRR matrix, but $T$ is not non-singular by columns because $U_T(1)$ is not MDS. We note that $T$ is also a reversely $(3)$-SFRR matrix. Observe also that $(T^{-1})^T=\begin{pmatrix}0&-1&1\\ -1&0&1\\ 1&1&-1\end{pmatrix}$ is a reversely $(3)$-SFRR matrix (cf. Proposition \ref{Prop4.2}). } \end{example} The following lower bound is a generalization of the main result of \cite{Blackmore-Norton}, and the condition for the equality is a generalization of \cite[Theorem 1]{H-L-R} to SFRR matrices over finite Frobenius rings. \begin{theorem}\label{Thm4.5} Let $A\in{\rm M}_{m\times l}(R)$ be an $(i_1,\cdots,i_{t-1})$-SFRR matrix, where $0=i_0<i_1<\cdots<i_t=m$. Let $C_1,\cdots,C_m$ be codes over $R$ of length $n$ and let $C=[C_1,\cdots,C_m]A$. Then \stepcounter{equation}\begin{equation}\tag{\theequation U}\label{eq4.2U} d_H(C)\ge\min\big\{(l-i_h+1)d_H(C_{k_h}) \mid h=1,\cdots, t,~ i_{h-1}<k_h\le i_h \big\}. 
\end{equation} Furthermore, if the following three conditions are satisfied: \begin{itemize} \item[{\rm(E1)}] $C_1,\cdots,C_m$ are linear, \item[{\rm(E2)}] $C_1=\cdots=C_{i_1}$, $C_{i_1+1}=\cdots=C_{i_2}$, $\cdots$, $C_{i_{t-1}+1}=\cdots=C_{i_t}(=C_m)$, \item[{\rm(E3)}] $C_{i_1}\supseteq C_{i_2}\supseteq \cdots \supseteq C_{i_t}$, \end{itemize} \noindent then equality holds in \eqref{eq4.2U}, i.e., \stepcounter{equation}\begin{equation}\tag{\theequation U}\label{eq4.3U} d_H(C)=\min\big\{(l-i_h+1)d_H(C_{i_h}) \mid h=1,\cdots, t \big\}. \end{equation} \end{theorem} \begin{remark}\label{Rem-p15} {\rm There is a dual version of Theorem \ref{Thm4.5}, which we now state. Let $A$ be a reversely $( i_1,\cdots,i_{t-1})$-SFRR $m\times l$ matrix over $R$, where $1=i_0 < i_1<\cdots<i_{t-1}< i_t = m+1$. Then the analogue of \eqref{eq4.2U} is: \addtocounter{equation}{-1} \begin{equation}\tag{\theequation L}\label{eq4.2L} d_H(C)\ge\min\big\{(l-m+i_h)d_H(C_{k_h}) \mid h=0, 1,\cdots, t-1,~ i_{h}\le k_h< i_{h+1} \big\}. \end{equation} With further conditions (E1$^*$)=(E1) and \begin{itemize} \item[(E2$^*$)] $(C_1=)C_{i_0}=\cdots=C_{i_1-1}$, $C_{i_1}=\cdots=C_{i_2-1}$, $\cdots$, $C_{i_{t-1}}=\cdots=C_{m}$, \item[(E3$^*$)] $C_{i_0}\subseteq C_{i_1}\subseteq \cdots \subseteq C_{i_{t-1}}$, \end{itemize} the analogous version of the equality \eqref{eq4.3U} is: \stepcounter{equation}\begin{equation}\tag{\theequation L}\label{eq4.3L} d_H(C)=\min\big\{(l-m+i_h)d_H(C_{i_h}) \mid h=0, 1,\cdots, t-1 \big\}. \end{equation} The proof for the dual version is the same as that for Theorem \ref{Thm4.5}. } \end{remark} \noindent{\bf Proof of Theorem \ref{Thm4.5}.}~ By Theorem \ref{Thm3.1} \eqref{eq3.3U}, we have that $$ d_H(C)\ge\min\big\{d_H(U_A(k))d_H(C_{k}) \mid 1\le k\le m \big\}. $$ If $i_{h-1}<k\le i_h$, then $U_A(k)\subseteq U_A(i_h)$, so $d_H(U_A(k))\ge d_H(U_A(i_h))=l-i_h+1$. Hence $$ d_H(U_A(k))d_H(C_{k})\ge(l-i_h+1)d_H(C_{k}),\qquad i_{h-1}<k\le i_h.$$ The inequality \eqref{eq4.2U} holds. 
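As a numerical aside (our illustration, not part of the proof), the inequality \eqref{eq4.2U} can be checked by brute force for small binary codes. Below, $T$ is the $(2)$-SFRR matrix of Example \ref{Ex4.1} (so $i_1=2$, $i_2=m=l=3$), and the codes $C_1=C_2$ of type $[4,2,2]$ and the $[4,1,4]$ repetition code $C_3$ are our own choices; the bound predicts $d_H(C)\ge\min\{2d_H(C_1),\,2d_H(C_2),\,d_H(C_3)\}=4$.

```python
# Brute-force minimum distance of a binary matrix product code
# C = [C_1,...,C_m]A; each codeword is viewed as an n x l array over GF(2).
from itertools import product

T = [[1, 0, 1], [0, 1, 1], [1, 1, 1]]

def span_gf2(gens):
    """All GF(2) linear combinations of the generator rows."""
    n = len(gens[0])
    return sorted({tuple(sum(c * g[j] for c, g in zip(coeffs, gens)) % 2
                         for j in range(n))
                   for coeffs in product([0, 1], repeat=len(gens))})

def min_distance_matrix_product(codes, A):
    m, l, n = len(A), len(A[0]), len(codes[0][0])
    best = None
    for rows in product(*codes):          # one codeword c_i from each C_i
        w = sum((sum(rows[i][r] * A[i][j] for i in range(m)) % 2)
                for r in range(n) for j in range(l))
        if w and (best is None or w < best):
            best = w
    return best

C1 = span_gf2([[1, 0, 1, 0], [0, 1, 0, 1]])   # a [4,2,2] code
C3 = span_gf2([[1, 1, 1, 1]])                 # [4,1,4] repetition code
print(min_distance_matrix_product([C1, C1, C3], T))   # 4
```

In this instance the bound is attained with equality; indeed $C_3\subseteq C_1=C_2$, so the conditions (E1)--(E3) hold and \eqref{eq4.3U} gives $d_H(C)=\min\{2\cdot 2,\,1\cdot 4\}=4$.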
In order to prove \eqref{eq4.3U}, first we show that the following lemma holds. \begin{lemma}\label{Lem4.6} Let $A$ be as in Theorem \ref{Thm4.5} and set $m_h=i_h-i_{h-1}$, for $h=1,\cdots,t$. Then there is a block lower triangular matrix $Q$: \begin{equation}\label{eq4.4} Q=\begin{pmatrix}Q_1\\ *&Q_2\\\vdots&\ddots&\ddots\\ *&\cdots&*&Q_t\end{pmatrix}, \end{equation} with $Q_h$ being an invertible $m_h\times m_h$ matrix for each $h=1,\cdots,t$, such that $QA$ is a block upper triangular matrix \begin{equation}\label{eq4.5} QA=\begin{pmatrix} I_{m_1}& * &\cdots& * & \cdots & * \\ & I_{m_2} & \cdots& *& \cdots & * \\ & & \ddots & \vdots & \vdots & \vdots \\ &&& I_{m_t} & \cdots & * \end{pmatrix} , \end{equation} where, for $h=1,\cdots,t$, the $i_h$th row of $QA$ takes the form \begin{equation}\label{eq4.6} \big(\underbrace{0,\;\cdots,\;0}_{i_h-1}\,,~ 1, u_{i_h,i_h+1},~ \cdots,~u_{i_h,l}\big) \end{equation} with $u_{i_h,j}$ being a unit of $R$ for every $j=i_h+1,\cdots,l$. \end{lemma} \noindent{\bf Proof.}~ Write $A=(a_{ij})_{m\times l}$, and consider the top-left $i_1\times i_1$ submatrix $A_1=(a_{ij})_{i_1\times i_1}$. By the assumption on $A$ and Lemma \ref{Lem4.1}, the submatrix $A_1$ is non-singular, hence there is an $m_1\times m_1$ (recall that $m_1=i_1$) invertible matrix $Q_1$ such that $Q_1A_1=I_{m_1}$. Setting $$ Q'=\begin{pmatrix}Q_1\\ & I_{m_2} \\ &&\ddots\\ &&&I_{m_t}\end{pmatrix} , $$ it follows that $$ Q'A=\begin{pmatrix} I_{m_1}&*&\cdots&*\\ *&*&\cdots&* \\ \vdots&\vdots&\vdots&\vdots\\ *&*&\cdots&*\end{pmatrix} . $$ By adding suitable multiples of the rows in the first row partition to the rows in the other row partitions, we obtain an invertible matrix $$ Q''=\begin{pmatrix}Q_1\\ * & I_{m_2} \\ \vdots&&\ddots\\ *&&&I_{m_t}\end{pmatrix} $$ such that $$ Q''A=\begin{pmatrix} I_{m_1}&*&\cdots&*\\ &*&\cdots&* \\ &\vdots&\vdots&\vdots\\ &*&\cdots&*\end{pmatrix}. 
$$ Note that, by the properties of determinants and Lemma \ref{Lem4.1}, $U_{Q''A}(i_h)$, for $h=1,\cdots,t$, are still MDS codes. The top-left $i_2\times i_2$ submatrix of $Q''A$ looks like $$ \left(\begin{array}{c|c} I_{m_1} & * \\\hline & A_2 \end{array}\right) , $$ which should be non-singular, hence $A_2$ is an invertible $m_2\times m_2$ matrix, where $m_2=i_2-i_1$. Thus we can repeat the above process until $Q$ satisfying conditions \eqref{eq4.4} and \eqref{eq4.5} is found. Note that the $i_h$th row of $QA$ in \eqref{eq4.5} has the form of \eqref{eq4.6}, except that it remains to show that $u_{i_h,j}$, for all $j\ge i_{h}+1$, are units of $R$. Consider the $i_h\times i_h$ submatrix of $QA$ formed by the first $i_h$ rows and the $1$st, $2$nd, $\cdots$, $(i_h-1)$th and the $j$th columns: $$ \begin{pmatrix} 1&\cdots&*&*\\ &\ddots&\vdots&\vdots\\ &&1&*\\&&&u_{i_h,j} \end{pmatrix} . $$ Since $U_{QA}(i_h)$ is still an MDS code, this submatrix is non-singular, hence its determinant $u_{i_h,j}$ is a unit of $R$. \qed \medskip With the notations in Lemma \ref{Lem4.6}, we return to the proof of Theorem \ref{Thm4.5}. Note that $Q^{-1}=(r_{ij})_{m\times m}$ is also a block lower triangular matrix $$Q^{-1}=(r_{ij})_{m\times m}=\begin{pmatrix}Q_1^{-1}\\ *&Q_2^{-1}\\ \vdots&\ddots&\ddots\\ *&\cdots&*&Q_t^{-1}\end{pmatrix},$$ that is, $$r_{ij}=0, \qquad i\le i_h < j,\quad h=1,\cdots,t-1. $$ Then \begin{eqnarray*} C&=&[C_1,\cdots,C_m]A=[C_1,\cdots,C_m]Q^{-1}QA\\ &=&[C_1,\cdots,C_m]Q^{-1} \cdot \begin{pmatrix} I_{m_1}& * &\cdots& * & \cdots & * \\ & I_{m_2} & \cdots& *& \cdots & * \\ & & \ddots & \vdots & \vdots & \vdots \\ &&& I_{m_t} & \cdots & * \end{pmatrix} . \end{eqnarray*} For any $({\bf c}_1,\cdots,{\bf c}_m)\in[C_1,\cdots,C_m]$, write $({\bf c}_1,\cdots,{\bf c}_m)Q^{-1}=({\bf c}'_1,\cdots,{\bf c}'_m)$ with $$ {\bf c}'_k= r_{1k}{\bf c}_1+\cdots+r_{mk}{\bf c}_m . 
$$ For $h=1, \cdots , t$ and $i_{h-1}<k\le i_h$, since $r_{ik}=0$ for $i \le i_{h-1}$, we have $${\bf c}'_k= r_{i_{h-1}+1,k}{\bf c}_{i_{h-1}+1}+r_{i_{h-1}+2,k}{\bf c}_{i_{h-1}+2} +\cdots+r_{mk}{\bf c}_m;$$ so, by the conditions (E1),(E2) and (E3), we have that ${\bf c}'_k\in C_k$. Hence, ${\bf c}'_k\in C_k$ for all $k=1,\cdots,m$, implying $[C_1,\cdots,C_m]Q^{-1}\subseteq [C_1,\cdots,C_m]$. Moreover, $Q^{-1}$ is an invertible matrix, so $[C_1,\cdots,C_m]Q^{-1}=[C_1,\cdots,C_m]$. Therefore, $$ C=[C_1,\cdots,C_m](QA) =[C_1,\cdots,C_m]\begin{pmatrix} I_{m_1}& * &\cdots& * & \cdots & * \\ & I_{m_2} & \cdots& *& \cdots & * \\ & & \ddots & \vdots & \vdots & \vdots \\ &&& I_{m_t} & \cdots & * \end{pmatrix}. $$ By the inequality \eqref{eq4.2U} and the condition (E2), \begin{equation}\label{eq4.7} d_H(C)\ge\min\big\{(l-i_h+1)d_H(C_{i_h}) \mid h=1,\cdots, t \big\}. \end{equation} To prove \eqref{eq4.3U}, it is enough to show that, for each $h$ with $1\le h\le t$, there is some ${\bf c}\in C$ such that $w_H({\bf c})=(l-i_h+1)d_H(C_{i_h})$. For this purpose, we take ${\bf c}_{i_h}=(c_1,\cdots,c_n)\in C_{i_h}$ such that $w_H({\bf c}_{i_h})=d_H(C_{i_h})$. By \eqref{eq4.6}, we get a codeword ${\bf c}\in C$ as follows: $$ {\bf c}=({\bf 0},\cdots,{\bf 0},{\bf c}_{i_h},{\bf 0},\cdots,{\bf 0})(QA) =\big(\underbrace{{\bf 0},\;\cdots,\;{\bf 0}}_{i_h-1}\,,~ {\bf c}_{i_h},\;u_{i_h,i_h+1}{\bf c}_{i_h},~ \cdots,~u_{i_h,l}{\bf c}_{i_h}\big). $$ Since $u_{i_h,j}$ are units for all $j\ge i_{h}+1$, it follows that $w_H(u_{i_h,j}{\bf c}_{i_h})=w_H({\bf c}_{i_h})=d_H(C_{i_h})$. Hence, we obtain that $$ w_H({\bf c})=w_H({\bf c}_{i_h})+w_H(u_{i_h,i_h+1}{\bf c}_{i_h})+ \cdots+w_H(u_{i_h,l}{\bf c}_{i_h})=(l-i_h+1)d_H(C_{i_h}), $$ which completes the proof of Theorem \ref{Thm4.5}. \qed \medskip We next consider the analogue of Theorem \ref{Thm4.5} for the dual code. Let $A\in{\rm M}_{m\times l}(R)$ be an $( i_1,\cdots,i_{t-1})$-SFRR matrix, where $0=i_0<i_1<\cdots<i_t=m$. 
Let $C_1,\cdots,C_m$ be codes over $R$ of length $n$ and let $C=[C_1,\cdots,C_m]A$. From Theorem \ref{Thm3.2}, we recall that the dual code is \begin{equation}\label{eq4.8} C^\bot=[C_1^\bot,\cdots,C_m^\bot,\,\underbrace{R^n,\cdots,R^n}_{l-m}\,] (\tilde A^{-1})^T, \end{equation} where $\tilde A\in{\rm M}_{l\times l}(R)$ is an invertible matrix with $A$ as the submatrix consisting of its first $m$ rows (see Remark \ref{Rem-p10}). Now we estimate the minimum distance of $C^\bot$. If $m<l$, we have $C_{m+1}^\bot=\cdots=C_l^\bot=R^n$ and set $i_{t+1}=l$ for convenience. \begin{theorem}\label{Thm4.7} Let the notations be as above. Then \begin{equation}\label{eq4.9} d_H(C^\bot)\ge\min\big\{ (i_h+1)d_H(C_{k_h}^\bot) \mid h=0,1,\cdots, t,~ i_{h}<k_h\le i_{h+1} \big\}. \end{equation} Furthermore, if the following three conditions are satisfied: \begin{itemize} \item[{\rm(E1)}] $C_1,\cdots,C_m$ are linear, \item[{\rm(E2)}] $C_1=\cdots=C_{i_1}$, $C_{i_1+1}=\cdots=C_{i_2}$, $\cdots$, $C_{i_{t-1}+1}=\cdots=C_{i_t}$, \item[{\rm(E3)}] $C_{i_1}\supseteq C_{i_2}\supseteq \cdots \supseteq C_{i_t}$, \end{itemize} then equality holds in \eqref{eq4.9}, i.e., \begin{equation}\label{eq4.10} d_H(C^\bot)=\min\big\{(i_h+1)d_H(C_{i_{h+1}}^\bot) \mid h=0,1,\cdots,t\big\}. \end{equation} \end{theorem} \begin{remark}\label{Rem-p18} {\rm If $m<l$ (i.e., $A$ is not square), then in the braces of the right hand side of \eqref{eq4.9}, the terms for $h=t$ are: $$(i_t+1)d_H(C_{k_t}^\bot)=m+1,\qquad i_t=m<k_t\le l=i_{t+1}. $$ Accordingly, in \eqref{eq4.10}, the term corresponding to $h=t$ is $m+1$. On the other hand, when $m=l$, then in \eqref{eq4.9}, there is no term for $h=t$ since no $k$ satisfies $l<k\le l$. Accordingly, in \eqref{eq4.10}, there is no term $m+1$ for $h=t$. } \end{remark} \noindent{\bf Proof of Theorem \ref{Thm4.7}.}~ Let $\tilde B=\tilde A^{-1}$. For $\tilde A$, we have that $\bullet$~ $U_{\tilde A}(i_h)=U_{A}(i_h)$, for $h=0,1,\cdots,t$, are MDS codes. 
\noindent By Proposition \ref{Prop4.2}, this is equivalent to $\bullet$~ $L_{\tilde B^T}(i_h+1)$, for $h=0,1,\cdots,t$, are MDS codes. (Note that $L_{\tilde B^T}(l+1)$ is trivially MDS.) Since ${\rm rank}\big(L_{\tilde B^T}(i_h+1)\big)=l-i_h$, we have that $$d_H\big(L_{\tilde B^T}(i_h+1)\big)=l-(l-i_h)+1=i_h+1, \qquad h=0,1,\cdots,t.$$ By the dual of Theorem \ref{Thm4.5} (see \eqref{eq4.2L}), we have that $$ d_H(C^\bot)\ge\min\big\{\,(i_h+1)d_H(C_{k_h}^\bot) \mid h=0,1,\cdots, t,~ i_{h}<k_h\le i_{h+1} \big\}. $$ However, note that, if $i_t=m<l$, then, for any $k$ with $m<k\le l$, we have that $C_k^\bot=R^n$, hence $d_H(C_k^\bot)=1$; so the terms for $h=t$ in the braces are: $$(i_t+1)d_H(C_{k_t}^\bot)=m+1,\qquad i_t=m<k_t\le l=i_{t+1}. $$ The inequality \eqref{eq4.9} is proved. Further, assume that the conditions (E1), (E2) and (E3) hold. Then, for the dual codes, the following conditions hold: \begin{itemize} \item[{\rm(E1$^*$)}] $C_1^\bot,\cdots,C_m^\bot$ are linear (note: $C_{m+1}^\bot, \cdots , C_l^\bot$ are trivially linear), \item[{\rm(E2$^*$)}] $C_1^\bot=\cdots=C_{i_1}^\bot$, $C_{i_1+1}^\bot=\cdots=C_{i_2}^\bot$, $\cdots$, $C_{i_{t-1}+1}^\bot=\cdots=C_{i_t}^\bot$, (note: $C_{m+1}^\bot = \cdots = C_l^\bot$ trivially), \item[{\rm(E3$^*$)}] $C_{i_0+1}^\bot\subseteq C_{i_1+1}^\bot\subseteq \cdots \subseteq C_{i_{t-1}+1}^\bot\subseteq C_{i_t+1}^\bot = R^n$. \end{itemize} By the dual of Theorem \ref{Thm4.5} (see \eqref{eq4.3L}), we obtain the equality \eqref{eq4.10}. (Note that, similar to the case of \eqref{eq4.9}, when $m < l$, the term corresponding to $h=t$ is $m+1$, while, for the case $m=l$, there is no term $m+1$ for $h=t$.) \qed \medskip As a special case, we have the following corollary on non-singular by columns matrices over $R$, which generalizes \cite[Theorems 3.7 and 6.6]{Blackmore-Norton} and \cite[Propositions 2 and 4]{Van-Asch}. However, in our case, for the bound on $d_H(C^\bot)$, we do not require $A$ to be square. 
\begin{corollary}\label{Cor4.8} Let $A\in{\rm M}_{m\times l}(R)$ be non-singular by columns, let $C_1,\cdots,C_m$ be codes over $R$ of length~$n$, and let $C=[C_1,\cdots,C_m]A$. Then $$d_H(C)\ge\min\big\{\,l\cdot d_H(C_1),\,(l-1)d_H(C_2),\,\cdots,\, (l-m+1)d_H(C_m)\,\big\}$$ and $$d_H(C^\bot)\ge \left\{ \begin{array}{ll} \min\big\{\, 1\cdot d_H(C_1^\bot),\,2\cdot d_H(C_2^\bot),\, \cdots,\,m\cdot d_H(C_m^\bot),\,m+1\,\big\} & {\mbox{ if }} m < l , \\ \min\big\{\, 1\cdot d_H(C_1^\bot),\,2\cdot d_H(C_2^\bot),\, \cdots,\,m\cdot d_H(C_m^\bot)\,\big\} & {\mbox{ if }} m = l . \end{array} \right. $$ Further, if $C_1,\cdots,C_m$ are linear and $C_1\supseteq\cdots\supseteq C_m$, then equalities are attained in all these inequalities. \end{corollary} In the next section, we further discuss the properties of codes constructed with a special type of $(m')$-SFRR matrices, and provide two examples of codes constructed in this manner. \section{Two-Way $(m')$-SFRR Matrices} Recall that the well-known $(a+x|b+x|a+b+x)$-construction is associated with the matrix $T=\begin{pmatrix}1&0&1\\ 0&1&1\\ 1&1&1\end{pmatrix}$ and the matrix product code $C=[C_1,C_1,C_2]T$. We have seen in Example \ref{Ex4.1} that $T$ is a $(2)$-SFRR matrix (but not a non-singular by columns matrix), so \eqref{eq4.2U} of Theorem \ref{Thm4.5} can be applied to show that the minimum distance satisfies $d_H(C)\ge\min\{2d_H(C_1),d_H(C_2)\}$. On the other hand, $T$ is also a reversely $(3)$-SFRR matrix, so \eqref{eq4.2L} of Theorem \ref{Thm4.5} is also applicable, yielding $d_H(C)\ge\min\{d_H(C_1),3d_H(C_2)\}$. Therefore, $$d_H(C)\ge\max\big\{\min\{2d_H(C_1),d_H(C_2)\},\; \min\{d_H(C_1),3d_H(C_2)\}\big\}.$$ However, for this construction, there is another well-known estimation (e.g., see \cite[Section V.B]{Forney}): $$ \min\{d_H(C_1\cap C_2),\,2d_H(C_1),3d_H(C_2)\}\ge d_H(C)\ge \min\{d_H(C_1\cap C_2),\,2d_H(C_1),3d_H(C_1+C_2)\}. 
$$ Though the two lower bounds above cannot be directly compared in general, in many cases the latter is better than the former. Furthermore, we also note that $C$ is self-dual in many cases though $T$ is not a quasi-orthogonal matrix. Inspired by these observations, we introduce the following notion. \begin{definition}\label{Def5.1} Let $A\in{\rm M}_{m\times l}(R)$ be FRR. If there is an index $m'$ with $1\le m'<m$ such that $A$ is both an $(m')$-SFRR matrix and a reversely $(m'+1)$-SFRR matrix, then we say that $A$ is a {\em two-way $(m')$-SFRR matrix}. \end{definition} \begin{remark}\label{Rem-p21} {\rm For $m'+m''=m$, any $m\times l$ matrix $A$ can be written as $A=\left(\begin{array}{c}A'\\ \hline A''\end{array}\right)$, where $A'$ is an $m'\times l$ matrix consisting of the first $m'$ rows of $A$ while $A''$ is an $m''\times l$ matrix consisting of the last $m''$ rows of $A$. With this partitioned form, $A$ is a two-way $(m')$-SFRR matrix if and only if $A'$, $A''$ and $A$ are all SFRR matrices.} \end{remark} The following property is a key point for constructing self-orthogonal matrix product codes. \begin{definition}\label{Def5.2} Let an $m\times l$ matrix $A=\left(\begin{array}{c}A'\\ \hline A''\end{array}\right)$ be partitioned into an $m'\times l$ matrix $A'$ and an $m''\times l$ matrix $A''$ as above. If every row of $A'$ is orthogonal to every row of $A''$ with respect to the Euclidean inner product on $R^l$, then we say that $A$ has a {\em partitioned orthogonal property}, or, more precisely, the {\em $m'$-partitioned orthogonal property}. \end{definition} \medskip A quasi-orthogonal two-way $(m')$-SFRR matrix obviously has the $m'$-partitioned orthogonal property. \begin{example}\label{Ex5.1} {\rm \begin{itemize} \item[(i)] As we have seen in Example \ref{Ex4.1}, for any Frobenius ring $R$, $T=\begin{pmatrix}1&0&1\\ 0 & 1&1\\ 1&1&1\end{pmatrix}$ is a two-way $(2)$-SFRR matrix. 
Furthermore, if $R$ has characteristic $2$, then $T$ has the $2$-partitioned orthogonal property, but $T$ is not quasi-orthogonal. In fact, if $R$ is the binary field, then $T$ is the unique two-way $(2)$-SFRR matrix of order $3$. \item[(ii)] $A= \begin{pmatrix}1&1\\ 1&-1\end{pmatrix}$ is a two-way $(1)$-SFRR matrix provided the characteristic of $R$ is different from~$2$. Moreover, $A$ is also a quasi-orthogonal matrix. If $R$ is the binary field, then there are no two-way $(1)$-SFRR matrices of order $2$ over $R$. However, if $R$ is a field of characteristic $2$ but not the binary field, taking any $1\ne\omega\in R$, then $\begin{pmatrix}1&\omega\\ \omega&1\end{pmatrix}$ is a two-way $(1)$-SFRR matrix which is also a quasi-orthogonal matrix. \item[(iii)] $\begin{pmatrix}1&0&1&1\\ 0 & 1&1&-1\\ 1&1&1&0\\ 1&-1&0&1\end{pmatrix}$ is a two-way $(2)$-SFRR matrix if the characteristic ${\rm char}\,R\ne 2$. However, if $3\nmid{\rm char}\,R$ and ${\rm char} \, R>2$, then $\begin{pmatrix}1&0&1&1\\ 0 & 1&1&-1\\ 1&1&-1&0\\ 1&-1&0&-1\end{pmatrix}$ is a two-way $(2)$-SFRR matrix which is also a quasi-orthogonal matrix. Note that, if $R$ is the binary field, there are no two-way $(m')$-SFRR matrices over~$R$ of size $4\times 4$, for any $1 \le m' \le 3$. \end{itemize} } \end{example} According to Remark \ref{Rem-p21}, we can partition a two-way $(m')$-SFRR matrix $A$ as $A=\left(\begin{array}{c}A'\\ \hline A''\end{array}\right)$, where $A'$ is an $m'\times l$ SFRR matrix and $A''$ is an $m''\times l$ SFRR matrix. For linear codes $C_1,\cdots,C_m$ over $R$ of length $n$, it is obvious that the following two matrix product codes are equivalent to each other: $$ \big[C_1,\cdots,C_{m'},C_{m'+1},\cdots,C_m\big] \left(\begin{array}{c}A'\\ \hline A''\end{array}\right)\cong \big[C_{m'+1},\cdots,C_m,C_{1},\cdots,C_{m'}\big] \left(\begin{array}{c}A''\\ \hline A'\end{array}\right). $$ Without loss of generality, we can further assume that $m'\ge m''$. 
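The claims of Example \ref{Ex5.1} are easy to confirm by machine over a prime field. The sketch below is our own illustration (the helper names are ours, and it assumes $R=\mathrm{GF}(p)$ with $p$ prime): by Remark \ref{Rem-p21}, $A$ is a two-way $(m')$-SFRR matrix if and only if $A'$, $A''$ and $A$ are all SFRR, which over a field amounts to every maximal square submatrix being non-singular.

```python
# Illustrative checks of Example 5.1 over GF(p), p prime.
from itertools import combinations

def det_mod_p(M, p):
    """Determinant over GF(p), p prime, by Gaussian elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c] % p, p - 2, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            M[r] = [(M[r][j] - f * M[c][j]) % p for j in range(n)]
    return det % p

def is_sfrr(A, p):
    """Over a field, SFRR <=> every maximal square submatrix is non-singular."""
    m, l = len(A), len(A[0])
    return all(det_mod_p([[A[r][c] for c in cols] for r in range(m)], p)
               for cols in combinations(range(l), m))

def is_two_way_sfrr(A, m1, p):
    return is_sfrr(A[:m1], p) and is_sfrr(A[m1:], p) and is_sfrr(A, p)

def partitioned_orthogonal(A, m1, p):
    """Every row of A' orthogonal to every row of A'' (Euclidean inner product)."""
    return all(sum(x * y for x, y in zip(r1, r2)) % p == 0
               for r1 in A[:m1] for r2 in A[m1:])

T = [[1, 0, 1], [0, 1, 1], [1, 1, 1]]
print(is_two_way_sfrr(T, 2, 2), partitioned_orthogonal(T, 2, 2))  # True True
print(is_two_way_sfrr([[1, 1], [1, -1]], 1, 5))   # True  (char != 2)
print(is_two_way_sfrr([[1, 1], [1, 1]], 1, 2))    # False over GF(2)
```

The last line only shows that this particular candidate fails over the binary field; the non-existence claim of Example \ref{Ex5.1}(ii) would require checking all $2\times 2$ binary matrices, which is equally easy by brute force.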
\medskip Let $A\in{\rm M}_{m\times l}(R)$, let $m'+m''=m$ with $m'\ge m''\ge 1$, and let $C'$ and $C''$ be linear codes over $R$ of length $n$. We consider the matrix product code \begin{equation}\label{eq5.1} C= [\,\underbrace{C',\cdots,C'}_{m'}\,,\,\underbrace{C'',\cdots,C''}_{m''}\,]A. \end{equation} If $A$ is a two-way $(m')$-SFRR matrix, then from \eqref{eq4.2U} and \eqref{eq4.2L} of Theorem \ref{Thm4.5}, we have a lower bound for $d_H(C)$ as follows: \begin{equation}\label{eq5.2} d_H(C)\ge\max\left\{\begin{array}{c}\min\{(l-m'+1)d_H(C'),\;(l-m+1)d_H(C'')\},\\[5pt] \min\{(l-m+1)d_H(C'),\;(l-m''+1)d_H(C'')\}\end{array}\right\}. \end{equation} Now we have some more bounds for $d_H(C)$ stated as follows. \begin{theorem}\label{Thm5.1} Let the notations be as in \eqref{eq5.1}. If $A$ is a two-way $(m')$-SFRR matrix, then \begin{equation}\label{eq5.3} d_H(C)\ge\min\big\{(l-m'+1)d_H(C'),\,(l-m''+1)d_H(C'+C''),\,(l-m+1)d_H(C'\cap C'')\big\} \end{equation} and \begin{equation}\label{eq5.4} d_H(C)\le\min\big\{(l-m'+1)d_H(C'),\,(l-m''+1)d_H(C''),\,(l-m+1)d_H(C'\cap C'')\big\}. \end{equation} \end{theorem} \noindent {\bf Proof.}~ Set $C_{\cap} = C'\cap C''$. Since $[C',\cdots,C',\,C'',\cdots,C'']\supseteq[C',\cdots,C',\,C_\cap,\cdots,C_\cap]$, we have $C=[C',\cdots,C',\,C'',\cdots,C'']A\supseteq[C',\cdots,C',\,C_\cap,\cdots,C_\cap]A$, so $$d_H(C)\le d_H\big( [\,\underbrace{C',\cdots,C'}_{m'}\,,\,\underbrace{C_{\cap},\cdots,C_{\cap}}_{m''}\,]A\big). $$ Since $C'\supseteq C_\cap$, by \eqref{eq4.3U} of Theorem \ref{Thm4.5}, we have \begin{equation}\label{eq5.5} d_H\big( [\,\underbrace{C',\cdots,C'}_{m'}\,,\,\underbrace{C_{\cap},\cdots,C_{\cap}}_{m''}\,]A\big) =\min\big\{(l-m'+1)d_H(C'),\,(l-m+1)d_H(C_\cap)\big\}, \end{equation} thus \begin{equation}\label{eq5.6} d_H\big(C\big)\le\min\big\{(l-m'+1)d_H(C'),\,(l-m+1)d_H(C'\cap C'')\big\}. 
\end{equation} Applying \eqref{eq4.3L} to $C=[C', \cdots , C',C'', \cdots , C'']A \supseteq[C_{\cap}, \cdots , C_{\cap},C'', \cdots , C'']A$ and observing that $C_\cap\subseteq C''$, we obtain \begin{equation}\label{eq5.7} d_H\big(C\big)\le\min\big\{(l-m''+1)d_H(C''),\,(l-m+1)d_H(C'\cap C'')\big\}. \end{equation} Combining \eqref{eq5.6} and \eqref{eq5.7}, the conclusion \eqref{eq5.4} follows. Now we proceed to prove \eqref{eq5.3}. We partition $A$ as $A=\left(\begin{array}{c}A'\\ \hline A''\end{array}\right)$, where $A'$ is the $m'\times l$ matrix consisting of the first $m'$ rows of $A$ while $A''$ is the $m''\times l$ matrix consisting of the last $m''$ rows of $A$. Assume that ${\bf c}'_1,\cdots,{\bf c}'_{m'}\in C'$, ${\bf c}''_1,\cdots,{\bf c}''_{m''}\in C''$ and $\big({\bf c}'_1,\cdots,{\bf c}'_{m'},\,{\bf c}''_1,\cdots,{\bf c}''_{m''}\big)\ne{\bf 0}$. We have a non-zero codeword of $C$ as follows: $$ {\bf c}=\big({\bf c}'_1,\cdots,{\bf c}'_{m'},\,{\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A =\big({\bf c}'_1,\cdots,{\bf c}'_{m'}\big)A'+\big({\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A''. $$ We consider the $m''\times m''$ submatrices of $A''$: there are two cases. For any matrix $M \in {\rm M}_{m\times l}(R)$ and $1 \le j_1 < \cdots < j_s \le l$, let $M(j_1, \cdots , j_s)$ denote the $m \times s$ submatrix of $M$ consisting of the $j_1$th, $\cdots$, $j_s$th columns of $M$. \noindent {\bf Case 1:} There are $m''$ columns of $A$, say the $j_1$th, $\cdots$, $j_{m''}$th columns, such that $$ \big({\bf c}'_1,\cdots,{\bf c}'_{m'}\big)A'(j_1,\cdots,j_{m''}) +\big({\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A''(j_1,\cdots,j_{m''}) ={\bf 0}. $$ Note that $A''(j_1,\cdots,j_{m''})$ is an invertible $m''\times m''$ submatrix of $A''$ because $A''$ is an SFRR matrix. 
Then $$ \big({\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A''(j_1,\cdots,j_{m''}) =-\big({\bf c}'_1,\cdots,{\bf c}'_{m'}\big)A'(j_1,\cdots,j_{m''}), $$ where the left hand side belongs to $$ [\,\underbrace{C'',\cdots,C''}_{m''}\,]A''(j_1,\cdots,j_{m''}) =[\,\underbrace{C'',\cdots,C''}_{m''}\,], $$ and the right hand side belongs to $$ [\,\underbrace{C',\cdots,C'}_{m'}\,]A'(j_1,\cdots,j_{m''}) \subseteq[\,\underbrace{C',\cdots,C'}_{m''}\,]. $$ Thus $$ \big({\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A''(j_1,\cdots,j_{m''}) \in[\,\underbrace{C'',\cdots,C''}_{m''}\,]\bigcap[\,\underbrace{C',\cdots,C'}_{m''}\,] =[\,\underbrace{C_{\cap},\cdots,C_{\cap}}_{m''}\,]. $$ It then follows that $$ \big({\bf c}''_1,\cdots,{\bf c}''_{m''}\big) \in [\,\underbrace{C_{\cap},\cdots,C_{\cap}}_{m''}\,]A''(j_1,\cdots,j_{m''})^{-1} =[\,\underbrace{C_{\cap},\cdots,C_{\cap}}_{m''}\,]. $$ Hence, $$ {\bf c}=\big({\bf c}'_1,\cdots,{\bf c}'_{m'},\,{\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A \in[\,\underbrace{C',\cdots,C'}_{m'}\,,\,\underbrace{C_{\cap},\cdots,C_{\cap}}_{m''}\,]A, $$ and by \eqref{eq5.5}, we get \begin{equation}\label{eq5.8} w_H({\bf c})\ge\min\big\{(l-m'+1)d_H(C'),\;(l-m+1)d_H(C' \cap C'')\big\} . \end{equation} \noindent {\bf Case 2:} There are at most $m''-1$ columns of $A$, say the first $s$ columns, where $s \le m''-1$, such that $$ \big({\bf c}'_1,\cdots,{\bf c}'_{m'}\big)A'(1,\cdots,s) +\big({\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A''(1,\cdots,s)={\bf 0}. $$ By the construction \eqref{eq5.1} of $C$, ${\bf c}$ is an $n\times l$ matrix. The above assumption means that any one of the last $l-m''+1$ columns of ${\bf c}$ is a non-zero vector of $R^n$. By the construction \eqref{eq5.1} of $C$, any column of ${\bf c}$ is a vector of $C'+C''$, so \begin{equation}\label{eq5.9} w_H({\bf c})\ge (l-m''+1)d_H(C'+C''). 
\end{equation} Summarizing the discussions for the two cases, we see that, for any non-zero codeword ${\bf c}$ of $C$, one of \eqref{eq5.8} and \eqref{eq5.9} holds, so we obtain $$ d_H(C)\ge\min\big\{(l-m'+1)d_H(C'),\,(l-m+1)d_H(C' \cap C''),\,(l-m''+1)d_H(C'+C'')\big\}, $$ which is just the required inequality \eqref{eq5.3}. \qed \begin{remark}\label{Rem-p24} {\rm \begin{itemize} \item[(i)] For the proof of \eqref{eq5.3}, if we start with considering the $m'\times m'$ submatrices of~$A'$, then we can obtain in a similar way that \begin{equation}\label{eq5.10} d_H(C)\ge\min\big\{(l-m''+1)d_H(C''),\,(l-m+1)d_H(C'\cap C''),\,(l-m'+1)d_H(C'+C'')\big\}. \end{equation} Observing that $l-m'+1\le l-m''+1$ since we have assumed that $m'\ge m''$, and that $d_H(C'+C'')\le d_H(C')$, we have that $$ (l-m'+1)d_H(C'+C'')\le\min\{(l-m''+1)d_H(C'+C''),\,(l-m'+1)d_H(C')\}, $$ so $$\begin{array}{c} \min\big\{(l-m''+1)d_H(C''),\,(l-m+1)d_H(C'\cap C''),\,(l-m'+1)d_H(C'+C'')\big\}\\ \le\min\big\{(l-m'+1)d_H(C'),\,(l-m+1)d_H(C'\cap C''),\,(l-m''+1)d_H(C'+C'')\big\}. \end{array}$$ In other words, under the assumption that $m'\ge m''$, the bound \eqref{eq5.10} is not better than that of \eqref{eq5.3}. \item[(ii)] However, the bounds in \eqref{eq5.2} and \eqref{eq5.3} cannot be compared in general, because $d_H(C'')$ in \eqref{eq5.2} and $(l-m''+1)d_H(C'+C'')$ in \eqref{eq5.3} are not comparable in general. Thus, we can take the larger of \eqref{eq5.2} and \eqref{eq5.3} as a better lower bound for $d_H(C)$. \end{itemize} } \end{remark} \begin{theorem}\label{Thm5.2} Let the notations be as in \eqref{eq5.1}. Further assume that the matrix $A$ has the $m'$-partitioned orthogonal property. If both $C'$ and $C''$ are self-orthogonal, then $C$ is self-orthogonal too. In particular, $C$ is self-dual provided both $C'$ and $C''$ are self-dual and $A$ is invertible. 
\end{theorem} \noindent{\bf Proof.}~ Write $A=\left(\begin{array}{c}A'\\ \hline A''\end{array}\right)$ with $A'$ being the $m'\times l$ matrix consisting of the first $m'$ rows of $A$ and $A''$ the $m''\times l$ matrix consisting of the last $m''$ rows of $A$. By the product of partitioned matrices, $$ AA^T=\left(\begin{array}{c}A'\\ \hline A''\end{array}\right) \big(\,A'^T\mid A''^T\,\big)= \begin{pmatrix} A'A'^T & A'A''^T\\ A''A'^T&A''A''^T\end{pmatrix}. $$ By the $m'$-partitioned orthogonal property, we have that $A'A''^T=0$ and $A''A'^T=0$, so $$ AA^T=\begin{pmatrix} A'A'^T \\ &A''A''^T\end{pmatrix}. $$ Then, for any two codewords $${\bf c}=\big({\bf c}'_1,\cdots,{\bf c}'_{m'},\,{\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A\in C, \quad {\bf d}=\big({\bf d}'_1,\cdots,{\bf d}'_{m'},\,{\bf d}''_1,\cdots,{\bf d}''_{m''}\big)A\in C, $$ with ${\bf c}'_1,\cdots,{\bf c}'_{m'},\,{\bf d}'_1,\cdots,{\bf d}'_{m'}\in C'$ and ${\bf c}''_1,\cdots,{\bf c}''_{m''},\,{\bf d}''_1,\cdots,{\bf d}''_{m''}\in C''$, by \eqref{eq3.1}, we have \begin{eqnarray*} &&\langle{\bf c},{\bf d}\rangle={\rm tr}\big({\bf c}{\bf d}^T\big) ={\rm tr}\left(\big({\bf c}'_1,\cdots,{\bf c}'_{m'},\,{\bf c}''_1,\cdots,{\bf c}''_{m''}\big)A A^T\big({\bf d}'_1,\cdots,{\bf d}'_{m'},\,{\bf d}''_1,\cdots,{\bf d}''_{m''}\big)^T\right)\\ &=&{\rm tr}\left(\big(({\bf c}'_1,\cdots,{\bf c}'_{m'})(A'A'^T),\, ({\bf c}''_1,\cdots,{\bf c}''_{m''})(A''A''^T)\big)\cdot \big({\bf d}'_1,\cdots,{\bf d}'_{m'},\,{\bf d}''_1,\cdots,{\bf d}''_{m''}\big)^T\right). \end{eqnarray*} However, $ [C',\cdots,C'](A'A'^T)\subseteq [C',\cdots,C']$, so $$({\bf c}'_1,\cdots,{\bf c}'_{m'})(A'A'^T)=(\bar{\bf c}'_1,\cdots,\bar{\bf c}'_{m'}),\qquad {\rm with}~~ \bar{\bf c}'_1,\cdots,\bar{\bf c}'_{m'}\in C'. $$ Similarly, $$({\bf c}''_1,\cdots,{\bf c}''_{m''})(A''A''^T)=(\bar{\bf c}''_1,\cdots,\bar{\bf c}''_{m''}),\qquad {\rm with}~~ \bar{\bf c}''_1,\cdots,\bar{\bf c}''_{m''}\in C''. 
$$ Since both $C'$ and $C''$ are self-orthogonal, \begin{eqnarray*} \langle{\bf c},{\bf d}\rangle&=&{\rm tr}\left( \big(\bar{\bf c}'_1,\cdots,\bar{\bf c}'_{m'},\,\bar{\bf c}''_1,\cdots,\bar{\bf c}''_{m''}\big) \cdot\big({\bf d}'_1,\cdots,{\bf d}'_{m'},\,{\bf d}''_1,\cdots,{\bf d}''_{m''}\big)^T\right)\\ &=&{\rm tr}\big(\bar{\bf c}'_1{\bf d}_1'^T\big)+\cdots+ {\rm tr}\big(\bar{\bf c}'_{m'}{\bf d}_{m'}'^T\big)+ {\rm tr}\big(\bar{\bf c}''_1{\bf d}_1''^T\big)+\cdots+ {\rm tr}\big(\bar{\bf c}''_{m''}{\bf d}_{m''}''^T\big)\\ &=&\langle\bar{\bf c}'_1,{\bf d}_1'\rangle+\cdots+\langle\bar{\bf c}'_{m'},{\bf d}_{m'}'\rangle+ \langle\bar{\bf c}''_1,{\bf d}_1''\rangle+\cdots+\langle\bar{\bf c}''_{m''},{\bf d}_{m''}''\rangle\\ &=& 0. \end{eqnarray*} Therefore, $C$ is self-orthogonal. Assume that both $C'$ and $C''$ are self-dual. Since $|C'||C'^\bot|=|R|^n$ and $|C''||C''^\bot|=|R|^n$, it follows that $$|C'|=|C'^\bot|=|R|^{n/2}=|C''|=|C''^\bot|.$$ When $A$ is invertible, we have $m =l$, so $$ |C|=|C'|^{m'}|C''|^{m''}=|R|^{(m'+m'')n/2}=|R|^{mn/2} = |R|^{ln/2}. $$ Furthermore, from $|C||C^\bot|=|R|^{ln}$, we have $|C^\bot|=|R|^{ln/2}$. Since $C\subseteq C^\bot$, it follows that $C=C^\bot$. \qed \begin{example}\label{Ex5.2} {\rm Take $R$ to be the binary field and $T=\begin{pmatrix}1&0&1\\ 0&1&1\\ 1&1&1\end{pmatrix}$. Then $T$ is a two-way $(2)$-SFRR matrix that has the $2$-partitioned orthogonal property. Recall that the matrix product code construction $C=[C',C',C'']T$ in \eqref{eq5.1} is just the well-known $(a+x|b+x|a+b+x)$ construction. It is also a quasi-cyclic code of co-index $3$ (see \cite[Theorem 6.7]{LS-I}). The bounds in \eqref{eq5.3} and \eqref{eq5.4} of Theorem \ref{Thm5.1} give the following well-known estimation on the minimum distance of $C$ (cf. \cite[Section V.B]{Forney}): $$ \min\{d_H(C'\cap C''),\,2d_H(C'),3d_H(C'')\}\ge d_H(C)\ge \min\{d_H(C'\cap C''),\,2d_H(C'),3d_H(C'+C'')\}. 
$$ Another lower bound is given by \eqref{eq5.2}: $$d_H(C)\ge\max\big\{\min\{2d_H(C'),d_H(C'')\},\;\min\{d_H(C'),3d_H(C'')\}\big\}.$$ It was noted in Remark \ref{Rem-p24} that these two lower bounds cannot be compared directly in general. We now consider a few explicit examples. First, we set $$\begin{array}{c|c|c|c} \mbox{Code} & \mbox{parameters} & \mbox{generator matrix} & \mbox{duality}\\[1mm]\hline C_1 & [4,1,4] & (1,1,1,1) &\mbox{self-orthogonal} \\[1mm]\hline C_2 & [4,2,2] & \begin{pmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 1 & 1\end{pmatrix} & \mbox{not self-orthogonal}\\ \hline C_3 & [4,2,2] & \begin{pmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\end{pmatrix} & \mbox{Type I self-dual} \\ \hline C_3' & [4,2,2] & \begin{pmatrix} 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1\end{pmatrix} & \mbox{Type I self-dual} \\ \hline \end{array}$$ \begin{itemize} \item[(i)] Take $C=[C_2,C_2,C_1]T$, with $C_1, C_2$ as above. Since $C_2\cap C_1=0$ and $C_2+C_1$ is a $[4,3,1]$ linear code with generator matrix $\begin{pmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 1 & 1\\ 0 & 0 & 1 & 0\end{pmatrix}$, the bound \eqref{eq5.2} shows that $d_H(C)\ge 4$, while the bound \eqref{eq5.3} gives $d_H(C)\ge 3$. Therefore, in this case, the bound \eqref{eq5.2} is better than the bound \eqref{eq5.3}. On the other hand, \eqref{eq5.4} shows that $d_H(C)\le 4$, hence, $d_H(C)=4$. Thus $C$ is a $[12,5,4]$ binary linear code. It can be verified directly that $C$ is not self-orthogonal. In fact, the following two codewords are not orthogonal to each other: \[ \left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{array} \right) T = \left( \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{array} \right) \quad {\mbox{ and }} \quad \left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right) T = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \\ 1 & 1 & 1 \end{array} \right) . \] \item[(ii)] Take $C=[C_3,C_3,C_3']T$ with $C_3, C_3'$ as above. 
We have $C_3\cap C_3'=C_1$, and $C_3+C_3'$ is a $[4,3,2]$ linear code with generator matrix $\begin{pmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 1\end{pmatrix}$. The bound \eqref{eq5.2} shows that $d_H(C)\ge 2$, while the bound \eqref{eq5.3} gives $d_H(C)\ge 4$. Hence, in this case, the bound \eqref{eq5.2} is weaker than the bound \eqref{eq5.3}. From \eqref{eq5.4}, we obtain $d_H(C)\le 4$, thus $d_H(C)=4$. Further, both $C'$ and $C''$ are self-dual in this case. Therefore, by Theorem \ref{Thm5.2}, $C$ is a self-dual $[12,6,4]$ binary linear code. However, $C$ is not of Type II: this follows from \cite[Proposition 7.1]{LS-I} together with the fact that $C_3'$ is not of Type II, but it can also be seen directly that the codeword \[ \left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right) T = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{array} \right) \] does not have Hamming weight divisible by 4. \item[(iii)] Take $C=[C_3,C_3,C_1]T$ with $C_1, C_3$ as above. Since $C_3\supseteq C_1$, by \eqref{eq4.3U} we have that $d_H(C)=\min\{2d_H(C_3),\,d_H(C_1)\}=4$. Hence, $C$ is a $[12,5,4]$ binary linear code. Since $C_3$ is self-dual and $C_1$ is self-orthogonal, $C$ is also self-orthogonal. \medskip We summarize the above examples in the following table: $$\begin{array}{c|c|c|c} \mbox{Code $C$} & \mbox{parameters} & \mbox{duality} & \mbox{argument for $d_H(C)$}\\[1mm]\hline [C_2,C_2,C_1]T & [12,5,4] & \mbox{not self-orthogonal} & \mbox{by \eqref{eq5.2}} \\[1mm]\hline [C_3,C_3,C_3']T & [12,6,4] & \mbox{Type I self-dual} & \mbox{by \eqref{eq5.3}}\\ \hline [C_3,C_3,C_1]T & [12,5,4] & \mbox{self-orthogonal} & \mbox{by \eqref{eq4.3U}} \\ \hline \end{array}$$ \end{itemize} } \end{example} \begin{example}\label{Ex5.3} {\rm Take $R$ to be the binary field. Take $A=\begin{pmatrix}1&1\\ &1&1\\ &&1&1\\&&&1&1\\1&1&1&1&1\end{pmatrix}$, which is a two-way $(4)$-SFRR matrix that has the $4$-partitioned orthogonal property.
In fact, $A$ is the matrix for constructing quasi-cyclic codes of co-index 5, see \cite[Theorem 6.14]{LS-I}. Similar to the construction of the $[24,12,8]$-Golay code from $[8,4,4]$-extended Hamming codes by the $(a+x|b+x|a+b+x)$-construction, we construct $C=[C',C',C',C',C'']A$, where $C'$ and $C''$ are $[8,4,4]$ extended Hamming codes with generator matrices $G'$ and $G''$, respectively, as follows: $$ G'=\begin{pmatrix}1&1&0&1&&&&1\\&1&1&0&1&&&1\\&&1&1&0&1&&1\\&&&1&1&0&1&1\end{pmatrix},\qquad G''=\begin{pmatrix}1&0&1&1&&&&1\\&1&0&1&1&&&1\\&&1&0&1&1&&1\\&&&1&0&1&1&1\end{pmatrix}. $$ It is known that both $C'$ and $C''$ are of Type II. Since $C'\cap C''$ is an $[8,1,8]$ code and $C'+C''$ is an $[8,7,2]$ code, by \eqref{eq5.3} and \eqref{eq5.4} we have that $$ 8=\min\{2\cdot 4,\;5\cdot 2,\;1\cdot 8\}\le d_H(C) \le\min\{2\cdot 4,\;5\cdot 4,\;1\cdot 8\}=8, $$ that is, $d_H(C)=8$. By Theorem \ref{Thm5.2}, $C$ is self-dual. Furthermore, since $C''$ is of Type II, so is $C$ (see \cite[Proposition 7.3]{LS-I}). We conclude that $C$ is a $[40,20,8]$ Type II binary code. } \end{example} \section*{Acknowledgements} This work was done while the first and third authors were visiting the Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, in Autumn 2011. They are grateful for the hospitality and support. They also thank NSFC for the support through Grants No.~10871079 and No.~11171370. The work of S.~Ling was partially supported by Singapore MOE-AcRF Tier 2 Research Grant T208B2204. It is also the authors' pleasure to thank the anonymous referees for their helpful comments.
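The minimum distances quoted in the examples above involve codes small enough to check by exhaustive enumeration. A minimal sketch (the generator matrices are exactly those stated for $C_1$, $C_2$, $C_3$ and for the extended Hamming codes $G'$, $G''$; the helper `min_distance` is ours, not from the paper):

```python
from itertools import product

def min_distance(G):
    """Minimum Hamming weight over all nonzero GF(2) combinations of the rows of G."""
    k, n = len(G), len(G[0])
    best = n
    for coeffs in product([0, 1], repeat=k):
        if not any(coeffs):
            continue  # skip the zero codeword
        word = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                word = [(w + r) % 2 for w, r in zip(word, row)]
        best = min(best, sum(word))
    return best

# Generator matrices as stated in the examples.
C1 = [[1, 1, 1, 1]]
C2 = [[1, 0, 1, 0], [0, 1, 1, 1]]
C3 = [[1, 0, 1, 0], [0, 1, 0, 1]]
Gp = [[1, 1, 0, 1, 0, 0, 0, 1],
      [0, 1, 1, 0, 1, 0, 0, 1],
      [0, 0, 1, 1, 0, 1, 0, 1],
      [0, 0, 0, 1, 1, 0, 1, 1]]
Gpp = [[1, 0, 1, 1, 0, 0, 0, 1],
       [0, 1, 0, 1, 1, 0, 0, 1],
       [0, 0, 1, 0, 1, 1, 0, 1],
       [0, 0, 0, 1, 0, 1, 1, 1]]

print(min_distance(C1), min_distance(C2), min_distance(C3),
      min_distance(Gp), min_distance(Gpp))
```

This confirms the stated parameters $[4,1,4]$, $[4,2,2]$, $[4,2,2]$ and $[8,4,4]$ for both extended Hamming codes.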
\section{Auxiliary lemmas}\label{app:useful_lemmas} \begin{lemma}\label{lemma:properties_functions} Assume $\varphi: \mathbb{R} \to \mathbb{R}$ is a bounded continuous function. Let $U,V,\widetilde{Z} \overset{\text{\tiny i.i.d.}}{\sim} \mathcal{N}(0,1)$. For all $\rho \geq 0$, the function \begin{align*} I_{\varphi}(\cdot\, ,\cdot \,;\rho): [0,+\infty) \times [0,\rho] \longrightarrow [0,+\infty), \, (r,q) \longmapsto I(U;\sqrt{r}\varphi(\sqrt{\rho - q}U + \sqrt{q}V) + \widetilde{Z}\vert V) \end{align*} is continuous and, for every $q \in [0,\rho]$, the map $r \mapsto I_{\varphi}(r, q;\rho)$ is twice-differentiable, nondecreasing, concave and $\nicefrac{\Vert \varphi \Vert_{\infty}^2}{2}$-Lipschitz on $\mathbb{R}_+$. Let $S \sim P_S$ (a probability distribution on $\mathbb{R}$) and $Z \sim \mathcal{N}(0,1)$. Fix $\alpha, \rho_s \geq 0$ and define \begin{align*} \widetilde{\psi}_\alpha: \! [0,+\infty)^2 \times [0,\rho_s] \! \longrightarrow \! [0,+\infty), (r,r_s,q_s) \! \longmapsto \! I(S;\sqrt{r_s}S + Z) \! + \! \alpha I_{\varphi}(r, q_s;\rho_s) - \frac{r_s(\rho_s - q_s)}{2}. \end{align*} Both functions $r \mapsto \sup_{r_s \geq 0} \widetilde{\psi}_\alpha(r,r_s,q_s)$ and $r \mapsto \inf_{q_s \in [0,\rho_s]} \sup_{r_s \geq 0} \widetilde{\psi}_\alpha(r,r_s,q_s)$ are nondecreasing, concave and $\nicefrac{\alpha \Vert \varphi \Vert_{\infty}^2}{2}$-Lipschitz on $\mathbb{R}_+$. \end{lemma} \begin{IEEEproof} Fix $\rho \geq 0$ and $q \in [0,\rho]$. Define $Y^{(r)} \triangleq \sqrt{r}\varphi(\sqrt{\rho - q}U + \sqrt{q}V) + \widetilde{Z}$. Then, $I_{\varphi}(r, q; \rho) = I(U;Y^{(r)}\vert V)$. We have: \begin{multline*} I_{\varphi}(r, q;\rho) \\ = -\mathbb{E} \ln \int du \frac{e^{-\frac{u^2}{2}}}{\sqrt{2\pi}} \,e^{-\frac{r}{2}(\varphi(\sqrt{\rho - q} U + \sqrt{q}V ) - \varphi(\sqrt{\rho - q} u + \sqrt{q}V ))^2 - \sqrt{r}(\varphi(\sqrt{\rho - q} U + \sqrt{q}V) - \varphi(\sqrt{\rho - q} u + \sqrt{q}V ))\widetilde{Z} } \:.
\end{multline*} Let $\langle - \rangle_r$ denote the expectation with respect to the posterior distribution of $U$ given $(Y^{(r)}, V)$. The assumptions on $\varphi$ provide the domination conditions justifying the continuity of $I_{\varphi}(\cdot\, ,\cdot \,;\rho)$ and the (twice) differentiability of $r \mapsto I_{\varphi}(r, q;\rho)$. Differentiating w.r.t.\ $r$, we obtain: \begin{align} \frac{\partial I_{\varphi}}{\partial r}(r,q;\rho) &= \frac{1}{2}\,\mathbb{E}\,\big\langle(\varphi(\sqrt{\rho - q}\,U + \sqrt{q}V ) - \varphi(\sqrt{\rho - q}\,u + \sqrt{q}V ))^2\big\rangle_{r}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad - \frac{1}{2\sqrt{r}}\mathbb{E}\big[\big\langle \varphi(\sqrt{\rho - q}\,u + \sqrt{q}V ) \big\rangle_{r}\widetilde{Z}\big]\nonumber\\ &= \frac{1}{2} \mathbb{E}\big[\varphi^2(\sqrt{\rho - q}\,U + \sqrt{q}V ) - \big\langle\varphi(\sqrt{\rho - q}\,u + \sqrt{q}V )\big\rangle_{r}^2\,\big] \geq 0 \;.\label{sign_derivative_I} \end{align} The second equality is obtained using integration by parts w.r.t.\ $\widetilde{Z}$ and the Nishimori identity $$ \mathbb{E}\big[\varphi(\sqrt{\rho - q}\,U + \sqrt{q}V ) \big\langle \varphi(\sqrt{\rho - q}\,u + \sqrt{q}V ) \big\rangle_{r}\big] =\mathbb{E}\,\big\langle \varphi(\sqrt{\rho - q}\,u + \sqrt{q}V ) \big\rangle_{r}^2\;. $$ The nonnegativity of the derivative follows from Jensen's inequality and the Nishimori identity: $$ \mathbb{E}\,\big\langle \varphi(\sqrt{\rho - q}\,u + \sqrt{q}V ) \big\rangle_{r}^2 \leq \mathbb{E}\,\big\langle \varphi^2(\sqrt{\rho - q}\,u + \sqrt{q}V ) \big\rangle_{r} = \mathbb{E}\,\varphi^2(\sqrt{\rho - q}\,U + \sqrt{q}V ) \;.
$$ Further differentiating and using integration by parts w.r.t.\ $\widetilde{Z}$ and the Nishimori identity where necessary yields: \begin{equation} \frac{\partial^2 I_\varphi}{\partial r^2}(r,q;\rho) = -\frac{1}{2}\mathbb{E}\,\big\langle \big(\varphi(\sqrt{\rho - q} u + \sqrt{q}V) - \langle \varphi(\sqrt{\rho - q} u + \sqrt{q}V) \rangle_r \big)^2\,\big\rangle_r^2 \leq 0 \;. \label{sign_second_derivative_I} \end{equation} From \eqref{sign_derivative_I} and \eqref{sign_second_derivative_I}, $I_{\varphi}(\cdot, q;\rho)$ is nondecreasing and concave. The Lipschitz property follows simply from: \begin{equation*} 0 \leq\frac{\partial I_{\varphi}}{\partial r}(r, q;\rho) \leq \frac{1}{2} \mathbb{E}\big[\varphi^2(\sqrt{\rho - q}\,U + \sqrt{q}V )\big] \leq \frac{\Vert \varphi \Vert_{\infty}^2}{2} \;. \end{equation*} The properties of $r \mapsto \sup_{r_s \geq 0} \widetilde{\psi}_\alpha(r,r_s,q_s)$ follow directly from the ones of $I_{\varphi}(\cdot, q_s; \rho_s)$ as $$ \sup_{r_s \geq 0} \widetilde{\psi}_\alpha(r,r_s,q_s) = \alpha I_{\varphi}(r, q_s; \rho_s) + \sup_{r_s \geq 0} \Big\{ I(S;\sqrt{r_s}S + Z) - \frac{r_s(\rho_s - q_s)}{2} \Big\} \;. $$ Finally, $r \mapsto \inf_{q_s \in [0,\rho_s]}\sup_{r_s \geq 0} \widetilde{\psi}_\alpha(r,r_s,q_s)$ is the infimum of nondecreasing, concave, $\nicefrac{\alpha \Vert \varphi \Vert_{\infty}^2}{2}$-Lipschitz functions, hence it has the same properties. \end{IEEEproof} \begin{lemma}\label{lemma:properties_I_PS} Assume that $P_S$ is a probability distribution on $\mathbb{R}$ with bounded support $\mathrm{supp}(P_S) \subseteq [-M_S,M_S]$, $M_S > 0$. Let $S \sim P_S, Z \sim \mathcal{N}(0,1)$ be random variables independent of each other. Define the functions \begin{eqnarray*} & I_{P_S}:[0,+\infty) \longrightarrow [0,+\infty), \quad r_s \longmapsto I(S;\sqrt{r_s}S + Z) \\& I_{P_S}^*:\mathbb{R} \longrightarrow [0,+\infty], \quad x \longmapsto \sup_{r_s \geq 0} I_{P_S}(r_s) + xr_s \;.
\end{eqnarray*} Then, $I_{P_S}$ is twice-differentiable, concave and nondecreasing, while $I_{P_S}^*$ is convex, nondecreasing, finite on $(-\infty,0)$, equal to $+\infty$ on $(0,+\infty)$ and $I_{P_S}^*(0) = \lim_{r_s \to +\infty} I_{P_S}(r_s) \in [0,+\infty]$. \end{lemma} \begin{IEEEproof} Let $Y^{(r_s)} = \sqrt{r_s}\,S + Z$. We have: \begin{equation*} I_{P_S}(r_s) = \frac{r_s \mathbb{E} S^2}{2} -\mathbb{E}\ln \int dP_S(s) \,e^{r_s S s +\sqrt{r_s}\,Z s -\frac{r_s s^2}{2}} \;. \end{equation*} Let $\langle - \rangle_{r_s}$ denote the expectation with respect to the posterior distribution of $S$ given $Y^{(r_s)}$. The assumption on the support of $P_S$ ensures the domination properties needed to show that $I_{P_S}$ is twice differentiable. Differentiating w.r.t.\ $r_s$, we obtain: \begin{equation}\label{sign_derivative_I_PS} I'_{P_S}(r_s) = \frac{\mathbb{E} S^2}{2}-\,\mathbb{E}\,S \langle s \rangle_{r_s} + \frac{\mathbb{E}\,\langle s^2 \rangle_{r_s}}{2} - \frac{\mathbb{E}\,Z \langle s \rangle_{r_s}}{2\sqrt{r_s}} = \frac{1}{2} \mathbb{E}\big[(S - \langle s \rangle_{r_s})^2\big] \geq 0 \;. \end{equation} Further differentiating and using integration by parts w.r.t.\ $Z$ and the Nishimori identity where necessary yields: \begin{equation}\label{sign_second_derivative_I_PS} I''_{P_S}(r_s) = -\frac{1}{2}\mathbb{E}\big[\big\langle (s - \langle s \rangle_{r_s})^2\,\big\rangle_{r_s}^2\,\big] \leq 0 \;. \end{equation} From \eqref{sign_derivative_I_PS} and \eqref{sign_second_derivative_I_PS}, $I_{P_S}$ is nondecreasing and concave. The function $I_{P_S}^*$ is the Legendre transform of the convex function $-I_{P_S}$, hence it is well-defined and convex. Besides, $I_{P_S}^*$ is defined as the supremum of nondecreasing affine functions of $x$ so it is nondecreasing. The trivial lower bound $I_{P_S}^*(x) \geq \sup_{r_s \geq 0} xr_s$ shows that $I_{P_S}^*$ is nonnegative and is equal to $+\infty$ on $(0,+\infty)$.
Because the support of $P_S$ is included in $[-M_S,M_S]$, its differential entropy is upper bounded by $\ln(2M_S)$ (the differential entropy of the uniform distribution on the segment $[-M_S,M_S]$) and we have $\forall x \in (-\infty,0)$: $$ I_{P_S}^*(x) \; \leq \; \sup_{r_s \geq 0} \bigg\{\ln\bigg(2M_S\sqrt{\frac{r_s}{2\pi e}}\bigg) +xr_s\bigg\} \; = \; \frac{1}{2}\ln\bigg(\frac{M_S^2}{\pi e^2 \vert x \vert}\bigg) < +\infty \;. $$ Finally, $I_{P_S}^*(0) = \sup_{r_s \geq 0} I_{P_S}(r_s) = \lim_{r_s \to +\infty} I_{P_S}(r_s)$. \end{IEEEproof} \section{Concentration of the free entropy}\label{app:concentration_free_entropy} Consider the inference problem~\eqref{interpolation_model_R}. Once both observations ${\mathbf{Y}}^{(t)}$ and $\widetilde{{\mathbf{Y}}}^{(t,R)}$ have been replaced by their definitions, the associated Hamiltonian reads: \begin{multline} \mathcal{H}_{t,R}({\mathbf{s}} ; {\mathbf{S}}, {\mathbf{Z}}, \widetilde{{\mathbf{Z}}}, {\mathbf{W}}) \triangleq \sum_{j=1}^{n} \frac{\lambda R}{4} x_j^2 - \frac{\lambda R}{2}\,X_j x_j - \sqrt{\frac{\lambda R}{2}}\,\widetilde{Z}_j x_j \\ +\sum_{{\mathbf{i}} \in \mathcal{I}} \frac{\lambda(1-t)}{2n^2} x_{i_1}^2 x_{i_2}^2 x_{i_3}^2 - \frac{\lambda(1-t)}{n^2}\, X_{i_1} X_{i_2} X_{i_3} x_{i_1} x_{i_2} x_{i_3} - \frac{\sqrt{\lambda(1-t)}}{n}\, Z_{{\mathbf{i}}} x_{i_1} x_{i_2} x_{i_3} \;. \end{multline} In this section, we show that the free entropy \begin{equation} \frac{1}{n} \ln \mathcal{Z}_{t,R}\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}}\big) = \frac{1}{n} \ln\int dP_s({\mathbf{s}}) \, e^{-\mathcal{H}_{t,R}({\mathbf{s}} ; {\mathbf{S}}, {\mathbf{Z}}, \widetilde{{\mathbf{Z}}}, {\mathbf{W}})} \end{equation} concentrates around its expectation. We will sometimes write $\frac{1}{n} \ln \mathcal{Z}_{t,R}$, omitting the arguments, to shorten notation. \begin{proposition}[Concentration of the free entropy]\label{prop:concentration_free_entropy} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold.
There exists a polynomial $C(\Vert \varphi \Vert_{\infty}, \Vert \varphi' \Vert_{\infty}, \Vert \varphi'' \Vert_{\infty}, M_S, \lambda, R)$ with positive coefficients such that $\forall t \in [0,1]$: \begin{equation}\label{bound_variance_free_entropy} \mathbb{E} \Bigg[\Bigg(\frac{ \ln \mathcal{Z}_{t,R}}{n} - \mathbb{E}\bigg[\frac{\ln \mathcal{Z}_{t,R}}{n} \bigg] \Bigg)^{\!\! 2}\:\Bigg] \leq \frac{C(\Vert \varphi \Vert_{\infty}, \Vert \varphi' \Vert_{\infty}, \Vert \varphi'' \Vert_{\infty}, M_S, \lambda, R)}{n} \;. \end{equation} \end{proposition} \begin{IEEEproof} First, we show that the free entropy concentrates on its conditional expectation given $({\mathbf{W}}, {\mathbf{S}})$. Thus, $\nicefrac{\ln \mathcal{Z}_{t,R}}{n}$ is seen as a function of the Gaussian random variables ${\mathbf{Z}}$, $\widetilde{{\mathbf{Z}}}$ and we work conditionally on $({\mathbf{W}}, {\mathbf{S}})$: $g({\mathbf{Z}}, \widetilde{{\mathbf{Z}}}) \equiv \nicefrac{\ln \mathcal{Z}_{t,R}}{n}$. By the Gaussian-Poincar\'{e} inequality (see \cite[Theorem 3.20]{boucheron_concentration}), we have: \begin{equation*} \mathbb{E} \Bigg[\Bigg(\frac{ \ln \mathcal{Z}_{t,R}}{n} - \mathbb{E}\bigg[\frac{\ln \mathcal{Z}_{t,R}}{n} \bigg\vert {\mathbf{S}}, {\mathbf{W}} \bigg]\Bigg)^{\!\! 2}\,\Bigg] \leq \mathbb{E}\big[\big\Vert \nabla g({\mathbf{Z}},\widetilde{{\mathbf{Z}}}) \big\Vert^2 \,\big] \;. \end{equation*} The squared norm of the gradient of $g$ reads $\Vert \nabla g\Vert^2 = \sum_{{\mathbf{i}} \in \mathcal{I}} \vert\nicefrac{\partial g}{\partial Z_{{\mathbf{i}}}}\vert^2 + \sum_{j} \vert\nicefrac{\partial g}{\partial \widetilde{Z}_{j}}\vert^2$. Each of these partial derivatives takes the form $\nicefrac{\partial g}{\partial x} = -n^{-1} \big\langle \nicefrac{\partial \mathcal{H}_{t,R}}{\partial x} \big\rangle$.
More precisely: \begin{equation*} \frac{\partial g}{\partial Z_{{\mathbf{i}}}} = n^{-1} \frac{\sqrt{\lambda (1-t)}}{n} \, \langle x_{i_1} x_{i_2} x_{i_3} \rangle_{t,R} \quad ; \quad \frac{\partial g}{\partial \widetilde{Z}_{j}} = n^{-1} \sqrt{\frac{\lambda R}{2}} \, \langle x_j \rangle_{t,R} \;. \end{equation*} We see that $\vert \nicefrac{\partial g}{\partial Z_{{\mathbf{i}}}} \vert \leq \frac{\sqrt{\lambda}}{n^2} \Vert \varphi \Vert_{\infty}^3$ and $\vert \frac{\partial g}{\partial \widetilde{Z}_{j}} \vert \leq n^{-1} \sqrt{\frac{\lambda R}{2}} \Vert \varphi \Vert_{\infty}$. Therefore: $$ \Vert \nabla g({\mathbf{Z}},\widetilde{{\mathbf{Z}}}) \Vert^2 \leq \frac{\lambda}{6n} \Vert \varphi \Vert_{\infty}^6 + \frac{\lambda R}{2n}\Vert \varphi \Vert_{\infty}^2 + \mathcal{O}(n^{-2})\;. $$ Making use of the Gaussian-Poincar\'{e} inequality, we obtain (the term $\mathcal{O}(n^{-2})$ is neglected): \begin{equation}\label{bound_variance_GP_1} \mathbb{E} \Bigg[\Bigg(\frac{ \ln \mathcal{Z}_{t,R}}{n} - \mathbb{E}\bigg[\frac{\ln \mathcal{Z}_{t,R}}{n} \bigg\vert {\mathbf{S}}, {\mathbf{W}} \bigg]\Bigg)^{\!\! 2}\,\Bigg] \leq \frac{\lambda \Vert \varphi \Vert_{\infty}^2}{6n} (\Vert \varphi \Vert_{\infty}^4 + 3 R) \;. \end{equation} Next, we show that $\mathbb{E}[\nicefrac{\ln \mathcal{Z}_{t,R}}{n} \vert {\mathbf{S}}, {\mathbf{W}}]$ concentrates on its conditional expectation given ${\mathbf{S}}$. Thus, $\mathbb{E}[\nicefrac{\ln \mathcal{Z}_{t,R}}{n} \vert {\mathbf{W}}, {\mathbf{S}}]$ is seen as a function of the Gaussian random variables ${\mathbf{W}}$ and we work conditionally on ${\mathbf{S}}$: $g({\mathbf{W}}) \equiv \mathbb{E}[\nicefrac{\ln \mathcal{Z}_{t,R}}{n} \vert {\mathbf{W}}, {\mathbf{S}}]$. We will again invoke the Gaussian-Poincar\'{e} inequality (see \cite[Theorem 3.20]{boucheron_concentration}).
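As an illustrative aside (not part of the proof), the Gaussian--Poincar\'{e} inequality $\mathrm{Var}(g({\mathbf{Z}})) \le \mathbb{E}\Vert \nabla g({\mathbf{Z}}) \Vert^2$ just invoked can be checked numerically on a toy function; here $g$ is a coordinate-wise average of $\tanh$ (our choice for the sketch, not the free entropy):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 10, 200_000

# Toy g(z) = (1/d) * sum_i tanh(z_i); each partial derivative is sech^2(z_i)/d,
# so ||grad g||^2 = sum_i sech^4(z_i) / d^2.
Z = rng.standard_normal((n_samples, d))
g_vals = np.tanh(Z).mean(axis=1)
grad_sq = (np.cosh(Z) ** -4).sum(axis=1) / d**2

var_g = g_vals.var()    # Monte Carlo estimate of Var(g(Z))
bound = grad_sq.mean()  # Monte Carlo estimate of E ||grad g||^2
print(var_g, bound)
```

With these choices the estimated variance stays below the estimated gradient bound, mirroring the mechanism used above for the free entropy.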
To lighten the equations we drop the subscripts in the Gibbs bracket $\langle - \rangle_{t,R}$, we introduce the notation $\widetilde{\mathbb{E}} \triangleq \mathbb{E}[\cdot \vert {\mathbf{S}},{\mathbf{W}}]$ and we define the following quantities: \begin{align*} {\mathbf{X}}' = \varphi'\bigg(\frac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}}\bigg) \quad ; \quad {\mathbf{x}}' = \varphi'\bigg(\frac{{\mathbf{W}} {\mathbf{s}}}{\sqrt{p}}\bigg) \;. \end{align*} The squared norm of the gradient of $g$ reads $\Vert \nabla g\Vert^2 = \sum_{i, j} \vert\nicefrac{\partial g}{\partial W_{ij}}\vert^2$ where $\forall (i,j) \in \{1,\dots,n\} \times \{1,\dots, p\}$: \begin{align*} \frac{\partial g}{\partial W_{ij}} &= \mathcal{O}(n^{-\nicefrac{5}{2}})+ \frac{1}{2n} \sum_{\substack{k = 1\\ k \neq i}}^n \sum_{\substack{\ell = 1\\ \ell \neq k,i}}^{n} \bigg(-\frac{\lambda(1-t)}{n^2\sqrt{p}} \widetilde{\mathbb{E}} \langle x_{i} x'_i s_j x_{k}^2 x_{\ell}^2 \rangle +\frac{\lambda(1-t)}{n^2\sqrt{p}}\, S_j X'_{i} X_{k} X_{\ell} \widetilde{\mathbb{E}}\langle x_{i} x_{k} x_{\ell}\rangle\\ &\qquad\qquad\qquad\qquad\quad\;\: +\frac{\lambda(1-t)}{n^2\sqrt{p}}\, X_{i} X_{k} X_{\ell}\,\widetilde{\mathbb{E}} \langle s_j x'_{i} x_{k} x_{\ell} \rangle +\frac{\sqrt{\lambda(1-t)}}{n\sqrt{p}}\, \widetilde{\mathbb{E}} Z_{i k \ell} \langle s_j x'_{i} x_{k} x_{\ell} \rangle\bigg)\\ &\quad +\frac{1}{n} \sqrt{\frac{\lambda R}{2p}} \bigg(\!-\sqrt{\frac{\lambda R}{2}} \widetilde{\mathbb{E}}\langle s_j x'_i x_i\rangle + \sqrt{\frac{\lambda R}{2}}\,S_j X'_i \,\widetilde{\mathbb{E}}\langle x_i \rangle + \sqrt{\frac{\lambda R}{2}}\, X_i \,\widetilde{\mathbb{E}}\langle s_j x'_i\rangle + \widetilde{\mathbb{E}}\widetilde{Z}_i \langle s_j x'_i \rangle\bigg). \end{align*} The term $\mathcal{O}(n^{-\nicefrac{5}{2}})$ comes from those triplets ${\mathbf{i}}$ in $\mathcal{I} \triangleq \{(i,j,k) \in [n]^3: i \leq j \leq k \}$ whose elements are not all distinct.
In order to further simplify these partial derivatives, we do an integration by parts with respect to the Gaussian noises ${\mathbf{Z}}$ and $\widetilde{{\mathbf{Z}}}$. It yields: \begin{align*} \widetilde{\mathbb{E}} Z_{i k \ell} \langle s_j x'_{i} x_{k} x_{\ell} \rangle &= \frac{\sqrt{\lambda (1-t)}}{n}\widetilde{\mathbb{E}}\langle s_j x'_{i} x_i x_{k}^2 x_{\ell}^2 \rangle - \frac{\sqrt{\lambda (1-t)}}{n}\widetilde{\mathbb{E}} \langle s_j x'_{i} x_{k} x_{\ell} \rangle \langle x_{i} x_{k} x_{\ell} \rangle \;;\\ \widetilde{\mathbb{E}}\widetilde{Z}_i \langle s_j x'_i \rangle &= \sqrt{\frac{\lambda R}{2}}\widetilde{\mathbb{E}}\langle s_j x'_i x_i\rangle - \sqrt{\frac{\lambda R}{2}}\widetilde{\mathbb{E}}\langle s_j x'_i\rangle \langle x_i\rangle\;. \end{align*} Therefore, $\forall (i,j) \in \{1,\dots,n\} \times \{1,\dots, p\}$: \begin{multline*} \frac{\partial g}{\partial W_{ij}} = \mathcal{O}(n^{-\nicefrac{5}{2}}) +\frac{\lambda R}{2 n\sqrt{p}} \bigg(S_j X'_i \,\widetilde{\mathbb{E}}\langle x_i \rangle + X_i \,\widetilde{\mathbb{E}}\langle s_j x'_i\rangle - \widetilde{\mathbb{E}}\langle s_j x'_i\rangle \langle x_i\rangle\bigg)\\ +\frac{\lambda(1-t)}{2n^3\sqrt{p}} \sum_{\substack{k = 1\\ k \neq i}}^n \sum_{\substack{\ell = 1\\ \ell \neq k,i}}^{n} \bigg(\! S_j X'_{i} X_{k} X_{\ell} \widetilde{\mathbb{E}}\langle x_{i} x_{k} x_{\ell}\rangle + X_{i} X_{k} X_{\ell}\,\widetilde{\mathbb{E}} \langle s_j x'_{i} x_{k} x_{\ell} \rangle -\widetilde{\mathbb{E}} \langle s_j x'_{i} x_{k} x_{\ell} \rangle \langle x_{i} x_{k} x_{\ell} \rangle \!\bigg)\,. \end{multline*} Making use of the boundedness assumptions, we obtain the following uniform bound on the partial derivatives: \begin{equation*} \bigg\vert \frac{\partial g}{\partial W_{ij}} \bigg\vert \leq \mathcal{O}(n^{-\nicefrac{5}{2}}) + \frac{3 \lambda M_S}{2n\sqrt{p}}\Vert \varphi \Vert_{\infty} \Vert \varphi' \Vert_{\infty}\big(\Vert \varphi \Vert_{\infty}^4 + R \big)\;. 
\end{equation*} Therefore, $\Vert \nabla g({\mathbf{W}}) \Vert^2 \leq \frac{9 \lambda^2 M_S^2}{4n}\Vert \varphi \Vert_{\infty}^2 \Vert \varphi' \Vert_{\infty}^2\big(\Vert \varphi \Vert_{\infty}^4 + R \big)^2 + \mathcal{O}(n^{-3})$ and the Gaussian-Poincar\'{e} inequality yields (the term $\mathcal{O}(n^{-3})$ is neglected): \begin{equation}\label{bound_variance_GP_2} \mathbb{E} \Bigg[\Bigg(\mathbb{E}\bigg[\frac{\ln \mathcal{Z}_{t,R}}{n} \bigg\vert {\mathbf{S}}, {\mathbf{W}} \bigg] - \mathbb{E}\bigg[\frac{\ln \mathcal{Z}_{t,R}}{n} \bigg\vert {\mathbf{S}}\bigg]\Bigg)^{\!\! 2}\,\Bigg] \leq \frac{9 \lambda^2 M_S^2}{4n}\Vert \varphi \Vert_{\infty}^2 \Vert \varphi' \Vert_{\infty}^2\big(\Vert \varphi \Vert_{\infty}^4 + R \big)^2 \;. \end{equation} Finally, it remains to show that $\mathbb{E}[\nicefrac{\ln \mathcal{Z}_{t,R}}{n} \vert {\mathbf{S}}]$ concentrates on its expectation. We will show that the function $$ g: \mathtt{S} \in [-M_S,M_S]^p \mapsto \mathbb{E}[\nicefrac{\ln \mathcal{Z}_{t,R}}{n} \vert {\mathbf{S}} = \mathtt{S} ] $$ has bounded differences. To do so, we will show that the partial derivatives of $g$ are uniformly bounded. Then we will apply the bounded differences inequality, also called McDiarmid's inequality, to get the concentration result (see \cite{McDiarmid}, \cite[Corollary 3.2]{boucheron_concentration}). Similarly to what has been done with the random vector ${\mathbf{S}}$, we define $\mathtt{X} = \varphi\big(\frac{{\mathbf{W}} {\mathtt{S}}}{\sqrt{p}}\big)$, $\mathtt{X}' = \varphi'\big(\frac{{\mathbf{W}} {\mathtt{S}}}{\sqrt{p}}\big)$ and $\mathtt{X}'' = \varphi''\big(\frac{{\mathbf{W}} {\mathtt{S}}}{\sqrt{p}}\big)$.
For $\ell \in \{1,\dots,p\}$, the partial derivative of $g$ with respect to its $\ell^{\mathrm{th}}$ coordinate reads: \begin{align} \frac{\partial g}{\partial {\mathtt{S}}_\ell} &= \frac{\lambda(1-t)}{n^3\sqrt{p}} \sum_{{\mathbf{i}} \in \mathcal{I}} \mathbb{E}\big[(W_{i_1 \ell} {\mathtt{X}}'_{i_1} {\mathtt{X}}_{i_2} {\mathtt{X}}_{i_3} + W_{i_2 \ell} {\mathtt{X}}_{i_1} {\mathtt{X}}'_{i_2} {\mathtt{X}}_{i_3} + W_{i_3 \ell} {\mathtt{X}}_{i_1} {\mathtt{X}}_{i_2} {\mathtt{X}}'_{i_3} ) \langle x_{i_1} x_{i_2} x_{i_3}\rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad +\frac{\lambda R}{2n\sqrt{p}} \sum_{i=1}^{n} \mathbb{E}\big[W_{i\ell}{\mathtt{X}}'_i \langle x_i \rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\nonumber\\ &= \mathcal{O}(n^{-\nicefrac{3}{2}}) + \frac{\lambda(1-t)}{2n^3\sqrt{p}} \sum_{i=1}^{n}\sum_{\substack{j = 1\\ j \neq i}}^n \sum_{\substack{k = 1\\ k \neq i,j}}^{n} \mathbb{E}\big[W_{i \ell} {\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \langle x_{i} x_{j} x_{k}\rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad +\frac{\lambda R}{2n\sqrt{p}} \sum_{i=1}^{n} \mathbb{E}\big[W_{i\ell}{\mathtt{X}}'_i \langle x_i \rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\label{formula_dg/dS_l}\;. \end{align} Once again the triplets ${\mathbf{i}} \in \mathcal{I}$ whose elements are not all distinct are accounted for by the term $\mathcal{O}(n^{-\nicefrac{3}{2}})$, which is negligible compared to the others.
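The Gaussian integrations by parts performed throughout this proof are instances of Stein's identity $\mathbb{E}[Z f(Z)] = \mathbb{E}[f'(Z)]$ for $Z \sim \mathcal{N}(0,1)$ and $f$ differentiable with bounded derivative. As a quick illustrative sanity check (with $f = \tanh$ as an arbitrary test function, not one appearing in the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal(1_000_000)

# Stein / Gaussian integration by parts: E[Z f(Z)] = E[f'(Z)].
# With f = tanh, the derivative is f'(z) = 1 / cosh(z)^2.
lhs = np.mean(Z * np.tanh(Z))
rhs = np.mean(np.cosh(Z) ** -2)
print(abs(lhs - rhs))  # small Monte Carlo error
```

The two Monte Carlo averages agree up to sampling error, which is the mechanism behind the simplifications of the expectations involving $Z_{{\mathbf{i}}}$, $\widetilde{Z}_j$ and $W_{ij}$.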
An integration by parts with respect to ${\mathbf{W}}$ gives for all $(i,j,k,\ell) \in \{1,\dots,n\}^3 \times \{1,\dots,p\}$ such that $j \neq i$ and $k \neq i,j$: \begin{align} &\mathbb{E}\big[W_{i \ell} {\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \langle x_{i} x_{j} x_{k}\rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\nonumber\\ &\qquad= \frac{1}{\sqrt{p}} \mathbb{E}\big[ {\mathtt{S}}_\ell {\mathtt{X}}''_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \langle x_{i} x_{j} x_{k}\rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big] +\frac{1}{\sqrt{p}} \mathbb{E}\big[ {\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \langle s_\ell x'_{i} x_{j} x_{k}\rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\nonumber\\ &\qquad\qquad -\mathbb{E}\bigg[{\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \bigg\langle \!\! x_{i} x_{j} x_{k} \frac{\partial \mathcal{H}_{t,R}}{\partial W_{i\ell}} \bigg\rangle\bigg\vert {\mathbf{S}} = {\mathtt{S}} \bigg] +\mathbb{E}\bigg[{\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \langle x_{i} x_{j} x_{k} \rangle \bigg\langle\frac{\partial \mathcal{H}_{t,R}}{\partial W_{i\ell}} \bigg\rangle\bigg\vert {\mathbf{S}} = {\mathtt{S}} \bigg]\;; \label{1st_expectation_dg/dS_l} \end{align} \begin{align} \mathbb{E}\big[W_{i\ell}{\mathtt{X}}'_i \langle x_i \rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big] &= \frac{1}{\sqrt{p}} \mathbb{E}\big[{\mathtt{S}}_\ell {\mathtt{X}}''_i \langle x_i \rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big] + \frac{1}{\sqrt{p}} \mathbb{E}\big[{\mathtt{X}}'_i \langle s_\ell x'_i \rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\nonumber\\ &\qquad\quad -\mathbb{E}\bigg[{\mathtt{X}}'_i \bigg\langle\! x_i \frac{\partial \mathcal{H}_{t,R}}{\partial W_{i\ell}} \bigg\rangle\bigg\vert {\mathbf{S}} = {\mathtt{S}} \bigg] +\mathbb{E}\bigg[{\mathtt{X}}'_i \langle x_i \rangle \bigg\langle\frac{\partial \mathcal{H}_{t,R}}{\partial W_{i\ell}} \bigg\rangle\bigg\vert {\mathbf{S}} = {\mathtt{S}} \bigg]\;. 
\label{2nd_expectation_dg/dS_l} \end{align} Here $\mathcal{H}_{t,R} \equiv \mathcal{H}_{t,R}({\mathbf{s}}; {\mathtt{S}}, {\mathbf{Z}}, \widetilde{{\mathbf{Z}}}, {\mathbf{W}})$. In order to prove the concentration result that we aim for, we need to make sure that both expectations \eqref{1st_expectation_dg/dS_l} and \eqref{2nd_expectation_dg/dS_l} are $\mathcal{O}(p^{-\nicefrac{1}{2}})$. The main difficulty resides in managing the terms where the partial derivatives $\nicefrac{\partial \mathcal{H}_{t,R}}{\partial W_{i \ell}}$ appear. We have already dealt with these partial derivatives when proving the concentration with respect to ${\mathbf{W}}$, and we found: \begin{align*} \frac{\partial \mathcal{H}_{t,R}}{\partial W_{i \ell}} &= \mathcal{O}(n^{-\nicefrac{3}{2}})+ \frac{1}{2} \sum_{\substack{j = 1\\ j \neq i}}^n \sum_{\substack{k = 1\\ k \neq i,j}}^{n} \bigg(-\frac{\lambda(1-t)}{n^2\sqrt{p}} x_{i} x'_i s_\ell x_{j}^2 x_{k}^2 +\frac{\lambda(1-t)}{n^2\sqrt{p}}\, {\mathtt{S}}_\ell {\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} x_{i} x_{j} x_{k}\\ &\qquad\qquad\qquad\qquad\qquad\quad +\frac{\lambda(1-t)}{n^2\sqrt{p}}\, {\mathtt{X}}_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} s_\ell x'_{i} x_{j} x_{k} +\frac{\sqrt{\lambda(1-t)}}{n\sqrt{p}}\, Z_{i j k} s_\ell x'_{i} x_{j} x_{k}\bigg)\\ &\qquad\qquad\quad + \sqrt{\frac{\lambda R}{2p}} \bigg(- \sqrt{\frac{\lambda R}{2}} s_\ell x'_i x_i + \sqrt{\frac{\lambda R}{2}}\,{\mathtt{S}}_\ell {\mathtt{X}}'_i x_i + \sqrt{\frac{\lambda R}{2}}\, {\mathtt{X}}_i \, s_\ell x'_i + \widetilde{Z}_i s_\ell x'_i \bigg)\;.
\end{align*} For $(i,\ell) \in \{1,\dots,n\} \times \{1,\dots,p\}$ define \begin{multline} \mathcal{A}_{i\ell} \triangleq \frac{\lambda(1-t)}{2n^2\sqrt{p}} \sum_{\substack{j = 1\\ j \neq i}}^n \sum_{\substack{k = 1\\ k \neq i,j}}^{n} \big(-x_{i} x'_i s_\ell x_{j}^2 x_{k}^2 +{\mathtt{S}}_\ell {\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} x_{i} x_{j} x_{k} + {\mathtt{X}}_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} s_\ell x'_{i} x_{j} x_{k} \big)\\ + \frac{\lambda R}{2\sqrt{p}} \big(- s_\ell x'_i x_i + {\mathtt{S}}_\ell {\mathtt{X}}'_i x_i + {\mathtt{X}}_i \, s_\ell x'_i \big)\;. \end{multline} Note these two simple facts about $\mathcal{A}_{i\ell}$: \begin{align} \frac{\partial \mathcal{H}_{t,R}}{\partial W_{i \ell}} &= \mathcal{O}(n^{-\nicefrac{3}{2}}) + \mathcal{A}_{i\ell} + \frac{\sqrt{\lambda(1-t)}}{2n\sqrt{p}} \sum_{\substack{j = 1\\ j \neq i}}^n \sum_{\substack{k = 1\\ k \neq i,j}}^{n} Z_{i j k} s_\ell x'_{i} x_{j} x_{k} + \sqrt{\frac{\lambda R}{2p}} \widetilde{Z}_i s_\ell x'_i \;;\label{link_A_dG/dW}\\ \vert \mathcal{A}_{i\ell} \vert &\leq \frac{3\lambda}{2\sqrt{p}} \, M_S \Vert \varphi \Vert_{\infty} \Vert \varphi' \Vert_{\infty}\big(\Vert \varphi \Vert_{\infty}^4 + R \big)\;.\label{upperbound_A} \end{align} Plugging the identity \eqref{link_A_dG/dW} in both \eqref{1st_expectation_dg/dS_l} and \eqref{2nd_expectation_dg/dS_l} and making use of the upper bound \eqref{upperbound_A}, we obtain: \begin{align*} &\big\vert \mathbb{E}\big[W_{i \ell} {\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \langle x_{i} x_{j} x_{k}\rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big] \big\vert\\ &\;\leq \mathcal{O}(n^{-\nicefrac{3}{2}}) + \frac{M_S \Vert \varphi \Vert_{\infty}^4 }{\sqrt{p}} \Big(\Vert \varphi \Vert_{\infty} \Vert \varphi'' \Vert_{\infty} + \Vert \varphi \Vert_{\infty} \Vert \varphi' \Vert_{\infty}^2 + 3\lambda\,\Vert \varphi \Vert_{\infty}^6\Vert \varphi' \Vert_{\infty}^2 + 3\lambda\, \Vert \varphi \Vert_{\infty}^2\Vert \varphi' \Vert_{\infty}^2 R\Big)\\ 
&\quad\; +\frac{\sqrt{\lambda(1-t)}}{2n\sqrt{p}} \sum_{\substack{j' = 1\\ j' \neq i}}^n \sum_{\substack{k' = 1\\ k' \neq i,j'}}^{n}\! \big\vert \mathbb{E}\big[{\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} Z_{i j' k'} \big(\langle x_{i} x_{j} x_{k} s_\ell x'_{i} x_{j'} x_{k'} \rangle - \langle x_{i} x_{j} x_{k} \rangle \langle s_\ell x'_{i} x_{j'} x_{k'} \rangle\big) \big\vert {\mathbf{S}} = {\mathtt{S}} \big]\big\vert\\ &\quad\; +\sqrt{\frac{\lambda R}{2p}} \big\vert\mathbb{E}\big[{\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \widetilde{Z}_i \big(\langle x_{i} x_{j} x_{k} s_\ell x'_i \rangle -\langle x_{i} x_{j} x_{k} \rangle \langle s_\ell x'_i \rangle \big)\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\big\vert\;; \end{align*} \begin{align*} &\big\vert \mathbb{E}\big[W_{i\ell}{\mathtt{X}}'_i \langle x_i \rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big] \big\vert\\ &\qquad\leq \mathcal{O}(n^{-\nicefrac{3}{2}}) + \frac{M_S}{\sqrt{p}} \Big(\Vert \varphi \Vert_{\infty}\Vert \varphi'' \Vert_{\infty} + \Vert \varphi' \Vert_{\infty}^2 + 3\lambda\,\Vert \varphi \Vert_{\infty}^6\Vert \varphi' \Vert_{\infty}^2 + 3\lambda\, \Vert \varphi \Vert_{\infty}^2\Vert \varphi' \Vert_{\infty}^2 R\Big)\\ &\qquad\qquad\qquad +\frac{\sqrt{\lambda(1-t)}}{2n\sqrt{p}} \sum_{\substack{j' = 1\\ j' \neq i}}^n \sum_{\substack{k' = 1\\ k' \neq i,j'}}^{n} \big\vert \mathbb{E}\big[{\mathtt{X}}'_{i} Z_{i j' k'} \big(\langle x_{i} s_\ell x'_{i} x_{j'} x_{k'} \rangle - \langle x_{i} \rangle \langle s_\ell x'_{i} x_{j'} x_{k'} \rangle\big) \big\vert {\mathbf{S}} = {\mathtt{S}} \big]\big\vert\\ &\qquad\qquad\qquad +\sqrt{\frac{\lambda R}{2p}} \big\vert\mathbb{E}\big[{\mathtt{X}}'_i \widetilde{Z}_i \big(\langle x_i s_\ell x'_i \rangle -\langle x_i \rangle \langle s_\ell x'_i \rangle \big)\big\vert {\mathbf{S}} = {\mathtt{S}} \big]\big\vert\;.
\end{align*} By integrating by parts with respect to ${\mathbf{Z}}$ or $\widetilde{{\mathbf{Z}}}$, we can now show that both upper bounds are $\mathcal{O}(p^{-\nicefrac{1}{2}})$. This is because $Z_{ij'k'}$ and $\widetilde{Z}_i$ appear in the Hamiltonian $\mathcal{H}_{t,R}$ via the terms $\frac{\sqrt{\lambda (1-t)}}{n}x_i x_{j'} x_{k'} Z_{ij'k'}$ and $\sqrt{\frac{\lambda R}{2}} x_i \widetilde{Z}_i$, respectively. In the end, for all $(i,j,k,\ell) \in \{1,\dots,n\}^3 \times \{1,\dots,p\}$ such that $j \neq i$ and $k \neq i,j$: \begin{align*} \big\vert \mathbb{E}\big[W_{i \ell} {\mathtt{X}}'_{i} {\mathtt{X}}_{j} {\mathtt{X}}_{k} \langle x_{i} x_{j} x_{k}\rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big] \big\vert &\leq \frac{M_S \Vert \varphi \Vert_{\infty}^4 }{\sqrt{p}}C_1\;;\\ \big\vert \mathbb{E}\big[W_{i\ell}{\mathtt{X}}'_i \langle x_i \rangle\big\vert {\mathbf{S}} = {\mathtt{S}} \big] \big\vert &\leq \frac{M_S}{\sqrt{p}} C_1\;; \end{align*} where $C_1 \triangleq \Vert \varphi \Vert_{\infty}\Vert \varphi'' \Vert_{\infty} + \Vert \varphi' \Vert_{\infty}^2 + 6\lambda\Vert \varphi \Vert_{\infty}^6\Vert \varphi' \Vert_{\infty}^2 + 6\lambda \Vert \varphi \Vert_{\infty}^2\Vert \varphi' \Vert_{\infty}^2 R$. Going back to the identity \eqref{formula_dg/dS_l}, these upper bounds yield $\big\vert \frac{\partial g}{\partial {\mathtt{S}}_\ell} \big\vert \leq \frac{\lambda M_S}{2 p} (\Vert \varphi \Vert_{\infty}^4 + R)\, C_1$ uniformly in ${\mathtt{S}} \in [-M_S,M_S]^p$ and $\ell \in \{1,\dots,p\}$ (we neglect the term $\mathcal{O}(n^{-\nicefrac{3}{2}})$ that should appear in the upper bound).
Hence $g$ has bounded differences (this is a simple application of the mean value theorem): \begin{equation*} \forall \ell \in \{1,\dots,p\}: \sup_{-M_S \leq {\mathtt{S}}_1,\dots,{\mathtt{S}}_p, {\mathtt{S}}'_\ell \leq M_S} \big\vert g({\mathtt{S}}) - g({\mathtt{S}}_1,\dots,{\mathtt{S}}_{\ell-1},{\mathtt{S}}'_\ell,{\mathtt{S}}_{\ell+1},\dots,{\mathtt{S}}_p)\big\vert \leq \frac{C_2}{p}\,; \end{equation*} where $C_2 \triangleq \lambda M_S^2 (\Vert \varphi \Vert_{\infty}^4 + R)\,C_1$. By McDiarmid's inequality: \begin{equation}\label{bound_variance_McDiarmid} \mathbb{E} \Bigg[\Bigg(\mathbb{E}\bigg[\frac{\ln \mathcal{Z}_{t,R}}{n} \bigg\vert {\mathbf{S}} \bigg] - \mathbb{E}\bigg[\frac{\ln \mathcal{Z}_{t,R}}{n}\bigg]\Bigg)^{\!\! 2}\,\Bigg] \leq \frac{C_2^2}{4p}\;. \end{equation} Combining the inequalities \eqref{bound_variance_GP_1}, \eqref{bound_variance_GP_2} and \eqref{bound_variance_McDiarmid} yields the final result. \end{IEEEproof} \section{Concentration of the overlap}\label{app:concentration_overlap} One important result in order to prove Propositions~\ref{prop:upperbound_mutual_info} and \ref{prop:lowerbound_mutual_info} is the concentration of the scalar overlap $Q$ around its expectation $\mathbb{E} \langle Q \rangle_{t,R}$ as long as we integrate over $R$ in a bounded subset of $(0,+\infty)$. Remember that the angular brackets $\langle - \rangle_{t,R}$ denote the expectation with respect to the posterior distribution \eqref{posterior_H_t_R}. \begin{proposition}[Concentration of the overlap around its expectation]\label{prop:concentration_overlap} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold. Let $M >0$. 
For $n$ large enough, there exists a constant $C$, which depends only on $\Vert \varphi \Vert_{\infty}$, $\Vert \varphi' \Vert_{\infty}$, $\Vert \varphi'' \Vert_{\infty}$, $M_S$, $\lambda$ and $M$, such that $\forall (a,b) \in (0,M)^2: a < \min\{1,b\}$, $\forall \delta \in (0,a)$, $\forall t \in [0,1]$: \begin{equation}\label{eq:concentration_overlap} \int_{a}^{b} \mathbb{E}\,\big\langle \big(Q -\mathbb{E}\,\langle Q \rangle_{t,R}\,\big)^2\,\big\rangle_{t,R}\, dR \leq C\bigg(\frac{1}{\delta^2 n} - \frac{\ln(a)}{n} + \frac{\delta}{a-\delta}\bigg)\;. \end{equation} \end{proposition} The concentration of the scalar overlap around its expectation will follow from the concentration of the quantity: \begin{equation}\label{def_L} \mathcal{L} = \frac{1}{n}\sum_{j=1}^{n} \frac{\lambda}{4} x_j^2 - \frac{\lambda}{2}\, x_j X_j - \frac{1}{2}\sqrt{\frac{\lambda}{2 R}}\, x_j\widetilde{Z}_j \:. \end{equation} \begin{lemma}[Link between the fluctuations of \texorpdfstring{$\mathcal{L}$}{L} and \texorpdfstring{$Q$}{Q}]\label{lemma:computation_E<L>_and_others}\\ Assume $\varphi: \mathbb{R} \to \mathbb{R}$ is continuous and bounded. For all $(t,R) \in [0,1] \times (0, +\infty)$: \begin{align} \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R} &= -\frac{\lambda}{4} \mathbb{E}\,\langle Q \rangle_{t,R} \;;\label{formula_E<L>}\\ \frac{\lambda}{4}\mathbb{E}\,\langle (Q - \langle Q \rangle_{t,R})^2 \rangle_{t,R} &\leq \frac{\Vert \varphi \Vert_{\infty}^2}{\sqrt{2}}\sqrt{\mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L} \rangle_{t,R}\big)^{2}\,\big\rangle_{t,R} -\frac{1}{n}\mathbb{E}\,\bigg\langle \frac{\partial \mathcal{L} }{\partial R} \bigg\rangle_{\!\! 
t,R}} \;\:;\label{upperbound_fluctuation_Q_<Q>}\\ \frac{\lambda^2}{16}\,\mathbb{E}\langle (Q - \mathbb{E}\,\langle Q\rangle_{t,R})^2 \rangle_{t,R} &\leq\mathbb{E}\langle (\mathcal{L} - \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R})^2 \rangle_{t,R}\;.\label{upperbound_fluctuation_Q_E<Q>} \end{align} \end{lemma} \begin{IEEEproof} Fix $(t,R) \in [0,1] \times (0,+\infty)$. By the definition \eqref{def_L} of $\mathcal{L}$, we have: \begin{align} \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R} &= \frac{1}{n}\sum_{j=1}^{n} \frac{\lambda}{4} \mathbb{E} \langle x_j^2 \rangle_{t,R} - \frac{\lambda}{2} \, \mathbb{E}\big[ \langle x_j \rangle_{t,R} X_j \big] - \frac{1}{2}\sqrt{\frac{\lambda}{2 R}}\, \mathbb{E}\big[ \langle x_j \rangle_{t,R} \widetilde{Z}_j \big] \;;\label{def_E<L>}\\ \mathbb{E}\,\langle Q \mathcal{L} \rangle_{t,R} &= \frac{1}{n}\sum_{j=1}^{n} \frac{\lambda}{4}\mathbb{E} \langle Q x_j^2 \rangle_{t,R} - \frac{\lambda}{2} \mathbb{E}\big[ \langle Q x_j \rangle_{t,R} X_j \big] - \frac{1}{2}\sqrt{\frac{\lambda}{2 R}}\, \mathbb{E}\big[ \langle Q x_j \rangle_{t,R} \widetilde{Z}_j \big]\,.\label{def_E<Q L>} \end{align} Integrating by parts with respect to the Gaussian random variable $\widetilde{Z}_j$, the last expectation on the right-hand side of each of \eqref{def_E<L>} and \eqref{def_E<Q L>} reads: \begin{align} \mathbb{E}\big[ \langle x_j \rangle_{t,R} \widetilde{Z}_j \big] &= \sqrt{\frac{\lambda R}{2}}\mathbb{E}\big[ \langle x_j^2 \rangle_{t,R} \big] -\sqrt{\frac{\lambda R}{2}} \mathbb{E}\big[ \langle x_j \rangle_{t,R}^2\big] \;;\label{stein_lemma_last_term_E<L>}\\ \mathbb{E}\big[ \langle Q x_j \rangle_{t,R} \widetilde{Z}_j \big] &= \sqrt{\frac{\lambda R}{2}} \mathbb{E}\big[ \langle Q x_j^2 \rangle_{t,R} \big] -\sqrt{\frac{\lambda R}{2}} \mathbb{E}\big[ \langle Q x_j \rangle_{t,R} \langle x_j \rangle_{t,R} \big] \;.\label{stein_lemma_last_term_E<Q L>} \end{align} Plugging \eqref{stein_lemma_last_term_E<L>} in \eqref{def_E<L>} yields: \begin{equation*} 
\mathbb{E}\,\langle \mathcal{L} \rangle_{t,R} = \frac{\lambda}{2n}\sum_{j=1}^{n} \frac{1}{2} \mathbb{E}\big[ \langle x_j \rangle_{t,R}^2\big] - \, \mathbb{E}\big[ \langle x_j \rangle_{t,R} X_j \big] = -\frac{\lambda}{4} \sum_{j=1}^{n} \frac{\mathbb{E}\big[ \langle x_j \rangle_{t,R} X_j \big]}{n} = -\frac{\lambda}{4} \mathbb{E}\,\langle Q \rangle_{t,R} \;, \end{equation*} where the second equality follows from the Nishimori identity: $\mathbb{E}[ \langle x_j \rangle_{t,R}^2] = \mathbb{E}[ \langle x_j \rangle_{t,R} X_j]$. This ends the proof of \eqref{formula_E<L>}. Plugging \eqref{stein_lemma_last_term_E<Q L>} in \eqref{def_E<Q L>}, we obtain: \begin{multline} \mathbb{E}\,\langle Q \mathcal{L} \rangle_{t,R} = \frac{\lambda}{2n}\sum_{j=1}^{n} \frac{1}{2}\mathbb{E}\big[ \langle Q x_j \rangle_{t,R} \langle x_j \rangle_{t,R} \big] - \mathbb{E}\big[ \langle Q x_j \rangle_{t,R} X_j \big]\\ = \frac{\lambda}{2n}\sum_{j=1}^{n} \frac{1}{2}\mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j X_j\rangle_{t,R} \big] - \mathbb{E}\big[ \langle Q x_j \rangle_{t,R} X_j \big] = \frac{\lambda}{2}\bigg(\frac{1}{2}\mathbb{E}\big[ \langle Q \rangle_{t,R}^2\big] - \mathbb{E}\big[ \langle Q^2 \rangle_{t,R}\big]\bigg) \;.\label{formula_E<QL>} \end{multline} The second equality follows once again from the Nishimori identity.
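Since the Nishimori identity is invoked repeatedly in this appendix, we recall it in the form used here: if ${\mathbf{x}}^{(1)}$ and ${\mathbf{x}}^{(2)}$ denote conditionally independent samples from the posterior distribution of the signal given the data, then for any bounded measurable function $g$:
\begin{equation*}
\mathbb{E}\,\big\langle g\big({\mathbf{X}},{\mathbf{x}}^{(1)}\big)\big\rangle_{t,R} = \mathbb{E}\,\big\langle g\big({\mathbf{x}}^{(2)},{\mathbf{x}}^{(1)}\big)\big\rangle_{t,R}\;.
\end{equation*}
This is a direct consequence of Bayes' rule: conditionally on the data, the signal ${\mathbf{X}}$ is itself distributed according to the posterior. For instance, $\mathbb{E}[ \langle x_j \rangle_{t,R}^2] = \mathbb{E}[ \langle x_j \rangle_{t,R} X_j]$ is the special case $g({\mathbf{X}},{\mathbf{x}}) = X_j x_j$.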
Combining \eqref{formula_E<QL>} and \eqref{formula_E<L>} yields: \begin{align*} \mathbb{E}\,\langle (Q - \mathbb{E}\,\langle Q \rangle_{t,R}) (\mathcal{L} - \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R}) \rangle_{t,R} &= \mathbb{E}\,\langle Q \mathcal{L} \rangle_{t,R} - \mathbb{E}\,\langle Q \rangle_{t,R} \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R}\\ &= \frac{\lambda}{4} \Big(\mathbb{E}\big[ \langle Q \rangle_{t,R}^2\big] - 2\mathbb{E}\big[ \langle Q^2 \rangle_{t,R}\big] + (\mathbb{E}\,\langle Q \rangle_{t,R})^2\Big)\\ &= -\frac{\lambda}{4} \Big(\mathbb{E}\,\big\langle (Q - \langle Q \rangle_{t,R})^2 \big\rangle_{t,R} + \mathbb{E}\,\langle (Q - \mathbb{E}\,\langle Q\rangle_{t,R})^2 \rangle_{t,R}\Big) \;. \end{align*} The upper bound \eqref{upperbound_fluctuation_Q_E<Q>} on the fluctuation of $Q$ simply follows from this last identity: \begin{align*} \frac{\lambda}{4} \mathbb{E}\,\big\langle (Q - \mathbb{E}\,\langle Q\rangle_{t,R})^2 \big\rangle_{t,R} &\leq -\mathbb{E}\,\langle (Q - \mathbb{E}\,\langle Q \rangle_{t,R}) (\mathcal{L} - \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R}) \rangle_{t,R}\\ &\leq \sqrt{\mathbb{E}\,\langle (Q - \mathbb{E}\,\langle Q \rangle_{t,R})^2 \rangle_{t, R} \,\mathbb{E}\,\langle (\mathcal{L} - \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R})^2 \rangle_{t,R}}\;. \end{align*} The second inequality is a simple application of Cauchy-Schwarz inequality. The proof of the inequality \eqref{upperbound_fluctuation_Q_<Q>} is more involved. These two identities will be useful (just replace $Q$ by its definition): \begin{align} \mathbb{E}\,\langle (Q - \langle Q \rangle_{t,R})^2 \rangle_{t,R} &= \frac{1}{n^2}\sum_{i,j=1}^n \mathbb{E}\big[X_iX_j(\langle x_i x_j \rangle_{t, R} - \langle x_i \rangle_{t, R} \langle x_j \rangle_{t, R})\big] \label{expression_E(Q-<Q>)^2}\\ \mathbb{E}\bigg[\!\bigg(\langle Q \rangle_{t,R} - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}} \bigg\Vert^2\,\bigg)^{\!\! 
2}\,\bigg] &= \frac{1}{n^2}\sum_{i,j=1}^n \mathbb{E}\big[X_iX_j \langle x_i \rangle_{t, R}\langle x_j \rangle_{t, R}\big] - 2 \mathbb{E}\big[X_i \langle x_i \rangle_{t, R} \langle x_j \rangle_{t, R}^2\big]\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad\;\;\, + \mathbb{E}\big[\langle x_i \rangle_{t, R}^2 \langle x_j \rangle_{t, R}^2\big]\nonumber\\ &= \frac{1}{n^2}\sum_{i,j=1}^n \mathbb{E}\big[X_iX_j \langle x_i \rangle_{t, R}\langle x_j \rangle_{t, R}\big] - \mathbb{E}\big[\langle x_i \rangle_{t, R}^2 \langle x_j \rangle_{t, R}^2\big]\;.\label{expression_E(Q-<x><x>/n)^2} \end{align} Differentiating with respect to $R$ on both sides of the identity \eqref{formula_E<L>} yields: \begin{align} -n\mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L} \rangle_{t,R}\big)^{2}\,\big\rangle_{t,R} +\mathbb{E}\,\bigg\langle \frac{\partial \mathcal{L} }{\partial R} \bigg\rangle_{\!\! t,R} &= \frac{n\lambda}{4}\big(\mathbb{E}\,\langle Q \mathcal{L} \rangle_{t,R} - \mathbb{E}\,\langle Q \rangle_{t,R}\langle \mathcal{L} \rangle_{t,R}\big)\nonumber\\ \Leftrightarrow\qquad -\frac{\lambda}{4}\Big(\mathbb{E}\,\langle Q \mathcal{L} \rangle_{t,R} - \mathbb{E}\,\langle Q \rangle_{t,R}\langle \mathcal{L} \rangle_{t,R}\Big) &= \mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L} \rangle_{t,R}\big)^{2}\,\big\rangle_{t,R} -\frac{1}{n}\mathbb{E}\,\bigg\langle \frac{\partial \mathcal{L} }{\partial R} \bigg\rangle_{\!\! t,R} \;.\label{identity_E<QL>-E<Q><L>} \end{align} Next, we simplify the left-hand side of \eqref{identity_E<QL>-E<Q><L>}.
By definition, we have: \begin{multline} \mathbb{E}\,\langle Q \rangle_{t,R}\langle \mathcal{L} \rangle_{t,R}\\ = \frac{1}{n}\sum_{j=1}^{n} \frac{\lambda}{4}\mathbb{E}\big[\langle Q \rangle_{t,R} \langle x_j^2 \rangle_{t,R}\big] - \frac{\lambda}{2} \mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j \rangle_{t,R} X_j \big] - \frac{1}{2}\sqrt{\frac{\lambda}{2 R}}\, \mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j \rangle_{t,R} \widetilde{Z}_j \big]\;.\label{def_E<Q><L>} \end{multline} After an integration by parts with respect to $\widetilde{Z}_j$, the third expectation in the summand of \eqref{def_E<Q><L>} reads: \begin{align*} &\mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j \rangle_{t,R} \widetilde{Z}_j \big]\\ &\qquad\qquad= \sqrt{\frac{\lambda R}{2}}\mathbb{E}\big[ \langle Q x_j \rangle_{t,R} \langle x_j \rangle_{t,R} \big] + \sqrt{\frac{\lambda R}{2}}\mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j^2 \rangle_{t,R} \big] -2 \sqrt{\frac{\lambda R}{2}}\mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j \rangle_{t,R}^2 \big]\\ &\qquad\qquad= \sqrt{\frac{\lambda R}{2}}\mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j X_j \rangle_{t,R} \big] + \sqrt{\frac{\lambda R}{2}}\mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j^2 \rangle_{t,R} \big] -2 \sqrt{\frac{\lambda R}{2}}\mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j \rangle_{t,R}^2 \big]\;.
\end{align*} Plugging this result back in \eqref{def_E<Q><L>} gives: \begin{align} \mathbb{E}\langle Q \rangle_{t,R}\langle \mathcal{L} \rangle_{t,R} &= \frac{\lambda}{2n}\sum_{j=1}^{n} \mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j \rangle_{t,R}^2 \big] - \frac{3}{2}\mathbb{E}\big[ \langle Q \rangle_{t,R} \langle x_j X_j \rangle_{t,R} \big]\nonumber\\ &= \frac{\lambda}{2}\mathbb{E}\bigg[ \langle Q \rangle_{t,R} \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}} \bigg\Vert^2 \bigg] - \frac{3\lambda}{4}\mathbb{E}\big[ \langle Q \rangle_{t,R}^2 \big]\;.\label{formula_E<Q><L>} \end{align} Finally, combining \eqref{formula_E<QL>} and \eqref{formula_E<Q><L>} yields the following expression for the left-hand side of \eqref{identity_E<QL>-E<Q><L>}: \begin{align} &-\frac{\lambda}{4}\Big(\mathbb{E}\,\big\langle Q \mathcal{L} \rangle_{t,R} - \mathbb{E}\,\langle Q \rangle_{t,R}\langle \mathcal{L} \rangle_{t,R}\Big)\nonumber\\ &\qquad\qquad\qquad= \frac{\lambda^2}{8} \bigg(\mathbb{E}\big[ \langle Q^2 \rangle_{t,R}\big] -\mathbb{E}\big[ \langle Q \rangle_{t,R}^2\big] + \mathbb{E}\bigg[ \langle Q \rangle_{t,R} \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}} \bigg\Vert^2 \,\bigg] -\mathbb{E}\big[ \langle Q \rangle_{t,R}^2\big]\bigg)\nonumber\\ &\qquad\qquad\qquad= \frac{\lambda^2}{8} \Bigg(\mathbb{E}\big[ \langle (Q-\langle Q \rangle_{t,R})^2 \rangle_{t,R}\big] -\mathbb{E}\bigg[ \bigg(\langle Q \rangle_{t,R}- \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}} \bigg\Vert^2\,\bigg)^{\!\! 
2}\, \bigg]\Bigg)\nonumber\\ &\qquad\qquad\qquad= \frac{\lambda^2}{8n^2} \sum_{i,j=1}^n \mathbb{E}\big[X_iX_j\langle x_i x_j \rangle_{t, R}\big] -2\mathbb{E}\big[X_iX_j \langle x_i \rangle_{t, R}\langle x_j \rangle_{t, R}\big] +\mathbb{E}\big[\langle x_i \rangle_{t, R}^2 \langle x_j \rangle_{t, R}^2\big]\nonumber\\ &\qquad\qquad\qquad= \frac{\lambda^2}{8n^2} \sum_{i,j=1}^n \mathbb{E}\big[\big(\langle x_i x_j \rangle_{t, R} - \langle x_i \rangle_{t, R}\langle x_j \rangle_{t, R}\big)^2\big]\label{formula_E<QL>-E<Q><L>}\;. \end{align} The second-to-last equality follows from \eqref{expression_E(Q-<Q>)^2} and \eqref{expression_E(Q-<x><x>/n)^2}, while the factorization in the last equality is easily obtained after applying Nishimori identity: $\mathbb{E}[X_iX_j \langle x_i x_j \rangle_{t, R}] = \mathbb{E}\langle x_i x_j \rangle_{t, R}^2$ and $\mathbb{E}[X_iX_j \langle x_i \rangle_{t, R}\langle x_j \rangle_{t, R}] = \mathbb{E}[\langle x_i x_j \rangle_{t, R}\langle x_i \rangle_{t, R}\langle x_j \rangle_{t, R}]$. We now come back to the identity \eqref{expression_E(Q-<Q>)^2} and apply Jensen's inequality to its right-hand side to get: \begin{multline}\label{upperbound_fluctuations_Q_<Q>_proof} \mathbb{E}\,\langle (Q - \langle Q \rangle_{t,R})^2 \rangle_{t,R} \leq \frac{\Vert \varphi \Vert_{\infty}^2}{n^2}\sum_{i,j=1}^n\mathbb{E}\big[\big\vert \langle x_i x_j \rangle_{t, R} - \langle x_i \rangle_{t, R} \langle x_j \rangle_{t, R}\big\vert\big]\\ \leq \Vert \varphi \Vert_{\infty}^2\sqrt{ \frac{1}{n^2}\sum_{i,j=1}^n\mathbb{E}\big[\big( \langle x_i x_j \rangle_{t, R} - \langle x_i \rangle_{t, R} \langle x_j \rangle_{t, R}\big)^2\big]}\;. \end{multline} Combining \eqref{identity_E<QL>-E<Q><L>}, \eqref{formula_E<QL>-E<Q><L>} and \eqref{upperbound_fluctuations_Q_<Q>_proof} yields the inequality \eqref{upperbound_fluctuation_Q_<Q>}. 
\end{IEEEproof} \subsection{Concentration of \texorpdfstring{$\mathcal{L}$}{L} around its expectation} To prove concentration results on $\mathcal{L}$, it will be useful to work with the free entropy $\frac{1}{n} \ln \mathcal{Z}_{t,R}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}})$ where $\mathcal{Z}_{t,R}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}})$ is the normalization factor of the Gibbs posterior distribution \eqref{posterior_H_t_R}. In Appendix \ref{app:concentration_free_entropy}, we prove that this free entropy concentrates around its expectation when $n \to +\infty$. In order to shorten notations, we define: \begin{equation*} F_n(t,R) \triangleq \frac{1}{n} \ln \mathcal{Z}_{t,R}\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}} \big)\,;\: f_n(t,R) \triangleq \frac{1}{n} \mathbb{E}\big[\ln \mathcal{Z}_{t,R}\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}} \big)\big] = \mathbb{E}\,F_n(t,R) \,. \end{equation*} \begin{proposition}[Thermal fluctuations of $\mathcal{L}$ and $Q$]\label{prop:concentration_L_on_<L>} Assume $\varphi: \mathbb{R} \to \mathbb{R}$ is continuous and bounded. For all positive real numbers $a < b$ and $t \in [0,1]$, we have: \begin{align*} \int_a^b \mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L} \rangle_{t,R}\big)^{2}\,\big\rangle_{t,R}\,dR &\;\leq\; \frac{\lambda \Vert \varphi \Vert_{\infty}^2}{4n} \bigg(\frac{\ln(b/a)}{2} + 1 \bigg)\;;\\ \frac{\lambda}{4}\,\int_a^b \mathbb{E}\,\bigg\langle \!\bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}} \bigg\Vert^2\,\bigg)^{\!\! 2} \,\bigg\rangle_{\!\! t,R}\,dR &\;\leq\; \Vert \varphi \Vert_{\infty}^3\sqrt{\frac{\lambda (b-a)}{2n}}\;. \end{align*} \end{proposition} \begin{IEEEproof} Fix $(n,t) \in \mathbb{N}^* \times [0,1]$. 
Note that $\forall R \in (0, +\infty)$: \begin{equation}\label{first_derivative_fn} \frac{\partial f_n}{\partial R}\bigg\vert_{t,R} = -\frac{1}{n}\mathbb{E}\Bigg[\Bigg\langle \frac{\partial \mathcal{H}_{t,R}({\mathbf{x}};{\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}})}{\partial R} \Bigg\rangle_{\!\! t,R}\,\Bigg] =-\mathbb{E}\,\langle \mathcal{L} \rangle_{t,R} \;. \end{equation} Further differentiating, we obtain: \begin{align} \frac{\partial^2 f_n}{\partial R^2}\bigg\vert_{t,R} &= \mathbb{E}\bigg[\bigg\langle \mathcal{L} \,\frac{\partial \mathcal{H}_{t,R}}{\partial R} \bigg\rangle_{\!\! t,R}\,\bigg] -\mathbb{E}\bigg[\langle \mathcal{L} \rangle_{t,R}\, \bigg\langle \frac{\partial \mathcal{H}_{t,R}}{\partial R} \bigg\rangle_{\!\! t,R}\,\bigg] -\mathbb{E}\,\bigg\langle \frac{\partial \mathcal{L} }{\partial R} \bigg\rangle_{\!\! t,R}\nonumber\\ &=n\mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L} \rangle_{t,R}\big)^{2}\,\big\rangle_{t,R} -\mathbb{E}\,\bigg\langle \frac{\partial \mathcal{L} }{\partial R} \bigg\rangle_{\!\! t,R}\;.\label{second_derivative_f_n_R} \end{align} It follows directly from \eqref{second_derivative_f_n_R} and the definition \eqref{def_L} of $\mathcal{L}$ that: \begin{equation}\label{thermal_fluctuation_L} \mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L} \rangle_{t,R}\big)^{2}\,\big\rangle_{t,R} = \frac{1}{n}\frac{\partial^2 f_n}{\partial R^2}\bigg\vert_{t,R} + \frac{1}{4 R}\sqrt{\frac{\lambda}{2R}}\frac{\mathbb{E}\big[\langle {\mathbf{x}} \rangle_{t,R}^T \widetilde{{\mathbf{Z}}}\,\big]}{n^2} \end{equation} We start with upper bounding the integral over the second summand on the right-hand side of \eqref{thermal_fluctuation_L}. 
Thanks to an integration by parts with respect to $\widetilde{Z}_j$, $j \in \{1,\dots,n\}$, we obtain: \begin{equation} \frac{1}{4 R}\sqrt{\frac{\lambda}{2R}}\frac{\mathbb{E}\big[\langle {\mathbf{x}} \rangle_{t,R}^T \widetilde{{\mathbf{Z}}}\,\big]}{n^2} = \frac{\lambda}{8 R}\frac{\mathbb{E}\big[\langle \Vert {\mathbf{x}} \Vert^2 \rangle_{t,R} - \Vert \langle {\mathbf{x}} \rangle_{t,R} \Vert^2\,\big]}{n^2} \leq \frac{\lambda \Vert \varphi \Vert_{\infty}^2}{8 R n}\:. \end{equation} Therefore: \begin{equation}\label{upperbound_int_dL/dR} \int_a^b \frac{dR}{4 R}\sqrt{\frac{\lambda}{2R}}\frac{\mathbb{E}\big[\langle {\mathbf{x}} \rangle_{t,R}^T \widetilde{{\mathbf{Z}}}\,\big]}{n^2} \leq \frac{\lambda \Vert \varphi \Vert_{\infty}^2}{8 n} \ln(b/a) \:. \end{equation} It remains to upper bound $\int_a^b \frac{dR}{n}\frac{\partial^2 f_n}{\partial R^2}\big\vert_{t,R} = \frac{1}{n}\frac{\partial f_n}{\partial R}\big\vert_{t,R=b} - \frac{1}{n}\frac{\partial f_n}{\partial R}\big\vert_{t,R=a}$. Note that $\forall R \in (0,+\infty)$: \begin{equation}\label{upperbound_derivative_fn_R} 0 \leq \frac{\partial f_n}{\partial R}\bigg\vert_{t,R} = - \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R} = \frac{\lambda}{4} \mathbb{E}\,\langle Q \rangle_{t,R} = \frac{\lambda}{4n} \mathbb{E}\,\Vert \langle {\mathbf{x}} \rangle_{t,R} \Vert^2 \leq \frac{\lambda}{4}\Vert \varphi \Vert_{\infty}^2 \:, \end{equation} where the first equality follows from \eqref{first_derivative_fn}, the second one from \eqref{formula_E<L>} in Lemma~\ref{lemma:computation_E<L>_and_others}, and the third one from the Nishimori identity. Combining both \eqref{upperbound_int_dL/dR} and \eqref{upperbound_derivative_fn_R}, we finally get the first inequality: \begin{equation*} \int_a^b \mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L} \rangle_{t,R}\big)^{2}\,\big\rangle_{t,R}\,dR \leq \frac{\lambda \Vert \varphi \Vert_{\infty}^2}{4n} \bigg(\frac{\ln(b/a)}{2} + 1 \bigg)\;.
\end{equation*} To prove the second inequality, we first integrate both sides of the inequality \eqref{upperbound_fluctuation_Q_<Q>} with respect to $R$ and then use Cauchy-Schwarz inequality. We obtain: \begin{multline} \frac{\lambda}{4}\int_a^b \mathbb{E}\,\langle (Q - \langle Q \rangle_{t,R})^2 \rangle_{t,R}\,dR\\ \leq \Vert \varphi \Vert_{\infty}^2\sqrt{\frac{b-a}{2}\int_a^b \bigg(\mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L} \rangle_{t,R}\big)^{2}\,\big\rangle_{t,R} -\frac{1}{n}\mathbb{E}\,\bigg\langle \frac{\partial \mathcal{L} }{\partial R} \bigg\rangle_{\!\! t,R}\,\bigg)dR}\\ = \Vert \varphi \Vert_{\infty}^2\sqrt{\frac{b-a}{2}\int_a^b \frac{dR}{n}\frac{\partial^2 f_n}{\partial R^2}\bigg\vert_{t,R}} \leq \Vert \varphi \Vert_{\infty}^3\sqrt{\frac{\lambda (b-a)}{8n}}\;.\label{upperbound_integral_E<(Q_<Q>)^2>} \end{multline} Finally, note that: \begin{align} \mathbb{E}\,\bigg\langle \bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}} \bigg\Vert^2\bigg)^{\!\! 2} \bigg\rangle_{\!\! t,R} &= \mathbb{E}\,\big\langle \big( Q - \langle Q \rangle_{t,R}\big)^2\big\rangle_{t,R} + \mathbb{E} \bigg[\bigg( \langle Q \rangle_{t,R} - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}} \bigg\Vert^2\bigg)^{\!\! 2} \bigg]\nonumber\\ &= \mathbb{E}\,\big\langle \big( Q - \langle Q \rangle_{t,R}\big)^2\big\rangle_{t,R} + \mathbb{E} \bigg[\bigg\langle Q - \frac{\langle {\mathbf{x}} \rangle_{t,R}{\mathbf{x}}}{n} \bigg\rangle_{\!\! t,R}^{\!\! 2}\,\bigg]\nonumber\\ &\leq \mathbb{E}\,\big\langle \big( Q - \langle Q \rangle_{t,R}\big)^2\big\rangle_{t,R} + \mathbb{E} \bigg[\bigg\langle \bigg(Q - \frac{\langle {\mathbf{x}} \rangle_{t,R}{\mathbf{x}}}{n}\bigg)^{\!\! 2} \bigg\rangle_{\!\! t,R}\,\bigg]\label{proof1_upperbound_fluctuation_Q_<x><x>/n}\\ &= \mathbb{E}\,\big\langle \big( Q - \langle Q \rangle_{t,R}\big)^2\big\rangle_{t,R} + \mathbb{E} \bigg[\bigg\langle \bigg(Q - \frac{\langle {\mathbf{x}} \rangle_{t,R}{\mathbf{X}}}{n}\bigg)^{\!\! 
2} \bigg\rangle_{\!\! t,R}\,\bigg]\label{proof2_upperbound_fluctuation_Q_<x><x>/n}\\ &= 2 \mathbb{E}\,\big\langle \big( Q - \langle Q \rangle_{t,R}\big)^2\big\rangle_{t,R} \;.\label{upperbound_fluctuation_Q_<x><x>/n} \end{align} The inequality \eqref{proof1_upperbound_fluctuation_Q_<x><x>/n} follows from Jensen's inequality, while the equality \eqref{proof2_upperbound_fluctuation_Q_<x><x>/n} is a simple application of Nishimori identity. The inequalities \eqref{upperbound_integral_E<(Q_<Q>)^2>} and \eqref{upperbound_fluctuation_Q_<x><x>/n} together give the second inequality of the proposition. \end{IEEEproof} \begin{proposition}[Quenched fluctuations of $\mathcal{L}$]\label{prop:concentration_<L>_on_E<L>} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold. Let $M >0$. For $n$ large enough, there exists a constant $C$, which depends only on $\Vert \varphi \Vert_{\infty}$, $\Vert \varphi' \Vert_{\infty}$, $\Vert \varphi'' \Vert_{\infty}$, $M_S$, $\lambda$ and $M$, such that $\forall (a,b) \in (0,M)^2: a < \min\{1,b\}$, $\forall \delta \in (0,a)$, $\forall t \in [0,1]$: \begin{equation} \int_{a}^{b} \mathbb{E}\,\big[\big(\langle \mathcal{L} \rangle_{t,R}-\mathbb{E}\,\langle \mathcal{L} \rangle_{t,R}\,\big)^2\,\big]\,dR \leq C\bigg(\frac{1}{\delta^2 n} - \frac{\ln(a)}{n} + \frac{\delta}{a-\delta} \bigg)\,. \end{equation} \end{proposition} \begin{IEEEproof} Fix $(n,t) \in \mathbb{N}^* \times [0,1]$. 
For all $R \in (0,+\infty)$, we have: \begin{align} \frac{\partial F_n}{\partial R}\bigg\vert_{t,R} &=-\langle \mathcal{L} \rangle_{t,R} \;;\\ \frac{\partial^2 F_n}{\partial R^2}\bigg\vert_{t,R} &=n \big\langle \big(\mathcal{L} - \langle \mathcal{L}\rangle_{t,R}\big)^{2}\,\big\rangle_{t,R} - \frac{1}{4 R}\sqrt{\frac{\lambda}{2R}}\frac{\langle {\mathbf{x}} \rangle_{t,R}^T \widetilde{{\mathbf{Z}}}}{n}\;;\label{quenched:2ndDeriv_Fn}\\ \frac{\partial f_n}{\partial R}\bigg\vert_{t,R} &=-\mathbb{E}\,\langle \mathcal{L} \rangle_{t,R}\;;\\ \frac{\partial^2 f_n}{\partial R^2}\bigg\vert_{t,R} &=n \mathbb{E}\,\big\langle \big(\mathcal{L} - \langle \mathcal{L}\rangle_{t,R}\big)^{2}\,\big\rangle_{t,R} - \frac{1}{4 R}\sqrt{\frac{\lambda}{2R}}\frac{\mathbb{E}\big[\langle {\mathbf{x}} \rangle_{t,R}^T \widetilde{{\mathbf{Z}}}\,\big]}{n}\,. \end{align} The second term on the right-hand side of \eqref{quenched:2ndDeriv_Fn} can be upper bounded with Cauchy-Schwarz inequality: \begin{equation}\label{upperbound_2ndterm_2ndDeriv_Fn} \Bigg\vert\frac{1}{4 R}\sqrt{\frac{\lambda}{2R}}\frac{\langle {\mathbf{x}} \rangle_{t,R}^T \widetilde{{\mathbf{Z}}}}{n}\Bigg\vert \leq \frac{1}{4 R}\sqrt{\frac{\lambda}{2R}} \frac{\Vert \langle {\mathbf{x}} \rangle_{t,R} \Vert\, \Vert \widetilde{{\mathbf{Z}}} \Vert}{n} \leq \frac{\Vert \varphi \Vert_{\infty}}{4 R}\sqrt{\frac{\lambda}{2R}} \frac{\Vert \widetilde{{\mathbf{Z}}} \Vert}{\sqrt{n}}\;. \end{equation} We now define for all $R \in (0, +\infty)$: \begin{align} F(R) &\triangleq F_n(t,R) - \Vert \varphi \Vert_{\infty}\sqrt{\frac{\lambda R}{2}} \frac{\Vert \widetilde{{\mathbf{Z}}} \Vert}{\sqrt{n}}\:;\\ f(R) &\triangleq f_n(t,R) - \Vert \varphi \Vert_{\infty}\sqrt{\frac{\lambda R}{2}}\frac{\mathbb{E}\,\Vert \widetilde{{\mathbf{Z}}} \Vert}{\sqrt{n}}\,. \end{align} $F$ is convex on $(0, +\infty)$ as it is twice differentiable with a nonnegative second derivative by \eqref{quenched:2ndDeriv_Fn} and \eqref{upperbound_2ndterm_2ndDeriv_Fn}. 
The same holds for $f$. % We will apply the following standard result to these two convex functions (we refer to \cite{barbier_adaptive_2019} for a proof): \begin{lemma}[An upper bound for differentiable convex functions]\label{lemma:diff_convex_functions} Let $g$ and $G$ be two differentiable convex functions defined on an interval $I\subseteq \mathbb{R}$. Let $r \in I$ and $\delta > 0$ such that $r \pm \delta \in I$. Then \begin{equation} \vert G'(r) - g'(r) \vert \leq C_{\delta}(r) + \frac{1}{\delta}\sum_{u \in \{-\delta,0,\delta\}} \vert G(r+u) - g(r+u) \vert \,, \end{equation} where $C_{\delta}(r) = g'(r+\delta) - g'(r-\delta) \geq 0$. \end{lemma} % \noindent For all $R \in (0, +\infty)$, we have: \begin{align} F(R) - f(R) = F_n(t,R) - f_n(t,R) - \Vert \varphi \Vert_{\infty}\sqrt{\frac{\lambda R}{2}} \frac{\Vert \widetilde{{\mathbf{Z}}} \Vert - \mathbb{E}\,\Vert \widetilde{{\mathbf{Z}}} \Vert}{\sqrt{n}}\:;\label{F_minus_f}\\ F'(R) - f'(R) = -\Big(\langle \mathcal{L} \rangle_{t,R} - \mathbb{E}\,\langle \mathcal{L}\rangle_{t,R}\Big) - \frac{\Vert \varphi \Vert_{\infty}}{2}\sqrt{\frac{\lambda}{2R}} \frac{\Vert \widetilde{{\mathbf{Z}}} \Vert - \mathbb{E}\,\Vert \widetilde{{\mathbf{Z}}} \Vert}{\sqrt{n}} \;. \label{F'_minus_f'} \end{align} Let $C_{\delta}(r) = f'(r+\delta) - f'(r-\delta)$, which is nonnegative by convexity of $f$. 
It follows from Lemma \ref{lemma:diff_convex_functions} and the two identities \eqref{F_minus_f} and \eqref{F'_minus_f'} that $\forall R \in (0, +\infty)$, $\forall \delta \in (0,R)$: \begin{align*} \big\vert \langle \mathcal{L} \rangle_{t,R} - \mathbb{E}\,\langle\mathcal{L} \rangle_{t,R}\big\vert &\leq \frac{\Vert \varphi \Vert_{\infty}}{2}\sqrt{\frac{\lambda}{2R}} \frac{\big\vert \Vert \widetilde{{\mathbf{Z}}} \Vert - \mathbb{E}\,\Vert \widetilde{{\mathbf{Z}}} \Vert \big\vert}{\sqrt{n}} + C_{\delta}(R)\\ &\qquad\qquad\qquad\qquad\qquad\qquad + \frac{1}{\delta}\sum_{x \in \{-\delta,0,\delta\}} \vert F(R+x) - f(R+x)\vert\\ &\leq \Vert \varphi \Vert_{\infty}\sqrt{\frac{\lambda}{2}} \bigg(\frac{1}{2\sqrt{R}} + 3\sqrt{R}\bigg) \frac{\big\vert \Vert \widetilde{{\mathbf{Z}}} \Vert - \mathbb{E}\,\Vert \widetilde{{\mathbf{Z}}} \Vert \big\vert}{\sqrt{n}} + C_{\delta}(R)\\ &\qquad\qquad\qquad\qquad\qquad\qquad + \frac{1}{\delta}\sum_{x \in \{-\delta,0,\delta\}} \vert F_n(t,R + x) - f_n(t,R + x)\vert \;. \end{align*} Thanks to the inequality $(\sum_{i=1}^{m} v_i)^2 \leq m \sum_{i=1}^{m} v_i^2$, this directly implies $\forall R \in (0, +\infty)$, $\forall \delta \in (0,R)$: \begin{multline}\label{upperbound_variance_thermal_L} \mathbb{E}\big[\big( \langle \mathcal{L} \rangle_{t,R} - \mathbb{E}\,\langle\mathcal{L} \rangle_{t,R}\big)^{2}\,\big] \leq 5 \Vert \varphi \Vert_{\infty}^2\frac{\lambda}{2} \bigg(\frac{1}{4 R} + 3 + 9 R\bigg) \, \frac{{\mathbb{V}\mathrm{ar}} \Vert \widetilde{{\mathbf{Z}}} \Vert}{n} + 5C_{\delta}(R)^2\\ + \frac{5}{\delta^2}\sum_{x \in \{-\delta,0,\delta\}} \mathbb{E}\big[\big(F_n(t,R + x) - f_n(t,R + x)\big)^2\big]\,. \end{multline} The next step is to bound the integral of the three summands on the right-hand side of \eqref{upperbound_variance_thermal_L}. By \cite[Theorem 3.1.1]{vershynin_2018}, there exists $C_1$ such that ${\mathbb{V}\mathrm{ar}}\,\Vert\widetilde{{\mathbf{Z}}}\Vert \leq C_1$ independently of the dimension $n$.
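Concretely, the cited result states that the norm of a standard Gaussian vector concentrates around $\sqrt{n}$: there exists an absolute constant $c > 0$ such that, for all $u \geq 0$,
\begin{equation*}
\mathbb{P}\big(\big\vert \Vert \widetilde{{\mathbf{Z}}} \Vert - \sqrt{n}\,\big\vert \geq u \big) \leq 2 e^{-c u^2}\;,
\end{equation*}
and therefore ${\mathbb{V}\mathrm{ar}}\,\Vert\widetilde{{\mathbf{Z}}}\Vert \leq \mathbb{E}\big[\big(\Vert\widetilde{{\mathbf{Z}}}\Vert - \sqrt{n}\,\big)^2\big] = \int_0^{+\infty} 2u\, \mathbb{P}\big(\big\vert \Vert \widetilde{{\mathbf{Z}}} \Vert - \sqrt{n}\,\big\vert \geq u \big)\, du \leq \nicefrac{2}{c}$, so that one may take $C_1 = \nicefrac{2}{c}$.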
Then: \begin{equation}\label{upperbound_1st_summand} \int_{a}^{b} dR \, 5 \Vert \varphi \Vert_{\infty}^2\frac{\lambda}{2} \bigg(\frac{1}{4 R} + 3 + 9 R\bigg) \, \frac{{\mathbb{V}\mathrm{ar}} \Vert \widetilde{{\mathbf{Z}}} \Vert}{n} \leq 5 \Vert \varphi \Vert_{\infty}^2\frac{\lambda}{2} \bigg(\frac{\ln(b/a)}{4} + 3b + \frac{9}{2} b^2 \bigg) \, \frac{C_1}{n} \,. \end{equation} Note that $C_{\delta}(R) = \vert C_{\delta}(R)\vert \leq \vert f'(R+\delta) \vert + \vert f'(R-\delta) \vert$. For all $R \in (0, +\infty)$, we have: \begin{equation}\label{upperbound_f'} \vert f'(R) \vert \leq \big\vert \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R} \big\vert + \frac{\Vert \varphi \Vert_{\infty}}{2}\sqrt{\frac{\lambda}{2R}} \frac{\mathbb{E}\,\Vert \widetilde{{\mathbf{Z}}} \Vert}{\sqrt{n}} \leq \frac{\Vert \varphi \Vert_{\infty}}{2}\sqrt{\frac{\lambda}{2}} \bigg( \sqrt{\frac{\lambda}{2}}\,\Vert \varphi \Vert_{\infty} + \frac{1}{\sqrt{R}}\bigg)\;. \end{equation} The second inequality in \eqref{upperbound_f'} follows from the upper bounds $\vert \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R} \vert \leq \nicefrac{\lambda \Vert \varphi \Vert_{\infty}^2}{4}$ (see \eqref{upperbound_derivative_fn_R}) and $\mathbb{E} \Vert\widetilde{{\mathbf{Z}}}\Vert \leq \sqrt{\mathbb{E}\,\Vert\widetilde{{\mathbf{Z}}}\Vert^2} = \sqrt{n}$, the latter being a direct consequence of Jensen's inequality.
Thus, for the second summand, we obtain $\forall \delta \in (0,a)$: \begin{align}\label{upperbound_2nd_summand} &\int_a^b dR \, C_{\delta}(R)^2\nonumber\\ &\qquad\quad\leq \Vert \varphi \Vert_{\infty}\sqrt{\frac{\lambda}{2}} \bigg( \sqrt{\frac{\lambda}{2}}\,\Vert \varphi \Vert_{\infty} + \frac{1}{\sqrt{a - \delta}}\bigg) \int_a^b dR \, C_{\delta}(R)\nonumber\\ &\qquad\quad= \Vert \varphi \Vert_{\infty}\sqrt{\frac{\lambda}{2}} \bigg( \sqrt{\frac{\lambda}{2}}\,\Vert \varphi \Vert_{\infty} + \frac{1}{\sqrt{a-\delta}}\bigg) \big[\big(f(b+\delta) - f(b-\delta)\big) - \big(f(a + \delta) - f(a -\delta)\big)\big]\nonumber\\ &\qquad\quad\leq \lambda \Vert \varphi \Vert_{\infty}^2\delta \bigg( \sqrt{\frac{\lambda}{2}}\,\Vert \varphi \Vert_{\infty} + \frac{1}{\sqrt{a-\delta}}\bigg)^{\! 2}\;. \end{align} The last inequality is a simple application of the mean value theorem. We finally turn to the third summand. By Proposition~\ref{prop:concentration_free_entropy} of Appendix \ref{app:concentration_free_entropy}, there exists a positive constant $C_2$ depending only on $a$, $b$, $\Vert \varphi \Vert_{\infty}$, $\Vert \varphi' \Vert_{\infty}$, $\Vert \varphi'' \Vert_{\infty}$, $M_S$ and $\lambda$ such that $\forall(t,R) \in [0,1] \times (0,b+a)$: \begin{equation}\label{upperbound_variance_free_entropy} \mathbb{E}\big[\big(F_n(t,R) - f_n(t,R)\big)^2 \,\big] \leq \frac{C_2}{n}\,. \end{equation} Using \eqref{upperbound_variance_free_entropy}, we see that the third summand satisfies $\forall \delta \in (0,a)$: \begin{equation}\label{upperbound_3rd_summand} \int_{a}^{b} \! dR \, \frac{5}{\delta^2}\sum_{x \in \{-\delta,0,\delta\}} \mathbb{E}\big[\big(F_n(t,R+x) - f_n(t,R+x)\big)^2 \,\big] \leq \frac{15C_2}{\delta^2 n} b \;. \end{equation} To end the proof, it remains to integrate \eqref{upperbound_variance_thermal_L} over $R \in [a,b]$ and use the three upper bounds \eqref{upperbound_1st_summand}, \eqref{upperbound_2nd_summand} and \eqref{upperbound_3rd_summand}.
\end{IEEEproof} \subsection{Concentration of \texorpdfstring{$Q$}{Q} around its expectation: proof of Proposition~\ref{prop:concentration_overlap}} Using the upper bound \eqref{upperbound_fluctuation_Q_E<Q>}, we directly obtain: \begin{equation} \frac{\lambda^2}{16}\int_{a}^{b} \mathbb{E}\,\big\langle \big(Q-\mathbb{E}\,\langle Q \rangle_{t,R}\,\big)^2\,\big\rangle_{t,R}\,dR \leq \int_a^b \mathbb{E}\,\langle (\mathcal{L} - \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R})^2 \rangle_{t,R} \,dR \;. \end{equation} We then use the concentration results for $\mathcal{L}$, that is, Propositions~\ref{prop:concentration_L_on_<L>} and~\ref{prop:concentration_<L>_on_E<L>}, to upper bound $$ \int_a^b \mathbb{E}\,\langle (\mathcal{L} - \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R})^2 \rangle_{t,R} \,dR = \int_a^b \mathbb{E}\,\langle (\mathcal{L} - \langle \mathcal{L} \rangle_{t,R})^2 \rangle_{t,R} \, dR + \int_a^b \mathbb{E}[(\langle \mathcal{L} \rangle_{t,R} - \mathbb{E}\,\langle \mathcal{L} \rangle_{t,R})^2\,]\,dR $$ and prove Proposition~\ref{prop:concentration_overlap}.\hfill\IEEEQED \section{Proof of Proposition~\ref{proposition:properties_h}}\label{app:proof_mmse} The proof relies on the envelope theorem \cite[Corollary 4]{Milgrom_Envelope_Theorems} to obtain the derivative of $h$. We proceed as follows: \begin{enumerate} \item We show that $h$ is the minimization, over a compact set, of a function having sufficient regularity properties to apply \cite[Corollary 4]{Milgrom_Envelope_Theorems}. \item The latter gives a formula for the derivative of $h$ at any point where it is differentiable. \item We use an optimality condition on $q_x^* \in \mathcal{Q}_x^*(\lambda)$ leading to the simplified formula \eqref{derivative_h} for $h'(\lambda)$.
\end{enumerate} \begin{IEEEproof}[Proof of Proposition \ref{proposition:properties_h}] We proceed according to the plan outlined above.\\ \noindent\textbf{1)} We define $f(q_x,q_s,\lambda) \triangleq {\sup}_{r_s \geq 0}\; \psi_{\lambda,\alpha}(q_x , q_s, r_s)$. By the definition \eqref{def_potential_psi} of $\psi_{\lambda, \alpha}$, we have for all $(q_x, q_s,\lambda) \in [0,\rho_x] \times [0,\rho_s] \times (0,+\infty)$: \begin{equation}\label{def_potential_f} f(q_x,q_s,\lambda) = \frac{1}{\alpha}I_{P_S}^*\bigg(\frac{q_s - \rho_s}{2}\bigg) + I_\varphi\bigg(\frac{\lambda q_x^2}{2}, q_s;\rho_s\bigg) +\frac{\lambda}{12} (\rho_x - q_x)^2(\rho_x + 2 q_x)\;, \end{equation} where the functions $I_{P_S}^*$ and $I_{\varphi}(\cdot\, , \cdot\,; \rho_s)$ are defined in Lemma~\ref{lemma:properties_I_PS} and Lemma~\ref{lemma:properties_functions}, respectively. By Lemma~\ref{lemma:properties_functions}, $I_{\varphi}(\cdot\, , \cdot\,; \rho_s)$ is continuous on $[0,+\infty) \times [0,\rho_s]$. By Lemma~\ref{lemma:properties_I_PS}, $I_{P_S}^*$ is convex and finite on $(-\infty,0)$, hence continuous on $(-\infty,0)$. Besides, $I_{P_S}^*$ is nondecreasing on $(-\infty,0)$ and we distinguish between two cases: (i) $\lim_{\substack{x \to 0\\x<0}} I_{P_S}^*(x)$ exists and is finite, and (ii) $\lim_{\substack{x \to 0\\x<0}} I_{P_S}^*(x)$ diverges to $+\infty$. If (i) then, by monotonicity of $I_{P_S}^*$, $\lim_{\substack{x \to 0\\x<0}} I_{P_S}^*(x) \leq I_{P_S}^*(0)$. We can redefine $I_{P_S}^*$ at $x=0$ to make it left continuous while leaving $h$ unchanged: $I_{P_S}^*(0) \triangleq \lim_{\substack{x \to 0\\x<0}} I_{P_S}^*(x)$. Hence, $f$ is continuous on $[0,\rho_x] \times [0,\rho_s] \times (0,+\infty)$ and $h(\lambda) = \min_{(q_x,q_s) \in [0,\rho_x] \times [0,\rho_s]} \;f(q_x,q_s,\lambda)$. 
If (ii), first note that \begin{align*} f(0,0,\lambda) = \frac{1}{\alpha}I_{P_S}^*\big(-\rho_s/2\big) +\frac{\lambda}{12} \rho_x^3 \quad \text{and} \quad f(q_x,q_s,\lambda) \geq \frac{1}{\alpha}I_{P_S}^*\bigg(\frac{q_s - \rho_s}{2}\bigg) \xrightarrow[q_s \to \rho_s]{q_s < \rho_s} +\infty \;. \end{align*} Then, for every positive $\bar{\lambda}$, there exists $\rho_s(\bar{\lambda}) \in (0,\rho_s)$ such that: \begin{itemize} \item $\forall (q_x,q_s,\lambda) \in [0,\rho_x] \times [\rho_s(\bar{\lambda}),\rho_s] \times (0,\bar{\lambda}]$: $f(q_x,q_s,\lambda) > f(0,0,\lambda)$; \item $f$ is continuous on $[0,\rho_x] \times [0,\rho_s(\bar{\lambda})]\times (0,+\infty)$. \end{itemize} Thus, $\forall \lambda \in (0,\bar{\lambda}]$: $h(\lambda) = \min_{(q_x,q_s) \in [0,\rho_x] \times [0,\rho_s(\bar{\lambda})]} \;f(q_x,q_s,\lambda)$. \noindent \textbf{2)} Fix $\bar{\lambda} > 0$. The conclusion of step \textbf{1)} is that there exists $\rho_s(\bar{\lambda}) \in (0,\rho_s]$ such that \begin{equation*} \forall \lambda \in (0,\bar{\lambda}]: h(\lambda) = \min_{(q_x,q_s) \in [0,\rho_x] \times [0,\rho_s(\bar{\lambda})]} \;f(q_x,q_s,\lambda) \;, \end{equation*} where $f$ is defined in \eqref{def_potential_f} with $I_{P_S}^*(0) \triangleq \lim_{\substack{x \to 0\\x<0}} I_{P_S}^*(x) \in [0,+\infty]$ and is continuous on $[0,\rho_x] \times [0,\rho_s(\bar{\lambda})] \times (0,+\infty)$. By Lemma~\ref{lemma:properties_functions}, $f$ admits a derivative with respect to $\lambda$ and for all $(q_x, q_s, \lambda) \in [0,\rho_x] \times [0,\rho_s(\bar{\lambda})] \times (0,+\infty)$: \begin{equation}\label{derivative_f_wrt_lambda} \frac{\partial f}{\partial \lambda}\bigg\vert_{q_x,q_s,\lambda} = \frac{q_x^2}{2}\frac{\partial I_\varphi}{\partial r} \bigg(\frac{\lambda q_x^2}{2},q_s;\rho_s\bigg) +\frac{1}{12} (\rho_x - q_x)^2(\rho_x + 2 q_x) \;. 
\end{equation} This partial derivative is continuous on $[0,\rho_x] \times [0,\rho_s(\bar{\lambda})] \times (0,+\infty)$ ($\nicefrac{\partial I_\varphi}{\partial r}$ is given by \eqref{sign_derivative_I} and its continuity is justified by domination assumptions). For all $\lambda \in (0,\bar{\lambda})$, define the following nonempty subset of $[0,\rho_x] \times [0,\rho_s(\bar{\lambda})]$: $$ \mathcal{Q}_{x,s}^*(\lambda) \triangleq \big\{(q_x^*,q_s^*) \in [0,\rho_x] \times [0,\rho_s]: f(q_x^* , q_s^*, \lambda) = h(\lambda)\big\}\;. $$ By \cite[Corollary 4]{Milgrom_Envelope_Theorems}, $h$ is differentiable at $\lambda \in (0,\bar{\lambda})$ if, and only if, the set \begin{equation*} \mathcal{F}(\lambda) \triangleq \bigg\{ \frac{\partial f}{\partial \lambda}\bigg\vert_{q_x^*,q_s^*,\lambda}: (q_x^* , q_s^*) \in \mathcal{Q}_{x,s}^*(\lambda)\bigg\} \end{equation*} is a singleton. In this case, $\forall (q_x^* , q_s^*) \in \mathcal{Q}_{x,s}^*(\lambda): h'(\lambda) = \frac{\partial f}{\partial \lambda}\big\vert_{q_x^*,q_s^*,\lambda}$. Note that $\mathcal{F}(\lambda)$ could be a singleton without $\mathcal{Q}_{x,s}^*(\lambda)$ being one. However, in the next and final step, we derive a simple expression for $\frac{\partial f}{\partial \lambda}\big\vert_{q_x^*,q_s^*,\lambda}$ where $(q_x^* , q_s^*) \in \mathcal{Q}_{x,s}^*(\lambda)$ that shows that $\mathcal{F}(\lambda)$ is a singleton if, and only if, $\mathcal{Q}_{x}^*(\lambda)$ is one too. \noindent \textbf{3)} Let $\lambda \in (0,\bar{\lambda})$ and $(q_x^*, q_s^*) \in \mathcal{Q}_{x,s}^*(\lambda)$. The function $q_x \mapsto f(q_x,q_s^*,\lambda)$ is differentiable on $[0,\rho_x]$ and $f(q_x^*,q_s^*,\lambda) = \min_{q_x \in [0,\rho_x]} f(q_x,q_s^*,\lambda)$. 
If $q_x^* \in (0,\rho_x)$ then it satisfies the optimality condition $\frac{\partial f}{\partial q_x}\Big\vert_{q_x^*,q_s^*,\lambda} = 0$, i.e., \begin{equation}\label{optimality_condition_q_x*} q_x^* \,\frac{\partial I_\varphi}{\partial r} \bigg(\frac{\lambda (q_x^*)^2}{2},q_s^*;\rho_s\bigg) = \frac{q_x^*}{2} (\rho_x - q_x^*)\;. \end{equation} The identity \eqref{optimality_condition_q_x*} is trivially satisfied if $q_x^* = 0$. If $q_x^* = \rho_x$ then the necessary optimality condition reads $\frac{\partial f}{\partial q_x}(\rho_x,q_s^*,\lambda) = \lambda \rho_x \frac{\partial I_\varphi}{\partial r} \big(\frac{\lambda \rho_x^2}{2},q_s^*;\rho_s\big) \leq 0$. We also show in the proof of Lemma~\ref{lemma:properties_functions} that $\frac{\partial I_\varphi}{\partial r} \geq 0$. Hence, if $q_x^* = \rho_x$, the identity \eqref{optimality_condition_q_x*} is still satisfied. Making use of the identity \eqref{optimality_condition_q_x*} in \eqref{derivative_f_wrt_lambda}, we have $\forall (q_x^*, q_s^*) \in \mathcal{Q}_{x,s}^*(\lambda)$: \begin{align*} \frac{\partial f}{\partial \lambda}\bigg\vert_{q_x^*,q_s^*,\lambda} &= \frac{(q_x^*)^2}{2}\frac{\partial I_\varphi}{\partial r} \bigg(\frac{\lambda (q_x^*)^2}{2},q_s^*;\rho_s\bigg) +\frac{1}{12} (\rho_x - q_x^*)^2(\rho_x + 2 q_x^*)\\ &= \frac{(q_x^*)^2}{4} (\rho_x - q_x^*) +\frac{1}{12} (\rho_x - q_x^*)^2(\rho_x + 2 q_x^*) = \frac{\rho_x^3 - (q_x^*)^3}{12}\;. \end{align*} It follows that $\mathcal{F}(\lambda)$ is a singleton if, and only if, $\mathcal{Q}_x^*(\lambda)$ is a singleton. We conclude that $h$ is differentiable if, and only if, $\mathcal{Q}_x^*(\lambda)$ is a singleton in which case, letting $\mathcal{Q}_x^*(\lambda) = \{q_x^*(\lambda)\}$, $h'(\lambda) = \frac{\rho_x^3 - (q_x^*(\lambda))^3}{12}$. 
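The algebra of this last step is elementary but easy to get wrong. As a sanity check outside the proof, both the derivative of the cubic penalty, which produces the optimality condition \eqref{optimality_condition_q_x*}, and the final simplification can be verified symbolically, for instance with SymPy:

```python
# Symbolic check (SymPy) of the two algebraic facts used in step 3:
#   (a) d/dq [ (lam/12) (rho - q)^2 (rho + 2q) ] = -(lam/2) q (rho - q),
#       which is how the optimality condition arises;
#   (b) q^2/4 (rho - q) + 1/12 (rho - q)^2 (rho + 2q) = (rho^3 - q^3)/12.
import sympy as sp

q, rho, lam = sp.symbols('q rho lam', positive=True)

cubic = (lam / 12) * (rho - q)**2 * (rho + 2*q)

# (a): derivative of the cubic penalty with respect to q
assert sp.expand(sp.diff(cubic, q) + (lam / 2) * q * (rho - q)) == 0

# (b): simplification leading to h'(lambda) = (rho^3 - q^3)/12
lhs = q**2 / 4 * (rho - q) + sp.Rational(1, 12) * (rho - q)**2 * (rho + 2*q)
assert sp.expand(lhs - (rho**3 - q**3) / 12) == 0
print("both identities hold")
```

Both assertions reduce to polynomial identities, so expanding the difference and checking that it vanishes suffices.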
\end{IEEEproof} \section{Establishing the sum-rule}\label{app:establishing_sum_rule} \begin{lemma}[Link between average free entropy and normalized mutual information]\label{lemma:link_i_n_f_n} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold, and that $R'(t,\epsilon)$ is uniformly bounded in ${(t,\epsilon) \in [0,1] \times [0,+\infty)}$ where $R'(\cdot,\epsilon)$ denotes the derivative of $R(\cdot,\epsilon)$. The normalized mutual information $\eqref{definition_i_n}$ and its partial derivative with respect to $t$, which we denote $i'_n(t,\epsilon)$, satisfy: \begin{align} i_n(t,\epsilon) &= - f_n(t,\epsilon) + \frac{\lambda R(t,\epsilon)}{4} \rho_x + \frac{\lambda (1-t)}{12} \rho_x^3 + (1-t)\,\smallO_n(1) \label{link_i_n_f_n} \\ i'_n(t,\epsilon) &= - f'_n(t,\epsilon) + \frac{\lambda R'(t,\epsilon)}{4} \rho_x - \frac{\lambda}{12} \rho_x^3 \, + \, \smallO_n(1)\;.\label{link_i'_n_f'_n} \end{align} The quantity $\smallO_n(1)$ does not depend on $(t,\epsilon)$ and vanishes when $n$ goes to infinity. Besides, at $t=0$, for all $\epsilon \in [0,+\infty)$: \begin{equation}\label{i_n(t=0,epsilon)} \bigg\vert i_n(0,\epsilon) - \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} \bigg\vert \leq \frac{\lambda \Vert \varphi \Vert_{\infty}^2}{2}\,\epsilon\;. 
\end{equation} \end{lemma} \begin{IEEEproof} By definition of the normalized mutual information \eqref{definition_i_n}, we have: \begin{align} i_n(t,\epsilon) &= \frac{1}{n}H\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}\big\vert {\mathbf{W}} \big) -\frac{1}{n}H\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}\big\vert {\mathbf{S}}, {\mathbf{W}} \big)\nonumber\\ &= -\frac{1}{n}\mathbb{E}\,\ln\Big(\mathcal{Z}_{t,\epsilon}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})e^{-\frac{1}{2} (\sum_{{\mathbf{i}} \in \mathcal{I} } Y_{{\mathbf{i}}}^2+ \Vert \widetilde{{\mathbf{Y}}} \Vert^2 )}\Big) + \frac{1}{n}\mathbb{E}\Big[\ln e^{-\frac{1}{2} (\sum_{{\mathbf{i}} \in \mathcal{I} } Z_{{\mathbf{i}}}^2+ \Vert \widetilde{{\mathbf{Z}}} \Vert^2 )}\Big]\nonumber\\ &= -f_n(t,\epsilon) + \frac{\lambda R(t,\epsilon)}{4n}\sum_{j=1}^n \mathbb{E}[ X_j^2 ] + \frac{\lambda (1-t)}{2 n^3 }\sum_{{\mathbf{i}} \in \mathcal{I} }\mathbb{E}[X_{i_1}^2 X_{i_2}^2 X_{i_3}^2]\nonumber\\ &= -f_n(t,\epsilon) + \frac{\lambda R(t,\epsilon)}{4} \mathbb{E}[ X_1^2 ] \nonumber\\& \qquad\qquad\quad + \frac{\lambda (1-t)}{2 n^3 } \bigg(\binom{n}{3}\mathbb{E}[X_{1}^2 X_{2}^2 X_{3}^2] + n(n-1)\mathbb{E}[X_{1}^2 X_{2}^4] + n\mathbb{E}[X_{1}^6]\bigg)\nonumber\\ &= -f_n(t,\epsilon) + \frac{\lambda R(t,\epsilon)}{4} \mathbb{E}[ X_1^2 ] + \frac{\lambda (1-t)}{12} \mathbb{E}[X_{1}^2 X_{2}^2 X_{3}^2] + \lambda (1-t)\mathcal{O}\big(n^{-1}\big)\;.\label{link_i_n_f_n_proof} \end{align} The quantity $\mathcal{O}(n^{-1})$ appearing in the last equality does not depend on $(t,\lambda)$ and is such that $\big\vert \mathcal{O}(n^{-1}) \big\vert \leq \nicefrac{C}{n}$ with $C \triangleq \Vert \varphi \Vert_{\infty}^6/2$. 
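The index-set decomposition used in the fourth equality can be checked by brute force; the short script below (illustrative only) enumerates all triples $i_1 \leq i_2 \leq i_3$ and classifies them by their number of distinct indices:

```python
# Numerical check of the counting used above: splitting the index set
# I = {(i1,i2,i3) : 1 <= i1 <= i2 <= i3 <= n} according to how many indices
# coincide gives C(n,3) all-distinct triples, n(n-1) triples with exactly
# one repeated index, and n triples with all three indices equal.
from itertools import combinations_with_replacement
from math import comb

for n in range(1, 12):
    distinct = pairs = triple = 0
    for i1, i2, i3 in combinations_with_replacement(range(n), 3):
        k = len({i1, i2, i3})          # number of distinct indices
        if k == 3:
            distinct += 1
        elif k == 2:
            pairs += 1
        else:
            triple += 1
    assert distinct == comb(n, 3)
    assert pairs == n * (n - 1)
    assert triple == n
print("decomposition of |I| verified for n = 1..11")
```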
It directly follows that \begin{equation}\label{link_i'_n_f'_n_proof} i'_n(t,\epsilon) = -f'_n(t,\epsilon) + \frac{\lambda R'(t,\epsilon)}{4} \mathbb{E}[ X_1^2 ] - \frac{\lambda}{12} \mathbb{E}[X_{1}^2 X_{2}^2 X_{3}^2] - \lambda \mathcal{O}\big(n^{-1}\big)\;; \end{equation} where the quantity $\mathcal{O}(n^{-1})$ on the right-hand side of \eqref{link_i'_n_f'_n_proof} is the same as the one appearing on the right-hand side of \eqref{link_i_n_f_n_proof}. Note that $\mathbb{E}[X_{1}^2 X_{2}^2 X_{3}^2] = \mathbb{E}[\mathbb{E}[X_{1}^2 \vert {\mathbf{S}}]^3]$ converges to $\rho_x^3$ as $n$ goes to infinity (the proof of this limit is similar to \cite[Lemma 3 of Supplementary material]{Gabrie_TwoLayerGLM_JSTAT_2019}). This limit together with \eqref{link_i_n_f_n_proof} and \eqref{link_i'_n_f'_n_proof} ends the proofs of \eqref{link_i_n_f_n} and \eqref{link_i'_n_f'_n}. At $t=0$, we can use \eqref{link_i_n_f_n_proof} to obtain (remember that $R(0,\epsilon) = \epsilon$): \begin{equation}\label{upperbound_i_n(0,epsilon)-i_n(0,0)} \vert i_n(0,\epsilon) - i_n(0,0)\vert \leq \vert f_n(0,\epsilon) - f_n(0,0)\vert + \frac{\lambda \epsilon}{4}\mathbb{E}[X_1^2]\;. \end{equation} It is clear that $i_n(0,0) = n^{-1}I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})$ where ${\mathbf{Y}}, {\mathbf{X}}$ are defined in \eqref{eq:entries_Y}, \eqref{eq:definition_X}. 
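The factorization $\mathbb{E}[X_{1}^2 X_{2}^2 X_{3}^2] = \mathbb{E}[\mathbb{E}[X_{1}^2 \vert {\mathbf{S}}]^3]$ only uses the fact that, given ${\mathbf{S}}$, the entries $X_i$ are i.i.d.\ (the rows of ${\mathbf{W}}$ are i.i.d.). The following toy check verifies it by exhaustive enumeration on a small discrete surrogate model (Rademacher entries instead of Gaussians and $\varphi = \tanh$; these choices are for illustration only):

```python
# Exhaustive check, on a tiny discrete toy model, of the factorization
# E[X1^2 X2^2 X3^2] = E[ E[X1^2 | S]^3 ]. It only relies on X1, X2, X3
# being conditionally i.i.d. given S (here: i.i.d. rows of W), so we can
# replace Gaussians by Rademacher variables and enumerate everything.
from itertools import product
from math import tanh, sqrt, isclose

p = 2
S_vals = list(product([-1.0, 1.0], repeat=p))    # latent vector S
row_vals = list(product([-1.0, 1.0], repeat=p))  # one row of W

def x(row, s):                                   # X_i = phi(<W_i, S>/sqrt(p))
    return tanh(sum(r * si for r, si in zip(row, s)) / sqrt(p))

# Left-hand side: average over S and three i.i.d. rows of X1^2 X2^2 X3^2.
lhs = sum(x(r1, s)**2 * x(r2, s)**2 * x(r3, s)**2
          for s in S_vals for r1 in row_vals for r2 in row_vals for r3 in row_vals)
lhs /= len(S_vals) * len(row_vals)**3

# Right-hand side: average over S of (E[X1^2 | S])^3.
rhs = sum((sum(x(r, s)**2 for r in row_vals) / len(row_vals))**3
          for s in S_vals) / len(S_vals)

assert isclose(lhs, rhs, rel_tol=1e-9)
print(lhs, rhs)
```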
At $t=0$, the free entropy \eqref{interpolating_free_entropy} reads: \begin{equation}\label{f_n(0,epsilon)} f_n(0,\epsilon) = \frac1n \mathbb{E} \ln \int dP_s({\mathbf{s}}) \, e^{-\mathcal{H}_{0,\epsilon}({\mathbf{s}} \,;\, {\mathbf{Z}}, \widetilde{{\mathbf{Z}}}, {\mathbf{X}}, {\mathbf{W}})}\, \end{equation} with (remember that $x_1,\dots,x_n$ are the entries of ${\mathbf{x}} \triangleq \varphi(\nicefrac{{\mathbf{W}} {\mathbf{s}}}{\sqrt{p}})$): \begin{multline*} \mathcal{H}_{0,\epsilon}({\mathbf{s}} \,;\, {\mathbf{Z}}, \widetilde{{\mathbf{Z}}}, {\mathbf{X}}, {\mathbf{W}}) \triangleq \sum_{{\mathbf{i}} \in \mathcal{I}} \frac{\lambda}{2n^2} x_{i_1}^2 x_{i_2}^2 x_{i_3}^2 - \frac{\lambda}{n^2}\, X_{i_1} X_{i_2} X_{i_3} x_{i_1} x_{i_2} x_{i_3} - \frac{\sqrt{\lambda}}{n}\, Z_{{\mathbf{i}}} x_{i_1} x_{i_2} x_{i_3}\\ + \sum_{j=1}^{n} \frac{\lambda \epsilon}{4} x_j^2 - \frac{\lambda \epsilon}{2}\,X_j x_j - \sqrt{\frac{\lambda \epsilon}{2}}\,\widetilde{Z}_j x_j \;. \end{multline*} Differentiating \eqref{f_n(0,epsilon)} under the integral sign yields $ \nicefrac{\partial f_n}{\partial \epsilon}\vert_{0,\epsilon} = - \mathbb{E}\,\langle \nicefrac{\partial \mathcal{H}_{0,\epsilon}}{\partial \epsilon}\rangle_{0,\epsilon} = - \mathbb{E}\,\langle \mathcal{L} \rangle_{0,\epsilon} $ where $$ \mathcal{L} \triangleq \frac{1}{n}\sum_{j=1}^{n} \frac{\lambda}{4} x_j^2 - \frac{\lambda}{2}\,X_j x_j - \frac{1}{2}\sqrt{\frac{\lambda}{2\epsilon}}\,\widetilde{Z}_j x_j\,. $$ We show in Lemma~\ref{lemma:computation_E<L>_and_others} in Appendix \ref{app:concentration_overlap}, that $\mathbb{E}\,\langle \mathcal{L} \rangle_{0,\epsilon} = -\frac{\lambda}{4} \mathbb{E}\,\langle Q \rangle_{0,\epsilon}$ with $Q \triangleq n^{-1}{\mathbf{x}}^T {\mathbf{X}}$ the overlap. Hence $\big\vert \nicefrac{\partial f_n}{\partial \epsilon}\vert_{0,\epsilon} \big\vert \leq \nicefrac{\lambda \Vert \varphi \Vert_{\infty}^2}{4}$. 
This upper bound together with the mean value theorem and the inequality \eqref{upperbound_i_n(0,epsilon)-i_n(0,0)} yields \eqref{i_n(t=0,epsilon)}. \end{IEEEproof} \begin{lemma}[Derivative of the normalized mutual information]\label{lemma:formula_derivative_free_entropy} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold, and that $R'(t,\epsilon)$ is uniformly bounded in $(t,\epsilon) \in [0,1] \times [0,+\infty)$ where $R'(\cdot,\epsilon)$ denotes the derivative of $R(\cdot,\epsilon)$. Recall that $$ Q \triangleq \frac{1}{n} \sum_{i=1}^{n} \varphi\bigg(\bigg[\frac{{\mathbf{W}} {\mathbf{s}}}{\sqrt{p}}\bigg]_i \,\bigg) \varphi\bigg(\bigg[\frac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}}\bigg]_i \,\bigg) = \frac{1}{n} \sum_{i=1}^{n} x_i X_i $$ denotes the overlap. Then, the derivative with respect to $t$ of the normalized mutual information \eqref{definition_i_n} satisfies $\forall (t, \epsilon) \in [0,1] \times (0,+\infty)$: \begin{equation}\label{formula_derivative_free_entropy} i'_{n}(t,\epsilon) = \frac{\lambda}{12} \big(\mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - \rho_x^3\big) + \frac{\lambda R'(t,\epsilon)}{4} \big(\rho_x - \mathbb{E}\,\langle Q \rangle_{t,\epsilon}\big) + \smallO_n(1) \;, \end{equation} where $\smallO_{n}(1)$ vanishes uniformly in $(t, \epsilon)$ as $n$ goes to infinity. \end{lemma} \begin{IEEEproof} The average interpolating free entropy satisfies: \begin{equation}\label{eq:precise_formula_f} f_n(t,\epsilon) = \frac{1}{n} \mathbb{E}_{{\mathbf{S}}, {\mathbf{W}}}\bigg[\int d{\mathbf{y}} d\widetilde{{\mathbf{y}}} \: \frac{e^{-\frac{1}{2} (\sum_{{\mathbf{i}} \in \mathcal{I} } y_{{\mathbf{i}}}^2+ \Vert \widetilde{{\mathbf{y}}} \Vert^2 )}}{\sqrt{2\pi}^{n + \vert \mathcal{I} \vert}} e^{-\mathcal{H}_{t,\epsilon}({\mathbf{s}} \,;\, {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})} \ln \mathcal{Z}_{t,\epsilon}\big({\mathbf{y}},\widetilde{{\mathbf{y}}}, {\mathbf{W}}\big) \bigg] \;. 
\end{equation} Taking the derivative with respect to $t$ of \eqref{eq:precise_formula_f}, we get: \begin{align} f'_n(t,\epsilon) &= -\frac{1}{n} \mathbb{E}\big[\mathcal{H}'_{t,\epsilon}\big({\mathbf{S}}; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}}\big) \ln \mathcal{Z}_{t,\epsilon}\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)} , {\mathbf{W}}\big)\big]\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\quad -\frac{1}{n} \mathbb{E}\big[\big\langle \mathcal{H}'_{t,\epsilon}\big({\mathbf{s}}; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}}\big)\big\rangle_{\!t,\epsilon} \,\big]\,, \label{f'_sum_2_expectations}\\ \shortintertext{with} \mathcal{H}'_{t,\epsilon}({\mathbf{s}} ; {\mathbf{y}}, \widetilde{{\mathbf{y}}}, {\mathbf{W}}) &\triangleq \sum_{{\mathbf{i}} \in \mathcal{I}} -\frac{\lambda}{2n^2} x_{i_1}^2 x_{i_2}^2 x_{i_3}^2 + \frac{1}{2n}\sqrt{\frac{\lambda}{1-t}}\, y_{{\mathbf{i}}} x_{i_1} x_{i_2} x_{i_3}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\quad +\sum_{j=1}^{n} \frac{\lambda R'(t,\epsilon)}{4} x_j^2 - \frac{R'(t,\epsilon)}{2}\sqrt{\frac{\lambda}{2 R(t,\epsilon)}}\,\widetilde{y}_j x_j \;. \label{derivative_hamiltonian} \end{align} Equation \eqref{derivative_hamiltonian} comes from differentiating the interpolating Hamiltonian \eqref{interpolating_hamiltonian}. Evaluating \eqref{derivative_hamiltonian} at $({\mathbf{s}}, {\mathbf{y}}, \widetilde{{\mathbf{y}}}) = ({\mathbf{S}}, {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)})$ yields: \begin{equation}\label{eq:H_timederivative} \mathcal{H}'_{t,\epsilon}\big({\mathbf{S}}; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)} , {\mathbf{W}} \big) = \sum_{{\mathbf{i}} \in \mathcal{I}} \frac{1}{2n}\sqrt{\frac{\lambda}{1-t}}\, Z_{{\mathbf{i}}} X_{i_1} X_{i_2} X_{i_3} - \sum_{j=1}^{n} \frac{R'(t,\epsilon)}{2}\sqrt{\frac{\lambda}{2 R(t,\epsilon)}}\,\widetilde{Z}_j X_j \;. 
\end{equation} The second expectation on the right-hand side of \eqref{f'_sum_2_expectations} is now easily shown to be zero thanks to the Nishimori identity: $ \mathbb{E}\,\big\langle \mathcal{H}'_{t,\epsilon}\big({\mathbf{s}}; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}}\big)\big\rangle_{\!t,\epsilon} = \mathbb{E}\,\mathcal{H}'_{t,\epsilon}\big({\mathbf{S}}; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}}\big) =0$. Therefore, the identity \eqref{f'_sum_2_expectations} simplifies to: \begin{multline}\label{eq:non_final_formula_derivative} f'_{n}(t,\epsilon) = -\frac{1}{2n^2}\sqrt{\frac{\lambda}{1-t}}\sum_{{\mathbf{i}} \in \mathcal{I}} \mathbb{E}[Z_{{\mathbf{i}}} X_{i_1} X_{i_2} X_{i_3}\ln \mathcal{Z}_{t,\epsilon}\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)} , {\mathbf{W}}\big)]\\ +\frac{R'(t,\epsilon)}{2n}\sqrt{\frac{\lambda}{2 R(t,\epsilon)}}\, \sum_{j=1}^{n} \mathbb{E}[\widetilde{Z}_j X_j \ln \mathcal{Z}_{t,\epsilon}\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)} , {\mathbf{W}}\big)]\;. \end{multline} The two expectations appearing on the right-hand side of~\eqref{eq:non_final_formula_derivative} are simplified thanks to Stein's lemma, i.e., by integrating by parts w.r.t.\ the Gaussian noises: \begin{align*} \mathbb{E}[Z_{{\mathbf{i}}} X_{i_1} X_{i_2} X_{i_3}\ln \mathcal{Z}_{t,\epsilon}\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)} , {\mathbf{W}}\big)] &= \frac{\sqrt{\lambda (1-t)}}{n}\mathbb{E}\,\langle x_{i_1} X_{i_1} x_{i_2}X_{i_2} x_{i_3} X_{i_3} \rangle_{t,\epsilon} \;;\\ \mathbb{E}[\widetilde{Z}_j X_j \ln \mathcal{Z}_{t,\epsilon}\big({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)} , {\mathbf{W}} \big)] &= \sqrt{\frac{ \lambda R(t,\epsilon)}{2}}\mathbb{E}\,\langle x_j X_j \rangle_{t,\epsilon} \;. 
\end{align*} Hence, we have \begin{align*} f'_{n}(t,\epsilon) &= -\frac{\lambda}{2n^3} \sum_{{\mathbf{i}} \in \mathcal{I}} \mathbb{E}\,\langle x_{i_1} X_{i_1} x_{i_2}X_{i_2} x_{i_3} X_{i_3} \rangle_{t,\epsilon} +\frac{\lambda R'(t,\epsilon)}{4 n}\, \sum_{j=1}^{n} \mathbb{E}\,\langle x_j X_j \rangle_{t,\epsilon}\\ &= -\frac{\lambda}{12} \mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} + \frac{\lambda R'(t,\epsilon)}{4} \mathbb{E}\,\langle Q \rangle_{t,\epsilon} + \frac{\lambda}{2}\mathcal{O}(n^{-1}) \;, \end{align*} with $\vert \mathcal{O}(n^{-1}) \vert = \frac{1}{n^3} \Big\vert \sum\limits_{{\mathbf{i}} \in \mathcal{I}} \mathbb{E}\,\langle x_{i_1} X_{i_1} x_{i_2}X_{i_2} x_{i_3} X_{i_3} \rangle_{t,\epsilon} - \frac{1}{6}\sum\limits_{i_1,i_2,i_3=1}^n \mathbb{E}\,\langle x_{i_1} X_{i_1} x_{i_2}X_{i_2} x_{i_3} X_{i_3} \rangle_{t,\epsilon} \Big\vert \leq \frac{\Vert \varphi \Vert_{\infty}^6}{n}$. \end{IEEEproof} \section*{Acknowledgment} \noindent C. L acknowledges funding from Swiss National Foundation for Science grant 200021E 175541. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Derivation of the asymptotic Tensor-MMSE}\label{sec:proofthm2} The derivation of the asymptotic Tensor-MMSE rests on the following preliminary proposition. \begin{proposition}\label{proposition:properties_h} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold. Define for all $\lambda \in (0, +\infty)$: \begin{align*} h(\lambda) &\triangleq \mathop{\vphantom{p}\inf}_{q_x \in [0,\rho_x]}\adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \psi_{\lambda,\alpha}(q_x , q_s, r_s) \;;\\ \mathcal{Q}_x^*(\lambda) &\triangleq \bigg\{q_x^* \in [0,\rho_x] : \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \psi_{\lambda,\alpha}(q_x^* , q_s, r_s) = h(\lambda) \bigg\}\;. \end{align*} For every $\lambda > 0$, $\mathcal{Q}_x^*(\lambda)$ is nonempty. 
The function $h$ is differentiable at $\lambda$ if, and only if, the set $\mathcal{Q}_x^*(\lambda)$ is a singleton. In this case, letting $\mathcal{Q}_x^*(\lambda) = \{q_x^*(\lambda)\}$, the derivative of $h$ at $\lambda$ satisfies: \begin{equation}\label{derivative_h} h'(\lambda) = \frac{1}{12} \bigg(\rho_x^3 - \big(q_x^*(\lambda)\big)^3\bigg)\;. \end{equation} \end{proposition} The proof of this result is given in Appendix~\ref{app:proof_mmse}. We can now prove Theorem~\ref{theorem:tensor_MMSE}. \begin{IEEEproof}[Proof of Theorem~\ref{theorem:tensor_MMSE}] Let $n \in \mathbb{N}^*$. The angular brackets $\langle - \rangle_{n,\lambda}$ denote the expectation with respect to the posterior distribution of ${\mathbf{S}}$ given $({\mathbf{Y}}, {\mathbf{W}})$. Define $h_n: \lambda \in (0,+\infty) \mapsto \frac{I({\mathbf{X}};{\mathbf{Y}} \vert {\mathbf{W}})}{n}$ (the mutual information depends on $\lambda$ through the observation ${\mathbf{Y}}$). We have for all $\lambda \in (0, +\infty)$: \begin{align} h_n(\lambda) &= \frac{\lambda}{2 n^3 }\sum_{{\mathbf{i}} \in \mathcal{I} }\mathbb{E}[X_{i_1}^2 X_{i_2}^2 X_{i_3}^2] - \frac{1}{n}\mathbb{E} \ln \! \int \! 
dP_S({\mathbf{s}}) e^{\sum\limits_{{\mathbf{i}} \in \mathcal{I}}x_{i_1} x_{i_2} x_{i_3}\big(- \frac{\lambda}{2n^2} x_{i_1} x_{i_2} x_{i_3} +\frac{\lambda}{n^2}\, X_{i_1} X_{i_2} X_{i_3} +\frac{\sqrt{\lambda}}{n}\, Z_{{\mathbf{i}}}\big)}\nonumber\\ h'_n(\lambda) &= \frac{1}{2 n^3 }\sum_{{\mathbf{i}} \in \mathcal{I} }\mathbb{E}[(X_{i_1} X_{i_2} X_{i_3} - \langle x_{i_1} x_{i_2} x_{i_3} \rangle_{n,\lambda})^2] = \frac{\mathrm{MMSE}_n({\mathbf{X}}^{\otimes 3}\vert {\mathbf{Y}}, {\mathbf{W}})}{12} + \mathcal{O}(n^{-1})\label{I-MMSE_relation}\\ h''_n(\lambda) &= -\frac{1}{2 n^5}\sum_{{\mathbf{i}}, \mathbf{j} \in \mathcal{I}}\mathbb{E}[(\langle x_{i_1} x_{i_2} x_{i_3} x_{j_1} x_{j_2} x_{j_3} \rangle_{n,\lambda} -\langle x_{i_1} x_{i_2} x_{i_3} \rangle_{n,\lambda}\langle x_{j_1} x_{j_2} x_{j_3} \rangle_{n,\lambda})^2]\label{2nd_derivative_hn} \end{align} Differentiations under the integral sign yielding \eqref{I-MMSE_relation} and \eqref{2nd_derivative_hn} are justified by the domination properties implied by \ref{hyp:S_bounded_support}, \ref{hyp:varphi}. $h_n''$ is nonpositive so $h_n$ is concave on $(0,+\infty)$. By Theorem~\ref{theorem:limit_mutual_information}, $h: \lambda \mapsto \mathop{\vphantom{p}\inf}\limits_{q_x \in [0,\rho_x]}\adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \psi_{\lambda,\alpha}(q_x , q_s, r_s)$ is the pointwise limit of the sequence of differentiable concave functions $(h_n)_{n \in \mathbb{N}^*}$. Hence, $h$ is concave and thus differentiable on $(0,+\infty)$ minus a countable set, and at every $\lambda$ where $h$ is differentiable we have: \begin{equation}\label{limit_derivative_hn} \lim_{n \to +\infty} h_n^\prime(\lambda) = h'(\lambda) = \frac{\rho_x^3 - (q_x^*(\lambda))^3}{12} \;. \end{equation} The last equality follows from Proposition~\ref{proposition:properties_h} and $q_x^*(\lambda)$ denotes the unique element of $\mathcal{Q}_x^*(\lambda)$. 
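The identity \eqref{I-MMSE_relation} is an instance of the I-MMSE relation. As a quick numerical illustration (not part of the proof), one can check its classical scalar version, $\mathrm{d}I/\mathrm{d}\lambda = \mathrm{mmse}/2$ for $Y = \sqrt{\lambda}\,X + Z$ with $X, Z \sim \mathcal{N}(0,1)$; roughly speaking, the $\nicefrac{1}{12}$ in \eqref{I-MMSE_relation} can be read as this $\nicefrac{1}{2}$ combined with a $\nicefrac{1}{6}$ symmetry factor for the orderings of an index triple:

```python
# Toy illustration of the scalar I-MMSE relation: for the Gaussian channel
# Y = sqrt(lambda) X + Z with X, Z ~ N(0,1),
#   I(lambda)    = ln(1 + lambda) / 2,
#   mmse(lambda) = 1 / (1 + lambda),
# so that dI/dlambda = mmse/2. We verify this with a central finite difference.
from math import log, isclose

def I(lam):       # mutual information of the scalar Gaussian channel
    return 0.5 * log(1.0 + lam)

def mmse(lam):    # minimum mean square error of the same channel
    return 1.0 / (1.0 + lam)

for lam in [0.1, 0.5, 1.0, 2.0, 5.0]:
    h = 1e-6
    dI = (I(lam + h) - I(lam - h)) / (2 * h)   # central finite difference
    assert isclose(dI, 0.5 * mmse(lam), rel_tol=1e-6)
print("dI/dlambda = mmse/2 verified on a grid")
```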
Combining \eqref{limit_derivative_hn} with the I-MMSE relation \eqref{I-MMSE_relation} yields the theorem. \end{IEEEproof} \section{Introduction}\label{section:introduction} \IEEEPARstart{N}{atural} signals have an underlying structure, an insight that has triggered a paradigm shift in the last fifteen years and spurred fundamental progress in estimation and inference. Compressive sensing \cite{CandesRombergTao_2006,Donoho_CompressedSensing2006} takes sparsity as the model of structure when a signal ${\mathbf{X}}\in\mathbb{R}^n$ has a sparse representation in an appropriate basis, that is, ${\mathbf{X}} = \Psi {\mathbf{Z}}$ with $\Psi$ an $n\times n$ change of basis matrix and ${\mathbf{Z}}\in \mathbb{R}^n$ a sparse vector with $p \ll n$ non-zero components. For example, ${\mathbf{X}}$ can represent a natural image and $\Psi$ a wavelet basis \cite{Mallat_book_1999}. Despite its success, this model of structure is often too constrained because the appropriate basis may be unknown and, more generally, the linearity of the transformation may be a severe limitation. Deep networks have been proposed as an alternative \cite{MousaviPatel_2015} and, with the advent of generative adversarial networks (GAN) \cite{GoodfellowGAN_2014} and variational auto-encoders (VAE) \cite{Hinton504}, such flexible and non-linear ``generative models'' of structure have been the object of intense interest. Roughly speaking, a generative model can be viewed as a mapping $G: {\mathbf{S}}\in \mathbb{R}^p \mapsto {\mathbf{X}} = G({\mathbf{S}})\in \mathbb{R}^n$ with $p\ll n$ and satisfying certain general regularity assumptions \cite{Bora_2017}. In other words, the signal ${\mathbf{X}}$ lies on a low $p$-dimensional ``manifold'' parametrized by ${\mathbf{S}}$. Such models have been studied in the framework of classical denoising problems from observations ${\mathbf{Y}}=A {\mathbf{X}} + {\mathbf{Z}}$ where $A$ is a sensing matrix and ${\mathbf{Z}}$ some Gaussian noise. 
In particular, \cite{Bora_2017} studies fundamental limits under minimal Lipschitz conditions on $G$ and empirically investigates the problem with learned mappings coming from GANs and VAEs. Another kind of generative model takes $G$ equal to a one-layer or multi-layer neural network with fixed weights (i.e., frozen and not learned) drawn from a random matrix ensemble \cite{ManoelKrzakala_2017, DBLP:journals/tit/HandV20, heckel2018rateoptimal, HandLeongVoroninski_2018, DBLP:journals/corr/abs-1803-09319}. Such mappings $G$ are often referred to as \textit{generalized linear models} and this is the terminology that we adopt here. The simplification of fixed random weights has the virtue of being much more amenable to mathematical (or at least analytical) analysis. In particular, the mutual information as well as the message passing algorithmic behaviour for classical denoising have been discussed in depth in a Bayesian setting at various levels of rigor \cite{ManoelKrzakala_2017, Gabrie_TwoLayerGLM_JSTAT_2019}. In this work we investigate generalized models of structure in the context of non-linear estimation (or factorization) of noisy tensors. Tensors representing data have found many modern applications in signal processing, graph analysis, data mining and machine learning \cite{sidiropoulos2016, cichoki2015, kolda2009}, with a large part of the literature focusing on tensor decompositions, either in deterministic settings, or in random settings with independent structureless components. Here we focus on a simple statistical model of noisy symmetric rank-one tensors. A {\it structured} signal ${\mathbf{X}}=(X_1, \cdots, X_n) \in \mathbb{R}^n$ is generated by a one-layer GLM $X_i = \varphi( ({\mathbf{W}} {\mathbf{S}})_i/\sqrt p)$ where the latent vector ${\mathbf{S}}\in \mathbb{R}^p$ has independent and identically distributed (i.i.d.) entries and ${\mathbf{W}}$ is a \textit{known} random matrix with independent standard Gaussian entries. 
We only observe a noisy version of the rank-one tensor ${\mathbf{X}}^{\otimes r}$ ($r\geq 2$) through an additive white Gaussian noise channel, i.e., ${\mathbf{Y}} = \frac{\sqrt{\lambda}}{n^{(r-1)/2}} {\mathbf{X}}^{\otimes r} + {\mathbf{Z}}$ where the noise ${\mathbf{Z}}$ is a symmetric tensor with independent standard Gaussian entries and $\lambda > 0$ is the signal-to-noise ratio. We study the high dimensional limit $n, p\to \infty$ such that $n/p\to \alpha = \Theta(1)$ and show that, quite remarkably, the asymptotic mutual information $\lim_{n\to +\infty}I({\mathbf{X}};{\mathbf{Y}}\vert {\mathbf{W}})/n$ is given by a finite-dimensional variational problem (see Theorem~\ref{theorem:limit_mutual_information} in Section~\ref{subsec:results}). We also rigorously deduce the corresponding asymptotic \textit{minimum mean square error} (MMSE), which is given by a simple function of the solution to the variational problem (see Theorem~\ref{theorem:tensor_MMSE} in Section~\ref{subsec:results}). For concreteness, and to keep the analysis as simple as possible, we focus on the case $r=3$ and a one-layer GLM. However, extensions to any order $r > 3$, multi-layer GLM and asymmetric tensors are possible with the techniques used here. An extensive recent study of the matrix case $r=2$ can be found in \cite{aubin2019spiked}. The analysis and results presented here go beyond many recent works dealing with i.i.d.\ components for ${\mathbf{X}}$, for matrices $r=2$ \cite{XXT, Lelarge_fundamental_2019, miolane2017fundamental}, and tensors $r\geq 3$ \cite{LesieurMiolane_2017, barbier2017layered}. There is a rich phenomenology of phase transitions already for the i.i.d.\ case which stems from the (simpler) variational formula for the mutual information. 
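For concreteness, one instance of this observation model for $r=3$ can be sampled as follows (a NumPy sketch; the choice $\varphi = \tanh$, the Gaussian prior on ${\mathbf{S}}$, and the dimensions are illustrative only):

```python
# Minimal NumPy sketch of the observation model for r = 3:
#   X_i = phi((W S)_i / sqrt(p)) with phi = tanh (an arbitrary choice here),
#   Y = (sqrt(lambda)/n^((r-1)/2)) X^{otimes 3} + Z,
# where Z is a symmetric tensor with one standard Gaussian per unordered triple.
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 20, 8, 2.0                 # here alpha = n/p = 2.5 (illustrative)

S = rng.standard_normal(p)             # latent vector with i.i.d. entries
W = rng.standard_normal((n, p))        # known Gaussian weight matrix
X = np.tanh(W @ S / np.sqrt(p))        # structured signal generated by the GLM

X3 = np.einsum('i,j,k->ijk', X, X, X)  # rank-one order-3 tensor X^{otimes 3}

# Symmetric Gaussian noise: draw one N(0,1) per unordered triple i <= j <= k
Z = np.empty((n, n, n))
for i in range(n):
    for j in range(i, n):
        for k in range(j, n):
            g = rng.standard_normal()
            for a, b, c in set(permutations((i, j, k))):
                Z[a, b, c] = g

Y = np.sqrt(lam) / n * X3 + Z          # n^((r-1)/2) = n for r = 3

assert np.allclose(Y, Y.transpose(1, 0, 2)) and np.allclose(Y, Y.transpose(0, 2, 1))
print(Y.shape)
```

The assertions confirm that the observed tensor is symmetric, as required by the model.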
In Section~\ref{subsec:examples} we discuss the (numerical) solutions to the new variational problem obtained for structured signals for various examples of priors and activation functions, and we illustrate properties of phase transitions. Furthermore we discuss the similarities and differences between the genuine tensor and matrix cases. Let us say a few words about the techniques used in this work. There is a long history in the literature connecting Bayesian inference problems with spin-glass models of statistical mechanics \cite{nishimori01, mezard2009information} and it has been conjectured for some time that the true variational expressions for the mutual information should coincide with the so-called ``replica-symmetric'' formula for the free energy derived by analytical non-rigorous methods. The veracity of these conjectures has now been established by a variety of methods for various problems, e.g., coding theory \cite{Giurgiu_SCproof}, random linear estimation \cite{8606971, 9079920}, matrix and tensor estimation \cite{koradamacris, XXT, Lelarge_fundamental_2019, miolane2017fundamental, barbier2017layered}. In all these cases the signal has i.i.d.\ components. For structured signals, rigorous proofs of the low-dimensional variational expression for the asymptotic mutual information are virtually non-existent. To the best of our knowledge, besides the case where ${\mathbf{X}}$ is uniformly distributed on the sphere \cite{luneau2020highdimensional} (which turns out to be equivalent to an i.i.d.\ Gaussian prior), there are two recent exceptions: \cite{Gabrie_TwoLayerGLM_JSTAT_2019}, which includes the rigorous calculation of a mutual information for a GLM with input generated by another GLM, and \cite{aubin2019spiked}, which treats the rank-one matrix case with input coming from a GLM. The latter work uses two different flavors of the interpolation method \cite{Guerra-Toninelli-2002, Alaoui2018} which do not extend to odd-order tensors or asymmetric ones. 
Moreover, certain (reasonable) assumptions are required. In this work we leverage recent progress on the proofs of replica-symmetric formulas by the \textit{adaptive interpolation method} \cite{barbier_adaptive_2019, Barbier_Macris_jphysA_2019}, which is a powerful evolution of the celebrated Guerra-Toninelli interpolation scheme \cite{Guerra-Toninelli-2002}. Our treatment is completely self-contained, relies on a single method, and can also deal with asymmetric matrices and tensors. In Section~\ref{section:main_results} we formulate the model, present the main theorems for the asymptotic mutual information and MMSE along with examples and illustrations of phase transitions, and explain key ideas behind the proofs. In Sections~\ref{sec:proofthm1} and \ref{sec:proofthm2} we go through the proofs and in Section~\ref{section:alphatendstozero} we give an analysis of the limit $\alpha \to 0$. The appendices contain technical derivations. \section{Limit at \texorpdfstring{$\alpha = 0$}{alpha = 0}}\label{section:alphatendstozero} In this section we give a \textit{non-rigorous} derivation of the limit of the asymptotic normalized mutual information when $\alpha$ goes to $0$. \noindent Fix $\lambda > 0$. We define the function \begin{align*} &\qquad\qquad\qquad\qquad\qquad\qquad\;\; \Psi_*: \alpha \mapsto \inf\limits_{q_x \in [0,\rho_x]}\adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \Psi(q_x , q_s, r_s, \alpha)\\ \shortintertext{where} &\Psi(q_x , q_s, r_s, \alpha) \triangleq \alpha \psi_{\lambda,\alpha}(q_x , q_s, r_s) = I(S;\sqrt{r_s}\,S + Z) - \frac{r_s(\rho_s - q_s)}{2} + \alpha \frac{\lambda}{12} (\rho_x - q_x)^2(\rho_x + 2 q_x)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad + \alpha I\big(U; \sqrt{\lambda q_x^2/2}\,\varphi(\sqrt{\rho_s - q_s} \, U + \sqrt{q_s} \, V) + \widetilde{Z} \,\big\vert\, V \big)\;. 
\end{align*} The function $\Psi_*$ is convex on $[0,+\infty)$ so it is continuous on $[0,+\infty)$ and differentiable almost everywhere on $(0,+\infty)$. Note that: \begin{equation*} \frac{\partial \Psi}{\partial \alpha}\bigg\vert_{(q_x , q_s, r_s, \alpha)} = I\big(U; \sqrt{\lambda q_x^2/2}\,\varphi(\sqrt{\rho_s - q_s} \, U + \sqrt{q_s} \, V) + \widetilde{Z} \,\big\vert\, V \big) + \frac{\lambda}{12} (\rho_x - q_x)^2(\rho_x + 2 q_x)\;. \end{equation*} Hence, assuming that we can apply some envelope theorem \cite{Milgrom_Envelope_Theorems} as in Appendix~\ref{app:proof_mmse}, we obtain \begin{equation*} \Psi_*^\prime(\alpha) = I\big(U; \sqrt{\nicefrac{\lambda q_x^*(\alpha)^2}{2}}\,\varphi(\sqrt{\rho_s - q_s^*(\alpha)} \, U + \sqrt{q_s^*(\alpha)} \, V) + \widetilde{Z} \big\vert V \big) + \frac{\lambda}{12} (\rho_x - q_x^*(\alpha))^2(\rho_x + 2 q_x^*(\alpha)) \end{equation*} whenever $(q_x^*(\alpha), q_s^*(\alpha), r_s^*(\alpha))$ is the unique triple satisfying $\Psi_*(\alpha) = \Psi(q_x^*(\alpha), q_s^*(\alpha), r_s^*(\alpha), \alpha)$. At $\alpha = 0$, $\Psi(q_x , q_s, r_s, 0) = I(S;\sqrt{r_s}\,S + Z) - r_s(\rho_s - q_s)/2$ so $\Psi_*(0) = \Psi(q_x , q_s^*(0), r_s^*(0), 0) = 0$ where $q_s^*(0) = (\mathbb{E}_{S \sim P_S}\,S)^2 \triangleq m_s^2$, $r_s^*(0) = 0$. By Theorem~\ref{theorem:limit_mutual_information}, $\lim_{\substack{n \to +\infty\\ \nicefrac{n}{p}\to \alpha}} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} = \frac{\Psi_*(\alpha)}{\alpha}$. Using L'H\^opital's rule, it follows that (provided that the limit on the right-hand side exists): \begin{equation}\label{eq:hopital_rule} \lim_{\alpha \to 0^+} \lim_{\substack{n \to +\infty\\ \nicefrac{n}{p}\to \alpha}} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} = \lim_{\alpha \to 0^+} \Psi_*^\prime(\alpha)\;.
\end{equation} Assuming that $\lim_{\alpha \to 0^+} (q_s^*(\alpha), r_s^*(\alpha))= (q_s^*(0),r_s^*(0)) = (m_s^2,0)$, we have: \begin{align*} &\lim_{\alpha \to 0^+} \psi_{\lambda,\alpha}(q_x^*(\alpha), q_s^*(\alpha), r_s^*(\alpha)) = \lim_{\alpha \to 0^+} \psi_{\lambda,\alpha}(q_x^*(\alpha), m_s^2, 0)\\ &=\lim_{\alpha \to 0^+} I\big(U; (\nicefrac{\lambda q_x^*(\alpha)^2}{2})^{\frac12}\,\varphi(\sqrt{\rho_s - m_s^2} \, U + \vert m_s \vert\, V) + \widetilde{Z} \big\vert V\big) + \frac{\lambda}{12} (\rho_x - q_x^*(\alpha))^2(\rho_x + 2 q_x^*(\alpha))\\ &=\lim_{\alpha \to 0^+} \Psi_*^\prime(\alpha) \shortintertext{as well as} &\lim_{\alpha \to 0^+} \psi_{\lambda,\alpha}(q_x^*(\alpha), q_s^*(\alpha), r_s^*(\alpha)) = \! \lim_{\alpha \to 0^+} \inf_{q_x \in [0,\rho_x]} \psi_{\lambda,\alpha}(q_x , q_s^*(\alpha), r_s^*(\alpha)) = \! \inf_{q_x \in [0,\rho_x]} \psi_{\lambda,\alpha}(q_x , m_s^2, 0) \\& = \inf_{q_x \in [0,\rho_x]}\; I\big(U; (\nicefrac{\lambda q_x^2}{2})^{\frac{1}{2}} \,\varphi(\sqrt{\rho_s - m_s^2} \, U + \vert m_s \vert\, V) + \widetilde{Z} \big\vert V\big) + \frac{\lambda}{12} (\rho_x - q_x)^2(\rho_x + 2 q_x)\;. \end{align*} Both chains of equalities together with \eqref{eq:hopital_rule} give: \begin{multline}\label{limit_mutual_info_alpha=0} \lim_{\alpha \to 0} \lim_{\substack{n \to +\infty\\ \nicefrac{n}{p}\to \alpha}} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n}\\ = \inf_{q_x \in [0,\rho_x]} I\big(U; \sqrt{\nicefrac{\lambda q_x^2}{2}}\,\varphi(\sqrt{\rho_s - m_s^2} \, U + \vert m_s \vert\, V) + \widetilde{Z} \big\vert V\big) + \frac{\lambda}{12} (\rho_x - q_x)^2(\rho_x + 2 q_x)\;.
\end{multline} Thus, we conjecture that the asymptotic normalized mutual information converges when $\alpha \to 0^+$ to the asymptotic normalized mutual information of the following channel: \begin{align*} \widetilde{Y}_{ijk} = \frac{\sqrt{\lambda}}{n} \widetilde{X}_i\widetilde{X}_j \widetilde{X}_k + \widetilde{Z}_{ijk} \;, 1 \leq i \leq j \leq k \leq n\;, \end{align*} with $\widetilde{X}_i = \varphi(\sqrt{\rho_s - m_s^2} \, U_i + \vert m_s \vert\, V_i)$ where $U_1,\dots, U_n, V_1,\dots, V_n\overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$ and ${\mathbf{V}}$ is known. The second moment of the i.i.d.\ random variables $\widetilde{X}_i$ is $\rho_x \triangleq \mathbb{E}[\varphi(\mathcal{N}(0,\rho_s))^2]$. Proofs in the literature can be easily adapted to show that $\lim\limits_{n \to +\infty} \frac{I(\widetilde{{\mathbf{X}}} ; \widetilde{{\mathbf{Y}}} \vert {\mathbf{V}} )}{n}$ is given by the right-hand side of \eqref{limit_mutual_info_alpha=0}. \section{Asymptotic mutual information and MMSE for tensor decomposition with a generative prior}\label{section:main_results} We formulate a statistical model of rank-one tensor decomposition given noisy observations, when the spike is itself generated from another latent vector. We observe the entries of a symmetric tensor ${\mathbf{Y}} \in (\mathbb{R}^n)^{\otimes 3}$ given by: \begin{equation}\label{eq:entries_Y} Y_{ijk} = \frac{\sqrt{\lambda}}{n} X_{i} X_{j} X_{k} + Z_{ijk} \;, 1 \leq i \leq j \leq k \leq n\: ; \end{equation} where the positive real number $\lambda$ plays the role of an SNR, $Z_{ijk} \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$, $1 \leq i \leq j \leq k \leq n$, is an additive white Gaussian noise and $X_1, \dots, X_n$ are the entries of the spike ${\mathbf{X}} \in \mathbb{R}^n$. This spike is generated by a latent vector ${\mathbf{S}} \in \mathbb{R}^p$ -- whose entries are i.i.d. with respect to (w.r.t.)
some probability distribution $P_S$ on the real numbers -- via a \textit{generalized linear model} (GLM): \begin{equation}\label{eq:definition_X} X_i \triangleq \varphi\bigg(\frac{({\mathbf{W}} {\mathbf{S}})_i}{\sqrt{p}}\bigg),\quad i=1,\cdots, n \;. \end{equation} The $n \times p$ random matrix ${\mathbf{W}}$ has entries i.i.d.\ with respect to $\mathcal{N}(0,1)$. It is often customary to summarize \eqref{eq:definition_X} by ${\mathbf{X}} = \varphi\big({\mathbf{W}} {\mathbf{S}} / \sqrt{p}\big)$ where it is understood that the function $\varphi: \mathbb{R} \to \mathbb{R}$ is applied componentwise. \subsection{Main results}\label{subsec:results} Our main results are stated in the next two theorems. They provide a complete information-theoretic characterization of the problem. Theorem 1 expresses the normalized mutual information $n^{-1}I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})$, in the high-dimensional regime where $n\to +\infty$ while $n/p =\alpha$ is kept fixed, as a \textit{low-dimensional} explicit variational problem. This variational problem involves an optimization over three parameters and can be solved numerically given the activation function $\varphi$ and the prior distribution $P_S$. \begin{theorem}[Mutual information between ${\mathbf{X}}$ and ${\mathbf{Y}}$ given ${\mathbf{W}}$ in the high-dimensional regime]\label{theorem:limit_mutual_information} Suppose that the following hypotheses hold: \begin{enumerate}[label=(H\arabic*), nosep] \item \label{hyp:S_bounded_support} There exists $M_S > 0$ such that the support of $P_S$ is included in $[-M_S,M_S]$. \item \label{hyp:varphi} $\varphi$ is bounded and twice differentiable with its first and second derivatives being bounded and continuous. They are denoted $\varphi^\prime$, $\varphi^{\prime\prime}$. \end{enumerate} Let $S \sim P_S$ and $U,V,Z,\widetilde{Z} \sim \mathcal{N}(0,1)$ independent scalar random variables. 
Define the second moments $\rho_s = \mathbb{E}[S^2]$ and $\rho_x = \mathbb{E}[\varphi(T)^2]$ with $T \sim \mathcal{N}(0,\rho_s)$. Define the potential function $\psi_{\lambda, \alpha}: [0,+\infty) \times [0,\rho_s] \times [0,+\infty) \to \mathbb{R}$: \begin{multline}\label{def_potential_psi} \psi_{\lambda,\alpha}(q_x, q_s, r_s) \triangleq \frac{1}{\alpha}I(S;\sqrt{r_s}\,S + Z) + I\big(U; \sqrt{\lambda q_x^2/2}\,\varphi(\sqrt{\rho_s - q_s} \, U + \sqrt{q_s} \, V) + \widetilde{Z} \,\big\vert\, V \big)\\ - \frac{r_s(\rho_s - q_s)}{2\alpha} +\frac{\lambda}{12} (\rho_x - q_x)^2(\rho_x + 2 q_x)\;. \end{multline} If $n$, $p$ go to infinity such that $\nicefrac{n}{p} \to \alpha >0$ then: \begin{equation}\label{eq:main_I} \lim_{n \to +\infty} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} = \mathop{\vphantom{p}\inf}_{q_x \in [0,\rho_x]}\adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \psi_{\lambda,\alpha}(q_x , q_s, r_s) \;. \end{equation} \end{theorem} One important quantity to assess the performance of an algorithm designed to recover ${\mathbf{X}}^{\otimes 3}$ from the knowledge of ${\mathbf{Y}}$ and ${\mathbf{W}}$ is the \textit{minimum mean square error} (MMSE). The latter serves as a lower bound on the error of any estimator, and as a benchmark to be approached as closely as possible by any algorithm striving to estimate ${\mathbf{X}}^{\otimes 3}$. It is well-known that the mean square error of an estimator of ${\mathbf{X}}^{\otimes 3}$ that is a function of ${\mathbf{Y}}, {\mathbf{W}}$ \textit{only} is minimized by the posterior mean $\mathbb{E}[{\mathbf{X}}^{\otimes 3} \vert {\mathbf{Y}}, {\mathbf{W}}]$. We denote the tensor-MMSE by $\mathrm{MMSE}_n({\mathbf{X}}^{\otimes 3}\vert {\mathbf{Y}}, {\mathbf{W}})$, i.e., \begin{equation}\label{def:tensor_MMSE} \mathrm{MMSE}_n({\mathbf{X}}^{\otimes 3}\vert {\mathbf{Y}}, {\mathbf{W}}) \triangleq \frac{\mathbb{E}\,\big\Vert {\mathbf{X}}^{\otimes 3} - \mathbb{E}[{\mathbf{X}}^{\otimes 3} \vert {\mathbf{Y}}, {\mathbf{W}}] \big\Vert^2 }{n^3}\;.
\end{equation} It depends on $\lambda$ through the observations ${\mathbf{Y}}$. Combining Theorem~\ref{theorem:limit_mutual_information} with the \textit{I-MMSE relation} (see \cite{GuoShamaiVerdu_IMMSE_2005}) \begin{equation}\label{eq:I-MMSE} \frac{\partial}{\partial \lambda}\bigg(\frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n}\bigg) = \frac{1}{12}\mathrm{MMSE}_n({\mathbf{X}}^{\otimes 3}\vert {\mathbf{Y}}, {\mathbf{W}}) + \mathcal{O}(n^{-1}) \end{equation} yields Theorem~\ref{theorem:tensor_MMSE}. It gives a formula for the tensor-MMSE in the high-dimensional regime that can be calculated from the solution to the variational problem \eqref{eq:main_I}. Its proof is given in Section~\ref{sec:proofthm2}. \begin{theorem}[Tensor-MMSE]\label{theorem:tensor_MMSE} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold. Define for all $\lambda \in (0, +\infty)$: \begin{multline*} \mathcal{Q}_x^*(\lambda) \triangleq \bigg\{q_x^* \in [0,\rho_x] : \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \psi_{\lambda,\alpha}(q_x^* , q_s, r_s) = \mathop{\vphantom{p}\inf}_{q_x \in [0,\rho_x]}\adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \psi_{\lambda,\alpha}(q_x , q_s, r_s) \bigg\}\;. \end{multline*} For every $\lambda > 0$, $\mathcal{Q}_x^*(\lambda)$ is nonempty and the set $\mathcal{D} \triangleq \big\{\lambda \in (0,+\infty): \mathcal{Q}_x^*(\lambda) \text{ is a singleton} \big\}$ is equal to $(0,+\infty)$ minus a countable set. For every $\lambda \in \mathcal{D}$, letting $\mathcal{Q}_x^*(\lambda) = \{q_x^*(\lambda)\}$, we have: \begin{equation} \lim_{\substack{n \to +\infty\\ \nicefrac{n}{p}\to \alpha}} \mathrm{MMSE}_n({\mathbf{X}}^{\otimes 3}\vert {\mathbf{Y}}, {\mathbf{W}}) = \rho_x^3 - \big(q_x^*(\lambda)\big)^3 \;.
\end{equation} \end{theorem} Extensions in various directions of Theorems~\ref{theorem:limit_mutual_information} and \ref{theorem:tensor_MMSE} are possible by the methods of the present paper, but at the expense of more technical work. First, the analysis for rank-one tensors of any order $r \geq 3$ is identical. The potential is given by \begin{multline*} \psi_{\lambda,\alpha}(q_x, q_s, r_s) \triangleq \frac{1}{\alpha}I(S;\sqrt{r_s}\,S + Z) + I\big(U; \sqrt{\nicefrac{\lambda q_x^{r-1}}{(r-1)!}}\,\varphi(\sqrt{\rho_s - q_s} \, U + \sqrt{q_s} \, V) + \widetilde{Z} \,\big\vert\, V \big)\\ - \frac{r_s(\rho_s - q_s)}{2\alpha} +\frac{\lambda}{2(r!)}\big(\rho_x^r + (r-1) q_x^r - rq_x^{r-1}\rho_x\big)\;, \end{multline*} while the asymptotic tensor-MMSE is $\rho_x^r - (q_x^*(\lambda))^r$. Second, the results can be extended to unbounded activation functions and priors with unbounded support but finite third moments. This involves a technical limiting process on both sides of equation \eqref{eq:main_I} using the methods in \cite{barbierGLM}. Another direction that should be amenable to analysis with our methods is the case of asymmetric tensors, e.g., ${\mathbf{X}}^{\otimes 3}$ is replaced by ${\mathbf{U}} \otimes {\mathbf{V}} \otimes {\mathbf{W}}$ where each of the three different vectors is given by a GLM. The structureless case where all three vectors ${\mathbf{U}}$, ${\mathbf{V}}$, ${\mathbf{W}}$ have i.i.d.\ entries is treated in \cite{barbier2017layered}, and the variational problem already displays a rich phenomenology in the highly asymmetric case \cite{Kadmon_2019}. A high-level summary of how we prove the theorems is given in Section~\ref{subsec:key_ideas_proofs} while the proofs themselves are carried out in Sections~\ref{sec:proofthm1} and \ref{sec:proofthm2}.
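As a quick numerical sanity check (a sketch, not part of the analysis), one can verify that the $r=3$ remainder term $\frac{\lambda}{12}(\rho_x - q_x)^2(\rho_x + 2 q_x)$ of \eqref{def_potential_psi} coincides with the expanded form $\frac{\lambda}{12}(\rho_x^3 + 2 q_x^3 - 3 q_x^2 \rho_x)$, i.e.\ the general-order bracket evaluated at $r=3$ with coefficient $r-1$ on $q_x^r$; the script below tests the polynomial identity at random points.

```python
import random

# Check the polynomial identity (rho - q)^2 (rho + 2q) = rho^3 + 2 q^3 - 3 q^2 rho,
# i.e. the factorized r = 3 remainder term versus its expanded form.
def remainder_r3_factored(rho, q):
    return (rho - q) ** 2 * (rho + 2 * q)

def remainder_r3_expanded(rho, q):
    return rho ** 3 + 2 * q ** 3 - 3 * q ** 2 * rho

random.seed(0)
for _ in range(1000):
    rho, q = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    assert abs(remainder_r3_factored(rho, q) - remainder_r3_expanded(rho, q)) < 1e-8
```

A degree-3 polynomial identity that holds at this many random points is, for all practical purposes, established.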
\subsection{Examples of phase transitions and their properties}\label{subsec:examples} This section illustrates features of the phase transitions found when numerically solving the variational problem \eqref{eq:main_I} for $r=3$. We also discuss similarities and differences with the matrix case $r=2$. To find solutions to the variational problem \eqref{eq:main_I}, we write down the stationary point equations of the potential function \eqref{def_potential_psi}. This yields a fixed-point equation for $(q_x,q_s,r_s)$ that we solve with a fixed-point iteration starting from several different initializations. When multiple fixed points exist, we keep the one corresponding to the smallest potential value, as should be clear from the form of the optimization problem \eqref{eq:main_I}. We first focus on the case of odd activation functions $\varphi(-z) = -\varphi(z)$ and centered priors $\mathbb{E}_{S \sim P_S}[S] = 0$. This implies $\mathbb{E}\,X_i = 0$ and, if $\varphi$ is not identically zero, this is a necessary and sufficient condition for the existence of a fixed point $(q_x, q_s, r_s)$ such that $q_x=0$ (in which case we also have $q_s = r_s = 0$). The same condition arises in the matrix case \cite{aubin2019spiked} but, contrary to what happens there, we find that all eigenvalues of the Jacobian matrix at the all-zero fixed point are zero, indicating that it is asymptotically stable for order-$3$ tensors. Numerically, we observe that for all $\lambda < \lambda_c(\alpha)$ this \textit{uninformative} fixed point yields the smallest potential. This means that in this phase the asymptotic tensor-MMSE is equal to its maximum $\rho_x^3$: one cannot estimate the signal better than random guessing. When $\lambda > \lambda_c(\alpha)$ a fixed point with a lower potential value appears. The asymptotic MMSE has a jump discontinuity at $\lambda = \lambda_c(\alpha)$ and decreases for $\lambda > \lambda_c(\alpha)$.
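To make the numerical procedure concrete, here is a minimal sketch (illustrative, not the code used for the figures) that solves \eqref{eq:main_I} in the special case $\varphi(x)=x$ and $P_S = \mathcal{N}(0,1)$ (so $\rho_s = \rho_x = 1$), where both mutual informations are Gaussian-channel closed forms and the supremum over $r_s$ can be taken analytically ($r_s^* = q_s/(1-q_s)$). It uses a brute-force grid search rather than the fixed-point iteration described above; the choice $\alpha = 1$ and the grid sizes are arbitrary.

```python
import numpy as np

def potential(lam, alpha, qx, qs):
    # sup over r_s done analytically for P_S = N(0,1): r_s* = q_s / (1 - q_s)
    latent = (-0.5 * np.log1p(-qs) - qs / 2.0) / alpha
    # I(U; sqrt(lam qx^2/2) (sqrt(1-qs) U + sqrt(qs) V) + Z | V) for phi(x) = x
    channel = 0.5 * np.log1p(lam * qx**2 * (1.0 - qs) / 2.0)
    remainder = lam / 12.0 * (1.0 - qx) ** 2 * (1.0 + 2.0 * qx)
    return latent + channel + remainder

def tensor_mmse(lam, alpha=1.0, n=201):
    qx_grid = np.linspace(0.0, 1.0, n)
    qs_grid = np.linspace(0.0, 0.999, 2 * n)
    vals = potential(lam, alpha, qx_grid[:, None], qs_grid[None, :]).min(axis=1)
    qx_star = float(qx_grid[int(vals.argmin())])   # inf over q_x of (inf over q_s)
    return 1.0 - qx_star ** 3  # asymptotic tensor-MMSE rho_x^3 - (q_x^*)^3, rho_x = 1
```

Below the transition the minimizer is the uninformative point $q_x^* = 0$ and the returned MMSE equals its maximum $\rho_x^3 = 1$; well above it, the informative branch takes over and the MMSE drops.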
These features are already observed for the structureless i.i.d.\ case. In the structured case, we observe that $\lambda_c(\alpha)$ decreases monotonically with increasing $\alpha$. This is illustrated in Figure~\ref{MMSE_alpha_lambda} for a linear activation function and in Figure~\ref{MMSE_sign_activation} for a $\mathrm{sign}$ activation function.\footnote{Our theorems are proven here for bounded and smooth activation functions but, as explained, the proofs can be extended to unbounded and piecewise differentiable ones. Numerical solutions involve non-trivial integrals that are much easier to handle for piecewise linear functions.} \begin{figure}[hbt] \centering \includegraphics[width=0.85\textwidth]{figures/MMSE_alpha_lambda.png} \caption{\label{MMSE_alpha_lambda} Asymptotic tensor-MMSE for $r=3$ as a function of $(\lambda,\alpha)$ for a linear activation $\varphi(x) = x$. {\it Left:} Gaussian prior $P_S = \mathcal{N}(0,1)$. {\it Right:} Rademacher prior $P_S(1) = P_S(-1) = \frac{1}{2}$. We observe a unique discontinuity line $\lambda_c(\alpha)$ below which the MMSE equals its maximum $\rho_x^3=1$. Above the line, the MMSE is strictly less than $1$ and decreases to zero. For $\alpha$ close to $0$, the threshold $\lambda_c(\alpha) \approx 8.73$ is the same as in the i.i.d.\ case with a Gaussian prior $X_1,\dots,X_n \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$.} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=0.7\textwidth]{figures/MMSE_gaussian_sign_activation} \caption{\label{MMSE_sign_activation} Asymptotic tensor-MMSE for $r=3$, $P_S = \mathcal{N}(0,1)$ and $\varphi(z) =\mathrm{sign}(z)$ as a function of $\lambda$. The location $\lambda_c(\alpha)$ of the discontinuity decreases with increasing $\alpha$. For $\alpha=10^{-12}$ the threshold $\lambda_c(\alpha) \approx 7.07$ is the same as for the i.i.d.
case with Rademacher prior $X_1, \dots, X_n \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} P_X(\pm 1)=\frac{1}{2}$ (whose asymptotic MMSE is given by the curve ``Limit $\alpha \to 0$''). } \end{figure} In Section~\ref{section:alphatendstozero} we present a \textit{non-rigorous} calculation which shows that, in the limit $\alpha \to 0$, the asymptotic tensor-MMSE -- and in particular the threshold $\lambda_c(\alpha)$ -- is the same as for a tensor denoising problem $\widetilde{Y}_{ijk} = \frac{\sqrt{\lambda}}{n} \widetilde{X}_i\widetilde{X}_j \widetilde{X}_k + \widetilde{Z}_{ijk}$ with $\widetilde{X}_i = \varphi(\sqrt{\rho_s - \mathbb{E}[S]^2} \, U_i + \vert \mathbb{E} S \vert\, V_i)$, where $U_1,\dots, U_n\overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$ are latent variables and $V_1,\dots, V_n \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$ are \textit{known}. The latter take into account the bias that is present when $\mathbb{E} S \neq 0$. We stress that when $\mathbb{E} S \neq 0$ the asymptotic mutual information of this problem (given by \eqref{limit_mutual_info_alpha=0} in Section~\ref{section:alphatendstozero}) is not quite the same as the one known in the literature for rank-one tensor problems with i.i.d.\ $X_i$'s. However, it is not difficult to adapt the proof to account for the side information ${\mathbf{V}}$ and obtain \eqref{limit_mutual_info_alpha=0}. When the prior is centered $(\mathbb{E} S = 0)$, the limiting problem is just the usual rank-one tensor denoising problem with spike signal $\widetilde X_i \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \varphi(\mathcal{N}(0, \rho_s))$. Numerically, we indeed observe in Figure~\ref{MMSE_alpha_lambda} that for both kinds of priors and for $\alpha$ close to $0$ the threshold $\lambda_c(\alpha) \approx 8.73$ is the same as for a signal $X_1,\dots,X_n \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$.
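The second-moment claim for the limiting spike is easy to check by Monte Carlo: since $\sqrt{\rho_s - (\mathbb{E} S)^2}\,U + \vert \mathbb{E} S\vert\, V \sim \mathcal{N}(0,\rho_s)$, the variables $\widetilde{X}_i$ indeed have second moment $\rho_x = \mathbb{E}[\varphi(\mathcal{N}(0,\rho_s))^2]$. A small sketch (the choices $\varphi = \tanh$, $\rho_s = 1$ and $\mathbb{E} S = 0.5$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
rho_s, m = 1.0, 0.5                 # m plays the role of E[S]; illustrative values
N = 2_000_000
U, V, T = rng.standard_normal((3, N))
# limiting spike X~ = phi(sqrt(rho_s - m^2) U + |m| V), here with phi = tanh
x_tilde = np.tanh(np.sqrt(rho_s - m**2) * U + abs(m) * V)
rho_x_mc = float(np.mean(x_tilde**2))
# reference: rho_x = E[phi(N(0, rho_s))^2]
rho_x = float(np.mean(np.tanh(np.sqrt(rho_s) * T) ** 2))
assert abs(rho_x_mc - rho_x) < 5e-3
```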
Similarly, in Figure~\ref{MMSE_sign_activation}, the curve for $\alpha=10^{-12}$ agrees with the one labelled ``Limit $\alpha \to 0$'', which corresponds to the asymptotic tensor-MMSE of the limiting tensor problem and is computed using formulas known in the literature. We next discuss an example of non-centered latent prior $P_S$. In Figure~\ref{MMSE_asym} we draw the asymptotic tensor-MMSE for a linear activation function and a Rademacher prior $P_S(1) = p$, $P_S(-1) = 1-p$ with $p \in \{0.6, 0.7\}$. We observe that for a small asymmetry the asymptotic MMSE has a jump discontinuity just as in the centered case, while it becomes continuous once the asymmetry is large enough. Here $\mathbb{E} S = 2p-1$ and the asymptotic MMSE of the predicted limiting problem \eqref{limit_mutual_info_alpha=0} is again in agreement with the one computed for the small value $\alpha=10^{-12}$. \begin{figure}[ht] \centering \includegraphics[width=0.85\textwidth]{figures/MMSE_nonsymmetric_rademacher} \caption{\label{MMSE_asym} Asymptotic tensor-MMSE for $\varphi(z)=z$ and an asymmetric Rademacher prior $P_S(1) = 1-P_S(-1) =p$. \textit{Left:} $p=0.6$. \textit{Right:} $p=0.7$. } \end{figure} To conclude this section we wish to briefly discuss the matrix case $r=2$, and point out similarities and differences with genuine tensors $r\geq 3$. In the matrix case, \cite{aubin2019spiked} observe for \textit{a set} of centered priors and odd activations that the asymptotic matrix-MMSE is equal to its maximum $\rho_x^2$ for $\lambda < \lambda_c(\alpha)$ and decreases for $\lambda > \lambda_c(\alpha)$ \textit{while remaining continuous at $\lambda_c(\alpha)$}. Again $\lambda_c(\alpha)$ decreases with increasing $\alpha$. We give an example on the left panel of Figure~\ref{MMSE_bernouilli_rademacher}. The continuity of the phase transition is an important qualitative difference with what we observe here for order-$3$ tensors.
Such continuity for Bayesian inference problems is known to go hand in hand with the optimality of the AMP algorithm and, as shown in \cite{aubin2019spiked}, matrix factorization with generative prior is no exception. Because the continuity of the phase transition is observed for all the priors and activations used in \cite{aubin2019spiked}, it supports the claim that such a structured model makes estimation algorithmically easier. In contrast, the persisting discontinuity of the transition for tensors of order $r\geq 3$ suggests that structure does not make the problem algorithmically easier here. The observations of \cite{aubin2019spiked} should also be qualified, as it is not difficult to come up with a situation where the phase transition is discontinuous. E.g., consider the spiked matrix model with generative prior ${\mathbf{X}} = \varphi(\nicefrac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}})$ for the \textit{odd activation} function $\varphi(x) = 0$ if $\vert x \vert \leq \epsilon$ and $\varphi(x)=\mathrm{sign}(x)$ otherwise, and the \textit{centered latent prior} $P_S = \mathcal{N}(0,1)$. Similarly to what is done in Section~\ref{section:alphatendstozero}, we can show that when $\alpha$ vanishes the asymptotic matrix-MMSE approaches the one of the spiked matrix model $\widetilde{Y}_{ij} = \sqrt{\frac{\lambda}{n}} \widetilde{X}_i\widetilde{X}_j + \widetilde{Z}_{ij}$ where $\widetilde{X}_1,\dots,\widetilde{X}_n \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \varphi(\mathcal{N}(0,1))$ are Bernoulli-Rademacher random variables. We can make $\mathbb{P}(\widetilde{X}_i = 0) = 1 - 2 \mathbb{P}(\mathcal{N}(0,1) < -\epsilon) = 1-\rho$ as large as needed by increasing $\epsilon$ (then $\mathbb{P}(\widetilde{X}_i = 1) = \mathbb{P}(\widetilde{X}_i = -1) =\rho/2$). It is known that the asymptotic matrix-MMSE has a jump discontinuity for such a prior when the probability of being $0$ is large enough, e.g., see the right panel in Figure~\ref{MMSE_bernouilli_rademacher}.
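The construction just described is easy to simulate; the sketch below (values are illustrative) takes $\epsilon \approx 1.96$, the standard normal $97.5\%$ quantile, for which $\rho = \mathbb{P}(\vert\mathcal{N}(0,1)\vert > \epsilon) \approx 0.05$ as in Figure~\ref{MMSE_bernouilli_rademacher}:

```python
import math
import numpy as np

eps = 1.959964                           # ~97.5% quantile of N(0,1)
rho = math.erfc(eps / math.sqrt(2.0))    # P(|N(0,1)| > eps) ~= 0.05
rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
# phi_eps(x) = 0 if |x| <= eps, sign(x) otherwise, applied to Gaussian samples
x = np.where(np.abs(z) <= eps, 0.0, np.sign(z))
assert abs(rho - 0.05) < 1e-4
assert abs(float(np.mean(x == 0.0)) - (1.0 - rho)) < 2e-3   # mass at 0 is 1 - rho
assert abs(float(np.mean(x == 1.0)) - rho / 2.0) < 2e-3     # mass at +1 is rho/2
```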
Therefore, when $\epsilon$ is large enough, the asymptotic matrix-MMSE of the original spiked matrix model with generative prior also has a jump discontinuity, at least for small $\alpha$. An interesting question for future research is whether or not the discontinuity disappears when $\alpha$ is made large enough. If so, it would further support the claim that such a generative prior makes estimation algorithmically easier when the ratio $\alpha$ of signal-to-latent space dimensions is large enough. If not, the existence of a jump discontinuity would then merely depend on the choice of activation function and not on the ratio of signal-to-latent space dimensions. \begin{figure}[hbt] \centering \includegraphics[width=0.85\textwidth]{figures/MMSE_bernouilli_rademacher_rho=5e-2} \caption{\label{MMSE_bernouilli_rademacher} Asymptotic matrix-MMSE when estimating ${\mathbf{X}}^{\otimes 2}$ from ${\mathbf{Y}} = \sqrt{\nicefrac{\lambda}{n}} \, {\mathbf{X}}^{\otimes 2} + {\mathbf{Z}}$. We use a Bernoulli-Rademacher prior $P_S(0) = 1-\rho$, $P_S(\pm 1) = \rho/2$ with $\rho = 0.05$. \textit{Left:} generative prior ${\mathbf{X}} = {\mathbf{W}} {\mathbf{S}} / \sqrt{p}$ with ${\mathbf{S}} \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} P_S$. \textit{Right:} ${\mathbf{X}} \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} P_S$. } \end{figure} \subsection{Key ideas in the proofs of Theorems \ref{theorem:limit_mutual_information} and \ref{theorem:tensor_MMSE}}\label{subsec:key_ideas_proofs} The proof of Theorem~\ref{theorem:limit_mutual_information} is based on the adaptive interpolation method \cite{barbier_adaptive_2019, Barbier_Macris_jphysA_2019} whose main difference with the canonical interpolation method \cite{guerra2002thermodynamic,Guerra-2003} is the increased flexibility given to the path followed by the interpolation between its two extremes. The method has been developed separately for symmetric rank-one tensor problems where the spike has i.i.d.
components \cite{barbier_adaptive_2019,Barbier_Macris_jphysA_2019}, and for one-layer GLMs whose input signal has again i.i.d. components \cite{barbierGLM}. The problem studied in this contribution combines the two aforementioned models and our proof shows that the two interpolations combine well in a modular way. This modular feature of the adaptive interpolation method has also been used for \textit{non-symmetric} order-three tensors \cite{barbier2017layered} and two-layer GLMs \cite{Gabrie_TwoLayerGLM_JSTAT_2019}. An essential ingredient is an interpolating inference problem. Let $t\in [0,1]$ be an interpolation parameter and $R(t)$ a smooth interpolation function that will be suitably \textit{adapted}. We consider the pair of observations $({\mathbf{Y}}^{(t)}, \widetilde{{\mathbf{Y}}}^{(t)} ) = \big(\frac{\sqrt{\lambda (1-t)}}{n} {\mathbf{X}}^{\otimes 3} + {\mathbf{Z}}, \sqrt{\frac{\lambda R(t)}{2}}\, {\mathbf{X}} \;\, + \widetilde{{\mathbf{Z}}}\big)$ where ${\mathbf{X}} \triangleq \varphi(\nicefrac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}})$ and the noise vector $\widetilde{{\mathbf{Z}}}$ and the symmetric noise tensor ${\mathbf{Z}}$ have entries $Z_{ijk}, \widetilde{Z}_{\ell} \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$ for $1 \leq i \leq j \leq k \leq n$, $1 \leq \ell \leq n$. At $t=0$ we recover the original problem while at $t=1$ we have a pure GLM with signal-to-noise ratio $\frac{\lambda R(1)}{2}$. From the fundamental theorem of calculus, we have $I({\mathbf{X}};{\mathbf{Y}} \vert {\mathbf{W}})/n = I({\mathbf{X}}; \widetilde{{\mathbf{Y}}}^{(1)}\vert {\mathbf{W}})/n - \int_0^1 n^{-1}\big(\nicefrac{\partial I({\mathbf{X}}; {\mathbf{Y}}^{(t)}, \widetilde{\mathbf{Y}}^{(t)}\vert {\mathbf{W}})}{\partial t}\big)dt$.
The first term on the right-hand side is the normalized mutual information of a GLM given in the high-dimensional regime by the variational formula (proved in \cite{barbierGLM} with the adaptive interpolation method): \begin{equation*} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\bigg\{\frac{I(S;\sqrt{r_s}\,S \! + \! Z) }{\alpha}\! + \! I\big(U; \sqrt{\frac{\lambda R(1)}{2}}\,\varphi(\sqrt{\rho_s - q_s} \, U \! + \! \sqrt{q_s} \, V) \! + \! \widetilde{Z} \,\big\vert\, V \big) \! - \! \frac{r_s(\rho_s - q_s)}{2\alpha}\bigg\}\;. \end{equation*} Comparing with \eqref{def_potential_psi} and \eqref{eq:main_I} we see that, if we set for the end point $R(1) = q_x^2$, we are missing the term $\frac{\lambda}{12}(\rho_x - q_x)^2 (\rho_x + 2 q_x)$. In other words, \textit{and roughly speaking}, Theorem~\ref{theorem:limit_mutual_information} follows if we can show that $-n^{-1}\frac{\partial I({\mathbf{X}}; {\mathbf{Y}}^{(t)}, \widetilde{\mathbf{Y}}^{(t)}\vert {\mathbf{W}}) }{\partial t} \approx \frac{\lambda}{12}(\rho_x - q_x)^2(\rho_x + 2 q_x)$ for a \textit{suitable choice} of the interpolating function $R(t)$. Remarkably, this condition essentially reduces to an \textit{ordinary differential equation} (ODE) for $R(t)$. The existence of a solution to this ODE is guaranteed by the standard Cauchy-Lipschitz theorem. Obtaining the ODE is non-trivial and involves: (i) remarkable identities stemming from Bayes' law; (ii) concentration theorems for the \textit{overlap} $Q = \frac{1}{n}\sum_{i=1}^n x_i X_i$ akin to a correlation between the ground truth ${\mathbf{X}}$ and a vector ${\mathbf{x}}$ distributed with respect to the posterior of the interpolating inference problem. In order to prove Theorem~\ref{theorem:tensor_MMSE} we use the I-MMSE relation \eqref{eq:I-MMSE}. This involves the computation of the \textit{derivative with respect to $\lambda$ of the variational formula} \eqref{eq:main_I} for the asymptotic mutual information.
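The $\lambda$-differentiation relies on an envelope (Danskin-type) argument: where the minimizer of a value function is unique, the derivative of the value equals the partial derivative of the objective at the minimizer. A toy numerical illustration with a hypothetical objective $f(\lambda, q) = q^2 - \lambda q$ (unrelated to the potential of this paper), whose value function is $V(\lambda) = -\lambda^2/4$ with minimizer $q^*(\lambda) = \lambda/2$:

```python
# Envelope-theorem toy check: V'(lam) computed by finite differences matches
# the partial derivative df/dlam = -q evaluated at the minimizer q* = lam/2.
def f(lam, q):
    return q * q - lam * q

def value(lam):
    qs = [i / 1000.0 for i in range(-2000, 2001)]  # minimize over a grid on [-2, 2]
    return min(f(lam, q) for q in qs)

lam = 1.3
h = 1e-4
v_prime = (value(lam + h) - value(lam - h)) / (2 * h)  # numerical V'(lam)
assert abs(v_prime - (-lam / 2.0)) < 1e-3              # equals df/dlam at q* = lam/2
```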
The computation requires a careful application of an envelope theorem \cite{Milgrom_Envelope_Theorems} which eventually allows one to show that, except for a countable set of $\lambda$'s, it is enough to evaluate the \textit{partial derivative with respect to $\lambda$ of the potential} \eqref{def_potential_psi} at the solution to the variational problem. \section{Proof of the variational formula for the mutual information}\label{sec:proofthm1} In this section we present the main steps of the proof of Theorem \ref{theorem:limit_mutual_information}. Intermediate results are found in the appendices. \subsection{Adaptive path interpolation} We introduce a ``time'' parameter $t \in [0,1]$. The adaptive interpolation goes from the original model \eqref{eq:entries_Y} at $t=0$ to a GLM whose asymptotic mutual information is known \cite{barbierGLM}. In between, we follow an interpolation path $R(\cdot,\epsilon): [0,1] \to (0,+\infty)$ which is a continuously differentiable function of $t$ parametrized by a ``small'' perturbation $\epsilon \in (0,+\infty)$ and is such that $R(0,\epsilon)=\epsilon$. More precisely, for $t \in [0,1]$, the observations are: \begin{align}\label{interpolation_model} \begin{cases} {\mathbf{Y}}^{(t)} \;\; = \frac{\sqrt{\lambda (1-t)}}{n} {\mathbf{X}}^{\otimes 3} + {\mathbf{Z}}\\ \widetilde{{\mathbf{Y}}}^{(t,\epsilon)} = \:\sqrt{\frac{\lambda R(t,\epsilon)}{2}}\, {\mathbf{X}} \;\, + \widetilde{{\mathbf{Z}}} \end{cases} \end{align} where ${\mathbf{X}} \triangleq \varphi(\nicefrac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}})$.
The noise vector $\widetilde{{\mathbf{Z}}} \in \mathbb{R}^n$ has entries $\widetilde{Z}_1,\dots,\widetilde{Z}_n \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$, while the symmetric noise tensor ${\mathbf{Z}} \in (\mathbb{R}^n)^{\otimes 3}$ has entries ${\mathbf{Z}}_{{\mathbf{i}}} \overset{\text{\tiny i.i.d.}}{\simiid[-2pt]} \mathcal{N}(0,1)$ for ${\mathbf{i}} \in \mathcal{I} \triangleq \{(i_1, i_2, i_3) \in [n]^3: i_1 \leq i_2 \leq i_3\}$. Before going further, we introduce some important quantities and notations. We denote by $i_n(t,\epsilon)$ the normalized mutual information between ${\mathbf{X}}$ and $({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)})$ given ${\mathbf{W}}$, that is: \begin{equation}\label{definition_i_n} i_n(t,\epsilon) \triangleq \frac{1}{n}I({\mathbf{X}} ; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)} \vert {\mathbf{W}}) =\frac{1}{n}I({\mathbf{S}} ; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)} \vert {\mathbf{W}}) \;. \end{equation} The last equality holds because ${\mathbf{X}}$ is a deterministic function of ${\mathbf{S}}$ when ${\mathbf{W}}$ is known. Set $dP_s({\mathbf{s}}) = \prod_{i=1}^{p} dP_s(s_i)$ for the prior distribution of ${\mathbf{S}}$.
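For concreteness, the index set $\mathcal{I}$ of upper-triangular entries has $\binom{n+2}{3} = n(n+1)(n+2)/6$ elements; a quick check for a toy size (the value $n=5$ is arbitrary):

```python
# Enumerate I = {(i1, i2, i3) : i1 <= i2 <= i3} for a small n and check its cardinality.
n = 5
idx = [(i, j, k) for i in range(n) for j in range(i, n) for k in range(j, n)]
assert len(idx) == n * (n + 1) * (n + 2) // 6   # = 35 for n = 5
```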
The {\it usual} Bayesian posterior distribution of ${\mathbf{S}}$ given $({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})$ reads: \begin{equation}\label{posterior_interpolation} dP({\mathbf{s}} ; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}}) = \frac{1}{\mathcal{Z}_{t,\epsilon}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})} dP_s({\mathbf{s}}) \, e^{-\mathcal{H}_{t,\epsilon}({\mathbf{s}} \,;\, {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})}\;, \end{equation} where the normalization factor $\mathcal{Z}_{t,\epsilon}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})$ is simply: \begin{equation}\label{partition_function} \mathcal{Z}_{t,\epsilon}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}}) \triangleq \int dP_s({\mathbf{s}}) \, e^{-\mathcal{H}_{t,\epsilon}({\mathbf{s}} \,;\, {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})} \;, \end{equation} and \begin{multline}\label{interpolating_hamiltonian} \mathcal{H}_{t,\epsilon}({\mathbf{s}} ; {\mathbf{Y}}^{(t)}, \widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}}) \triangleq \sum_{{\mathbf{i}} \in \mathcal{I}}\biggl( \frac{\lambda(1-t)}{2n^2} x_{i_1}^2 x_{i_2}^2 x_{i_3}^2 - \frac{\sqrt{\lambda(1-t)}}{n}\, Y_{{\mathbf{i}}}^{(t)} x_{i_1} x_{i_2} x_{i_3}\bigg)\\ + \sum_{j=1}^{n} \biggl(\frac{\lambda R(t,\epsilon)}{4} x_j^2 - \sqrt{\frac{\lambda R(t,\epsilon)}{2}}\,\widetilde{Y}_j^{(t,\epsilon)} x_j\biggr) \,, \end{multline} with $x_1,\dots,x_n$ the entries of ${\mathbf{x}} \triangleq \varphi(\nicefrac{{\mathbf{W}} {\mathbf{s}}}{\sqrt{p}})$. This dependence on ${\mathbf{s}}$ must be kept in mind each time we use the notation ${\mathbf{x}}$.
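A direct transcription of the Hamiltonian \eqref{interpolating_hamiltonian} on a toy instance can serve as a sanity check (all sizes and the choice $\varphi = \tanh$ are illustrative): at $t = 1$ the tensor-channel contribution carries a factor $1-t = 0$, so the Hamiltonian no longer depends on the tensor observation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 5, 3, 1.2                       # toy sizes and SNR, illustrative
W = rng.standard_normal((n, p))
idx = [(i, j, k) for i in range(n) for j in range(i, n) for k in range(j, n)]

def hamiltonian(s, Y, Ytilde, t, R):
    x = np.tanh(W @ s / np.sqrt(p))         # x = phi(W s / sqrt(p)), phi = tanh here
    H = 0.0
    for (i, j, k) in idx:                   # tensor-channel part, weight sqrt(lam(1-t))
        m = x[i] * x[j] * x[k]
        H += lam * (1 - t) / (2 * n**2) * m**2 - np.sqrt(lam * (1 - t)) / n * Y[i, j, k] * m
    # GLM part with signal-to-noise ratio lam * R / 2
    H += np.sum(lam * R / 4.0 * x**2 - np.sqrt(lam * R / 2.0) * Ytilde * x)
    return float(H)

s = np.sign(rng.standard_normal(p))          # an arbitrary configuration
Y1, Y2 = rng.standard_normal((2, n, n, n))   # two unrelated tensor observations
Yt = rng.standard_normal(n)
# At t = 1 the tensor part vanishes, so H is independent of Y:
assert hamiltonian(s, Y1, Yt, 1.0, 0.3) == hamiltonian(s, Y2, Yt, 1.0, 0.3)
```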
It is common to adopt the statistical mechanics interpretation and call \eqref{interpolating_hamiltonian} a Hamiltonian, \eqref{partition_function} the partition function and \eqref{posterior_interpolation} the Gibbs distribution. To deal with future computations, it is useful to introduce the angular brackets $\langle - \rangle_{t,\epsilon}$ (also called Gibbs brackets) which denote an expectation with respect to the posterior distribution \eqref{posterior_interpolation}. That is, for a generic function $g: \mathbb{R}^p \to \mathbb{R}$, we have: \begin{equation} \langle g({\mathbf{s}}) \rangle_{t,\epsilon} \triangleq \int \!g({\mathbf{s}})\,dP({\mathbf{s}} ; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})\:. \end{equation} Finally, we define the so-called average free entropy: \begin{equation}\label{interpolating_free_entropy} f_n(t,\epsilon) \triangleq \frac1n \mathbb{E} \ln \mathcal{Z}_{t,\epsilon}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})\;. \end{equation} This is equal to the mutual information $i_n(t,\epsilon)$ up to some additive term (see formula \eqref{link_i_n_f_n} in Lemma~\ref{lemma:link_i_n_f_n} in Appendix~\ref{app:establishing_sum_rule}). It is often easier to work directly with $f_n(t,\epsilon)$ instead of $i_n(t,\epsilon)$. We now focus on the mutual information \eqref{definition_i_n} at both extremes of the interpolation path. Letting $t=0$ in \eqref{interpolation_model}, we see that the observation ${\mathbf{Y}}^{(0)}$ is exactly \eqref{eq:entries_Y}, while $\widetilde{{\mathbf{Y}}}^{(0,\epsilon)} = \sqrt{\frac{\lambda \epsilon}{2}} {\mathbf{X}} + \widetilde{{\mathbf{Z}}}$. 
This latter channel induces a perturbation of order $\epsilon$ to the normalized mutual information of the former channel (see Lemma~\ref{lemma:link_i_n_f_n} in Appendix~\ref{app:establishing_sum_rule} for the proof), that is: \begin{equation}\label{i_n_t=0} i_n(0,\epsilon) \triangleq \frac{1}{n}I({\mathbf{X}} ; {\mathbf{Y}}^{(0)},\widetilde{{\mathbf{Y}}}^{(0,\epsilon)} \vert {\mathbf{W}}) =\frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} + \mathcal{O}(\epsilon) \;, \end{equation} where $\vert \mathcal{O}(\epsilon) \vert \leq C \epsilon$. At $t=1$ the observation ${\mathbf{Y}}^{(1)}$ is pure noise, while the normalized mutual information between ${\mathbf{S}}$ and $\widetilde{{\mathbf{Y}}}^{(1,\epsilon)} = \sqrt{\nicefrac{\lambda R(1,\epsilon)}{2}} \, \varphi(\nicefrac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}}) + \widetilde{{\mathbf{Z}}}$ is given by a variational formula in the high-dimensional regime $\nicefrac{n}{p} \to \alpha$ \cite{barbierGLM}. Let $S \sim P_S$ and $U,V,Z,\widetilde{Z} \sim \mathcal{N}(0,1)$ be independent scalar random variables. Define the potential function $\widetilde{\psi}_\alpha: [0,+\infty)^2 \times [0,\rho_s] \to \mathbb{R}$: \begin{multline} \widetilde{\psi}_\alpha(r, r_s, q_s) \triangleq I(S;\sqrt{r_s}\,S + Z) + \alpha I\big(U; \sqrt{r}\,\varphi(\sqrt{\rho_s - q_s} \, U + \sqrt{q_s} \, V) + \widetilde{Z} \,\big\vert\, V \big) - \frac{r_s(\rho_s - q_s)}{2} \;. \end{multline} By \cite[Corollary 1]{barbierGLM}, we have: \begin{equation}\label{i_n_t=1} i_n(1,\epsilon) = \frac{1}{n}I({\mathbf{X}} ; \widetilde{{\mathbf{Y}}}^{(1,\epsilon)} \vert {\mathbf{W}}) = \smallO_n(1) + \frac{1}{\alpha} \,\adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \widetilde{\psi}_\alpha\bigg(\frac{\lambda R(1,\epsilon)}{2} , r_s, q_s\bigg) \;.
\end{equation} Combining \eqref{i_n_t=0}, \eqref{i_n_t=1} and the fundamental theorem of calculus $i_n(0,\epsilon)=i_n(1,\epsilon)-\int_{0}^{1}i_n^{\prime}(t,\epsilon) dt$, where $i_n^{\prime}(\cdot,\epsilon)$ is the derivative of $i_n(\cdot,\epsilon)$ w.r.t.\ its first argument, we obtain the sum-rule of the adaptive interpolation. \begin{proposition}[Sum-rule]\label{prop:sum_rule} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold, and that $R'(t,\epsilon)$ is uniformly bounded in $(t,\epsilon) \in [0,1] \times [0,+\infty)$ where $R'(\cdot,\epsilon)$ denotes the derivative of $R(\cdot,\epsilon)$ with respect to its first argument. Define the scalar overlap $$ Q \triangleq \frac{1}{n} \sum\limits_{i=1}^{n} \varphi\big(\big[\nicefrac{{\mathbf{W}} {\mathbf{s}}}{\sqrt{p}}\big]_i \,\big) \varphi\big(\big[\nicefrac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}}\big]_i \,\big) = \frac{1}{n} \sum_{i=1}^{n} x_i X_i \;. $$ Then: \begin{multline}\label{sum_rule} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} = \mathcal{O}(\epsilon) + \smallO_n(1) + \frac{1}{\alpha}\,\adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \widetilde{\psi}_\alpha\bigg(\frac{\lambda R(1,\epsilon)}{2} , r_s, q_s\bigg)\\ -\frac{\lambda}{12} \int_0^1 \big(\mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - \rho_x^3\big)dt - \frac{\lambda}{4} \int_0^1 R'(t,\epsilon)\big(\rho_x - \mathbb{E}\,\langle Q \rangle_{t,\epsilon}\big)dt\;, \end{multline} where $\smallO_n(1)$ and $\mathcal{O}(\epsilon)$ are independent of $\epsilon$ and $n$, respectively. \begin{IEEEproof} See Lemma~\ref{lemma:formula_derivative_free_entropy} in Appendix~\ref{app:establishing_sum_rule} for the computation of the derivative $i_n^{\prime}(t,\epsilon)$. \end{IEEEproof} \end{proposition} The sum rule of Proposition~\ref{prop:sum_rule} is valid for the general class of differentiable interpolating paths. 
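As a sanity check on the structure of this variational formula, the inf-sup can be evaluated numerically in a toy setting. The sketch below (purely illustrative; the function names and the grid-search scheme are ours) takes a standard Gaussian prior $P_S = \mathcal{N}(0,1)$, hence $\rho_s = 1$, and $\varphi$ equal to the identity. This choice violates the boundedness assumption on $\varphi$, but it makes both scalar mutual informations closed-form: $I(S;\sqrt{r_s}S+Z) = \frac{1}{2}\ln(1+r_s)$ and the conditional channel term equals $\frac{1}{2}\ln(1+r(\rho_s-q_s))$.

```python
import math

# Toy numerical evaluation of the inf-sup in the variational formula.
# Assumptions (ours, for illustration only): standard Gaussian prior
# P_S = N(0,1), so rho_s = 1, and phi = identity, which gives closed forms
#   I(S; sqrt(r_s) S + Z) = 0.5 * ln(1 + r_s)
#   I(U; sqrt(r) (sqrt(rho_s - q_s) U + sqrt(q_s) V) + Z' | V)
#                         = 0.5 * ln(1 + r * (rho_s - q_s))

def psi_tilde(r, r_s, q_s, alpha, rho_s=1.0):
    return (0.5 * math.log1p(r_s)
            + alpha * 0.5 * math.log1p(r * (rho_s - q_s))
            - r_s * (rho_s - q_s) / 2.0)

def inf_sup_psi_tilde(r, alpha, grid=200, r_s_max=20.0, rho_s=1.0):
    """Brute-force inf over q_s in [0, rho_s] of sup over r_s in [0, r_s_max]."""
    best = math.inf
    for i in range(grid + 1):
        q_s = rho_s * i / grid
        sup_val = max(psi_tilde(r, r_s_max * j / grid, q_s, alpha, rho_s)
                      for j in range(grid + 1))
        best = min(best, sup_val)
    return best
```

In this toy case the inf-sup equals $0$ at $r = 0$ (both channels are uninformative, and the optimum sits at $q_s = r_s = 0$), and it is nondecreasing in $r$, consistent with the monotonicity invoked later via Lemma~\ref{lemma:properties_functions}.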
By choosing two appropriate interpolation paths we can prove matching upper and lower bounds on the asymptotic normalized mutual information. This is discussed in the next two subsections. \subsection{Upper bound on the asymptotic normalized mutual information}\label{subsec:upper_bound} \begin{proposition}\label{prop:upperbound_mutual_info} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold. Then: \begin{equation} \limsup_{n \to +\infty} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} \leq \inf_{q_x \in [0,\rho_x]} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\;\psi_{\lambda,\alpha}\big(q_x, q_s, r_s\big)\;. \end{equation} \end{proposition} \begin{IEEEproof} Fix $\epsilon > 0$ and pick the linear interpolation path $R(t,\epsilon) = \epsilon + t q^2$ where $q \in [0,\rho_x]$. Then the sum-rule \eqref{sum_rule} in Proposition~\ref{prop:sum_rule} reads: \begin{align} & \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} = \mathcal{O}(\epsilon) + \smallO_n(1) + \frac{1}{\alpha} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \widetilde{\psi}_\alpha\bigg(\frac{\lambda \epsilon}{2} + \frac{\lambda q^2}{2}, r_s, q_s\bigg) +\frac{\lambda}{12} \rho_x^3 - \frac{\lambda}{4} q^2 \rho_x \nonumber\\& \qquad\qquad\qquad\qquad\qquad -\frac{\lambda}{12} \int_0^1 \bigg(\mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - \mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^4\langle Q \rangle_{t,\epsilon}\bigg]\bigg)dt \nonumber\\& \qquad\qquad\qquad\qquad\qquad -\frac{\lambda}{12}\int_0^1 \bigg(\mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^4\langle Q \rangle_{t,\epsilon}\bigg]- 3 q^2 \, \mathbb{E}\langle Q \rangle_{t,\epsilon} \bigg) dt\;.
\label{sum_rule_linear_path} \end{align} In this last identity, we ``artificially'' added and subtracted the term $\mathbb{E}\big[\big\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \big\Vert^4\langle Q \rangle_{t,\epsilon}\big]$ for reasons that will become clear shortly. By the Nishimori identity\footnote{ In our setting, the Nishimori identity states that ${\mathbb{E}\langle g({\mathbf{s}},{\mathbf{S}})\rangle_{t,\epsilon} = \mathbb{E}\langle g({\mathbf{s}},{\mathbf{s}}') \rangle_{t,\epsilon} = \mathbb{E}\langle g({\mathbf{S}},{\mathbf{s}}) \rangle_{t,\epsilon}}$ where ${\mathbf{s}},{\mathbf{s}}'$ are two samples drawn independently from the posterior distribution of ${\mathbf{S}}$ given $({\mathbf{Y}}^{(t)}, \widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}})$. It is a direct consequence of Bayes' theorem. Here $g$ can also explicitly depend on ${\mathbf{Y}}^{(t)}, \widetilde{{\mathbf{Y}}}^{(t,\epsilon)}, {\mathbf{W}}$ so the identity holds for ${\mathbf{X}} = \varphi(\frac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}}), {\mathbf{x}} = \varphi(\frac{{\mathbf{W}} {\mathbf{s}}}{\sqrt{p}}), {\mathbf{x}}' = \varphi(\frac{{\mathbf{W}} {\mathbf{s}}'}{\sqrt{p}})$ too.}, we have \begin{equation} \mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^4\langle Q \rangle_{t,\epsilon}\bigg] = \mathbb{E}\,\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^6, \quad \mathbb{E}\langle Q \rangle_{t,\epsilon} = \mathbb{E}\,\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^2 \;, \end{equation} and, by convexity of $x \mapsto x^3$ on $[0,+\infty)$, we have $\forall a,b \geq 0: a^3 - 3b^2 a \geq -2 b^3$.
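This elementary inequality can also be checked directly, without appealing to convexity, via the factorization (a one-line verification included for convenience):

```latex
a^3 - 3b^2 a + 2b^3 \,=\, (a-b)^2\,(a+2b) \,\geq\, 0 \qquad \text{for all } a, b \geq 0 \,.
```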
Hence the integrand of the last integral on the right-hand side of \eqref{sum_rule_linear_path} satisfies: \begin{equation}\label{lowerbound_convexity_x^3} \mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^4\langle Q \rangle_{t,\epsilon}\bigg]- 3 q^2 \, \mathbb{E}\langle Q \rangle_{t,\epsilon} = \mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^6 - 3 q^2 \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^2 \bigg] \geq - 2 q^3 \;. \end{equation} Besides, by Lemma~\ref{lemma:properties_functions} in Appendix \ref{app:useful_lemmas}, the function $r \mapsto \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \widetilde{\psi}_\alpha(r, r_s, q_s)$ is nondecreasing and $(\alpha/2) \Vert \varphi \Vert_{\infty}^2$-Lipschitz. Thus: \begin{equation}\label{upperbound_psi_epsilon} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \widetilde{\psi}_\alpha\bigg(\frac{\lambda \epsilon}{2} + \frac{\lambda q^2}{2}, r_s, q_s\bigg) \leq \frac{\lambda \alpha \Vert \varphi \Vert_{\infty}^2}{4} \epsilon + \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \widetilde{\psi}_\alpha\bigg(\frac{\lambda q^2}{2}, r_s, q_s\bigg) \;.
\end{equation} Therefore, making use of \eqref{lowerbound_convexity_x^3} and \eqref{upperbound_psi_epsilon} to upper bound \eqref{sum_rule_linear_path} yields: \begin{align} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} &\leq \mathcal{O}(\epsilon) + \smallO_n(1) + \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \frac{1}{\alpha}\widetilde{\psi}_\alpha\bigg(\frac{\lambda q^2}{2}, r_s, q_s\bigg) +\frac{\lambda}{12} \rho_x^3 - \frac{\lambda}{4} q^2 \rho_x + \frac{\lambda}{6} q^3\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\quad -\frac{\lambda}{12} \int_0^1 \bigg(\mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - \mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^4\langle Q \rangle_{t,\epsilon}\bigg]\bigg)dt\nonumber\\ &= \mathcal{O}(\epsilon) + \smallO_n(1) + \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \psi_{\lambda,\alpha}\big(q, q_s, r_s\big)\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\quad -\frac{\lambda}{12} \int_0^1 \bigg(\mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - \mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^4\langle Q \rangle_{t,\epsilon}\bigg]\bigg)dt\;, \label{sum_rule_upper_bound} \end{align} where the last equality follows from the trivial identity: \begin{equation}\label{link_psi_tilde_and_psi} \psi_{\lambda,\alpha}\big(q, q_s, r_s\big) = \frac{1}{\alpha}\widetilde{\psi}_\alpha\bigg(\frac{\lambda q^2}{2}, r_s, q_s\bigg) +\frac{\lambda}{12} \rho_x^3 - \frac{\lambda}{4} q^2 \rho_x + \frac{\lambda}{6} q^3 \;. \end{equation} It now remains to get rid of the integral on the right-hand side of \eqref{sum_rule_upper_bound}.
The integrand satisfies: \begin{multline}\label{inequality_remainder} \bigg\vert \mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - \mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^4\langle Q \rangle_{t,\epsilon}\bigg]\bigg\vert = \bigg\vert \mathbb{E}\,\bigg\langle Q \bigg(Q + \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^2\bigg) \bigg(Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^2\bigg)\bigg\rangle_{\!\! t,\epsilon} \bigg\vert\\ \leq 2 \Vert \varphi \Vert_{\infty}^4 \mathbb{E}\,\bigg\langle \bigg\vert Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^2\bigg\vert \bigg\rangle_{\!\! t,\epsilon} \leq 2 \Vert \varphi \Vert_{\infty}^4 \sqrt{\mathbb{E}\,\bigg\langle \!\bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}}\bigg\Vert^2 \bigg)^{\!\! 2} \bigg\rangle_{\!\! t,\epsilon}} \;. \end{multline} We see that if the overlap $Q \triangleq \nicefrac{{\mathbf{x}}^{{\mathrm{T}}} {\mathbf{X}}}{n}$ concentrated on $\nicefrac{\langle {\mathbf{x}} \rangle_{t, \epsilon}^{{\mathrm{T}}} \langle {\mathbf{x}} \rangle_{t, \epsilon}}{n}$, then the remaining integral in \eqref{sum_rule_upper_bound} would be negligible. However, such a concentration property only holds when we average over a well-chosen set of ``perturbations'' $\epsilon$.
In essence, the average over $\epsilon$ smoothens the phase transitions that might appear for particular choices of $\epsilon$ when $n$ goes to infinity.\\ We now take $\epsilon \in [s_n, 2s_n]$ where $s_n \triangleq n^{-\eta}$, $\eta > 0$, and integrate w.r.t.\ $\epsilon$ on both sides of \eqref{sum_rule_upper_bound}: \begin{align}\label{sumrule_lowerbound_integration_epsilon} &\frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} = \int_{s_n}^{2s_n}\frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} \frac{d\epsilon}{s_n}\nonumber\\ &\leq \smallO_n(1) + \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\;\psi_{\lambda,\alpha}\big(q, q_s, r_s\big) -\frac{\lambda}{12} \int_0^1 dt \int_{s_n}^{2s_n} \bigg(\mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - \mathbb{E}\bigg[\bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}} \bigg\Vert^4\langle Q \rangle_{t,\epsilon}\bigg]\bigg)\frac{d\epsilon}{s_n}\nonumber\\ &\leq \smallO_n(1) + \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\;\psi_{\lambda,\alpha}(q, q_s, r_s) + \frac{\lambda \Vert \varphi \Vert_{\infty}^4}{6} \! \int_0^1 \! dt \int_{s_n}^{2s_n} \!\sqrt{\mathbb{E}\,\bigg\langle \!\bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}}\bigg\Vert^2 \bigg)^{\!\! 2} \bigg\rangle_{\!\! t,\epsilon}}\,\frac{d\epsilon}{s_n}\;. \end{align} Since $R(t,\cdot)$ is a $\mathcal{C}^1$-diffeomorphism from $[s_n,2s_n]$ to its image $R(t, [s_n,2s_n]) \subseteq [s_n, 2s_n + \rho_x^2]$, we make the change of variables $\epsilon \to R \equiv R(t,\epsilon)$ and obtain (using Cauchy-Schwarz for the first inequality) for all $t \in [0,1]$: \begin{align} \int_{s_n}^{2s_n} \sqrt{\mathbb{E}\,\bigg\langle \!\bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}}\bigg\Vert^2 \bigg)^{\!\! 2} \bigg\rangle_{\!\! 
t,\epsilon}}\,\frac{d\epsilon}{s_n} &\leq \sqrt{\int_{s_n}^{2s_n} \mathbb{E}\,\bigg\langle \!\bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}}\bigg\Vert^2 \bigg)^{\!\! 2} \bigg\rangle_{\!\! t,\epsilon}\,\frac{d\epsilon}{s_n}}\nonumber\\ &=\sqrt{\int_{R(t, [s_n,2s_n])} \mathbb{E}\,\bigg\langle \!\bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}}\bigg\Vert^2 \bigg)^{\!\! 2} \bigg\rangle_{\!\! t,R}\, \frac{dR}{s_n}}\nonumber\\ &\leq \sqrt{\int_{s_n}^{2s_n + \rho_x^2} \mathbb{E}\,\bigg\langle \bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,R}}{\sqrt{n}} \bigg\Vert^2\bigg)^{\!\! 2} \bigg\rangle_{\!\! t,R} \,\frac{dR}{s_n}}\;. \label{lowerbound_change_variable} \end{align} By Proposition~\ref{prop:concentration_L_on_<L>} in Appendix \ref{app:concentration_overlap} and the inequality \eqref{lowerbound_change_variable}, we get (remember that $s_n \triangleq n^{-\eta}$): \begin{multline} \frac{\lambda \Vert \varphi \Vert_{\infty}^4}{6} \int_{s_n}^{2s_n} \sqrt{\mathbb{E}\,\bigg\langle \!\bigg( Q - \bigg\Vert \frac{\langle {\mathbf{x}} \rangle_{t,\epsilon}}{\sqrt{n}}\bigg\Vert^2 \bigg)^{\!\! 2} \bigg\rangle_{\!\! t,\epsilon}}\,\frac{d\epsilon}{s_n}\\ \leq \frac{\sqrt{\lambda} \Vert \varphi \Vert_{\infty}^4}{3} \sqrt{\frac{\Vert \varphi \Vert_{\infty}^3}{s_n}\sqrt{\frac{\lambda (s_n + \rho_x^2)}{2n}}} = \frac{\lambda^{\frac{3}{4}} \Vert \varphi \Vert_{\infty}^{\nicefrac{11}{2}}}{3} \bigg(\frac{s_n + \rho_x^2}{2}\bigg)^{\frac{1}{4}} \frac{1}{n^{\frac{1-2\eta}{4}}} \;. \end{multline} Therefore, we see that the remainder on the right-hand side of \eqref{sumrule_lowerbound_integration_epsilon} vanishes as $\mathcal{O}(n^{-\nicefrac{1}{6}})$ if we pick $\eta = \nicefrac{1}{6}$. 
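The specific value $\eta = \nicefrac{1}{6}$ balances the two error contributions: the $\mathcal{O}(\epsilon)$ term in the sum-rule is of order $s_n = n^{-\eta}$, while the concentration remainder above decays as $n^{-\nicefrac{(1-2\eta)}{4}}$. Equating the two exponents gives:

```latex
\eta = \frac{1-2\eta}{4} \;\iff\; 4\eta = 1 - 2\eta \;\iff\; \eta = \frac{1}{6} \,,
```

so both terms vanish at the common rate $n^{-\nicefrac{1}{6}}$.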
Passing to the limit superior on both sides of the inequality \eqref{sumrule_lowerbound_integration_epsilon} then yields: $$ \limsup_{n \to +\infty} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} \leq \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\;\psi_{\lambda,\alpha}(q, q_s, r_s) \;. $$ This inequality is true for all $q \in [0, \rho_x]$ and Proposition~\ref{prop:upperbound_mutual_info} follows directly. \end{IEEEproof} \subsection{Matching lower bound on the asymptotic normalized mutual information} We now prove a matching lower bound by considering a different choice for $R(\cdot,\epsilon)$ in the sum-rule \eqref{sum_rule}. $R(\cdot,\epsilon)$ will be the solution to a first-order ordinary differential equation (ODE). We first describe this ODE and then give the derivation of the lower bound. \subsubsection{An ordinary differential equation} For $t \in [0,1]$ and $R \in [0, +\infty)$, consider the problem of estimating ${\mathbf{S}}$ from the observations: \begin{align}\label{interpolation_model_R} \begin{cases} {\mathbf{Y}}^{(t)} \;\; = \frac{\sqrt{\lambda (1-t)}}{n} {\mathbf{X}}^{\otimes 3} + {\mathbf{Z}}\\ \widetilde{{\mathbf{Y}}}^{(t,R)} = \:\sqrt{\frac{\lambda R}{2}}\, {\mathbf{X}} \;\, + \widetilde{{\mathbf{Z}}} \end{cases}; \end{align} where ${\mathbf{X}} \triangleq \varphi(\nicefrac{{\mathbf{W}} {\mathbf{S}}}{\sqrt{p}})$, $S_1,\dots,S_p \overset{\text{\tiny i.i.d.}}{\sim} P_S$. The noise vector $\widetilde{{\mathbf{Z}}} \in \mathbb{R}^n$ has entries $\widetilde{Z}_1,\dots,\widetilde{Z}_n \overset{\text{\tiny i.i.d.}}{\sim} \mathcal{N}(0,1)$, while the symmetric noise tensor ${\mathbf{Z}} \in (\mathbb{R}^n)^{\otimes 3}$ has entries ${\mathbf{Z}}_{{\mathbf{i}}} \overset{\text{\tiny i.i.d.}}{\sim} \mathcal{N}(0,1)$ for ${\mathbf{i}} \in \mathcal{I} \triangleq \{(i_1, i_2, i_3) \in [n]^3: i_1 \leq i_2 \leq i_3\}$.
The posterior distribution of ${\mathbf{S}}$ given $({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}})$ is: \begin{equation}\label{posterior_H_t_R} dP({\mathbf{s}} ; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}}) = \frac{1}{\mathcal{Z}_{t,R}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}})}dP_{S}({\mathbf{s}}) \, e^{-\mathcal{H}_{t,R}({\mathbf{s}} ; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}})} \;, \end{equation} where $\mathcal{Z}_{t,R}({\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}}) = \int dP_{S}({\mathbf{s}}) \, e^{-\mathcal{H}_{t,R}({\mathbf{s}} ; {\mathbf{Y}}^{(t)},\widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}})}$ and \begin{multline}\label{hamiltonian_model_R} \mathcal{H}_{t,R}({\mathbf{s}} ; {\mathbf{Y}}^{(t)}, \widetilde{{\mathbf{Y}}}^{(t,R)}, {\mathbf{W}}) \triangleq \sum_{{\mathbf{i}} \in \mathcal{I}} \biggl(\frac{\lambda(1-t)}{2n^2} x_{i_1}^2 x_{i_2}^2 x_{i_3}^2 - \frac{\sqrt{\lambda(1-t)}}{n}\, Y_{{\mathbf{i}}}^{(t)} x_{i_1} x_{i_2} x_{i_3}\biggr)\\ + \sum_{j=1}^{n} \biggl(\frac{\lambda R}{4} x_j^2 - \sqrt{\frac{\lambda R}{2}}\,\widetilde{Y}_j^{(t,R)} x_j\biggr) \;. \end{multline} Again, \eqref{hamiltonian_model_R} has the interpretation of a Hamiltonian and \eqref{posterior_H_t_R} a Gibbs distribution. The Gibbs bracket notation $\langle - \rangle_{t,R}$ denotes the expectation with respect to this last posterior. Finally, we define the following function used to formulate the ODE satisfied by the interpolation path: \begin{equation} G(t, R) \triangleq (\mathbb{E} \langle Q \rangle_{t,R})^2\;. \end{equation} \begin{lemma}\label{lemma:ode_for_interpolation_path} Assume $\varphi:\mathbb{R} \to \mathbb{R}$ is continuous and bounded. For all $\epsilon \in [0,+\infty)$, there exists a unique global solution $R(\cdot,\epsilon): [0,1] \to [0,+\infty)$ to the first-order ODE: \begin{equation}\label{eq:ODE} \forall \,t \in [0,1]: \frac{d g(t)}{dt}=G(t,g(t)) \,, \quad g(0)=\epsilon\,.
\end{equation} This solution is continuously differentiable with bounded derivative (w.r.t.\ $t$) $R'(\cdot, \epsilon)$ and, for any $\delta>0$, $R'([0,1],\epsilon) \subseteq [0, (\rho_x+\delta)^2]$ for $n$ large enough independent of $\epsilon$. Besides, $\forall \,t \in [0,1]$, $R(t,\cdot)$ is a $\mathcal{C}^1$-diffeomorphism from $[0, +\infty)$ onto its image whose derivative w.r.t.\ $\epsilon$ is greater than or equal to one, i.e., \begin{equation}\label{sol_ode_derivative_gretaer_one} \forall \,\epsilon \in [0, +\infty): \frac{\partial R}{\partial \epsilon}\Big\vert_{t,\epsilon} \geq 1 \,. \end{equation} \end{lemma} \begin{remark} This lemma guarantees a unique global solution $R_n(t,\epsilon)$ for each finite $n$. Slightly abusively, we do not indicate the $n$-dependence and simply write $R(t, \epsilon)$ for the solution. \end{remark} \begin{IEEEproof} The function $G: (t, R) \in [0,1] \times [0,+\infty) \mapsto G(t,R)$ is continuous in $t$ and uniformly Lipschitz continuous in $R$ (meaning the Lipschitz constant is independent of $t$). The latter is readily checked by computing the derivative of $G(t,\cdot)$ and showing it is uniformly bounded in $(t, R)$: \begin{align} \frac{\partial G}{\partial R}\bigg\vert_{t,R} &= \frac{\lambda \mathbb{E}\langle Q \rangle_{t, R}}{n} \sum_{i,j=1}^{n} \mathbb{E}[(\langle x_i x_j \rangle_{t,R} - \langle x_i \rangle_{t,R} \langle x_j \rangle_{t,R})^2] \in [0, 4 \lambda \Vert \varphi \Vert_{\infty}^6 n ] \;.\label{derivative_G_wrt_R} \end{align} Therefore, by the Cauchy-Lipschitz theorem, for all $\epsilon \geq 0$ there exists a unique solution $R(\cdot, \epsilon): [0,\gamma] \to [0,+\infty)$ to the initial value problem \eqref{eq:ODE}. Here $\gamma \in [0,1]$ is such that $[0,\gamma]$ is the maximal interval of existence of the solution.
By the Cauchy-Schwarz inequality and the Nishimori identity, we have: $$ \mathbb{E} \langle Q \rangle_{t, R} \leq \frac{\mathbb{E} \langle \Vert {\mathbf{x}} \Vert \Vert {\mathbf{X}} \Vert \rangle_{t,R}}{n} \leq \frac{1}{n} \sqrt{\mathbb{E} \langle \Vert {\mathbf{x}} \Vert^2 \rangle_{t,R} \, \mathbb{E} \Vert {\mathbf{X}} \Vert^2 } = \frac{\mathbb{E} \Vert {\mathbf{X}} \Vert^2 }{n} = \mathbb{E}\bigg[\varphi\bigg(\frac{{\mathbf{W}}_{1,\cdot}\,{\mathbf{S}}}{\sqrt{p}}\bigg)^{\! 2}\bigg] \xrightarrow[n \to +\infty]{} \rho_x \;. $$ See \cite[Lemma 3 of Supplementary material]{Gabrie_TwoLayerGLM_JSTAT_2019} for a proof of the latter limit. Besides, by the Nishimori identity, $\mathbb{E} \langle Q \rangle_{t, R} = n^{-1}\mathbb{E} \Vert \langle {\mathbf{x}} \rangle_{t,R}\Vert^2$ is nonnegative. Hence, for any $\delta > 0$, $G$ has its image in $[0, (\rho_x + \delta)^2]$ and $R([0, \gamma],\epsilon) \subseteq [\epsilon, \epsilon + \gamma (\rho_x + \delta)^2]$ as long as $n$ is large enough. It implies that $\gamma = 1$ (the solution never leaves the domain of definition of $G$). Each initial condition $\epsilon \in [0, +\infty)$ is tied to a unique solution $R(\cdot,\epsilon)$. This implies that the function $\epsilon \mapsto R(t,\epsilon)$ is injective. Its derivative is given by Liouville's formula \cite{hartman1982ordinary} \begin{equation}\label{liouville_formula_ode_lowerbound} \frac{\partial R}{\partial \epsilon}\bigg\vert_{t,\epsilon} = \exp \biggl\{\int_0^t ds \, \frac{\partial G}{\partial R}\bigg\vert_{s,R(s,\epsilon)}\biggr\} \end{equation} and is greater than or equal to one by nonnegativity of $\frac{\partial G}{\partial R}$ (see \eqref{derivative_G_wrt_R} above). The fact that this partial derivative is bounded away from $0$ uniformly in $\epsilon$ implies, by the inverse function theorem, that the injective function $\epsilon \mapsto R(t,\epsilon)$ is a $\mathcal{C}^1$-diffeomorphism from $[0, +\infty)$ onto its image.
\end{IEEEproof} \subsubsection{Derivation of the lower bound} \begin{proposition}\label{prop:lowerbound_mutual_info} Suppose that \ref{hyp:S_bounded_support} and \ref{hyp:varphi} hold. Then: \begin{equation} \liminf_{n \to +\infty} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} \geq \mathop{\vphantom{p}\inf}_{q_x \in [0,\rho_x]} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\;\psi_{\lambda,\alpha}(q_x, q_s, r_s)\;. \end{equation} \end{proposition} \begin{IEEEproof} For all $\epsilon \in [0, +\infty)$, choose for the interpolation path the unique solution $R(\cdot, \epsilon)$ to the first-order ODE \eqref{eq:ODE}. Fix $\nu > 0$ and let $n$ be large enough so that $\forall \epsilon \in [0, +\infty): R'([0,1], \epsilon) \subseteq [0,(\rho_x+\nu)^2]$. The interpolation path satisfies $R'(t,\epsilon) = (\mathbb{E} \langle Q \rangle_{t,\epsilon})^2$ and the sum-rule of Proposition~\ref{prop:sum_rule} yields: \begin{multline}\label{sum_rule_sol_ode} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} = \mathcal{O}(\epsilon) + \smallO_n(1) + \frac{1}{\alpha} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0}\; \widetilde{\psi}_\alpha\bigg(\frac{\lambda \epsilon}{2} + \int_0^1 \frac{\lambda R'(t,\epsilon)}{2}dt, r_s, q_s\bigg)\\ + \int_0^1 \bigg(\frac{\lambda}{12} \rho_x^3 + \frac{\lambda}{6} (\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^3 - \frac{\lambda}{4} (\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^2 \rho_x \bigg)dt -\frac{\lambda}{12} \int_0^1 \Big( \mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - (\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^3 \Big) dt \;. \end{multline} By Lemma~\ref{lemma:properties_functions} in Appendix \ref{app:useful_lemmas}, the map $r \mapsto \inf_{q_s \in [0,\rho_s]} \sup_{r_s \geq 0}\; \widetilde{\psi}_\alpha(r , r_s, q_s)$ is nondecreasing and concave.
Therefore: \begin{equation}\label{lowerbound_psi_tilde_jensen} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0} \;\widetilde{\psi}_\alpha\bigg(\frac{\lambda \epsilon}{2} + \int_0^1 \frac{\lambda R'(t,\epsilon)}{2}dt, r_s, q_s\bigg) \geq \int_0^1 \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0} \;\widetilde{\psi}_\alpha\bigg(\frac{\lambda R'(t,\epsilon)}{2}, r_s, q_s\bigg) dt \;. \end{equation} Combining the identity \eqref{sum_rule_sol_ode} with \eqref{lowerbound_psi_tilde_jensen} yields: \begin{align} &\frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n}\nonumber\\ &\qquad\geq \int_0^1 \!\bigg\{\adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0} \frac{1}{\alpha}\widetilde{\psi}_\alpha\bigg(\frac{\lambda(\mathbb{E} \langle Q \rangle_{t,\epsilon})^2}{2}, r_s, q_s \!\bigg) + \frac{\lambda \rho_x^3}{12} + \frac{\lambda(\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^3}{6} - \frac{\lambda (\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^2 \rho_x}{4} \bigg\}dt \nonumber\\& \qquad\qquad\qquad\qquad\qquad\qquad\quad -\frac{\lambda}{12} \int_0^1 \Big( \mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - (\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^3 \Big) dt + \mathcal{O}(\epsilon) + \smallO_n(1) \nonumber\\ &\qquad\geq \mathop{\vphantom{p}\inf}_{q_x \in [0,\rho_x + \nu]} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0} \; \psi_{\lambda,\alpha}(q_x, q_s, r_s)\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad -\frac{\lambda}{12} \int_0^1 \Big( \mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - (\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^3 \Big) dt + \mathcal{O}(\epsilon) + \smallO_n(1) \;.\label{lowerbound_I(X,Y)_with_remainder} \end{align} The second inequality follows from identity \eqref{link_psi_tilde_and_psi} and $\mathbb{E}\langle Q \rangle_{t, \epsilon} \in [0, \rho_x + \nu]$. 
The result of the proposition will follow if we can get rid of the integral term on the right-hand side of \eqref{lowerbound_I(X,Y)_with_remainder}. This is achieved by proceeding exactly as in the proof of the upper bound in Section~\ref{subsec:upper_bound}, that is, we integrate \eqref{lowerbound_I(X,Y)_with_remainder} over $\epsilon \in [s_n, 2 s_n]$ where $s_n = n^{-\eta}$, $\eta > 0$. Then: \begin{align} &\frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} = \int_{s_n}^{2s_n} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n}\,\frac{d\epsilon}{s_n}\nonumber\\ &\geq \smallO_n(1) \! + \! \mathop{\vphantom{p}\inf}_{q_x \in [0,\rho_x + \nu]} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0} \; \psi_{\lambda,\alpha}(q_x, q_s, r_s) -\frac{\lambda}{12} \int_0^1 \! dt \! \int_{s_n}^{2s_n} \! \frac{d\epsilon}{s_n} \Big( \mathbb{E}\,\langle Q^3 \rangle_{t,\epsilon} - (\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^3 \Big) \nonumber \\ & \geq \smallO_n(1) \! + \! \mathop{\vphantom{p}\inf}_{q_x \in [0,\rho_x + \nu]} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0} \; \psi_{\lambda,\alpha}(q_x, q_s, r_s) -\frac{\lambda \Vert \varphi \Vert_{\infty}^4}{6} \int_0^1 \! dt \! \int_{s_n}^{2s_n} \! \frac{d\epsilon}{s_n} \sqrt{\mathbb{E}\,\langle (Q -\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^2 \rangle_{t,\epsilon}} . \label{integral_perturbation_upperbound} \end{align} The last inequality is simply due to: \begin{align*} \mathbb{E}\langle Q^3 \rangle_{t,\epsilon} - (\mathbb{E}\langle Q \rangle_{t,\epsilon})^3 = \mathbb{E}\langle Q(Q + \mathbb{E}\langle Q \rangle_{t,\epsilon})(Q -\mathbb{E}\langle Q \rangle_{t,\epsilon} ) \rangle_{t,\epsilon} \leq 2 \Vert \varphi \Vert_{\infty}^4 \sqrt{\mathbb{E}\langle (Q -\mathbb{E}\langle Q \rangle_{t,\epsilon})^2 \rangle_{t,\epsilon}} \;.
\end{align*} After the change of variables $\epsilon \to R \equiv R(t,\epsilon)$, which is justified by $R(t,\cdot)$ being a $\mathcal{C}^1$-diffeomorphism from $[0,+\infty)$ to its image (see Lemma~\ref{lemma:ode_for_interpolation_path}), we can upper bound the remainder on the right-hand side of \eqref{integral_perturbation_upperbound} in a way similar to \eqref{lowerbound_change_variable}: $$ \bigg\vert \int_{s_n}^{2s_n} \sqrt{\mathbb{E}\,\langle (Q -\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^2 \rangle_{t,\epsilon}}\, \frac{d\epsilon}{s_n} \bigg\vert \leq \sqrt{\int_{s_n}^{2s_n + \rho_x^2} \mathbb{E}\,\big\langle \big( Q - \mathbb{E}\,\langle Q \rangle_{t,R}\big)^{2} \big\rangle_{t,R} \,\frac{dR}{s_n}}\;. $$ Finally, applying Proposition~\ref{prop:concentration_overlap} in Appendix \ref{app:concentration_overlap} with $M=2 + \rho_x^2, a=s_n, b=2s_n + \rho_x^2$ and $\delta=s_n n^{\frac{2\eta-1}{3}}$, we can further bound the right-hand side of the last inequality to obtain: $$ \bigg\vert \frac{\lambda \Vert \varphi \Vert_{\infty}^4}{6} \int_{s_n}^{2s_n} \sqrt{\mathbb{E}\,\langle (Q -\mathbb{E}\,\langle Q \rangle_{t,\epsilon})^2 \rangle_{t,\epsilon}}\, \frac{d\epsilon}{s_n} \bigg\vert \leq C n^{\frac{5\eta-1}{6}} $$ for $n$ large enough and $C$ a positive constant which does not depend on $t$ and $n$. Thus, the remaining term on the right-hand side of \eqref{integral_perturbation_upperbound} vanishes when $n$ goes to infinity as long as $\eta < \nicefrac{1}{5}$. Passing to the limit inferior on both sides of the inequality \eqref{integral_perturbation_upperbound} yields: \begin{equation*} \liminf_{n \to +\infty} \frac{I({\mathbf{X}} ; {\mathbf{Y}} \vert {\mathbf{W}})}{n} \geq \mathop{\vphantom{p}\inf}_{q_x \in [0,\rho_x + \nu]} \adjustlimits{\inf}_{q_s \in [0,\rho_s]} {\sup}_{r_s \geq 0} \; \psi_{\lambda,\alpha}(q_x, q_s, r_s) \;. \end{equation*} This is true for all $\nu > 0$ and Proposition~\ref{prop:lowerbound_mutual_info} follows directly. \end{IEEEproof}
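To build intuition for the interpolation path solving the ODE \eqref{eq:ODE}, the following toy forward-Euler integration (ours; `G_stub`, `euler_path` and the constants are illustrative stand-ins, since the true $G(t,R) = (\mathbb{E}\langle Q \rangle_{t,R})^2$ involves an intractable posterior average) reproduces the qualitative properties used above: the path starts at $\epsilon$, is nondecreasing, and stays in $[\epsilon, \epsilon + \rho_x^2]$.

```python
import math

# Toy forward-Euler integration of the interpolation-path ODE
#   g'(t) = G(t, g(t)),  g(0) = eps,  t in [0, 1].
# G_stub is a hypothetical stand-in for the true G(t, R) = (E<Q>_{t,R})^2;
# like the true G, it is nonnegative and bounded by rho_x^2.

RHO_X = 1.0

def G_stub(t, R):
    # Illustrative overlap-squared function, values in [0, RHO_X**2].
    return (RHO_X * math.tanh(R + t)) ** 2

def euler_path(G, eps, steps=1000):
    """Forward-Euler solution of g' = G(t, g), g(0) = eps, on [0, 1]."""
    dt = 1.0 / steps
    t, g = 0.0, eps
    path = [g]
    for _ in range(steps):
        g += dt * G(t, g)
        t += dt
        path.append(g)
    return path
```

The resulting discrete path is nondecreasing (since $G \geq 0$) and ends below $\epsilon + \rho_x^2$, consistent with the inclusion $R([0,1],\epsilon) \subseteq [\epsilon, \epsilon + (\rho_x+\delta)^2]$ established in Lemma~\ref{lemma:ode_for_interpolation_path}.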
\section{Introduction} Visual question answering (VQA) \cite{antol2015vqa} is an attractive research direction aiming to jointly analyze multimodal content from images and natural language. Equipped with the capacities of grounding, reasoning and translating, a VQA agent is expected to answer a question in natural language based on an image. Recent works~\cite{cadene2019murel,li2019relation,ben2019block} have achieved great success in the VQA problems that are answerable by solely referring to the visible content of the image. However, such kinds of models are incapable of answering questions which require external knowledge beyond what is in the image. Considering the question in Figure \ref{fig:intro}, the agent not only needs to visually localize `the red cylinder', but also to semantically recognize it as `fire hydrant' and connect the knowledge that `fire hydrant is used for firefighting'. Therefore, how to collect the question-oriented and information-complementary evidence from visual, semantic and knowledge perspectives is essential to achieve general VQA. \begin{figure}[t] \centering \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{-5mm} \includegraphics[width=\columnwidth]{motivation.pdf} \caption{\small{An illustration of our motivation. We represent an image by multi-layer graphs and cross-modal knowledge reasoning is conducted on the graphs to infer the optimal answer.}} \label{fig:intro} \end{figure} To advocate research in this direction, \cite{wang2018fvqa} introduces the `Fact-based' VQA (FVQA) task for answering questions by joint analysis of the image and the knowledge base of facts. The typical solutions for FVQA build a fact graph with fact triplets filtered by the visual concepts in the image and select one entity in the graph as the answer. Existing works \cite{wang2017explicit,wang2018fvqa} parse the question as keywords and retrieve the supporting-entity only by keyword matching. 
This kind of approach is vulnerable when the question does not exactly mention the visual concepts (\textit{e.g.} synonyms and homographs) or the mentioned information is not captured in the fact graph (\textit{e.g.} the visual attribute `red' in Figure \ref{fig:intro} may be falsely omitted). To resolve these problems, \cite{narasimhan2018out} introduces visual information into the fact graph and infers the answer by implicit graph reasoning under the guidance of the question. However, they provide the whole visual information equally to each graph node by concatenation of the image, question and entity embeddings. Actually, only part of the visual content is relevant to the question and a certain entity. Moreover, the fact graph here is still homogeneous since each node is represented by a fixed form of image-question-entity embedding, which limits the model's flexibility of adaptively capturing evidence from different modalities. In this work, we depict an image as a multi-modal heterogeneous graph, which contains multiple layers of information corresponding to different modalities. The proposed model is focused on \textit{\textbf{Mu}lti-Layer \textbf{C}ross-Modal \textbf{K}nowledge Reas\textbf{o}ning} and we name it as \textbf{Mucko} for short. Specifically, we encode an image by three layers of graphs, where the object appearance and their relationships are kept in the \textit{visual layer}, the high-level abstraction for bridging the gaps between visual and factual information is provided in the \textit{semantic layer}, and the corresponding knowledge of facts are supported in the \textit{fact layer}. We propose a modality-aware heterogeneous graph convolutional network to adaptively collect complementary evidence in the multi-layer graphs. It can be performed by two procedures. 
First, the Intra-Modal Knowledge Selection procedure collects question-oriented information from each graph layer under the guidance of the question; Then, the Cross-Modal Knowledge Reasoning procedure captures complementary evidence across different layers. The main contributions of this paper are summarized as follows: (1) We comprehensively depict an image by a heterogeneous graph containing multiple layers of information based on visual, semantic and knowledge modalities. We consider these three modalities jointly and achieve significant improvement over state-of-the-art solutions. (2) We propose a modality-aware heterogeneous graph convolutional network to capture question-oriented evidence from different modalities. Especially, we leverage an attention operation in each convolution layer to select the most relevant evidence for the given question, and the convolution operation is responsible for adaptive feature aggregation. (3) We demonstrate good interpretability of our approach and provide case studies with deep insights. Our model automatically tells which modality (visual, semantic or factual) and which entities contribute more to answering the question through visualization of attention weights and gate values. \section{Related Work} \paragraph{Visual Question Answering.} The typical solutions for VQA are based on the CNN-RNN architecture \cite{malinowski2015ask} and leverage global visual features to represent image, which may introduce noisy information. Various attention mechanisms \cite{yang2016stacked,lu2016hierarchical,anderson2018bottom} have been exploited to highlight visual objects that are relevant to the question. However, they treat objects independently and ignore their informative relationships. \cite{battaglia2018relational} demonstrates that human’s ability of combinatorial generalization highly depends on the mechanisms for reasoning over relationships. 
Consistent with such proposal, there is an emerging trend to represent the image by graph structure to depict objects and relationships in VQA and other vision-language tasks \cite{hu2019language,wang2019neighbourhood,li2019relation}. As an extension, \cite{jiang2019dualvd} exploits natural language to enrich the graph-based visual representations. However, it solely captures the semantics in natural language by LSTM, which lacks fine-grained correlation with the visual information. To go one step further, we depict an image by multiple layers of graphs from visual, semantic and factual perspectives to collect fine-grained evidence from different modalities. \paragraph{Fact-based Visual Question Answering.} Humans can easily combine visual observation with external knowledge for answering questions, which remains challenging for algorithms. \cite{wang2018fvqa} introduces a fact-based VQA task, which provides a knowledge base of facts and associates each question with a supporting-fact. Recent works based on FVQA generally select one entity from the fact graph as the answer and fall into two categories: query-mapping based methods and learning based methods. \cite{wang2017explicit} reduces the question to one of the available query templates and this limits the types of questions that can be asked. \cite{wang2018fvqa} automatically classifies and maps the question to a query which does not suffer the above constraint. In both methods, however, visual information is used to extract facts but is not introduced during the reasoning process. ~\cite{narasimhan2018out} applies GCN on the fact graph where each node is represented by the fixed form of image-question-entity embedding. However, the visual information is provided as a whole, which may introduce redundant information for prediction. 
In this paper, we depict an image by multi-layer graphs and perform cross-modal heterogeneous graph reasoning on them to capture complementary evidence from different layers that is most relevant to the question. \paragraph{Heterogeneous Graph Neural Networks.} Graph neural networks have gained fast momentum in the last few years~\cite{wu2019comprehensive}. Compared with homogeneous graphs, heterogeneous graphs are more common in the real world. \cite{schlichtkrull2018modeling} generalizes graph convolutional network (GCN) to handle different relationships between entities in a knowledge base, where each edge with distinct relationships is encoded independently. \cite{wang2019heterogeneous,hu2019heterogeneous} propose heterogeneous graph attention networks with dual-level attention mechanism. All of these methods model different types of nodes and edges on a unified graph. In contrast, the heterogeneous graph in this work contains multiple layers of subgraphs and each layer consists of nodes and edges coming from different modalities. For this specific constraint, we propose intra-modal and cross-modal graph convolutions for reasoning over such multi-modal heterogeneous graphs. \section{Methodology} \begin{figure*}[t] \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{-14pt} \includegraphics[width =\textwidth]{framework.pdf} \caption{\small{An overview of our model. The model contains two modules: Multi-modal Heterogeneous Graph Construction aims to depict an image by multiple layers of graphs and Cross-modal Heterogeneous Graph Reasoning supports intra-modal and cross-modal evidence selection. 
}} \label{fig:model} \end{figure*} Given an image $I$ and a question $Q$, the task aims to predict an answer $A$ while leveraging an external knowledge base, which consists of facts in the form of triplets, \textit{i.e.} $<e_1, r, e_2>$, where $e_1$ is a visual concept in the image, $e_2$ is an attribute or phrase and $r$ represents the relationship between $e_1$ and $e_2$. The key is to choose a correct entity, \textit{i.e.} either $e_1$ or $e_2$, from the supporting fact as the predicted answer. We first introduce a novel scheme of depicting an image by three layers of graphs, including the visual graph, semantic graph and fact graph respectively, imitating the understanding of various properties of an object and the relationships. Then we perform cross-modal heterogeneous graph reasoning that consists of two parts: \textit{Intra-Modal Knowledge Selection} aims to choose question-oriented knowledge from each layer of graphs by intra-modal graph convolutions, and \textit{Cross-Modal Knowledge Reasoning} adaptively selects complementary evidence across three layers of graphs by cross-modal graph convolutions. By stacking the above two processes multiple times, our model performs iterative reasoning across all the modalities and results in the optimal answer by jointly analyzing all the entities. Figure \ref{fig:model} gives detailed illustration of our model. \subsection{Multi-Modal Graph Construction}\label{sec:graphConstruction} \paragraph{Visual Graph Construction.} Since most of the questions in FVQA are grounded in the visual objects and their relationships, we construct a fully-connected visual graph to represent such evidence at appearance level. 
Given an image $I$, we use Faster-RCNN \cite{ren2015faster} to identify a set of objects $\mathcal{O} = \{o_i\}_{i=1}^K$ ($K$ = 36), where each object $o_i$ is associated with a visual feature vector $\bm{v}_i \in \mathbb{R}^{d_v}$ ($d_v$ = 2048), a spatial feature vector $\bm{b}_i\in \mathbb{R}^{d_b}$ ($d_b$ = 4) and a corresponding label. Specifically, $\bm{b}_i = [x_i, y_i, w_i, h_i]$, where $(x_i, y_i)$, $h_i$ and $w_i$ respectively denote the coordinate of the top-left corner, the height and width of the bounding box. We construct a visual graph $\mathcal{G}^V=(\mathcal{V}^V,\mathcal{E}^V)$ over $\mathcal{O}$, where $\mathcal{V}^V =\{v_i^V\}_{i=1}^K$ is the node set and each node $v^V_{i}$ corresponds to a detected object $o_i$. The feature of node $v^{V}_{i}$ is represented by $\bm{v}^V_i$. Each edge $e^V_{ij} \in \mathcal{E}^V$ denotes the relative spatial relationships between two objects. We encode the edge feature by a 5-dimensional vector, \textit{i.e.} $\bm{r}^V_{ij} = [\frac{x_j-x_i}{w_i},\frac{y_j-y_i}{h_i},\frac{w_j}{w_i},\frac{h_j}{h_i},\frac{w_jh_j}{w_ih_i}]$. \paragraph{Semantic Graph Construction.} In addition to visual information, high-level abstraction of the objects and relationships by natural language provides essential semantic information. Such abstraction is indispensable to associate the visual objects in the image with the concepts mentioned in both questions and facts. In our work, we leverage dense captions \cite{johnson2016densecap} to extract a set of local-level semantics in an image, ranging from the properties of a single object (color, shape, emotion, \textit{etc.}) to the relationships between objects (action, spatial positions, comparison, \textit{etc.}). We depict an image by $D$ dense captions, denoted as $Z=\{z_i\}_{i=1}^D$, where $z_i$ is a natural language description about a local region in the image. 
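For concreteness, the 5-dimensional relative spatial edge feature $\bm{r}^V_{ij}$ defined above can be computed directly from two bounding boxes. The helper below is an illustrative sketch (the function name and box format are our assumptions for exposition, not part of a released implementation):

```python
def spatial_edge_feature(box_i, box_j):
    """Relative spatial feature r^V_ij between two detected objects.

    Each box is (x, y, w, h), with (x, y) the top-left corner and
    (w, h) the width and height, as in the visual graph construction.
    """
    xi, yi, wi, hi = box_i
    xj, yj, wj, hj = box_j
    return [
        (xj - xi) / wi,         # horizontal offset, normalized by w_i
        (yj - yi) / hi,         # vertical offset, normalized by h_i
        wj / wi,                # width ratio
        hj / hi,                # height ratio
        (wj * hj) / (wi * hi),  # area ratio
    ]
```

Note that the feature is asymmetric: swapping the two boxes yields a different edge feature, so each ordered pair of objects carries its own directed edge.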
Instead of using monolithic embeddings to represent the captions, we model them by a graph-based semantic representation, denoted as $\mathcal{G}^S=(\mathcal{V}^S,\mathcal{E}^S)$, which is constructed by a semantic graph parsing model \cite{anderson2016spice}. The node $v^S_i \in \mathcal{V}^S$ represents the name or attribute of an object extracted from the captions while the edge $e_{ij}^S\in \mathcal{E}^S$ represents the relationship between $v^S_i$ and $v^S_j$. We use the averaged GloVe embeddings \cite{pennington2014glove} to represent $v^S_i$ and $e_{ij}^S$, denoted as $\bm{v}_{i}^S$ and $\bm{r}_{ij}^S$, respectively. The graph representation retains the relational information among concepts and unifies the representations in graph domain, which is better for explicit reasoning across modalities. \paragraph{Fact Graph Construction.} To find the optimal supporting-fact, we first retrieve relevant candidate facts from the knowledge base of facts following a score-based approach proposed in \cite{narasimhan2018out}. We compute the cosine similarity of the embeddings of every word in the fact with the words in the question and the words of visual concepts detected in the image. Then we average these values to assign a similarity score to the fact. The facts are sorted based on the similarity and the 100 highest scoring facts are retained, denoted as $f_{100}$. A relation type classifier is trained additionally to further filter the retrieved facts. Specifically, we feed the last hidden state of LSTM to an MLP layer to predict the relation type $\hat{r}_i$ of a question. We retain the facts among $f_{100}$ only if their relationships agree with $\hat{r}_i$, \textit{i.e.} $f_{rel}=\{f\in f_{100}:r(f) \in\{\hat{r}_i\}\}$ ($\{\hat{r}_i\}$ contains top-3 predicted relationships in experiments). Then a fact graph $\mathcal{G}^F=(\mathcal{V}^F,\mathcal{E}^F)$ is built upon $f_{rel}$ as the candidate facts can be naturally organized as graphical structure. 
Each node $v_i^F \in \mathcal{V}^F$ denotes an entity in $f_{rel}$ and is represented by GloVe embedding of the entity, denoted as $\bm{v}_i^F$. Each edge $e_{ij}^F \in \mathcal{E}^F$ denotes the relationship between $v_i^F$ and $v_j^F$ and is represented by GloVe embedding $\bm{r}_{ij}^F$. The topological structure among facts can be effectively exploited by jointly considering all the entities in the fact graph. \subsection{Intra-Modal Knowledge Selection} \label{subsec:Intra} Since each layer of graphs contains modality-specific knowledge relevant to the question, we first select valuable evidence independently from the visual graph, semantic graph and fact graph by \textbf{Visual-to-Visual Convolution}, \textbf{Semantic-to-Semantic Convolution} and \textbf{Fact-to-Fact Convolution} respectively. These three convolutions share common operations but differ in their node and edge representations corresponding to the graph layers. Thus we omit the superscript of node representation $\bm{v}$ and edge representation $\bm{r}$ in the rest of this section. We first perform attention operations to highlight the nodes and edges that are most relevant to the question $q$ and consequently update node representations via intra-modal graph convolution. This process mainly consists of the following three steps: \paragraph{Question-guided Node Attention.} We first evaluate the relevance of each node corresponding to the question by an attention mechanism. The attention weight for $v_i$ is computed as: \begin{equation}\label{eq:node attention} \alpha_i=\softmax(\bm{w}^T_a\tanh({\textbf{W}_{1} }\bm{v}_i + \textbf{W}_{2}\bm{q})) \end{equation} where $\textbf{W}_{1}$,$\textbf{W}_{2}$ and $\bm{w}_a$ (as well as $\textbf{W}_{3}$,..., $\textbf{W}_{11}$, $\bm{w}_b$, $\bm{w}_c$ mentioned below) are learned parameters. $\bm{q}$ is question embedding encoded by LSTM. 
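As an illustrative NumPy sketch of the question-guided node attention in Eq.~\ref{eq:node attention} (the weight matrices below are random placeholders standing in for the learned parameters $\textbf{W}_{1}$, $\textbf{W}_{2}$ and $\bm{w}_a$; shapes are our assumptions for exposition):

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def node_attention(V, q, W1, W2, wa):
    """alpha_i = softmax_i( wa^T tanh(W1 v_i + W2 q) ).

    V: (K, d) node features, q: (dq,) question embedding,
    W1: (h, d), W2: (h, dq), wa: (h,) with h the attention size.
    """
    scores = np.tanh(V @ W1.T + q @ W2.T) @ wa  # one scalar score per node
    return softmax(scores)                      # distribution over K nodes

# toy example with random placeholder parameters
rng = np.random.default_rng(0)
K, d, dq, h = 5, 8, 6, 16
alpha = node_attention(rng.normal(size=(K, d)), rng.normal(size=dq),
                       rng.normal(size=(h, d)), rng.normal(size=(h, dq)),
                       rng.normal(size=h))
```

The same additive-attention pattern, with different parameters and concatenated inputs, underlies the edge attention of Eq.~\ref{edge attention}.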
\paragraph{Question-guided Edge Attention.} Under the guidance of question, we then evaluate the importance of edge $e_{ji}$ constrained by the neighbor node $v_j$ with respect to $v_i$ as: \begin{equation}\label{edge attention} \beta_{ji}=\softmax(\bm{w}^T_b\tanh({\textbf{W}_{3} }\bm{v'}_{j} + \textbf{W}_{4}\bm{q'})) \end{equation} where $\bm{v'}_{j}=\textbf{W}_5[\bm{v}_j,\bm{r}_{ji}]$, $\bm{q'}=\textbf{W}_{6}[\bm{v}_i,\bm{q}]$ and $[\cdot,\cdot]$ denotes the concatenation operation. \paragraph{Intra-Modal Graph Convolution.} Given the node and edge attention weights learned in Eq. {\ref{eq:node attention}} and Eq. \ref{edge attention}, the node representations of each layer of graphs are updated following the message-passing framework~\cite{gilmer2017neural}. We gather the neighborhood information and update the representation of $v_i$ as: \begin{align} &\bm{m}_i=\sum_{j\in\mathcal{N}_i}\beta_{ji}\bm{v'}_{j} \\ &\hat{\bm{v}}_i=\text{ReLU}(\textbf{W}_7[\bm{m}_i,\alpha_i\bm{v}_i])\label{eq:4} \end{align} where $\mathcal{N}_i$ is the neighborhood set of node $v_i$. We conduct the above intra-modal knowledge selection on $\mathcal{G}^V$, $\mathcal{G}^S$ and $\mathcal{G}^F$ independently and obtain the updated node representations, denoted as $\{\hat{\bm{v}}_i^V\}_{i=1}^{\mathcal{N}^V}$, $\{\hat{\bm{v}}_i^S\}_{i=1}^{\mathcal{N}^S}$ and $\{\hat{\bm{v}}_i^F\}_{i=1}^{\mathcal{N}^F}$ accordingly. \subsection{Cross-Modal Knowledge Reasoning} \label{subsection:3.2} To answer the question correctly, we fully consider the complementary evidence from visual, semantic and factual information. Since the answer comes from one entity in the fact graph, we gather complementary information from visual graph and semantic graph to fact graph by cross-modal convolutions, including \textit{visual-to-fact convolution} and \textit{semantic-to-fact convolution}. Finally, a \textit{fact-to-fact aggregation} is performed on the fact graph to reason over all the entities and form a global decision. 
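A minimal sketch of the intra-modal message passing culminating in Eq.~\ref{eq:4} may help fix the shapes (loop-based for clarity; all names, shapes and the fully-connected neighborhood are our assumptions for exposition, not the released implementation):

```python
import numpy as np

def intra_modal_update(V, R, alpha, beta, W5, W7):
    """Node update on one fully connected graph layer.

    V:     (K, d)      node features
    R:     (K, K, dr)  edge features, R[j, i] is r_ji
    alpha: (K,)        node attention weights
    beta:  (K, K)      edge attention, beta[j, i] weights edge j -> i
    W5:    (dp, d + dr), W7: (do, dp + d)
    """
    K, d = V.shape
    out = np.zeros((K, W7.shape[0]))
    for i in range(K):
        # v'_j = W5 [v_j, r_ji] for every neighbor j of node i
        Vp = np.concatenate([V, R[:, i]], axis=1) @ W5.T        # (K, dp)
        # message m_i: attention-weighted sum of transformed neighbors
        m_i = (beta[:, i:i+1] * Vp).sum(axis=0)                 # (dp,)
        # hat v_i = ReLU(W7 [m_i, alpha_i v_i])
        out[i] = np.maximum(0.0, W7 @ np.concatenate([m_i, alpha[i] * V[i]]))
    return out
```

The sketch sums over all nodes $j$ as neighbors, which matches the fully-connected visual graph; for the semantic and fact layers the sum would be restricted to the actual neighborhood set $\mathcal{N}_i$.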
\paragraph{Visual-to-Fact Convolution.} For the entity $v_i^F$ in fact graph, the attention value of each node $v_j^V$ in the visual graph w.r.t. $v_i^F$ is calculated under the guidance of question: \begin{equation}\label{eq:VtoF} \gamma^{\textit{\text{V-F}}}_{ji}=\softmax(\bm{w}^T_c\tanh(\textbf{W}_8\hat{\bm{v}}^V_j+\textbf{W}_9[\hat{\bm{v}}^F_i,\bm{q}])) \end{equation} The complementary information $\bm{m}^\textit{\text{V-F}}_i$ from visual graph for $v_i^F$ is computed as: \begin{equation}\label{eq:6} \bm{m}^\textit{\text{V-F}}_i=\sum_{j\in\mathcal{N}^V}\gamma^\textit{\text{V-F}}_{ji}\hat{\bm{v}}^V_j \end{equation} \paragraph{Semantic-to-Fact Convolution.} The complementary information $\bm{m}^\textit{\text{S-F}}_i$ from the semantic graph is computed in the same way as in Eq. \ref{eq:VtoF} and Eq. \ref{eq:6}. Then we fuse the complementary knowledge for $v_i^F$ from three layers of graphs via a gate operation: \begin{align} \label{eq:gate} &\bm{gate}_i=\sigma(\textbf{W}_{10}[\bm{m}^\textit{\text{V-F}}_i, \bm{m}^\textit{\text{S-F}}_i, \hat{\bm{v}}_i^F])\\ &\widetilde{\bm{v}}^{F}_i=\textbf{W}_{11}(\bm{gate}_i\circ[\bm{m}^\textit{\text{V-F}}_i, \bm{m}^\textit{\text{S-F}}_i, \hat{\bm{v}}_i^F]) \end{align} where $\sigma$ is sigmoid function and ``$\circ$'' denotes element-wise product. \paragraph{Fact-to-Fact Aggregation.} Given a set of candidate entities in the fact graph $\mathcal{G}^F$, we aim to globally compare all the entities and select an optimal one as the answer. Now the representation of each entity in the fact graph gathers question-oriented information from three modalities. To jointly evaluate the possibility of each entity, we perform the attention-based graph convolutional network similar to Fact-to-Fact Convolution introduced in Section \ref{subsec:Intra} to aggregate information in the fact graph and obtain the transformed entity representations. 
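The gate operation of Eq.~\ref{eq:gate} and the following projection can be sketched as below (an illustrative sketch; the assumption that all three evidence vectors share dimension $d$ and the matrix shapes are ours, for exposition only):

```python
import numpy as np

def gated_fusion(m_vf, m_sf, v_f, W10, W11):
    """Fuse visual-to-fact, semantic-to-fact and fact evidence.

    A sigmoid gate re-weights each element of the concatenated
    evidence before the final linear projection.
    m_vf, m_sf, v_f: (d,) each; W10: (3d, 3d); W11: (d, 3d).
    """
    z = np.concatenate([m_vf, m_sf, v_f])        # [m^V-F, m^S-F, hat v^F]
    gate = 1.0 / (1.0 + np.exp(-(W10 @ z)))      # element-wise gate in (0, 1)
    return W11 @ (gate * z)                      # fused entity representation
```

Because the gate is element-wise over the concatenation, the model can suppress or amplify each modality's contribution per dimension, which is what the thermograms of gate values in the case study visualize.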
We iteratively perform intra-modal knowledge selection and cross-modal knowledge reasoning in multiple steps to obtain the final entity representations. After $T$ steps, each entity representation $\widetilde{\bm{v}}^{F(T)}_i$ captures the structural information within a $T$-hop neighborhood across three layers. \subsection{Learning} The concatenation of entity representation $\widetilde{\bm{v}}^{F(T)}_i$ and question embedding $\bm{q}$ is passed to a binary classifier to predict its probability as the answer, \textit{i.e.} $\hat{y}_i=p_\theta([\widetilde{\bm{v}}^{F(T)}_i,\bm{q}])$. We apply the binary cross-entropy loss in the training process: \begin{equation} l_n=-\sum_{i\in \mathcal{N}^F}\big[a\cdot{y}_i\ln \hat{y}_i + b\cdot(1-{y}_i)\ln(1-\hat{y}_i)\big] \end{equation} where ${y}_i$ is the ground truth label for $v_i^F$ and $a, b$ represent loss function weights for positive and negative samples respectively. The entity with the largest probability is selected as the final answer. \section{Experiments} \begin{table} \centering \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{-1mm} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \resizebox{\columnwidth}{!}{\scriptsize \begin{tabular}{l|cc} \hline \multirow{2}*{\bf Method} & \multicolumn{2}{c}{{\bf Overall Accuracy} }\\ \cline{2-3} & {\bf top-1} & {\bf top-3}\\ \hline LSTM-Question+Image+Pre-VQA & $24.98$ & $40.40$ \\ Hie-Question+Image+Pre-VQA& $43.14$ & $59.44$ \\ FVQA (top-3-QQmaping) & $56.91$ & $64.65$ \\ FVQA (Ensemble) & $58.76$ & - \\ Straight to the Facts (STTF) & $62.20$ & $75.60$ \\ Reading Comprehension & $62.96$ & $70.08$ \\ Out of the Box (OB) &$69.35$&$80.25$\\ \hline Human&$77.99$&-\\ \hline {\bf Mucko}&$\bm {73.06}$&$\bm{ 85.94}$\\ \hline \end{tabular} } \caption{State-of-the-art comparison on FVQA dataset.} \label{table:overall} \end{table} \begin{table} \centering \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{-1mm} 
\renewcommand{\arraystretch}{0.9} \resizebox{\columnwidth}{!}{\scriptsize \begin{tabular}{l|l|cc} \hline \multicolumn{2}{l|}{\multirow{2}*{\bf Method}}& \multicolumn{2}{c}{{\bf Overall Accuracy} }\\ \cline{3-4} \multicolumn{2}{l|}{~} & {\bf top-1} & {\bf top-3}\\ \hline \multicolumn{2}{l|}{ {\bf Mucko} (full model)}& ${\bf73.06}$ & ${\bf85.94}$ \\ \hline 1& w/o Semantic Graph & $71.28$ & $82.76$ \\ 2&w/o Visual Graph& $69.12$ & $78.05$ \\ 3& w/o Semantic Graph \& Visual Graph & $20.43$ &$29.10$ \\ \hline 4& S-to-F Concat.& $67.82$ & $76.65$ \\ 5& V-to-F Concat. & $69.93$ & $80.12$ \\ 6& V-to-F Concat. \& S-to-F Concat. &$70.68$ & $82.04$ \\ \hline 7& w/o relationships &${ 72.10}$&${ 83.75}$\\ \hline \end{tabular} } \caption{Ablation study of key components of Mucko.} \label{table:ablation} \end{table} \paragraph{Dataset.} We evaluate Mucko on the FVQA dataset \cite{wang2018fvqa}. It consists of 2,190 images, 5,286 questions and a knowledge base of 193,449 facts. Facts are constructed by extracting top visual concepts in the dataset and querying these concepts in WebChild, ConceptNet and DBPedia.\footnote{We provide more experimental results on OK-VQA and Visual7W+KB in supplementary materials.} \paragraph{Evaluation Metrics.} We follow the metrics in \cite{wang2018fvqa} to evaluate the performance. The top-1 and top-3 accuracies are calculated for each method. The averaged accuracy of 5 test splits is reported as the overall accuracy. \paragraph{Implementation Details.} We select the top-10 dense captions according to their confidence. The max sentence length of dense captions and the questions is set to 20. The hidden state size of all the LSTM blocks is set to 512. We set $a=0.7$ and $b=0.3$ in the binary cross-entropy loss. Our model is trained by the Adam optimizer with 20 epochs, where the mini-batch size is 64 and the dropout ratio is 0.5. A warm-up strategy is applied for 2 epochs with initial learning rate $1\times 10^{-3}$ and warm-up factor $0.2$. 
Then we use a cosine annealing learning rate schedule with initial learning rate $\eta_{max}=1\times 10^{-3}$ and termination learning rate $\eta_{min}=3.6\times10^{-4}$ for the remaining epochs. \begin{figure*}[t] \centering \setlength{\abovecaptionskip}{2pt} \setlength{\belowcaptionskip}{-4mm} \includegraphics[width=\textwidth]{case.pdf} \caption{\small{Visualization for Mucko. Visual graph highlights the most relevant subject (red box) according to attention weights of each object ($\alpha^V$ in Eq. \ref{eq:node attention}) and the objects (orange boxes) with top-2 attended relationships ($\beta^V$ in Eq. \ref{edge attention}). Fact graph shows the predicted entity (center node) and its top-4 attended neighbors ($\alpha^F$ in Eq. \ref{eq:node attention}). Semantic graph shows the most relevant concept (center node) and its up to top-4 attended neighbors ($\alpha^S$ in Eq. \ref{eq:node attention}). Each edge is marked with attention value ($\beta^{F/S}$ in Eq. \ref{edge attention}). Dashed lines represent visual-to-fact convolution (orange) and semantic-to-fact convolution weights (blue) of the predicted entity ($\gamma^{\textit{\text{V-F}}},\gamma^{\textit{\text{S-F}}}$ in Eq. \ref{eq:VtoF}). The thermogram on the top visualizes the gate values ($\bm{gate}_i$ in Eq. \ref{eq:gate}) of visual embedding (left), entity embedding (middle) and semantic embedding (right). 
}} \label{fig:case} \end{figure*} \subsection{Comparison with State-of-the-Art Methods} Table \ref{table:overall} shows the comparison of Mucko with state-of-the-art models, including CNN-RNN based approaches \cite{wang2018fvqa}, \textit{i.e.} LSTM-Question+Image+Pre-VQA and Hie-Question+Image+Pre-VQA, semantic parsing based approaches \cite{wang2018fvqa}, \textit{i.e.} FVQA (top-3-QQmaping) and FVQA (Ensemble), learning-based approaches, \textit{i.e.} Straight to the Facts (STTF) \cite{narasimhan2018straight} and Out of the Box (OB) \cite{narasimhan2018out}, and a Reading Comprehension based approach \cite{li2019visual}. Our model consistently outperforms all the approaches on all the metrics and achieves a 3.71\% boost on top-1 accuracy and a 5.69\% boost on top-3 accuracy compared with the state-of-the-art model. The model OB is most relevant to Mucko in that it leverages graph convolutional networks to jointly assess all the entities in the fact graph. However, it introduces the global image features equally to all the entities without selection. By collecting question-oriented visual and semantic information via modality-aware heterogeneous graph convolutional networks, our model gains remarkable improvement. \subsection{Ablation Study} In Table \ref{table:ablation}, we show ablation results to verify the contribution of each component in our model. (1) In models `1-3', we evaluate the \textbf{influence of each layer of graphs} on the performance. We observe that the top-1 accuracy of `1' and `2' respectively decreases by 1.1\% and 3.94\% compared with the full model, which indicates that both semantic and visual graphs are beneficial to provide valuable evidence for answer inference. Among them, the visual information has a greater impact than the semantic part. When removing both semantic and visual graphs, `3' results in a significant decrease. (2) In models `4-6', we assess the \textbf{effectiveness of the proposed cross-modal graph convolutions}. 
`4', `5' and `6' respectively replace the `Semantic-to-Fact Conv.' in `2', `Visual-to-Fact Conv.' in `1' and both in the full model by concatenation, \textit{i.e.} concatenating the mean pooling of all the semantic/visual node features with each entity feature. The performance decreases when replacing the convolution from either S-to-F or V-to-F, or both simultaneously, which proves the benefits of cross-modal convolution in gathering complementary evidence from different modalities. (3) We evaluate the \textbf{influence of the relationships} in the heterogeneous graph. We omit the relational features $\bm{r}_{ij}$ in all the three layers in `7' and the performance decreases by nearly 1\% on top-1 accuracy. It proves the benefits of relational information, though it is less influential than the modality information. \subsection{Interpretability} Our model is interpretable by visualizing the attention weights and gate values in the reasoning process. From the case study in Figure \ref{fig:case}, we conclude with the following three insights: \textbf{(1) Mucko is capable of revealing the knowledge selection mode.} The first two examples indicate that Mucko captures the most relevant visual, semantic and factual evidence as well as complementary information across three modalities. In most cases, factual knowledge provides predominant clues compared with other modalities according to gate values because FVQA relies on external knowledge to a great extent. Furthermore, more evidence comes from the semantic modality when the question involves complex relationships. For instance, the second question involving the relationship between `hand' and `white round thing' needs more semantic clues. \textbf{(2) Mucko has advantages over the state-of-the-art model.} The third example compares the predicted answer of OB with Mucko. 
Mucko collects relevant visual and semantic evidence to make each entity discriminative enough for predicting the correct answer, while OB fails to distinguish representations of `laptop' and `keyboard' without feature selection. \textbf{(3) Mucko fails when multiple answers are reasonable for the same question.} Since both `wedding' and `party' may have cakes, the predicted answer `party' in the last example is reasonable from human judgement. \begin{table} \centering \renewcommand{\arraystretch}{1} \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{-2mm} \resizebox{0.9\columnwidth}{!}{\scriptsize \begin{tabular}{l|rrrr} \hline {\bf \#Retrieved facts}& {\bf @50} &{\bf @100} &{\bf @150} &{\bf @200} \\ \hline \multirow{2}*{\shortstack{\bf Rel@1 (top-1 accuracy) \\\bf Rel@1 (top-3 accuracy)}} & 55.56 & 70.62 & 65.94 & 59.77 \\ ~ & 64.09 & 81.95 & 73.41 & 66.32 \\ \hline \multirow{2}*{\shortstack{\bf Rel@3 (top-1 accuracy) \\\bf Rel@3 (top-3 accuracy)}} & 58.93 & {\bf73.06}&70.12&65.93 \\ ~& 68.50& {\bf85.94} & 81.43 & 74.87 \\ \hline \end{tabular} } \caption{Overall accuracy with different number of retrieved candidate facts and different number of relation types.} \label{table:factrecall} \end{table} \begin{table} \centering \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{-4mm} \resizebox{0.7\columnwidth}{!}{\scriptsize \begin{tabular}{l|ccc} \hline {\bf \#Steps } & 1 & 2 & 3\\ \hline {\bf top-1 accuracy}&62.05 & {\bf 73.06} & 70.43 \\ {\bf top-3 accuracy} & 71.87 & {\bf85.94} & 81.32 \\ \hline \end{tabular} } \caption{Overall accuracy with different number of reasoning steps.} \label{table:step} \end{table} \subsection{Parameter Analysis} In Table \ref{table:factrecall}, we vary the number of retrieved candidate facts and relation types for candidate filtering. We achieve the highest downstream accuracy with top-100 candidate facts and top-3 relation types. 
In Table \ref{table:step}, we evaluate the influence of different numbers of reasoning steps $T$. We find that two reasoning steps achieve the best performance. We use the above settings in our full model. \section{Conclusion} In this paper, we propose Mucko for visual question answering requiring external knowledge, which focuses on multi-layer cross-modal knowledge reasoning. We depict an image by a novel heterogeneous graph with multiple layers of information corresponding to visual, semantic and factual modalities. We propose a modality-aware heterogeneous graph convolutional network to select and gather intra-modal and cross-modal evidence iteratively. Our model outperforms the state-of-the-art approaches remarkably and obtains interpretable results on the benchmark dataset. \section*{Acknowledgements} This work is supported by the National Key Research and Development Program (Grant No.2017YFB0803301). \clearpage \section{Supplementary Materials} We also conduct extensive experiments on another two large-scale knowledge-based VQA datasets: OK-VQA \cite{marino2019ok} and Visual7W+KB \cite{li2017incorporating} to evaluate the performance of our proposed model. In this section, we first briefly review the datasets and then report the performance of our proposed method compared with several baseline models. \subsection{Datasets} \paragraph{Visual7W+KB:} The Visual7W dataset \cite{zhu2016visual7w} is built based on a subset of images from Visual Genome \cite{krishna2017visual}, which includes questions in terms of (what, where, when, who, why, which and how) along with the corresponding answers in a multi-choice format. However, most questions in Visual7W are based solely on the image content and do not require external knowledge. Furthermore, \cite{li2017incorporating} generated a collection of knowledge-based questions based on the test images in Visual7W by filling a set of question-answer templates that require reasoning on both visual content and external knowledge. 
We denote this dataset as Visual7W+KB in our paper. In total, Visual7W+KB consists of 16,850 open-domain question-answer pairs based on 8,425 images in the Visual7W test split. Different from FVQA, Visual7W+KB uses ConceptNet to guide the question generation but does not provide a task-specific knowledge base. In our work, we likewise leverage ConceptNet to retrieve the supporting knowledge and select one entity as the predicted answer. \paragraph{OK-VQA:} \cite{marino2019ok} proposed the Outside Knowledge VQA (OK-VQA) dataset, which is the largest knowledge-based VQA dataset at present. Different from existing datasets, the questions in OK-VQA are manually generated by MTurk workers and are not derived from specific knowledge bases. Therefore, it requires the model to retrieve supporting knowledge from open-domain resources, which is much closer to general VQA but more challenging for existing models. OK-VQA contains 14,031 images randomly collected from the MSCOCO dataset \cite{lin2014microsoft}, using the original 80k-40k training and validation splits as train and test splits. OK-VQA contains 14,055 questions covering a variety of knowledge categories such as science \& technology, history, and sports. \subsection{Experimental results on Visual7W+KB} The comparison with state-of-the-art models on the Visual7W+KB dataset is shown in Table \ref{table:V7W_SOTA}. The compared baselines fall into two sets, i.e., memory-based approaches and a graph-based approach. The memory-based approaches \cite{li2017incorporating} include KDMN-NoKnowledge (w/o external knowledge), KDMN-NoMemory (attention-based knowledge incorporation), KDMN (dynamic memory network based knowledge incorporation) and KDMN-Ensemble (an ensemble of several KDMN models). We also test the performance of Out of the Box (OB) \cite{narasimhan2018out} on Visual7W+KB and report the results in Table \ref{table:V7W_SOTA}.
Consistent with the results on FVQA, we achieve a significant improvement (7.98\% on top-1 accuracy and 13.52\% on top-3 accuracy) over state-of-the-art models. Note that our proposed method is a single model, and it outperforms even the existing ensemble model \cite{li2017incorporating}. \begin{table}[tp] \centering \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \resizebox{\columnwidth}{!}{\scriptsize \begin{tabular}{l|cc} \hline \multirow{2}*{\bf Method} & \multicolumn{2}{c}{{\bf Overall Accuracy} }\\ \cline{2-3} & {\bf top-1} & {\bf top-3}\\ \hline KDMN-NoKnowledge \cite{li2017incorporating} & $45.1$ & - \\ KDMN-NoMemory \cite{li2017incorporating}&$51.9$&-\\ KDMN \cite{li2017incorporating}&$57.9$&-\\ KDMN-Ensemble \cite{li2017incorporating}&$60.9$&-\\ Out of the Box (OB) \cite{narasimhan2018out}&$57.32$&$71.61$\\ \hline {\bf Mucko (ours)}&$\bm {68.88}$&$\bm{ 85.13}$\\ \hline \end{tabular} } \caption{State-of-the-art comparison on Visual7W+KB dataset.} \label{table:V7W_SOTA} \end{table} \subsection{Experimental results on OK-VQA} We also report the performance on the challenging OK-VQA dataset in Table \ref{table:OK-VQA_SOTA}. We compare our model with three kinds of existing models, including current state-of-the-art VQA models, knowledge-based VQA models and ensemble models. The VQA models contain Q-Only \cite{marino2019ok}, MLP \cite{marino2019ok}, BAN \cite{kim2018bilinear} and MUTAN \cite{ben2017mutan}. The knowledge-based VQA models \cite{marino2019ok} consist of ArticleNet (AN), BAN+AN and MUTAN+AN. The ensemble models \cite{marino2019ok}, i.e., BAN/AN oracle and MUTAN/AN oracle, simply take the raw ArticleNet and VQA model predictions and choose the best answer (compared with the ground truth) from either. Our model consistently outperforms all the compared models on the overall performance. Even the state-of-the-art models (BAN and MUTAN), which are specifically designed for VQA tasks, obtain inferior results compared with ours.
This indicates that a general VQA task like OK-VQA cannot be solved simply by a well-designed model; it also requires the ability to incorporate external knowledge in an effective way. Moreover, our model outperforms knowledge-based VQA models, including both single models (BAN+AN and MUTAN+AN) and ensemble models (BAN/AN oracle and MUTAN/AN oracle), which further proves the advantages of our proposed multi-layer heterogeneous graph representation and cross-modal heterogeneous graph reasoning. \begin{table}[tp] \centering \resizebox{\columnwidth}{!}{\scriptsize \begin{tabular}{l|cc} \hline \multirow{2}*{\bf Method} & \multicolumn{2}{c}{{\bf Overall Accuracy}}\\ \cline{2-3} & {\bf top-1} & {\bf top-3}\\ \hline Q-Only \cite{marino2019ok} & 14.93 &-\\ MLP \cite{marino2019ok} & 20.67 &- \\ BAN \cite{kim2018bilinear} & 25.17&- \\ MUTAN \cite{ben2017mutan} & 26.41&- \\ ArticleNet (AN) \cite{marino2019ok} & 5.28&- \\ BAN + AN \cite{marino2019ok}& 25.61 & - \\ MUTAN + AN \cite{marino2019ok}& 27.84 & - \\ BAN/AN oracle \cite{marino2019ok}& 27.59 & - \\ MUTAN/AN oracle \cite{marino2019ok}& 28.47 & - \\ \hline \textbf{Mucko (ours)} & $\bm{29.20}$&$\bm{30.66} $ \\\hline \end{tabular} } \caption{State-of-the-art comparison on OK-VQA dataset.} \label{table:OK-VQA_SOTA} \end{table} \clearpage \bibliographystyle{named}
\section{\textbf{Introduction}} Image compression using artificial neural networks (ANNs) has shown great potential for application in a wide variety of areas since its first appearance \cite{toderici2015}. In the past couple of years, such methods have outperformed most hand-engineered codecs, such as JPEG \cite{jpeg} and JPEG2000 \cite{jpeg2000}, in terms of rate-distortion (RD) performance \cite{yang2022introduction}. One major advantage of ANN-based compression algorithms is that they can be trained on any ad-hoc dataset to achieve better compression than general-purpose codecs \cite{yang2022introduction}. Although it is commonly believed that the ultimate trade-off in image compression is between rate and distortion, recent studies have shown that there is a third factor governing the visual quality of compressed images, known as perception \cite{blau2019}. Generative Adversarial Networks (GANs) are known for high-quality reconstructed images, achieved by forcing the network to capture the distribution of its input images. Hence, to improve the perceptual quality of the reconstructed image at the receiver, GANs have been applied to image compression networks in the literature \cite{mentzer2020}. Another avenue of work for improving the performance of Convolutional Neural Networks (CNNs) is the attention mechanism. With its unprecedented influence in natural language processing \cite{vaswani2017transformer}, attention has found its way into computer vision and object detection/classification tasks \cite{dosovitskiy2021vit,liu2021}. We utilize both of these improvements in learned image compression networks to enhance performance in terms of the rate-distortion-perception trade-off \cite{blau2019}. As shown in Figure \ref{fig:visual-comparison}, although the attention mechanism alone reaches better performance than other compression standards, augmenting it with a GAN leads to better perceptual quality. \textbf{Contributions of This Work}.
In this work, we investigate the application of recently successful learned image compression methods in the field of solar imaging. We show that these neural compression schemes can easily outperform traditional and currently-in-use image codecs. In addition, we propose a curated attention module to improve the RD trade-off performance of state-of-the-art neural compression architectures. We also utilize adversarial training to encourage the decoder of our neural network to preserve the distribution of the solar images during the reconstruction process. The remainder of the paper is organized as follows. Section \ref{sec:related-work} reviews neural-based compression methods and the importance of compression for the SDO mission. Section \ref{sec:methods} describes our proposed method. Experiments and ablation studies are discussed in Section \ref{sec:experiments}, with a conclusion in Section \ref{sec:conclusion}. \section{\textbf{Related Work}}\label{sec:related-work} \begin{figure}[tp] \centering \includegraphics[width=0.9\linewidth]{figs/visual_comparison.pdf} \caption{Visual comparison of the proposed compression schemes (Attention only and GAN+Attention) to other standard codecs. Reported performance in terms of bit-rate/distortion [bpp$\downarrow$/PSNR$\uparrow$]. GAN outputs are visually closer to the original input despite their lower performance in terms of PSNR. \emph{Best viewed on screen.}} \label{fig:visual-comparison} \end{figure} \subsection{\textbf{Neural Image Compression}} Transform coding based image compression algorithms share four main steps \cite{goyal2001transformcoding}. First, encoding the images from their input space (e.g., RGB) into an uncorrelated space. Second, quantizing to discard less significant information from the data in its uncorrelated domain. In the third step, entropy coding is utilized to losslessly encode the quantized samples into a stream of ones and zeros.
This bitstream is the compressed image. The final step occurs at the receiving end (or at the reconstruction step) and is responsible for decoding the quantized values back to the original space of the input image. The first and most widely used architecture to mimic this scenario with deep neural networks is the convolutional autoencoder, which has shown its superiority in the literature \cite{balle2017endtoend}. Both the encoding and decoding parts of traditional transform coding can be imitated by an autoencoder \cite{balle2021}. End-to-end optimized neural networks are capable of handling various tasks \cite{DEHKORDI2022109091, ebrahimi2022, akyash2021, KASHIANI201917} as long as the learning objective is chosen to be differentiable. In the end-to-end optimization of an autoencoder, problems arise when we want to quantize its bottleneck. It is worth mentioning that quantization is the essential part of compression: merely performing dimensionality reduction does not necessarily discard the redundant information, which is necessary to attain high compression ratios \cite{theis2017}. ANNs are optimized using gradient descent algorithms, which update the parameters of the network by back-propagating the gradients of the loss function. Thus, all the operations performed in the network must be differentiable. As a result, we need to approximate hard discrete quantization with a soft continuous operation. Several approaches have been proposed in the literature to do so. \cite{toderici2015} used recurrent neural networks to directly binarize the latent code stochastically, while \cite{theis2017} used an approach similar to the straight-through estimator \cite{bengio2013}, rounding to the nearest integer in the forward pass and back-propagating the gradients of the identity function. With this continuous approximation, the network parameters can be successfully learned by back-propagating the gradient of the loss function.
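As a rough sketch of the straight-through trick described above (an illustrative toy in NumPy, not the actual implementation of \cite{theis2017}): the forward pass rounds to the nearest integer, while the backward pass treats the rounding as the identity so that gradients can still flow.

```python
import numpy as np

def ste_quantize(y):
    # Forward pass: hard scalar quantization to the nearest integer.
    return np.round(y)

def ste_backward(upstream_grad):
    # Backward pass: the gradient of round() is approximated by the
    # identity, so the upstream gradient passes through unchanged.
    return upstream_grad

y = np.array([0.2, 1.7, -0.6])
y_hat = ste_quantize(y)               # -> [0., 2., -1.]
grad = ste_backward(np.ones_like(y))  # identity gradient
```

Deep learning frameworks realize the same trick with a stop-gradient, e.g. computing `y + stop_gradient(round(y) - y)` so the rounded value is used forward and the identity gradient backward.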
The most widely used approach, proposed by \cite{balle2016} and inherited from \cite{gray1998}, shows that adding independent and identically distributed uniform noise over the range of one scalar quantization bin can be interpreted as equivalent to performing scalar quantization on the bottleneck. By doing so, we can optimize the differential entropy of the continuous approximation as a variational upper bound \cite{theis2016} in order to reduce the entropy of the bottleneck. Low-entropy messages are compressed more efficiently into bitstreams \cite{cover1999infotheory, aghdaie2022morph}. In classical image compression schemes, to get the best out of the quantization process, the first step was to apply an invertible linear transform and translate the image into decorrelated coefficients using the Discrete Cosine Transform (DCT). By doing so, scalar quantization could reach a performance reasonably close to vector quantization \cite{goyal2001transformcoding}. The application of vector quantization in ANN-based compression has been investigated by \cite{agustsson2017}, at the cost of a complicated training procedure. On the other hand, it has been shown \cite{balle2017endtoend, balle2021} that a jointly optimized learned nonlinear transform, i.e., a neural network, followed by scalar quantization can ideally approximate a parametric form of vector quantization. Replacing the actual quantization with the uniform noise approximation in the bottleneck of a vanilla autoencoder during training \cite{balle2018a} transforms it into a Variational Autoencoder (VAE) \cite{kingma2014}. The only difference lies in the chosen prior: in autoencoder-based image compression, the Gaussian prior of the VAE is replaced with a unit uniform distribution centered on the integer numbers.
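The noise-based relaxation above can be sketched in a few lines (a toy illustration with hypothetical helper names): uniform noise over one quantization bin during training, hard rounding at evaluation time.

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxed_quantize(y, training):
    if training:
        # Training: i.i.d. U(-0.5, 0.5) noise acts as a differentiable
        # stand-in for rounding to the nearest integer.
        return y + rng.uniform(-0.5, 0.5, size=y.shape)
    # Evaluation: actual scalar quantization.
    return np.round(y)

y = np.linspace(-2.0, 2.0, 1000)
noisy = relaxed_quantize(y, training=True)   # stays within half a bin of y
hard = relaxed_quantize(y, training=False)   # integer-valued latents
```

The perturbed latent never leaves its quantization bin, which is why the differential entropy of the noisy latent upper-bounds the discrete entropy of the rounded one.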
\subsection{\textbf{Solar Dynamics Observatory (SDO) Mission}} \subsubsection{\textbf{Image Compression on SDO Data}} Advances in sensor technology and an increasing desire for a deeper understanding of the space environment (Sun to Earth and beyond) have led to an explosion of data volume in recent years (unprecedented spatial and/or temporal resolution as well as multi-spectral data). As a result, new innovative data compression algorithms are required. The SDO mission transmits 1.4 TB of data (most of it images of the Sun at different wavelengths) each day to the ground station \cite{sdoguide}, which shows the importance of compression for transmitting and, more importantly, for archiving this huge amount of data \cite{chamberlin2012solar}. In \cite{fischer2017jpeg2000eve}, the authors pointed out the essential need for lossy image compression of the petabyte-scale data gathered by solar missions. They studied the usage of JPEG2000, a transform coding compression scheme based on the discrete wavelet transform, on SDO data. \subsubsection{\textbf{Imagery Instruments on the SDO Spacecraft}} SDO data are captured using three onboard instruments that gather data from the Sun 24 hours a day. The \emph{Helioseismic and Magnetic Imager} (HMI) was created to investigate oscillations and the magnetic field at the solar surface, or photosphere \cite{schou2012hmi}. The \emph{Atmospheric Imaging Assembly} (AIA) on SDO takes full-sun images (1.3 solar diameters) of the solar corona at a spatial resolution of near 1 arcsec, with an image cadence of 12 seconds, for multiple wavelengths \cite{lemen2011aia}. To better understand variations on the timeframes that affect Earth's climate and near-Earth space, the \emph{Extreme ultraviolet Variability Experiment} (EVE) analyzes the solar Extreme UltraViolet (EUV) irradiance with high spectral precision \cite{woods2010eve}.
\subsubsection{\textbf{Machine/Deep Learning on SDO Data}} Recently, \cite{galvez2019} gathered a portion of the raw SDO data and cleaned it into a machine-learning-ready dataset to be used for developing new learning-based methods on SDO mission data. From here on, we call this dataset \emph{SDOML}. Based on this dataset, \cite{salvatelli2019} used a U-Net in a GAN framework to translate AIA multi-spectral (94, 171, 193 \mbox{\normalfont\AA}) images to a specific wavelength (211 \mbox{\normalfont\AA}). As another machine learning work on the SDOML dataset, \cite{santos2020} proposed deep neural networks as a means to auto-calibrate the instrument degradation of the SDO imagery instruments. A conditional GAN is used in \cite{dash2021super} to translate downloaded HMI images from SDO to AIA images. More details on the SDOML dataset are presented in Section \ref{sec:experiments:dataset}. \section{\textbf{Methods}}\label{sec:methods} \begin{figure*}[tp] \centering \includegraphics[width=0.9\linewidth]{figs/network.pdf} \caption{Network architecture. The input image is down-scaled by a factor of 16 to get the latent code and up-sampled in reverse to get the reconstructed image. A conditional discriminator encourages the generator (decoder) toward better perceptual quality. The numbers of channels in the encoder and decoder are set by $N=192$ and $M=320$. Q performs the scalar quantization. EE and ED indicate the entropy encoder and decoder, respectively. $\mu$ and $\sigma$ are the predicted parameters of the latent code probability distribution, defined by the entropy estimation model. GDN and IGDN correspond to the Generalized Divisive Normalization nonlinearity and its inverse, discussed in Section \ref{sec:experiments:implementation}.} \label{fig:network-arch} \end{figure*} \subsection{\textbf{Generative Image Compression}} Autoencoder-based learned image compression networks, like the one we propose in Figure \ref{fig:network-arch}, generally consist of two major parts.
First, the encoder/decoder network, and second, the bottleneck entropy estimation network. The latter is discussed in depth in Section \ref{section:entropy-model}. According to Figure \ref{fig:network-arch}, the relations between the network input ($\bm{x}$) and output ($\bm{x'}$) can be summarized as follows \begin{equation} \begin{aligned} &\bm{x'}=g_s(\bm{\hat{y}};\bm{\theta_g}),\\ &\bm{\hat{y}}=\lfloor g_a(\bm{x};\bm{\phi_g})\rceil,\\ &\bm{\hat{z}}=\lfloor h_a(\bm{y};\bm{\phi_h})\rceil, \end{aligned} \end{equation} in which $\lfloor\cdot\rceil$ denotes quantizing the real-valued input to the nearest integer number. $\bm{\hat{y}}$ is the quantized latent variable and $\bm{\hat{z}}$ is its quantized hyper-prior. The encoder and decoder nonlinear transforms are represented by $g_a$ and $g_s$ with their learned parameters $\bm{\phi_g}$ and $\bm{\theta_g}$, respectively. The subscripts $a$ and $s$ refer to \emph{analysis} and \emph{synthesis}, as these are common terms in the area of transform coding based compression. $h_a$ is the analysis transform that produces the hyper-priors of the entropy estimation model, learned through its parameters $\bm{\phi_h}$. \subsubsection{\textbf{Learning Objective}} \label{sec:compression:objective} Any learned image compression network has to tackle the rate-distortion trade-off, governed by a Lagrangian coefficient $\lambda$, which can be described as \begin{equation} R+\lambda D \label{eq:rate-distortion}, \end{equation} where $R$ and $D$ correspond to the estimated entropy of the latent code and the reconstruction distortion, respectively. The estimated entropy of the quantized bottleneck represents the rate term, which should be minimized during the training of the neural network. The probability distribution of the latent code is variationally approximated by the hyper-prior $\bm{z}$. The quantized $\bm{\hat{z}}$ is then transmitted alongside the compressed image as side-information.
Therefore, the entropy of both should be optimized, as defined below \begin{equation} R=\mathbb{E}_{x\sim p_X}[-\log_2P_{\bm{\hat{y}}|\bm{\hat{z}}}(\bm{\hat{y}}|\bm{\hat{z}};\bm{\theta_h})-\log_2P_{\bm{\hat{z}}}(\bm{\hat{z}};\bm{\psi})], \end{equation} where $\bm{\theta_h}$ and $\bm{\psi}$ are the parameters of the learned entropy models of the latent code ($\bm{\hat{y}}$) and the hyper-prior ($\bm{\hat{z}}$), respectively. In Eq. \ref{eq:rate-distortion}, $D$ accounts for the distortion between the input and output images of the network, which can be measured by any desired metric. The prevalent criterion for measuring the distortion between input and output is the Mean Squared Error (MSE), which is heavily criticized for producing blurry reconstructions. Efforts have been made to propose metrics that adhere perceptually to the human visual system, e.g., the Multi Scale Structural SiMilarity index (MS-SSIM) \cite{wang2009}. Even these metrics have shown weaknesses when intensely scrutinized \cite{nilsson2020}. Recently, perception-aware metrics based on features generated by pre-trained neural networks have been proposed. The Learned Perceptual Image Patch Similarity (LPIPS) introduced by \cite{zhang2018lpips} uses trained AlexNet/VGGNet features to compare patches of an image with a corresponding reference. In training our neural compressor, we fortify its reconstruction loss by exploiting this perceptual metric. To make the reconstruction closer to the input image, we also consider adversarial training of our decoder network. Generative Adversarial Networks (GANs) \cite{goodfellow2014}, consisting of a generator and a discriminator sub-network, are able to follow the distribution of the data at reconstruction time instead of just trying to find the nearest pixel values in order to decrease the distortion. In our network, the decoder plays the role of the generator. The discriminator then forces the decoder to preserve the distribution of the input image in the reconstructed image.
The proposed objective to be optimized is a combination of distortion and perception as follows \begin{equation} \begin{split} D=\mathbb{E}_{\bm{x}\sim p_X}[&\lambda_{recon}MSE(\bm{x},\bm{x'})\\+&\lambda_{perc}LPIPS(\bm{x},\bm{x'})\\-&\lambda_{adv}\log D(\bm{x'},\bm{y})]. \end{split} \end{equation} To make the adversarial training feasible, the discriminator needs to judge whether its input sample comes from the true distribution of the data or is a generated fake. The discriminator is therefore optimized by a separate auxiliary loss, given as \begin{equation} \begin{split} L_{disc.}=&\mathbb{E}_{\bm{x}\sim p_X}[-\log(D(\bm{x},\bm{y}))]\\+&\mathbb{E}_{\bm{x}\sim p_X}[-\log(1-D(\bm{x'},\bm{y}))]. \end{split} \end{equation} It has been shown analytically that distortion is in a direct trade-off with perception \cite{blau2018}. GANs are a solution for achieving better perceptual quality at the cost of an acceptable amount of distortion. In \cite{blau2019}, a third term in this trade-off was introduced, namely the rate in the lossy compression scheme. More detailed experiments were conducted in \cite{mentzer2020} to prove this idea in practice. Therefore, it is expected behavior to obtain a lower Peak Signal to Noise Ratio (PSNR) value with an adversarially trained decoder than with a decoder trained merely on distortion metrics. \subsubsection{\textbf{Entropy Estimator Model}} \label{section:entropy-model} The performance of any learned image compression scheme heavily depends on how well it can estimate the true entropy of the bottleneck, so the objective is to minimize the cross entropy between the estimated and true distributions.
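As a concrete toy example of rate estimation under a Gaussian entropy model (a sketch with fixed parameters; our actual model predicts $\mu$ and $\sigma$ per latent element, conditioned on the hyper-prior): the probability of a quantized symbol is the Gaussian mass over its unit-width bin, and the estimated rate is the sum of $-\log_2$ probabilities.

```python
import math

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def estimated_rate_bits(y_hat, mu, sigma):
    # P(y_hat) is the Gaussian probability mass over the quantization bin
    # [y_hat - 0.5, y_hat + 0.5]; the estimated rate is sum(-log2 P).
    bits = 0.0
    for y, m, s in zip(y_hat, mu, sigma):
        p = gaussian_cdf(y + 0.5, m, s) - gaussian_cdf(y - 0.5, m, s)
        bits += -math.log2(max(p, 1e-12))
    return bits

# A symbol at the mode of a unit Gaussian costs about 1.4 bits.
r = estimated_rate_bits([0.0], [0.0], [1.0])
```

The better the predicted $(\mu, \sigma)$ match the true statistics of the latents, the smaller this cross-entropy estimate, and hence the shorter the bitstream produced by the entropy coder.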
To make the entropy estimation possible, several probability estimation methods have been proposed in the literature, including empirical histogram density estimation \cite{agustsson2017,theis2017}, piecewise linear models \cite{balle2017endtoend}, conditioning on a latent variable (hyper-prior) \cite{balle2018a} and context modelling based on autoregressive models \cite{minnen2018}. \begin{figure*}[tp] \centering \subfigure[Attention module with skip connection. RB denotes a residual block.]{\includegraphics[width=0.45\textwidth]{figs/residual_attention.pdf} \label{fig:residual-attention}} \hfill \subfigure[Window-based non-local attention module (WNLAM).]{\includegraphics[width=0.35\textwidth]{figs/non_local_attention.pdf}\label{fig:wnlam}} \vfill \subfigure[Window-based convolutional block attention module (WCBAM). A feature map of C channels with spatial dimensions of H$\times$W. $w$ is the window size to calculate channel attention on.]{\includegraphics[width=0.8\textwidth]{figs/wcbam.pdf} \label{fig:wcbam}} \caption{Attention module architecture.} \end{figure*} From a high-level overview, entropy estimation models can be divided into two main categories, Forward Adaptation (FA) and Backward Adaptation (BA) models. The former suffers from a low capacity to capture all the dependencies in the probability distribution of the latent code, while the latter's disadvantage is that the decoding process cannot be parallelized. Learned FA models \cite{balle2018a, balle2021} use only the information provided during the encoding of the image, while BA methods based on autoregressive models \cite{minnen2018} need information from the decoded message as well. To take advantage of both of these models \cite{minnen2020}, we define the conditional probability of the latent code as \begin{equation} P_{\bm{\hat{y}}|\bm{\hat{z}}}(\bm{\hat{y}}|\bm{\hat{z}})=\prod_i P(\bm{\hat{y}_i}|\bm{\hat{y}_{j<i}},\bm{\hat{z}};\bm{\theta_h}).
\end{equation} Conditioning on the quantized hyper-prior, i.e., $\bm{\hat{z}}$, as side-information is an example of FA, and conditioning on all previously decoded elements of the latent space, i.e., $\bm{\hat{y}_{j<i}}$, is an example of BA. BA performance has been improved in \cite{minnen2020} by letting the conditioning exist only between slices of channels in the bottleneck. In contrast to the spatial autoregressive modeling in \cite{minnen2018}, \cite{minnen2020} considers only the conditioning of the probabilities on the channels, and showed that by doing this the decoding process can be reasonably parallelized. We use the same approach as \cite{minnen2020} to estimate the entropy and minimize it during training. \subsection{\textbf{Attention Assisted Image Compression}} \begin{figure*}[tp] \centering \subfigure{\includegraphics[width=0.45\textwidth]{figs/combined_plot_psnr.pdf}} \hfill \subfigure{\includegraphics[width=0.45\textwidth]{figs/combined_plot_msssim.pdf}} \caption{Rate distortion curves aggregated over the test set described in Section \ref{sec:experiments:dataset}. On the left, PSNR is calculated from MSE by $10\log_{10}\frac{255^2}{MSE}$. On the right, MS-SSIM is reported in logarithmic scale by $-10\log(1-m)$ to show the differences better, in which $m$ is the MS-SSIM in the range of zero to one.} \label{fig:rd-cvrves} \end{figure*} When it comes to computer vision, deep convolutional neural networks are the de facto standard despite their poor performance at capturing long-range dependencies. CNNs have problems if they are required to simultaneously capture a few characteristics from non-neighboring pixels. It has been determined that the local nature of the kernel sliding over just a few pixels of the input image is the primary cause of this degradation \cite{ramachandran2019stand}. Efforts have been made to help CNNs capture more robust representations of the input image.
One naive solution is to make the network deeper, but other problems arise in training such networks, which has led to the introduction of deep residual networks (ResNet) \cite{he2016resnet}. Although increasing the number of network parameters generally leads to richer representations and better performance, it makes training such networks harder. Attention mechanisms have been proposed to address this issue of CNNs without making the network deeper. In \cite{wang2017residualattn}, the authors proposed a single module to be included between sequential convolutional layers, consisting of two branches: a \emph{trunk}, to process local features, and a \emph{mask}, to decide which of the local features in the trunk are more important to be passed to the next convolutional layer, as in Figure \ref{fig:residual-attention}. In contrast to local attention, \cite{wang2018} first discussed how non-local attention can be viewed as a special case of the non-local means algorithm, which was traditionally used as a method to denoise images \cite{buades2005}. The idea is to find similar pixels/patches in the image/feature map and replace each with a weighted sum over all the others, with higher weights for more similar ones. It can be inferred from \cite{wang2018} that Vision Transformers (ViT) \cite{dosovitskiy2021vit} are special cases of the non-local attention mechanism. The non-local attention block (Figure \ref{fig:wnlam}) helps the \emph{mask} branch efficiently learn the most informative parts of the features (in the \emph{trunk}) for the task at hand \cite{zhang2018residualnonlocal}. The authors of \cite{zhang2018residualnonlocal} also added a skip connection to help make the output feature maps richer. This skip connection prevents vanishing gradients as well. Another recently proposed simple tweak to incorporate attention in CNNs was introduced in \cite{woo2018cbam}.
It is an enhanced version of the Squeeze-and-Excitation network \cite{hu2018squeeze} that applies attention to the spatial and channel dimensions of the feature maps separately. This way of applying attention is simpler and computationally more efficient. The discussed attention mechanisms have been employed in deep learned neural compression networks as well. \cite{zhou2019attention} applied residual attention; then \cite{chen2021} improved on their work by adding non-local attention to the mask of the residual attention. To improve further, \cite{zou2022} applied non-local attention limited to small windows of the feature maps. This window-based attention attained better results in the compression area. Here we propose to use two kinds of attention mechanisms in a window-based manner. \subsubsection{\textbf{Window-based Non-Local Attention Module (WNLAM)}} The non-local attention block, as shown in Figure \ref{fig:wnlam}, is composed of a weighted sum (with weights calculated as a softmax) over a linearly transformed version of $\mathbf{x}$, i.e., $g(\mathbf{x})$. \begin{equation} \mathbf{y}_i = \frac{1}{\sum_{\forall j}e^{\theta(\mathbf{x}_i)^T\phi(\mathbf{x}_j)}}\sum_{\forall k}e^{\theta(\mathbf{x}_i)^T\phi(\mathbf{x}_k)}g(\mathbf{x}_k), \label{eq:non-local} \end{equation} where $g(.)$ is a linear transformation ($W_g$) implemented by a $1\times1$ convolution layer, defined as $g(\mathbf{x_k})=W_g\mathbf{x_k}$. The weights of the sum in Eq. \ref{eq:non-local} are calculated by a measure of similarity in the embedding space of the input, i.e., $\theta(\mathbf{x}_i)=W_\theta\mathbf{x_i}$ and $\phi(\mathbf{x}_k)=W_\phi\mathbf{x_k}$. As the final operation in non-local attention, $\mathbf{z}_i$ is calculated by a linear transformation ($W_z$) added to the original $\mathbf{x}_i$ as follows \begin{equation} \mathbf{z}_i = W_z\mathbf{y}_i+\mathbf{x}_i.
\end{equation} In image compression, restoring edges and high-frequency content is more important than representing global features in the latent representation. Consequently, the naive non-local attention mechanism performs worse than local attention, which is able to capture local redundancies and preserve details in the reconstructed image \cite{zou2022}. \subsubsection{\textbf{Window-based Convolutional Block Attention Module (WCBAM)}} A simple-to-implement kind of attention in CNNs is the convolutional block attention module (CBAM), which has shown great benefit in classification tasks \cite{woo2018cbam}. It carries out two attention mechanisms. First, the channel attention ($CA$) guides the network to consider only the channels with higher importance for the desired task. Second, the spatial attention ($SA$) tells the network where to pay more attention. Here we propose to utilize this attention module in a window-based manner. Instead of globally considering the whole spatial dimensions of each channel, we focus only on cropped windows of size $w$, as shown in Figure \ref{fig:wcbam}. Applying the WCBAM mechanism to the input features $X$ can be summarized as \begin{equation} \begin{aligned} X_{CA} &= CA_w \odot X,\\ X_{CA,SA} &= SA \odot X_{CA}, \end{aligned} \end{equation} where $CA_w$ reweighs the channels over each window. Then $SA$ is multiplied on each refined channel to highlight the important spatial content. The window-based channel attention is calculated by passing the average and max pooled features through a shared fully connected network (F), as in Eq. \ref{eq:CAw} \begin{equation} CA_w = sigmoid(F(Avg(X_w))+F(Max(X_w))). \label{eq:CAw} \end{equation} Afterwards, the spatial attention weights ($SA$) are derived by concatenating the average and max pooled maps and passing them through a convolutional layer, as \begin{equation} SA = sigmoid(Conv([Avg(X_{CA}),Max(X_{CA})])).
\end{equation} WCBAM helps the network capture global dependencies that may be needed when transforming the image from pixel space to feature space, especially those that the window-based non-local attention is incapable of capturing. \subsubsection*{\textbf{Transformers as Attention Modules}}The superiority of models based on Transformers, which are a special kind of non-local attention mechanism, has recently been proven \cite{dosovitskiy2021vit, liu2021}. Although Transformers have shown great benefit in image classification and object detection tasks, their naive application in image compression networks has failed \cite{zou2022}. The whole purpose of Transformers is to capture long-range dependencies in an image, while the ultimate goal in image compression is to capture dependencies in order to summarize them efficiently in the latent code. \section{\textbf{Experiments}}\label{sec:experiments} \subsection{\textbf{Dataset}} \label{sec:experiments:dataset} SDOML includes images of the Sun at wavelengths 94, 131, 171, 193, 211, 304, 335, 1600, 1700 \mbox{\normalfont\AA}\/ at a cadence of 6 minutes. We downsampled the images to a cadence of 1 hour to avoid dependency between training samples. In addition, to prevent biases of the images with respect to solar variations at different stages of the solar cycle, we followed the approach proposed by \cite{salvatelli2019} of dividing the dataset based on the month in which the images were taken. Images from January to August are chosen for training, and September to December are reserved for testing. The results reported in this section are all based on this portion of the dataset. \subsection{\textbf{Implementation Details}} \label{sec:experiments:implementation} As the nonlinearity in our neural network, we utilize the computationally efficient \cite{johnston2019gdnfast} version of Generalized Divisive Normalization (GDN) \cite{balle2016gdn}.
As a result of GDN's local normalization, statistical dependencies in the feature maps are reduced. By exploiting GDN instead of more conventional nonlinearities such as ReLU, the feature maps are decorrelated. Ideally, scalar quantization of a set of decorrelated features approaches the compression performance of parametric vector quantization \cite{balle2021}. During the evaluation phase, entropy coding of the latent integer values was realized by asymmetric numeral systems \cite{duda2013asymmetric}. Seven models have been trained, one for each $\lambda\in\displaystyle\{0.0015, 0.0035, 0.0070, 0.0125, 0.0250, 0.0410, 0.0550\}$ governing the rate-distortion trade-off as in Eq. \ref{eq:rate-distortion}, with 100 epochs of training per model. We have used the Adam \cite{kingma15adam} optimizer on batches of size 16 consisting of $256\times256$ patches randomly cropped out of the original $512\times512$ images. The initial value of the learning rate is set to $10^{-4}$ and annealed during training to $1.2\times10^{-6}$. \begin{figure}[tp] \centering \includegraphics[width=0.8\linewidth]{figs/combined_plot_lpips.pdf} \caption{Rate-distortion curve. Distortion is measured by the LPIPS metric (lower is better) described in section \ref{sec:compression:objective}. As can be seen, the GAN's performance in generating high-quality images can be quantified by this metric.} \label{fig:rd-lpips} \end{figure} All common reconstruction losses have been criticized for not matching human perceptual vision \cite{zhao2017losses}. The L1 loss pays more attention to edges and high-frequency areas of the image, the L2 loss results in blurry reconstructions, and SSIM/MS-SSIM losses tend not to reconstruct minute details such as text in images. Training our autoencoder based on MSE and LPIPS results in outperforming even the state-of-the-art hand-engineered codec, i.e., BPG \cite{bpg}, as shown in Figure \ref{fig:rd-cvrves}.
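A schematic NumPy sketch of one evaluation of such a rate-distortion loss follows; the empirical entropy of the rounded latents serves here as a stand-in for the learned entropy model, and the $R + \lambda D$ placement of the trade-off weight is assumed (the exact form is given in Eq. \ref{eq:rate-distortion}):

```python
import numpy as np

def rd_loss(latent, x, x_hat, lam):
    """Schematic rate-distortion loss L = R + lam * D.

    Rate R: empirical entropy (bits per latent element) of the rounded
    latent values -- a stand-in for a learned entropy model.
    Distortion D: mean squared error between image and reconstruction.
    """
    q = np.round(latent).astype(int)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))          # bits per latent element
    distortion = np.mean((x - x_hat) ** 2)  # MSE
    return rate + lam * distortion

# Toy tensors standing in for an image, its latent, and a reconstruction.
rng = np.random.default_rng(1)
x = rng.uniform(size=(16, 16))
latent = 4.0 * rng.normal(size=(8, 8))
x_hat = x + 0.01 * rng.normal(size=(16, 16))
loss = rd_loss(latent, x, x_hat, lam=0.0125)
print(loss)
```

Sweeping `lam` over the seven values above trades bits for fidelity: larger weights penalize distortion more heavily and yield higher-rate, higher-quality models.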
The generally lower performance of our GAN network is a common issue, addressed in \cite{blau2018}: PSNR and MS-SSIM are unable to capture the perceptual quality of the images generated by a GAN. The perceptual quality of the GAN network's reconstructions is discussed in section \ref{sec:experiments:ablation}, measured by perceptual metrics. \subsection{\textbf{Ablation Study}} \label{sec:experiments:ablation} To investigate how much the attention modules contribute to the performance of our neural compressor, we have trained three separate networks for each of the seven targeted bit-rates discussed in section \ref{sec:experiments:implementation}. The first architecture has only the WNLAM module (Figure \ref{fig:rd-cvrves}), whose performance in terms of PSNR and MS-SSIM is improved by adding the WCBAM attention module. As emphasized in Figure \ref{fig:visual-comparison}, the adversarially trained decoder results in better visual quality of the reconstructed image than the autoencoder trained only with attention mechanisms. Conventional metrics such as PSNR and MS-SSIM are unable to capture the higher perceptual quality of the GAN-reconstructed images. We empirically found that LPIPS can show the merit of the adversarially trained network. As shown in Figure \ref{fig:rd-lpips}, LPIPS values correspond to human judgments of the quality of the reconstructed images. \section{\textbf{Conclusion}}\label{sec:conclusion} In this work, we have shown how an effective image compression scheme based on trainable neural networks can be utilized for ad-hoc applications such as images from NASA's SDO mission. In addition, we explored the effectiveness of attention mechanisms in an adversarially trained neural network to improve compression performance in terms of the rate-distortion-perception trade-off. \bibliographystyle{IEEEtran}
\section{Introduction} For applications in fields as diverse as chemical and biological imaging, material science, telecommunication, semiconductor and superconductor research, there is great interest in having a source of intense pulses of terahertz radiation. Laser-based sources of such radiation~\cite{Auston:84,You:93} are capable of generating several-cycle pulses with frequency over the range 10--70~THz and energy of 20~$\mu$J~\cite{Sell:08}. In beam-based sources utilizing short, relativistic electron bunches~\cite{Nakazato:89,Carr:02}, an electron bunch impinges on a thin metallic foil and generates coherent transition radiation (CTR). An implementation of this method at the Linac Coherent Light Source (LCLS) has obtained single-cycle pulses of radiation that are broad-band, centered on 10~THz, and contain $>0.1$~mJ of energy~\cite{Daranciang:11}. Another beam-based method generates THz radiation by passing a bunch through a metallic pipe with a dielectric layer. As reported in~\cite{Cook:09}, this method was used to generate narrow-band pulses with frequency 0.4~THz and energy 10~$\mu$J. It has been noted in the past, in the study of wall-roughness impedance~\cite{bane99n,bane00st}, that a metallic pipe with corrugated walls supports propagation of a high-frequency mode that is in resonance with a relativistic beam. This mode can be excited by a beam whose length is a fraction of the wavelength. Similar to the dielectric-layer method, a metallic pipe with corrugated walls can serve as a source of terahertz radiation~\cite{terahertz12}. In this paper we study another way of exciting the resonant mode in a metallic pipe with corrugated walls---via the mechanism of the free electron laser (FEL) instability. This mechanism works if the bunch length is much longer than the wavelength of the radiation. While our focus will be on a metallic pipe with corrugated walls, most of our results are also applicable to round dielectric-layer geometries.
The connection between the electrodynamic properties of the two types of structures can be found in Ref.~\cite{stupakov12b}. Our analysis is carried out for relativistic electron beams with the Lorentz factor $\gamma \gg 1$. However, in some places we will keep small terms on the order of $1/\gamma^2$ to make our results valid for relatively moderate values of $\gamma \sim 5$--$10$. In particular, we will take into account that the particles' velocity $v$ differs from the speed of light $c$ (in contrast to the approximation $v=c$ typically made in~\cite{bane99n,bane00st,terahertz12}). We will see that the FEL mechanism becomes much less efficient in the limit $\gamma \to \infty$, so the moderate values of $\gamma$ are of particular interest. This paper is organized as follows. In Section~\ref{sec:2} we discuss the resonant frequency, the group velocity and the loss factor of the resonant mode whose phase velocity is equal to the velocity of the particle. Their derivations are given in Appendices~\ref{app:1} and~\ref{app:2}. In Section~\ref{sec:3} we find the gain length and an estimate of the saturated power of an FEL in which a relativistic beam excites the resonant mode. In Section~\ref{sec:4} we consider a practical numerical example of such an FEL. In Section~\ref{sec:5} we discuss some of the effects that are not included in our analysis. \section{Wake in a round pipe with corrugated walls}\label{sec:2} We consider a round metallic pipe with inner radius $a$. \begin{figure}[htb] \centering \includegraphics[height=0.4\textwidth, trim=0mm 0mm 0mm 0mm, clip]{corrug_pipe_1.pdf} \hspace{2mm} \includegraphics[height=0.4\textwidth, trim=0mm 0mm 0mm 0mm, clip]{corrug_pipe_2.pdf} \caption{Dimensions of a round corrugated pipe. An electron beam propagates along the axis of the pipe.
The beam position $s$ in the pipe is measured along the axis with $s=0$ corresponding to the entrance to the pipe.} \label{fig:1} \end{figure} Small rectangular corrugations have depth $h$, period $p$ and gap $g$, as shown in Fig.~\ref{fig:1}. In the case when $h,p\ll a$ and $h\gtrsim p$, the fundamental resonant mode with the phase velocity equal to the speed of light, $v_{ph}=c$, has the frequency $\omega_0 =ck_0$ and the group velocity $v_{g0}$, where~\cite{bane99n,bane00st} \begin{align}\label{eq:1} k_0 = \left( \frac{2p}{agh} \right)^{1/2} ,\qquad 1 - \frac{v_{g0}}{c} = \frac{4gh}{ap} . \end{align} Such a mode will be excited by an ultra-relativistic particle moving along the axis of the pipe with velocity $v=c$. Note that the high-frequency nature of the resonant mode, $k_0\gg 1/a$, follows from the assumption $h,p\ll a$. As explained in the Introduction, in our analysis we would like to take into account the fact that the phase velocity of the resonant mode is smaller than the speed of light, $v_{ph} = v<c$. Calculation of the frequency and the group velocity of the resonant mode for this case is carried out in Appendix~\ref{app:1}. As follows from this calculation, the deviation of the resonant frequency and the group velocity from Eqs.~\eqref{eq:1} is controlled by the parameter \begin{align}\label{eq:2} u = \frac{ak_0}{\gamma} \end{align} with $k_0$ defined by~\eqref{eq:1}. The plot of the frequency $\omega_r$ of the resonant mode versus the parameter $u$ is shown in Fig.~\ref{fig:2}. \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth, trim=0mm 0mm 0mm 0mm, clip]{freq_vs_gamma.pdf} \caption{Plot of the normalized frequency $\omega_r$ of the resonant wave as a function of the parameter $a\omega_0/c\gamma$.} \label{fig:2} \end{figure} We see that decreasing the beam energy $\gamma$ increases the frequency $\omega_r$ of the mode.
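For orientation, the $\gamma\to\infty$ expressions~\eqref{eq:1} are straightforward to evaluate numerically. A short Python sketch, using for illustration the corrugation dimensions later adopted in Table~\ref{tab:1}; the finite-$\gamma$ corrections discussed in this section shift these values away from the limits computed here:

```python
import math

c = 2.998e8  # speed of light, m/s

def resonant_mode(a, h, p, g):
    """Ultra-relativistic (gamma -> infinity) resonant-mode parameters
    of a corrugated pipe: wave number k0 = sqrt(2p/(a g h)) and
    group-velocity deficit 1 - v_g0/c = 4gh/(ap), valid for h, p << a."""
    k0 = math.sqrt(2 * p / (a * g * h))
    dbeta_g0 = 4 * g * h / (a * p)
    return k0, dbeta_g0

# Corrugation dimensions of the numerical example (SI units):
# a = 2 mm, h = 50 um, p = 40 um, g = 10 um.
a, h, p, g = 2e-3, 50e-6, 40e-6, 10e-6
k0, dbeta_g0 = resonant_mode(a, h, p, g)
f0 = c * k0 / (2 * math.pi)  # resonant frequency in the gamma->inf limit
print(f"k0 = {k0:.0f} 1/m, f0 = {f0 / 1e12:.3f} THz, 1 - v_g0/c = {dbeta_g0}")
```

Note that the finite-$\gamma$ values quoted in Section~\ref{sec:4} (resonant frequency $0.34$~THz, $\Delta\beta_g = 0.053$) differ from these $\gamma\to\infty$ limits, consistent with Figs.~\ref{fig:2} and~\ref{fig:3}.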
Note that because $k_0a\gg 1$ the deviation from the ultra-relativistic results~\eqref{eq:1} can become important even for large values of $\gamma$, $\gamma\sim k_0a$. The group velocity of the resonant mode for $u\sim 1$ also deviates from the $\gamma\to\infty$ limit given by~\eqref{eq:1}. Calculations of the group velocity are given in Appendix~\ref{app:1} and the plot of $\Delta\beta_g = 1-v_g/c$ versus $u$ is shown in Fig.~\ref{fig:3}. \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth, trim=0mm 0mm 0mm 0mm, clip]{group_velocity} \caption{Plot of the ratio $\Delta\beta_g/\Delta\beta_{g0}$ (with $\Delta\beta_{g0} = 1-v_{g0}/c$ defined in~\eqref{eq:1}) versus the parameter $a\omega_0/c\gamma$. } \label{fig:3} \end{figure} A relativistic point charge entering the pipe at the longitudinal coordinate $s=0$ and moving along the pipe axis excites the resonant mode and generates a longitudinal wakefield. The standard description of this process in accelerator physics is based on the notion of the (longitudinal) wake $w(z)$ that depends on the distance between the source and test charges measured in the direction of motion \cite{chao93}. In the case of the resonant mode, this wake is localized behind the driving charge and is equal to $w(z) = 2\varkappa \cos(\omega_r z/c)$, where $\varkappa$ is the loss factor per unit length (see, e.g.,~\cite{stupakov12b,stupakov_bane_dechirper12}). For our purposes, it is important to modify this wake taking into account that at any given distance $s$ from the entrance to the pipe, the wake extends behind the particle over a finite length; this makes the wake a function of two variables, $w(s,z)$. The distance over which the wake extends behind the charge can be obtained from a simple consideration: the wake propagates with the group velocity $v_g$, and when the charge travels a distance $s$ with speed $v$ the wake emitted at $s=0$ lags behind the charge by the distance $\Delta z = s(1-v_g/v)$ (we assume $v_g<v$).
Mathematically, this is expressed by the following equation: \begin{align}\label{eq:3} w(s,z) = \left\{ \begin{array} {rl} 2\varkappa \cos(\omega_r z/c),& \mathrm{ for }\,\, -s(1-v_g/v) < z < 0\\ \varkappa,& \mathrm{ for }\,\, z = 0\\ 0,& \mathrm{otherwise} \end{array} \right. . \end{align} The sign of the wake~\eqref{eq:3} is such that a positive wake corresponds to an energy loss, and a negative wake means an energy gain. Note that the wake is only non-zero for negative $z$, that is, behind the source charge. The loss factor $\varkappa_0$ in the limit $\gamma\to\infty$ is given by~\cite{stupakov_bane_dechirper12} \begin{align}\label{eq:4} \varkappa_0 = \frac{2}{a^2} . \end{align} With account of the finite, but large, value of $\gamma$, the loss factor is derived in Appendix~\ref{app:2}. It is plotted in Fig.~\ref{fig:4}, again as a function of the parameter $u$. \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth, trim=0mm 0mm 0mm 0mm, clip]{loss_factor.pdf} \caption{Plot of the normalized loss factor $\varkappa/\varkappa_0$ versus the parameter $u = ak_0/\gamma$.} \label{fig:4} \end{figure} We see that the interaction of the mode with the beam decreases when $\gamma$ becomes small. This happens because the relativistically compressed Coulomb field of the point charge has a spot size on the wall of the pipe on the order of $a/\gamma$, which for $u\sim 1$ is comparable with the inverse wave number $c/\omega_0$ of the wake. For $u\gtrsim 1$ the frequency content of the Coulomb field at wavenumbers $\sim\omega_0/c$ gets depleted, and the excitation of the resonant mode is suppressed. \section{1D FEL equations}\label{sec:3} We now consider an electron beam of energy $\gamma mc^2$ with a transverse size much smaller than the pipe radius $a$ and with a uniform longitudinal current distribution, propagating along a pipe with corrugated walls.
Such a beam will drive a resonant mode in the pipe, and if the pipe is long enough, it will become modulated and micro-bunched through the interaction with the mode. The mechanism of this interaction is exactly the same as in the free electron laser instability. In this section we describe an approach to calculating this instability, following the method developed in Ref.~\cite{PAC03stupakov03kr}. The actual derivation is presented in Appendix~\ref{app:3}. The crucial step in the derivation is a modification of the standard Vlasov equation that describes the evolution of the distribution function of the beam. This modification takes into account retardation effects associated with the emission of the wakefield. The distribution function of the beam $f(\eta,z,s)$ is a function of the relative energy deviation $\eta = \Delta \gamma/\gamma_0$, with $\gamma_0$ corresponding to the average beam energy, the longitudinal position $z$ inside the bunch, and the distance $s$ from the entrance to the pipe. The evolution of $f$ is described by the Vlasov equation \begin{align}\label{eq:5} & \frac{\partial f}{\partial s} - \alpha\eta \frac{\partial f}{\partial z} - \frac{r_0}{\gamma} \frac{\partial f}{\partial \eta} \int_{-\infty}^\infty dz' \int_{-\infty}^\infty d\eta' w(s,z-z') f\left( \eta',z',s-v\frac{z'-z}{v-v_g} \right) = 0 , \end{align} where $\alpha = -\gamma^{-2}$ is the slip factor per unit length and $r_0 = e^2/mc^2$ is the classical electron radius. The distribution function $f$ is normalized so that $\int f\,d\eta$ gives the number of particles per unit length. The third argument of $f$ in the integrand of~\eqref{eq:5} takes into account the retardation: the wake that is generated by a beam slice at coordinate $z'$ slips behind the slice with the velocity $v-v_g$ relative to the beam, and if it reaches the point $z$ when the beam arrives at location $s$, it should have been emitted at position $s-v(z'-z)/(v-v_g)$~\cite{PAC03stupakov03kr}.
To establish a closer analogy with the standard FEL theory, it is convenient to introduce a new variable $k_w$ (an analog of the FEL undulator wave number) defined by the equation \begin{align}\label{eq:6} \frac{k_0}{k_w} = \frac{v}{v-v_g} \approx \frac{1}{\Delta \beta_g - \Delta \beta_{ph}} , \end{align} where $\Delta \beta_g = 1 - v_g/c$ and $\Delta \beta_{ph} = 1 - v_{ph}/c$. In the ultra-relativistic limit $\gamma\to\infty$, using~\eqref{eq:1} we find \begin{align}\label{eq:7} k_w = k_{w0} \equiv 4 \left( \frac{2gh}{a^3p} \right)^{1/2} . \end{align} Eq.~\eqref{eq:5} is linearized assuming a small perturbation of the beam equilibrium $f_0(\eta)$, $f=f_0(\eta) + f_1(\eta, z, s)$, with $|f_1| \ll f_0$. In this analysis we assume a coasting beam with the equilibrium distribution function $f_0(\eta) = n_0 F(\eta)$, where $n_0$ is the number of particles per unit length of the beam. We seek the perturbation in the form $f_1\propto e^{ikz+q k_w s}$, where $k$ is the wavenumber and $q$ is the dimensionless propagation constant whose real part is responsible for the exponential growth (or decay, if $\Re q<0$) of the perturbation with $s$. The main result of the linear instability analysis is the dispersion relation that defines the propagation constant $q$ as a function of the frequency detuning $\nu = (ck - \omega_r)/\omega_r$. This dispersion relation is derived in Appendix~\ref{app:3} (it follows closely the derivation of Ref.~\cite{PAC03stupakov03kr}) and is given by~\eqref{eq:53}, \begin{align}\label{eq:8} \frac{1}{2} \frac{(2\rho)^3}{q - i\nu} \int_{-\infty}^\infty d\eta \frac{F'(\eta)}{q - i\alpha\eta({\omega_r}/{ck_w})} = 1 \,, \end{align} where the parameter $\rho$ (an analog of the Pierce parameter \cite{bonifacio84pn}) is \begin{align}\label{eq:9} (2\rho)^3 = \frac{2n_0\varkappa c r_0}{k_w\gamma\omega_r} .
\end{align} Except for a slight notational difference, Eqs.~\eqref{eq:8} and~\eqref{eq:9} coincide with the standard equations of the 1D FEL theory~\cite{huang07k}. For a cold beam, $F(\eta) = \delta(\eta)$ (here $\delta$ stands for the delta-function), and from~\eqref{eq:8} we obtain \begin{align}\label{eq:10} q^2(q - i\nu) = - \frac{i \alpha\omega_r}{2ck_w} (2\rho)^3 . \end{align} If follows from this equation that the fastest growth of the instability is achieved at zero detuning. Assuming $\nu=0$ we rewrite~\eqref{eq:10} using the definition~\eqref{eq:9} and $\alpha = -1/\gamma^2$, \begin{align}\label{eq:11} q^3 = i \frac{ n_0\kappa r_0}{k_w^2\gamma^3} . \end{align} Among the three roots of this equation, there is one, which we denote $q_1$, with a positive real part. Introducing the power gain length $\ell = (2\Re q_1 k_w)^{-1}$, and using $n_0r_0 = I/I_A$, where $I$ is the beam current and $I_A = 17.5$ kA is the Alfven current, we obtain \begin{align}\label{eq:12} \ell = \frac{1}{\sqrt{3}} \gamma \left( {\kappa k_w} \frac{I}{I_A} \right)^{-1/3} . \end{align} In addition to the gain length, an important characteristic of the described FEL is the radiation power at saturation. Here we can use the result of the standard FEL theory, that the saturation occurs at the distance equal about 10-20 gain length, and the saturation power $P_{\mathrm{sat}}$ is \begin{align}\label{eq:13} P_{\mathrm{sat}} \approx \rho\gamma mc^2 \frac{I}{e} . \end{align} In the next section we will consider a practical example of an FEL based on a pipe with corrugated walls and evaluate $\ell$ and $P_{\mathrm{sat}}$ for that example. \section{Numerical example}\label{sec:4} To give an illustrative example of a practical device we consider in this section a pipe with corrugated walls with the parameters close to those accepted in Ref.~\cite{terahertz12}. 
Noting from Eq.~\eqref{eq:12} that the gain length is proportional to the beam energy, and having in mind a compact device, we choose a relatively small beam energy of 5 MeV. The beam current is 100 A. The pipe and corrugation dimensions, together with the beam parameters, are summarized in Table~\ref{tab:1}. \begin{table}[hbt] \centering \caption{Corrugation and beam parameters} \begin{tabular}{|l|c|}\hline\hline Pipe radius $a$, mm & 2\\ Depth $h$, $\mu$m & 50\\ Period $p$, $\mu$m & 40\\ Gap $g$, $\mu$m & 10\\ Bunch charge, nC & 1\\ Energy, MeV & 5\\ Bunch length, ps & 10\\ \hline\hline \end{tabular} \label{tab:1} \end{table} Note that the parameter $u$ defined by~\eqref{eq:2} is $u=1.3$, and hence the deviation from the ultra-relativistic limit (corresponding to $u\ll 1$) is expected to be noticeable. From Eq.~\eqref{eq:26} we find that the frequency $\omega_r/2\pi$ of the resonant mode is $0.34$ THz. Using the results of Appendices~\ref{app:1} and~\ref{app:2} we find the group velocity of the resonant mode, $\Delta \beta_g = 0.053$, and the loss factor $\varkappa = 0.6(2/a^2)= 2.7$ kV/(pC m), and calculate the Pierce parameter $\rho=0.013$. This gives the gain length $\ell \approx 7$ cm and the saturation power $P_{\mathrm{sat}} \approx 6.7$ MW. It is interesting to point out that for a given pipe radius and corrugation, there is an optimal value of the beam energy that minimizes the gain length. This follows from Eq.~\eqref{eq:12}, which shows that $\ell$ increases with $\gamma$ due to the explicit dependence $\ell\propto \gamma$, but $\ell$ also increases when $\gamma$ becomes too small due to the decrease of $\varkappa$ shown in Fig.~\ref{fig:4}. As numerical minimization shows, the minimal value of $\ell$ is achieved for $u=1.9$ and is given by \begin{align}\label{eq:15} \ell = 0.74 \frac{a^2k_0}{2\sqrt{3}} \left( \frac{I_A}{I} \right)^{1/3} \left( \frac{ap}{2hg} \right)^{1/6} .
\end{align} For the parameter considered above this gives the optimal value of the beam energy: $\gamma = 6.6$ with the corresponding gain length $\ell = 5.5$ cm. \section{Discussion}\label{sec:5} There are several issues of practical importance that were omitted in our analysis in preceding sections. Here will briefly discuss some of them leaving a more detailed study for a separate publication. First, we used an approximation of a coasting beam, without taking into account the finite length of the bunch. This approximation assumes that the bunch length is much longer than the cooperation length of the instability $l_\mathrm{c}$ that is defined as the distance at which the point charge wake extends within the bunch when the particle travels one gain length $\ell$. Using Eq.~\eqref{eq:3} we evaluate the coherence length as $l_\mathrm{c}\sim \ell(1-v_g/v)$. For the parameters considered in Section~\ref{sec:4} we find $l_\mathrm{c}\approx 3.3$ mm, or 11 ps. This is comparable with the bunch length of 10 ps, and hence the numerical estimates of the previous section should only be considered as crude estimates of the expected parameters of the FEL. A more accurate prediction for the selected set of parameters require computer simulations. Second, we neglected the resistive wall losses that would cause the resonant mode to decay when it propagates in the pipe. The effect of the wall losses on the FEL instability can be estimated if we compare the gain length with the decay distance $l_\mathrm{d}$ of the resonant mode. An analytical formula for $l_\mathrm{d}$ is given in Ref.~\cite{terahertz12}; using the formula we estimate that for our parameters $l_\mathrm{d}=66$ cm, which is much larger than the gain length calculated in the previous section. Hence, we conclude that the resistive wall effect is small. Finally, we mention a deleterious effect of the transverse wake, that might cause the beam break-up instability. 
It is known that in a round pipe with corrugated walls, in addition to the resonant longitudinal wake, there is also a resonant dipole mode that creates a transverse wakefield. In the limit $\gamma\to\infty$, in a round pipe, the transverse mode has the same frequency as the longitudinal one. To mitigate the effect of the break-up instability, one has to apply strong external transverse focusing to the beam and minimize the initial beam offset at the entrance to the pipe. It may also be advantageous to change the cross section of the pipe from round to rectangular or elliptical, which will likely detune the transverse mode frequency from the longitudinal one. A more detailed study of the transverse instability is necessary. \section{Acknowledgments} The author thanks M. Zolotorev, K. Bane, and I. Kotelnikov for useful discussions. This work was supported by Department of Energy contract DE-AC03-76SF00515.
\section{Introduction} \label{sec_intro} In the setting of decentralized optimization, a connected network of agents, or nodes, seeks to minimize a common global objective function, the components of which are distributed locally across all the agents. To jointly optimize the global objective, nodes must collaborate with their neighbors by successively sharing information about their locally measurable objective function components. Decentralized optimization has proven effective in contexts where information is gathered by different nodes of a network, such as decentralized control \cite{Bullo2009,Cao2013-TII,LopesEtal8}, wireless systems \cite{Ribeiro10,scutari2014decomposition}, sensor networks \cite{Schizas2008-1,KhanEtal10,cRabbatNowak04}, and large scale machine learning \cite{bekkerman2011scaling,Cevher2014}. Perhaps the most common and well-studied problem in decentralized optimization is the consensus optimization problem. Here, the common minimizer of the global objective is found locally at each node through the use of local copies of the decision variable. Each node minimizes its local objective using its local copy while simultaneously seeking agreement, or \emph{consensus}, with its neighbors. This approach allows for a fundamentally decentralized manner of optimizing a global function that relies only on the ability to exchange information with neighbors. There are many methods that solve the consensus problem, differing largely in how the consensus condition is enforced.
Popular techniques include the use of additional penalties for violating consensus in the objective function \cite{nedic2009, YuanQing, Jakovetic2014-1,shi2015extra,mokhtari2016dsa,mokhtari2017network}. Alternatively, the consensus condition can be formulated as an explicit constraint, which can be optimized directly in the dual domain {\cite{cRabbatNowak04,Schizas2008-1,makhdoumi2016convergence,bianchi2014stochastic,zargham2014accelerated}}. A promising extension of dual-based methods is the class of ``primal-dual'' methods, which iteratively find solutions that incrementally get closer to both optimality and consensus \cite{chang2014distributed,mokhtari2016decentralized,li2017primal}. These methods are beneficial in combining the low computational cost of penalty-based methods with the exactness of dual-based methods. In general, standard consensus optimization techniques that rely only on the first order information contained in the gradient, such as gradient descent, suffer from slow convergence rates. These slow convergence rates are particularly apparent in problems that are ill-conditioned, or in other words have a large spread in the eigenvalues of the function's Hessian matrix. In centralized optimization, ill-conditioning is commonly corrected by incorporating second order Hessian information into the descent computation using the Newton step. While this technique cannot be used directly in distributed optimization due to the non-sparsity of the Hessian inverse, there exist ways of using second order information to approximate the Newton step in distributed settings. This has been done for consensus optimization problems reformulated in both penalty-based \cite{mokhtari2017network,mansoori2017superlinearly} and dual-based \cite{zargham2014accelerated} forms, as well as in the more recent primal-dual methods \cite{mokhtari2015dqm,mokhtari2016decentralized}.
These approximate Newton methods exhibit faster convergence relative to their corresponding first order methods. Despite the advances in distributed Newton-based methods, there are many cases in which the exact Hessian information is either difficult or computationally expensive to compute. The centralized alternative to Newton's method comes in the form of quasi-Newton methods, which use gradients to produce a curvature estimate in lieu of the Hessian inverse \cite{dennis1974characterization,powell1976some}. In the distributed setting, the commonly used Broyden--Fletcher--Goldfarb--Shanno (BFGS) quasi-Newton method has been adapted for both penalty-based and dual-based formulations \cite{eisen2017decentralized}. This method improves convergence but suffers from many of the same issues as first-order penalty-based and dual-based methods regarding accuracy and computational burden. In this paper, we develop a novel ``primal-dual'' quasi-Newton method that achieves state-of-the-art convergence rates to exact solutions at a lower computational cost relative to the dual-based alternative. This is done by approximating an internal optimization problem present in dual methods with a fully distributed quasi-Newton update. We further employ a distributed dual quasi-Newton update to accelerate the dual ascent relative to dual gradient updates. In this way, we obtain a method with a linear convergence rate to the exact solution and the computational complexity of penalty-based methods, all without requiring the computation of Hessian information. The paper begins with a formal statement of the consensus optimization problem and the introduction of the augmented Lagrangian (Section \ref{sec_problem_formulation}). For solving the consensus problem in a distributed manner, a set of standard dual methods (Section \ref{sec_dual_methods}) and augmented Lagrangian methods (Section \ref{sec_aug_lag}) are discussed.
The former suffer from slow convergence rates, and the latter suffer from either the inability to distribute the computations or the necessity of solving an internal minimization problem at every step. All of these issues are bypassed by substituting quasi-Newton updates for the standard primal and dual updates in these dual methods. This provides the basis of the proposed PD-QN method (Section \ref{sec_dbfgs}). Separate quasi-Newton updates are derived for both the primal update (Section \ref{sec_primal_update}) and the dual update (Section \ref{sec_dual_update}), each of which is designed to be distributedly computable while retaining desirable properties of traditional quasi-Newton methods. Convergence properties are then established (Section \ref{sec_convergence}). Under standard assumptions of smoothness and strong convexity, we demonstrate a linear convergence rate to the exact solution of the consensus problem. We close the paper with numerical results comparing the performance of PD-QN to first and second order methods on various consensus problems of practical interest (Section \ref{sec_numerical_results}). \section{Problem Formulation} \label{sec_problem_formulation} We consider a distributed system of $n$ nodes connected by an undirected communication graph $\ccalG=(\ccalV,\ccalE)$ with nodes $\ccalV=\{1,\dots,n\}$ and $m$ edges $\ccalE=\{(i,j)\ |\ i\ \text{and}\ j \ \text{are connected} \}$. Define the set $n_i$ as the neighborhood of node $i$ including $i$ itself, i.e., $n_i=\{j\ |\ j=i \lor (i,j)\in\ccalE\}$, and the neighborhood size $m_i := |n_i|$. Each node $i$ has access to a local strongly convex cost function $f_i: \reals^p \rightarrow \reals$, and the goal is to find the optimal variable $\tbx^* \in \reals^p$ that minimizes the aggregate of all local cost functions $\sum_i f_i$ with distributed computations.
By distributed computations, we mean in particular that each node is able to obtain the common minimizer $\tbx^*$ itself through computations performed locally and through information exchanges with its neighbors in $\ccalE$. To solve this locally, we consider the consensus formulation, in which each node $i$ stores and maintains a local variable $\bbx_i \in \reals^p$ and seeks to minimize its local cost $f_i(\bbx_i)$ while satisfying a consensus constraint with all neighbors. More specifically, consider the global variable $\bbx=[\bbx_1;\dots;\bbx_n]\in\reals^{np}$ and the resulting optimization problem of interest \begin{align}\label{eq_primal_problem} \bbx^* \ &:= \ \argmin_{\bbx\in \reals^{np}} \ f(\bbx) = \sum_{i=1}^n f_i(\bbx_i) \\ &\quad \st \ (\bbI-\bbZ)^{1/2}\bbx=\bb0, \nonumber \end{align} where the matrix $\bbZ\in \reals^{np \times np}$ is chosen so that the feasible variables in \eqref{eq_primal_problem} satisfy the consensus constraint $\bbx_i=\bbx_j$ for all $i,j$. A customary choice of a matrix $\bbZ$ with this property is the Kronecker product $\bbZ \coloneqq \bbW \otimes \bbI_p$ of a weight matrix $\bbW\in\reals^{n\times n}$ and the identity matrix $\bbI_p\in\reals^{p \times p}$. The elements of the weight matrix satisfy $w_{ij}> 0$ if $(i,j)\in\ccalE$ and $w_{ij}= 0$ otherwise, and the weight matrix $\bbW\in \reals^{n\times n}$ is further assumed to satisfy \begin{equation}\label{weight_matrix_conditions} \bbW = \bbW^T, \quad \bbW\mathbf{1} = \mathbf{1}, \quad \text{null}\{\bbI-\bbW\} = \text{span}\{\mathbf{1}\}. \end{equation} The first two conditions in \eqref{weight_matrix_conditions} imply symmetry and row stochasticity of $\bbW$, respectively. The third condition enforces the consensus constraint. We further impose the following assumption on the diagonal weights, necessary for the analysis later in this paper. \begin{assumption} \label{as_weight_bound} The diagonal entries of $\bbW$ are bounded by two positive constants $1>\Delta > \delta>0$.
I.e., for all $i$ we have that $\delta<w_{ii}<\Delta$. \end{assumption} Since $\text{null}(\bbI-\bbW)=\text{null}(\bbI-\bbW)^{1/2}=\text{span}\{\mathbf{1}\}$, it follows that for any vector $\bbx=[\bbx_1;\dots;\bbx_n]\in \reals^{np}$ the relation $(\bbI-\bbZ)^{1/2}\bbx=\bb0$ holds if and only if $\bbx_1=\dots=\bbx_n$. This means that the feasible variables in \eqref{eq_primal_problem} indeed satisfy $\bbx_i=\bbx_j$ for all $i,j$ and that, consequently, the problem in \eqref{eq_primal_problem} is equivalent to the minimization of $\sum_{i=1}^n f_i(\tbx)$ and subsequently $\bbx^* = [\tbx^*; \tbx^*; \hdots; \tbx^*]$. To solve \eqref{eq_primal_problem}, it is necessary to form the Lagrangian. Define $\bbnu:= [\bbnu_1, \bbnu_2, \hdots, \bbnu_n] \in \reals^{np}$ to be a set of dual variables for the equality constraint $(\bbI-\bbZ)^{1/2}\bbx=\bb0$, with node $i$ holding the $i$th block $\bbnu_i \in \reals^p$. The Lagrangian is then defined as $\ccalL_0(\bbx,\bbnu) := \sum_{i=1}^n f_i(\bbx_i) + \bbnu^T (\bbI-\bbZ)^{1/2}\bbx$. To remove the dependence on the potentially dense matrix $(\bbI-\bbZ)^{1/2}$, we introduce an adjusted dual variable $\bby := (\bbI-\bbZ)^{1/2}\bbnu$ and work directly with $\bby \in \reals^{np}$. We may additionally add a quadratic penalty term to form the augmented Lagrangian in terms of $\bbx$ and $\bby$ as \begin{align}\label{eq_aug_lagrangian} \ccalL_{\alpha}(\bbx,\bby) := \sum_{i=1}^n f_i(\bbx_i) + \bby^T\bbx + \frac{\alpha}{2}\bbx^T(\bbI-\bbZ)\bbx, \end{align} indexed by $\alpha\geq0$, the weight of the quadratic penalty. By setting $\alpha=0$, we remove the quadratic penalty and recover the standard Lagrangian $\ccalL_0(\bbx,\bby)$. Note that any feasible point to the constrained problem in \eqref{eq_primal_problem} will make the quadratic penalty in \eqref{eq_aug_lagrangian} zero and thus add no additional cost to the standard Lagrangian. 
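For concreteness, a weight matrix satisfying the conditions in \eqref{weight_matrix_conditions} and Assumption \ref{as_weight_bound} can be generated, e.g., with the well-known Metropolis rule; the following sketch (graph and values are illustrative choices, not those of our numerical experiments) verifies the conditions numerically.

```python
import numpy as np

# Metropolis weights on a small illustrative graph; numerically verify the
# conditions W = W^T, W 1 = 1, null(I - W) = span(1), and the diagonal
# bounds of Assumption 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # connected 4-node graph
n = 4
deg = np.zeros(n, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))   # Metropolis rule
for i in range(n):
    W[i, i] = 1.0 - W[i].sum()   # diagonal absorbs the slack: rows sum to 1

assert np.allclose(W, W.T)                       # symmetry
assert np.allclose(W @ np.ones(n), np.ones(n))   # row stochasticity
eigs = np.linalg.eigvalsh(W)
assert np.sum(np.isclose(eigs, 1.0)) == 1        # eigenvalue 1 is simple
assert np.all((W.diagonal() > 0) & (W.diagonal() < 1))   # Assumption 1
```

Since the graph is connected, the eigenvalue $1$ of $\bbW$ is simple, which is exactly the third condition in \eqref{weight_matrix_conditions}.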
A pair $\bbx^*$ and $\bby^*$ that jointly optimize $\ccalL_{\alpha}(\bbx,\bby)$ will thus provide the solution to the original problem of interest in \eqref{eq_primal_problem} for any choice of $\alpha$. \subsection{Dual methods} \label{sec_dual_methods} A standard set of approaches towards jointly optimizing \eqref{eq_aug_lagrangian} is to operate exclusively on the dual variable $\bby$ in what are known as dual methods. The basic dual objective function to be maximized is obtained by minimizing a \emph{non-augmented} Lagrangian $\ccalL_{0}(\bbx,\bby)$ over $\bbx$, where $\ccalL_0(\bbx,\bby)$ is defined as the Lagrangian in \eqref{eq_aug_lagrangian} with no quadratic weight, i.e. $\alpha=0$. This results in the following dual problem \begin{align}\label{eq_dual_problem} \bby^* := \argmax_{\bby \in \reals^{np}} \left[\min_{\bbx \in \reals^{np}} \sum_{i=1}^n f_i(\bbx_i) + \bby^T \bbx \right]. \end{align} We denote by $\bbx_0(\bby)$ the optimal primal variable found in the internal minimization step in \eqref{eq_dual_problem} for a given dual iterate $\bby$. The optimal $\bbx^*$ from the original problem in \eqref{eq_primal_problem} can then be recovered as $\bbx^* = \bbx_0(\bby^*)$ due to strong duality of a strongly convex problem---see, e.g., \cite{boyd2004convex}. A wide array of optimization techniques can be used to iteratively solve for $\bby^*$. Gradient ascent, for example, can be performed directly on the maximization problem in \eqref{eq_dual_problem}. In gradient ascent, at each iteration index $t=0,1,2,\hdots$ the dual variable $\bby_{t+1}$ is computed using the previous iterate $\bby_{t}$ and the gradient of the objective function in \eqref{eq_dual_problem}, resulting in the respective primal and dual updates, \begin{align} \bbx_{t+1} &= \argmin_{\bbx} \ccalL_{0}(\bbx,\bby_{t}) \label{eq_dual_ascent_p} \\ \bby_{t+1} &= \bby_{t} + \eps_d (\bbI - \bbZ) \bbx_{t+1}, \label{eq_dual_ascent_d} \end{align} where $\eps_d > 0$ is a scalar step size. 
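The updates in \eqref{eq_dual_ascent_p}-\eqref{eq_dual_ascent_d} can be sketched on a toy scalar consensus problem with quadratic local costs $f_i(x) = (b_i/2)(x - a_i)^2$, for which the primal minimization has the closed form $x_i = a_i - y_i/b_i$; all graph and cost values below are illustrative choices.

```python
import numpy as np

# Dual ascent on a toy scalar consensus problem (p = 1): closed-form primal
# update x_i = a_i - y_i / b_i, followed by the first order dual update
# y <- y + eps_d * (I - W) x.  All values are illustrative.
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])   # Metropolis weights on a 3-node path
a = np.array([1.0, 2.0, 4.0])    # local minimizers
b = np.array([1.0, 2.0, 0.5])    # local curvatures
eps_d = 0.4                      # dual step size
y = np.zeros(3)
for _ in range(2000):
    x = a - y / b                            # primal update (closed form)
    y = y + eps_d * (np.eye(3) - W) @ x      # dual gradient ascent update

x_star = (b * a).sum() / b.sum()  # consensus optimum of sum_i f_i
assert np.allclose(x, x_star, atol=1e-6)  # all nodes reach the optimizer
```

The dual step for node $i$ only requires the primal iterates of its neighbors, which is the distributed capability discussed next.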
Observe that, in both the updates in \eqref{eq_dual_ascent_p}-\eqref{eq_dual_ascent_d}, the $i$th blocks $\bbx_{i,t+1}$ and $\bby_{i,t+1}$ can be computed locally by node $i$ using only local exchanges with neighbors $j \in n_i$. Further observe that this distributed capability is permitted only by considering the non-augmented Lagrangian in the definition of the dual function in \eqref{eq_dual_problem}. Together, these updates are commonly known as dual ascent (DA) \cite{cRabbatNowak04} and are known to converge sub-linearly to the optimal pair $(\bbx^*, \bby^*)$ with a rate of $\ccalO(1/t)$. However, first order gradient-based methods can be further slowed in practice when the problem is ill-conditioned, motivating the development of more sophisticated dual techniques. Quasi-Newton methods are a well known alternative to first order methods. In traditional centralized settings, they are known to have more desirable convergence properties and perform better in practice than first order methods because they approximate a curvature correction to the descent direction. They have recently been adapted for distributed algorithms as well. In \cite{eisen2016decentralized,eisen2017decentralized}, a dual quasi-Newton method is derived that replaces the first order dual update in \eqref{eq_dual_ascent_d} with an approximate second order update of the form \begin{align} \bby_{t+1} &= \bby_{t} +\eps_d \bbH^{-1}_{t}(\bbI - \bbZ) \bbx_{t+1}, \label{eq_dual_dbfgs} \end{align} where $\bbH_{t}$ is an approximation of the dual Hessian and thus serves as a distributed approximation to Newton's method. In particular, $\bbH_{t}$ is constructed as a positive definite matrix that satisfies what is known as the secant condition. 
Recall that $\bbh_{t} = (\bbI - \bbZ)\bbx_{t+1}$ is the dual gradient, and define the dual variable variation $\bbv_{t}$ and gradient variation $\bbs_{t}$ vectors, \begin{align} \bbv_{t} = \bby_{t+1} - \bby_{t}, \enskip \bbs_{t} = (\bbI - \bbZ) (\bbx_{t+1}-\bbx_{t}).\label{eq_d_bfgs_vars} \end{align} Observe that $\bbv_{t}$ and $\bbs_{t}$ capture differences of two consecutive dual variables and gradients, respectively, evaluated at steps $t+1$ and $t$. At each iteration, we select a new Hessian approximation $\bbH_{t+1}$ that satisfies the secant condition $\bbH_{t+1} \bbv_{t} = \bbs_{t}$. This condition is fundamental, as the secant condition is satisfied by the actual Hessian for small $\bbv_{t}$. The dual D-BFGS \cite{eisen2016decentralized} method designs a matrix $\bbH_{t}$ that satisfies the global secant condition while being locally computable by each node. The inclusion of this approximation is shown to numerically improve upon the first order DA method, but ultimately suffers from the same slow convergence rate of $\ccalO(1/t)$. \subsection{Augmented Lagrangian methods}\label{sec_aug_lag} A well studied technique to improve upon the convergence rate of dual methods is to operate on the augmented Lagrangian $\ccalL_{\alpha}(\bbx,\bby)$ in \eqref{eq_aug_lagrangian} for some $\alpha >0$. Consider substituting the primal update in \eqref{eq_dual_ascent_p} with the minimization over the augmented Lagrangian \begin{align} \bbx_{t+1} &= \argmin_{\bbx} \ccalL_{\alpha}(\bbx,\bby_{t}). \label{eq_mm_p} \end{align} Using the augmented primal update in \eqref{eq_mm_p} with the first order dual update in \eqref{eq_dual_ascent_d} results in a method commonly referred to as the method of multipliers (MM), which is shown to exhibit a fast linear convergence rate to the optimal primal-dual pair \cite{bertsekas2014constrained,jakovetic2015linear,mokhtari2016decentralized}. 
However, MM cannot be implemented in a distributed manner because \eqref{eq_mm_p} cannot be computed locally due to the quadratic coupling term in \eqref{eq_aug_lagrangian}. The ADMM method exists as a distributed alternative to MM, permitting distributed computation through a decoupling of the variables \cite{Shi2014-ADMM}. Both the augmented Lagrangian update in \eqref{eq_mm_p} and the standard update in \eqref{eq_dual_ascent_p} will in any case still require an internal minimization step. For most objective functions, this primal update will be computationally expensive, thus making these methods difficult to implement in practice. Despite this shortcoming, it would nonetheless be beneficial to incorporate the augmented Lagrangian in the primal update to improve upon the convergence rate of distributed quasi-Newton methods. In this paper, we develop a primal-dual quasi-Newton method that, in addition to using the quasi-Newton dual update in \eqref{eq_dual_dbfgs} for improved conditioning, implements a quasi-Newton approximation of \eqref{eq_mm_p} that permits local and distributed computation to find exact solutions to \eqref{eq_primal_problem}. \subsection{Related Work} \begin{table*}[t] \centering \caption{Comparison of consensus optimization methods} \label{tab_methods} \begin{tabular}{l | lllll} & Convergence rate & Exact solution? & Internal opt. problem? 
& Primal Order & Dual Order \\ \hline DGD \cite{nedic2009} & Linear & N & N & First & N/A\\ Network Newton\cite{mokhtari2017network} & Linear-Quadratic & N & N & Second & N/A\\ D-BFGS (primal) \cite{eisen2017decentralized} & Linear & N & N & ``Second'' & N/A\\\hline Dual descent \cite{cRabbatNowak04} & Sublinear & Y & Y & N/A & First\\ ADMM \cite{Schizas2008-1,Shi2014-ADMM} & Linear & Y & Y & N/A & First\\ D-BFGS (dual) \cite{eisen2017decentralized} & Sublinear & Y & Y & N/A & ``Second''\\\hline EXTRA \cite{shi2015extra,li2017primal} & Linear & Y & N & First & First\\ ESOM \cite{mokhtari2016decentralized} & Linear & Y & N & Second & First\\ \textbf{PD-QN} & \textbf{Linear} & \textbf{Y} & \textbf{N} & \textbf{``Second''} & \textbf{``Second''} \\ \end{tabular} \end{table*} The existing literature in solving the consensus problem in \eqref{eq_primal_problem} differs in many ways, ranging from convergence rate to computational complexity to communication cost. We summarize many of these methods in Table \ref{tab_methods} and discuss their qualities here. Both Table \ref{tab_methods} and the following discussion break consensus optimization methods into three basic classes. The first class contains methods specifically developed for the consensus problem that operate exclusively with the primal variable $\bbx_{t}$. The general approach here is to perform gradient steps on the local primal variables while also averaging local primal variables with neighbors. The standard first order method is called distributed gradient descent \cite{nedic2009}, and can also be formulated as moving the consensus constraint in \eqref{eq_primal_problem} into the objective function as a penalty term. 
For better performance in ill-conditioned problems, higher order versions of the primal domain approach include Network Newton \cite{mokhtari2017network} and D-BFGS \cite{eisen2017decentralized}, which employ exact second order and approximated second order information, respectively, to speed up convergence. While all of these methods achieve at least a linear convergence rate with low computational cost, they find only approximate solutions to \eqref{eq_primal_problem} when using a constant stepsize. They may alternatively use diminishing step sizes to reach the exact solution, but at a slower sublinear convergence rate. The second class of methods contains those that convert the constrained problem in \eqref{eq_primal_problem} to the dual domain and operate exclusively on the dual variable. These include the standard first order dual descent \cite{cRabbatNowak04} as well as an augmented Lagrangian variation ADMM \cite{Schizas2008-1,Shi2014-ADMM}. Both of these methods perform first order updates in the dual domain and are able to achieve a sub-linear and linear convergence rate, respectively. The D-BFGS method \cite{eisen2017decentralized} performs an approximate second order update in the dual domain. While the convergence rate of these methods is typically not as fast as the linear rate of primal domain methods, they nonetheless improve upon the primal domain methods by finding exact solutions to \eqref{eq_primal_problem}. However, these methods face the additional cost of requiring solutions to internal optimization problems at every iteration of the method. This quality may make them difficult or impractical to use for general objective functions. The third class of methods combines the faster convergence rate of primal domain methods with the exactness of solutions of the dual domain methods by performing updates on both the primal and dual variables. These may be considered primal-dual methods. 
The EXTRA method is an exact first order method with linear convergence rate \cite{shi2015extra} that works effectively as a first order primal-dual method \cite{mokhtari2016dsa,li2017primal}. The ESOM method \cite{mokhtari2016decentralized} performs a second order update on the primal variable and a first order update on the dual variable to achieve a linear convergence rate to the exact solution without requiring an internal optimization method. The PD-QN method proposed in this work is similarly able to achieve a linear convergence rate without the internal optimization method, but additionally includes an approximate second order update in the dual domain. Thus, the method is able to retain desirable qualities of ESOM without computing exact second order information while providing additional robustness to problems that are ill-conditioned in the dual domain. \medskip\noindent{\bf Notation remark. } In this paper, we use boldface lower case letters to denote vectors and boldface upper case letters to denote matrices. At any time $t$, the $i$th block of a vector $\bbz_t \in \reals^{np}$ is denoted as $\bbz_{i,t} \in \reals^{p}$, while $\bbz_{n_i,t} \in \reals^{m_i p}$ denotes a concatenation of the components $\bbz_{j,t}$ for $j \in n_i$. Likewise, the $i$th block of matrix $\bbA_t \in \reals^{np \times np}$ is denoted as $\bbA_{i,t} \in \reals^{p \times p}$, while $\bbA_{n_i,t} \in \reals^{m_i p \times m_i p}$ denotes the $(j,k)$ entries of $\bbA$ where $j,k \in n_i$. We further denote by $\bbZ_{\emptyset}$ the matrix containing only the diagonal elements of $\bbZ$. \section{Primal-Dual Quasi-Newton (PD-QN) Method}\label{sec_dbfgs} We introduce the Primal-Dual Quasi-Newton (PD-QN) algorithm as a fully distributed quasi-Newton update on the augmented Lagrangian function $\ccalL_{\alpha}(\bbx,\bby)$. We refer to it as a primal-dual quasi-Newton method because a quasi-Newton update is used to approximate second order information for both the primal and dual updates. 
In the primal domain, second order information is approximated so that the update in \eqref{eq_mm_p} can be solved approximately in a distributed manner, while second order information is approximated in the dual domain to make the dual update more robust in ill-conditioned settings and better performing in practice. For notational convenience, we define the primal and dual gradients of $\ccalL_{\alpha}(\bbx_{t},\bby_{t})$, labelled $\bbg_{t}$ and $\bbh_{t}$, respectively, as \begin{align}\label{eq_primal_grad} \bbg_{t} &:= \nabla f(\bbx_{t}) + \bby_{t} + \alpha (\bbI-\bbZ)\bbx_{t}, \\ \bbh_{t} &:= (\bbI - \bbZ) \bbx_{t+1}. \label{eq_dual_gradient} \end{align} The full gradients $\bbg_t, \bbh_t \in \reals^{np}$ stack the local gradients at each node, e.g. $\bbg_t = [\bbg_{1,t}; \bbg_{2,t}; \hdots; \bbg_{n,t}]$, where $\bbg_{i,t} \in \reals^p$ is computed locally by node $i$. In the following subsections, we proceed to derive the primal and dual updates of the PD-QN method. \subsection{Primal update}\label{sec_primal_update} We seek to replace the non-distributed primal update in \eqref{eq_mm_p} with an update that both allows distributed computation and does not require the explicit solving of a subproblem. To derive such an update, we recall that the iterate of a descent-based method is in fact the minimizer of a quadratic Taylor series approximation of the objective function, centered at the current iterate. It is natural then to consider solving the optimization problem in \eqref{eq_mm_p} in the same manner. 
Consider the quadratic approximation of $\ccalL_{\alpha}(\bbx,\bby_{t})$, centered at $\bbx_{t}$, expressed as \begin{align} \label{eq_aug_lagrangian_quad} \hat{\ccalL}_{\alpha}(\bbx,\bby_{t}) &= \ccalL_{\alpha}(\bbx_{t},\bby_{t}) + \nabla \ccalL_{\alpha}(\bbx_{t},\bby_{t})^T(\bbx - \bbx_{t}) \nonumber \\ &\qquad + \frac{1}{2} (\bbx- \bbx_{t})^T \mathcal{G}_{t}(\bbx - \bbx_{t}) \end{align} where $\mathcal{G}_{t}$ is some matrix that approximates the second order information of the augmented Lagrangian. As \eqref{eq_aug_lagrangian_quad} is a quadratic function, the vector $\bbx_{t+1}$ that minimizes it can be found explicitly with a closed form solution. This provides us the following primal variable update \begin{align} \label{eq_pdqn_p} \bbx_{t+1} &= \bbx_{t} - \mathcal{G}^{-1}_{t} \bbg_{t}, \end{align} which is the general form of our primal update in PD-QN. A well studied choice of the Hessian approximation matrix $\mathcal{G}_{t}$ is that given by the quasi-Newton BFGS method \cite{dennis1974characterization,powell1976some}. Consider the particular structure of the Hessian of the augmented Lagrangian $\nabla^2_{\bbx \bbx} \ccalL_{\alpha}(\bbx,\bby_{t}) = \nabla^2 f(\bbx) + \alpha (\bbI-\bbZ)$. The first term, $\nabla^2 f(\bbx)$, is the Hessian of the local objective functions and thus a locally computable block diagonal matrix. The second term, $\alpha (\bbI-\bbZ)$, is not diagonal but has the sparsity pattern of $\ccalG$ and, more importantly, is a constant matrix. To construct an approximate matrix then, it is only necessary for nodes to approximate the first term, which can be done solely with local variables and gradients using the standard BFGS quasi-Newton update. To compute this update, define the primal variable and gradient variations as \begin{align} \label{eq_primal_variations} \bbu_{t} = \bbx_{t+1}-\bbx_{t}, \qquad \bbr_{t} = \bbg_{t+1}-\bbg_{t}. 
\end{align} Because each node $i$ estimates a Hessian that only depends on local variables, its respective BFGS approximation matrix $\bbB_{i,t} \in \reals^{p \times p}$ can be computed using only the local variables and gradients $\bbu_{i,t}$ and $\bbr_{i,t}$. Each node maintains and updates at time $t+1$ an approximation of $\nabla^2 f_i (\bbx_i)$ with the iterative BFGS quasi-Newton update \begin{align} \label{eq_bfgs_p} \bbB_{i,t+1} &= \bbB_{i,t} + \frac{ \bbr_{i,t} \bbr_{i,t}^{T}}{{\bbu_{i,t}}^{T} \bbr_{i,t}} - \frac{\bbB_{i,t} \bbu_{i,t}\bbu_{i,t}^{T}\bbB_{i,t}} {{\bbu_{i,t}^{T}}\bbB_{i,t}{\bbu_{i,t}}}. \end{align} The global objective function Hessian approximation is then defined as the block diagonal matrix combining all local approximations, i.e. $\bbB_{t} := \diag \{ \bbB_{i,t}\}_{i=1}^n \in \reals^{np \times np}$ and the respective full Hessian is approximated as $\mathcal{G}_{t} := \bbB_{t} + \alpha(\bbI-\bbZ)$. To implement the update in \eqref{eq_pdqn_p} in a distributed manner, the $i$th component of the descent direction $\mathcal{G}^{-1}_{t} \bbg_{t}$ must be computable by node $i$ using local information and exchanges with neighbors. More specifically, although $\mathcal{G}_{t}$ has the required sparsity pattern of $\ccalG$, its inverse does not. We can, however, approximate the inverse as $\mathcal{G}^{-1}_{t,K}$ using $K$ terms of the Taylor series expansion of the inverse $\mathcal{G}^{-1}_{t} = (\bbB_{t} + \alpha(\bbI-\bbZ))^{-1}$, written as \begin{align} \label{eq_hessian_K} &\mathcal{G}^{-1}_{t,K} := \\ & \bbD^{-1/2}_{t} \sum_{k=0}^K \left( \bbD^{-1/2}_{t} \alpha(\bbI \!\!- \!\!2\bbZ_{\emptyset}\! +\! \bbZ) \bbD_{t}^{-1/2} \right)^k \bbD^{-1/2}_{t}, \nonumber \end{align} where the matrix $\bbD_{t} := \bbB_{t} + 2\alpha(\bbI - \bbZ_{\emptyset})$ contains the block diagonal elements of the approximate Hessian $\mathcal{G}_{t}$. 
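The truncated series $\mathcal{G}^{-1}_{t,K}$ in \eqref{eq_hessian_K} can be checked numerically. The sketch below takes the scalar case $p=1$ (so $\bbZ=\bbW$) with an illustrative weight matrix and a random diagonal stand-in for the BFGS blocks, and confirms that the truncation error decreases as $K$ grows.

```python
import numpy as np

# Truncated-series approximation of G^{-1} = (B + alpha*(I - Z))^{-1} via
# the splitting G = D - alpha*(I - 2*Z_diag + Z), where
# D = B + 2*alpha*(I - Z_diag).  Scalar case p = 1, so Z = W.
rng = np.random.default_rng(0)
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])          # Metropolis weights on a 3-node path
n, alpha = 3, 0.5
B = np.diag(rng.uniform(1.0, 2.0, n))    # stand-in for the BFGS blocks B_i
Z_diag = np.diag(np.diag(W))
D = B + 2 * alpha * (np.eye(n) - Z_diag)
M = alpha * (np.eye(n) - 2 * Z_diag + W) # remainder so that G = D - M

D_isqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
G_inv = np.linalg.inv(B + alpha * (np.eye(n) - W))

def G_inv_K(K):
    """K-term truncation of the Taylor series for the inverse of G."""
    X = D_isqrt @ M @ D_isqrt
    S = sum(np.linalg.matrix_power(X, k) for k in range(K + 1))
    return D_isqrt @ S @ D_isqrt

errs = [np.linalg.norm(G_inv_K(K) - G_inv) for K in (0, 2, 8)]
assert errs[0] > errs[1] > errs[2]   # error shrinks monotonically in K
assert errs[2] < 1e-2                # already accurate at K = 8
```

The geometric decay of the error with $K$ reflects the spectral radius of the normalized coupling term being strictly less than one in this instance.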
The resulting descent update $\mathcal{G}^{-1}_{t,K} \bbg_{t}$ can indeed be implemented in a distributed manner at each node, with $K+1$ local exchanges needed per iteration to use $K$ terms in the series in \eqref{eq_hessian_K}---see, e.g., \cite{mokhtari2017network}. As larger $K$ will result in a better approximation of the matrix inverse, practical implementations require a tradeoff between accuracy and number of local exchanges required by each node. Each node $i$ can compute its local primal descent component $\bbd_{i,t} := -[\mathcal{G}^{-1}_{t,K} \bbg_{t}]_i$ and subsequent update using the subroutine displayed in Algorithm \ref{alg_primal}. \begin{algorithm}[t] \setstretch{1.35} {\small\begin{algorithmic}[1] \REQUIRE $\{\bbx_{i,\tau},\bbg_{i,\tau}\}_{t,t-1}$,$\bbB_{i,t-1}$, Weights $w_{ij}$ for $j\in n_i$ \STATE Compute $\bbu_{i,t-1},\bbr_{i,t-1}$, $\bbB_{i,t}$ [cf. \eqref{eq_primal_variations}-\eqref{eq_bfgs_p}] \STATE Form $\bbD_i = \bbB_{i,t} + 2\alpha(1-w_{ii})\bbI$ \STATE Initialize $\bbd_{i,t} = -\bbD_i^{-1} \bbg_{i,t}$ \FOR {$k=0,\hdots,K-1$} \STATE Exchange $\bbd_{i,t}$ with neighbors $j \in n_i$ \STATE $\bbd_{i,t} = \bbD_i^{-1} \big[ \alpha\big( (1-2w_{ii})\bbd_{i,t} + \sum_{j \in n_i} w_{ij} \bbd_{j,t} \big) - \bbg_{i,t}\big]$ \ENDFOR \STATE Local update $\bbx_{i,t+1} = \bbx_{i,t} + \bbd_{i,t}$ \RETURN $\bbx_{i,t+1}$, $\bbB_{i,t}$ \end{algorithmic}} \caption{Primal update for node $i$ at time $t$} \label{alg_primal} \end{algorithm} \subsection{Dual update}\label{sec_dual_update} We proceed to derive the dual update of the PD-QN method, which replaces the first order dual update in \eqref{eq_dual_ascent_d} with a quasi-Newton update similar to that in \eqref{eq_dual_dbfgs}, i.e. \begin{equation}\label{eq_pdqn_d} \bby_{t+1} = \bby_{t} +\alpha\bbH^{-1}_{t} \bbh_{t}, \end{equation} where $\bbH_{t}$ is the dual Hessian approximation and $\alpha>0$ is the quadratic penalty coefficient in the augmented Lagrangian. 
As in the primal domain, $\bbH_{t}$ is designed so that the dual descent direction component $\bbe_{i,t} := [\bbH^{-1}_{t}\bbh_{t}]_i$ can be computed locally at node $i$ using local exchanges. We in particular employ the quasi-Newton dual update used in the dual D-BFGS method \cite{eisen2017decentralized,eisen2016decentralized}, the details of which we discuss here. Recall that, in the traditional BFGS method, the secant condition $ \bbH_{t+1} \bbv_{t} =\bbs_{t}$ induces desirable properties on the approximation matrix $\bbH_{t}$ relating to acceleration and robustness in ill-conditioned problems. We therefore construct a matrix $\bbH_{t}$ that is not only distributable across the network, but also maintains the global secant condition. Define the diagonal normalization matrix $\bbUpsilon \in \reals^{np \times np}$ whose $i$th diagonal block is $m_i^{-1} \bbI$ and a small scalar regularization parameter $\gamma > 0$. We then define the modified neighborhood variable and gradient variations, $\tbv_{n_i,t} \in \reals^{m_i p}$ and $\tbs_{n_i,t} \in \reals^{m_i p}$, as \begin{align} \tbv_{n_i,t} &:= \bbUpsilon_{n_i} \left[ \bby_{n_i,t+1} - \bby_{n_i,t} \right] \label{eq_dbfgs_vars} \\ \tbs_{n_i,t} &:= \bbh_{n_i,t+1} - \bbh_{n_i,t} - \gamma\tbv_{n_i,t}\label{eq_dbfgs_grads}. \end{align} The neighborhood variations in \eqref{eq_dbfgs_vars} and \eqref{eq_dbfgs_grads} are modified not just in their locality, but also in the normalization by $ \bbUpsilon_{n_i}$ in \eqref{eq_dbfgs_vars} and regularization by $\gamma\tbv_{n_i,t}$ in \eqref{eq_dbfgs_grads}. 
As $\tbv_{n_i,t}$ and $\tbs_{n_i,t}$ can be obtained locally at each node $i$ with one-hop exchanges, each node $i$ computes and maintains a local Hessian approximation $\bbC_{n_i,t} \in \reals^{m_ip \times m_ip}$, which is updated as the solution of a local optimization problem, \begin{alignat}{2}\label{eq_dbfgs_update} \bbC_{n_i,t+1} := &\argmin_{\bbZ}\ && \text{tr}[ (\bbC_{n_i,t})^{-1} (\bbZ - \gamma \bbI)] - \\\nonumber &&&\qquad\quad \text{logdet}[(\bbC_{n_i,t})^{-1} (\bbZ - \gamma \bbI)] - m_i p\\\nonumber & \text{ s.t.}\quad && (\bbZ - \gamma \bbI) \tbv_{n_i,t} = \tbs_{n_i,t},\quad \bbZ \succeq \bb0. \end{alignat} The update in \eqref{eq_dbfgs_update} provides an updated approximation matrix $\bbC_{n_i,t+1}$ that has eigenvalues greater than $\gamma$ while satisfying a \emph{modified} local secant condition with respect to the normalized variable variation $\tbv_{n_i,t}$. A closed form solution to \eqref{eq_dbfgs_update} exists \cite[Proposition 1]{mokhtari2014res}, and is given by \begin{align}\label{eq_bfgs_dist} \bbC_{n_i,t+1} &= \bbC_{n_i,t}+ \frac{\tbs_{n_i,t} \tbs^T_{n_i,t}}{\tbs^T_{n_i,t} \tbv_{n_i,t}} \\ & \qquad - \frac{\bbC_{n_i,t} \tbv_{n_i,t} \tbv^T_{n_i,t} \bbC_{n_i,t}}{\tbv^T_{n_i,t} \bbC_{n_i,t} \tbv_{n_i,t}} + \gamma \bbI. \nonumber \end{align} Note that the dual BFGS update differs from the more traditional primal BFGS update in \eqref{eq_bfgs_p} both in the use of modified neighborhood variations $\tbv_{n_i,t} $ and $\tbs_{n_i,t}$ and the addition of the regularization term $\gamma \bbI$. Observe that, through the use of \emph{neighborhood} variables in the update in \eqref{eq_bfgs_dist}, nodes approximate the dual Hessian of themselves \emph{and} their neighbors. 
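The modified secant property of the closed form in \eqref{eq_bfgs_dist} is straightforward to verify numerically: the updated matrix satisfies $\bbC_{n_i,t+1}\tbv_{n_i,t} = \tbs_{n_i,t} + \gamma\tbv_{n_i,t}$, which by the definition of $\tbs_{n_i,t}$ in \eqref{eq_dbfgs_grads} recovers the secant condition with respect to the raw gradient difference. A sketch with illustrative random data:

```python
import numpy as np

# Regularized BFGS update of a neighborhood matrix C:
#   C_next = C + s s^T/(s^T v) - C v v^T C/(v^T C v) + gamma * I.
# Check the modified secant condition C_next v = s + gamma * v, valid
# whenever the curvature condition s^T v > 0 holds.
rng = np.random.default_rng(1)
d, gamma = 4, 0.1
C = np.eye(d)                      # previous approximation (positive definite)
v = rng.standard_normal(d)         # normalized variable variation
s = rng.standard_normal(d)         # modified gradient variation
if s @ v <= 0:                     # enforce the curvature condition s^T v > 0
    s = s + (1.0 - s @ v) * v / (v @ v)

C_next = (C + np.outer(s, s) / (s @ v)
            - (C @ np.outer(v, v) @ C) / (v @ C @ v)
            + gamma * np.eye(d))
assert np.allclose(C_next @ v, s + gamma * v)              # modified secant
assert np.all(np.linalg.eigvalsh(C_next) >= gamma - 1e-9)  # eigenvalues > gamma
```

The final assertion illustrates the role of the $\gamma\bbI$ term: it keeps the eigenvalues of the updated matrix bounded away from zero.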
They subsequently use $\bbC_{n_i,t}$ along with an additional small regularization parameter $1 \geq \Gamma >0$ to compute the neighborhood descent direction $\bbe^i_{n_i,t} \in \reals^{m_i p}$ as \begin{equation} \bbe^i_{n_i,t} = - \left( \bbC_{n_i,t}^{-1} + \Gamma \bbUpsilon_{n_i} \right) \bbh_{n_i,t}. \label{eq_direction_local} \end{equation} The neighborhood descent direction $\bbe^i_{n_i,t} \in \reals^{m_ip}$ contains components for variables of node $i$ itself and all neighbors $j \in n_i$ -- see Fig. \ref{fig_variable_flow_diagram}. Likewise, neighboring nodes $j \in n_i$ contain a descent component of the form $\bbe^j_{i,t}$. The local descent $\bbe_{i,t}$ is then given by the sum of the components $\bbe^j_{i,t}$ for all neighbors $j\in n_i$, i.e. $\bbe_{i,t} = \sum_{j \in n_i} \bbe^j_{i,t}$. The global dual Hessian approximation $\bbH_{t}$ in \eqref{eq_pdqn_d} can be derived from all nodes performing the local update in \eqref{eq_direction_local} simultaneously in parallel. More precisely, it can be shown that $\bbH^{-1}_{t} = \hbH^{-1}_{t} + \Gamma\bbI$, where $\hbH$ satisfies the global secant condition $ \hbH_{t+1} \bbv_{t} =\bbs_{t}$---see \cite[Proposition 1]{eisen2017decentralized} for details. Moreover, the update in \eqref{eq_pdqn_d} can be computed distributedly using a sequence of local exchanges. The necessary exchanges for node $i$ are detailed in Algorithm \ref{alg_dual}. \begin{algorithm}[t] \setstretch{1.35} \begin{algorithmic}[1]\small \REQUIRE $\{\bby_{n_i,\tau},\bbh_{n_i,\tau}\}_{t,t-1}$,$\bbC_{n_i,t-1}$, $\eps_d$,$\gamma,\Gamma >0$ \STATE Compute $\tbv_{ n_i,t-1},\tbs_{ n_i,t-1},\bbC_{n_i,t}$ [cf.\eqref{eq_dbfgs_vars}--\eqref{eq_dbfgs_update}] \STATE Compute $\bbe^i_{n_i,t} = - (\bbC_{n_i,t}^{-1} +\Gamma\bbUpsilon_{n_i}) \bbh_{n_i,t}$ [cf. \eqref{eq_direction_local}] \STATE Exchange $\bbe^i_{j,t}$ with neighbors $j \in n_i$ \STATE Compute descent dir. 
$\displaystyle{\bbe_{i,t} := \sum_{j \in n_i} \bbe^j_{i,t}}$ \STATE Update $\bby_{i,t+1} = \bby_{i,t} +\eps_d\bbe_{i,t}$ \RETURN $\bby_{i,t+1}$, $\bbC_{n_i,t}$ \end{algorithmic} \caption{Dual update for node $i$ at time $t$} \label{alg_dual} \end{algorithm} \begin{figure}\centering \input{variable_exchange_diagram_2_nodes_2.tex} \caption{PD-QN dual variable flow. Nodes exchange variables and gradients -- $\bby_i$ and $\bbh_i$ sent to $j$ and $\bby_j$ and $\bbh_j$ sent to $i$ -- to build variable and gradient variations $\tbv$ and $\tbs$ that they use to determine local curvature matrices -- $\bbC_{n_i}$ and $\bbC_{n_j}$. They then use gradients $\bbh_{n_i}$ and $\bbh_{n_j}$ to compute descent directions $\bbe_{n_i}^i$ and $\bbe_{n_j}^j$. These contain a piece to add locally -- $\bbe_i^i$ stays at node $i$ and $\bbe_j^j$ stays at node $j$ -- and a piece to add at neighbors -- $\bbe_j^i$ is sent to node $j$ and $\bbe_i^j$ is sent to node $i$.} \label{fig_variable_flow_diagram} \end{figure} For ease of presentation, a full description of variable exchanges necessary to perform the update is not shown in Algorithm \ref{alg_dual}. We present in Fig. \ref{fig_variable_flow_diagram} a diagram of the flow of variables among neighbors. Variables and gradients are exchanged -- $\bby_{i,t}$ and $\bbh_{i,t}$ are sent to node $j$ and $\bby_{j,t}$ and $\bbh_{j,t}$ are sent to node $i$ -- and \eqref{eq_bfgs_dist} is used to compute the curvature estimation matrices $\bbC_{n_i,t}$ and $\bbC_{n_j,t}$. Using these Hessian approximations, the nodes apply the inverse approximations to the neighborhood gradients $\bbh_{n_i,t}$ and $\bbh_{n_j,t}$ to obtain the neighborhood descent directions -- $\bbe^i_{n_i,t}$ and $\bbe^j_{n_j,t}$. These descent directions contain a piece to be added locally -- $\bbe^i_{i,t}$ stays at node $i$ and $\bbe^j_{j,t}$ stays at node $j$ -- and a piece to be added at the neighboring node -- $\bbe^i_{j,t}$ is sent to node $j$ and $\bbe^j_{i,t}$ is sent to node $i$. 
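This exchange-and-sum protocol can be formalized as a global matrix-vector product with a sum of zero-padded neighborhood matrices; note also that the $\bbUpsilon$-normalization makes the $\Gamma$ terms aggregate to exactly $\Gamma\bbI$, since each node $i$ appears in $m_i$ neighborhoods, each contributing weight $m_i^{-1}$. The sketch below (3-node path, $p=1$, random illustrative matrices in place of the $\bbC_{n_i,t}$) verifies this equivalence numerically.

```python
import numpy as np

# Each node i forms e^i_{n_i} = -(C_{n_i}^{-1} + Gamma * Upsilon_{n_i}) h_{n_i}
# and the pieces are summed at their owners, e_i = sum over j with i in n_j of
# e^j_i.  Check this equals a global product with the sum of zero-padded
# neighborhood inverses plus Gamma * I.
rng = np.random.default_rng(2)
nbhd = [[0, 1], [0, 1, 2], [1, 2]]          # neighborhoods n_i on a 3-node path
m = np.array([2, 3, 2])                     # neighborhood sizes m_i
Gamma = 0.05
h = rng.standard_normal(3)                  # dual gradient blocks

C_inv = []
for idx in nbhd:                            # random SPD stand-ins for C_{n_i}^{-1}
    A = rng.standard_normal((len(idx), len(idx)))
    C_inv.append(np.linalg.inv(A @ A.T + np.eye(len(idx))))

# Local computations and scatter-add of the neighborhood pieces
e = np.zeros(3)
for i, idx in enumerate(nbhd):
    Ups = np.diag(1.0 / m[idx])             # Upsilon restricted to n_i
    e_nbhd = -(C_inv[i] + Gamma * Ups) @ h[idx]
    e[idx] += e_nbhd                        # e^i_j kept at, or sent to, node j

# Equivalent global operator built from zero-padded neighborhood matrices
H_hat_inv = np.zeros((3, 3))
for i, idx in enumerate(nbhd):
    H_hat_inv[np.ix_(idx, idx)] += C_inv[i]
assert np.allclose(e, -(H_hat_inv + Gamma * np.eye(3)) @ h)
```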
The local descent direction $\bbe_{i,t}$ is the addition of the locally computed $\bbe^i_{i,t}$ and the remotely computed $\bbe^j_{i,t}$. We conclude the discussion on the dual update with a brief series of remarks. \begin{remark}\label{remark_comm}\normalfont As can be seen in the variable flow of Fig. \ref{fig_variable_flow_diagram}, the dual update of PD-QN requires a larger number of information exchanges than is needed for other consensus methods. Indeed, the total communication overhead for PD-QN includes $K+1$ exchanges for the primal update and 4 exchanges for the dual update---$K+5$ in total. This is in comparison to the 2 exchanges needed for first order methods like DA and ADMM and $K+3$ exchanges needed for ESOM. While this burden is higher, the gains made by PD-QN in the number of iterations needed for convergence can, in many cases, still render PD-QN a preferable approach to alternatives. This tradeoff is explored in the numerical results in Section \ref{sec_numerical_results} of this paper. \end{remark} \begin{remark}\label{rmk_inner_product_negative}\normalfont For the problem in \eqref{eq_dbfgs_update} to have a solution and the update in \eqref{eq_bfgs_dist} to be valid, the inner product between the neighborhood variations must satisfy $\tbv_{n_i,t}^T\tbs_{n_i,t} > 0$. This condition imposes a restriction on the functions that can be handled by PD-QN. In practical implementations, however, we can check the value of this inner product and proceed to update $\bbC_{n_i,t}$ only when it satisfies $\tbv_{n_i,t}^T\tbs_{n_i,t} > 0$. \end{remark} \begin{comment} \begin{remark}\label{remark_dual_bounds}\normalfont We stress here that the regularization parameters $\gamma$ and $ \Gamma$ used in the dual BFGS variables in \eqref{eq_dbfgs_grads} and \eqref{eq_bfgs_dist} are necessary to impose bounds on the eigenvalues of the full descent matrix $\bbC_{n_i,t+1}^{-1}+\Gamma \bbUpsilon_{n_i} $. 
Given positive semi-definiteness of $\bbC_{n_i,t+1}$, the eigenvalues can be bounded as \begin{align} \frac{\Gamma}{ \bar{m}_i}\bbI \preceq \bbC_{n_i,t+1}^{-1} + \Gamma \bbUpsilon_{n_i} \preceq \left(\frac{1}{\gamma} + \frac{\Gamma}{\check{m}_i}\right) \bbI \label{eq_eigen_bounds}, \end{align} where $\check{m}_i = \min_{j \in n_i} m_j$ and $\bar{m}_i = \max_{j \in n_i} m_j$. In particular, \eqref{eq_eigen_bounds} implies that $\bbC_{n_i,t+1}^{-1}$ is positive semidefinite. Thus, the role of $\Gamma$ is to prevent the algorithm from stalling if the eigenvalues of $\bbC_{n_i,t}^{-1}$ become too small, while the role of $\gamma$ is to prevent the eigenvalues of $\bbC_{n_i,t}^{-1}$ from becoming too large. Likewise, the use of the regularization term in the modified gradient variation in \eqref{eq_dbfgs_grads} is necessary for the continued satisfaction of the \emph{original} secant condition after adding the regularization term to $\bbC_{n_i,t+1}$ in \eqref{eq_bfgs_dist}. \end{remark} \end{comment} \subsection{PD-QN Summary} Here we summarize the full details of the PD-QN update, combining both the primal and dual updates in \eqref{eq_pdqn_p} and \eqref{eq_pdqn_d} respectively, restated here as \begin{align} \bbx_{t+1} &= \bbx_{t} -\mathcal{G}^{-1}_{t,K} \bbg_{t}, \label{eq_pdqn_final1} \\ \bby_{t+1} &= \bby_{t} + \alpha \bbH^{-1}_{t} \bbh_{t}. \label{eq_pdqn_final2} \end{align} A summary of the variables relevant to these updates is provided in Table \ref{tab_variables} for reference. \begin{table}[t] \centering \caption{PD-QN variable summary} \label{tab_variables} \begin{tabular}{l | ll} & Primal & Dual \\ \hline Variable & $\bbx$ & $\bby$ \\ Gradient & $\bbg$ & $\bbh$ \\ Variable variation & $\bbu$ & $\bbv$ \\ Gradient variation & $\bbr$ & $\bbs$ \\ Local Hessian approx. & $\bbB_i$ & $\bbC_{n_i}$ \\ Global Hessian approx. 
& $\mathcal{G}$ & $\bbH$ \\ Local descent direction & $\bbd_i$ & $\bbe_i$ \end{tabular} \end{table} The complete PD-QN algorithm at node $i$, including both the primal and dual updates, is summarized in Algorithm \ref{alg_pdqn}. Each node begins with initial variables $\bbx_{i,0}$ and $\bby_{i,0}$ and exchanges variables with neighbors. For each step $t$, the node computes its primal gradient in Step 3 and updates the primal variable in Step 4 using Algorithm \ref{alg_primal}. It exchanges the updated variable with its neighbors in Step 5 and uses the received variables to compute and exchange dual gradients in Steps 6 and 7. The dual variable is updated with Algorithm \ref{alg_dual} in Step 8 and exchanged with neighbors in Step 9. \section{Convergence Analysis}\label{sec_convergence} We analyze the convergence of the PD-QN method applied to the consensus optimization problem in \eqref{eq_primal_problem}. To begin, we make the following assumption on the eigenvalues of the objective function Hessian. \begin{assumption} \label{as_strongly_convex} The aggregate objective function $f(\bbx) = \sum_{i=1}^n f_i(\bbx_i)$ is twice differentiable and the eigenvalues of the objective function Hessian are bounded from above and below by positive constants $0 < \mu < L < \infty $, i.e. \begin{equation} \mu \bbI \preceq \nabla^2 f(\bbx) \preceq L\bbI. \label{eq_hessian_bounds_sc} \end{equation} \end{assumption} The upper bound $L$ on the eigenvalues of the Hessian in Assumption \ref{as_strongly_convex} implies that the associated gradient $\bbg(\bbx)$ is Lipschitz continuous with parameter $L$, i.e. $\| \bbg(\bbx) - \bbg(\bbx ')\| \leq L\|\bbx - \bbx'\|$. The lower bound $\mu$ implies that the aggregate objective function is strongly convex with parameter $\mu$. These are standard assumptions in distributed convex optimization. Note that the weakly convex case (i.e. $\mu=0$) is sometimes analyzed as well, but is not considered in this paper.
Many problems in distributed machine learning with weakly convex objective functions are, in any case, supplemented with a strongly convex regularizer (e.g., an $\ell_2$-norm penalty). We further make an assumption on the eigenvalues of the primal Hessian approximation matrix $\bbB_{t}$. \begin{assumption} \label{as_bfgs_bound} There exist positive constants $0 < \psi < L < \Psi $ such that the eigenvalues of the primal Hessian approximation matrix $\bbB_{t}$ are bounded from above and below as \begin{equation} \psi \bbI \preceq \bbB_{t} \preceq \Psi\bbI. \label{eq_bfgs_bound} \end{equation} \end{assumption} \begin{remark}\normalfont The bounds imposed on the eigenvalues of $\bbB_{t}$ are, in general, not standard assumptions. While the matrix $\bbB_{t}$ is guaranteed to be positive definite, its smallest eigenvalue can be arbitrarily small. However, there are a number of techniques commonly used to satisfy this assumption in practice. These include adding small regularization terms to both $\bbB_{t}$ and $\bbB^{-1}_{t}$---see, e.g. \cite{mokhtari2014res}---and using the popular limited memory version of the BFGS update in \eqref{eq_bfgs_p}, which induces the necessary bounds in \eqref{eq_bfgs_bound}---see, e.g., \cite{mokhtari2015global}. In this paper, we assume such bounds exist for ease of analysis. We also observe in the numerical experiments in Section \ref{sec_numerical_results} that such regularization techniques are often not necessary in practice. \end{remark} \begin{algorithm}[t!] \setstretch{1.35} \small{\begin{algorithmic}[1] \REQUIRE $\bbx_{i,0}, \bby_{i,0}, \bbB_{i,0}, \bbC_{n_i,0}, \eps_d, \alpha$ \STATE Exchange initial variables with neighbors $j \in n_i$ \FOR{$t = 0,1,2, \hdots$} \STATE Grad. $\bbg_{i,t} = \bbx_{i,t} - \sum_{j \in n_i} \!\!\!w_{ij}\bbx_{j,t} + \bby_{i,t} + \alpha \nabla f_i(\bbx_{t})$ \STATE Update primal $\bbx_{i,t+1}$ with Algorithm \ref{alg_primal}. \STATE Exchange $\bbx_{i,t+1}$ with neighbors $j \in n_i$ \STATE Grad.
$\bbh_{i,t} = \bbx_{i,t+1} - \sum_{j \in n_i} w_{ij} \bbx_{j,t+1}$ \STATE Exchange $\bbh_{i,t}$ with neighbors $j \in n_i$ \STATE Update dual $\bby_{i,t+1}$ with Algorithm \ref{alg_dual} \STATE Exchange $\bby_{i,t+1}$ with neighbors $j \in n_i$ \ENDFOR \end{algorithmic}} \caption{PD-QN method at node $i$} \label{alg_pdqn} \end{algorithm} \begin{assumption} \label{as_inner_product} For all $i$ and $t$, the inner product between the neighborhood dual variable and gradient vector variations is strictly positive, i.e. $\tbv_{n_i,t}^T \tbs_{n_i,t} > 0$. \end{assumption} We state this assumption explicitly to ensure all local dual Hessian approximations $\bbC_{n_i,t}$ are well defined in \eqref{eq_bfgs_dist}. While this may not always hold in practice, we may set $\bbC_{n_i,t+1} = \bbC_{n_i,t}$---see Remark \ref{rmk_inner_product_negative}---which we stress does not have any bearing on the subsequent analysis. Before deriving the primary theoretical results for the PD-QN method, we first establish some properties of the Hessian approximation matrices for the primal and dual domains, denoted $\mathcal{G}_{t,K}$ and $\bbH_{t}$. The following lemmata characterize the eigenvalues of the $K$-th order inverse approximation of the primal Hessian approximation $\mathcal{G}^{-1}_{t,K}$ and the dual Hessian inverse approximation $\bbH^{-1}_{t}$, respectively. \begin{lemma}\label{lemma_G_bound} Consider the primal update of the PD-QN method introduced in \eqref{eq_pdqn_p}-\eqref{eq_hessian_K}.
If Assumptions \ref{as_weight_bound}-\ref{as_bfgs_bound} hold, then the eigenvalues of the primal Hessian inverse approximation $\mathcal{G}^{-1}_{t,K}$ are uniformly bounded as \begin{equation} \lambda \bbI \preceq \mathcal{G}^{-1}_{t,K} \preceq \Lambda \bbI, \label{eq_prop_eigen_bounds_primal} \end{equation} where the constants $\lambda$ and $\Lambda$ are defined as \begin{align} \lambda := \frac{1}{2\alpha(1-\delta) + \Psi}, \quad \Lambda := \frac{1-\rho^{K+1}}{(1-\rho)(2\alpha(1-\Delta)+\psi)} \nonumber \end{align} and $ \rho:= (2\alpha(1-\delta))/(2\alpha(1-\delta)+\psi)$. \end{lemma} \begin{myproof} The proof follows that of the similar result in \cite[Lemma 2]{mokhtari2017network} and is omitted here for space considerations. \end{myproof} \begin{lemma}\label{lemma_H_bound} Consider the dual update of the PD-QN method introduced in \eqref{eq_pdqn_d}-\eqref{eq_direction_local}. Further, recall the positive constants $\gamma$ and $\Gamma \leq 1$ as the regularization parameters of the dual Hessian and the definition of its global approximation $\bbH^{-1}_{t} =\hbH^{-1}_{t} + \Gamma \bbI$. If Assumption \ref{as_inner_product} holds, the eigenvalues of the dual Hessian inverse approximation $\bbH^{-1}_{t}$ are bounded as \begin{equation} \Gamma \bbI \preceq \bbH^{-1}_{t} \preceq P \bbI, \label{eq_prop_eigen_bounds_dual} \end{equation} where $P := \left( \Gamma + n/\gamma \right)$ and $n$ is the size of the network. \end{lemma} \begin{myproof} To establish the lower bound in \eqref{eq_prop_eigen_bounds_dual}, consider that $\hbH^{-1}_{t}$ is a sum of positive semidefinite matrices and is therefore itself positive semidefinite, so that adding the regularization term $\Gamma\bbI$ yields $\bbH^{-1}_{t} \succeq \Gamma\bbI$. For the upper bound, observe that $\hbH^{-1}_{t}$ is the sum of $n$ matrices satisfying $\bbC_{n_i,t}^{-1} \preceq (1/\gamma) \bbI$ for all $i$, so that adding the regularization term $\Gamma\bbI$ provides the upper bound $P$ in \eqref{eq_prop_eigen_bounds_dual}.
\end{myproof} In Lemmata \ref{lemma_G_bound} and \ref{lemma_H_bound} we show that there exist lower and upper bounds on the eigenvalues of both the primal and dual Hessian inverse approximation matrices. From here, we proceed to demonstrate the linear convergence of the PD-QN method. The following lemma establishes an important relationship between the primal and dual variables under the PD-QN updates. \begin{lemma}\label{lemma_pdqn_error} Consider the updates of PD-QN in \eqref{eq_pdqn_final1} and \eqref{eq_pdqn_final2}, where we recall the approximate primal and dual Hessian inverses $\mathcal{G}^{-1}_{t,K}$ and $\bbH^{-1}_{t}$. If Assumptions \ref{as_weight_bound}-\ref{as_inner_product} hold, then the primal and dual iterates generated by PD-QN satisfy \begin{align}\label{opt_res_PD-QN2} &\nabla f(\bbx_{t+1})-\nabla f(\bbx^*) + \bby_{t+1}-\bby^* + \bbsigma_{t}=\bb0, \end{align} where the error vector $\bbsigma_{t}$ is defined as \begin{align}\label{esom_error_vec} \bbsigma_{t} &:= \nabla f(\bbx_{t})-\nabla f(\bbx_{t+1}) - \alpha\bbH_{t}^{-1}(\bbI-\bbZ)(\bbx_{t+1}-\bbx^*) \nonumber\\ &\qquad + \left[\mathcal{G}_{t,K}-\alpha(\bbI-\bbZ)\right](\bbx_{t+1}-\bbx_{t}). \end{align} \end{lemma} \begin{myproof} See Appendix \ref{app_lemma_pdqn_error}. \end{myproof} In Lemma \ref{lemma_pdqn_error}, we establish a relationship between the primal and dual variables that is similar to the one used in the convergence analysis of the method of multipliers---see \cite{mokhtari2016decentralized}. The PD-QN method includes an additional error term $\bbsigma_{t}$ that encompasses two modifications to MM: (i) the use of the primal Hessian approximation $\mathcal{G}_{t,K}$ rather than the true primal Lagrangian Hessian $\nabla^2 \ccalL(\bbx,\bbnu_{t})$ and (ii) the use of the dual quasi-Newton matrix $\bbH_{t}$ rather than a first order dual update. From here, we may establish a convergence rate of the PD-QN method, first by establishing the linear convergence of a Lyapunov function.
To define the Lyapunov function, we first define an appended variable $\bbz_t$ and matrix $\bbJ_t$ as \begin{equation}\label{eq_aug_defs} \bbz_t = \begin{bmatrix} \bbx_t \\ \bbnu_t \end{bmatrix} \quad \bbJ_t = \begin{bmatrix} \alpha \bbR_t & \bb0 \\ \bb0 & \bbH_t \end{bmatrix}. \end{equation} \begin{theorem}\label{thm:esom_linear_convg} Consider PD-QN as introduced in \eqref{eq_pdqn_final1}-\eqref{eq_pdqn_final2}. Consider arbitrary constants $\beta>1$ and $\phi>1$ and a positive constant $\zeta$. Further, recall the definitions of the vector $\bbz_t$ and matrix $\bbJ_t$ in \eqref{eq_aug_defs} and consider $\hat{\delta}$ as the smallest non-zero eigenvalue of the matrix $\bbI-\bbZ$. If Assumptions \ref{as_weight_bound}-\ref{as_inner_product} hold, then the sequence of Lyapunov functions $\|\bbz_{t}-\bbz^*\|_{\bbJ_t}$ generated by PD-QN satisfies \begin{equation}\label{pdqn_lin_convg} \|\bbz_{t+1}-\bbz^*\|_{\bbJ_t}^2 \ \leq\ \frac{1}{1+\kappa_t} \ \|\bbz_{t}-\bbz^*\|_{\bbJ_t}^2, \end{equation} where the sequence $\kappa_t$ is given by \begin{align} &\kappa_t=\min \Bigg\{ \left( \frac{\beta^2}{P(\beta-1)\hat{\delta}} - \frac{2 \beta \phi \Gamma^2}{P (\phi-1)\hat{\delta}}\right)^{-1}\left(\alpha\Sigma - 2\alpha\zeta L^2/\Sigma\right), \nonumber \\ & \frac{2\alpha\hat{\delta}}{\phi\beta(\mu+L)}, \left( \Sigma - \frac{2 \beta \phi \alpha}{P(\phi-1)\hat{\delta}} \right)^{-1}\!\! \left(\frac{2\mu L}{\mu + L}\! -\! \frac{1}{\zeta}\! -\! \frac{4\alpha^2 P \zeta}{(1-\delta)^{-1}} \right) \Bigg\}\nonumber \end{align} and $\Sigma := \Lambda^{-1} - 2\alpha(1-\delta)$. \end{theorem} \begin{myproof} See Appendix \ref{app_thm_esom_linear_convg}. \end{myproof} Theorem \ref{thm:esom_linear_convg} provides a linear convergence rate of the PD-QN method in terms of the sequence $\| \bbz_t - \bbz^*\|_{\bbJ_t}^2$. From here, it remains to show that the sequence of primal iterates $\bbx_t$ also converges to the optimal argument $\bbx^*$ at a linear rate.
The resulting corollary follows as a direct consequence of the preceding theorem and is presented below. \begin{corollary}\label{esom_approx_error2} If Assumptions \ref{as_weight_bound}-\ref{as_inner_product} hold, then the sequence of squared errors $\|\bbx_{t}-\bbx^*\|^2$ generated by PD-QN converges to zero at a linear rate, i.e., \begin{equation}\label{ESOM_lin_convg_22} \|\bbx_{t}-\bbx^*\|^2 \leq \left(\frac{1}{1+\min_{t}\{\kappa_t\}}\right)^t \frac{\|\bbz_0-\bbz^*\|_{\bbJ_t}^2}{\alpha \Sigma}. \end{equation} \end{corollary} \begin{myproof} According to the definitions of the sequence $\bbz_t$ and matrix $\bbJ_t$, we can write $\|\bbz_t-\bbz^*\|_{\bbJ_t}^2=\alpha\|\bbx_t-\bbx^*\|_{\bbR_t}^2+ \|\bbnu_t-\bbnu^*\|_{\bbH_t}^2$, which can subsequently be lower bounded as $\|\bbz_t-\bbz^*\|_{\bbJ_t}^2 \geq \alpha \Sigma \|\bbx_t-\bbx^*\|^2+ (1/P) \|\bbnu_t-\bbnu^*\|^2$. This implies that $\|\bbx_t-\bbx^*\|^2\leq (1/\alpha \Sigma)\|\bbz_t-\bbz^*\|_{\bbJ_t}^2$. Combining this result with the linear convergence of the sequence $\|\bbz_t-\bbz^*\|_{\bbJ_t}^2$ in \eqref{pdqn_lin_convg} yields the claim in \eqref{ESOM_lin_convg_22}. \end{myproof} The results here establish a linear convergence rate to the exact solution of the consensus problem---this rate is comparable with state-of-the-art methods such as EXTRA \cite{shi2015extra} and ESOM \cite{mokhtari2016decentralized}. Furthermore, we stress that PD-QN does not require the internal minimization steps used in pure dual methods or the computation of exact Hessian information used in pure second order methods. We proceed to show the performance of PD-QN in numerical studies.
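As a concrete illustration of the curvature condition in Assumption \ref{as_inner_product}, the following sketch (in Python, with illustrative variable names) performs a standard BFGS inverse-Hessian update and falls back to keeping the previous approximation whenever the inner product between the variable and gradient variations is not strictly positive, mirroring the safeguard discussed in Remark \ref{rmk_inner_product_negative}:

```python
import numpy as np

def bfgs_inverse_update(H, s, y, tol=1e-12):
    """One BFGS update of an inverse Hessian approximation H.

    s : variable variation (x_{t+1} - x_t)
    y : gradient variation (g_{t+1} - g_t)

    The update is skipped (H is returned unchanged) whenever the
    curvature condition s^T y > 0 fails, mirroring the fallback of
    keeping C_{n_i,t+1} = C_{n_i,t} when the inner product between
    the neighborhood variations is not strictly positive.
    """
    curv = float(s @ y)
    if curv <= tol:
        return H  # curvature condition violated: keep previous approximation
    rho = 1.0 / curv
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # Standard BFGS inverse update; satisfies the secant condition H_new @ y = s.
    return V @ H @ V.T + rho * np.outer(s, s)
```

When the update is applied, the returned matrix satisfies the secant condition $\bbH^{+}\bby = \bbs$ by construction; when it is skipped, the previous curvature estimate is simply reused.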
\section{Numerical Results} \label{sec_numerical_results} \begin{figure*}[t] \centering \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[height=\textheight,width=\textwidth,keepaspectratio]{fig_aug_quad50_ab-eps-converted-to.pdf} \caption{}\label{fig:4a} \end{subfigure} \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[height=\textheight,width=\textwidth,keepaspectratio]{fig_aug_quad50_b-eps-converted-to.pdf} \caption{}\label{fig:4b} \end{subfigure} \caption{Convergence paths for exact distributed methods with (a) small and (b) large condition number. PD-QN provides a significant improvement in convergence time over other methods in both cases.}\label{figure_simulation_quad} \end{figure*} We provide numerical simulations of the performance of PD-QN on the consensus problem in \eqref{eq_primal_problem} and compare it against other first and second order distributed methods that converge to the exact solution. In particular, we compare against the first order methods dual ascent (DA) \cite{cRabbatNowak04} and D-ADMM \cite{Schizas2008-1,Shi2014-ADMM}, and the second order methods ESOM \cite{mokhtari2016decentralized} and D-BFGS \cite{eisen2017decentralized}. We demonstrate these results for two objective functions common in distributed learning---linear least squares regression and logistic regression. We begin with the linear least squares regression problem, which can be formulated as a quadratic program. Specifically, we consider the objective function \begin{align} f(\bbx) := \sum_{i=1}^n \frac{1}{2} \bbx^T \bbA_i \bbx + \bbb_i^T \bbx \label{eq_simulation_problem} \end{align} where $\bbA_i \in \reals^{p \times p}$ and $\bbb_i \in \reals^p$ are parameters available to node $i$.
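For concreteness, the local objective in \eqref{eq_simulation_problem} and its gradient can be written as the following sketch (function names are illustrative):

```python
import numpy as np

def local_quadratic(A_i, b_i):
    """Local objective f_i(x) = 0.5 x^T A_i x + b_i^T x and its gradient."""
    def f(x):
        return 0.5 * x @ A_i @ x + b_i @ x
    def grad(x):
        return A_i @ x + b_i  # exact gradient of the quadratic
    return f, grad
```

For symmetric positive definite $\sum_i \bbA_i$, the common minimizer of the aggregate objective is available in closed form as $\bbx^* = -(\sum_i \bbA_i)^{-1}\sum_i \bbb_i$, which is what allows the relative error in the plots to be computed against an exact reference solution.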
As a means of controlling the condition number of the problem, we define the matrices $\bbA_i = \text{diag}\{ \bba_i \}$, and for a chosen condition number $10^{2\eta}$, $\bba_i$ is given $p/2$ elements chosen randomly from the set $\{1, 10^1, \hdots, 10^{\eta}\}$ and $p/2$ elements chosen randomly from the set $\{1, 10^{-1}, \hdots, 10^{-\eta}\}$. It is then the case that the full matrix $\sum_{i=1}^n \bbA_i$ has eigenvalues in the range $[n 10^{-\eta}, n 10^{\eta}]$ and the intended condition number. The vectors $\bbb_i$, alternatively, are chosen uniformly and randomly from the box $[0,1]^p$. In all initial simulations we fix the variable dimension $p=5$ and $n=20$ nodes and use a $d$-regular cycle for the graph, in which $d$ is an even number and nodes are connected to their $d/2$ nearest neighbors in either direction. The other parameters, such as the condition number $10^{2\eta}$ and the number of nodes $n$, are varied by simulation. The regularization parameters for BFGS are chosen to be $\gamma = \Gamma = 10^{-1}$. For all methods we choose a constant stepsize and attempt to pick the largest stepsize for which the algorithms are observed to converge. Representative convergence paths for the five compared methods are shown in Figure \ref{figure_simulation_quad}. Specifically, the convergence paths represent the relative error with respect to the exact solution $\bbx^*$ versus the number of iterations. The exact solution $\bbx^*$ can be found in closed form for the quadratic problem in \eqref{eq_simulation_problem} and the relative error is then evaluated as \begin{align} \text{error}_{t} = \frac{1}{n} \sum_{i=1}^n \frac{\| \bbx_{i,t} - \bbx^*\|^2}{\|\bbx^*\|^2}. \label{eq_error_def} \end{align} Figure \ref{fig:4a} shows the convergence rates of all algorithms for the quadratic problem with small condition number $\eta=0$. Observe that PD-QN and D-ADMM converge substantially faster than all other methods, achieving an average error of $10^{-10}$ by iteration 100.
The closest performing method in this simulation is the exact second order method ESOM, which does not quite reach an error of $10^{-8}$ after 400 iterations. While ESOM uses exact Hessian information in the primal update, it uses only a first order update in the dual domain, whereas PD-QN uses a quasi-Newton update in the dual domain. The decentralized ADMM method, on the other hand, is able to minimize the augmented Lagrangian of the quadratic objective easily in closed form, and thus achieves strong convergence rates with just first order information. For a larger condition number $\eta=1$, the corresponding convergence paths are given in Figure \ref{fig:4b}. Here, the performance of the first order methods DA and D-ADMM degrades, as is commonly the case for first order methods on more ill-conditioned problems. The performance of the second order methods, as expected, degrades as well, but they still outperform the first order methods. PD-QN is the fastest converging method, reaching an error of $10^{-10}$ after 600 iterations. \begin{table}[h] \begin{tabular}{ll} & Average Runtime (ms) \\\hline PD-QN & 2.2 \\ DA & 0.22 \\ D-ADMM & 0.25 \\ ESOM & 0.87 \\ D-BFGS & 6.7 \end{tabular} \caption{Average runtimes per iteration for each method being compared.} \label{tab_run} \end{table} The average runtime per iteration for each method is reported in Table \ref{tab_run}. It can be seen that the quasi-Newton methods have a higher runtime per iteration than the first order methods and ESOM. Both the first order methods and ESOM have very low complexity per iteration for quadratic problems such as the one considered here, because the associated augmented Lagrangian subproblems can be solved in closed form and the Hessian is constant. Generally speaking, higher order methods are more beneficial in scenarios in which communication complexity is of larger concern than computation complexity, as is often the case in settings such as sensor networks and distributed computing.
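To ground the experimental setup above, the following sketch (helper names and the random seed are illustrative) generates the diagonal data matrices with a prescribed condition number and evaluates the relative error metric in \eqref{eq_error_def}:

```python
import numpy as np

def generate_quadratic_data(n, p, eta, seed=0):
    """Per-node data for the quadratic problem: diagonal A_i whose first
    p/2 diagonal entries are drawn from {1, 10, ..., 10^eta} and the rest
    from {1, 10^-1, ..., 10^-eta}, plus b_i uniform on [0, 1]^p."""
    rng = np.random.default_rng(seed)
    A, b = [], []
    for _ in range(n):
        hi = 10.0 ** rng.integers(0, eta + 1, size=p // 2)
        lo = 10.0 ** (-rng.integers(0, eta + 1, size=p - p // 2))
        A.append(np.diag(np.concatenate([hi, lo]).astype(float)))
        b.append(rng.uniform(0.0, 1.0, size=p))
    return A, b

def relative_error(X, x_star):
    """Average relative error across nodes: mean_i ||x_i - x*||^2 / ||x*||^2."""
    return float(np.mean([np.linalg.norm(x - x_star) ** 2 for x in X])
                 / np.linalg.norm(x_star) ** 2)
```

By construction every diagonal entry of $\sum_i \bbA_i$ lies in $[n 10^{-\eta}, n 10^{\eta}]$, so the condition number of the aggregate matrix is at most the intended $10^{2\eta}$.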
\begin{figure} \centering \includegraphics[width=.45\textwidth,height=.25\textheight,keepaspectratio]{aug_hist_plot-eps-converted-to.pdf} \caption{Empirical distribution of the number of information exchanges needed to reach an error of $10^{-5}$ for PD-QN, DA, and ESOM for the quadratic objective function with small condition number. The convergence gain of PD-QN is great enough that, even with the larger communication overhead, it outperforms the first and second order methods when comparing information exchanges.} \label{fig_primal_hist} \end{figure} \begin{figure} \centering \includegraphics[width=.45\textwidth,height=.2\textheight,keepaspectratio]{aug_hist2-eps-converted-to.pdf} \caption{Empirical distribution of the number of information exchanges needed to reach an error of $10^{-7}$ for PD-QN and ESOM for the quadratic objective function with large condition number. PD-QN outperforms the second order ESOM method even with the additional communication overhead. } \label{fig_primal_hist2} \end{figure} As previously discussed in Remark \ref{remark_comm}, the communication burden of PD-QN is larger than that of the alternatives, making the comparison in terms of the number of iterations in Figure \ref{figure_simulation_quad} not entirely complete. To provide a more comprehensive comparison, we display in Figure \ref{fig_primal_hist} an empirical distribution of the performance of PD-QN, DA, and ESOM on the quadratic program with small condition number over 1000 different randomly generated experiments. In this case, however, we display the number of information exchanges needed to reach an error of $10^{-5}$. For this problem, observe that PD-QN requires around 300 local information exchanges per node while DA and ESOM require around 700 and 900 exchanges, respectively. Indeed, the higher communication burden of PD-QN does not in this case outweigh the gain in the number of iterations.
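As a back-of-the-envelope illustration of this tradeoff, the sketch below tallies total exchanges per node using the per-iteration overheads stated in Remark \ref{remark_comm} ($K+5$ for PD-QN, $2$ for DA, $K+3$ for ESOM). The iteration counts here are hypothetical placeholders chosen only for illustration, not measured values:

```python
# Per-iteration communication overheads from Remark 1; K is the number
# of primal Taylor-expansion steps (K = 2 is an illustrative choice).
K = 2
per_iter = {"PD-QN": K + 5, "DA": 2, "ESOM": K + 3}

# Hypothetical iteration counts to reach a fixed target error.
iters = {"PD-QN": 50, "DA": 350, "ESOM": 180}

# Total exchanges per node = iterations x exchanges per iteration.
total = {m: iters[m] * per_iter[m] for m in per_iter}
```

Under these placeholder counts, the larger per-iteration overhead of PD-QN is more than offset by its smaller iteration count, which is the qualitative behavior observed empirically.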
For a similar comparison on an ill-conditioned problem, we present in Figure \ref{fig_primal_hist2} the empirical distributions of the communication exchanges required for the problem with large condition number for the quasi-Newton PD-QN method and the exact second order ESOM method. Overall, even with the additional communication overhead, PD-QN outperforms the existing primal-dual second order alternative, requiring roughly half as many communications. This highlights the additional benefit of the quasi-Newton update in the dual domain of the PD-QN method for problems with larger condition numbers. \subsection{Logistic regression} We perform additional simulations to evaluate the performance of PD-QN on a more complex objective function with varying condition number, namely the distributed logistic regression problem, which is common in machine learning. We seek to learn a linear classifier $\bbx$ to predict the binary label $v_j \in \{-1,1\}$ of a data point given its feature vector $\bbu_j \in \reals^p$. For a set of training samples, we compute the likelihood of a label given a feature vector as $P(v=1 | \bbu) = 1/(1+\exp(-\bbu^T\bbx))$ and find the classifier $\bbx$ that maximizes the log likelihood over all samples. In the distributed setting, we assume that the training set is large and distributed amongst $n$ nodes, each holding $q_i$ samples. Each node $i$ then has access to an objective function that is the loss over its training samples $\{\bbu_{il}\}_{l=1}^{q_i}$ and $\{v_{il}\}_{l=1}^{q_i}$. The aggregate objective function can be defined as \begin{align} f(\bbx) := \frac{\lambda}{2}\| \bbx\|^2 + \sum_{i=1}^{n} \sum_{l=1}^{q_i} \log[ 1 + \exp(-v_{il}\bbu_{il}^T\bbx)], \label{eq_logistic_problem} \end{align} where the first term is a regularization term used to reduce overfitting and is parametrized by $\lambda \geq 0$.
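For reference, the loss and gradient implied by \eqref{eq_logistic_problem}, evaluated over a given set of samples with the regularizer included once, can be sketched as follows (a numerically stable evaluation of the log-loss; the function name and data layout are illustrative):

```python
import numpy as np

def logistic_loss_grad(x, U, v, lam):
    """Regularized logistic loss over samples (rows of U, labels v in {-1,1})
    and its gradient; cf. the aggregate objective with regularizer lam."""
    margins = v * (U @ x)
    # log(1 + exp(-m)) evaluated stably via logaddexp(0, -m)
    loss = 0.5 * lam * (x @ x) + np.sum(np.logaddexp(0.0, -margins))
    # sigma = exp(-m) / (1 + exp(-m)); may warn for very large margins
    sigma = 1.0 / (1.0 + np.exp(margins))
    grad = lam * x - U.T @ (v * sigma)
    return loss, grad
```

In a distributed implementation, each node $i$ would evaluate this on its own $q_i$ samples; the returned gradient is what a first order or quasi-Newton update would consume.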
\begin{figure*}[t] \centering \begin{subfigure}[t]{.43\textwidth} \centering \includegraphics[height=\textheight,width=\textwidth,keepaspectratio]{fig_aug_log3-eps-converted-to.pdf} \caption{}\label{fig:5a} \end{subfigure} \begin{subfigure}[t]{.43\textwidth} \centering \includegraphics[height=\textheight,width=\textwidth,keepaspectratio]{fig_aug_log3_time-eps-converted-to.pdf} \caption{}\label{fig:5b} \end{subfigure} \caption{Convergence paths for exact distributed methods on the logistic regression problem, compared by (a) number of iterations and (b) runtime. The PD-QN method can be observed to split the difference between the first order EXTRA method and the second order ESOM method in terms of both metrics. }\label{figure_simulation_log} \end{figure*} For our simulations we generate an artificial dataset of feature vectors $\bbu_{il}$, drawing samples with label $v_{il}=1$ from a normal distribution with mean $\mu$ and standard deviation $\sigma_{+}$, and samples with label $v_{il}=-1$ from a normal distribution with mean $-\mu$ and standard deviation $\sigma_{-}$. Each node $i$ receives $q_i=100$ samples and the regularization parameter is fixed to $\lambda= 10^{-4}$. The feature vector parameters are set as $\mu=3$ and $\sigma_{+}=\sigma_{-}=1$ to make the data linearly separable. The other parameters are set as in the earlier simulations, i.e. $n=20$ nodes connected in a $d=4$-regular cycle, with $p=4$. The PD-QN regularization parameters are chosen as $\Gamma = \gamma = 10^{-1}$. For the logistic regression simulations, the form of the objective function in \eqref{eq_logistic_problem} does not permit an easily computable primal minimizer as in the case of the quadratic problem in \eqref{eq_simulation_problem}. Therefore, dual methods such as DA, ADMM, and D-BFGS require solving internal optimization problems at each iteration and are generally infeasible in this type of setting.
Therefore, for these simulations we compare the performance of PD-QN only against the ``primal-dual'' type methods: the first order EXTRA and the second order ESOM. The resulting convergence paths are shown in Figure \ref{figure_simulation_log}. The figure on the left compares convergence in terms of the number of iterations while the figure on the right compares convergence in terms of runtime. The results demonstrate a case in which the approximate second order PD-QN method splits the difference between the first and second order methods in terms of both metrics. When compared in terms of the number of iterations, PD-QN performs similarly to, but slightly worse than, ESOM, which uses exact Hessian information, while both methods outperform the first order EXTRA method. This result reflects the fact that the logistic regression problem has a more complex Hessian that is not as easily approximated with quasi-Newton methods. The ESOM method therefore benefits from using exact second order information, but at a higher computational cost. Indeed, the runtime comparison shows that EXTRA outperforms both methods because it has very low computational cost. Additionally, PD-QN outperforms ESOM in this metric because computing exact Hessians for the logistic regression problem can be very costly, which makes it preferable to approximate the curvature information using the quasi-Newton updates of PD-QN. PD-QN can thus be seen here to balance the iteration complexity benefits of the second order computation in ESOM with the runtime benefits of the first order updates in EXTRA. \subsection{Discussion} In this section, we compare the performance of PD-QN against a number of popular decentralized consensus methods that use either first or second order information on both a distributed linear least squares, or quadratic, problem and a distributed logistic regression problem.
In these results we can observe a number of tradeoffs between iteration complexity, computational complexity, and communication complexity. The PD-QN method outperforms the other methods in iteration complexity because it estimates a second order step in both the primal and dual variable updates, whereas the other methods perform first order updates in one or both of the primal and dual variable updates. The use of second order information allows for speedups in both the maximization and minimization of the augmented Lagrangian function. The benefit of the second order update is observed to be even greater when problems with larger condition numbers are considered. While the use of second order updates in PD-QN leads to higher computational complexity, we observe that the iteration complexity benefits generally outweigh the computational complexity costs in comparison to other first and second order decentralized methods. \section{Conclusion} \label{sec_conclusion} We considered the problem of decentralized consensus optimization, in which nodes seek to minimize an aggregate cost function while only being aware of a local strictly convex component. The problem was solved in the dual domain through the introduction of PD-QN as a decentralized quasi-Newton method. In PD-QN, each node approximates the curvature of its local cost function and those of its neighboring nodes to correct its descent direction. Analytical and numerical results were established showing its convergence and its improvement over existing consensus methods, respectively. \begin{appendices} \section{Proof of Lemma \ref{lemma_pdqn_error}}\label{app_lemma_pdqn_error} The details here follow closely those of a similar lemma in \cite{mokhtari2016decentralized}. Consider the primal and dual updates of PD-QN in \eqref{eq_pdqn_final1} and \eqref{eq_pdqn_final2}.
To prove the result in \eqref{opt_res_PD-QN2}, we begin by recalling the primal gradient $\bbg_{t} = \nabla f(\bbx_{t}) + \bby_{t} + \alpha (\bbI-\bbZ)\bbx_{t}$ and rearrange terms in \eqref{eq_pdqn_final1} to obtain \begin{align}\label{pdqn_gen_proof_001} \nabla f(\bbx_{t})+& \bby_{t}+{\alpha}(\bbI-\bbZ)\bbx_{t} +\mathcal{G}_{t,K}(\bbx_{t+1}-\bbx_{t})=\bb0. \end{align} Define $\bbL_{t}$ to be the Hessian of the augmented Lagrangian $\bbL_{t}:=\nabla^2_{\bbx \bbx} \ccalL_{\alpha}(\bbx_{t},\bby_{t}) = \nabla^2 f(\bbx_{t}) + \alpha (\bbI-\bbZ)$. We add and subtract the term $\bbL_{t}(\bbx_{t+1}-\bbx_{t})$ to \eqref{pdqn_gen_proof_001} to obtain \begin{align}\label{pdqn_gen_proof_002} &\nabla f(\bbx_{t})+\nabla^2f(\bbx_{t})(\bbx_{t+1}-\bbx_{t})+ \bby_{t} \\& +{\alpha}(\bbI-\bbZ)\bbx_{t+1}+(\mathcal{G}_{t,K}- \bbL_{t})(\bbx_{t+1}-\bbx_{t})=\bb0.\nonumber \end{align} Now using the definition of the error vector $\bbsigma_{t}$ in \eqref{esom_error_vec} we can rewrite \eqref{pdqn_gen_proof_002} as \begin{align}\label{pdqn_gen_proof_003} &\nabla f(\bbx_{t+1})+\bby_{t} +{\alpha}(\bbI-\bbZ)\bbx_{t+1} +\bbsigma_{t} \\ &\qquad +\alpha\bbH_{t}^{-1}(\bbI-\bbZ)(\bbx_{t+1}-\bbx^*) =\bb0, \nonumber \end{align} where we use the fact that $\bbL_{t} - \nabla^2f(\bbx_{t}) = \alpha(\bbI-\bbZ)$ in the definition of $\bbsigma_{t}$ in \eqref{esom_error_vec}. To prove the claim in \eqref{opt_res_PD-QN2} from \eqref{pdqn_gen_proof_003}, first consider that one of the KKT conditions of the optimization problem in \eqref{eq_primal_problem} is \begin{equation}\label{KKT_condition} \nabla f(\bbx^*) + (\bbI-\bbZ)^{1/2}\bbnu^*=\nabla f(\bbx^*) +\bby^*=\bb0. \end{equation} Subtracting the equality in \eqref{KKT_condition} from \eqref{pdqn_gen_proof_003} yields \begin{align}\label{important_result_100} &\nabla f(\bbx_{t+1})-\nabla f(\bbx^*) +\bby_{t}-\bby^* \nonumber\\ &\quad+\alpha\bbH_{t}^{-1}(\bbI-\bbZ) (\bbx_{t+1}-\bbx^*) +\bbsigma_{t} =\bb0.
\end{align} Furthermore, by using the dual variable update in \eqref{eq_pdqn_final2} along with the consensus constraint $\alpha(\bbI-\bbZ)\bbx^*=\bb0$, we can additionally claim that \begin{equation}\label{important_result_200} \bby_{t} =\bby_{t+1}-\alpha \bbH^{-1}_{t} (\bbI-\bbZ)(\bbx_{t+1}-\bbx^*). \end{equation} Substituting $\bby_{t}$ in \eqref{important_result_100} by the expression on the right hand side of \eqref{important_result_200} leads to the claim in \eqref{opt_res_PD-QN2}. \section{Proof of Theorem \ref{thm:esom_linear_convg}}\label{app_thm_esom_linear_convg} The details here are again adapted from a similar result in \cite{mokhtari2016decentralized}, but modified to consider the quasi-Newton primal and dual updates present in PD-QN. To prove this result, we begin by applying a well-known lower bound for the inner product $(\bbx_{t+1}-\bbx^*)^T(\nabla f(\bbx_{t+1})-\nabla f(\bbx^*))$ that incorporates both the strong convexity constant $\mu$ and the Lipschitz constant $L$ of the gradients. This inequality can be written as \begin{align}\label{proof_lin_x} &\frac{1}{\mu + L}(\mu L\|\bbx_{t+1}-\bbx^*\|^2 + \|\nabla f(\bbx_{t+1})-\nabla f(\bbx^*)\|^2) \nonumber \\ &\qquad\leq (\bbx_{t+1}-\bbx^*)^T(\nabla f(\bbx_{t+1})-\nabla f(\bbx^*)). \end{align} The result in \eqref{opt_res_PD-QN2} gives us an expression for the difference $\nabla f(\bbx_{t+1})-\nabla f(\bbx^*)$ by rearranging terms as \begin{align}\label{proof_lin_pdqn_001} \nabla f(\bbx_{t+1})-\nabla f(\bbx^*) &= -\bby_{t+1}+\bby^* - \bbsigma_{t}. \end{align} We now substitute \eqref{proof_lin_pdqn_001} into \eqref{proof_lin_x} and multiply both sides of the inequality by $2\alpha$ to obtain \begin{align}\label{proof_lin_pdqn_002} & \frac{2\alpha \mu L}{\mu+L}\|\bbx_{t+1}-\bbx^*\|^2+\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2\nonumber\\ & \leq -2\alpha(\bbx_{t+1}-\bbx^*)^T(\bby_{t+1}-\bby^*) -2\alpha(\bbx_{t+1}-\bbx^*)^T\bbsigma_{t}.
\end{align} To proceed from here, we will substitute the modified dual variable $\bby_{t}$ back to the original dual variable $\bbnu_{t}$ by recalling the transformation $\bby_{t} = (\bbI-\bbZ)^{1/2}\bbnu_{t}$. First, we rewrite \eqref{proof_lin_pdqn_002} as \begin{align}\label{proof_lin_pdqn_003} & \frac{2\alpha \mu L}{\mu+L}\|\bbx_{t+1}-\bbx^*\|^2+\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2\nonumber\\ & \leq -2\alpha(\bbx_{t+1}-\bbx^*)^T(\bbI-\bbZ)^{1/2}(\bbnu_{t+1}-\bbnu^*) \nonumber \\ & \qquad -2\alpha(\bbx_{t+1}-\bbx^*)^T\bbsigma_{t}. \end{align} Now we can derive a similar expression as in \eqref{important_result_200} with respect to $\bbnu_{t}$. Again using the fact that $(\bbI-\bbZ)^{1/2}\bbx^* = \bb0$, we can add this term to the dual update in \eqref{eq_pdqn_final2} in terms of $\bbnu_{t}$, and rearrange terms to obtain \begin{equation}\label{proof_lin_pdqn_004} \alpha(\bbI-\bbZ)^{1/2}(\bbx_{t+1}-\bbx^*) = \bbH_{t}(\bbnu_{t+1}-\bbnu_{t}). \end{equation} Now we can substitute \eqref{proof_lin_pdqn_004} back into \eqref{proof_lin_pdqn_003} to obtain \begin{align}\label{proof_lin_pdqn_005} & \frac{2\alpha \mu L}{\mu+L}\|\bbx_{t+1}-\bbx^*\|^2+\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2\nonumber\\ & \leq -2(\bbnu_{t+1}-\bbnu_{t})^T\bbH_{t} (\bbnu_{t+1}-\bbnu^*) -2\alpha(\bbx_{t+1}-\bbx^*)^T\bbsigma_{t}. \end{align} From here, we wish to write the primal and dual terms in \eqref{proof_lin_pdqn_005} in terms of a combined variable $\bbz_{t} := [\bbx_{t}; \bbnu_{t}]$. To that end, we first decompose the error term $\bbsigma_{t}$ as $\bbsigma_{t} = \hbsigma_{t} + \left[\mathcal{G}_{t,K}-\alpha(\bbI-\bbZ)\right](\bbx_{t+1}-\bbx_{t})$, where $\hbsigma_{t} := \nabla f(\bbx_{t})-\nabla f(\bbx_{t+1}) + \alpha\bbH_{t}^{-1}(\bbI-\bbZ)(\bbx_{t+1}-\bbx^*)$.
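The coercivity bound \eqref{proof_lin_x} invoked at the start of this proof is the standard property of $\mu$-strongly convex functions with $L$-Lipschitz gradients. As a quick numerical sanity check (an illustrative sketch on an arbitrary quadratic test function, not part of the proof), one can verify it for $f(\bbx)=\tfrac{1}{2}\bbx^T\bbA\bbx$:

```python
import numpy as np

def coercivity_gap(A, x, y):
    """lhs - rhs of the strong-convexity/Lipschitz bound
        (x-y)^T (grad f(x) - grad f(y))
          >= [mu*L*||x-y||^2 + ||grad f(x) - grad f(y)||^2] / (mu + L)
    for the quadratic f(x) = x^T A x / 2, whose gradient is A x."""
    eigs = np.linalg.eigvalsh(A)
    mu, L = eigs.min(), eigs.max()   # strong convexity / Lipschitz constants
    g = A @ (x - y)                  # grad f(x) - grad f(y) for a quadratic
    lhs = (x - y) @ g
    rhs = (mu * L * ((x - y) @ (x - y)) + g @ g) / (mu + L)
    return lhs - rhs

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + np.eye(6)              # symmetric positive definite
gaps = [coercivity_gap(A, rng.standard_normal(6), rng.standard_normal(6))
        for _ in range(100)]
```

The gap is non-negative for every trial, as expected from the spectral argument: per eigenvalue $\lambda\in[\mu,L]$ of $\bbA$, the bound reduces to $(\lambda-\mu)(L-\lambda)\geq 0$.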
Now we rewrite \eqref{proof_lin_pdqn_005} as \begin{align}\label{proof_0051} & \frac{2\alpha \mu L}{\mu+L}\|\bbx_{t+1}-\bbx^*\|^2+\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2\nonumber\\ & \leq -2(\bbnu_{t+1}-\bbnu_{t})^T\bbH_{t} (\bbnu_{t+1}-\bbnu^*) \nonumber\\ & -2\alpha(\bbx_{t+1}-\bbx^*)^T \bbR_{t}(\bbx_{t+1}-\bbx_{t}) -2\alpha(\bbx_{t+1}-\bbx^*)^T\hbsigma_{t}, \end{align} where we define $\bbR_{t} := \left[\mathcal{G}_{t,K}-\alpha(\bbI-\bbZ)\right]$ for notational simplicity. We transform the vector difference products on the right hand side of \eqref{proof_0051} using a standard polarization identity: for any vectors $\bba$, $\bbb$, and $\bbc$ we can write $ 2(\bba-\bbb)^T(\bba-\bbc)=\|\bba-\bbb\|^2+\|\bba-\bbc\|^2-\|\bbb-\bbc\|^2$. By applying this substitution into \eqref{proof_0051} for the first two terms on the right hand side, we have that \begin{align}\label{proof_005} & \frac{2\alpha \mu L}{\mu+L}\|\bbx_{t+1}-\bbx^*\|^2+\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2\nonumber\\ & \leq \left( \|\bbnu_{t}-\bbnu^*\|^2_{\bbH_t} -\|\bbnu_{t+1}-\bbnu_{t}\|^2_{\bbH_t}- \|\bbnu_{t+1} -\bbnu^*\|^2_{\bbH_t} \right) \nonumber \\ &+ \alpha\left( \|\bbx_{t}-\bbx^*\|^2_{\bbR_t} -\|\bbx_{t+1}-\bbx_{t}\|^2_{\bbR_t}- \|\bbx_{t+1} -\bbx^*\|^2_{\bbR_t} \right) \nonumber \\ &\qquad -2\alpha(\bbx_{t+1}-\bbx^*)^T\hbsigma_{t}. \end{align} We make an additional substitution to upper bound \eqref{proof_005}. Observe from \eqref{proof_lin_pdqn_004} that we have the equivalence $\|\bbnu_{t+1}-\bbnu_{t}\|_{\bbH_t^2}^2 = \|\bbx_{t+1}-\bbx^*\|_{\alpha^2(\bbI-\bbZ)}^2$. We may further use the upper and lower bounds of the eigenvalues of $\bbH_t$ in \eqref{eq_prop_eigen_bounds_dual} to show that $\| \bby \|^2_{\bbH_t^2} \leq \| \bby \|^2_{\bbH_t}$ for any vector $\bby$.
In particular, we may consider the upper bound $\| \bby \|^2_{\bbH_t^2} \leq \| \bby \|^2 / \Gamma^2$ and the lower bound $\| \bby \|^2/P \leq \| \bby \|^2_{\bbH_t}$ and show that the former upper bound is lower than the latter lower bound, i.e. \begin{align} \frac{1}{\Gamma^2} \| \bby\|^2 &\leq \frac{1}{P} \|\bby\|^2 = \left(\frac{1}{\Gamma + n/\gamma}\right) \|\bby\|^2 \nonumber \\ \frac{1}{\Gamma - 1} &\leq \frac{\Gamma \gamma}{n}. \label{eq_proof_eq} \end{align} The inequality in \eqref{eq_proof_eq} holds when $\Gamma \leq 1$, as the left hand side will be negative while the right hand side is positive. We proceed with the main proof by introducing the matrix $\bbJ_t := \diag(\alpha \bbR_t, \bbH_t)$ and the combined vector $\bbz_t := [\bbx_t; \bbnu_t]$ to combine the terms in \eqref{proof_005}. Furthermore, we can substitute from \eqref{proof_lin_pdqn_004} and the logic in \eqref{eq_proof_eq} that $-\|\bbnu_{t+1}-\bbnu_{t}\|_{\bbH_t}^2 \leq -\|\bbx_{t+1}-\bbx^*\|_{\alpha^2(\bbI-\bbZ)}^2$ and rearrange terms in \eqref{proof_005} to obtain \begin{align}\label{proof_007} & \|\bbz_t-\bbz^*\|_{\bbJ_t}^2 -\|\bbz_{t+1}-\bbz^*\|_{\bbJ_t}^2 \\ &\quad\geq \frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2 +\alpha \Sigma\|\bbx_{t+1}-\bbx_{t}\|^2 \nonumber\\ &\qquad +\|\bbx_{t+1}-\bbx^*\|^2_{\frac{2\alpha \mu L}{\mu+L}\bbI+\alpha^2(\bbI-\bbZ)} +2\alpha(\bbx_{t+1}-\bbx^*)^T\hbsigma_{t}.\nonumber \end{align} Note that the inner product $2(\bbx_{t+1}-\bbx^*)^T\hbsigma_{t}$ is bounded below by $-(1/\zeta)\|\bbx_{t+1}-\bbx^*\|^2-\zeta \|\hbsigma_{t}\|^2$ for any positive constant $\zeta>0$.
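Two elementary vector facts invoked in this argument -- the polarization identity $2(\bba-\bbb)^T(\bba-\bbc)=\|\bba-\bbb\|^2+\|\bba-\bbc\|^2-\|\bbb-\bbc\|^2$ and the Young-type lower bound $2\bbu^T\bbv\geq -(1/\zeta)\|\bbu\|^2-\zeta\|\bbv\|^2$ -- are easy to spot-check numerically. The following sketch (illustrative only, not part of the proof) verifies both on random vectors:

```python
import numpy as np

def polarization_residual(a, b, c):
    # 2 (a-b)^T (a-c)  minus  ||a-b||^2 + ||a-c||^2 - ||b-c||^2; should vanish
    lhs = 2.0 * (a - b) @ (a - c)
    rhs = (a - b) @ (a - b) + (a - c) @ (a - c) - (b - c) @ (b - c)
    return lhs - rhs

def young_gap(u, v, zeta):
    # 2 u^T v + (1/zeta)||u||^2 + zeta||v||^2 equals
    # || u/sqrt(zeta) + sqrt(zeta) v ||^2, hence is non-negative for zeta > 0
    return 2.0 * (u @ v) + (u @ u) / zeta + zeta * (v @ v)

rng = np.random.default_rng(0)
residuals = [abs(polarization_residual(*rng.standard_normal((3, 5))))
             for _ in range(100)]
gaps = [young_gap(rng.standard_normal(5), rng.standard_normal(5),
                  float(rng.uniform(0.1, 10.0)))
        for _ in range(100)]
```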
Thus, the lower bound in \eqref{proof_007} can be updated as \begin{align}\label{proof_lin_pdqn_004b} &\|\bbz_{t}-\bbz^*\|_{\bbJ_t}^2-\|\bbz_{t+1}-\bbz^*\|_{\bbJ_t}^2\nonumber\\ & \geq \|\bbx_{t+1}-\bbx^*\|_{(\frac{2\alpha \mu L}{\mu+L}-\frac{\alpha}{\zeta})\bbI+\alpha^2(\bbI-\bbZ)}^2+\alpha \Sigma\|\bbx_{t+1}-\bbx_{t}\|^2 \nonumber\\ &\ +\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2-\alpha \zeta\|\hbsigma_{t}\|^2. \end{align} The linear convergence result in \eqref{pdqn_lin_convg} is equivalent to establishing the inequality $\|\bbz_{t}-\bbz^*\|_{\bbJ_t}^2-\|\bbz_{t+1}-\bbz^*\|_{\bbJ_t}^2 \geq \kappa_{t}\|\bbz_{t+1}-\bbz^*\|_{\bbJ_t}^2$. In other words, we may lower bound the right hand side of \eqref{proof_lin_pdqn_004b} by $\kappa_{t}\|\bbz_{t+1}-\bbz^*\|_{\bbJ_t}^2$, i.e., \begin{align}\label{proof_lin_pdqn_005b} &\kappa_{t}\|\bbnu_{t+1}-\bbnu^*\|_{\bbH_t}^2+\kappa_{t} \alpha \|\bbx_{t+1}-\bbx^*\|_{\bbR_t}^2\nonumber\\ & \leq \|\bbx_{t+1}-\bbx^*\|_{(\frac{2\alpha \mu L}{\mu+L}-\frac{\alpha}{\zeta})\bbI+\alpha^2(\bbI-\bbZ)}^2+\alpha \|\bbx_{t+1}-\bbx_{t}\|_{\bbR_t}^2 \nonumber\\ &\ +\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2-\alpha \zeta\|\hbsigma_{t}\|^2. \end{align} We will determine values of $\kappa_t$ for which the inequality in \eqref{proof_lin_pdqn_005b} is satisfied. A sufficient condition can be formulated by lower bounding the left hand side of the inequality in \eqref{proof_lin_pdqn_005b}. We may lower bound the terms $\|\bbnu_{t+1}-\bbnu^*\|^2_{\bbH_t} + \alpha \|\bbx_{t+1}-\bbx^*\|^2_{\bbR_t}$ by using the lower eigenvalue bound of $\bbH_t$---namely $P^{-1}$ from \eqref{eq_prop_eigen_bounds_dual} and the lower eigenvalue bound of $\bbR_t$---namely $\Lambda^{-1} - 2\alpha(1-\delta)$. Observe that this lower bound is obtained by combining the lower eigenvalue bound of $\mathcal{G}_{t,K}$ in \eqref{eq_prop_eigen_bounds_primal} with that of $-\alpha(\bbI-\bbZ)$ that can be found in, e.g.
\cite[Proposition 1]{mokhtari2017network}. From these bounds, we obtain \begin{align}\label{proof_lin_pdqn_005c} &\frac{\kappa_{t}}{P}\|\bbnu_{t+1}-\bbnu^*\|^2+\kappa_{t} \alpha \Sigma \|\bbx_{t+1}-\bbx^*\|^2\nonumber\\ & \leq \|\bbx_{t+1}-\bbx^*\|_{(\frac{2\alpha \mu L}{\mu+L}-\frac{\alpha}{\zeta})\bbI+\alpha^2(\bbI-\bbZ)}^2+\alpha \|\bbx_{t+1}-\bbx_{t}\|^2_{\bbR_t} \nonumber\\ &\ +\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2-\alpha \zeta\|\hbsigma_{t}\|^2, \end{align} where we define $\Sigma := \Lambda^{-1} - 2\alpha(1-\delta)$ for notational convenience. From here, we first find an upper bound for the term $\|\bbnu_{t+1}-\bbnu^*\|^2$. Consider the relation for $\bbnu_{t+1}-\bbnu^*$ derived from substituting $\bby_t = (\bbI-\bbZ)^{1/2}\bbnu_t$ into \eqref{opt_res_PD-QN2}. Further consider that for any $\beta, \phi > 1$ and arbitrary vectors $\bba$, $\bbb$, $\bbc$, we can write that $(1-\beta^{-1})(1-\phi^{-1})\|\bbc\|^2 \leq \|\bba + \bbb+\bbc\|^2+(\beta-1)\|\bba\|^2+(\phi-1)(1-\beta^{-1})\|\bbb\|^2$. Combining these we have that \begin{align}\label{proof_lin_pdqn_006} \|\bbnu_{t+1}-\bbnu^*\|^2 &\leq \frac{\beta^2}{(\beta-1)\Gamma \hat{\delta}}\|\bbx_{t+1}-\bbx_{t}\|_{\bbR_{t}}^2+\frac{\beta\phi}{(\phi-1)\Gamma\hat{\delta}}\|\hbsigma_{t}\|^2 \nonumber\\ & \quad + \frac{\phi\beta}{\Gamma\hat{\delta}}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2 , \end{align} where we define $\hat{\delta}$ as the smallest non-zero eigenvalue of $(\bbI-\bbZ)$ to simplify notation.
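The three-vector inequality quoted above follows from two applications of the Peter--Paul inequality $\|\bba+\bbb\|^2\geq(1-\beta^{-1})\|\bbb\|^2-(\beta-1)\|\bba\|^2$, and can be spot-checked numerically (an illustration only, not part of the proof):

```python
import numpy as np

def three_vector_gap(a, b, c, beta, phi):
    # rhs - lhs of
    #   (1 - 1/beta)(1 - 1/phi) ||c||^2
    #     <= ||a+b+c||^2 + (beta-1)||a||^2 + (phi-1)(1 - 1/beta)||b||^2
    # for beta, phi > 1; the gap should be non-negative
    s = a + b + c
    lhs = (1 - 1 / beta) * (1 - 1 / phi) * (c @ c)
    rhs = s @ s + (beta - 1) * (a @ a) + (phi - 1) * (1 - 1 / beta) * (b @ b)
    return rhs - lhs

rng = np.random.default_rng(0)
gaps = [three_vector_gap(*rng.standard_normal((3, 4)),
                         float(rng.uniform(1.01, 5.0)),
                         float(rng.uniform(1.01, 5.0)))
        for _ in range(200)]
```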
By substituting the upper bound in \eqref{proof_lin_pdqn_006} for the squared norm $\|\bbnu_{t+1}-\bbnu^*\|^2$ in \eqref{proof_lin_pdqn_005c} we obtain a sufficient condition for the result in \eqref{proof_lin_pdqn_005c} which is given by \begin{align}\label{proof_lin_pdqn_007} &\kappa_{t} \alpha \Sigma \|\bbx_{t+1}-\bbx^*\|^2 + \frac{\kappa_{t}\beta^2}{P (\beta-1)\Gamma\hat{\delta}}\|\bbx_{t+1}-\bbx_{t}\|_{\bbR_{t}}^2 \nonumber\\ &+ \frac{\kappa_{t}\phi\beta}{P \Gamma\hat{\delta}}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2 +\frac{\kappa_{t}\beta\phi\|\hbsigma_{t}\|^2 }{P (\phi-1)\Gamma\hat{\delta}}\nonumber\\ &\quad \leq \|\bbx_{t+1}-\bbx^*\|_{(\frac{2\alpha \mu L}{\mu+L}-\frac{\alpha}{\zeta})\bbI+\alpha^2(\bbI-\bbZ)}^2+\alpha \|\bbx_{t+1}-\bbx_{t}\|_{\bbR_t}^2 \nonumber\\ &\qquad +\frac{2\alpha}{\mu+L}\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2-\alpha \zeta\|\hbsigma_{t}\|^2. \end{align} It remains to bound the norm of the error term $\|\hbsigma_{t}\|^2$. Consider the standard inequality for squared norms that comes from the application of the Cauchy-Schwarz inequality, i.e. \begin{align}\label{eq_alo} \| \hbsigma_{t}\|^2 &\leq 2\| \nabla f(\bbx_{t})-\nabla f(\bbx_{t+1}) \|^2 \\ &\qquad + 2\| \alpha \bbH^{-1}_{t}(\bbI - \bbZ) (\bbx_{t+1}-\bbx^*)\|^2.
\nonumber \end{align} To bound the first term on the right hand side of \eqref{eq_alo}, we can use the Lipschitz continuity of the gradient implied from Assumption \ref{as_strongly_convex} to obtain \begin{align} 2\| \nabla f(\bbx_{t})-\nabla f(\bbx_{t+1}) \|^2 &\leq 2 L^2 \| \bbx_{t+1} -\bbx_{t}\|^2 \nonumber\\ & \leq \frac{2L^2}{\Sigma} \| \bbx_{t+1} -\bbx_{t}\|_{\bbR_t}^2.\label{eq_alo1} \end{align} The second term in \eqref{eq_alo} can subsequently be bounded using the Cauchy-Schwarz inequality along with the bound on the magnitude of $\bbH^{-1}_{t}$ from \eqref{eq_prop_eigen_bounds_dual} to obtain \begin{align}\label{eq_alo3} 2 \| \alpha \bbH^{-1}_{t} (\bbI - \bbZ) (\bbx_{t+1}-\bbx^*)\|^2 \leq 4\alpha^2 P(1-\delta) \| \bbx_{t+1}-\bbx^*\|^2. \end{align} Combining the results of \eqref{eq_alo1}-\eqref{eq_alo3}, we obtain \begin{align}\label{alg4} \| \hbsigma_{t}\|^2 &\leq 2L^2 \| \bbx_{t+1} -\bbx_{t}\|^2 + 4\alpha^2 P(1-\delta) \| \bbx_{t+1}-\bbx^*\|^2. \end{align} We proceed by substituting $\| \hbsigma_{t}\|^2$ in \eqref{proof_lin_pdqn_007} by the upper bound that is derived in \eqref{alg4}. After rearranging terms, we obtain \begin{align}\label{proof_lin_pdqn_008} & 0 \leq \|\bbx_{t+1}-\bbx^*\|_{(\frac{2\alpha \mu L}{\mu+L}- \frac{\alpha}{\zeta}-\kappa_{t} \alpha \Sigma -\frac{2\kappa_{t}\beta\phi \alpha^2 }{P(\phi-1)\hat{\delta}} -4\alpha^3P(1-\delta)\zeta)\bbI+\alpha^2(\bbI-\bbZ)}^2\nonumber\\ &\ +\left(\frac{2\alpha}{\mu+L}-\frac{\kappa_{t}\phi\beta}{P\Gamma\hat{\delta}}\right)\|{\nabla f}(\bbx_{t+1})-{\nabla f}(\bbx^*)\|^2 \nonumber\\ &\ +\!\!\Bigg[\alpha \Sigma\!-\!\frac{\kappa_{t}\beta^2}{P(\beta-1)\hat{\delta}} \!-\!\frac{2\kappa_{t}\beta\phi L^2 }{P(\phi-1)\hat{\delta}}\!-\!\frac{2\alpha \zeta L^2}{\Sigma} \Bigg]\|\bbx_{t+1}-\bbx_{t}\|_{\bbR_t}^2. \end{align} The inequality that is established in \eqref{proof_lin_pdqn_008} provides a condition on $\kappa_t$ that, when satisfied, implies the inequality in \eqref{proof_lin_pdqn_007} holds.
This in turn implies \eqref{proof_lin_pdqn_005b} and subsequently, and most importantly, the linear convergence statement in \eqref{pdqn_lin_convg}. We may satisfy the inequality in \eqref{proof_lin_pdqn_008} by ensuring that the coefficients of the three terms in \eqref{proof_lin_pdqn_008} are non-negative. This is to say that \eqref{proof_lin_pdqn_008} holds if $\kappa_{t}$ satisfies \begin{align}\label{havij} &\frac{2\alpha \mu L}{\mu+L}-\frac{\alpha}{\zeta}-\kappa_{t} \alpha \Sigma -\frac{2\kappa_{t}\beta\phi\alpha^2 }{P(\phi-1)\hat{\delta}} -2\alpha^3P\zeta \geq 0, \\ &\frac{2\alpha}{\mu+L} \geq \frac{\kappa_{t}\phi\beta}{P \Gamma \hat{\delta}} \nonumber \\ &\alpha \Sigma \geq\frac {\kappa_{t}\beta^2}{P(\beta-1)\hat{\delta}}+\frac{2\kappa_{t}\beta\phi\Gamma^2 }{P(\phi-1)\hat{\delta}}+\frac{2\alpha \zeta L^2}{\Sigma}.\nonumber \end{align} Observe that the expressions in \eqref{havij} are satisfied when $\kappa_t$ is chosen as in the statement of Theorem \ref{thm:esom_linear_convg}, in which case the claim in \eqref{pdqn_lin_convg} holds. \end{appendices} \urlstyle{same} \bibliographystyle{IEEEtran}
\section{Introduction} \label{Chapter1} Twistor theory, as first outlined by Roger Penrose in the late 1960s \cite{Penrose:1967wn}, is a program which relates physical objects on (in general, complex) Minkowski space-time to geometrical data in complex projective spaces called \emph{twistor spaces}. This general picture of representing physics by complex geometry is captured by three of the classic results in twistor theory, each of which is an equivalence: between zero-rest-mass fields on space-time and cohomology on twistor space (the Penrose transform) \cite{Penrose:1969ae}; between Yang-Mills instantons on space-time and holomorphic vector bundles over twistor space (the Ward Correspondence) \cite{Ward:1977ta}; and between self-dual four-manifolds and integrable complex structures on twistor space (the non-linear graviton) \cite{Penrose:1976jq}. Since its inception, twistor theory has provided proofs for theorems in classical general relativity \cite{Newman:1976gc, Hansen:1978jz, Adamo:2009vu}; informed the study of integrable systems \cite{Mason:1996rf, Hitchin:1999, Mason:2003}; and been of utility in a wide array of mathematical and physical applications (e.g., \cite{Salamon:1982, Neitzke:2007ke, Alexandrov:2012bu}). Despite these advances, twistor theory had fallen well short of its initial ambitions, namely: to serve as a unifying mathematical framework for describing both quantum field theory and gravity. By the early 2000s, this failure could be captured by two fundamental problems which had proven insurmountable in the preceding decades: the `googly problem', and the inability to make meaningful contact with quantum field theory. The first of these captures the difficulty of dealing with arbitrarily curved space-times in twistor theory. 
The non-linear graviton construction indicates that traditional twistor methods can be applied to any four-manifold whose anti-self-dual Weyl curvature vanishes; but for real Lorentzian space-times, the only example of such a space-time is Minkowski space itself. Hence, it seemed impossible for any progress to be made if twistors could not be adapted to the most basic of situations in general relativity. This issue is also present at the level of gauge theory: the Ward correspondence treats only self-dual gauge bundles on space-time. Much effort was dedicated to overcoming this problem starting in the late 1970s, but diminishing returns soon turned this arena of research into a no-man's land. The second problem is equally fundamental: in spite of constructs such as the Penrose transform and Ward correspondence, there was no clear proposal for how twistor theory could be used to make contact with basic questions in quantum field theory. In particular, how could twistors be used to compute physical observables like scattering amplitudes or cross-sections in a gauge theory? Once again, for a theory aiming to provide a mathematical formalism for both gravity and quantum field theory, this was a rather embarrassing problem. While Hodges' twistor diagram formalism did make some progress in this area \cite{Hodges:1980hn}, twistor theory had by-and-large failed to make an impact on the study of quantum field theory. This state of affairs changed dramatically in 2003/4, when Witten discovered that scattering amplitudes in (planar) maximally supersymmetric ($\cN=4$) super-Yang-Mills theory could be computed, at least at tree-level, via a topological B string theory in twistor space \cite{Witten:2003nn}. This not only provided an answer to the second fundamental question plaguing twistor theory, but also gave a perturbative solution to the googly problem. 
In the twistor-string setting, the anti-self-dual interactions of the theory are accounted for by $D1$-instantons in the target space.\footnote{Or alternatively, by a sum over worldsheet instantons in a heterotic formulation of twistor-string theory \cite{Mason:2007zv}.} Twistor-string theory has spurred an impressive list of advances in our understanding of gauge theory in general, and the planar sector of $\cN=4$ super-Yang-Mills in particular. It has led to the development of efficient techniques for computing scattering amplitudes which are non-obvious from the space-time Lagrangian (such as the MHV formalism \cite{Cachazo:2004kj} and BCFW recursion \cite{Britto:2005fq}), and this in turn has influenced the computation of real processes in QCD which are measured at particle colliders (e.g., \cite{Berger:2008ag}). It has motivated the study of dual conformal symmetry and the discovery of an infinite dimensional symmetry algebra associated with the scattering amplitudes of $\cN=4$ super-Yang-Mills \cite{Drummond:2008vq}. Furthermore, it is an important influence behind emergent space-time proposals such as the Grassmannian formalism of Arkani-Hamed and collaborators \cite{ArkaniHamed:2009dn, ArkaniHamed:2012nw}, as well as numerous other insights and advances. While Witten's theory correctly describes planar gauge theory at tree level \cite{Skinner:2010cz, Dolan:2011za}, it is not without its problems. The most glaring is that the gravitational degrees of freedom in the string theory correspond to conformal super-gravity, a theory which is widely believed to be non-physical \cite{Berkovits:2004jj}. Since these conformal gravity modes will run in loops, gauge theory scattering amplitudes calculated by twistor-string will be contaminated beyond tree-level. 
Recently, Skinner proposed a new twistor-string theory which eliminates the modes of conformal gravity and, in the flat-space limit, correctly produces the tree-level S-matrix of $\cN=8$ supergravity from the worldsheet theory \cite{Skinner:2013xp}. The key difference between this theory and all previous twistor-string theories is the addition of \emph{worldsheet} supersymmetry; anomaly cancellation conditions then uniquely restrict to a twistor space which manifests the maximal $\cN=8$ gravitational supersymmetry. While Skinner's model undoubtedly represents an incredible breakthrough for the twistor approach, there are still many facets of the theory which are not properly understood: it is unclear how, or if, the theory describes the analogues of scattering amplitudes for non-flat backgrounds (i.e., gauged supergravity on anti-de Sitter space), and while anomaly-free for any genus worldsheet, it is not known if the theory correctly computes scattering amplitudes beyond tree level (even at the level of a loop integrand). The most successful solution to the puzzle of studying gauge theory without gravity has been the \emph{twistor action} proposal of Mason \cite{Mason:2005zm}. This approaches gauge theory via an action functional on twistor space which is the classical generating functional for the gauge theory degrees of freedom in twistor-string theory, completely eliminating gravity from the picture! This means that physical observables can be studied to all orders in perturbation theory using the twistor action. On the gravitational side, Mason also found a twistor action functional for conformal gravity, and the existence of Skinner's twistor-string hints that an action for Einstein gravity itself should also exist. \medskip In this review, we study twistor actions as theories in their own right.
That is, we consider the twistor action as the primary object (rather than the space-time theory) and attempt to study the basic structures of gauge theory and gravity from an intrinsically twistorial point of view. As we shall see, asking rather basic questions about the twistor action (e.g., `What are its Feynman rules?') leads to surprisingly interesting answers (e.g., a derivation of the MHV formalism). Furthermore, studying basic physical observables (such as correlation functions and Wilson loops) in twistor space allows us to prove powerful statements about space-time physics. Much of our presentation will focus on results which have already appeared in published form elsewhere; our aim is to provide a coherent and self-contained explanation of these findings which (hopefully) also incorporates a novel presentation. However, there are many results included here which have not been published before. These range from technical lemmas which may catch the eye of twistor theorists, to more general findings which may interest researchers interested in scattering amplitudes and the ways in which they can be studied using twistor theory. Section \ref{Chapter2} contains review material pertaining to flat-space twistor theory, some basic calculational tools, and $\cN=4$ super-Yang-Mills. In particular, we discuss the twistor correspondence between points in (complex) Minkowski space and linearly embedded Riemann spheres in twistor space, and introduce the concepts of Penrose transform and Ward correspondence. We also set out a calculus of distributional forms which will be used throughout the paper, and discuss some salient features of $\cN=4$ super-Yang-Mills theory (SYM). The reader who is already acquainted with these issues could skim this section in order to progress more quickly. Section \ref{Chapter3} deals with the twistor action of $\cN=4$ SYM (first given in \cite{Boels:2006ir}). 
We discuss the gauge freedom and perturbation theory of this action, and demonstrate how it can be used to arrive at a twistorial derivation of the MHV formalism. This also leads to a natural method for computing the scattering amplitudes of $\cN=4$ SYM on twistor space itself \cite{Adamo:2011cb} which manifests superconformal symmetry, and should be contrasted against the \emph{momentum twistor} approach of \cite{Bullimore:2010pj}, which computes the integrand of a scattering amplitude divided by a tree-level MHV factor and manifests \emph{dual} superconformal symmetry. The twistor action thereby allows us to compute the entire tree-level S-matrix of the gauge theory, and we also discuss the prospects for computing loop-level amplitudes in this formalism. In Section \ref{Chapter4}, we consider other natural observables in gauge theory from a twistor perspective: correlation functions involving local operators and null polygonal Wilson loops. We show that these operators have an algebro-geometric formulation in twistor space, and their expectation values can be computed using the Feynman rules of the twistor action developed in the preceding section. This allows us to provide proofs (at the level of the loop integrand) for the supersymmetric correlation function / Wilson loop correspondence \cite{Adamo:2011dq} as well as several conjectures regarding mixed Wilson loop / local operator correlators \cite{Adamo:2011cd}. Additionally, we can build on the BCFW deformation of the Wilson loop in twistor space \cite{Bullimore:2011ni} to derive novel recursion relations for these mixed operators. We switch our focus from gauge theory to gravity in Section \ref{Chapter5}, beginning with a review of the basic result in twistor theory for curved space-times: the non-linear graviton construction. We then discuss the embedding of Einstein gravity into conformal gravity on an asymptotically de Sitter background \cite{Maldacena:2011mk}. 
Using the Plebanski formalism, we can state this embedding precisely at the level of generating functionals for the MHV amplitudes of the two theories. On twistor space, we introduce the twistor action for conformal gravity and its minimal $\cN=4$ supersymmetric extension, and reduce its degrees of freedom to those of Einstein gravity. This not only gives a twistorial expression for the MHV generating functionals, but also produces a candidate twistor action for Einstein gravity \cite{Adamo:2013tja}. As an interesting curiosity, we also discuss the possibility of formulating twistor actions for \emph{non-minimal} $\cN=4$ conformal supergravity, such as the theory which arises from the gravitational degrees of freedom in the Berkovits-Witten twistor-string \cite{Berkovits:2004jj}. Section \ref{Chapter6} is dedicated to studying the perturbation theory associated with the conformal gravity twistor action reduced to Einstein states, as well as the Einstein twistor action itself. We show that the vertices for both of these actions correspond to the MHV amplitudes of Einstein gravity (with cosmological constant); this is accomplished by translating the iterative solution of an integral equation determining the scattering background into a diagram calculus on the Riemann sphere. We show that the resulting formulae are gauge invariant and limit onto Hodges' formula for the MHV amplitude \cite{Hodges:2012ym} when the cosmological constant is sent to zero. We then discuss the propagator structure on twistor space, arguing that it induces a MHV formalism for Einstein gravity. We conclude by providing an additional formula for the MHV amplitude which is based on BCFW recursion. Section \ref{Chapter7} concludes with a discussion of open questions and future directions related to this work. Appendices \ref{Appendix1} and \ref{Appendix2} provide some results which are useful supplements to our discussions. 
Appendix \ref{Appendix1} presents some results concerning superconnections in $\cN=4$ super-Yang-Mills theory, while Appendix \ref{Appendix2} defines a Coulomb branch twistor action and derives the massive MHV formalism on twistor space. \subsection{Advice for the Reader} This review is adapted from the author's D.Phil. thesis, and a word of warning may prove useful to the reader. In particular, the reader may find that the degree of precision varies substantially throughout the text: some (rather minor) results are proved explicitly while others are simply outlined or referenced. I hope that this has not been done haphazardly: my aim has been to include proofs of any results which have not appeared explicitly in prior literature, while being more concise regarding those results which can easily be looked up in extant papers. An exception to this heuristic is Section \ref{Chapter4}, where the proofs of correspondences between Wilson loops and certain correlation functions are particularly illustrative. Additionally, it should be possible to read many of the sections in a self-contained manner. Section \ref{Chapter2} should provide background for all the gauge theory considerations covered in this review, and all the new machinery needed for gravity is covered in the beginning of Section \ref{Chapter5}. So a reader who is only interested in null limits of correlation functions can skip to Section \ref{Chapter4} without missing anything essential in Section \ref{Chapter3}. The two appendices on $\cN=4$ superconnections (Appendix \ref{Appendix1}) and the Coulomb branch of $\cN=4$ Yang-Mills (Appendix \ref{Appendix2}) are also largely stand-alone, and composed of mostly unpublished material. Throughout, I have also tried to assemble a relatively comprehensive list of references, which the reader should find helpful in filling the many gaps which are sure to be found in this review. 
Finally, I have appropriated terms such as `lemma' or `proposition' in order to highlight concrete, important results. Some proofs are obviously more rigorous than others, and we often take for granted such constructs as manipulation inside a path integral, or working with a loop integrand. I have attempted to foreground any such assumptions, and to be honest about the degree to which they are essential in any given proof. \subsection{Summary of Results} For the reader's convenience, we list here the main results presented in this review: \begin{itemize} \item (Proposition \ref{MHVpropn}) Derivation of the MHV formalism from the twistor action. \item (Section \ref{TScatAmps}) The full tree-level S-matrix of $\cN=4$ super-Yang-Mills theory as scattering amplitudes of the twistor action. \item (Proposition \ref{corrWL}) Proof of the supersymmetric correlation function / Wilson loop correspondence. \item (Proposition \ref{locP1}, \ref{locP2}) Proofs of conjectures relating mixed Wilson loop / local operator correlators to null limits of correlation functions. \item (Proposition \ref{recurpropn}) BCFW-like recursion relations for mixed Wilson loop / local operator correlators. \item (Proposition \ref{CGDS}) Equivalence between the MHV generating functionals of conformal and Einstein gravity on de Sitter space. \item (Section \ref{CGPerT}) Perturbation theory for the conformal gravity twistor action reduced to Einstein degrees of freedom. \item (Section \ref{MHVLambda}) MHV amplitude with cosmological constant on twistor space. \item (Proposition \ref{CCBCFW}) BCFW formula for the gravitational MHV amplitude with cosmological constant in twistor space. \item (Proposition \ref{spropn}) Derivation of $\cN=4$ super-Yang-Mills superconnections from integrability conditions. \item (Proposition \ref{CBProp}) Twistor action for the Coulomb branch of $\cN=4$ super-Yang-Mills. 
\item (Appendix \ref{MMHVForm}) Derivation/proof of the massive MHV formalism from the Coulomb branch twistor action. \end{itemize} \section{Background Material} \label{Chapter2} This section reviews what will be considered as background material for the remainder of this paper. We begin with an overview of the basics of twistor theory, which is the primary geometric framework for all our studies, establishing notational conventions and listing some important facts. Since it was first described by Penrose in 1967 \cite{Penrose:1967wn}, twistor theory has had a long and varied history. It is not the purpose of this section to serve as an extensive review of twistor theory and its many facets; the interested reader need only consult one of the many books or papers reviewing the subject (e.g., \cite{Penrose:1972ia, Huggett:1985, Penrose:1986ca, Ward:1990}) for a more detailed exposition. We then introduce a calculus of distributional forms which will prove very useful for representing physical and geometric data on twistor space. Finally, we provide a brief overview of $\cN=4$ super-symmetric Yang-Mills theory, and some of the surprising properties of its scattering amplitudes. The reader who is already familiar with twistor theory may wish to simply skim this section to familiarize themselves with notation, before moving on to the more interesting later sections. \subsection{Twistor Theory} \subsubsection{Basic formalism} \subsubsection*{\textit{Spinor-helicity formalism}} We begin with complexified 4-dimensional Minkowski space-time $\M_{b}\cong\C^{4}$ in Lorentzian signature $(+,-,-,-)$, with coordinates $x^{\mu}$ (for $\mu=0,\ldots,3$). The complexified spin group is $\SO(4,\C)$, which is isomorphic to $\SL(2,\C)\times\SL(2,\C)/\Z_{2}$.
Two-component Weyl spinors on $\M_{b}$ are in the $(\mathbf{\frac{1}{2}},0)$ and $(0,\mathbf{\frac{1}{2}})$ representations of $\SL(2,\C)\times\SL(2,\C)$, and we denote spinor indices with a capital Roman letter $A$, $A'$ respectively (we work in Penrose's abstract index notation \cite{Penrose:1984}). We will refer to the spinor representations $(\mathbf{\frac{1}{2}},0)$ and $(0,\mathbf{\frac{1}{2}})$ as the `negative chirality' and `positive chirality' spinors, respectively. Since vectors on $\M_{b}$ are in the $(\mathbf{\frac{1}{2}},\mathbf{\frac{1}{2}})$ representation, this allows us to associate a vector index $\mu$ with a pair of spinor indices $AA'$. For instance, given a vector $v=v^{\mu}\partial_{\mu}\in T\M_{b}$, we have \be{spinordecomp} v^{\mu} \leftrightarrow v^{AA'}=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} v^{0}+v^{1} & v^{2}+iv^{3} \\ v^{2}-iv^{3} & v^{0}-v^{1} \end{array} \right). \ee We can raise and lower spinor indices using the $\epsilon$-spinors: \begin{equation*} \epsilon_{AB}=\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right) = \epsilon_{A'B'} \end{equation*} according to the usual conventions: \begin{equation*} v_{A}=v^{B}\epsilon_{BA}, \qquad v^{A'}=\epsilon^{A'B'}v_{B'}. \end{equation*} It is easy to see that the Minkowski metric is then given in terms of these $\epsilon$-spinors: \begin{equation*} \eta(v,w)=\eta_{\mu\nu}v^{\mu}w^{\nu}=\epsilon_{AB}\epsilon_{A'B'}v^{AA'}w^{BB'}. \end{equation*} Using these rules for lowering spinor indices, we can also see that the spinor decomposition \eqref{spinordecomp} of any vector has a nice formulation in terms of Pauli matrices, with $v_{AA'}=\sigma^{\mu}_{AA'}v_{\mu}$. An important point about the spinor-helicity formalism is that it is particularly well-adapted to studying null vectors. Suppose $v^{AA'}$ corresponds to a null vector in $\M_{b}$; then $v^{AA'}v_{AA'}=\det(v)=0$. 
Since $v^{AA'}$ is a $2\times 2$ matrix, and the rank of a $2\times 2$ matrix is less than two if and only if its determinant vanishes, this implies that we can write $v^{AA'}_{\mathrm{null}}=\lambda^{A}\tilde{\lambda}^{A'}$ for a pair of spinors (one of each chirality). Additionally, the $\epsilon$-spinors provide us with a $\SL(2,\C)$-invariant inner product between pairs of spinors of the same chirality: \be{sip} \la v w \ra = \epsilon_{AB}v^{A}w^{B}, \qquad [v w]=\epsilon_{A'B'}v^{A'}w^{B'}. \ee For much of this review, we will be concerned with $\cN=4$ super-Yang-Mills theory (this will be introduced properly later in this section). The natural setting for this theory is chiral Minkowski super-space; we denote its complexification as $\M\cong\C^{4|8}$, and chart it with coordinates $(x^{AA'}, \theta^{A a})$, where $a=1,\ldots,4$ indexes the $\SU(4)$ R-symmetry of the theory, and the $\theta$s are anti-commuting/Grassmann/fermionic coordinates. Everything we have said about the spinor-helicity formalism goes through in precisely the same way in this setting for the bosonic coordinates; the only extension necessary is the notion of a `null vector' in $\M$. For this, we simply extend the observation we just made, and state that the fermionic component of a vector is null if and only if it can be decomposed as $v^{Aa}=\lambda^{A}\eta^{a}$, for some spinor $\lambda^{A}$ and some Grassmann parameter $\eta^{a}$. \subsubsection*{\textit{Twistor space}} Rather than define twistor space for both $\M_{b}$ and its super-extension $\M$, we will simply give all definitions in the supersymmetric language \cite{Ferber:1977qx}. The analogous statements for the bosonic category should be perfectly clear from this exposition. For our purposes, twistor space, $\PT$, will be a suitable open subset of the complex projective super-space $\P^{3|4}$; its bosonic truncation $\PT_{b}$ is then just a suitable open subset of $\P^{3}$.
Na\"{i}vely, $\P^{3|4}$ is just the complex projective space $\P^{3}$ with four anti-commuting `dimensions' added to it. More formally, it can be realized as a `super-scheme' \cite{Manin:1997}: that is, as the topological space $\P^{3}$ with a modified structure sheaf: \begin{equation*} \cO_{\P^{3|4}}=\cO\left(\bigoplus_{k=0}^{4}\wedge^{k}\cO_{\P^{3}}(-1)^{\oplus 4}\right). \end{equation*} Readers interested in a more formal treatment of super-schemes in the context of twistor theory may consult \cite{Wolf:2006me, Adamo:2012cd}; for this review though, the na\"{i}ve perspective on super-geometry will suffice. \begin{figure}[t] \centering \includegraphics[width=80mm]{incidence.pdf} \caption{\emph{Points in space-time correspond to complex lines in twistor space. Two space-time points are null separated if and only if their corresponding twistor lines intersect.}} \label{tcorr} \end{figure} In this spirit, $\PT$ can be charted by homogeneous coordinates \be{tsc} Z^{I}=(Z^{\alpha},\chi^{a})=(\lambda_{A},\mu^{A'},\chi^{a}), \ee where $\lambda_{A}$ and $\mu^{A'}$ are 2-component complex Weyl spinors of opposite chirality, and $\chi^a$ is an anti-commuting Grassmann coordinate\footnote{These conventions, first adopted in \cite{Witten:2003nn}, are essentially \emph{dual} to the original Penrose conventions \cite{Penrose:1986ca}.}, with $a=1,\ldots,4$ indexing the $\cN=4$ R-symmetry as before. Being homogeneous coordinates, the $Z^I$s are defined only up to the re-scalings $Z^I\sim r Z^I$ for any $r\in\C^{*}$. The space $\P^{3|4}$ is a Calabi-Yau super-manifold, in the sense that it has trivial first super-Chern class and its Berezinian sheaf has a canonical global section: $\Ber_{\P^{3|4}}\cong\cO_{\P^{3|4}}$ \cite{Sethi:1994ch, Schwarz:1995ak, Manin:1997}. 
This means that $\PT$ is equipped with a global holomorphic measure (the canonical section of $\Ber_{\P^{3|4}}$), which we write as: \be{volm} \D^{3|4}Z=\epsilon_{\alpha\beta\gamma\delta}Z^{\alpha}\d Z^{\beta}\wedge\d Z^{\gamma}\wedge\d Z^{\delta}\wedge\d^{4}\chi . \ee The most basic relation in twistor theory is the geometric correspondence between a point $(x,\theta)\in\M$ and a complex line\footnote{That is, a linearly embedded Riemann sphere.} $X\subset\PT$. This complex line is the representation in twistor space of the sphere of null directions uniquely associated to the point in space-time, so if two points in space-time are null separated they share a common null geodesic and hence their associated twistor lines intersect, as illustrated in Figure \ref{tcorr}. Since the conformal structure of $\M$ is given by specifying light cones, and this is equivalent to specifying complex lines in twistor space, we see that giving a complex structure on $\PT$ is the same as giving a conformal structure on $\M$. This correspondence is captured by the \emph{incidence relations}, which are just algebraic equations relating the coordinates of $\PT$ to $\M$: \be{inc} \mu^{A'} =ix^{AA'}\lambda_{A}\, , \qquad \chi^a =\theta^{Aa}\lambda_{A}, \ee where we can interpret $\lambda_A$ as homogeneous coordinates on the Riemann sphere $X\cong\P^{1}$, and $(x,\theta)$ are the parameters of the linear embedding. These also encode the space-time interpretation of a point in twistor space: let $X$ and $X'$ be lines in twistor space which intersect at the point $Z=(\lambda,\mu,\chi)$. Subtracting the incidence relations for the two points gives $(x-x')^{AA'}\lambda_A=0$ and $(\theta-\theta')^{Aa}\lambda_A=0$, so $(x-x')^{AA'}=\tilde\lambda^{A'}\lambda^A$ and $(\theta-\theta')^{Aa} = \eta^a\lambda^A$ for some Weyl spinor $\tilde\lambda$ and some Grassmann parameter $\eta$.
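The intersection statement is easy to check concretely: displacing $x$ by a rank-one (null) bi-spinor $\tilde\lambda^{A'}\lambda^{A}$ leaves the incidence image unchanged at the corresponding $\lambda_A$. A Python sketch with ad hoc numerical conventions (bosonic coordinates only):

```python
import numpy as np

rng = np.random.default_rng(7)
def cplx(*shape):
    return rng.normal(size=shape) + 1j*rng.normal(size=shape)

x    = cplx(2, 2)                       # x^{AA'} (complexified space-time point)
lam  = cplx(2)                          # lambda_A, a point on the line X
tlam = cplx(2)                          # tilde-lambda^{A'}

lam_up = np.array([lam[1], -lam[0]])    # lambda^A = eps^{AB} lambda_B
xp = x - np.outer(lam_up, tlam)         # x'^{AA'} = x^{AA'} - lambda^A tlam^{A'}

mu  = 1j * x.T  @ lam                   # incidence: mu^{A'} = i x^{AA'} lambda_A
mup = 1j * xp.T @ lam
assert np.allclose(mu, mup)             # the lines X and X' meet at Z = (lam, mu)
```

The cancellation rests on $\lambda^{A}\lambda_{A}=0$, i.e. the antisymmetry of the $\epsilon$-spinor, exactly as in the argument above.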
If we vary the possible choices of $(\tilde\lambda,\eta)$, the vectors $(\tilde\lambda\lambda,\eta\lambda)$ span a totally null complex $2|4$-dimensional plane in $\M$, so every point $Z\in\PT$ is assigned such a complex null plane by the incidence relations. These totally null planes are also known as (super) $\alpha$-planes (e.g., \cite{Huggett:1985, Penrose:1986ca}). Further, since any two points $Z_{1},Z_{2}\in\PT$ define a line, a point $(x,\theta)\in\M$ can be represented on twistor space by a skew bi-twistor $X^{IJ}=Z_{1}^{[I}Z_{2}^{J]}$. A canonical way of encapsulating the relationship between twistor space and $\M$ is via the `double fibration': \begin{equation*} \xymatrix{ & \PS \ar[ld]_{p} \ar[rd]^{q} & \\ \PT & & \M } \end{equation*} Here, $\PS\cong\M\times\P^{1}$ is the projective un-primed spinor bundle over $\M$ with coordinates $(x,\theta,\lambda)$. The map $q:\PS\rightarrow\M$ is just the trivial projection, while $p:\PS\rightarrow\PT$ is specified by the incidence relations \eqref{inc}. The double fibration provides a heuristic picture for how geometrical data on twistor space can be pulled back to $\PS$ and then pushed down to physical data on $\M$; later we will explore a few striking examples of this relationship. A basic fact about twistor space is that it carries a natural action of the super-conformal group. The (complexified) superconformal algebra of $\M$ is $\mathfrak{psl}(4|4,\C)$, and its generators can be written in twistor space as \cite{Drummond:2009fd}: \be{scongen} J^{I}_{J}=Z^{I}\frac{\partial}{\partial Z^{J}}, \ee excluding the Euler homogeneity operator $\Upsilon=Z^{I}\partial_{I}$ and the fermionic homogeneity operator $\chi^{a}\frac{\partial}{\partial \chi^{a}}$. As we will see, this makes twistor theory an ideal tool for studying physical theories which have conformal symmetry.
Conformal invariance is broken by specifying an `infinity twistor' $I_{IJ}$ which obeys $X^{IJ}I_{JK}=0$ when $X$ corresponds to a point at infinity in $\M$. In terms of the spinor decomposition of a twistor, the bosonic components of $I$ are given by: \be{infty} I_{\alpha\beta}=\left( \begin{array}{cc} \epsilon^{AB} & 0 \\ 0 & 0 \end{array} \right), \qquad I^{\alpha\beta}=\left( \begin{array}{cc} 0 & 0 \\ 0 & \epsilon^{A'B'} \end{array}\right). \ee A contraction of the form $I_{IJ}Z_{1}^{I}Z_{2}^{J}$ thus breaks $\mathrm{PSL}(4,\C)$ conformal invariance, but maintains invariance under space-time translations (which do not `shift' the location of infinity). Conformally invariant contractions between bosonic twistors take the form: \begin{equation*} (Z_{1},Z_{2},Z_{3},Z_{4})=\epsilon_{\alpha\beta\gamma\delta}Z_{1}^{\alpha}Z_{2}^{\beta}Z_{3}^{\gamma}Z_{4}^{\delta}, \end{equation*} since this quantity is invariant under $\mathrm{PSL}(4,\C)$ transformations. \subsubsection{Reality structures and space-time signature} Thus far, we have considered space-time to be a complex 4-manifold $\M$; for many calculations which take place purely twistorially, this is fine since we can work holomorphically on $\PT$ and perform our computations in the framework of complex analysis and Dolbeault cohomology. However, on space-time our computations should be taking place on a real slice $\M_{\R}\subset\M$; while real physics happens on the Lorentzian-real slice of signature $(1,3)$, there is no (mathematical) obstruction to choosing other signatures for $\M_{\R}$. These different choices of space-time signature correspond to different reality structures on twistor space. From time to time we will need to make an explicit choice for $\M_{\R}$, so we provide a brief overview of three choices of space-time signature and their consequences on twistor space.
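Before turning to reality structures, note that both contractions just introduced are easy to realize concretely: the four-bracket is simply a $4\times 4$ determinant, and the infinity-twistor contraction with the block form \eqref{infty} sees only the $\lambda$ components, so it is manifestly translation-invariant. A numerical sketch (ad hoc conventions):

```python
import numpy as np

rng = np.random.default_rng(11)
def cplx(*shape):
    return rng.normal(size=shape) + 1j*rng.normal(size=shape)

Zs = cplx(4, 4)            # rows are four bosonic twistors Z_1, ..., Z_4
M = cplx(4, 4)             # a generic GL(4,C) transformation Z -> M Z

# (Z1,Z2,Z3,Z4) = eps_{abcd} Z1^a Z2^b Z3^c Z4^d is just a 4x4 determinant
bracket = np.linalg.det(Zs)
# under Z -> M Z it rescales by det(M), hence is invariant iff det M = 1
assert np.isclose(np.linalg.det(Zs @ M.T), np.linalg.det(M)*bracket)

# the infinity twistor contraction sees only the lambda parts (first two components)
eps2 = np.array([[0., 1.], [-1., 0.]])
inf_contr = Zs[0, :2] @ eps2 @ Zs[1, :2]
assert np.isclose(inf_contr, np.linalg.det(Zs[:2, :2]))

# a translation shifts mu but leaves lambda alone, so the contraction is unchanged
b = cplx(2, 2)
Zt = Zs.copy()
Zt[:, 2:] = Zs[:, 2:] + 1j*(Zs[:, :2] @ b)
assert np.isclose(Zt[0, :2] @ eps2 @ Zt[1, :2], inf_contr)
```

This makes the statement above concrete: $I_{IJ}Z_{1}^{I}Z_{2}^{J}$ breaks invariance down to the Poincar\'e (plus scaling) subgroup precisely because it depends only on the translation-inert $\lambda$ components.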
Since the distinctions between reality structures are captured entirely at the bosonic level, we leave the fermionic degrees of freedom out of this discussion. \subsubsection*{\textit{Lorentzian signature}} If we choose $\M_{\R}$ to be real Minkowski space with its metric of signature $(1,3)$, then the natural conjugation on Weyl spinors is the usual complex conjugation of special relativity. This maps the $(\mathbf{\frac{1}{2}},0)$ and $(0,\mathbf{\frac{1}{2}})$ spinor representations to one another: \begin{equation*} v^{A}=(a,b)\mapsto\bar{v}^{A'}=(\bar{a},\bar{b}), \qquad w^{A'}=(c,d)\mapsto \bar{w}^{A}=(\bar{c},\bar{d}). \end{equation*} In the spinor-helicity formalism, this means that a vector $v^{\mu}$ is real-valued if and only if its $2\times 2$ spinor decomposition is Hermitian: $v^{AB'}=\bar{v}^{BA'}$. In other words, if $x^{\mu}$ is a position vector in $\M$, then it corresponds to a real point in $\R^{1,3}$ if and only if $x^{AA'}$ is Hermitian. On twistor space, this induces corresponding reality conditions. Since complex conjugation exchanges spinor chiralities, it acts as an anti-holomorphic map from twistor space to \emph{dual} twistor space, $\PT^{\vee}$: \begin{equation*} Z^{\alpha}=(\lambda_{A},\mu^{A'})\mapsto \bar{Z}_{\alpha}=(\bar{\mu}^{A},\bar{\lambda}_{A'})\in\PT^{\vee}. \end{equation*} This defines a pseudo-Hermitian metric of signature $(2,2)$ on $\PT$ which preserves the Lorentzian real form of the conformal group, $\SU(2,2)$: \begin{equation*} Z\cdot \bar{Z}\equiv g_{\alpha\bar{\beta}}Z^{\alpha}\bar{Z}^{\bar{\beta}}=Z^{\alpha}\bar{Z}_{\alpha}=\la \lambda \bar{\mu}\ra + [\bar{\lambda}\mu]. \end{equation*} Since $\PT$ is a projective space, the exact value of this $\SU(2,2)$-inner product is meaningless; its sign is an invariant notion, though.
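The signature-$(2,2)$ claim can be seen very concretely: in one natural component basis $(\lambda_{A},\mu^{A'})$, the Hermitian form $Z\cdot\bar{Z}$ reduces (up to index-placement conventions, which do not affect the signature) to $2\,\mathrm{Re}(\lambda\cdot\bar{\mu})$. A numerical sketch:

```python
import numpy as np

# Hermitian matrix of the quadratic form 2 Re(lambda . conj(mu)) on (lambda, mu)
H = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.eye(2), np.zeros((2, 2))]])
evals = np.linalg.eigvalsh(H)
assert (evals > 0).sum() == 2 and (evals < 0).sum() == 2   # signature (2,2)

# spot-check the form on a random twistor
rng = np.random.default_rng(0)
lam = rng.normal(size=2) + 1j*rng.normal(size=2)
mu = rng.normal(size=2) + 1j*rng.normal(size=2)
Zv = np.concatenate([lam, mu])
val = np.real(np.conj(Zv) @ H @ Zv)
assert np.isclose(val, 2*np.real(lam @ np.conj(mu)))
```

The eigenvalues of $H$ are $\pm 1$, each with multiplicity two, which is exactly the $\SU(2,2)$-invariant statement that only the sign of $Z\cdot\bar{Z}$ is meaningful projectively.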
Hence, we can accordingly partition $\PT$ into three sectors \cite{Huggett:1985, Penrose:1986ca}: \be{tsets} \PT^{\pm}=\left\{Z\in\PT | \pm Z\cdot\bar{Z}\geq 0\right\}, \qquad \PN = \left\{Z\in\PT | Z\cdot\bar{Z}=0\right\}. \ee The sets $\PT^{\pm}$ correspond to the future and past tubes of $\M$ respectively; that is, the sets on which the imaginary part of $x^{AA'}$ is past- or future-pointing time-like, respectively. This follows from the fact that if we take $x=u+iv$, then $Z\cdot\bar{Z}= -v^{AA'}\bar{\lambda}_{A'}\lambda_{A}$ via \eqref{inc}. This has a definite sign, depending on whether $v$ is time-like and future- or past-pointing, as claimed. This indicates that $\PT^{\pm}$ is the natural choice for the regions of $\PT$ corresponding to positive/negative frequency fields on $\M$. For instance, a field of positive frequency, whose Fourier transform is supported on the future lightcone in momentum space, automatically extends over the future tube because $\e^{ip\cdot x}$ is rapidly decreasing there, bounded by its values on the real slice. If a line $X$ lies entirely in $\PN$, then \eqref{inc} tells us that \be{Minkincidence} 0= i(x-x^\dagger)^{AA'}\lambda_A\bar\lambda_{A'}\qquad\hbox{for all }\lambda\ , \ee which is possible if and only if the matrix $x^{AA'}$ is Hermitian, so $x\in\M_{\R}=\R^{1,3}$. Conversely, a point $Z\in\PN$ corresponds to a unique real null ray (the intersection of the complex $\alpha$-plane with $\M_{\R}$). Hence, the portion of twistor space corresponding to the real slice $\R^{1,3}$ is the set $\PN$. \subsubsection*{\textit{Euclidean signature}} Now suppose we choose our real slice to have Euclidean signature $(+,+,+,+)$,\footnote{Actually, the procedure described here results in a \emph{negative} definite metric.
In practice this simply requires a change of sign at the end of calculations to obtain actual Euclidean results, so we will ignore the distinction from now on.} so that $\M_{\R}$ is the real Euclidean 4-space $\E\cong\R^{4}$ or, in the conformal compactification picture, $S^{4}$. The Euclidean real form of the spin group is locally isomorphic to $\SO(4,\R)\cong \SU(2)\times\SU(2)/\Z_{2}$. The Euclidean conjugation of Weyl spinors no longer interchanges spinor representations, and is given by \cite{Atiyah:1978wi, Woodhouse:1985id}: \begin{equation*} v^{A}=(a,b)\mapsto\hat{v}^{A}=(\bar{b},-\bar{a}), \qquad w^{A'}=(c,d)\mapsto\hat{w}^{A'}=(-\bar{d},\bar{c}). \end{equation*} This means that a position vector $x^{\mu}$ corresponds to a real point in $\E$ if and only if $x^{AA'}=\hat{x}^{AA'}$. Note that $\hat{\hat{v}}^{A}=-v^{A}$, leading to the nomenclature `quaternionic conjugation' for this reality structure. On twistor space, this induces an anti-holomorphic involution $\sigma:\PT\rightarrow\PT$ which has no fixed points and obeys $\sigma^{2}=-\mathrm{id}$. This means that there are no points in twistor space preserved by the reality structure, which is just another way of saying that there are no real null vectors in $\E$ (i.e., the $\alpha$-plane corresponding to $Z\in\PT$ does not intersect the real slice in a null ray). However, it is clear that $\sigma$ acts as the antipodal map on $\P^{1}$, so although it has no fixed points, it does have \emph{fixed lines}, given by $X^{\alpha\beta}=Z^{[\alpha}\hat{Z}^{\beta]}$. Hence, each point in $\PT$ corresponds to a unique point in $\E$ by this construction, which can be written explicitly using \eqref{inc}: \be{eucmap} \rho: \PT\rightarrow\E, \qquad Z^{\alpha}=(\lambda_{A},\mu^{A'})\mapsto x^{AA'}=-i\frac{\mu^{A'}\hat\lambda^A-\hat\mu^{A'}\lambda^A}{\la\lambda\hat\lambda\ra} . 
\ee So in Euclidean signature, twistor space is just a $\P^{1}$ fibration $\PT\rightarrow\E$, and there is no need for the double fibration picture. The conformally compactified version of this picture, with the $\P^{1}$ fibration $\PT\rightarrow S^{4}$ plays an important role in the ADHM construction of Yang-Mills instantons \cite{Atiyah:1978ri}. This fibration allows us to define a complex structure for $\PT$ in terms of coordinates on $\E$. To do this, we first specify a basis of $(0,1)$-forms on twistor space \cite{Mason:2005zm, Jiang:2008}: \be{bforms} \hat{e}^{0}=\frac{\la\hat{\lambda}\d\hat{\lambda}\ra}{\la\lambda\hat{\lambda}\ra^2}, \qquad \hat{e}^{A'}=\frac{\hat{\lambda}_{A}\d x^{AA'}}{\la\lambda\hat{\lambda}\ra}, \ee and a dual basis for $T^{0,1}\PT_{b}$: \be{bvects} \dbar_{0}=\la\lambda\hat{\lambda}\ra\lambda_{A}\frac{\partial}{\partial\hat{\lambda}_{A}}, \qquad \dbar_{A'}=\lambda^{A}\partial_{AA'}. \ee Then the complex structure is given by the anti-holomorphic Dolbeault operator \begin{equation*} \dbar=\d\hat{Z}^{\alpha}\frac{\partial}{\partial \hat{Z}^{\alpha}}=\hat{e}^{0}\dbar_{0}+\hat{e}^{A'}\dbar_{A'}. \end{equation*} Additionally, we have Woodhouse's operator \cite{Woodhouse:1985id} \be{WHO} \dhat=\d\hat{Z}^{\alpha}\frac{\partial}{\partial Z^{\alpha}}, \ee which acts as a holomorphic derivative in the anti-holomorphic directions and obeys $\dhat^{2}=0$, $\dbar\dhat=-\dhat\dbar$. Note that although Euclidean signature is less realistic than Lorentzian, it also allows us to be very explicit in relating twistorial quantities to their space-time counterparts. We will take advantage of this fact at several points later on. \subsubsection*{\textit{Split signature}} Finally, we discuss the consequences of choosing the real slice $\M_{\R}$ to have split signature $(+,+,-,-)$. 
\emph{A priori}, this is the least physical choice of real slice: Euclidean signature is also non-physical, but there is the long-standing Wick-rotation prescription for moving between the Lorentzian and Euclidean regimes -- in the split signature case, there is neither a notion of `time' nor of `space.' However, in split signature, the spin group is locally isomorphic to $\SO(2,2,\R)\cong \SL(2,\R)\times\SL(2,\R)$, so the spinor representations are real and we get a substantial simplification. The real form of the conformal group in this case is $\mathrm{PSL}(4,\R)$, so the reality condition on twistor space is simply to take all twistors to be real-valued; that is, $\PT_{\R}\subset\RP^{3|4}$. Hence, the benefit of choosing this un-physical signature is in the ability to work entirely with real variables (both with the spinor-helicity formalism and on twistor space). This allows for a high degree of explicit calculation, for instance, constructing Yang-Mills instantons using twistor data \cite{Mason:2005qu}. Split signature also featured prominently in the early development of twistor-string theory. In Witten's original formulation \cite{Witten:2003nn}, it allowed functions of null momenta (e.g., scattering amplitudes) to be written as twistor functions on $\RP^{3|4}$ using a simple integral transform that would come to be known as the `half-Fourier' transform (c.f., \cite{Mason:2009sa}). Berkovits' subsequent re-formulation of twistor string theory as an open string theory with world-sheet boundaries ending on $\RP^{3|4}$ also relied on this choice of signature \cite{Berkovits:2004hg}. In this review, we avoid the split signature perspective, preferring to compute in the complexified setting. When a choice of space-time signature is necessary, we only ever use Lorentzian or Euclidean signatures.
While this means that we lose access to the real-analytic methods of the split signature setting, we still have considerable computational power thanks to holomorphic tools at our disposal. As we shall see, the result is a more general methodology for performing twistorial calculations and a minimized reliance on an explicit choice of space-time signature. \subsubsection{Some important facts} We conclude our lightning review of twistor theory by stating some of the fundamental theorems that have emerged from twistor theory over the past forty years. These results serve as the primary tools in the remainder of our studies, and are examples of the general twistor philosophy: physical data on space-time is translated into pure geometry on twistor space. \subsubsection*{\textit{The Penrose transform}} One of the earliest results in twistor theory is a statement which allows us to represent zero-rest-mass (z.r.m.) fields on space-time in terms of cohomological data on twistor space. If $\phi_{A_{1}\cdots A_{n}}(x)$ is a spinor field on $\M_{b}$ (with $n$ symmetric negative chirality spinor indices) which satisfies the linear partial differential equation \begin{equation*} \partial^{A_{1}A'}\phi_{A_{1}\cdots A_{n}}(x)=0, \end{equation*} then we say that it is a \emph{z.r.m. field of helicity} $-\frac{n}{2}$. Similarly, we define z.r.m. fields of helicity $\frac{n}{2}$ and zero (i.e., scalars) by solutions to the equations: \begin{equation*} \partial^{AA'_{1}}\phi_{A'_{1}\cdots A'_{n}}(x)=0, \qquad \Box \phi(x)=0. \end{equation*} The Penrose transform manifests itself in the following manner \cite{Penrose:1969ae}: \begin{thm}[Penrose transform]\label{PenTran} Let $\PT_{b}\subset \P^{3}$ be a suitably chosen open subset, and $U' \subset \M_{b}$ be the corresponding open subset under the twistor double-fibration: $U'= q \circ p^{-1}(\PT_{b})$. Then we have the following isomorphism: \begin{equation*} H^{1}(\PT_{b},\cO(2h-2))\cong \left\{\mbox{On-shell z.r.m. 
fields on}\;U'\;\mbox{of helicity}\;h\right\}, \end{equation*} where $H^{1}$ denotes analytic cohomology and $\cO(k)$ is the sheaf of holomorphic functions which are homogeneous of degree $k$. \end{thm} Proving this isomorphism in detail is actually rather involved (c.f., \cite{Eastwood:1981jy, Ward:1990}), so we will ignore the details of the proof. For our purposes, the choice of open subset $\PT_{b}$ can usually be made to coincide with one of \eqref{tsets}, or with the exclusion of the `point at infinity' in $\M$. From now on, we cease to mention this explicitly to avoid cumbersome notation; the choice of twistor space should always be obvious from the context. In this paper, we will work with the Dolbeault representation for the cohomology of twistor space; this should be contrasted against much of the older twistor literature, where a \v{C}ech representation is utilized \cite{Penrose:1969ae, Eastwood:1981jy, Ward:1990}. To distinguish between the two representations, Dolbeault cohomology will be denoted $H^{0,k}$, and in this framework cohomology classes are given by $\dbar$-closed $(0,k)$-forms modulo $\dbar$-exact ones. Choosing the Dolbeault representation is in keeping with our general complex/holomorphic philosophy and will lead to considerably simpler computations in many cases. The Penrose transform can be realized quite explicitly thanks to beautiful integral formulae. First consider a free massless scalar $\Phi(x)$ which is a solution to the wave equation $\Box\Phi(x)=0$. Theorem \ref{PenTran} indicates that this field can be constructed from a cohomology class on (bosonic) twistor space $\phi(Z)\in H^{0,1}(\PT_{b},\cO(-2))$. Take \be{scalar} \Phi(x) = \frac{1}{2\pi i}\int_{X} \D\lambda \wedge \phi(Z)|_{X} \, , \ee where the restriction of $\phi$ to $X\cong\P^{1}$ is given by the incidence relations \eqref{inc}, and $\D\lambda=\la \lambda \d \lambda \ra$ is the weight $+2$ holomorphic measure on $X$. 
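A classic example of a field arising this way is the conformally invariant scalar $\Phi=1/(x\cdot x)$, which (in standard treatments) comes from the simplest `elementary state' representatives. As a quick independent sanity check (a sympy sketch, not part of the original development), it is indeed harmonic away from the light cone:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
# a classic z.r.m. scalar, defined away from the cone t^2 = x^2 + y^2 + z^2
Phi = 1/(t**2 - x**2 - y**2 - z**2)

box_Phi = (sp.diff(Phi, t, 2) - sp.diff(Phi, x, 2)
           - sp.diff(Phi, y, 2) - sp.diff(Phi, z, 2))
assert sp.simplify(box_Phi) == 0    # the wave equation holds away from the cone
```

The cancellation is special to four dimensions: in $d$ dimensions the harmonic power is $1/(x\cdot x)^{(d-2)/2}$.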
The fact that this yields a solution to the wave equation follows by differentiating under the integral and using the incidence relations. If instead we had worked in a \v{C}ech representation, the transform would have been given by a representative $\phi\in\check{H}^{1}(\PT_{b},\cO(-2))$ and an integral \begin{equation*} \Phi(x)=\frac{1}{2\pi i}\oint\limits_\Gamma \D\lambda \, \phi(Z)|_{X} \,, \end{equation*} where the expression is really a contour integral on $X$ with contour $\Gamma$ specified by the cohomology class. This demonstrates the advantage of working with Dolbeault cohomology: no choice of contour is needed and we avoid the combinatorics of Leray covers required in the \v{C}ech setup. For other fields, the transform is realized by the natural extensions: \be{nhel} \phi_{A_{1}\cdots A_{k}}(x)=\frac{1}{2\pi i}\int_{X}\lambda_{A_{1}}\cdots\lambda_{A_{k}}\;\D\lambda\wedge\phi(Z)|_{X}, \ee \be{phel} \phi_{A'_{1}\cdots A'_{k}}(x)=\frac{1}{2\pi i}\int_{X}\D\lambda\wedge\;\frac{\partial}{\partial\mu^{A'_{1}}}\cdots\frac{\partial}{\partial\mu^{A'_{k}}}\;\phi(Z)|_{X}, \ee for negative and positive helicity respectively. Once again, the fact that the z.r.m. equations are obeyed follows by differentiating under the integral sign and noting that \begin{equation*} \partial_{AA'}=i\lambda_{A}\frac{\partial}{\partial \mu^{A'}}, \end{equation*} thanks to the incidence relations. The Penrose transform extends naturally to deal with $\cN=4$ supersymmetry, as we will discuss shortly. \subsubsection*{\textit{Momentum eigenstates}} The Penrose transform lets us define on-shell physical momentum eigenstates in terms of twistor cohomology; this will be particularly useful when we want to compare twistorial calculations to known space-time results.
To begin, recall that on the complex plane $\C$ the delta function supported at the origin is naturally interpreted as a $(0,1)$-form\footnote{Actually, this is really a $(0,1)$-\emph{current}, but we will not make this distinction here.}: \begin{equation*} \bar{\delta}^{1}(z)\equiv\delta(x)\delta(y)\, \d \bar z =\frac1{2\pi i}\,\d \bar z\frac{\del}{\del\bar z}\, \frac1z\, , \qquad z=x+iy, \end{equation*} as a result of the Cauchy kernel for the $\dbar$-operator \cite{Witten:2004cp}. If $\lambda_{A}$ is chosen to be a homogeneous coordinate on $\P^1$, then this extends naturally to the Riemann sphere by integrating over the scale parameter. This allows us to define a projective version of $\bar{\delta}^{1}(z)$: \be{meig1} \bar\delta^1_m(\lambda, \lambda ')\equiv \int_{\C} \frac{\rd s}{ s^{1+m}}\,\bar\delta^2(s\lambda_{A}+\lambda'_{A})\, , \ee which is supported only when $\lambda$ and $\lambda'$ coincide projectively. One can check that $\bar\delta^1_m(\lambda,\lambda')$ has homogeneity $m$ in $\lambda$ and $-m-2$ in $\lambda'$, so that \begin{equation*} f(\lambda') = \int_{\P^1} f(\lambda)\;\bar\delta^1_m(\lambda,\lambda')\wedge\D\lambda \end{equation*} for any function $f$ of homogeneity $-m-2$ on $\P^1$. Consider a particle of helicity $h$ with on-shell momentum $p_{AA'}=p_{A}\tilde{p}_{A'}$; Theorem \ref{PenTran} tells us that this will be represented by a twistor cohomology class taking values in the sheaf $\cO(2h-2)$. Using \eqref{meig1}, we define the twistor momentum eigenstate \be{meig2} f_{2h-2}(\mu,\lambda)=\int_{\C}\frac{\d s}{s^{2h-1}}\bar{\delta}^{2}(s\lambda_{A}+p_{A})\;e^{s[\mu\;\tilde{p}]}. \ee Using the integral formulae \eqref{nhel}-\eqref{phel}, we see that this evaluates to give the appropriate z.r.m. fields on space-time \begin{equation*} p_{A_{1}}\cdots p_{A_{|2h|}}e^{ip\cdot x}, \qquad \tilde{p}_{A'_{1}}\cdots\tilde{p}_{A'_{|2h|}}e^{ip\cdot x}, \end{equation*} depending on whether $h$ is negative or positive.
When we work with $\cN=4$ supersymmetry, \eqref{meig2} is easily modified by taking into account the full on-shell supermomentum $(p_{A}\tilde{p}_{A'},p_{A}\eta_{a})$: \be{meig3} f(\mu, \lambda ,\chi)=\int_{\C} \frac{\rd s}{ s}\bar\delta^2(s\lambda_{A} + p_{A})\,e^{s[[\mu\;\tilde{p}]]} , \ee where $[[\mu\tilde{p}]]=[\mu\tilde{p}]+\chi^{a}\eta_{a}$. \subsubsection*{\textit{Woodhouse representatives}} We have seen that the Penrose transform allows us to construct integral formulae for z.r.m. fields of arbitrary integer or half-integer helicity on $\M$ from cohomology classes on twistor space. But if we are given the z.r.m. field on space-time, can we construct a twistorial cohomology class which manifests the space-time degrees of freedom? We saw previously that when the real slice of $\M$ is chosen to be the Euclidean space $\E$, twistor space is just a $\P^1$ bundle $\PT\rightarrow\E$ equipped with a quaternionic conjugation. It turns out that this additional structure is very useful, and lets us write down explicit cohomological representatives for negative helicity fields. \begin{thm}[Woodhouse \cite{Woodhouse:1985id}]\label{Wrep1} Let $\phi_{A\cdots B}(x)$ be a field on $\E$ with $2h$ symmetric spinor indices, $\dhat:\Omega^{0,k}\rightarrow\Omega^{0,k+1}$ as in \eqref{WHO}, and \begin{equation*} F_{\phi}\equiv \frac{\phi_{A\cdots B}\hat{\lambda}^{A}\cdots\hat{\lambda}^{B}}{\la\lambda\hat{\lambda}\ra^{2h+1}}. \end{equation*} Then $\dhat F_{\phi}\in H^{0,1}(\PT, \cO(-2h-2))$ if and only if $\phi_{A\cdots B}$ is a z.r.m. field of helicity $-h$. Furthermore, $\dhat F_{\phi}$ is holomorphic upon restriction to the $\P^{1}$ fibers of $\PT$, and every class in $H^{0,1}(\PT, \cO(-2h-2))$ has a unique representative which is $\dhat$-exact. \end{thm} This result shows that by choosing Euclidean signature, we can build representatives for negative helicity fields which are holomorphic upon restriction to the $\P^{1}$ fibers of twistor space. 
Constructing such representatives for fields with a primed spinor index is a bit more complicated, since there is no longer a definition in terms of a potential such as $F_{\phi}$: \begin{thm}[Woodhouse \cite{Woodhouse:1985id}]\label{Wrep2} Let $\phi_{A'A\cdots B}(x)$ be a field on $\E$ with $n$ symmetric un-primed spinor indices and define \begin{equation*} \alpha_{\phi}=-\frac{\phi_{A'A\cdots B}\lambda^{A}\cdots\lambda^{B}}{i^{n}\la\lambda\hat{\lambda}\ra}\;\hat{\lambda}_{C}\d x^{CA'}. \end{equation*} Then $\alpha_{\phi}\in H^{0,1}(\PT,\cO(n-1))$ if and only if $\partial_{A'(C}\phi^{A'}_{A)\cdots B}=0$. Furthermore, $\alpha_{\phi}$ is holomorphic upon restriction to the $\P^{1}$ fibers of $\PT$. \end{thm} For the remainder of this paper, we will refer to the representatives $\dhat F_{\phi}$, $\alpha_{\phi}$ as \emph{Woodhouse representatives}. They will be particularly useful in our study of the twistor action Feynman rules in the following section. \subsubsection*{\textit{The Ward correspondence}} The final classic result of twistor theory that will be used extensively in our discussion of gauge theory is the \emph{Ward correspondence}. Heuristically, this can be thought of as a non-linear version of the Penrose transform dealing with Yang-Mills instantons (c.f., \cite{Ward:1990,Jost:2008} for a more detailed review of gauge theory). Let $E\rightarrow\M$ be a principal $G$-bundle over space-time, with $G$ some Lie group. The bundle $E$ encodes the information of the gauge group in the sense that $\End(E)\cong\mathfrak{g}^{\C}$, where $\mathfrak{g}^{\C}$ is the complexified Lie algebra of $G$. On $E$ we can define a connection $\nabla$ which can be written locally as \begin{equation*} \nabla=\d +A, \qquad A\in\Omega^{1}(\M, \End(E)). \end{equation*} Since $\M$ is a 4-manifold, the curvature of this connection $F$ can be decomposed into its self-dual (SD) and anti-self-dual (ASD) parts using the Hodge star: \begin{equation*} F=\d A+A\wedge A=F^{+}+F^{-}, \qquad *F^{\pm}=\pm F^{\pm}.
\end{equation*} The connection $\nabla$ is said to be a \emph{Yang-Mills connection} if it is an extremum of the Yang-Mills action functional and satisfies the Yang-Mills equation: \be{YMF} S[A]=\int_{\M}\tr\left(F\wedge *F\right), \qquad \nabla *F=0, \ee where the trace is over $\End(E)$. We call the connection $\nabla$ a \emph{Yang-Mills instanton} if its curvature is purely SD: $F=F^{+}$ (such a connection is automatically Yang-Mills by the Bianchi identity). The Ward correspondence tells us that there is a duality between Yang-Mills instantons and holomorphic vector bundles over twistor space satisfying certain conditions, and is true for a wide variety of gauge groups $G$ (e.g., \cite{Atiyah:1979}), although it was first formulated by Ward for $\GL(N,\C)$ \cite{Ward:1977ta}. In this paper, we almost always work with $G=\SU(N)$ or $\U(N)$, so we state that version of the theorem here: \begin{thm}[Ward Correspondence]\label{WardCorr} Let $\PT$ be a suitable open subset of $\P^{3}$ and $\M$ the corresponding open subset in space-time. There is a one-to-one correspondence between: \begin{enumerate} \item $\SU(N)$ Yang-Mills instantons on $\M$, and \item holomorphic rank-$N$ vector bundles $V\rightarrow\PT$ such that \emph{(a.)} $V|_{X}$ is topologically trivial for $X\cong\P^{1}$ corresponding to $x\in\M$; \emph{(b.)} $\det V$ is trivial; and \emph{(c.)} $V$ admits a positive real form. \end{enumerate} \end{thm} Here, the conditions (b.) and (c.) are not terribly important. The condition that $\det V$ be trivial amounts to requiring that $\det V$ has a nowhere-vanishing holomorphic section, and a positive real form can be built from a reality structure on $\PT$ and the Killing form (this can even be done uniquely \cite{Atiyah:1978wi}). The important `moral' of the Ward correspondence is the equivalence between Yang-Mills instantons and holomorphic vector bundles on twistor space.
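As an aside, the SD/ASD splitting used above is easy to realize concretely in components. The following numpy sketch (Euclidean signature, where $*^{2}=+1$ on 2-forms; conventions ad hoc) checks the split of an arbitrary 2-form:

```python
import numpy as np
from itertools import permutations

def levi_civita():
    """4d Levi-Civita symbol eps_{mnrs}."""
    e = np.zeros((4, 4, 4, 4))
    for p in permutations(range(4)):
        sgn = 1
        for i in range(4):          # sign via inversion count
            for j in range(i + 1, 4):
                if p[i] > p[j]:
                    sgn = -sgn
        e[p] = sgn
    return e

eps = levi_civita()
star = lambda G: 0.5*np.einsum('mnrs,rs->mn', eps, G)   # Euclidean Hodge star on 2-forms

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
F = A - A.T                                # a random 2-form F_{mn}
Fp = 0.5*(F + star(F))                     # self-dual part
Fm = 0.5*(F - star(F))                     # anti-self-dual part

assert np.allclose(star(star(F)), F)       # *^2 = +1 on 2-forms in Euclidean signature
assert np.allclose(star(Fp), Fp) and np.allclose(star(Fm), -Fm)
assert np.allclose(Fp + Fm, F)
```

In Lorentzian signature $*^{2}=-1$ on 2-forms, so the eigen-decomposition is complex there; the real split above is another reason Euclidean signature is computationally convenient.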
The proof of this theorem relies heavily upon the integrability of the SD Yang-Mills equations and illustrates a key trend: that twistors are powerful tools for describing integrable systems. The Ward correspondence can be used to build explicit examples of Yang-Mills instantons. This was first explored for $G=\SU(2)$ by Atiyah and Ward \cite{Atiyah:1977pw} and later generalized to the ADHM construction \cite{Atiyah:1978ri}. For our purposes, it will be important to note that the correspondence continues to hold for \emph{supersymmetric} Yang-Mills theories \cite{Manin:1997}. \subsection{A Calculus of Distributional Forms} We have just seen that the Penrose transform allows us to write down momentum eigenstates as $(0,1)$-form cohomology classes in twistor space. As we will see in the next chapter, it is often easier to work with twistor states which are dual to these eigenstates in a particular way. In this section, we introduce a calculus of distributional forms on twistor space which will greatly facilitate our later discussions. Here, our presentation will focus on the $\cN=4$ supersymmetric setting, trusting the reader to understand the generalization to other amounts of supersymmetry ($\cN=0,8$ will be the most important other cases). This builds off earlier work in the real setting \cite{Mason:2009sa, Mason:2009qx} and was first set out in the complex setting by \cite{Adamo:2011cb}. \subsubsection*{\textit{Distributional forms}} Building from our earlier discussions, we begin by defining a Dolbeault delta-function of $\C^{4|4}$ by \be{DF1} \bar{\delta}^{4|4}(Z)=\prod_{\alpha=0}^{3}\bar{\delta}(Z^{\alpha})\prod_{a=1}^{4}\chi ^{a}=\bigwedge_{\alpha=0}^{3}\dbar\left(\frac{1}{Z^{\alpha}}\right)\prod_{a=1}^{4}\chi^{a}. \ee This is a $(0,4)$-form on $\C^{4|4}$ of weight zero and obeys the delta-function property in the fermionic coordinates thanks to the usual Berezinian integration rule: $\int\chi \d\chi=1$ \cite{Manin:1997}. 
From this, we can define a projective delta-function by: \be{DF2} \bar{\delta}^{3|4}(Z_{1},Z_{2})=\int_{\P^{1}}\frac{\D c}{c_{1}c_{2}}\bar{\delta}^{4|4}(c_{1}Z_{1}+c_{2}Z_{2})=\int_{\C}\frac{\d s}{s}\bar{\delta}^{4|4}(Z_{1}+s Z_{2}), \ee where $\D c=c_{1}\d c_{2}+c_{2}\d c_{1}$. This is a homogeneous $(0,3)$-form on $\P^{3|4}\cong\PT$ which enforces the projective coincidence of its arguments, is antisymmetric under their interchange, and obeys the natural identity \begin{equation*} f(Z')=\int_{\PT}f(Z)\bar{\delta}^{3|4}(Z,Z')\wedge\D^{3|4}Z. \end{equation*} In other words, $\bar{\delta}^{3|4}$ acts as the anti-holomorphic Dirac current on $\PT$. By integrating against a further parameter, we can obtain the $\delta$-function \be{DF3} \begin{aligned} \bar{\delta}^{2|4}(Z_{1},Z_{2},Z_3)&=\int_{\P^{2}}\frac{\D^{2} c}{c_{1}c_{2}c_3}\,\bar{\delta}^{4|4}(c_1Z_{1}+c_2Z_{2}+c_3Z_3)\\ &=\int_{\C^2}\frac{\d s}{s}\frac{\d t}{t}\,\bar{\delta}^{4|4}(Z_3+sZ_{1}+tZ_{2}) \\ &= \int_\C \frac{\d s}{s}\, \bar{\delta}^{3|4}(Z_{1},Z_{2}+s Z_3)\ , \end{aligned} \ee where $\D^{2}c=c_{1}\d c_{2}\wedge\d c_{3}+$ cyclic permutations. This has support when $Z_1$, $Z_2$ and $Z_3$ are projectively collinear, and is manifestly superconformally invariant, weightless in each $Z_{i}$, and antisymmetric under exchange of any two. Further, this is a homogeneous $(0,2)$-form on $\PT$ which has simple poles when any two of its arguments coincide. Following this pattern, we can similarly define \be{DF4} \begin{aligned} \bar{\delta}^{1|4}(Z_{1},Z_{2},Z_3, Z_4)&=\int_{\P^{3}}\frac{\D^{3} c}{c_{1}c_{2}c_3c_4}\,\bar{\delta}^{4|4}(c_1Z_{1}+c_2Z_{2}+c_3Z_3+c_4Z_4) \\ &=\int_{\C^3}\frac{\d s}{s}\frac{\d t}{t}\frac{\d u}{u}\,\bar{\delta}^{4|4}(Z_4+sZ_3+tZ_{2}+uZ_{1})\\ &= \int_\C \frac{\d s}{s} \bar{\delta}^{2|4}(Z_{1},Z_{2},Z_3+s Z_4). \end{aligned} \ee This $(0,1)$-form is supported where its arguments lie on the same plane $\P^{2}\subset\PT$, and is singular when any three are collinear. 
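These weight properties are straightforward to verify directly. For example, for $\bar{\delta}^{3|4}$ under $Z_{1}\rightarrow tZ_{1}$ we have
\begin{equation*}
\int_{\C}\frac{\d s}{s}\,\bar{\delta}^{4|4}(tZ_{1}+sZ_{2})=\int_{\C}\frac{\d s'}{s'}\,\bar{\delta}^{4|4}(Z_{1}+s'Z_{2}),
\end{equation*}
upon substituting $s=ts'$ and using the weightlessness of $\bar{\delta}^{4|4}$; the checks for the other delta-functions and the other arguments are identical.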
Finally, we have the rational object \be{DF5} \begin{aligned} \bar{\delta}^{0|4}(Z_{1},Z_{2},Z_3,Z_4,Z_5) \equiv[1,2,3,4,5]&=\int_{\P^4}\frac{\D^4 c}{c_1c_2c_3c_4c_5}\bar{\delta}^{4|4}\left(\sum_{i=1}^5c_iZ_i\right)\\ &=\frac{\left( (1234)\chi_5 + \mbox{cyclic}\right)^4}{(1234)(2345)(3451)(4512)(5123)}\ , \end{aligned} \ee where the second line is obtained by integration against the delta functions and $(1234)\equiv \epsilon_{\alpha\beta\gamma\delta}Z_1^\alpha Z_2^\beta Z_3^\gamma Z_4^\delta$ \cite{Mason:2009qx}. As there are no remaining bosonic delta-functions, this forces its five arguments to inhabit the same bosonic `body' $\P^3\subset\PT$ of twistor space. The notation $\bar\delta^{0|4}(Z_1,Z_2,Z_3,Z_4,Z_5)=[1,2,3,4,5]$ indicates that if we were working with \emph{momentum} twistors, this would be the standard dual superconformal invariant: the R-invariant of \cite{Drummond:2008vq}. On twistor space, this is the standard invariant of the regular superconformal group $\mathrm{PSL}(4|4,\C)$. \subsubsection*{\textit{Some identities}} We now briefly state a few properties of these distributional forms. These will prove very useful later on. \begin{lemma}\label{dfprop1} Let $\dbar$ be the anti-holomorphic Dolbeault operator with respect to the twistor coordinates $Z^{I}$. Then \begin{equation*} \dbar\bar{\delta}^{r|4}(Z_{1},\ldots,Z_{5-r})=2\pi i \sum_{i=1}^{5-r}(-1)^{i+1}\bar{\delta}^{r+1|4}(Z_{1},\ldots,\widehat{Z}_{i},\ldots,Z_{5-r}), \end{equation*} where $\widehat{Z}_{i}$ is omitted, for $r=0,\ldots,3$. Further, $\dbar\bar{\delta}^{3|4}(Z,Z')=0$. \end{lemma} \proof The fact that $\dbar\bar{\delta}^{3|4}(Z,Z')=0$ follows immediately from the fact that the $Z^{I}$s are homogeneous coordinates if the general relation holds. Since $\bar{\delta}^{4|4}$ is a top-degree form on $\C^{4|4}$, it is $\dbar$-closed. 
Thus, \begin{equation*} \dbar_{\mathrm{T}}\bar{\delta}^{4|4}\left(\sum_{i=1}^{5-r}c_{i}Z_{i}\right)=0, \end{equation*} where $\dbar_{\mathrm{T}}=\dbar+\dbar_{c}$ is the total $\dbar$-operator on the space of parameters $\{c_{i}\}$ together with the twistors $Z_{i}$, and $\dbar_{c}$ is that on the $c_{i}$s alone. Therefore, we have \begin{multline*} \dbar\bar{\delta}^{r|4}(Z_{1},\ldots,Z_{5-r})=\int_{\P^{4-r}}\frac{\D^{4-r}c}{c_{1}\cdots c_{5-r}}\dbar\bar{\delta}^{4|4}\left(\sum_{i=1}^{5-r}c_{i}Z_{i}\right) \\ =-\int_{\P^{4-r}}\frac{\D^{4-r}c}{c_{1}\cdots c_{5-r}}\dbar_{c}\bar{\delta}^{4|4}\left(\sum_{i=1}^{5-r}c_{i}Z_{i}\right) \\ =\int_{\P^{4-r}}\dbar_{c}\left(\frac{\D^{4-r}c}{c_{1}\cdots c_{5-r}}\right)\bar{\delta}^{4|4}\left(\sum_{i=1}^{5-r}c_{i}Z_{i}\right)\\ =\int_{\P^{4-r}}\D^{4-r}c \left(\sum_{i=1}^{5-r}\frac{1}{c_{1}\cdots\widehat{c}_{i}\cdots c_{5-r}}\dbar_{c}\frac{1}{c_{i}}\right)\bar{\delta}^{4|4}\left(\sum_{i=1}^{5-r}c_{i}Z_{i}\right), \end{multline*} where the third line is obtained by integrating by parts. Now, we use the fact that $\dbar_{c}c_{i}^{-1}=2\pi i\bar{\delta}^{1}(c_{i})$ to obtain: \begin{multline*} \dbar\bar{\delta}^{r|4}(Z_{1},\ldots,Z_{5-r}) \\ =2\pi i \int_{\C^{3-r}}\frac{\d s_{1}}{s_{1}}\cdots\frac{\d s_{3-r}}{s_{3-r}}\left(\bar{\delta}^{4|4}(Z_{2}+s_{1}Z_{3}+\cdots +s_{3-r}Z_{5-r})+\mathrm{cyclic}\right) \\ =2\pi i \left(\bar{\delta}^{r+1|4}(Z_{2},\ldots, Z_{5-r})+\mathrm{cyclic}\right), \end{multline*} as required. $\Box$ \medskip Additionally, using these distributional forms, many integrals can be performed algebraically: \begin{lemma}\label{dfprop2} \begin{eqnarray*} \bar{\delta}^{1|4}(Z_{1},Z_{2},Z_{3},Z_{4}) & = & \int_{\PT}\bar{\delta}^{2|4}(Z_{1},Z_{2},Z)\bar{\delta}^{2|4}(Z,Z_{3},Z_{4})\;\D^{3|4}Z, \\ \left[1,2,3,4,5\right] & = & \int_{\PT}\bar{\delta}^{2|4}(Z_{1},Z_{2},Z)\bar{\delta}^{1|4}(Z,Z_{3},Z_{4},Z_{5})\;\D^{3|4}Z.
\end{eqnarray*} \end{lemma} \proof By the definitions \eqref{DF3}, \eqref{DF4} we have \begin{multline*} \int_{\PT}\bar{\delta}^{2|4}(Z_{1},Z_{2},Z)\bar{\delta}^{2|4}(Z,Z_{3},Z_{4})\;\D^{3|4}Z \\ =\int_{\PT\times \C}\frac{\d s}{s}\bar{\delta}^{3|4}(Z_{1}+sZ_{2},Z)\bar{\delta}^{2|4}(Z,Z_{3},Z_{4})\;\D^{3|4}Z \\ =\int_{\C}\frac{\d s}{s}\bar{\delta}^{2|4}(Z_{1}+s Z_{2},Z_{3},Z_{4})=\bar{\delta}^{1|4}(Z_{1},Z_{2},Z_{3},Z_{4}). \end{multline*} The other identity follows in a similar fashion. $\Box$ \subsection{$\cN=4$ Super-Yang-Mills Theory} The gauge theory portion of this review focuses on maximally supersymmetric (i.e., $\cN=4$) super-Yang-Mills (SYM) theory in four space-time dimensions. This theory is special for a wide variety of reasons which make it the simplest four dimensional gauge theory. It was originally obtained by dimensional reduction from $\cN=1$ Yang-Mills theory in ten dimensions \cite{Brink:1976bc}, is UV finite and superconformal, and its space-time Lagrangian has only two tunable parameters: the gauge group and the coupling. The AdS/CFT correspondence has indicated that it has a gravitational dual in the form of a Type IIB string theory on $AdS_{5}\times S^{5}$ \cite{Maldacena:1997re, Witten:1998qj, Gubser:1998bc, Aharony:1999ti}. This provides a method for performing strong coupling calculations, and the widely believed integrability of the theory in the planar limit has enabled remarkable computational advances for physical observables (see \cite{Beisert:2010jr} for a review). While $\cN=4$ SYM is a highly idealized version of the theories we believe to actually describe the interactions of the electromagnetic, strong, and weak forces (indeed, $\cN=4$ SYM contains no bound states, and therefore cannot describe these forces), it nevertheless captures many of the essential qualities that underlie actual physical theories such as QED or QCD. 
For instance, the computation of 1-loop gluon scattering amplitudes in QCD can be facilitated by considering the 1-loop $\cN=4$ amplitude, along with $\cN=1$ chiral and scalar corrections (c.f., \cite{Ita:2011hi}). In this section, we provide a brief review of the space-time formulation of this theory and some of the interesting properties that its scattering amplitudes exhibit. \subsubsection{Space-time Lagrangian formulation} Let the gauge group be $G=\SU(N)$. The field content of $\cN=4$ SYM is encoded in a single vector multiplet which includes: gluons of helicity $\pm 1$ ($g^{\pm}$); 4 fermions of each $\pm\frac{1}{2}$ helicity ($\tilde{\Psi}_{a\;A'}$ and $\Psi^{a}_{A}$ respectively); and 6 complex scalars (which we can write in the $\mathbf{6}$ vector representation of $\SU(4)_{R}$ as $\Phi_{ab}$). This theory is naturally chiral, since encoding the multiplet into an on-shell superfield results in \be{ossf} X(\eta)=g^{+}+\eta^{a}\tilde{\Psi}_{a}+\cdots+\frac{1}{4!}\eta^{4}g^{-}, \ee so it is naturally expressed in the chiral Minkowski super-space $\M$. This motivates adopting a manifestly chiral expression of the theory, known as the Chalmers-Siegel formulation \cite{Chalmers:1996rq, Chalmers:1997sg}. This entails introducing an auxiliary ASD 2-form: \begin{equation*} G\in\Omega^{2\;-}(\M, \mathfrak{sl}_{N}), \qquad G=G_{AB}\d x_{AA'}\wedge \d x^{A'}_{B}. 
\end{equation*} The space-time Lagrangian is then written as: \be{CS1} \cL=\frac{N}{8\pi^2}\left(\cL_{1}+\frac{\lambda}{2}\cL_{2}\right), \ee where $\lambda$ is the 't Hooft coupling \begin{equation*} \lambda=\frac{g^{2}_{\mathrm{YM}}N}{8\pi^2}, \end{equation*} and the two Lagrangian terms are \be{CS-SD} \cL_{1}=\tr\left(G^{AB}F_{AB}+\Psi^{a}_{A}D^{AA'}\tilde{\Psi}_{a\;A'}-\frac{1}{4}D_{AA'}\Phi_{ab}D^{AA'}\bar{\Phi}^{ab}+\tilde{\Psi}_{a\;A'}\tilde{\Psi}_{b}^{A'}\bar{\Phi}^{ab}\right), \ee \be{CS-int} \cL_{2}=\tr\left(\frac{1}{16}\left[\bar{\Phi}^{ab},\bar{\Phi}^{cd}\right]\left[\Phi_{ab},\Phi_{cd}\right]+2\Psi^{a}_{A}\Psi^{b\;A}\Phi_{ab}-G_{AB}G^{AB}\right). \ee Here, $D_{AA'}=\partial_{AA'}+A_{AA'}$ is the gauge-covariant derivative, $F_{AB}$ is the ASD portion of the curvature via the decomposition \begin{equation*} F_{AA'BB'}=[D_{AA'},D_{BB'}]=\epsilon_{AB}F_{A'B'}+\epsilon_{A'B'}F_{AB}, \end{equation*} and $\bar{\Phi}^{ab}=\frac{1}{2}\epsilon^{abcd}\Phi_{cd}$. To see why such a Lagrangian should be equivalent to the usual $\cN=4$ SYM Lagrangian, it suffices to investigate the pure gauge theory sector, where the Chalmers-Siegel action functional looks like: \begin{equation*} S[A,G]=\int_{\M}\tr\left(F\wedge G\right)-\frac{\lambda}{2}\int_{\M}\tr\left(G\wedge G\right). \end{equation*} This gives the field equations \begin{equation*} F^{-}=\lambda G, \qquad \nabla G=0, \end{equation*} which can be seen to imply the Yang-Mills equations thanks to the Bianchi identity: \begin{equation*} \nabla *F=\nabla(F^{+}-F^{-})=\nabla(F-2F^{-})=0. \end{equation*} This means that the Chalmers-Siegel formulation agrees with the classical Yang-Mills theory up to a topological term (which is irrelevant for perturbation theory). 
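The field equations above follow from a one-line variation (a quick sketch, using that SD and ASD 2-forms wedge to zero, so $F\wedge G=F^{-}\wedge G$): varying $G$ gives
\begin{equation*}
\delta_{G}S=\int_{\M}\tr\left(\delta G\wedge\left(F^{-}-\lambda G\right)\right)=0\quad\Longrightarrow\quad F^{-}=\lambda G,
\end{equation*}
while varying $A$ and integrating by parts gives $\int_{\M}\tr\left(\delta A\wedge\nabla G\right)=0$, i.e., $\nabla G=0$.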
The field equations for $\cN=4$ SYM in Chalmers-Siegel form are: \begin{eqnarray} D^{B}_{A'}G_{AB} & = & \left\{\tilde{\Psi}_{a\;A'},\Psi^{a}_{A}\right\}-\frac{1}{2}\left[\Phi_{ab},D_{AA'}\bar{\Phi}^{ab}\right], \label{yFE1} \\ D^{AA'}\tilde{\Psi}_{a\;A'} & = & \lambda \left[\Psi^{b}_{A},\Phi_{ab}\right], \label{yFE2} \\ \Box \Phi_{ab} & = & \left\{\tilde{\Psi}_{[b}^{A'},\tilde{\Psi}_{a]\;A'}\right\}+\lambda\epsilon_{abcd}\left\{\Psi^{c}_{A},\Psi_{d\;A}\right\}+\lambda\left[\Phi_{c[a},[\bar{\Phi}^{cd},\Phi_{b]d}]\right], \label{yFE3} \\ D^{AA'}\Psi^{a}_{A} & = & \left[\tilde{\Psi}_{b\;A'},\bar{\Phi}^{ab}\right], \label{FE4} \\ F_{AB} & = & \lambda G_{AB} \label{FE5}. \end{eqnarray} Supersymmetry acts on the multiplet of fields $\{A,\tilde{\Psi},\Phi,\Psi, G\}$ via the generators $\delta_{\varepsilon}$, $\delta_{\tilde{\varepsilon}}$, where $\varepsilon_{A}^{a}$ and $\tilde{\varepsilon}_{a\;A'}$ are spinors. Explicitly, this action is given by: \be{eqn: susyg1} \delta_{\varepsilon}\left( \begin{array}{c} A_{AA'} \\ \tilde{\Psi}_{a\;A'} \\ \Phi_{ab} \\ \Psi^{a}_{A} \\ G_{AB} \end{array} \right) = \left( \begin{array}{c} \varepsilon^{a}_{A}\tilde{\Psi}_{a\;A'} \\ \varepsilon^{b\;A}D_{AA'}\Phi_{ab} \\ \frac{1}{2}\varepsilon^{A\;[c}\Psi^{d]}_{A}\epsilon_{abcd} \\ \frac{1}{2}\varepsilon^{b}_{A}[\Phi_{cb},\bar{\Phi}^{ca}]-\varepsilon^{a\;B}G_{AB} \\ \varepsilon^{a}_{(A}[\Psi_{B)}^{b},\Phi_{ab}] \end{array} \right), \ee \be{eqn: susg2} \delta_{\tilde{\varepsilon}}\left( \begin{array}{c} A_{AA'} \\ \tilde{\Psi}_{a\;A'} \\ \Phi_{ab} \\ \Psi^{a}_{A} \\ G_{AB} \end{array} \right) = \left( \begin{array}{c} \lambda\tilde{\varepsilon}_{a\;A'}\Psi^{a}_{A} \\ \tilde{\varepsilon}_{a}^{B'}F_{A'B'}+\frac{\lambda}{2}\tilde{\varepsilon}_{b\;A'}[\bar{\Phi}^{bc},\Phi_{ca}] \\ \tilde{\varepsilon}^{A'}_{[a}\tilde{\Psi}_{b]\;A'} \\ \tilde{\varepsilon}_{b}^{A'}D_{AA'}\bar{\Phi}^{ab} \\ \tilde{\varepsilon}_{a}^{A'}D_{A'(A}\Psi_{B)}^{a} \end{array} \right), \ee One can verify that 
$\{\delta_{\varepsilon},\delta_{\tilde{\varepsilon}}\}=0$ up to the field equations. An interesting fact about $\cN=4$ SYM is that the field equations can be encoded by a system of constraints for a superconnection on a gauge bundle over $\M$ \cite{Witten:1978xx, Witten:1985nt, Harnad:1985bc}; this construction has deep connections with twistor theory (see \cite{Harnad:1988rs} for a review). Additionally, the field content of $\cN=4$ SYM is easily encoded using a supersymmetric extension of the Penrose transform. For an abelian theory, we can take the $(0,1)$-form on twistor space: \begin{multline}\label{superfield1} \cA(Z,\bar Z,\chi)= a(Z,\bar Z)+\chi^{a}\tilde{\psi}_{a}(Z,\bar Z)+\frac{\chi^{a}\chi^{b}}{2}\phi_{ab}(Z,\bar Z) \\ +\frac{\epsilon_{abcd}}{3!}\chi^{a}\chi^{b}\chi^{c}\,\psi^{d}(Z,\bar Z)+\frac{\epsilon_{abcd}}{4!}\chi^a\chi^b\chi^c\chi^{d}\, g(Z,\bar Z) \end{multline} where $a$, $\tilde{\psi}$, $\phi$, $\psi$, and $g$ have homogeneity $0$, $-1$, $-2$, $-3$ and $-4$ respectively, corresponding on-shell (i.e., $\dbar\cA=0$) to z.r.m. fields $\{F_{A'B'},\tilde{\Psi}_{a\;A'}, \Phi_{ab}, \Psi_{A}^a, G_{AB}\}$ on space-time by theorem \ref{PenTran}.
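As a consistency check on these weights, recall that the fermionic coordinates $\chi^{a}$ scale with weight $+1$ (just like the bosonic $Z^{\alpha}$), so every term in \eqref{superfield1} is weightless:
\begin{equation*}
\mathrm{wt}\left(\chi^{a}\tilde{\psi}_{a}\right)=1-1=0,\qquad \mathrm{wt}\left(\tfrac{1}{2}\chi^{a}\chi^{b}\phi_{ab}\right)=2-2=0,\qquad\ldots,\qquad \mathrm{wt}\left(\tfrac{1}{4!}\chi^{4}\,g\right)=4-4=0,
\end{equation*}
as required for $\cA$ to be a well-defined object of homogeneity zero on $\PT$.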
The integral formulae \eqref{nhel}-\eqref{phel} can now be used to build on-shell superfields on space-time encoding the $\cN=4$ multiplet: \be{superfield2} \begin{aligned} \cF_{A'B'} &= \int _{X } \frac{\del^2}{\del\mu ^{A'}\del\mu^{B'}} \, \cA(ix^{AA'}\lambda_{A}, \lambda_{A},\theta^{Aa}\lambda_{A} )\wedge \D\lambda\\ &=F_{A'B'}+\theta^{Aa}\p_{AA'}\left[\widetilde\Psi_{aB'}+ \theta^{Bb}\p_{BB'}\left(\frac{\Phi_{ab}}{2}+ \theta^{Cc}\varepsilon_{abcd}\left(\frac{\Psi_{C}^d}{3!}+ \theta^{Dd}\frac{G_{CD}}{4!} \right) \right)\right] \end{aligned} \ee and \be{Susy-Pint2} \begin{aligned} \cF_{ab} &= \int _{X } \frac{\del^2}{\del\chi^a\del\chi^b}\, \cA (ix^{AA'}\lambda_{A}, \lambda_{A},\theta^{Aa}\lambda_{A} )\wedge \D\lambda\\ &=\Phi_{ab} +\theta^{Cc}\varepsilon_{abcd}\left(\Psi_{C}^d+\theta^{Dd}\frac{G_{CD}}{2}\right)\ , \end{aligned} \ee and another component $\cF_{aA'}$ (which has a formula as above with a mixed $\mu$ and $\chi$ derivative). These can be interpreted as the non-zero parts of the curvature $2$-form \be{susy-curv} \cF=\cF_{A'B'}\varepsilon_{AB}\rd x^{AA'}\wedge \rd x^{BB'}+\cF_{aA'} \varepsilon_{AB}\rd x^{AA'}\wedge\rd\theta ^{Ba}+ \cF_{ab}\varepsilon_{AB}\rd \theta^{Aa}\wedge \rd \theta^{Bb} \ee of the on-shell space-time superconnection \begin{equation*} \CA=\Gamma_{AA'}(x,\theta)\d x^{AA'}+\Gamma_{a\;A}\d\theta^{Aa}. \end{equation*} In appendix \ref{Appendix1}, we demonstrate how this superconnection can be obtained explicitly for abelian and $\SU(N)$ gauge groups, but it can be understood geometrically by a supersymmetric extension of the Ward correspondence \cite{Manin:1997}. \subsubsection{Special properties of scattering amplitudes} \label{SAP} Among the most fundamental observables that can be calculated in any quantum field theory are scattering amplitudes.
Not only are these realistic observables in the sense that they are related to quantities measured in experimental particle physics, but they also tell us a great deal about the underlying mathematical structure of the theory. The data for a scattering amplitude is usually specified in terms of incoming on-shell states composed of a momentum and polarization vector; in four dimensions it is convenient to replace the polarization information with the helicity data. Hence, an $n$-particle scattering amplitude $\cA_{n}$ is a function of on-shell momenta $p_{AA'}=p_{A}\tilde{p}_{A'}$ and helicity data (for gluon scattering, this is simply a $\pm 1$ label for each particle). In $\cN=4$ SYM, any classical, or \emph{tree-level}, scattering amplitude can be written in the form: \be{colorstrip1} \cA^{0}_{n}=g_{\mathrm{YM}}^{n-2}\sum_{\sigma\in S_{n}/\Z_{n}}\tr\left(\mathsf{T}^{a_{\sigma(1)}}\cdots \mathsf{T}^{a_{\sigma(n)}}\right)A^{0}_{n}(\sigma(1),\ldots, \sigma(n)), \ee where the $\mathsf{T}^{a}$s are the generators of the fundamental representation of the gauge group (which we take to be $\SU(N)$), and the sum runs over all non-cyclic permutations of the $n$ particles. The function $A^{0}_{n}$ is called a \emph{color-stripped} amplitude, and clearly enjoys a cyclic symmetry in its arguments by definition. At higher orders in perturbation theory, amplitudes cannot be color-stripped so easily; a general $l$-loop amplitude will contain $l+1$ color traces over the gauge group. However, if we consider the planar limit of the gauge theory (i.e., $N\rightarrow\infty$) then a single-trace term dominates (c.f., \cite{Dixon:2011xs}): \be{colorstrip2} \cA^{l}_{n}\xrightarrow{\mathrm{planar}\:\mathrm{limit}} (8\pi^2)^{l}g_{\mathrm{YM}}^{n-2}\lambda^{l} \sum_{\sigma\in S_{n}/\Z_{n}}\tr\left(\mathsf{T}^{a_{\sigma(1)}}\cdots \mathsf{T}^{a_{\sigma(n)}}\right)A^{l}_{n}(\sigma(1),\ldots, \sigma(n)).
\ee Hence, in the planar limit, all amplitudes are uniquely determined by their color-stripped subamplitudes. For the remainder of this review, we will exclusively consider the color-stripped amplitudes $A^{l}_{n}$ in gauge theory. Even calculating tree-level amplitudes using a space-time Lagrangian such as \eqref{CS1} can be an involved process. Using traditional methods, amplitudes are computed using the space-time Lagrangian's Feynman rules, and the number of diagrams required grows roughly factorially with particle number! However, by organizing amplitudes according to helicity information, remarkable simplifications occur. In pure gauge theory (i.e., $\cN=0$), one can show that for $n$ gluon configurations where all the particles have the same helicity or only one particle has a different helicity, the scattering amplitudes vanish. This result follows (in a sense) because of the integrability of the Yang-Mills instanton equations. The truly remarkable result appears when we consider the first non-vanishing tree amplitude: this occurs when two gluons have a different helicity than the rest, and is referred to as the Maximal-Helicity-Violating (MHV) case. To standardize conventions, we consider MHV amplitudes to involve 2 gluons of negative helicity, and the rest of positive helicity. In this case, the tree-level scattering amplitude takes the famous Parke-Taylor form \cite{Parke:1986gb, Berends:1987me}: \be{ParkeTaylor0} \delta^{4}\left(\sum_{i=1}^{n}p_{i}\right)\frac{\la l\;m\ra^{4}}{\la 1\;2\ra\cdots \la n\;1\ra}, \ee where gluons $l$ and $m$ have negative helicity, and $\la i\;j\ra=\epsilon_{AB}p_{i}^{A}p_{j}^{B}$, etc. This can be generalized to $\cN=4$ SYM by considering scattering amplitudes as functionals of the on-shell superfield \eqref{ossf}, and extracting the portion which is homogeneous of degree $8$ in the Grassmann variables $\eta_{i}$ \cite{Nair:1988bq}. 
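As a concrete instance of \eqref{ParkeTaylor0}: for four gluons with particles $1$ and $2$ of negative helicity, the formula reduces to
\begin{equation*}
\delta^{4}\left(\sum_{i=1}^{4}p_{i}\right)\frac{\la 1\;2\ra^{4}}{\la 1\;2\ra\la 2\;3\ra\la 3\;4\ra\la 4\;1\ra},
\end{equation*}
a single rational term, to be compared with the sum of Feynman diagrams that would ordinarily compute this amplitude.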
More generally, an amplitude $A^{l}_{n}$ can be expanded as: \begin{equation*} A^{l}_{n}=A^{l}_{n,0}+A^{l}_{n,1}+\cdots +A^{l}_{n, n-4}, \end{equation*} where $A^{l}_{n,k}$ has homogeneity $4(k+2)$ in $\eta_{i}$ and is referred to as an N$^k$MHV amplitude. This is the natural generalization from the $\cN=0$ setting, where an N$^k$MHV amplitude has $k+2$ gluons of negative helicity and $n-k-2$ of positive helicity. This leads to the $\cN=4$ version of the Parke-Taylor formula: \be{ParkeTaylor} A^{0}_{n,\mathrm{MHV}}=\frac{\delta^{4|8}\left(\sum_{i=1}^{n}p_{i}\right)}{\la 1\;2\ra\cdots \la n\;1\ra} , \ee where the super-momentum-conserving delta function $\delta^{4|8}$ is given by \begin{eqnarray*} \delta^{4|8}\left(\sum_{i=1}^{n}p_{i}\right) & = & \delta^{4}\left(\sum_{i=1}^{n}p_{i\;A}\tilde{p}_{i\;A'}\right)\delta^{0|8}\left(\sum_{i=1}^{n}\eta^{a}_{i}p_{i\;A}\right), \\ \delta^{0|8}\left(\sum_{i=1}^{n}\eta^{a}_{i}p_{i\;A}\right) & = & \prod_{a,A}\left(\sum_{i=1}^{n}\eta^{a}_{i}p_{i\;A}\right). \end{eqnarray*} It is easy to check that \eqref{ParkeTaylor} is superconformally invariant and is homogeneous of overall degree $8$ in the $\eta_{i}$s as required. Performing a fermionic integral to extract the appropriate $\cN=0$ component of this expression produces the factor of $\la i\;j\ra^{4}$ appearing in \eqref{ParkeTaylor0}. The fact that the MHV tree amplitude has such an elegant and simple expression is completely obscured by the traditional Lagrangian or Feynman diagram formulation of the gauge theory. Indeed, for $n=6$, there are over 200 traditional Feynman diagrams which would contribute to \eqref{ParkeTaylor}. This is a strong indicator that the theory is in fact simpler than the space-time formulation appears, and this simplification takes the form of a hidden \emph{dual} superconformal symmetry \cite{Drummond:2008vq}.
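The fermionic extraction mentioned above is easily made explicit: expanding the product over $A$ (up to an overall sign convention),
\begin{equation*}
\delta^{0|8}\left(\sum_{i=1}^{n}\eta^{a}_{i}p_{i\;A}\right)=\prod_{a=1}^{4}\sum_{i<j}\la i\;j\ra\,\eta^{a}_{i}\eta^{a}_{j},
\end{equation*}
so the coefficient of $\eta_{l}^{4}\eta_{m}^{4}$--the component with gluons $l$ and $m$ of negative helicity--is $\la l\;m\ra^{4}$, recovering \eqref{ParkeTaylor0}.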
It is widely believed that $\cN=4$ SYM is integrable in the planar limit; this means that it possesses an infinite-dimensional symmetry algebra (a Yangian algebra) $\mathcal{Y}[\mathfrak{psl}(4|4, \C)]$ \cite{Dolan:2004ps, Drummond:2009fd, Bargheer:2011mm}. In a loose sense, this Yangian algebra is generated by the standard superconformal algebra $\mathfrak{psl}(4|4,\C)$ (which acts on space-time) and another copy of this algebra which acts on a dual space-time: the affine space parametrizing particle momenta. Invariance of physical observables such as scattering amplitudes under this dual conformal symmetry has proven an immensely powerful tool. One well-known example of this is the Bern-Dixon-Smirnov (BDS) ansatz for the all-loop structure of MHV amplitudes in $\cN=4$ SYM \cite{Bern:2005iz}, which takes the form: \be{BDSansatz} A_{n,0}=A^{0}_{n,0}\exp\left[D_{n}(\Gamma_{\mathrm{cusp}}, G_{\mathrm{collinear}}) + F_{n,0}(p_{1},\ldots,p_{n};\lambda)\right], \ee where $D_{n}$ captures the IR divergences of the amplitude and is a function of the cusp anomalous dimension $\Gamma_{\mathrm{cusp}}$ and collinear anomalous dimension $G_{\mathrm{collinear}}$, while $F_{n,0}$ is a finite contribution depending on the kinematics and coupling in a specific way. Since $\Gamma_{\mathrm{cusp}}$ can be fixed completely with integrability \cite{Beisert:2006ez} and $G_{\mathrm{collinear}}$ has been calculated up to four loops \cite{Cachazo:2007ad}, the most interesting part of \eqref{BDSansatz} is the finite contribution $F_{n,0}$, which is explicitly specified by the details of the BDS ansatz. Although this ansatz turns out to fail at two loops and $n=6$ in perturbation theory \cite{Bern:2008ap, Drummond:2008aq} and for large $n$ in the strong coupling regime \cite{Alday:2007he}, it does so in a way that is constrained by dual superconformal invariance.
Twistors have proven a valuable tool for analysing dual superconformal invariance, thanks to Hodges' momentum twistors, which assign a twistor space to the dual affine space of null momenta \cite{Hodges:2009hk}. In this setting the dual superconformal generators take the form displayed in \eqref{scongen}, and can also be expressed in ordinary twistors as \cite{Drummond:2009fd, Mason:2009qx}: \be{dscongen} J^{(1)\;I}_{J}=\sum_{i<j}(-1)^{K}\left[Z^{I}_{i}\frac{\partial}{\partial Z_{i}^{K}}Z^{K}_{j}\frac{\partial}{\partial Z^{J}_{j}}-(i\leftrightarrow j)\right]. \ee In this fashion, the integrability of $\cN=4$ SYM becomes a powerful method for constraining physical observables such as scattering amplitudes. In this review, we will focus primarily on two other simplifying structures which emerge as a result of the hidden simplicity of $\cN=4$ SYM: the Britto-Cachazo-Feng-Witten (BCFW) recursion relations, and the Maximal-Helicity-Violating (MHV) formalism of Cachazo, Svrcek, and Witten. These and other properties of scattering amplitudes in various theories are discussed at great length in the comprehensive review of Elvang and Huang \cite{Elvang:2013cua}. \subsubsection*{\textit{BCFW recursion}} First conjectured in \cite{Britto:2004ap} and later proven in \cite{Britto:2005fq}, the BCFW recursion relations give a recursive procedure for obtaining any gluon tree amplitude in gauge theory, and are easily extended to $\cN=4$ SYM \cite{Brandhuber:2008pf, ArkaniHamed:2008gz}. This can be derived by picking two external momenta for a scattering amplitude and analytically continuing them with a complex variable $z$ while keeping them on-shell and maintaining overall momentum conservation. The amplitude then becomes a complex function $A^{0}_{n,k}(z)$: it has simple poles wherever internal propagators go on-shell, and $A^{0}_{n,k}(0)$ is the original amplitude.
These simple poles correspond to the terms arising in the BCFW recursion, so provided $A^{0}_{n,k}(z\rightarrow\infty)$ vanishes, Cauchy's theorem implies that \be{BCFR1} 0=\frac{1}{2\pi i}\int \frac{\d z}{z}A^{0}_{n,k}(z)=A^{0}_{n,k}(0)+\mbox{BCFW terms}. \ee More specifically, take the incoming particles $1$ and $n$ with on-shell supermomenta $(p_{i\;A}\tilde{p}_{i\;A'},\eta_{i\;a}p_{i\;A})$, and perform the shift: \begin{equation*} \tilde{p}_{n}\rightarrow \hat{\tilde{p}}_{n}=\tilde{p}_{n}+z\tilde{p}_{1}, \qquad \eta_{n}\rightarrow \hat{\eta}_{n}=\eta_{n}+z\eta_{1}, \qquad p_{1}\rightarrow \hat{p}_{1}=p_{1}-zp_{n}. \end{equation*} At certain values $z=z_{i}$, internal propagators in the Feynman diagram expansion of $A^{0}_{n,k}$ will go on-shell. Furthermore, one can show that as $z\rightarrow\infty$, $A^{0}_{n,k}(z)\sim z^{-1}$ \cite{Britto:2005fq, ArkaniHamed:2008yf}, so by \eqref{BCFR1} this leads to an expansion of the amplitude \be{BCFR2} A^{0}_{n,k}=\sum \int \d^{4}\eta A^{0}_{i+1\;L}(\hat{1},\ldots, i, \{-\hat{p},\eta\})\frac{1}{p^{2}}A^{0}_{n-i+1\;R}(\{\hat{p},\eta\},i+1,\ldots,\hat{n}). \ee Here the fermionic integration selects the correct sub-amplitudes $A_{L},A_{R}$ which are compatible with the overall N$^k$MHV degree, the sum is over the possible partitions of external states between the sub-amplitudes, and $p=\sum_{j\in L}p_{j}$. Since the reduced sub-amplitudes can themselves be recursively calculated in a similar fashion, this gives a simplified way of computing the tree-level S-matrix of $\cN=4$ SYM. Indeed, a recursive formula for all tree-amplitudes has been obtained from BCFW recursion \cite{Drummond:2008cr}. A particularly simple example is the MHV tree amplitudes of \eqref{ParkeTaylor}; in this case the entire recursion is composed of `homogeneous' terms where $A_{R}$ is a 3-point anti-MHV subamplitude.
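For reference, the pole locations can be sketched explicitly (with index placements only schematic, up to overall sign conventions). For a partition where particle $1$ lies in the left set, the shifted internal momentum is $\hat{p}_{AA'}=p_{AA'}-z\,p_{n\;A}\tilde{p}_{1\;A'}$, so
\begin{equation*}
\hat{p}^{2}=p^{2}-z\,p_{n}^{A}p_{AA'}\tilde{p}_{1}^{A'}\qquad\Longrightarrow\qquad z_{i}=\frac{p^{2}}{p_{n}^{A}p_{AA'}\tilde{p}_{1}^{A'}},
\end{equation*}
with $p=\sum_{j\in L}p_{j}$ the unshifted momentum flowing through the propagator.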
Beyond tree-level, BCFW recursion can be used to compute the loop integrand of $\cN=4$ SYM to all orders in the planar limit via the all-loop recursion relations of \cite{ArkaniHamed:2010kv}. Furthermore, the BCFW shift becomes extremely simple when expressed on twistor space: $Z_{n}\rightarrow Z_{n}-zZ_{1}$, and the recursion relation \eqref{BCFR2} can be obtained via half-Fourier transform \cite{Mason:2009sa} \begin{multline}\label{BCFR3} A^{0}_{n,k}(Z_{1},\ldots, Z_{n})= \\ \sum \int_{\PT\times\C}\D^{3|4}Z\frac{\d z}{z}A^{0}_{i+1\;L}(Z_{1},\ldots,Z_{i},Z) A^{0}_{n-i+1\;R}(Z,Z_{i+1},\ldots,Z_{n}-zZ_{1}). \end{multline} The twistorial form of the BCFW recursion will be useful in our later discussion of Wilson loops and local operators in twistor space, as well as gravity. \subsubsection*{\textit{The MHV formalism}} While BCFW gives a recursive procedure for computing scattering amplitudes, the MHV rules of \cite{Cachazo:2004kj, Cachazo:2004zb, Cachazo:2004by} provide a Feynman diagram formalism for $\cN=4$ SYM which is dramatically more efficient than standard space-time Lagrangian techniques. This formalism arose by considering the geometry of the instanton moduli space of twistor-string theory near the boundary \cite{Gukov:2004ei}. In the twistor-string picture, an $n$-point N$^k$MHV tree amplitude is given by an integral over the moduli space of $n$-pointed, degree $d=k+1$ curves in $\PT$; on the boundary of this moduli space, such a curve can degenerate into $k+1$ intersecting lines, each of which corresponds to an MHV vertex. The MHV formalism asserts that N$^k$MHV tree amplitudes of $\cN=4$ SYM can be constructed entirely from such disconnected configurations: MHV vertices joined by massless scalar propagators, $1/p^2$ \cite{Cachazo:2004kj}.
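The vertex count in these diagrams is fixed by Grassmann degree bookkeeping (a quick sketch, assuming each propagator comes with a $\int\d^{4}\eta$ sum over the multiplet, as in BCFW): each MHV vertex carries degree $8$ while each propagator integration removes degree $4$, so a diagram with $v$ vertices and $v-1+l$ propagators at $l$ loops has total degree
\begin{equation*}
8v-4(v-1+l)=4(v-l+1)=4(k+2)\quad\Longrightarrow\quad v=k+l+1,
\end{equation*}
which gives $k+1$ vertices at tree level, matching the degenerate twistor-string curves described above.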
The MHV formalism has now been proven to be correct at tree-level (for all Yang-Mills theories) via a complex analysis argument which uses a BCFW momentum shift extended to \emph{all} the external states \cite{Risager:2005vk, Elvang:2008na, Elvang:2008vz}. It can also be extended to loop level, albeit with some caveats: it can be shown to give the correct 1-loop MHV amplitude in $\cN=4$ SYM \cite{Brandhuber:2004yw} and can be expressed in momentum twistor space \cite{Bullimore:2010pj}, where it was shown to produce the correct planar integrand to all loops for supersymmetric theories which are cut-constructible \cite{Bullimore:2010dz}.\footnote{Such loop integrands are divergent upon integrating over loop momenta or region variables, and require regularization. We will discuss this in more detail later.} In this case, an $l$-loop N$^k$MHV amplitude will involve diagrams containing $k+l+1$ MHV vertices. A key point of the MHV formalism is that the scalar propagators connecting MHV vertices in a diagram are off-shell. This means that we cannot decompose $p_{AA'}$ into a tensor product of two Weyl spinors of opposite chirality. But given the Parke-Taylor formula \eqref{ParkeTaylor}, we need a spinor $p_{A}$ for the MHV vertices to be well-defined. This is accomplished by choosing an arbitrary reference spinor $\hat{\iota}^{A'}$ (which we call the \emph{CSW reference spinor}), and defining \be{CSWspinor} p_{A}=p_{AA'}\hat{\iota}^{A'}, \ee for the off-shell propagators. It can be shown that dependence on the choice of CSW spinor drops out of the final amplitude after all MHV diagrams have been summed over. Despite its simplicity and utility, the origins of the MHV formalism have remained mysterious.
There are significant gaps in any explanation coming from twistor-string theory, and the formalism is non-obvious from the gauge theory's space-time Lagrangian (although a transformation which does produce the MHV formalism from the Lagrangian can be engineered \cite{Mansfield:2005yd}). A major goal of Section \ref{Chapter3} will be to show that the MHV formalism follows naturally as the gauge-fixed Feynman rules of the twistor action for $\cN=4$ SYM. Although prior efforts had indicated that this might be true using momentum eigenstates \cite{Boels:2007qn}, our derivation will be based entirely in twistor space and will manifest superconformal invariance. \medskip The second half of this review will investigate how the themes explored in the context of $\cN=4$ SYM can be extended to gravity. Unlike BCFW recursion, which extends to Einstein (super)gravity \cite{Bedford:2005yy, Cachazo:2005ca, Benincasa:2007qj}, the na\"{i}ve MHV vertex expansion defined by the Risager all-line shift fails \cite{BjerrumBohr:2005jr, Bianchi:2008pu} due to the `non-holomorphicity' of graviton scattering amplitudes. \section{Twistor Action for $\cN=4$ Super-Yang-Mills} \label{Chapter3} In this section, we will apply our background knowledge of twistor theory to formulate maximally supersymmetric gauge theory in four dimensions (i.e., $\cN=4$ SYM) as a gauge theory on twistor space. Our primary tool will be the twistor action for $\cN=4$ SYM \cite{Mason:2005zm, Boels:2006ir}, which can be thought of as an effective action for twistor-string theory which captures only the gauge theory contributions, eliminating the conformal gravity modes. After recalling the basic definition and properties of the twistor action, we set out a rigorous derivation of its Feynman rules in a particular axial gauge. 
The main result is that these Feynman rules reproduce the MHV formalism of \cite{Cachazo:2004kj}; this provides a proof of the MHV formalism (at tree level), indicates its twistorial nature, and allows us to easily compute IR-finite scattering amplitudes on twistor space. We demonstrate how all tree-level scattering amplitudes can be calculated in this manner, and also discuss the status of loop-level calculations in perturbation theory. The main advantage of this formalism (besides being dramatically more efficient than space-time techniques) is that it manifests superconformal invariance, up to the choice of reference spinor in the MHV formalism. \subsection{Definition and Basic Properties} The setting for this section will be $\cN=4$ supersymmetric twistor space $\PT\subset\P^{3|4}$; we take a gauge group $G=\SU(N)$ and a bundle $E\rightarrow\PT$ (which can be thought of as the space-time gauge bundle pulled back to twistor space) with $\End(E)\cong\mathfrak{sl}_{N}$. For simplicity, we will assume that this bundle is topologically trivial: $c_{1}(E)=0$, although this assumption can be relaxed to $c_{1}(E|_{X})=0$ without serious consequences for any of our results. As we noted earlier, the field content of $\cN=4$ SYM can be encoded in a homogeneous $(0,1)$-form $\cA\in\Omega^{0,1}(\PT,\cO\otimes\End(E))$, \be{tsconn} \cA=a+\chi^{a}\tilde{\psi}_{a}+\frac{\chi^{a}\chi^{b}}{2!}\phi_{ab}+\frac{\epsilon_{abcd}}{3!}\chi^{a}\chi^{b}\chi^{c}\psi^{d}+\frac{\chi^4}{4!}g, \ee which has no components in the anti-holomorphic directions, and each bosonic component corresponds to a space-time field via the Penrose transform.\footnote{Technically, for $G\neq\U(1)$, this requires a non-abelian generalization of the Penrose transform.
This can be defined by finding a holomorphic trivialization of the bundle $E|_{X}$ on the $\P^{1}$ fibers of twistor space; we will discuss this explicitly in the next section.} This acts as a $(0,1)$-connection (or, equivalently, an endomorphism-valued complex structure) on $E$. We want an action functional on twistor space which mimics the structure of the Chalmers-Siegel action \eqref{CS1}; this requires an instanton term plus an ASD interaction term. By theorem \ref{WardCorr}, the bundle $E$ with connection $\dbar+\cA$ is equivalent to an $\cN=4$ Yang-Mills instanton on $\M$ provided $F^{0,2}=\dbar\cA+\cA\wedge\cA=0$. Since $F^{0,2}=0$ are the Euler-Lagrange equations of the holomorphic Chern-Simons functional, we can account for the SD portion of the theory on twistor space with: \be{TASD} S_{1}[\cA]=\frac{i}{2\pi}\int_{\PT}\D^{3|4}Z\wedge\tr\left(\cA\wedge\dbar\cA+\frac{2}{3}\cA\wedge\cA\wedge\cA\right). \ee Beyond the Ward Correspondence, there is substantial motivation for this action as a description of the instanton sector. An early form of this action was derived by Sokatchev for self-dual $\cN=4$ SYM \cite{Sokatchev:1995nj}, and the holomorphic Chern-Simons functional arises naturally in twistor-string theory, which in Witten's original formulation is a topological B-model with target $\PT$ \cite{Witten:2003nn}. Open strings are stretched between $D5$-branes wrapped on $\P^{3|4}$; hence the theory of open $D5$-$D5$ strings is described by precisely the holomorphic Chern-Simons functional \cite{Witten:1992fb}. Accounting for the ASD interactions of the gauge theory is a more subtle problem, though. In twistor-string theory, these interactions take the form of $D1$-$D5$ instantons in Witten's model, or world-sheet instantons in the heterotic model \cite{Mason:2007zv, ReidEdwards:2012tq}.
At the level of a generating functional, such a contribution looks like \begin{equation*} \int_{\M_{\R}}\d^{4|8}X\,\det\left(\dbar+\cA\right)|_{X}, \end{equation*} where $\d^{4|8}X$ is the measure over the space of $X\cong\P^{1}$ in $\PT$ corresponding to points in the chosen real slice $\M_{\R}\subset\M$, and $(\dbar+\cA)|_{X}$ is the complex structure induced by $\cA$ restricted to the line $X$. However, under gauge transformations this determinant picks up exponential anomalies which lead to the conformal gravity modes of the twistor-string, so such a generating functional is not suitable for a twistor action which contains only the gauge theory. The correct non-local term is given by taking the logarithm of the twistor-string generating functional: \be{TAInt} S_{2}[\cA]=\int_{\M_{\R}}\d^{4|8}X\,\log\det\left(\dbar+\cA\right)|_{X}, \ee which essentially amounts to a WZW action, as first noted in \cite{Abe:2004ep}. Of course, we are glossing over some subtleties here, because $\det(\dbar+\cA)|_{X}$ is not a function, but rather a section of a determinant line bundle over the space of connections: \begin{equation*} \det(\dbar+\cA)|_{X}\in\Gamma(\cL), \qquad \cL\rightarrow\mathrm{Conn}(E\rightarrow\PT)|_{X}\cong\mathrm{Conn}(E\rightarrow\P^{1}). \end{equation*} Hence, $\det(\dbar+\cA)|_{X}$ should be understood as a $\zeta$-regularized determinant. The determinant line bundle comes equipped with a natural connection (the Quillen connection) \cite{Quillen:1985}, whose curvature can be computed using the Bismut-Freed index theorem \cite{Bismut:1986, Freed:1986zx}. In our case, the data for this is given by the diagram: \begin{equation*} \xymatrix{ & \cL \ar[d] \\ \mathrm{Conn}(E\rightarrow \P^{1})\times\M \ar[d] \ar[r]^{e} & \mathrm{Conn}(E\rightarrow \P^{1}) \\ \M & } \end{equation*} where $e$ is the natural evaluation map. 
The Bismut-Freed index theorem then states that \begin{equation*} F^{(\cL)}=\int \mathrm{Td}(\M)\ch(T\M\oplus E|_{X}) = 0, \end{equation*} since $E$ and $\M$ are both topologically trivial. In other words, $\log\det(\dbar+\cA)|_{X}$ can safely be treated as a function on $\M$ (at least locally), so \eqref{TAInt} is well-defined. We now have the full $\cN=4$ SYM twistor action as originally derived in \cite{Boels:2006ir}: \be{TwistorAction} S[\cA]=S_{1}[\cA]+\lambda S_{2}[\cA], \ee which is invariant under gauge transformations on twistor space:\footnote{In the real category, the normalization of a Chern-Simons action is fixed by requiring that the partition function is gauge invariant under `large' gauge transformations; this results in the familiar $\frac{k}{4\pi}$ normalization. A holomorphic Chern-Simons theory on a Calabi-Yau 3-fold $M$ is defined up to periods of $H_{3}(M,\Z)$, which are generically dense in $\C$ \cite{Thomas:1997}. However, since $H_{3}(\P^{3},\Z)=0$ we avoid this ambiguity and our arbitrary normalization $\frac{i}{2\pi}$ is fine.} \be{gaugefreedom} (\dbar+\cA)\rightarrow\gamma(\dbar+\cA)\gamma^{-1}, \qquad \gamma\in\Gamma(E,\End(E)), \ee with $\gamma$ homotopic to the identity, and $\gamma\rightarrow\mathbb{I}_{N}$ asymptotically. This follows because the exponential anomalies caused by gauge transformations in the determinant of \eqref{TAInt} become additional terms thanks to the logarithm; these terms vanish upon performing the fermionic integrations in $\d^{4|8}X$ (cf.\ \cite{Boels:2006ir}). Notice that bosonically, a gauge transformation $\gamma$ is a function of three complex, or six real, variables. This means that the twistor action has substantially more gauge freedom than the space-time $\cN=4$ SYM Lagrangian. This freedom can be fixed or reduced by imposing gauge conditions; two of these will be particularly important for our purposes.
\subsubsection*{\textit{Woodhouse Gauge}} The \emph{Woodhouse gauge} condition \cite{Woodhouse:1985id}: \be{WGauge} \dbar^{*}|_{X}\cA|_X = 0 \ee imposes the condition that $\cA$ is harmonic upon restriction to all fibers $X\cong\P^1$ of twistor space. There are residual gauge transformations preserving this gauge condition \begin{equation*} \dbar^{*}|_{X}\dbar|_{X}\gamma =\Delta_{\P^{1}}\gamma =0, \end{equation*} for each $\P^{1}$ fiber of twistor space. But this says that $\gamma$ is harmonic on each fiber, and since harmonic functions on the compact fibers $\P^{1}$ are constant, $\gamma$ cannot depend on the fiber coordinate. The remaining gauge freedom is reduced to precisely that of space-time gauge transformations: $\gamma=\gamma(x)$. In addition, recall that with Euclidean reality conditions explicit cohomological representatives can be constructed in Woodhouse gauge using theorems \ref{Wrep1}-\ref{Wrep2}. These facts are crucial in the following theorem, which establishes that the twistor action indeed provides a full perturbative description of $\cN=4$ SYM: \begin{thm}[Boels, Mason, \& Skinner \cite{Boels:2006ir}]\label{BMSthm} The twistor action $S[\cA]$ is classically equivalent to the Chalmers-Siegel action \eqref{CS1} in the sense that solutions to its Euler-Lagrange equations are in one-to-one correspondence with solutions to the field equations \eqref{yFE1}-\eqref{FE5} up to space-time gauge transformations. Additionally, upon fixing Woodhouse gauge and Euclidean reality conditions, $S[\cA]$ is equal to the Chalmers-Siegel action. \end{thm} This theorem confirms that the twistor action describes $\cN=4$ SYM at the Lagrangian level, and also indicates that any results which we prove using the twistor action will also be true for the space-time theory (at least perturbatively). \subsubsection*{\textit{Axial/CSW Gauge}} The gauge freedom of the twistor action can also be reduced by choosing an axial gauge on twistor space.
This corresponds to a choice of holomorphic 1-dimensional distribution $D\subset T^{1,0}\PT$ with the requirement that $\cA|_{\overline{D}}=0$. More concretely, if we take $D$ to be the span of some holomorphic vector field $V$, then the axial gauge is the condition that \begin{equation*} \overline{V}\lrcorner\cA=0. \end{equation*} The simplest axial gauge available on twistor space is when $V$ corresponds to a null translation in space-time. This is known as the \emph{CSW gauge} after \cite{Cachazo:2004kj}, and corresponds to the choice of a reference twistor $Z_{*}$ which induces a foliation of $\PT$ by those lines which pass through $Z_{*}$. The CSW gauge is the condition that $\cA$ vanish when restricted to the leaves of this foliation: \be{CSWgauge} \overline{Z_{*} \cdot \frac{\partial}{\partial Z}}\lrcorner \cA =0. \ee It was initially argued using momentum eigenstates that the Feynman rules for the twistor action in the CSW gauge corresponded to the MHV formalism on momentum space \cite{Boels:2007qn}. However, this argument was far from rigorous, and was not self-contained on twistor space. We now present the rigorous derivation of the CSW gauge-fixed Feynman rules for the twistor action, and a purely twistorial derivation of the MHV formalism \cite{Adamo:2011cb}. \subsection{Feynman Rules} We begin by fixing CSW gauge, making the choice of fixed reference twistor to correspond to the `point at infinity' in $\M$: \begin{equation*} Z_{*}=(0,\iota^{A'},0)\in\P^{3|4}. \end{equation*} The gauge condition \eqref{CSWgauge} reduces the number of independent components of the $(0,1)$-connection $\cA$ from three to two; this eliminates the cubic Chern-Simons vertex in $S_{1}[\cA]$. Since this cubic vertex corresponds to the anti-MHV three-point amplitude, one might worry that information has been lost; anti-MHV amplitudes of course still exist, but are now constructed from the remaining vertices of the theory.
The gauge-fixed twistor action is therefore reduced to: \be{CSWgf} S[\cA]=\frac{i}{2\pi}\int_{\PT}\D^{3|4}Z\wedge\tr\left(\cA\wedge\dbar\cA\right)+\lambda\int_{\M_{\R}}\d^{4|8}X\;\log\det(\dbar+\cA)|_{X}. \ee As usual, the propagator is determined by the quadratic portion of the action. However, there are two such contributions in \eqref{CSWgf}: one from the kinetic Chern-Simons portion and another from the perturbative expansion of the $\log\det$ (see below). Since the latter occurs as part of a generating functional of vertices, we choose to treat it perturbatively at the expense of including a two-point vertex in our formalism (as we discuss below). \subsubsection{Vertices} In the CSW gauge, all vertices of the twistor action come from the $\log\det$, and can be made explicit by using $\log\det=\tr\log$ and expanding perturbatively in $\cA$ \cite{Boels:2006ir, Boels:2007qn}: \be{detexp} \log\det(\dbar+\cA)|_{X}=\tr\left(\log \dbar|_{X}\right)+\sum_{n=2}^{\infty}\frac{1}{n}\int_{X^n}\tr\left(\dbar^{-1}|_{X}\cA_{1}\dbar^{-1}|_{X}\cA_{2}\cdots\dbar^{-1}|_{X}\cA_{n}\right), \ee where $\cA_{i}$ is the field inserted at a point $Z_{i}\in X$, and $\dbar^{-1}|_{X}$ is the Green's function for the $\dbar$-operator restricted to $X$. Since the line $X$ can be written as a skew bi-twistor $X^{IJ}=Z_{A}^{[I}Z_{B}^{J]}$, we can introduce a coordinate $\sigma^{A}=(\sigma^{0},\sigma^{1})$ on $X$ and write \begin{equation*} Z(\sigma)=Z_{A}\sigma^{0}+Z_{B}\sigma^{1}. \end{equation*} This allows us to express $\dbar^{-1}|_{X}$ in terms of the Cauchy kernel in these coordinates: \begin{equation*} (\dbar^{-1}|_{X}\cA)(\sigma_{i-1})=\frac{1}{2\pi i}\int_{X}\frac{\cA(Z(\sigma_{i}))\wedge\D\sigma_{i}}{(i-1\;i)}, \end{equation*} with $(i-1\;i)=\epsilon_{AB}\sigma^{A}_{i-1}\sigma^{B}_{i}$ the $\SL(2,\C)$-invariant inner product on the $\P^{1}$ coordinates.
Therefore, the $n^{\mathrm{th}}$ term in the perturbative expansion of the $\log\det$ gives: \be{detexp2} \frac{1}{n}\left(\frac{1}{2\pi i}\right)^{n}\int_{\M_{\R}}\d^{4|8}X \int_{X^n}\tr\left(\prod_{i=1}^{n}\frac{\cA(Z(\sigma_{i}))\wedge\D\sigma_{i}}{(i-1\;i)}\right). \ee Note that we consider the index $i$ modulo $n$ (i.e., $i\sim i+n$); this corresponds to the cyclic particle ordering of a color-stripped amplitude. In order to obtain a formula for the vertex which manifests conformal invariance, we represent the measure $\d^{4|8}X$ as a volume form on the moduli space of degree one maps $Z:\P^{1}\rightarrow\PT$: \be{vmeasure} \d^{4|8}X=\frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\vol\;\GL(2,\C)}. \ee The quotient by the volume of $\GL(2,\C)$ transformations accounts for the redundancy in $\sigma$ and $Z_{A,B}$; this is the $\SL(2,\C)$ automorphism group of $\P^{1}$ together with the $\C^{*}$ scaling freedom. This choice allows us to write down the superconformally invariant formula \be{TAvertex} V(1,\ldots,n)=\int_{\CM_{n,1}}\frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\vol\;\GL(2,\C)}\int_{X^n}\tr\left(\prod_{i=1}^{n}\frac{\cA(Z(\sigma_{i}))\wedge\D\sigma_{i}}{(i-1\;i)}\right), \ee with $\CM_{n,d}$ the space of maps $Z:\P^{1}\rightarrow\PT$ of degree $d$ with $n$ marked points. This is easily recognizable as the twistor-string formulation of the MHV amplitude as an integral over the space of lines in $\PT$ \cite{Roiban:2004yf}, and is a Dolbeault analogue of Nair's original twistor formulation \cite{Nair:1988bq}. Indeed, the Parke-Taylor amplitude can be recovered explicitly by inserting the on-shell momentum eigenstates: \be{YMeig} \cA_{i}=\int_{\C}\frac{\d s}{s}\bar{\delta}^{2}(s\lambda_{i\;A}-p_{i\;A})e^{s[[\mu_{i}\tilde{p}_{i}]]}. \ee We fix the $\GL(2,\C)$ freedom by setting $\sigma_{i}=\lambda_{i}$ and quotienting out by the scale of $Z_{A,B}$.
This gives \begin{multline} \int_{\CM_{n,1}}\frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\vol\;\GL(2,\C)}\int_{X^n}\,\prod_{i=1}^{n}\frac{\cA(Z(\sigma_{i}))\wedge\D\sigma_{i}}{(i-1\;i)} \\ =\int_{\M_{\R}}\d^{4|8}x \int\; \prod_{i=1}^{n}\frac{\d s_{i}}{s_{i}}\bar{\delta}^{2}(s_{i}\lambda_{i\;A}-p_{i\;A})e^{s_{i}[[\mu_{i}\tilde{p}_{i}]]}\frac{\D\lambda_{i}}{\la i-1\;i\ra} \\ =\frac{1}{\prod_{i=1}^{n}\la i\;i+1\ra}\int_{\M_{\R}}\d^{4|8}x\;\exp\left(i\sum_{i=1}^{n}p_{i}\cdot x+\eta_{a\;i}p_{A\;i}\theta^{Aa}\right) \\ =\frac{\delta^{4|8}\left(\sum_{i=1}^{n}p_{i}\right)}{\la 1\;2\ra\cdots \la n\;1\ra}=A^{0}_{n,0}, \end{multline} as expected. In the final step, we used Nair's lemma \cite{Nair:1988bq} to express the delta-function as an integral over the real space-time. Hence, we see that on-shell the vertices of the twistor action are the MHV amplitudes of $\cN=4$ SYM. Determining the form of the twistor propagator in CSW gauge is a bit more involved, though. \subsubsection{Propagator} The propagator is fixed by the kinetic part of the holomorphic Chern-Simons action \begin{equation*} \int_{\PT}\D^{3|4}Z\wedge\tr\left(\cA\wedge\dbar\cA\right), \end{equation*} to be the inverse of the $\dbar$-operator on $\PT$ acting on $(0,1)$-forms in the CSW gauge: \begin{equation*} \dbar \Delta(Z_{1},Z_{2})=\bar{\delta}^{3|4}(Z_{1},Z_{2}), \qquad \overline {Z_{*}\cdot\frac{\partial}{\partial Z_{1}}}\, \lrcorner \,\Delta=\overline {Z_{*}\cdot\frac{\partial}{\partial Z_{2}}}\, \lrcorner \,\Delta=0. \end{equation*} In the end, we will see that the correct form of the propagator is given simply by one of our distributional forms: $\Delta(Z_{1},Z_{2})=\bar{\delta}^{2|4}(Z_{1},*,Z_{2})$. However, the steps necessary for a careful derivation of this using cohomological representatives are rather involved.
We reduce the problem to one on bosonic twistor space $\PT_{b}$ by performing the $\d^{4}\chi$ fermionic integrals in the kinetic portion of the action to obtain: \be{kin} \int_{\PT_{b}}\D^{3}Z\wedge\tr\left(g\wedge\dbar a+\psi^{a}\wedge\dbar\tilde{\psi}_{a}+\frac{\epsilon^{abcd}}{4}\phi_{ab}\wedge\dbar\phi_{cd}\right). \ee From this, we see that the propagator must be a sum of terms, each of which is a kernel for $\dbar$ on $\PT_{b}$ taking values in the proper homogeneity configurations. More formally, we have: \be{prop1} \Delta=(\chi_{2})^{4}\Delta_{0,-4}+\chi_{1}(\chi_{2})^{3}\Delta_{-1,-3}+(\chi_{1})^{2}(\chi_{2})^{2}\Delta_{-2,-2}, \ee where each bosonic propagator obeys: \begin{equation*} \Delta_{k,l}\in H^{0,2}((\PT_{b}\times\PT_{b})\setminus\Delta, \cO(k,l)), \qquad \dbar \Delta_{k,l}=(\dbar_{1}+\dbar_{2})\Delta_{k,l}=\bar{\delta}_{\Delta}, \end{equation*} for $\Delta\subset\PT_{b}\times\PT_{b}$ the diagonal and $\bar{\delta}_{\Delta}$ the anti-holomorphic Dirac current. The inverse of the $\dbar$-operator on non-projective complex manifolds is given locally by the Bochner-Martinelli kernel \cite{Griffiths:1978}. Most attempts at building kernels for $\dbar$ on $\P^{n}$ are rooted in complex analysis (e.g., \cite{Polyakov:1987, Berndtsson:1988}), and geometric efforts work with a positive definite (i.e., Fubini-Study) metric \cite{Gotmark:2008}. While these results are impressive in their generality, they are unwieldy for physical calculations. By using the natural machinery of twistor theory reviewed in Section \ref{Chapter2}, we can obtain a simple answer in CSW gauge. The basic roadmap is to begin with a space-time representative for the propagator in Feynman gauge, transform it to twistor space using Woodhouse representatives, and then make a gauge transformation to arrive at CSW gauge. These calculations are rather involved, so we only outline them here; the interested reader need only consult appendix D of \cite{Adamo:2011cb}. 
Let us consider the propagator component $\Delta_{-2,-2}$ as an example, since this is where the computations are simplest. In order to utilize Woodhouse representatives, we need to impose Euclidean reality conditions, for which the CSW gauge condition reads: \be{EuclCSW} \hat{\iota}^{A'}\lambda^{A}\partial_{AA'}\lrcorner\Delta_{k,l}=N^{\alpha}\frac{\partial}{\partial\hat{Z}^{\alpha}}\lrcorner\Delta_{k,l}=0, \qquad N^{\alpha}=(0,\hat{\iota}^{A'}). \ee Now, on space-time, $\Delta_{-2,-2}$ is just the scalar propagator \begin{equation*} \Delta_{-2,-2}(x_{1},x_{2})=\frac{1}{(x_1-x_2)^2}, \end{equation*} which is a z.r.m. field on $\E\times\E$ away from the diagonal. Hence, we can apply theorem \ref{Wrep1} to construct a Woodhouse representative for the propagator: \begin{multline}\label{eqn: wg22} \Delta^{\mathrm{W}}_{-2,-2}(Z_{1},Z_{2})=\dhat_{1}\dhat_{2}\left(\frac{1}{(1,\hat{1},2,\hat{2})}\right) \\ =2\frac{(\d\hat{Z}_{1},\hat{1},2,\hat{2})\wedge(1,\hat{1},\d\hat{Z}_{2},\hat{2})}{(1,\hat{1},2,\hat{2})^3}-\frac{(\d\hat{Z}_{1},\hat{1},\d\hat{Z}_{2},\hat{2})}{(1,\hat{1},2,\hat{2})^2}, \end{multline} using the fact that \begin{equation*} (x_{1}-x_{2})^{2}=\frac{(1,\hat{1},2,\hat{2})}{\la \lambda_{1}\hat{\lambda}_{1}\ra \la\lambda_{2}\hat{\lambda}_{2}\ra}. \end{equation*} The twistor propagator \eqref{eqn: wg22} is a $(0,2)$-form on $\PT_{b}\times\PT_{b}$, is $\dbar$-closed away from the diagonal, and is in Woodhouse gauge on each factor. Now, we want to exploit the freedom of adding a $\dbar$-exact $(0,2)$-form $\dbar f$ (with $f$ a $(0,1)$-form on $\PT_{b}\times\PT_{b}$) in order to transform \eqref{eqn: wg22} into CSW gauge: \begin{equation*} N\cdot\dhat_{i}\lrcorner(\Delta^{\mathrm{W}}_{-2,-2}+\dbar f)=0, \end{equation*} with $i=1,2$ labelling the factor of twistor space.
Such an $f$ can indeed be found, and after accounting for potential gauge anomalies resulting from delta-functions, we find \be{eqn: csw22*} \Delta_{-2,-2}=\frac{(N,\hat{1},2,\hat{2})(1,\hat{1},N,\hat{2})}{(1,\hat{1},2,\hat{2})}\dbar_{1}\left(\frac{1}{(1,\widehat{N},2,\hat{2})}\right)\wedge\dbar_{2}\left(\frac{1}{(1,\hat{1}, 2,\widehat{N})}\right), \ee which is obviously in CSW gauge since the form component is skewed with $N$. This procedure can be carried out for $\Delta_{-1,-3}$ and $\Delta_{0,-4}$ (with a few additional subtleties) by again applying theorems \ref{Wrep1}-\ref{Wrep2} and finding the appropriate gauge transformation, resulting in \cite{Adamo:2011cb}: \be{eqn: csw13*} \Delta_{-1,-3}=i\frac{[(1,\hat{1},N,\hat{2})]^{2}}{(1,\hat{1},2,\hat{2})}\dbar_{1}\left(\frac{1}{(1,\widehat{N},2,\hat{2})}\right)\wedge\dbar_{2}\left(\frac{1}{(1,\hat{1},2,\widehat{N})}\right), \ee \be{eqn: csw04*} \Delta_{0,-4}=2\frac{\la\hat{\lambda}_{2}\lambda_{1}\ra[(1,\hat{1},N,\hat{2})]^2}{(1,\hat{1},2,\hat{2}) \la\lambda_{2}\hat{\lambda}_{2}\ra}\dbar_{1}\left(\frac{1}{(1,\widehat{N},2,\hat{2})}\right)\wedge\dbar_{2}\left(\frac{1}{(1,\hat{1},2,\widehat{N})}\right). \ee These bosonic components define the full supersymmetric propagator via \eqref{prop1}, where the homogeneity factors appearing at the front of each of \eqref{eqn: csw22*}-\eqref{eqn: csw04*} are balanced by the fermionic coordinates of twistor space. Furthermore, the reference twistor $Z_{*}=-\widehat{N}$, so the full propagator in CSW gauge contains an overall factor of \begin{equation*} \dbar_{1}\left(\frac{1}{(1,\rf,2,\hat{2})}\right)\wedge\dbar_{2}\left(\frac{1}{(1,\hat{1},2,\rf)}\right), \end{equation*} which is supported only on the set $(1,\rf,2,\hat{2})=(1,\hat{1},2,\rf)=0$. This restricts $Z_1$ to lie in the plane spanned by $\{Z_{*},Z_{2},\hat{Z}_2\}$, and $Z_2$ to lie in the plane spanned by $\{Z_{*},Z_1, \hat{Z}_1\}$.
In $\PT$, these two planes must intersect in a line containing the reference twistor: $(\rf ,2,\hat{2})\cap(\rf, 1,\hat{1})=X_{*}$. But this is only possible if $Z_{1}$ and $Z_{2}$ are also contained in $X_{*}$; in other words: $Z_{1}$, $Z_{2}$ and $Z_{*}$ are collinear in twistor space (see Figure \ref{PropGeo}). \begin{figure} \centering \includegraphics[width=4 in, height=3 in]{PropGeo.pdf}\caption{\textit{Twistor space support of the propagator}}\label{PropGeo} \end{figure} Our distributional forms then allow us to represent the twistor space propagator for $\cN=4$ SYM in CSW gauge by \be{TAprop} \Delta(Z_{1},Z_{2})=\bar{\delta}^{2|4}(Z_{1},\rf, Z_{2}). \ee Note that although our derivation here began in Euclidean signature, the result for the full propagator is signature-independent and superconformally invariant up to choice of $Z_{\rf}$. The fact that $\Delta$ is in CSW gauge follows from our derivation, and it is the propagator for the kinetic operator $\dbar$ thanks to lemma \ref{dfprop1}: \begin{equation*} \dbar\Delta=2\pi i\left( \bar{\delta}^{3|4}(Z_{1},Z_{2})+\bar{\delta}^{3|4}(Z_{1},\rf)+ \bar{\delta}^{3|4}(Z_{2},\rf)\right) . \end{equation*} The first term is the anti-holomorphic Dirac current we want, and the other two terms do not contribute to the physical portion of the propagator. This is because $\Delta(Z_{1},Z_{2})$ should be a $(0,1)$-form in each variable; the second and third terms in the above expression have $(0,2)$-form components in $Z_{1}$ and $Z_{2}$, however. In any case, these contributions correspond to unphysical poles in momentum space, which are an expected feature of the axial gauge we are working in. They can be removed entirely by restricting ourselves to the twistor space $\PT\subset\P^{3|4}$ that excludes the `point at infinity' (i.e., $\PT=\{Z | \lambda\neq 0\}$), where the error terms do not have support. 
Finally, the tensor structure of the propagator is included by accounting for the gauge group, and writing these gauge indices explicitly gives: \be{TAcolorprop} \Delta(Z_{1},Z_{2})^{i k}_{j l}=\bar{\delta}^{2|4}(Z_{1},\rf, Z_{2})\left(\delta^{i}_{l}\delta^{k}_{j}-\frac{1}{N}\delta^{i}_{j}\delta^{k}_{l}\right). \ee We often suppress the color structure of the propagator in our following discussions. \subsection{The MHV Formalism} Having derived the CSW gauge-fixed Feynman rules of the twistor action, it is natural to ask what they correspond to on momentum space. Given that we have shown the twistor vertices to be equivalent to the MHV amplitudes of $\cN=4$ SYM on-shell, a natural guess would be the MHV formalism of \cite{Cachazo:2004kj}. In this subsection, we will demonstrate that this is, in fact, true. We choose $Z_{*}=(0,\iota^{A'},0)$, and assume (without loss of generality) that $\iota$ is normalised with respect to Euclidean reality conditions: $[\hat{\iota}\iota]=1$. To begin, we pull back the twistor propagator $\Delta(Z,Z')$ to the primed spinor bundle using the twistor double fibration: \begin{equation*} p:\PS\rightarrow\PT, \qquad (x^{AA'},\theta^{aA},\lambda_{A})\mapsto (\lambda_{A}, ix^{AA'}\lambda_{A},\theta^{aA}\lambda_{A}). \end{equation*} This can be accomplished using the definition of $\bar{\delta}^{2|4}$ and the incidence relations which define the map $p$: \begin{multline}\label{mMHV1} p^{*}\Delta=\int_{\C^2}\frac{\d s}{s}\frac{\d t}{t}\bar{\delta}^{2}(s\iota^{A'}-ix^{AA'}\lambda_{A}-itx^{\prime AA'}\lambda_{A}^{\prime})\bar{\delta}^{2|4}(\lambda_{A}+t\lambda_{A}^{\prime}) \\ =\int_{\C^2}\frac{\d s}{s}\frac{\d t}{t}\bar{\delta}^{2}(s\iota^{A'}-iy^{AA'}\lambda_{A})\bar{\delta}^{2|4}(\lambda_{A}+t\lambda_{A}^{\prime}). 
\end{multline} where the second line follows from the support of the delta functions and we abuse notation by writing \begin{equation*} \bar{\delta}^{2|4}(\lambda_{A}+t\lambda'_{A})\equiv \bar{\delta}^{2}(\lambda_{A}+t\lambda'_{A})\bar{\delta}^{0|4}(\theta^{aA}\lambda_{A}+t\theta^{\prime aA}\lambda'_{A}). \end{equation*} Since this expression is independent of $x+x'$, we are free to perform a Fourier transform in $y=x-x'$ to obtain the momentum space version of the propagator: \be{mMHV2} \widetilde{\Delta}=\int\d^{4|8}y\; e^{ip\cdot y}\frac{\d s}{s}\frac{\d t}{t}\bar{\delta}^{2}(s\iota^{A'}-iy^{AA'}\lambda_{A})\bar{\delta}^{2|4}(\lambda_{A}+t\lambda_{A}^{\prime}). \ee Note that this Fourier transform takes into account the \emph{super}-momentum via \begin{equation*} p\cdot y=p_{AA'}(x-x')^{AA'}+p_{A}\eta_{a}(\theta-\theta')^{aA}. \end{equation*} Now, $\bar{\delta}^{2}(s\iota^{A'}-iy^{AA'}\lambda_{A})$ is a $(0,2)$-form multiplied by four real delta-functions, which uniquely restrict $y$ to be \begin{equation*} y^{AA'}=i\frac{s\iota^{A'}\hat{\lambda}^{A}-\bar{s}\hat{\iota}^{A'}\lambda^{A}}{\la\lambda\hat{\lambda}\ra}. \end{equation*} Performing the $y$-integrals against these delta-functions, and taking into account the $(0,2)$-form components as well as the resulting Jacobian factor, leaves us with: \begin{equation*} \widetilde{\Delta}=\int\;\exp\left[-p_{AA'}\frac{s\iota^{A'}\hat{\lambda}^{A}-\bar{s}\hat{\iota}^{A'}\lambda^{A}}{\la\lambda\hat{\lambda}\ra} \right]\frac{\d s}{s}\frac{\d t}{t}\frac{s\d\bar{s}\;\D\hat{\lambda}}{\la\lambda\hat{\lambda}\ra^2}\bar{\delta}^{2|4}(\lambda_{A}+t\lambda_{A}^{\prime}).
\end{equation*} The $s$-integrals can now be performed to give a holomorphic delta-function: \begin{equation*} \widetilde{\Delta}=\int \frac{\d t}{t}\frac{\bar{\delta}^{1}_{0}(p|\hat{\iota}],\lambda)}{[\iota |p|\lambda\ra^2}\bar{\delta}^{2|4}(\lambda_{A}+t\lambda_{A}^{\prime})=\frac{\bar{\delta}^{1}_{0}(p|\hat{\iota}],\lambda)\wedge\bar{\delta}^{1}_{0}(p|\hat{\iota}],\lambda')}{p^2}. \end{equation*} The fermionic dependence is easily re-introduced using the delta function: \be{mMHV4} \widetilde{\Delta}(p,\lambda,\lambda')=\frac{\bar{\delta}^{1}([\hat{\iota}|p|\lambda\ra)\wedge\bar{\delta}^{1}([\hat{\iota}|p|\lambda'\ra)}{p^2}\delta^{0|4}\left(p_{AA'}\hat{\iota}^{A'}(\theta-\theta')^{aA}\right). \ee But this is precisely the propagator for the MHV formalism on momentum space: the scalar $p^{-2}$ propagator and the prescription that off-shell momentum spinors are defined using the CSW reference spinor and the rule $p_{A}=p_{AA'}\hat{\iota}^{A'}$.\footnote{In \eqref{mMHV4}, the prescription is actually that $\lambda_{A}$ for a propagator leg is defined by $\lambda_{A}=\hat{\iota}^{A'}p_{AA'}$. After inserting momentum eigenstates, this is easily seen to reduce to the CSW prescription.} Additionally, since the proof required Euclidean reality conditions (i.e., the Fourier transform to momentum space is performed on the Euclidean-real slice), this definition of the propagator automatically incorporates the Feynman $i\epsilon$-prescription. To complete the proof that the twistor Feynman rules are equivalent to the momentum space MHV formalism, we must still show that the vertices can be extended off-shell and that the 2-point vertex does not enter the formalism. For the first point, it suffices to demonstrate that off-shell momentum eigenstates reduce to the standard states \eqref{YMeig} on-shell. For simplicity, we will prove this for the abelian case, but the proof can be extended with only notational complications to $\SU(N)$.
We begin with the abelian superconnection for $\cN=4$ SYM (derived in appendix \ref{Appendix1}), $\CA=\Gamma_{AA'}\d x^{AA'}+\Gamma_{a\;A}\d\theta^{aA}$. With Euclidean reality conditions and Woodhouse gauge, the multiplet of the theory takes the form: \begin{eqnarray*} A_{AA'}=e^{ip\cdot x}\varepsilon_{AA'}, \qquad \tilde{\Psi}_{a\;A'}=e^{ip\cdot x}\xi_{A'}\eta_{a}, \qquad \Phi_{ab}=\frac{e^{ip\cdot x}}{2}\eta_{a}\eta_{b}, \\ \Psi^{a}_{A}=\frac{e^{ip\cdot x}}{3!}p_{A}\epsilon^{abcd}\eta_{b}\eta_{c}\eta_{d}, \qquad G_{AB}=\frac{e^{ip\cdot x}}{4!}p_{A}p_{B}\eta^{4}, \end{eqnarray*} where the polarization spinors are defined in relation to the CSW reference spinor: \begin{equation*} p_{A}=p_{AA'}\hat{\iota}^{A'}, \qquad \hat{\iota}^{A'}p^{A}\varepsilon_{AA'}=1, \qquad \hat{\iota}^{A'}\xi_{A'}=1. \end{equation*} The superconnection components can be written in terms of these Woodhouse representatives, and then pulled back to $\PS$ to give an off-shell momentum eigenstate $\cA$ in the Woodhouse gauge. The transformation to CSW gauge requires finding a function $\gamma$ such that \begin{equation*} \hat{\iota}^{A'}\lambda^{A}\partial_{AA'}\lrcorner(\cA+\d\gamma)=0. \end{equation*} A short calculation shows that the required gauge transformation is: \be{osmr} \gamma=i\frac{e^{ip\cdot x}}{\la p\lambda\ra}\left[[\hat{\iota}|\varepsilon|\lambda\ra+(\eta\cdot\chi)\left(1+i\frac{(\eta\cdot\tilde{\chi})}{2}-\frac{(\eta\cdot\tilde{\chi})^2}{3!}-i\frac{(\eta\cdot\tilde{\chi})^3}{4!}\right)\right], \ee where $\chi^{a}=\theta^{aA}\lambda_{A}$ and $\tilde{\chi}^{a}=\theta^{aA}p_{A}$.
It is then easy to see that the off-shell momentum eigenstate in CSW gauge takes the form \begin{multline}\label{osmr2} \cA^{\mathrm{off-shell}}=\bar{\delta}^{1}(\la p\lambda\ra)e^{ip\cdot x}\left[[\hat{\iota}|\varepsilon|\lambda\ra+(\eta\cdot\chi)\left(1+i\frac{(\eta\cdot\tilde{\chi})}{2}-\frac{(\eta\cdot\tilde{\chi})^2}{3!}-i\frac{(\eta\cdot\tilde{\chi})^3}{4!}\right)\right] \\ +\cA_{AA'}\d x^{AA'}+\cA_{a\;A}\d\theta^{aA}. \end{multline} The precise form of the remaining components of the eigenstate is not important because on-shell, they vanish. Additionally, it is easy to see that the first component of \eqref{osmr2} reduces to \eqref{YMeig} on-shell, and it descends from $\PS$ to $\PT$ as required. Finally, the 2-point vertex vanishes in momentum space for trivial reasons of momentum conservation. The most nontrivial case is when the vertex is in the middle of a Feynman diagram with propagators attached to each leg with supermomenta $(p,\eta)$ and $(p',\eta')$. The fermionic part of the momentum conserving delta function then reduces to $\la p\,p'\ra^4\delta^{0|4}(\eta)\delta^{0|4}(\eta')$ and so the spinor products cancel those in the Parke-Taylor denominator, yielding an overall $\la p\, p'\ra^2$ in the numerator. The bosonic delta function then forces $p+p'=0$ and the numerator factor forces the vertex to vanish. In summary, we have proven the following fact: \begin{propn}\label{MHVpropn} After the choice of Euclidean reality conditions, the Feynman rules of the twistor action in CSW gauge are equivalent to the MHV formalism on momentum space. \end{propn} Note the importance of Euclidean reality conditions in this proposition. Although the twistor space vertices and propagator are independent of the choice of space-time signature, their translation to momentum space is not.
This can be seen as a consequence of the Feynman $i\epsilon$-prescription for the propagator, as well as the need to write down explicit representatives when pulling back to the spinor bundle. We will now see that calculating amplitudes on twistor space (where signature choices need not be made) avoids many of the technical issues encountered in this subsection while working with momentum space representatives. \subsection{Scattering Amplitudes in Twistor Space} \label{TScatAmps} \subsubsection{Amplitudes and cohomology} Scattering amplitudes are functionals of asymptotic states; via the Penrose transform, we can represent these using momentum eigenstates which take values in $H^{0,1}(\PT,\cO)$ as given in \eqref{YMeig}. As we will see, the twistor space MHV formalism provides a natural way to calculate the \emph{kernel} for scattering amplitudes. For an $n$-particle scattering amplitude, such a kernel will live in the $n$-fold product of the dual of $H^{0,1}(\PT,\cO)$; in other words, the amplitude itself is obtained by pairing the kernel with momentum eigenstates. At first, a natural choice for this pairing seems to be a Hilbert space structure on $H^{1}(\PT,\cO(k))$; however, this requires a choice of space-time signature and is actually non-local on twistor space \cite{Eastwood:1981}. A much more natural pairing is given by the duality between $(0,1)$-forms and distributional $(0,2)$-forms with compact support on twistor space. This is given simply by: \begin{equation*} \Omega^{0,1}(\PT,\cO)\times\Omega^{0,2}_{c}(\PT,\cO)\rightarrow\C, \qquad (\phi,\alpha)\mapsto\int_{\PT}\D^{3|4}Z\wedge\phi\wedge\alpha. \end{equation*} Hence, on twistor space, we will represent an $n$-particle amplitude as an element of \begin{equation*} A(1,\ldots, n)\in\Omega^{0,2}_{c}\left(\bigoplus_{i=1}^{n}\PT_{i},\cO\right).
\end{equation*} The region of compact support is determined by ensuring that amplitudes manifest crossing symmetry: scattering states can be chosen to have both positive and negative frequency. For instance, if we work in Lorentzian signature, then crossing symmetry dictates that the compact support of the twistor space kernel be contained in $\PN=\PT^{+}\cap\PT^{-}$. Furthermore, the amplitude should be independent of the choice of momentum eigenstates within the same cohomology class of $H^{0,1}(\PT,\cO)$. By taking $\phi=\dbar f$ in the above pairing, this requires the compactly supported $(0,2)$-form to be $\dbar$-closed. Hence, we should find that the twistor space amplitude takes values in $H^{0,2}_{c}(\oplus_{i=1}^{n}\PT_{i},\cO)$. Now, recall the form of the twistor space MHV vertex from \eqref{TAvertex}: \begin{equation*} V(1,\ldots,n)=\int_{\CM_{n,1}}\frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\vol\;\GL(2,\C)}\int_{X^n}\tr\left(\prod_{i=1}^{n}\frac{\cA(Z(\sigma_{i}))\wedge\D\sigma_{i}}{(i-1\;i)}\right) \end{equation*} To obtain the kernel amplitude on twistor space, we want to insert a $(0,2)$-form representative for $\cA$ rather than a $(0,1)$-form momentum eigenstate. This is accomplished by using the elemental state: \be{elemental} \cA_{i}=\bar{\delta}^{3|4}(Z_{i},Z(\sigma_{i})), \qquad Z(\sigma_{i})=Z_{A}\sigma_{i}^{0}+Z_{B}\sigma^{1}_{i}. \ee This forces the twistor for the $i^{\mathrm{th}}$ external state to lie on the line parametrized by $\sigma_{i}$, and after integrating with respect to $\D\sigma_{i}$, reduces to a $(0,2)$-form as desired. The external twistors $Z_{i}$ are then integrated out against the $(0,1)$-form wavefunctions to obtain the final amplitude. 
Hence, the twistor space vertex can be written as: \be{Tvertex} V(1,\ldots,n)=\int \frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\vol\;\GL(2,\C)}\int_{X^n}\prod_{i=1}^{n}\frac{\bar{\delta}^{3|4}(Z_{i},Z(\sigma_{i}))\wedge\D\sigma_{i}}{(i-1\;i)}, \ee ignoring the color trace.\footnote{For the remainder of this section, we take the color trace to be implicit; this is fine as long as we continue to impose the cyclic symmetry of color-stripped amplitudes.} An unfortunate consequence of this choice is that the MHV vertex can no longer be interpreted as taking values in cohomology, since \begin{equation*} \dbar V(1,\ldots, n)=2\pi i \sum_{i=1}^{n}\bar{\delta}^{3|4}(i,i+1)V(1,\ldots,\widehat{i},\ldots n), \end{equation*} meaning that the MHV amplitude will not take values in $H^{0,2}_{c}(\oplus_{i=1}^{n}\PT_{i},\cO)$. However, these anomalies are supported at the collinear limits and express the standard IR singularity structure of the amplitude. This indicates that the collinear IR divergences of an amplitude lead to anomalies in gauge invariance, since our pairing with external wavefunctions is no longer independent of the choice of cohomological representative. Since we have already fixed the CSW gauge, this means that a different choice of gauge would lead to different expressions for the amplitudes. But this is an expected phenomenon in quantum field theory; for instance, a similar mechanism gives rise to anomalies in superconformal invariance of $\cN=4$ SYM \cite{Beisert:2010gn} and can be dealt with by performing suitable deformations of the superconformal algebra generators. Hence, we treat these IR divergences as a relic of our choice of gauge, and choose generic external twistors in the same way that one usually chooses generic (i.e., non-collinear) external momenta for a scattering amplitude.
With \eqref{Tvertex}, we can readily verify that the MHV vertices of the twistor action obey the inverse soft limit \cite{ArkaniHamed:2009si}: \begin{lemma}\label{isl} The MHV vertex \eqref{Tvertex} obeys the \emph{inverse soft limit}: \begin{equation*} V(1,\ldots,n+1)=V(1,\ldots,n)\bar{\delta}^{2|4}(n,n+1,1). \end{equation*} \end{lemma} \proof Define a non-projective coordinate $s$ on the $(n+1)^{\mathrm{st}}$ copy of $X$ by \begin{equation*} s=\frac{(n+1\;1)}{(n\;n+1)}. \end{equation*} Under this change of variables, we have \begin{equation*} Z(\sigma_{n+1})=Z(\sigma_{n})+sZ(\sigma_{1}), \qquad \frac{\d s}{s}=\frac{\D\sigma_{n+1}}{(n\;n+1)(n+1\;1)}, \end{equation*} and the vertex becomes: \begin{multline*} V(1,\ldots,n+1)=\int \frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\vol\;\GL(2,\C)}\prod_{i=1}^{n}\frac{\bar{\delta}^{3|4}(Z_{i},Z(\sigma_{i}))\wedge\D\sigma_{i}}{(i-1\;i)} \\ \times \frac{\d s}{s}\bar{\delta}^{3|4}(Z_{n+1},Z(\sigma_{n})+sZ(\sigma_{1})). \end{multline*} On the support of the delta functions involved, we can set $Z(\sigma_{n})=Z_{n}$ and $Z(\sigma_{1})=Z_{1}$ in the last factor and the proof is complete. $\Box$ \medskip This property was first noted in twistor space using split signature reality conditions \cite{Mason:2009sa}, and can be applied repeatedly to the vertex. If every such application of the inverse soft limit is taken with respect to $Z_{1}$, then one obtains the formula \be{Tvertex2} V(1,\ldots,n)=V(1,2)\prod_{i=3}^{n}\bar{\delta}^{2|4}(1,i-1,i), \ee where $V(1,2)$ is the two-point vertex of the theory, and the $n-2$ delta function factors correspond to the $n-2$ applications of the inverse soft limit. While this minimizes the number of remaining integrals and manifests superconformal invariance, it no longer exhibits the explicit cyclic symmetry of the twistor-string version of the vertex. Indeed, there are many possible reductions of the $n$-point vertex to formulae like \eqref{Tvertex2}, depending on how the inverse soft limits are taken.
One can move between these (equivalent) representations by repeated application of the cyclic identity for the four-point vertex: \be{4ptcyclic} V(1,2,3)\bar{\delta}^{2|4}(1,3,4)=V(2,3,4)\bar{\delta}^{2|4}(2,4,1). \ee The two-point vertex is an essential part of the reduced form of the MHV vertex on twistor space. We have seen that it cannot contribute to the Feynman diagram calculus of the twistor action due to momentum conservation; however, it would be nice to have a purely twistorial argument for this. \subsubsection*{\textit{The two-point vertex}} The fermionic integrals in the twistor expression for $V(1,2)$ can be performed algebraically, resulting in a Jacobian factor of $(1\; 2)^4$ and leaving \be{2pt1} V(1,2)=\int\limits_{\M_{\R}\times(\P^1)^2}\frac{\d^{4}Z_{A}\wedge\d^{4}Z_{B}}{\vol\;\GL(2,\C)}\,\D\sigma_{1}\D\sigma_{2}(1\;2)^{2}\bar{\delta}^{3}_{0,-4}(Z_{1},Z(\sigma_{1}))\bar{\delta}^{3}_{0,-4}(Z_{2},Z(\sigma_{2})). \ee Here, we define \begin{equation*} \bar{\delta}^{3}_{p,-p-4}(Z_1,Z_2)= \int_{\C}\frac{\d s}{s^{1+p}} \, \bar \delta^4(sZ_1+Z_2) \end{equation*} where the subscripts denote the homogeneity in the first and second entry respectively. The $\vol\;\GL(2,\C)$ quotient can be fixed by setting $\sigma_{1}=(1,0)$ and $\sigma_{2}=(0,1)$ on $\P^{1}$, and then reducing $\d^{4}Z_{A}\d^{4}Z_{B}$ to projective integrals. Removing the appropriate Jacobian factor we obtain \be{2pt2} V(1,2)=\oint\limits_{\M_{\R}\times(\P^1)^2}\D^{3}Z_{A}\wedge\D^{3}Z_{B}\,\bar{\delta}^{3}_{0,-4}(Z_{1},Z_{A})\bar{\delta}^{3}_{0,-4}(Z_{2},Z_{B}), \ee where the contour is now understood as arising from integrating $Z_A$ and $Z_B$ over the $\P^1$ corresponding to $x\in\M$ and then integrating over the real slice $\M_{\R}$. This is an integral of a $12$-form over an $8$-dimensional contour so that we are left with a $(0,2)$-form in each factor of $Z_{1}$ and $Z_{2}$, as expected for a twistor space vertex. 
Now, using a simple bosonic extension of lemma \ref{dfprop1} we can see that: \begin{equation*} \dbar \bar{\delta}^{2}_{0,0,-4}(Z_{1},Z_{2},Z_{3})=2\pi i\left( \bar{\delta}^{3}_{0,-4}(Z_{1},Z_{3})+\bar{\delta}^{3}_{0,-4}(Z_{2},Z_{3})\right). \end{equation*} Note that there are only two terms here because in \begin{equation*} \bar{\delta}^{2}_{0,0,-4}(Z_{1},Z_{2},Z_{3})=\int_{\P^2}\frac{c_{3}^{3}\;\D^{2}c}{c_{1}c_{2}} \bar{\delta}^{4}(c_{1}Z_{1}+c_{2}Z_{2}+c_{3}Z_{3}), \end{equation*} there is no pole in $c_{3}$. This means that we can write the two-point vertex as a $\dbar$-exact form: \be{2pt3} V(1,2)=\dbar\left(\frac{1}{2\pi i}\oint\limits_{\M_{\R}\times(\P^1)^2}\D^{3}Z_{A}\wedge\D^{3}Z_{B}\,\bar{\delta}^{2}_{0,0,-4}(Z_{1},Z_{B},Z_{A})\bar{\delta}^{3}_{0,-4}(Z_{2},Z_{B})\right), \ee since $\D^{3}Z_{A}\wedge\D^{3}Z_{B}=0$ on the support of $\bar{\delta}^{3}_{0,-4}(Z_{B}, Z_{A})$. This immediately indicates that the two-point vertex cannot enter the Feynman diagram calculus when one of its legs is an external particle, since in this case we could integrate by parts to get a $\dbar$-operator acting on an external wavefunction, which lives in cohomology and therefore gives zero. Unfortunately, it is not so obvious that its contribution vanishes when inserted on an internal leg, since in that case integration by parts moves the $\dbar$-operator onto a propagator rather than a cohomology class. More explicitly, if we consider the two-point vertex inserted between two twistor propagators connecting lines spanned by $(i,j)$ and $(k,l)$, this gives: \begin{equation*} \int\limits_{\M_{\R}\times(\P^{1})^{2}}\D^{3}Z_{A}\D^{3}Z_{B}\,\bar{\delta}^{1}_{0,0,0,-4}(i,j,*,Z_{A})\;\bar{\delta}^{1}_{0,0,0,-4}(k,l,*,Z_{B}). 
\end{equation*} We can re-write these delta functions as \begin{equation*} \bar{\delta}^{1}_{0,0,0,-4}(i,j,*,Z_{A})=\int_{\C^{2}}\frac{\d^{2}t}{t_{i}t_{j}}\;\bar{\delta}^{1}_{-4,0}(\lambda_{A},t_{i}\lambda_{i}+t_{j}\lambda_{j})\;\bar{\delta}^{2}(\mu_{A}+\iota+ t_{i}\mu_{i}+t_{j}\mu_{j}), \end{equation*} and break explicit conformal invariance by setting $\D^{3}Z_{A}\D^{3}Z_{B}=\d^{4}x\D\lambda_{A}\D\lambda_{B}\la A\;B\ra^{2}$. Hence, the internal two-point contribution can be re-written as: \begin{multline*} \int \d^{4}x\;\D\lambda_{A}\;\D\lambda_{B}\la A\;B\ra^{2}\frac{\d^{4}t}{t_{i}t_{j}t_{k}t_{l}}\bar{\delta}^{1}_{-4,0}(\lambda_{A},t_{i}\lambda_{i}+t_{j}\lambda_{j}) \bar{\delta}^{1}_{-4,0}(\lambda_{B},t_{k}\lambda_{k}+t_{l}\lambda_{l})\\ \bar{\delta}^{2}(\mu_{A}+\iota+ t_{i}\mu_{i}+t_{j}\mu_{j})\bar{\delta}^{2}(\mu_{B}+\iota+ t_{k}\mu_{k}+t_{l}\mu_{l}). \end{multline*} A lengthy calculation allows us to perform all the parameter and $\P^{1}$ integrations against the delta functions, and several applications of the Schouten identity leave the result: \be{Kermit} \int_{\M_{\R}}\frac{[\iota|(y-x)|(z-x)|\iota]^{2}\;\la i\;j\ra\la k\;l\ra }{[\iota|(y-x)|i\ra [\iota|(y-x)|j\ra [\iota|(z-x)|k\ra [\iota|(z-x)|l\ra}\frac{\d^{4}x}{(y-x)^{2}(z-x)^{2}}, \ee where $y$ corresponds to the line $(i,j)\subset\PT$ and $z$ corresponds to $(k,l)$. The integrand of \eqref{Kermit} is exactly the same as that arising in the computation of the so-called `Kermit' diagrams in the momentum twistor MHV formalism \cite{Bullimore:2010pj, Lipstein:2012vs, Lipstein:2013}, where it plays a non-vanishing role in the 1-loop MHV integrand. If such contributions are to vanish (as we know they must from our momentum space arguments), the crucial difference must manifest itself at the level of the real contour $\M_{\R}$ which is chosen.
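The Schouten identity invoked in this calculation is simply the statement that any three 2-spinors are linearly dependent: $\la a\,b\ra\la c\,d\ra+\la a\,c\ra\la d\,b\ra+\la a\,d\ra\la b\,c\ra=0$. A standalone numerical sanity check (illustrative only; the function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def ang(a, b):
    # angle bracket <a b> = a_1 b_2 - a_2 b_1, the SL(2)-invariant pairing
    return a[0] * b[1] - a[1] * b[0]

# four generic 2-spinors
a, b, c, d = rng.standard_normal((4, 2))

# Schouten identity: <a b><c d> + <a c><d b> + <a d><b c> = 0
lhs = ang(a, b) * ang(c, d) + ang(a, c) * ang(d, b) + ang(a, d) * ang(b, c)
assert abs(lhs) < 1e-12
```

The identity holds for any four spinors because the last three are expressible in the basis furnished by any two independent ones.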
In other words, we demand that $\M_{\R}$ be chosen such that \eqref{Kermit} vanishes, while in the momentum twistor formalism, a different real contour must be chosen. Finally, we present a few additional expressions for the twistor two-point vertex which are simpler than \eqref{2pt3}, but require an explicit choice of space-time signature. If we choose Euclidean signature, then twistor space becomes a $\P^{1}$-bundle over $\M$, and we can perform the $\D^{3}Z_{A}$ integral over the whole of twistor space, leaving \be{2pt4} V(1,2)=\int_{X_{1}}\D^{3}Z_{B}\;\bar{\delta}^{3}_{0,-4}(Z_{2},Z_{B}), \ee where $X_{1}\cong\P^{1}$ is the Euclidean line in $\PT$ containing $Z_{1}$ (i.e., $X_{1}=Z_{1}\wedge\hat{Z}_{1}$). This means that we can parametrize $Z_{B}=\hat{Z}_{1}+tZ_{1}$, and integrate \begin{multline}\label{2pt5} V(1,2)=(1,\hat{1},\d\hat{Z}_{1},\d\hat{Z}_{1})\int_{\C^2}t^{2}\;\d s\;\d t\;\bar{\delta}^{4}(Z_{2}+sZ_{1}+t\hat{Z}_{1}) \\ =(1,\hat{1},\d\hat{Z}_{1},\d\hat{Z}_{1})\bar{\delta}^{2}_{0,-1,-3}(2,1,\hat{1}). \end{multline} Although we will not use this form of the two-point vertex in our scattering amplitude calculations, it demonstrates that all residual integrations in the MHV vertices of the theory can be performed explicitly if one is willing to make a choice of space-time signature. \subsubsection{Tree-level amplitudes} Having determined the CSW gauge-fixed Feynman rules of the twistor action, we will now compute the full tree-level S-matrix of $\cN=4$ SYM in twistor space. Using the twistor kernel formulation, we will see that generic diagram contributions to an $n$-point N$^k$MHV tree amplitude can be computed algebraically (where generic means $n\gg k$). Non-generic diagrams will fall into two classes: boundary diagrams and boundary-boundary diagrams. In all cases, we will see that the twistor MHV formalism provides an efficient calculation at tree-level.
The classification of Feynman diagrams into generic, boundary, and boundary-boundary is essentially geometric from the twistor point of view. In twistor space, an MHV vertex corresponds to a line, with the legs of the vertex given by points on this linearly embedded $\P^1$. The twistor propagator forces a marked point on one line to be collinear with the reference twistor $Z_{*}$ and a marked point on another line. For any two lines in general position, there is a unique line connecting these two marked points which passes through $Z_{*}$; hence any diagram contributing to an N$^k$MHV tree amplitude will contain $k+1$ MHV vertices/lines and $k$ propagators. \emph{Generic} diagrams will be those with no adjacent propagator insertions on any of the vertices. \emph{Boundary} diagrams are those which have adjacent propagator insertions on at least one vertex, but have at least two external particles attached to all vertices. This last condition means that after propagators are integrated out, a line in twistor space can still be associated to each vertex using the external legs. \emph{Boundary-boundary} diagrams have adjacent propagator insertions on a vertex with fewer than two external legs; in this case not all integrals can be performed algebraically without making some choices. To illustrate how calculations proceed in the twistor framework, we begin by considering the NMHV tree amplitude, since there are only generic diagrams which contribute here. \subsubsection*{\textit{Example: NMHV tree}} \begin{figure} \centering \includegraphics[width=5.5 in, height=1.5 in]{TNMHV.pdf}\caption{\textit{Twistor support of an NMHV tree diagram}}\label{TNMHV} \end{figure} For an $n$-point tree-level NMHV amplitude, the only diagrams which contribute are those with two vertices joined by a single propagator, as illustrated in Figure \ref{TNMHV}.
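The uniqueness of the transversal used in this classification can be made concrete: the transversal through $Z_{*}$ meeting two generic lines is the intersection of the plane spanned by $Z_{*}$ and the first line with the plane spanned by $Z_{*}$ and the second line. Below is a numerical sketch of this fact for generic bosonic twistors (the intersection formula is standard projective geometry; the variable names are ours, not notation from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def det4(a, b, c, d):
    # four-bracket (a,b,c,d): determinant of four bosonic twistors stacked as rows
    return np.linalg.det(np.stack([a, b, c, d]))

# two generic lines in PT spanned by (A,B) and (C,D), and a reference twistor Zs
A, B, C, D, Zs = rng.standard_normal((5, 4))

# the transversal through Zs meets line (A,B) at P1 and line (C,D) at P2, where
# P1 is the line (A,B) intersected with the plane (Zs,C,D), and similarly for P2
P1 = A * det4(B, Zs, C, D) - B * det4(A, Zs, C, D)
P2 = C * det4(D, Zs, A, B) - D * det4(C, Zs, A, B)

# Zs, P1 and P2 are collinear: any four-bracket containing all three vanishes
W = rng.standard_normal(4)
assert abs(det4(Zs, P1, P2, W)) < 1e-6
# and P1 really does lie on the line (A,B)
assert abs(det4(A, B, P1, W)) < 1e-6
```

Both planes contain $Z_{*}$, so their intersection is a line through $Z_{*}$ meeting both original lines; this is the geometry underlying the counting of $k+1$ lines and $k$ propagators above.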
Our discussion of the two-point vertex indicates that every possible diagram in this situation will be generic (i.e., at least two external particles on each vertex). The twistor space Feynman rules therefore indicate that the contribution from such a diagram is given by: \be{NMHV1} \int_{\PT^{2}}\D^{3|4}Z\;\D^{3|4}Z'\;V(j+1,\ldots,i,Z)\;\bar{\delta}^{2|4}(Z,*,Z')\;V(Z',i+1,\ldots,j). \ee We can use the inverse soft limit to write this as \begin{multline}\label{NMHV2} V(j+1,\ldots,i)\;V(i+1,\ldots,j) \times \\ \int_{\PT^{2}}\D^{3|4}Z\;\D^{3|4}Z'\;\bar{\delta}^{2|4}(i,j+1,Z)\;\bar{\delta}^{2|4}(Z,*,Z')\;\bar{\delta}^{2|4}(j, i+1,Z') \\ =V(j+1,\ldots,i)\;V(i+1,\ldots,j)\;[i,j+1,*,i+1,j], \end{multline} with the last line following by lemma \ref{dfprop2}. Hence, these generic diagrams can be evaluated algebraically against the delta functions on twistor space, and correspond to the two lines (each remaining vertex factor) together with their unique transversal through the reference twistor (the R-invariant). The NMHV tree amplitude on twistor space is then given by a sum over the contributing diagrams, which is equivalent to \be{NMHV3} A^{0}_{n,1}=\sum_{i<j}V(j+1,\ldots,i)\;V(i+1,\ldots,j)\;[i,j+1,*,i+1,j]. \ee Since each vertex is conformally invariant, and the R-invariant is the standard invariant of the superconformal group, this twistorial form of the amplitude manifests the superconformal symmetry of $\cN=4$ SYM, up to the choice of reference twistor $Z_{*}$. \subsubsection*{\textit{Generic diagrams}} The NMHV calculation above extends directly to each propagator of a generic N$^k$MHV diagram, where there are no adjacent propagator insertions, as depicted in Figure \ref{PropR}. Using the inverse soft limit, we can strip off a $\bar{\delta}^{2|4}$ from each vertex leaving MHV vertices which no longer depend on the propagator insertion points $Z_{1}$, $Z_{2}$. 
Hence, the propagator leads to an R-invariant factor: \be{generic} \int\D^{3|4}Z_{1}\D^{3|4}Z_{2}\;\bar{\delta}^{2|4}(\alpha,\beta,Z_{1})\;\bar{\delta}^{2|4}(Z_{1},*,Z_{2})\;\bar{\delta}^{2|4}(\gamma,\kappa,Z_{2})=[\alpha,\beta,*,\gamma,\kappa]. \ee Here, $\alpha$ and $\beta$ are the two nearest particles on the left-hand side of the propagator insertion, and $\gamma$ and $\kappa$ are the nearest particles on the right-hand side of the insertion. \begin{figure} \centering \includegraphics[width=2 in, height= 1 in]{PropR.pdf}\caption{\textit{Propagator contributions for generic diagrams}}\label{PropR} \end{figure} When $n\gg k$, we can see that these sorts of diagrams will dominate the contributions to the tree-level N$^k$MHV amplitude. Proceeding inductively, a tree-level generic N$^k$MHV diagram gives a product of $k$ R-invariants, one for each propagator depending on $Z_{*}$ and each adjacent external twistor. These are multiplied by the $k+1$ MHV vertices depending only on the external particles. \subsubsection*{\textit{Boundary diagrams}} For boundary terms, there are some vertices which have adjacent propagator insertions. The resulting formulae are similar to the generic case: we obtain a product of $k+1$ MHV vertices (one for each vertex containing only the external particles at that vertex) and $k$ R-invariants (one for each propagator). However, because of adjacent propagator insertions, some of the entries in the R-invariants are now shifted. The rule for the shifts can be obtained by studying each end of the propagator separately; to give the most general case, we compute the shifts at a vertex with $k$ adjacent propagators, as in Figure \ref{kboundary}. As in the generic case, we can decompose the central vertex into \begin{equation*} V(i_{1},\ldots,i_{2})\;\bar{\delta}^{2|4}(i_{2},[2k-1],i_{1})\;\prod_{j=1}^{k-1} \bar{\delta}^{2|4}(i_{2},[2j-1],[2j+1]).
\end{equation*} Clearly, we have made a choice by taking this form of the decomposition, both with respect to the overall orientation of the diagram and to each inverse soft limit. Other choices will yield equivalent final answers upon utilizing cyclical identities. \begin{figure} \centering \includegraphics[width=3.5 in, height=2 in]{kboundary.pdf}\caption{\textit{N$^k$MHV boundary term with $k$ adjacent propagators}}\label{kboundary} \end{figure} The factor of $V(i_{1},\ldots, i_{2})$ will be left as part of our final answer, but we want to use the delta functions to integrate out the $Z_{[2j-1]}$ corresponding to propagator insertions. The relevant integrals are: \be{adjacent-props} \int \prod_{j=1}^{k}\D^{3|4}Z_{[2j-1]} \; \bar{\delta}^{2|4}([2j],*,[2j-1])\;\bar{\delta}^{2|4}(i_{2},[2j-1],[2j+1]), \ee where $Z_{[2k+1]}=Z_{i_1}$. We can proceed inductively using the fact that $Z_{[2j]}$ must lie on the line $(c_{j},d_{j})$ in twistor space. Indeed, performing the $\D^{3|4}Z_{[2k-1]}$ integral leaves $\bar{\delta}^{1|4}([2k],*,i_{2},i_{1})$, which forces its three arguments to be co-planar. But since $Z_{[2k-1]}\in (i_{1},i_{2})$ already, this indicates that we must have $Z_{[2k-1]}=(i_{1},i_{2})\cap(*,c_{k},d_{k})$. In this fashion, we easily deduce that \eqref{adjacent-props} is equal to: \be{adjp2} \prod_{j=1}^{k}\bar{\delta}^{1|4}([2j],*,i_{2},[2j+1]), \qquad Z_{[2j-1]}=(i_{1},i_{2})\cap(*,c_{j},d_{j}). \ee Upon connecting with the propagator legs in the $(c_{j},d_{j})$ vertices, we obtain a product of R-invariants. In other words, the total contribution for the diagram in Figure \ref{kboundary} reads: \be{adjp3} V(i_{1},\ldots,i_{2})\prod_{j=1}^{k}V(c_{j}\ldots,d_{j})\;[[2j-1],i_{2},*,c_{j},d_{j}]. \ee This immediately leads to a general rule for computing R-invariant contributions in both generic and boundary diagrams. 
\begin{itemize} \item Each MHV vertex in a diagram contributes a factor of the MHV tree amplitude that depends only on the external legs at that vertex. \item Each propagator contributes a factor $[\widehat{i}_{1},i_{2},*,\widehat{j}_{1},j_{2}]$, where $i_{1}$, $i_{2}$ are the external particles nearest to one side of the propagator insertion and $j_{1}$, $j_{2}$ are nearest to the opposite side, with $i_{1}<i_{2}$ and $j_{1}<j_{2}$ in the cyclic ordering. Let $p$ be the propagator insertion point on a vertex; then \be{shiftrule1} Z_{\widehat{i}_1}=\left\{ \begin{array}{cl} Z_{i_1} & \mbox{if } p\;\mbox{is adjacent to } i_{1} \\ (i_{1},i_{2})\cap(*,c,d) & \mbox{if } p\;\mbox{is adjacent to the propagator} \\ & \mbox{connecting to the line } (c,d). \end{array} \right. \ee The rule for $\widehat{j}_{1}$ follows by taking $i\leftrightarrow j$. \end{itemize} \subsubsection*{\textit{Boundary-boundary diagrams}} The final class of potential MHV diagrams in twistor space are those in which some vertices have fewer than two external legs. In this case, the prescription of \eqref{shiftrule1} breaks down, as there is no line $(i_{1},i_{2})$ to use in the definition of the shifts. See Figure \ref{boundary} for simple examples of such diagrams; the worst case is when there are \emph{no} external legs on the vertex in question and the simplest example of such a diagram is the N$^3$MHV `cartwheel' diagram. 
\begin{figure} \centering \includegraphics[width=5 in, height=1.5 in]{boundary.pdf}\caption{\textit{Boundary-boundary terms at N$^2$MHV and N$^3$MHV}}\label{boundary} \end{figure} Using our standard techniques, the cartwheel can be reduced to: \begin{multline*} V(j+1,\ldots ,i_{1})V(i_{1}+1,\ldots ,i_{2})V(i_{2}+1,\ldots ,j) \times \\ \int\D^{3|4}Z_{[3]}\D^{3|4}Z_{[5]}\bar{\delta}^{1|4}(j+1,i_{1},\rf, [5])\;\bar{\delta}^{1|4}(i_{2}+1,j,\rf, [3])\;V([3],[5])\;\left[i_{1}+1,i_{2},\rf, [3],[5]\right], \end{multline*} where $V(\cdot, \cdot)$ is the two-point MHV amplitude given by \eqref{2pt1}. Clearly, we cannot perform the two remaining twistor integrals without specifying additional constraints. The case where there is a single external particle leaves one unresolved integral. Note that although we cannot reduce boundary-boundary terms to a simple expression in terms of shifted twistors, they are still fully described by the twistorial MHV formalism. It is possible to reduce these further using the remaining delta functions, but it seems to be impossible to obtain an expression built only out of R-invariants and MHV vertices. However, with a choice of real contour these remaining integrals could be performed (and do not introduce divergences); this would simply entail the introduction of new signature-dependent machinery such as the form of the two-point vertex presented in \eqref{2pt5}. A full N$^k$MHV tree amplitude is computed by summing the contributions for all generic, boundary, and boundary-boundary diagrams for the given specification of external particles and MHV degree. We conclude our discussion of the tree-level amplitudes with a more non-trivial example. \subsubsection*{\textit{Example: N$^2$MHV tree}} N$^2$MHV tree amplitudes provide the simplest example where all three classes of diagram contribute. The twistor space support of the generic diagrams is illustrated in Figure \ref{TN2MHV}.
Applying our usual rules gives a contribution from all generic diagrams of the form: \begin{multline}\label{N2Gen} A^{\mathrm{gen}}_{n,2}=\sum_{i_{1}+1<i_{2}<j_{1}-1<j_{2}-2}[i_{1},j_{2}+1,*,i_{1}+1,j_{2}]\;[i_{2},j_{1}+1,*,i_{2}+1,j_{1}] \\ \times V(j_{2}+1,\ldots,i_{1})\;V(i_{1}+1,\ldots,i_{2},j_{1}+1,\ldots,j_{2})\;V(j_{1},\ldots,i_{2}+1). \end{multline} The R-invariants are obtained by integrating out the internal twistors in the usual algebraic fashion. \begin{figure} \centering \includegraphics[width=6 in, height=1.5 in]{TN2MHV.pdf}\caption{\textit{Twistor support of a generic N$^{2}$MHV tree diagram}}\label{TN2MHV} \end{figure} A boundary diagram, on the other hand, is one in which the propagator insertions are adjacent on the middle vertex (see Figure \ref{TN2C}). In this case we must apply the rules given above for assigning R-invariants to the propagators, since some of the entries must be shifted. Performing the inverse soft limit decomposition with respect to $Z_{[3]}$ and then $Z_{[2]}$, we obtain: \begin{multline}\label{N2B} A^{\mathrm{bound}}_{n,2}=\sum_{i+1<j_{1}<j_{2}-1}[i,j_{2}+1,*,j_1+1,j_{2}]\; [\widehat{j_2},j_1+1,\rf,i+1,j_1] \\ \times V(j_{2}+1,\ldots, i)\; V(j_1+1,\ldots,j_2)\; V(i+1,\ldots,j_1), \end{multline} where \begin{equation*} Z_{\widehat{j_2}}= (j_2,j_1+1)\cap (*,i,j_2+1)\, . \end{equation*} An equivalent shifted contribution could be defined by taking the inverse soft limit with respect to different internal twistors. \begin{figure} \centering \includegraphics[width=6 in, height=1.5 in]{TN2C.pdf}\caption{\textit{Twistor support of a boundary N$^{2}$MHV tree diagram}}\label{TN2C} \end{figure} Finally, we must account for the boundary-boundary contributions. Our discussion of the two point vertex narrows such diagrams down to those with a single external particle on the middle vertex (an example is given by the first diagram of Figure \ref{boundary}). 
For such a diagram, we obtain the contribution \begin{multline}\label{N2BB} A^{\mathrm{bb}}_{n,2}=\sum_{j+1<i<j-2} V(j+1,\ldots , i)\;V(i+2,\ldots ,j) \\ \times \int \D^{3|4}Z_{[2]} \; V([2],i+1)\; [i+2,j,* ,i+1,[2] ]\;\bar{\delta}^{1|4}(j+1,i,*,[2])\, . \end{multline} As discussed before, the remaining integral could be performed in various ways, for instance by introducing Euclidean reality conditions. The full N$^{2}$MHV amplitude is a sum over generic, boundary and boundary-boundary diagrams using \eqref{N2Gen}, \eqref{N2B} and \eqref{N2BB}: \begin{equation*} A^{0}_{n,2}=A^{\mathrm{gen}}_{n,2}+A^{\mathrm{bound}}_{n,2}+A^{\mathrm{bb}}_{n,2}. \end{equation*} \subsubsection{Loop-level amplitudes} Clearly, the twistor action provides an efficient mechanism for calculating tree-level scattering amplitudes in $\cN=4$ SYM via the MHV formalism. However, if we truly want to think of the twistor action as defining a quantum field theory on twistor space, then we must be able to describe computations at all loop orders in perturbation theory. While $\cN=4$ SYM is UV finite, its generic scattering amplitudes have IR divergences at loop level, which require regularization. We begin by illustrating that our twistorial formalism extends directly from the tree-level setting in the case of those loop diagrams which are IR finite. Then, we will discuss how IR divergences appear for a generic amplitude and potential regularization strategies on twistor space. \subsubsection*{\textit{Finite examples}} Using the MHV formalism for $\cN=4$ SYM, one can easily identify loop-level diagrams which are finite. Although such diagrams are certainly not generic, they nevertheless provide an interesting example of how our twistor methods enable simple calculations. For instance, in the planar sector at $1$-loop NMHV, the triangle diagrams of the form illustrated in Figure \ref{LNMHV} are finite.
On twistor space, this corresponds to being able to perform all integrals in a well-defined and algebraic fashion. \begin{figure} \centering \includegraphics[width=4 in, height=2.25 in]{LNMHV.pdf}\caption{\textit{Triangular 1-loop NMHV diagram}}\label{LNMHV} \end{figure} The contribution coming from a diagram of this form can be computed by first performing the integrals in $Z_{[2]}$, $Z_{[4]}$, and $Z_{[6]}$ trivially, and then using the boundary diagram rule to do the remaining integrals in terms of shifted twistors. The result is \begin{multline}\label{LNMHV1} V(i+1,\ldots, j)\;V(j+1,\ldots, k)\;V(k+1,\ldots, i) \\ \times [k+1,i,*,\widehat{i},i+1]\;[i+1,j,*,\widehat{j},j+1]\;[j+1,k,*,\widehat{k},k+1], \end{multline} where the shifted twistors are: \begin{equation*} Z_{\widehat{i}}=(i+1,j)\cap(j+1,k,*),\quad Z_{\widehat{j}}=(j+1,k)\cap(k+1,i,*), \quad Z_{\widehat{k}}=(k+1,i)\cap(i+1,j,*). \end{equation*} Unpacking the definition of the R-invariants, it is easy to see that \eqref{LNMHV1} is indeed finite. An even simpler example is available if we allow ourselves to consider the non-planar sector: a (strictly) non-planar 1-loop MHV diagram is not only finite, but its twistor space support remains planar, as illustrated in Figure \ref{NPMHV}. In this case, all the integrals can be performed as in a generic diagram, and we are left with a contribution of the form: \be{NPMHV1} V(i_{1},j_{1},\ldots, i_{4},j_{4},\ldots)\;V(i_{2},j_{2},\ldots, i_{3},j_{3},\ldots )\;[i_{1},j_{1},*,i_{2},j_{2}]\;[i_{3},j_{3},*,i_{4},j_{4}]. \ee \begin{figure} \centering \includegraphics[width=3 in, height=2.25 in]{NPMHV.pdf}\caption{\textit{Strictly non-planar 1-loop MHV diagram in twistor space}}\label{NPMHV} \end{figure} Although the answer is simple, the geometry of this situation is still quite important. The twistor propagator forces insertion points on each vertex to lie on the transversal through the reference point $Z_{*}$. 
Since the transversal between two lines and a point in $\PT$ is unique, this forces the two insertion points on each vertex to be equal. In the strictly non-planar setting this is finite because the external states separate the propagator insertions in the color ordering; however, when propagator insertions are adjacent, a pole arises from the Parke-Taylor denominator. As we shall see, in the planar 1-loop MHV this leads to a double divergence. \subsubsection*{\textit{Generic loop diagrams}} Generic loop diagrams in $\cN=4$ SYM will contain IR divergences arising when internal momenta become collinear with the external states. The simplest example of this is captured at 1-loop by the planar MHV amplitude; see Figure \ref{TLoop} for the twistor support of a generic diagram contributing to this amplitude. For such a diagram, the geometry in twistor space forces the propagator insertions on each line to coincide: $Z_{[1]}=Z_{[4]}$ and $Z_{[2]}=Z_{[3]}$. \begin{figure} \centering \includegraphics[width=5.5 in, height=2 in]{TLoop.pdf}\caption{\textit{Twistor support of the 1-loop MHV amplitude}}\label{TLoop} \end{figure} We can easily evaluate all the integrals contributing to this diagram; this results in some shifted twistors due to the adjacent propagator insertions: \be{pMHV1-loop} V(i+1, \ldots, j)\;V(j+1,\ldots, i)\;[i, j+1, *,\widehat{j},i+1]\;[j,i+1,*,\widehat{i},j+1], \ee with shifts given by the usual rule \begin{equation*} \widehat{i}=(i,j+1)\cap(j,i+1,*), \qquad \widehat{j}=(j,i+1)\cap(i,j+1,*). \end{equation*} Recalling the definition of the R-invariant \eqref{DF5}, we can see a single divergence coming from each propagator factor since \begin{equation*} [j,i+1,*,\widehat{i},j+1]=\frac{\delta^{0|4}\left((j,i+1,*,\widehat{i})\chi_{j+1}+\mathrm{cyclic}\right)}{(j,i+1,*,\widehat{i})\cdots(j+1,j,i+1,*)}. \end{equation*} The shifts indicate that $Z_{\widehat{i}}$ is coplanar with $Z_{j}$, $Z_{i+1}$, and $Z_{*}$, so $(j,i+1,*,\widehat{i})=0$.
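The vanishing of $(j,i+1,*,\widehat{i})$ can be checked directly from the standard intersection formula for a line and a plane, $Z_{\widehat{i}}=Z_{i}\,(Z_{j+1},Z_{j},Z_{i+1},Z_{*})-Z_{j+1}\,(Z_{i},Z_{j},Z_{i+1},Z_{*})$: substituting into the four-bracket produces two terms which cancel identically. A quick numerical check with generic bosonic twistors (a sketch in our own notation):

```python
import numpy as np

rng = np.random.default_rng(2)

def det4(a, b, c, d):
    # four-bracket (a,b,c,d): determinant of four bosonic twistors stacked as rows
    return np.linalg.det(np.stack([a, b, c, d]))

# generic bosonic twistors playing the roles of Z_i, Z_{j+1}, Z_j, Z_{i+1}, Z_*
Zi, Zjp1, Zj, Zip1, Zs = rng.standard_normal((5, 4))

# shifted twistor: intersection of the line (i,j+1) with the plane (j,i+1,*)
Zhat = Zi * det4(Zjp1, Zj, Zip1, Zs) - Zjp1 * det4(Zi, Zj, Zip1, Zs)

# the bracket (j,i+1,*,ihat) vanishes, i.e. the divergent denominator factor
assert abs(det4(Zj, Zip1, Zs, Zhat)) < 1e-6
# and Zhat indeed lies on the line (i,j+1)
assert abs(det4(Zi, Zjp1, Zhat, rng.standard_normal(4))) < 1e-6
```

The same check applies verbatim to the other shift, exchanging the roles of $i$ and $j$.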
Similarly, we get a divergence in the other R-invariant from $(i,j+1,*,\widehat{j})=0$. However, we also get a numerator factor of zero in \eqref{pMHV1-loop} from the fermionic portions of the R-invariants. It is easy to see that the two $\delta^{0|4}$s have proportional arguments, so their product must vanish: for any Grassmann-odd $\xi$, $\delta^{0|4}(\xi)\,\delta^{0|4}(c\,\xi)\propto\prod_{a=1}^{4}(\xi^{a})^{2}=0$. This leaves us with a `$0/0$' type expression for the diagram in Figure \ref{TLoop}, which clearly requires a regularization scheme. Indeed, at 1-loop it is known that the only true IR divergences should come from those diagrams in which one of the vertices is a 3-point vertex (i.e., only two external particles) \cite{Brandhuber:2004yw, Bena:2004xu, Lipstein:2013}. Hence, the required regularization on twistor space must first treat the fermionic contributions carefully and then implement a correct IR regularization mechanism. It is easy to see that at higher loops and MHV degree, the divergence structure we have observed here will persist. To say that the twistor action for $\cN=4$ SYM is well-defined as a \emph{quantum} field theory, we must be able to give (at least in principle) a regularization scheme on twistor space. Of course, the most widely used regularization scheme is dimensional regularization; this is particularly well-adapted to traditional Feynman integrals which appear in space-time calculations, although it is manifestly unphysical, breaks dual superconformal symmetry, and obscures the underlying integrability of the theory. Furthermore, dimensional regularization seems impossible to implement on twistor space, since twistor space itself is not well-defined for a $4-2\varepsilon$-dimensional space-time. Motivated by the AdS/CFT correspondence, Alday, Henn, Plefka and Schuster proposed a `mass regularization' scheme for scattering amplitudes that preserves dual superconformal symmetry \cite{Alday:2009zm}.
In this scheme, one gives some of the scalars in $\cN=4$ SYM a vacuum expectation value (VEV) by moving out onto the Coulomb branch in a particular direction; at loop level this keeps external particles as well as totally internal loop lines massless, while particles running around the external edges of the loops acquire masses (see \cite{Henn:2011xk} for a review). These masses then act as a regulator for the theory. Building on the earlier findings of \cite{Craig:2011ws}, Kiermaier proposed a \emph{massive MHV formalism} which demonstrates that the MHV formalism at the origin of the moduli space extends to the Coulomb branch of $\cN=4$ SYM \cite{Kiermaier:2011cr}. In accordance with the scalar VEV structure associated to the Coulomb branch, there are three types of vertices which serve the purpose of the single MHV vertex in the original formalism, and the massless scalar propagator is replaced by a massive one. This massive MHV formalism has been shown to be correct using recursive arguments \cite{Elvang:2011ub}. In Appendix \ref{Appendix2}, we show that the Coulomb branch of $\cN=4$ SYM can be accessed on twistor space, leading to a twistor action derivation of Kiermaier's massive MHV formalism. Combined with the mass regularization scheme, this provides a mechanism for regularizing IR divergences on twistor space. Unfortunately, the Coulomb branch MHV rules on twistor space are not as elegant as those at the origin of the moduli space.\footnote{In particular, the massive propagator on twistor space takes the form of an infinite series. Upon translating this to space-time, we see that it can be re-summed to $(p^{2}-m^{2})^{-1}$, but there does not appear to be an elegant resummation procedure that is self-contained in twistor space.} So rather than study this formalism here, we simply record the following facts: \begin{itemize} \item There is a twistor action for the Coulomb branch of $\cN=4$ SYM.
\item Its Feynman rules in CSW gauge are equivalent to the massive MHV formalism of \cite{Kiermaier:2011cr}. \item This allows us (in principle) to implement the mass regularization scheme on twistor space. \end{itemize} The interested reader need only consult Appendix \ref{Appendix2} for proofs of these facts. An alternative is offered by the work of Lipstein and Mason \cite{Lipstein:2013}, which provides a mechanism for correctly regulating 1-loop Kermit integrals in \emph{momentum} twistor space. This formalism incorporates the Feynman $i\epsilon$-prescription and correctly regulates the integral, albeit in a non-trivial way. If one could adapt this methodology to the twistor space integrals here, it could provide another regularization mechanism (which is related to both dimensional and mass regularization) for computing loop amplitudes twistorially. Clearly, the incomplete picture of regularization for loop amplitudes is a shortcoming of the twistor action approach as presented here. Rather than dwell on this issue, let us instead consider how the twistor action can be used to study other interesting gauge theoretic observables. \section{Wilson Loops, Local Operators, and Correlation Functions} \label{Chapter4} In the study of any gauge theory, interesting physical observables include correlation functions of local operators and Wilson loops, and gauge theory on twistor space is no exception. In the previous section, we demonstrated that scattering amplitudes at tree-level and beyond could be computed efficiently using the twistor action for $\cN=4$ SYM; now we further explore the utility of the twistor action by studying gauge invariant local operators, null polygonal Wilson loops, and their expectation values in $\cN=4$ SYM on twistor space.
Recall that in a (bosonic) gauge theory, a Wilson loop is given by computing the trace of the holonomy of a connection 1-form $A$ around some closed path $\gamma$: \be{bWilsonloop} W_{R}[\gamma]=\tr_{R}\mathrm{Hol}[A,\gamma]=\tr_{R}\cP \exp\left(-\oint_{\gamma}A\right), \ee where $R$ is the representation of the gauge group in which the trace is taken, and $\cP$ is the `path-ordering' symbol. Besides forming a natural class of gauge invariant observables, Wilson loops arise in a wide variety of applications in both pure mathematics and physics. In $\cN=4$ SYM, these operators can be extended to compute the holonomy of the full $\cN=4$ superconnection: \be{sWilsonloop} W_{R}[\gamma]=\tr_{R}\cP\exp\left(-\oint_{\gamma}\CA\right)=\tr_{R}\cP\exp\left(-\oint_{\gamma}\Gamma_{AA'}\d x^{AA'}+\Gamma_{aA}\d\theta^{aA}\right), \ee where $\gamma\subset\M$ is now understood to be a curve in the full chiral superspace. Null polygonal Wilson loops (i.e., when the curve $\gamma$ is a null polygon $C\subset\M$) are of particular interest beyond belonging to this class of important operators. Motivated by the AdS/CFT correspondence, Alday and Maldacena first conjectured the duality between the expectation value of an $n$-cusp null polygonal Wilson loop in the fundamental representation of the gauge group and $n$-particle gluon scattering amplitudes by studying these objects in the strong coupling regime (i.e., using string theory in the $AdS_{5}\times S^{5}$ geometry near the boundary) \cite{Alday:2007hr}. In this picture, the gluon null momenta of the scattering process become the edges of the null polygon. This amplitude / Wilson loop duality can be understood as arising from a non-compact T-duality which maps the string scattering worldsheet on the AdS-boundary to a minimal surface with the null polygon as its boundary, and interchanges the superconformal and dual superconformal groups \cite{Berkovits:2008ic}.
From a purely gauge-theoretic point of view, this means that the Wilson loop lives in a dual affine Minkowski space, on which the dual superconformal group acts. Differences between points in this space correspond to momenta, and the momentum conservation condition is automatically encoded by the fact that the null polygon $C$ is closed. Since the original conjecture of Alday and Maldacena, a wide variety of studies have been performed at both strong and weak coupling which indicate that the duality should be true (c.f., \cite{Drummond:2007aua, Brandhuber:2007yx, Drummond:2007cf, Drummond:2007bm, Drummond:2008aq, Gorsky:2009dr}). For the fully supersymmetric Wilson loop of $\cN=4$ SYM \eqref{sWilsonloop}, performing explicit computations can be rather complicated due to the form of the superconnection $\CA$ (see Appendix \ref{Appendix1}), so proving general statements was difficult. Translating the supersymmetric Wilson loop to twistor space has provided an efficient means of checking the amplitude / Wilson loop duality for arbitrary MHV degree and loop order at the level of the integrand (for both the Wilson loop and scattering amplitudes) \cite{Mason:2010yk}. Furthermore, it has been shown that the twistor Wilson loop has the same singularity structure as scattering amplitudes; this essentially constitutes a twistor-theoretic proof of the original conjecture at the level of the integrand \cite{Bullimore:2011ni}. However, the duality between Wilson loops and other gauge-theoretic objects does not stop at scattering amplitudes. In this section, we study some conjectured correspondences between null polygonal Wilson loops and correlation functions of local operators. By working with these objects on twistor space, we not only obtain analytic proofs, but can also derive efficient calculational mechanisms, just as in the case of scattering amplitudes. 
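Before proceeding, it is worth making this kinematic dictionary explicit: if the cusps of $C$ are labelled by points $x_{1},\ldots,x_{n}$ of the dual space, the gluon momenta are identified with their differences, \begin{equation*} p_{i}=x_{i}-x_{i+1}, \qquad \sum_{i=1}^{n}p_{i}=\sum_{i=1}^{n}\left(x_{i}-x_{i+1}\right)=0, \end{equation*} with indices understood modulo $n$. The telescoping sum vanishes simply because the polygon closes, and each $p_{i}$ is null precisely because adjacent cusps are null separated.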
\subsection{Local Operators and Wilson Loops in Twistor Space} Gauge invariant local operators in $\cN=4$ SYM include the Konishi operator, the dilaton operator, and indeed any chiral primary operators. In this review, we restrict our attention to the `1/2-BPS' operators; these have a non-anomalous conformal dimension and do not require renormalization \cite{Alday:2010zy}. Later, we will be working at the level of the loop integrand; although the integrand of a correlation function is simply a rational function for \emph{any} choice of local operators, protected operators such as the 1/2-BPS operators allow us to more plausibly extend our claims to the full loop \emph{integral}. These 1/2-BPS operators are built from pairs of scalars: \be{BPS} \cO(x)=\cO_{abcd}(x)=\tr(\Phi_{ab}(x)\Phi_{cd}(x))-\frac{\epsilon_{abcd}}{12}\tr(\Phi^{2}(x)). \ee For an abelian gauge group, it is easy to see how to express $\cO$ in twistor space using the Penrose transform: \begin{multline*} \cO^{\U(1)}(x)=\int_{X\times X}\D\lambda\wedge\D\lambda'\wedge\phi_{ab}(\lambda)\wedge\phi_{cd}(\lambda') \\ -\frac{\epsilon_{abcd}}{12}\int_{X\times X}\D\lambda\wedge\D\lambda'\wedge\phi^{ef}(\lambda)\wedge\phi_{ef}(\lambda'), \end{multline*} where $\phi_{ab}(\lambda)$ denotes the pullback of $\phi_{ab}$ to the line $X$ charted by $\lambda$.
This can be naturally generalized to $\cN=4$ supersymmetry by using $\frac{\partial^{2}\cA}{\partial\chi^{2}}$ instead of $\phi_{ab}$: \begin{multline}\label{abelian} \cO^{\U(1)}(x,\theta)=\int_{X\times X}\D\lambda\wedge\D\lambda'\wedge\frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}(\lambda)\wedge\frac{\partial^{2}\cA}{\partial\chi^{c}\partial\chi^{d}}(\lambda') \\ -\frac{\epsilon_{abcd}}{12}\int_{X\times X}\D\lambda\wedge\D\lambda'\wedge\frac{\partial^{2}\cA}{\partial\chi_{e}\partial\chi_{f}}(\lambda)\wedge\frac{\partial^{2}\cA}{\partial\chi^{e}\partial\chi^{f}}(\lambda'), \end{multline} where \begin{equation*} \frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}=\phi_{ab}+\epsilon_{abcd}\chi^{c}\psi^{d}+\frac{1}{2!}\epsilon_{abcd}\chi^{c}\chi^{d}g. \end{equation*} Of course, for a non-abelian gauge group, the twistorial operator \eqref{abelian} is not well-defined: we cannot integrate $\phi_{ab}$ over $X$ because it takes values in the Lie algebra of the gauge group. What we need is a frame for $E\rightarrow\PT$ which provides a holomorphic trivialization of $E|_{X}$. Now, $E|_{X}$ is holomorphic (because $X$ has only one complex dimension) and topologically trivial by assumption (i.e., $c_{1}(E)=0$), so all that is required is a gauge transformation $\gamma$ which obeys: \begin{equation*} \gamma(\dbar+\cA)|_{X}\gamma^{-1}=\dbar|_{X}. \end{equation*} As it turns out, such a $\gamma$ can be found generically. Since $X$ is rational and $E|_{X}$ is topologically trivial and holomorphic, the Birkhoff-Grothendieck theorem tells us that \begin{equation*} E|_{X}\cong\bigoplus^{r}_{i=1}\cO(a_{i}), \qquad \sum_{i=1}^{r}a_{i}=0, \end{equation*} where $r=\mathrm{rank}\,E$. When $\cA=0$, all of the $a_{i}=0$ and we are just working on the trivial bundle $\cO^{\oplus r}$. As we work perturbatively around this trivial background, the holomorphic trivialization will continue to hold generically provided $\cA$ is sufficiently small.
Since $X$ is linearly embedded, the holomorphic trivialization given by $\gamma$ is unique. If we define \be{frame} U_{X}(\lambda,\lambda')=\gamma(x,\lambda)\gamma^{-1}(x,\lambda'), \ee then $U_{X}$ is formally a Green's function for $(\dbar+\cA)|_{X}$, and acts as \begin{equation*} U_{X}(\lambda,\lambda)=\mathbb{I}, \qquad U_{X}(\lambda,\lambda'): E|_{\lambda'}\rightarrow E|_{\lambda}. \end{equation*} Thus, it is natural to interpret $U_{X}$ as the twistor space parallel propagator for the gauge bundle $E$ along $X$. This allows us to write down an immediate non-abelian generalization of \eqref{abelian} for our 1/2-BPS operators: \begin{multline}\label{nabelian} \cO(x,\theta)=\int_{X\times X}\D\lambda\; \D\lambda'\; \tr\left[U_{X}(\lambda,\lambda')\frac{\partial^{2}\cA(\lambda')}{\partial\chi^{a}\partial\chi^{b}}U_{X}(\lambda',\lambda)\frac{\partial^{2}\cA(\lambda)}{\partial\chi^{c}\partial{\chi}^{d}}\right] \\ -\frac{\epsilon_{abcd}}{12}\int_{X\times X}\D\lambda\; \D\lambda'\; \tr\left[U_{X}(\lambda,\lambda')\frac{\partial^{2}\cA(\lambda')}{\partial\chi_{e}\partial{\chi}_{f}}U_{X}(\lambda',\lambda)\frac{\partial^{2}\cA(\lambda)}{\partial\chi^{e}\partial{\chi}^{f}}\right]. \end{multline} The bosonic portion of this operator is depicted in Figure \ref{operator}. The following lemma assures us that this supersymmetric operator is well-defined. \begin{figure} \centering \includegraphics[width=1.7 in, height=1 in]{nonAbPhi2.pdf}\caption{\textit{The twistor space form of the local space-time operator} $\tr\Phi^2(x)$, \textit{involving holomorphic Wilson lines; arrows indicate the flow of the color trace.}}\label{operator} \end{figure} \begin{lemma} $\cO(x,\theta)$ is a well-defined, gauge invariant operator on $\PT$, and corresponds to the chiral half of the 1/2-BPS multiplet of $\cN=4$ SYM. \end{lemma} \proof By \eqref{frame}, $U_{X}$ is the unique solution of \begin{equation}\label{gfunct} (\dbar+\cA)|_{X}U_{X}=0.
\end{equation} Since $\cA$ depends on $\theta$ only through $\chi^{a}=\theta^{aA}\lambda_{A}$, we can differentiate with respect to $\theta$ to get: \begin{equation*} (\dbar+\cA)|_{X}(\lambda^{A}\partial_{aA}U_{X})=0. \end{equation*} This implies that $U^{-1}_{X}(\lambda^{A}\partial_{aA}U_{X})$ is globally holomorphic on $X\cong\P^{1}$; by Liouville's theorem, we then have \begin{equation}\label{gfunct2} U^{-1}_{X}(\lambda^{A}\partial_{aA}U_{X})=\lambda^{A}\Gamma_{aA}(x,\theta). \end{equation} One can show that $\nabla_{aA}=\partial_{aA}+\Gamma_{aA}$ transforms as a connection and obeys the condition \begin{equation*} \left\{\nabla_{a(A},\nabla_{B)b}\right\}=0. \end{equation*} By lemma \ref{lemma: odd}, this means that $\Gamma_{aA}$ is the odd superconnection of $\cN=4$ SYM, with curvature given by $\cF_{ab}$. Using the fact that $U_{X}(\lambda,\lambda)=\mathbb{I}$, it follows that $\cF_{ab}=\partial^{A}_{[a}\Gamma_{b]A}$. By \eqref{gfunct}, we have that \begin{equation*} \int_{X}\frac{\la\lambda'' \lambda'\ra\;\D\lambda}{\la\lambda'' \lambda\ra \la\lambda \lambda' \ra}U_{X}(\lambda'',\lambda)(\dbar+\cA)|_{X}U_{X}(\lambda,\lambda')=0. \end{equation*} Noting that \begin{multline*} \int_{X}\frac{\la\lambda'' \lambda'\ra\;\D\lambda}{\la\lambda'' \lambda\ra \la\lambda \lambda' \ra}U_{X}(\lambda'',\lambda)\dbar U_{X}(\lambda,\lambda') \\ =-\int_{X}\dbar\left(\frac{\la\lambda'' \lambda'\ra\;\D\lambda}{\la\lambda'' \lambda\ra \la\lambda \lambda' \ra}\right)U_{X}(\lambda'',\lambda)U_{X}(\lambda,\lambda')=-U_{X}(\lambda'',\lambda'), \end{multline*} we can differentiate with respect to $\theta$ to obtain \begin{equation*} \frac{\partial U_{X}(\lambda'',\lambda')}{\partial\theta^{Aa}}=\int_{X}\frac{\la\lambda'' \lambda'\ra\;\D\lambda}{\la\lambda'' \lambda\ra \la\lambda \lambda' \ra}U_{X}(\lambda'',\lambda)\lambda_{A}\frac{\partial\cA(\lambda)}{\partial\chi^{a}}U_{X}(\lambda,\lambda'). 
\end{equation*} From \eqref{gfunct2}, we have \begin{equation*} \Gamma_{aA}=\frac{1}{\la\lambda'' \lambda'\ra}U_{X}^{-1}(\lambda'',\lambda')\lambda^{''A}\frac{\partial U_{X}(\lambda'',\lambda')}{\partial\theta^{Aa}} =\int_{X}\frac{\D\lambda}{\la\lambda \lambda' \ra}U_{X}(\lambda',\lambda)\frac{\partial\cA(\lambda)}{\partial\chi^{a}}U_{X}(\lambda,\lambda'). \end{equation*} This indicates that the fermionic curvature is given by \begin{equation*} \cF_{ab}=\partial^{A}_{[a}\Gamma_{b]A}=-\int_{X}\D\lambda\;U_{X}(\lambda',\lambda)\frac{\partial^{2}\cA(\lambda)}{\partial\chi^{a}\partial\chi^{b}}U_{X}(\lambda,\lambda'), \end{equation*} and hence that \begin{equation*} \cO(x,\theta)=\tr\left(\cF_{ab}\cF_{cd}\right)-\frac{\epsilon_{abcd}}{12}\tr\left(\cF^{2}\right). \end{equation*} Since $\cF_{ab}$ is a curvature of the $\cN=4$ superconnection, the operator $\cO(x,\theta)$ is manifestly gauge invariant on $\PT$. Finally, we can use the results of Appendix \ref{Appendix1} to expand the operator in powers of $\theta$: \begin{equation*} \cO(x,\theta)=\cO(x)+3\theta^{eA}\tr\left(\Phi_{ab}\Psi_{cdeA}\right)-\theta^{gA}\frac{\epsilon_{abcd}}{4}\tr\left(\bar{\Phi}^{ef}\Psi_{efgA}\right)+O(\theta^{2}), \end{equation*} which is equivalent to the 1/2-BPS supermultiplet with $\bar{\theta}=0$, as desired. $\Box$. \medskip This procedure can be duplicated for practically any choice of local operator in $\cN=4$ SYM, including those which are not protected (see \cite{Adamo:2011dq} for the Konishi operator). It turns out that $U_{X}$ also provides the definition of a null polygonal Wilson loop in twistor space \cite{Mason:2010yk, Bullimore:2011ni}. Recall that for $\cN=4$ SYM, the fully supersymmetric Wilson loop $W[C]$ is given by \eqref{sWilsonloop}: the trace of the holonomy of $\CA$ around a null polygon $C$ in $\M$ with $n$ cusps labelled by $(x_{i},\theta_{i})$ (fixing $R$ to be the fundamental representation for now). 
The basic twistor correspondence tells us that each cusp $x_{i}$ is equivalent to a line $X_{i}\cong\P^{1}$ in $\PT$; since these cusps are pairwise null-separated, $X_{i}$ intersects $X_{i-1}$ and $X_{i+1}$ in points $Z_{i-1}$ and $Z_{i}$ respectively. This translates the space-time null polygon into a nodal elliptic curve in twistor space. Likewise, the space-time superconnection $\CA$ is translated into the $(0,1)$-connection $\cA$ on $E\rightarrow\PT$. To compute the Wilson loop, we simply need to parallel transport $\cA$ around the nodal elliptic curve using $U_{X}$. This leads us to the definition: \be{WL} W[C]=\tr\;\mathrm{Hol}_{Z_{n}}[C]=\tr\left[ U_{X_n}(Z_{n},Z_{n-1})U_{X_{n-1}}(Z_{n-1},Z_{n-2})\cdots U_{X_1}(Z_{1},Z_{n})\right], \ee where we abuse notation by writing $C$ for both the space-time null polygon and the twistor nodal elliptic curve. We will also abbreviate coordinates in $\M$ by their bosonic part: $(x,\theta)$ will be written $x$. It has now been confirmed that \eqref{WL} coincides with the supersymmetric space-time Wilson loop of Caron-Huot \cite{CaronHuot:2010ek} up to terms proportional to the equations of motion \cite{Belitsky:2011zm}. While this is clearly a well-defined and gauge invariant object on twistor space, it can also be seen to be equivalent to the na\"ive holomorphic generalization of a \emph{real} Wilson loop given by \eqref{sWilsonloop}.
Indeed, since the holomorphic frame is a solution to \eqref{gfunct}, we can formally expand it as a Born series: \begin{multline} U_{X}(Z,Z')=\frac{1}{1+\dbar^{-1}|_{X}\cA}=\mathbb{I}+\sum_{k=1}^{\infty}(-1)^{k}\int_{X^{k}}\bigwedge_{i=1}^{k}\omega_{Z,Z'}(Z_{i})\wedge\cA(Z_{i}) \\ \equiv \cP\exp\left(-\int_{X}\omega_{Z,Z'}\wedge\cA\right), \end{multline} where $\omega_{Z,Z'}$ is a meromorphic 1-form on $X$ with simple poles at $Z$, $Z'$ and \begin{equation*} \bigwedge_{i=1}^{k}\omega_{Z,Z'}(Z_{i})=\frac{\la\lambda \lambda'\ra}{\la\lambda\lambda_{1}\ra\la\lambda_{1}\lambda_{2}\ra\cdots\la\lambda_{k}\lambda'\ra}\frac{\D\lambda_{1}}{2\pi i}\wedge\cdots\wedge\frac{\D\lambda_{k}}{2\pi i}. \end{equation*} The concatenation of these frames about the nodal curve $C$ then gives the rather aesthetically appealing identification: \be{WL2} W[C]=\tr\;\mathrm{Hol}_{Z_{n}}[C]=\tr\;\cP\exp\left(-\int_{C}\omega\wedge\cA\right), \ee where $\omega$ is now the meromorphic 1-form on $C$ with simple poles at each node $Z_{i}$. \subsection{Correlation Function / Wilson Loop Correspondence} The duality between scattering amplitudes and supersymmetric null polygonal Wilson loops received its first analytic proof using the twistor Wilson loop (at the level of the loop integrand) \cite{Bullimore:2011ni}, and this picture allows the computation of the all-loop integrand for scattering amplitudes in a remarkably efficient fashion \cite{Mason:2010yk} (see \cite{Adamo:2011pv} for a review emphasizing the role of the twistor action in these developments). The correspondence between null polygonal Wilson loops and other physically interesting observables does not stop here, though. In \cite{Eden:2010zz, Alday:2010zy, Eden:2010ce} it was conjectured that, in the limit where the insertion points become pairwise null separated, the ratio of certain $n$-point correlation functions in $\cN=4$ SYM is equal to the expectation value of a null polygonal Wilson loop in the adjoint representation.
More formally, if $\{\cO(x_{i})\}_{i=1,\ldots,n}$ are gauge invariant local operators in $\cN=4$ SYM, then this conjecture takes the form: \be{corrW} \lim_{x_{i,i+1}^{2}\rightarrow 0}\frac{\la \cO(x_{1})\ldots\cO(x_{n})\ra}{\la\cO(x_{1})\ldots\cO(x_{n})\ra^{\mathrm{tree}}}=\la W_{\mathrm{adj}}[C]\ra \xrightarrow{\mathrm{planar}\:\mathrm{limit}} \la W[C]\ra^{2}, \ee where $x^{2}_{i,j}=(x_{i}-x_{j})^{2}$, $C$ is the resulting null polygon with $n$ cusps, $W_{\mathrm{adj}}$ is the Wilson loop in the adjoint representation, and $W$ the Wilson loop in the fundamental representation. Difficult calculations in perturbation theory on space-time have confirmed this conjecture through examples \cite{Eden:2011yp, Eden:2011ku}, and it is also expected to hold at the level of the loop integrand \cite{Eden:2010ce}.\footnote{By the loop integrand we mean the sum over all Feynman diagrams at the given order in perturbation theory, without performing the integrals over the locations of the Lagrangian insertions corresponding to the loop variables. For the Wilson loop, the integrand is always well-defined, whereas for scattering amplitudes it is well-defined only in the planar limit \cite{ArkaniHamed:2010kv}.} Beyond its intrinsic interest as a conjecture about two interesting classes of observables in gauge theory, \eqref{corrW} also has practical implications. In \cite{Belitsky:2011zm}, it was conjectured that the correlation function / Wilson loop correspondence should actually be \emph{more} robust than the amplitudes / Wilson loop duality. Indeed, carefully considering the null limit on the left-hand side of \eqref{corrW} can be thought of as providing a regularization mechanism for the Wilson loop on the right-hand side \cite{Alday:2010zy}.
In some sense, this means that the proper strong-coupling tool for studying scattering amplitudes is actually the null limit of correlation functions, since these objects are well-behaved (i.e., lack the cusp divergences of the Wilson loop that is approached in the limit). We want to evaluate \be{CW1} \lim_{x_{i,i+1}^{2}\rightarrow 0}\frac{\la \cO(x_{1})\ldots\cO(x_{n})\ra}{\la\cO(x_{1})\ldots\cO(x_{n})\ra^{\mathrm{tree}}} \ee using our twistorial local operators \eqref{nabelian} with respect to the $\cN=4$ SYM twistor action. It is well known that the tree level contribution in the denominator goes as \begin{equation*} \la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \sim \frac{1}{x_{12}^{2}x_{23}^{2}\cdots x_{n1}^{2}}, \end{equation*} so we can neglect any contribution from the numerator which does not have a counterbalancing divergence in the null limit. \begin{figure}[t] \centering \includegraphics[width=100mm]{polygons.pdf}\caption{\textit{As the $n$ generic points $(x,\theta)$ become null separated in $\M$, the corresponding $n$ $\P^1$s in $\PT$ intersect to form the nodal elliptic curve $C$.}}\label{polygons} \end{figure} In twistor space, the geometry of the null limit is elegantly manifested (see Figure \ref{polygons}). As the null polygon is approached in space-time, the $n$ lines $\{X_{i}\}$ intersect sequentially to form the nodal elliptic curve $C\subset\PT$. To evaluate the numerator of \eqref{CW1}, we apply Wick's theorem to the twistorial path integral \begin{equation*} \int [\cD\cA] \cO(x_{1})\cdots\cO(x_{n})\;e^{-S[\cA]}, \end{equation*} under the assumptions of \emph{normal ordering} and \emph{genericity}. The normal ordering assumption means that we can exclude any contractions between fields or frames inserted on the same lines in twistor space; the genericity assumption means that the MHV vertices generated by the second term $S_{2}[\cA]$ in the twistor action \eqref{TAInt} are not null separated from any of the operator insertions. 
The latter condition is simply that the lines appearing in the perturbative expansion \eqref{detexp} do not intersect any of the lines where an operator insertion lives. Contractions will occur between insertions of the twistor $(0,1)$-connection $\cA$. These appear in operator insertions $\frac{\partial^{2}\cA}{\partial\chi^{2}}$, the perturbative expansion of the holomorphic frames $U_{X}$, and in the MHV vertex insertions from $S_{2}[\cA]$. We will thus have three classes of contractions to consider: \begin{itemize} \item Contractions involving a MHV vertex. \item Contractions between non-adjacent $X_{i}$s. \item Contractions between adjacent $X_{i}$s (i.e., between fields on $X_{i}$ and $X_{i\pm 1}$). \end{itemize} We are free to choose a gauge on twistor space in which to perform these calculations; following the lessons of Section \ref{Chapter3}, let us fix CSW gauge. Then the twistor space propagator is given by \eqref{TAprop}. Lines in $\PT$ can be parametrized by $Z(s_{i})=Z_{A_i}+s_{i}Z_{B_i}$, with $s_{i}$ acting as an inhomogeneous coordinate on $X_{i}$. The measure in homogeneous coordinates $\D\lambda_{i}$ is then written as $\la A_{i} B_{i}\ra \d s_{i}$. Without loss of generality, we can choose $Z_{A_i}$ and $Z_{B_i}$ to be the intersection points that $X_{i}$ develops with $X_{i-1}$ and $X_{i+1}$ respectively in the null limit (i.e., as $x_{i,i+1}^{2}\rightarrow 0$, $Z_{B_{i}}\rightarrow Z_{A_{i+1}}$). Similarly, we parametrize a line $X$ corresponding to an arbitrary MHV vertex from the twistor action as $Z(t)=Z_{A}+t Z_{B}$. Without loss of generality, we assume that the fixed CSW reference twistor $Z_{*}$ has no fermionic part (i.e., $\chi_{*}=0$). \subsubsection*{\textit{Contractions involving an MHV vertex}} Consider an arbitrary MHV vertex supported on $X$ and the operators and frames supported on any of the $X_{i}$. 
The contraction between a field $\cA$ in a holomorphic frame $U_{X_i}$ and a field $\cA$ on $X$ is given by \be{C1} \left\la \overbrace{\cA|_{X_{i}}\cA|_{X}} \right\ra=\int_{\C^{2}} \frac{\d s_{i}}{s_{i}}\frac{\d t}{t} \Delta(Z(s_{i}), Z(t))=[A_{i},B_{i},*,A,B]. \ee Genericity means that (even in the null limit) $X_{i}\cap X=\emptyset$, so contractions of the type \eqref{C1} are always finite by the definition of the R-invariant. Additionally, we can have a contraction between an insertion of $\frac{\partial^{2}\cA}{\partial\chi^{2}}$ on $X_{i}$ and a field $\cA$ on $X$, which leads to \begin{multline}\label{C2} \left\la \overbrace{\frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}|_{X_{i}}\cA|_{X}} \right\ra = \frac{\partial^{2}}{\partial\chi_{A_i}^{a}\partial\chi_{B_{i}}^{b}}[A_{i},B_{i},*,A,B] \\ =\frac{\delta^{0|2}_{ab}\left(\chi_{A_i}(B_{i}*A B) +\chi_{B_{i}}(*ABA_{i})+\chi_{A}(BA_{i}B_{i}*)+\chi_{B}(A_{i}B_{i}*A)\right)}{(A_{i}B_{i}AB)(BA_{i}B_{i}*)(A_{i}B_{i}*A)}. \end{multline} This is also finite thanks to genericity, so we can neglect all contributions to \eqref{CW1} due to contractions between an operator and MHV vertices. \subsubsection*{\textit{Contractions between non-adjacent $X_{i}$s}} Contractions between non-adjacent operator insertions on $X_{i}$ and $X_{j}$ (for $j\neq i+1,\; i-1$) follow in a similar fashion to \eqref{C1} and \eqref{C2}. In this class, we have potential contributions from contractions between frames, between a frame and an insertion of $\frac{\partial^{2}\cA}{\partial\chi^{2}}$, or between two insertions of $\frac{\partial^{2}\cA}{\partial\chi^{2}}$. 
Short calculations show that these are given by: \be{C3} \left\la \overbrace{\cA|_{X_{i}}\cA|_{X_{j}}} \right\ra=\int_{\C^{2}} \frac{\d s_{i}}{s_{i}}\frac{\d s_{j}}{s_j} \Delta(Z(s_{i}), Z(s_j))=[A_{i},B_{i},*,A_{j},B_{j}], \ee \begin{multline}\label{C4} \left\la \overbrace{\frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}|_{X_{i}}\cA|_{X_{j}}} \right\ra = \frac{\partial^{2}}{\partial\chi_{A_i}^{a}\partial\chi_{B_{i}}^{b}}[A_{i},B_{i},*,A_{j},B_{j}] \\ =\frac{\delta^{0|2}_{ab}\left(\chi_{A_i}(B_{i}*A_{j}B_{j}) +\chi_{B_{i}}(*A_{j}B_{j}A_{i})+\chi_{A_{j}}(B_{j}A_{i}B_{i}*)+\chi_{B_{j}}(A_{i}B_{i}*A_{j})\right)}{(A_{i}B_{i}A_{j}B_{j})(B_{j}A_{i}B_{i}*)(A_{i}B_{i}*A_{j})}, \end{multline} \begin{multline}\label{C5} \left\la \overbrace{\frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}|_{X_{i}}\frac{\partial^{2}\cA}{\partial\chi^{c}\partial\chi^{d}}|_{X_{j}}} \right\ra =\frac{\partial^{4}}{\partial\chi^{a}_{A_{i}}\partial\chi^{b}_{B_{i}}\partial\chi^{c}_{A_{j}}\partial\chi^{d}_{B_{j}}}[A_{i},B_{i},*,A_{j},B_{j}] \\ =\frac{\epsilon_{abcd}}{(A_{i}B_{i}A_{j}B_{j})}, \end{multline} respectively. Because the $X_{i}$ only become \emph{pairwise} null separated in the limit, $X_{i}\cap X_{j}=\emptyset$ and all three of \eqref{C3}-\eqref{C5} are finite. So no contractions from this class contribute to the ratio \eqref{CW1} in the null limit. \subsubsection*{\textit{Contractions between adjacent $X_{i}$s}} Finally, we must consider contractions between operator insertions on $X_{i}$ and $X_{i+1}$. In this case, we need to be careful because the two lines will intersect in the null limit, and a regularization procedure is needed to isolate the behavior of the Wick contractions as the limit is approached. The simplest mechanism is given by a framing procedure: take two copies of the singular configuration in twistor space separated by a small parameter, and then consider the limit as this parameter is taken to zero. 
While this is not gauge invariant on space-time, we work at the level of the loop integrand in twistor space and the framing regulator is perfectly well-defined at the level of this rational function. More precisely, as $x^{2}_{i,i+1}\rightarrow 0$, we assume that $Z_{A_{i+1}}=Z_{B_i}+\varepsilon Z$ for some twistor $Z$ and $\varepsilon$ our small parameter. In this scheme, the numerator of the R-invariant becomes: \begin{multline*} \delta^{0|4}\left(\varepsilon\chi_{A_{i}}(B_{i}* Z B_{i+1})+\;\mathrm{cyclic}\right)=\prod_{a=1}^{4}\left[\varepsilon\chi^{a}_{A_{i}}(B_{i}* Z B_{i+1})+ \chi^{a}_{B_i}(Z B_{i}B_{i+1}A_{i}) \right.\\ \left. +\varepsilon\chi^{a}_{B_i}(* Z B_{i+1}A_{i})+\chi^{a}_{B_i}(B_{i+1}A_{i}B_{i}*)+\varepsilon\chi^{a}_{B_{i+1}}(A_{i}B_{i}* Z)\right] \sim O(\varepsilon^{4}), \end{multline*} while the denominator behaves as: \begin{multline*} \varepsilon^{4}(B_{i+1}A_{i}B_{i}* )(A_{i}B_{i}* Z)(B_{i}* Z B_{i+1})(* Z B_{i+1} A_{i})(Z B_{i+1} A_{i} B_{i}) \\ + \varepsilon^{3}(B_{i+1}A_{i}B_{i}*)(A_{i}B_{i}* Z)(B_{i}* Z B_{i+1})(* B_{i}B_{i+1}A_{i})(Z B_{i+1}A_{i}B_{i}). \end{multline*} Let us apply this to contractions between the frames $U_{X_i}$ and $U_{X_{i+1}}$: \begin{equation*} \left\la \overbrace{\cA|_{X_{i}}\cA|_{X_{i+1}}} \right\ra=\int_{\C^{2}} \frac{\d s_{i}}{s_{i}}\frac{\d s_{i+1}}{s_{i+1}} \Delta(Z(s_{i}), Z(s_{i+1}))=[A_{i},B_{i},*,A_{i+1},B_{i+1}]. \end{equation*} The null limit gives \be{C6} \lim_{x^{2}_{i,i+1}\rightarrow 0}\left\la \overbrace{\cA|_{X_{i}}\cA|_{X_{i+1}}} \right\ra= \lim_{\varepsilon\rightarrow 0}[A_{i},B_{i},*, (B_{i}+\varepsilon Z), B_{i+1}] \sim \lim_{\varepsilon\rightarrow 0} \frac{\varepsilon^{4}}{\varepsilon^{4}+\varepsilon^{3}} =0, \ee as a consequence of $\cN=4$ supersymmetry in the numerator of the R-invariant. 
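It is instructive to isolate the power counting behind \eqref{C6}: each of the four fermionic delta function components contributes a factor of $\varepsilon$, while the leading pole of the denominator is only $O(\varepsilon^{3})$, so schematically \begin{equation*} [A_{i},B_{i},*,(B_{i}+\varepsilon Z),B_{i+1}]\sim\frac{\varepsilon^{4}}{\varepsilon^{3}}=\varepsilon\rightarrow 0. \end{equation*} Had we been working with fewer than four supersymmetries, the numerator would scale as $\varepsilon^{\cN}$ and this contraction would no longer vanish in the null limit.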
Similarly, the contraction between an insertion of $\frac{\partial^{2}\cA}{\partial\chi^{2}}$ on $X_{i}$ and a field $\cA$ in $U_{X_{i+1}}$ gives \begin{multline*} \left\la \overbrace{\frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}|_{X_{i}}\cA|_{X_{i+1}}} \right\ra = \frac{\partial^{2}}{\partial\chi_{A_i}^{a}\partial\chi_{B_{i}}^{b}}[A_{i},B_{i},*,A_{i+1},B_{i+1}] = \\ \frac{\delta^{0|2}_{ab}\left(\chi_{A_i}(B_{i}*A_{i+1}B_{i+1}) +\chi_{B_{i}}(*A_{i+1}B_{i+1}A_{i})+\chi_{A_{i+1}}(B_{i+1}A_{i}B_{i}*)+\chi_{B_{i+1}}(A_{i}B_{i}*A_{i+1})\right)}{(A_{i}B_{i}A_{i+1}B_{i+1})(B_{i+1}A_{i}B_{i}*)(A_{i}B_{i}*A_{i+1})}, \end{multline*} which is finite upon passing to the null limit: \be{C7} \lim_{x^{2}_{i,i+1}\rightarrow 0} \left\la \overbrace{\frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}|_{X_{i}}\cA|_{X_{i+1}}} \right\ra \sim \lim_{\varepsilon\rightarrow 0}\frac{\varepsilon^{2}}{\varepsilon^{2}} =1. \ee Being finite, these contractions likewise contribute nothing to the ratio \eqref{CW1} in the null limit. Finally, we must consider the contraction between insertions of $\frac{\partial^{2}\cA}{\partial\chi^{2}}$ on each of $X_{i}$ and $X_{i+1}$. From \eqref{C5}, it is easy to see that \begin{multline*} \left\la \overbrace{\frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}|_{X_{i}}\frac{\partial^{2}\cA}{\partial\chi^{c}\partial\chi^{d}}|_{X_{i+1}}} \right\ra =\frac{\partial^{4}}{\partial\chi^{a}_{A_{i}}\partial\chi^{b}_{B_{i}}\partial\chi^{c}_{A_{i+1}}\partial\chi^{d}_{B_{i+1}}}[A_{i},B_{i},*,A_{i+1},B_{i+1}] \\ =\frac{\epsilon_{abcd}}{(A_{i}B_{i}A_{i+1}B_{i+1})}.
\end{multline*} Rather than regulate this contraction, note that its behavior is evident after integrating the contraction over the respective operator insertion sites: \begin{multline}\label{C8} \int\limits_{X_{i}\times X_{i+1}}\D\lambda_{i}\;\D\lambda_{i+1} \left\la \overbrace{\frac{\partial^{2}\cA}{\partial\chi^{a}\partial\chi^{b}}|_{X_{i}}\frac{\partial^{2}\cA}{\partial\chi^{c}\partial\chi^{d}}|_{X_{i+1}}} \right\ra\\ =\la A_{i}B_{i}\ra \la A_{i+1}B_{i+1}\ra \frac{\partial^{4}}{\partial\chi^{a}_{A_{i}}\partial\chi^{b}_{B_{i}}\partial\chi^{c}_{A_{i+1}}\partial\chi^{d}_{B_{i+1}}} \int_{\C^{2}}\frac{\d s_{i}}{s_{i}}\frac{\d s_{i+1}}{s_{i+1}} \Delta(Z(s_{i}),Z(s_{i+1})) \\ =\epsilon_{abcd}\frac{\la A_{i}B_{i}\ra \la A_{i+1}B_{i+1}\ra}{(A_{i}B_{i}A_{i+1}B_{i+1})}=\frac{\epsilon_{abcd}}{(x_{i}-x_{i+1})^{2}}. \end{multline} Such a contraction thus diverges as $x^{2}_{i,i+1}\rightarrow 0$ in the null limit, precisely the singular behavior needed to counterbalance the tree-level denominator of \eqref{CW1}. Hence, \eqref{C8} are the \emph{only} contractions which survive in the null limit. \medskip \begin{figure}[tp] \centering \includegraphics[width=150mm]{limit.pdf}\caption{\textit{The surviving contributions in the null limit form a doubled trace of the holonomy around $C$ corresponding to a Wilson loop in the adjoint representation.}}\label{limit} \end{figure} So in the numerator of \eqref{CW1}, only those contributions which contract all adjacent insertions of $\frac{\partial^{2}\cA}{\partial\chi^{2}}$ survive. These cancel the tree-level denominator and leave two holomorphic frames $U_{X}$ on each of the $X_{i}$. In the null limit, the trace around these intersecting lines yields the integrand of the twistor Wilson loop in the adjoint representation; see Figure \ref{limit}. 
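The last equality in \eqref{C8} is an instance of the standard relation between four-brackets and space-time intervals: with the usual incidence relations $\mu^{\dot\alpha}=x^{\alpha\dot\alpha}\lambda_{\alpha}$ for points on the lines $X_{i}$ and $X_{i+1}$ (and up to convention-dependent numerical factors),
\begin{equation*}
(A_{i}B_{i}A_{i+1}B_{i+1})=\la A_{i}B_{i}\ra\,\la A_{i+1}B_{i+1}\ra\,(x_{i}-x_{i+1})^{2},
\end{equation*}
so this four-bracket vanishes precisely as $x^{2}_{i,i+1}\rightarrow 0$. The same relation explains why the brackets appearing in \eqref{C3}--\eqref{C5} remain non-zero for non-adjacent lines.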
Hence, we have shown \be{CW2} \lim_{x_{i,i+1}^{2}\rightarrow 0}\frac{\la \cO(x_{1})\ldots\cO(x_{n})\ra}{\la\cO(x_{1})\ldots\cO(x_{n})\ra^{\mathrm{tree}}}=\left\la W_{\mathrm{adj}}[C]\right\ra, \ee at the level of the integrand and for gauge group $\SU(N)$ or $\U(N)$. In the planar limit (i.e., $N\rightarrow\infty$), the twistor propagator suppresses any mixing between fundamental and anti-fundamental representations thanks to the color structure \eqref{TAcolorprop}: \begin{equation*} \Delta(Z_{1},Z_{2})^{i k}_{j l}=\bar{\delta}^{2|4}(Z_{1},\rf, Z_{2})\left(\delta^{i}_{l}\delta^{k}_{j}-\frac{1}{N}\delta^{i}_{j}\delta^{k}_{l}\right). \end{equation*} Hence, we can decompose the adjoint representation into the product of fundamental and anti-fundamental representations, and at the level of the Wilson loop, we have: \begin{equation*} \left\la W_{\mathrm{adj}}[C]\right\ra = \left\la W[C]\;\widetilde{W}[C]\right\ra = \left\la W[C]\right\ra^{2}. \end{equation*} This means we have proven the supersymmetric correlation function / Wilson loop correspondence \eqref{corrW}, as first reported in \cite{Adamo:2011dq}. More formally, our result is: \begin{propn}\label{corrWL} Let $\{\cO(x_{i})\}_{i=1,\ldots, n}$ be gauge invariant local operators in $\cN=4$ SYM and $C$ be the null polygon resulting from the limit where these operators become pairwise null separated (i.e., $x_{i,i+1}^{2}=0$). Then at the level of the integrand, \be{corrW2} \lim_{x_{i,i+1}^{2}\rightarrow 0}\frac{\la \cO(x_{1})\ldots\cO(x_{n})\ra}{\la\cO(x_{1})\ldots\cO(x_{n})\ra^{\mathrm{tree}}}=\la W_{\mathrm{adj}}[C]\ra \xrightarrow{\mathrm{planar}\:\mathrm{limit}} \la W[C]\ra^{2}, \ee where all expectation values are assumed to be generic and normal ordered, and $W[C]$ is the Wilson loop in the fundamental representation. \end{propn} From our basic knowledge of the twistor action, twistor Wilson loop, and the scattering amplitudes/Wilson loop duality, there are several immediate corollaries of this fact. 
The resulting null polygon $C\subset\M$ defines a set of $n$ null (super-)momenta, which satisfy momentum conservation and hence define data for a scattering amplitude. This allows us to relate the supersymmetric correlation function / Wilson loop correspondence to scattering amplitudes in the planar limit: \begin{corol}\label{CcorrWL} Work in the planar limit of $\cN=4$ SYM. The following statements hold at the level of the loop integrand: \be{corrW3} \lim_{x_{i,i+1}^{2}\rightarrow 0}\frac{\la \cO_{b}(x_{1})\ldots\cO_{b}(x_{n})\ra}{\la\cO(x_{1})\ldots\cO(x_{n})\ra^{\mathrm{tree}}}= \left(\sum_{l=0}^{\infty}\frac{A^{l}_{n,0}}{A^{0}_{n,0}}\right)^{2}, \ee \be{corrW4} \lim_{x_{i,i+1}^{2}\rightarrow 0}\frac{\la \cO(x_{1})\ldots\cO(x_{n})\ra^{\mathrm{SD}}}{\la\cO(x_{1})\ldots\cO(x_{n})\ra^{\mathrm{tree}}}=\left(1+\frac{A^{0}_{n,1}}{A^{0}_{n,0}}+\cdots +\frac{A^{0}_{n,n-4}}{A^{0}_{n,0}}\right)^{2}, \ee where all expectation values are assumed to be generic and normal ordered, $\{\cO_{b}(x_{i})\}_{i=1,\ldots,n}$ are the bosonic versions of the local operators, and $\la \cO(x_{1})\ldots\cO(x_{n})\ra^{\mathrm{SD}}$ denotes the expectation value with respect to the self-dual portion of the theory. \end{corol} \proof \eqref{corrW3} follows from the fact that for the bosonic operators in the planar limit, we will recover the square of the bosonic Wilson loop \eqref{bWilsonloop} about the contour $C\subset\M_{b}$ \cite{Mason:2010yk}. The most basic form of the scattering amplitude / Wilson loop duality identifies this with the square of the all-loop MHV amplitude, normalized by the tree-level MHV amplitude. For \eqref{corrW4}, the self-dual truncation means taking the expectation value with respect to $S_{1}[\cA]$ in twistor space. This eliminates the MHV vertex insertions from $S_{2}[\cA]$, which constitute the loop corrections.
Hence, the fundamental Wilson loop in the self-dual theory gives all the $n$-point tree-level amplitudes, normalized by the tree-level MHV amplitude. $\Box$ \medskip There are several facts worth noting before we move on. Firstly, proposition \ref{corrWL} establishes the correspondence between correlation functions and Wilson loops for finite-rank gauge group. This immediately confirms that this correspondence is more robust than the scattering amplitudes / Wilson loop duality, which only holds in the planar limit. Of course, this is to be expected: scattering amplitudes and Wilson loops are defined in \emph{dual} spaces related by a sort of T-duality, whereas our correlation functions are defined on the same space as the Wilson loops. If one wanted to extend this correspondence to account for full loop \emph{integrals}, then this indicates that the same regularization procedure can be used for both the Wilson loop and the correlation function. Upon passing to the planar limit, corollary \ref{CcorrWL} tells us that this will define the (square of the) regularized scattering amplitudes. \subsection{Mixed Wilson Loop / Local Operator Correlators} The supersymmetric correlation function / Wilson loop correspondence can naturally be generalized by considering null limits of local operator insertions in which some local operators remain in general position (i.e., not null separated). In the planar limit, Alday, Buchbinder, and Tseytlin conjectured that such a process would lead to mixed Wilson loop / local operator correlators \cite{Alday:2011ga} \be{locW1} \lim_{x_{i,i+1}^{2}\rightarrow 0}\frac{\la \cO(x_{1})\ldots\cO(x_{n})\cO(y)\ra}{\la\cO(x_{1})\ldots\cO(x_{n})\ra}\sim \frac{\la W^{n}[C]\cO(y)\ra}{\la W^{n}[C]\ra}\equiv \cC^{n}_{1}(W^{n}, y). \ee The intuition for this is based upon the case when $\cO(y)=\cO_{\mathrm{dil}}(y)$, the dilaton operator. 
Since $\cO_{\mathrm{dil}}$ is (up to a re-scaling) the $\cN=4$ SYM Lagrangian (cf.\ \cite{Liu:1999kg}), one can use proposition \ref{corrWL} in conjunction with integration-by-parts inside the path integral to arrive at the right-hand side of \eqref{locW1}, albeit with the position $y$ of the dilaton operator integrated over. While this proposal has been confirmed at weak coupling for twist-2 local operators using dimensional regularization \cite{Engelund:2011fg}, it is hardly obvious that the integral over position can be omitted or that \eqref{locW1} holds for arbitrary local operators. It turns out that the twistorial point of view once again allows us to prove these claims with relative ease (at the level of the integrand). Note that there are many reasons to be interested in the mixed correlators which are conjectured to appear on the right-hand side of \eqref{locW1}. These mixed correlators are a natural candidate for interpolating between Wilson loops and generic correlation functions; their structure is highly constrained by conformal invariance; and studying $\cC^{n}_{1}$ provides information about the Wilson loop OPE \cite{Berenstein:1998ij}. Indeed, for $n=4$ (where the strong coupling solution for the Wilson loop is explicitly known \cite{Alday:2007hr}) one can show that $\cC^{4}_{1}$ is a function of a single conformal cross-ratio, and hence explicit strong coupling calculations are possible. In this setting, the functional dependence can be determined precisely in the strong coupling regime using a semi-classical string theory approximation in the AdS geometry \cite{Zarembo:2002ph, Roiban:2010fe, Zarembo:2010rr, Costa:2010rz, Buchbinder:2010ek, Alday:2011ga, Hernandez:2012zj}. This method has also been used to study a similar correlator involving a circular Wilson loop (i.e., $n\rightarrow\infty$) \cite{Alday:2011pf}, and in this case some progress can also be made for the inclusion of two local operators in general position \cite{Buchbinder:2012vr}.
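Returning to the dilaton intuition: since $\cO_{\mathrm{dil}}$ is proportional to the Lagrangian, an insertion of the dilaton integrated over its position is (schematically, and up to normalization, which we do not fix here) the same as a derivative with respect to the coupling. One therefore expects a relation of the form
\begin{equation*}
\frac{\partial}{\partial g^{2}}\,\log\,\la W^{n}[C]\ra \;\propto\; \int \d^{4}y\;\frac{\la W^{n}[C]\,\cO_{\mathrm{dil}}(y)\ra}{\la W^{n}[C]\ra},
\end{equation*}
which is the right-hand side of \eqref{locW1}, but with the position $y$ integrated over; removing this integration is one of the claims we must prove.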
Furthermore, while null polygonal Wilson loops have UV divergences coming from their cusps, the mixed correlators appear to be UV finite since these divergences should cancel between the numerator and denominator. This has been checked explicitly to two loops for the $n=4$ Wilson loop and the dilaton operator \cite{Alday:2012hy, Alday:2013ip}, and is expected to hold to all orders in perturbation theory. This indicates that studying mixed correlators at the level of the loop integrand is in fact a mathematically safer endeavour than the study of scattering amplitudes or null polygonal Wilson loops on their own. \subsubsection{Null limits in twistor space} We now seek to confirm the conjecture of \cite{Alday:2011ga}: that mixed correlators are equivalent to null limits of ratios of correlation functions. Without loss of generality, let all operators in question be 1/2-BPS operators given on space-time by \eqref{BPS} and on twistor space by \eqref{nabelian}. Recall that we could easily modify our discussion to account for any local operators (such as the Konishi or dilaton operators); as in the proof of proposition \ref{corrWL}, the only thing that matters is the null limit. Provided all limits exist (as we will show), we can separate the limit of interest as \be{eqn: l2} \lim_{x^{2}_{i,i+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y)\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}}}\times \lim_{x^{2}_{i,i+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}}}{\la \cO(x_{1})\cdots\cO(x_{n})\ra}, \ee where the insertion $y$ is in general position (i.e., not null separated from any of the $x_{i}$). However, using proposition \ref{corrWL}, it is easy to see that the limit we actually need to calculate is: \be{newlimit} \lim_{x^{2}_{i,i+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y)\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}}}\times \frac{1}{\la W^{n}_{\mathrm{adj}}[C]\ra}.
\ee Once again, the tree level contribution in the denominator goes as \begin{equation*} \la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \sim \frac{1}{x_{12}^{2}x_{23}^{2}\cdots x_{n1}^{2}}, \end{equation*} so we are interested in extracting those contributions from the numerator which counterbalance this classical factor in the null limit. \begin{figure} \centering \includegraphics[width=3 in, height=1.5 in]{locops1.pdf}\caption{\textit{The geometry of the null limit in} (a.) \textit{space-time, and} (b.) \textit{twistor space.}}\label{locops1} \end{figure} As before, this situation has a nice formulation in twistor space: we begin with $n+1$ lines $X_{1},\ldots,X_{n},Y\subset\PT$ for each local operator. In the limit, the first $n$ of these intersect each other sequentially to form the nodal curve corresponding to the resulting null polygon in $\M$; the final operator in general position, $\cO(y)$, lies on a line $Y$ which does not intersect \emph{any} of the others. This configuration is illustrated in Figure \ref{locops1}. We can now obtain the desired result, first reported in \cite{Adamo:2011cd}: \begin{propn}\label{locP1} Let $\{\cO(x_{i}), \cO(y)\}_{i=1,\ldots,n}$ be gauge invariant local operators in $\cN=4$ SYM, and $C$ be the null polygon resulting from the limit where the first $n$ of these operators become pairwise null separated (i.e., $x_{i,i+1}^{2}=0$). Then at the level of the integrand, \be{locW} \lim_{x^{2}_{i,i+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y)\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra} = \frac{\la W^{n}_{\mathrm{adj}}[C]\cO(y)\ra}{\la W^{n}_{\mathrm{adj}}[C]\ra} \xrightarrow{\mathrm{planar}\:\mathrm{limit}} 2\frac{\la W^{n}[C]\cO(y)\ra}{\la W^{n}[C]\ra}, \ee where all expectation values are assumed to be generic and normal ordered, and $W^{n}[C]$ is the Wilson loop in the fundamental representation. 
\end{propn} \proof In this setting, we have the same classes of contractions as in proposition \ref{corrWL}, along with an additional class involving frames and operator insertions along the line $Y$. By the genericity assumption, any contractions involving $Y$ and a MHV vertex will produce an R-invariant, or a second derivative of an R-invariant, which will be finite in the null limit. Additionally, since $Y$ does not intersect any of the $\{X_{i}\}$, all other possible contributions from the local operator in general position will be finite in the null limit (this follows using methods identical to those for the proof of proposition \ref{corrWL}). This leaves two holomorphic frames $U_{X}$ on each of the $X_{i}$ and the local operator $\cO(y)$ in general position after the null limit, so we have \begin{equation*} \lim_{x^{2}_{i,i+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y)\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}}}=\la W^{n}_{\mathrm{adj}}[C]\cO(y) \ra. \end{equation*} Now, passing to the planar limit of the gauge theory (i.e., $N\rightarrow\infty$), we can decompose the adjoint representation into the product of fundamental and anti-fundamental representations to write: \begin{equation*} \la W^{n}_{\mathrm{adj}}[C]\cO(y) \ra = \la W^{n}[C] \widetilde{W}^{n}[C]\cO(y) \ra. \end{equation*} In the planar limit, the tensor structure of the propagator \eqref{TAcolorprop} suppresses those contractions between the operators and frames on $Y$ and the Wilson loops which would mix the fundamental and anti-fundamental representations, as depicted in Figure \ref{locops2}. This means that in the large $N$ limit, we have \begin{equation*} \la W^{n}[C] \widetilde{W}^{n}[C]\cO(y) \ra = 2 \la W^{n}[C] \ra \la W^{n}[C] \cO(y)\ra, \end{equation*} as required. $\Box$ \begin{figure} \centering \includegraphics[width=4 in, height=1.5 in]{locops2.pdf}\caption{\textit{Contractions which are} (a.) \textit{leading, and} (b.) \textit{suppressed in the planar limit.
The solid and dashed lines are meant to distinguish the fundamental and anti-fundamental representations.}}\label{locops2} \end{figure} Note that the last step in this proof is a rather explicit manifestation of the following general heuristic for a planar gauge theory: given three operators $\cO_{1}$, $\cO_{2}$, and $\cO_{3}$, their expectation value should obey \begin{equation*} \la \cO_{1} \cO_{2} \cO_{3} \ra = \la \cO_{1}\ra \la \cO_{2} \cO_{3}\ra + \la \cO_{2}\ra \la \cO_{3} \cO_{1}\ra + \la \cO_{3}\ra \la \cO_{1} \cO_{2}\ra. \end{equation*} In the case of interest, two of these terms are equal while the third vanishes due to normal ordering. \subsubsection*{\textit{Additional operators and null limits}} There are natural generalizations of the null limit we have considered here; in particular, we could leave an arbitrary number of local operators in general position. This extension was first proposed in \cite{Alday:2011ga} and has been investigated at weak \cite{Engelund:2011fg} and strong coupling \cite{Adamo:2011cd, Buchbinder:2012vr}. For instance, consider \begin{equation*} \lim_{x^{2}_{i,i+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y_{1})\cdots\cO(y_{k})\ra}{\la \cO(x_1)\cdots\cO(x_n)\ra}, \end{equation*} where the $k$ operators $\cO(y_{1}),\ldots,\cO(y_k)$ remain in general position relative to the $x_{i}$ and each other. The proof of proposition \ref{locP1} can easily be adapted to this situation, giving \begin{equation*} \frac{1}{\la W_{\mathrm{adj}}[C]\ra} \sum_{j=0}^{k-2} \sum_{\{i_{1},\ldots, i_{j}\}\subset \{1,\ldots, k\}} \la W_{\mathrm{adj}}[C]\cO(y_{i_{1}})\cdots\cO(y_{i_{j}})\ra\: \la\cO(y_{i_{j+1}})\cdots\cO(y_{i_{k}})\ra, \end{equation*} with the range of the sum dictated by normal ordering. Taking the planar limit splits the first factor into two correlators with fundamental Wilson loops as before, and introduces another sum over partitions of the remaining operators. 
Of course, we can generalize this further by also allowing the $k$ additional operators to become pairwise null separated, forming a second null polygon $D$. This results in new divergences which must be balanced by the appropriate denominator; the natural choice is: \be{locW2*} \lim_{x^{2}_{i,i+1},y^{2}_{j,j+1}\rightarrow 0} \frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y_{1})\cdots\cO(y_{k})\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra \la\cO(y_{1})\cdots\cO(y_{k})\ra}. \ee By proposition \ref{corrWL}, this is: \begin{equation*} \lim_{x^{2}_{i,i+1},y^{2}_{j,j+1}\rightarrow 0} \frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y_{1})\cdots\cO(y_{k})\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \la\cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{tree}}}\frac{1}{\la W_{\mathrm{adj}}[C]\ra \la W_{\mathrm{adj}}[D]\ra}, \end{equation*} where the tree-level denominator has the expected singularity structure: \be{diverge} \la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \la \cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{tree}} \sim \frac{1}{x_{12}^{2}x_{23}^{2}\cdots x_{n1}^{2}} \times \frac{1}{y_{12}^{2}y_{23}^{2}\cdots y_{k1}^{2}}. \ee So once again, we need to extract compensating divergences from the numerator of \eqref{locW2*}. We can break the numerator into a sum of connected and disconnected components: \begin{multline*} \frac{1}{\la W_{\mathrm{adj}}[C]\ra \la W_{\mathrm{adj}}[D]\ra} \lim_{x^{2}_{i,i+1},y^{2}_{j,j+1}\rightarrow 0}\left( \frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{conn}}}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \la\cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{tree}}} \right. \\ + \left. 
\frac{\la \cO(x_{1})\cdots\cO(x_{n})\ra \la\cO(y_{1})\cdots\cO(y_{k})\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \la\cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{tree}}} + \frac{\{\mbox{all other disconnected}\}}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \la\cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{tree}}} \right), \end{multline*} and analyse each term by performing all contractions in twistor space and looking at their degree of divergence. Because none of the $X_{i}$ and $Y_{j}$ ever intersect in twistor space (we assume that the two sets of operators become pairwise null separated independently), the proof of proposition \ref{corrWL} indicates that the only contractions which produce the correct degree of divergence in the first term are those between $\frac{\partial^{2}\cA}{\partial \chi^{2}}$ on adjacent $X$s and adjacent $Y$s. Hence, \begin{equation*} \lim_{x^{2}_{i,i+1},y^{2}_{j,j+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{conn}}}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \la\cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{tree}}}=\la W_{\mathrm{adj}}[C] W_{\mathrm{adj}}[D]\ra^{\mathrm{conn}}. \end{equation*} The second term is easily evaluated by applying proposition \ref{corrWL}: \begin{equation*} \lim_{x^{2}_{i,i+1},y^{2}_{j,j+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\ra \la\cO(y_{1})\cdots\cO(y_{k})\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra^{\mathrm{tree}} \la\cO(y_{1})\cdots\cO(y_{k})\ra^{\mathrm{tree}}}= \la W_{\mathrm{adj}}[C]\ra \la W_{\mathrm{adj}}[D]\ra . \end{equation*} The remaining terms (composed of all other disconnected components from the correlation function) involve all the usual contractions which give a vanishing contribution (e.g., contractions between non-adjacent lines, contractions between operators and fields on any line with a MHV vertex), and additionally contain no connected component with enough lines in twistor space to form a full Wilson loop in the null limit. 
So any term in this sum of disconnected components will contain some divergences of the form $x_{i,i+1}^{-2} y_{j,j+1}^{-2}$, but never the full array appearing in \eqref{diverge}. Thus, all remaining disconnected terms vanish in the null limit. In both of the generalizations discussed here, we can pass to the planar limit in twistor space by invoking the twistor propagator with its color structure given by \eqref{TAcolorprop}. More formally, we have: \begin{propn}\label{locP2} Let $\{\cO(x_{i}), \cO(y_{j})\}^{i=1,\ldots,n}_{j=1,\ldots,k}$ be gauge invariant local operators in $\cN=4$ SYM, $C$ be the null polygon resulting from the limit where the $\{\cO(x_{i})\}$ become pairwise null separated (i.e., $x_{i,i+1}^{2}=0$), and $D$ be the null polygon when the $\{\cO(y_{j})\}$ become null separated ($y^{2}_{j,j+1}=0$). Then at the level of the integrand, \begin{multline}\label{locW2} \lim_{x^{2}_{i,i+1}\rightarrow 0}\frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y_{1})\cdots\cO(y_{k})\ra}{\la \cO(x_1)\cdots\cO(x_n)\ra} \\ = \frac{1}{\la W_{\mathrm{adj}}[C]\ra} \sum_{j=0}^{k-2} \sum_{\{i_{1},\ldots, i_{j}\}\subset \{1,\ldots, k\}} \left\la W_{\mathrm{adj}}[C]\cO(y_{i_{1}})\cdots\cO(y_{i_{j}})\right\ra\: \la\cO(y_{i_{j+1}})\cdots\cO(y_{i_{k}})\ra \\ \xrightarrow{\mathrm{planar}\:\mathrm{limit}} \frac{1}{\la W[C]\ra^{2}} \sum_{\cP_{k}} \left\la W[C]\cO(y_{i_{1}})\cdots\cO(y_{i_{j}})\right\ra \;\la W[C]\cO(y_{i_{j+1}})\cdots\cO(y_{i_{l}})\ra \\ \times \la \cO(y_{i_{l+1}})\cdots\cO(y_{i_{k}})\ra, \end{multline} where all expectation values are assumed to be generic and normal ordered, $W[C]$ is the Wilson loop in the fundamental representation, and $\sum_{\cP_{k}}$ is the sum over relevant partitions of $\{1,\ldots,k\}$. 
If we allow the remaining $k$ operators to also become null separated, then \begin{multline}\label{locW3} \lim_{x^{2}_{i,i+1},y^{2}_{j,j+1}\rightarrow 0} \frac{\la \cO(x_{1})\cdots\cO(x_{n})\cO(y_{1})\cdots\cO(y_{k})\ra}{\la \cO(x_{1})\cdots\cO(x_{n})\ra \la\cO(y_{1})\cdots\cO(y_{k})\ra} = 1+\frac{\la W_{\mathrm{adj}}[C] W_{\mathrm{adj}}[D]\ra^{\mathrm{conn}}}{\la W_{\mathrm{adj}}[C]\ra \la W_{\mathrm{adj}}[D]\ra} \\ \xrightarrow{\mathrm{planar}\:\mathrm{limit}} 1+2\frac{\la W[C]\; W[D]\ra^{\mathrm{conn}}}{\la W[C]\ra \la W[D]\ra}. \end{multline} \end{propn} \subsubsection{Recursion relations} The scattering amplitude / Wilson loop duality follows by showing that both objects obey the same recursive relations: the BCFW recursions (see Section \ref{SAP} for a quick reminder). Proving this is particularly natural in twistor space, where the BCFW recursion is implemented by performing a one-parameter shift of one of the nodes. Without loss of generality, this takes the form: \be{BCFW1} \widehat{Z_{n}}(t)=Z_{n}+tZ_{n-1}, \qquad t\in\C, \ee which shifts the $n^{\mathrm{th}}$ node along the line $(n-1, n)\cong\P^{1}\subset\PT$, as illustrated in Figure \ref{BCF1}. It was shown in \cite{Bullimore:2011ni} that the variation of the expectation value of this deformed Wilson loop is supported only on self-intersections or intersections with Lagrangian insertions, precisely reproducing the all-loop BCFW recursion of \cite{ArkaniHamed:2010kv}. This is a holomorphic analogue of the loop equations \cite{Makeenko:1979pb} and skein relations which arise in the study of real knot theory \cite{Witten:1988hf}. Beyond establishing the duality with amplitudes at the level of the loop integrand, these recursion relations also provide a means for actually computing the Wilson loop integrand. 
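Before proceeding, note the geometry of the shift \eqref{BCFW1}: since $\widehat{Z_{n}}(t)=Z_{n}+tZ_{n-1}$ lies on the line $(n-1,n)$ for every value of $t$, the component $(n-1,\hat{n}(t))$ of the deformed curve is the \emph{same} line as $(n-1,n)$; only the position of the node moves. The component which genuinely varies is
\begin{equation*}
(\hat{n}(t),1)=\left\{\alpha\left(Z_{n}+tZ_{n-1}\right)+\beta Z_{1}\right\}\subset (n-1,n,1),
\end{equation*}
which rotates within the plane $(n-1,n,1)$ spanned by $Z_{n-1}$, $Z_{n}$ and $Z_{1}$ as $t$ varies. This is what allows the deformed curve to develop intersections with itself and with lines in general position.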
We will see that a BCFW-like recursion relation for the correlator \begin{equation*} \la W^{n}[C]\cO(y)\ra \end{equation*} can also be derived, enabling the computation of the integrand for such correlators. Once again, the key to doing this will be studying the problem in twistor space. Recall that the null polygonal Wilson loop in twistor space is given by: \begin{multline}\label{WL*} W[C]\equiv W[1,2,\ldots, n]=\tr\: \mathrm{Hol}_{Z_n}[C]= \\ \tr\left[ U(Z_{n},Z_{n-1})U(Z_{n-1},Z_{n-2})\cdots U(Z_{1},Z_{n})\right], \end{multline} where $\mathrm{Hol}_{Z}[C]$ denotes the holonomy about $C$ at base point $Z$ and the $Z_{i}$ are the nodes of the resulting curve in twistor space. The BCFW shift \eqref{BCFW1} results in a one-parameter family of nodal curves in twistor space and their corresponding family of Wilson loops: \begin{equation*} C(t)=(1,2)\cup (2,3)\cup\cdots \cup (n-1, \hat{n}(t))\cup (\hat{n}(t), 1), \qquad W[C(t)]=W[1,\ldots, n-1, \hat{n}(t)], \end{equation*} where we have adopted the shorthand $\hat{n}(t)$ for $\widehat{Z_{n}}(t)$. \begin{figure} \centering \includegraphics[width=3.5 in, height=2.5 in]{BCF1.pdf}\caption{\textit{The BCFW-like deformation at the level of the twistor Wilson loop.}}\label{BCF1} \end{figure} Formally, we can think of $t\in \C$ as a coordinate on the moduli space of maps from $\Sigma\cong\P^{1}$ into $\PT$ with two fixed points (the nodes at $Z_{n}$ and $Z_{n-1}$). We will be interested in the variation of our correlator with respect to \emph{anti-holomorphic} dependence on this coordinate; this requires a $\dbar$-operator on the moduli space $\overline{M}_{0,2}(\P^{3|4},1)$.\footnote{These moduli spaces are, strictly speaking, algebraic stacks.
However, for the case of a genus zero Riemann surface and target space $\PT$, they are unobstructed and have a versal family which can be treated as an algebraic space \cite{Adamo:2012cd}.} Formally, this can be constructed by considering the diagram: \begin{equation*} \xymatrix{ \overline{M}_{0,3}(\P^{3|4},1) \ar[d]^{\rho} \ar[rr]^{\Phi} & & \PT \\ \overline{M}_{0,2}(\P^{3|4},1) & & } \end{equation*} where $\rho$ is the forgetful functor which throws away an extra marked point, and $\Phi$ is the `universal instanton' \cite{Adamo:2012cd}. Since the universal curve is just $\overline{M}_{0,3}(\P^{3|4},1)\cong \overline{M}_{0,2}(\P^{3|4},1)\times\Sigma$, this map simply takes $f\in \overline{M}_{0,2}(\P^{3|4},1)$ and $z\in\Sigma$ to $f(z)\in\PT$. Hence, we can take the complex structure on $\PT$ given by $\dbar$, and define $\bar{\delta}$ on $\overline{M}_{0,2}(\P^{3|4},1)$ both formally and heuristically: \begin{equation*} \bar{\delta}=\rho_{*}\Phi^{*}\dbar, \qquad \bar{\delta}=\d\bar{t}\frac{\partial}{\partial \bar{t}}. \end{equation*} In \cite{Bullimore:2011ni}, the twistor action and Wilson loop were used to study $\bar{\delta}\la W[C(t)]\ra$; we will use the same methodology to study the correlator between a Wilson loop and single local operator. The key relation is the following: \begin{lemma}[Bullimore \& Skinner \cite{Bullimore:2011ni}] The infinitesimal variation of $W[C]$ with respect to $\bar{t}$ is given by: \be{WLvar} \bar{\delta}\;W[C]=-\int_{C} \omega(Z)\wedge\d \bar{Z}^{\bar{\alpha}}\wedge\bar{\delta}\bar{Z}^{\bar{\beta}}\; \tr\left( F^{0,2}_{\bar{\alpha}\bar{\beta}}\mathrm{Hol}_{Z}[C]\right), \ee where $\omega(Z)$ is a meromorphic 1-form on $C$ with simple poles at each node $Z=Z_{i}$, and $F^{0,2}=\dbar\cA+\cA\wedge\cA$ is the anti-holomorphic curvature of the gauge connection on twistor space. 
\end{lemma} By inserting this into the path integral for $\bar{\delta}\la W[C(t)]\ra$ with respect to the twistor action \eqref{TwistorAction}, a holomorphic analogue of the loop equations \cite{Makeenko:1979pb} was found which leads to the all-loop BCFW recursion relations of \cite{ArkaniHamed:2010kv}. Since scattering amplitudes are also determined by BCFW recursion, this proves that the two observables are actually the same. In our case, we want to consider $\bar{\delta}\la W[C(t)]\cO(y)\ra$ for any $\U(N)$ gauge group. Since the BCFW-like deformation \eqref{BCFW1} only acts on the Wilson loop, we can use \eqref{WLvar} to consider: \be{BCFW2} \bar{\delta}\la W[C(t)]\cO(y)\ra =-\frac{1}{N}\int [\mathcal{D}\cA]\left[ \int_{C(t)} \omega(Z)\wedge\d \bar{Z}^{\bar{\alpha}}\wedge\bar{\delta}\bar{Z}^{\bar{\beta}} \tr\left( F^{(0,2)}_{\bar{\alpha}\bar{\beta}}\mathrm{Hol}_{Z}[C]\right) \cO(y)\right] e^{-S[\cA]}, \ee where $\cO(y)$ is our 1/2-BPS operator \eqref{nabelian}, $S[\cA]$ is the twistor action \eqref{TwistorAction}, and we have included a normalization factor of $1/N$. As noted earlier, the twistor action can be decomposed into a holomorphic Chern-Simons portion accounting for the SD sector of the theory (or tree-level for the Wilson loop) and a non-local contribution encoding the ASD interactions (or loop-level for the Wilson loop). These are given by $S_{1}$ \eqref{TASD} and $S_{2}$ \eqref{TAInt} respectively. \subsubsection*{\textit{Holomorphic linking contribution}} We begin by considering the classical piece of \eqref{BCFW2} corresponding to $S_{1}[\cA]$. For an abelian gauge group, this will produce contributions corresponding to holomorphic linking between the irreducible components of $C$ \cite{Atiyah:1981, Penrose:1988, Khesin:2000ng, Frenkel:2005qk}. For a general gauge group, this provides a formal path-integral definition for holomorphic linking.
Now, note that \begin{equation*} \frac{\delta S_{1}[\cA]}{\delta \cA(Z)}= N\;F^{(0,2)}(Z), \end{equation*} so that \begin{equation*} \bar{\delta}\la W[C(t)]\cO(y)\ra^{\mathrm{tree}}=\frac{1}{N^2}\int [\mathcal{D}\cA]\left[ \int_{C(t)}\omega(Z)\wedge\tr\left(\mathrm{Hol}_{Z}[C(t)]\frac{\delta}{\delta\cA(Z)}e^{-S_{1}[\cA]}\right)\cO(y)\right]. \end{equation*} Integrating by parts within the path integral moves the variational derivative onto the holonomy, yielding \cite{Bullimore:2011ni}: \begin{multline*} \tr\left(\frac{\delta}{\delta\cA(Z)}\mathrm{Hol}_{Z}[C(t)]\right)= \\ \sum_{j=1}^{n}\int_{C_{j}(t)}\omega_{j-1,j}(Z')\wedge\bar{\delta}^{3|4}(Z,Z')\tr\left[U(Z,Z_{n})\cdots U(Z_{j},Z')\right]\tr\left[U(Z',Z_{j-1})\cdots U(Z_{1},Z)\right], \end{multline*} where $C_{j}(t)=(j-1,j)$ is the $j^{\mathrm{th}}$ component of the nodal curve $C(t)$, and $\omega_{j-1,j}(Z)$ is the meromorphic 1-form on $C_{j}(t)$ with poles at $Z_{j-1}$, $Z_{j}$. \begin{figure} \centering \includegraphics[width=3.5 in, height=1.5 in]{BCF2.pdf}\caption{\textit{A holomorphic linking contribution when the curve $C(t)$ intersects itself.}}\label{BCF2} \end{figure} The $\bar{\delta}^{3|4}(Z,Z')$ only has support at $t\in\C$ where the deformed Wilson loop intersects itself, and the trace structure results in a factorization of the holonomy into two Wilson loops around the nodal curves $C'(t)$ and $C''(t)$, obtained by ungluing $C(t)$ at the intersection point $Z=Z'$. This configuration is illustrated in Figure \ref{BCF2}. This leaves us with: \be{BCFWt} \bar{\delta}\la W[C(t)]\cO(y)\ra^{\mathrm{tree}}=- \int\limits_{C(t)\times C(t)} \omega(Z)\wedge \omega(Z')\wedge\bar{\delta}^{3|4}(Z,Z') \left\la W[C'(t)]\;W[C''(t)]\;\cO(y)\right\ra, \ee where we have absorbed a normalization factor of $1/N$ into each Wilson loop. This is the analogue of the holomorphic linking term of the loop equations derived in \cite{Bullimore:2011ni}, but now with a local operator in general position. 
Note that in the planar limit of the gauge theory, the correlator can be re-written as \begin{equation*} \left\la W[C'(t)]\;W[C''(t)]\;\cO(y)\right\ra = \la W[C'(t)]\cO(y)\ra\; \la W[C''(t)]\ra + \la W[C'(t)]\ra\; \la W[C''(t)]\cO(y)\ra. \end{equation*} \subsubsection*{\textit{Contributions from MHV vertices and local operator}} We still have to account for the contributions to $\bar{\delta}\la W[C(t)]\cO(y)\ra$ from $S_{2}[\cA]$ and the local operator $\cO(y)$. Although the genericity assumption tells us that the nodal curve $C(t=0)$ never intersects any line $X$ corresponding to a MHV vertex or the line $Y$ corresponding to the local operator, as $t$ varies it sweeps out a plane $(n-1, n, 1)$ which \emph{all} lines in general position will intersect. The contribution from MHV vertices is given by \be{PA1} -\frac{\lambda}{N} \int[\mathcal{D}\cA]\int_{\Gamma} \d^{4|8}X\;\left[\int_{C(t)}\omega(Z)\wedge\tr\left( \delta\log\det(\dbar+\cA)|_{X}\mathrm{Hol}_{Z}[C(t)]\right)\cO(y)\right]e^{-S[\cA]}, \ee where the factor of $\lambda$ comes from $S_{2}[\cA]$, $1/N$ is for normalization, and $\Gamma=\M_{\R}\subset\M$ the real contour. The variation of the logdet can be found by standard methods (c.f., \cite{Mason:2001vj}); if we assume that $X$ is given by the span of $Z_{A}$ and $Z_{B}$, then \begin{equation*} \delta\log\det(\dbar+\cA)|_{X}=\int\limits_{X\times S^{1}\times S^{1}} \omega_{A,B}(Z')\wedge \frac{\D\lambda_{A} \wedge\D\lambda_{B}}{\la AB\ra^{2}}\tr\left(U(Z_{B},Z')\delta\cA(Z')\right), \end{equation*} where $\omega_{A,B}(Z')$ is the meromorphic differential on $X$ with poles at $Z_{A}$ and $Z_{B}$, and $\lambda_{A},\lambda_{B}$ are the homogeneous coordinates of these points on $X$. The integral over $S^{1}\times S^{1}$ is a contour integral surrounding the poles at $Z_{A}=Z_{B}=Z'$. 
The integral over the positions of $Z_{A}$ and $Z_{B}$ on $X$ can be combined with the measure $\d^{4|8}X$ to give a conformally invariant measure: \begin{equation*} \d^{4|8}X\wedge\frac{\D\lambda_{A} \wedge\D\lambda_{B}}{\la AB\ra^{2}}=\D^{3|4}Z_{A}\wedge\D^{3|4}Z_{B}. \end{equation*} Hence, the integrand of our path integral expression \eqref{PA1} is: \begin{multline}\label{PA2} -\frac{\lambda}{N}\oint\limits_{\Gamma\times S^{1}\times S^{1}}\D^{3|4}Z_{A}\wedge\D^{3|4}Z_{B}\:\int\limits_{C(t)\times X}\omega(Z)\wedge\omega_{A,B}(Z')\wedge\bar{\delta}^{3|4}(Z,Z') \\ \times \tr\left(U(Z_{B},Z')\mathrm{Hol}_{Z}[C(t)]\right)\cO(y), \end{multline} with the $\bar{\delta}^{3|4}(Z,Z')$ ensuring that this is supported only when $C(t)$ intersects $X$ at $Z=Z'$. As shown in \cite{Bullimore:2011ni}, this configuration can naturally be interpreted as a forward limit where the MHV vertex at $x\in\M$ becomes null separated from the point corresponding to the line $(\hat{n}(t),1)\subset\PT$. More formally, we can replace $C(t)$ with a new curve $\widetilde{C(t)}$ which has an additional component such that: \begin{equation*} \widetilde{C(t)}\cap X=\{Z',Z_{B}\}, \qquad \lim_{Z_{B}\rightarrow Z'}\widetilde{C(t)}\rightarrow C(t), \end{equation*} \begin{equation*} \widetilde{C(t)}\cup X=(1,2)\cup\cdots\cup (n-1,\hat{n}(t))\cup (Z',B)\cup (B,1). \end{equation*} This forward limit curve is pictured in Figure \ref{BCF3}. 
\begin{figure} \centering \includegraphics[width=4 in, height=1.5 in]{BCF3.pdf}\caption{\textit{An intersection of the curve $C(t)$ with a MHV vertex $X$ (left) can be expressed as a forward limit of a new curve $\widetilde{C(t)}\cup X$ (right).}}\label{BCF3} \end{figure} Now the combined contours and delta-functions in \eqref{PA2} allow us to replace \be{BCFW3} \frac{1}{N}\tr\left(U(Z_{B},Z')\mathrm{Hol}_{Z}[C(t)]\right)=W[\widetilde{C(t)}\cup X], \ee and the contribution to $\bar{\delta}\la W[C(t)]\cO(y)\ra$ from the MHV vertices becomes: \be{BCFWmhv} -\lambda \oint\limits_{\Gamma\times S^{1}\times S^{1}}\D^{3|4}Z_{A}\wedge \D^{3|4}Z_{B} \left[ \int\limits_{C(t)\times X} \omega(Z)\wedge\omega_{A,B}(Z')\wedge\bar{\delta}^{3|4}(Z,Z')\left\la W[\widetilde{C(t)}\cup X]\;\cO(y)\right\ra \right]. \ee Once again, this is the natural analogue of the second term in the holomorphic loop equations of \cite{Bullimore:2011ni}. Finally, we must account for when $C(t)$ intersects $Y$, the line corresponding to the operator $\cO(y)$. Clearly, the geometry of this configuration is identical to the intersection with a MHV vertex; the difference is that there are more fields and a more complicated R-symmetry structure on $Y$ due to the 1/2-BPS operator. Nevertheless, we will see that this contribution can be treated similarly to the MHV vertices. For simplicity, take $\cO(y)=\cO_{abab}$, and suppose that $Y$ is given by the span of $Z_{C}$ and $Z_{D}$. 
As before, we start with $\delta\log\det(\dbar+\cA)|_{Y}$, but now only integrated over a \emph{fermionic} contour in $\M$: \begin{multline*} - \oint\limits_{\tilde{\Gamma}\times S^{1}\times S^{1}}\d^{0|4}\theta^{abab} \wedge\frac{\D\lambda_{C}\wedge\D\lambda_{D}}{\la CD\ra^{2}}\\ \times\left[\int\limits_{C(t)\times Y}\omega(Z)\wedge\omega_{C,D}(Z')\wedge\bar{\delta}^{3|4}(Z,Z')\;\left\la W[\widehat{C(t)}\cup Y]\right\ra\right], \end{multline*} where $\d^{0|4}\theta^{abab}=\d\theta^{aA}\d\theta^{b}_{A}\d\theta^{aB}\d\theta^{b}_{B}$, $\tilde{\Gamma}$ is the corresponding fermionic contour (which, as usual, can be evaluated algebraically), and $\widehat{C(t)}$ is the forward limit curve associated with $Y$. The R-symmetry of this measure extracts fermionic derivatives of the field $\cA$ from the holomorphic frames in $W[\widehat{C(t)}\cup Y]$, but supersymmetry dictates that they must be inserted at two different places on the line $Y$. The remainder of the contour $\tilde{\Gamma}$ integrates these insertion points over $Y$. Explicitly, in $W[\widehat{C(t)}\cup Y]$ we can use the properties of the holomorphic frame to write: \begin{equation*} U(Z_{D},Z')=U(Z_{D},Z_{C}) U(Z_{C},Z'), \end{equation*} and $Z'=Z_{D}$ on the support of the $S^{1}\times S^{1}$ contour and $\bar{\delta}^{3|4}(Z,Z')$. The integral over $\d^{0|4}\theta^{abab}$ then pulls two derivatives from each of these frames on $Y$, and what is left is integrated over $Y$ to give: \begin{equation*} \int_{Y\times Y} \D\lambda_{C}\wedge\D\lambda_{D}\; U(Z_{D},Z_{C})\frac{\partial^{2}\cA(Z_{C})}{\partial\chi^{a}\partial\chi^{b}} U(Z_{C},Z_{D})\frac{\partial^{2}\cA(Z_{D})}{\partial\chi^{a}\partial\chi^{b}}. \end{equation*} Of course, this is our 1/2-BPS operator $\cO(y)=\cO_{abab}(y)$, and is inserted in the color trace running over the remaining holomorphic frames of the Wilson loop at the point $Z'$. 
But this is precisely what we expect for the configuration where the deformed Wilson loop $C(t)$ intersects $Y$ at the point $Z=Z'$! In other words, the 1/2-BPS operator is also captured by the variation of $S_{2}[\cA]$, but integrated over a partial fermionic contour corresponding to its R-symmetry structure.\footnote{Recall that $\log\det(\dbar+\cA)|_{X}$ is not locally gauge invariant; it must be integrated over some contour in order to kill the exponential anomalies associated with conformal gravity \cite{Boels:2006ir}. The algebraic integral over $\tilde{\Gamma}$ gives the gauge invariance of the 1/2-BPS operators, as desired.} Thus we obtain a third contribution to $\bar{\delta}\la W[C(t)]\cO(y)\ra$ of the form \be{BCFWop} -\oint\limits_{\tilde{\Gamma}\times S^{1}\times S^{1}}\d \mu^{abab} \left[\int\limits_{C(t)\times Y}\omega(Z)\wedge\omega_{C,D}(Z')\wedge\bar{\delta}^{3|4}(Z,Z') \left\la W[\widehat{C(t)}\cup Y]\right\ra \right], \ee where \begin{equation}\label{fmeasure} \d\mu^{abab}=\d^{0|4}\theta^{abab} \wedge\frac{\D\lambda_{C}\wedge\D\lambda_{D}}{\la CD\ra^{2}}. \end{equation} \subsubsection*{\textit{The recursion relation}} The all-loop BCFW-like recursion for our mixed correlator is given by combining \eqref{BCFWt}, \eqref{BCFWmhv}, \eqref{BCFWop} and then integrating over $t$: \be{recur1} -\int_{\C}\frac{\d t}{t}\wedge\bar{\delta}\la W[C(t)]\cO(y)\ra =\int_{\C} \frac{\d t}{t}\wedge \left( \Lambda^{\mathrm{tree}}+\Lambda^{\mathrm{MHV}}+\Lambda^{\mathrm{Op}}\right), \ee where the $\Lambda$s are given by the deformation contributions we just calculated. Integration by parts immediately gives \begin{equation*} -\int_{\C}\frac{\d t}{t}\wedge\bar{\delta}\la W[C(t)]\cO(y)\ra = \left\la W[1,2,\ldots , n]\cO(y)\right\ra-\left\la W[1,2,\ldots ,n-1]\cO(y)\right\ra, \end{equation*} which is just the difference in the correlators at $t=0$ and $t=\infty$. 
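Schematically, the boundary terms arise as follows: writing $F(t)=\la W[C(t)]\cO(y)\ra$, so that $\bar{\delta}\la W[C(t)]\cO(y)\ra=\dbar F(t)$, the identity $\dbar(1/t)=2\pi i\,\bar{\delta}(t)$ together with Stokes' theorem on $\C$ implies that the left-hand side receives contributions only from the simple pole at $t=0$ and from the boundary circle as $|t|\rightarrow\infty$, yielding $F(0)-F(\infty)$ (with the factors of $2\pi i$ absorbed into our normalizations). At $t=0$ the curve is the original $n$-gon, while as $t\rightarrow\infty$ the shifted twistor $\widehat{Z_{n}}(t)=Z_{n}+tZ_{n-1}$ coincides projectively with $Z_{n-1}$, so the cusp at $n$ disappears and $C(t)$ degenerates to the $(n-1)$-gon.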
Let us consider the contribution $\Lambda^{\mathrm{tree}}$ explicitly; the other two contributions can be treated in an identical manner. Using \eqref{BCFWt}, we have \begin{multline*} \int_{\C}\frac{\d t}{t}\wedge \Lambda^{\mathrm{tree}}= \int\limits_{\C\times C(t)\times C(t)} \frac{\d t}{t}\wedge\omega(Z)\wedge\omega(Z')\wedge\bar{\delta}^{3|4}(Z,Z') \\ \times \left\la W[C'(t)]W[C''(t)]\cO(y)\right\ra, \end{multline*} where $\bar{\delta}^{3|4}(Z,Z')$ has support only when the curve $C(t)$ intersects itself. For every $j=3,\ldots n-1$ there will be some value of $t=t_{j}$ for which the line $(\hat{n}(t_{j}),1)$ intersects $(j-1,j)$. If we label those intersection points as $I_{j}$, then clearly we have \cite{Bullimore:2011ni} \begin{eqnarray*} C'(t_{j})=(1,2)\cup(2,3)\cup\cdots\cup (j-1, I_{j}) \\ C''(t_{j})=(I_{j},j)\cup(j,j+1)\cup\cdots\cup(\hat{n}(t_{j}),1). \end{eqnarray*} For each such contribution at $t_{j}$ we can parametrize the positions of $Z$ and $Z'$ by \begin{equation*} Z=\widehat{Z_{n}}(t)+sZ_{1}=Z_{n}+tZ_{n-1}+sZ_{1}, \qquad Z'=Z_{j-1}+r Z_{j}, \end{equation*} so the meromorphic differentials become \begin{equation*} \omega(Z)=\frac{\d s}{s}, \qquad \omega(Z')=\frac{\d r}{r}. \end{equation*} Thus, we have \begin{multline} \int_{\C}\frac{\d t}{t}\wedge \Lambda^{\mathrm{tree}}= \sum_{j=3}^{n-1}\int_{\C^{3}}\frac{\d t}{t}\frac{\d s}{s}\frac{\d r}{r}\wedge\bar{\delta}^{3|4}(Z_{n}+tZ_{n-1}+sZ_{1},Z_{j-1}+r Z_{j}) \\ \times \left\la W[1,\ldots, j-1, I_{j}]W[I_{j},j,\ldots, n-1,\hat{n}(t_{j})]\cO(y)\right\ra \\ = \sum_{j=3}^{n-1}[n-1,n,1,j-1,j]\left\la W[1,\ldots, j-1, I_{j}]W[I_{j},j,\ldots, n-1,\hat{n}_{j}]\cO(y)\right\ra, \end{multline} where $[A,B,C,D,E]$ is the standard R-invariant and we have abbreviated $\hat{n}(t_{j})=\hat{n}_{j}$. 
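For reference, the R-invariant can be written explicitly as (up to overall sign conventions, c.f., \cite{Mason:2010yk}) \begin{equation*} [A,B,C,D,E]=\frac{\delta^{0|4}\left(\chi_{A}\la BCDE\ra+\chi_{B}\la CDEA\ra+\chi_{C}\la DEAB\ra+\chi_{D}\la EABC\ra+\chi_{E}\la ABCD\ra\right)}{\la ABCD\ra\la BCDE\ra\la CDEA\ra\la DEAB\ra\la EABC\ra}, \end{equation*} where $\la ABCD\ra=\epsilon_{\alpha\beta\gamma\delta}Z_{A}^{\alpha}Z_{B}^{\beta}Z_{C}^{\gamma}Z_{D}^{\delta}$ denotes the skew contraction of four bosonic twistors. Here it emerges directly from performing the $t$, $s$, $r$ integrals against the bosonic and fermionic delta functions in $\bar{\delta}^{3|4}$.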
We can perform similar parametrizations for $\Lambda^{\mathrm{MHV}}$ and $\Lambda^{\mathrm{Op}}$, leading to the full all-loop recursion relation: \begin{propn}\label{recurpropn} Let $W[C]=W[1,\ldots,n]$ be the Wilson loop in the fundamental representation around the n-cusp null polygon $C$, $\cO(y)$ be a local operator in general position, and $Y=\mathrm{span}\{Z_{C},Z_{D}\}$ be the $\P^{1}\subset\PT$ corresponding to this position. Then \begin{multline}\label{recur2} \left\la W[1,\ldots, n]\cO(y)\right\ra = \left\la W[1,\ldots, n-1]\cO(y)\right\ra \\ + \sum_{j=3}^{n-1}[n-1,n,1,j-1,j]\left\la W[1,\ldots, j-1, I_{j}]W[I_{j},j,\ldots, n-1,\hat{n}_{j}]\cO(y)\right\ra \\ +\lambda \oint\limits_{\Gamma\times S^{1}\times S^{1}}\D^{3|4}Z_{A}\wedge\D^{3|4}Z_{B} [n-1,n,1,A,B]\left\la W[1,\ldots, n-1,\hat{n}_{AB},Z',B]\cO(y)\right\ra \\ + \oint\limits_{\tilde{\Gamma}\times S^{1}\times S^{1}} \d\mu^{abab}[n-1,n,1,C,D]\left\la W[1,\ldots, n-1, \hat{n}_{CD},Z',D]\right\ra, \end{multline} where the measure $\d\mu^{abab}$ is given by \eqref{fmeasure}; the contours $\Gamma$ and $\tilde{\Gamma}$ are over $(4|8)$- and $(0|4)$-dimensional real slices of the space of lines in $\PT$ respectively; and the contours $S^{1}\times S^{1}$ ensure $Z_{A,B}, Z_{C,D}\rightarrow Z'$ in their respective integrals.\footnote{As mentioned earlier, recall that in the planar limit $\la W[1,\ldots, j-1, I_{j}]W[I_{j},j,\ldots, n-1,\hat{n}_{j}]\cO(y)\ra = \la W[1,\ldots, j-1, I_{j}]\ra\; \la W[I_{j},j,\ldots, n-1,\hat{n}_{j}]\cO(y)\ra +\la W[1,\ldots, j-1, I_{j}]\cO(y)\ra\; \la W[I_{j},j,\ldots, n-1,\hat{n}_{j}]\ra$, so this indeed constitutes a recursion relation.} \end{propn} \subsubsection*{\textit{Some loop integrand computations}} In the study of scattering amplitudes, the primary utility of the all-loop BCFW recursion relations has been their ability to enable simple computations of loop integrands (e.g., \cite{ArkaniHamed:2010kv}). 
The recursion relation we have just defined holds at the level of the loop integrand of the correlator $\la W[1,\ldots, n]\cO(y)\ra$; this will be a rational function of the nodes $Z_{i}$, the line $Y$ as indexed by $Z_{C},Z_{D}$, and the loops as indexed by an internal region coordinate $X=\mathrm{span}\{Z_{A},Z_{B}\}$. In computing the $l$-loop \emph{integral} of the correlator, the internal regions must be integrated over using \begin{equation*} \d^{4|8}X=\frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\mathrm{vol}\;\GL(2,\C)}, \end{equation*} in the usual fashion. Denoting the $l$-loop integrand of the correlator by $G^{l}_{n}$, this means that \be{cintegrand1} G^{l}_{n}=G^{l}_{n}\left(Z_{1},\ldots, Z_{n},C,D; (A,B)_{1},\ldots, (A,B)_{l}\right), \ee with an implicit symmetrization over the loop variables. This integrand can be further expanded in the fermionic twistor variables: \be{cintegrand2} G^{l}_{n}=G^{l}_{n,0}+G^{l}_{n,1}+G^{l}_{n,2}+\cdots +G^{l}_{n,n-4}, \ee where $G^{l}_{n,k}$ is of order $4k$ in $\chi$, and can be thought of as the analogue of a N$^k$MHV integrand for our mixed correlators. In this language, we can re-write our recursion relation in a slightly more appealing fashion: \begin{multline}\label{recur3} G^{l}_{n,k}=G^{l}_{n-1,k} \\ +\sum_{n_{1},k_{1},l_{1},j} [n-1,n,1,j-1,j] W^{l_{1}}_{n_{1},k_{1}}(1,\ldots, j-1,I_{j})\;G^{l_{2}}_{n_{2},k_{2}}(I_{j},j,\ldots, n-1,\hat{n}_{j}) \\ +\lambda \int \frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\mathrm{vol}\;\GL(2,\C)} [n-1,n,1,A,B]\;G^{l-1}_{n+2,k+1}(1,\ldots, n-1,\hat{n}_{AB},\hat{A},B) \\ +\int \d^{0|4}\theta_{CD}\; [n-1,n,1,C,D]\;W^{l}_{n+2,k}(1,\ldots, n-1,\hat{n}_{CD},\hat{C},D), \end{multline} where $W^{l}_{n,k}$ is the usual $l$-loop integrand of the Wilson loop and \begin{eqnarray*} n_{1}+n_{2}=n+2, \qquad k_{1}+k_{2}=k-1, \qquad l_{1}+l_{2}=l, \\ \hat{A}=(A,B)\cap (n-1,n,1), \qquad \hat{C}=(C,D)\cap (n-1,n,1). 
\end{eqnarray*} Since the $l$-loop integrand for the Wilson loop is known \cite{Mason:2010yk}, this makes it possible for us to compute the integrands $G^{l}_{n,k}$ recursively. For instance, consider the tree-level analogue of the NMHV amplitude: $G^{0}_{n,1}$. If we perform an implicit summation over the possible BCFW-type shifts, then \eqref{recur3} gives: \begin{multline}\label{NMHVtree} G^{0}_{n,1}=\sum_{i<j}[i-1,i,1,j-1,j] \mathbb{I}\times \mathbb{I} \\ +\int \d^{0|4}\theta_{CD}\;\sum_{i}[i-1,i,1,C,D]\left(\sum_{j<k} [j-1,j,1,k-1,k]\right), \end{multline} where the range of the second summation in the second term is over $1,\ldots,n-1,\hat{n}_{CD},\hat{C},D$. Note that as predicted, $G^{0}_{n,1}=G^{0}_{n,1}(Z_{1},\ldots,Z_{n},C,D)$. Usually we consider mixed correlators which are normalized by the expectation value of the Wilson loop $\la W[1,\ldots, n]\ra$; including this in the present calculation has the effect of eliminating the first term in \eqref{NMHVtree}, as it corresponds to the NMHV contribution from the Wilson loop itself. If we wanted to compute the analogue of a 1-loop MHV integrand for our correlator, a quick inspection of \eqref{recur3} shows that we know all the required ingredients: \begin{multline*} G^{1}_{n,0}=\lambda \int \frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\mathrm{vol}\;\GL(2,\C)} \sum_{i}[i-1,i,1,A,B]\;G^{0}_{n+2,1}(1,\ldots, n-1,\hat{n}_{AB},\hat{A},B)\\ + \int \d^{0|4}\theta_{CD}\sum_{i}[i-1,i,1,C,D]\;W^{1}_{n+2,0}(1,\ldots, n-1,\hat{n}_{CD},\hat{C},D). \end{multline*} Using \eqref{NMHVtree} as well as the known contributions from the Wilson loop \cite{ArkaniHamed:2010kv, Mason:2010yk} gives \begin{multline}\label{MHVloop} G^{1}_{n,0}(Z_{1},\ldots,Z_{n},C,D;\;(A,B)_{1}) = \\ \lambda \int \frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\mathrm{vol}\;\GL(2,\C)} \sum_{i}[i-1,i,1,A,B]\left(\sum_{j<k}[j-1,j,1,k-1,k]\right. \\ \left. 
+\int \d^{0|4}\theta_{CD}\sum_{j} [j-1,j,1,C,D]\sum_{k<l}[k-1,k,1,l-1,l]\right) \\ +\lambda \int \frac{\d^{4|4}Z_{A}\wedge\d^{4|4}Z_{B}}{\mathrm{vol}\;\GL(2,\C)}\d^{0|4}\theta_{CD}\sum_{i}[i-1,i,1,C,D]\sum_{j<k}[1,j-1,j,A,B'][1,k-1,k,A,B''], \end{multline} where the first sum in each term ranges from $i=1,\ldots, n$ and the remaining sums range over $1,\ldots,n-1,\hat{n}_{CD},\hat{C},D$. In the second line, the shifted twistors $B'$, $B''$ correspond to the intersections between lines and planes given by: \begin{equation*} B'=(A,B)\cap(1,k-1,k), \qquad B''=(A,B)\cap(1,j-1,j). \end{equation*} \medskip Of course, many of the terms here will actually vanish upon performing the fermionic integrals, and more will be subtracted from the first term if the quotient by the pure Wilson loop integrand is included. It would be interesting to compare the results for the integrand generated by our recursion relation against other computations, such as the $\bar{Q}$-anomaly techniques of \cite{Bullimore:2011kg}. It is also worth mentioning that mixed Wilson loop / local operator correlators can be studied from a very different perspective when the Wilson loop under consideration is (topologically) circular, rather than a null polygon.\footnote{Although the Wilson loops considered in this setting only couple to three of the scalars of $\cN=4$ SYM \cite{Drukker:2007dw}, rather than the full superconnection $\CA$ as we considered here.} In particular, if the Wilson loop is defined on an $S^{2}\subset\M$, then the configuration with an arbitrary number of scalar chiral primary operators also inserted on the sphere is $1/8$-BPS, and the computation can be localized to two-dimensional Yang-Mills theory on the sphere \cite{Pestun:2009nn}. This in turn allows one to compute the correlator for all values of the coupling via a matrix model calculation (c.f., \cite{Giombi:2012ep} and the references therein). 
It would be interesting to know if twistor theory has anything to add to this perspective, since it entails non-null data and non-perturbative results. Rather than pursue these issues further, we will now turn to the study of gravity, and attempt to apply the methods we have used in gauge theory to that new setting. \section{Twistor Actions, Conformal and Einstein Gravity} \label{Chapter5} In the previous sections, we saw that by studying gauge theory on twistor space, we were able to learn many interesting things about the physical theory. In particular, efficient calculational mechanisms like the MHV formalism were manifested explicitly on twistor space, and computations involving gauge invariant local operators and null polygonal Wilson loops were also streamlined. It seems natural to ask if similar insights can be found in the study of gravity via twistor methods. As one might expect, the story is much more complicated in this setting. Dealing with generally curved space-times is a long-standing difficulty for twistor theory, referred to as the `googly problem' \cite{Penrose:1999cw}. Twistor-string theory provides a perturbative solution to the googly problem for gauge theory, and there was hope that it would yield a similar mechanism for the study of gravity. However, all twistor-string theories based on Witten's model contain conformal gravity degrees of freedom \cite{Berkovits:2004jj}; this theory has fourth-order equations of motion and is widely considered to be non-physical (see \cite{Fradkin:1985am} for a review). Indeed, any attempt to remove these degrees of freedom by a gauging appears to result in a free theory which misses an entire self-duality sector \cite{AbouZeid:2006wu, Nair:2007md}. The twistor-string of Skinner appears to correctly describe Einstein gravity (at least in the flat space limit) \cite{Skinner:2013xp}, but it is not clear in what way it connects to an action principle for gravity itself. 
However, a twistor action for conformal gravity has been known for some time \cite{Mason:2005zm}. There is also a mixture of similarities and differences between the basic structures of scattering amplitudes in gauge theory and gravity. Graviton amplitudes possess the same `MHV-like' structure as gauge theory amplitudes, in the sense that $n$-graviton amplitudes involving $n$ or $n-1$ gravitons of the same helicity vanish.\footnote{The notion of helicity is well-defined in general relativity provided one restricts to positive-frequency fields \cite{Ashtekar:1986}.} However, the functional form of gravity amplitudes is more complicated than that of their gauge theory counterparts. This is due to the underlying permutation invariance of gravity, as there is no color trace to enforce a cyclic ordering on external particles. Indeed, the analogue of a Parke-Taylor amplitude for gravity (Hodges' formula) was only recently discovered \cite{Hodges:2012ym}. As it turns out, our ability to treat conformal gravity twistorially is actually an advantage rather than an obstruction, at least at tree-level. This is due to an observation of Maldacena \cite{Maldacena:2011mk} that the tree-level S-matrices of these two theories are equivalent on a de Sitter background when Einstein scattering states are used. In this section, we study the conformal gravity twistor action, with a view to extracting the Einstein gravity subsector. After reviewing some basic facts about twistor theory for curved backgrounds, we give a brief summary of the Maldacena argument and apply it to amplitude generating functionals in Einstein and conformal gravity. We then perform the reduction to Einstein gravity at the level of the twistor action, extracting a twistorial expression for the MHV amplitude generating functional. This procedure also leads to a proposal for the twistor action of Einstein gravity itself. 
\subsection{Background} While our study of gauge theory took place on `flat' twistor space associated to Minkowski space-time, gravity requires twistor machinery adapted to curved backgrounds (possibly with cosmological constant). We begin with a brief review of the necessary background material for this `curved' twistor theory, including some basic facts about de Sitter space, the non-linear graviton construction, and local twistor connection. The reader need only consult the references for further details. \subsubsection*{\textit{de Sitter geometry}} The homogeneous Einstein geometries of Minkowski, de Sitter, and anti-de Sitter space are the simplest solutions to the field equations of general relativity: they are space-times with only scalar curvature (in the form of a cosmological constant), and are hence conformally flat (c.f., \cite{Hawking:1973}). In four dimensions, each has a conformal compactification which is topologically $S^{1}\times S^{3}/\Z_{2}$ and can be realized as a quadric in $\RP^{5}$. Although we will focus on de Sitter space (when the cosmological constant is positive) in much of what follows, many of the results in both this section and Section \ref{Chapter6} hold for \emph{anti-}de Sitter space as well. \begin{figure} \centering \includegraphics[width=2 in, height=1.7 in]{dS1.pdf}\caption{\textit{de Sitter space as the quadric} $Q\subset\RP^{5}$ \textit{and the identification of infinity.}}\label{dS1} \end{figure} Before conformal compactification, de Sitter space is topologically $\R\times S^{3}$, and can be realized as the pseudosphere in $\R^{1,4}$ with coordinates $(w, x^{\mu})$, $\mu=0,\ldots, 3$ via the embedding relation: \begin{equation*} \eta_{\mu\nu}x^{\mu}x^{\nu}-w^{2}=x^{2}-w^{2}=-\frac{3}{\Lambda}, \qquad \eta_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1), \end{equation*} where $\Lambda >0$ is the cosmological constant. 
Writing de Sitter space in this fashion makes manifest its isometry group $\SO(1,4)$, which is the Lorentz group inherited from the embedding space. We denote this space as $dS_{4}$. The aforementioned conformal compactification embeds $dS_{4}$ into $\RP^{5}$ with homogeneous coordinates $(t,w,x^{\mu})$ as the $t\neq 0$ portion of the quadric: \begin{equation*} 2Q\equiv t^{2}-w^{2}+x^{2}=0, \end{equation*} with scale-invariant metric \be{dSmetric1} \d s^{2}=\frac{3}{\Lambda}\frac{\d t^{2}-\d w^{2}+\eta_{\mu\nu}\d x^{\mu}\d x^{\nu}}{t^{2}}. \ee The intersection of $Q$ with the plane $t=0$ corresponds to the $S^{3}$ at infinity, and is the identification of the past ($\scri^{-}$) and future ($\scri^{+}$) infinities; see Figure \ref{dS1}. Note that if we work on the patch where $t=\sqrt{3/\Lambda}$, then we recover the description of de Sitter space as the pseudosphere in $\R^{1,4}$. Two particularly useful coordinate patches on de Sitter space are the affine and Poincar\'e patches. The \emph{affine} patch is $t+w=1$; after a proper re-scaling of the affine coordinates $x^{\mu}$ the metric for this patch becomes \be{dSmetric2} \d s^{2}=\frac{\eta_{\mu\nu}\d x^{\mu} \d x^{\nu}}{(1-\Lambda x^{2})^2}. \ee In a sense, working with this slicing of global de Sitter space is rather awkward: de Sitter infinity is represented by finite points in the affine space where $x^{2}=\Lambda^{-1}$, and \emph{vice versa}. Here, the null infinity of the affine space intersects the infinity of $dS_{4}$ in a $S^{2}$ at spatial infinity. The main advantage of working with this slicing is that it is well-behaved in the $\Lambda\rightarrow 0$ limit: in this case \eqref{dSmetric2} simply becomes the usual Minkowski metric (see Figure \ref{dS2}, (\emph{a}.)). The more conventional \emph{Poincar\'{e}} patch of de Sitter space, where $x^{0}+w=1$, has metric: \be{dSmetric3} \d s^{2}=\frac{3}{\Lambda}\frac{\d t^{2}-\delta_{ij}\d x^{i}\d x^{j}}{t^{2}}, \ee and $t=0$ is infinity minus a point. 
The light cone of this point divides global de Sitter into two halves ($t>0$ and $t<0$) corresponding to what a physical observer situated at $\scri^{\pm}$ could actually see. This slicing also manifests the three-dimensional rotation and translation symmetries of $dS_{4}$, but is certainly not well-behaved in the $\Lambda\rightarrow 0$ limit; see Figure \ref{dS2}, (\emph{b}.). \begin{figure} \centering \includegraphics[width=3.25 in, height=1.5 in]{dS2.pdf}\caption{\textit{De Sitter space on the affine patch} (\emph{a}.), \textit{and the Poincar\'{e} patch} (\emph{b}.)}\label{dS2} \end{figure} \subsubsection*{\textit{Non-linear graviton}} It is natural to ask if the twistor formalism used in Sections \ref{Chapter2}-\ref{Chapter4} extends to the study of curved space-times. The following result, known as the \emph{non-linear graviton} construction, establishes precisely how this can happen: \begin{thm}[Penrose \cite{Penrose:1976js}, Ward \cite{Ward:1980am}]\label{NLG} There is a one-to-one correspondence between: \emph{(a.)} self-dual space-times\footnote{A self-dual (SD) space-time is one whose anti-self-dual Weyl curvature and trace-free Ricci tensor vanish.} $M$, and \emph{(b.)} twistor spaces $\CPT$, a complex projective 3-manifold obtained as a complex deformation of $\PT$, containing a rational curve $X_{0}$ with normal bundle $\cN_{X_0}\cong\cO(1)\oplus \cO(1)$. There is a metric in this self-dual conformal class with scalar curvature $R=4\Lambda$ if and only if $\CPT$ is equipped with: \begin{itemize} \item a non-degenerate holomorphic contact structure specified by $\tau\in\Omega^{1,0}(\CPT, \cO(2))$, and \item a holomorphic 3-form $\D^{3}Z\in\Omega^{3,0}(\CPT,\cO(4))$ obeying $\tau\wedge \d\tau=\frac{\Lambda}{3}\D^{3}Z$. 
\end{itemize} \end{thm} Here, the line bundle $\cO(1)\rightarrow\CPT$ is defined to be the dual of the fourth-root of $\Omega^{3}(\CPT)\cong\cO(-4)$; this exists on a neighborhood of the rational curve $X_{0}$ by assumption on $\cN_{X_0}$. The non-projective curved twistor space $\mathscr{T}$ is also defined as the total space of $\cO(-1)\rightarrow\CPT$. The requirement that $\CPT$ arise as an (integrable) complex deformation of $\PT$ results in a four-parameter family of rational curves $\{X\}_{x\in\C^4}$ in a neighborhood of $X_0$, each with normal bundle $\cN_{X}\cong\cO(1)\oplus\cO(1)$. This is a consequence of Kodaira-Spencer theory. Thus, points $x\in M$ (for $M$ obeying the conditions of this theorem) correspond to rational, but not necessarily linearly embedded, curves $X\subset\CPT$. The self-dual conformal structure on $M$ corresponds to requiring that if two of these curves $X,Y$ intersect in $\CPT$, then the points $x,y\in M$ are null separated. In the Einstein case, the contact 1-form $\tau$ serves as a holomorphic measure on these curves, while $\D^{3}Z$ provides a holomorphic measure on twistor space itself, as our notation suggests. Furthermore, it is known that $\CPT$ is uniquely associated with $M$, in the sense that any two space-times which have the same twistor space will be isomorphic in a neighborhood of conformal infinity \cite{LeBrun:1982}. The other important tools of twistor theory on $\PT$--namely the Penrose transform and Ward correspondence--still hold for $\CPT$ as well \cite{Hitchin:1980hp}. As usual, $\CPT$ fits into the twistor double fibration: \begin{equation*} \xymatrix{ & \PS \ar[ld]_{p} \ar[rd]^{q} & \\ \CPT & & M } \end{equation*} Provided $M$ is not curved too severely, it follows that $\PS\cong M\times\P^{1}$, which can be charted with $(x^{\mu},\sigma_{A})$. 
With this assumption, the map $q: \PS\rightarrow M$ is the trivial projection, while $p:\PS\rightarrow\CPT$ is specified by the generalized incidence relations: \be{incidence} Z^{\alpha}:M\times\P^{1}\rightarrow\CPT, \qquad Z^{\alpha}=Z^{\alpha}(x^{\mu},\sigma_{A})=\left(\lambda_{A}(x,\sigma), \mu^{A'}(x,\sigma)\right). \ee For consistency with the case $M=\M$, we demand that $Z^{\alpha}$ be homogeneous of degree one in $\sigma$. In the Einstein case, when $\Lambda=0$ the contact structure $\tau$ becomes degenerate and we have a fibration $\CPT\rightarrow\P^{1}$; in this case `one-half' of the incidence relations become the identity map (i.e., $\lambda_{A}(x,\sigma)=\sigma_{A}$). According to theorem \ref{NLG}, $\CPT$ arises as a complex deformation of $\PT$, and $Z^{\alpha}$ must be holomorphic with respect to the deformed complex structure. In a coordinate-free language, this complex structure is specified by an endomorphism $J:T_{\CPT}\rightarrow T_{\CPT}$ which squares to $J^2=-1$. This induces a splitting of complexified tangent bundle into holomorphic and anti-holomorphic parts, and defines Dolbeault operators $\partial^{J}$, $\dbar^{J}$. Integrability of $J$ corresponds to the vanishing of its Nijenhuis tensor, $N_{J}\in\Omega^{0,2}(\CPT, T^{1,0}_{\CPT})$. So by theorem \ref{NLG}, the field equations for $M$ to be self-dual correspond to $N_{J}=0$, while the requirement that the map $Z^{\alpha}$ be holomorphic is $\dbar^{J}Z^{\alpha}=0$. It will often be convenient for us to work in coordinates, with a specific choice of background complex structure. Taking as our background the flat complex structure of $\PT$, we can denote the complex structure on $\CPT$ as a (small but finite) deformation: \begin{equation*} \dbar_{f}=\dbar +f= \d \bar{Z}^{\bar{\alpha}} \frac{\partial}{\partial \bar{Z}^{\bar{\alpha}}}+ f, \end{equation*} for $f\in\Omega^{0,1}(\PT, T^{1,0}_{\PT})$. 
The corresponding coordinate basis for $T^{0,1}_{\CPT}$ and $\Omega^{1,0}(\CPT)$ is then: \begin{eqnarray} T^{0,1}_{\CPT}=\mathrm{span}\left\{\frac{\partial}{\partial\bar{Z}^{\bar{\alpha}}}+f^{\alpha}_{\bar{\alpha}}\frac{\partial}{\partial Z^{\alpha}}\right\}, \label{gbasis} \\ \Omega^{1,0}(\CPT)=\mathrm{span}\{\D Z^{\alpha}\}=\mathrm{span}\left\{\d Z^{\alpha}-f^{\alpha}\right\}, \label{gfbasis} \end{eqnarray} where we have denoted $f=f^{\alpha}\partial_{\alpha}=f^{\alpha}_{\bar{\alpha}}\d \bar{Z}^{\bar{\alpha}}\partial_{\alpha}$. The requirement that the form $f^{\alpha}$ descend from $\mathscr{T}$ to $\CPT$ is satisfied so long as \be{fgf} \partial_{\alpha}f^{\alpha}=0, \qquad \bar{Z}^{\bar{\alpha}}f^{\beta}_{\bar{\alpha}}=0. \ee With this choice of background, the integrability of the complex structure is equivalent to \be{contact} \dbar_{f}^{2}=\left(\dbar f^{\alpha}+\left[f,f\right]^{\alpha}\right)\partial_{\alpha}=0, \qquad \left[f,f\right]^{\alpha}=f^{\beta}\wedge\partial_{\beta}f^{\alpha}, \ee and holomorphicity of the map $Z^{\alpha}$ is \be{holomap} \dbar|_{X} Z^{\alpha}-f^{\alpha}(Z)=0, \ee where $\dbar|_{X} =\d\bar{\sigma}\frac{\partial}{\partial\bar{\sigma}}$ is the $\dbar$-operator on $X\subset\CPT$ pulled back to $\PS$. This equation has a four-complex parameter family of solutions regardless of whether \eqref{contact} is satisfied \cite{Penrose:1976js, Hansen:1978jz}, and when $\Lambda=0$ it can be thought of as the good cut equation for $M$ \cite{Eastwood:1982, Adamo:2010ey}. While we have focused on the $\cN=0$ version of the non-linear graviton here, the construction has a natural generalization to $\cN>0$--just like the Ward Correspondence for Yang-Mills instantons \cite{Wolf:2007tx}. \subsubsection*{\textit{Local twistor formalism}} If we hope to implement the Penrose transform concretely on curved twistor spaces, we must have a means of defining things like twistor indices. 
That is to say, our twistor coordinates $Z^{\alpha}(x,\sigma)$ are abstract on $\CPT$ until they are pulled back to the spinor bundle $\PS$. To get a concrete coordinate basis on the curved twistor space, we must use the \emph{local twistor formalism}. Note that this formalism will make sense for any (complex) space-time $M$, whether or not it satisfies the conditions of theorem \ref{NLG}. Local twistors are defined at points $x\in M$, and so constitute a complex rank-four bundle over space-time: \begin{equation*} \xymatrix{ Z^{\underline{\alpha}}=(\lambda_{A},\mu^{A'}) \ar[r] & \mathbb{LT} \ar[d]\\ & M } \end{equation*} Let $\mathbf{t}\in T_{x}M$ be a vector at $x$; we can compute the infinitesimal variation of the local twistor bundle in the direction of $\mathbf{t}$ as \cite{Penrose:1986ca} \be{LTT1} \nabla_{\mathbf{t}} Z^{\underline{\alpha}}(x) =\left(t^{BB'}\nabla_{BB'}\lambda_{A}-it^{BB'}P_{ABA'B'}\mu^{A'},\;t^{BB'}\nabla_{BB'}\mu^{A'}-it^{BA'}\lambda_{B}\right), \ee where the tensor $P_{\mu\nu}$ is given by: \begin{equation*} P_{ABA'B'}=\Phi_{ABA'B'}-\Lambda\epsilon_{AB}\epsilon_{A'B'}, \end{equation*} with $\Phi_{ABA'B'}$ the trace-free portion of the Ricci tensor. This local twistor transport along the vector $\mathbf{t}$ defines a \emph{local twistor connection}, which is equivalent to the Cartan conformal connection on $M$ \cite{Friedrich:1977}. The curvature of the local twistor connection is computed by considering \be{ltc1} i\left(\nabla_{\mathbf{t}}\nabla_{\mathbf{u}}-\nabla_{\mathbf{u}}\nabla_{\mathbf{t}}-\nabla_{[\mathbf{t},\mathbf{u}]}\right)Z^{\underline{\beta}}=Z^{\underline{\alpha}}F_{\underline{\alpha}}^{\underline{\beta}}(\mathbf{t},\mathbf{u}). 
\ee For a general complex space-time, we find \cite{Penrose:1986ca}: \begin{equation*} F^{\underline{\beta}}_{\underline{\alpha}}(\mathbf{t},\mathbf{u})=\left( \begin{array}{cc} it^{C}_{D'}u^{DD'}\Psi_{CDB}^{A} & t_{D}^{C'}u^{DD'}\nabla^{A}_{A'}\widetilde{\Psi}^{B'A'}_{C'D'}+t^{C}_{D'}u^{DD'}\nabla^{B'}_{B}\Psi^{BA}_{CD} \\ 0 & -it^{C'}_{D}u^{DD'}\widetilde{\Psi}_{C'D'A'}^{B'} \end{array}\right), \end{equation*} where $\Psi_{ABCD}$ and $\widetilde{\Psi}_{A'B'C'D'}$ are the ASD and SD Weyl spinors respectively. So on a SD background $M$, the local twistor bundle $\LT$ is half-flat and the Ward transform applies \cite{Hitchin:1980hp}. In other words, when $M$ satisfies the conditions of the non-linear graviton construction, we can obtain a rank-four bundle $\T^{\underline{\alpha}}\rightarrow\CPT$ by applying the Ward correspondence to $\LT\rightarrow M$. More formally, it can be shown that $\T^{\underline{\alpha}}\cong (J^{1}\cO(-1))^{\vee}$, where $J^1$ is the first jet bundle \cite{LeBrun:1986}. Abusing terminology, we also refer to this bundle $\T^{\underline{\alpha}}\rightarrow\CPT$ as the `local twistor bundle.' The bundle $\T^{\underline{\alpha}}$ lets us assign meaning to tensors on $\CPT$. In particular, by choosing a holomorphic frame $H^{\underline{\alpha}}_{\alpha}$ for $\T^{\underline{\alpha}}$, we can translate twistor indices (in $\CPT$) into local twistor indices (in $\T^{\underline{\alpha}}$) \cite{Mason:1990}. For instance, consider a tensor $f^{\alpha\cdots}_{\beta\cdots}\in H^{0,1}(\CPT,\cO(n-2))$ for $n<0$. 
After contracting with the holomorphic frame, we get a $(0,1)$-form valued section of $\T^{\underline{\alpha}\cdots}_{\underline{\beta}\cdots}\otimes\cO(n-2)$, and can then apply the Penrose transform to obtain a field on $M$: \begin{equation*} \int_{X}\lambda_{A_{1}}\cdots\lambda_{A_{n}} f^{\underline{\alpha}\cdots}_{\underline{\beta}\cdots}\wedge\tau =\Gamma^{\underline{\alpha}\cdots}_{\underline{\beta}\cdots A_{1}\cdots A_{n}}, \qquad \nabla^{A_{1}A'}\Gamma^{\underline{\alpha}\cdots}_{\underline{\beta}\cdots A_{1}\cdots A_{n}}=0. \end{equation*} In the zero-rest-mass field equation, it is understood that the covariant derivative $\nabla^{AA'}$ acts via the local twistor connection on any local twistor indices of the object in question. This is because the holomorphic frame $H^{\underline{\alpha}}_{\alpha}$ on $\T^{\underline{\alpha}}$ corresponds to a covariantly constant frame for $\LT\rightarrow M$. From now on, we will drop the underline notation, and assume that the distinction between concrete and local twistor indices is clear from the context. We can use \eqref{LTT1} to derive relevant expressions for how $\nabla$ acts on quantities with a single twistor index, say $\Gamma^{\beta}_{A\cdots}=(\Phi_{BA\cdots}, \Psi^{B'}_{A\cdots})$: \be{ltc2} \nabla^{AA'}\Gamma^{\beta}_{A\cdots}=\left( \begin{array}{c} \nabla^{AA'}\Phi_{BA\cdots} \\ \nabla^{AA'}\Psi^{B'}_{A\cdots} \end{array}\right) + \left( \begin{array}{cc} 0 & iP^{AA'}_{BB'} \\ i\epsilon^{AB}\epsilon^{A'B'} & 0 \end{array}\right) \left( \begin{array}{c} \Phi_{BA\cdots} \\ \Psi^{B'}_{A\cdots} \end{array}\right). \ee Similar rules for dual twistor indices as well as higher-rank tensors can be derived or looked up in \cite{Penrose:1986ca}. The gauge freedom of such objects on space-time can be determined by computing the Penrose transform of $Z^{\gamma}f^{\alpha\cdots}_{\beta\cdots}$ and then imposing the condition $Z^{\beta}f^{\alpha\cdots}_{\beta\cdots}=0$ \cite{Mason:1990}. 
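As a quick sanity check on the transport rule \eqref{LTT1} (a worked example; the placement of factors of $i$ is convention-dependent), take $M$ conformally flat with $\Phi_{ABA'B'}=0=\Lambda$, so that $P_{ABA'B'}=0$. A covariantly constant local twistor, $\nabla_{\mathbf{t}}Z^{\underline{\alpha}}=0$ for all $\mathbf{t}$, then obeys
\begin{equation*}
\nabla_{BB'}\lambda_{A}=0, \qquad \nabla_{BB'}\mu^{A'}=i\,\delta^{A'}_{B'}\lambda_{B} \quad \Longrightarrow \quad \lambda_{A}=\mathrm{const.}, \qquad \mu^{A'}(x)=\mu^{A'}(0)+i\,x^{BA'}\lambda_{B},
\end{equation*}
which is just the familiar flat-space incidence relation: the fibre of $\LT$ over each point $x\in M$ is identified with global twistor space, as expected.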
\subsection{Einstein Gravity from Conformal Gravity} The main stumbling block for the twistor-string revolution was the presence of conformal gravity degrees of freedom \cite{Berkovits:2004jj}. In the setting of twistor-strings, this appeared to correspond to non-minimal $\cN=4$ conformal supergravity (CSG) coupled to $\cN=4$ SYM. As we saw in Sections \ref{Chapter3} and \ref{Chapter4}, this issue could be side-stepped in the study of gauge theory by working directly with a twistor action. A first hope would be to attempt a similar procedure for the study of Einstein gravity; a particularly attractive route is presented by the embedding of Einstein gravity inside conformal gravity. In this subsection, we review this embedding and derive a precise version of it at the level of MHV amplitudes. \subsubsection{Conformal gravity} Conformal gravity is obtained from the conformally invariant action \be{CGA1} S^{\mathrm{CG}}[g]=\frac{1}{\varepsilon^{2}}\int_{M}\d\mu\;C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}=\frac{1}{\varepsilon^{2}}\int_{M}\d\mu\left(\Psi^{ABCD}\Psi_{ABCD}+\widetilde{\Psi}^{A'B'C'D'}\widetilde{\Psi}_{A'B'C'D'}\right), \ee where $\varepsilon^{2}$ is a dimensionless coupling constant, $\d\mu=\d^{4}x\sqrt{|g|}$ is the volume element, and $C_{\mu\nu\rho\sigma}$ is the Weyl curvature tensor. The field equations of this action are the vanishing of the Bach tensor, $B_{\mu\nu}$, which can be written in a variety of different forms thanks to the Bianchi identities: \begin{multline}\label{Bach} B_{\mu\nu}=2\nabla^{\rho}\nabla^{\sigma}C_{\rho\mu\nu\sigma}+C_{\rho\mu\nu\sigma}R^{\rho\sigma} \\ =\left(2\nabla_{\rho}\nabla_{(\mu}R^{\rho}_{\nu)}-\Box R_{\mu\nu}-\frac{2}{3}\nabla_{\mu}\nabla_{\nu}R -2R_{\rho\mu}R^{\rho}_{\nu}+\frac{2}{3}R_{\mu\nu}R \right)_0\\ = 2(\nabla_{A'}^C\nabla_{B'}^D+\Phi^{CD}_{A'B'} )\Psi_{ABCD}=2(\nabla_{A}^{C'}\nabla_{B}^{D'}+\Phi^{C'D'}_{AB})\widetilde{\Psi}_{A'B'C'D'}, \end{multline} where the subscript `0' denotes trace-free part. 
The last line implies that the field equations are satisfied whenever $M$ is Einstein, or when its Weyl curvature is either self-dual or anti-self-dual. In our study of Yang-Mills theory, the Chalmers-Siegel action \eqref{CS1} allowed us to expand around the SD sector. We can perform a similar expansion for conformal gravity by first considering the action:\footnote{Note that the field equations of conformal gravity can be understood as the Yang-Mills equations of the local twistor connection \cite{Merkulov:1984nz}; hence, a Chalmers-Siegel-like expansion must exist.} \be{CGA2} S^{\mathrm{CG}}[g]=\frac{2}{\varepsilon^{2}}\int_{M}\d\mu\;\Psi^{ABCD}\Psi_{ABCD}. \ee This differs from \eqref{CGA1} by \begin{equation*} \frac{1}{\varepsilon^2}\int_{M}\d\mu\left(\Psi^{ABCD}\Psi_{ABCD}-\widetilde{\Psi}^{A'B'C'D'}\widetilde{\Psi}_{A'B'C'D'}\right), \end{equation*} which is equal to $\frac{12\pi^2}{\varepsilon^{2}}(\tau(M)-\eta(\partial M))$, where $\tau(M)$ is the signature of $M$ and $\eta(\partial M)$ is the $\eta$-invariant of the conformal boundary \cite{Hitchin:1997}. Hence, \eqref{CGA2} is equal to the conformal gravity action up to a topological term which will be irrelevant in perturbation theory. To expand around the SD sector, we introduce the totally symmetric spinor field $G_{ABCD}$ as a Lagrange multiplier, and write the action as \cite{Berkovits:2004jj}: \be{CGA3} S^{\mathrm{CG}}[g,G]=\int_{M}\d\mu \left(G^{ABCD}\Psi_{ABCD}-\varepsilon^{2}G^{ABCD}G_{ABCD}\right). \ee This has field equations \cite{Mason:2005zm} \be{CGFE} \Psi^{ABCD}=\varepsilon^{2}G^{ABCD}, \qquad \left(\nabla^{C}_{A'}\nabla^{D}_{B'}+\Phi^{CD}_{A'B'}\right)G_{ABCD}=0, \ee so integrating out $G$ returns \eqref{CGA2}. But now $\varepsilon^{2}$ becomes a parameter for expanding about the SD sector: when $\varepsilon=0$, we have a SD solution. 
This means that $G_{ABCD}$ can be thought of as a linear ASD solution propagating on the SD background, and $\varepsilon^{2}$ plays the role of the 't Hooft coupling $\lambda$ as an expansion parameter around the SD sector. \subsubsection{Embedding Einstein gravity in conformal gravity} We now review a recent argument by Maldacena, which states that on-shell and after imposing certain boundary conditions, the conformal gravity and Einstein-Hilbert actions agree on de Sitter space \cite{Maldacena:2011mk}. Note that many of the claims we will make were originally stated for asymptotically hyperbolic Riemannian four-manifolds; their extension to Lorentzian space-times which are asymptotically de Sitter follows by analytic continuation. The Einstein-Hilbert action in the presence of a cosmological constant is \begin{equation*} S^{\mathrm{EH}}[g]=\frac{1}{\kappa^{2}}\int_{M}\d\mu\; (R-2\Lambda), \qquad \kappa^{2}=16\pi G_{N}. \end{equation*} On de Sitter space, the field equations are $R_{\mu\nu}=\Lambda g_{\mu\nu}$; taking the trace gives $R=4\Lambda$, so the on-shell integrand reduces to $R-2\Lambda=2\Lambda$ and the action reads \begin{equation*} S^{\mathrm{EH}}[dS_{4}]=\frac{2\Lambda}{\kappa^2}\int_{dS_{4}}\d\mu =\frac{2\Lambda}{\kappa^2}V(dS_{4}), \end{equation*} where $V(M)$ is the volume of $M$. For any asymptotically de Sitter manifold, this volume will be infinite, so the action functional must be modified by the Gibbons-Hawking boundary term \cite{Gibbons:1976ue}. Additionally, we must include the holographic renormalization counter-terms (which also live on the boundary) in order to render the volume finite \cite{Balasubramanian:1999re, Skenderis:2002wp}.
This leaves us with the so-called renormalized Einstein-Hilbert action \cite{Miskovic:2009bm}: \begin{equation*} S^{\mathrm{EH}}_{\mathrm{ren}}[g]=\frac{1}{\kappa^{2}}\left[\int_{M}\d\mu\left(R-2\Lambda\right)-2\int_{\partial M} \d\tilde{\mu}\; K -\int_{\partial M}\d\tilde{\mu}\;\mathcal{L}_{\mathrm{ct}}\right], \end{equation*} where $\d\tilde{\mu}$ is the volume element on the boundary, $K$ is the extrinsic curvature of $\partial M$, and $\mathcal{L}_{\mathrm{ct}}$ is the holographic renormalization Lagrangian of counter-terms. For instance, on de Sitter space \be{EHren2} \mathcal{L}_{\mathrm{ct}}[dS_4]=\frac{2}{\ell_{dS}}+\frac{\ell_{dS}}{2}\tilde{R}, \ee where $\ell_{dS}$ is the de Sitter curvature radius and $\tilde{R}$ is the intrinsic scalar curvature of the conformal boundary. The important message is that $S^{\mathrm{EH}}_{\mathrm{ren}}[M]$ is finite, and \begin{equation*} S^{\mathrm{EH}}_{\mathrm{ren}}[M]=\frac{2\Lambda}{\kappa^2}V_{\mathrm{ren}}(M), \end{equation*} where $V_{\mathrm{ren}}$ is the renormalized volume of the space-time \cite{Graham:1999jg}. In other words, the on-shell renormalized Einstein-Hilbert action is equal (up to a constant proportional to $\Lambda$) to the renormalized volume of the asymptotically de Sitter space-time. The next step is to relate this observation to conformal gravity. Suppose that $M$ were an abstract Riemannian 4-manifold which was compact without boundary. Then the Chern-Gauss-Bonnet formula states that \begin{equation*} \chi(M)=\frac{1}{8\pi^{2}}\int_{M}\d\mu \left(C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}-\frac{1}{2}R_{\mu\nu}R^{\mu\nu}+\frac{1}{6}R^{2}\right). \end{equation*} If $M$ were additionally Einstein $(R_{\mu\nu}=\Lambda g_{\mu\nu})$, so that $R_{\mu\nu}R^{\mu\nu}=4\Lambda^{2}$ and $R^{2}=16\Lambda^{2}$, the non-Weyl terms in the integrand would combine to $-\frac{1}{2}R_{\mu\nu}R^{\mu\nu}+\frac{1}{6}R^{2}=\frac{2\Lambda^{2}}{3}$, and this would immediately imply that \be{CGB1} S^{\mathrm{CG}}[M]=\frac{8\pi^{2}\chi(M)}{\varepsilon^{2}}-\frac{2\Lambda^{2}}{3\varepsilon^{2}}V(M).
\ee Of course, when $M$ is (Lorentzian) asymptotically de Sitter, the Chern-Gauss-Bonnet formula requires a boundary term, and the volume needs renormalization. However, the left-hand side of \eqref{CGB1} is canonically defined and independent of the conformal compactification of $M$, so all that is required is to properly renormalize the right-hand side. A remarkable theorem of Anderson tells us that the relationship \eqref{CGB1} continues to hold when the boundary terms for the Euler characteristic and volume are taken into account \cite{Anderson:2001}. In other words, we have \begin{equation*} S^{\mathrm{CG}}[M]=\frac{8\pi^{2}\widehat{\chi}(M)}{\varepsilon^{2}}-\frac{2\Lambda^{2}}{3\varepsilon^{2}}V_{\mathrm{ren}}(M), \end{equation*} where $\widehat{\chi}$ is the renormalized Euler characteristic. Working on an asymptotically de Sitter background, we will always be perturbing around the topologically trivial flat case ($\chi=0$), so we have: \be{CGB2} S^{\mathrm{CG}}[dS_{4}]=-\frac{\Lambda\;\kappa^{2}}{3\varepsilon^{2}}S^{\mathrm{EH}}_{\mathrm{ren}}[dS_{4}]. \ee What does this tell us about the scattering amplitudes of the two theories, though? The answer is obvious using the perturbiner formalism \cite{Rosly:1996vr, Rosly:1997ap}. Formally, the tree-level S-matrix of any theory with fields $\phi$ and action $S[\phi]$ is obtained by first taking asymptotic states $\{\phi_{1},\ldots,\phi_{n}\}$ which are positive frequency at $\scri^{-}$ if incoming, and negative frequency at $\scri^{+}$ if outgoing. We then construct a classical solution $\phi_{\mathrm{cl}}$ (the scattering background) such that $\phi_{\mathrm{cl}}-\sum_{i}\epsilon_{i}\phi_{i}$ is positive frequency at $\scri^{+}$ and negative frequency at $\scri^{-}$. 
Then the tree-level scattering amplitude on this classical background is given by: \be{perturbiner} \cA(\phi_{1},\ldots,\phi_{n})=\left.\frac{\partial^{n} S\left[\phi_{\mathrm{cl}}-\sum^{n}_{i=1}\epsilon_{i}\phi_{i}\right]}{\partial\epsilon_{1}\cdots\partial\epsilon_{n}}\right|_{\epsilon_{1}=\cdots =\epsilon_{n}=0}. \ee Hence, if two theories agree on a classical background then the tree-level S-matrix of one can be computed with the other, provided the asymptotic states can be singled out in a coherent way. Equation \eqref{CGB2} confirms that conformal and Einstein gravity agree (up to constants) on a classical de Sitter background. We also know that Einstein solutions sit inside the space of all solutions to the Bach equations of conformal gravity. All that remains is to show that asymptotic Einstein scattering states can be consistently singled out within the conformally invariant theory. Maldacena argues that this can be done by employing `Neumann' boundary conditions on the metric as follows \cite{Maldacena:2011mk}. For any asymptotically de Sitter space-time, we can expand the line element in Fefferman-Graham coordinates \cite{Fefferman:1985}. On the Poincar\'{e} patch of \eqref{dSmetric3}, this looks like: \be{FG} \d s^{2}=\frac{-\d t^{2}+\d x^{i}\otimes \d x^{j}\left( g^{(0)}_{ij}(x)-t^{2}g^{(2)}_{ij}(x)-t^{3}g^{(3)}_{ij}(x)+\cdots\right)}{-t^{2}}. \ee The important point is that this expansion has no $O(t)$ term in the numerator; since a conformal transformation can be made to eliminate the $t^{-2}$ factor, this means that asymptotically de Sitter space-times are conformal to metrics which obey $\partial_{t} g|_{t=0}=0$. This is a Neumann boundary condition on the metric, and it can be stated gauge-invariantly as the requirement that the $t=0$ slice of $dS_{4}$ be totally geodesic with respect to the ambient metric \cite{Maldacena:2011mk}.
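A minimal worked example (in the conventions of \eqref{FG}, with unit de Sitter radius): for exact $dS_{4}$ on the Poincar\'{e} patch, $g^{(0)}_{ij}=\delta_{ij}$ and all higher Fefferman-Graham coefficients vanish,
\begin{equation*}
\d s^{2}=\frac{-\d t^{2}+\delta_{ij}\,\d x^{i}\otimes\d x^{j}}{-t^{2}},
\end{equation*}
so the conformally rescaled metric $-t^{2}\,\d s^{2}$ trivially obeys $\partial_{t}g|_{t=0}=0$. The Neumann condition only becomes a genuine restriction on the larger solution space of conformal gravity, where an $O(t)$ term is generically present.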
Since conformal gravity has fourth-order equations of motion, one expects it to have four solutions given a single momentum in a Fourier transform picture. Restricting our attention to positive frequency fields should eliminate two of these solutions, while the Neumann boundary condition gets rid of a third. As asymptotically de Sitter spaces are conformal to solutions respecting these conditions, it follows that the remaining solution must be the Einstein one. Hence, calculation of conformal gravity amplitudes at tree-level restricted to Einstein states will give $-\Lambda\kappa^{2}/3\varepsilon^{2}$ times the corresponding Einstein amplitudes. In particular, they will degenerate as $\Lambda \rightarrow 0$, but by construction we will find that the $n$-point conformal gravity amplitude will be a polynomial of degree $n-1$ in $\Lambda$, so it will be relatively straightforward in practice to divide by $\Lambda$ and take $\Lambda\rightarrow 0$. \subsubsection{Graviton scattering in de Sitter space} We begin by showing how the relationship between conformal and Einstein gravity is manifested for generating functionals of scattering amplitudes involving two negative helicity gravitons. To do this, we use the chiral formulation of general relativity. For a general space-time $M$, the metric is given by a tetrad of 1-forms as $\d s^{2}=\epsilon_{AB}\epsilon_{A'B'}e^{AA'}\otimes e^{BB'}$. This information can be packaged nicely into three ASD 2-forms: \begin{equation*} \Sigma^{AB}=e^{A'(A}\wedge e^{B)}_{A'}, \end{equation*} and combined with the ASD spin connection $\Gamma_{AB}$ to provide the basic variables for Plebanski's chiral formulation of gravity \cite{Plebanski:1977zz}.
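As a quick check (a sketch, with the overall normalization suppressed), 2-forms derived from a tetrad automatically satisfy the simplicity condition which appears below as \eqref{FE3}: since
\begin{equation*}
\Sigma^{AB}\wedge\Sigma^{CD}=e^{A'(A}\wedge e^{B)}_{A'}\wedge e^{C'(C}\wedge e^{D)}_{C'}\propto\epsilon^{A(C}\epsilon^{D)B}\,\d\mu,
\end{equation*}
and the totally symmetric part of $\epsilon^{A(C}\epsilon^{D)B}$ vanishes, it follows that $\Sigma^{(AB}\wedge\Sigma^{CD)}=0$ holds identically for any tetrad. The non-trivial content of the constraint is the converse statement: simple $\Sigma$'s determine a tetrad.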
In the presence of a cosmological constant, this action takes the form: \be{eqn: PA} S[\Sigma, \Gamma]=\frac{1}{\kappa^2}\int_{M} \left(\Sigma^{AB}\wedge F_{AB}-\frac{\Lambda}{6}\Sigma^{AB}\wedge\Sigma_{AB}\right), \ee where \be{eqn: ASDcurv} F_{AB}=\d\Gamma_{AB}+\Gamma^{C}_{A}\wedge \Gamma_{BC} \ee is the curvature of the ASD spin connection. This action produces two field equations, to which we append a third (the condition that $\Sigma^{AB}$ be derived from a tetrad) \cite{Capovilla:1991qb}: \begin{eqnarray} \D \Sigma^{AB} & = & 0, \label{FE1} \\ F_{AB} & = & \Psi_{ABCD}\Sigma^{CD}+\frac{\Lambda}{3}\Sigma_{AB}, \label{FE2} \\ \Sigma^{(AB}\wedge\Sigma^{CD)} & = & 0 . \label{FE3} \end{eqnarray} Here, $\D$ is the covariant derivative with respect to the ASD spin connection, so explicitly, \begin{equation*} \D\Sigma^{AB}=\d\Sigma^{AB}+2\Gamma^{(A}_{C}\wedge\Sigma^{B)C}. \end{equation*} In the context of graviton scattering amplitudes, the MHV amplitude with two negative helicity gravitons can be pictured geometrically as the classical scattering of these two gravitons off an SD background. This SD background will be built perturbatively from the $n-2$ positive helicity gravitons in an $n$-particle graviton MHV amplitude \cite{Mason:2008jy}. In such a background, $\Psi_{ABCD}=0$, which means that \eqref{FE2} can be solved for $\Sigma$ in terms of $F$ and then \eqref{FE1} and \eqref{FE3} may be combined to give a condition on the curvature of the ASD spin connection. Hence, an SD solution $(\Sigma_{0}, \Gamma_{0})$ obeys \cite{Capovilla:1990qi}: \begin{eqnarray} \Sigma_{0}^{AB} & = & \frac{3}{\Lambda} F^{AB}_{0}, \label{SD1} \\ F_{0(AB}\wedge F_{0\; CD)} & = & 0.
\label{SD2} \end{eqnarray} If we now consider small perturbations away from this SD background of the form $\Sigma= \Sigma_{0}+\sigma$, $\Gamma = \Gamma_{0}+\gamma$, then we obtain a set of linearized field equations: \begin{eqnarray} \D_{0}\sigma^{AB} & = & -2\gamma^{(A}_{C}\wedge\Sigma^{B)C}_{0}, \label{LFE1} \\ \D_{0}\gamma_{AB} & = & \psi_{ABCD}\Sigma^{CD}_{0}+\frac{\Lambda}{3}\sigma_{AB}, \label{LFE2} \\ \sigma^{(AB}\wedge\Sigma^{CD)}_{0} & = & 0, \label{LFE3} \end{eqnarray} where $\D_{0}$ is the covariant derivative with respect to the background ASD spin connection $\Gamma_{0}$. \begin{lemma}\label{ZRM} The linearized field $\psi_{ABCD}=\psi_{(ABCD)}$ may be interpreted as a linearized ASD Weyl spinor propagating on the SD background $(\Sigma_{0},\Gamma_{0})$. \end{lemma} \proof It suffices to show that $\psi_{ABCD}$ obeys the zero-rest-mass equation for spin $-2$ fields: $\nabla^{AA'}\psi_{ABCD}=0$, where $\nabla$ is the background connection. Act on both sides of \eqref{LFE2} with the background covariant derivative $\D_{0}$: \begin{equation*} \D_{0}^{2}\gamma_{AB}= 2F_{0\; C(A}\wedge\gamma_{B)}^{C}=(\D_{0}\psi_{ABCD})\Sigma^{CD}_{0}+\frac{\Lambda}{3}\D_{0}\sigma_{AB}. \end{equation*} Now use \eqref{LFE1} and \eqref{SD1} to obtain \begin{equation*} 2F_{0\; C(A}\wedge\gamma_{B)}^{C}=(\D_{0}\psi_{ABCD})\Sigma^{CD}_{0}-2F_{0 (A}^{C}\wedge\gamma_{B)C} \qquad \Rightarrow \D_{0}\psi_{ABCD}=0, \end{equation*} as required. $\Box$ \medskip Geometrically, we can conceptualize the framework of linearized solutions on an SD background in the following way. Let $\mathcal{S}$ be the space of solutions to the full field equations \eqref{FE1}-\eqref{FE3}; solutions to the linearized equations \eqref{LFE1}-\eqref{LFE3} form a vector space $V$. We identify $V$ with the tangent space to $\mathcal{S}$ over the point $(\Sigma_{0},\Gamma_{0})$ representing an SD solution: $T_{(\Sigma_{0},\Gamma_{0})}\mathcal{S}= V$.
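For completeness, the background equations \eqref{SD1} and \eqref{SD2} follow in one line: setting $\Psi_{ABCD}=0$ in \eqref{FE2} gives $\Sigma_{0}^{AB}=\frac{3}{\Lambda}F_{0}^{AB}$, and substituting this into \eqref{FE1} and \eqref{FE3} yields
\begin{equation*}
\D_{0}\Sigma_{0}^{AB}=\frac{3}{\Lambda}\D_{0}F_{0}^{AB}=0, \qquad \Sigma_{0}^{(AB}\wedge\Sigma_{0}^{CD)}=\frac{9}{\Lambda^{2}}F_{0}^{(AB}\wedge F_{0}^{CD)}=0.
\end{equation*}
The first of these is the Bianchi identity for the ASD spin connection and so holds automatically, leaving only \eqref{SD2}.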
The vector space $V$ itself can be split into SD and ASD sectors by a short exact sequence resolution. A linearized SD solution is completely characterized by the ASD spin connection, since the linearized SD field equations read \be{eqn: SDLFE} \sigma_{AB} =\frac{3}{\Lambda}\D_{0}\gamma_{AB}, \qquad \D_{0}\gamma^{(AB}\wedge F_{0}^{CD)} = 0. \ee Hence, we define the SD portion of $V$ by \begin{equation*} V^{+}=\left\{(\sigma,\gamma)\in V \: : \: \D_{0}\gamma^{(AB}\wedge F_{0}^{CD)} = 0\right\}. \end{equation*} We can therefore define $V^{-}$ by the quotient map in the short exact sequence: \begin{equation*} 0\longrightarrow V^{+} \hookrightarrow V \longrightarrow V^{-} \longrightarrow 0, \end{equation*} with \begin{equation*} V^{-}\equiv V/ V^{+}= \left\{(\sigma,\gamma)\in V\right\} / \left\{\gamma \: : \: \D_{0}\gamma^{(AB}\wedge F_{0}^{CD)} = 0\right\} . \end{equation*} The space of solutions to the field equations $\mathcal{S}$ comes equipped with a natural symplectic form $\omega$ given by the boundary term in the action \cite{Ashtekar:2008jw}: \be{eqn: symp} \omega = \frac{1}{\kappa^{2}}\int_{C}\delta\Sigma^{AB}\wedge\delta\Gamma_{AB}, \ee where $C$ is a Cauchy surface in $M$ (when $\Lambda>0$, there is always a slicing where $C\cong S^3$ topologically) and $\delta$ is the exterior derivative on $\mathcal{S}$. We have the following lemma: \begin{lemma} The form $\omega$ on $\mathcal{S}$ is independent of choice of Cauchy surface, and defines a symplectic form on $\mathcal{S}/\mathrm{Diff}^{+}_{0}(M)$. \end{lemma} \proof Let $C_{1}$ and $C_{2}$ be any two Cauchy surfaces in $M$ bounding some region $R$. Then using $\delta^2=0$ and Stokes' theorem, it follows that \begin{equation*} \int_{C_{1}-C_{2}}\delta\Sigma^{AB}\wedge\delta\Gamma_{AB} = \delta\int_{\partial R}\Sigma^{AB}\wedge\delta\Gamma_{AB} = \delta \int_{R} \d\left(\Sigma^{AB}\wedge\delta\Gamma_{AB}\right). 
\end{equation*} Assuming the field equations hold in $R$, \eqref{FE1} implies that \begin{multline*} \delta \int_{R} \d\left(\Sigma^{AB}\wedge\delta\Gamma_{AB}\right)= \delta\left( \int_{R} -2\Gamma^{(A}_{C}\wedge\Sigma^{B)C}\wedge\delta\Gamma_{AB}+\Sigma^{AB}\wedge\d\delta\Gamma_{AB} \right) \\ =\delta \left(\int_{R} \Sigma^{AB}\wedge\D\delta\Gamma_{AB}\right) \sim \delta (\delta S[\Sigma,\Gamma]) =0, \end{multline*} by the nilpotency of $\delta$ on $\mathcal{S}$. Hence, $\omega$ is independent of choice of Cauchy surface, and is furthermore invariant under diffeomorphisms of $M$ as well as rotations of the spin frame (since all spinor indices are contracted). By definition, it is easy to see that $\omega$ annihilates transformations of the form $\Sigma\rightarrow\Sigma+\delta\sigma$ or $\Gamma\rightarrow\Gamma+\delta\gamma$ and so descends to $\mathcal{S}/\mathrm{Diff}^{+}_{0}(M)$. Finally, it is clear that $\delta\omega=0$, indicating that it is a symplectic form on this space. $\Box$ \medskip We can use this symplectic form to define an inner product between states in the tangent space $V$. Let $h_{i}, h_{j}\in V$ be two linearized solutions, and define their inner product to be: \be{ip} \la h_{i}|h_{j}\ra = -\frac{i}{\kappa^{2}}\int_{C}\sigma^{AB}_{j}\wedge\gamma_{i\;AB}. \ee An important fact about this inner product (which is obvious in the $\Lambda=0$ setting, cf.\ \cite{Mason:2008jy}) is that it annihilates the SD sector: \begin{lemma}\label{SDsec} Let $h_{i},h_{j}\in V^{+}$ on the SD background $(\Sigma_{0},\Gamma_{0})$. Then $\la h_{i}|h_{j} \ra=0$, or equivalently: for all $h_{i}\in V^{+}$, $\la h_{i}|\cdot\ra|_{V^+}=0$. \end{lemma} \proof Since it descends from the symplectic form $\omega$, the inner product is skew-symmetric under interchange of $h_{i}$ and $h_{j}$, so we have \begin{equation*} \la h_{i}|h_{j} \ra =-\frac{i}{2\kappa^{2}}\int_{C}\left(\sigma^{AB}_{j}\wedge\gamma_{i\;AB}-\sigma^{AB}_{i}\wedge\gamma_{j\;AB}\right).
\end{equation*} Now, suppose $h_{j}\in V^{+}$; then from \eqref{LFE2} it follows that $\D_{0}\gamma_{j\;AB}=\frac{\Lambda}{3}\sigma_{j\;AB}$. Furthermore, in the $\Lambda=0$ limit, we know that $\D_{0}\rightarrow\d$ so that any SD perturbation of the ASD spin connection must be pure gauge. In other words, we know that $\gamma_{j}^{AB}|_{\Lambda=0}=0$, so we can write $\gamma_{j}^{AB}=\Lambda \nu_{j}^{AB}$ for some array of space-time 1-forms $\nu_{j}^{AB}$. Then the linearized SD field equation gives $\sigma_{j\;AB}=3\D_{0}\nu_{j\;AB}$. Feeding this into the inner product gives: \begin{multline*} -\frac{i}{2\kappa^{2}}\int_{C}\left(3\d\nu^{AB}_{j}\wedge\gamma_{i\;AB}+6\Gamma_{0\;C}^{(A}\wedge\nu^{B)C}_{j}\wedge\gamma_{i\;AB}-\sigma^{AB}_{i} \wedge\gamma_{j\;AB}\right)\\ =\frac{i}{2\kappa^{2}}\int_{C}\left(3\nu^{AB}_{j}\wedge\D_{0}\gamma_{i\;AB}-\sigma^{AB}_{i}\wedge\gamma_{j\;AB}\right), \end{multline*} where the second line follows by integration by parts and a re-arranging of index contractions. Now, using $\gamma_{j\;AB}=\Lambda\nu_{j\;AB}$, we have: \begin{equation*} \la h_{i}|h_{j}\ra = \frac{i}{2\kappa^{2}}\int_{C}\nu_{j}^{AB}\wedge\left(3\D_{0}\gamma_{i\;AB}-\Lambda\sigma_{i\;AB}\right)=\frac{3i}{2\kappa^{2}}\int_{C}\nu_{j}^{AB}\wedge\psi_{i\;ABCD}\Sigma^{CD}_{0}, \end{equation*} using \eqref{LFE2} for $h_{i}$. So, if $h_{i}\in V^{+}$ as well, $\psi_{i\;ABCD}=0$ and the inner product vanishes as desired. $\Box$ \bigskip Hence, if $h_{i}\in V^{+}$, it follows that the inner product annihilates all other states $h_{j}$ in $V^{+}$. In other words, the inner product vanishes on linearized SD solutions. To use this inner product to define ASD solutions at the boundary of our space-time, we simply take a one-parameter family of Cauchy hypersurfaces $C_{t}\rightarrow\scri^{\pm}$ as $t\rightarrow\pm\infty$.
Then we say that $h_{j}=(\sigma_{j},\gamma_{j})$ is ASD at $\scri^{\pm}$ if \be{ASDlim} \lim_{t\rightarrow\pm\infty} \int_{C_t}\sigma^{AB}_{j}\wedge\gamma_{i\;AB}=0 \qquad \mbox{for all}\:\: h_{i}=(\sigma_{i},\gamma_{i})\in V^{-}. \ee \subsubsection*{\textit{Graviton MHV amplitudes}} An $n$-graviton MHV amplitude consists of $n-2$ SD and $2$ ASD incoming gravitons. Following \cite{Mason:2008jy}, we assume that the $n-2$ SD gravitons can be absorbed into an SD background space-time $M$, which can be perturbatively expanded to recover the individual particle content. Reversing the momentum of one of the two remaining gravitons, the MHV amplitude is the probability amplitude for a pure ASD state at $\scri^{-}$ to propagate across $M$ and evolve into an SD state at $\scri^{+}$. This is illustrated in Figure \ref{dS3}. \begin{figure} \centering \includegraphics[width=3.6 in, height=1.5 in]{dS3.pdf}\caption{\textit{Geometric picture of MHV graviton scattering}}\label{dS3} \end{figure} We can express this situation mathematically using our inner product \eqref{ip}. For the incoming state, we take $h_{1}\in V^{-}$ at $\scri^{-}$; since the inner product annihilates the SD sector, the amplitude for it to evolve into something self-dual at $\scri^{+}$ is given by its contraction with a state $h_{2}\in V^{-}$ at $\scri^{+}$. In other words, we need to compute the inner product between two states $h_{1}|_{\scri^{-}}\in V^{-}$, $h_{2}|_{\scri^{+}}\in V^{-}$ at the future conformal boundary $\scri^{+}$:\footnote{This form for the `scattering amplitude' does not actually constitute a \emph{physical} observable, since the measurement is performed by integrating over all of $\scri^{+}$. This is a space-like hypersurface, so no physical observer can perform this measurement.
Hence, \eqref{ip*} is a `meta-observable' in the sense proposed by the dS/CFT correspondence \cite{Witten:2001kn, Strominger:2001pn}, but limits nicely to the asymptotically flat definition of a scattering amplitude as $\Lambda\rightarrow 0$.} \be{ip*} \la h_{2}|h_{1}\ra =-\frac{i}{\kappa^{2}}\int_{\scri^{+}}\sigma^{AB}_{1}\wedge\gamma_{2\;AB}. \ee Before proceeding, one might ask: how do we know that the all-SD and one-ASD graviton amplitudes vanish? Even with a cosmological constant, the SD Einstein equations are integrable; this is captured in the chiral formalism by the fact that the SD sector is fully characterized by a single relation, \eqref{SD2}. Furthermore, lemma \ref{SDsec} tells us that the inner product on linearized spin-2 fields (i.e., gravitons) vanishes on the SD sector. The first fact ensures the vanishing of the all-SD graviton scattering amplitude, while the second fact tells us that any scattering amplitude involving only a single ASD graviton also vanishes. Hence, Einstein gravity does indeed possess `MHV-like' behavior, as desired. Now, we would like to get \eqref{ip*} into a form which is an integral over the SD background $M$; this would allow us to perturbatively expand the background to recover the $n-2$ SD gravitons of the scattering amplitude. The following proposition allows us to do just that: \begin{propn} The amplitude $\la h_{2}|h_{1}\ra$ is given by the formula: \be{MHV1} \la h_{2}|h_{1}\ra =\frac{i}{\kappa^{2}}\int_{M}\left(\Sigma^{AB}_{0}\wedge\gamma_{1\;A}^{C}\wedge\gamma_{2\;CB}-\frac{\Lambda}{3}\sigma^{AB}_{1}\wedge\sigma_{2\;AB}\right), \ee where $M$ is an SD background space-time described by $(\Sigma_{0},\Gamma_{0})$.
\end{propn} \proof Recall that $\partial M=\scri^{+}-\scri^{-}$, so Stokes' theorem gives \begin{equation*} -\frac{i}{\kappa^{2}}\int_{\scri^{+}}\sigma^{AB}_{1}\wedge\gamma_{2\;AB}=-\frac{i}{\kappa^{2}}\int_{M}\left(\d\sigma_{1}^{AB}\wedge\gamma_{2\;AB}+\sigma^{AB}_{1}\wedge\d\gamma_{2\;AB}\right)-\frac{i}{\kappa^{2}} \int_{\scri^{-}}\sigma^{AB}_{1}\wedge\gamma_{2\;AB}. \end{equation*} Now, the second term on the right vanishes, since we have assumed that $h_{1}\in V^{-}$ at $\scri^{-}$. Using the linearized field equations \eqref{LFE1}, \eqref{LFE2} it follows that \begin{eqnarray*} \d\sigma_{1}^{AB} & = & -2\gamma_{1\;C}^{(A}\wedge\Sigma^{B)C}_{0}-2\Gamma_{0\;C}^{(A}\wedge\sigma_{1}^{B)C},\\ \d\gamma_{2\;AB} & = & \psi_{2\;ABCD}\Sigma^{CD}_{0}+\frac{\Lambda}{3}\sigma_{2\;AB}-2\Gamma_{0\;C(A}\wedge\gamma_{2\;B)}^{C}. \end{eqnarray*} This means that we can re-write our amplitude as: \begin{multline*} \frac{i}{\kappa^{2}}\int_{M}\left(\Sigma_{0}^{AB}\wedge\gamma_{1\;A}^{C}\wedge\gamma_{2\;CB}+\sigma^{AB}_{1}\wedge\Gamma_{0\;A}^{C}\wedge\gamma_{2\;CB}+\sigma^{AB}_{1}\wedge\Gamma_{0\;CA}\wedge\gamma_{2\;B}^{C}\right. \\ \left. -\frac{\Lambda}{3}\sigma_{1}^{AB}\wedge\sigma_{2\;AB}-\sigma^{AB}_{1}\wedge\psi_{2\;ABCD}\Sigma_{0}^{CD}\right). \end{multline*} However, the final term vanishes due to the linearized field equation \eqref{LFE3} and the fact that $\psi_{ABCD}=\psi_{(ABCD)}$, while the second and third terms cancel after restructuring the spinor indices. The resulting expression agrees with \eqref{MHV1}, but we must still verify that it has the correct gauge invariance: if one of the ASD states is pure gauge, the amplitude must vanish. Without loss of generality, suppose that $h_{1}$ is pure gauge, so that $\psi_{1\;ABCD}=0$. 
By \eqref{eqn: SDLFE}, we know that $\frac{\Lambda}{3}\sigma_{1}^{AB}=\D_{0}\gamma_{1}^{AB}$, and putting this into \eqref{MHV1} and integrating by parts gives us \begin{equation*} \la h_{2}| h_{1,\;\psi_{1}=0}\ra =\frac{i}{\kappa^{2}}\int_{M}\left(\Sigma_{0}^{AB}\wedge\gamma_{1\;A}^{C}\wedge\gamma_{2\;CB}+\gamma_{1}^{AB}\wedge\D_{0}\sigma_{2\;AB}\right)-\frac{i}{\kappa^{2}}\int_{\partial M}\gamma_{1}^{AB}\wedge\sigma_{2\;AB}. \end{equation*} The boundary term vanishes at $\scri^{+}$ since $h_{2}|_{\scri^{+}}\in V^{-}$, and also at $\scri^{-}$ since $h_{1}$ is pure gauge. This leaves us with the bulk terms, which can be evaluated using the linearized field equation \eqref{LFE1} for $h_{2}$: \begin{multline*} \int_{M}\left(\Sigma_{0}^{AB}\wedge\gamma_{1\;A}^{C}\wedge\gamma_{2\;CB}+\gamma_{1}^{AB}\wedge\D_{0}\sigma_{2\;AB}\right) \\ =\int_{M}\left(\Sigma_{0}^{AB}\wedge\gamma_{1\;A}^{C}\wedge\gamma_{2\;CB}-2\gamma_{1}^{AB}\wedge\gamma_{2\;C(A}\wedge\Sigma_{0\;B)}^{C}\right) =0, \end{multline*} with the final equality following after re-arranging contractions on spinor indices. $\Box$ \medskip The expression \eqref{MHV1} provides a generating functional for the MHV amplitudes, but how do we actually extract a formula for the $n$-point amplitude? In particular, we still need to perturbatively expand the SD background $M$ to pull out the `hidden' $n-2$ self-dual gravitons. On a flat background, this was done by transforming the problem to twistor space, where the perturbative expansion can be achieved by making a suitable coordinate transformation on the spinor bundle \cite{Mason:2008jy}. There are a variety of obstructions to doing this with a cosmological constant, including the fact that twistor space no longer fibers over $\P^{1}$. Hence, we instead approach the problem via conformal gravity \emph{before} moving to twistor space.
\subsubsection*{\textit{Relationship with conformal gravity}} It is easy to see that the generating functional for scattering amplitudes in conformal gravity is given by the second term in \eqref{CGA3}. Evaluated on-shell with Einstein scattering states, this is: \be{CGGF} \la h_{2}|h_{1}\ra^{\mathrm{CG}}=\frac{2i}{\varepsilon^{2}}\int_{M} \d\mu \; \psi_{1}^{ABCD}\psi_{2\;ABCD}, \ee where $M$ is again the SD background which encodes the $n-2$ remaining gravitons. By \eqref{CGB2}, this inner product should be equal to some constant multiple of \eqref{MHV1} on-shell (i.e., applying the equations of motion), and this is indeed the case. \begin{propn}\label{CGDS} On-shell, $\la h_{2}|h_{1}\ra=-\frac{3\varepsilon^{2}}{\Lambda\kappa^{2}}\la h_{2}|h_{1}\ra^{\mathrm{CG}}$. \end{propn} \proof \eqref{CGGF} can be rewritten as \begin{equation*} \la h_{2}|h_{1}\ra^{\mathrm{CG}}=\frac{i}{\varepsilon^2}\int_{M}\psi_{1}^{ABCD}\Sigma_{0\;CD}\wedge\psi_{2\;ABEF}\Sigma_{0}^{EF}. \end{equation*} Using the linearized field equation \eqref{LFE2} for $h_{2}$, this becomes \begin{equation*} \la h_{2}|h_{1}\ra^{\mathrm{CG}}=\frac{i}{\varepsilon^2}\int_{M}\psi_{1}^{ABCD}\Sigma_{0\;CD}\wedge\left(\D_{0}\gamma_{2\;AB}-\frac{\Lambda}{3}\sigma_{2\;AB}\right). \end{equation*} Integrating by parts in the first term gives \begin{equation*} -\int_{M}\D_{0}\psi_{1}^{ABCD}\Sigma_{0\;CD}\wedge\gamma_{2\;AB}+\int_{\partial M}\psi_{1}^{ABCD}\Sigma_{0\;CD}\wedge\gamma_{2\;AB}=\int_{\partial M}\psi_{1}^{ABCD}\Sigma_{0\;CD}\wedge\gamma_{2\;AB}, \end{equation*} using lemma \ref{ZRM}. In the second term, a combination of both field equations \eqref{LFE2} for $h_{1}$ and \eqref{LFE1} for $h_{2}$ as well as integration by parts leaves \begin{equation*} -\frac{2\Lambda}{3}\int_{M}\gamma_{1}^{AB}\wedge\gamma_{2\;C(A}\wedge\Sigma_{0\;B)}^{C}+\frac{\Lambda^{2}}{9}\int_{M}\sigma_{1}^{AB}\wedge\sigma_{2\;AB}-\frac{\Lambda}{3}\int_{\partial M}\gamma_{1}^{AB}\wedge\sigma_{2\;AB}. 
\end{equation*} Combining both calculations gives: \begin{multline*} \la h_{2}|h_{1}\ra^{\mathrm{CG}}=\frac{i}{\varepsilon^2}\left(-\frac{2\Lambda}{3}\int_{M}\gamma_{1}^{AB}\wedge\gamma_{2\;C(A}\wedge\Sigma_{0\;B)}^{C}+\frac{\Lambda^{2}}{9}\int_{M}\sigma_{1}^{AB}\wedge\sigma_{2\;AB} \right) \\ -\frac{i}{\varepsilon^2}\left(\int_{\partial M}\psi_{1}^{ABCD}\Sigma_{0\;CD}\wedge\gamma_{2\;AB}-\frac{\Lambda}{3}\int_{\partial M}\gamma^{AB}_{1}\wedge\sigma_{2\;AB}\right) \\ =-\frac{\Lambda \;\kappa^{2}}{3\varepsilon^{2}}\la h_{2}|h_{1}\ra +\mbox{boundary terms}. \end{multline*} So the proof is complete if we can show that the boundary terms vanish. Applying \eqref{LFE2} to the first of these terms leaves us with \begin{equation*} \mbox{boundary terms} \sim \int_{\partial M}\D_{0}\gamma_{1}^{AB}\wedge\gamma_{2\;AB}-\frac{\Lambda}{3}\int_{\partial M}\gamma_{2}^{AB}\wedge\sigma_{1\;AB}-\frac{\Lambda}{3}\int_{\partial M}\gamma_{1}^{AB}\wedge\sigma_{2\;AB}, \end{equation*} with the second and third terms cancelling due to skew symmetry in $h_{1},h_{2}$. Finally, \begin{multline*} \int_{\partial M}\D_{0}\gamma_{1}^{AB}\wedge\gamma_{2\;AB}=\int_{\scri^{+}}\D_{0}\gamma_{1}^{AB}\wedge\gamma_{2\;AB}-\int_{\scri^{-}}\D_{0}\gamma_{1}^{AB}\wedge\gamma_{2\;AB} \\ =-\int_{\scri^{+}}\gamma_{1}^{AB}\wedge\D_{0}\gamma_{2\;AB}-\int_{\scri^{-}}\D_{0}\gamma_{1}^{AB}\wedge\gamma_{2\;AB}=0, \end{multline*} by the fact that $h_{1}|_{\scri^{-}}\in V^{-}$ and $h_{2}|_{\scri^{+}}\in V^{-}$, as required. $\Box$ \medskip At this point, we have established that Einstein gravity MHV amplitudes can be computed via the conformal gravity generating functional, but we still need a practical way of carrying out this calculation explicitly. It turns out that this is provided for us by translating the generating functional to twistor space. For this, we need a twistor action.
\subsubsection{Remarks on $\cN=4$ conformal super-gravity} Before proceeding directly to a discussion of the twistor action for conformal gravity, let us make some brief remarks about how the embedding of Einstein gravity into conformal gravity extends to the supersymmetric setting. Analogues of conformal gravity with extended supersymmetry were first constructed in \cite{Bergshoeff:1980is}, and it is believed that these theories are well-defined for $\cN\leq 4$ (c.f., \cite{Ferrara:1977ij, deWit:1978pd}). As we saw in our discussion of gauge theory, $\cN=4$ supersymmetry is most natural from our perspective since this results in a Calabi-Yau twistor space. The maximally supersymmetric $\cN=4$ conformal supergravity (CSG) comes in two varieties, \emph{minimal} and \emph{non-minimal}, distinguished by the presence or absence of a certain global symmetry. Einstein supergravity embeds into minimal CSG, but \emph{not} into the non-minimal models. The field content of $\cN=4$ CSG consists of the spin-2 conformal gravitons along with bosonic fields $V^{a}_{\mu\;b}$, anti-self-dual tensors $T^{ab}_{\mu\nu}$, scalars $\{E_{ab}, D^{ab}_{cd}, \varphi\}$ and fermions $\{\psi^{a}_{\mu}, \chi^{a}_{bc}, \lambda_{a}\}$, where $a=1,\ldots,4$ is an $\SU(4)$ $R$-symmetry index. \emph{Minimal} $\cN=4$ CSG is characterized by a global $\SU(1,1)$ symmetry acting non-linearly on the complex scalar $\varphi$ \cite{Bergshoeff:1980is}, and is related to the presence of $\cN=4$ Poincar\'e supergravity sitting inside the CSG \cite{Cremmer:1977tt}.
This symmetry is manifested by replacing $\varphi$ with a doublet of complex scalars $\Phi_{\alpha}=(\Phi_{1},\Phi_{2})$ which transform under $\SU(1,1)\times\U(1)$ according to \begin{equation*} \Phi_{\alpha}\mapsto \mathsf{M}_{\alpha}^{\beta}\Phi_{\beta}, \qquad \Phi_{\alpha}\mapsto e^{i\lambda(x)}\Phi_{\alpha}, \qquad \mathsf{M}\in\SU(1,1),\;\lambda\in C^{\infty}(M,\C), \end{equation*} subject to the constraint $\eta^{\alpha\beta}\overline{\Phi}_{\beta}\Phi_{\alpha}=1$ for $\eta$ the quadratic form on $\SU(1,1)$. By gauge-fixing the local $\U(1)$ symmetry, one obtains the scalar $\varphi$ as a parametrization of the coset space $\SU(1,1)/\U(1)$, where $\U(1)$ is the diagonal subgroup. Since we are interested in scattering processes with external states corresponding to conformal gravitons, it is particularly enlightening to consider the effects of this symmetry on the portions of the Lagrangian including the spin-2 fields and the scalar. Clearly, the Lagrangian must contain the $(\mbox{Weyl})^2$ term of the $\cN=0$ action \eqref{CGA1}, but since $\varphi$ is charged under the global $\SU(1,1)$ symmetry, there can be no coupling between the conformal gravitons and this complex scalar. Furthermore, the $\SU(4)_{R}$-symmetry of the remaining fields excludes any other couplings between bosonic or fermionic fields and the Weyl curvature. This leads to a unique Lagrangian \begin{equation*} \cL^{\mathrm{min}}=C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}+\varphi \Box^{2}\bar{\varphi} +\cdots\, , \end{equation*} where the multitude of remaining kinetic and interaction terms will be irrelevant for our purposes. Einstein supergravities at $\cN=4$ can be constructed from minimal CSG \cite{deRoo:1985jh} and so restricting to Einstein scattering states, Maldacena's argument should still apply and we can extract the tree-level Einstein gravity scattering amplitudes (see Figure \ref{CSGs} (\emph{a})). 
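To make the coset structure concrete, here is one standard parametrization (a conventional choice on our part, not spelled out in the references above): taking $\eta=\mathrm{diag}(1,-1)$, the constraint reads $|\Phi_{1}|^{2}-|\Phi_{2}|^{2}=1$, and using the local $\U(1)$ to make $\Phi_{1}$ real and positive, we can solve it as \begin{equation*} \Phi_{1}=\frac{1}{\sqrt{1-|\varphi|^{2}}}, \qquad \Phi_{2}=\frac{\varphi}{\sqrt{1-|\varphi|^{2}}}, \qquad \varphi\equiv\frac{\Phi_{2}}{\Phi_{1}}, \end{equation*} so the gauge-fixed scalar $\varphi$ ranges over the unit disc, the standard realization of the coset $\SU(1,1)/\U(1)$.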
Minimal CSG can be obtained as a gauge theory of the superconformal group $\SU(2,2|4)$. A weaker version of the minimal Lagrangian can also be obtained by coupling abelian $\cN=4$ SYM to a $\cN=4$ CSG background \cite{deRoo:1984gd, deRoo:1985jh} and extracting the UV divergent portion of the partition function \cite{Liu:1998bu, Buchbinder:2012uh}. It has also been shown that minimal $\cN=4$ CSG interacting with a $\SU(2)\times\U(1)$ $\cN=4$ SYM theory is finite and power-counting renormalizable \cite{Fradkin:1981jc, Fradkin:1985am}. If we remove this $\SU(1,1)$ symmetry, then new interaction terms can appear in the Lagrangian resulting in \emph{non-minimal} $\cN=4$ CSG, which was first conjectured to exist in \cite{Fradkin:1983tg, Fradkin:1985am}. Indeed, (local) conformal invariance still allows for terms such as \begin{equation*} \cL^{\mathrm{non-min}}=C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}+\varphi \Box^{2}\bar{\varphi} + f(\varphi) C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}+ig(\varphi)C^{\mu\nu\rho\sigma}C^{*}_{\mu\nu\rho\sigma} +\cdots, \end{equation*} for $f,g$ arbitrary real-analytic functions. For generic choices of these functions, the scalar will provide a source for the Weyl curvature in the bulk, and \emph{vice versa}. At the level of scattering amplitudes, conformal graviton states in the non-minimal theory can interact with the scalar in the bulk via three-point vertices of the form $\varphi (\mbox{Weyl})^2$, so there will be Feynman diagrams for which there is no analogue in Einstein gravity, as illustrated in Figure \ref{CSGs} (\emph{b}). Without a consistent algorithm to subtract these diagrams, Maldacena's argument \emph{cannot} be applied to non-minimal CSG. Non-minimal $\cN=4$ CSG can be obtained by coupling non-abelian $\cN=4$ SYM to the $\cN=4$ CSG background and again extracting the UV divergent partition function. 
While there is some doubt over whether non-minimal conformal supergravity is well-defined at the quantum level \cite{Romer:1985yg, Buchbinder:2012uh}, there is substantial evidence that the twistor-string theory of Berkovits and Witten corresponds to non-minimal $\cN=4$ CSG coupled to $\cN=4$ SYM \cite{Berkovits:2004jj}. Indeed, spurious amplitudes related to the non-minimal coupling between conformal gravitons and scalars were found explicitly in \cite{Dolan:2008gc, Adamo:2012nn}. \begin{figure} \centering \includegraphics[width=3.25 in, height=1.25 in]{CSGs.pdf}\caption{\textit{In minimal} $\cN=4$ \textit{CSG, external gravitons only couple to other gravitons in the bulk} (\emph{a}.); \textit{in the non-minimal model they can couple to the scalar} $\varphi$ (\emph{b}.).}\label{CSGs} \end{figure} \subsection{Twistor Action for Conformal Gravity} In our study of $\cN=4$ SYM, the twistor action arose by translating the structure of the Chalmers-Siegel action to twistor space. We now show that the same thing can be done with the conformal gravity action \eqref{CGA3}. We give this action in Mason's original coordinate-free formulation as well as in a form using an explicit choice of background complex structure, and both constructions naturally generalize to $\cN=4$ to give minimal theories. Restricting to the Einstein subsector gives us both a twistorial formulation for the MHV generating functional \eqref{CGGF}, as well as a candidate twistor action for Einstein gravity itself \cite{Adamo:2013tja}. \subsubsection{$\cN=0$ action} \subsubsection*{\textit{Coordinate-free approach}} Let us begin by deriving a twistor action which avoids any explicit choices of coordinates or background complex structure. The first term in the space-time action \eqref{CGA3} corresponds to the SD sector of solutions to the Bach equations.
By theorem \ref{NLG}, we know that this is equivalent to a twistor space $\CPT$ with integrable complex structure $J$, so the associated Nijenhuis tensor $N_{J}$ must vanish. We can encode this requirement in an action functional by introducing a Lagrange multiplier field $\mathscr{G}\in\Omega^{3,0}(\CPT,\Omega^{1,1})$ and taking \cite{Mason:2005zm} \be{CFTA1} S_{1}[J,\mathscr{G}]=\int_{\CPT}N_{J}\lrcorner\mathscr{G}, \ee which has field equations \begin{equation*} N_{J}=0, \qquad \dbar^{J}\mathscr{G}=0. \end{equation*} This action is invariant under diffeomorphisms as well as $\mathscr{G}\rightarrow \mathscr{G}+\dbar^{J}\gamma$ for $\gamma\in\Omega^{3,0}(\CPT,\Omega^{1,0})$, so we can interpret $\mathscr{G}$ as a cohomology class. The vanishing of $N_{J}$ corresponds on space-time to $\Psi_{ABCD}=0$, which is the first equation of \eqref{CGFE} when $\varepsilon=0$. So to establish that \eqref{CFTA1} describes self-dual conformal gravity, we need to show that $\mathscr{G}$ corresponds to a space-time field satisfying the Bach equation. Write $\mathscr{G}=g\otimes\D^{3}Z$, where $g\in\Omega^{1,1}(\CPT,\cO(-4))$ and $\D^{3}Z$ is the tautologically defined section of $\Omega^{3,0}(\CPT,\cO(4))$. The field equations indicate that $g\in H^{0,1}(\CPT,\Omega^{1,0}(-4))$, so we can apply the Penrose transform. Picking an arbitrary representative of the conformal class, we can construct an array of space-time fields from $g_{\alpha}\D Z^{\alpha}$: \be{ASD1} \Gamma_{\delta ABC}=\left( \begin{array}{c} G^{D}_{ABC} \\ \gamma_{D'ABC} \end{array}\right) = \int_{X}\tau\wedge\lambda_{A}\lambda_{B}\lambda_{C}g_{\delta}, \qquad \nabla^{AA'}\Gamma_{\delta ABC}=0. \ee Recalling that $\nabla$ acts on $\Gamma_{\delta ABC}$ via the local twistor connection as in \eqref{ltc2}, \be{ASD2*} \nabla^{AA'}\Gamma_{\delta ABC}=0 \leftrightarrow \left\{ \begin{array}{c} \nabla^{AA'}G^{D}_{ABC}-i\gamma^{A'D}_{BC} = 0 \\ \nabla^{AA'}\gamma_{D'ABC}-i\Phi^{AA'}_{DD'}G^{D}_{ABC} = 0 \end{array}\right. 
. \ee Now, the Penrose transform of $Z^{\alpha}g_{\delta}$ is given by \begin{equation*} \int_{X}\tau \wedge Z^{\alpha}\lambda_{B}\lambda_{C}g_{\delta}=\left( \begin{array}{c} \Gamma_{\delta ABC} \\ 0 \end{array}\right), \end{equation*} because the restriction to $X\subset\CPT$ implies that $X_{\alpha\beta}Z^{\alpha}=0$, where $X_{\alpha\beta}\in\T_{[\alpha\beta]}$ corresponds to the point $x\in M$ \cite{Mason:1990}. Hence, imposing the usual local twistor gauge-fixing $Z^{\alpha}g_{\alpha}=0$ on twistor space has the consequence $G^{A}_{ABC}=0$ on space-time. Therefore, $G_{DABC}=G_{(DABC)}$ and we can substitute the first z.r.m. equation into the second to obtain \begin{equation*} \nabla^{AA'}\nabla^{D}_{D'}G_{BCDA}+\Phi^{AA'}_{DD'}G^{D}_{ABC}=\left(\nabla^{AA'}\nabla^{D}_{D'}+\Phi^{ADA'}_{D'}\right)G_{ABCD}=0, \end{equation*} which is the required Bach equation. This is precisely what is predicted for a linearized ASD field in conformal gravity on twistor space \cite{Mason:1987}. Hence, \eqref{CFTA1} is indeed the twistor action for self-dual conformal gravity. We still need to describe the ASD interactions, which are given by the second term in \eqref{CGA3}. This is easy though, since we already know that $\mathscr{G}$ encodes the space-time Lagrange multiplier $G_{ABCD}$ in its $(1,1)$-form part $g$. Indeed, since $g$ is a cohomology class, \be{GPent} G_{ABCD}(x)=\int_{X}\sigma_{A}\sigma_{B}\sigma_{C}\sigma_{D}\;g(Z(x,\sigma)), \ee and the interaction term on twistor space becomes: \be{TCGInt} S_{2}[J,\mathscr{G}]=\int_{\PS\times_{M}\PS}\d\mu\;(\sigma_{1}\sigma_{2})^{4}g_{1}\wedge g_{2}, \ee where $\PS\times_{M}\PS\cong M\times X\times X$, $(\sigma_{1}\sigma_{2})$ is the $\SL(2,\C)$-invariant inner product between the homogeneous coordinates on $X$, and $\d\mu$ is a measure on the space of rational curves $X\subset\CPT$. 
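The weighting factor $(\sigma_{1}\sigma_{2})^{4}$ in \eqref{TCGInt} can be seen directly by squaring \eqref{GPent}: schematically (and suppressing overall sign conventions for the $\SL(2,\C)$-invariant bracket), the space-time interaction $\int_{M}G_{ABCD}\,G^{ABCD}$ becomes \begin{equation*} \int_{M}\int_{X\times X}\sigma_{1\,A}\sigma_{1\,B}\sigma_{1\,C}\sigma_{1\,D}\,\sigma_{2}^{A}\sigma_{2}^{B}\sigma_{2}^{C}\sigma_{2}^{D}\;g_{1}\wedge g_{2}=\int_{\PS\times_{M}\PS}\d\mu\;(\sigma_{1}\sigma_{2})^{4}\,g_{1}\wedge g_{2}, \end{equation*} each of the four contracted pairs of spinor indices contributing a single factor of $(\sigma_{1}\sigma_{2})$.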
Of course, this integral must be performed over a \emph{real} four-dimensional slice determined by some reality conditions which single out $\CPT_{\R}$. Hence, we have the full conformal gravity twistor action: \be{CFTA2} S[J,\mathscr{G}]=S_{1}[J,\mathscr{G}]-\varepsilon^{2}S_{2}[J,\mathscr{G}]. \ee The following theorem ensures that this twistor action is as good as the one we used in our study of gauge theory: \begin{thm}[Mason \cite{Mason:2005zm}]\label{MasThm} The twistor action $S[J,\mathscr{G}]$ is classically equivalent to the conformal gravity action \eqref{CGA3} off-shell in the sense that solutions to its Euler-Lagrange equations are in one-to-one correspondence with solutions to the field equations \eqref{CGFE} up to space-time diffeomorphisms. Additionally, upon fixing Woodhouse gauge and Euclidean reality conditions, $S[J,\mathscr{G}]$ is equal to the space-time action. \end{thm} \subsubsection*{\textit{Flat background complex structure}} For the purposes of this review, it will actually be advantageous for us to work with the twistor action using an explicit choice of background complex structure. In particular, we take the background complex structure to be $\dbar$ associated to the flat twistor space $\PT$ of Minkowski space; the complex structure on the curved twistor space is then given by $\dbar_{f}=\dbar+f$, which is integrable by \eqref{contact} whenever \be{Nijenhuis} \dbar^{2}_{f}=\left(\dbar f^{\alpha}+[f,f]^{\alpha}\right)\partial_{\alpha}\equiv N^{\alpha}\partial_{\alpha} =0. \ee Of course, this is not equal to the full Nijenhuis tensor (which is generally a non-polynomial object), but the vanishing of the two quantities is equivalent thanks to the Newlander-Nirenberg theorem in our chosen coordinate frame. 
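To see where the expression \eqref{Nijenhuis} comes from, act with $\dbar_{f}=\dbar+f^{\alpha}\partial_{\alpha}$ twice on a test function $\phi$; a short formal computation (with the convention $[f,f]^{\alpha}\equiv f^{\beta}\wedge\partial_{\beta}f^{\alpha}$, which may differ by a factor of two from other references) gives \begin{equation*} \dbar_{f}^{2}\phi=\dbar\left(f^{\alpha}\partial_{\alpha}\phi\right)+f^{\beta}\wedge\partial_{\beta}\left(\dbar\phi+f^{\alpha}\partial_{\alpha}\phi\right)=\left(\dbar f^{\alpha}+f^{\beta}\wedge\partial_{\beta}f^{\alpha}\right)\partial_{\alpha}\phi, \end{equation*} the terms with two derivatives acting on $\phi$ cancelling pairwise because the $(0,1)$-form legs of $f$ anti-commute.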
The action now becomes a functional of $f\in\Omega^{0,1}(\CPT,T^{1,0}_{\CPT})$ and $g\in\Omega^{1,1}(\CPT,\cO(-4))$, and the only change is with respect to the self-dual contribution: \be{TCGSD} S_{1}[g,f]=\int_{\CPT}\D^{3}Z\wedge g_{\alpha}\wedge N^{\alpha}, \ee where $g_{\alpha}\in\Omega^{0,1}(\CPT, \cO(-5))$, subject to $Z^{\alpha}g_{\alpha}=0$. The self-dual field equations are \be{SDFES} N^{\alpha}=0, \qquad \dbar_{f}g=0, \ee and the gauge freedom is \cite{Mason:2005zm} \begin{equation*} g_{\alpha}\rightarrow g_{\alpha}+\partial_{\alpha}\gamma +\dbar_{f}\chi_{\alpha}, \end{equation*} for $\gamma\in\Omega^{0}(\CPT,\cO(-4))$, $\chi_{\alpha}\in\Omega^{0}(\CPT,\cO(-5))$. As in the coordinate-free setting, this means that we can consider $g_{\alpha}$ to be a cohomology class on $\CPT$, and it again corresponds to the space-time field $G_{ABCD}$ via the Penrose transform. The ASD interactions are still encoded by $S_{2}$ from \eqref{TCGInt}, but now with the understanding that the $\P^1$ fibers are holomorphic with respect to $\dbar_{f}$. In other words, the rational curves in twistor space are constructed by the constraint \eqref{holomap}. We denote this representation of the twistor action by \be{TCG} S[g,f]=S_{1}[g,f]-\varepsilon^{2}S_{2}[g,f], \ee and use it almost exclusively in what follows. While this sacrifices some formal flexibility, it also enables us to be quite explicit with some calculations, as we will see when considering Einstein degrees of freedom. \subsubsection{Minimal $\cN=4$ action} The $\cN=0$ conformal gravity twistor action generalizes naturally to $\cN=4$ supersymmetry. In this setting, the curved twistor space is topologically an open subset of $\P^{3|4}$, and points in the chiral complex space-time $M$ still correspond to rational curves $X\subset\CPT$. 
The twistor map from $\PS$ to $\CPT$ is promoted to: \be{sincid} Z^{I}(x^{\mu},\theta^{Aa}, \sigma_{B})=\left(\lambda_{A}(x,\theta,\sigma), \mu^{A'}(x,\theta,\sigma), \chi^{a}(x,\theta,\sigma)\right), \ee and the canonical holomorphic section of the Berezinian is denoted by $\D^{3|4}Z$. Considering the complex structure to be a finite deformation of the flat one on $\PT$, the data for the twistor action becomes \be{sdata} \dbar_f=\dbar+ f^I\frac\p{\p Z^I} \, , \qquad g=g_I \D Z^I\in \Omega^{1,1}(\CPT)\, , \quad \D Z^I=\rd Z^I-f^I\, . \ee This means that the holomorphic curves $X$ in twistor space are constructed by the supersymmetric analogue of \eqref{holomap} \be{sholomap} \dbar|_{X} Z^{I}(x,\theta,\sigma)=f^{I}(Z). \ee With these structures in play, we can easily write down the $\cN=4$ generalization of the twistor action \eqref{TCG}: \be{minTA1} S_{1}[g,f]=\int_{\CPT}\D^{3|4}Z\wedge g_{I}\wedge \left(\dbar f^{I}+[f,f]^{I}\right), \ee \be{minTA2} S_{2}[g,f]=\int\limits_{\PS\times_{M}\PS}\d\mu\wedge g_{1}\wedge g_{2}, \ee where $\d\mu$ is now promoted to a measure on the $(4|8)$-dimensional space of curves $X\subset\CPT_{\R}$. One interesting consequence of the $\cN=4$ supergeometry is that the conditions $\partial_{I}f^{I}=0$ and $Z^{I}g_{I}=0$ are not sufficient to fix the gauge freedoms \begin{equation*} f^{I}\rightarrow f^{I}+Z^{I}\alpha, \qquad g_{I}\rightarrow g_{I}+\partial_{I}\beta, \end{equation*} for $\alpha,\beta\in\Omega^{0,1}(\CPT,\cO)$. This follows because $\beta$ has homogeneity zero, as opposed to the homogeneity $-4$ gauge transformations from the $\cN=0$ setting. On the $\cN=4$ twistor space, we can expand $g$ in the anti-commuting variables as \begin{equation*} g=g^{0}+\chi^{a}g^{-1}_{a}+\cdots+\frac{\chi^{4}}{4!}g^{-4}, \end{equation*} where each $g^{k}\in\Omega^{0,1}(\CPT,\Omega^{1,0}(k))$. Our calculations at $\cN=0$ already showed us that $g^{-4}$ corresponds to the ASD spinor field $G_{ABCD}$ which satisfies the Bach equation on space-time.
We can use the Penrose transform and local twistor formalism to investigate the other components of $g$ with $\cN=4$. For instance, consider $g^{0}\in H^{0,1}(\CPT, \Omega^{1,0})$, the Penrose transform of which was first described in \cite{Mason:1990}. Write $g^{0}=a_{\alpha}\D Z^{\alpha}$ for $a_{\alpha}\in H^{0,1}(\CPT, \cO(-1))$. Choosing an arbitrary conformal frame, the Penrose transform gives: \be{cptr1} \Gamma_{\alpha B'}=\left( \begin{array}{c} \Psi^{A}_{B'} \\ \Phi_{A'B'} \end{array} \right) =\int_{X}\tau\wedge\frac{\partial a_{\alpha}}{\partial \mu^{B'}}, \qquad \nabla^{BB'}\Gamma_{\alpha B'}=0. \ee Using the local twistor connection, the z.r.m. equations of \eqref{cptr1} can be written on space-time as: \begin{equation*} \left\{\begin{array}{ccc} \nabla^{BB'}\Psi^{A}_{B'}-i\epsilon^{BA}\Phi^{B'}_{B'} & = & 0 \\ \nabla^{BB'}\Phi_{A'B'} & = & 0 \end{array} \right. , \end{equation*} while the Penrose transform of $Z^{\alpha}a_{\alpha}$ gives the conditions $\nabla_{BB'}\Psi^{B}_{A'}-i\epsilon_{B'A'}\Phi^{B'}_{B'}=0$ and $\Phi^{B'}_{B'}=0$. This means that we can write $\Psi_{AA'}=\Box\varphi$, and the content of \eqref{cptr1} is reduced to $\Box^{2}\varphi =0$, the z.r.m. equation for a conformal scalar. An identical procedure will give the following equations for the remaining components: \begin{eqnarray} g^{-1}_{a} \Rightarrow & \Box\nabla_{BB'}\psi_{a}^{B}-i\nabla_{AA'}\left(\Phi^{AA'}_{CB'}\psi_{a}^{C}\right) =0, \\ g^{-2}_{ab} \Rightarrow & \left(\nabla_{AA'}\nabla_{BB'}+\Phi_{ABA'B'}\right)T_{ab}^{AB} = 0, \\ g^{-3\;a} \Rightarrow & \left(\nabla_{BD'}\nabla^{AA'}\nabla_{CC'}+\Phi^{AA'}_{CC'}\nabla_{BD'}\right)\eta^{a\;D'}_{AC} = 0. \end{eqnarray} These correspond to the linearized spinor, ASD tensor, and conformal gravitino z.r.m. equations of $\cN=4$ CSG, so $g$ is the natural supersymmetric extension of the $\cN=0$ Lagrange multiplier field on twistor space.
This means that $g$ defines a chiral superfield on space-time: \be{s-tfield} \mathcal{G}(x,\theta)=\int_{X} g(Z(x,\theta,\sigma)), \ee where $\mathcal{G}$ has an expansion like: \begin{equation*} \mathcal{G}(x,\theta)=\varphi +\theta^{a}_{A}\psi_{a}^{A}+\cdots +\theta^{4\;ABCD}G_{ABCD}+\cdots , \end{equation*} and the space-time translation of our $\cN=4$ twistor action will look like \be{CSUGRAct} S[\mathcal{W},\mathcal{G}]=\int_{M}\d\mu \left(\mathcal{W}(x,\theta)\;\mathcal{G}(x,\theta)-\varepsilon^{2}\mathcal{G}(x,\theta)^{2}\right) \rightarrow \frac{1}{\varepsilon^2}\int_{M}\d\mu \; \mathcal{W}(x,\theta)^2, \ee where $\mathcal{W}(x,\theta)$ is a chiral superfield which, on-shell, is a Lorentz scalar encoding the ASD $\cN=4$ Weyl multiplet (c.f., \cite{Bergshoeff:1980is}). It has been shown that this superfield action has the correct linear reduction for $\cN=4$ CSG \cite{Berkovits:2004jj} and must correspond to the \emph{minimal} theory since there are no cubic (or higher) couplings between the dilaton and Weyl curvature. This is evident directly from twistor space as well, since we have a $\U(1)$-symmetry \begin{equation*} g\rightarrow e^{4i\alpha}g, \qquad \chi^{a}\rightarrow e^{-i\alpha}\chi^{a}, \end{equation*} which eliminates all $\varphi (\mbox{Weyl})^2$ couplings at the level of twistor representatives.\footnote{This actually corresponds to a degenerate limit of the $\SU(1,1)$ symmetry of minimal $\cN=4$ CSG; see \cite{Adamo:2013tja} for additional discussion. This does not affect our ability to isolate Einstein degrees of freedom, since Einstein supergravity still forms a subsector of this degenerate theory \cite{Cremmer:1977tt}.} Since all our considerations in this review will be at tree-level for gravity, there is no particularly compelling reason to consider the $\cN=4$ twistor action as opposed to the $\cN=0$ action.
However, the supersymmetric action is `cleaner' in the sense that $S_{2}[g,f]$ does not need an explicit weighting factor of $(12)^{4}$, and the Calabi-Yau nature of the twistor space is also advantageous. Hence, we will often choose to work with the $\cN=4$ framework in what follows when performing explicit calculations. \subsection{Einstein gravity} Given the twistor action for conformal gravity (or its minimal $\cN=4$ extension), we now want to extract the Einstein subsector using the Maldacena argument outlined earlier. In particular, by restricting to Einstein degrees of freedom on a de Sitter background, the conformal gravity twistor action should encode the scattering amplitudes of general relativity. We perform this reduction here explicitly, and show that it gives a twistorial expression for the MHV generating functional of proposition \ref{CGDS}. Additionally, this reduction allows us to conjecture a form of the twistor action for Einstein gravity itself. \subsubsection{Reduction to Einstein degrees of freedom} The first step in reducing the degrees of freedom in the twistor action to Einstein gravity is to break conformal invariance. This is accomplished by introducing an infinity twistor, just as in $\M$. Since we work on a background with cosmological constant, the infinity twistor differs from \eqref{infty} by now having rank four: \be{inftyCC} I_{\alpha\beta}=\left( \begin{array}{cc} \epsilon^{AB} & 0 \\ 0 & \Lambda \epsilon_{A'B'} \end{array} \right), \qquad I^{\alpha\beta}=\left( \begin{array}{cc} \Lambda \epsilon_{AB} & 0 \\ 0 & \epsilon^{A'B'} \end{array}\right). \ee These can be generalized easily to $\cN=4$ supersymmetry, with the fermionic components of the infinity twistor corresponding to a gauging of the R-symmetry \cite{Wolf:2007tx}. Since we will not be concerned with this gauging, we leave these fermionic components implicit.
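As a quick check on \eqref{inftyCC}, contract two twistors whose components are adapted to this block decomposition, $Z_{i}^{\alpha}=(\lambda_{i\,A},\mu_{i}^{A'})$: \begin{equation*} I_{\alpha\beta}Z_{1}^{\alpha}Z_{2}^{\beta}=\epsilon^{AB}\lambda_{1\,A}\lambda_{2\,B}+\Lambda\,\epsilon_{A'B'}\mu_{1}^{A'}\mu_{2}^{B'}, \end{equation*} so as $\Lambda\rightarrow 0$ only the $\lambda$-contraction survives, and we recover the degenerate, rank-two infinity twistor \eqref{infty} of the asymptotically flat setting.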
We will also adopt the notation: \begin{equation*} I_{IJ}A^{I}B^{J}\equiv\la A,B\ra, \qquad I^{IJ}A_{I} B_{J}\equiv [A,B], \end{equation*} for contractions with the infinity twistor (with identical conventions for the $\cN=0$ twistors). Theorem \ref{NLG} tells us that an Einstein solution corresponds to a weighted contact structure on $\CPT$ specified by the 1-form $\tau$. The infinity twistor gives a canonical structure to $\tau$, and also defines a (weighted) Poisson structure and bracket on $\CPT$: \be{infstruct} \tau= \la Z,\D Z\ra, \qquad \Pi=I^{IJ}\partial_{I}\wedge\partial_{J}, \qquad \{f,g\}=[\partial f,\partial g]. \ee The complex deformation $\dbar_{f}$ must now respect both the Poisson and contact structures; this means that $f$ must be Hamiltonian with respect to $\Pi$: \begin{equation*} \cL_{f}\Pi =0\: \Rightarrow f=[\partial h, \partial], \qquad h\in\Omega^{0,1}(\CPT,\cO(2)). \end{equation*} Note that if $h=\dbar\gamma$, then $f=\dbar(\Pi(\gamma))$ is pure gauge, so we can take $h$ to be a cohomology class. In the $\cN=0$ setting, the Penrose transform tells us that this will correspond to a graviton of helicity $+2$. Feeding this into \eqref{TCGSD}, we obtain: \begin{multline}\label{EinR1} S_{1}[g,f]\rightarrow S_{1}[g,h]=\int_{\CPT}\D^{3}Z\wedge g_{\alpha}\wedge I^{\alpha\beta}\partial_{\beta}\left(\dbar h +\frac{1}{2} \left\{h,h\right\}\right) \\ =\int_{\CPT}\D^{3}Z\wedge I^{\alpha\beta}\partial_{\alpha}g_{\beta}\wedge\left(\dbar h +\frac{1}{2} \left\{h,h\right\}\right), \end{multline} with the last line following via integration by parts. We can identify $I^{\alpha\beta}\partial_{\alpha}g_{\beta}$ as the other graviton in the Einstein reduction: \begin{lemma} For $g_{\alpha}\in H^{0,1}(\CPT, \cO(-5))$, the Penrose transform of $I^{\alpha\beta}\partial_{\alpha}g_{\beta}$ can be identified with a graviton of helicity $-2$. \end{lemma} \proof Recall the Penrose transform of $g_{\alpha}$ given by \eqref{ASD1}. 
In the de Sitter conformal structure, the z.r.m. equations \eqref{ASD2*} become: \be{ASD2} \nabla^{AA'}\Gamma_{\delta ABC}=0 \leftrightarrow \left\{ \begin{array}{c} \nabla^{AA'}G^{D}_{ABC}-i\gamma^{A'D}_{BC} = 0 \\ \nabla^{AA'}\gamma_{D'ABC}-i\Lambda\epsilon^{A'}_{D'}G^{A}_{ABC} = 0 \end{array}\right. . \ee Using the fact that $G_{ABCD}=G_{(ABCD)}$, we can immediately reduce these to \begin{eqnarray} \nabla^{AA'}G^{D}_{ABC}-i\gamma^{A'D}_{BC} & = & 0, \label{zrm1}\\ \nabla^{AA'}\gamma_{D'ABC} & = & 0 . \label{zrm2} \end{eqnarray} We now want to lower the homogeneity of $g_{\alpha}$ by applying a twistorial derivative; the Penrose transform of such an operation obeys \cite{Mason:1990}: \begin{multline*} \partial_{\alpha}g_{\beta}\xrightarrow{\mathrm{Penrose}\;\mathrm{transform}}\Phi_{\alpha\beta CDEF}=\left( \begin{array}{c} 3\epsilon^{A}_{(C}\Gamma_{\beta DEF)} \\ -i\nabla_{CA'}\Gamma_{\beta DEF} \end{array}\right) \\ =\left( \begin{array}{cc} 3\epsilon^{A}_{(C}G^{B}_{DEF)} & 3\epsilon^{A}_{(C}\gamma_{|B'|DEF)} \\ -i\nabla_{CA'}G^{B}_{DEF}+\epsilon^{B}_{C}\gamma_{A'DEF} & -i\nabla_{CA'}\gamma_{B'DEF}+\Lambda\epsilon_{A'B'}G_{CDEF} \end{array}\right). \end{multline*} Using \eqref{inftyCC}, we can deduce: \be{ASD3} I^{\alpha\beta}\partial_{\alpha}g_{\beta}\xrightarrow{\mathrm{Penrose}\;\mathrm{transform}}-\Lambda G_{ABCD}-i\nabla_{AA'}\gamma^{A'}_{BCD}. \ee It suffices to show that this obeys the spin-2 z.r.m. field equation for an ASD field. Using \eqref{zrm1}, we have: \begin{equation*} I^{\alpha\beta}\partial_{\alpha}g_{\beta}\xrightarrow{\mathrm{Penrose}\;\mathrm{transform}}\psi_{ABCD}\equiv -\Lambda G_{ABCD}-\nabla_{AA'}\nabla^{EA'} G_{BECD}. \end{equation*} Now, note that $\nabla_{AA'}\nabla^{EA'}=\frac{1}{2}\epsilon^{E}_{A}\Box+\epsilon^{EF}\Box_{AF}$, where $\Box_{AF}=\nabla_{A'(A}\nabla^{A'}_{F)}$. 
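This identity is just the decomposition of $\nabla_{AA'}\nabla^{EA'}$ into its trace and symmetric parts over the unprimed indices (with the convention $\Box=\nabla_{AA'}\nabla^{AA'}$, and up to the usual sign ambiguities in raising and lowering indices with $\epsilon$): \begin{equation*} \nabla_{AA'}\nabla^{EA'}=\frac{1}{2}\epsilon^{E}_{A}\,\nabla_{FA'}\nabla^{FA'}+\nabla_{(A|A'|}\nabla^{E)A'}=\frac{1}{2}\epsilon^{E}_{A}\Box+\epsilon^{EF}\Box_{AF}. \end{equation*}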
This leaves us with \begin{multline*} \psi_{ABCD}=-\Lambda G_{ABCD}-\frac{1}{2}\Box G_{ABCD}-\epsilon^{EF}\Box_{AF} G_{BECD} \\ =-\Lambda G_{ABCD}-\frac{1}{2}\Box G_{ABCD}+\Lambda\left( G_{ABCD}-G_{ABCD}-2G_{ABCD}+G_{ABCD}+G_{ABCD}\right)\\ =-\Lambda G_{ABCD}-\frac{1}{2}\Box G_{ABCD}. \end{multline*} Using \eqref{zrm1}, it follows that \begin{equation*} \nabla^{AA'}\psi_{ABCD}=-i\left(\Lambda\gamma^{A'}_{BCD}+\frac{1}{2}\Box\gamma^{A'}_{BCD}\right). \end{equation*} Now, \eqref{zrm2} tells us that $\nabla^{AA'}\gamma_{D'ABC}=0$, so any higher derivatives will also vanish. In particular, \begin{equation*} \nabla_{DA'}\nabla^{AA'}\gamma_{D'ABC}=\frac{1}{2}\Box\gamma_{D'DBC}+\Lambda\gamma_{D'DBC}=0, \end{equation*} which immediately implies that $\nabla^{AA'}\psi_{ABCD}=0$. Since this is the spin-2 ASD zero-rest-mass field equation, the proof is complete. $\Box$ \medskip The Penrose transform tells us that we can represent $I^{\alpha\beta}\partial_{\alpha}g_{\beta}$ by an element of $H^{0,1}(\CPT,\cO(-6))$. But given any $\tilde{h}\in H^{0,1}(\CPT,\cO(-6))$ we can also write $g_{\alpha}=I_{\alpha\gamma}Z^{\gamma}\tilde{h}$, which obeys $g_{\alpha}\in H^{0,1}(\CPT,\cO(-5))$ and $Z^{\alpha}g_{\alpha}=0$. Hence, \eqref{EinR1} becomes: \begin{multline*} S_{1}[g,h]\rightarrow S_{1}[\tilde{h},h]=\int_{\CPT}\D^{3}Z\wedge I^{\alpha\beta}\partial_{\alpha}\left(I_{\beta\gamma}Z^{\gamma}\tilde{h}\right)\wedge\left(\dbar h +\frac{1}{2} \left\{h,h\right\}\right) \\ = 2\Lambda \int_{\CPT}\D^{3}Z\wedge\tilde{h}\wedge\left(\dbar h +\frac{1}{2} \left\{h,h\right\}\right). \end{multline*} This process goes through in exactly the same fashion for the self-dual $\cN=4$ action; the only difference is that $h$ now encodes the positive helicity graviton \emph{multiplet}, and $\tilde{h}\in H^{0,1}(\CPT,\cO(-2))$ now encodes the negative helicity multiplet. 
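The factor of $2\Lambda$ can be checked by a homogeneity count (schematically, with overall signs depending on $\epsilon$-conventions): the block form \eqref{inftyCC} implies $I^{\alpha\beta}I_{\beta\gamma}=\pm\Lambda\,\delta^{\alpha}_{\gamma}$, so for $\tilde{h}$ of homogeneity $-6$ in the $\cN=0$ setting, \begin{equation*} I^{\alpha\beta}\partial_{\alpha}\left(I_{\beta\gamma}Z^{\gamma}\tilde{h}\right)=\pm\Lambda\left(\delta^{\alpha}_{\alpha}+Z^{\alpha}\partial_{\alpha}\right)\tilde{h}=\pm\Lambda\,(4-6)\,\tilde{h}=\mp 2\Lambda\,\tilde{h}, \end{equation*} using Euler's relation $Z^{\alpha}\partial_{\alpha}\tilde{h}=-6\tilde{h}$. At $\cN=4$ the trace $\delta^{I}_{I}$ becomes a supertrace, $4-4=0$, while $\tilde{h}$ now has homogeneity $-2$, so the same factor of $2\Lambda$ emerges.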
The resulting action is \be{EinR2} S_{1}[g,f]\rightarrow S_{1}[\tilde{h},h]=2\Lambda\int_{\CPT}\D^{3|4}Z\wedge\tilde{h}\wedge\left(\dbar h +\frac{1}{2} \left\{h,h\right\}\right). \ee As expected, this is precisely the self-dual twistor action for Einstein gravity, up to the factor of $\Lambda$ required by \eqref{CGB2} \cite{Mason:2007ct, Adamo:2012nn}. The corresponding reduction for the second term of the twistor action follows easily: \be{EinR3} S_{2}[g,f]\rightarrow S_{2}[\tilde{h},h]=\int\limits_{\PS\times_{M}\PS} \d\mu\wedge\tau_{1}\wedge\tau_{2}\wedge\tilde{h}_{1}\wedge \tilde{h}_{2}, \ee using the $\cN=4$ formalism. So the reduction of the conformal gravity twistor action to Einstein wavefunctions is simply \be{EinCGTA} S[\tilde{h},h]=S_{1}[\tilde{h},h]-\varepsilon^{2}S_{2}[\tilde{h},h]. \ee The remaining diffeomorphism freedom on $\CPT$ is captured by the transformations: \begin{equation*} Z^{\alpha}\rightarrow Z^{\alpha}+\left\{Z^{\alpha},\chi\right\}, \qquad h\rightarrow h+\dbar\chi +\left\{h,\chi\right\}, \end{equation*} for $\chi$ a weight $+2$ function. Now observe that we have arrived at a twistorial expression for the generating functional of MHV amplitudes in Einstein gravity. In particular, we know that the conformal gravity generating functional is given by $S_{2}[g,f]$, so proposition \ref{CGDS} tells us that \be{TGenF} \la \tilde{h}_{2}|\tilde{h}_{1}\ra = -\frac{3\varepsilon^{2}}{\Lambda\kappa^{2}}S_{2}[\tilde{h},h], \ee where $S_{2}$ is given by \eqref{EinR3} for $\cN=4$, or with an additional factor of $(12)^4$ for $\cN=0$. The positive helicity gravitons of the amplitude are encoded by the non-linear SD background $M$, which serves as the space of rational curves $X$ in twistor space constructed by solving \eqref{sholomap} on Einstein states: \begin{equation*} \dbar|_{X} Z^{I}=I^{IJ}\partial_{J}h(Z). 
\end{equation*} \subsubsection{Einstein twistor actions} We can go beyond simply using the reduction to Einstein states to write down the amplitude generating functional, though. Dividing \eqref{EinCGTA} by a power of $\Lambda$ in accordance with \eqref{CGB2}, we arrive at a functional which is a candidate twistor action for Einstein gravity itself. In particular, for $\cN=4$ we have \cite{Adamo:2013tja}: \begin{multline}\label{EinTA4} S^{\mathrm{Ein}}_{\cN=4}[\tilde{h},h]=\int_{\CPT}\D^{3|4}Z\wedge\tilde{h}\wedge\left(\dbar h+\frac{1}{2}\left\{h,h\right\}\right) \\ -\frac{\varepsilon^{2}}{\Lambda \kappa^{2}}\int\limits_{\PS\times_{M}\PS}\d\mu\wedge\tau_{1}\wedge\tau_{2}\wedge\tilde{h}_{1}\wedge\tilde{h}_{2}. \end{multline} This action has the correct self-dual reduction when $\varepsilon=0$ \cite{Mason:2007ct}, is obtained directly from the embedding of Einstein gravity into conformal gravity, is well-defined off-shell in any gauge, and (as we show in Section \ref{Chapter6}) produces the correct MHV amplitudes.\footnote{The only prior proposal for such an action, given in \cite{Mason:2008jy}, is really a generating functional for flat-space MHV amplitudes of Einstein gravity in `BGK' form. It does not extend to space-times with cosmological constant and may not even be diffeomorphism invariant.} While all of these facts indicate that \eqref{EinTA4} is a correct proposal, there is currently no known analogue of theorem \ref{MasThm} for this Einstein action that would \emph{prove} that it corresponds to Einstein gravity. The basic reason for this is that we arrived at \eqref{EinTA4} by using our explicit choice of background complex structure. While this resulted in a well-defined action functional, the geometric meaning of the terms in the Einstein twistor action is no longer clear. In particular, how are we to interpret the self-dual action? It certainly does not contain a Nijenhuis tensor (even in some special coordinate frame).
This should be contrasted against \eqref{CFTA1} for conformal gravity, where the geometrical meaning is clear and no background complex structure has been chosen. The proposed Einstein twistor action has a field equation analogous to $N_{J}=0$: \be{GREOM1} \D^{3|4}Z\wedge\left(\dbar h+\frac{1}{2}\left\{h,h\right\}\right)=\frac{\varepsilon^{2}}{\Lambda\kappa^{2}}\;\d\mu\wedge\tau\;\int_{X_1}\tilde{h}_{1}\wedge\tau_{1}, \ee where $X_1$ is the rational curve in $\CPT_{\R}$ (fixed by the reality conditions) which contains $Z$. If one could show that this was a consistent subset of the field equations of the $\cN=4$ CSG twistor action, then it would prove that \eqref{EinTA4} is correct by the Maldacena argument. A related approach would be to show that the Feynman rules of the two twistor actions are consistent with respect to Maldacena's argument; this would show that \eqref{EinTA4} is correct perturbatively. Finally, let us point out that our Einstein twistor action has natural generalizations which should account for supergravities with $\cN\leq 8$. For $\cN=0$ general relativity, we have \begin{multline}\label{EinTA0} S^{\mathrm{Ein}}_{\cN=0}[\tilde{h},h]=\int_{\CPT}\D^{3}Z\wedge\tilde{h}\wedge\left(\dbar h+\frac{1}{2}\left\{h,h\right\}\right) \\ -\frac{\varepsilon^{2}}{\Lambda \kappa^{2}}\int\limits_{\PS\times_{M}\PS}\d\mu\;(\sigma_{1}\sigma_{2})^{4}\;\tilde{h}_{1}\wedge\tau_{1}\wedge\tilde{h}_{2}\wedge\tau_{2}. \end{multline} For $\cN=8$ supersymmetry, twistor space is topologically $\P^{3|8}$ and the single graviton multiplet is encoded by $\mathcal{H}\in\Omega^{0,1}_{\CPT}(2)$, which incorporates the negative helicity graviton in the term $\chi^{8}\tilde{h}$. 
This leads to an action: \begin{multline}\label{EinTA8} S^{\mathrm{Ein}}_{\cN=8}[\mathcal{H}]=\int_{\CPT}\D^{3|8}Z\wedge \mathcal{H}\wedge\left(\dbar \mathcal{H}+\frac{1}{3}\left\{\mathcal{H},\mathcal{H}\right\}\right) \\ -\frac{\varepsilon^{2}}{\Lambda \kappa^{2}}\int\limits_{\PS\times_{M}\PS}\d\mu\;\frac{\mathcal{H}_{1}\wedge\tau_{1}\wedge \mathcal{H}_{2}\wedge\tau_{2}}{(\sigma_{1}\sigma_{2})^{4}}. \end{multline} \subsection{Non-minimal Twistor Actions} Before proceeding to study the Einstein reduction of the conformal gravity twistor action, let us make some remarks on the possibility of formulating \emph{non-minimal} $\cN=4$ CSG in twistor space. Since we cannot consistently embed Einstein supergravity in such a theory, such a twistor action won't be useful in obtaining Einstein amplitudes. Hence, this subsection can be treated as a curiosity and simply skipped over by the reader who is not interested. We outline here a proposal for a twistor action describing a particular version of non-minimal $\cN=4$ CSG due to Berkovits and Witten \cite{Berkovits:2004jj}. While not attempting to prove this proposal, we argue that its perturbation theory will produce all of the expected tree-level scattering amplitudes. Of course, there are unresolved questions as to whether such a theory is well-defined at the quantum level \cite{Romer:1985yg, Buchbinder:2012uh}, but all of our considerations here will be classical. Non-minimal versions of $\cN=4$ CSG are highly non-unique: arbitrary analytic functions can couple the scalar $\varphi$ to the conformal gravitons of the theory. This can also be captured at the level of a chiral superspace action. In the minimal case, we saw that the action \eqref{CSUGRAct} served to define a chiral superspace action in terms of $\mathcal{W}$. 
However, since $\mathcal{W}$ has conformal weight zero, an action of the form \begin{equation*} S[\mathcal{W}]=\int_{M}\d\mu\;F(\mathcal{W}) +\int_{\bar{M}}\d\bar{\mu}\;\overline{F(\mathcal{W})}, \end{equation*} where $\bar{M}$ is the anti-chiral super-manifold, will be conformal and supersymmetric for \emph{any} holomorphic function $F$. While $F(\mathcal{W})=\mathcal{W}^2$ corresponds to the minimal theory, other choices clearly lead to interactions between the scalars and conformal gravitons. For instance, $F(\mathcal{W})=\mathcal{W}^3$ will clearly give a Lagrangian term $\varphi \Psi^{ABCD}\Psi_{ABCD}$. The twistor-string theory of Berkovits and Witten appears to correspond to a very particular choice of non-minimal $\cN=4$ CSG, with holomorphic function $F(\mathcal{W})=e^{2\mathcal{W}}$ \cite{Berkovits:2004jj}. We refer to this as Berkovits-Witten CSG, or BW-CSG for short. As a classical $\cN=4$ theory, it is easy to distinguish BW-CSG from the minimal theory by looking at its scattering amplitudes. In the twistor-string theory for BW-CSG one finds a degree zero three-point amplitude of the form \cite{Berkovits:2004jj, Dolan:2008gc, Adamo:2012nn}: \be{BW3pt} \int \D^{3|4}Z\wedge\left(\partial_{K}f^{I}_{1}\partial_{I}f_{2}^{J}\partial_{J}f^{K}_{3} -\partial_{J}f^{I}_{1}\partial_{K}f_{2}^{J}\partial_{I}f_{3}^{K}\right). \ee Applying the Penrose transform, it is easy to see that this amplitude corresponds to a term $\bar{\varphi} \widetilde{\Psi}^{A'B'C'D'}\widetilde{\Psi}_{A'B'C'D'}$ in the space-time action. Similarly, at degree one, there are amplitudes with an arbitrary number of $g$-insertions; at three-points, this provides the parity conjugate of \eqref{BW3pt}. 
The $n$-point version of this amplitude is clearly generated by the chiral part of the space-time action: \begin{equation*} \int_{M}\d\mu\;\exp\left(\mathcal{W}(x,\theta)\right)=\sum_{n=2}^{\infty}\int_{M^{0}}\d\mu^{0}\;\varphi^{n-2}\;\Psi^{ABCD}\Psi_{ABCD}+\cdots, \end{equation*} where $\d\mu^{0}$ denotes the measure on the bosonic body $M^0$. Parity invariance demands that we therefore have $n$-point analogues of \eqref{BW3pt}, coming from the anti-chiral part of the space-time action. Let us try to find a corresponding twistor action: our strategy is to require that the twistorial theory produce the tree-level scattering amplitudes of BW-CSG. To begin, we note that BW-CSG still has an anti-MHV three-point amplitude (like the minimal theory); this comes from the self-dual twistor action we had before: \be{BW-SD} S_{1}[g,f]=\int_{\CPT}\D^{3|4}Z\wedge g_{I}\wedge\left(\dbar f^{I}+[f,f]^{I}\right). \ee Similarly, the twistorial version of $\int \d\mu e^{\mathcal{W}}$ is an easy generalization of \eqref{minTA2}: \be{BWchiral} S^{\mathrm{chiral}}[g,f]=\int_{M}\d\mu\;\exp\left(\int_{X} g\right). \ee If we expand in fermionic variables, it is clear that on space-time this is the chiral portion of the action \begin{equation*} S^{\mathrm{chiral}}\sim \int \d\mu\; \exp(\varphi)\;\Psi^{ABCD}\Psi_{ABCD}+\cdots, \end{equation*} as expected. We still need to obtain the parity conjugates of the amplitudes generated by \eqref{BWchiral}. Consider a holomorphic Chern-Simons theory on the tangent bundle $T^{1,0}_{\CPT}$: \be{hCS} S^{\mathrm{hCS}}[g,f]=\int_{\CPT}\D^{3|4}Z\wedge\tr\left(f\wedge\dbar f+\frac{2}{3}f\wedge f\wedge f\right). \ee Clearly, the cubic term in this action leads to the three-point amplitude \eqref{BW3pt} of BW-CSG. The quadratic term in \eqref{BW-SD} leads to a $g-f$-propagator, so we can tie any number of $\bar{\mbox{MHV}}$-vertices onto \eqref{BW3pt} to form an $n$-point amplitude which has all $f$ external states.
These all-$f$ amplitudes form the parity-conjugate set to the all-$g$ amplitudes generated by \eqref{BWchiral}. Hence, we conjecture that the twistor action \be{BWTA} S^{\mathrm{BW-CSG}}[g,f]=S_{1}[g,f]+S^{\mathrm{hCS}}[g,f]-\varepsilon^{2}S^{\mathrm{chiral}}[g,f], \ee should be (classically) equivalent to the non-minimal $\cN=4$ CSG of Berkovits and Witten. Of course, our argument relies entirely upon the fact that \eqref{BWTA} has the same tree amplitudes as BW-CSG. Furthermore, it is rather unfortunate that the anti-chiral portion of the space-time action is encoded only implicitly (i.e., we do not have an explicit $\exp(\bar{\mathcal{W}})$ term on twistor space). In a sense, this is to be expected because parity invariance is often obscured in twistor space \cite{Witten:2004cp}. \section{Gravity Tree Amplitudes in Twistor Space} \label{Chapter6} In the previous section, we saw how the embedding of Einstein gravity into conformal gravity was manifested at the level of twistor actions. Now we operationalize these insights to actually compute scattering amplitudes in Einstein gravity on both de Sitter and flat backgrounds. Our particular focus will be on the MHV amplitude, with two negative helicity gravitons (or $\cN=4$ graviton multiplets) and the rest positive helicity. To proceed, we first develop the perturbation theory associated to our twistor actions, identifying the propagators and vertices just as we did in the Yang-Mills case. The main difference from gauge theory arises in the complicated structure of the vertices, which require their own perturbative expansion in terms of a diagram calculus on $\P^1$. Applying this formalism, we show that the vertices of the twistor actions (for both \eqref{EinCGTA} and \eqref{EinTA4}) correspond to the MHV amplitudes, for which we obtain an expression for any value of the cosmological constant. In the flat-space limit, we show that this limits onto Hodges' formula for the MHV amplitude \cite{Hodges:2012ym}. 
Finally, we provide an alternative formula for the MHV amplitude based on BCFW recursion in twistor space. \subsection{Feynman Rules} \label{CGPerT} We have two routes open to us for computing Einstein gravity amplitudes on twistor space: via the conformal gravity action \eqref{minTA1}-\eqref{minTA2}, or via the proposed Einstein gravity action \eqref{EinTA4}.\footnote{For ease of notation, we will work with the $\cN=4$ formalism. As ever, the $\cN=0$ content can be extracted by a fermionic integral.} In either case, we need to develop the Feynman rules associated to the twistor action. In our study of $\cN=4$ SYM, we saw that the CSW gauge (an axial gauge given by a reference twistor $Z_{*}$) was optimal for performing amplitude calculations. For the conformal gravity twistor action, the CSW gauge is a choice of coordinates and gauge for $g$ such that the anti-holomorphic form components of $f$ and $g$ in the direction of $Z_{*}$ vanish: \be{CGCSW-gauge} \overline{Z_*\cdot\frac{\partial}{\partial Z}} \lrcorner f=0=\overline{Z_*\cdot\frac{\partial}{\partial Z}} \lrcorner g \, , \ee with identical restrictions on $h$ and $\tilde h$ in the Einstein case. As in the gauge theory case, this eliminates the cubic vertex from the self-dual portion of the twistor action, and leaves us with: \be{GFCG} S[g,f]=\int_{\CPT}\D^{3|4}Z\wedge g_{I}\wedge\dbar f^{I} -\varepsilon^{2}\int\limits_{\PS\times_{M}\PS}\d\mu\; g_{1}\wedge g_{2}, \ee for $\cN=4$ CSG and \be{GFEin} S[\tilde{h},h]=\int_{\CPT}\D^{3|4}Z\wedge\tilde{h}\wedge\dbar h - \frac{\varepsilon^{2}}{\Lambda \kappa^{2}}\int\limits_{\PS\times_{M}\PS}\d\mu\wedge\tau_{1}\wedge\tau_{2}\wedge\tilde{h}_{1}\wedge\tilde{h}_{2}, \ee for the Einstein action. In each case, we see that the kinetic term is provided by the gauge-fixed portion of the self-dual action while the vertices must be generated by the remaining non-self-dual interactions.
Equation \eqref{TGenF} tells us that (upon restricting to Einstein states and dividing by the appropriate power of $\Lambda$) this interaction term should be the generating functional of MHV amplitudes. In other words, the second term in \eqref{GFCG} or \eqref{GFEin} plays the role of $\log\det(\dbar+\cA)$ from $\cN=4$ SYM. Clearly, we need a method for perturbatively expanding these terms as generating functionals to obtain the vertices. Since this structure is universal (i.e., arises for both actions), we address it after first discussing the propagator. \subsubsection{Twistor propagators} For the proposed Einstein gravity twistor action, the kinetic term is simply \begin{equation*} S^{\mathrm{kin}}[\tilde{h},h]=\int_{\CPT}\D^{3|4}Z\wedge\tilde{h}\wedge\dbar h, \end{equation*} so we know that the propagator will look like a distributional form $\bar{\delta}^{2|4}$ with appropriate weights. Indeed, the correct propagator in CSW gauge is easily seen to be \be{Einprop} \Delta^{\mathrm{Ein}}(Z_{1},Z_{2})= \bar{\delta}^{2|4}_{2,0,-2}(Z_{1},*,Z_{2})=\int_{\C^2}\frac{\d s}{s}t\;\d t\;\bar{\delta}^{4|4}(Z_{1}+sZ_{*}+tZ_{2}), \ee where the subscript denotes the weights. In the conformal gravity twistor action, the kinetic term reads \begin{equation*} S^{\mathrm{kin}}[g,f]=\int_{\CPT}\D^{3|4}Z\wedge g_{I}\wedge\dbar f^{I}, \end{equation*} so the kinetic operator is once again $\dbar$, but now the propagator will have a tensor structure that must account for the freedom in $g_{I}$ and $f^{I}$. As mentioned in the previous section, the $\cN=4$ geometry makes this situation somewhat ambiguous since the $\partial_{I}f^{I}=0$ and $Z^{I}g_{I}=0$ conditions do not fix the gauge freedom in $f,g$. Focusing on the $\cN=0$ representatives, we know that the twistor propagator must impose $\p_\alpha f^\alpha=0$ and $Z^\alpha g_\alpha=0$.
Since we are on a projective twistor space and the freedom in $f^{\alpha}$ corresponds to adding multiples of $Z^{\alpha}$, we only really need to deal with the condition on $g_{\beta}$. This can be accounted for with the tensor structure of the propagator, leaving us with \be{bCGprop} \Delta^\alpha_\beta(Z_{1},Z_{2})=\delta^\alpha_\beta \bar{\delta}^{2}_{1,0,-5}(Z_{1},*,Z_{2}) -\frac{1}{4} Z_{1}^{\alpha}\frac{\partial}{\partial Z_{2}^{\beta}} \bar{\delta}^{2}_{0,0,-4}(Z_{1},*,Z_{2}), \ee so that $Z^{\prime \beta}\Delta_\beta^\alpha=0$ (up to an irrelevant anomaly proportional to the reference twistor). This is then extended to each propagator component to build the full $\cN=4$ propagator. Of course, restricting $\cN=4$ CSG to Einstein degrees of freedom sets $f^{I}=I^{IJ}\partial_{J}h$ and $g_{I}=I_{IJ}Z^{J}\tilde{h}$, which automatically fixes the gauge freedom. Hence, for calculations in conformal gravity restricted to Einstein states (what we are ultimately interested in), we can always take the $\cN=4$ CSG propagator to be: \be{CGprop} \Delta^{I}_{J}(Z_{1},Z_{2})|_{\mathrm{Ein}}= \delta^{I}_{J}\bar{\delta}^{2|4}_{1,0,-1}(Z_{1},*,Z_{2})=\delta^{I}_{J}\int_{\C^2}\frac{\d s}{s}\;t\;\d t\;\bar{\delta}^{4|4}(Z_{1}+sZ_{*}+tZ_{2}). \ee \subsubsection{Vertices and tree diagrams} In CSW gauge, the vertices of both twistor actions are generated by the interaction term $S_{2}$. Upon restriction to Einstein states, this generating functional is equivalent in both actions, and is given by \be{CGGF2} S_{2}[\tilde{h},h]=\int\limits_{\PS\times_{M}\PS}\d\mu\wedge\tau_{1}\wedge\tau_{2}\wedge\tilde{h}_{1}\wedge\tilde{h}_{2}. \ee In order to obtain vertices for conformal gravity itself, one could simply choose two \emph{independent} infinity twistors $I^{IJ}$ and $\tilde{I}_{IJ}$. This is equivalent to giving a basis for conformal gravity polarization states in terms of Einstein degrees of freedom \cite{Adamo:2013tja}.
Equation \eqref{TGenF} indicates that these vertices should correspond (on-shell) to MHV amplitudes. Twistorially, we can see this by noting that the generating functional contains two negative helicity gravitons in $\tilde{h}_{1}$, $\tilde{h}_{2}$, and the self-dual background space-time $M$ encodes the positive helicity gravitons via theorem \ref{NLG}. What we need is a way of systematically expanding this background to recover the $n-2$ individual positive helicity wavefunctions. \subsubsection*{\textit{Perturbative iteration and measure}} The non-linear graviton construction tells us that $M$ is realized twistorially as the space of holomorphic curves $X\subset\CPT$ which are constructed by solving \eqref{sholomap} \be{GCE} \dbar Z^{I}(x,\sigma)=f^{I}(Z)\equiv I^{IJ}\partial_{J}h(Z), \ee where we abbreviate $\dbar=\dbar|_{X}$ and $(x,\sigma)=(x^{\mu}, \theta^{Aa}, \sigma_{B})$ from now on to lighten notation. Generically, the functional form of $Z^{I}$ may be very complicated; to simplify the situation we look for a coordinate transformation on $\PS$ which trivializes the incidence relations \eqref{sincid}. This provides us with a mechanism for perturbatively expanding the SD background $M$, which we later realize as a calculus of tree diagrams on $\P^1$. By assumption, $Z^{I}(x,\sigma)$ is homogeneous of degree one in $\sigma$, so we can always find an array $\cX^{I A}(x,\sigma)$ which obeys \be{Auxcord1} \sigma_{A}\cX^{I A}(x,\sigma)=Z^{I}(x,\sigma). \ee \emph{A priori}, the $\cX$ are $(8|8)$ complex functions on $\PS$, but requiring them to be projective and obey \eqref{Auxcord1} reduces the number of independent components to $(5|8)$, so the $\cX$s act as a change of coordinates on $\PS$. There is considerable freedom in the choice of $\cX^{I A}$; one viable choice is to take a surface $\mathcal{S}\subset\CPT$. Each curve $X\subset\CPT$ will intersect $\mathcal{S}$ at a unique point $Z^{I}(x,\xi)$, as illustrated in Figure \ref{Gauge1}. 
We then define: \be{Auxcord2} \cX^{I A}(x,\sigma)=\frac{Z^{I}(x,\sigma)\xi^{A}(x)-Z^{I}(x,\xi)\sigma^{A}}{(\sigma\xi)}, \ee which clearly obeys \eqref{Auxcord1}. \begin{figure}[t] \centering \includegraphics[width=3.20 in, height=2 in]{Gauge1.pdf}\caption{\textit{The surface $\mathcal{S}$ induces a choice for the coordinates $\cX$ on $\PS$.}}\label{Gauge1} \end{figure} Now, suppose $X^{I A}\sigma_{A}$ is the homogeneous solution to \eqref{holomap}, \begin{equation*} X^{J A}=(\delta^{A}_{B}, x^{AB'}, \theta^{bA}). \end{equation*} We can (formally) solve \eqref{holomap} by a Picard-like iteration,\footnote{Note that the iteration here is a perturbative expansion for the amplitude generating functional, and differs from the (actual) Picard expansion which appears in \cite{Mason:2008jy}. A translation between the two can be achieved, but only after some rather non-obvious applications of the Schouten identity (c.f., \cite{Nguyen:2009jk}).} with $Z^{I}_{0}=X^{I A}\sigma_{A}$ and \be{Picard1} Z_{i}^{I}(x,\sigma)=X^{I A}\sigma_{A}+\dbar^{-1}\left(f^{I}(Z_{i-1})\right). \ee In the flat background limit, this iteration can be understood as perturbatively solving the good cut equation; in the space-time context, one would require a Green's function for the $\eth$-operator rather than $\dbar$ (c.f., \cite{Adamo:2009vu}). Since $f^{I}$ is a form of weight $+1$, there is an ambiguity in what we mean by $\dbar^{-1}$. It is natural to make a choice which is compatible with our coordinates $\cX$, so that the iteration becomes: \be{Picard2} Z_{i}^{I}(x,\sigma)=X^{I A}\sigma_{A}+\int_{\P^1}\frac{\D\sigma'}{(\sigma\sigma')}\frac{(\xi\sigma)^2}{(\xi\sigma')^2}f^{I}(Z_{i-1}(\sigma')). \ee Here, $\xi\in\P^{1}$ is an arbitrary point on the Riemann sphere reflecting the (two-fold) ambiguity in inverting the $\dbar$-operator; physical observables such as scattering amplitudes should be independent of $\xi$ at the end of our calculations. 
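The iteration \eqref{Picard1} has the standard Picard fixed-point structure familiar from integral equations. Purely as an illustration of that structure (the scalar ODE and all names below are ours, not part of the twistorial construction), the same scheme can be run numerically:

```python
def picard(F, z0, t_grid, n_iter):
    """Iterate z_i(t) = z0 + integral from 0 to t of F(z_{i-1}(s)) ds
    on a uniform grid, using the trapezoid rule for the integral."""
    dt = t_grid[1] - t_grid[0]
    z = [z0 for _ in t_grid]  # zeroth iterate: the homogeneous solution
    for _ in range(n_iter):
        f = [F(zk) for zk in z]
        new, acc = [z0], 0.0
        for k in range(1, len(t_grid)):
            acc += 0.5 * (f[k - 1] + f[k]) * dt  # running trapezoid sum
            new.append(z0 + acc)
        z = new
    return z

# F(z) = z with z(0) = 1: each pass adds one Taylor order of exp(t)
t = [k * 0.01 for k in range(101)]
approx = picard(lambda z: z, 1.0, t, n_iter=12)
```

Each pass introduces one further insertion of $F$, just as each step of \eqref{Picard1} inserts one more factor of $f^{I}$; the quadrature used here to invert $\d/\d t$ is the analogue of the choice of $\dbar^{-1}$ made in \eqref{Picard2}.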
This choice induces an expansion for the $\cX$s: \be{Picard3} \cX^{I A}(x,\sigma)=X^{I A}+\xi^{A}\int_{\P^1}\frac{\D\sigma'}{(\sigma\sigma')}\frac{(\xi\sigma)}{(\xi\sigma')^2}f^{I}(X\cdot\sigma')+\cdots. \ee Clearly, the $\cX$s are redundant coordinates depending on the choice of spin frame, but we can now read off their $\sigma$-dependence from \eqref{Picard3}: \be{Picard4} \dbar \cX^{I A}(x,\sigma)=\frac{\xi^{A}f^{I}}{(\xi\sigma)}. \ee This enables us to take the exterior derivative of $\cX$ with respect to the space-time coordinate $x$, finding \be{Yder} \dbar\left(\d_{x}\cX^{I A}(x,\sigma)\right)=\partial_{J} f^{I}\frac{\xi^{A}\sigma_{B}}{(\xi\sigma)}\d_{x}\cX^{J B}(x,\sigma). \ee Since $\partial_{I}f^{I}=0$, this means that the top-degree form $\d^{8|8}\cX$ is holomorphic in $\sigma$ and of weight zero; by Liouville's theorem, it is therefore independent of $\sigma$. But this means that \begin{equation*} \d\mu = \frac{\d^{8|8}\cX}{\mathrm{vol}\;\GL(2,\C)}=\frac{\d^{8|8}X}{\mathrm{vol}\;\GL(2,\C)}, \end{equation*} is an invariant volume form on the space-time $M$ itself. Now we want to implement this iteration by perturbatively expanding \eqref{CGGF2}. \emph{A priori}, the deformation of $Z^{I}$ given by \eqref{Picard1} can act on the wavefunctions $\tilde{h}_{1,2}$ (which have non-polynomial dependence on $Z$) or the contact structures $\tau_{1,2}$. From \eqref{Picard2}, the action on a wavefunction is given by \be{thdef} \tilde{h}_{1}\rightarrow \int_{\P^1}\frac{\D\sigma_{i}\;(\xi\;1)^{2}}{(1\;i)(\xi\;i)^{2}}[\partial\tilde{h}_{1}, \partial h_{i}], \ee where we use the shorthand $[\partial\tilde{h}_{1}, \partial h_{i}] = I^{IJ}\partial_{1 I}\tilde{h}_{1}\partial_{i\;J}h_{i}$. As for the contact structures, recall that each $\tau$ is quadratic in $Z$: \begin{equation*} \tau=\la Z, \partial Z\ra=I_{IJ}Z^{I}\;\partial Z^{J}, \end{equation*} and can therefore absorb at most two deformations. 
Furthermore, since the deformation always carries a power of the infinity twistor $I^{IJ}$, and this will contract with the $I_{JK}$ in $\tau$, such a deformation will always result in a power of $\Lambda$. A bit of algebra shows that a single deformation of the contact structure (say, $\tau_{1}$) results in \cite{Adamo:2013tja}: \be{tauprop1} \psi^{1}_{i}= \Lambda \frac{\D\sigma_i (\xi 1)^4}{(1i)^{2}(\xi i)^2} \rd_1\left( \frac{ (i1)}{(\xi 1)^2}Z^I_1\right)\partial_{iI} h_i =\Lambda \frac{\D\sigma_1 \D\sigma_i (\xi 1)}{(1i)^{2}(\xi i)^2}\left[(\xi i)\;Z^{I}_{1}+(1i)\;Z^{I}(\xi)\right]\partial_{iI} h_i \, , \ee which is then integrated over the $\P^{1}$ corresponding to $\sigma_{i}$. The second expression here uses the linearity of $Z^{I}(x,\sigma)$ in $\sigma$. If a second deformation acts at the same contact structure, we can use the first expression to arrive at \be{tauprop2} \omega^{1}_{ij}=-\Lambda\frac{\D\sigma_1 \D\sigma_i \D\sigma_j (1\xi)^{4}(ij)}{(1i)^{2}(1j)^{2}(\xi i)^{2}(\xi j)^{2}}\left[\partial_i,\partial_j \right] h_i h_j \, , \ee which is now integrated over the additional $\P^{1}$ corresponding to $\sigma_{j}$. Inspecting the expression for $\psi^{1}_{i}$ in \eqref{tauprop1}, we can actually manipulate it into a format which is $\d_{i}$-exact: \be{psip} \psi^{1}_{i}=2\Lambda \D\sigma_{1}\; \sigma_{1A}\;\d_{i} \left(\frac{\sigma^{A}_{i}(\xi 1)h_{i}}{(1i)^{2}(\xi i)}\right). \ee As a result, we may be tempted to conclude that such deformations vanish. If the new wavefunction $h_{i}$ is undeformed by any additional iterations, then this is indeed true \cite{Adamo:2013tja}. However, we will see that for generic perturbative expansions, $h_{i}$ will also receive deformations and the contribution from $\psi^{1}_{i}$ is non-trivial. \subsubsection*{\textit{Tree diagram calculus}} At this point, the perturbative iteration gives us a method for expanding the SD background in \eqref{CGGF2} to extract the positive helicity gravitons of the vertex. 
To compute the $n$-point vertex, we must expand to order $n-2$, since each iteration introduces a single new positive helicity graviton via \eqref{thdef}, \eqref{tauprop1}, or \eqref{tauprop2}. There are many different ways in which this expansion can be performed. For instance, when $n=3$ there are four distinct contributions, since the deformation of $Z^{I}$ can act at $\tau_{1,2}$ or $\tilde{h}_{1,2}$. Clearly, the number of possible expansions grows dramatically with $n$, since we can also perturb the resulting wavefunctions $\{h_{i}\}$ at higher order. It turns out that we can represent each such iteration uniquely by a \emph{forest of tree graphs} on $n+2$ vertices. Suppose we have perturbatively expanded the generating functional to order $n-2$ (so there are $n-2$ positive helicity wavefunctions $h_i$). Then the graph associated to a given expansion is constructed by: \begin{itemize} \item Draw a black vertex for each negative helicity wavefunction $\tilde{h}_{1}$, $\tilde{h}_{2}$. \item Draw a grey vertex for each contact structure $\tau_{1}$, $\tau_{2}$. \item Draw a white vertex for each of the $n-2$ positive helicity wavefunctions $h_{i}$. \item Draw an arrow connecting each white vertex to its source in the expansion. \end{itemize} \begin{figure}[h] \centering \includegraphics[width=2.1 in, height=0.35 in]{Vertex.pdf}\caption{\small{\textit{Building blocks for Feynman diagrams}}}\label{FeynRules} \end{figure} By following the arrows through the diagram, one can trace each branch of the diagram back to its source, which must be one of the grey or black vertices representing the factors of the original generating function \eqref{CGGF2}. So starting anywhere in the graph, if we follow the arrows then eventually we will wind up at a grey or black vertex. In other words, each graph corresponds to a forest of trees, each of which is \emph{rooted} at a grey or black vertex.
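These rules can be checked by brute force at low multiplicity. The sketch below (function name ours; we also ignore the bound of at most two incoming arrows per grey vertex, which is inactive at small $n$) enumerates all arrow assignments and keeps those forming a forest rooted at the black and grey vertices:

```python
from itertools import product

def count_forests(n):
    """Count diagrams for the n-point vertex: 2 black and 2 grey root
    vertices, n-2 white vertices, one outgoing arrow per white vertex,
    and no directed cycles (every arrow path must end at a root)."""
    whites = list(range(n - 2))
    roots = ["b1", "b2", "g1", "g2"]
    targets = roots + whites
    total = 0
    for choice in product(targets, repeat=len(whites)):
        if any(choice[w] == w for w in whites):
            continue  # a white vertex cannot point at itself
        valid = True
        for w in whites:
            seen, v = set(), w
            while v not in roots:  # follow arrows until a root is reached
                if v in seen:
                    valid = False  # closed a loop: not a forest
                    break
                seen.add(v)
                v = choice[v]
            if not valid:
                break
        total += valid
    return total
```

For $n=3$ this returns $4$, reproducing the four distinct contributions noted above.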
The fact that each connected component is a \emph{tree graph} follows from the fact that each step in the expansion produces a new positive helicity wavefunction (so we can't follow arrows and end up completing a loop). Some examples of diagrams which contribute to the 5-point vertex (\emph{a}.), or are excluded from it (\emph{b}.), are shown in Figure \ref{Diags}. \begin{figure}[t] \centering \includegraphics[width=4.20 in, height=2 in]{Diags2.pdf}\caption{\small{\textit{Some diagrams for the 5-point vertex which have a non-vanishing \emph{(a.)}, or excluded/vanishing \emph{(b.)} contribution.}}}\label{Diags} \end{figure} Of course, we want to associate each of these diagrams with some integrand on the moduli space $\CM_{n,1}$; summing all the contributions and integrating should give the vertex just like in the Yang-Mills story. The computational dictionary required is simple and determined by the perturbative iteration itself, which endows the edges with weights. An arrow from a white vertex $i$ to a white or black vertex $j$ corresponds to a weight \be{piprop} \HH_{ij}=\frac{I^{IJ}\;(\xi j)^{2}}{(ij)\;(\xi i)^2}\frac{\partial}{\partial Z^{I}(\sigma_i)}\frac{\partial}{\partial Z^{J}(\sigma_j)}=\frac{(\xi j)^{2}}{(ij)\;(\xi i)^2}[\partial_{i},\partial_{j}], \ee while an arrow from a white vertex to a grey vertex corresponds to $\psi^{1,2}_{i}$ from \eqref{tauprop1}. If a grey vertex has two incoming arrows, say from white vertices $i$ and $j$, then we can account for both with the weight factor $\omega^{1,2}_{ij}$ from \eqref{tauprop2}. This picture can be made precise by writing the vertex generating functional in a way that makes the construction of the SD background $M$ via \eqref{GCE} explicit.
Introducing a Lagrange multiplier $Y\in\Omega^{1,0}(\P^{1},T^{*}_{\CPT})$, this becomes \cite{Adamo:2013tja}: \begin{equation*} S_{2}[\tilde{h},h]=\int_{M}\d\mu \left[\int_{X}Y_{I}\left(\dbar Z^{I}-I^{IJ}\partial_{J}h\right)+\frac{\varepsilon^{2}}{\Lambda \kappa^{2}}\left(\int_{X} \tilde{h}\wedge\tau\right)^2 \right]. \end{equation*} Integrating out $Y$ returns \eqref{GCE}, but keeping it in play allows us to perform the perturbative expansion of $S_{2}$ to order $n-2$ by using Feynman diagrams on $\P^1$ with vertices \begin{equation*} V_{\tilde{h}}=\int\limits_{X\times X}\tau_{1}\wedge\tau_{2}\wedge\tilde{h}_{1}\wedge\tilde{h}_{2}, \qquad V_{h_i}=\int_{X}[Y,\partial_{i}h_i], \end{equation*} for $i=3,\ldots,n$. Contractions occur via the $\P^1$ propagator for the $YZ$-system (which arises in the similar context of the Berkovits-Witten twistor-string \cite{Berkovits:2004jj}): \begin{equation*} \left\la Y_{I}(\sigma_{i})\;Z^{J}(\sigma_{j})\right\ra = \delta^{J}_{I}\frac{\D\sigma_{i}}{(ij)}\frac{(\xi j)^{2}}{(\xi i)^2}, \end{equation*} and working classically, this results in the forests of trees we just described. In sum, the vertices of the twistor actions are given by summing these weighted tree diagrams. If we write the set of diagrams contributing to the $n$-point vertex as $\cF^{n}$, then this has a natural disjoint union splitting based on the number of arrows which are incoming at the grey vertices. That is, \begin{equation*} \mathcal{F}^{n}=\bigsqcup_{k=0}^{4}\mathcal{F}^{n}_{k}, \end{equation*} where each diagram $\Gamma\in\mathcal{F}^{n}_{k}$ is a forest on $n+2$ vertices which has $k$ arrows into the grey vertices (we cannot have $k>4$ because $\tau$ is only quadratic in $Z$).
We then write the $n$-point vertex of the twistor action--somewhat heuristically--as \be{GVert1} \mathcal{V}_{n}=\frac{1}{\Lambda}\sum_{k=0}^{4}\sum_{\Gamma\in\mathcal{F}^{n}_{k}}\int\limits_{M\times(\P^1)^n}\d\mu\;F_{\Gamma}\;\tau_{1}\tilde{h}_{1}\;\tau_{2}\tilde{h}_{2}\prod_{i=3}^{n}h_{i}\;\D\sigma_{i}, \ee where $F_{\Gamma}$ encodes the contribution from diagram $\Gamma$ built out of the weights.\footnote{Here, we think of the integral over $\CM_{n,1}$ as being over $\M\times (\P^1)^n$.} Below, we will turn this somewhat esoteric formula into a concrete expression for the MHV amplitude. Before proceeding, one should note that these tree diagrams first arose in the context of a semi-classical connected tree formalism for the worldsheet CFT of Berkovits-Witten twistor-string theory \cite{Adamo:2012xe}. In that arena, trees were needed to extract the minimal content from the non-minimal $\cN=4$ CSG in the twistor-string at MHV; the fact that we obtain the same formalism directly from the minimal twistor action proves that trees indeed isolate the minimal content. The more puzzling question of why trees were required in the worldsheet CFT (which in principle should include all loop and disconnected diagrams) found its resolution in Skinner's twistor-string \cite{Skinner:2013xp}: there worldsheet supersymmetry suppresses the loops and the resulting tree diagrams lead directly to the flat-space amplitudes of Einstein gravity. \subsection{The MHV Amplitude} \label{MHVLambda} The embedding of Einstein gravity into conformal gravity tells us that the vertices of the twistor action should correspond to MHV amplitudes on-shell. We have just described how to obtain these vertices by summing weighted tree diagrams in \eqref{GVert1}, but we still need a concrete method for performing this sum. 
It turns out that summing forests of tree graphs (with weights) is a natural operation in algebraic combinatorics; the key result in this area is the \emph{Matrix-Tree theorem} (an analogue of Kirchhoff's theorem for directed graphs), which relates the counting of graphs with weights to a determinant of the Laplacian matrix of the graph (cf.\ \cite{Stanley:1999, vanLint:2001, Stanley:2012}, or \cite{Feng:2012sy} for an overview with direct connections to gravity amplitudes). For an arbitrary oriented graph $G$ with $n$ vertices, let us denote the edge from vertex $i$ to vertex $j$ by $(i,j)\in\mathcal{E}$, where $\mathcal{E}$ denotes the set of edges in $G$. If the edge $(i,j)$ is endowed with weight $w_{ij}\in\C$, then the weighted Laplacian matrix of $G$ is the $n\times n$ matrix with entries \begin{equation*} \mathcal{L}_{ij}(G)=\left\{ \begin{array}{c} -w_{ij} \;\mathrm{if}\;i\neq j\;\mathrm{and}\;(i,j)\in\mathcal{E}\\ \sum_{(i,k)\in\mathcal{E}}w_{ik}\;\mathrm{if}\;i=j \\ 0 \;\;\mathrm{otherwise} \end{array}\right. . \end{equation*} The Matrix-Tree theorem for rooted forests on the directed graph $G$ is then: \begin{thm}[Weighted Matrix-Tree Theorem for Forests]\label{MTT} Let $\mathcal{F}^{(i_1,\ldots i_r)}(G)$ be the set of forests of $G$ rooted at $\{i_1,\ldots, i_r\}$ and $\mathcal{L}(G)$ be the weighted Laplacian matrix of $G$. For each $F\in\mathcal{F}^{(i_1,\ldots i_r)}(G)$, denote by $E_F\subset\mathcal{E}$ the set of edges in the forest. Then \be{MTT*} \left|\mathcal{L}(G)^{i_{1}\cdots i_{r}}_{i_{1}\cdots i_{r}}\right|= \sum_{F\in\mathcal{F}^{(i_1,\ldots i_r)}(G)}\left(\prod_{(i,j)\in E_{F}}w_{ij}\right), \ee where $ \left|\mathcal{L}(G)^{a\cdots b}_{c\cdots d}\right|$ denotes the determinant of $\cL(G)$ with the rows $\{a,\ldots, b\}$ and columns $\{c,\ldots, d\}$ removed. \end{thm} A proof of this particular version of the matrix-tree theorem can be found in \cite{Feng:2012sy}.
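Theorem \ref{MTT} can be checked directly by brute force on small examples. The following sketch (illustrative only; it uses numpy, a randomly weighted complete digraph on five vertices, and the two-root case relevant below) compares the reduced determinant of the weighted Laplacian against an explicit enumeration of rooted forests:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 5
w = rng.uniform(0.5, 2.0, size=(n, n))   # weight w[i, j] on the edge (i, j)
np.fill_diagonal(w, 0.0)                 # no self-loops

# Weighted Laplacian: L[i, j] = -w[i, j] for i != j, L[i, i] = sum_k w[i, k]
L = np.diag(w.sum(axis=1)) - w

roots = (0, 1)
keep = [v for v in range(n) if v not in roots]
det = np.linalg.det(L[np.ix_(keep, keep)])

# Brute force: a forest rooted at {0, 1} gives each non-root vertex exactly
# one outgoing edge, such that following the arrows always reaches a root.
total = 0.0
for choice in itertools.product(range(n), repeat=len(keep)):
    succ = dict(zip(keep, choice))
    if any(v == s for v, s in succ.items()):
        continue                          # discard self-loops
    ok = True
    for start in keep:
        v, seen = start, set()
        while v not in roots:
            if v in seen:                 # a cycle: not a forest
                ok = False
                break
            seen.add(v)
            v = succ[v]
        if not ok:
            break
    if ok:
        total += np.prod([w[v, s] for v, s in succ.items()])

assert np.isclose(det, total)             # theorem: determinant = forest sum
```

The two roots play the role of the black vertices below, which have only incoming edges; the variable names are of course just for this sketch.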
Our goal is now to apply theorem \ref{MTT} to formula \eqref{GVert1} and obtain an explicit formula for the MHV amplitude. This formula will be defined initially with $\Lambda\neq 0$, so we actually find a twistorial expression for the MHV `scattering amplitude' on a de Sitter (or anti-de Sitter) background.\footnote{Clearly, the asymptotically flat definition of the S-matrix no longer holds on de Sitter backgrounds. While $\scri^{-}$ to $\scri^{+}$ scattering is still mathematically defined (i.e., a meta-observable in the sense of \cite{Witten:2001kn}), no physical observer can integrate over the whole space-time. In AdS, the situation is improved and we can consider correlation functions in the boundary CFT. We make some remarks about how to interpret the twistor MHV formula in these contexts in Section \ref{Chapter7}.} Passing to the $\Lambda\rightarrow 0$ limit, we will recover Hodges' formula for the flat-space MHV amplitude. \subsubsection{Summing diagrams and the vertex formula} In the case of the twistor action vertices, we can apply theorem \ref{MTT} to each subsector of $\cF^{n}=\sqcup_{k=0}^{4}\mathcal{F}^{n}_{k}$ successively. Let us begin with $\cF^{n}_{0}$; this includes all diagrams which have no arrows into the grey vertices (i.e., there are no deformations of the contact structures $\tau_{1,2}$). In this case, the directed graph $G^{n}_{0}$ which forms the input for the Matrix-Tree theorem can be built from $n-2$ white vertices and $2$ black vertices, since the grey vertices play no role. The edges of $G^{n}_{0}$ feature all possible perturbative expansions which could produce the $n-2$ white vertices. So each white vertex has $n-1$ outgoing edges (one to every other vertex) and $n-3$ incoming edges (one from every other white vertex), while each black vertex has $n-2$ incoming edges and no outgoing edges. See Figure \ref{G} for an illustration of this configuration.
\begin{figure} \centering \includegraphics[width=4 in, height=1 in]{G.pdf}\caption{\textit{The graph $G^{n}_{0}$ features all possible edges which could contribute to $\cF_{0}^{n}$}}\label{G} \end{figure} Up to a rank-two error term and conjugation (both of which are irrelevant), the weighted Laplacian matrix associated to $G^{n}_{0}$ is given by \cite{Adamo:2012xe}: \be{wLap1} \mathcal{L}(G^{n}_{0})=\HH=\left( \begin{array}{cccccc} \HH_{11} & \HH_{12} & \cdots & \HH_{1n} \\ \HH_{21} & \ddots & & \HH_{2n} \\ \vdots & & \HH_{n-1\;n-1} & \HH_{n-1\;n} \\ \HH_{n1} & \cdots &\HH_{n\;n-1} & \HH_{nn} \end{array}\right), \qquad \HH_{ii}=-\sum_{j\neq i}\HH_{ij}, \ee where the off-diagonal entries are precisely the weights $\HH_{ij}$ from \eqref{piprop}. As there are no grey vertices in play, each forest of trees contributing to the vertex in the class $\cF^{n}_{0}$ must be rooted at one of the two black vertices. Then theorem \ref{MTT} indicates that the required sum of weights is accomplished by taking the determinant $|\HH^{12}_{12}|$, with the rows and columns corresponding to $\tilde{h}_{1}$, $\tilde{h}_{2}$ removed from the weighted Laplacian \eqref{wLap1}. We can also write the undeformed contact structures as \begin{equation*} \tau_{1}=\la Z_{1},\partial Z_{1}\ra= I_{IJ}X_{A}^{I}\sigma_{1}^{A}\;X^{J}_{B}\d\sigma_{1}^{B}=X^{2}\;\D\sigma_{1}, \end{equation*} where we abbreviate $X^{2}=\la X_{A},X^{A}\ra$. Combined with the Matrix-Tree theorem, this gives us the contribution to the vertex $\mathcal{V}_{n}$ from graphs in $\cF^{n}_{0}$: \be{cont0} \int \d\mu\;(X^{2})^{2}\;\left| \HH^{12}_{12}\right|\;\prod_{i=1}^{n}h_{i}\;\D\sigma_{i}, \ee where we understand that $h_{1,2}\equiv\tilde{h}_{1,2}$. Precisely the same process of considering the graph $G^{n}_{k}$ of all possible deformations, writing down the weighted Laplacian, and then applying the Matrix-Tree theorem allows us to account for the contributions from every other sector $\cF^{n}_{k>0}$. 
For instance, in $\cF^{n}_{1}$ there is a single arrow incoming to one of the grey vertices, which can come from any of the $n-2$ white vertices in play, say $i$. This gives an overall factor of $\psi^{1}_{i}$ or $\psi^{2}_{i}$, depending on which contact structure is deformed. The remaining portions of the diagram are again accounted for by the weighted Laplacian $\HH$, but now we have to eliminate \emph{three} rows and columns when applying theorem \ref{MTT}, since the trees can also be rooted at vertex $i$ now. The result for $\cF^{n}_{1}$ is therefore: \be{cont1} \sum_{\Gamma\in\mathcal{F}^{n}_{1}}\int \d\mu\;X^{2}\;F_{\Gamma}\;\prod_{i=1}^{n}h_{i}\;\D\sigma_{i} =\int \d\mu\;X^{2}\sum_{i=3}^{n}\psi^{1}_{i}\;\left| \HH^{12i}_{12i}\right|\; \prod_{j=1}^{n}h_{j}\;\D\sigma_{j}+(1\leftrightarrow 2), \ee where the factor of $X^{2}$ is due to the undeformed contact structure still in play. Proceeding in this fashion, we arrive at the formula for the $n$-point vertex \cite{Adamo:2013tja}: \begin{multline}\label{MHVamp} \mathcal{V}_{n}=\frac{1}{\Lambda}\int \d\mu\;\left[ (X^2)^2\left|\HH^{12}_{12}\right|+ X^{2} \sum_{i}\psi^{1}_{i}\left|\HH^{12i}_{12i}\right| +X^{2}\sum_{i,j}\omega^{1}_{ij}\left|\HH^{12ij}_{12ij}\right| \right. \\ \left. +\sum_{i,j}\psi^{1}_{i}\psi^{2}_{j}\left|\HH^{12ij}_{12ij}\right| +\sum_{i,j,k}\psi^{1}_{i}\omega^{2}_{jk}\left|\HH^{12ijk}_{12ijk}\right| +\sum_{i,j,k,l}\omega^{1}_{ij}\omega^{2}_{kl}\left|\HH^{12ijkl}_{12ijkl}\right|\right]\prod_{m=1}^{n}h_{m}\;\D\sigma_{m}\:+(1\leftrightarrow 2). \end{multline} In this expression, the sums are understood to run over all indices which are not excluded from the determinant, and also to symmetrize on those indices. For instance, in the first term of the second line $\sum_{i,j}$ runs over all $i,j=3,\ldots,n$ with $i\neq j$.
The expression \eqref{MHVamp} makes sense \emph{off-shell} (i.e., as a vertex of the twistor action) since no step in our derivation from $S_{2}[\tilde{h},h]$ assumed that the wavefunctions were $\dbar$-closed on twistor space. A non-trivial test on this formula for $\mathcal{V}_{n}$ is independence of the reference spinor $\xi\in\P^{1}$. This entered the definition of the perturbative iteration due to the ambiguity in defining $\dbar^{-1}$ on $\P^1$. From \eqref{Picard3}, we see that a change in $\xi$ induces a variation in $\cX$, which are coordinates on the projective spinor bundle $\PS$. Hence, a variation in $\xi$ should correspond to a diffeomorphism on $\PS$ under which $\mathcal{V}_{n}$ should be invariant. This can be checked explicitly by considering the infinitesimal variation generated by $\d_{\xi}=\d\xi^{A}\frac{\partial}{\partial\xi^{A}}$, and it can be shown that \cite{Adamo:2013tja}: \be{grgauge} \d_{\xi}\mathcal{V}_{n}=\int\frac{\d^{8|8}X}{\mathrm{vol}\;\GL(2,\C)}\frac{\partial}{\partial X^{IA}} V^{IA}=0, \ee where $V^{IA}$ are the components of a smooth vector field (roughly speaking, on $\CM_{n,1}$).\footnote{Of course, this argument is on-shell in nature; if $\mathcal{V}_{n}$ appears inside a Feynman diagram it need not be $\xi$-independent on its own. Only the full amplitude being calculated needs to be independent of the reference spinor.} So on-shell, \eqref{MHVamp} is a well-defined formula for the MHV amplitude with cosmological constant. However, the on-shell condition actually allows us to simplify the expression substantially, as we will show now. \subsubsection{The amplitude} Equation \eqref{MHVamp} provides a perfectly valid representation of the MHV amplitude with $\Lambda\neq 0$; by inserting momentum eigenstates or some other on-shell wavefunctions, we pass from a vertex to the amplitude $\mathcal{V}_{n}\rightarrow\cM_{n,0}$. It turns out that this formula can be simplified considerably on-shell, though. 
To begin, note that all the weights which appear in \eqref{MHVamp} take the form of differential operators, given by \eqref{piprop}, \eqref{tauprop1}, and \eqref{tauprop2}. These operators act on the wavefunctions $\{h_{i}\}$, and when these are chosen to be momentum eigenstates this action becomes rather complicated, involving derivatives of delta-functions due to the $\Lambda\neq0$ infinity twistor. Clearly, the computation would be much simpler if we could work algebraically. This can be accomplished by working with \emph{dual twistor} wavefunctions: \be{dtwf} h(Z(\sigma_{i}))=\int_{\C}\frac{\d t_{i}}{t_{i}^{1+w_{i}}}\exp\left(i t_{i}W_{i}\cdot Z(\sigma_{i})\right), \qquad w_{i}=\left\{ \begin{array}{cc} -2 & \mbox{if}\;i=1, 2 \\ 2 & \mbox{otherwise} \end{array} \right. . \ee Here $W_{i\;I}=(\tilde{\mu}_{i}^{A}, \tilde{\lambda}_{i A'})$ are coordinates on $n$ copies of dual twistor space, $\PT^{\vee}$. These wavefunctions have been used before in other contexts \cite{Mason:2009sa, Cachazo:2012pz}, and can be paired with momentum eigenstates in an appropriate manner to obtain functionals of momenta at the end of any calculation. Furthermore, the scaling parameters $t_{i}$ can be absorbed into the worldsheet coordinates by defining a new set of non-homogeneous coordinates: $\sigma_{i}t_{i}\rightarrow\sigma_{i}$, $\d t_{i}\D\sigma_{i}\rightarrow\d^{2}\sigma_{i}$. With \eqref{dtwf}, all the weights in our diagram calculus become purely algebraic. In particular, we now have: \begin{equation*} \HH_{ij}=-\frac{[W_{i},W_{j}]}{(ij)}, \end{equation*} while deformations of the contact structure are \begin{equation*} \psi^{1}_{i}=\Lambda\;i\frac{(\xi 1)\;W_{i\;I}}{(1i)^{2}(\xi i)^2}\left[(\xi i)\;Z^{I}(\sigma_{1})+(1i)\;Z^{I}(\xi)\right], \qquad \omega^{1}_{ij}=\Lambda \frac{[W_{i},W_{j}]\;(1\xi)^{4}(ij)}{(1i)^{2}(1j)^{2}(\xi i)^{2}(\xi j)^{2}}.
\end{equation*} In this framework all the ingredients of \eqref{MHVamp} are transformed from differential operators to algebraic functions of the dual twistors. Furthermore, the product of wavefunctions and $\P^1$ measures can also be expressed compactly and in a manner that uses a generalization of momentum to the dual twistor framework. In particular, we have \begin{equation*} \prod_{i=1}^{n}h_{i}\;\D\sigma_{i}=e^{i\mathcal{P}\cdot X}\;\d^{2}\sigma, \qquad \cP_{I}^{A}=\sum_{i=1}^{n}W_{i\;I}\sigma_{i}^{A}, \qquad \d^{2}\sigma\equiv \prod_{i=1}^{n}\d^{2}\sigma_{i}, \end{equation*} with $\mathcal{P}$ playing the role of total `momentum' in this framework. Now, using the dual twistor wavefunctions \eqref{dtwf} note that the second term in the first line of \eqref{MHVamp} can be written as \begin{multline*} \int\d\mu\;X^{2}\sum_{i} \psi^{1}_{i}\left|\HH^{12i}_{12i}\right|\;e^{i\cP\cdot X}\d^{2}\sigma \\ =\Lambda \int \d\mu\;X^{2}\sum_{i}\left|\HH^{12i}_{12i}\right|\left(\frac{(\xi 1)(\xi i)\sigma_{1}^{A}+(\xi 1)(1i)\xi^{A}}{(1i)^{2}(\xi i)^{2}}\right)\frac{\partial e^{i\cP\cdot X}}{\partial\sigma_{i}^{A}}\d^{2}\sigma , \end{multline*} using the dual twistor expression for $\psi^{1}_{i}$. This is motivated by the alternative expression for $\psi^{1}_{i}$ given by \eqref{psip}, in which it is expressed as a derivative with respect to $\sigma_i$. 
Hence, we can integrate by parts with respect to $\d^{2}\sigma_{i}$ to find: \begin{multline*} \int\d\mu\;X^{2}\sum_{i} \psi^{1}_{i}\left|\HH^{12i}_{12i}\right|\;e^{i\cP\cdot X}\d^{2}\sigma \\ =-\Lambda\int\d\mu\;X^{2}e^{i\cP\cdot X}\sum_{i}\frac{\partial}{\partial\sigma_{i}^{A}}\left(\left|\HH^{12i}_{12i}\right|\frac{(\xi 1)(\xi i)\sigma_{1}^{A}+(\xi 1)(1i)\xi^{A}}{(1i)^2(\xi i)^2}\right) \d^{2}\sigma \\ =-\Lambda\int\d\mu\;X^{2}e^{i\cP\cdot X}\sum_{i,j}\left|\HH^{12ij}_{12ij}\right|\frac{[W_{i},W_{j}](1\xi)^{4}(ij)}{(1i)^{2}(1j)^{2}(\xi i)^{2}(\xi j)^{2}}\d^{2}\sigma \\ =-\int\d\mu\;X^{2}\sum_{i,j}\omega^{1}_{ij}\left|\HH^{12ij}_{12ij}\right| e^{i\cP\cdot X}\d^{2}\sigma . \end{multline*} with the third line following after symmetrizing over $(i\leftrightarrow j)$ and several applications of the Schouten identity. Thus, we see that following an integration by parts the second term in \eqref{MHVamp} cancels the third term. A similar calculation demonstrates that the fourth and fifth terms also cancel with each other, so the amplitude can be written much more compactly as: \be{MHVamp2} \cM_{n,0}=\frac{1}{\Lambda}\int\d\mu\;\left[ (X^2)^2\left|\HH^{12}_{12}\right| +\sum_{i,j,k,l}\omega^{1}_{ij}\omega^{2}_{kl}\left|\HH^{12ijkl}_{12ijkl}\right|\right]\prod_{m=1}^{n}h_{m}\;\D\sigma_{m}\:+(1\leftrightarrow 2), \ee where we have restored arbitrary twistor wavefunctions and homogeneous coordinates. Clearly this formulation is an improvement over \eqref{MHVamp} in terms of simplicity, although the arguments which lead to it are on-shell in nature (i.e., based upon the dual-twistor wavefunctions) so we do not expect it to be suitable for use as a \emph{vertex} of the twistor action. \subsubsection{Hodges' formula from the flat-space limit} The formulae \eqref{MHVamp}, \eqref{MHVamp2} provide expressions for the MHV amplitude of Einstein gravity with non-vanishing cosmological constant. 
While we have checked that these expressions are gauge-invariant with \eqref{grgauge}, another obvious check we should perform is the flat-space limit, where $\cM_{n,0}$ should reproduce the flat-space scattering amplitude. While several forms of the flat-space MHV amplitude for gravity have been known for some time (e.g., the BGK or BCFW formulas \cite{Berends:1988zp, Mason:2008jy, Bedford:2005yy}), the optimal one was only recently discovered by Hodges \cite{Hodges:2012ym}. In $\cN=4$ language, Hodges' formula reads: \be{HForm} \cM^{\mathrm{Hodges}}_{n,0}(\Lambda=0)=\int \d\mu\; \frac{(12)^{2}}{(1i)^{2}(2i)^{2}}\left|\HH^{12i}_{12i}\right|\prod_{j=1}^{n}h_{j}\;\D\sigma_{j}, \ee where the entries of the matrix $\HH$ are now built from the flat-space infinity twistor \eqref{infty}. This is optimal (compared to all previous formulations) in the sense that it makes no reference to an ordering of the external gravitons, requires no explicit sum over permutations, and is the natural analogue of the Parke-Taylor amplitude \eqref{ParkeTaylor} from Yang-Mills theory. In flat space, the weighted Laplacian $\HH$ has some nice properties, which we make note of here: \begin{lemma}[Hodges \cite{Hodges:2012ym}]\label{Co-ranklem} With $\Lambda=0$ and inserting momentum eigenstates into \eqref{HForm}, the matrix $\HH$ is independent of $\xi\in\P^{1}$, obeys \be{cr3} \sum_{j=1}^{n}\HH_{ij}\sigma_{j}^{A}\sigma_{j}^{B}=0, \ee and hence has co-rank 3. \end{lemma} \proof After partially integrating over $\d\mu$ in \eqref{HForm}, momentum conservation emerges using traditional momentum eigenstates in the form \begin{equation*} \sum_{j=1}^{n}\tilde{p}^{A'}_{j}\sigma^{A}_{j}=0. \end{equation*} Now, suppose we choose a second reference spinor $\zeta\in\P^{1}$.
The only change in $\HH$ is in the diagonal entries: \begin{multline*} \Delta\HH_{ii}=-\sum_{j\neq i}\frac{[ij]}{(ij)}\left(\frac{(\xi j)^{2}}{(\xi i)^{2}}-\frac{(\zeta j)^2}{(\zeta i)^2}\right) \\ =-\sum_{j\neq i}\frac{[ij]}{(ij)}\left(\frac{(\xi j)(\zeta i)-(\xi i)(\zeta j)}{(\xi i)(\zeta i)}\right)\left(\frac{(\xi j)}{(\xi i)}+\frac{(\zeta j)}{(\zeta i)}\right) \\ =\frac{(\zeta \xi)\tilde{p}_{A'\;i}}{(\xi i)(\zeta i)}\left(\frac{\xi_{A}}{(\xi i)}+\frac{\zeta_{A}}{(\zeta i)}\right)\sum_{j=1}^{n}\tilde{p}_{j}^{A'}\sigma_{j}^{A}=0, \end{multline*} so $\HH$ is independent of $\xi$. Then we have \begin{equation*} \sum_{j=1}^{n}\HH_{ij}\sigma_{j}^{A}\sigma_{j}^{B}=\sum_{j\neq i}\frac{[ij]}{(ij)}\left[\sigma_{j}^{A}\sigma_{j}^{B}-\sigma_{i}^{A}\sigma_{i}^{B}\frac{(\xi j)^2}{(\xi i)^2}\right] =0, \end{equation*} due to independence of $\xi$. Since the symmetric array $\sigma_{j}^{A}\sigma_{j}^{B}$ has three degrees of freedom, the matrix $\HH$ has co-rank 3 as claimed. $\Box$ This lemma allows us to conclude that our formula for $\cM_{n,0}$ in \eqref{MHVamp2} would vanish as $\Lambda\rightarrow 0$ if we had not included the overall factor of $\Lambda^{-1}$, which appears due to the embedding of Einstein gravity inside conformal gravity. Indeed, suppose we had not included this factor, then the flat-space limit would be \begin{equation*} \lim_{\Lambda\rightarrow 0}\int\d\mu\;(X^2)^{2}\;\left|\HH^{12}_{12}\right|\;\prod_{i=1}^{n}h_{i}\;\D\sigma_{i}, \end{equation*} since $\omega^{1}_{ij}\sim O(\Lambda)$ means that we can drop the second term in \eqref{MHVamp2} in the limit. But when $\Lambda\rightarrow 0$, $X^{2}=1$ and lemma \ref{Co-ranklem} tells us that $\HH$ has co-rank three so the twice-reduced determinant vanishes after performing the $\d\mu$ integral against momentum eigenstates. However, it still appears that our formula for $\cM_{n,0}$ (now with the correct factor of $\Lambda^{-1}$) is a long way off from Hodges' formula. 
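Lemma \ref{Co-ranklem} is also easy to verify numerically. The following minimal sketch (illustrative only; it assumes randomly drawn real kinematics with $\tilde{p}_{1},\tilde{p}_{2}$ solved from momentum conservation, and all variable names are just for this sketch) checks both $\xi$-independence and co-rank 3:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6

def ang(a, b):
    # antisymmetric bracket (ab) = a^0 b^1 - a^1 b^0
    return a[0] * b[1] - a[1] * b[0]

# Random real kinematics: holomorphic spinors sigma_i, with p~_1, p~_2
# solved from momentum conservation sum_j p~_j^{A'} sigma_j^A = 0.
sig = rng.normal(size=(n, 2))
pt = rng.normal(size=(n, 2))
pt[:2] = np.linalg.solve(sig[:2].T, -(pt[2:].T @ sig[2:]).T)
assert np.allclose(pt.T @ sig, 0.0)

def HH(xi):
    # H_ij = [ij]/(ij) off-diagonal; diagonal fixed by the reference spinor
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                H[i, j] = ang(pt[i], pt[j]) / ang(sig[i], sig[j])
        H[i, i] = -sum(H[i, j] * ang(xi, sig[j])**2 / ang(xi, sig[i])**2
                       for j in range(n) if j != i)
    return H

H1 = HH(np.array([1.0, 0.3]))
H2 = HH(np.array([-0.4, 1.2]))
assert np.allclose(H1, H2)          # independence of the reference spinor

# sigma_j^A sigma_j^B gives three null vectors, so H has co-rank 3
for A, B in [(0, 0), (0, 1), (1, 1)]:
    assert np.allclose(H1 @ (sig[:, A] * sig[:, B]), 0.0)
sv = np.linalg.svd(H1, compute_uv=False)
assert (sv > 1e-8 * sv[0]).sum() == n - 3
```

Both assertions fail, as expected, if the momentum-conservation solve is omitted.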
If we use dual twistor wavefunctions \eqref{dtwf}, then we are interested in \be{3red1} \lim_{\Lambda\rightarrow 0}\frac{1}{\Lambda}\int \frac{\d^{8|8}X}{\mathrm{vol}\;\GL(2,\C)}\;(X^2)^{2} \left|\HH^{12}_{12}\right|\;e^{i\cP\cdot X}\;\d^{2}\sigma. \ee Although this expression is finite as $\Lambda\rightarrow 0$ by lemma \ref{Co-ranklem}, it is based on a twice-reduced determinant. How can we get to the thrice-reduced determinant which is the basis for Hodges' formula? The answer is provided by noticing that we can represent each factor of $X^{2}$ in \eqref{3red1} by a differential `wave operator' acting on $e^{i\cP\cdot X}$: \be{waveop} X^{2}\rightarrow \Box :=\frac{I_{IJ}}{(12)}\frac{\partial}{\partial W_{1\;I}}\frac{\partial}{\partial W_{2\;J}}. \ee Doing this allows us to re-write the twice-reduced contribution to $\cM_{n,0}$ as \be{3red2} \frac{1}{\Lambda}\int \frac{\d^{8|8}X}{\mathrm{vol}\;\GL(2,\C)}\d^{2}\sigma\;\left|\HH^{12}_{12}\right|\;\Box^{2}e^{i\cP\cdot X} =\frac{1}{\Lambda}\int \frac{\d^{2}\sigma}{\mathrm{vol}\;\GL(2,\C)}\left|\HH^{12}_{12}\right|\;\Box^{2}\delta^{8|8}(\cP). \ee On the support of this delta-function, we know that $\HH$ has co-rank three by lemma \ref{Co-ranklem}, so we can integrate by parts once with respect to $\frac{\partial}{\partial W_{2}}$ to give \begin{multline*} -\frac{1}{\Lambda}\int \frac{\d^{2}\sigma}{\mathrm{vol}\;\GL(2,\C)}\frac{\partial}{\partial W_{2\;J}}\left|\HH^{12}_{12}\right|\frac{I_{IJ}}{(12)}\frac{\partial}{\partial W_{1\;I}}\Box\delta^{8|8}(\cP) \\ =-\int \frac{\d^{2}\sigma}{\mathrm{vol}\;\GL(2,\C)} \sum_{i}\frac{(\xi 2)^{2}}{(12)(i2)(\xi i)^{2}}\left|\HH^{12i}_{12i}\right|\;W_{i}\cdot\frac{\partial}{\partial W_{1}} \Box \delta^{8|8}(\cP). \end{multline*} Once again, the support of the delta-function indicates that we can take $W_{i}\cdot\frac{\partial}{\partial W_{1}}=\sigma_{1}\cdot\frac{\partial}{\partial\sigma_{i}}$, and then integrate by parts once again with respect to $\d^{2}\sigma_{i}$. 
This leaves us with \begin{multline} \int \frac{\d^{2}\sigma}{\mathrm{vol}\;\GL(2,\C)} \sum_{i}\frac{(12)^{2}}{(1i)^{2}(2i)^{2}}\left|\HH^{12i}_{12i}\right|\;\Box\delta^{8|8}(\cP) \\ +\int \frac{\d^{2}\sigma}{\mathrm{vol}\;\GL(2,\C)} \sum_{i,j}\left(\frac{(\xi 2)^{2}(1\xi)(ji)+(\xi 2)^{2}(1j)(\xi i)}{(12)(i2)(ji)(\xi i)(\xi j)^{2}}\right)\HH_{ij}\;\left|\HH^{12ij}_{12ij}\right|\;\Box\delta^{8|8}(\cP). \end{multline} The contribution from the second line can be further simplified by noting that the summation entails symmetrization, term-by-term, in both $1\leftrightarrow 2$ and $i\leftrightarrow j$. A straightforward calculation involving several applications of the Schouten identity allows us to reduce this to \begin{equation*} \int \frac{\d^{2}\sigma}{\mathrm{vol}\;\GL(2,\C)} \sum_{i,j}\left(\frac{(\xi 1)^{2}(i2)(j2)+(\xi 2)^{2}(i1)(j1)}{(1i)(2i)(1j)(2j)(\xi i)(\xi j)}\right)\HH_{ij}\;\left|\HH^{12ij}_{12ij}\right|\;\Box\delta^{8|8}(\cP). \end{equation*} Upon using the symmetry of $i\leftrightarrow j$ and the basic properties of determinants, we are finally left with an expression for the amplitude which has no overall factor of $\Lambda^{-1}$ and now features thrice-reduced determinants: \begin{multline}\label{3red3} \cM_{n,0}=\int \d\mu \left[\sum_{i,j}\left(\frac{(\xi 1)^{2}(i2)(j2)+(\xi 2)^{2}(i1)(j1)}{(1i)(2i)(1j)(2j)(\xi i)(\xi j)}\right)\left|\HH^{12i}_{12j}\right| \right. \\ \left. +\sum_{i}\frac{(12)^{2}}{(1i)^{2}(2i)^{2}}\left|\HH^{12i}_{12i}\right| \right]\;\prod_{m=1}^{n}h_{m}\;\D\sigma_{m}, \end{multline} where we have reverted to arbitrary twistor wavefunctions and taken all remaining $\Lambda$-dependence to zero. Now, in the summation in the second line of \eqref{3red3}, the only appearance of the reference spinor $\xi\in\P^1$ is in the diagonal entries of the matrix $\HH$. But lemma \ref{Co-ranklem} tells us that $\HH$ is actually \emph{independent} of the choice of $\xi$ in the flat-space limit. So the second line of \eqref{3red3} is independent of $\xi$.
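By lemma \ref{Co-ranklem}, $\HH$ has a three-dimensional kernel on the support of momentum conservation; a standard consequence (the basic property of reduced determinants that is built into Hodges' formula) is that the normalized reduced determinant $|\HH^{12i}_{12i}|/((1i)(2i))^{2}$ does not depend on the choice of $i$. A minimal numerical check (illustrative only, assuming randomly drawn kinematics with momentum conservation imposed):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6

def ang(a, b):
    # antisymmetric bracket (ab) = a^0 b^1 - a^1 b^0
    return a[0] * b[1] - a[1] * b[0]

# Random kinematics with momentum conservation imposed on p~_1, p~_2
sig = rng.normal(size=(n, 2))
pt = rng.normal(size=(n, 2))
pt[:2] = np.linalg.solve(sig[:2].T, -(pt[2:].T @ sig[2:]).T)

# Flat-space Hodges matrix, with an arbitrary reference spinor xi
xi = np.array([0.7, -1.1])
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            H[i, j] = ang(pt[i], pt[j]) / ang(sig[i], sig[j])
    H[i, i] = -sum(H[i, j] * ang(xi, sig[j])**2 / ang(xi, sig[i])**2
                   for j in range(n) if j != i)

# The normalized reduced determinant is the same for every removed vertex i
# (indices 0, 1 here play the role of the labels 1, 2 in the text)
vals = []
for i in range(2, n):
    keep = [k for k in range(n) if k not in (0, 1, i)]
    red = np.linalg.det(H[np.ix_(keep, keep)])
    vals.append(red / (ang(sig[0], sig[i]) * ang(sig[1], sig[i]))**2)

assert np.allclose(vals, vals[0])
```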
The first line is also independent of $\xi$ on its own; this can be shown directly with a residue computation \cite{Adamo:2012xe}. This means that we can freely set $\xi=\sigma_{1}$, leaving \be{Flat2} \cM_{n,0}(\Lambda=0)=\int \d\mu\left[\sum_{i,j}\frac{(12)^{2}}{(1i)(1j)(2i)(2j)}\left|\HH^{12i}_{12j}\right| +\sum_{i}\frac{(12)^{2}}{(1i)^{2}(2i)^{2}}\left|\HH^{12i}_{12i}\right|\right]\;\prod_{k=1}^{n}h_{k}\;\D\sigma_{k}. \ee Finally, we note that on the support of overall momentum conservation, every term in \eqref{Flat2} is equivalent. This follows from the basic properties of reduced determinants and is built into Hodges' formula itself, which has many equivalent expressions \cite{Hodges:2012ym, Cachazo:2012kg}. So up to an irrelevant integer constant (which can be accounted for with proper normalizations), we find: \begin{equation*} \lim_{\Lambda\rightarrow 0}\cM_{n,0}=\int \d\mu\; \frac{(12)^{2}}{(1i)^{2}(2i)^{2}}\left|\HH^{12i}_{12i}\right|\;\prod_{j=1}^{n}h_{j}\;\D\sigma_{j}= \cM^{\mathrm{Hodges}}_{n,0}(\Lambda=0), \end{equation*} as required. \subsubsection{Towards the MHV formalism} The primary utility of the twistor action for $\cN=4$ SYM was that it naturally encoded the MHV formalism for gauge theory. This allowed us to easily compute tree-level amplitudes directly, and also formed the basis for loop-level computations at the level of the integrand to all orders in perturbation theory. On twistor space, the building blocks for the MHV formalism were the twistor propagator and the vertices, which were easily seen to correspond on-shell to the Parke-Taylor amplitudes. It is easy to see that we have now built the same building blocks on twistor space for our gravitational actions. For both the conformal and Einstein gravity actions, we now have an expression for the vertices given by \eqref{MHVamp}, and their respective propagators restricted to Einstein states are given by \eqref{CGprop} and \eqref{Einprop}.
Clearly, this is enough to define an MHV formalism on twistor space along the lines of the one we developed for gauge theory in Section \ref{Chapter3}. If we could translate this prescription to momentum space (or even operationalize it efficiently in twistor space) it would represent a major breakthrough, since traditional definitions of an MHV formalism for gravity break down \cite{BjerrumBohr:2005jr, Bianchi:2008pu}. Recall that on twistor space, the MHV degree $k$ of an amplitude is the number of external $\tilde{h}$s minus 2. Since each vertex of the twistor action contains two $\tilde{h}$s, each propagator $\Delta(Z_{1},Z_{2})$ takes the place of one $\tilde{h}$, and a connected diagram with $|\mathcal{V}|$ vertex insertions and $l$ loops contains $|\mathcal{V}|+l-1$ propagators, the number of external $\tilde{h}$s is $2|\mathcal{V}|-(|\mathcal{V}|+l-1)$ and hence \be{MHV-deg} k=|\mathcal{V}|-l-1. \ee So to compute an N$^k$MHV tree amplitude, we must sum diagrams with $k+1$ MHV vertices and $k$ propagators--just like we did for $\cN=4$ SYM. Now, if our proposal for the Einstein gravity twistor action is correct then it should not matter whether we perform the computation with the Einstein action or the conformal gravity action restricted to Einstein states. For instance, an NMHV diagram for the Einstein action would correspond to a contribution of the form \be{grNMHV1} \int \D^{3|4}Z_{1}\;\D^{3|4}Z_{2}\;\mathcal{V}(Z_{1},\ldots)\;\Delta^{\mathrm{Ein}}(Z_{1},Z_{2})\;\mathcal{V}(\ldots, Z_{2}), \ee where the vertex is given by \eqref{MHVamp} and the propagator by \eqref{Einprop}. The analogous calculation in conformal gravity involves replacing $I_{IJ}Z^{J}_{1}\tilde{h}_1$ in one vertex and $I^{IJ}\partial_{2J}h_{2}$ in the other with a propagator and then dividing by the overall factor of $\Lambda$ required by the embedding of Einstein gravity.
In particular, \eqref{grNMHV1} should be equal to \be{grNMHV2} \frac{1}{\Lambda}\int \D^{3|4}Z_{1}\;\D^{3|4}Z_{2}\;\mathcal{V}^{J}(Z_{1},\ldots)\;\Delta^{I}_{J}(Z_{1},Z_{2})\;\mathcal{V}_{I}(\ldots, Z_{2}), \ee where the propagator is given by \eqref{CGprop} and the vertices now carry a twistor index. Showing that \eqref{grNMHV1} is equal to \eqref{grNMHV2} would establish that the Einstein twistor action is correct at the level of perturbation theory, and initial calculations indicate that this is true. Regardless of the validity of the Einstein twistor action, it is clear that conformal gravity induces an MHV formalism on twistor space. However, the structure of this formalism is significantly different from previous momentum space proposals. The functional form of the vertex $\mathcal{V}_{n}$ begins with a twice-reduced determinant, as in \eqref{MHVamp}. This indicates that in flat space, the MHV formalism on twistor space will \emph{not} simply correspond to an off-shell extension of the Hodges formula linked with $p^{-2}$ propagators (or at least not in an obvious way). A better idea of what is happening on momentum space could be obtained by translating the twistor propagator to momentum space as we did in the build-up to proposition \ref{MHVpropn}, but its action on the vertices could be hard to deduce. Obviously this is an important goal for future research. \subsection{BCFW Formulae} We conclude this section by presenting an alternative route to formulae for amplitudes with cosmological constant by using BCFW recursion. It is known that BCFW recursion holds for gravity scattering amplitudes on a flat background \cite{Bedford:2005yy, Cachazo:2005ca, Benincasa:2007qj}. As we will see, this can be easily extended to backgrounds with cosmological constant.
By determining the three-point seed amplitudes (i.e., MHV and $\overline{\mbox{MHV}}$), we can in principle compute all tree-level amplitudes using momentum eigenstates, although we focus on the $n$-point MHV amplitude here. \subsubsection{Three-point amplitudes} \subsubsection*{\textit{Anti-MHV 3-point}} The three-point $\overline{\mbox{MHV}}$ amplitude comes from the cubic vertex in $S_{1}[\tilde{h},h]$, where no Picard iteration is needed. Using either the conformal gravity twistor action restricted to Einstein states or the Einstein action itself: \be{MHVbar1} \cM_{3,-1}(1,2,3;\Lambda)=\int_{\CPT}\D^{3|4}Z\wedge\tilde{h}_{1}\wedge\left\{h_{2},h_{3}\right\}. \ee To evaluate this, we must insert momentum eigenstates for the gravitons. We use eigenstates with 4-momentum $p^{AA'}=p^{A}\tilde{p}^{A'}$ and fermionic momentum $\eta_{a}p_{A}$ \be{gmomeig} \tilde{h}_{i}=\int_{\C}s_{i}\;\d s_{i}\;\bar{\delta}^{2}(s_{i}\lambda_{i}-p_{i})e^{s_{i}[[\mu_{i}\tilde{p}_{i}]]}, \qquad h_{i}=\int_{\C}\frac{\d s_{i}}{s^{3}_{i}}\;\bar{\delta}^{2}(s_{i}\lambda_{i}-p_{i})e^{s_{i}[[\mu_{i}\tilde{p}_{i}]]}. \ee From the point of view of de Sitter space, such eigenstates are rather unnatural since they are singular on a finite light cone and do not recognize infinity. In other words, they are adapted to the affine patch \eqref{dSmetric2} rather than the Poincar\'{e} patch \eqref{dSmetric3}. As we shall see, the payoff for making this seemingly awkward choice is formulae that limit nicely as $\Lambda\rightarrow 0$.
Plugging this into \eqref{MHVbar1} gives: \begin{multline*} \int_{\CPT}\D^{3|4}Z\wedge\tilde{h}_{1}\wedge\left(\frac{\partial h_{2}}{\partial\mu_{2\;A'}}\frac{\partial h_{3}}{\partial\mu_{3}^{A'}}-\Lambda \frac{\partial h_{2}}{\partial\lambda_{2\;A}}\frac{\partial h_{3}}{\partial\lambda_{3}^{A}}\right) \\ =\int \D^{3|4}Z\frac{s_{1}\;[2\;3]}{s_{2}^{2}s_{3}^{2}}\prod_{i=1}^{3}\d s_{i}\bar{\delta}^{2}(s_{i}\lambda_{i}-p_{i})\;\left(1-\Lambda\Box_{p}\right) \exp\left(\sum_{i=1}^{3} s_{i}[[\mu_{i}\tilde{p}_{i}]]\right) \\ =\frac{[23]^{2}}{[12]^{2}[31]^{2}}\left(1-\Lambda\Box_{p}\right)\delta^{4|8}\left(\sum_{i=1}^{3}p_{i}\right). \end{multline*} Here, the second line follows by noting that $\frac{\partial}{\partial \lambda_{A}}$ can be re-expressed as a derivative with respect to $p_{A}$, which in turn leads to $\tilde{p}_{A'}\p/\p p_{AA'}$ when acting eventually on the momentum-conserving delta-function. We therefore have: \be{MHVbar2} \cM_{3,-1}(1,2,3;\Lambda)=\frac{[23]^{2}}{[12]^{2}[31]^{2}}\left(1-\Lambda\Box_{p}\right)\delta^{4|8}\left(\sum_{i=1}^{3}p_{i}\right). \ee As claimed, this limits nicely to the flat-space result as $\Lambda\rightarrow 0$, and the $\Box_{p}$ manifests the breaking of Poincar\'{e} symmetry in de Sitter space. \subsubsection*{\textit{MHV 3-point}} The three-point MHV amplitude is the first non-trivial application of our perturbative iteration, which acts only once to produce a single positive-helicity wavefunction $h$. This insertion can act on either contact structure or on a negative-helicity wavefunction; however, we know that the perturbation of a single contact structure is $\d$-exact by \eqref{psip}. Since there are no further perturbations at three points, any deformation of the contact structures vanishes by Stokes' theorem \cite{Adamo:2012xe, Adamo:2013tja}.
So the only deformations are of the form \begin{equation*} \tilde{h}_{i}\rightarrow \int_{\P^1}\frac{\D\sigma_{j}\;(\xi\;i)^{2}}{(i\;j)(\xi\;j)^{2}}I^{IJ}\partial_{I}\tilde{h}_{i}\partial_{J}h_{j}. \end{equation*} and the three-point amplitude becomes \be{MHV3pt1} \cM_{3,0}(1,2,3;\Lambda)= \frac{1}{\Lambda}\int \d\mu\; (X^{2})^{2}\frac{(\xi 3)^{2}}{(13)\;(\xi 1)^{2}}[\partial_{1},\partial_{3}]\prod_{i=1}^{3}h_{i}\D\sigma_{i} + (2\leftrightarrow 3). \ee Since this exhausts the perturbative iteration at three points, the incidence relations are $Z^{I}=X^{I}_{A}\sigma^{A}$, allowing us to write \begin{equation*} \partial_{I}\tilde{h}_{3}=\frac{\sigma^{B}_{2}}{(32)}\frac{\partial \tilde{h}_{3}}{\partial X^{I B}}. \end{equation*} Inserting this into \eqref{MHV3pt1}, we can now integrate by parts with respect to $X$. Our choice means that $\frac{\partial}{\partial X}$ annihilates $\tilde{h}_{2}$ as well as $I^{IJ}\partial_{J}h_{1}$, since this vector is divergence-free. Hence, the only contribution is: \begin{multline*} \frac{4}{\Lambda}\int \d\mu\; X^{2} I_{IK}Z_{2}^{K} \frac{(\xi 3)^{2}}{(32)(13)(\xi 1)^{2}}I^{IJ}\tilde{h}_{3}\partial_{J}h_{1} \tilde{h}_{2}\prod_{i=1}^{3}\D\sigma_{i} +(2\leftrightarrow 3) \\ =-4 \int \d\mu\;X^{2}\frac{(\xi 3)^{2}}{(32)(13)(\xi 1)^{2}}\tilde{h}_{3}\;Z_{2}\cdot\partial_{1} h_{1}\tilde{h}_{2}\prod_{i=1}^{3}\D\sigma_{i}+(2\leftrightarrow 3). \end{multline*} As $Z^{I}(x,\sigma)$ is a degree one function in $\sigma$ by assumption, we can let the differential operator $Z_{2}\cdot\partial_{1}$ act as \begin{equation*} Z_{2}\cdot\partial_{1}\sim \sigma_{2}\cdot\frac{\partial}{\partial\sigma_{1}}. \end{equation*} This enables us to integrate by parts with respect to $\D\sigma_{1}$, obtaining \begin{equation*} 4 \int \d\mu\;X^{2}\frac{(\xi 3)^{2}}{(32)(13)(\xi 1)^{2}}\left(\frac{(32)(\xi 1)-2(\xi 2)(13)}{(13)(\xi 1)}\right) h_{1}\tilde{h}_{2}\tilde{h}_{3}\prod_{i=1}^{3}\D\sigma_{i} +(2\leftrightarrow 3). 
\end{equation*} After two applications of the Schouten identity, $(ij)(kl)+(ik)(lj)+(il)(jk)=0$, and inserting momentum eigenstates \eqref{gmomeig}, we have: \begin{multline}\label{MHV3pt2} \cM_{3,0}(1,2,3;\Lambda)=\int \d\mu\;X^{2}\frac{(23)^{2}}{(12)^{2}(31)^{2}}h_{1}\tilde{h}_{2}\tilde{h}_{3}\prod_{i=1}^{3}\D\sigma_{i} \\ =\frac{\la 23\ra^{2}}{\la 12\ra^{2} \la 31\ra^{2}}(1-\Lambda \Box_{p})\delta^{4|8}\left(\sum_{i=1}^{3}p_{i}\right), \end{multline} with the $\Box_{p}$ arising from the Fourier transformation of $X^{2}=1-\Lambda x^{2}$. Once again, note that this has the correct $\Lambda\rightarrow 0$ limiting behavior. \subsubsection{$\cN=8$ supergravity and BCFW} These three-point amplitudes can be used to seed the tree-level BCFW recursion for Einstein gravity. First, we confirm that BCFW recursion indeed extends to gravitational scattering amplitudes on (anti-)de Sitter backgrounds \cite{Adamo:2012nn}. \begin{lemma}\label{GBCFWlem} BCFW recursion is valid for gravitational scattering amplitudes ($0\leq\cN\leq8$) on backgrounds with cosmological constant. \end{lemma} \proof BCFW recursion is derived by picking two external momenta for a scattering amplitude and analytically continuing them with a complex variable $z$ while keeping them on-shell and maintaining overall momentum conservation. The amplitude then becomes a complex function $\cM(z)$: it has simple poles wherever internal propagators go on-shell, and $\cM(0)$ is the original amplitude. These simple poles correspond to the terms arising in the BCFW recursion, so provided $\cM(z\rightarrow\infty)$ vanishes, Cauchy's theorem implies the recursion. In the $\Lambda=0$ case, it was proven that $\cM(z\rightarrow\infty)=0$ using a background field method in \cite{ArkaniHamed:2008yf}.
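Concretely, the analytic continuation described above is standardly realized in spinor-helicity variables by a two-line shift (the labeling of the two shifted legs here is purely illustrative):
\begin{equation*}
\tilde{\lambda}_{i}\rightarrow\tilde{\lambda}_{i}+z\,\tilde{\lambda}_{j}\,, \qquad \lambda_{j}\rightarrow\lambda_{j}-z\,\lambda_{i}\,.
\end{equation*}
Each shifted momentum remains a simple bi-spinor and hence null, while overall momentum conservation holds since $\lambda_{i}(\tilde{\lambda}_{i}+z\tilde{\lambda}_{j})+(\lambda_{j}-z\lambda_{i})\tilde{\lambda}_{j}=\lambda_{i}\tilde{\lambda}_{i}+\lambda_{j}\tilde{\lambda}_{j}$.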
With $\Lambda\neq 0$, $\cM(z)$ still has simple poles corresponding to propagators going on-shell, so the only potential subtlety arises with the fall-off as $z\rightarrow\infty$, and it suffices to show that the methods of \cite{ArkaniHamed:2008yf} still work. In the large $z$ regime, we are interested in quadratic fluctuations on a classical background, where the fluctuations correspond to the two shifted particles and the soft background looks like de Sitter space. For our gravitational amplitudes, this entails inserting a metric $g_{\mu\nu}+h_{\mu\nu}$, and extracting the portion which is quadratic in $h$ \cite{Christensen:1979iy}: \begin{multline*} \cL_{\mathrm{quad}}=\sqrt{-g}\left[\frac{1}{4}\tilde{h}^{\mu\nu}(2R_{\mu\rho}g_{\nu\sigma}-2R_{\mu\rho\nu\sigma}-g_{\mu\rho}g_{\nu\sigma}\Box)h^{\rho\sigma}-\frac{1}{2}\nabla^{\rho}\tilde{h}_{\rho\mu}\nabla^{\sigma}\tilde{h}^{\mu}_{\sigma}\right. \\ \left. -\tilde{h}(R_{\rho\sigma}-\frac{1}{4}g_{\rho\sigma}R)h^{\rho\sigma}-\frac{1}{2}\Lambda\tilde{h}^{\mu\nu}h_{\mu\nu}\right], \end{multline*} where $\tilde{h}_{\mu\nu}=h_{\mu\nu}-\frac{1}{2}g_{\mu\nu}h$, and $h=g_{\mu\nu}h^{\mu\nu}$. To this, we add the de Donder gauge-fixing term, as well as a Lagrangian density for a conformally invariant scalar field, leaving us with: \begin{multline*} \cL_{\mathrm{quad}}=\sqrt{-g}\left[\frac{1}{4}\tilde{h}^{\mu\nu}(2R_{\mu\rho}g_{\nu\sigma}-2R_{\mu\rho\nu\sigma}-g_{\mu\rho}g_{\nu\sigma}\Box)h^{\rho\sigma}-\tilde{h}(R_{\rho\sigma}-\frac{1}{4}g_{\rho\sigma}R)h^{\rho\sigma}\right. \\ \left. -\frac{1}{2}\Lambda\tilde{h}^{\mu\nu}h_{\mu\nu}+\frac{1}{2}g^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi -\Lambda\phi^{2}\right]. \end{multline*} Now, we take our background metric $g_{\mu\nu}$ to be de Sitter, and implement the field re-definition used in \cite{Bern:1999ji}: \begin{equation*} h_{\mu\nu}\rightarrow h_{\mu\nu}+g_{\mu\nu}\phi, \qquad \phi\rightarrow \frac{h}{2}+\phi.
\end{equation*} A bit of tensor algebra reveals that the quadratic Lagrangian transforms to become: \begin{equation*} \cL_{\mathrm{quad}}\rightarrow\sqrt{-g}\left[\frac{1}{4}g^{\mu\nu}\nabla_{\mu}h^{\sigma}_{\rho}\nabla_{\nu}h^{\rho}_{\sigma}-\frac{1}{2}h_{\mu\nu}h_{\rho\sigma}R^{\mu\rho\nu\sigma}+\frac{1}{2}g^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi -\Lambda\phi^{2}\right]. \end{equation*} This transformation successfully eliminates all the trace terms, and after decoupling the re-defined scalar field, the Lagrangian is exactly the same as the one used in the flat background calculation. From this point, the proof that $\cM(z\rightarrow\infty)$ vanishes follows in exactly the same fashion as in the $\Lambda=0$ case of \cite{ArkaniHamed:2008yf}, as desired. $\Box$ \medskip BCFW recursion for Einstein gravity is most easily expressed for maximal (i.e., $\cN=8$) SUGRA, where there is only a single supermultiplet \cite{Cremmer:1978km}. $\cN=8$ SUGRA is an interesting theory in its own right: it is obtained by dimensional reduction from eleven dimensions \cite{Cremmer:1979up}; satisfies the `no-triangle' hypothesis \cite{BjerrumBohr:2008ji}; contains a non-linear global $E_{7(7)}$ symmetry \cite{Cremmer:1978ds}; possesses additional recursive-like relations (the so-called `bonus relations'); has an S-matrix which is well-defined everywhere in the moduli space; and may even be UV finite \cite{Bern:2006kd}. However, we will simply use $\cN=8$ supersymmetry as a convenient calculational tool. Supersymmetric BCFW recursion for gravity \cite{ArkaniHamed:2008gz} is still seeded by the three-point amplitudes, and its translation into twistor space is well-understood for a flat background \cite{Mason:2009sa}. Here we rewrite those formulae in a notation that is suggestive of twistor actions and twistor-string theory, and extend them to $\Lambda\neq 0$.
We will be working directly with the Einstein amplitudes, so the overall factor of $\Lambda$ present in the formulae above will be absent, and we can take $\Lambda\rightarrow 0$ if desired. For the remainder of this section, we work in $\cN=8$ supertwistor space $\PT_{[8]}$ so that $Z^I=(Z^\alpha,\chi^a)$, where now $a=1,\ldots ,8$ and the corresponding holomorphic volume form $\D^{3|8}Z$ now has weight $-4$ (so $\PT_{[8]}$ is no longer Calabi-Yau). We can embed the graviton fields into the $\cN=8$ framework by setting \be{4in8} \cH= h+\cdots +\frac{\chi^{8}}{8!} \tilde h\, . \ee A generic $\cH$ of homogeneity degree two will encompass the full $\cN=8$ linear gravity supermultiplet in the same way that $\cA$ encoded the full $\cN=4$ SYM multiplet. Of course, one may ask the natural question: is this operation well defined on a de Sitter background? The immediate answer is `no,' simply because there is no unbroken unitary representation of supersymmetry in de Sitter space \cite{Witten:2001kn}. For our purposes though, the $\cN=8$ supersymmetry is just a formal tool we use to encode both $\cN=0$ graviton helicities in the single field \eqref{4in8}. After computing the amplitude in the $\cN=8$ formalism, we can truncate immediately to $\cN=0$ by performing fermionic integrations in the usual fashion. Furthermore, from the perturbative point of view our amplitudes are just polynomials in $\Lambda$, and one can simply reverse the sign of $\Lambda$ to consider a (perturbative) amplitude for gauged supergravity on anti-de Sitter space, where supersymmetry is unbroken. The BCFW recursion in \cite{Mason:2009sa} was based on a split signature framework in which the twistors are totally real and the $n$-point amplitude was represented as a distribution on $n$ copies of $\PT_{\R}\subset\RP^{3|8}$. We begin by translating this setup to the complex setting adopted throughout this review.
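To make the truncation to $\cN=0$ explicit in the conventions of \eqref{4in8} (where $h$ sits at the bottom of the multiplet and $\tilde{h}$ at the top): a negative helicity graviton on a given external leg is extracted by integrating over the corresponding fermionic variables, while a positive helicity graviton is obtained by setting them to zero, schematically
\begin{equation*}
\cM^{\cN=0}_{n}(1^{-},2^{-},3^{+},\ldots,n^{+})=\int\d^{8}\chi_{1}\,\d^{8}\chi_{2}\;\cM^{\cN=8}_{n}\Big|_{\chi_{3}=\cdots=\chi_{n}=0}\,,
\end{equation*}
so that only the $\tilde{h}$ component survives on legs $1,2$ and only $h$ on the remaining legs.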
In Chapter \ref{Chapter3}, we represented scattering amplitudes on twistor space in terms of their integral kernel. With $\cN=4$ supersymmetry, this entailed picking twistor representatives in $\Omega^{0,2}(\PT,\cO)$. With $\cN=8$ supersymmetry, the relevant pairing between twistor wavefunctions takes the form: \begin{equation*} \Omega^{0,1}(\PT_{[8]},\cO(k))\times\Omega^{0,2}_{c}(\PT_{[8]},\cO(4-k))\rightarrow\C , \qquad (\phi,\alpha)\mapsto\int_{\PT_{[8]}}\D^{3|8}Z\wedge\phi\wedge\alpha. \end{equation*} The twistor wavefunctions for $\cN=8$ SUGRA take values in $H^{0,1}(\PT_{[8]},\cO(2))$, so this means that we should represent our scattering states in the integral kernel as: \be{delta-fn-w} \cH_{i}=\bar\delta^{3|8}_{2,2}(Z_i,Z(\sigma_{i}))=\int_\C \frac{\d s}{s^3}\bar\delta^{4|8}(Z_i+sZ(\sigma_{i}))\, . \ee As in Chapter \ref{Chapter3}, integration with respect to $\D\sigma_{i}$ in our calculations reduces this to a $(0,2)$-form of weight +2 as desired. The recursion is seeded by the three-point $\overline{\mbox{MHV}}$ and MHV amplitudes. The formulae \eqref{MHVbar2}, \eqref{MHV3pt2} extend easily to $\cN=8$ SUGRA; removing the overall factor of $\Lambda$ gives \be{MHV-bar-N=8} \cM^{\cN =8}_{3,-1}(1,2,3)= \int_{\PT_{[8]}} \D^{3|8}Z\wedge \cH_{3}\; \{\cH_{1},\cH_{2}\}, \ee and \be{MHV3ptN=8} \cM^{\cN =8}_{3,0}(1,2,3)=\int\d\mu\;X^{2} \prod_{i=1}^3 \frac { \cH_{i}\; \D\sigma_i }{(\sigma_i\cdot\sigma_{i+1})^2}, \ee where we use the notation\footnote{The fermionic parts of the infinity twistor correspond to some gauging of the R-symmetry of supergravity \cite{Wolf:2007tx}; for our purposes we can let these components be zero.} \begin{equation*} Z^{I}(\sigma)=X^{IA}\sigma_{A}.
\end{equation*} From \eqref{BCFR3}, BCFW on twistor space becomes \be{recursion} \cM(Z_1,\ldots,Z_n)=\sum_{L ,R}\int_{\C\times \PT_{[8]}}\D^{3|8}Z \frac{\d z}z \cM_L(Z_1, Z_2,\ldots,Z_i,Z)\;\cM_R(Z,Z_{i+1},\ldots,Z_n+z Z_1) \ee where the sum is over all $1<i<n-1$ and permutations fixing $1$ and $n$. In the solution to the recursion relations, a particularly important role is played by the contributions in which either $\cM_L$ or $\cM_R$ is a three-point amplitude. Up to various shifts, these are the main terms involved in solving the recursion relations inductively. In these cases it emerges from three-particle kinematics that the contributions are only nontrivial when $\cM_L$ is MHV or $\cM_R$ is $\overline{\mbox{MHV}}$. The latter case is known as the `homogeneous term,' and when the amplitude under consideration is the $n$-point MHV amplitude, the homogeneous terms form the entire recursion. The integrations for the homogeneous term were performed explicitly in \cite{Mason:2009sa} for the split signature, flat background case; we extend them to the complex $\Lambda\neq 0$ setting here. Our starting point is the MHV recursion: \be{homogeneous} \cM_{n,0}(1,\ldots, n)=\sum \int \D^{3|8}Z\frac{\d z}{z}\cM(Z_{1},\ldots, Z_{n-1},Z)\;\cM_{3,-1}(Z, Z_{n-1},Z_{n}+zZ_{1}). \ee In the integral kernel formalism, the three-point seed amplitude is obtained from \eqref{MHV-bar-N=8}: \begin{multline*} \cM_{3,-1}(Z, Z_{n-1},Z_{n}+zZ_{1})=\int \D^{3|8}Z'\;\bar{\delta}^{3|8}_{2,2}(Z_{n}+zZ_{1},Z')\left\{\bar{\delta}^{3|8}_{2,2}(Z,Z'), \bar{\delta}^{3|8}_{2,2}(Z_{n-1},Z')\right\} \\ =\left[\frac{\partial}{\partial Z}, \frac{\partial}{\partial Z_{n-1}}\right]\;\bar{\delta}^{3|8}_{2,2}(Z,Z_{n}+zZ_{1}) \; \bar{\delta}^{3|8}_{2,2}(Z_{n-1},Z_{n}+zZ_{1}), \end{multline*} simply using the properties of the complex distributional forms.
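The key property being used here is that, up to the signs coming from its antisymmetry, $\bar{\delta}^{3|8}_{2,2}$ acts as a reproducing kernel with respect to the natural pairing: for $f$ of homogeneity $+2$,
\begin{equation*}
\int_{\PT_{[8]}}\D^{3|8}Z'\wedge\bar{\delta}^{3|8}_{2,2}(Z,Z')\wedge f(Z')=f(Z)\,,
\end{equation*}
as follows from the definition \eqref{delta-fn-w}, with the weights fixed by the $\D^{3|8}Z'$ measure having weight $-4$.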
This leaves us with a recursion of the form \begin{equation*} \sum \int \D^{3|8}Z\frac{\d z}{z}\cM(Z_{1},\ldots, Z_{n-1},Z)\;\left[\partial, \partial_{n-1}\right]\; \bar{\delta}^{3|8}_{2,2}(Z,Z_{n}+zZ_{1}) \; \bar{\delta}_{2,2}^{3|8}(Z_{n-1},Z_{n}+zZ_{1}). \end{equation*} Integrating by parts with respect to $\D^{3|8}Z$ and using the delta-function support leaves us with \be{homr1} -\sum I^{IJ}\int \frac{\d z}{z}\frac{\partial}{\partial Z_{n-1}^{I}}\cM(1,\ldots, n-1)\frac{\partial}{\partial Z^{J}_{n-1}}\bar{\delta}_{2,2}^{3|8}(Z_{n-1},Z_{n}+zZ_{1}). \ee Now, a simple calculation shows that \begin{multline*} \frac{\partial}{\partial Z^{J}_{n-1}}\bar{\delta}^{3|8}_{2,2}(Z_{n-1},Z_{n}+zZ_{1})=-\frac{\partial}{\partial Z^{J}_{n-1}}\bar{\delta}^{3|8}_{2,2}(Z_{n}+zZ_{1},Z_{n-1}) \\ =-\frac{\partial}{\partial Z^{J}_{n-1}}\int_{\C}\frac{\d s}{s^{3}} \bar{\delta}^{4|8}(Z_{n}+zZ_{1}+sZ_{n-1}) = \frac{\partial}{\partial Z_{n}^{J}}\int_{\C}\frac{\d s}{s^2}\bar{\delta}^{4|8}(Z_{n}+zZ_{1}+sZ_{n-1}), \end{multline*} so we can rewrite \eqref{homr1} as \be{homr2} -\sum I^{IJ}\int \frac{\d z}{z}\frac{\d s}{s^{2}}\frac{\partial}{\partial Z_{n-1}^{I}}\cM(1,\ldots, n-1)\frac{\partial}{\partial Z_{n}^{J}}\bar{\delta}^{4|8}(Z_{n}+zZ_{1}+sZ_{n-1}). \ee But at this point we can solve the recursion completely by noting that \begin{equation*} \int_{\C^{2}}\frac{\d z}{z}\frac{\d s}{s^{2}}\bar{\delta}^{4|8}(Z_{n}+zZ_{1}+sZ_{n-1})=\bar{\delta}^{2|8}_{0,1,3}(Z_{1},Z_{n-1},Z_{n}). \end{equation*} Taking into account the sum over BCFW decompositions, we find that the full homogeneous term in the $\cN=8$ recursion is: \begin{equation}\label{hgs-term} \cM_{n,0}=\sum_{i\neq 1,n}I^{IJ}\frac{\partial}{\partial Z_{n}^{I}}\bar{\delta}^{2|8}_{0,1,3}(Z_{1},Z_{i},Z_{n})\frac{\partial}{\partial Z_{i}^{J}} \cM_{n-1,0}(1,\ldots, i,\ldots, n-1). 
\end{equation} The final step is to show that this can be re-expressed in a manner which immediately allows us to obtain the integral kernel we are after for any value of $\Lambda$. \begin{propn}\label{CCBCFW} The $n$-point MHV amplitude for $\Lambda\neq0$ is given by BCFW recursion on twistor space as \begin{multline}\label{grav-MHV-BCFW} \cM_{n,0}(Z_1,\ldots, Z_n;\Lambda)=\int\d\mu\;\left(\prod_{i=4}^n \frac{[\p_{i}, \p_{i-1}]\; \cH_{i} \D\sigma_i }{(i\; i-1)}\right)\frac{\cH_3 \D\sigma_3 \; \cH_2 \D\sigma_2 \; \cH_1\tau_1}{(32)^2(21)^2(1n)^2}\\ +\mathrm{Perms}(2,\ldots, n-1), \end{multline} where $\cH_i=\bar\delta^{3|8}_{2,2}(Z_i,Z(\sigma_i))$ and the terms in the product are ordered with increasing $i$ to the left. \end{propn} \proof It suffices to demonstrate that the first step of the recursion obeys this pattern, after which \eqref{grav-MHV-BCFW} follows inductively. At four points, \eqref{hgs-term} gives \begin{multline*} \cM_{4,0}=I^{IJ}\frac{\partial}{\partial Z_{4}^{I}}\bar{\delta}^{2|8}_{0,1,3}(Z_{1},Z_{3},Z_{4})\frac{\partial}{\partial Z_{3}^{J}} \cM_{3,0}(1,2,3) \\ =\int\frac{\d s}{s^2}\frac{\d t}{t}I^{IJ}\partial_{4\;I}\bar{\delta}^{4|8}(Z_{4}+sZ_{3}+tZ_{1})\;\partial_{J\;3}\frac{\cH_3 \D\sigma_3 \; \cH_2 \D\sigma_2 \; \cH_1\tau_1}{(32)^2(21)^2(13)^2}, \end{multline*} using \eqref{MHV3ptN=8}. Now define $q\sigma_{4}=s\sigma_{3}+t\sigma_{1}$; this implies the following relations: \begin{equation*} s=q\frac{(14)}{(13)}, \qquad t=q\frac{(34)}{(31)}, \qquad qZ(\sigma_{4})=sZ(\sigma_{3})+tZ(\sigma_{1}). \end{equation*} From this, we obtain \begin{equation*} \int \d s\;\d t \frac{(13)}{q^{3}(43)}I^{IJ}\partial_{4\;I}\bar{\delta}^{4|8}(Z_{4}+qZ(\sigma_{4}))\partial_{J\;3}\frac{\cH_3 \D\sigma_3 \; \cH_2 \D\sigma_2 \; \cH_1\tau_1}{(32)^2(21)^2(14)^2}, \end{equation*} using the support of the delta-functions in play.
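For clarity, the first two of these relations follow by contracting the definition $q\sigma_{4}=s\sigma_{3}+t\sigma_{1}$ with $\sigma_{1}$ and $\sigma_{3}$ in turn:
\begin{equation*}
q(14)=s(13)+t(11)=s(13)\,, \qquad q(34)=s(33)+t(31)=t(31)\,,
\end{equation*}
while the third is immediate from the linearity of $Z(\sigma)$ in $\sigma$.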
Now, using the relations above, we have \begin{multline*} (31)\d s\wedge\d t=\frac{(31)}{2}\left(\d s\wedge\d t-\d t\wedge\d s\right)=\frac{(31)}{2}\left(\frac{(13)}{(14)}\d s\wedge\frac{s}{q}\d t-\frac{(31)}{(34)}\d t\wedge \frac{t}{q}\d s\right) \\ =\frac{(31)}{2}\d q\wedge \left(\frac{s}{q}\d t -\frac{t}{q}\d s\right)=\frac{\d q}{2}\wedge\D\sigma_{4}, \end{multline*} neglecting terms which will wedge to zero in the overall expression. Finally, this leaves us with: \begin{multline*} \int \frac{\D\sigma_{4}}{(43)}I^{IJ}\frac{\d q}{q^3}\partial_{I\;4}\bar{\delta}^{4|8}(Z_{4}+qZ(\sigma_{4}))\partial_{J\;3}\frac{\cH_3 \D\sigma_3 \; \cH_2 \D\sigma_2 \; \cH_1\tau_1}{(32)^2(21)^2(14)^2} \\ =\int \frac{\D\sigma_{4}}{(43)}I^{IJ}\partial_{I\;4}\cH_{4}\partial_{J\;3}\frac{\cH_3 \D\sigma_3 \; \cH_2 \D\sigma_2 \; \cH_1\tau_1}{(32)^2(21)^2(14)^2}, \end{multline*} as required. $\Box$ \medskip Of course, we should still be free to take the $\Lambda\rightarrow 0$ limit of this expression, in which case it should be comparable to Hodges' formula derived in the previous section. To do this, we must multiply by generic wave-functions and integrate out the $Z_i$; but at this point \eqref{grav-MHV-BCFW} seems to entail a sum over chains rather than trees. We can reconcile these two pictures by invoking the arguments of \cite{Drummond:2009ge} to use a cyclically ordered version of the recursion in which we take just the one term in \eqref{hgs-term} and then sum the final result over all permutations of $2$ to $n-1$. More explicitly, one can perform a derivation similar to the one given here, but using \emph{dual twistor} wavefunctions. In the $\Lambda\rightarrow 0$ limit, this reproduces the recursion initially derived by Hodges from $\cN=7$ supersymmetry \cite{Hodges:2011wm}. This indicates that \eqref{grav-MHV-BCFW} indeed has the correct flat-space limit. It would be interesting to see if one can prove the equivalence between this formula and \eqref{MHVamp2}, though. 
\section{Open Questions and Future Directions} \label{Chapter7} In this review, we have explored many facets of the twistor action approach to gauge theory and gravity. For Yang-Mills theory, we have seen that the twistor action manifests the MHV formalism, computes the tree-level S-matrix, and even allows for some progress in the study of loop amplitudes. Furthermore, we showed that local operators and null polygonal Wilson loops in gauge theory have a natural expression in twistor space; this led to proofs of several interesting correspondences to all orders in perturbation theory (at the level of the integrand). While the situation is a bit more complicated for gravity, we were still able to make significant progress by utilizing the embedding of Einstein gravity inside conformal gravity on a de Sitter background. This enabled us to derive a formula for the MHV amplitude in the presence of a cosmological constant directly from the twistor action as well as using BCFW recursion. We also arrived at a conjecture for the twistor action of Einstein gravity, which is supported by a correct self-dual reduction, the appropriate MHV amplitudes, and gauge invariance. In many ways, these results raise more questions than they answer: Can general progress be made at loop-level in twistor theory? Do other gauge theories, in other dimensions, have a twistor action description? Is there a sensible MHV formalism for gravity? What applications--if any--do these results have in pure mathematics? We conclude this review with a brief overview of some open problems and potential future directions for research in this field. This is by no means an exhaustive list, and it is heavily biased by the opinions and interests of the author. \subsection{Gauge Theory} The gauge theory under consideration in this review was maximally supersymmetric Yang-Mills theory; however, a twistor action description exists for \emph{all} Yang-Mills theories in four dimensions.
For $\cN=0,1,2$, these are simply provided by more subtle versions of the $\cN=4$ action studied in Chapters \ref{Chapter3} and \ref{Chapter4} \cite{Mason:2005zm, Boels:2006ir}. Even the $\cN=3$ theory (which is really equivalent to $\cN=4$) has a description in terms of ambitwistor space \cite{Mason:2005kn}. Hence, the most exciting questions with respect to gauge theory are not about \emph{how} to provide perturbative descriptions of Yang-Mills theories, but rather about what can be achieved with the descriptions we have, and about which other theories we can hope to study. \subsubsection*{\textit{Beyond tree-level and the planar limit}} The treatment of Chapter \ref{Chapter3} was most complete at tree-level. For loop amplitudes, we saw that a generic amplitude on twistor space will require some sort of regulation which accounts for the `0/0' behavior of the shifted R-invariants. While the twistor action can be defined for the Coulomb branch of $\cN=4$ SYM, and even leads to the massive MHV formalism (see Appendix \ref{Appendix2}), it remains to be seen how--or if--this gives the proper regulating behavior on twistor space. A similar issue arises in the context of the twistor Wilson loop for the planar sector of $\cN=4$ SYM: here, one can obtain the loop integrand but it is unclear how this can be evaluated correctly. In the simplest case, the question becomes: how do we evaluate the `Kermit' integral for the one-loop MHV amplitude in twistor space? Recently, Lipstein and Mason demonstrated that the Kermit integral can immediately be cast in $\d\log$-form in twistor space \cite{Lipstein:2012vs}, and, using a suitable choice of contours, be properly integrated \cite{Lipstein:2013}. This throws open the door to obtaining loop \emph{amplitudes} rather than just integrands using the twistor Wilson loop.
Furthermore, we saw in Chapter \ref{Chapter3} that a careful treatment of the integration contour was required to obtain the correct behavior of the two-point vertex for the Feynman rules of the twistor action itself. Applying the methodology of \cite{Lipstein:2013} to the twistor action could lead to a method for isolating the correct IR behavior of loop amplitudes, and therefore extend the techniques reviewed here to generic loop amplitudes. Moreover, while the twistor Wilson loop only describes amplitudes in the planar limit of $\cN=4$ SYM (i.e., the scattering amplitude/Wilson loop duality only holds in the planar limit), there is no such restriction on studying the amplitudes of the twistor action itself. Integrability techniques have always provided a substantial amount of power in the planar limit, and it appears that they can successfully be used to determine the planar S-matrix for \emph{all} values of the coupling \cite{Basso:2013vsa, Basso:2013aha}! Hence, it seems natural for us to study twistor theory outside of the planar limit, where it may lead to new insights not accessible to the powerful integrability methods. \subsubsection*{\textit{The Grassmannian approach}} Pioneered by Arkani-Hamed and various collaborators \cite{ArkaniHamed:2009si, ArkaniHamed:2009dn, ArkaniHamed:2010kv}, the Grassmannian approach to scattering amplitudes aims (very roughly speaking) to associate an $n$-particle N$^{k}$MHV amplitude with a top-degree form on the Grassmannian $\mathrm{Gr}(k+2,n)$. The power of this method lies primarily in its ability to manifest all the symmetries of the scattering amplitudes it is describing: in particular, both superconformal and \emph{dual} superconformal invariance can be manifested in the Grassmannian \cite{ArkaniHamed:2009vw}. This Grassmannian formalism provides a description for the integrand which is manifestly in $\d\log$-form.
Additionally, there is an interesting correspondence between the Grassmannian formulae and bipartite graphs on planar Riemann surfaces; these graphs have become known as `on-shell diagrams,' and encode properties of scattering amplitudes such as BCFW factorization \cite{ArkaniHamed:2012nw}. On-shell diagrams (and their generalizations) can also be used to represent classes of $\cN=1,2$ quiver gauge theories, where operations on the graphs get reinterpreted as dualities and limits of the field theory (e.g., Seiberg duality or Higgsing) \cite{Franco:2012mm, Xie:2012mr}. However, the Grassmannian approach lacks the coherent organizing principle we usually associate with a `theory' in physics. In particular, the entire approach is dictated by a set of symmetries which are used to specify the three-point amplitudes; everything else follows from BCFW recursion. But where did these symmetries come from? Presumably, they were inherited from a physical theory (defined in terms of a Lagrangian, or some other organizing principle) which is lurking off-stage. In other words, the Grassmannian approach provides an efficient way for building scattering amplitudes and a representation that manifests many of their symmetries; it is \emph{not} a physical theory, though. Hence, finding a theory which produces the Grassmannian formalism (or from which the formalism follows naturally) seems like an important goal, and twistor actions may provide the answer. These obviously constitute an organizing principle, and throughout this review we have seen how they manifest symmetries which are obscured on space-time. Furthermore, it has already been shown explicitly at one loop that the twistor Wilson loop provides the sought-after $\d\log$-form of the planar integrand \cite{Lipstein:2012vs}. If this can be extended to an algorithm for all loops, then it should be clear that the twistor action can deliver the same sort of representations as the Grassmannian.
Indeed, there may even be a precise map between the two formulations which matches the MHV formalism of the twistor action to the BCFW foundations of the Grassmannian approach. \subsubsection*{\textit{Instantons and non-perturbative data}} Everything we have studied here lies within the realm of perturbative quantum field theory. In a sense, this is dictated by the fact that our twistor actions (for gauge theory or conformal gravity) are based on a Chalmers-Siegel expansion for the space-time theory. For Yang-Mills theory, the Chalmers-Siegel Lagrangian differs from the initial Yang-Mills Lagrangian by the topological term $\int \tr(F\wedge F)$, which does not affect the perturbation theory. Unfortunately, this indicates that it will not be possible to compute non-perturbative quantities (such as a partition function) using the twistor action formulation as we currently understand it. Nevertheless, there are hints that twistor theory could have something to say about gauge theoretic invariants. It has been known for some time that Donaldson theory on 4-manifolds can be recast in terms of `instanton counting' invariants of a topologically twisted $\cN=2$ Yang-Mills theory \cite{Witten:1988ze}. On $\R^{4}$ or $S^4$, there is not sufficient topology to get interesting invariants in Donaldson-Witten theory; however, one can still define instanton counting invariants of the gauge theory itself by working equivariantly with respect to a subgroup of the Lorentz group. This leads to Nekrasov's partition function, which can be thought of as counting instantons on the four-dimensional $\Omega$-background \cite{Nekrasov:2002qd}. Twistor methods have always been well suited to describing instanton calculations, and the Coulomb branch of these gauge theories can also be described on twistor space (see Appendix \ref{Appendix2}).
It would be fascinating if the Ward correspondence could be adapted to this equivariant setting, perhaps leading to a twistorial method for computing Nekrasov's partition function and hence the instanton prepotential of Seiberg-Witten theory. \subsubsection*{\textit{Other gauge theories}} An obvious question to ask about the twistor action program is whether it extends to other gauge theories in other dimensions. In an arbitrary number of space-time dimensions, one can define twistors to be the pure spinors of the conformal group (in four dimensions, the purity condition is trivial) and many results such as the Penrose transform can be proven, albeit often in more complicated forms. Particularly interesting candidates are ABJM theory (an $\cN=6$ Chern-Simons theory with dual superconformal symmetry) or $\cN=8$ SYM in three dimensions, and the elusive $\cN=(0,2)$ theory in six dimensions. Momentum twistor techniques have already been employed to study the scattering amplitudes of the three-dimensional theories (e.g., \cite{Huang:2010rn, Lipstein:2012kd}), and there has been some progress towards establishing the analogue of a Ward correspondence for the six-dimensional $\cN=(0,2)$ theory \cite{Mason:2011nw, Saemann:2011nb}. However, definitions of a twistor action for any of these theories in the general non-abelian regime remain a long way off. Finding such twistor actions could prove an important breakthrough in understanding these theories more generally. As we have seen throughout, this could lead to efficient calculational tools like the MHV formalism, and in the case of the $\cN=(0,2)$ theory there is no known Lagrangian description at all. For $\cN=8$ SYM, one might hope to proceed by `dimensional reduction' of the twistor action for $\cN=4$ SYM.
This could be accomplished by imposing an axial gauge defined by a time-like vector representing the dimensional reduction in space-time, although subtleties involving gauge invariance of any resulting MHV formalism may arise. For the six-dimensional theory, one requires a twistorial treatment of non-abelian, self-dual gerbes. While twistor actions have been found for the linear fields of $\cN=(0,2)$, it remains to be seen whether this construction can be extended to the fully non-linear, non-abelian regime \cite{Mason:2012va}. It appears that a six-dimensional superconformal theory containing a non-abelian tensor multiplet can be formulated at the level of equations of motion in twistor space, using the Penrose-Ward transform \cite{Saemann:2012uq, Saemann:2013pca}. While this construction has yet to pass various tests associated with the $\cN=(0,2)$ theory (e.g., reduction to super-Yang-Mills in five dimensions), and its precise connection with M-theory is still unclear, it is nevertheless an exciting arena of research--one in which twistor actions may play an important clarifying role. \subsubsection*{\textit{Holomorphic linking and elliptic curves}} From the twistor Wilson loop of $\cN=4$ SYM, we know that scattering amplitudes can be interpreted as \emph{holomorphic linking} between irreducible components of a nodal elliptic curve. This is the natural generalization of the Gauss linking number to the holomorphic category. It may be possible to use holomorphic linking to provide an alternative definition for scattering amplitudes in terms of abstract objects and operations in algebraic geometry. For an abelian holomorphic Chern-Simons theory, holomorphic linking can be understood entirely in terms of homological algebra \cite{Frenkel:2005qk}. 
Translating these ideas into physical language and then generalizing them to the non-abelian setting should define a set of `homological Feynman rules' which allows us to interpret scattering amplitudes as abstract (but well-defined) objects in algebraic geometry. On a related note, one can ask: is it possible to define a holomorphic Wilson loop on \emph{arbitrary} elliptic curves which has the twistor Wilson loop as its limit when the curve degenerates? This is more difficult than it sounds, because the moduli space of bundles on an elliptic curve is non-trivial (recall that on each $\P^{1}$ component of the nodal curve, we could apply the Birkhoff-Grothendieck theorem). It may be possible to proceed by working with bundles `close' to the trivial bundle and applying the results of Atiyah. One could then consider the expectation values of these Wilson loops with respect to a holomorphic Chern-Simons theory, and hope to define holomorphic analogues of knot invariants \cite{Witten:1988hf}. \subsubsection*{\textit{Hopf algebras}} Hopf algebras are associative, coassociative bi-algebras equipped with a certain structure known as an `antipode.' These algebras play a fascinating role in the description of renormalization in gauge theories (cf.\ \cite{Connes:1999yr, Brown:2011pj}), but they are also lurking behind many of the loop-level structures in UV finite theories such as $\cN=4$ SYM. For instance, the multiple zeta values and polylogarithms which appear in loop amplitudes have natural Hopf algebras associated with them \cite{Goncharov:2002}. There is a natural structure in $\cN=4$ SYM which also seems highly amenable to a Hopf algebra description: the BCFW recursion relation. It may be possible to understand BCFW recursion as a Hopf algebra structure, with factorization corresponding to a coproduct and the 3-point seed amplitudes corresponding to the primitive elements.
This could in turn lead to a diagrammatic mechanism for computing information about the transcendental functions making up scattering amplitudes. If it exists, such a structure will persist not just for $\cN=4$ SYM but indeed for \emph{any} gauge theory which obeys BCFW recursion. \subsection{Gravity} Much of what we were able to say about gravity in this review was accomplished in a rather roundabout fashion, by working via conformal gravity. Hence, the most obvious open problem is to either prove the validity of the Einstein twistor action \cite{Adamo:2013tja}, or else find another proposal that works. Skinner's discovery of a twistor-string theory which describes the flat-space amplitudes of $\cN=8$ supergravity certainly indicates that a correct twistor action should exist, although the crucial presence of worldsheet supersymmetry in this theory may prove an obstacle. With such a clear goal in mind, the remainder of this section is devoted to other interesting directions that research on twistor theory and gravity could take. \subsubsection*{\textit{Twistor-string theory}} Skinner's twistor-string theory \cite{Skinner:2013xp} is anomaly free with $\cN=8$ supersymmetry, explicitly includes the conformal symmetry-breaking infinity twistor, and produces the tree-level S-matrix of $\cN=8$ SUGRA on a flat background \cite{Cachazo:2012kg}. While it seems the worldsheet theory is well-defined for $\Lambda\neq 0$, it is not known how to compute gauge-invariant correlators in this regime. The problem is analogous to proving $\xi$-independence as encountered in Section \ref{Chapter6}: when one attempts to compute a worldsheet correlation function in the twistor-string, the answer is not independent of reference spinors or the location of picture-changing operators.
This indicates that either something is missing from our description of the twistor-string (e.g., a new class of vertex operators which do not contribute in the $\Lambda\rightarrow 0$ limit), or else that the twistor-string fails to describe gravity in the presence of a cosmological constant. In this regard, the twistor action approach (even via conformal gravity) appears to have an edge on the twistor-string as we currently understand it. In any case, the string theory is anomaly free for all values of the worldsheet genus. This indicates that one should, in principle, be able to compute loop-level amplitudes for $\cN=8$ supergravity. However, there are several features of the twistor-string which can be treated na\"ively at genus zero but which will become more complicated on non-rational worldsheets (see section 5 of \cite{Skinner:2013xp} for a good overview of these issues). Once again, there may be additional vertex operators which need to be taken into account, so determining the full spectrum of such operators is clearly important both for computing loops and for working with a cosmological constant. Understanding how to overcome either of these issues will represent a breakthrough in our ability to apply twistor methods to a quantum theory of gravity. \subsubsection*{\textit{Beyond MHV}} A major lesson from Sections \ref{Chapter5} and \ref{Chapter6} is that the conformal gravity twistor action is not so dissimilar from the twistor action for $\cN=4$ SYM. Both have $\dbar$ as their kinetic operator, and both have vertices which correspond to MHV amplitudes. In the gauge theory setting, we were able to extend these notions off-shell to derive the MHV formalism. For Einstein gravity, the traditional definition of the MHV formalism by a Risager shift fails \cite{BjerrumBohr:2005jr, Bianchi:2008pu}, but it is easy to see that this is not the unique definition for such a formalism. 
This raises the intriguing possibility that an MHV formalism for Einstein gravity could be defined by extending the Feynman rules for the conformal gravity twistor action off-shell and then restricting to Einstein states for the external legs as proposed in \cite{Adamo:2013tja}. It may also be possible to approach Einstein gravity head-on, either by working with Skinner's twistor-string or by developing a twistor action for general relativity directly. It is worth noting that an MHV-like expansion for the Einstein sector has been developed and checked numerically \cite{Penante:2012wd} by `relaxing delta-functions' in a Grassmannian representation of the S-matrix \cite{He:2012er, Cachazo:2012pz}, but it remains to be seen if this can be translated into a compact prescription or checked analytically. \subsubsection*{\textit{Graviton non-gaussianities}} An important goal for future research is to translate formulae for `scattering amplitudes' with cosmological constant (such as \eqref{MHVamp2} or \eqref{grav-MHV-BCFW}) into momentum expressions which make physical sense. The methods reviewed here are directed towards obtaining answers which limit to scattering amplitudes as $\Lambda\rightarrow 0$; however, for computations relevant to cosmological observables in a de Sitter background one must utilize the `in-in formalism' (cf.\ \cite{Maldacena:2002vr}). In this picture one works on the Poincar\'e patch of de Sitter space, and uses Bunch-Davies vacua for the wavefunctions. These states are then integrated from the horizon to the operator insertion point, and then back to the horizon rather than out to $\scri$. The computation of such graviton correlators with a cosmological constant is of substantial interest in both cosmology \cite{Maldacena:2002vr, Maldacena:2011nz} and the AdS/CFT correspondence \cite{Raju:2012zr}. 
In particular, from the cosmological point of view, the three-point correlators represent the first deviation from the Gaussian spectrum of background fluctuations predicted by single field inflationary models of the universe. If we could translate our three-point formulae into this framework, it would demonstrate that twistor methods can be applied to these issues. Furthermore, the current state-of-the-art for these computations in the AdS/CFT setting is $n=4$ \cite{Raju:2012zs}; equations \eqref{MHVamp2} or \eqref{grav-MHV-BCFW} should give the MHV correlator for all $n$ though! Of course, translating this expression into something that will prove useful from the cosmology or AdS/CFT perspectives remains a non-trivial task. Choosing a contour in $\CM_{n,1}$ corresponding to the Poincar\'{e} slicing is easy; one just needs to line up with the usual contour integral (cf.\ \cite{Maldacena:2002vr}). More difficult is finding twistor representatives for the Bunch-Davies vacua (i.e., the scattering states) and appropriately fixing the $\GL(2,\C)$ freedom in a way that respects the de Sitter group. \acknowledgments This review is adapted from my D.Phil. thesis at the University of Oxford. As such, thanks must go first and foremost to Lionel Mason for being an excellent supervisor and collaborator, as well as to Mat Bullimore and David Skinner for collaboration on various projects and numerous interesting conversations over the years. I have also benefited at various times from the insight, interest, and encouragement of Fernando Alday, Philip Candelas, Rob Clancy, Michael Green, Michael Gr\"ochenig, Keith Hannabuss, Andrew Hodges, Frances Kirwan, Arthur Lipstein, Xenia de la Ossa, Roger Penrose, Ron Reid-Edwards, Markus R\"oser, James Sparks, George Sparling, Arkady Tseytlin, and Pierre Vanhove. This work was supported primarily by the National Science Foundation (Graduate Research Fellowship 1038995), as well as the Clarendon Scholarship and Balliol College.
\title{Normal subgroups and relative centers of linearly reductive quantum groups} \author{Alexandru Chirvasitu} \begin{document} \date{} \newcommand{\Addresses}{ \bigskip \footnotesize \textsc{Department of Mathematics, University at Buffalo, Buffalo, NY 14260-2900, USA}\par\nopagebreak \textit{E-mail address}: \texttt{achirvas@buffalo.edu} } \maketitle \begin{abstract} We prove a number of structural and representation-theoretic results on linearly reductive quantum groups, i.e. objects dual to cosemisimple Hopf algebras: (a) a closed normal quantum subgroup is automatically linearly reductive if its squared antipode leaves invariant each simple subcoalgebra of the underlying Hopf algebra; (b) for a normal embedding $\mathbb{H}\trianglelefteq \mathbb{G}$ there is a Clifford-style correspondence between two equivalence relations on irreducible $\mathbb{G}$- and, respectively, $\mathbb{H}$-representations; and (c) given an embedding $\mathbb{H}\le \mathbb{G}$ of linearly reductive quantum groups the Pontryagin dual of the relative center $Z(\mathbb{G})\cap \mathbb{H}$ can be described by generators and relations, with one generator $g_V$ for each irreducible $\mathbb{G}$-representation $V$ and one relation $g_U=g_Vg_W$ whenever $U$ and $V\otimes W$ are not disjoint over $\mathbb{H}$. This latter center-reconstruction result recovers, upon setting $\mathbb{H}=\mathbb{G}$, both M\"uger's compact-group analogue and the author's earlier quantum-group version thereof. 
\end{abstract} \noindent {\em Key words: quantum group; cosemisimple Hopf algebra; comodule; cotensor; center; linearly reductive; antipode} \vspace{.5cm} \noindent{MSC 2020: 16T05; 20G42; 16T20} \section*{Introduction} The quantum groups in the title are as in \cite[\S 1.2]{pw}: objects $\bG$ dual to corresponding Hopf algebras $\cO(\bG)$, with the latter regarded as the algebra of regular functions on (the otherwise non-existent) linear algebraic quantum group $\bG$. Borrowing standard linear-algebraic-group terminology (e.g. \cite[Chapter 1, \S 1, Definition 1.4]{fkm}), the linear reductivity condition then simply means that the Hopf algebra $\cO(\bG)$ is cosemisimple. The unifying thread through the material below is the concept of a (closed) normal quantum subgroup. In the present non-commutative setting normality can be defined in a number of ways that are frequently equivalent \cite[Theorem 2.7]{wnorm}. We settle here on the concept introduced in \cite[\S 1.5]{pw} (and recalled in \Cref{def:norm}): a quotient Hopf algebra \begin{equation*} \cO(\bG)\to \cO(\bH) \end{equation*} dual to a closed quantum subgroup $\bH\le \bG$ is normal if that quotient is an $\cO(\bG)$-comodule under both adjoint coactions $\cO(\bG)\to \cO(\bG)^{\otimes 2}$: \begin{equation*} x\mapsto x_2\otimes S(x_1)x_3\quad\text{and}\quad x\mapsto x_1S(x_3) \otimes x_2 \end{equation*} One piece of motivation for the material is the observation (cf. \Cref{re:clsnorm}) that classically, normal closed subgroups of linearly reductive algebraic groups are again linearly reductive. The non-commutative version of this remark, appearing as \Cref{th:isred} below, can be phrased (in somewhat weakened but briefer form) as follows. \begin{theorem} A normal quantum subgroup $\bH\trianglelefteq \bG$ of a linearly reductive quantum group is again linearly reductive, provided the squared antipode of $\cO(\bH)$ leaves invariant all simple subcoalgebras of the latter. 
\end{theorem} In particular, this recovers the classical version: in that case the squared antipode is trivial. Keeping with the theme of what is (or isn't) afforded by normality, another motivating strand is that of {\it Clifford theory} (so named for \cite{clif}, where the relevant machinery was introduced). This is a suite of results relating the irreducible representations of a (finite, compact, etc.) group and those of a normal subgroup via induction/restriction functors; the reader can find a brief illuminating summary in \cite[\S 2]{cst} (in the context of finite groups). Hopf-algebra analogues (both purely algebraic and analytic) abound. Not coming close to doing the literature justice, we will point to a selection: \cite{with-clif-hopf,with-clif-alg,bur-clif,sch-rep,oz}, say, and the references therein. \cite[\S 5, especially Theorem 5.4]{cks} provides a version for {\it compact quantum groups} \cite{wor-cqg}, which are (dual to) cosemisimple complex Hopf $*$-algebras with positive Haar integral (the {\it CQG algebras} of \cite[Definition 2.2]{dk}); they thus fit within the confines of the present paper. The following result paraphrases and summarizes \Cref{th.simh}, \Cref{th.simb} and \Cref{pr.2rels}. To make sense of it: \begin{itemize} \item In the language of \Cref{se:cliff}, the surjection $\cO(\bG)\to \cO(\bH)$ of \Cref{th:clifsum} is $H\to B$. \item As explained in \Cref{se.prel}, for a quantum group $\bG$ the symbol $\widehat{\bG}$ denotes its category of irreducible representations (i.e. simple right $\cO(\bG)$-comodules). \item $\mathrm{Ind}^{\bG}_{\bH}$ and $\mathrm{Res}^{\bG}_{\bH}$ denote the induction and restriction functors respectively, as discussed in \Cref{subse:resind}. 
\end{itemize} \begin{theorem}\label{th:clifsum} Let $\bH\trianglelefteq\bG$ be a normal embedding of linearly reductive quantum groups, and consider the binary relation $\sim$ on $\widehat{\bG}\times \widehat{\bH}$ defined by \begin{equation*} \widehat{\bG}\ni V\sim W\in \widehat{\bH}\Leftrightarrow \mathrm{hom}_{\bH}\left(\mathrm{Res}^{\bG}_{\bH}V,W\right)\ne 0\Leftrightarrow \mathrm{hom}_{\bG}\left(V,\mathrm{Ind}^{\bG}_{\bH}W\right)\ne 0. \end{equation*} The following statements hold. \begin{enumerate}[(a)] \item The left-hand slices \begin{equation*} \mathrm{slice}_W:=\{V\in \widehat{\bG}\ |\ V\sim W\},\ W\in \widehat{\bH} \end{equation*} of $\sim$ are the classes of an equivalence relation $\sim_{\bG}$, given by \begin{equation*} V\sim_{\bG} V'\Leftrightarrow \mathrm{Res}^{\bG}_{\bH}V\text{ and }\mathrm{Res}^{\bG}_{\bH}V'\text{ have the same simple constituents}. \end{equation*} \item The right-hand slices \begin{equation*} {}_V\mathrm{slice}:=\{W\in \widehat{\bH}\ |\ V\sim W\},\ V\in \widehat{\bG} \end{equation*} are the finite classes of an equivalence relation. \end{enumerate} \end{theorem} A third branch of the present discussion has to do with the {\it relative centers} of the title: having defined the center $Z(\bG)$ of a linearly reductive quantum group (\Cref{def:cent}), and given a closed linearly reductive quantum subgroup $\bH\le \bG$, one can then make sense of the relative center $Z(\bG,\bH)$ as the {\it intersection} $\bH\cap Z(\bG)$; see \Cref{def:relcent}. Though not immediately obvious, it follows from \cite[\S 3]{chk} (cited more precisely in the text below) that for embeddings $\bH,\bK\le \bG$ of linearly reductive quantum groups, operations such as the intersection $\bH\cap \bK$ and the quantum subgroup $\bH\bK$ generated by the two are well defined and behave as usual when $\bK$, say, is normal (hence the relevance of normality, again). 
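To ground \Cref{th:clifsum}, it may help to record its classical shadow in the smallest non-trivial case; the following is standard finite-group Clifford theory over $\bC$, included purely for orientation. Take $\bG=S_3$ and $\bH=A_3\cong \mathbb{Z}/3$, so that \begin{equation*} \widehat{\bG}=\{\mathrm{triv},\ \mathrm{sgn},\ V\},\quad \dim V=2 \qquad\text{and}\qquad \widehat{\bH}=\{1,\ \omega,\ \omega^2\} \end{equation*} with $\omega$ a primitive character of $\mathbb{Z}/3$. Restriction gives \begin{equation*} \mathrm{Res}^{\bG}_{\bH}\mathrm{triv} = \mathrm{Res}^{\bG}_{\bH}\mathrm{sgn} = 1 \qquad\text{and}\qquad \mathrm{Res}^{\bG}_{\bH}V = \omega\oplus\omega^2, \end{equation*} so the left-hand slices are $\mathrm{slice}_1=\{\mathrm{triv},\mathrm{sgn}\}$ and $\mathrm{slice}_{\omega}=\mathrm{slice}_{\omega^2}=\{V\}$, precisely the classes of $\sim_{\bG}$, while the right-hand slices ${}_{\mathrm{triv}}\mathrm{slice}={}_{\mathrm{sgn}}\mathrm{slice}=\{1\}$ and ${}_{V}\mathrm{slice}=\{\omega,\omega^2\}$ are the (finite) $S_3$-conjugation orbits on $\widehat{\bH}$, as classical Clifford theory predicts. 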
The initial spark of motivation for \Cref{se:rel} was provided by the main result of \cite{mug} (Theorem 3.1 therein), reconstructing the center of a compact group $\bG$ as a universal grading group for the category of $\bG$-representations. This generalizes to linearly reductive {\it quantum} groups \cite[Proposition 2.9]{chi-coc}, and, as it turns out, goes through in the relative setting; per \Cref{th:caniso}: \begin{theorem}\label{th:canisopre} Let $\bH\le \bG$ be an embedding of linearly reductive quantum groups, and define the relative chain group $C(\bG,\bH)$ by generators $g_V$, $V\in \widehat{\bG}$ and relations $g_U=g_Vg_W$ whenever $U$ and $V\otimes W$ have common simple constituents over $\bH$. Then, the map \begin{equation*} C(\bG,\bH)\ni g_V\mapsto W\in \widehat{Z(\bG,\bH)}\quad\text{where}\quad \mathrm{Res}^{\bG}_{Z(\bG,\bH)}V\cong \text{sum of copies of }W \end{equation*} is a group isomorphism. \end{theorem} Or, in words: mapping $g_V$ to the ``central character'' of $V$ restricted to $Z(\bG,\bH)$ gives an isomorphism $C(\bG,\bH)\cong \widehat{Z(\bG,\bH)}$. The ``plain'' (non-relative) version \cite[Proposition 2.9]{chi-coc} (and hence also its classical compact-group counterpart \cite[Theorem 3.1]{mug}) is recovered by setting $\bH=\bG$. Although strictly speaking outside the scope of the present paper, some further remarks, suggestive of an intriguing connection to semisimple-Lie-group representation theory, will perhaps serve to further motivate the relative chain groups discussed in \Cref{th:canisopre}. 
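As a quick sanity check on \Cref{th:canisopre}, consider the non-relative compact-group case $\bH=\bG=\mathrm{SU}(2)$ (only an illustration of the situation covered by \cite[Theorem 3.1]{mug}; the details below are standard Clebsch-Gordan bookkeeping). Here $\widehat{\bG}=\{V_n\ |\ n\ge 0\}$ with $\dim V_n=n+1$ and \begin{equation*} V_n\otimes V_m\cong V_{|n-m|}\oplus V_{|n-m|+2}\oplus\cdots\oplus V_{n+m}. \end{equation*} The containments $V_{n+1}\le V_1\otimes V_n$ and $V_0\le V_1\otimes V_1$ force $g_{V_n}=g_{V_1}^n$ and $g_{V_1}^2=g_{V_0}=e$ in $C(\bG,\bG)$; since every relation $g_U=g_{V_n}g_{V_m}$ preserves the parity of indices, $g_{V_n}\mapsto (-1)^n$ is a well-defined surjection onto $\mathbb{Z}/2$ and \begin{equation*} C(\bG,\bG)\cong \mathbb{Z}/2\cong \widehat{Z(\bG)},\qquad Z(\bG)=\{\pm 1\}, \end{equation*} with the isomorphism sending $g_{V_n}$ to the central character of $V_n$. 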
\Cref{def:cg} was inspired by the study, in \cite[\S 4]{chi-cc}, of plain (non-relative) chain groups of connected, semisimple Lie groups $\bG$ with finite center; specifically, the problem of whether \begin{equation}\label{eq:hmdet} \mathrm{hom}_{\bH}(\sigma'',\sigma\otimes\sigma')\ne 0,\quad \sigma,\ \sigma',\ \sigma''\in \widehat{\bM} \end{equation} holds for a compact-group embedding $\bH\le \bM$, which arises naturally while studying the direct-integral decomposition of a tensor product of two {\it principal-series} representations of such a Lie group $\bG$. To summarize, consider the setup of \cite{mart-dec} (to which we also refer, along with its own references, for background on the following). \begin{itemize} \item a connected, semisimple Lie group $\bG$ with finite center, with its {\it Iwasawa decomposition} \begin{equation*} \bG = \bK\bA\bN \end{equation*} ($\bK\le \bG$ maximal compact, $\bA$ abelian and simply-connected, $\bN$ nilpotent and simply-connected); \item the corresponding decomposition \begin{equation*} \bP=\bM\bA\bN \end{equation*} of a minimal parabolic subgroup, with $\bM\le \bK$ commuting with $\bA$; \item the resulting {\it principal-series} unitary representations \begin{equation*} \pi_{\sigma,\nu}:=\mathrm{Ind}_{\bP}^{\bG}(\sigma\otimes\nu\otimes\mathrm{triv}), \end{equation*} where $\sigma\in \widehat{\bM}$ and $\nu\in\widehat{\bA}$ are unitary irreducible representations of those groups. \end{itemize} One is then interested in which $\pi_{\sigma'',\nu''}$ are {\it weakly contained} \cite[Definition F.1.1]{bdv} in tensor products $\pi_{\sigma,\nu}\otimes \pi_{\sigma',\nu'}$ (i.e. feature in a direct-integral decomposition of the latter); we write \begin{equation*} \pi_{\sigma'',\nu''}\preceq \pi_{\sigma,\nu}\otimes \pi_{\sigma',\nu'}. \end{equation*} It turns out that in the cases worked out in the literature there is a closed subgroup $\bH\le \bM$ that determines this weak containment via \Cref{eq:hmdet}. 
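To fix ideas, in the simplest case $\bG=\mathrm{SL}(2,\bR)$ (spelled out here only for orientation) the decomposition reads \begin{equation*} \bK=\mathrm{SO}(2),\quad \bA=\{\mathrm{diag}(a,a^{-1})\ |\ a>0\},\quad \bN=\left\{\begin{pmatrix}1 & t\\ 0 & 1\end{pmatrix}\ \Big|\ t\in\bR\right\},\quad \bM=\{\pm 1\}=Z(\bG), \end{equation*} so that $\sigma\in\widehat{\bM}$ is simply a parity and $\nu$ a unitary character of $\bA\cong \bR_{>0}$. 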
Examples: \begin{itemize} \item When the (connected, etc.) Lie group $\bG$ is {\it complex}, one can simply take $\bH=Z(\bG)$ (the center of $\bG$, which is always automatically contained in $\bM$). This follows, for instance, from \cite[Theorem 3.5.5]{wil-tens} in conjunction with \cite[Theorems 1 and 2]{mart-dec}. \item For $\bG=\mathrm{SL}(n,\bR)$, $n\ge 2$ one can again set $\bH=Z(\bG)$: \cite[\S 4]{repk} for $n=2$ and \cite[p.210, Theorem]{mart-dec} for the rest. \item Finally, for {\it real-rank-one} $\bG$ the main result of \cite{mart-dec}, Theorem 16 of that paper, provides such an $\bH\le \bM$ (denoted there by $\bM_0$; it is in general non-central, and in fact not even normal). \end{itemize} The phenomenon presumably merits some attention in its own right. \subsection*{Acknowledgements} This work is partially supported by NSF grant DMS-2001128. \section{Preliminaries}\label{se.prel} Everything in sight (algebras, coalgebras, etc.) will be linear over a fixed algebraically closed field $k$. We assume some background on coalgebras and Hopf algebras, as covered by any number of good sources such as \cite{swe,abe,mont,rad}. \begin{notation} A number of notational conventions will be in place throughout. \begin{itemize} \item $\Delta$, $\varepsilon$ and $S$ denote, respectively, coproducts, counits and antipodes. They will occasionally be decorated with letters indicating which coalgebra, Hopf algebra, etc. they are attached to; $S_{H}$, for instance, is the antipode of the Hopf algebra $H$. \item We use an un-parenthesized version of {\it Heyneman-Sweedler notation} (\cite[Notation 1.4.2]{mont} or \cite[\S 2.1]{rad}): \begin{equation*} \Delta(c) = c_1\otimes c_2,\ ((\Delta\otimes\id)\circ \Delta)(c) = c_1\otimes c_2\otimes c_3 \end{equation*} and so on for coproducts and \begin{equation*} c\mapsto c_0\otimes c_1,\quad c\mapsto c_{-1}\otimes c_0 \end{equation*} for right and left comodule structures respectively. 
\item $\cO(\bG)$, $\cO(\bH)$, and so on denote Hopf algebras over a fixed algebraically closed field $k$; they are to be thought of as algebras of representative functions on linear algebraic {\it quantum groups} $\bG$, $\bH$, etc. \item An {\it embedding} $\bH\le \bG$ of quantum groups means a Hopf algebra surjection $\cO(\bG)\twoheadrightarrow \cO(\bH)$ and more generally, a morphism $\bH\to \bG$ is one of Hopf algebras in the opposite direction $\cO(\bG)\to \cO(\bH)$. \item Categories of (co)modules are denoted by $\cM$, decorated with the symbol depicting the (co)algebra, with the left/right position of the decoration matching the chirality of the (co)module structure. Examples: ${}_A\cM$ means left $A$-modules, $\cM^C$ denotes right $C$-comodules, etc. Comodule structures are right unless specified otherwise. \item These conventions extend to {\it relative Hopf modules} (\cite[\S 8.5]{mont} or \cite[\S 9.2]{rad}): if, say, $A$ is a right comodule algebra \cite[Definition 4.1.2]{mont} over a Hopf algebra $H$ with structure \begin{equation*} A\ni a\mapsto a_0\otimes a_1\in A\otimes H \end{equation*} then $\cM^H_A$ denotes the category of right $A$-modules internal to $\cM^H$; that is, right $A$-modules $M$ that are also right $H$-comodules via \begin{equation*} m\mapsto m_0\otimes m_1 \end{equation*} such that \begin{equation*} (ma)_0\otimes (ma)_1 = m_0a_0\otimes m_1a_1. \end{equation*} There are analogues $\cM_H^C$, say, for right $H$-module coalgebras $C$, left- or half-left-handed versions thereof, and so on. \item An additional `$f$' adornment on one of the above-mentioned categories means {\it finite-dimensional} (co)modules: $\cM^C_f$ is the category of finite-dimensional right $C$-comodules, for instance. \item Reprising a convention common in the operator-algebra literature (e.g. 
\cite[\S 2.3.2, \S 18.1.1]{dixc}), $\widehat{C}$ denotes the isomorphism classes of simple and hence finite-dimensional \cite[Theorem 5.1.1]{mont} (right, unless specified otherwise) $C$-comodules and $\widehat{\bG}=\widehat{\cO(\bG)}$. The purely-algebraic and operator-algebraic notations converge when $\bG$ is compact and $\cO(\bG)$ denotes the Hopf algebra of representative functions on $\bG$: $\widehat{\bG}$ as defined above can then be identified with the set of isomorphism classes of irreducible unitary $\bG$-representations. \item In the same spirit, it will also occasionally be convenient to write \begin{equation*} \mathrm{Rep}(\bG):=\cM^{\cO(\bG)}. \end{equation*} \end{itemize} \end{notation} The linear algebraic quantum groups $\bG$ in the sequel will frequently be {\it linearly reductive}, in the sense that the Hopf algebra $\cO(\bG)$ is {\it cosemisimple} \cite[\S 2.4]{mont}: $\mathrm{Rep}(\bG)$ is a semisimple category, i.e. every comodule is a direct sum of simple subcomodules. Equivalently (\cite[Definition 2.4.1]{mont}), $\cO(\bG)$ is a direct sum of simple subcoalgebras. Cosemisimple Hopf algebras $H$ are equipped with unique unital {\it integrals} $\int:H\to k$ \cite[Theorem 2.4.6]{mont} and hence have bijective antipodes (by \cite[Corollary 5.4.6]{dnr}, say); more is true, though. Still assuming $H$ cosemisimple, for a simple comodule $V\in \widehat{H}$ the canonical coalgebra morphism \begin{equation*} \mathrm{End}(V)^*\cong V^*\otimes V\to H \end{equation*} (conceptually dual to the analogous map $A\to \mathrm{End}(V)$ giving $V$ a module structure over an algebra $A$) is one-to-one and gives the direct-sum decomposition \begin{equation}\label{eq:pwdec} H=\bigoplus_{V\in \widehat{H}} (V^*\otimes V) = \bigoplus_{V\in \widehat{H}} C_V \end{equation} into simple subcoalgebras $C_V:=V^*\otimes V$ (the {\it Peter-Weyl} decomposition, in compact-group parlance: \cite[Definition 2.2]{dk}, \cite[Theorem 27.40]{hr2}, etc.) 
that makes $H$ cosemisimple to begin with. With this in place, not only is the antipode $S:=S_H$ bijective but in fact its square leaves every $C_V$, $V\in \widehat{H}$ invariant and acts as an automorphism thereon \cite[Theorem 7.3.7]{dnr}. We refer to $C_V=V^*\otimes V$ as the {\it coefficient coalgebra} of the simple $H$-comodule $V$. This is the coalgebra associated to $V$ in \cite[Proposition 2.5.3]{dnr}, and is the smallest subcoalgebra $C\le H$ for which the comodule structure \begin{equation*} V\to V\otimes H \end{equation*} factors through $V\otimes C$. \subsection{Restriction, induction and the like}\label{subse:resind} Given a coalgebra morphism $C\to D$, the {\it cotensor product} (\cite[Definition 8.4.2]{mont} or \cite[\S 10]{bw}) $-\square_D C$ is right adjoint to the natural ``scalar corestriction'' functor $\cM^C\to \cM^D$: \begin{equation}\label{eq:cotens} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (l) {$\cM^C$} +(2,0) node (m) {$\bot$} +(4,0) node (r) {$\cM^D$} ; \draw[->] (l) to[bend left=16] node[pos=.5,auto] {$\scriptstyle \text{cores}$} (r); \draw[->] (r) to[bend left=16] node[pos=.5,auto] {$\scriptstyle -\square_DC$} (l); \end{tikzpicture} \end{equation} the central symbol indicating that the top functor is the left adjoint. When $\bH\le \bG$ is, say, an inclusion of compact groups and $C\to D$ the corresponding surjection $\cO(\bG)\to \cO(\bH)$ of algebras of representative functions, the cotensor functor \begin{equation*} -\square_{\cO(\bH)}\cO(\bG):\mathrm{Rep}(\bH)\to \mathrm{Rep}(\bG) \end{equation*} is naturally isomorphic with the usual {\it induction} $\mathrm{Ind}_{\bH}^{\bG}$ \cite[p.82]{rob}. 
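In the finite-group case this identification is easy to unwind directly; we record the (standard) computation only for orientation. For a finite group $\bG$, $\cO(\bG)$ is the Hopf algebra of all functions $\bG\to k$, and a right $\cO(\bH)$-comodule is nothing but a representation $W$ of $\bH\le \bG$. Identifying $W\otimes \cO(\bG)$ with the space of functions $\bG\to W$, the cotensor product becomes \begin{equation*} W\ \square_{\cO(\bH)}\ \cO(\bG)\cong \{f:\bG\to W\ |\ f(hg)=h\cdot f(g)\ \text{ for all }h\in\bH,\ g\in\bG\}, \end{equation*} with $\bG$ acting by right translation: the familiar model of the induced representation $\mathrm{Ind}_{\bH}^{\bG}W$ (up to the usual left/right conventions). 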
For that reason we repurpose this same notation for the general setting of quantum-group inclusions, writing \begin{equation*} \mathrm{Ind}_{\bH}^\bG:= -\square_{\cO(\bH)}\cO(\bG):\mathrm{Rep}(\bH) \to \mathrm{Rep}(\bG) \end{equation*} for any quantum-group inclusion $\bH\le \bG$; for consistency, we also occasionally denote the rightward functor in \Cref{eq:cotens} by \begin{equation*} \mathrm{Res}_{\bH}^\bG:\mathrm{Rep}(\bG) \to \mathrm{Rep}(\bH). \end{equation*} \section{Normal subgroups and automatic reductivity}\label{se:norm} Consider a quantum group embedding $\bH\le \bG$, expressed as a surjective Hopf-algebra morphism $\pi:\cO(\bG)\to \cO(\bH)$. As is customary in the literature on quantum homogeneous spaces (e.g. \cite[proof of Theorem 2.7]{wnorm}), we write \begin{align*} \cO(\bG/\bH) &:= \{x\in\cO(\bG)\ |\ (\id\otimes\pi)\Delta(x) = x\otimes 1\}\\ \cO(\bH\backslash \bG) &:= \{x\in\cO(\bG)\ |\ (\pi\otimes\id)\Delta(x) = 1\otimes x\}. \end{align*} According to \cite[Definition 1.1.5]{ad} a quantum subgroup $\bH\le \bG$ would be termed {\it normal} provided the two quantum homogeneous spaces $\cO(\bG/\bH)$ and $\cO(\bH\backslash \bG)$ coincide. This will not quite do for our purposes (see \Cref{ex:borel}), so instead we follow \cite[\S 1.5]{pw} (also, say, \cite[Definition 2.6]{wnorm}, relying on the same source) in the following \begin{definition}\label{def:norm} The quantum subgroup $\bH\le \bG$, cast as the surjection $\pi:\cO(\bG)\to \cO(\bH)$, is \begin{itemize} \item {\it left-normal} if $\pi$ is a morphism of left $\cO(\bG)$-comodules under the left adjoint coaction \begin{equation*} \mathrm{ad}_{l}:=\mathrm{ad}_{l,\bG}:x\mapsto x_1S(x_3)\otimes x_2. \end{equation*} \item {\it right-normal} if similarly, $\pi$ is a morphism of right $\cO(\bG)$-comodules under the right adjoint coaction \begin{equation}\label{eq:radj} \mathrm{ad}_{r}:=\mathrm{ad}_{r,\bG}:x\mapsto x_2\otimes S(x_1)x_3. 
\end{equation} \item {\it normal} if it is both left- and right-normal. \end{itemize} \end{definition} The following result is essentially a tautology in the framework of \cite[\S 1.2]{chk}, but only because in that paper the definition of a normal quantum subgroup is more restrictive (see \cite[Definition 1.2.3]{chk}, which makes an additional (co)flatness requirement). \begin{theorem}\label{th:isred} Let $\bH\le \bG$ be a left- or right-normal quantum subgroup of a linearly reductive quantum group such that $S^2$ leaves every simple subcoalgebra of $\cO(\bH)$ invariant. $\bH$ is then linearly reductive and normal. \end{theorem} \begin{remark} The condition that $S^2$ leave invariant the simple subcoalgebras is certainly necessary for cosemisimplicity \cite[Theorem 7.3.7]{dnr}, but I do not know if it is redundant as a hypothesis in the context of \Cref{th:isred}. \end{remark} In particular, the squared-antipode condition of \Cref{th:isred} is automatic when $S^2=\id$ (i.e. when $\cO(\bG)$, or $\bG$, is {\it involutory} or {\it involutive} \cite[Definition 7.1.12]{rad}). We thus have \begin{corollary} Left- or right-normal quantum subgroups of involutive linearly reductive quantum groups are normal and linearly reductive. \mbox{}\hfill\ensuremath{\blacksquare} \end{corollary} The proof of \Cref{th:isred} requires some preparation. First, a simple remark for future reference. \begin{lemma}\label{le:bijant} Let $\pi:H\to K$ be a surjective morphism of Hopf algebras with $H$ cosemisimple. $K$ then has bijective antipode, and hence $\pi$ intertwines antipode inverses. \end{lemma} \begin{proof} That a morphism of bialgebras intertwines antipodes or antipode inverses as soon as these exist is well known, so we focus on the claim that $S_{K}$ is bijective. By the very definition of cosemisimplicity $H$ is the direct sum of its simple (hence finite-dimensional \cite[Theorem 5.1.1]{mont}) subcoalgebras $C_i\le H$. 
The assumption is that $\pi$ is a morphism of Hopf algebras, so the antipode $S:=S_H$ restricts to maps \begin{equation}\label{eq:fsts} S:\ker(\pi|_{C_i})\to \ker(\pi|_{S(C_i)}), \end{equation} injective because $S$ is bijective. On the other hand though, for cosemisimple Hopf algebras the squared antipode leaves every subcoalgebra invariant \cite[Theorem 7.3.7]{dnr}, so \begin{equation*} S^2:\ker(\pi|_{C_i})\to \ker(\pi|_{S^2(C_i)}) = \ker(\pi|_{C_i}), \end{equation*} being a one-to-one endomorphism of a finite-dimensional vector space, must be bijective. Since that map decomposes as \Cref{eq:fsts} followed by its (similarly one-to-one) analogue defined on $S(C_i)$, \Cref{eq:fsts} itself must be bijective, and hence the inverse antipode $S^{-1}$ leaves $\ker(\pi)$ invariant. This, in essence, was the claim. \end{proof} The conclusion of \Cref{le:bijant} is by no means true of arbitrary bijective-antipode Hopf algebras $H$: \begin{example}\label{ex:nobij} \cite[Theorem 3.2]{schau-ff} gives an example of a Hopf algebra $H$ with bijective antipode and a Hopf ideal $I\trianglelefteq H$ that is not invariant under the inverse antipode. In other words, even though $H$ has bijective antipode, the quotient Hopf algebra $H\to H/I$ does not. \end{example} \pf{th:isred} \begin{th:isred} The proof proceeds gradually. \vspace{.5cm} {\bf Step 1: normality.} According to \Cref{le:bijant} the antipode $S:=S_{\cO(\bG)}$ and its inverse both leave the kernel $\cK$ of the surjection \begin{equation*} \pi:\cO(\bG)\to \cO(\bH) \end{equation*} invariant, so $S(\cK)=\cK$. The fact that left- and right-normality are equivalent now follows from \cite[Proposition 1.5.1]{pw}. \vspace{.5cm} {\bf Step 2: The homogeneous spaces $\bG/\bH$ and $\bH\backslash \bG$ coincide.} This means that \begin{equation}\label{eq:homogs} \cO(\bH\backslash \bG) = \cO(\bG/\bH) =: A, \end{equation} and follows from \cite[Lemma 1.1.7]{ad}. 
\vspace{.5cm} {\bf Step 3: Reduction to trivial $\bG/\bH$.} The subspace $A\le \cO(\bG)$ of \Cref{eq:homogs} is in fact a Hopf subalgebra \cite[Lemma 1.1.4]{ad}. $A$ is also invariant under the right adjoint action \begin{equation*} \cO(\bG)\otimes \cO(\bG) \ni x\otimes y\mapsto S(y_1) x y_2\in \cO(\bG) \end{equation*} (\cite[Lemma 1.20]{chk}), so by \cite[Lemma 1.1.11]{ad} the left ideal \begin{equation*} \cO(\bG)A^-\le \cO(\bG)\text{ where }A^-:=\mathrm{ker}(\varepsilon|_A) \end{equation*} is bilateral. The quotient $\cO(\bG)/\cO(\bG)A^-$ must then be a {\it cosemisimple} quotient Hopf algebra \cite[Theorem 2.5]{chi-ff} $\cO(\bG)\to \cO(\bK)$, and we have an exact sequence \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (ll) {$k$} +(1.5,.5) node (l) {$\cO(\bG/\bK)$} +(3.5,.5) node (m) {$\cO(\bG)$} +(5.5,.5) node (r) {$\cO(\bK)$} +(7,0) node (rr) {$k$} +(1.5,0) node () {$\parallel$} +(1.5,-.5) node () {$\cO(\bG/\bH)$} ; \draw[->] (ll) to[bend left=6] node[pos=.5,auto] {$\scriptstyle $} (l); \draw[->] (l) to[bend left=6] node[pos=.5,auto] {$\scriptstyle $} (m); \draw[->] (m) to[bend left=6] node[pos=.5,auto] {$\scriptstyle $} (r); \draw[->] (r) to[bend left=6] node[pos=.5,auto] {$\scriptstyle $} (rr); \end{tikzpicture} \end{equation*} of quantum groups in the sense of \cite[\S 1.2]{ad}, with everything in sight cosemisimple. Since furthermore $A^-$ is annihilated by the original surjection $\cO(\bG)\to \cO(\bH)$, $\bH$ can be thought of as a quantum subgroup of $\bK$ (rather than $\bG$): \begin{equation*} \cO(\bK)\to \cO(\bH). \end{equation*} I now claim that the corresponding homogeneous space is trivial: \begin{equation}\label{eq:khhk} \cO(\bK/\bH) = \cO(\bH\backslash \bK) = k. \end{equation} To see this, consider a simple representation $V\in\widehat{\bK}$ that contains invariant vectors over $\bH$. 
Because $\cO(\bK)$ is cosemisimple, $V$ is a subcomodule (rather than just a subquotient) of a simple comodule $W\in \widehat{\bG}$, and it follows that \begin{equation*} W|_{\bH}\ge V|_{\bH} \end{equation*} contains invariant vectors. The fact that \Cref{eq:homogs} is a Hopf subalgebra means that it is precisely \begin{equation*} \bigoplus_{U}C_U,\ U\in\widehat{\bG}\text{ and }U|_{\bH}\text{ has invariant vectors}, \end{equation*} so $C_W\le A$ and the restriction $W|_{\bK}$ decomposes completely as a sum of copies of the trivial comodule $k$. But then $V\le W|_{\bK}$ itself must be trivial, proving the claim \Cref{eq:khhk}. Now simply switch the notation back to $\bG:=\bK$ to conclude Step 3: \begin{equation}\label{eq:nohomog} \cO(\bG/\bH) = \cO(\bH\backslash \bG) = k. \end{equation} This latter condition simply means that for an $\cO(\bG)$-comodule $V$ its $\bG$- and $\bH$-invariants coincide: \begin{equation*} \mathrm{hom}_{\bG}(k,V) = \mathrm{hom}_{\bH}(k,V). \end{equation*} Equivalently, since \begin{equation*} \mathrm{hom}_{\bG}(V,W) = \mathrm{hom}_{\bG}(k,W\otimes V^*), \end{equation*} this simply means that the restriction functor \begin{equation}\label{eq:res} \mathrm{Rep}(\bG)\ni V\mapsto V|_{\bH}\in \mathrm{Rep}(\bH) \end{equation} is full (for both left and right comodules, but here we focus on the latter). \vspace{.5cm} {\bf Step 4: Wrapping up.} Because the restriction functor \Cref{eq:res} is full, simple, non-isomorphic $\bG$-representations that remain simple over $\bH$ also remain non-isomorphic. Now, assuming $\bH\le \bG$ is not an isomorphism (or there would be nothing to prove), some irreducible $V\in\widehat{\bG}$ must become reducible over $\bH$. There are two possibilities to consider: \begin{enumerate}[(a)] \item\label{item:1} All simple subquotients of the reducible representation $V|_{\bH}$ are isomorphic. We then have (in $\mathrm{Rep}(\bH)$) a surjection of $V$ onto a simple quotient thereof, which then embeds into $V$ again. 
All in all this gives a non-scalar endomorphism of $V$ over $\bH$, contradicting the fullness of the restriction functor \Cref{eq:res}. \item\label{item:2} $V$ acquires at least two non-isomorphic simple subquotients $V_i$, $i=1,2$ over $\bH$. Then, the image of the coefficient coalgebra $C_V=V^*\otimes V$ of \Cref{eq:pwdec} through $\pi:\cO(\bG)\to \cO(\bH)$ will contain both \begin{equation*} C_{V_i} = V_i^*\otimes V_i\le \cO(\bH),\ i=1,2 \end{equation*} as (simple) subcoalgebras. The requirement that $S^2(C_{V_i})=C_{V_i}$ means that the simple comodules $V_i$ are isomorphic to their respective double duals $V_i^{**}$ (as $\cO(\bH)$-comodules, not just vector spaces). But then \begin{equation*} C_{V_i} = V_i^*\otimes V_i\cong V_i^*\otimes V_i^{**} \end{equation*} contains an $\bH$-invariant vector, namely the image of the {\it coevaluation} \cite[Definition 9.3.1]{maj-fnd} \begin{equation*} \mathrm{coev}_{V_i^*}:k\to V_i^*\otimes V_i^{**}. \end{equation*} It follows that the space of $\bH$-invariants of the $\cO(\bG)$-comodule $\pi(C_V)$ is at least 2-dimensional, whereas that of $\bG$-invariants is at most 1-dimensional (because the same holds true of $C_V=V^*\otimes V$). This contradicts the fullness of \Cref{eq:res} and hence our assumption that $\bH\le \bG$ is {\it not} an isomorphism. \end{enumerate} The proof of the theorem is now complete. \end{th:isred} \begin{remark} Left and right normality are proven equivalent to an alternative notion (\cite[Definition 2.3]{wnorm}) in \cite[Theorem 2.7]{wnorm} in the context of {\it CQG algebras}, i.e. complex cosemisimple Hopf $*$-algebras with positive unital integral (this characterization is equivalent to \cite[Definition 2.2]{dk}). 
The substance of \Cref{th:isred}, however, is the cosemisimplicity claim; this is of no concern in the CQG-algebra case, as a Hopf $*$-algebra that is a quotient of a CQG algebra is automatically again CQG (as follows, for instance, from \cite[Proposition 2.4]{dk}), and hence cosemisimple. \end{remark} \begin{example}\label{ex:borel} Weakening the normality requirement to the condition $\cO(\bG/\bH)=\cO(\bH\backslash \bG)$ would render \Cref{th:isred} false. Let $\bG$ be a semisimple complex algebraic group and $\bB\le \bG$ a {\it Borel subgroup} \cite[Part II, \S 1.8]{jntz}. The restriction functor \begin{equation*} \mathrm{Res}:\mathrm{Rep}(\bG)\to \mathrm{Rep}(\bB) \end{equation*} is full \cite[Part II, Corollary 4.7]{jntz}, so in particular \begin{equation*} \cO(\bG/\bB) = \mathrm{hom}_\bB(\mathrm{triv},\cO(\bG)) = \mathrm{hom}_\bG(\mathrm{triv},\cO(\bG)) = \bC \end{equation*} and similarly for $\cO(\bB\backslash \bG)$. This means that $\cO(\bG/\bB)=\cO(\bB\backslash \bG)$, but $\bB$ is nevertheless not reductive. \end{example} \begin{remark}\label{re:clsnorm} The classical (as opposed to quantum) analogue of \Cref{th:isred} admits an alternative, more direct proof relying on the structure of reductive groups: \begin{itemize} \item In characteristic zero linear reductivity is equivalent (by \cite[p.88 (2)]{nag}, for instance) to plain reductivity \cite[\S 11.21]{brl}, i.e. the condition that the {\it unipotent radical} $\cR_u(\bG)$ of $\bG$ (the largest normal connected unipotent subgroup) be trivial. Assuming $\bG$ is reductive, for any normal $\bK\trianglelefteq \bG$ the corresponding unipotent radical $\cR_u(\bK)$ is characteristic in $\bK$ and hence normal in $\bG$, meaning that \begin{equation*} \cR_u(\bK)\le \cR_u(\bG)=\{1\} \end{equation*} and hence $\bK$ is again reductive (so linearly reductive, in characteristic zero).
\item On the other hand, in positive characteristic $p$ \cite[p.88 (1)]{nag} says that the linearly reductive groups $\bG$ are precisely those fitting into an exact sequence \begin{equation*} \{1\}\to \bK\to \bG\to \bG/\bK\to \{1\} \end{equation*} with $\bK$ a closed subgroup of a torus and $\bG/\bK$ finite of order coprime to $p$. Clearly then, normal subgroups of $\bG$ have the same structure. \end{itemize} \end{remark} \section{Clifford theory}\label{se:cliff} We work with an exact sequence \Cref{eq:seq} \begin{equation}\label{eq:seq} k\to A\to H\to B\to k \end{equation} of cosemisimple Hopf algebras in the sense of \cite[p. 23]{ad}. Note that we additionally know that $H$ is left and right coflat over $B$ (simply because the latter is cosemisimple) and left and right faithfully flat over $A$ (by \cite[Theorem 2.1]{chi-ff}). We will make frequent use of \cite[Theorem 1]{tak-ff}, to the effect that \begin{equation}\label{eq:equiv-ahb} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cM_A^H$} +(5,0) node (2) {$\cM^B$}; \draw[->] (1) to[bend left=10] node [pos=.5,auto] {$\scriptstyle M\mapsto M/MA^-$} (2); \draw[->] (2) to[bend left=10] node [pos=.5,auto] {$\scriptstyle N\otimes A\mapsfrom N$} (1); \end{tikzpicture} \end{equation} is an equivalence, where the $-$ superscript denotes kernels of counits. 
Upon identifying $\cM^B$ with $\cM^H_A$ via \Cref{eq:equiv-ahb}, the adjunction \begin{equation}\label{eq:adj} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cM^H$} +(5,0) node (2) {$\cM^B$}; \draw[->] (1) to[bend left=10] node [pos=.5,auto] {$\scriptstyle \mathrm{corestrict}$} (2); \draw[->] (2) to[bend left=10] node [pos=.5,auto] {$\scriptstyle -\square_B H$} (1); \end{tikzpicture} \end{equation} becomes \begin{equation}\label{eq:adj-alt} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cM^H$} +(5,0) node (2) {$\cM^H_A$.}; \draw[->] (1) to[bend left=10] node [pos=.5,auto] {$\scriptstyle -\otimes A$} (2); \draw[->] (2) to[bend left=10] node [pos=.5,auto] {$\scriptstyle \mathrm{forget}$} (1); \end{tikzpicture} \end{equation} We will freely switch points of view between the two perspectives provided by \Cref{eq:adj,eq:adj-alt}. Consider the following binary relation $\sim_B$ on $\widehat{B}$. \begin{definition}\label{def.simb} For $V,W\in \widehat{B}$, $V\sim_B W$ provided there is a simple $H$-comodule $U$ such that $V$ and $W$ are both constituents of the corestriction of $U$ to $B$. \end{definition} Similarly, we will study the following relation on $\widehat{H}$: \begin{definition}\label{def.simh} For $V,W\in \widehat{H}$ we set $V\sim_H W$ provided $\hom^B(V,W)\ne 0$. \end{definition} \begin{remark} In other words, $\sim_H$ signifies the fact that the corestrictions of $V$ and $W$ to $\cM^B$ have common simple constituents. \end{remark} Our first observation is that $\sim_H$ is an equivalence relation, and provides an alternate characterization for it. 
\begin{theorem}\label{th.simh} $\sim_H$ is an equivalence relation on $\widehat{H}$, and moreover, for $V,W\in\widehat{H}$ the following conditions are equivalent \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \item $V\sim_H W$; \item as $B$-comodules, $V$ and $W$ have the same simple constituents; \item $V$ embeds into $W\otimes A\in \cM^H$. \end{enumerate} \end{theorem} \begin{proof} Note first that (2) clearly defines an equivalence relation on $\widehat{H}$, so the first statement of the theorem will be a consequence of \begin{equation*} (1) \Leftrightarrow (2) \Leftrightarrow (3). \end{equation*} We prove the latter result in stages. \vspace{.5cm} {\bf $(1)\Leftrightarrow (3)$.} By definition, $V\sim_H W$ if and only if \begin{equation*} \hom^B(V,W)\ne 0. \end{equation*} Via \Cref{eq:equiv-ahb} and the hom-tensor adjunction \Cref{eq:adj-alt}, this hom space can be identified with \begin{equation}\label{eq:vwa} \hom^H_A(V\otimes A,W\otimes A) \cong \hom^H(V,W\otimes A). \end{equation} The simplicity of $V\in \widehat{H}$ now implies that every non-zero element of the right hand side of \Cref{eq:vwa} is an embedding, hence finishing the proof of the equivalence of (1) and (3). \vspace{.5cm} {\bf $(1)\Leftrightarrow (2)$.} Let us denote by $\mathrm{const}(\bullet)$ the set of simple constituents of a $B$-comodule $\bullet$. By definition $V\sim_HW$ means that {\it some} of the simple constituents of $V$ and $W$ as objects in $\cM^B$ coincide, so (2) is clearly stronger than (1). Conversely, note that by the equivalence (1) $\Rightarrow$ (3) proven above, whenever $V\sim_HW$ we have \begin{equation}\label{eq:const} \mathrm{const}(V)\subseteq \mathrm{const}(W\otimes A), \end{equation} where the respective objects are regarded as $B$-comodules via the corestriction functor $\cM^H\to \cM^B$. 
In turn however, given that $A\in \cM^H$ breaks up as a sum of copies of $k$ in $\cM^B$ (because of the exactness of \Cref{eq:seq}), the right hand side of \Cref{eq:const} is simply $\mathrm{const}(W)$. All in all, we have \begin{equation*} V\sim_HW\Rightarrow \mathrm{const}(V)\subseteq \mathrm{const}(W). \end{equation*} This together with the symmetry of $\sim_H$ (obvious by definition from the semisimplicity of $\cM^B$) finishes the proof of (1) $\Rightarrow$ (2) and of the theorem. \end{proof} \begin{theorem}\label{th.simb} $\sim_B$ is an equivalence relation on $\widehat{B}$ with finite classes. \end{theorem} \begin{proof} We know from \Cref{th.simh} above that as $U$ ranges over $\widehat{H}$, the sets $\mathrm{const}(U)$ of constituents of $U\in \cM^B$ partition $\widehat{B}$, thus defining an equivalence relation on the latter set. The definition of $\sim_B$ ensures that $V\sim_B W$ if and only if $V$ and $W$ fall in the same set $\mathrm{const}(U)$, and hence $\sim_B$ coincides with the equivalence relation from the previous paragraph. Finally, the statement on finiteness of classes is implicit in their description given above: an equivalence class is the set of simple constituents of a simple $H$-comodule $U$ viewed as a $B$-comodule, and it must be finite because $\mathrm{dim}(U)$ is. \end{proof} \Cref{th.simh,th.simb} establish a connection between the equivalence relations $\sim_H$ and $\sim_B$ on $\widehat{H}$ and $\widehat{B}$ respectively. We record it below. Before getting to the statement, recall the notation $\mathrm{const}(\bullet)\subseteq \widehat{B}$ for the set of simple summands of an object $\bullet\in \cM^B$. With that in mind, we have the following immediate consequence of \Cref{th.simh,th.simb}. 
\begin{proposition}\label{pr.2rels} The range of the map \begin{equation*} \widehat{H}\to \text{ finite subsets of }\widehat{B} \end{equation*} sending $V\in\widehat{H}$ to $\mathrm{const}(V)$ consists of the equivalence classes of $\sim_B$, and its fibers are the classes of $\sim_H$. \mbox{}\hfill\ensuremath{\blacksquare} \end{proposition} \section{Relative chain groups and centers}\label{se:rel} \begin{definition}\label{def:cg} Let $\bH\le \bG$ be an inclusion of linearly reductive quantum groups. The {\it (relative) chain group} $C(\bG,\bH)$ is defined by \begin{itemize} \item generators $g_V$ for simple comodules $V\in \widehat{\bG}$; \item relations \begin{equation}\label{eq:crel} \mathrm{hom}_{\bH}(U,V\otimes W)\ne 0\Rightarrow g_U = g_V g_W; \end{equation} that is, one such relation whenever the restrictions of $U$ and $V\otimes W$ to $\bH$ have non-trivial common summands (i.e. $U$ and $V\otimes W$ are not {\it disjoint} over $\bH$). \end{itemize} We write $C(\bG):=C(\bG,\bG)$. \end{definition} \begin{remark}\label{re:chains} For chained inclusions $\bK\le \bH\le \bG$ we have a map $C(\bG,\bK)\to C(\bG,\bH)$ sending the class of $V\in\widehat{\bG}$ in the domain to the class of the selfsame $V$ in the codomain. This is easily seen to be well-defined and a group morphism. \end{remark} Recall \cite[Definition 2.10]{chi-coc}. \begin{definition}\label{def:cent} Let $\bG$ be a linearly reductive quantum group. Its {\it center} $Z(\bG)\le \bG$ is the quantum subgroup dual to the largest Hopf algebra quotient \begin{equation*} \pi:\cO(\bG)\to \cO(Z(\bG)) \end{equation*} that is central in the sense of \cite[Definition 2.1]{chi-coc}: \begin{equation*} \pi(x_1)\otimes x_2 = \pi(x_2)\otimes x_1\in \cO(Z(\bG))\otimes \cO(\bG),\ \forall x\in \cO(\bG). \end{equation*} \end{definition} The relative version of this construction, alluded to in the title, is as follows. 
\begin{definition}\label{def:relcent} Let $\bH\le \bG$ be an embedding of linearly reductive quantum groups. The corresponding {\it relative center} $Z(\bG,\bH)$ is the {\it intersection} $Z(\bG)\cap \bH$ denoted by $Z(\bG)\wedge \bH$ in \cite[Definition 1.15]{chk}. This is a quantum subgroup of both $\bH$ and $Z(\bG)$ (and hence also of $\bG$), and is automatically linearly reductive by \cite[Proposition 3.1]{chk}. \end{definition} Each irreducible $\bG$-representation breaks up as a sum of mutually isomorphic (one-dimensional) representations over the center $Z(\bG)$, and hence gets assigned an element of $\widehat{Z(\bG)}$: its {\it central character}. Two such irreducible representations that are not disjoint over $\bH$ must have corresponding central characters agreeing on \begin{equation*} Z(\bG,\bH):=Z(\bG)\cap \bH \end{equation*} (the {\it relative center} associated to the inclusion $\bH\le \bG$), so we have a canonical morphism \begin{equation}\label{eq:can} \cat{can}:C(\bG,\bH)\to \widehat{Z(\bG,\bH)} \end{equation} \begin{theorem}\label{th:caniso} For any embedding $\bH\le \bG$ of linearly reductive quantum groups \Cref{eq:can} is an isomorphism. 
\end{theorem} \begin{proof} Consider the commutative diagram \begin{equation}\label{eq:cans} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (l) {$C(\bG)$} +(3,.5) node (u) {$C(\bG,\bH)$} +(3,-.5) node (d) {$\widehat{Z(\bG)}$} +(6,0) node (r) {$\widehat{Z(\bG,\bH)}$} ; \draw[->] (l) to[bend left=6] node[pos=.5,auto] {$\scriptstyle $} (u); \draw[->] (u) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \cat{can}$} (r); \draw[->] (l) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \cat{can}$} node[pos=.5,auto] {$\scriptstyle \cong$} (d); \draw[->] (d) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle $} (r); \end{tikzpicture} \end{equation} where \begin{itemize} \item the upper left-hand morphism is an instance of the maps noted in \Cref{re:chains}; \item the bottom right-hand map is the (plain) group surjection dual to the quantum-group inclusion $Z(\bG)\cap \bH\le Z(\bG)$; \item and the fact that the bottom left-hand map is an isomorphism is a paraphrase of \cite[Proposition 2.9]{chi-coc} in conjunction with \cite[Definition 2.10]{chi-coc}. \end{itemize} The surjectivity of the bottom composition entails that of \Cref{eq:can}, so it remains to show that the latter is one-to-one. Let $V\in\widehat{\bG}$ be a simple comodule where $Z(\bG,\bH)$ operates with trivial character, i.e. one whose class in $C(\bG,\bH)$ is annihilated by \Cref{eq:can}. We can then form the quantum subgroup \begin{equation*} Z(\bG)\bH:=Z(\bG)\vee \bH\le \bG \end{equation*} generated by $Z(\bG)$ and $\bH$ as in \cite[Definition 1.15]{chk} (the `$\vee$' notation is used there; we suppress the symbol here for brevity), which then satisfies, according to \cite[Theorem 3.4]{chk}, a quantum-flavored isomorphism theorem: \begin{equation*} \bH/Z(\bG,\bH) \stackrel{\cong}{\longrightarrow} Z(\bG)\bH/Z(\bG) \end{equation*} via the canonical map induced from $\bH\to Z(\bG)\bH$. 
Since $V$ (or rather its restriction $V|_{\bH}$) is a representation of the former group because $Z(\bG,\bH)$ operates trivially, it lifts to a $Z(\bG)\bH$-representation with $Z(\bG)$ acting trivially. In summary: \begin{equation*} \text{The restriction }V|_{\bH}\text{ extends to a }Z(\bG)\bH\text{-representation $W$ with trivial }Z(\bG)\text{-action}. \end{equation*} But then the induced representation $\mathrm{Ind}_{Z(\bG)\bH}^\bG W$ again has trivial central character, and hence so do all of its simple summands $V_1$. The adjunction \Cref{eq:cotens} yields \begin{equation*} \mathrm{hom}_{Z(\bG)\bH}(V_1|_{Z(\bG)\bH},W)\cong \mathrm{hom}_\bG\left(V_1,\mathrm{Ind}_{Z(\bG)\bH}^\bG W\right)\ne \{0\}, \end{equation*} meaning that $V_1$ fails to be disjoint from $W$ over $Z(\bG)\bH$ and hence also from \begin{equation*} V|_{\bH} = W|_{\bH}\text{ over }\bH. \end{equation*} To conclude, observe that \begin{itemize} \item \Cref{eq:can} agrees on $V$ and $V_1$ due to the noted non-disjointness \begin{equation*} \mathrm{hom}_{\bH}(V_1,V)\ne 0; \end{equation*} \item while the bottom left-hand map $\cat{can}:C(\bG)\to \widehat{Z(\bG)}$ of \Cref{eq:cans} annihilates $V_1$ because the latter has trivial central character; \item and hence the top right-hand $\cat{can}$ map in \Cref{eq:cans} must also annihilate $V$. \end{itemize} This being the desired conclusion, we are done. \end{proof}
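By way of illustration (a standard classical computation), the isomorphism of \Cref{th:caniso} can be verified directly in the simplest semisimple case.

\begin{example}
Take $\bH=\bG=SL_2(\bC)$, with simple representations $V_n$, $n\ge 0$, of highest weight $n$. The Clebsch--Gordan rule
\begin{equation*}
V_m\otimes V_n\cong V_{m+n}\oplus V_{m+n-2}\oplus \cdots\oplus V_{|m-n|}
\end{equation*}
shows that the relations \Cref{eq:crel} amount to $g_{V_n}=g_{V_1}^n$ together with $g_{V_1}^2=g_{V_0}=1$ (the latter because $V_0\le V_1\otimes V_1$). Hence $C(\bG)\cong \mathbb{Z}/2$, generated by the class of $V_1$. On the other hand $Z(\bG)=\{\pm 1\}$, and \Cref{eq:can} sends the class of $V_n$ to the central character through which $-1$ acts on $V_n$, namely $(-1)^n$; this is indeed an isomorphism $C(\bG)\cong \widehat{Z(\bG)}$.
\end{example}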
\section{Introduction} Deep learning has achieved remarkable success over the past decade and has been widely applied in various fields, including computer vision, natural language processing (NLP), and data mining. Although neural models can perform well in most tasks, they require a huge amount of data and high computation cost to train, making a trained model valuable intellectual property. As a result, it is essential to prevent neural models from being used without authorization. In the last few years, many methods have been proposed to safeguard deep neural networks, and they can be roughly divided into two types: {\em watermarking}~\cite{DBLP:conf/uss/AdiBCPK18} and {\em secure authorization}~\cite{DBLP:journals/corr/abs-2008-05966}. In the watermarking approaches, the owners can verify the ownership of the neural model based on a unique watermark. However, due to the catastrophic forgetting problem \cite{kemker2018measuring}, watermark-based neural models \cite{kuribayashi2020deepwatermark, song2017machine} {\color{black} are known to be vulnerable to certain} malicious attacks~\cite{wang2019attacks}, {\color{black} which may lead to the loss of their watermarks}. On the other hand, in the secure authorization approaches, the owners of the neural network want to ensure that users can only access the model with authorization. Recently, \citet{wang2022nontransferable} proposed a new perspective with non-transferable learning (NTL) to protect the model against illegal use on unauthorized data. The method trains the model to perform well only in the authorized domain while performing badly in the unauthorized domain. However, such an approach has some limitations: {\color{black} 1) their work relies on a significant amount of labeled data from the target domain, while such labels are usually not easy to acquire,} and 2) the access to the unauthorized domain can no longer be regained, if required, after {\color{black} the model is learned}.
\begin{figure}[t!]\hspace{-0.2cm} \centering \includegraphics[width=0.4\textwidth]{intro.pdf} \caption{Overview of our unsupervised non-transferable learning with secret keys.} \label{fig:intro} \end{figure} {\color{black} In this work, we propose a new NTL method named \underline{U}nsupervised \underline{N}on-\underline{T}ransferable \underline{L}earning (UNTL) for text classification tasks.} As Figure~\ref{fig:intro} shows, our model performs well in the source domain while performing badly in the target domain. In addition, we propose {\color{black} secret key modules}, which can help recover the ability of the model in the target domain. Our contributions include: \begin{itemize} \item We propose a novel unsupervised non-transferable learning approach for text classification tasks. {\color{black} Different from existing approaches, our model can still perform well without the need for label information in the target domain.} \item We introduce two different methods, {\color{black} namely {\em Prompt-based Secret Key} and {\em Adapter-based Secret Key}, that} allow us to recover the ability of the model to perform classification on the target domain. \item {\color{black} Extensive experiments show that our proposed models perform well in the source domain but badly in the target domain. Moreover, access to the target domain can still be regained using the secret key.} \end{itemize} {\color{black} To the best of our knowledge, our work is the first approach for learning under the unsupervised non-transferable learning setup that also comes with the ability to recover access to the target domain.\footnote{Our code and data are released at \url{https://github.com/ChaosCodes/UNTL}.}} \section{Related Work} In this section, we briefly survey ideas related to our work from two fields: domain adaptation and intellectual property protection. Furthermore, we discuss some limitations of the existing methods, which we tackle with our approach.
In domain adaptation, given a source domain and a target domain with unlabeled data or a few labeled examples, the goal is to improve performance on the target task using knowledge from the source domain. \citet{ghifary2014domain}, \citet{tzeng2014deep}, and \citet{zhu2020deep} applied a Maximum Mean Discrepancy regularization method~\cite{gretton2012kernel} to maximize the invariant information shared between different domains. \citet{ganin2016domain} and \citet{schoenauer-sebag2018multidomain} tried to match the feature space distributions of the two domains with adversarial learning. In contrast to the methods above, \citet{wang2022nontransferable} analyzed domain adaptation from a different angle and proposed non-transferable learning (NTL) to prevent knowledge transfer from the source to the target domain by enlarging the discrepancy between the representations of different domains. In intellectual property protection, given the significant value of trained deep neural networks (DNNs) and their vulnerability to malicious attacks, it is crucial to develop protection methods that defend the owners of DNNs from loss. Recently, two different approaches to safeguard DNNs have been proposed: watermarking~\cite{DBLP:conf/uss/AdiBCPK18} and secure authorization~\cite{DBLP:journals/corr/abs-2008-05966}. In the watermarking approaches, researchers design a digital watermark that can be embedded into data such as videos and images. With the detection of the unique watermark, one can verify ownership of the copyright of the data. Based on these ideas, \citet{song2017machine} and \citet{kuribayashi2020deepwatermark} embedded digital watermarks into the parameters of neural networks. \citet{zhang2020model} and \citet{wu2020watermarking} proposed a framework to generate images with an invisible but extractable watermark.
However, they are vulnerable to active attack algorithms~\cite{wang2019attacks, DBLP:conf/asiaccs/Chen0BDJLS21} which first detect the watermark and then rewrite or remove it. On the other hand, the secure authorization approach seeks to train a model that generates inaccurate results without authorization. \citet{DBLP:journals/corr/abs-2008-05966} proposed a key-based framework that ensures correct model functioning only with the correct secret key. In addition, \citet{wang2022nontransferable} were inspired by domain generalization and proposed non-transferable learning (NTL), which achieves secure authorization by reducing the model's generalization ability in the specified unauthorized domain. Although the NTL model can effectively prevent access to the unauthorized domain, it requires target labels during training, which may not always be easy to obtain. Furthermore, there is no mechanism to recover access to the unauthorized domain when needed. In this paper, we present a new NTL model and show that it can still achieve good performance even in the absence of target labels, which are indispensable in the work of \citet{wang2022nontransferable}. Furthermore, we extend it to a secret key-based version. With our method, authorized users can still access the target domain with the provided keys. \section{Approach} In this section, we first introduce our proposed {U}nsupervised {N}on-{T}ransferable {L}earning (UNTL) approach in Sec.~\ref{sec:UNTL}, followed by a discussion on its practical limitation -- it lacks the ability to regain access to the target domain. Next, we discuss {\color{black}our secret key-based methods} in Sec.~\ref{sec:key_methods} to address this limitation. \looseness=-1 \subsection{UNTL Text Classification}\label{sec:UNTL} \paragraph{Problem Description}\label{description} First of all, we present our definition of the unsupervised non-transferable learning task without labeled data from the target domain.
\textcolor{black}{Following~\citet{DBLP:journals/corr/abs-2010-03978}, we consider that a domain consists of three parts: input space $X$, label space $Y$, and the joint probability distribution $p(X, Y)$.} We are given a source domain $\mathcal{D}_s=\{\boldsymbol{x}_i,\boldsymbol{y}_i\}_{i=1}^{N_s}$ and a target domain $\mathcal{D}_t=\{\boldsymbol{x}_j\}_{j=1}^{N_t}$ with unlabeled samples, where $\boldsymbol{y}_i \in \mathbb{R}^C$ is a one-hot vector indicating the label of $\boldsymbol{x}_i$, $C$ is the number of classes, and $N_s, N_t$ refer to the numbers of examples in the source and target domains respectively. The goal of our UNTL method is to prevent knowledge transfer from the source to the target domain, i.e., to train the model so that it performs well on the source domain data but poorly on the target domain data, without requiring access to the label information of the target data. \paragraph{Text Classification} In our work, we use a BERT-based model~\cite{devlin2018bert} $\psi$ as our feature extractor for the input sentence and consider the final hidden state $h$ of the token \texttt{[CLS]} as the feature representation, where we denote $h$ as $\psi\left(\boldsymbol{x}\right)$. A simple feed-forward network ${\fontfamily{lmr}{\text{FFN}}}\left(\cdot \right)$ is added on top of BERT as a classifier to predict the label. Formally, the classification loss is: \begin{equation} \mathcal{L}_\textrm{CE} = \mathbb{E}_{\left(\boldsymbol{x}, \boldsymbol{y}\right)\sim\mathcal{D}_s}[ {\fontfamily{lmr}{\text{CE}}}\left({\fontfamily{lmr}{\text{FFN}}}\left(\psi\left(\boldsymbol{x} \right) \right), \boldsymbol{y}\right)] \end{equation} where $\mathcal{D}_s$ is the source domain dataset and ${\fontfamily{lmr}{\text{CE}}}$ indicates the cross-entropy function.
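As a sketch of this classification loss (a toy NumPy stand-in, not the paper's implementation: a random linear head plays the role of $\text{FFN}$ and random vectors play the role of the \texttt{[CLS]} features $\psi(\boldsymbol{x})$):

```python
import numpy as np

def ffn(feats, W, b):
    # toy linear classifier head standing in for FFN(.) on top of psi(x)
    return feats @ W + b

def cross_entropy(logits, labels):
    # mean cross-entropy over a batch; labels are integer class ids
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))      # stand-in for psi(x), the [CLS] states
W, b = rng.normal(size=(16, 3)), np.zeros(3)
labels = rng.integers(0, 3, size=8)   # source-domain labels y
l_ce = cross_entropy(ffn(feats, W, b), labels)
```

Minimizing `l_ce` over the source domain corresponds to the expectation in the equation above.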
\paragraph{Maximum Mean Discrepancy} To enlarge the distance between the representations of the source and target domains, we follow \citet{wang2022nontransferable} and use Maximum Mean Discrepancy~\cite{gretton2012kernel} (MMD) to achieve this goal. MMD is a kernel two-sample test and can be used as a metric to determine whether two data distributions $p$ and $q$ are similar. MMD defines the metric function as follows: \begin{equation}\label{eq:mmd1} d_{{p,q}} = ||\mathbb{E}_{\boldsymbol{x}\sim p}[\psi\left(\boldsymbol{x}\right)] - \mathbb{E}_{\boldsymbol{x}'\sim q}[\psi\left(\boldsymbol{x}'\right)]||^2_{\mathcal{H}_k} \end{equation} {\color{black} where $\mathcal{H}_k$ is the reproducing kernel Hilbert space (RKHS) with a kernel $k$, whose operation is $k\left(\boldsymbol{z}, \boldsymbol{z}'\right) = e^{-||\boldsymbol{z}-\boldsymbol{z}'||^2}$ and function $\psi$ maps the sentence input into RKHS. The smaller the distance $d_{p,q}$, the more similar the two distributions $p$ and $q$.}\looseness=-1 In our work, we use MMD to increase the distance between the feature representations of the source and the target domain, forcing the feature extractor $\psi$ to extract domain-dependent representations rather than maximizing the inter-domain invariance. To prevent the high MMD from dominating the entire loss, we follow \citet{wang2022nontransferable} and set an upper bound for it. Therefore, based on Equation~\ref{eq:mmd1}, our MMD loss can be formulated as: \begin{equation} \mathcal{L}_{\textrm{MMD}}\left(\mathcal{S,T}\right) = -\min\left(c, {d}_{\mathcal{S,T}}\right) \end{equation} where $c$ is the upper bound for MMD, and {\color{black}$\mathcal{S,T}$ are data distributions of the source and target domains respectively.} With this loss, we only maximize ${d}_{\mathcal{S,T}}$ when it is smaller than the upper bound $c$. 
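The capped MMD loss above can be sketched in NumPy as follows (illustrative only: a biased empirical estimate of $d_{\mathcal{S},\mathcal{T}}$ computed on small feature batches stands in for the BERT features $\psi(\boldsymbol{x})$; the kernel matches $k\left(\boldsymbol{z}, \boldsymbol{z}'\right) = e^{-||\boldsymbol{z}-\boldsymbol{z}'||^2}$):

```python
import numpy as np

def gaussian_kernel(a, b):
    # k(z, z') = exp(-||z - z'||^2), the kernel used in the text
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists)

def mmd(src, tgt):
    # biased empirical estimate of d_{S,T} from two feature batches
    return (gaussian_kernel(src, src).mean()
            + gaussian_kernel(tgt, tgt).mean()
            - 2.0 * gaussian_kernel(src, tgt).mean())

def mmd_loss(src, tgt, c=1.0):
    # L_MMD = -min(c, d): pushes the two domains apart, but only
    # until the discrepancy reaches the upper bound c
    return -min(c, mmd(src, tgt))
```

Identical batches give a discrepancy of zero, while well-separated batches drive the loss down until it saturates at $-c$.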
\paragraph{Domain Classifier} Despite being able to enlarge the gap between the source and target domains to some extent, the MMD loss lacks the explicit ability to clearly draw the boundary between the representations of different domains, especially when the knowledge between domains is similar. {\color{black} Therefore, we hypothesize that using MMD alone may not be sufficient to yield optimal empirical performance.} To mitigate this issue, we draw inspiration from Domain-Adversarial Neural Networks~\cite{ganin2016domain} and introduce an additional domain classifier on top of the feature extractor. This classifier is trained to predict the domain from the feature representations. We employ a cross-entropy loss to train the domain classifier. By optimizing this loss, the representations of different domains are encouraged to be more distinct. Specifically, we use 0 to indicate the source domain and 1 to indicate the target domain. We can formulate the domain classification (DC) loss as: {\color{black} \begin{equation} \begin{aligned} \mathcal{L}_{\textrm{DC}}\left(\mathcal{S}, \mathcal{T}\right) = \mathbb{E}_{\boldsymbol{x}^S\sim \mathcal{S}}[{\fontfamily{lmr}{\text{CE}}}\left({\fontfamily{lmr}{\text{FFN}}_{dc}}\left(\psi\left(\boldsymbol{x}^S\right)\right), 0\right)] \\ +\ \mathbb{E}_{\boldsymbol{x}^T\sim\mathcal{T}}[{\fontfamily{lmr}{\text{CE}}}({\fontfamily{lmr}{\text{FFN}}_{dc}}\left(\psi\left(\boldsymbol{x}^T\right)\right), 1)] \end{aligned} \end{equation} where ${\fontfamily{lmr}{\text{FFN}}_{dc}}$ is the domain classifier}. With this DC loss as a regularization term, the boundary between the source and target feature representations becomes clearer, facilitating non-transferable learning. \paragraph{Objective Function} In this task, our goal is to train a model that performs well on the source domain while performing badly on the target domain.
To achieve this goal, we propose a loss function for unsupervised non-transferable learning, which contains three terms. The first term is the cross-entropy loss $\mathcal{L}_{\textrm{CE}}$ for text classification, which integrates knowledge about the downstream task into the model. The second term is the domain classification loss $\mathcal{L}_{\textrm{DC}}$ and the third is the MMD loss $\mathcal{L}_{\textrm{MMD}}$. The latter two terms jointly contribute to enlarging the gap between the representations of the source and target domains to prevent knowledge transfer. The overall loss is written as: \begin{equation}\label{eq:total} \mathcal{L}_{\textrm{UNTL}} = \mathcal{L}_{\textrm{CE}} +\beta\cdot \mathcal{L}_{\textrm{DC}}+ \lambda\cdot\mathcal{L}_{\textrm{MMD}} \end{equation} where $\beta$ and $\lambda$ are scaling hyperparameters. \paragraph{Theoretical Analysis} {\color{black} Different from \cite{wang2022nontransferable}, where the information bottleneck theory \cite{DBLP:journals/corr/physics-0004057} is used to show the feasibility of non-transferable learning, we turn to a more general theory of domain adaptation}~\cite{ben2006analysis, wang2018theoretical}. Here, we present an analysis of the effectiveness of the unsupervised setting based on this theory. \begin{theorem}~\cite{ben2010theory} Let $\mathcal{H}$ be a hypothesis space (of a particular VC dimension), and consider any $h \in \mathcal{H}$.
Given a source domain $\mathcal{D_S}$ and a target domain $\mathcal{D_T}$: \begin{equation} \label{eq:theorem1} \epsilon_{\mathcal{T}}\left(h\right) \le \epsilon_{\mathcal{S}}\left(h\right) + \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}\left(\mathcal{D}_{\mathcal{S}},\mathcal{D}_{\mathcal{T}}\right) + C \end{equation} where $\epsilon_{\mathcal{S}}\left(h\right)$ and $\epsilon_{\mathcal{T}}\left(h\right)$ are the expected source and target errors respectively, $C = \min\limits_{h' \in \mathcal{H}} \left(\epsilon_{\mathcal{S}}\left(h'\right) + \epsilon_{\mathcal{T}}\left(h'\right)\right)$, which can be viewed as a constant, and $d_{\mathcal{H}\Delta\mathcal{H}}$ is a divergence\footnote{$d_{\mathcal{H}\Delta\mathcal{H}}$ is a symmetric difference hypothesis space for a hypothesis space $\mathcal{H}$. See \citet{ben2010theory} for more details.} that measures the maximal discrepancy between two distributions under a fixed hypothesis class. \end{theorem} {\color{black} During our training process, we minimize the source error while maximizing the divergence (with the MMD and DC losses). Compared with a baseline transfer model trained without the MMD and DC losses, we hypothesize that our method yields a comparable source error while producing a significantly larger divergence term. As these changes loosen the upper bound on the target error in Equation \ref{eq:theorem1}, we believe they may lead to a significant increase in the target error, which effectively prevents knowledge from being transferred into the target domain. We will verify this hypothesis in the experiments later.\footnote{In fact, through our experiments later, we found that on average there was a 1\% increase in source error for our approach as compared to the baseline.
However, there was a significant increase ($\times 10$) in the divergence term as approximated by the MMD loss, which leads to effective non-transferable learning (where we achieve good source domain performance and poor target domain performance).}} \subsection{Learning with Secret Keys} \label{sec:key_methods} With our UNTL method, we can ensure that the model performs well in the source domain whilst degrading its performance in the target domain. However, it is inconvenient if the performance in the target domain can no longer be restored after training. This can be illustrated with an example: suppose that we are running an application which supports two kinds of users, regular users and members. Suppose further that the regular users are only authorized to query the model for a limited set of data, while the members have no limits on their access to the model. Using our UNTL approach discussed above, we can limit the access of the regular users by denoting the authorized and non-authorized portions of the data as the source and target domains respectively. Then we train a model that performs well on the source domain but poorly on the target domain. However, as the members have no limits on their access, they would require a separate model to be trained that performs well on both domains, thus doubling the computational and storage costs required for the application. To solve this issue, we extend our approach to include a secret key, $K$, that can be used to recover access to the target domain even after non-transferable learning. Without the key, the model is encouraged to perform well on the source domain while degrading the accuracy in the target domain. However, upon supplying the secret key, the model's performance on the target domain will be restored. Following the example above, this allows a single neural network to be used for all the users, whilst providing privileged access to the members through the provision of the secret key.
Building on our UNTL method, in this section we present our secret key-based unsupervised non-transferable learning method. The method not only keeps regular users away from the membership area but also provides members with a specific key that allows them to access the membership area within a single model. Our intuition is to design a secret key that can revive the restricted target domain in our UNTL model. We call the method Secret Key-based Non-Transferable Learning, which has two variants: 1) the {\em Prompt-based Secret Key method}, where we add a discrete prompt as a prefix to the input sentence that serves as an explicit secret key to restore access to the target domain, and 2) the {\em Adapter-based Secret Key method}, where a trained adapter module is added to the model to transform the target embeddings into source-like ones in an implicit manner. \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{prompt.pdf} \caption{Prompt-based Secret Key Methods} \label{fig:prompt} \end{figure} \paragraph{Prompt-based Secret Key} Recently, prompt-based learning \cite{DBLP:conf/eacl/SchickS21, DBLP:conf/emnlp/LesterAC21} has achieved state-of-the-art results in many tasks. Inspired by these prompt-based methods, we consider a prompt as our secret key, which users can use to access the target domain. As shown in Figure~\ref{fig:prompt}, we first assign a randomly chosen prompt $P=\{P_1, ..., P_m \}$ as the secret key, where $P_i$ is the $i$-th token in the prompt sentence and $m$ is the length of the prompt. Given an input sentence $\boldsymbol{x}$ with length $n$, we concatenate the prompt with the input $\boldsymbol{x}$ to construct an authorized input sentence. As in inference without the prompt key, we continue to use the hidden representation at the position of \texttt{[CLS]} as the input to the task classifier to obtain the predicted label.
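As an illustration, the prompt-key mechanism can be sketched as follows. This is a minimal sketch, not the paper's implementation: the toy encoder, vocabulary size, and prompt token ids are stand-ins for the BERT feature extractor and the actual secret prompt.

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for the BERT feature extractor; in the paper, the hidden
    state at the [CLS] position is used instead of this mean pooling."""
    def __init__(self, vocab_size=100, d=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)

    def forward(self, ids):
        return self.emb(ids).mean(dim=1)  # one feature vector per sentence

# Illustrative token ids of the secret prompt P = {P_1, ..., P_m} with m = 3.
prompt_key = torch.tensor([[7, 13, 42]])

encoder = ToyEncoder()
classifier = nn.Linear(32, 3)  # 3-class NLI head

def predict(ids, use_key=False):
    """Classify a batch of token-id sequences, optionally prepending the key."""
    if use_key:
        ids = torch.cat([prompt_key.expand(ids.size(0), -1), ids], dim=1)
    return classifier(encoder(ids))

logits = predict(torch.randint(0, 100, (2, 10)), use_key=True)
```

With `use_key=True` the input lands in the target+prompt distribution; without it, the raw sentence is classified as usual.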
With the introduction of the prompt-based key, we believe that there are three different distributions in this task: the {\em source} domain, the {\em target} domain and the {\em target+prompt} domain. In the prompt-based secret key model, after prepending the specific prompt to the target input, the model can recover the ability to perform well in the target domain. Therefore, we train the feature extractor to reduce the distance between the target+prompt domain and the source domain while enlarging the distance between the source domain and the target domain without the key. To achieve this, we propose a new MMD loss: \begin{equation} \mathcal{L'}_{\textrm{MMD}}\left(\mathcal{P,S,T}\right) = \alpha \cdot d_{\mathcal{P, S}} - \min\left(c, d_{\mathcal{S,T}}\right) \end{equation} {\color{black} where $\mathcal{P}$ denotes the data distribution of} the target+prompt domain, $\alpha$ is a scaling hyperparameter and $c$ is the upper bound for the MMD. In this way, we can transfer the knowledge from the source domain to the target+prompt domain but not to the original target domain. Therefore, we can extend Equation~\ref{eq:total} to get the objective function for the prompt-based secret key UNTL method: \begin{equation} \label{eq:prompt} \begin{aligned} \mathcal{L}_{\textrm{prompt}} =& \mathcal{L}_\textrm{CE} + \beta \cdot \mathcal{L}_\textrm{DC}\left(\left[\mathcal{P}, \mathcal{S}\right], \mathcal{T}\right) + \\ & \lambda \cdot \mathcal{L'}_{\textrm{MMD}}\left(\mathcal{P,S,T}\right) \end{aligned} \end{equation} where $\left[\mathcal{P}, \mathcal{S}\right]$ indicates {\color{black} the data distribution of the combined domain of} source and target+prompt. \paragraph{Adapter-based Secret Key} \begin{figure}[t!]
\centering \includegraphics[width=0.3\textwidth]{adapter.pdf} \caption{Adapter-based Secret Key Methods} \label{fig:adapter} \end{figure} Besides explicitly prepending the input sentences with the discrete prompt as the secret key, we can also consider adding an input adapter~\cite{houlsby2019parameter, an2022input} as the secret key. In our UNTL model, input sentences from different domains lead to distinct performance. Intuitively, we train the input adapter to eliminate the target-specific features and convert the target embeddings into source-like embeddings. {\color{black} Given an embedding representation, we assume it has two components: the semantic component, which is essential for text classification, and the domain component, which is irrelevant to the task. We train the adapter to convert the domain component of the embedding from the target domain to the source domain while maintaining the semantic component.} {\color{black}The adapter architecture is shown in Figure~\ref{fig:adapter}. In the figure, embeddings of the target sentences are first projected into a lower-dimensional space $\mathbb{R}^m$ with $W_{\textrm{down}}$ before passing through a ReLU nonlinearity, and then projected back to the space $\mathbb{R}^d$ with $W_{\textrm{up}}$, where $m$ is significantly less than $d$ (in addition, there is a skip connection as shown in Figure~\ref{fig:adapter}).} With the input adapter module, the target embeddings are transformed into source-like ones. From this adapter-based network, we obtain source-like embeddings in the {\em target+adapter} domain, which can be denoted as $\mathcal{D}_{\mathcal{A}}$. Similar to the prompt-based secret key method, we inject the knowledge from the source domain into the target+adapter domain by closing the distance $d_{\mathcal{A}, \mathcal{S}}$ between their representations.
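The bottleneck architecture described above can be written down directly. The dimensions here are our own illustrative choice, although $d=768$ (BERT-base) with $m=64$ happens to be consistent with the roughly 99K adapter parameters reported in the discussion later.

```python
import torch
import torch.nn as nn

class InputAdapter(nn.Module):
    """Bottleneck adapter used as the secret key: W_down to R^m, ReLU,
    W_up back to R^d, plus a skip connection (m << d)."""
    def __init__(self, d=768, m=64):
        super().__init__()
        self.down = nn.Linear(d, m)  # W_down: R^d -> R^m
        self.up = nn.Linear(m, d)    # W_up:   R^m -> R^d

    def forward(self, x):
        # The skip connection preserves the semantic component while the
        # bottleneck shifts the domain component toward the source domain.
        return x + self.up(torch.relu(self.down(x)))

adapter = InputAdapter()
target_emb = torch.randn(2, 16, 768)  # a batch of target-domain embeddings
source_like = adapter(target_emb)     # transformed, source-like embeddings
```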
However, directly using the UNTL loss above cannot ensure that the adapter maintains the task-dependent information. {\color{black}Therefore, we construct a dataset $\{{\fontfamily{lmr}{\text{adapter}}}\left(\boldsymbol{x}_i\right), \boldsymbol{y}_i\}^{N_s}_{i=1}$ with the source domain data, where ${\fontfamily{lmr}{\text{adapter}}}\left(\boldsymbol{x}\right)$ indicates the representation of $\boldsymbol{x}$ converted by the adapter module. Then we train the model with an additional cross-entropy loss to guarantee that the embeddings converted by the adapter contain sufficient signals about the classification task:} \begin{equation} \begin{aligned} &\mathcal{L}_{ \textrm{CE}_{\textrm{adapter}} } = \\ &\mathbb{E}_{\left(\boldsymbol{x}, \boldsymbol{y}\right)\sim\mathcal{D}_s}\left[{\fontfamily{lmr}{\text{CE}}}\left({\fontfamily{lmr}{\text{FFN}}}\left( \psi\left( {\fontfamily{lmr}{\text{adapter}}}\left(\boldsymbol{x}\right)\right)\right), \boldsymbol{y}\right)\right] \end{aligned} \end{equation} The overall objective function for training the adapter-based secret key model is: \begin{equation} \label{eq:adapter_model} \begin{aligned} \mathcal{L}_{\textrm{adapter}} =& \mathcal{L}_\textrm{CE} + \mathcal{L}_{\textrm{CE}_{\textrm{adapter}}} + \beta\cdot\mathcal{L}_\textrm{DC}\left(\left[\mathcal{A}, \mathcal{S}\right], \mathcal{T}\right)\\ & + \lambda\cdot\mathcal{L'}_{\textrm{MMD}}\left(\mathcal{A,S,T}\right) \end{aligned} \end{equation} where $\left[\mathcal{A}, \mathcal{S}\right]$ indicates the {\color{black} data distribution} of the combined domain of source and target+adapter. \begin{table}[t!]
\normalsize \setlength\tabcolsep{2pt} \centering \scalebox{0.8}{ \begin{tabular}{l ccccc} \toprule {Datasets} &\multicolumn{1}{c}{\textsc{SL}}&\multicolumn{1}{c}{\textsc{TE}}&\multicolumn{1}{c}{\textsc{GO}}&\multicolumn{1}{c}{\textsc{TR}}&\multicolumn{1}{c}{\textsc{FI}}\\ \midrule \#Train& 68,716 & 74,087 & 68,755 & 68,755 & 68,753\\ \#Valid& {\color{white}0}8,590 & {\color{white}0}9,261 & {\color{white}0}8,595 & {\color{white}0}8,595 & {\color{white}0}8,595\\ \#Test& {\color{white}0}1,955 & {\color{white}0}1,966 & {\color{white}0}1,945 & {\color{white}0}1,976 & {\color{white}0}1,973\\ \bottomrule \end{tabular} } \caption{Dataset statistics} \label{tab:multinli} \end{table} \section{Experiments} In this section, we first conduct experiments to verify the effectiveness of our UNTL method, and then show that our proposed secret key-based UNTL approach can recover access to the unauthorized target domain when the secret key is supplied in the form of a discrete prompt or an additional adapter module. We use MultiNLI~\cite{williams2017broad} as our benchmark dataset, {\color{black} which is a 3-class classification task with balanced labels.} We begin with the training details common to all experiments and then discuss the results in different settings. \subsection{Experimental Setup} Our models are implemented in PyTorch~\cite{DBLP:conf/nips/PaszkeGMLBCKLGA19}, all experiments are conducted on NVIDIA Quadro RTX 8000 GPUs, and we run each experiment three times with different seeds. We use the preprocessed MultiNLI dataset from Huggingface~\cite{lhoest2021datasets}. Based on the genre information, we divide the MultiNLI dataset into five parts, namely slate (\textsc{SL}), telephone (\textsc{TE}), government (\textsc{GO}), travel (\textsc{TR}), and fiction (\textsc{FI}), as different domains.
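The genre-based domain construction can be sketched as below. The inline records are toy stand-ins; in practice they would come from `datasets.load_dataset("multi_nli")`, whose examples carry a `genre` field (the loading call itself is assumed rather than executed here).

```python
# Toy stand-ins for MultiNLI examples; real rows would come from
# datasets.load_dataset("multi_nli") and include a "genre" field.
rows = [
    {"premise": "p1", "hypothesis": "h1", "label": 0, "genre": "slate"},
    {"premise": "p2", "hypothesis": "h2", "label": 1, "genre": "travel"},
    {"premise": "p3", "hypothesis": "h3", "label": 2, "genre": "slate"},
    {"premise": "p4", "hypothesis": "h4", "label": 0, "genre": "fiction"},
]
GENRES = ["slate", "telephone", "government", "travel", "fiction"]

# One sub-dataset per genre, used as the five domains.
domains = {g: [r for r in rows if r["genre"] == g] for g in GENRES}
```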
As Huggingface only provides a training set and a validation set for MultiNLI, we split the training set in an 8:1 ratio into a training set and an evaluation set, and use the validation set as the test set in our experiments. The dataset statistics can be found in Table~\ref{tab:multinli}. As for the model architecture, we use the pretrained language model BERT~\cite{devlin2018bert} as our feature extractor and a randomly initialized one-layer feed-forward network as the classifier. We use the Adam~\cite{kingma2014adam} optimizer with $\beta_1$ = 0.9, $\beta_2$ = 0.999. More details can be found in Appendix~\ref{sec:appendix}. \begin{table}[t!] \small \setlength\tabcolsep{3pt} \centering \scalebox{0.95}{ \begin{tabular}{l c c c c c} \toprule Src\textbackslash Tgt & \textsc{SL} & \textsc{TE} & \textsc{GO} & \textsc{TR} & \textsc{FI} \\ \midrule \multicolumn{1}{c}{\textsc{SL}} & $73.8_{\pm 0.6}$ & $73.9_{\pm 0.4}$ & $80.1_{\pm 0.2}$ &$76.6_{\pm 0.3}$ & $76.8_{\pm 0.6}$ \\ \multicolumn{1}{c}{\textsc{TE}} & $72.1_{\pm 0.4}$ & $77.5_{\pm 0.6}$ & $77.1_{\pm 0.2}$ &$74.0_{\pm 0.3}$ & $75.3_{\pm 0.9}$ \\ \multicolumn{1}{c}{\textsc{GO}} & $71.7_{\pm 0.3}$ & $72.0_{\pm 0.3}$ & $81.0_{\pm 0.2}$ &$75.9_{\pm 0.8}$ & $74.1_{\pm 0.5}$ \\ \multicolumn{1}{c}{\textsc{TR}} & $71.6_{\pm 0.5}$ & $71.6_{\pm 0.1}$ & $80.4_{\pm 0.3}$ &$79.3_{\pm 0.3}$ & $73.9_{\pm 0.4}$ \\ \multicolumn{1}{c}{\textsc{FI}} & $73.4_{\pm 0.6}$ & $74.2_{\pm 0.2}$ & $79.2_{\pm 0.4}$ &$75.4_{\pm 0.4}$ & $80.1_{\pm 0.6}$ \\ \bottomrule \end{tabular} } \caption{Performance of the classification task in each domain when training on the source domain only} \label{tab:sup} \end{table} \begin{table}[t!]
\small \setlength\tabcolsep{3pt} \centering \scalebox{0.95}{ \begin{tabular}{l c c c c c} \toprule Src\textbackslash Tgt & \textsc{SL} & \textsc{TE} & \textsc{GO} & \textsc{TR} & \textsc{FI} \\ \midrule \multicolumn{1}{c}{\textsc{SL}} & $71.5_{\pm 1.4}$ & $34.2_{\pm 1.1}$ & $40.0_{\pm 0.7}$ &$34.1_{\pm 1.7}$ & $36.6_{\pm 1.5}$ \\ \multicolumn{1}{c}{\textsc{TE}} & $33.7_{\pm 1.3}$ & $76.8_{\pm 0.5}$ & $35.4_{\pm 1.8}$ &$34.7_{\pm 1.0}$ & $32.6_{\pm 0.5}$ \\ \multicolumn{1}{c}{\textsc{GO}}& $38.0_{\pm 0.8}$ & $34.4_{\pm 1.1}$ & $81.1_{\pm 0.8}$ &$34.1_{\pm 0.1}$ & $34.9_{\pm 1.4}$ \\ \multicolumn{1}{c}{\textsc{TR}} & $36.9_{\pm 3.8}$ & $33.4_{\pm 0.1}$ & $34.6_{\pm 1.9}$ &$78.9_{\pm 0.8}$ & $33.6_{\pm 0.1}$ \\ \multicolumn{1}{c}{\textsc{FI}} & $39.1_{\pm 1.8}$ & $34.6_{\pm 1.0}$ & $36.0_{\pm 1.8}$ &$34.8_{\pm 0.8}$ & $78.7_{\pm 1.3}$ \\ \bottomrule \end{tabular} } \caption{Performance over the target domain for our unsupervised non-transferable learning} \label{tab:ntl} \end{table} \begin{table}[t!] 
\small \setlength\tabcolsep{3pt} \centering \scalebox{0.95}{ \begin{tabular}{l c c c c c} \toprule Src\textbackslash Tgt & \textsc{SL} & \textsc{TE} & \textsc{GO} & \textsc{TR} & \textsc{FI} \\ \midrule \multicolumn{1}{c}{\textsc{SL}} & $72.7_{\pm 1.1}$ & $33.0_{ \pm 0.7 }$ & $38.1_{ \pm 1.5 }$ & $36.2_{ \pm 1.3 }$ & $38.0_{ \pm 0.0 }$ \\ \multicolumn{1}{c}{\textsc{TE}} & $34.7_{ \pm 0.7 }$ & $76.7_{\pm 0.7}$ & $34.7_{ \pm 2.7 }$ & $34.1_{ \pm 1.7 }$ & $34.5_{ \pm 1.1 }$ \\ \multicolumn{1}{c}{\textsc{GO}}& $37.4_{ \pm 1.1 }$ & $32.3_{ \pm 0.8 }$ & $80.8_{\pm 0.4}$ & $33.8_{ \pm 1.6 }$ & $33.3_{ \pm 0.5 }$ \\ \multicolumn{1}{c}{\textsc{TR}} & $36.2_{ \pm 1.6 }$ & $33.4_{ \pm 0.0 }$ & $35.1_{ \pm 2.8 }$ & $79.2_{\pm 0.6}$ & $34.3_{ \pm 1.1 }$ \\ \multicolumn{1}{c}{\textsc{FI}} & $38.1_{ \pm 1.0 }$ & $34.0_{ \pm 1.8 }$ & $33.6_{ \pm 2.8 }$ & $34.0_{ \pm 1.9 }$ & $79.0_{\pm 0.7}$\\ \bottomrule \end{tabular} } \caption{Performance over the target domain for non-transferable learning with the label information in the target domain. The results are on par with the UNTL results, which suggests that source labels alone are sufficient for the UNTL task.} \label{tab:addition_ntl} \end{table} \begin{table*}[t!] 
\small \setlength\tabcolsep{3pt} \centering \scalebox{0.92}{ \begin{tabular}{l c c c c c } \toprule Source\textbackslash Target & \textsc{SL} & \textsc{TE} & \textsc{GO} & \textsc{TR} & \textsc{FI} \\ \midrule \multicolumn{1}{c}{\textsc{SL}} & $72.8_{\pm 0.9} \Rightarrow 68.6_{\pm 1.3}$ & $35.3_{\pm 1.0} \Rightarrow 68.9_{\pm 0.3}$ & $38.2_{\pm 1.6} \Rightarrow 74.7_{\pm 0.7}$ &$36.9_{\pm 1.7} \Rightarrow 73.1_{\pm 1.0}$ & $39.2_{\pm 1.4} \Rightarrow 70.2_{\pm 1.4}$ \\ \multicolumn{1}{c}{\textsc{TE}} & $33.1_{\pm 1.3} \Rightarrow 65.1_{\pm 1.7}$ & $76.4_{\pm 1.1} \Rightarrow 71.1_{\pm 1.2}$ & $33.5_{\pm 2.3} \Rightarrow 75.4_{\pm 0.3}$ &$34.0_{\pm 1.0} \Rightarrow 71.6_{\pm 0.4}$ & $34.2_{\pm 1.6} \Rightarrow 70.8_{\pm 0.4}$ \\ \multicolumn{1}{c}{\textsc{GO}} & $38.8_{\pm 1.5} \Rightarrow 63.5_{\pm 0.6}$ & $32.9_{\pm 0.7} \Rightarrow 66.5_{\pm 0.6}$ & $80.8_{\pm 0.9} \Rightarrow 76.8_{\pm 1.5}$ &$34.5_{\pm 1.4} \Rightarrow 72.0_{\pm 0.7}$ & $35.6_{\pm 0.7} \Rightarrow 66.2_{\pm 1.2}$ \\ \multicolumn{1}{c}{\textsc{TR}} & $37.2_{\pm 0.8} \Rightarrow 62.6_{\pm 0.8}$ & $35.4_{\pm 0.1} \Rightarrow 66.2_{\pm 1.4}$ & $34.1_{\pm 2.5} \Rightarrow 77.7_{\pm 0.3}$ & $78.8_{\pm 0.7} \Rightarrow 73.8_{\pm 1.3}$ & $36.2_{\pm 0.7} \Rightarrow 66.4_{\pm 0.5}$ \\ \multicolumn{1}{c}{\textsc{FI}} & $41.5_{\pm 0.6} \Rightarrow 67.2_{\pm 0.6}$ & $34.7_{\pm 1.0} \Rightarrow 68.8_{\pm 0.7}$ & $37.3_{\pm 0.1} \Rightarrow 76.3_{\pm 0.4}$ & $36.4_{\pm 0.2} \Rightarrow 71.7_{\pm 0.8}$ & $78.8_{\pm 0.7} \Rightarrow 76.2_{\pm 0.7}$ \\ \bottomrule \end{tabular} } \caption{Performance of the prompt-based secret key NTL model. The value to the left of the arrow shows the accuracy (\%) of the model when the prompt-based key is not attached to the input, and the value to the right shows the accuracy with the key.} \label{tab:prompt} \end{table*} \begin{table*}[t!]
\small \setlength\tabcolsep{3pt} \centering \scalebox{0.92}{ \begin{tabular}{l c c c c c} \toprule Source\textbackslash Target & \textsc{SL} & \textsc{TE} & \textsc{GO} & \textsc{TR} & \textsc{FI} \\ \midrule \multicolumn{1}{c}{\textsc{SL}} & $72.8_{\pm 0.4} \Rightarrow 73.5_{\pm 0.5}$ & $35.0_{\pm 1.0} \Rightarrow 73.5_{\pm 0.6}$ & $37.1_{\pm 2.7} \Rightarrow 79.0_{\pm 0.2}$ &$37.4_{\pm 2.6} \Rightarrow 74.8_{\pm 0.6}$ & $42.7_{\pm 2.4} \Rightarrow 76.8_{\pm 0.4}$ \\ \multicolumn{1}{c}{\textsc{TE}} & $33.2_{\pm 1.5} \Rightarrow 71.5_{\pm 0.5}$ & $77.0_{\pm 0.8} \Rightarrow 76.7_{\pm 0.7}$ & $34.0_{\pm 1.8} \Rightarrow 77.8_{\pm 0.5}$ &$34.7_{\pm 1.1} \Rightarrow 74.3_{\pm 0.8}$ & $34.2_{\pm 1.6} \Rightarrow 75.4_{\pm 0.3}$ \\ \multicolumn{1}{c}{\textsc{GO}} & $41.0_{\pm 0.7} \Rightarrow 70.2_{\pm 0.7}$ & $33.9_{\pm 1.5} \Rightarrow 71.4_{\pm 0.6}$ & $80.1_{\pm 1.2} \Rightarrow 80.2_{\pm 1.1}$ &$35.2_{\pm 1.2} \Rightarrow 74.4_{\pm 0.3}$ & $36.7_{\pm 1.6} \Rightarrow 73.6_{\pm 1.2}$ \\ \multicolumn{1}{c}{\textsc{TR}} & $37.9_{\pm 1.6} \Rightarrow 69.9_{\pm 0.8}$ & $34.1_{\pm 1.1} \Rightarrow 71.9_{\pm 1.0}$ & $34.4_{\pm 2.5} \Rightarrow 79.2_{\pm 0.3}$ & $79.6_{\pm 0.8} \Rightarrow 79.8_{\pm 0.6}$ & $34.4_{\pm 1.9} \Rightarrow 73.8_{\pm 0.3}$ \\ \multicolumn{1}{c}{\textsc{FI}} & $41.2_{\pm 3.0} \Rightarrow 71.8_{\pm 0.4}$ & $34.1_{\pm 1.8} \Rightarrow 74.3_{\pm 0.3}$ & $35.9_{\pm 2.7} \Rightarrow 78.7_{\pm 0.3}$ & $35.5_{\pm 1.9} \Rightarrow 75.0_{\pm 0.4}$ & $79.2_{\pm 0.7} \Rightarrow 79.5_{\pm 0.5}$ \\ \bottomrule \end{tabular} } \caption{Performance of the adapter-based secret key NTL model. The value to the left of the arrow shows the accuracy (\%) of the model when the input adapter is not applied, and the value to the right shows the accuracy with the adapter.} \label{tab:adapter} \end{table*} \subsection{Results for UNTL} {\color{black} We first train a supervised classification model only on the source domain, whose performance is shown in Table~\ref{tab:sup}, as a baseline.
From the baseline results, we can observe that the knowledge from one domain can be easily transferred to others. Although only trained on the source domain, the neural network shows considerable performance on the unseen domains. In our UNTL experiments, we traverse all possible domain pairs, and Table~\ref{tab:ntl} shows that the method successfully degrades the performance in the target domain to between 32.6\% and 40.0\%, which is close to random chance (33.3\%) for such a 3-class classification task.} We can observe that the largest performance degradation is from 80.4\% to 34.6\%, for the source--target pair Travel and Government. In addition, although the target accuracy decreases substantially, the model still maintains good performance in the source domain: the maximal average drop in the source domain is only 1\%. The results in Table~\ref{tab:ntl} suggest that our UNTL model can successfully reduce the target performance whilst maintaining a decent accuracy in the source domain, even when the source and target domains are similar and the target labels are unavailable. \paragraph{Comparison with original NTL} \textcolor{black}{We also compare our method with the original NTL~\cite{wang2022nontransferable} to show that the labels in the target domain are not really necessary. Table~\ref{tab:addition_ntl} shows the performance when the original NTL is applied. As we can see from Table~\ref{tab:ntl}, our UNTL model performs similarly to the NTL method in the source domain. Although NTL degrades the target-domain performance slightly more than UNTL, both methods successfully reduce the accuracy on the target domain to close to random chance, and the difference is negligible. Therefore, we show empirically that labels in the target domain are not strictly necessary, as our UNTL model can still succeed in preventing the knowledge transfer from the source to the target domain even without the target labels.
} \subsection{Results for UNTL with secret keys} \paragraph{Prompt-based Secret Key} We continue to use all possible domain pairs in our experiments, and assign a non-task-dependent sentence `\textit{Here this a password key messages, Do not tell others.}'\footnote{\color{black} Note that this sentence, which serves as the secret key, is intentionally ungrammatical.} as the prompt-based key. From Table~\ref{tab:prompt}, we can see that the performance in the target domain ranges from 32.9\% to 41.5\%. Moreover, with the specific prompt, we can successfully access the target domain and obtain better performance. We further make a comparison with the baseline in Table~\ref{tab:sup}, where non-transferable learning is not used. Although the prompt-based secret key can recover the performance, the average accuracy in the target domain is 7\% worse than the baseline. \begin{figure*}[t!] \centering \begin{subfigure}{0.24\linewidth} \centering \includegraphics[width=1.0\linewidth]{sup.png} \caption{Transfer learning} \end{subfigure} \centering \begin{subfigure}{0.24\linewidth} \centering \includegraphics[width=1.0\linewidth]{mmd.png} \caption{UNTL (w/o DC)} \end{subfigure} \centering \begin{subfigure}{0.24\linewidth} \centering \includegraphics[width=1.0\linewidth]{dc.png} \caption{UNTL (w/o MMD)} \end{subfigure} \centering \begin{subfigure}{0.24\linewidth} \centering \includegraphics[width=1.0\linewidth]{full.png} \caption{UNTL} \end{subfigure} \caption{Illustration of the representation distributions under different settings of the non-transferable losses (MMD and DC)} \label{dist} \end{figure*} \paragraph{Adapter-based Secret Key} In this experiment, we apply the input adapter after the embedding layer as the secret key and train our unsupervised non-transferable learning model; the results are shown in Table~\ref{tab:adapter}. Under the adapter setting, the performance in the target domain is similarly degraded to between 33.2\% and 42.7\%, as with the prompt-based secret key.
Moreover, the adapter is able to restore the degraded performance in the target domain to be on par with the baseline performance in Table~\ref{tab:sup}. {\color{black} With the additional input adapter, our method recovers the model's capability in the target domain better than the prompt-based method.} We hypothesize that the reason could be that {\color{black} the models may still struggle to distinguish the target domain from the target+prompt domain, in which the instances are constructed by prepending a discrete prompt to the input sentences.} Their representations are hard to separate with the MMD loss and DC loss. On the other hand, the adapter module transforms the input sentences in the continuous space and can also be jointly trained to construct a target+adapter domain that is different from the target domain. Overall, the results demonstrate that not only can we train a model that achieves good accuracy in the source domain while performing poorly in the target domain, but we can also restore the performance in the target domain with an adapter. \paragraph{Discussion} \textcolor{black}{Here we provide a comparison between the two types of secret keys. We first start with the trade-offs between storage and performance. While the adapter-based secret key has higher storage requirements, requiring an additional 99K parameters for the input adapter module, this accounts for only about 0.09\% of BERT-base (109M parameters). In exchange, the adapter-based secret key outperforms the prompt-based secret key by around 7\% in recovering the performance in the target domain.} \textcolor{black}{ We also compare the performance of the model with and without the secret keys. For the sake of simplicity, we refer to our earlier example and denote the models with and without keys as members and regular users respectively, where the members have access to the target domain, while the regular users do not.
In the target domain, members will obtain good results, but regular users will not. In the source domain, where all users have access, applying the prompt-based secret key causes the accuracy for members to decrease by 5\%, which is undesirable. In contrast, applying the adapter-based secret key does not cause such issues, and the accuracy for members is almost the same (a 0.2\% improvement) as for the regular users. } \subsection{Ablation Study}\label{sec:ablation} \begin{table}[t!] \small \setlength\tabcolsep{3pt} \centering \scalebox{0.9}{ \begin{tabular}{l c c c } \toprule & Source & Target & $\Delta$ \\ \midrule UNTL & 77.4 & 35.3 & 42.1 \\ $\textrm{UNTL}_{\textrm{w/o Domain Clsf}}$ & 74.0 & 35.5 & 38.5\\ $\textrm{UNTL}_{\textrm{w/o MMD}}$ & 76.6 & 43.1 & 33.5\\ \bottomrule \end{tabular} } \caption{Ablation studies of the different distance losses in unsupervised non-transferable learning. $\Delta$ indicates the difference between the performance of the source and the target domains.} \label{tab:ablation} \end{table} \begin{table}[t!] \small \setlength\tabcolsep{3pt} \centering \scalebox{0.86}{ \begin{tabular}{l c c c c } \toprule & Source & Target & Target+Key & {$\Delta$} \\ \midrule PSK & 77.5 & 36.0 & 69.7 & 33.7\\ $\textrm{PSK}_{\textrm{w/o DC}}$& 76.9 & 39.5 & 69.5 & 30.0\\ $\textrm{PSK}_{\textrm{w/o MMD}}$& 70.3 & 41.6 & 40.8 & -0.8\\ \midrule ASK & 77.7 & 36.1 & 74.4 & 38.3\\ $\textrm{ASK}_{\textrm{w/o DC}}$ & 70.7 & 64.4 & 66.2 & 1.8\\ $\textrm{ASK}_{\textrm{w/o MMD}}$ & 73.4 & 46.4 & 68.6 & 22.2\\ \bottomrule \end{tabular} } \caption{Ablation studies for secret key-based UNTL. PSK and ASK denote the prompt-based secret key and adapter-based secret key methods respectively. $\Delta$ indicates the difference in performance between the Target+Key and Target domains.} \label{tab:key_ablation} \end{table} In this section, we investigate the impact of different UNTL losses: MMD and DC.
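To make the two loss terms concrete, here is a minimal sketch: an RBF-kernel estimate of MMD (the paper does not specify the kernel, so this choice is an assumption), a linear domain-classifier head for the DC loss, and the capped form $\alpha \cdot d_{\mathcal{P,S}} - \min(c, d_{\mathcal{S,T}})$ from the secret-key objective. The feature dimension, batch size, and the stand-in for the target+key batch are all illustrative.

```python
import torch
import torch.nn as nn

def rbf_mmd(x, y, sigma=1.0):
    """Biased RBF-kernel estimate of squared MMD between two feature batches."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def mmd_prime(p, s, t, alpha=1.0, c=1.0):
    # L'_MMD = alpha * d(P, S) - min(c, d(S, T)): pull target+key toward the
    # source while pushing the plain target away, capping the repulsion at c.
    return alpha * rbf_mmd(p, s) - torch.clamp(rbf_mmd(s, t), max=c)

dim = 32
domain_clf = nn.Linear(dim, 2)  # DC head: source (0) vs. target (1)
ce = nn.CrossEntropyLoss()

src, tgt = torch.randn(8, dim), torch.randn(8, dim) + 1.0
feats = torch.cat([src, tgt])
labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()

loss_dc = ce(domain_clf(feats), labels)    # domain classification loss
loss_mmd = mmd_prime(src + 0.1, src, tgt)  # src + 0.1 stands in for target+key
```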
These two losses maximize the discrepancy between the representations of the source and target domains from different aspects. The MMD loss tries to enlarge the average distance between the two domains, but the boundary between them may remain unclear. The DC loss, on the other hand, makes up for this shortcoming of the MMD loss. As Table~\ref{tab:ablation} shows, in UNTL, the difference between the performance of the source and target domains decreases when we remove either the MMD loss or the DC loss. {\color{black} Based on this result, we use t-SNE \cite{JMLR:v9:vandermaaten08a} to visualize the representation distributions, taken from the output of BERT, of the source and target domains. As Figure~\ref{dist} shows, when training with only the DC loss or only the MMD loss, the two distributions are close and the boundary remains unclear. Only when we apply both losses is UNTL able to learn distinct distributions for the source and target domains.} Furthermore, as Table~\ref{tab:key_ablation} shows, in the secret key-based methods, due to the similar initial representations of the target+key (prompt/adapter) and target domains, the model fails to perform well with the key (and poorly without it) in the target domain unless both losses are used to enlarge the distance. {\color{black}Besides, we also found that the prompt-based secret key method may rely more on the MMD loss, while the adapter-based secret key method tends to depend more on the DC loss.} We speculate that the cause may be that: 1) in the prompt-based secret key method, the domain classifier can easily differentiate between the domains based on the prompt pattern, but without the MMD loss the representations will still be close in the continuous space, whereas 2) in the adapter-based secret key method, the initial output embeddings of the adapter module are the same as the input ones.
Initially, the representations of the target+adapter domain and the target domain can be highly similar to each other, resulting in a small MMD loss. Thus, when only the MMD loss is used, the adapter may be stuck in its initial state, and it is difficult to make progress on separating the two domains during fine-tuning. {\color{black} On the other hand, the DC loss can offer stronger supervision than the MMD loss in terms of separating these two domains. Therefore, the DC loss plays a more significant role in the adapter-based secret key method.} \section{Conclusion and Future Work} In this paper, we present our UNTL method, which trains a model to maintain good performance in the source domain whilst having degraded performance in the target domain. Thereafter, we extend our approach with a secret key component that allows the restoration of the model's performance in the target domain when the key is supplied, through two methods: a prompt-based secret key and an adapter-based secret key. The experiments conducted on the MultiNLI dataset suggest that our unsupervised non-transferable learning method allows the model to perform differently in the source domain and the target domain. The extensive experiments also demonstrate that our methods can effectively recover the ability of the model on the target domain with the specific secret key after non-transferable learning. \textcolor{black}{For future work, we plan to extend our methods to incorporate multiple secret keys to achieve more than two user access levels via parameter-efficient methods~\cite{DBLP:conf/iclr/HeZMBN22}, where we first train our UNTL model, then freeze the parameters in the pretrained UNTL model, and train additional modules, such as prefixes~\cite{DBLP:conf/acl/LiL20} and adapters~\cite{houlsby2019parameter}, to realize different user levels.
We also plan to explore other ways to degrade the performance in specific domains while maintaining good performance in other domains.} \section*{Limitations} In unsupervised non-transferable learning methods, after fine-tuning, the model tends to predict the same label for any input in the target domain. In other words, when the model recognizes an input coming from the target domain, it tends to consistently assign a particular label. We would like to highlight that, while our method is effective in the sense that it prevents the model from functioning well on the target domain, there is no guarantee that it would always yield ``worse performance'' as measured by accuracy as an evaluation metric. Consider an extreme scenario where the labels in the target domain are highly unbalanced -- the domain consists of instances labeled with a particular label only. At the same time, our model happens to predict that particular label for any input from the target domain. In that case, the model may seemingly perform very well with a ``high accuracy''. To resolve this known limitation, a different evaluation metric may be needed in order to properly assess the true performance of our model in target domains with unbalanced labels. \section*{Ethics Statement} Our work focuses on our unsupervised non-transferable learning in order to protect neural networks as intellectual property whilst making the secure authorization more flexible with the secret key methods. Nevertheless, we would like to point out that a malicious third party neural network provider may utilize these methods for harmful purposes. 
{\color{black} For example, the provider could use unsupervised non-transferable learning and the secret key methods to insert an invisible backdoor into the model and extract private information from it.} \section*{Acknowledgements} \textcolor{black}{We would like to thank the anonymous reviewers, our meta-reviewer, and the senior area chairs for their constructive comments and support on this work. We would also like to thank Vanessa Tan for her help. This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-PhD/2021-08-007[T]). }
\section{Introduction} The emergence of computational intelligence has led us to an era of excellent communication between users and systems. These human-computer communications do not require any external device or muscle intervention and enable computers to be deliberately controlled via the monitoring of brain state signals. In order to potentially improve human-machine interactions, it is crucial to analyze and interpret physiological measurements effectively to assess an individual's state \cite{Ilyas2016, Padfield2019}. Brain signals can encode one's expectations as a form of \emph{prior beliefs}, which have an influence on behavior in times of uncertainty \cite{Sohn2019}. A Bayesian approach that integrates prior knowledge of an individual's innate brain activity with newly measured data may improve the detection of an individual's state, which can aid in characterizing and controlling one's actions. EEG signals are 3N--nonstationary, nonlinear and noisy \cite{Klonowski2007}. In particular, they are obscured by various forms of noise, are nonlinear due to the complexity of the underlying interactions in the nervous system \cite{Stam2005,Klonowski2007,Klonowski2004} and are nonstationary due to the involvement of different time scales in brain activity \cite{Indic1999}. The 3N nature of EEG signals requires methods that can encode an individual's brain history and draw statistical inferences from these signals. In this paper, we develop a Bayesian classification scheme relying on the posterior distributions of persistence diagrams, which are pertinent topological descriptors. Persistent homology is a widely used tool for topological data analysis (TDA) that captures topological features at multiple scales and produces summary representations, called persistence diagrams, that encode the lifespan of the topological features in the data.
Persistent homology has proved to be promising in the field of data science, yielding astounding results in a variety of applications \cite{Patrangenaru2018, Guo2018,Sizemore2018,Babichev2017,Biscio2019,Maroulas2019,Sgouralis2017,Marouls2015,Mike2016,Marchese2018,Marchese2016,Lee2017,Ichinomiya2017,Kimura2018,Maroulas2019a,Humphreys2019}. Indeed, physiological signals' features are defined by the topological changes of the signals across time. The use of TDA in the study of physiological signals has emerged only recently. The authors of \cite{Wang2018} measure the topology of EEG data with persistence landscapes to detect differences in the EEG signals during epilepsy attacks versus those produced during healthy brain activity. However, this method does not investigate the distribution of the diagrams themselves and suffers from a loss of pertinent information. Several other studies implement traditional machine learning based on feature extraction \cite{Dindin2019,Wang2017,Piangerelli2018}. Although the selection of appropriate features is crucial, these methods rely on summaries of persistence diagrams, which are themselves already summaries of the underlying data. We develop a Bayesian learning approach that can be applied \emph{directly} on the space of persistence diagrams. However, this learning scheme depends on the estimation of posterior probabilities, which is not straightforward due to the unusual multiset structure of persistence diagrams. To establish a Bayesian framework, we need to define the prior uncertainty and likelihood through probability distributions of persistence diagrams. By viewing persistence diagrams as finite set samples, the authors of \cite{Maroulas2019} propose a nonparametric estimation of the probability density function of random persistence diagrams. They also show that the probability density function can successfully detect the underlying dynamics of EEG signals and compare it with other pre-existing TDA methods.
A prior distribution of persistence diagrams can be obtained through this density function. However, computing posteriors entirely through the random set analog of Bayes' rule may have exponential computational complexity \cite{Goodman1997}. To address this, we model random persistence diagrams as point processes. In particular, we utilize Poisson point processes, which can be entirely characterized by their intensity. We commence the Bayesian framework by modeling random persistence diagrams generated from a Poisson point process with a given intensity, which captures the prior uncertainty. In the case of brain state detection, we can incorporate an individual's expectations about the statistical regularities in the environment as prior knowledge \cite{Sohn2019}. Alternatively, we can choose an uninformative prior intensity when no expert opinion or information about the individual's expectations is available. We construct the likelihood component of our framework by utilizing the theory of marked point processes. Indeed, we employ the topological summaries of signals in place of the actual signals. This proves to be useful for a range of physiological signal analyses \cite{Dindin2019,Wang2017,Piangerelli2018,Wang2018,Altindis2018,Campbell2019,Ilyas2016}. The application considered in this paper is the classification of EEG signals, which allows us to predict an individual's brain states and advances human-computer communication techniques. Through these topological summaries, we adopt a substitution likelihood technique \cite{Jeffreys1961} rather than considering the full likelihood of the entire signal data. Next, we develop a Bayesian learning method by relying on the posterior obtained from the Bayesian framework. This method is remarkably flexible as it abides by the 3N nature of the signals and is extremely powerful as it incorporates an individual's expectations or domain experts' knowledge as prior beliefs.
Furthermore, the Bayes factor provides a measure of confidence that in turn dictates whether further investigation is feasible. Our model enjoys a closed form of the posterior distribution through a conjugate family of priors, e.g., the Gaussian mixtures. Hence the prior-to-posterior updates yield posterior distributions of the same family. We present a detailed example of our closed-form implementation on simulated EEG signals to demonstrate computational tractability and showcase applicability in classification through Bayes factor estimation. Furthermore, we present a detailed comparison with other TDA and non-TDA based learning methods. This paper is organized as follows. Section \ref{sec:prelm} provides a brief overview of persistence diagrams and Poisson point processes. We establish the Bayesian framework for persistence diagrams in Section \ref{sec:main}. We then develop our Bayesian learning method in Section \ref{sec:class}, which is used to quantify the classification outcome. Section \ref{sec:gm_post} introduces a closed form of the posterior intensity utilizing Gaussian mixture models. To assess the capability of our algorithm, we investigate its performance in classifying EEG signals and provide comparisons with several other existing methods in Section \ref{sec:app}. Finally, we end with the conclusion in Section \ref{sec: conclusion}. \section{Background \label{sec:prelm}} We commence by discussing the background essential for building our Bayesian model. In Subsection \ref{sec:sublevel}, we start with the formation of persistence diagrams (PDs) by implementing sublevel set filtrations. In order to model the uncertainty present in these persistence diagrams, we consider them as point processes; pertinent definitions for point processes (PPs) are given in Subsection \ref{sec:ppp}.
\subsection{Persistent Homology for Noisy Signals \label{sec:sublevel}} Persistent homology is a tool from TDA that provides a robust way to model the topology of real datasets by tracking the evolution of homological features and summarizing these in persistence diagrams. Several methods exist to generate persistence diagrams, such as Vietoris--Rips or $\check{\text{C}}$ech filtrations \cite{Edelsbrunner2010}, but such techniques require the transformation of a signal to an appropriate point cloud using Takens's delay embedding theorem. To circumvent this transformation to point clouds, we employ the sublevel set filtration method, which summarizes the shape of signals \emph{directly} in a PD by employing local critical points as tersely outlined next. Consider a signal as a bounded and continuous function of time $f(t)$ (Fig. \ref{fig:noise_robustness} (a)). The sublevel set filtration tracks the evolution of connected components in the sets $f^{-1}((-\infty,r])$, as $r$ increases. The central idea is that as $r$ increases the connectivity of the set $f^{-1}((-\infty,r])$ remains unchanged except when it passes through a critical point. For a given connected component, we record the value of $r$ at which it is born (when $r$ reaches a local minimum), call it ${b}$, and the value at which it disappears by merging with a pre-existing connected component (when $r$ reaches a local maximum), call it ${d}$. That is to say, whenever two connected components merge, the one born later disappears while the one born earlier persists by the elder rule \cite{Edelsbrunner2010}. Once we reach the value $\max f(t)$ in the filtration, all the sublevel sets have merged into a single connected component, and we terminate the procedure. For every connected component that arises in the filtration, we plot the points $({b},{d})$ in $\mathbb{R}^{2}$ and call the resulting collection a persistence diagram (Fig. \ref{fig:noise_robustness} (b)).
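For a discretely sampled signal, the filtration just described reduces to a sweep over sample values with a union-find structure. The following Python sketch (our illustration, not the authors' code) computes the $({b},{d})$ pairs: it discards the zero-persistence pairs produced at non-critical samples and pairs the surviving component with $\max f$ at termination, as in the text.

```python
import numpy as np

def sublevel_persistence(f):
    """0-dimensional persistence diagram of a sampled signal f via the
    sublevel set filtration: components are born at local minima and,
    by the elder rule, the younger component dies at each merge."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    parent = [-1] * n                # union-find; -1 = not yet in a sublevel set
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in np.argsort(f, kind="stable"):  # sweep r upward through sample values
        parent[i] = i                       # point enters the sublevel set
        for j in (i - 1, i + 1):            # merge with already-alive neighbours
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the root with the smaller birth value survives
                winner, loser = (ri, rj) if f[ri] <= f[rj] else (rj, ri)
                if f[loser] < f[i]:         # skip zero-persistence pairs
                    pairs.append((f[loser], f[i]))
                parent[loser] = winner
    # the oldest component never merges; pair it with max f at termination
    root = find(int(np.argmin(f)))
    pairs.append((f[root], float(f.max())))
    return pairs

# e.g. sublevel_persistence([0, 2, 1, 3]) gives [(1.0, 2.0), (0.0, 3.0)]
```

For the signal $[0,2,1,3]$, the component born at value $1$ dies at the local maximum $2$, and the oldest component, born at the global minimum $0$, is paired with the global maximum $3$.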
To facilitate computation and preserve the geometric information, we apply the linear transformation $(b,p)=T(b,d) = (b-\min(b),d-b)$ to each point in our persistence diagrams. We refer to the resulting coordinates as birth and persistence, respectively, in $\W := \{(b,p) \in \R^{2} | \,\, b,p \geq 0\}$ and call this transformed persistence diagram a tilted representation (Fig. \ref{fig:noise_robustness} (c)). Hereafter, whenever we refer to persistence diagrams, we imply their tilted representation. \begin{figure}[b] \centering \subfigure[]{\includegraphics[width=2.3in,height=1.2in]{Figures/fig_1.jpg}} \subfigure[]{\includegraphics[width=1.1in,height=1.1in]{Figures/tilted_pd_noise_instance.png}} \caption{(a) illustrates the conversion of signals to the corresponding PDs using sublevel set filtrations. A smooth signal and a noisy version of it are presented in black and red, respectively. For persistence diagrams, we make consistent color choices to demonstrate the robustness of persistent homology to noise. (b) is the tilted representation of the PD in (a).} \label{fig:noise_robustness} \vspace{-0.1in} \end{figure} \subsection{Poisson Point Processes \label{sec:ppp}} One samples from a finite point process $\mathcal{P}$ on a Polish space $\mathbb{X}$ by generating a random number $N$ according to a cardinality distribution and then, for $N=n$, spatially distributing $\mathbf{x}=(x_1, \cdots,x_n) \in \mathbb{X}$ according to a probability distribution. In other words, a finite point process is characterized by a probability mass function (pmf) of the cardinality and a joint probability density function (pdf) of the elements for a given cardinality. We model random persistence diagrams as Poisson point processes (PPPs), hence as points $\mathbf{(b,p)}=\mathbf{x} \in \W$. The defining feature of these point processes is that they are solely characterized by a single parameter known as the intensity.
The intensity $\lambda(x)$ of a given $x \in \R^d$ is the density of the expected number of points per unit volume at $x$. Indeed, the intensity serves as an analog of the first order moment of a random variable. The intensity in a Poisson point process accounts for the joint pdf of the elements, and the cardinality is Poisson with mean $\mu = \int \lambda(x)\,dx$. Considering persistence diagrams as modeled by such processes, a link is needed between the prior and the data/likelihood to conduct Bayesian analysis. The \emph{marked point process} provides this connection. Effectively, a marked point process is a special case of a bivariate point process where one PP $\Psi_M$ in the Polish space $\W_M$ (containing the marks) is determined given knowledge of the PP $\Psi$ in the Polish space $\W$. A marked Poisson point process $\Psi_{M}$ is a finite PP on $\mathbb{W} \times \mathbb{W}_M$ such that: (i) $\Psi=\left(\left\{p_{n}\right\},\left\{\Pro_{n}(\bullet)\right\}\right)$ is a PPP on $\mathbb{W}$, and (ii) for a realization $(\mathbf{x},\mathbf{m}) \in \mathbb{W} \times \mathbb{W}_M$, the marks $m_i \in \mathbf{m}$ of each $x_i \in \mathbf{x}$ are drawn independently from a given stochastic kernel $\ell(\bullet|x_i)$. \section{The Bayesian Model \label{sec:main}} According to Bayes' theorem, the posterior is proportional to the product of a likelihood function and a prior. To establish a Bayesian framework for persistence diagrams, we need to compute the conditional distribution $p({D}_X \vert {D}_Y)$ through the proposed Bayesian formula for persistence diagrams, $p({D}_X \vert {D}_Y) \propto \mathcal{L}({D}_Y | {D}_X) p({D}_X),$ where the likelihood $\mathcal{L}({D}_Y | {D}_X)$ and the prior $p({D}_X)$ need to be defined and computed for random persistence diagrams. We employ a likelihood model for the persistence diagrams generated from the signals, which is analogous to the idea of substitution likelihood \cite{Jeffreys1961}.
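For intuition, the two-stage sampling of a marked PPP -- draw the ground process, then draw one mark per point from the kernel $\ell(\bullet|x_i)$ -- can be sketched as follows. The Gaussian intensity and Gaussian mark kernel here are illustrative assumptions, not the specific model fitted later in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_ppp(mu, spatial_sampler):
    """Sample a PPP: cardinality N ~ Poisson(mu), with mu the integral of
    the intensity, then N points from the normalized intensity."""
    n = rng.poisson(mu)
    return spatial_sampler(n)

def mark_points(points, sigma_l):
    """Draw marks independently from the Gaussian kernel
    ell(. | x_i) = N(x_i, sigma_l * I), truncated to the wedge W."""
    marks = points + np.sqrt(sigma_l) * rng.standard_normal(points.shape)
    return np.maximum(marks, 0.0)

# illustrative prior intensity 5 * N((3, 3), I), restricted to the wedge
def prior_spatial(n):
    return np.maximum(rng.normal(3.0, 1.0, size=(n, 2)), 0.0)

X = sample_ppp(mu=5.0, spatial_sampler=prior_spatial)   # ground process on W
Y = mark_points(X, sigma_l=0.1)                         # marks on W_M
```

Each run yields a random number of points; conditioning on the ground points `X`, the marks `Y` are independent draws from the kernel, which is exactly the structure exploited by the likelihood below.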
Next, we develop the prior and likelihood on the space of persistence diagrams. \noindent\textbf{Prior:} To model prior knowledge for the brain state classification problem, human expectations for statistical regularities in the environment and the uncertainty involved are summarized as a persistence diagram $D_X$. We assume that the underlying prior uncertainty of a persistence diagram $D_X$ is generated by a Poisson point process $\D_X$ with intensity $\lambda_{\D_X}$. An example of prior persistence diagrams is shown in Fig. \ref{fig:BayesTDA} (a) as black rectangles. Any point $x$ in a persistence diagram $D_X$ may not be observed in actual data due to the presence of noise, sparsity, and/or other unexpected scenarios. We address this instance by defining a probability function $\alpha(x)$. In particular, if $x$ is not observed in the data, the probability of this event is $(1-\alpha(x))$ and similarly $\alpha(x)$ is the probability of $x$ being observed. \noindent\textbf{Data/Likelihood Model:} EEG signals are encoded into the observed PDs, $D_Y$, using the method discussed in Section \ref{sec:sublevel}. Points $y_i \in D_Y$ are linked to points in PD $D_X$, generated by the prior PPP. We investigate the linking of these points to the prior PPP by relying on the theory of marked Poisson point processes (MPPP) \cite{Kingman1993,Jacobson2005}. The probability density of the MPPP is given by a stochastic kernel, $\ell$ such that the marks $m(x_i)$ of $x_i$ are drawn independently from $\ell(\cdot|x_i)$, which in our case plays the exact role of the likelihood (see Section \ref{sec:ppp} for details). One needs to account for all possible marks, with the more likely marks realized as larger likelihood values $\ell(y_i|x_i)$ for all $(x_i,y_i) \in D_X \times D_Y$. In order to accommodate the nature of persistence diagrams, we need to define one last point process that unveils the topological noise in the observed data. 
Intuitively, this point process consists of the points in the observed diagram that fail to associate with the prior. We define this as a Poisson point process $\D_{Y_U}$ with intensity $\lambda_{\D_{Y_U}}$. A sample observed persistence diagram is shown in Fig. \ref{fig:BayesTDA} (a) as red hexagons. Fig. \ref{fig:BayesTDA} (b) and (c) show different combinations of possible associations between prior and data in the green regions. However, it is evident that the associations in (b) would have higher likelihood values than those in (c) and, in turn, would have more impact on posteriors. Also, for every configuration, some of the prior points do not associate with any point $y_i \in D_Y$, which is shown with blue regions. We denote the features in blue regions as $D_{X_V}$, which stands for the features that vanished. If they do not vanish and instead associate with features of $D_{Y}$, we denote them as $D_{X_O}$. Samples from $\D_{Y_U}$ are shown in Fig. \ref{fig:BayesTDA} (b) and (c) as yellow regions. \noindent\textbf{Posterior:} With the above model characterization, the posterior intensity, which explicitly shows the update of the prior, has the following form \cite{Maroulas2019a}: \begin{align} \small &\lambda_{\D_X|D_{Y_{1:m}}}(x) = \rhighlight{\left(1-\alpha(x)\right)\lambda_{\D_X}(x)} + \nonumber \\ &\frac{\alpha(x)}{m} \sum_{i=1}^m\!\!\sum_{y \in \D_{Y^i}}\!\!\!\frac{\bhighlight{\ell(y|x)\lambda_{\D_X}(x)}}{\brhighlight{\lambda_{\mathcal{D}_{Y_U}}(y)}+\bhighlight{\int_{\W}\ell(y|u)\alpha(u)\lambda_{\D_X}(u) du}}.\label{postrior_operator} \end{align} In the posterior intensity density, the two terms reflect the decomposition in the prior point process. The first term is for the features of the prior which may not be observed and hence the intensity is weighted by $(1-\alpha(x))$. On the other hand, the second term corresponds to the features in the prior that may be observed and similarly is weighted by $\alpha(x)$.
Here we observe an expression consistent with the traditional Bayes' theorem, specifically a product of prior intensity and likelihood divided by a normalizing constant. The normalizing constant consists of two terms illustrating the two instances of our data model. $\D_{Y_U}$ consists of the features that are not associated to the prior and this is evident in the first term of the normalizing factor. Consequently, the second term provides the contribution of the observed data from $\D_{Y}$, coupling with prior features to form the marked PPP. \begin{figure}[t] \centering \subfigure[]{\includegraphics[width=1in,height=1.5in]{Figures/match_1.jpg}}\hspace{0.1in} \subfigure[]{\includegraphics[width=1in,height=1.5in]{Figures/match.jpg}} \hspace{0.1in} \subfigure[]{\includegraphics[width=1in,height=1.5in]{Figures/match_2.jpg}}\hspace{0.1in} \vspace{-0.1in} \caption{(a) is a sample $D_X$ from prior Poisson PP $\D_X$ and an observed persistence diagram $D_Y$. (b) and (c) are the decomposition of $D_X$ into $D_{X_O}$ \& $D_{X_V}$ and $D_Y$ into $D_{Y_O}$ \& $D_{Y_U}$.} \label{fig:BayesTDA} \vspace{-0.2in} \end{figure} \section{Bayesian Classification \label{sec:class}} In this section, we develop a Bayesian learning approach that discriminates EEG signals from different cognitive states. In particular, we present a classification scheme based on Bayes factors of persistence diagrams generated from physiological signals. We start by extracting fundamental topological features from a collection of EEG signals and record the information in persistence diagrams using the sublevel set filtration discussed in Section \ref{sec:sublevel}. For a persistence diagram $D$ that needs to be classified, we assume that $D$ is sampled from a Poisson point process $\mathcal{D}$ in $\mathcal{H}$ with prior intensity $\lambda_{\mathcal{D}}$. 
Consequently, its probability density has the form $ p_{\mathcal{D}}(D)=\frac{e^{-\lambda}}{|D|!}\prod_{d \in D}\lambda_{\mathcal{D}}(d), $ where $\lambda= \int_{\W} \lambda_{\mathcal{D}}(u)du$ is the expected number of points in $\mathcal{D}$. For training sets $Q_{Y^k} := D_{Y^k_{1:n}}$ for $k = 1, \cdots, K$ from $K$ classes of random diagrams $\mathcal{D}_{Y^k}$, we obtain the posterior intensities by following the estimation process discussed in Section \ref{sec:main}. The posterior probability density of $\mathcal{D}$ given the training set $Q_{Y^k}$ is defined as \vspace{-0.2in} \begin{equation} \label{eqn:poisson_posterior_density} p_{\mathcal{D}|\mathcal{D}_{Y^k}} (D|Q_{Y^k}) = \frac{e^{-\lambda}}{|D|!}\prod_{d \in D}\lambda_{\mathcal{D}|Q_{Y^k}}(d). \end{equation} The posterior probability densities given the other training sets are obtained by analogous expressions. Consequently, the Bayes factor is defined as \vspace{-0.2in} \begin{equation} \label{eqn:bayes factor} BF^{i,j}(Q_{Y^i},Q_{Y^j})=\frac{p_{\mathcal{D}|\mathcal{D}_{Y^i}}(D|Q_{Y^i})}{p_{\mathcal{D}|\mathcal{D}_{Y^j}}(D|Q_{Y^j})}. \end{equation} For every pair $(i,j)$ with $1\leq i<j\leq K$, if $BF^{i,j}(Q_{Y^i},Q_{Y^j})>c$ we assign one vote to class $i$; otherwise, i.e., for $BF^{i,j}(Q_{Y^i},Q_{Y^j})<c$, the vote goes to class $j$. The final assignment of $D$ to a class is obtained by a majority voting scheme. \section{Application to EEG \label{sec:app}} \subsection{Conjugate family of priors for EEG signals \label{sec:gm_post}} Here, we present a closed form of the posterior distribution through a conjugate family of priors, e.g., the Gaussian mixtures. Hence the prior-to-posterior updates yield posterior distributions of the same family.
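The classification rule above admits a direct sketch; working with log densities is our implementation choice for numerical stability, and the class intensities and masses below are hypothetical toy inputs, not fitted EEG models.

```python
import numpy as np
from math import lgamma, log

def log_pd_density(D, log_intensity, total_mass):
    """log of the PPP density e^{-lambda}/|D|! * prod_d lambda(d)."""
    return -total_mass - lgamma(len(D) + 1) + sum(log_intensity(d) for d in D)

def classify(D, log_intensities, masses, c=1.0):
    """Majority vote over all pairwise Bayes factors BF^{i,j};
    log_intensities[k] is the log posterior intensity for class k and
    masses[k] its expected number of points."""
    K = len(masses)
    logp = [log_pd_density(D, log_intensities[k], masses[k]) for k in range(K)]
    votes = [0] * K
    for i in range(K):
        for j in range(i + 1, K):
            if logp[i] - logp[j] > log(c):   # log BF^{i,j} > log c
                votes[i] += 1
            else:
                votes[j] += 1
    return int(np.argmax(votes))

# toy example: two classes whose intensities peak at different locations
logint = [lambda d: -np.sum((np.asarray(d) - 1.0) ** 2),
          lambda d: -np.sum((np.asarray(d) - 4.0) ** 2)]
label = classify([(1.0, 1.2)], logint, masses=[2.0, 2.0])   # -> 0
```

A diagram near $(1,1)$ is attributed to the first toy class because its posterior density, and hence every pairwise Bayes factor against the other class, favors it.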
We specify the prior intensity density as $\lambda_{\mathcal{D}_X}(x) = \sum_{j = 1}^{N}c^{\mathcal{D}_X}_{j}\mathcal{N}^{*}(x;\mu^{\mathcal{D}_X}_{j},\sigma^{\mathcal{D}_X}_{j}I)$, where $N$ is the number of corresponding mixture components and $\mathcal{N}^{*}$ is the restricted Gaussian density on the wedge $\W$. In a similar fashion, we define the density of the Poisson point process $\mathcal{D}_{Y_U}$. The likelihood density is also Gaussian, $\ell(y|x) = \mathcal{N}^{*}(y;x,\sigma^{\mathcal{D}_{Y_O}}I)$, and $\alpha(x) =\alpha$. With all of these, we obtain a Gaussian mixture posterior intensity density of the form \vspace{-0.3in} \begin{align} \label{eqn:mg_posterior} \lambda_{\mathcal{D}_X|D_{Y^{1:m}}}(x) & = (1-\alpha)\lambda_{\mathcal{D}_X}(x)+ \nonumber\\ &\frac{\alpha}{m} \sum_{i=1}^m\!\!\sum_{y \in \D_{Y^i}}\!\!\sum_{j=1}^{N}\!\! C_{j}^{x|y}\mathcal{N}^*(x;\mu_{j}^{x|y},\sigma_{j}^{x|y}I), \end{align} where $C^{x|y}, \mu^{x|y}$ and $\sigma^{x|y}$ are the weights, means and variances of the posterior intensity, respectively, corresponding to the second part of \eqref{postrior_operator}, and these are pertinent updates of the prior parameters \cite{Maroulas2019a}. \subsection{EEG Datasets \label{sec:data}} US Army Aberdeen Proving Ground (APG) researchers have simulated noisy EEG signals based on different mental activities. We used this dataset for our analysis, mainly focusing on two frequency bands -- alpha and beta. Alpha (frequency from 8 to 13 Hz) corresponds to intense mental activity, stress, and tension, and beta (frequency from 13 to 30 Hz) correlates with active attention and focusing on concrete problems or solutions \cite{Siuly2016}. As the dataset contains EEG signals based on several predominant oscillations, a Gaussian conjugate prior produces promising results for estimating the posterior probabilities as well as for Bayes factor classification \cite{Norwich1993,vanPutten2001,Ince2017}.
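A numerical sketch of the update in \eqref{eqn:mg_posterior} follows; for simplicity it replaces the restricted densities $\mathcal{N}^{*}$ by plain Gaussians (an approximation, not the paper's exact model), with scalar variances and constant $\alpha$.

```python
import numpy as np

def gauss2(x, mu, var):
    """Isotropic 2-D Gaussian density N(x; mu, var * I)."""
    d = np.asarray(x, float) - np.asarray(mu, float)
    return float(np.exp(-d @ d / (2.0 * var)) / (2.0 * np.pi * var))

def posterior_intensity(x, diagrams, prior, alpha, var_l, clutter):
    """Evaluate the posterior intensity at x for a Gaussian-mixture
    prior [(c_j, mu_j, var_j), ...], constant alpha, Gaussian likelihood
    with variance var_l, and a clutter intensity function for D_{Y_U}."""
    lam_prior = sum(c * gauss2(x, mu, v) for c, mu, v in prior)
    out = (1.0 - alpha) * lam_prior
    m = len(diagrams)
    for D in diagrams:
        for y in D:
            num, evid = 0.0, 0.0
            for c, mu, v in prior:
                # product rule: N(y; x, vl I) N(x; mu, v I)
                #   = N(y; mu, (v+vl) I) N(x; mu_post, v_post I)
                pred = gauss2(y, mu, v + var_l)
                v_post = v * var_l / (v + var_l)
                mu_post = (var_l * np.asarray(mu, float)
                           + v * np.asarray(y, float)) / (v + var_l)
                num += c * pred * gauss2(x, mu_post, v_post)
                evid += c * pred
            out += (alpha / m) * num / (clutter(y) + alpha * evid)
    return out
```

Setting $\alpha = 0$ recovers the prior intensity, while each observed point $y$ adds a Gaussian component centered at the product-rule mean, mirroring the two terms of the posterior.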
\subsection{Posterior estimation of EEG Datasets \label{sec:eeg_post}} We first converted the EEG signals to persistence diagrams via sublevel set filtrations. In Fig. \ref{fig:pd_signal}, we present two samples from the EEG dataset of alpha (a) and beta (d) bands respectively along with their persistence diagrams in (b) and (e). Typically EEG signals encode various forms of noise and the simulated EEG dataset accounts for this by corrupting these signals with additive noise. The signals in Fig. \ref{fig:pd_signal} have the signal to noise ratio (SNR) 0, which implies equal contribution from signal and noise. In Fig. \ref{fig:pd_signal} we illustrate a posterior intensity estimation of a noisy alpha band and a noisy beta band utilizing \eqref{eqn:mg_posterior}. To demonstrate a data-driven posterior, we employed an uninformative prior of the form $\mathcal{N}((3,3),20I)$. To present the intensity maps uniformly, we divide the intensities by their corresponding maxima and call them scaled intensities ranging from 0 to 1. \begin{figure}[t] \centering \subfigure[]{\includegraphics[width=1in,height=0.8in]{Figures/signal_alpha_snr0_1.png}}\hspace{0.1in} \subfigure[]{\includegraphics[width=0.8in,height=0.8in]{Figures/pd_alpha_snr0_signal_1.png}}\hspace{0.1in} \subfigure[]{\includegraphics[width=1in,height=1in]{Figures/post_gauss_alpha_snr0_signal_1.png}} \subfigure[]{\includegraphics[width=1in,height=0.8in]{Figures/signal_beta_snr0_1.png}}\hspace{0.1in} \subfigure[]{\includegraphics[width=0.8in,height=0.8in]{Figures/pd_beta_snr0_signal_1.png}}\hspace{0.1in} \subfigure[]{\includegraphics[width=1in,height=1in]{Figures/post_gauss_beta_snr0_signal_1.png}} \caption{(a) is an alpha band simulated EEG signal, (b) is the corresponding persistence diagram using sublevel sets and (c) is the posterior intensity map obtained from \eqref{eqn:mg_posterior}. 
Similarly (d) is a beta band, (e) is the corresponding persistence diagram, and (f) is the corresponding posterior intensity map.} \label{fig:pd_signal} \vspace{-0.2in} \end{figure} \subsection{EEG signal classification with Bayesian learning \label{subsec:example_class}} Detection and classification of specific patterns in brain activity are crucial steps in understanding functional behaviors for developing human-machine communications. We have taken the first step toward engaging Bayesian learning in EEG signal analysis by implementing Gaussian posterior intensities as explained in Section \ref{sec:gm_post} and using these posteriors for a binary Bayes factor classification. From the dataset provided by APG researchers, we used two instances of additive noise in order to represent cases with two different SNRs. Our considered dataset comprises SNRs of 3 and 5, where SNR 5 has a larger contribution from the signal than SNR 3. We followed the process discussed in Section \ref{sec:gm_post} to estimate the posterior intensity of a persistence diagram $D$ in $\mathcal{H}$ given a training set $Q_Y$, with the goal of identifying the correct class of $D$. We used the R package \href{https://github.com/maroulaslab/BayesTDA}{BayesTDA} to obtain posterior intensities. Consequently, the probability density was obtained from \eqref{eqn:poisson_posterior_density}. After computing the intensities with respect to the training sets from both of the classes, the Bayes factor was computed by \eqref{eqn:bayes factor} as the ratio of the posterior probability densities of the unknown persistence diagram $D$ given each of the two competing training sets from $Q_{Y}$ or $Q_{Y'}$. For a threshold $c$, $BF(D)>c$ implies that $D$ belongs to $Q_{Y}$ and $BF(D)<c$ implies otherwise. We implemented 10-fold cross validation for estimating the accuracy.
For this, we partitioned each class into 10 different sets; for each class, 9 of them were used for training and 1 was used for testing. We repeated this 10 times so that every partition acts as the testing data exactly once. We then found the average among all partitions. Results from the Gaussian learning scheme are presented in Fig. \ref{fig:comp}. We compared the results of the Gaussian learning scheme with Artificial Neural Networks (ANNs) and logistic regression (LR) with features (mean, standard deviation and entropy of the recorded coefficients) extracted from the Wavelet Transform (WT). We prefer the WT to the Fourier transform (FT) due to the latter's inability to analyze the nonstationary nature of EEG signals \cite{Al-Fahoum2014,Fiscon2018}. Both ANN and LR have been widely applied for physiological signal classification \cite{Dindin2019,Sivasankari2014,Subasi2005,Tomioka2007,Kabir2016,Prasad2014,Bahy2016}. We also compared our result with an existing TDA technique, namely the persistence landscape \cite{Bubenik2015}. We extracted the first landscape functions of the persistence diagrams for all considered EEG signals and implemented support vector machine and logistic regression on the extracted landscape function. Our results for classifying these two bands outperform the other existing TDA and non-TDA based classification methods over all levels of SNR considered here. Furthermore, the Gaussian learning scheme is able to classify almost perfectly with a high signal-to-noise ratio. \begin{figure}[t] \centering{ \includegraphics[width=2.5in,height=2in]{Figures/comp_2.jpg} } \caption{Comparison of our Bayesian learning method with logistic regression, Artificial Neural Network (ANN), persistence landscape with support vector machine (PLSVM), and persistence landscape with logistic regression (PLLR).
The ANN was trained with a standard back propagation algorithm, a sigmoid activation function, and the following parameters: number of hidden layers = 100, maximum number of iterations = 1000, error threshold = 0.001 and learning rate = 0.1.\label{fig:comp} } \vspace{-15pt} \end{figure} \section{Conclusion \label{sec: conclusion}} In this work, we have proposed a Bayesian framework for persistence diagrams that incorporates prior beliefs about signals and does not rely on any regularity assumptions such as stationarity or linearity for the computation of posterior distributions. The topological descriptors, e.g., persistence diagrams, of EEG signals can decipher essential shape peculiarities by avoiding complex and unwanted geometric features. Our method perceives persistence diagrams as point processes (PPs). As required for a Bayesian paradigm, we incorporate prior uncertainty by viewing persistence diagrams as Poisson PPs with a given intensity. We model the connection between the prior PP and persistence diagrams of noisy observations through marked PPs. This models the data likelihood component of the Bayesian framework. Additionally, we define the likelihood through topological summaries of a signal rather than using the entire signal. This is analogous to the substitution likelihood discussed by Jeffreys \cite{Jeffreys1961}. Relying on the posterior distributions obtained from the Bayesian framework, we develop a Bayesian learning scheme. Furthermore, we present a closed form of the posterior estimation through a conjugate family of priors. In the case of synchronized brain activity, this implementation is useful for analyzing EEG signals. This exhibits the ability of our method to recover the underlying persistence diagram, analogously to the standard Bayesian paradigm for random variables. We employ this Bayesian learning scheme for EEG signal classification.
We provide a detailed comparison with some of the existing methods of signal classification and showcase that our method outperforms them. For comparison purposes, we pursue two directions. Firstly, we compare with the two most widely used signal classification algorithms--neural nets and logistic regression. Secondly, we show a comparison between our method and another topological tool, namely the persistence landscape, combined with traditional machine learning methods such as support vector machine and logistic regression. We exhibit higher accuracy for all considered cases. Thus, the Bayesian inference developed here opens up new avenues for machine learning algorithms for complex signal analysis \emph{directly} on the space of persistence diagrams. \bibliographystyle{IEEEtran}
\section{Introduction} \noindent General Relativity (GR) and Quantum Field Theory (QFT) are leading theories in modern physics. General Relativity successfully describes gravity phenomena from the millimeter scale to the cosmic scale \cite{GRWaves}. On the other hand, Quantum Field Theory remarkably well describes physics at scales ranging from the atomic to the elementary particle scale \cite{CERN}. However, a theory that unites these two theories and provides a description of gravity at quantum scales is still missing. One attempt to construct such a theory is the approach of noncommutative (NC) geometry and noncommutative space-time. \noindent During the last twenty years there has been an ongoing effort to construct consistent NC gravity models. These models rely on the notion of NC space-time and/or noncommutative geometry and in a certain limit they reduce to General Relativity. One of the main problems in this approach is the breaking of the diffeomorphism symmetry of General Relativity. Namely, in most NC gravity models the diffeomorphism symmetry, or at least a part of it, is broken and one needs to understand this breaking and the remaining symmetries (if any). In the following we mention some models of NC gravity. NC gravity via the twist approach \cite{TwistApproach} is based on the twisted diffeomorphism symmetry. One can write the NC Einstein-Hilbert action, derive equations of motion and analyze some particular solutions based on the Killing or semi-Killing twist \cite{TwistSolutions}. However, the full meaning of the twisted symmetry remains to be understood better \cite{Chaichian}. In emergent NC gravity models dynamical quantum geometry arises from NC gauge theory given by Yang-Mills matrix models \cite{EmGravityApproach}. There are also fuzzy space gravity models and DFR models \cite{OtherApproaches}.
Finally, NC gravity can be formulated as a NC gauge theory of the Lorentz or (A)dS group using the enveloping algebra approach and the Seiberg-Witten (SW) map \cite{SWmapApproach, PL09}. In this approach fermions are easily coupled to gravity and it is straightforward to formulate NC supergravity models \cite{PLSUGRA}. Recently, the SW map approach was related to NC gravity models via the Fedosov deformation quantization of endomorphism bundles \cite{Dobrski}. There are also attempts to relate NC gravity models to some testable GR results such as gravitational waves, cosmological solutions and the Newtonian potential \cite{Ostali}. \noindent In this article we construct a NC gravity model following the NC gauge theory approach. We work with the canonical (Moyal-Weyl, $\theta$-constant) noncommutative space-time. However, the model can be straightforwardly generalized to an arbitrary NC space-time coming from an Abelian twist. The main disadvantage of the canonical NC space-time is that, by introducing a constant NC parameter, we explicitly break the diffeomorphism symmetry. Therefore, it is natural to ask if this symmetry breaking has some physical explanation. In Section 5 we will provide an explanation of this diffeomorphism breaking. The gauge group of our model is chosen to be the NC $SO(2,3)_\star$ group. Motivated by different $f(R)$, $f(T)$ and other modified gravity models, we study the SW map expansion of our model and obtain correction terms that are of the first, second, third and fourth order in powers of curvature and torsion. These terms can be compared with the existing terms in modified gravity models. An advantage of our model is that the relations between different correction terms are not arbitrary but are fixed by the SW map expansion. Calculating the NC gravity equations of motion, we show that noncommutativity is a source of curvature and torsion. 
That is, given a flat/torsion-free space-time, noncommutativity induces nonzero curvature/torsion on this space-time. This result is not completely new; it was also discussed in \cite{MajaJohn} in a different approach to NC gravity. In particular, starting from Minkowski space-time as a solution of the commutative vacuum Einstein equations, the corrections induced by our NC gravity model lead to a space-time with constant scalar curvature. Note that this article is a longer and more detailed version of \cite{UsLetter}. \noindent The structure of the article is as follows: In the following section we introduce the full commutative action. For completeness, we repeat the basic notation from our previous papers \cite{MiAdSGrav}, \cite{MDVR-14}. After that, the full model consisting of a sum of three different actions is presented. The actions are a MacDowell-Mansouri type of action, a generalization of the Einstein-Hilbert action and the cosmological constant action \cite{Wilczek}. The NC generalization of this model is done in Section 3. Using the SW map, the second order expansion (in the deformation parameter) of the NC gravity action is calculated. The calculations are long and tedious, so we do not go into details. Instead, we give some of the details in Appendix B. In the zeroth order the NC action reduces to the commutative action containing the Gauss-Bonnet term, the Einstein-Hilbert term and the cosmological constant term. The first order correction vanishes, as expected. The first non-vanishing correction is the second order correction. It is given by terms that are higher order in the curvature and torsion. Since the full second order correction is very complicated, in this paper we only discuss the low energy limit. Therefore, in Section 4 we write the expanded action keeping terms that are of zeroth, first and second order in the derivatives of the vierbeins. 
The equations of motion are then obtained by varying the action with respect to the vierbeins and the spin-connection. NC corrections ($\theta$-dependent terms) appear on the right-hand side of these equations and can be interpreted as sources of curvature and/or torsion. Using these equations of motion, in Section 5 we calculate the NC correction to Minkowski space-time. We see that due to the noncommutativity, Minkowski space-time becomes curved with a constant scalar curvature and the full metric is very close in form to the metric of the AdS space-time. The coordinates in which the solution is given turn out to be Fermi normal coordinates. This result, its relation to the breaking of the diffeomorphism symmetry, and work in perspective are discussed in the Conclusions. \setcounter{equation}{0} \section{Commutative model} \noindent In this section we review the commutative model. We first repeat the basic notation and then define and discuss the commutative action. \noindent Let us consider a gauge theory on four dimensional Minkowski space-time with the $SO(2,3)$ group as the gauge group. Note that throughout the paper we use the ``mostly minus'' convention for the metric, $\eta_{\mu\nu}={\rm diag}(+,-,-,-)$. See Appendix A for more details on the conventions we use. The gauge field is valued in the $SO(2,3)$ algebra \begin{equation}} \def\ee{\end{equation} \omega_\mu = \frac{1}{2}\omega_\mu^{AB}M_{AB} ,\label{GaugePotAds} \ee where $M_{AB}$ are the generators of the $SO(2,3)$ group. The generators satisfy \begin{equation}} \def\ee{\end{equation} [M_{AB},M_{CD}]=i(\eta_{AD}M_{BC}+\eta_{BC}M_{AD}-\eta_{AC}M_{BD}-\eta_{BD}M_{AC}) . \label{AdSalgebra} \ee The 5D metric is $\eta_{AB}={\rm diag}(+,-,-,-,+)$. The gauge group indices $A,B,\dots$ take values $0,1,2,3,5$, while the indices $a,b,\dots$ take values $0,1,2,3$. Space-time indices are labeled by Greek letters. The generators $M_{AB}$ can be defined using the Clifford algebra in 5D. 
A representation of 5D gamma matrices is obtained from 4D gamma matrices, i.e.\ $\Gamma_A =(i\gamma_a\gamma_5, \gamma_5)$, where $\gamma_a$ are 4D gamma matrices. The generators $M_{AB}$ are \begin{equation}} \def\ee{\end{equation} M_{AB} =\frac{i}{4}[\Gamma_A,\Gamma_B]\ . \ee In the representation given above we obtain \begin{eqnarray} } \def\eea{\end{eqnarray} M_{ab} &=&\frac{i}{4}[\gamma_a,\gamma_b]=\frac12\sigma_{ab}\ ,\nonumber\\ M_{5a} &=&\frac{1}{2}\gamma_a\ . \label{Maba5} \eea Using this representation, the gauge field $\omega_\mu^{AB}$ can be decomposed as: \begin{equation}} \def\ee{\end{equation} \omega_\mu = \frac{1}{2}\omega_\mu^{AB}M_{AB}=\frac{1}{4}\omega_\mu^{ab}\sigma_{ab}- \frac{1}{2l} e_\mu^{a}\gamma_a .\label{GaugePotAdsDecomp} \ee The parameter $l$ has dimension of length, while the fields $e_\mu^a$ are dimensionless and $\omega_\mu^{ab}$ has dimension $1/l$. Under the $SO(2,3)$ gauge transformations the gauge field transforms as \begin{equation}} \def\ee{\end{equation} \delta_\epsilon\omega_\mu=\partial_\mu\epsilon + i[\epsilon, \omega_\mu], \label{TrLawOmegaAB} \ee with the gauge parameter denoted by $\epsilon=\frac{1}{2}\epsilon^{AB}M_{AB}$. \noindent The field strength tensor is defined in the standard way as \begin{equation}} \def\ee{\end{equation} F_{\mu\nu}=\partial_\mu\omega_\nu-\partial_\nu\omega_\mu-i[\omega_\mu,\omega_\nu] =\frac{1}{2}F^{AB}_{\mu\nu}M_{AB} . \label{FAB} \ee Its transformation law under the infinitesimal gauge transformations is given by \begin{equation}} \def\ee{\end{equation} \delta_\epsilon F_{\mu\nu}= i[\epsilon, F_{\mu\nu}]. 
\label{TrLawFmini} \ee Just like the gauge potential, the components of the field strength tensor $F_{\mu\nu}^{\ AB}$ decompose into $F_{\mu\nu}^{\ ab}$ and $F_{\mu\nu}^{\ a5}$: \begin{equation}} \def\ee{\end{equation} F_{\mu\nu}=\frac12\Big( R_{\mu\nu}^{\ ab}-\frac{1}{l^2}(e_\mu^ae_\nu^b-e_\mu^be_\nu^a)\Big) M_{ab} + F_{\mu\nu}^{\ a5}M_{a5} , \label{FabFa5} \ee where \begin{eqnarray} } \def\eea{\end{eqnarray} R_{\mu\nu}^{\ ab} &=& \partial_\mu\omega_\nu^{ab}-\partial_\nu\omega_\mu^{ab}+\omega_\mu^{ac}\omega_\nu^{cb} -\omega_\mu^{bc}\omega_\nu^{ca} , \label{Rab}\\ lF_{\mu\nu}^{\ a5} &=& \nabla_\mu e^a_\nu-\nabla_\nu e^a_\mu = T_{\mu\nu}^a .\label{Ta} \eea The $SO(2,3)$ gauge theory was used in \cite{stelle-west} to formulate a gravity theory using the symmetry breaking from $SO(2,3)$ to $SO(1,3)$. Then, using the equations of motion of the model, one can identify the fields $\omega_\mu^{ab}$ with the spin connection and the fields $e^a_\mu$ with vierbeins. The field strengths $R_{\mu\nu}^{\ ab}$ and $F_{\mu\nu}^{\ a5}=T_{\mu\nu}^a$ are the curvature tensor and the torsion. \noindent The symmetry breaking was introduced via the scalar field $\phi=\phi^A\Gamma_A$ which transforms in the adjoint representation of the $SO(2,3)$ group, \begin{equation}} \def\ee{\end{equation} \delta \phi = i[\epsilon,\phi] . \label{DeltaPhi} \ee Using the scalar field $\phi$ one can write the following gauge invariant actions \cite{Wilczek}: \begin{eqnarray} S_1 &=& \frac{il}{64\pi G_N}{\rm Tr} \int{\rm d}^4x \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma}\phi ,\label{KomDejstvo_S_1}\\ S_2 &=& \frac{1}{128 \pi G_{N}l}{\rm Tr}\int d^{4}x\epsilon^{\mu \nu \rho \sigma}F_{\mu \nu}D_{\rho}\phi D_{\sigma}\phi\phi+c.c. , \label{KomDejstvo_S_2}\\ S_3 &=& -\frac{i}{128 \pi G_{N}l}{\rm Tr}\int d^{4}x\epsilon^{\mu \nu \rho \sigma}D_{\mu}\phi D_{\nu}\phi D_{\rho}\phi D_{\sigma}\phi\phi ,\label{KomDejstvo_S_3} \end{eqnarray} with $D_\mu\phi = \partial_\mu\phi -i[\omega_\mu, \phi]$. 
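\noindent The appearance of the vierbein bilinears in (\ref{FabFa5}) can be traced directly to the algebra. As a quick consistency check (our own short computation, using (\ref{AdSalgebra}) and the representation (\ref{Maba5})), the ``translational'' generators close into a Lorentz generator,
$$ [M_{5a},M_{5b}] = -i\eta_{55}M_{ab} = -iM_{ab}\ , \qquad \Big[\frac{1}{2}\gamma_a,\frac{1}{2}\gamma_b\Big] = \frac{1}{4}[\gamma_a,\gamma_b] = -\frac{i}{2}\sigma_{ab} = -iM_{ab}\ . $$
It is precisely this nonvanishing commutator that feeds the $\omega_\mu^{a5}\sim e_\mu^a/l$ components of (\ref{GaugePotAdsDecomp}) back into the $M_{ab}$ sector of $F_{\mu\nu}$ and produces the $\frac{1}{l^2}(e_\mu^ae_\nu^b-e_\mu^be_\nu^a)$ term in (\ref{FabFa5}).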
\noindent We define our commutative model to be the sum of these three actions \begin{equation} S=c_1S_1+c_2S_2+c_3S_3 , \label{FullCommAction} \end{equation} where $c_1,c_2$ and $c_3$ are arbitrary constants that will be determined from some additional constraints. The action (\ref{FullCommAction}) is invariant under the $SO(2,3)$ gauge symmetry. This symmetry is broken to the $SO(1,3)$ gauge symmetry by choosing $\phi^a=0,\phi^5=l$. This choice is sometimes referred to as a physical gauge. After the symmetry breaking the action $S_1$ reduces to the sum of the Einstein-Hilbert term, the cosmological constant term and the Gauss-Bonnet term: \begin{equation} S_1 = -\frac{1}{16\pi G_N}\int {\rm d}^4 x\Big( \frac{l^2}{16}\epsilon^{\mu\nu\r\s} \epsilon_{abcd}R_{\mu\nu}^{\ ab}R_{\r\s}^{\ cd} + eR -\frac{6}{l^2} e \Big) . \nonumber \end{equation} The action $S_2$ reduces to the sum of the Einstein-Hilbert term and the cosmological constant term \begin{equation} S_2=-\frac{1}{16\pi G_{N}}\int d^{4}x\sqrt{-g}\Big(R-\frac{12}{l^2}\Big) .\nonumber \end{equation} Finally, the action $S_3$ reduces to the cosmological constant term only \begin{equation} S_3=-\frac{1}{16\pi G_{N}}\int d^{4}x\sqrt{-g}\Big(-\frac{12}{l^2}\Big) . \end{equation} Therefore, after the symmetry breaking our classical action is a sum of these three terms \begin{eqnarray} } \def\eea{\end{eqnarray} S&=&c_1S_1+c_2S_2+c_3S_3\nonumber\\ &=&-\frac{1}{16\pi G_{N}}\int d^{4}x\Big(c_1\frac{l^2}{16}\epsilon^{\mu\nu\r\s} \epsilon_{abcd}R_{\mu\nu}^{\ ab}R_{\r\s}^{\ cd} \nonumber\\ && +\sqrt{-g}\big( (c_1 + c_2)R -\frac{6}{l^2}(c_1+ 2c_2 + 2c_3)\big) \Big). \label{KomDejstvo} \eea Now we can partially fix the constants $c_1$, $c_2$ and $c_3$ by the requirement that the full action after the symmetry breaking reduces to the Einstein-Hilbert action with the cosmological constant. The Gauss-Bonnet term is topological; it does not influence the equations of motion and we will not write it further. 
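\noindent The reductions quoted above all follow from one short computation. As a sketch (our own check, in the physical gauge $\phi^a=0$, $\phi^5=l$, i.e.\ $\phi=l\gamma_5$, using $\{\gamma_a,\gamma_5\}=0$, $[\sigma_{ab},\gamma_5]=0$ and the decomposition (\ref{GaugePotAdsDecomp})):
$$ D_\mu\phi = \partial_\mu\phi - i[\omega_\mu,\phi] = -il\Big[-\frac{1}{2l}e_\mu^a\gamma_a,\gamma_5\Big] = \frac{i}{2}e_\mu^a\big(\gamma_a\gamma_5-\gamma_5\gamma_a\big) = ie_\mu^a\gamma_a\gamma_5\ . $$
The spin-connection part of $\omega_\mu$ drops out since $\sigma_{ab}$ commutes with $\gamma_5$. Each covariant derivative of $\phi$ thus contributes one vierbein, which is why $S_2$ produces at most one power of the curvature while $S_3$ reduces to the pure cosmological constant term.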
We choose $c_1+c_2=1$, and the cosmological constant is given by $$\Lambda=-3\frac{1+c_2+2c_3}{l^2}\ . $$ Note that the cosmological constant $\Lambda$ can be positive, negative or zero, regardless of the $SO(2,3)$ symmetry of our model. \setcounter{equation}{0} \section{NC $SO(2,3)_\star$ gravity action} \noindent As we have mentioned in the Introduction, the NC generalization of General Relativity cannot be formulated in a straightforward way. One possible way to achieve this is to use the framework of NC gauge theories and treat gravity as a gauge theory of the Poincar\'e (or AdS or dS) group. In the previous section we defined a rather general model of commutative gravity as a theory with broken $SO(2,3)$ symmetry (\ref{KomDejstvo}). We now generalize this model to the noncommutative setting. \noindent As in the previous papers \cite{MiAdSGrav, MDVR-14}, we work in the canonical (Moyal-Weyl, $\theta$-constant) NC space-time. Following the approach of deformation quantization, we represent noncommutative functions as functions of commuting coordinates and the algebra multiplication with the Moyal-Weyl $\star$-product: \begin{equation} \label{moyal} f (x)\star g (x) = e^{\frac{i}{2}\,\theta^{\alpha\beta}\frac{\partial}{\partial x^\alpha}\frac{\partial}{ \partial y^\beta}} f (x) g (y)|_{y\to x}\ . \end{equation} Here $\theta^{\alpha\beta}$ is a constant antisymmetric matrix and its entries are considered to be small deformation parameters\footnote{To be more precise, the Moyal-Weyl $\star$-product should be written as $$ f (x)\star g (x) = e^{\frac{i}{2}\,{\mathchar'26\mkern-9muk}\theta^{\alpha\beta}\frac{\partial}{\partial x^\alpha}\frac{\partial}{ \partial y^\beta}} f (x) g (y)|_{y\to x}\ , $$ with the small deformation parameter ${\mathchar'26\mkern-9muk}$ and arbitrary constant antisymmetric matrix elements $\theta^{\alpha\beta}$. 
In the usual notation ${\mathchar'26\mkern-9muk}$ is absorbed in the matrix elements $\theta^{\alpha\beta}$ and these are called deformation parameters. }. The noncommutativity (deformation) is then encoded in the $\star$-product, while all variables (fields) are functions of commuting coordinates. Integration is well defined since the usual integral is cyclic: \begin{equation}} \def\ee{\end{equation} \int {\rm d}^4 x (f\star g\star h ) = \int {\rm d}^4 x ( h\star f\star g )\ + {\mbox{boundary terms}}.\label{cyclicity} \ee Assuming that all fields are well behaved at the boundary, these terms vanish, and since we are interested in the equations of motion, we will simply ignore the boundary terms throughout this paper. They become important when one discusses conserved quantities or the thermodynamics of black holes. The question of boundary terms in the NC gravity action was discussed in detail in \cite{MDVR-14}. In particular, from (\ref{cyclicity}) we have $\int {\rm d}^4 x (f\star g) = \int {\rm d}^4 x (g\star f) =\int {\rm d}^4 x fg$. Note that the volume element ${\rm d}^4 x$ is not $\star$-multiplied with the functions under the integral. \noindent In order to construct the NC $SO(2,3)_\star$ gauge theory we use the enveloping algebra approach and the Seiberg-Witten map developed in \cite{SWMapEnvAlgebra}. We will not go into the details of this construction; they can be found in \cite{MDVR-14}. Here we just write the SW map solutions for the NC gauge field, the NC field strength tensor and the NC scalar field in the adjoint representation, since we will use them throughout the paper. 
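\noindent For later use we also record the lowest orders of the $\star$-product (\ref{moyal}). Expanding the exponential gives
$$ f\star g = fg + \frac{i}{2}\theta^{\alpha\beta}\partial_\alpha f\,\partial_\beta g - \frac{1}{8}\theta^{\alpha\beta}\theta^{\gamma\delta}\partial_\alpha\partial_\gamma f\, \partial_\beta\partial_\delta g + {\cal O}(\theta^3)\ , $$
and applied to the coordinates themselves this reproduces the canonical commutation relation $[x^\alpha\stackrel{\star}{,}x^\beta] = x^\alpha\star x^\beta - x^\beta\star x^\alpha = i\theta^{\alpha\beta}$.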
\noindent The noncommutative gauge field $\hat{\omega}_\mu$ is defined by the following recursive relation: \begin{equation}} \def\ee{\end{equation} {\hat\omega}_\mu^{(n+1)}= -\frac{1}{4(n+1)}\theta^{\kappa\lambda} \Big( \{{\hat \omega}_\kappa \stackrel{\star}{,} \partial_\lambda{\hat \omega}_\mu + {\hat F}_{\lambda\mu}\} \Big)^{(n)} ,\label{RecRelOmega} \ee where $\hat{\omega}^{(0)}_\mu = \omega_\mu$ is the commutative gauge field and an expression of the type $(A\star B)^{(n)} = A^{(n)}B^{(0)} + A^{(n-1)}B^{(1)} + \dots + A^{(0)}\star^{(1)} B^{(n-1)} + A^{(1)}\star^{(1)} B^{(n-2)} +\dots$ includes all possible terms of order $n$. Expanding this relation up to first order in the deformation parameter, we find that the NC gauge field ${\hat\omega}_\mu$ is of the form \begin{eqnarray} } \def\eea{\end{eqnarray} {\hat\omega}_\mu &=& \omega_\mu -\frac{1}{4}\theta^{\kappa\lambda}\{\omega_\kappa, \partial_\lambda\omega_\mu + F_{\lambda\mu}\} + {\cal O}(\theta^2) \label{SO23Omega1}\\ &=& \frac{1}{4}\omega^{ab}_\mu\sigma_{ab} + \omega_\mu^a \gamma_a +\tilde{\omega}_\mu^a\gamma_a\gamma_5 + {\tilde\omega}^5_\mu\gamma_5 + \omega_\mu I \ . \label{UEAOmega} \eea It is obvious from (\ref{UEAOmega}) that the NC gauge field is valued in the enveloping algebra of the $SO(2,3)$ algebra. However, note that the enveloping algebra in this particular case is finite dimensional. This is one of the advantages of choosing the NC gauge group to be $SO(2,3)_\star$. 
\noindent The NC field strength tensor is defined as \begin{equation}} \def\ee{\end{equation} {\hat F}_{\mu\nu}=\partial_\mu{\hat \omega}_\nu-\partial_\nu{\hat \omega}_\mu -i[{\hat\omega}_\mu\stackrel{\star}{,} {\hat\omega}_\nu] \label{nckrivina} \ee and its transformation law under the infinitesimal NC gauge transformations is given by: \begin{equation} \delta^\star_\epsilon {\hat F}_{\mu\nu}= i[\hat{\Lambda}_\epsilon \stackrel{\star}{,} {\hat F}_{\mu\nu}]\ .\label{DeltaFStar} \end{equation} Here the NC gauge parameter $\hat{\Lambda}_\epsilon$ is introduced. It is also valued in the enveloping algebra; in zeroth order in the deformation parameter it reduces to the commutative $SO(2,3)$ gauge parameter $\epsilon$ and its higher orders can be calculated using the SW map. The SW map solution for ${\hat F}_{\mu\nu}$ follows from the definition (\ref{nckrivina}), using the result (\ref{RecRelOmega}). The recursive formula is \begin{eqnarray} } \def\eea{\end{eqnarray} {\hat F}_{\mu\nu}^{(n+1)} &=& -\frac{1}{4(n+1)}\theta^{\kappa\lambda}\Big( \{ {\hat \omega}_\kappa \stackrel{\star}{,} \partial_\lambda {\hat F}_{\mu\nu} + D_\lambda {\hat F}_{\mu\nu} \} \Big)^{(n)} \nonumber\\ && +\frac{1}{2(n+1)}\theta^{\kappa\lambda}\Big( \{ {\hat F}_{\mu\kappa} \stackrel{\star}{,} {\hat F}_{\nu\lambda} \} \Big)^{(n)} \ .\label{RecRelR} \eea Note that we do not put a ``hat'' on the covariant derivative $D_\mu$; the meaning of $D_\mu$ is defined by the expression it acts on: $D_\lambda \hat{F}_{\mu\nu} = \partial_\lambda \hat{F}_{\mu\nu} -i[\hat{\omega}_\lambda \stackrel{\star}{,} \hat{F}_{\mu\nu}]$ and $D_\lambda F_{\mu\nu} = \partial_\lambda F_{\mu\nu} -i[\omega_\lambda, F_{\mu\nu}]$. 
One can check that \begin{eqnarray} } \def\eea{\end{eqnarray} {\hat F}_{\mu\nu} &=& F_{\mu\nu} -\frac{1}{4}\theta^{\kappa\lambda} \{\omega_\kappa,\partial_\lambda F_{\mu\nu} + D_\lambda F_{\mu\nu}\} + \frac{1}{2}\theta^{\kappa\lambda} \{F_{\mu\kappa}, F_{\nu\lambda} \} + {\cal O}(\theta^2) \label{SO23F1}\\ &=& \frac{1}{4} F^{\ ab}_{\mu\nu}\sigma_{ab} + F^a_{\mu\nu}\gamma_a + \tilde{F}^a_{\mu\nu}\gamma_a\gamma_5 + {\tilde F}^5_{\mu\nu}\gamma_5 + F_{\mu\nu} I . \label{UEAF} \eea Finally, the field $\hat\phi$ transforms in the adjoint representation \begin{equation}} \def\ee{\end{equation} \delta_\epsilon^\star{\hat \phi} = i[{\hat\Lambda}_\epsilon\stackrel{\star}{,}{\hat \phi}]\ . \label{DeltaPhiStar} \ee Using the previous results we find the recursive relation \begin{equation}} \def\ee{\end{equation} \hat{\phi}^{(n+1)} = -\frac{1}{4(n+1)}\theta^{\kappa\lambda} \Big( \{{\hat \omega}_\kappa \stackrel{\star}{,} \partial_\lambda {\hat{\phi}} + D_\lambda {\hat{\phi}} \} \Big)^{(n)} \ , \label{RecRelPhi} \ee with $D_\lambda {\hat \phi} = \partial_\lambda {\hat \phi} -i [{\hat \omega}_\lambda \stackrel{\star}{,} {\hat \phi} ]$ and $D_\lambda \phi = \partial_\lambda \phi -i [\omega_\lambda ,\phi ]$. The solution for $\hat{\phi}$ has the following structure \begin{eqnarray} } \def\eea{\end{eqnarray} {\hat \phi} &=& \phi -\frac{1}{4}\theta^{\kappa\lambda} \{\omega_\kappa,\partial_\lambda\phi + D_\lambda \phi\} + {\cal O}(\theta^2) \label{SO23Phi1}\\ &=& \phi^a\gamma_a\gamma_5 + \phi\gamma_5 + \frac{1}{4}\phi^{ab}\sigma_{ab} + \tilde{\phi}^a\gamma_a\ . \label{UEAPhi} \eea \noindent Having these results at hand, we are now ready to define a NC generalization of the action (\ref{KomDejstvo}). We do it term by term. 
\subsection{NC generalization of $S_1$} \noindent The NC generalization of the action $S_1$ (\ref{KomDejstvo_S_1}) is given by \begin{equation}} \def\ee{\end{equation} S_{1NC} = \frac{il}{64\pi G_N}{\rm Tr} \int{\rm d}^4x \epsilon^{\mu\nu\rho\sigma} \hat{F}_{\mu\nu}\star \hat{F}_{\rho\sigma}\star \hat{\phi}\, .\label{NCdejstvo_S_1} \ee The $\star$-product is the Moyal-Weyl $\star$-product (\ref{moyal}), fields with a ``hat'' are NC fields and we will use the SW map solutions (\ref{SO23F1}), (\ref{SO23Phi1}). Using the transformation laws (\ref{DeltaFStar}), (\ref{DeltaPhiStar}) and the cyclicity of the integral (\ref{cyclicity}) one can show that this action is invariant under the NC $SO(2,3)_\star$ gauge transformations. In the limit $\theta^{\alpha\beta}\to 0$ the action (\ref{NCdejstvo_S_1}) reduces to the commutative action (\ref{KomDejstvo_S_1}). \noindent The expansion of this action up to the second order in the deformation parameter is done in \cite{MDVR-14}. The first order correction vanishes. This is an expected result: it was shown in \cite{SWmapApproach} that, if the NC gravity action is real, then the first order (in the deformation parameter) correction has to vanish. This result holds for a wide class of NC deformations, namely the deformations obtained by an Abelian twist, see \cite{PL09}. 
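\noindent The invariance argument is short enough to spell out. Since both (\ref{DeltaFStar}) and (\ref{DeltaPhiStar}) are $\star$-commutators with the same parameter $\hat{\Lambda}_\epsilon$, the Leibniz rule gives
$$ \delta^\star_\epsilon\big(\hat{F}_{\mu\nu}\star\hat{F}_{\rho\sigma}\star\hat{\phi}\big) = i\big[\hat{\Lambda}_\epsilon\stackrel{\star}{,}\hat{F}_{\mu\nu}\star\hat{F}_{\rho\sigma}\star\hat{\phi}\big]\ , $$
and the trace of a $\star$-commutator vanishes under the integral by the cyclicity (\ref{cyclicity}), so $\delta^\star_\epsilon S_{1NC}=0$.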
The second order correction is given by: \begin{eqnarray} } \def\eea{\end{eqnarray} S_{1NC}^{(2)} &=& \frac{il}{64\pi G_N}\frac18\theta^{\alpha\beta}\theta^{\kappa\lambda}{\rm Tr} \int{\rm d}^4x \epsilon^{\mu\nu\rho\sigma}\Big( \frac18\{ F_{\alpha\beta},\{F_{\mu\nu},F_{\r\s}\}\}\{\phi,F_{\kappa\lambda}\} \nonumber\\ && -\frac12\{F_{\alpha\beta},\{F_{\r\s},\{F_{\kappa\mu},F_{\lambda\nu}\}\}\}\phi -\frac14\{\{F_{\mu\nu},F_{\r\s}\},\{F_{\kappa\alpha},F_{\lambda\beta}\}\}\phi \nonumber\\ && -\frac{i}{4}\{F_{\alpha\beta},[D_\kappa F_{\mu\nu},D_{\lambda}F_{\r\s}]\}\phi - \frac{i}{2}[\{D_\kappa F_{\mu\nu},F_{\r\s}\},D_{\lambda}F_{\alpha\beta}]\phi \nonumber\\ && -\frac12\{ F_{\r\s},\{F_{\alpha\mu},F_{\beta\nu}\}\}\{\phi,F_{\kappa\lambda}\} +\{ \{F_{\alpha\mu},F_{\beta\nu} \},\{F_{\kappa\r},F_{\lambda\s}\}\}\phi\nonumber\\ && +2\{F_{\r\s},\{F_{\beta\nu},\{F_{\kappa\alpha},F_{\lambda\mu}\}\}\}\phi + i\{F_{\r\s},[D_\kappa F_{\alpha\mu},D_\lambda F_{\beta\nu}]\}\phi \nonumber\\ && +2i[\{F_{\beta\nu},D_{\kappa}F_{\alpha\mu}\},D_\lambda F_{\r\s}]\phi \nonumber\\ &&-\frac{i}{4}\{\phi,F_{\kappa\lambda}\}[D_\alpha F_{\mu\nu},D_\beta F_{\r\s}] -\frac{1}{2}\{D_\kappa D_\alpha F_{\mu\nu},D_\lambda D_\beta F_{\r\s}\}\phi\nonumber\\ && + i[\{ F_{\kappa\alpha},D_\lambda F_{\mu\nu}\},D_\beta F_{\r\s}]\phi + i[\{ F_{\lambda\nu},D_\alpha F_{\kappa\mu}\},D_\beta F_{\r\s}]\phi \nonumber\\ && + i[\{ F_{\kappa\mu},D_\alpha F_{\lambda\nu}\},D_\beta F_{\r\s}]\phi\Big) \ . \label{NCDejstvo1Exp2} \eea \subsection{NC generalization of $S_2$} \noindent The NC generalization of the action $S_2$ (\ref{KomDejstvo_S_2}) is given by \begin{equation} S_{2NC}=\frac{1}{128 \pi G_{N}l}{\rm Tr} \int d^{4}x \epsilon^{\mu \nu \rho \sigma}\hat\phi\star\hat F_{\mu \nu}\star\hat D_{\rho}\hat\phi\star\hat D_{\sigma}\hat\phi + c.c. \label{NCDejstvo_S_2} \end{equation} This action is not real so we have to add its complex conjugate by hand. 
Following the usual steps, we expand (\ref{NCDejstvo_S_2}) up to second order in the deformation parameter. The details of the calculation are presented in Appendix B; here we just write the main steps. \noindent Using the formulae (\ref{B1}), (\ref{B2}) and (\ref{B3}) from Appendix B the first order correction follows. It is given by \begin{eqnarray} S_{2NC}^{(1)}&=& \frac{1}{128 \pi G_{N}l}{\rm Tr}\int d^{4}x \theta^{\alpha \beta}\epsilon^{\mu \nu \rho \sigma}\Big( -\frac{1}{4}\phi\{F_{\alpha \beta},F_{\mu \nu}\}D_{\rho} \phi D_{\sigma}\phi\nonumber\\ && -\frac{i}{2}D_{\alpha}\phi F_{\mu \nu}(D_{\beta}D_{\rho}\phi)D_{\sigma}\phi -\frac{i}{2}D_{\alpha}\phi F_{\mu \nu}D_{\rho}\phi (D_{\beta}D_{\sigma}\phi)\nonumber\\ &&+\frac{1}{2}\phi\{F_{\mu \alpha},F_{\nu \beta}\}D_{\rho}\phi D_{\sigma}\phi+\frac{i}{2}\phi F_{\mu \nu}(D_{\alpha}D_{\rho}\phi)(D_{\beta}D_{\sigma}\phi)\nonumber\\ &&+\frac{1}{2}\phi F_{\mu \nu}\{F_{\alpha \rho},D_{\beta}\phi\}D_{\sigma}\phi+\frac{1}{2}\phi F_{\mu \nu}D_{\rho}\phi\{F_{\alpha \sigma},D_{\beta}\phi\} \Big) + c.c. \end{eqnarray} Explicit calculation of the traces gives $S_{2NC}^{(1)}=0$, so we have to calculate the second order correction. 
It follows from the first order action as \begin{eqnarray} S_{2NC}^{(2)}&=& \frac{1}{256 \pi G_{N}l}{\rm Tr}\int d^{4}x \theta^{\alpha \beta}\epsilon^{\mu \nu \rho \sigma}\Big(-\frac{1}{4}\hat\phi\star\{\hat {F}_{\alpha \beta}\stackrel{\star}{,}\hat {F}_{\mu \nu}\}\star \hat D_{\rho} \hat\phi \star \hat D_{\sigma}\hat\phi\nonumber\\ &&-\frac{i}{2}\hat D_{\alpha}\hat\phi\star \hat {F}_{\mu\nu}\star (\hat D_{\beta}\hat D_{\rho}\hat \phi)\star \hat D_{\sigma}\hat \phi-\frac{i}{2}\hat D_{\alpha}\hat \phi \star \hat {F}_{\mu \nu}\star \hat D_{\rho}\hat \phi \star (\hat D_{\beta}\hat D_{\sigma}\hat \phi)\label{S_2-drugi-red-1}\\ &&+\frac{1}{2}\hat \phi\star\{\hat {F}_{\mu \alpha}\stackrel{\star}{,}\hat {F}_{\nu \beta}\}\star\hat D_{\rho}\hat \phi \star\hat D_{\sigma}\hat \phi+\frac{i}{2}\hat \phi \star\hat {F}_{\mu \nu}\star(\hat D_{\alpha}\hat D_{\rho}\hat \phi)\star(\hat D_{\beta}\hat D_{\sigma}\hat \phi)\nonumber\\ &&+\frac{1}{2}\hat \phi\star \hat {F}_{\mu \nu}\star\{\hat {F}_{\alpha \rho}\stackrel{\star}{,}\hat D_{\beta}\hat \phi\}\star\hat D_{\sigma}\hat \phi+\frac{1}{2}\hat \phi\star \hat {F}_{\mu\nu}\star\hat D_{\rho}\hat \phi\star\{\hat {F}_{\alpha \sigma}\stackrel{\star}{,}\hat D_{\beta}\hat \phi\}\Big)^{(1)} + c.c.\nonumber \end{eqnarray} By $(\ )^{(1)}$ we mean that the terms in the bracket are expanded up to first order in the deformation parameter. That includes the expansion of the $\star$-products and the use of the SW map solutions for the corresponding fields. 
\noindent Using the formulae (\ref{B4}-\ref{B6}) and the general method outlined in Appendix B, we finally arrive at the second order correction for the NC action $S_2$ \begin{eqnarray} S^{(2)}_{2NC}&=&\frac{1}{256\pi G_{N}l}\int d^{4}x \epsilon^{\mu \nu \rho \sigma}\theta^{\alpha \beta}\theta^{\gamma \delta}{\rm Tr}\Bigg(\frac{1}{8}\{F_{\gamma \delta},\phi\{F_{\alpha \beta},F_{\mu \nu}\}\}D_{\rho}\phi D_{\sigma}\phi\nonumber\\ &&-\frac{i}{4}D_{\gamma}\phi D_{\delta}(\{F_{\alpha \beta},F_{\mu \nu}\})D_{\rho}\phi D_{\sigma}\phi \nonumber\\ && -\frac{i}{4}\phi [D_{\gamma}F_{\alpha \beta}, D_{\delta}F_{\mu \nu}]D_{\rho}\phi D_{\sigma}\phi-\frac{1}{4}\phi\{\{F_{\alpha \gamma},F_{\beta \delta}\},F_{\mu \nu}\}D_{\rho}\phi D_{\sigma}\phi\nonumber\\ && -\frac{1}{4}\phi\{\{F_{\mu \gamma},F_{\nu \delta}\},F_{\alpha \beta}\}D_{\rho}\phi D_{\sigma}\phi-\frac{i}{4}\phi\{F_{\alpha \beta},F_{\mu \nu}\}D_{\gamma}D_{\rho}\phi D_{\delta}D_{\sigma}\phi\nonumber\\ && -\frac{1}{4}\phi\{F_{\alpha \beta},F_{\mu \nu}\}[\{F_{\gamma \rho},D_{\delta}\phi\},D_{\sigma}\phi] +\frac{i}{4}\{F_{\gamma \delta},D_{\alpha}\phi F_{\mu \nu}\}[D_{\beta}D_{\rho}\phi, D_{\sigma}\phi]\nonumber\\ && +\frac{1}{2}(D_{\gamma}D_{\alpha}\phi) D_{\delta}F_{\mu \nu}[D_{\beta}D_{\rho}\phi, D_{\sigma}\phi]\nonumber\\ && -\frac{i}{2}\{F_{\gamma \alpha},D_{\delta}\phi\}F_{\mu \nu}[D_{\beta}D_{\rho}\phi, D_{\sigma}\phi]-\frac{i}{2}D_{\alpha}\phi\{F_{\mu \gamma},F_{\nu \delta}\}[D_{\beta}D_{\rho}\phi, D_{\sigma}\phi]\nonumber\\ && +\frac{1}{2}D_{\alpha}\phi F_{\mu \nu}\{D_{\gamma}D_{\beta}D_{\rho}\phi, D_{\delta}D_{\sigma}\phi\} -\frac{i}{2}D_{\alpha}\phi F_{\mu \nu}[\{F_{\gamma \beta},D_{\delta}D_{\rho}\phi\},D_{\sigma}\phi] \nonumber\\ && -\frac{i}{2}D_{\alpha}\phi F_{\mu \nu}D_{\beta}([\{F_{\gamma \rho},D_{\delta}\phi\},D_{\sigma}\phi])-\frac{1}{4}\{F_{\gamma \delta},\phi\{F_{\mu \alpha},F_{\nu \beta}\}\}D_{\rho}\phi D_{\sigma}\phi \nonumber\\ && +\frac{i}{2}D_{\gamma}\phi D_{\delta}(\{F_{\mu \alpha},F_{\nu 
\beta}\})D_{\rho}\phi D_{\sigma}\phi +\frac{i}{2}\phi [D_{\gamma}F_{\mu \alpha},D_{\delta}F_{\nu \beta}]D_{\rho}\phi D_{\sigma}\phi\nonumber\\ && +\phi \{\{F_{\mu \gamma},F_{\alpha \delta}\},F_{\nu \beta}\}D_{\rho}\phi D_{\sigma}\phi\nonumber\\ && +i\phi \{F_{\mu \gamma},F_{\nu \delta}\}D_{\alpha}D_{\rho}\phi D_{\beta}D_{\sigma}\phi\nonumber\\ && +\phi\{F_{\mu \alpha},F_{\nu \beta}\}[\{F_{\gamma \rho},D_{\delta}\phi\},D_{\sigma}\phi] -\frac{i}{4}\{F_{\gamma \delta},\phi F_{\mu \nu}\}D_{\alpha}D_{\rho}\phi D_{\beta}D_{\sigma}\phi\nonumber\\ && +\phi F_{\mu \nu}\{F_{\alpha \rho},D_{\beta}\phi\}\{F_{\gamma \sigma},D_{\delta}\phi\} -\frac{1}{2}D_{\gamma}\phi D_{\delta}F_{\mu \nu}D_{\alpha}D_{\rho}\phi D_{\beta}D_{\sigma}\phi \nonumber\\ && +i\phi F_{\mu \nu}\{D_{\alpha}(\{F_{\gamma \rho},D_{\delta}\phi\}),D_{\beta}D_{\sigma}\phi\}\nonumber\\ && +\frac{i}{2}\phi F_{\mu \nu}\{\{F_{\gamma \alpha},D_{\delta}D_{\rho}\phi\},D_{\beta}D_{\sigma}\phi\}\nonumber\\ && -\frac{1}{4}\{F_{\gamma \delta},\phi\ F_{\mu \nu}\}[\{F_{\alpha \rho},D_{\beta}\phi\},D_{\sigma}\phi]\nonumber\\ && +\frac{i}{2}D_{\gamma}\phi D_{\delta}F_{\mu \nu}[\{F_{\alpha \rho},D_{\beta}\phi\},D_{\sigma}\phi] +\frac{i}{2}\phi F_{\mu \nu}[[D_{\gamma}F_{\alpha \rho},D_{\delta}D_{\beta}\phi],D_{\sigma}\phi]\nonumber\\ && -\frac{1}{2}\phi F_{\mu \nu}D_{\gamma}(D_{\alpha}D_{\rho}\phi)D_{\delta}(D_{\beta}D_{\sigma} \phi)+\frac{1}{2}\phi F_{\mu \nu}[\{\{F_{\alpha \gamma},F_{\rho \delta}\},D_{\beta}\phi\},D_{\sigma}\phi]\nonumber\\ && +\frac{1}{2}\phi F_{\mu \nu}[\{\{F_{\gamma \beta},D_{\delta}\phi\},F_{\alpha \rho}\},D_{\sigma}\phi] \Bigg) .\label{NCDejstvo_S2-drugi-red-2} \end{eqnarray} \subsection{NC generalization of $S_3$} \noindent Finally, we consider the NC generalization of the action $S_3$ (\ref{KomDejstvo_S_3}). 
Inserting $\star$-products and promoting the commutative fields to the corresponding NC fields we arrive at: \begin{equation} S_{3NC}= -\frac{i}{128 \pi G_{N}l}{\rm Tr}\int \mathrm{d}^{4} x\, \varepsilon^{\mu\nu\rho\sigma} \hat{D}_\mu \hat\phi \star \hat{D}_\nu \hat\phi\star \hat{D}_{\rho}\hat{\phi} \star \hat{D}_{\sigma}\hat{\phi}\star \hat{\phi} .\label{NCDejstvo_S_3} \end{equation} The zeroth order of the action (\ref{NCDejstvo_S_3}) is the commutative action given by (\ref{KomDejstvo_S_3}). \noindent Following the same steps as in the previous subsections and using the formulae from Appendix B we calculate the first order correction to this action: \begin{eqnarray} S_{3NC}^{(1)}&=& -\frac{i}{128\pi G_Nl^3}\theta^{\alpha\beta}\int \mathrm{d}^{4} x\epsilon^{\mu\nu\r\s}{\rm Tr}\Big( -\frac14\{F_{\alpha\beta},D_\mu\phi D_\nu\phi\}D_\r\phi D_\s\phi \phi \nonumber\\ &&+\frac12\Big(\frac{i}{2}(D_\alpha D_\mu\phi)(D_\beta D_\nu\phi) + \frac12\{F_{\alpha\mu},D_\beta\phi\}D_\nu\phi \nonumber\\ &&+\frac12D_\mu\phi\{F_{\alpha\nu} ,D_\beta\phi\}\Big) D_\r\phi D_\s\phi \phi\nonumber\\ && +D_\mu\phi D_\nu\phi\Big( \frac{i}{2}D_\alpha(D_\r\phi D_\s\phi)D_\beta\phi +\frac{i}{2}(D_\alpha D_\r\phi) (D_\beta D_\s\phi)\phi\nonumber\\ && +\frac12\{F_{\alpha\r},D_\beta\phi\} D_\s\phi \phi+\frac12D_\r \phi\{F_{\alpha\s},D_\beta\phi\}\phi\Big)\Big) .\label{NCDejstvo_S_3_prvi_red} \end{eqnarray} Again, it is no surprise that the calculation of the traces leads to $S_{3NC}^{(1)} =0$. Therefore, the first non-vanishing correction is the second order correction. 
To calculate it we start from: \begin{eqnarray} S_{3NC}^{(2)}&=& -\frac{i}{256\pi G_Nl^3}\theta^{\alpha\beta}\int \mathrm{d}^{4} x\epsilon^{\mu\nu\r\s}{\rm Tr}\Big( -\frac14\{\hat F_{\alpha\beta}\stackrel{\star}{,} \hat D_\mu\hat{\phi} \hat D_\nu\hat{\phi}\}\star \hat D_\r\hat{\phi}\star \hat D_\s\hat{\phi}\star \hat{\phi}\nonumber\\ &&+ \frac12\Big(\frac{i}{2}(\hat D_\alpha \hat D_\mu\hat{\phi})\star(\hat D_\beta \hat D_\nu\hat{\phi})\nonumber\\ &&+\frac12\{\hat F_{\alpha\mu}\stackrel{\star}{,} \hat D_\beta\hat{\phi}\}\star \hat D_\nu\hat{\phi}+\frac12\hat D_\mu\hat{\phi}\star\{\hat F_{\alpha\nu} \stackrel{\star}{,} \hat D_\beta\hat{\phi}\}\Big)\star \hat D_\r\hat{\phi}\star \hat D_\s\hat{\phi} \star\hat{\phi}\nonumber\\ &&+ \hat D_\mu\hat{\phi}\star \hat D_\nu\hat{\phi}\star\Big( \frac{i}{2}\hat D_\alpha(\hat D_\r\hat{\phi} \star \hat D_\s\hat{\phi})\star \hat D_\beta\hat{\phi} +\frac{i}{2}(\hat D_\alpha \hat D_\r\hat{\phi}) \star(\hat D_\beta \hat D_\s\hat{\phi})\star\hat{\phi}\nonumber\\ &&+\frac12\{\hat F_{\alpha\r}\stackrel{\star}{,} \hat D_\beta\hat{\phi}\} \star \hat D_\s\hat{\phi}\star \hat{\phi}+\frac12\hat D_\r \hat{\phi}\star\{\hat F_{\alpha\s}\stackrel{\star}{,} \hat D_\beta\hat{\phi}\}\star\hat{\phi}\Big)\Big)^{(1)} . 
\label{NCDejstvo_S_3_prvidrugi_red} \end{eqnarray} Explicit calculation then gives: {\small \begin{eqnarray} S_{3NC}^{(2)} &=& -\frac{i}{256\pi G_Nl^3}\theta^{\alpha\beta}\theta^{\gamma\delta}\varepsilon^{\mu\nu\rho\sigma} {\rm Tr}\int d^{4} x\, \Bigg( \frac{1}{32}\{F_{\gamma\delta}, \{F_{\alpha\beta},D_{\mu}\phi D_{\nu}\phi\}\} D_{\rho}\phi D_{\sigma} \phi \phi\nonumber\\ && -\frac18 \Big(\frac{i}{2}[D_{\gamma}F_{\alpha \beta},D_{\delta}(D_{\mu}\phi D_{\nu}\phi)]+\frac{1}{2}\{\{F_{\alpha\gamma},F_{\beta\delta}\},D_{\mu}\phi D_{\nu}\phi\}\nonumber\\ && +\frac{i}{2}\{F_{\alpha\beta},(D_{\gamma}D_{\mu}\phi)(D_{\delta}D_{\nu}\phi)\} + \frac{1}{2}\{F_{\alpha\beta}, [D_{\mu}\phi,\{F_{\gamma\nu},D_{\delta} \phi\}]\}\Big) D_{\rho}\phi D_{\sigma} \phi \phi\nonumber\\ &&-\frac18\{F_{\alpha\beta},D_{\mu}\phi D_{\nu}\phi\}\Big( \frac{i}{2}D_{\gamma}(D_{\rho}\phi D_{\sigma}\phi)D_{\delta}\phi\nonumber\\ && +\frac{i}{2}(D_{\gamma}D_{\rho}\phi)(D_{\delta}D_{\sigma}\phi)\phi + \frac{1}{2}[\{F_{\gamma\rho},D_{\delta}\phi\},D_{\sigma}\phi] \phi\Big)\nonumber\\ &&+\frac{i}{4}\Big(-\frac{1}{4}\{F_{\gamma\delta},(D_{\alpha}D_{\mu}\phi)(D_{ \beta} D_{\nu}\phi)\} D_{\rho}\phi D_{\sigma} \phi \phi + \Big(\frac{i}{2}(D_{\gamma}D_{\alpha}D_{\mu}\phi)(D_{\delta}D_{\beta}D_{\nu} \phi)\nonumber\\ &&+\frac{1}{2}\{\{F_{\gamma\alpha},D_{\delta}D_{\mu}\phi\},D_{\beta}D_{\nu}\phi \} +\frac{1}{2}\{(D_{\alpha}\{F_{\gamma\mu},D_{\delta}\phi\}),(D_{\beta}D_{\nu} \phi)\} \Big) D_{\rho}\phi D_{\sigma} \phi \phi\nonumber\\ && + (D_{\alpha}D_{\mu} \phi)(D_{\beta}D_{\nu} \phi)\Big( \frac{i}{2}D_{\gamma}(D_{\rho}\phi D_{\sigma}\phi)D_{\delta}\phi \nonumber\\ && +\frac{i}{2}(D_{\gamma}D_{\rho}\phi)(D_{\delta}D_{\sigma}\phi)\phi + \frac{1}{2}[\{F_{\gamma\rho},D_{\delta}\phi\},D_{\sigma}\phi] \phi\Big)\Big)\nonumber\\ &&+\frac{1}{4}\Big(-\frac{1}{4}\{F_{\gamma\delta},[\{F_{\alpha\mu},D_{\beta} \phi\},D_{\nu}\phi]\} D_{\rho}\phi D_{\sigma} \phi \phi + 
\Big(\frac{i}{2}\{D_{\gamma}\{F_{\alpha\mu},D_{\beta}\phi\},D_{\delta}D_{\nu} \phi\}\nonumber\\ &&+\frac{i}{2}[[D_{\gamma}F_{\alpha\mu},D_{\delta}D_{\beta}\phi],D_{\nu}\phi] +\frac{1}{2}[\{\{F_{\alpha\gamma},F_{\mu\delta}\},D_{\beta}\phi\},D_{\nu}\phi] \nonumber\\ &&+\frac{1}{2}[\{F_{\alpha\mu},\{F_{\gamma\beta},D_{\delta}\phi\}\},D_{\nu}\phi] +\frac{1}{2}[\{F_{\alpha\mu},D_{\beta}\phi\},\{F_{\gamma\nu},D_{ \delta}\phi\}] \Big) D_{\rho}\phi D_{\sigma} \phi \phi \nonumber\\ &&+ \frac{i}{2}[\{F_{\alpha\mu},D_{\beta}\phi\},D_{\nu} \phi]\Big( D_{\gamma}(D_{\rho}\phi D_{\sigma}\phi)D_{\delta}\phi +(D_{\gamma}D_{\rho}\phi)(D_{\delta}D_{\sigma}\phi)\phi\nonumber\\ && -i[\{F_{\gamma\rho},D_{ \delta}\phi\},D_{\sigma}\phi] \phi\Big) \Big) \nonumber\\ &&+\frac{i}{4}\Big(-\frac{1}{4}\{F_{\gamma\delta},D_{\mu}\phi D_{\nu}\phi\}[D_{\alpha}D_{\rho}\phi,D_{\sigma}\phi]D_{\beta}\phi + \Big(\frac{i}{2}(D_{\gamma}D_{\mu}\phi)(D_{\delta}D_{\nu}\phi)\nonumber\\ &&+\frac{1}{2}[\{F_{\gamma\mu},D_{\delta}\phi\},D_{\nu}\phi]\Big)[D_{\alpha}D_{ \rho}\phi,D_{\sigma}\phi]D_{\beta}\phi\nonumber\\ && + D_{\mu}\phi D_{\nu} \phi\Big( \frac{i}{2}D_{\gamma}([D_{\alpha}D_{\rho}\phi, D_{\sigma}\phi])D_{\delta}D_{\beta}\phi +\frac{i}{2}\{D_{\gamma}D_{\alpha}D_{\rho}\phi,D_{\delta}D_{\sigma}\phi\}D_{ \beta}\phi \nonumber\\ &&+ \frac{1}{2}[\{F_{\gamma\alpha},D_{\delta}D_{\rho}\phi\},D_{\sigma}\phi] D_{\beta}\phi+ \frac{1}{2}[D_{\alpha}\{F_{\gamma \rho}, D_{\delta}\phi\}, D_{\sigma}\phi]D_{\beta}\phi\nonumber\\ &&+\frac{1}{2}[D_{\alpha}D_{\rho}\phi,\{F_{\gamma\sigma}, D_{\delta}\phi\}]D_{\beta}\phi +\frac{1}{2}[D_{\alpha}D_{\rho}\phi, D_{\sigma}\phi]\{F_{\gamma\beta}, D_{\delta} \phi\}\Big)\Big) \nonumber\\ &&+\frac{i}{4} \Big(-\frac{1}{4}\{F_{\gamma\delta},D_{\mu}\phi D_{\nu}\phi\} (D_{\alpha}D_{\rho}\phi) (D_{\beta}D_{\sigma} \phi) \phi + \Big(\frac{i}{2}(D_{\gamma}D_{\mu}\phi)(D_{\delta}D_{\nu}\phi)\nonumber\\ &&+\frac{1}{2}[\{F_{\gamma\mu},D_{\delta}\phi\},D_{\nu}\phi]\Big)(D_{\alpha}D_{ 
\rho}\phi)(D_{\beta}D_{\sigma}\phi)\phi \nonumber\\ && + D_{\mu} \phi D_{\nu} \phi\Big( \frac{i}{2}D_{\gamma}((D_{\alpha}D_{\rho}\phi) (D_{\beta}D_{\sigma}\phi))D_{\delta}\phi +\frac{i}{2}(D_{\gamma}D_{\alpha}D_{\rho }\phi)(D_{\delta}D_{\beta}D_{\sigma}\phi)\phi \nonumber\\ && + \frac{1}{2}\{\{F_{\gamma\alpha},D_{\delta}D_{\rho}\phi\},D_{\beta}D_{\sigma} \phi\} \phi + \frac{1}{2}\{(D_{\alpha}\{F_{\gamma\rho},D_{\delta}\phi\}), D_{\beta}D_{\sigma}\phi\}\phi\Big)\Big)\nonumber\\ && +\frac{i}{4}\Big(-\frac{1}{4}\{F_{\gamma\delta},[\{F_{\alpha\rho}, D_{\beta}\phi\},D_{\sigma}\phi]\}\phi D_{\mu}\phi D_{\nu}\phi + \Big(\frac{i}{2}\{D_{\gamma}\{F_{\alpha\rho}, D_{\beta}\phi\},D_{\delta}D_{\sigma}\phi\}\nonumber\\ &&\frac{i}{2}[[D_{\gamma}F_{\alpha\rho},D_{\delta}D_{\beta}\phi],D_{\sigma}\phi] +\frac{1}{2}[\{\{F_{\alpha\gamma},F_{\rho\delta}\},D_{\beta}\phi\},D_{\sigma} \phi] \nonumber\\ && +\frac{1}{2}[\{F_{\alpha\rho},\{F_{\gamma\beta},D_{\delta}\phi\}\},D_{\sigma} \phi] +\frac{1}{2}[\{F_{\alpha\rho},D_{\beta}\phi\},\{F_{\gamma\sigma},D_{\delta} \phi\}]\Big)\phi D_{\mu}\phi D_{\nu}\phi \nonumber\\ && + [\{F_{\alpha\rho},D_{\beta}\phi\},D_{\sigma}\phi]\Big(\frac{i}{2}D_{\gamma}\phi D_{\delta}(D_{\mu}\phi D_{\nu}\phi) \nonumber\\ && +\frac{i}{2}\phi(D_{\gamma}D_{\mu}\phi)(D_{\delta}D_{\nu}\phi) + \frac{1}{2}\phi[\{F_{\gamma\mu},D_{\delta}\phi\},D_{\nu}\phi]\Big)\Big) \Bigg) .\label{NCDejstvo_S_3_drugi_red} \end{eqnarray} } The expanded actions (\ref{NCDejstvo1Exp2}), (\ref{NCDejstvo_S2-drugi-red-2}) and (\ref{NCDejstvo_S_3_drugi_red}) are obviously invariant under the commutative $SO(2,3)$ gauge transformations, as guaranteed by the SW map. 
\setcounter{equation}{0} \section{Symmetry breaking and the low energy expansion} \noindent The second order expansion of the NC actions (\ref{NCdejstvo_S_1}, \ref{NCDejstvo_S_2}, \ref{NCDejstvo_S_3}), given by equations (\ref{NCDejstvo1Exp2}, \ref{NCDejstvo_S2-drugi-red-2}, \ref{NCDejstvo_S_3_drugi_red}), is explicitly invariant under the commutative $SO(2,3)$ gauge symmetry. In order to relate these expanded actions to General Relativity and its NC corrections, we have to follow the same steps as in the commutative model, Section 2. First we break the $SO(2,3)$ gauge symmetry down to the $SO(1,3)$ gauge symmetry (local Lorentz symmetry). Then we calculate the traces and write the actions in terms of the geometric quantities (curvature, vierbeins, metric). Let us proceed step by step. \noindent The symmetry breaking is done by choosing the field $\phi$ to be of the form $\phi=(0,0,0,0,l)$. In this way the zeroth order of the actions (\ref{NCdejstvo_S_1}, \ref{NCDejstvo_S_2}, \ref{NCDejstvo_S_3}) reduces to the commutative model with the Gauss--Bonnet term, the Einstein-Hilbert term and the cosmological constant term, (\ref{FullCommAction}). Then we calculate the traces. As we have seen, the first order correction vanishes, so the second order correction is the first non-vanishing one. It is very long and we will not write the full expressions here. Moreover, the expanded actions contain terms that are of fourth and lower order in the curvature and of second and lower order in the torsion. Analyzing the full action is very demanding; in particular, finding the equations of motion is a highly non-trivial calculation. Additionally, there is no guarantee that the obtained equations of motion will remain second order partial differential equations with respect to the metric and the connection. There are higher order gravity theories, like the Lovelock theories, in which the equations of motion remain second order differential equations.
In our case, unfortunately it is not clear what will happen with the equations of motion and a careful analysis has to be done. \subsection{Low energy effective NC gravity action} \noindent However, we can still analyze different sectors of our model, such as high energy behavior, or low energy behavior, with or without the cosmological constant, etc. In this paper we are interested in the low energy corrections. To be more precise, we keep terms that have at most two derivatives on vierbeins. Therefore, in our analysis we include terms linear in curvature, linear and quadratic in torsion. Additionally, we assume that the spin connection $\omega_\mu^{ab}$ and first order derivatives of vierbeins such as $\partial_\rho e_\alpha^b$ are of the same order. The low energy NC correction of the action $S_{1NC}$ is given by \begin{eqnarray} S_{1NC}^{(2)} &=&-\frac{1}{128\pi G_Nl^4}\int {\rm d}^{4} x\, e \theta^{\alpha\beta}\theta^{\gamma\delta}\Big( 2R_{\alpha\beta\gamma\delta} -4R_{\alpha\gamma\beta\delta}+6g_{\beta\delta}{R_{\alpha\mu\gamma}}^{\mu} \nonumber\\&& -\frac{6}{l^{2}} g_{\alpha\gamma} g_{\beta\delta} -5 T_{\alpha\beta}^a T_{\gamma\delta a} +10T_{\alpha\gamma}^a T_{\beta\delta a} -3T_{\alpha\beta\gamma}T_{\delta\mu}^{\ \ \mu}- T_{\alpha\beta\r}T^\r_{\ \gamma\delta} -8T_{\alpha\gamma\delta}T_{\beta\mu}^{\ \ \mu}\nonumber\\&& -2T_{\alpha\gamma\mu}e_\beta^b\nabla_\delta e_\mu^b -2T_{\alpha\gamma\beta}e^\r_a\nabla_\delta e_\rho^a+6T_{\delta\r\beta}e^\r_a\nabla_\alpha e_\gamma^a \nonumber\\ &&-2T_{\alpha\beta\delta}e^\r_a\nabla_\gamma e_\r^a +T_{\alpha\beta}^{\ \ \mu}e_{\delta a}\nabla_\gamma e_\mu^a +4e^{\mu}_ae_{b\beta} \nabla_{\gamma}e_{\alpha}^{a}\nabla_{\delta}e_{\mu}^{b}+4e^{\mu}_ae_{b\delta} \nabla_{\gamma}e_{\beta}^{b}\nabla_{\alpha}e_{\mu}^{a} \nonumber\\ &&+2g_{\alpha\gamma}e^{\mu}_ae^{\nu}_b \nabla_{\beta}e_{\mu}^{a}\nabla_{\delta}e_{\nu}^{b} -2g_{\alpha\gamma} e^\mu_b e^\nu_a\nabla_\delta e_\nu^b \nabla_\beta e_{\mu }^a \Big) .\nonumber \end{eqnarray} 
The low energy NC correction of the action $S_{2NC}$ is given by \begin{eqnarray} S_{2NC}^{(2)} &=&\frac{1}{256\pi G_Nl^4}\int {\rm d}^{4} x\, e \theta^{\alpha\beta}\theta^{\gamma\delta}\Big( 20R_{\alpha\beta\gamma\delta} -28R_{\alpha\gamma\beta\delta}-56g_{\beta\delta}{R_{\alpha\mu\gamma}}^{\mu} \nonumber\\&& +\frac{68}{l^{2}} g_{\alpha\gamma} g_{\beta\delta}+ T_{\alpha\beta}^a T_{\gamma\delta a} -11T_{\alpha\gamma}^a T_{\beta\delta a} +6T_{\alpha\beta\r}T_{\r\gamma\delta}-16T_{\alpha\gamma\beta}T_{\delta\mu}^{\ \ \mu}\nonumber\\&& +24 T_{\alpha\gamma}^{\ \ \r}T_{\r\delta\beta}+4g_{\beta\delta}T_{\gamma\s}^{\ \ \s} T_{\alpha\r}^{\ \ \r}-4g_{\beta\delta}T_{\gamma\s\r}T_{\alpha}^{\ \r\s}\nonumber\\ &&+4T_{\alpha\gamma\delta}e^\mu_b\nabla_\beta e_\mu^b +4T_{\alpha\beta\gamma}e^\mu_b\nabla_\delta e_\mu^b+2T_{\alpha\beta\r}e_\gamma^a\nabla_\delta e_\rho^a\nonumber\\ &&-12T_{\alpha\gamma}^{\ \ \r} e_{\delta a}\nabla_\beta e_\r^a -28T_{\alpha\mu\gamma}e^\mu_a\nabla_\beta e_\delta^a +4g_{\beta\gamma}T_{\alpha\r}^{\ \ \s}e^\r_b\nabla_\delta e_\s^b -4g_{\alpha\gamma}T_{\nu\beta}^{\ \ \nu}e_{\mu}^a\nabla_\delta e_\mu^a\nonumber\\ &&-40 e^{\r}_ae_{\delta b} \nabla_{\alpha}e_{\gamma}^{a}\nabla_{\beta}e_{\r}^{b} -12g_{\beta\delta} e^\mu_b e^\nu_a\nabla_\gamma e_\mu^b \nabla_\alpha e_{\nu }^a\nonumber\\&&+32 e^{\mu}_be_{\delta a} \nabla_{\alpha}e_{\gamma}^{a}\nabla_{\beta}e_{\mu}^{b} +12 g_{\beta\delta}e^\r_c e^\mu_a \nabla_\alpha e_\r^a \nabla_\gamma e_\mu^c \Big) .\nonumber \end{eqnarray} The low energy NC correction of the action $S_{3NC}$ is given by \begin{eqnarray} } \def\eea{\end{eqnarray} S_{3NC}^{(2)}&&= \int d^{4} x\, \frac{ e \theta^{\alpha\beta}\theta^{\gamma\delta}}{128\pi G_N l^4} \Big( 38 R_{\alpha\beta\gamma\delta}-44R_{\alpha\gamma\beta\delta}-36R_{\alpha\gamma}g_{ \beta\delta}+\frac{56}{l^{2}} g_{\alpha\gamma} g_{\beta\delta}\nonumber\\ &&-7 T_{\alpha\beta}^{a} T_{\gamma\delta a} +14T_{\alpha\gamma}^{a} T_{\beta\delta 
a}-2T_{\alpha\beta\gamma}T_{\delta\rho}^{\rho}+ 4T_{\alpha\gamma}^{\rho}T_{\delta\rho\beta} +4 g_{\beta\delta}T_{\alpha\rho}^{\rho}T_{\gamma\sigma}^{\sigma}\nonumber\\ &&-4g_{\beta\delta} T_{\alpha\rho}^{\sigma}T_{\gamma\sigma}^{\rho}+ 32\nabla_{\alpha}e_{\gamma}^{a}e_{\delta a}\nabla_{\beta}e_{\rho}^{b}e^{\rho}_{b} -32\nabla_{\alpha}e_{\gamma}^{a}e^{\rho}_{a}\nabla_{\beta}e_{\rho}^{b}e_{\delta b}\nonumber\\&& +8g_{\beta\delta}\nabla_{\alpha} e_{\rho}^a e^{\sigma}_a\nabla_{\gamma} e_{\sigma}^b e^{\rho}_b -8g_{\beta\delta}\nabla_{\alpha} e_{\rho}^a e^{\rho}_a\nabla_{\gamma} e_{\sigma}^b e^{\sigma}_b -12T_{\alpha\gamma}^{\rho}\nabla_{\beta} e_{\rho}^a e_{\delta a}\nonumber\\ &&+18T_{\alpha\beta\gamma}\nabla_{\delta}e_{\rho}^{a} e^{\rho}_{a}-8T_{\alpha\gamma\beta}\nabla_{\delta}e_{\rho}^{a} e^{\rho}_{a} +16T_{\gamma\rho\beta}\nabla_{\alpha}e_{\delta}^{a} e^{\rho}_{a} \nonumber\\ &&-4T_{\alpha\rho}^{\sigma}\nabla_{\gamma}e_{\sigma}^{a} e^{\rho}_{a}g_{\beta\delta} + 4T_{\alpha\rho}^{\rho}\nabla_{\gamma}e_{\sigma}^{a} e^{\sigma}_{a}g_{\beta\delta}\Big) . 
\eea Remembering that $c_1+c_2=1$ the resulting action follows \begin{eqnarray} S_{NC} &=& -\frac{1}{16\pi G_{N}}\int {\rm d}^{4}x\, e\Big(R-\frac{6}{l^2}(1+c_2+2c_3)\Big)\nonumber\\ && +\frac{1}{128\pi G_Nl^4}\int {\rm d}^{4} x\, e \theta^{\alpha\beta}\theta^{\gamma\delta}\Big( (-2+12c_2+38c_3)R_{\alpha\beta\gamma\delta} \nonumber\\ &&+(4-18c_2-44c_3)R_{\alpha\gamma\beta\delta} -(6+22c_2+36c_3)g_{\beta\delta}{R_{\alpha\mu\gamma}}^{\mu} \nonumber\\ &&+\frac{6+28c_2+56c_3}{l^{2}} g_{\alpha\gamma} g_{\beta\delta} +(5-\frac92c_2-7c_3) T_{\alpha\beta}^a T_{\gamma\delta a}\label{action-linearnoRT}\\ && +(-10+\frac92c_2+14c_3)T_{\alpha\gamma}^a T_{\beta\delta a} +(3-3c_2-2c_3)T_{\alpha\beta\gamma}T_{\delta\mu}^{\ \ \mu}\nonumber\\&&+(1+2c_2) T_{\alpha\beta\r}T^\r_{\ \gamma\delta} +8T_{\alpha\gamma\delta}T_{\beta\mu}^{\ \ \mu}\nonumber\\&&-(2c_2+4c_3) T_{\alpha\gamma\r}T^\r_{\ \delta\beta} +(2c_2+4c_3)g_{\beta\delta}T_{\gamma\s}^{\ \ \s}T_{\alpha\r}^{\ \ \r}\nonumber\\ && -(2c_2+4c_3)T_{\alpha\r\s}T_\gamma^{\ \s\r}g_{\beta\delta} +(-2+4c_2+18c_3)T_{\alpha\beta\gamma}e^\r_a\nabla_\delta e_\rho^a \nonumber\\ &&+(6-8c_2-8c_3)T_{\alpha\gamma\beta}e^\r_a\nabla_\delta e_\rho^a +(2+4c_2+12c_3)T_{\alpha\gamma}^{\ \ \mu}e_\beta^a\nabla_\delta e_\mu^a\nonumber\\ &&-T_{\alpha\beta}^{\ \ \mu}e_\delta^a\nabla_\gamma e_\mu^a + (-6-8c_2-16c_3)T_{\delta\r\beta}e^\r_a\nabla_\alpha e_\gamma^a \nonumber\\ &&-(2c_2+4c_3)g_{\alpha\gamma}T_{\mu\beta}^{\ \ \mu}e^\r_a\nabla_\delta e_\r^a -(2c_2+4c_3)g_{\beta\delta}T_{\alpha\r}^{\ \ \s} e_a^\rho \nabla_\gamma e_\s^a \nonumber\\ &&-(4+16c_2+32c_3)e^{\mu}_ae_{b\beta} \nabla_{\gamma}e_{\alpha}^{a}\nabla_{\delta}e_{\mu}^{b} +(4+12c_2+32c_3)e_{\delta a}e_{b}^\mu \nabla_{\alpha}e_{\gamma}^{a}\nabla_{\beta}e_{\mu}^{b}\nonumber\\ &&-(2+4c_2+8c_3)g_{\beta\delta}e^{\mu}_ae^{\nu}_b \nabla_{\gamma}e_{\mu}^{a}\nabla_{\alpha}e_{\nu}^{b} +(2+4c_2+8c_3)g_{\beta\delta} e^\mu_a e^\r_c\nabla_\alpha e_\r^a \nabla_\gamma e_{\mu }^c \Big) .\nonumber \end{eqnarray} To 
obtain this action we used that $D_\alpha F_{\mu\nu}$ is the $SO(2,3)$ covariant derivative and its components are \begin{eqnarray} } \def\eea{\end{eqnarray} &&(D_\alpha F_{\mu\nu})^{ab} = \nabla_\alpha F_{\mu\nu}^{\ \ ab} - \frac{1}{l^2}(e_\alpha ^aT_{\mu\nu}^{\ b} -e_\alpha ^bT_{\mu\nu}^{\ a}) ,\nonumber\\ &&(D_\alpha F_{\mu\nu})^{a5} = \frac{1}{l}( \nabla_\alpha T_{\mu\nu}^{a} + e_\alpha^m F_{\mu\nu m}^{\ \ \ \ a}) \, ,\nonumber\\ &&(D_\kappa D_\alpha F_{\mu\nu})^{ab} = \nabla_\kappa\nabla_\alpha F_{\mu\nu}^{\ \ ab} - \frac{1}{l^2}\Big( (\nabla_\kappa e_\alpha ^a)T_{\mu\nu}^{\ b} -(\nabla_\kappa e_\alpha ^b)T_{\mu\nu}^{\ a} + e_\alpha ^a(\nabla_\kappa T_{\mu\nu}^{\ b})\nonumber\\ && - e_\alpha ^b(\nabla_\kappa T_{\mu\nu}^{\ a}) + e_\kappa ^a(\nabla_\alpha T_{\mu\nu}^{\ b}) - e_\kappa ^b(\nabla_\alpha T_{\mu\nu}^{\ a}) + e_\kappa ^a e^m_\alpha F_{\mu\nu m}^{\ \ \ \ b} - e_\kappa ^b e^m_\alpha F_{\mu\nu m}^{\ \ \ \ a} \Big) \ ,\nonumber \eea with the $SO(1,3)$ covariant derivative \begin{eqnarray} \nabla_\alpha F_{\mu\nu}^{\ \ ab} &=& \partial_\alpha F_{\mu\nu}^{\ \ ab} + \omega_\alpha^{ac}F_{\mu\nu c}^{\ \ \ b} -\omega_\alpha^{bc}F_{\mu\nu c}^{\ \ \ a},\nonumber\\ \nabla_\alpha T_{\mu\nu}^{a} &=& \partial_\alpha T_{\mu\nu}^{a} + \omega_\alpha^{ac} T_{\mu\nu c}.\nonumber \end{eqnarray} We also used that \begin{eqnarray} (D_\alpha \phi) ^{a} &=& e_\alpha^{\ a},\nonumber\\ (D_\alpha \phi) ^{5} &=&0 ,\nonumber\\ (D_\alpha D_\beta \phi)^{a} &=&(\nabla_\alpha e_\beta)^a ,\nonumber\\ (D_\alpha D_\beta \phi)^{5} &=&-\frac{1}{l}g_{\alpha\beta}\ .\nonumber \end{eqnarray} \noindent Before we determine the equations of motion, let us briefly discuss the action (\ref{action-linearnoRT}). We see that this action is invariant under the $SO(1,3)$ gauge symmetry. However, due to the noncommutativity it is no longer invariant under the diffeomorphism symmetry. The non-invariant terms manifest themselves in two ways. 
Firstly, there are tensors contracted with the NC parameter $\theta^{\alpha\beta}$, such as $\theta^{\alpha\beta}\theta^{\kappa\lambda}R_{\alpha\kappa\beta\lambda}$. Since $\theta^{\alpha\beta}$ is not a tensor under the diffeomorphism symmetry (it is a constant matrix that does not transform under diffeomorphisms), those terms are not scalars (tensors) either. Secondly, there are terms in which $SO(1,3)$ covariant derivatives of vierbeins appear. Using the metricity condition \begin{equation} \nabla_\mu^{tot} e_\rho^{\ a} = \partial_\mu e_\rho^{\ a} + \omega_\mu^{ab} e_{\rho b} - \Gamma_{\mu\rho}^\sigma e_\sigma^{\ a} =0 \end{equation} the $SO(1,3)$ covariant derivative can be written as \begin{equation} \nabla_\mu e_\rho^{\ a} = \partial_\mu e_\rho^{\ a} + \omega_\mu^{ab} e_{\rho b} = \Gamma_{\mu\rho}^\sigma e_\sigma^{\ a} \ . \label{LorKovIzvTetrada} \end{equation} Therefore, the affine connection $\Gamma_{\mu\rho}^\sigma$ appears explicitly in (\ref{action-linearnoRT}). Note that this affine connection does not have to be given by the Christoffel symbols. We will see in Section 6 that the noncommutativity can generate the antisymmetric part of the connection, leading to the appearance of torsion. Some of the terms with the explicit $\Gamma_{\mu\rho}^\sigma$s can be grouped to form the curvature tensor, but some will remain and make the diffeomorphism non-invariance explicit. \subsection{Low energy equations of motion} \noindent The equations of motion are obtained by varying the action (\ref{action-linearnoRT}) with respect to the vierbein and the spin connection. Some useful formulae are given in Appendix C. In this article we are interested in NC corrections to GR solutions with vanishing torsion. Therefore, in the equations of motion we impose the condition $T_{\mu\nu}^a=0$. A more general form of the equations of motion will be presented in future work.
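The component formulae for $D_\alpha\phi$ and $D_\alpha D_\beta\phi$ quoted above can be cross-checked numerically. The sketch below assumes the $SO(2,3)$ conventions of this framework, namely the group metric $\eta_{AB}=\mathrm{diag}(+,-,-,-,+)$ and the identification $\omega_\mu^{a5}=e_\mu^a/l$; with constant field configurations only the algebraic part of the covariant derivative survives:

```python
import numpy as np

rng = np.random.default_rng(42)
l = 2.0
eta4 = np.diag([1.0, -1.0, -1.0, -1.0])        # SO(1,3) tangent-space metric
eta5 = np.diag([1.0, -1.0, -1.0, -1.0, 1.0])   # SO(2,3) group metric (assumed signature)

# random constant vierbein e_mu^a and antisymmetric spin connection omega_mu^{ab}
e = rng.normal(size=(4, 4))                     # e[mu, a]
w = rng.normal(size=(4, 4, 4))
w = w - np.swapaxes(w, 1, 2)                    # w[mu, a, b] = -w[mu, b, a]

# full SO(2,3) connection omega_mu^{AB}, with omega_mu^{a5} = e_mu^a / l
Om = np.zeros((4, 5, 5))
Om[:, :4, :4] = w
Om[:, :4, 4] = e / l
Om[:, 4, :4] = -e / l

phi = np.array([0.0, 0.0, 0.0, 0.0, l])

# (D_mu phi)^A = omega_mu^{A}{}_B phi^B for the constant field phi
Dphi = np.einsum('mAB,BC,C->mA', Om, eta5, phi)
assert np.allclose(Dphi[:, :4], e)              # (D_mu phi)^a = e_mu^a
assert np.allclose(Dphi[:, 4], 0.0)             # (D_mu phi)^5 = 0

# second covariant derivative; for constant e, omega only the algebraic term survives
DDphi = np.einsum('aAB,BC,bC->abA', Om, eta5, Dphi)
g = np.einsum('ma,ab,nb->mn', e, eta4, e)       # metric g_{mu nu} = e_mu^a eta_ab e_nu^b
assert np.allclose(DDphi[:, :, 4], -g / l)      # (D_alpha D_beta phi)^5 = -g_{alpha beta}/l
# (D_alpha D_beta phi)^a reduces to omega_alpha^{ab} e_{beta b} = nabla_alpha e_beta^a here
assert np.allclose(DDphi[:, :, :4], np.einsum('mab,bc,nc->mna', w, eta4, e))
print("covariant derivative identities verified")
```

The check is basis-independent: any constant vierbein and spin connection give the same identities.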
\noindent Finally, the equation of motion for the vierbein is given by \begin{equation} R_{\alpha\gamma}^{\ \ cd}e^\gamma_d e_a^\alpha e_c^\mu-\frac12e^\mu_a R+\frac{3}{l^2}(1+c_2+2c_3)e^\mu_a=\tau_a^{\ \mu} \, , \end{equation} where \begin{eqnarray} } \def\eea{\end{eqnarray} \tau_a^{\ \mu}&=&- \frac{8\pi G_N}{e} \frac{\delta S^{(2)}_{NC}}{\delta e_\mu^a}\nonumber\\ &=&-\frac{\theta^{\alpha\beta}\theta^{\gamma\delta}}{16l^4}\Big( (-4+6c_2+22c_3)e^\mu_a R_{\alpha\beta\gamma\delta}\nonumber\\ &&+(4-18c_2+44c_3)(e^\mu_a R_{\alpha\gamma\beta\delta}- 2\delta_\alpha^\mu R_{\beta\delta \gamma a}) \nonumber\\ &&-(6+22c_2+36c_3)e^\mu_a g_{\beta\delta}R_{\alpha\gamma} +2\delta_\alpha^\mu e_{\gamma a}R_{\beta\delta}+g_{\beta\delta}R_{\gamma \lambda}e^{\lambda a} \delta_\alpha^\mu -g_{\beta\delta}R_{\alpha\lambda\gamma}^{\ \ \ \ \mu}e_a^\lambda) \nonumber\\ &&+(4+16c_2+32c_3)(e^\mu_c e_{\gamma b}e^\rho_a\nabla_\beta e_\delta^c\nabla_\alpha e_\rho^b\nonumber\\ &&- e^\mu_a e_{\beta b}e^\rho_c\nabla_\gamma e_\alpha^c\nabla_\delta e_\rho ^b - \delta^\mu_\alpha e^\rho_b\nabla_\gamma e_{\r a}\nabla_\delta e_\beta^b)\nonumber\\ &&+(2+2c_2+4c_3)g_{\beta\delta}e_a^\mu e^\rho_c e_d^\s( \nabla_\alpha e_{\r }^c\nabla_\gamma e_\s^d- \nabla_\alpha e_{\r }^d\nabla_\gamma e_\s ^c)\nonumber\\ &&+(4+6c_2+12c_3)(e^\mu_ag_{\beta\delta}e_b^\r\nabla_\alpha \nabla_\gamma e_\r ^b -e_{\delta b} e_c^\rho e^\mu_a\nabla_\beta e_{\r }^c\nabla_\gamma e_\alpha ^b \nonumber\\ &&+g_{\beta\delta}e_a^\r e^\s_b e^\mu_c\nabla_\alpha e_\s^c \nabla_\gamma e_\r ^b-g_{\beta\delta}e_a^\r e^\mu_b\nabla_\alpha \nabla_\gamma e_\r ^b)\nonumber\\ &&-(4+12c_2+32c_3)e^\mu_a\nabla_\beta e_{\delta b}\nabla_\gamma e_{\alpha}^b+(7-14c_2-54c_3)\delta^\mu_\alpha R_{\gamma\delta\beta a}\nonumber\\ &&+(5+12c_2+24c_3)(2e_{\gamma a} e_b^\mu e_c^\s \nabla_\delta e_\beta^b\nabla_\alpha e_{\s }^c + 2 e_b^\mu \nabla_\alpha e_{\gamma a}\nabla_\delta e_{\beta }^b \nonumber\\ &&- 2e_{\gamma a} e_b^\s e^\mu_c \nabla_\alpha e_\s^c\nabla_\delta 
e_{\beta }^b-e_{\delta a}R_{\alpha\beta\ \gamma}^{\ \ \ \mu})\nonumber\\ &&+(2+8c_2-12c_3)\delta^\mu_\alpha e_{\delta a}( e_c^\s e_b^\r \nabla_\beta e_\s^c\nabla_\gamma e_{\r }^b- e_b^\s e_c^\r \nabla_\beta e_\s^c\nabla_\gamma e_{\r }^b)\nonumber\\ &&+(6+24c_2+36c_3)\delta^{\mu}_{ \alpha} e_b^\r \nabla_\gamma e_\r^b\nabla_\beta e_{\delta a}+2(-2+4c_2+18c_3)\delta_\alpha^\mu e_{\gamma a} e_b^\r\nabla_\beta \nabla_\delta e_\r ^b\nonumber\\ &&+(2+4c_2+12c_3)\delta_\alpha^\mu e_{\gamma b} e_a^\r\nabla_\delta \nabla_\beta e_\r ^b \nonumber\\ &&-(6+8c_2+16c_3)\delta_\alpha^\mu(e_{\gamma a} e_c^\s e_b^\r \nabla_\delta e_\beta^b\nabla_\r e_{\s }^c+ e_d^\r \nabla_\delta e_\beta^d\nabla_\r e_{\gamma a}\nonumber\\ &&- e_{\gamma a} e_c^\r e^\s_d\nabla_\r e_\s^c\nabla_\delta e_{\beta }^d + e_{\gamma a} e_d^\r \nabla_\r \nabla_\delta e_{\beta }^d)\nonumber\\ &&+(6-8c_2-8c_3)\delta^\mu_\alpha e_{\gamma a} e_d^\r \nabla_\delta\nabla_\beta e_{\r }^d\nonumber\\ &&+(2+2c_2+8c_3)\delta_\alpha^\mu e_{\delta b}(e^\r_c e_a^\s \nabla_\gamma e_\s^c\nabla_\r e_{\beta }^b-e^\s_c e_a^\r \nabla_\gamma e_\s^c\nabla_\beta e_{\r }^b)\nonumber\\ &&+(6+22c_2+48c_3)\delta_\alpha^\mu e_{\beta b}(e^\s_c e_a^\r \nabla_\gamma e_\s^c\nabla_\delta e_{\r }^b-e^\s_a e_c^\r \nabla_\gamma e_\s^c\nabla_\delta e_{\r }^b)\nonumber\\ &&-2\delta_\alpha^\mu(e^\s_c e_a^\r e_{\delta b} \nabla_\beta e_\s^c\nabla_\gamma e_{\r }^b-e^\r_c e_a^\s e_{\delta b} \nabla_\beta e_\s^c\nabla_\gamma e_{\r }^b-e^\r_a e_\gamma^b \nabla_\beta \nabla_\delta e_{\r }^b )\nonumber\\ &&-(8+20c_2+44c_3)\delta_\alpha^\mu e_a^\r \nabla_\gamma e_\r^b\nabla_\beta e_{\delta }^b\nonumber\\ &&+(2c_2+4c_3)g_{\beta\delta}(\delta_\alpha^\mu e_b^\r e^\s_a \nabla_\s \nabla_\gamma e_{\r }^b\nonumber\\ &&-\delta_\alpha^\mu e_c^\s e^\r_a \nabla_\s \nabla_\gamma e_{\r }^c - \delta_\alpha^\mu e^\nu_a e_c^\r e^\s_d \nabla_\nu e_\delta^c \nabla_\gamma e_{\r }^d\nonumber\\ &&-\delta_\alpha^\mu e^\nu_d e_c^\r e^\s_a \nabla_\r e_\nu^d \nabla_\gamma e_{\s }^c- 
e^\r_a e_b^\nu e^\s_c \nabla_\nu e_\r^c \nabla_\gamma e_{\s }^b - e^\s_a e_c^\nu e^\r_d \nabla_\r e_\nu^d \nabla_\gamma e_{\s }^c)\nonumber\\ &&+\frac{6+28c_2+56c_3}{l^2}(g_{\alpha\gamma}g_{\beta\delta} e^{\mu}_a+4g_{\beta\delta}\delta_\alpha^\mu e_{\gamma}^a) \Big)\ . \eea Multiplying the previous equation with $e_a^\nu$ and using the metricity condition we obtain \begin{equation}} \def\ee{\end{equation} R^{\nu\mu}-\frac12g^{\mu\nu}R+\frac{3}{l^2}(1+c_2+2c_3) g^{\mu\nu}=\tau^{\mu\nu} ,\label{EoM-metric-0} \ee with \begin{eqnarray} } \def\eea{\end{eqnarray} \tau^{\mu\nu} &=&-\frac{\theta^{\alpha\beta}\theta^{\gamma\delta}}{16l^4}\Big( (-4+6c_2+22c_3)R_{\alpha\beta\gamma\delta}g^{\mu\nu}-(7-14c_2-54c_3)R^\nu_{\ \beta\gamma\delta}\delta_\alpha^\mu\nonumber\\ && +(4-18c_2-44c_3)(2R^\nu_{\ \gamma\beta\delta}\delta_\alpha^\mu+R_{\alpha\gamma\beta\delta}g^{\mu\nu})\nonumber\\ &&-(5+12c_2+24c_3)R^\mu_{\ \gamma\alpha\beta}\delta^\nu_\delta\nonumber\\ &&-(6+22c_2+36c_3)(g^{\mu\nu} g_{\beta\delta}R_{\alpha\gamma}+2\delta_\beta^\mu \delta_\delta^\nu R_{\alpha\gamma}+g_{\beta\delta}\delta_\gamma^\mu R_{\alpha}^{\ \nu}-g_{\beta\delta}R_{\alpha\ \gamma}^{\ \nu\ \mu})\nonumber\\ &&+(4+16c_2+32c_3)(g_{\s\beta}g^{\r\nu} \Gamma^{\mu}_{\alpha\gamma}\Gamma^{\s}_{\delta\r}-g^{\mu\nu} g_{\r\beta}\Gamma^{\s}_{\alpha\gamma}\Gamma^{\r}_{\s\delta})\nonumber\\ &&+(2+2c_2+4c_3)g^{\mu\nu} g_{\delta\beta}\Gamma^{\s}_{\alpha\s}\Gamma^{\r}_{\gamma\r}-(4+12c_2+32c_3)g^{\mu\nu} g_{\r\s}\Gamma^{\r}_{\beta\delta}\Gamma^{\s}_{\gamma\alpha}\nonumber\\ &&+(2+4c_2+8c_3)g^{\mu\nu}g_{\beta\delta}\Gamma^\s_{\gamma\r}\Gamma^\r_{\alpha\s}\nonumber\\ &&+(4+6c_2+12c_3)(g^{\mu\nu}(g_{\beta\delta}\partial_\alpha\Gamma^\r_{\gamma\r}-g_{\delta\r}\Gamma^\s_{\beta\s}\Gamma^\r_ { \alpha\gamma})\nonumber\\ &&+g^{\r\nu}(g_{\alpha\s} \Gamma^\s_{\beta\delta}\Gamma^\mu_{\gamma\r}+g_{\alpha\delta} \partial_\beta\Gamma^\mu_{\gamma\r}))\nonumber\\ &&+(10+24c_2+48c_3)( 
\delta^\nu_\gamma\Gamma^\r_{\alpha\r}\Gamma^\mu_{\beta\delta}+ \Gamma^\mu_{\delta\beta}\Gamma^\nu_{\alpha\gamma}-\delta^\nu_\gamma\Gamma^\mu_{\alpha\s}\Gamma^\s_{\beta\delta})\nonumber\\ &&- (4+4c_2+8c_3)g_{\beta\delta}g^{\s\nu} \Gamma^{\r}_{\gamma\r}\Gamma^{\mu}_{\alpha\s}+(6+24c_2+36c_3)\delta_\alpha^\mu\Gamma^{\r}_{\gamma\r}\Gamma^{\nu}_{ \beta\delta }\nonumber\\ &&+(2+8c_2-12c_3)\delta^\mu_\alpha \delta^\nu_\delta(\Gamma^\s_{\beta\s}\Gamma^\r_{\gamma\r}-\Gamma^\r_{\beta\s}\Gamma^\s_{\gamma\r})\nonumber\\ &&+(-4+8c_2+36c_3) \delta_\gamma^\nu\delta_\alpha^\mu\partial_\beta\Gamma^\r_{\delta\r}+(6-8c_2-8c_3) \delta_\gamma^\nu\delta_\alpha^\mu\partial_\delta\Gamma^\r_{\delta\beta}\nonumber\\ &&+(2+28c_3)\delta^\mu_\alpha \delta^\nu_\gamma \Gamma^\s_{\delta\r}\Gamma^\r_{\beta\s}+2\delta_\alpha^\mu g^{\r\nu}g_{\gamma\kappa}(\partial_\beta\Gamma^\kappa_{\delta\r}+\Gamma^\s_{\delta\r}\Gamma^\kappa_{\beta\s})\nonumber\\ &&-(6+8c_2+16c_3)\delta_\alpha^\mu(\delta^\nu_\gamma\Gamma^{\s}_{\r\s}\Gamma^{\r}_{ \beta\delta }+\Gamma^{\nu}_{\r\gamma}\Gamma^{\r}_{ \beta\delta }+\delta_\gamma^\nu\partial_\r\Gamma^{\r}_{\beta\delta})\nonumber\\ &&+(2+2c_2+8c_3)\delta^\mu_\alpha g^{\r\nu}g_{\tau\delta}(-\Gamma^\s_{\gamma\s}\Gamma^\tau_{ \beta\r } + \Gamma^\s_{\gamma\r}\Gamma^\tau_{\beta\s})\nonumber\\ &&+(6+22c_2+48c_3)\delta^\mu_\alpha g^{\r\nu}g_{\tau\beta}(\Gamma^\s_{\gamma\s}\Gamma^\tau_{ \delta\r } - \Gamma^\s_{\gamma\r}\Gamma^\tau_{\delta\s})\nonumber\\ &&-2\delta_\alpha^\mu g_{\kappa\delta}g^{\r\nu}(\Gamma^\tau_{\beta\tau}\Gamma^\kappa_{ \gamma\r }-\Gamma^\s_{\beta\r}\Gamma^\kappa_{ \gamma\s })\nonumber\\ &&+(2+4c_2+12c_3) \delta_\alpha^\mu g_{\tau\gamma}g^{\r\nu}(\partial_\delta\Gamma^\tau_{ \beta\r } +\Gamma^\s_{\beta\r}\Gamma^\tau_{\delta\s})\nonumber\\ &&-(8+20c_2+44c_3) \delta_\alpha^\mu g_{\tau\s}g^{\r\nu}\Gamma^\s_{\gamma\r}\Gamma^\tau_{\delta\beta}\nonumber\\ &&-(4+16c_2+32c_3) \delta_\alpha^\mu 
\Gamma^\r_{\beta\delta}\Gamma^\nu_{\gamma\r}-(2c_2+4c_3)g_{\beta\delta}g^{\s\nu}\Gamma^\r_{\alpha\s}\Gamma^\mu_{\gamma\r} \nonumber\\ &&+(2c_2+4c_3) \delta_\alpha^\mu g_{\beta\delta}g^{\r\nu}(\partial_\r\Gamma^\s_{ \gamma\s }-\partial_\s\Gamma^\s_{ \gamma\r } -\Gamma^\s_{\gamma\r}\Gamma^\tau_{\tau\s}+\Gamma^\s_{\r\kappa}\Gamma^\kappa_{\gamma\s}) \nonumber\\ &&+\frac{ 6+28c_2+56c_3}{l^2}(g^{\mu\nu}g_{\alpha\gamma}g_{\beta\delta}+4g_{\beta\delta} \delta_\gamma^\nu\delta_\alpha^\mu)\Big)\ .\label{EoM-metric} \eea The equation of motion obtained by varying the action (\ref{action-linearnoRT}) with respect to the spin connection is given by \begin{equation} T_{ac}^{\ \ c}e_b^\mu-T_{bc}^{\ \ c}e_a^\mu-T_{ab}^{\ \ \mu}=S_{ab}^{\ \ \mu}\ , \end{equation} where \begin{eqnarray} } \def\eea{\end{eqnarray} S_{ab}^{\ \ \mu}&=& -\frac{16\pi G_N}{e}\frac{\delta S^{(2)}_{NC}}{\delta \omega_\mu^{ab}}\nonumber\\ &=&-\frac{\theta^{\alpha\beta}\theta^{\gamma\delta}}{8l^4}\Big( (2-4c_2-36c_3)\delta_\alpha^\mu\Gamma_{\beta\r}^\r e_{\gamma b}e_{\delta a}\nonumber\\ && -(1+10c_2+22c_3)\delta_\alpha^\mu\Gamma_{\gamma\r}^\r(e_{\beta a}e_{\delta b}-e_{\beta b}e_{\delta a})\nonumber\\ && - (5+6c_2-8c_3)\delta_\alpha^\mu\Gamma_{\gamma\beta}^\s(e_{\s a}e_{\delta b}-e_{\s b}e_{\delta a})\nonumber\\ && - (3+12c_2+20c_3)g_{\beta\delta}\Big(\Gamma_{\alpha\r}^\r(e_{\gamma b}e_{ a}^\mu-e_{\gamma a}e_{ b}^\mu)-\Gamma_{\alpha\r}^\mu(e_{\gamma b}e^\r_a-e_{\gamma a}e^\r_b)\Big)\nonumber\\ && -(3+11c_2+18c_3)\Big(g_{\beta\delta}\Gamma_{ \alpha\gamma}^\s(e_{\s b}e^\mu_a-e_{\s a}e^\mu_b) +g_{\beta\s}\Gamma_{\alpha\delta}^\s(e_{\gamma b}e^\mu_a-e_{\gamma a}e^\mu_b)\Big)\nonumber\\ && - (5+14c_2+24c_3)\delta_\alpha^\mu g_{\delta\beta}\Gamma_{\s\gamma}^\r (e_{\r a}e^\s_{ b}-e_{\r b}e^\s_{ a})-g_{\delta\s}\delta^\mu_\alpha\Gamma_{\gamma\nu}^\s (e_{\beta b}e^\nu_{a}-e_{\beta a}e^\nu_{ b})\nonumber\\ && +(c_2-4c_3)\delta_\alpha^\mu g_{\delta\s}\Gamma_{\beta\nu}^\s(e_{\gamma b}e^\nu_a-e_{\gamma 
a}e^\nu_b)\nonumber\\ && +(4+13c_2+24c_3)\delta_\alpha^\mu g_{\s\beta}\Gamma_{\delta\nu}^\s(e_{\gamma b}e^\nu_a-e_{\gamma a}e^\nu_b) \Big)\ .\label{EoM-torsion} \end{eqnarray} The equations (\ref{EoM-metric}) and (\ref{EoM-torsion}) have a very clear physical interpretation: the noncommutativity is a source of curvature and torsion, i.e.\ flat space-time becomes curved as an effect of noncommutative corrections. Likewise, a torsion-free solution develops a non-zero torsion in the presence of noncommutativity. \setcounter{equation}{0} \section{NC Minkowski space-time} In order to investigate the consequences of noncommutativity in more detail we analyze the NC deformation of Minkowski space-time. Minkowski space-time is a vacuum solution of the Einstein equations without the cosmological constant. Therefore, we first have to assume that $1+c_2+2c_3 =0$, i.e.\ that the cosmological constant is absent in the zeroth order in the deformation parameter. Note that in our previous work \cite{MiAdSGrav, MDVR-14} we were not able to choose the value of the cosmological constant, since we only worked with the action $S_{1NC}$. Adding the other two actions $S_{2NC}$ and $S_{3NC}$ with arbitrary constants $c_1,c_2,c_3$ enables us to study a wider class of NC gravity solutions.
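The two conditions imposed so far, the Einstein-Hilbert normalization $c_1+c_2=1$ and the vanishing zeroth-order cosmological constant $1+c_2+2c_3=0$, indeed leave a one-parameter family of models. A trivial symbolic check:

```python
import sympy as sp

c1, c2, c3 = sp.symbols('c1 c2 c3')

# Einstein-Hilbert normalization and vanishing zeroth-order cosmological constant
conditions = [sp.Eq(c1 + c2, 1), sp.Eq(1 + c2 + 2*c3, 0)]
sol = sp.solve(conditions, [c1, c3], dict=True)[0]

assert sol[c1] == 1 - c2                        # c1 fixed in terms of c2
assert sp.simplify(sol[c3] + (1 + c2)/2) == 0   # c3 = -(1+c2)/2
print(sol)                                      # one-parameter family labelled by c2
```

The remaining freedom in $c_2$ is what allows the wider class of NC solutions mentioned above.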
Assuming that the solution is a small perturbation around the flat Minkowski metric \begin{equation} g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu} , \end{equation} where $h_{\mu\nu}$ is a small correction of the second order in the deformation parameter $\theta^{\mu\nu}$, equation (\ref{EoM-metric}) reduces to \begin{equation} \frac{1}{2}(\partial_\s\partial^\nu h^{\s\mu}+\partial_\s\partial^\mu h^{\s\nu}-\partial^\mu\partial^\nu h -\Box h^{\mu\nu})-\frac12\eta^{\mu\nu}(\partial_\alpha\partial_\beta h^{\alpha\beta}-\Box h)=\tau^{\mu\nu} , \label{NCMinkEoMmetric} \end{equation} with \begin{equation} \tau^{\mu\nu}=\frac{11}{4l^6}(2\eta_{\alpha\gamma}\theta^{\alpha\mu}\theta^{\gamma\nu}+\frac{1}{2}g_{\alpha\gamma} g_{\beta\delta} g^{\mu\nu}\theta^{\alpha\beta}\theta^{\gamma\delta}) . \nonumber \end{equation} The second equation (\ref{EoM-torsion}) gives no contribution, i.e.\ the NC Minkowski space-time remains torsion-free in the second order of the deformation parameter. We split the small perturbation $h_{\mu\nu}$ into components $h_{00}$, $h_{0j}$ and $h_{ij}$ and write the equations separately for each component. Note that $i,j,\dots$ are space indices taking values $1,2,3$, and we label $\psi=\delta_{ij}h^{ij}$.
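A useful consistency check on (\ref{NCMinkEoMmetric}) is that its left-hand side, the linearized Einstein tensor, is identically divergence-free, as it must be for the constant source $\tau^{\mu\nu}$. A symbolic sketch for a generic perturbation, with signature $\eta=\mathrm{diag}(1,-1,-1,-1)$ (this check is an illustration, not part of the derivation above):

```python
import sympy as sp

X = sp.symbols('x0:4')                 # coordinates (t, x, y, z)
eta = sp.diag(1, -1, -1, -1)           # Minkowski metric; it is its own inverse

# generic symmetric perturbation h^{mu nu}(x)
h = [[None] * 4 for _ in range(4)]
for m in range(4):
    for n in range(m, 4):
        h[m][n] = h[n][m] = sp.Function(f'h{m}{n}')(*X)

d = lambda f, m: sp.diff(f, X[m])                              # lower-index derivative
du = lambda f, m: sum(eta[m, k] * d(f, k) for k in range(4))   # upper-index derivative
box = lambda f: sum(eta[a, b] * d(d(f, a), b) for a in range(4) for b in range(4))

htr = sum(eta[a, b] * h[a][b] for a in range(4) for b in range(4))   # trace h
ddh = sum(d(d(h[a][b], a), b) for a in range(4) for b in range(4))   # d_a d_b h^{ab}

def lhs(m, n):
    """left-hand side of eq. (NCMinkEoMmetric): the linearized Einstein tensor G^{mu nu}"""
    out = sp.Rational(1, 2) * (
        sum(d(du(h[s][m], n), s) + d(du(h[s][n], m), s) for s in range(4))
        - du(du(htr, m), n) - box(h[m][n]))
    out -= sp.Rational(1, 2) * eta[m, n] * (ddh - box(htr))
    return out

# linearized Bianchi identity: partial_nu G^{mu nu} = 0 identically
for m in range(4):
    assert sp.expand(sum(d(lhs(m, n), n) for n in range(4))) == 0
print("linearized Bianchi identity verified")
```

The identity holds for arbitrary $h^{\mu\nu}$, so it constrains the form of the equation rather than the particular solution.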
The $00$, $0j$ and $ij$ components of (\ref{NCMinkEoMmetric}) are given by: \begin{eqnarray} &&\triangle\psi-\partial_i\partial_j h^{ij}=2\tau^{00} ,\label{00}\\ &&\partial_0\partial_j h^{ij}-\partial_i\partial_j h^{j0}-\partial_0\partial_i \psi+\triangle h^{0i} =2\tau^{0i},\label{0j}\\ && -\partial_0\partial_i h^{0 j}-\partial_k\partial_i h^{k j}-\partial_0\partial_j h^{0 i}-\partial_k\partial_j h^{k i}-\partial_i\partial_j h-\partial_0^2 h^{ ij}+\triangle h^{ij}\nonumber\\ &&+\delta^{ij}(\partial_0^2h^{00}+2\partial_0\partial_m h^{0 m}+\partial_m\partial_n h^{mn})-\delta_{ij}\Box(h^{00}-\psi)=2\tau^{ij} .\label{ij} \end{eqnarray} In order to find a solution of these inhomogeneous equations we assume the following ansatz for the components of $h_{\mu\nu}$: \begin{eqnarray} h^{00}&=&d_3\theta^{0\rho}\theta_\rho^{\ 0}r^2+d_4\theta^{0m}\theta^{0n}x^m x^n+d_5\theta^{\alpha\beta}\theta_{\alpha\beta}r^2,\nonumber\\ h^{ij}&=&d_1\theta^{i\rho}\theta_\rho^{\ j} r^2+d_2\theta^{im}\theta^{jn} x^mx^n+d_6\delta^{ij}\theta^{\alpha\beta}\theta_{\alpha\beta}r^2+d_7\theta^{\alpha\beta}\theta_{\alpha\beta} x^i x^j\nonumber\\ &&+d_9(\theta^{i\rho}\theta_\rho^{\ n}x^nx^j+\theta^{j\rho}\theta_\rho^{\ n}x^nx^i)+d_8\theta^{n\rho}\theta_\rho^{\ l}x^nx^l\delta^{ij},\nonumber\\ h^{0i}&=&d_{10}\theta^{0\rho}\theta_\rho^{\ i}r^2+d_{11}\theta^{0m}\theta^{in}x^m x^n+d_{12}\theta^{0l}\theta_l^{\ n}x^ix^n\ , \end{eqnarray} where $r^2=\sum_{i=1}^3x^ix^i$ and $d_1,\dots,d_{12}$ are arbitrary constants to be determined from equations (\ref{00}-\ref{ij}).
Inserting this ansatz into equations (\ref{00}-\ref{ij}) leads to the following set of algebraic equations: \begin{eqnarray} } \def\eea{\end{eqnarray} d_1-d_9+d_8+6d_6-3d_7&=&\frac{33}{8l^6},\nonumber\\ 4(d_1-d_9+d_8)+3d_2+12d_6-6d_7&=&\frac{11}{4l^6},\nonumber\\ d_1-d_9+d_8+3d_2&=&-\frac{11}{2l^6},\nonumber\\ d_1-d_9+d_8+d_4&=&-\frac{11}{2l^6},\nonumber\\ d_1-d_9+d_8+8d_6-4d_7+4d_5-2d_3-d_4&=&\frac{11}{2l^6},\nonumber\\ d_1-d_9+d_8+4d_6-2d_7+2d_5&=&0,\nonumber\\ 4d_{10}+3d_{11}-2d_{12}&=&-\frac{11}{l^6} . \label{JednZaKonstante} \eea To start with, let us assume that both $\theta^{0i}$ and $\theta^{ij}$ are different from zero, $\theta^{0i}\ne0$ and $\theta^{ij}\ne0$. Then the solution of the previous set of equations is: \begin{eqnarray} } \def\eea{\end{eqnarray} && d_2=-\frac{11}{6l^6}, d_4=-\frac{11}{2l^6}, d_5=-\frac{11}{8l^6}, d_3=0,\nonumber\\ && d_1-d_9+d_8=0, \label{konstante}\\ && d_{10}=-\frac{11}{4l^6}-\frac{3d_{11}}{4}+\frac{d_{12}}{2}, d_7=2d_6-\frac{11}{8l^6} .\nonumber \eea From (\ref{konstante}) it follows that some constants will remain undetermined. The presence of undetermined constants suggests the existence of some residual symmetry. We postpone a detailed analysis of this residual symmetry to future work. In this paper we fix the undetermined constants in the following way: $d_1=d_9=d_8=0$, $d_{10}=-d_{12}=0$ and $d_6=-d_7$. Finally, the components of the metric tensor follow: \begin{eqnarray} } \def\eea{\end{eqnarray} g_{00}&=& 1 - \frac{11}{2l^6}\theta^{0m}\theta^{0n}x^m x^n-\frac{11}{8l^6}\theta^{\alpha\beta}\theta_{\alpha\beta}r^2,\nonumber\\ g^{0i} &=& -\frac{11}{ 3l^6}\theta^{0m}\theta^{in}x^m x^n ,\nonumber\\ g_{ij}&=& -\delta_{ij} -\frac{11}{6l^6}\theta^{im}\theta^{jn} x^mx^n+\frac{11}{24l^6}\delta^{ij}\theta^{\alpha\beta}\theta_{\alpha\beta}r^2-\frac{11}{24l^6} \theta^{\alpha\beta} \theta_{\alpha\beta}x^i x^j.
\label{NCMinkowskiMetric} \eea From the equation (\ref{EoM-metric}) it follows that the scalar curvature of the NC Minkowski space-time\footnote{Note that this result is unique: it depends neither on the way we choose the ansatz for solving the equation (\ref{EoM-metric}) in the case of Minkowski space-time nor on the way we fix the undetermined constants.} is given by \begin{equation} R=-\frac{11}{l^6}\theta^{\alpha\beta}\theta^{\gamma\delta}\eta_{\alpha\gamma}\eta_{\beta\delta} =const. \label{RNCMikowski} \end{equation} This shows that the noncommutativity induces curvature. The sign of the scalar curvature depends on the particular values of the parameter $\theta^{\alpha\beta}$. For example, if $\theta^{ij}=0$ and $\theta^{0i}\neq 0$, then the scalar curvature $R$ is positive. On the other hand, if $\theta^{ij} \neq 0$ and $\theta^{0i} = 0$, then the scalar curvature $R$ is negative. The induced curvature is very small, being quadratic in $\theta^{\alpha\beta}$, and it will be difficult to measure. However, we have shown qualitatively that noncommutativity is a source of curvature, just like matter fields or the cosmological constant. \noindent The Riemann tensor for this solution can be calculated easily. A very interesting (and unexpected) observation follows: knowing the components of the Riemann tensor, the components of the metric tensor can be written as \begin{eqnarray} } \def\eea{\end{eqnarray} g_{00}&=&1-R_{0m0n}x^mx^n,\nonumber\\ g_{0i}&=&-\frac23R_{0min}x^mx^n,\nonumber\\ g_{ij}&=&-\delta_{ij}-\frac13R_{imjn}x^mx^n .\label{FermiNCMinkowski} \eea This shows that the coordinates $x^\mu$ we started with are Fermi normal coordinates. These coordinates are inertial coordinates of a local observer moving along a geodesic. The time coordinate $x^0$ is just the proper time of the observer moving along the geodesic.
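As a sanity check of my own (not part of the derivation), the linear system (\ref{JednZaKonstante}), its quoted solution (\ref{konstante}) and the sign analysis of the scalar curvature (\ref{RNCMikowski}) can be verified symbolically. The script below sets $l=1$; all helper names are ad hoc.

```python
# Sanity check (not from the paper): solve the linear system for the ansatz
# constants with l = 1, and verify the sign of the scalar curvature
# R = -(11/l^6) theta^{ab} theta^{cd} eta_{ac} eta_{bd} in the two special cases.
import numpy as np
import sympy as sp

d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, d12 = sp.symbols('d1:13')
A = d1 - d9 + d8   # the combination that appears in every equation

eqs = [
    sp.Eq(A + 6*d6 - 3*d7, sp.Rational(33, 8)),
    sp.Eq(4*A + 3*d2 + 12*d6 - 6*d7, sp.Rational(11, 4)),
    sp.Eq(A + 3*d2, sp.Rational(-11, 2)),
    sp.Eq(A + d4, sp.Rational(-11, 2)),
    sp.Eq(A + 8*d6 - 4*d7 + 4*d5 - 2*d3 - d4, sp.Rational(11, 2)),
    sp.Eq(A + 4*d6 - 2*d7 + 2*d5, 0),
    sp.Eq(4*d10 + 3*d11 - 2*d12, -11),
]
sol = sp.solve(eqs, [d2, d3, d4, d5, d7, d9, d10], dict=True)[0]

# Compare with the quoted solution (l = 1):
assert sol[d2] == sp.Rational(-11, 6) and sol[d4] == sp.Rational(-11, 2)
assert sol[d5] == sp.Rational(-11, 8) and sol[d3] == 0
assert sp.simplify(sol[d9] - (d1 + d8)) == 0            # i.e. d1 - d9 + d8 = 0
assert sp.simplify(sol[d7] - (2*d6 - sp.Rational(11, 8))) == 0
assert sp.simplify(sol[d10] - (-sp.Rational(11, 4) - 3*d11/4 + d12/2)) == 0

# Scalar curvature sign with eta = diag(1, -1, -1, -1):
eta = np.diag([1.0, -1.0, -1.0, -1.0])
def R_scalar(theta):
    return -11.0 * np.einsum('ab,cd,ac,bd->', theta, theta, eta, eta)

th_el = np.zeros((4, 4)); th_el[0, 1], th_el[1, 0] = 0.1, -0.1     # theta^{0i} only
th_mag = np.zeros((4, 4)); th_mag[1, 2], th_mag[2, 1] = 0.1, -0.1  # theta^{ij} only
assert R_scalar(th_el) > 0 and R_scalar(th_mag) < 0
```

The last two assertions reproduce the sign discussion above: a purely "electric" deformation $\theta^{0i}$ gives $R>0$, a purely "magnetic" one $\theta^{ij}$ gives $R<0$.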
The space coordinates $x^i$ are defined as affine parameters along the geodesics in the hypersurface orthogonal to the geodesic of the observer. Unlike Riemann normal coordinates, which can be constructed in a small neighborhood of a point, Fermi normal coordinates can be constructed in a small neighborhood of a geodesic, that is, inside a small cylinder surrounding the geodesic \cite{FermiCoordiantes}. Along the geodesic these coordinates are inertial, that is \begin{equation} g_{\mu\nu}|_{geod.} = \eta_{\mu\nu}, \quad \partial_\rho g_{\mu\nu}|_{geod.} = 0 \, . \nonumber \end{equation} The measurements performed by the local observer moving along the geodesic are described in the Fermi normal coordinates. In particular, she/he is the one who measures $\theta^{\alpha\beta}$ to be constant! In any other reference frame (any other coordinate system) observers will measure a non-constant $\theta^{\alpha\beta}$. \setcounter{equation}{0} \section{Conclusions} In this paper we constructed a NC gravity model based on the $SO(2,3)_\star$ gauge symmetry. We used the $\star$-product and the enveloping algebra approach and the SW map. An effective NC gravity action was constructed using the expansion in the small NC parameter $\theta^{\alpha\beta}$. The zeroth order of the action is the commutative action (\ref{KomDejstvo}). The first order correction vanishes. The second order correction was calculated; the calculation and the result are long and cumbersome. Therefore, we chose to analyze the model sector by sector. In this paper we were interested in the low energy sector, presumably describing physics at low curvatures. In that case the action is given by (\ref{action-linearnoRT}). The equations of motion show that, just like ordinary matter, noncommutativity plays the role of a source of curvature and/or torsion.
More explicitly, in the example of NC Minkowski space-time, we calculated the curvature induced by noncommutativity and showed that in the presence of noncommutativity Minkowski space-time becomes curved, with a constant scalar curvature. In addition, we gained a better understanding of the diffeomorphism symmetry breaking problem in the $\theta$-constant NC space-time. Namely, the $\theta$-constant deformation is naturally defined for an inertial observer. Therefore, it is not possible to apply the $\theta$-constant deformation to GR solutions in arbitrary coordinates. With this observation we now understand the breaking of diffeomorphism symmetry in the following way: there is a preferred reference system defined by the Fermi normal coordinates, and the NC parameter $\theta^{\alpha\beta}$ is constant in that particular reference system. In an arbitrary reference system the NC deformation is obtained by an appropriate coordinate transformation. We conclude that the constant NC deformation is consistent only with the reference system given by the Fermi normal coordinates. In our future work we plan to investigate other solutions of our NC gravity model, such as the NC Schwarzschild solution and cosmological solutions. In particular, we are interested in the role of Fermi normal coordinates in these solutions; in this way we hope to gain a better understanding of NC gravity. Also, using the advantage of the NC gauge theory approach, we plan to include matter fields in our analysis and study the consequences of the noncommutativity for the matter part of the gravity action. \vskip1cm \noindent {\bf Acknowledgement} \hskip0.3cm We would like to thank Ilija Simonovi\'c, Milutin Blagojevi\' c, Maja Buri\' c, Dragoljub Go\v canin and Nikola Konjik for fruitful discussions and useful comments.
The work is supported by project ON171031 of the Serbian Ministry of Education and Science and partially supported by the Action MP1405 QSPACE from the European Cooperation in Science and Technology (COST).
\section{Introduction} An almost para-Hermitian manifold is a manifold $\widetilde M$ equipped with an almost product structure $\p\ne \pm I$ and a pseudo-Riemannian metric $\widetilde g$ such that \begin{align} \label{1.1}\p^2=I,\;\; \widetilde g(\p X,\p Y)=-\widetilde g(X,Y), \end{align} for vector fields $X$, $Y$ tangent to $\widetilde M$, where $I$ is the identity map. Clearly, it follows from \eqref{1.1} that the dimension of $\widetilde M$ is even and the metric $\widetilde g$ is neutral. An almost para-Hermitian manifold is called {\it para-K\"ahler} if it satisfies $\widetilde \nabla \p=0$ identically, where $\widetilde \nabla$ denotes the Levi-Civita connection of $\widetilde M$. We define $||X||_2$ associated with $\widetilde g$ on $\widetilde M$ by $||X||_2=\widetilde g(X,X)$. Properties of para-K\"ahler manifolds were first studied in 1948 by Rashevski, who considered a neutral metric of signature $(m,m)$ defined from a potential function on a locally product $2m$-manifold \cite{rash48}. He called such manifolds stratified spaces. Para-K\"ahler manifolds were explicitly defined by Rozenfeld in 1949 \cite{Roz}. Such manifolds were also defined by Ruse in 1949 \cite{ruse} and studied by Libermann \cite{Li} in the context of $G$-structures. There exist many para-K\"ahler manifolds; for instance, it was proved in \cite{H1} that a homogeneous manifold $\widetilde M=G/H$ of a semisimple Lie group $G$ admits an invariant para-K\"ahler structure $(\widetilde g,\p)$ if and only if it is a covering of the adjoint orbit ${\rm Ad}_G h$ of a semisimple element $h$. (For a very nice survey on para-K\"ahler manifolds, see \cite{E}.) Para-K\"ahler manifolds have been applied in supersymmetric field theories as well as in string theory in recent years (see, for instance, \cite{cortes,cortes1,cortes2}).
A pseudo-Riemannian submanifold $M$ of a para-K\"ahler manifold $\widetilde M$ is called {\it invariant} if the tangent bundle of $M$ is invariant under the action of $\p$. $M$ is called {\it anti-invariant} if $\p$ maps each tangent space $T_pM, \, p\in M,$ into the normal space $T_p^\perp M$. A {\it Lagrangian submanifold} $M$ of a para-K\"ahler manifold $\widetilde M$ is an anti-invariant submanifold satisfying $\dim \widetilde M=2\dim M$. Such submanifolds have been investigated recently in \cite{nl,c2011,tjm,cm:Chen11}. A pseudo-Riemannian submanifold $M$ of a para-K\"ahler manifold $\widetilde M$ is called a \emph{$\p R$-submanifold} if the tangent bundle $TM$ of $M$ is the direct sum of an \emph{invariant} distribution $\D$ and an \emph{anti-invariant} distribution $\Dp$, i.e., $$T(M)=\D\oplus\Dp, \;\; \p\D=\D,\;\; \p\Dp\subseteq T_p^\perp(M).$$ A $\p R$-submanifold is called a \emph{$\p R$-warped product} if it is a warped product $\Ni \times_f \Na$ of an invariant submanifold $\Ni$ and an anti-invariant submanifold $\Na$. In this paper we initiate the study of $\p R$-warped products in para-K\"ahler manifolds. The basic properties of $\p R$-warped products are given in section 3. We establish in section 4 a general optimal inequality for $\p R$-warped products in para-K\"ahler manifolds involving only the warping function and the second fundamental form. In section 5, we provide the exact solutions of a PDE system associated with $\p R$-warped products. In the last section, we classify $\p R$-warped products $\Ni \times_f \Na$ with least codimension in the flat para-K\"ahler manifold which verify the equality case of the general inequality derived in section 4. \section{Preliminaries} \subsection{Warped product manifolds} The notion of warped product (or, more generally warped bundle) was introduced by Bishop and O'Neill in \cite{cm:BON69} in order to construct a large variety of manifolds of negative curvature. 
For example, negative space forms can easily be constructed in this way from flat space forms. The interest of geometers was to extend the classical de Rham theorem to warped products. Hiepko proved a result in \cite{cm:Hie79} which will be used in this paper. Let us recall some basic results on warped products. Let $B$ and $F$ be two pseudo-Riemannian manifolds with pseudo-Riemannian metrics $g_B$ and $g_F$ respectively, and $f$ a positive function on $B$. Consider the product manifold $B\times F$. Let $\pi_1:B\times F\longrightarrow B$ and $\pi_2:B\times F\longrightarrow F$ be the canonical projections. We define the manifold $M=B\times_f F$ and call it {\it warped product} if it is equipped with the following warped metric \begin{equation} \label{formula2.4} g(X,Y)=g_B\big(\pi_{1_{*}}\!(X),\pi_{1_{*}}\!(Y)\big)+f^2(\pi_1(p))g_F\big(\pi_{2_{*}}\!(X),\pi_{2_{*}}\!(Y)\big) \end{equation} for all $X,Y\in T_p(M)$, $p\in M$, or equivalently, \begin{equation} \label{formula2.5} g=g_B+f^2\ g_F. \end{equation} The function $f$ is called {\it the warping function}. For the sake of simplicity we will identify a vector field $X$ on $B$ (respectively, a vector field $Z$ on $F$) with its lift $\tilde X$ (respectively $\tilde Z$) on $B\times_fF$. If $\nabla$, $\nabla^B$ and $\nabla^F$ denote the Levi-Civita connections of $M$, $B$ and $F$, respectively, then the following formulas hold \begin{equation} \label{warped_conn_eq} \begin{array}{l} \nabla_XY=\nabla^B_XY,\\ \nabla_XZ=\nabla_ZX=X(\ln f)~Z,\\ \nabla_ZW=\nabla^F_ZW-g(Z,W)~\nabla(\ln f) \end{array} \end{equation} where $X,Y$ are tangent to $B$ and $Z,W$ are tangent to $F$. Moreover, $\nabla(\ln f)$ is the gradient of $\ln f$ with respect to the metric $g$. \subsection{Geometry of submanifolds} Let $M$ be an $n$-dimensional submanifold of $\widetilde{M}$. 
We need the Gauss and Weingarten formulas: $$ {\mathbf{(G)}}\quad \widetilde{\nabla}_X Y = \nabla_X Y + \sigma(X,Y) , \qquad {\mathbf{(W)}}\quad \widetilde{\nabla}_X \xi = -A_\xi X + \nabla^\perp_X \xi \, , $$ for vector fields $X,Y$ tangent to $M$ and $\xi$ normal to $M$, where $\nabla$ is the induced connection, $\nabla^\perp$ is the normal connection on the normal bundle $T^\perp(M)$, $\sigma$ is the second fundamental form, and $A_\xi$ is the shape operator associated with the normal section $\xi$. The mean curvature vector $H$ of $M$ is defined by $H=\frac{1}{n}\,{\rm trace}\,\sigma$. For later use we recall the equations of Gauss and Codazzi: $$ \begin{array}{l} {\mathbf{(EG)}}\quad g\(R_{XY}Z,W\)=\widetilde g\(\widetilde R_{XY}Z,W\)+\widetilde g\(\sigma(Y,Z),\sigma(X,W)\)- \widetilde g\(\sigma(X,Z),\sigma(Y,W)\), \\[2mm] {\mathbf{(EC)}}\quad (\widetilde R_{XY}Z)^\perp=(\bar\nabla_X\sigma)(Y,Z)-(\bar\nabla_Y\sigma)(X,Z) \end{array} $$ for $X,Y,Z$ and $W$ tangent to $M$, where $R$, $\widetilde R$ are the curvature tensors on $M$ and $\widetilde M$, respectively, $(\widetilde R_{XY}Z)^\perp$ is the normal component of $\widetilde R_{XY}Z$ and $\bar\nabla$ is the van der Waerden--Bortolotti connection defined as \begin{equation} \label{WBconn:eq} (\bar\nabla_X\sigma)(Y,Z)=\nabla^\perp_X\sigma(Y,Z)-\sigma(\nabla_XY,Z)-\sigma(Y,\nabla_XZ). \end{equation} In this paper the curvature is defined by $R_{XY}=[\nabla_X,\nabla_Y]-\nabla_{[X,Y]}$. A submanifold is called {\em totally geodesic} if its second fundamental form vanishes identically. For a normal vector field $\xi$ on $M$, if $A_\xi=\lambda \, I$ for a certain function $\lambda$ on $M$, then $\xi$ is called an {\em umbilical section} (or $M$ is {\em umbilical with respect to} $\xi$). If $M$ is umbilical with respect to every (local) normal vector field, then $M$ is called a {\em totally umbilical submanifold}.
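A classical illustration (my own, not from the paper) of the Gauss equation {\bf(EG)} in a flat ambient space: for the radius-$r$ sphere in Euclidean $3$-space the ambient curvature term vanishes, so the Gaussian curvature is produced entirely by the second fundamental form, giving $K=\det({\rm II})/\det({\rm I})=1/r^2$. A short symbolic check:

```python
# Gauss equation check (illustrative): for the radius-r sphere in flat R^3,
# the ambient curvature term vanishes, so K must come entirely from sigma:
# K = det(II)/det(I) = 1/r^2.
import sympy as sp

u, v, r = sp.symbols('u v r', positive=True)
X = sp.Matrix([r*sp.cos(u)*sp.sin(v), r*sp.sin(u)*sp.sin(v), r*sp.cos(v)])

Xu, Xv = X.diff(u), X.diff(v)
E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)   # first fundamental form

n = Xu.cross(Xv)                               # unnormalized normal vector
L = X.diff(u, 2).dot(n)                        # second fundamental form, times |n|
M = X.diff(u, v).dot(n)
N = X.diff(v, 2).dot(n)

# K = det(II)/det(I); the |n|^2 factor below replaces the normalization of n.
K = sp.simplify((L*N - M**2) / (n.dot(n) * (E*G - F**2)))
assert sp.simplify(K - 1/r**2) == 0
```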
A pseudo-Riemannian submanifold is called {\it minimal} if the mean curvature vector $H$ vanishes identically. And it is called {\it quasi-minimal} if $H$ is a light-like vector field. Recall that for a warped product $M=B\times_f F$, $B$ is totally geodesic and $F$ is totally umbilical in $M$. \subsection{Para-K\"ahler $n$-plane} The simplest example of para-K\"ahler manifold is the para-K\"ahler $n$-plane $({\mathbb E}^{2n}_n,{\mathcal P},g_0)$ consisting of the pseudo-Euclidean $2n$-space $\mathbb E^{2n}_n$, the standard flat neutral metric \begin{align}\label{10.2} g_0=-\text{$\sum$}_{j=1}^n dx_j^2+\text{$\sum$}_{j=1}^n dy_j^2,\end{align} and the almost product structure \begin{align}{\mathcal P}=\text{$\sum$}_{j=1}^n \text{\small$\frac{\partial}{\partial y_j}$}\otimes dx_j+\text{$\sum$}_{j=1}^n \text{\small$\frac{\partial}{\partial x_j}$} \otimes dy_j.\end{align} We simply denote the para-K\"ahler $n$-plane $({\mathbb E}^{2n}_n,{\mathcal P},g_0)$ by $\p^{n}$. \section{$\p R$-submanifolds of para-K\"ahler manifolds} For any vector field $X$ tangent to $M$, we put $P X = tan(\p X)$ and $F X = nor (\p X)$, where $tan_p$ and $nor_p$ are the natural projections associated to the direct sum decomposition $$ T_p (\widetilde{M}) = T_p (M) \oplus T_p^\perp(M) \ , \ p \in M. $$ Then $P$ is an endomorphism of the tangent bundle $T(M)$ and $F$ is a normal bundle valued 1-form on $M$. Similarly, for a normal vector field $\xi$, we put $t\xi=tan(\p \xi)$ and $f\xi=nor(\p \xi)$ for the tangential and the normal part of $\p \xi$, respectively. Let $\nu$ denote the orthogonal complement of $\p\Dp$ in $T^\perp(M)$. Then we have $$ T^\perp(M)=\p\Dp\oplus\nu. $$ Notice that $\nu$ is invariant, i.e., $\p\nu=\nu$. The following proposition characterizes $\p R$-submanifold of para-K\"ahler manifolds. A similar result is known for $CR$-submanifolds in K\"ahlerian manifolds and contact $CR$-submanifolds in Sasakian manifolds. See e.g. \cite{cm:YK83}. 
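The defining relations \eqref{1.1} for the para-K\"ahler plane $\p^{2}$ can also be checked numerically; the sketch below (my own illustration) represents $g_0$ and ${\mathcal P}$ as matrices in the coordinate frame $(x_1,x_2,y_1,y_2)$:

```python
# Direct check (illustrative) of P^2 = I and g0(PX, PY) = -g0(X, Y) for the
# para-Kahler plane P^2, in the coordinate frame (x1, x2, y1, y2).
import numpy as np

n = 2
G = np.diag([-1.0] * n + [1.0] * n)           # g0 = -sum dx_j^2 + sum dy_j^2
P = np.block([[np.zeros((n, n)), np.eye(n)],  # P maps d/dx_j to d/dy_j
              [np.eye(n), np.zeros((n, n))]]) # and d/dy_j to d/dx_j

assert np.allclose(P @ P, np.eye(2 * n))      # P^2 = I
assert np.allclose(P.T @ G @ P, -G)           # g0(PX, PY) = -g0(X, Y)
```

The second assertion is exactly the anti-isometry condition of \eqref{1.1}, which forces the metric to be neutral.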
\begin{proposition} Let $M\to\widetilde M$ be an isometric immersion of a pseudo-Riemannian manifold $M$ into a para-K\"ahler manifold $\widetilde M$. Then a necessary and sufficient condition for $M$ to be a $\p R$-submanifold is that $F\circ P=0$. \end{proposition} \proof For $U$ tangent to $M$ we have the following decomposition $$ U=\p^2 U=P^2U+FPU+tFU+fFU. $$ By identifying the tangent and the normal parts respectively, we find $$ P^2+tF=I \quad {\rm and} \quad FP+fF=0. $$ Suppose that $M$ is a $\p R$-submanifold. Choosing $U=X\in\D$, we have $\p X=PX$ and $FX=0$. Hence $P^2=I$ and $FP=0$ on $\D$. On the other hand, if $U=Z\in\Dp$, we have $PZ=0$. Hence $FP=0$ on $\Dp$ too. Conversely, suppose that $FP=0$. Put $$ \D=\{X\in T(M) : \p X\in T(M)\}\ {\rm and}\ \Dp=\{Z\in T(M) : \p Z\in T^\perp(M)\}. $$ Then by direct computations we conclude that $\D$ and $\Dp$ are orthogonal and $T(M)=\D\oplus\Dp$. \endproof The following results from \cite{cm:Chen11} are necessary for our further computations. \begin{proposition} Let $M$ be a $\p R$-submanifold of a para-K\"ahler manifold $\widetilde M$. Then \begin{itemize} \item[(i)] the anti-invariant distribution $\Dp$ is a non-degenerate integrable distribution; \item[(ii)] the invariant distribution $\D$ is a non-degenerate minimal distribution; \item[(iii)] the invariant distribution $\D$ is integrable if and only if $\sigma(PX,Y)=\sigma(X,PY)$, for all $X,Y\in\D$; \item[(iv)] $\D$ is integrable if and only if $\dot\sigma$ is symmetric, equivalently $\dot\sigma(PX,Y)=\dot\sigma(X,PY)$. Here $\dot\sigma$ denotes the second fundamental form of $\D$ in $M$. \end{itemize} \end{proposition} Now, let us give some useful formulas.
\begin{lemma} \label{lemma_3.3} If $M$ is a $\p R$-submanifold of a para-K\"ahler manifold $\widetilde M$, then \begin{itemize} \item[(a)] $\widetilde g(A_{FZ}U,PX)=g(\nabla_UZ,X)$, \item[(b)] $A_{FZ}W=A_{FW}Z$ and $A_{f\xi}X=-A_\xi PX$, \end{itemize} for all $X,Y\in\D$, $Z,W\in\Dp$, $U\in T(M)$ and $\xi\in\Gamma(\nu)$. \end{lemma} We need the following for later use. \begin{proposition} Let $M$ be a $\p R$-submanifold of a para-K\"ahler manifold $\widetilde M$. Then \begin{itemize} \item[(i)] the distribution $\Dp$ is totally geodesic if and only if \begin{equation} \label{Dp_tg:eq} \widetilde g(\sigma(\D,\Dp),\p\Dp)=0 \end{equation} \item[(ii)] the distribution $\D$ is totally geodesic if and only if \begin{equation} \label{D_tg:eq} \widetilde g(\sigma(\D,\D),\p\Dp)=0 \end{equation} \item[(iii)] $\D$ is totally umbilical if and only if there exists $Z_0\in\Dp$ such that \begin{equation} \label{Dp_tu:eq} \sigma(X,Y)=g(X,PY)~FZ_0\ ({\rm mod}~\nu)\ ,\ \forall~X,Y\in\D. \end{equation} \end{itemize} \end{proposition} \proof This can be proved by classical computations: see e.g. \cite{cm:Chen81} or \cite{cm:Mun05}. \endproof \subsection{$\p R$-products} A $\p R$-submanifold of a para-K\"ahler manifold is called a \emph{$\p R$-product} if it is locally a direct product $\Ni\times \Na$ of an invariant submanifold $\Ni$ and an anti-invariant submanifold $\Na$. The next result characterizes $\p R$-products in terms of the operator $P$. \begin{proposition}[Characterization] A $\p R$-submanifold of a para-K\"ahler manifold is a $\p R$-product if and only if $P$ is parallel. \end{proposition} \proof By straightforward computations (as in \cite[Theorem 4.1]{cm:Chen81} or \cite[Theorem 2.2]{cm:Mun05}) we may prove that $$(\nabla_UP)V=\nabla_U(PV)-P\nabla_UV=0\ ,\ \forall\; U,V\in\chi(M), $$ which implies the desired result. \endproof The following result was proved in \cite[page 224]{cm:Chen11}.
\begin{proposition} \label{PR} Let $N_{\top}\times N_{\perp}$ be a ${\mathcal P}R$-product of the para-K\"ahler $(h+p)$-plane $\p^{h+p}$ with $h=\frac{1}{2}\dim N_{\top}$ and $p=\dim N_{\perp}$. If $N_{\perp}$ is either spacelike or timelike, then the ${\mathcal P}R$-product is an open part of a direct product of a para-K\"ahler $h$-plane $\p^{h}$ and a Lagrangian submanifold $L$ of $\p^{p}$, i.e., $$ \Ni\times \Na\subset \p^{h}\times L\subset \p^{h}\times \p^{p}=\p^{h+p}.$$ \end{proposition} \subsection{$\p R$-warped products} Let us begin with the following result. \begin{proposition} \label{non_exist_th} If a $\p R$-submanifold $M$ is a warped product $\Na\times_f\Ni$ of an anti-invariant submanifold $\Na$ and an invariant submanifold $\Ni$ with warping function $f:\Na\longrightarrow{\mathbb{R}}_+$, then $M$ is a $\p R$-product $\Na\times N_\top^f$, where $N_\top^f$ is the manifold $\Ni$ endowed with the homothetic metric $g_\top^f=f^2g_\top$. \end{proposition} \proof Consider $X,Y\in\D$ and $Z\in\Dp$. Compute $$ \begin{array}{l} \widetilde g(\sigma(X,Y),FZ)=\widetilde g(\widetilde\nabla_XY,\p Z)=-\widetilde g(Y,\p\widetilde\nabla_XZ)= g(PY,\nabla_XZ)=\\ \qquad = g(PY,Z(\ln f)~X)=Z(\ln f)~g(X,PY). \end{array} $$ Since $\sigma(\cdot ~,\, \cdot)$ is symmetric and $g(\cdot~,\, P\, \cdot)$ is skew-symmetric, it follows that $Z(\ln f)$ vanishes for all $Z$ tangent to $\Na$. Consequently, $f$ is constant and thus the warped product is nothing but the direct product $\Na\times N_\top^f$. \endproof The previous result shows that there do not exist warped product $\p R$-submanifolds in para-K\"ahler manifolds of the form $\Na\times_f\Ni$, other than $\p R$-products.
Thus, in view of Proposition \ref{PR} we give the following definition: \begin{definition} {\rm A $\p R$-submanifold of a para-K\"ahler manifold $\widetilde M$ is called a \emph{$\p R$-warped product} if it is a warped product of the form $\Ni \times_f \Na$, where $\Ni$ is an invariant submanifold, $\Na$ is an anti-invariant submanifold of $\widetilde M$ and $f$ is a non-constant function $f:N_\top\to {\mathbb R}_+$.} \end{definition} Since the metric on $N_\top$ of a $\p R$-warped product $\Ni\times_f N_\perp$ is neutral, we simply call the $\p R$-warped product $\Ni\times_f N_\perp$ {\it space-like} or {\it time-like} according as $N_\perp$ is space-like or time-like. The next result characterizes $\p R$-warped products in para-K\"ahler manifolds. \begin{proposition} Let $M$ be a proper $\p R$-submanifold of a para-K\"ahler manifold. Then $M$ is a $\p R$-warped product if and only if \begin{equation} \label{warped_prod_eq} A_{FZ}X=(PX(\mu))\,Z\ ,\ \forall \ X\in\D, \ Z\in\Dp, \end{equation} for some smooth function $\mu$ on $M$ satisfying $W(\mu)=0$, $\forall~W\in\Dp$. \end{proposition} The proof of this result is similar to that in the case of a K\"ahler or Sasakian ambient space. The key is the characterization of warped products given by Hiepko in \cite{cm:Hie79}. \section{An optimal inequality} \begin{theorem} \label{ineq:th} Let $M=\Ni\times_{f}\Na$ be a $\p R$-warped product in a para-K\"ahler manifold $\widetilde M$. Suppose that $\Na$ is space-like and $\nabla^{\perp}(\p\Na)\subseteq\p\Na$. Then the second fundamental form of $M$ satisfies \begin{equation} \label{ineq:eq} S_\sigma \leq 2p {||\nabla\ln f||}_2+{||\sigma_\nu^{\D}||}_2, \end{equation} where $p=\dim\Na$, $S_\sigma=\widetilde g(\sigma,\sigma)$, $\nabla\ln f$ is the gradient of $\ln f$ with respect to the metric $g$ and ${||\sigma_\nu^{\D}||}_2=\widetilde g\(\sigma_\nu(\D,\D),\sigma_\nu(\D,\D)\)$. Here the index $\nu$ represents the $\nu$-component of that object.
\end{theorem} \proof If we denote by $g{}_\top$ and $g{}_\perp$ the metrics on $\Ni$ and $\Na$, then the warped metric on $M$ is $g=g_\top+f^2g_\perp$. Let us consider \begin{itemize} \item on $\Ni$: an orthonormal basis $\{X_i,X_{i*}=PX_i\}$, $i=1,\ldots,h$, where\linebreak $2h=\dim\Ni$; moreover, one can suppose that $\epsilon_i:=g(X_i,X_i)=1$ and hence $\epsilon_{i*}:=g(X_{i*},X_{i*})=-1$, for all $i$. \item on $\Na$: an orthonormal basis $\{\tilde Z_a\}, a=1,\ldots,p$; we put $\epsilon_a:=g_\perp(\tilde Z_a,\tilde Z_a)=1$, for all $a$; \item in each point $(x,y)\in M$: $Z_a(x,y)=\frac 1{f(x)}~\tilde Z_a(y)$; \item in $\nu$: an orthonormal basis $\{\xi_\alpha,\xi_{\alpha*}=f\xi_{\alpha}\}$, $\alpha=1,\ldots,q$; moreover, one can suppose that $\epsilon_\alpha:=\widetilde g(\xi_\alpha,\xi_\alpha)=1$ and hence $\epsilon_{\alpha*}:=\widetilde g(\xi_{\alpha*},\xi_{\alpha*})=-1$. \end{itemize} Now, we want to compute $\widetilde g(\sigma,\sigma)=$ $$=\widetilde g\(\sigma(\D,\D),\sigma(\D,\D)\)+ 2\widetilde g\(\sigma(\D,\Dp),\sigma(\D,\Dp)\)+ \widetilde g\(\sigma(\Dp,\Dp),\sigma(\Dp,\Dp)\), $$ where \begin{equation} \label{sigmadd:eq} \begin{array}{c} \widetilde g\(\sigma(\D,\D),\sigma(\D,\D)\)=\sum\limits_{i,j=1}^h \Big(\epsilon_i\epsilon_j\widetilde g\(\sigma(X_i,X_j),\sigma(X_i,X_j)\)\\[2mm] + \epsilon_{i*}\epsilon_j\widetilde g\(\sigma(X_{i*},X_j),\sigma(X_{i*},X_j)\) + \epsilon_i\epsilon_{j*}\widetilde g\(\sigma(X_{i},X_{j*}),\sigma(X_{i},X_{j*})\)\\[2mm] + \epsilon_{i*}\epsilon_{j*}\widetilde g\(\sigma(X_{i*},X_{j*}),\sigma(X_{i*},X_{j*})\) \Big), \end{array} \end{equation} \begin{equation} \label{sigmaddp:eq} \begin{array}{c} \widetilde g\(\sigma(\D,\Dp),\sigma(\D,\Dp)\)=\sum\limits_{i=1}^h\sum\limits_{a=1}^p\Big( \epsilon_i\epsilon_a\widetilde g\(\sigma(X_i,Z_a),\sigma(X_i,Z_a)\)\\[2mm] + \epsilon_{i*}\epsilon_a\widetilde g\(\sigma(X_{i*},Z_a),\sigma(X_{i*},Z_a)\)\Big) \end{array} \end{equation} and \begin{equation} \label{sigmadpdp:eq} \widetilde
g\(\sigma(\Dp,\Dp),\sigma(\Dp,\Dp)\)=\sum\limits_{a,b=1}^p\epsilon_a\epsilon_b\widetilde g\(\sigma(Z_a,Z_b),\sigma(Z_a,Z_b)\). \end{equation} To do so, first we analyze $\sigma(\D,\D)$. Since $\D$ is totally geodesic, we have $\sigma(\D,\D)\in\nu$. Hence one can write the following $$ \begin{array}{rclcrcl} \sigma(X_i,X_j)&=&\sigma_{ij}^\alpha\xi_\alpha+\sigma_{ij}^{\alpha*}\xi_{\alpha*} , && \sigma(X_{i*},X_j)&=&\sigma_{i*j}^\alpha\xi_\alpha+\sigma_{i*j}^{\alpha*}\xi_{\alpha*}, \\[2mm] \sigma(X_{i*},X_{j*})&=&\sigma_{i*j*}^\alpha\xi_\alpha+\sigma_{i*j*}^{\alpha*}\xi_{\alpha*}, && \sigma(X_{i},X_{j*})&=&\sigma_{ij*}^\alpha\xi_\alpha+\sigma_{ij*}^{\alpha*}\xi_{\alpha*}. \end{array} $$ It follows that \begin{equation} \label{sdd:eq} \begin{array}{c} \widetilde g\(\sigma(\D,\D),\sigma(\D,\D)\) = \sum\limits_{i,j=1}^h\sum\limits_{\alpha=1}^q\Big\{ \big[(\sigma_{ij}^\alpha)^2-(\sigma_{ij}^{\alpha*})^2\big]-\big[(\sigma_{i*j}^\alpha)^2-(\sigma_{i*j}^{\alpha*})^2\big]\\[2mm] -\big[(\sigma_{ij*}^\alpha)^2-(\sigma_{ij*}^{\alpha*})^2\big]+\big[(\sigma_{i*j*}^\alpha)^2-(\sigma_{i*j*}^{\alpha*})^2\big] \Big\}. \end{array} \end{equation} Due to the integrability of $\D$ we deduce that $\sigma_{i*j}^\alpha=\sigma_{ij*}^\alpha$, $\sigma_{i*j}^{\alpha*}=\sigma_{ij*}^{\alpha*}$, $\sigma_{i*j*}^\alpha=\sigma_{ij}^\alpha$, $\sigma_{i*j*}^{\alpha*}=\sigma_{ij}^{\alpha*}$. Furthermore, using Lemma~\ref{lemma_3.3}, we may write $$ \widetilde g\(\sigma(X,Y),\xi\)=-\widetilde g\(\sigma(X,PY),f\xi\)\ ,\ \forall~X,Y\in\D, \ \xi\in\nu $$ and consequently we have $$ \begin{array}{l} \sigma_{ij}^\alpha=\widetilde g\(\sigma(X_i,X_j),\xi_\alpha\)= -\widetilde g\(\sigma(X_i,X_{j*}),\xi_{\alpha*}\)=\sigma_{ij*}^{\alpha*}, \\[1mm] \sigma_{ij}^{\alpha*}=-\widetilde g\(\sigma(X_i,X_j),\xi_{\alpha*}\) =\widetilde g\(\sigma(X_i,X_{j*}),\xi_{\alpha}\)=\sigma_{ij*}^{\alpha}. 
\end{array} $$ By replacing all these in \eqref{sdd:eq}, we obtain \begin{equation} \label{sdd0} \widetilde g\(\sigma(\D,\D),\sigma(\D,\D)\)={||\sigma^\D_\nu||}_2= 4\sum\limits_{i,j=1}^h\sum\limits_{\alpha=1}^q \big[(\sigma_{ij}^\alpha)^2-(\sigma_{ij}^{\alpha*})^2\big]. \end{equation} Let us focus now on $\widetilde g\(\sigma(\D,\Dp),\sigma(\D,\Dp)\)$. As before, we write $$ \begin{array}{rcl} \sigma(X_i,Z_a)&=&\sigma_{ia}^bFZ_b+\sigma_{ia}^\alpha\xi_\alpha+\sigma_{ia}^{\alpha*}\xi_{\alpha*} ,\\[2mm] \sigma(X_{i*},Z_a)&=&\sigma_{i*a}^bFZ_b+\sigma_{i*a}^\alpha\xi_\alpha+\sigma_{i*a}^{\alpha*}\xi_{\alpha*}. \end{array} $$ It follows that $$ \begin{array}{rcl} \widetilde g\(\sigma(X_i,Z_a),\sigma(X_i,Z_a)\)&=&- \sum\limits_{b=1}^p(\sigma_{ia}^b)^2+ \sum\limits_{\alpha=1}^q\big[(\sigma_{ia}^\alpha)^2-(\sigma_{ia}^{\alpha*})^2\big], \\[2mm] \widetilde g\(\sigma(X_{i*},Z_a),\sigma(X_{i*},Z_a)\)&=&- \sum\limits_{b=1}^p(\sigma_{i*a}^b)^2+ \sum\limits_{\alpha=1}^q\big[(\sigma_{i*a}^\alpha)^2-(\sigma_{i*a}^{\alpha*})^2\big]. \end{array} $$ We obtain \begin{equation} \label{ddp_eq} \begin{array}{l} \widetilde g\(\sigma(\D,\Dp),\sigma(\D,\Dp)\)=-\sum\limits_{i=1}^h\sum\limits_{a,b=1}^p \big[(\sigma_{ia}^b)^2-(\sigma_{i*a}^b)^2\big]\qquad\qquad\\[2mm] \qquad \qquad +\sum\limits_{i=1}^h\sum\limits_{a=1}^p\sum\limits_{\alpha=1}^q \big[(\sigma_{ia}^\alpha)^2-(\sigma_{ia}^{\alpha*})^2-(\sigma_{i*a}^\alpha)^2+(\sigma_{i*a}^{\alpha*})^2\big]. \end{array} \end{equation} From Lemma~\ref{lemma_3.3} we have $$ \widetilde g\(\sigma(PX,Z),f\xi\)=-\widetilde g\(\sigma(X,Z),\xi\) $$ and consequently \begin{equation} \label{ddp_sigma:eq} \begin{array}{l} \sigma_{i*a}^{\alpha}=\widetilde g\(\sigma(X_{i*},Z_a),\xi_{\alpha}\)=-\widetilde g\(\sigma(X_i,Z_a),\xi_{\alpha*}\)=\sigma_{ia}^{\alpha*}, \\[2mm] \sigma_{i*a}^{\alpha*}=-\widetilde g\(\sigma(X_{i*},Z_a),\xi_{\alpha*}\)=\widetilde g\(\sigma(X_i,Z_a),\xi_\alpha\)=\sigma_{ia}^\alpha\ .
\end{array} \end{equation} Moreover we know that $\widetilde g\(\sigma(PX,Z),FW\)=-X(\ln f)g(Z,W)$. This yields \begin{equation} \label{ddp_logf:eq} \sigma_{ia}^b=PX_i(\ln f)~\delta_{ab} \ {\rm and}\ \sigma_{i*a}^b=X_i(\ln f)~\delta_{ab}. \end{equation} By combining \eqref{ddp_eq}, \eqref{ddp_sigma:eq} and \eqref{ddp_logf:eq} we get \begin{equation} \begin{array}{c} \widetilde g\(\sigma(\D,\Dp),\sigma(\D,\Dp)\)= p\sum\limits_{i=1}^h \big[ \big(X_i(\ln f)\big)^2-\big(PX_i(\ln f)\big)^2\big]\\[2mm] +2\sum\limits_{i=1}^h\sum\limits_{a=1}^p\sum\limits_{\alpha=1}^q \big[(\sigma_{ia}^\alpha)^2-(\sigma_{ia}^{\alpha*})^2\big]. \end{array} \end{equation} As $\widetilde g\(\sigma(X,Z),f\xi\)=-\widetilde g\(\nabla^\perp_XFZ,\xi\)$ and using the hypothesis $\nabla_{\D}^\perp\p\Dp\subseteq\p\Dp$ we get $\sigma(\D,\Dp)\subseteq\p\Dp$. Hence $\sigma_{ia}^\alpha$ and $\sigma_{ia}^{\alpha*}$ vanish. Thus \begin{equation} \label{sddp:eq} \widetilde g\(\sigma(\D,\Dp),\sigma(\D,\Dp)\)= p~g\(\nabla\ln f,\nabla\ln f\). \end{equation} Finally, we study $\widetilde g\(\sigma(\Dp,\Dp),\sigma(\Dp,\Dp)\)$. We write $$ \sigma(Z_a,Z_b)=\sigma_{ab}^cFZ_c+\sigma_{ab}^\alpha\xi_\alpha+\sigma_{ab}^{\alpha*}\xi_{\alpha*} $$ and hence $$ \widetilde g\(\sigma(\Dp,\Dp),\sigma(\Dp,\Dp)\)=-\sum\limits_{a,b,c=1}^p(\sigma_{ab}^c)^2+ \sum\limits_{a,b=1}^p\sum\limits_{\alpha=1}^q\big[ (\sigma_{ab}^\alpha)^2-(\sigma_{ab}^{\alpha*})^2\big]. $$ As $\widetilde g\(\sigma(Z,W),f\xi\)=-\widetilde g\(\nabla^\perp_ZFW,\xi\)$ and using the hypothesis $\nabla_{\Dp}^\perp\p\Dp\subseteq\p\Dp$ we get $\sigma(\Dp,\Dp)\subseteq\p\Dp$. Hence $\sigma_{ab}^\alpha$ and $\sigma_{ab}^{\alpha*}$ vanish. We conclude with \begin{equation} \label{dpdp:eq} \widetilde g\(\sigma(\Dp,\Dp),\sigma(\Dp,\Dp)\)=-\sum\limits_{a,b,c=1}^p(\sigma_{ab}^c)^2\ . \end{equation} From these we obtain the theorem. 
\endproof \begin{remark} If the manifold $\Na$ in Theorem~{\rm\ref{ineq:th}} is time-like, then \eqref{ineq:eq} must be replaced by \begin{equation} \label{iineq:eq} S_\sigma \geq 2p {||\nabla\ln f||}_2+{||\sigma_\nu^{\D}||}_2. \end{equation} \end{remark} \begin{remark} {\rm For every $\p R$-warped product $\Ni\times\Na$ in a para-K\"ahler manifold $\widetilde M$, $\dim \widetilde M\geq \dim \Ni+2\dim \Na$ holds. Thus the smallest possible codimension is $\dim \Na$.} \end{remark} \begin{theorem} \label{small_codim} Let $\Ni\times_{f}\Na$ be a $\p R$-warped product in a para-K\"ahler manifold $\widetilde M$. If $\Na$ is space-like (respectively, time-like) and $\dim \widetilde M= \dim \Ni+2\dim \Na$, then the second fundamental form of $M$ satisfies \begin{equation} \label{ineq_small:eq} S_\sigma \leq 2p {||\nabla\ln f||}_2\;\; \hbox{ {\rm (respectively,} $S_\sigma \geq 2p {||\nabla\ln f||}_2)$}. \end{equation} If the equality sign of \eqref{ineq_small:eq} holds identically, we have \begin{equation} \label{eq} \sigma(\mathcal D,\mathcal D)= \sigma(\mathcal D^\perp,\Dp)=\{0\}.\end{equation} \end{theorem} \begin{proof} Inequality \eqref{ineq_small:eq} follows from \eqref{ineq:eq}. When the equality sign holds, \eqref{eq} follows from the proof of Theorem~\ref{ineq:th}. \end{proof} \section{Exact Solutions of a Special PDE System} We need the exact solutions of the following PDE system for later use.
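The closed-form solutions in Proposition~\ref{PDEsyst} below can also be checked symbolically. The following sketch is illustrative only (it uses Python's sympy, and the variable names are ours); it verifies both solution families for $h=1$, where they reduce to $\psi=\frac12\ln\big|(as_1+c_1)^2-(at_1+c_2)^2\big|$ and, taking $\epsilon=1$, $\psi=\frac12\ln|b(s_1-t_1)+d|$:

```python
import sympy as sp

s1, t1, a, b, c1, c2, d = sp.symbols('s1 t1 a b c1 c2 d', real=True)

def residuals(psi):
    """Left-hand sides of the three PDEs of the system for h = 1 (i = j = 1)."""
    e1 = sp.diff(psi, s1, 2) + sp.diff(psi, s1)**2 + sp.diff(psi, t1)**2
    e2 = sp.diff(psi, s1, t1) + 2*sp.diff(psi, s1)*sp.diff(psi, t1)
    e3 = sp.diff(psi, t1, 2) + sp.diff(psi, t1)**2 + sp.diff(psi, s1)**2
    return [sp.simplify(e) for e in (e1, e2, e3)]

# First family (h = 1): psi = (1/2) ln((a*s1 + c1)^2 - (a*t1 + c2)^2)
psi1 = sp.log((a*s1 + c1)**2 - (a*t1 + c2)**2) / 2
# Second family (h = 1, eps = +1): psi = (1/2) ln(b*(s1 - t1) + d)
psi2 = sp.log(b*(s1 - t1) + d) / 2

print(residuals(psi1), residuals(psi2))  # -> [0, 0, 0] [0, 0, 0]
```

All three residuals vanish identically for both families, as the proof below establishes in general.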
\begin{proposition} \label{PDEsyst} The non-constant solutions $\psi=\psi(s_1,\ldots,s_h,t_1,\ldots,t_h)$ of the following system of partial differential equations \begin{subequations} \renewcommand{\theequation}{\theparentequation .\alph{equation}} \label{eq:pde} \begin{align} \label{eq:pde1} & \frac{\partial^2 \psi}{\partial s_i\partial s_j}+\frac{\partial\psi}{\partial s_i}~\frac{\partial\psi}{\partial s_j} +\frac{\partial\psi}{\partial t_i}~\frac{\partial\psi}{\partial t_j}=0\, ,\\ \label{eq:pde2} & \frac{\partial^2 \psi}{\partial s_i\partial t_j}+\frac{\partial\psi}{\partial s_i}~\frac{\partial\psi}{\partial t_j} +\frac{\partial\psi}{\partial t_i}~\frac{\partial\psi}{\partial s_j}=0 \ ,\;\; i,j=1,\ldots,h\, ,\\ \label{eq:pde3} & \frac{\partial^2 \psi}{\partial t_i\partial t_j}+\frac{\partial\psi}{\partial t_i}~\frac{\partial\psi}{\partial t_j} +\frac{\partial\psi}{\partial s_i}~\frac{\partial\psi}{\partial s_j}=0 \end{align} \end{subequations} are either given by \begin{equation} \label{eq:pde_sol1} \psi=\frac 12\ln\left|\big[\big(\langle {\mathbf{v}},z \rangle+c_1\big)^2- \big(\langle \j {\mathbf{v}},z\rangle+c_2\big)^2\big]\right|, \end{equation} where $z=(s_1,s_2,\ldots,s_h,t_1,t_2,\ldots,t_h)$, ${\mathbf{v}}=(a_1,a_2,\ldots,a_h,0,b_2,\ldots,b_h)$ is a constant vector in ${\mathbb{R}}^{2h}$ with $a_1\neq0$, $c_1,c_2\in{\mathbb{R}}$ and $\j{\mathbf{v}}=(0,b_2,\ldots,b_h,a_1,a_2,\ldots,a_h)$; or given by \begin{equation} \label{eq:pde_sol2} \psi=\frac 12\ln\left|\big(\langle{\mathbf{v_1}},z\rangle+c\big)\big(\langle{\mathbf{v_2}},z\rangle+d\big)\right|, \end{equation} where ${\mathbf{v_1}}=\big(0,a_2,\ldots,a_h,0,\epsilon a_2,\ldots,\epsilon a_h\big)$, ${\mathbf{v_2}}=\big(b_1,\ldots,b_h,-\epsilon b_1,\ldots,-\epsilon b_h\big)$ with $b_1\neq0$ and $\epsilon=\pm1$, $z$ is as above and $c,d\in{\mathbb{R}}$. Here $\langle~\, ,~\rangle$ denotes the Euclidean scalar product in ${\mathbb{R}}^{2h}$.
\end{proposition} \proof Let us make some notations: $\psi_{s_i}:=\frac{\partial \psi}{\partial s_i} $; $\psi_{s_is_j}:=\frac{\partial^2\psi}{\partial s_i\partial s_j}$, and similar for $\psi_{t_i}$, $\psi_{s_it_j}$, respectively $\psi_{t_it_j}$. The same notations for any other function. If in \eqref{eq:pde2} we take $i=j$ one gets $\psi_{s_it_i}=-2\psi_{s_i}\psi_{t_i}$ for all $i=1,\ldots,h$. Since $\psi$ is non-constant, there exists $i_0$ such that at least one of $\psi_{s_{i_0}}$ or $\psi_{t_{i_0}}$ is different from $0$. Without loss of generality we suppose $i_0=1$. Setting $\varphi=e^{2\psi}$, the relation $\psi_{s_1t_1}=-2\psi_{s_1}\psi_{t_1}$ becomes $\varphi_{s_1t_1}=0$, and hence $$ e^{2\psi}=\zeta(t_1,s_2,t_2,\ldots,s_h,t_h)+\eta(s_1,s_2,t_2,\ldots,s_h,t_h)\, , $$ where $\zeta$ and $\eta$ are functions of $2h-1$ variables such that $\zeta+\eta>0$ on the domain of $\psi$. It follows that \begin{equation} \label{eq:8} \begin{array}{l} \psi_{s_1}=\dfrac{\eta_{s_1}}{2(\zeta+\eta)}\ ,\ \psi_{s_1s_1}=\dfrac{\eta_{s_1s_1}(\zeta+\eta)-\eta_{s_1}^2}{2(\zeta+\eta)^2}\ ,\\[2mm] \psi_{t_1}=\dfrac{\zeta_{t_1}}{2(\zeta+\eta)}\ ,\ \psi_{t_1t_1}=\dfrac{\zeta_{t_1t_1}(\zeta+\eta)-\zeta_{t_1}^2}{2(\zeta+\eta)^2} \ . \end{array} \end{equation} Using \eqref{eq:pde1} and \eqref{eq:pde3} we obtain \begin{equation} 2\eta_{s_1s_1}(\zeta+\eta)=\eta_{s_1}^2-\zeta_{t_1}^2\ ,\quad 2\zeta_{t_1t_1}(\zeta+\eta)=\zeta_{t_1}^2-\eta_{s_1}^2\ . \end{equation} Since $\zeta+\eta\neq0$, adding the previous relations, one gets $$ \eta_{s_1s_1}(s_1,s_2,t_2,\ldots,s_h,t_h)+\zeta_{t_1t_1}(t_1,s_2,t_2,\ldots,s_h,t_h)=0 $$ and hence, there exists a function $F$ depending on $s_2,t_2,\ldots,s_h,t_h$ such that $$ \begin{array}{l} \eta_{s_1s_1}(s_1,s_2,t_2,\ldots,s_h,t_h)=~2F(s_2,t_2,\ldots,s_h,t_h)\, , \\[2mm] \zeta_{t_1t_1}(t_1,s_2,t_2,\ldots,s_h,t_h)=-2F(s_2,t_2,\ldots,s_h,t_h)\ .
\end{array} $$ At this point one integrates with respect to $s_1$ and $t_1$ respectively and one gets \begin{equation} \label{eq:11_12} \begin{array}{l} \eta(s_1,s_2,t_2,\ldots,s_h,t_h)=~Fs_1^2+Gs_1+H\, , \\[2mm] \zeta(t_1,s_2,t_2,\ldots,s_h,t_h)=-Ft_1^2-Kt_1-L\, , \end{array} \end{equation} where $G,H,L$ and $K$ are functions depending on $s_2,t_2,\ldots,s_h,t_h$ satisfying the following condition \begin{equation} \label{eq:13} 4F(H-L)=G^2-K^2. \end{equation} It follows that $\eta+\zeta=\(Fs_1^2+Gs_1+H\)-\(Ft_1^2+Kt_1+L\)$. {\bf Case 1.} Suppose $F\neq 0$; being continuous, $F$ has constant sign; denote this sign by $\varepsilon$. From \eqref{eq:13} we have $H-L=\frac{G^2-K^2}{4F}$ which combined with \eqref{eq:11_12} yields $$ \eta+\zeta=\varepsilon\left[\Big(\varepsilon\sqrt{\varepsilon F}~s_1+\frac{G}{2\sqrt{\varepsilon F}}\Big)^2- \Big(\varepsilon\sqrt{\varepsilon F}~t_1+\frac{K}{2\sqrt{\varepsilon F}}\Big)^2\right]. $$ We make some notations: $a=\varepsilon\sqrt{\varepsilon F}$, $\gamma=\frac G{2\sqrt{\varepsilon F}}$ and $\delta=\frac K{2\sqrt{\varepsilon F}}$, all of them being functions depending on $s_2,t_2,\ldots,s_h,t_h$. We are able to re-write the function $\psi$ as \begin{equation} \label{eq:14} \psi=\frac12\ln\big(\varepsilon\left[(as_1+\gamma)^2-(at_1+\delta)^2\right]\big). \end{equation} We compute now \begin{equation} \label{eq:15} \psi_{s_1}=\frac{a(as_1+\gamma)}{(as_1+\gamma)^2-(at_1+\delta)^2}\ ,\ \psi_{t_1}=\frac{-a(at_1+\delta)}{(as_1+\gamma)^2-(at_1+\delta)^2} \end{equation} and for $i\neq1$ \begin{equation} \label{eq:16_17} \begin{array}{l} \displaystyle \psi_{s_i}=\frac{(as_1+\gamma)(a_{s_i}s_1+\gamma_{s_i})-(at_1+\delta)(a_{s_i}t_1+\delta_{s_i})}{(as_1+\gamma)^2-(at_1+\delta)^2}\, , \\[3mm] \displaystyle \psi_{t_i}=\frac{(as_1+\gamma)(a_{t_i}s_1+\gamma_{t_i})-(at_1+\delta)(a_{t_i}t_1+\delta_{t_i})}{(as_1+\gamma)^2-(at_1+\delta)^2}\ .
\end{array} \end{equation} Computing also $\psi_{s_1s_i}$, we can use \eqref{eq:pde1} for $j=1$, $i=2,\ldots,h$ and obtain $$ \begin{array}{l} [a(a_{s_i}s_1+\gamma_{s_i})+a_{s_i}(as_1+\gamma)][(as_1+\gamma)^2-(at_1+\delta)^2]\\ \qquad-a(as_1+\gamma)[(as_1+\gamma)(a_{s_i}s_1+\gamma_{s_i})-(at_1+\delta)(a_{s_i}t_1+\delta_{s_i})]\\ \qquad-a(at_1+\delta)[(as_1+\gamma)(a_{t_i}s_1+\gamma_{t_i})-(at_1+\delta)(a_{t_i}t_1+\delta_{t_i})]=0. \end{array} $$ This represents a polynomial in $s_1$ and $t_1$, identically zero, and hence, all its coefficients must vanish. Analyzing the coefficients for $s_1^3$ and $t_1^3$ we obtain $a_{s_i}=0$ and $a_{t_i}=0$ for all $i=2,\ldots,h$. Consequently $a$ is a real constant. Replacing in the previous equation we get $$ \delta_{s_i}(as_1+\gamma)-\gamma_{s_i}(at_1+\delta)-\gamma_{t_i}(as_1+\gamma)+\delta_{t_i}(at_1+\delta)=0. $$ Looking at the coefficients of $s_1$ and $t_1$ we have \begin{equation} \label{eq:19} \delta_{s_i}=\gamma_{t_i}\ {\rm and\ }\delta_{t_i}=\gamma_{s_i},\ \forall i=2,\ldots,h. \end{equation} Therefore \eqref{eq:16_17} gives \begin{equation} \label{eq:20} \psi_{s_i}=\frac{\gamma_{s_i}(as_1+\gamma)-\delta_{s_i}(at_1+\delta)}{(as_1+\gamma)^2-(at_1+\delta)^2}\ ,\ \psi_{t_i}=\frac{\gamma_{t_i}(as_1+\gamma)-\delta_{t_i}(at_1+\delta)}{(as_1+\gamma)^2-(at_1+\delta)^2}\ . \end{equation} We may compute \begin{equation} \begin{array}{l} \displaystyle \psi_{s_it_j}=\frac{\gamma_{s_it_j}(as_1+\gamma)+\gamma_{s_i}\gamma_{t_j}-\delta_{s_it_j}(at_1+\delta)-\delta_{s_i}\delta_{t_j}} {(as_1+\gamma)^2-(at_1+\delta)^2}\qquad\\[3mm] \qquad \displaystyle -2~\frac{[\gamma_{t_j}(as_1+\gamma)-\delta_{t_j}(at_1+\delta)][\gamma_{s_i}(as_1+\gamma)-\delta_{s_i}(at_1+\delta)]} {[{(as_1+\gamma)^2-(at_1+\delta)^2}]^2} \end{array} \end{equation} and using \eqref{eq:pde2} with $i,j>1$, we obtain again a polynomial in $s_1$ and $t_1$, identically zero. 
By comparing the coefficients of $s_1^3$ and $t_1^3$ we find $\gamma_{s_it_j}=0$ and $\delta_{s_it_j}=0$, for all $i,j=2,\ldots,h$. It follows that each $\gamma_{s_i}$ depends only on $s_2,\ldots,s_h$ and each $\delta_{t_i}$ depends only on $t_2,\ldots,t_h$. From \eqref{eq:19} we know $\gamma_{s_i}=\delta_{t_i}$. Hence, there exist constants $a_i\in{\mathbb{R}}$ such that $\gamma_{s_i}=\delta_{t_i}=a_i$, $\forall i=2,\ldots,h$. In the same way, there exist constants $b_i\in{\mathbb{R}}$ such that $\gamma_{t_i}=\delta_{s_i}=b_i$, $\forall i=2,\ldots,h$. It follows that \begin{equation} \label{eq:22} \begin{array}{l} \gamma(s_2,t_2,\ldots,s_h,t_h)=\sum\limits_{i=2}^ha_is_i+\sum\limits_{i=2}^hb_it_i+c_1, \\[2mm] \delta(s_2,t_2,\ldots,s_h,t_h)=\sum\limits_{i=2}^hb_is_i+\sum\limits_{i=2}^ha_it_i+c_2\ ,\quad c_1,c_2\in{\mathbb{R}}. \end{array} \end{equation} We conclude with $$ \begin{array}{l} \psi=\frac12~\ln\big(\varepsilon\big[(as_1+a_2s_2+b_2t_2+\ldots+a_hs_h+b_ht_h+c_1)^2\qquad\\[2mm] \qquad \qquad-(at_1+b_2s_2+a_2t_2+\ldots+b_hs_h+a_ht_h+c_2)^2\big]\big). \end{array} $$ Hence the solution \eqref{eq:pde_sol1} is obtained with $a_1=a\neq0$. {\bf Case 2.} Let us come back to the case $F=0$ (on a certain open set). From \eqref{eq:13} we find $G^2=K^2$, and hence, after replacing $H-L$ by $H$, $\eta+\zeta=Gs_1-Kt_1+H$, where $G,H,K$ are functions depending on $(s_2,\ldots,s_h,t_2,\ldots,t_h)$ and $K=\epsilon G$, $\epsilon=\pm1$. Thus $$ \psi=\frac12\ln|(s_1-\epsilon t_1)G+H|. $$ We have $$ \psi_{s_1}=\frac G{2[(s_1-\epsilon t_1)G+H]}\ ,\ \psi_{t_1}=-\frac {\epsilon G}{2[(s_1-\epsilon t_1)G+H]}, $$ $$ \psi_{s_i}=\frac{(s_1-\epsilon t_1)G_{s_i}+H_{s_i}}{2[(s_1-\epsilon t_1)G+H]}\ ,\ \psi_{t_i}=\frac{(s_1-\epsilon t_1)G_{t_i}+H_{t_i}}{2[(s_1-\epsilon t_1)G+H]}\ ,\ i=2,\ldots,h, $$ $$ \psi_{s_is_1}=\frac{G_{s_i}[(s_1-\epsilon t_1)G+H]-G[(s_1-\epsilon t_1)G_{s_i}+H_{s_i}]}{2[(s_1-\epsilon t_1)G+H]^2}\ ,\ i=2,\ldots,h.
$$ By applying \eqref{eq:pde1} for $j=1$ and $i=2,\ldots,h$ we obtain $$ 2G_{s_i}[(s_1-\epsilon t_1)G+H]-G[(s_1-\epsilon t_1)G_{s_i}+H_{s_i}]-\epsilon G[(s_1-\epsilon t_1)G_{t_i}+H_{t_i}]=0. $$ By comparing the coefficients of $s_1$ and the constant terms we find \begin{equation} \label{eq_x} G(G_{s_i}-\epsilon G_{t_i})=0,\quad 2G_{s_i}H-G(H_{s_i}+\epsilon H_{t_i})=0. \end{equation} Since $G\neq0$ (otherwise $\psi_{s_1}=\psi_{t_1}=0$, contradicting the choice of $i_0$), we have $G_{t_i}=\epsilon G_{s_i}$. In the sequel, computing $$ \psi_{s_is_j}=\frac{(s_1-\epsilon t_1)G_{s_is_j}+H_{s_is_j}}{2[(s_1-\epsilon t_1)G+H]}- \frac{[(s_1-\epsilon t_1)G_{s_i}+H_{s_i}][(s_1-\epsilon t_1)G_{s_j}+H_{s_j}]}{2[(s_1-\epsilon t_1)G+H]^2} $$ for $i,j\geq2$, replacing in \eqref{eq:pde1} and comparing the coefficients of $s_1^2$ we find \linebreak $G_{s_is_j}=0$. It follows also $G_{s_it_j}=0$ and $G_{t_it_j}=0$. Hence $$ G(s_2,t_2,\ldots,s_h,t_h)=\sum\limits_{i=2}^ha_i(s_i+\epsilon t_i)+c, \quad a_i,c\in{\mathbb{R}}. $$ Moreover, $H$ should satisfy \begin{equation} \label{eq_y} 2GH_{s_is_j}-G_{s_i}(H_{s_j}-\epsilon H_{t_j})-G_{s_j}(H_{s_i}-\epsilon H_{t_i})=0\, , \end{equation} \begin{equation} \label{eq_z} 2HH_{s_is_j}-H_{s_i}H_{s_j}+H_{t_i}H_{t_j}=0. \end{equation} {\bf Case 2a.} If $G$ is a non-zero constant $c$ (and this happens when all $a_i$ vanish), then from the second equation in \eqref{eq_x} we find $H_{s_i}+\epsilon H_{t_i}=0$ for all $i\geq2$. Therefore, $H$ has the form $$ H(s_2,t_2,\ldots,s_h,t_h)=Q(s_2-\epsilon t_2,\ldots,s_h-\epsilon t_h), $$ where $Q$ is a function depending only on $h-1$ variables. From \eqref{eq_y} we get $H_{s_is_j}=0$ and then $Q$ is an affine function. Thus $H=\sum\limits_{i=2}^hb_i(s_i-\epsilon t_i)+d$, with $b_2,\ldots,b_h,d\in{\mathbb{R}}$. Consequently, $$ \psi=\frac12\ln\left|\sum\limits_{i=1}^hb_i(s_i-\epsilon t_i)+d\right|,\;\; b_1=c\neq0\, .
$$ {\bf Case 2b.} If there exists at least one $a_i\neq0$, from the second equation in \eqref{eq_x} we can express $H$ in the form $H=Q G$, where $Q$ is a function on $s_2,t_2,\ldots,s_h,t_h$. Then, for every $i\geq2$, $$ H_{s_i}+\epsilon H_{t_i}=2a_iQ+G(Q_{s_i}+\epsilon Q_{t_i})\, , $$ which combined with \eqref{eq_x} gives $Q_{s_i}+\epsilon Q_{t_i}=0$. Thus, $Q=Q(s_2-\epsilon t_2,\ldots,s_h-\epsilon t_h)$. Using \eqref{eq_y}, it follows that $Q$ is an affine function and hence $Q=\sum\limits_{i=2}^hb_i(s_i-\epsilon t_i)+d$, with $b_2,\ldots,b_h,d\in{\mathbb{R}}$. Consequently, $$\psi=\frac12\ln\Big|\Big[\sum\limits_{i=1}^hb_i(s_i-\epsilon t_i)+d\Big]\Big[\sum\limits_{i=2}^ha_i(s_i+\epsilon t_i)+c\Big]\Big|$$ with $b_1=1$. This completes the proof. \endproof \section{$\p R$-warped products in ${\p}^{h+p}$ satisfying $S_\sigma=2p{||\nabla\ln f||}_2$} In the following, we will use letters $i,j,k$ for indices running from $1$ to $h$; $a,b,c$ for indices from $1$ to $p$; and $A,B$ for indices between $1$ and $m$ with $m=h+p$. On ${\mathbb{E}}^{2(h+p)}_{h+p}$ we consider the global coordinates $(x_i,x_{h+a},y_i,y_{h+a})$ and the canonical flat para-K\"ahler structure defined as above. \begin{proposition} \label{prop_v} Let $M=\Ni\times_f\Na$ be a space-like $\p R$-warped product in the para-K\"ahler $(h+p)$-plane $\p^{h+p}$ with $h=\frac{1}{2}\dim \Ni$ and $p=\dim \Na$. If $M$ satisfies the equality case of \eqref{ineq_small:eq} identically, then \begin{itemize} \item $\Ni$ is a totally geodesic submanifold in $\p^{h+p}$, and hence it is congruent to an open part of $\p^h$; \item $\Na$ is a totally umbilical submanifold in $\p^{h+p}$. \end{itemize} Moreover, if $\Na$ is a real space form of constant curvature $k$, then the warping function $f$ satisfies ${||\nabla f||}_2=k$.
\end{proposition} \proof Under the hypothesis, we know from the proof of Theorem \ref{ineq:th} that the second fundamental form satisfies $$\sigma(\mathcal D,\mathcal D)=\sigma(\mathcal D^\perp,\mathcal D^\perp)=\{0\}.$$ On the other hand, since $M=\Ni\times_f\Na$ is a warped product, $\Ni$ is totally geodesic and $\Na$ is totally umbilical in $M$. Thus we have the first two statements. The last statement of the proposition can be proved as follows. If $R^\perp$ denotes the Riemann curvature tensor of $\Na$, then we have $$ R_{ZV}W=R^\perp_{ZV}W-{||\nabla \ln f||}_2\big(g(V,W)Z-g(Z,W)V\big) $$ for any $Z,V,W$ tangent to $\Na$. See for details \cite[page 210]{cm:ONeil} (pay attention to the sign; see also page 74). If $\Na$ is a space form of constant curvature $k$, then $R$ takes the form \begin{equation} \label{eq:R_k} R_{ZV}W=\left(\frac k{f^2}-{||\nabla\ln f||}_2\right)\big(g(V,W)Z-g(Z,W)V\big). \end{equation} The equation of Gauss may be written, for vectors tangent to $\Na$, as $$ g\(R_{ZV}W,U\)=\<\widetilde R_{ZV}W,U\>+\<\sigma(V,W),\sigma(Z,U)\>-\<\sigma(Z,W),\sigma(V,U)\> \ . $$ Since the ambient space is flat and \ $\sigma(\Dp,\Dp)=0$ due to the equality in \eqref{ineq_small:eq}, it follows that $g(R_{ZV}W,U)=0$. Combining this with \eqref{eq:R_k} gives ${||\nabla\ln f||}_2=\frac k{f^2}$. Since ${||\nabla f||}_2=f^2{||\nabla\ln f||}_2$, this gives the statement. \endproof {\it Para-complex numbers} were introduced by Graves in 1845 \cite{graves} as a generalization of complex numbers. Such numbers have the expression $v=x+\j y$, where $x,y$ are real numbers and $\j$ satisfies $\j^{2}=1,\,\j\ne \pm1$. The conjugate of $v$ is $\bar v=x-\j y$. The multiplication of two para-complex numbers is defined by $$(a+\j b)(s+\j t)=(as+bt)+\j(at+bs).$$ For each natural number $m$, we put $\mathbb D^{m}=\{(x_1+\j y_1,\ldots,x_m+\j y_m) : x_i, y_i\in{\mathbb{R}}\}$. With respect to the multiplication of para-complex numbers and the canonical flat metric, $\mathbb D^{m}$ is a flat para-K\"ahler manifold of dimension $2m$.
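As a concrete illustration of this arithmetic (the Python class below is ours, not part of the text), one can check that $\j^2=1$ and that $v\bar v=x^2-y^2$, the square of the para-complex modulus:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParaComplex:
    """A para-complex number x + j*y, where j**2 = 1 and j != +-1."""
    x: float  # real part
    y: float  # para-imaginary part

    def __mul__(self, other):
        # (a + j b)(s + j t) = (a s + b t) + j (a t + b s)
        return ParaComplex(self.x * other.x + self.y * other.y,
                           self.x * other.y + self.y * other.x)

    def conj(self):
        # Conjugate: x + j*y -> x - j*y
        return ParaComplex(self.x, -self.y)

j = ParaComplex(0.0, 1.0)
print(j * j)             # ParaComplex(x=1.0, y=0.0), i.e. j**2 = 1
v = ParaComplex(3.0, 2.0)
print(v * v.conj())      # ParaComplex(x=5.0, y=0.0): v*conj(v) = x**2 - y**2
```

The quadratic form $x^2-y^2$ appearing in $v\bar v$ reflects the neutral signature of the flat metric carried by $\mathbb D^{m}$.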
Once we identify $(x_1+\j y_1,\ldots,x_m+\j y_m)\in \mathbb D^{m}$ with $(x_1,\ldots,x_m,y_1,\ldots,y_m)\in {\mathbb{E}}^{2m}_m$, we may identify $\mathbb D^{m}$ with the para-K\"ahler $m$-plane $\p^{m}$ in a natural way. In the following we denote by $\mathbb S^{p}, \mathbb E^{p}$ and $\mathbb H^{p}$ the unit $p$-sphere, the Euclidean $p$-space and the unit hyperbolic $p$-space, respectively. \begin{theorem} Let $\Ni\times_f\Na$ be a space-like $\p R$-warped product in the para-K\"ahler $(h+p)$-plane $\p^{h+p}$ with $h=\frac{1}{2}\dim \Ni$ and $p=\dim \Na$. Then we have \begin{align}\label{IN}S_\sigma\leq 2p{||\nabla\ln f||}_2.\end{align} The equality sign of \eqref{IN} holds identically if and only if $\Ni$ is an open part of a para-K\"ahler $h$-plane, $\Na$ is an open part of $\mathbb S^{p},\, \mathbb E^{p}$ or $\mathbb H^{p}$, and the immersion is given by one of the following: {\bf 1.} $\Phi:D_{1}\times_f {\mathbb{S}}^p\longrightarrow {\p}^{h+p}$; \begin{equation} \label{case1} \begin{array}{c} \Phi(z,w)=\text{\small$\Bigg($}z_1+\bar v_1(w_0-1)\sum\limits_{j=1}^hv_jz_j,\ldots,z_h+\bar v_h(w_0-1)\sum\limits_{j=1}^hv_jz_j,\\ \qquad w_1\sum\limits_{j=1}^h\j v_jz_j,\ldots,w_p\sum\limits_{j=1}^h\j v_jz_j \text{\small$\Bigg)$},\;\; h\geq 2, \end{array} \end{equation} with warping function $$f=\sqrt{\langle \bar v,z\rangle^2-\langle \j \bar v,z\rangle^2},$$ where $v=(v_1,\ldots,v_h)\in{\mathbb{S}}^{2h-1}\subset {\mathbb{D}}^h$, $ w=(w_0,w_1,\ldots,w_p)\in{\mathbb{S}}^p$, $ z=(z_1,\ldots,z_h)\in D_{1}$ and $D_{1}=\left\{z\in{\mathbb{D}}^h : \langle \bar v,z\rangle^2>\langle \j \bar v,z\rangle^2\right\}$.
{\bf 2.} $\Phi:D_{1}\times_f {\mathbb{H}}^p\longrightarrow {\p}^{h+p}$; \begin{equation} \label{case2} \begin{array}{c} \Phi(z,w)=\text{\small$\Bigg($}z_1+\bar v_1(w_0-1)\sum\limits_{j=1}^hv_jz_j,\ldots,z_h+\bar v_h(w_0-1)\sum\limits_{j=1}^hv_jz_j,\\ \qquad w_1\sum\limits_{j=1}^h\j v_jz_j,\ldots,w_p\sum\limits_{j=1}^h\j v_jz_j \text{\small$\Bigg)$},\;\; h\geq 1, \end{array} \end{equation} with the warping function $f=\sqrt{\langle \bar v,z\rangle^2-\langle \j \bar v,z\rangle^2}$, where $v=(v_1,\ldots,v_h)\in{\mathbb{H}}^{2h-1}\subset {\mathbb{D}}^h$, $w=(w_0,w_1,\ldots,w_p)\in{\mathbb{H}}^p$ and $z=(z_1,\ldots,z_h)\in D_{1}$. {\bf 3.} $\Phi:D_{1}\times_f {\mathbb{E}}^p\longrightarrow {\p}^{h+p}$; \begin{equation} \label{case3} \begin{array}{c} \Phi(z,u)=\text{\small$ \Bigg($}z_1+\dfrac{\bar v_1}{2}\Big(\sum\limits_{a=1}^pu_a^2\Big)\sum\limits_{j=1}^hv_jz_j,\ldots, z_h+\dfrac{\bar v_h}{2}\Big(\sum\limits_{a=1}^pu_a^2\Big)\sum\limits_{j=1}^hv_jz_j,\\ u_1\sum\limits_{j=1}^h\j v_jz_j,\ldots,u_p\sum\limits_{j=1}^h\j v_jz_j \text{\small$ \Bigg)$},\;\; h\geq 2, \end{array} \end{equation} with the warping function $f=\sqrt{\langle \bar v,z\rangle^2-\langle \j \bar v,z\rangle^2}$, where $v=(v_1,\ldots,v_h)$ is a light-like vector in ${\mathbb{D}}^h$, $z=(z_1,\ldots,z_h)\in D_{1}$ and $u=(u_1,\ldots,u_p)\in{\mathbb{E}}^p$. Moreover, in this case, each leaf $\, {\mathbb{E}}^p$ is quasi-minimal in ${\p}^{h+p}$. {\bf 4.} $\Phi:D_{2}\times_f {\mathbb{E}}^p\longrightarrow {\p}^{h+p}$; \begin{equation} \label{case4} \begin{array}{c} \Phi(z,u)=\text{\small$\Bigg( $}z_1+\dfrac{v_1}{2}\!
\sum\limits_{a=1}^pu_a^2,\ldots, z_h+\dfrac{v_h}{2}\!\sum\limits_{a=1}^pu_a^2, \dfrac{v_0}{2}u_1,\ldots,\dfrac{v_0}{2}u_p \text{\small$\Bigg)$},\; h\geq 1, \end{array} \end{equation} with the warping function $f=\sqrt{-\langle v,z\rangle}$, where $v_0=\sqrt{b_1}+\epsilon\j\sqrt{b_1}$ with $b_1>0$, $D_{2}=\{z\in{\mathbb{D}}^h:\langle v,z\rangle<0\}$, $v=(v_1,\ldots,v_h)=(b_1+\epsilon\j b_1,\ldots,b_h+\epsilon\j b_h)$, $\epsilon=\pm1$, $z=(z_1,\ldots,z_h)\in D_2$ and $ u=(u_1,\ldots,u_p)\in{\mathbb{E}}^p$. In each of the four cases the warped product is minimal in ${\mathbb{E}}^{2(h+p)}_{h+p}$. \end{theorem} \begin{proof} Inequality \eqref{IN} is already given in Theorem~\ref{small_codim}. From now on, let us assume that $\Phi:\Ni\times_f\Na\longrightarrow{\p}^{m}$ is a space-like $\p R$-warped product satisfying the equality in \eqref{IN} with $m=h+p$. Then it follows that $\nu=0$ and hence \begin{equation} \label{eq:tgtu} \sigma(X,Y)=0,\ \sigma(Z,W)=0,\ \sigma(X,Z)=\big(PX(\ln f)\big)FZ, \end{equation} for all $X,Y$ tangent to $\Ni$ and $Z,W$ tangent to $\Na$. Thus, $\Ni$ is totally geodesic in ${\p}^m$ and $\Na$ is totally umbilical in ${\p}^{m}$. As $\Ni$ is invariant and totally geodesic in ${\p}^m$, it is congruent to ${\p}^h$ with the canonical (induced) para-K\"ahler structure \cite{cm:Chen11}. On ${\mathbb{E}}^{2h}_h$ we may choose global coordinates $s=(s_1,\ldots,s_h)$ and $t=(t_1,\ldots,t_h)$ such that \begin{equation} \label{A1} g_\top=-\sum\limits_{i=1}^hds_i^2+\sum\limits_{i=1}^hdt_i^2,\;\; \p\dsi=\dti,\;\; \p\dti=\dsi, \end{equation} for $i=1,\ldots,h$. Let us put $\dsi=\frac\partial{\partial s_i}\ ,$ $\dti=\frac\partial{\partial t_i}$ and so on. \smallskip Now, we study the case $p>1$. \smallskip Since $\Na$ is a space-like totally umbilical, non-totally geodesic submanifold in ${\p}^m$, it is congruent (cf.
\cite{cm:AKK96}, \cite[Proposition 3.6]{cm:Chen11}) \begin{itemize} \item either to the Euclidean $p$-sphere ${\mathbb{S}}^p$, \item or to the hyperbolic $p$-plane ${\mathbb{H}}^p$, \item or to a flat quasi-minimal submanifold ${\mathbb{E}}^p$. \end{itemize} In what follows we discuss successively all three situations. On ${\mathbb{S}}^p$ we consider spherical coordinates $u=(u_1,\ldots,u_p)$ such that the metric $g_\perp$ is expressed by \begin{equation} \label{A3} g_\perp=du_1^2+\cos^2u_1du_2^2+\ldots+\cos^2u_1\ldots\cos^2u_{p-1}du_p^2. \end{equation} Thus, the warped metric on $M$ is given by $$ g=g_\top(s,t)+f^2(s,t)g_\perp(u). $$ Then the Levi-Civita connection $\nabla$ of $g$ satisfies \begin{subequations} \renewcommand{\theequation}{\theparentequation .\alph{equation}} \label{eq:LCSp} \begin{align} \label{A5} & \nabla_{\dsi}\dsj=0\ ,\ \nabla_{\dsi}\dtj=0\ ,\ \nabla_{\dti}\dtj=0,\\ \label{A6} &\nabla_{\dsi}\dua=\frac{f_{s_i}}f~\dua\, ,\ \nabla_{\dti}\dua=\frac{f_{t_i}}f~\dua,\\ \label{A7} &\nabla_{\dua}\dub=-\tan u_a\dub\quad(a<b),\\ \label{A8} & \nabla_{\dua}\dua=\prod\limits_{b=1}^{a-1}\cos^2u_b\sum\limits_{i=1}^h\big(ff_{s_i}\dsi-ff_{t_i}\dti\big)\\ \nonumber & \qquad\qquad + \sum\limits_{b=1}^{a-1}\big(\sin u_b\cos u_b\cos^2u_{b+1}\ldots\cos^2u_{a-1}\big)\dub, \end{align} \end{subequations} for $ i,j =1,\ldots,h$ and $a, b=1,\ldots,p$. From now on we put $\psi=\ln f$.
Using the relations above, we find that the Riemann curvature tensor $R$ satisfies \begin{equation} \label{eq:curb} \begin{array}{l} R(\dsi,\dua)~\dsj=\left(\dfrac{\partial^2\psi}{\partial{s_i}\partial{s_j}}+\dfrac{\partial\psi}{\partial{s_i}}\dfrac{\partial\psi}{\partial{s_j}}\right)\dua\\[2mm] R(\dsi,\dua)~\dtj=\left(\dfrac{\partial^2\psi}{\partial{s_i}\partial{t_j}}+\dfrac{\partial\psi}{\partial{s_i}}\dfrac{\partial\psi}{\partial{t_j}}\right)\dua\\[2mm] R(\dti,\dua)~\dtj=\left(\dfrac{\partial^2\psi}{\partial{t_i}\partial{t_j}}+\dfrac{\partial\psi}{\partial{t_i}}\dfrac{\partial\psi}{\partial{t_j}}\right)\dua\ . \end{array} \end{equation} Moreover we have $$ \sigma(\dsi,\dua)=\frac{\partial\psi}{\partial{t_i}}~F\dua,\ \sigma(\dti,\dua)=\frac{\partial\psi}{\partial{s_i}}~F\dua. $$ Applying Gauss' equation we find $$ \widetilde g\(\widetilde R_{XZ}Y,W\)=g\(R_{XZ}Y,W\)+\widetilde g\(\sigma(X,Y),\sigma(Z,W)\)-\widetilde g\(\sigma(X,W),\sigma(Y,Z)\), $$ for $X,Y$ tangent to $\Ni$ and $Z,W$ tangent to $\Na$. Using \eqref{eq:tgtu} and \eqref{eq:curb} we get \begin{equation} \label{eq:warping} \begin{array}{l} \displaystyle \frac{\partial^2\psi}{\partial{s_i}\partial{s_j}}+\frac{\partial\psi}{\partial{s_i}}\frac{\partial\psi}{\partial{s_j}} +\frac{\partial\psi}{\partial{t_i}}\frac{\partial\psi}{\partial{t_j}}=0\\[3mm] \displaystyle \frac{\partial^2\psi}{\partial{s_i}\partial{t_j}}+\frac{\partial\psi}{\partial{s_i}}\frac{\partial\psi}{\partial{t_j}} +\frac{\partial\psi}{\partial{t_i}}\frac{\partial\psi}{\partial{s_j}}=0\\[3mm] \displaystyle \frac{\partial^2\psi}{\partial{t_i}\partial{t_j}}+\frac{\partial\psi}{\partial{s_i}}\frac{\partial\psi}{\partial{s_j}} +\frac{\partial\psi}{\partial{t_i}}\frac{\partial\psi}{\partial{t_j}}=0,\ i,j=1,\ldots,h. \end{array} \end{equation} Let us first consider the case $h\geq2$.
By applying Proposition \ref{PDEsyst} (Case 1 in the proof), we know that there exists a constant vector $v=(a_1,a_2,\ldots,a_h,0,b_2,\ldots,b_h)$, with $a_1\neq0$, such that $$ \psi=\frac12\ln\left[\langle \bar v,z\rangle^2-\langle \j \bar v,z\rangle^2\right], $$ where $z=(s_1,\ldots,s_h,t_1,\ldots,t_h)$ and $\langle~,~\rangle$ denotes the pseudo-Euclidean product in ${\mathbb{E}}^{2h}_h$. If $a_1<0$, we may apply the isometric transformation $s_1\mapsto-s_1$, $t_1\mapsto-t_1$ of ${\mathbb{E}}^{2h}_h$; hence we may assume $a_1>0$. In the sequel, we apply Gauss' formula $$ \widetilde\nabla_{\Phi_*U}\Phi_*V=\Phi_*\nabla_UV+\sigma(U,V), \ \forall U,V\in\chi(M), $$ where $\Phi_*$ denotes the differential of the map $\Phi$. Taking $U,V\in\D$ we obtain \begin{equation} \label{embed_DD} \begin{array}{l} \displaystyle \frac{\partial^2x_A}{\partial s_i\partial s_j}=\frac{\partial^2x_A}{\partial s_i\partial t_j}=\frac{\partial^2x_A}{\partial t_i\partial t_j}=0\\[3mm] \displaystyle \frac{\partial^2y_A}{\partial s_i\partial s_j}=\frac{\partial^2y_A}{\partial s_i\partial t_j}=\frac{\partial^2y_A}{\partial t_i\partial t_j}=0. \end{array} \end{equation} For $U\in\D$ and $V\in\Dp$ we have \begin{equation} \label{embed_DDp} \begin{array}{l} \displaystyle \frac{\partial^2x_A}{\partial s_i\partial u_a}=\psi_{s_i}\frac{\partial x_A}{\partial u_a}+\psi_{t_i}\frac{\partial y_A}{\partial u_a}\ ,\quad \displaystyle \frac{\partial^2x_A}{\partial t_i\partial u_a}=\psi_{t_i}\frac{\partial x_A}{\partial u_a}+\psi_{s_i}\frac{\partial y_A}{\partial u_a}\\[3mm] \displaystyle \frac{\partial^2y_A}{\partial s_i\partial u_a}=\psi_{s_i}\frac{\partial y_A}{\partial u_a}+\psi_{t_i}\frac{\partial x_A}{\partial u_a}\ ,\quad \displaystyle \frac{\partial^2y_A}{\partial t_i\partial u_a}=\psi_{t_i}\frac{\partial y_A}{\partial u_a}+\psi_{s_i}\frac{\partial x_A}{\partial u_a}.
\end{array} \end{equation} Finally, taking $U,V\in\Dp$ we obtain \begin{equation} \label{embed_DpDp} \begin{array}{l} \displaystyle \frac{\partial^2x_A}{\partial u_a\partial u_b}=-\tan u_a \frac{\partial x_A}{\partial u_b}\ ,\quad \frac{\partial^2y_A}{\partial u_a\partial u_b}=-\tan u_a \frac{\partial y_A}{\partial u_b}\ ,\ a<b\, ,\\[2mm] \displaystyle \frac{\partial^2x_A}{\partial u_a^2}=\prod\limits_{b=1}^{a-1}\cos^2u_b\sum\limits_{j=1}^h \left(ff_{s_j}\frac{\partial x_A}{\partial s_j}-ff_{t_j}\frac{\partial x_A}{\partial t_j}\right)\quad \\[2mm] \qquad \qquad \displaystyle +\sum\limits_{b=1}^{a-1}\(\sin u_b\cos u_b\cos^2u_{b+1}\ldots\cos^2u_{a-1}\)\frac{\partial x_A}{\partial u_b}\, ,\\[2mm] \displaystyle \frac{\partial^2y_A}{\partial u_a^2}=\prod\limits_{b=1}^{a-1}\cos^2u_b\sum\limits_{j=1}^h \left(ff_{s_j}\frac{\partial y_A}{\partial s_j}-ff_{t_j}\frac{\partial y_A}{\partial t_j}\right)\quad \\[2mm] \qquad \qquad \displaystyle +\sum\limits_{b=1}^{a-1}\(\sin u_b\cos u_b\cos^2u_{b+1}\ldots\cos^2u_{a-1}\)\frac{\partial y_A}{\partial u_b}\, . \end{array} \end{equation} From \eqref{embed_DD} we get \begin{equation} \label{sol:DD} \begin{array}{l} x_A(s,t,u)=\sum\limits_1^h\lambda^j_A(u)s_j+\sum\limits_1^h\rho_A^j(u)t_j+C_A(u)\, ,\\[2mm] y_A(s,t,u)=\sum\limits_1^h\tilde\rho_A^j(u)s_j+\sum\limits_1^h\tilde\lambda_A^j(u)t_j+\tilde C_A(u)\, . 
\end{array} \end{equation} By combining \eqref{embed_DDp} with \eqref{sol:DD} we obtain \begin{equation} \label{eq:17} \begin{array}{rl} \dfrac{\partial\tilde\lambda_A^i}{\partial u_a}=\dfrac{\partial\lambda_A^i}{\partial u_a}&= \psi_{s_i}\left[\sum\limits_j\dfrac{\partial \lambda_A^j}{\partial u_a}(u)s_j+\sum\limits_j\dfrac{\partial \rho_A^j}{\partial u_a}(u)t_j+\dfrac{\partial C_A}{\partial u_a}\right]\\ &\hskip.3in +\psi_{t_i}\left[\sum\limits_j\dfrac{\partial \rho_A^j}{\partial u_a}(u)s_j+\sum\limits_j\dfrac{\partial \lambda_A^j}{\partial u_a}(u)t_j+\dfrac{\partial \tilde C_A}{\partial u_a}\right]\, ,\\[2mm] \dfrac{\partial\tilde\rho_A^i}{\partial u_a}=\dfrac{\partial\rho_A^i}{\partial u_a}&= \psi_{t_i}\left[\sum\limits_j\dfrac{\partial \lambda_A^j}{\partial u_a}(u)s_j+\sum\limits_j\dfrac{\partial \rho_A^j}{\partial u_a}(u)t_j+\dfrac{\partial C_A}{\partial u_a}\right]\\ & \hskip.3in +\psi_{s_i}\left[\sum\limits_j\dfrac{\partial \rho_A^j}{\partial u_a}(u)s_j+\sum\limits_j\dfrac{\partial \lambda_A^j}{\partial u_a}(u)t_j+\dfrac{\partial \tilde C_A}{\partial u_a}\right]. \end{array} \end{equation} For $i=1$ we have $$ \begin{array}{l} \psi_{s_1}=\dfrac{a_1\big(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\big)} {\big(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\big)^2-\big(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\big)^2}\, ,\\[2mm] \psi_{t_1}=\dfrac{-a_1\big(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\big)} {\big(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\big)^2-\big(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\big)^2}\ . \end{array} $$ Substituting in \eqref{eq:17} we find polynomials in $s$ and $t$.
Comparing the coefficients corresponding to $s_1s_i$ and $s_1t_i$, $i>1$, we find \begin{equation} \label{eq:20-21} \begin{array}{l} \lambda_A^i(u)=\frac{a_i}{a_1}\lambda_A(u)+\frac{b_i}{a_1}\rho_A(u)+\frac{c_A^i}{a_1}\ ,\ \rho_A^i(u)=\frac{b_i}{a_1}\lambda_A(u)+\frac{a_i}{a_1}\rho_A(u)+\frac{d_A^i}{a_1}\ \end{array} \end{equation} for $\ i=2,\ldots,h$, and $\lambda_A^1(u)=\lambda_A(u)$, $\rho_A^1(u)=\rho_A(u)$, where $c_A^i,d_A^i\in{\mathbb{R}}$. Comparing the coefficients of $s_1$ and $t_1$ we find that $C_A$ and $\tilde C_A$ are constants, and applying a suitable translation in ${\mathbb{E}}^{2m}_m$ if necessary, one may suppose $C_A=0$ and $\tilde C_A=0$, $A=1,\ldots,m$. Replacing in \eqref{sol:DD} and taking into account \eqref{eq:17} we get \begin{equation} \label{sol:DDp} \begin{array}{l} x_A(s,t,u)=\frac1{a_1}\lambda_A(u)\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\\[2mm] \qquad+ \frac1{a_1}\rho_A(u)\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\) +\frac1{a_1}\(\sum\limits_2^hc_A^js_j+\sum\limits_2^hd_A^jt_j\), \qquad \end{array} \end{equation} $$ \begin{array}{l} y_A(s,t,u)=\frac1{a_1}\lambda_A(u)\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\\[2mm] + \frac1{a_1}\rho_A(u)\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\) +\frac1{a_1}\(\tilde d_As_1+\tilde c_At_1+\sum\limits_2^h\tilde d_A^js_j+\sum\limits_2^h\tilde c_A^jt_j\), \end{array} $$ where $\tilde c_A$, $\tilde d_A$, $\tilde c_A^i$ and $\tilde d_A^i$ are real numbers. 
The third equation in \eqref{embed_DpDp} for $a=1$ gives $$ \begin{array}{l} \dfrac{\partial^2x_A}{\partial u_1^2}=\(a_1s_1+\sum\limits_2^h a_js_j+\sum\limits_2^h b_jt_j\)\left[ a_1\frac{\partial x_A}{\partial s_1}+\sum\limits_2^ha_j\frac{\partial x_A}{\partial s_j}- \sum\limits_2^hb_j\frac{\partial x_A}{\partial t_j}\right]\\[2mm] \qquad + \(a_1t_1+\sum\limits_2^h a_jt_j+\sum\limits_2^h b_js_j\)\left[ a_1\frac{\partial x_A}{\partial t_1}+\sum\limits_2^ha_j\frac{\partial x_A}{\partial t_j}- \sum\limits_2^hb_j\frac{\partial x_A}{\partial s_j}\right] \end{array} $$ which combined with the first equation in \eqref{sol:DDp} yields \begin{equation} \label{S:u1u1} \begin{array}{l} \(a_1s_1+\sum\limits_2^h a_js_j+\sum\limits_2^h b_jt_j\)\left[ \frac{\partial^2\lambda_A}{\partial u_1^2}(u)+\langle v,v\rangle \lambda_A(u)-D_A\right]\\[2mm] +\(a_1t_1+\sum\limits_2^h a_jt_j+\sum\limits_2^h b_js_j\)\left[ \frac{\partial^2\rho_A}{\partial u_1^2}(u)+\langle v,v\rangle \rho_A(u)-\tilde D_A\right]=0, \end{array} \end{equation} where $D_A=\sum\limits_2^h(a_jc_A^j-b_jd_A^j)$ and $\tilde D_A=\sum\limits_2^h(a_jd_A^j-b_jc_A^j)$. Since ${||\nabla f||}_2=-a_1^2-\sum\limits_2^ha_j^2+\sum\limits_2^hb_j^2$, Proposition~\ref{prop_v} implies $\langle v,v\rangle=1$. Hence, considering in \eqref{S:u1u1} the coefficients of $s_1$ and $t_1$ one obtains the following PDEs: \begin{equation} \frac{\partial^2\lambda_A}{\partial u_1^2}(u)+\lambda_A(u)-D_A=0,\quad \frac{\partial^2\rho_A}{\partial u_1^2}(u)+\rho_A(u)-\tilde D_A=0. \end{equation} We immediately get \begin{equation} \label{lambda_rho} \begin{array}{l} \lambda_A(u)=\cos u_1\Theta_A^{(1)}(u_2,\ldots,u_p)+\sin u_1D_A^{(1)}(u_2,\ldots,u_p)+D_A, \\[2mm] \rho_A(u)=\cos u_1\tilde\Theta_A^{(1)}(u_2,\ldots,u_p)+\sin u_1\tilde D_A^{(1)}(u_2,\ldots,u_p)+\tilde D_A \end{array} \end{equation} where $\Theta_A^{(1)}$, $D_A^{(1)}$, $\tilde\Theta_A^{(1)}$ and $\tilde D_A^{(1)}$ are functions depending on $u_2,\ldots,u_p$.
The first equation in \eqref{embed_DpDp} for $a=1$ gives $ \frac{\partial^2x_A}{\partial u_1\partial u_b}=-\tan u_1\frac{\partial x_A}{\partial u_b}\ ,\ b>1 $ which combined with \eqref{sol:DDp} yields $$ \dfrac{\partial^2\lambda_A}{\partial u_1\partial u_b}=-\tan u_1\frac{\partial \lambda_A}{\partial u_b}\ ,\ \dfrac{\partial^2\rho_A}{\partial u_1\partial u_b}=-\tan u_1\frac{\partial \rho_A}{\partial u_b}\ . $$ Using \eqref{lambda_rho}, we get $\frac{\partial D_A^{(1)}}{\partial u_b}=0$, and $\frac{\partial\tilde D_A^{(1)}}{\partial u_b}=0$, $\forall b>1$, hence $D_A^{(1)}$ and $\tilde D_A^{(1)}$ are real constants. Returning to the third equation in \eqref{embed_DpDp} with $a=2$ we get $$ \begin{array}{l} \dfrac{\partial^2x_A}{\partial u_2^2}=\cos^2u_1\(a_1s_1+\sum\limits_2^h a_js_j+\sum\limits_2^h b_jt_j\)\left[ a_1\dfrac{\partial x_A}{\partial s_1}+\sum\limits_2^ha_j\dfrac{\partial x_A}{\partial s_j}- \sum\limits_2^hb_j\dfrac{\partial x_A}{\partial t_j}\right]\\[2mm] \qquad +\cos^2u_1 \(a_1t_1+\sum\limits_2^h a_jt_j+\sum\limits_2^h b_js_j\)\left[ a_1\frac{\partial x_A}{\partial t_1}+\sum\limits_2^ha_j\dfrac{\partial x_A}{\partial t_j}- \sum\limits_2^hb_j\dfrac{\partial x_A}{\partial s_j}\right]\\[2mm] \qquad +\sin u_1\cos u_1\dfrac{\partial x_A}{\partial u_1}\ . \end{array} $$ This relation together with \eqref{sol:DDp} yields a polynomial in $s$ and $t$, and considering the coefficients of $s_1$ and $t_1$ respectively, we obtain $$ \begin{array}{l} \dfrac{\partial^2\lambda_A}{\partial u_2^2}-\sin u_1\cos u_1\frac{\partial \lambda_A}{\partial u_1}+(\cos^2 u_1)\lambda_A-D_A\cos^2 u_1 =0\, , \\[2mm] \dfrac{\partial^2\rho_A}{\partial u_2^2}-\sin u_1\cos u_1\frac{\partial \rho_A}{\partial u_1}+(\cos^2 u_1)\rho_A-\tilde D_A\cos^2 u_1 =0\, . 
\end{array} $$ Using \eqref{lambda_rho} one gets $$ \frac{\partial ^2\Theta_A^{(1)}}{\partial u_2^2}+\Theta_A^{(1)}=0\ , \ \frac{\partial ^2\tilde \Theta_A^{(1)}}{\partial u_2^2}+\tilde\Theta_A^{(1)}=0 $$ with the solutions $$ \begin{array}{l} \Theta_A^{(1)}=\cos u_2\Theta_A^{(2)}(u_3,\ldots,u_p)+\sin u_2D_A^{(2)}(u_3,\ldots,u_p)\, ,\\[2mm] \tilde\Theta_A^{(1)}=\cos u_2\tilde\Theta_A^{(2)}(u_3,\ldots,u_p)+\sin u_2\tilde D_A^{(2)}(u_3,\ldots,u_p), \end{array} $$ where $\Theta_A^{(2)}$, $D_A^{(2)}$, $\tilde\Theta_A^{(2)}$ and $\tilde D_A^{(2)}$ are functions depending on $u_3,\ldots,u_p$. Continuing this procedure sufficiently many times, we find \begin{equation} \label{lambda_rho_1} \begin{array}{l} \lambda_A(u)=D_A^{(0)}\cos u_1\ldots\cos u_{p-1}\cos u_p+ D_A^{(p)}\cos u_1\ldots\cos u_{p-1}\sin u_p\quad\\[2mm] \quad + D_A^{(p-1)}\cos u_1\ldots\sin u_{p-1}+\ldots+ D_A^{(2)}\cos u_1\sin u_2+D_A^{(1)}\sin u_1+D_A\, ,\\[3mm] \rho_A(u)=\tilde D_A^{(0)}\cos u_1\ldots\cos u_{p-1}\cos u_p+ \tilde D_A^{(p)}\cos u_1\ldots\cos u_{p-1}\sin u_p\quad\\[2mm] \quad + \tilde D_A^{(p-1)}\cos u_1\ldots\sin u_{p-1}+\ldots+ \tilde D_A^{(2)}\cos u_1\sin u_2+\tilde D_A^{(1)}\sin u_1+\tilde D_A\, , \end{array} \end{equation} where $D_A^{(p)},\ldots,D_A^{(0)}$, $D_A$, $\tilde D_A^{(p)},\ldots,\tilde D_A^{(0)}$ and $\tilde D_A$ are real constants. At this point let us introduce the following notation: $$ \begin{array}{lcl} w_0 &=& \cos u_1\ldots\cos u_{p-1}\cos u_p\\[2mm] w_p &=& \cos u_1\ldots\cos u_{p-1}\sin u_p\\[2mm] w_{p-1} &=& \cos u_1\ldots\sin u_{p-1}\\ \ldots & \ldots & \ldots\ldots\ldots\ldots\ldots\\ w_2 &=& \cos u_1\sin u_2\\[2mm] w_1 &=& \sin u_1. \end{array} $$ It follows that $\lambda_A$ and $\rho_A$ may be rewritten as \begin{equation} \label{lambda_rho_2} \lambda_A(w)=D_A+\sum\limits_{a=0}^pD_A^{(a)}w_a\, ,\quad \rho_A(w)=\tilde D_A+\sum\limits_{a=0}^p\tilde D_A^{(a)}w_a. 
\end{equation} Going back to \eqref{sol:DDp} we get, after a re-scaling with $a_1\neq0$ \begin{equation} \label{xAyA} \begin{array}{l} x_A(s,t,w)=\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\sum\limits_{a=0}^pD_A^{(a)}w_a\quad\\[2mm] \qquad + \(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\sum\limits_{a=0}^p\tilde D_A^{(a)}w_a+ \sum\limits_{j=1}^h(\alpha_A^js_j+\beta_A^jt_j)\, , \\[3mm] y_A(s,t,w)=\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\sum\limits_{a=0}^p\tilde D_A^{(a)}w_a\quad\\[2mm] \qquad + \(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\sum\limits_{a=0}^p D_A^{(a)}w_a+ \sum\limits_{j=1}^h(\tilde \alpha_A^js_j+\tilde\beta_A^jt_j). \end{array} \end{equation} Let us choose the initial conditions \begin{subequations} \renewcommand{\theequation}{\theparentequation .\alph{equation}} \label{init_cond} \begin{align} \label{IC1} & \Phi_*\partial_{s_i}(1,0,\ldots,0)=(0,\ldots,0,\stackrel{(i)}{1},0,\ldots,0,0,\ldots,0)\, ,\\ \label{IC2} & \Phi_*\partial_{t_i}(1,0,\ldots,0)=(0,\ldots,0,0,\ldots,\stackrel{(m+i)}{1},0,\ldots,0)\, ,\ i=1,\ldots,h\, ,\\ \label{IC3} & \Phi_*\partial_{u_b}(1,0,\ldots,0)=(0,\ldots,0,0,\ldots,\stackrel{(m+h+b)}{a_1 ,}0,\ldots,0)\, ,\ b=1,\ldots,p\,. \end{align} \end{subequations} From \eqref{xAyA} and \eqref{IC3} and taking into account that $$ \left.\frac{\partial w_a}{\partial u_b}\right|_{u=0}=\left\{ \begin{array}{l} 0, {\rm\ if\ } a=0\\ 0, {\rm\ if\ } b\neq a, \ a\geq1\\ 1, {\rm\ if\ } b=a\, , \end{array}\right. $$ we obtain that \begin{equation} \label{22} \begin{array}{l} D_i^{(b)}=0,\ D_{h+a}^{(b)}=0,\ \tilde D_i^{(b)}=0,\ \tilde D_{h+a}^{(b)}=0, (a\neq b),\ \tilde D_{h+b}^{(b)}=1,\\[2mm] \qquad\qquad\qquad i=1,\ldots,h;\ a,b=1,\ldots,p. 
\end{array} \end{equation} From \eqref{xAyA} and \eqref{IC1} we find \begin{equation} \label{23} \begin{array}{l} a_iD_j^{(0)}+b_i\tilde D_j^{(0)}+\alpha_j^i=\delta_{ij},\ a_iD_{h+a}^{(0)}+b_i\tilde D_{h+a}^{(0)}+\alpha_{h+a}^i=0\, ,\\[2mm] a_i\tilde D_j^{(0)}+b_i D_j^{(0)}+\tilde\alpha_j^i=0,\ \ a_i\tilde D_{h+a}^{(0)}+b_i D_{h+a}^{(0)}+\tilde\alpha_{h+a}^i=0\, ,\\[2mm] \qquad\qquad\qquad i,j=1,\ldots,h,\ a=1,\ldots,p,\ b_1=0. \end{array} \end{equation} Finally, from \eqref{xAyA} and \eqref{IC2} we get \begin{equation} \label{24} \begin{array}{l} b_iD_j^{(0)}+a_i\tilde D_j^{(0)}+\beta_j^i=0,\ \ b_iD_{h+a}^{(0)}+a_i\tilde D_{h+a}^{(0)}+\beta_{h+a}^i=0\, ,\\[2mm] b_i\tilde D_j^{(0)}+a_i D_j^{(0)}+\tilde\beta_j^i=\delta_{ij}, \ b_i\tilde D_{h+a}^{(0)}+a_i D_{h+a}^{(0)}+\tilde\beta_{h+a}^{i}=0\, ,\\[2mm] \qquad\qquad\qquad i,j=1,\ldots,h,\ a=1,\ldots,p,\ b_1=0\, . \end{array} \end{equation} Now, plugging \eqref{22}, \eqref{23} and \eqref{24} in \eqref{xAyA} we obtain \begin{subequations} \renewcommand{\theequation}{\theparentequation .\alph{equation}} \label{xAyA1} \begin{align} \label{xD} x_i(s,t,w)&=s_i+D_i^{(0)}(w_0-1)\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\\ \nonumber & \quad+\tilde D_i^{(0)}(w_0-1)\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\, , \end{align} \begin{align} \label{xDp} x_{h+a}(s,t,w) &= D_{h+a}^{(0)}(w_0-1)\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\\ \nonumber & \ +\big[{w_a}+\tilde D_{h+a}^{(0)}(w_0-1)\big]\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\, , \\ \label{yD} y_i(s,t,w)&=t_i+ D_i^{(0)}(w_0-1)\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\\ \nonumber & \quad+\tilde D_i^{(0)}(w_0-1)\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\, , \\ \label{yDp} y_{h+a}(s,t,w)&=D_{h+a}^{(0)}(w_0-1)\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\\ \nonumber & \ +\big[{w_a}+\tilde D_{h+a}^{(0)}(w_0-1)\big]\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\). 
\end{align} \end{subequations} Since $\Phi$ is an isometric immersion we have $\tilde g(\Phi_* U,\Phi_* V)=g(U,V)$ for every $U$ and $V$ tangent to $M$. From $\tilde g(\Phi_* \partial s_1,\Phi_* \partial s_1)=-1$ and \eqref{xAyA1} we get $$ (w_0-1)\langle D^{(0)},D^{(0)}\rangle+2\sum\limits_{a=1}^pw_a\tilde D_{h+a}^{(0)}-\frac2{a_1}~D_1^{(0)}-(w_0+1)=0 $$ for all $w\in{\mathbb{S}}^p$, where $$D^{(0)}=\(D_1^{(0)},\ldots,D_h^{(0)},D_{h+1}^{(0)},\ldots,D_{2h}^{(0)}, \tilde D_1^{(0)},\ldots,\tilde D_h^{(0)},\tilde D_{h+1}^{(0)},\ldots,\tilde D_{2h}^{(0)}\). $$ Therefore \begin{equation} \label{isom11} D_1^{(0)}=-{a_1}\, ,\ \tilde D_{h+a}^{(0)}=0, \forall a=1,\ldots,p,\ \langle D^{(0)},D^{(0)}\rangle=1\, . \end{equation} From $\tilde g(\Phi_*\partial s_1,\Phi_*\partial s_j)=0$ and $\tilde g(\Phi_*\partial s_1,\Phi_*\partial t_j)=0$, $(j\geq2)$, together with \eqref{xAyA1} and \eqref{isom11} it follows \begin{equation} \label{isom1j} D_j^{(0)}=-{a_j}-\frac{b_j}{a_1}~\tilde D_1^{(0)},\quad \tilde D_j^{(0)}={b_j}+\frac{a_j}{a_1}~\tilde D_1^{(0)},\ \forall j\geq 2. \end{equation} Finally, from $\tilde g (\Phi_*\partial s_1,\Phi_*\partial u_b)=0$, \eqref{xAyA1} and \eqref{isom11} we get $\tilde D_1^{(0)}=0$. 
Hence from \eqref{isom1j} one obtains $D_j^{(0)}=-a_j$ and $\tilde D_j^{(0)}=b_j$, for all $j=1,\ldots,h$ (recall $b_1=0$), which combined with $\langle D^{(0)},D^{(0)}\rangle=1$ yields $D_{h+a}^{(0)}=0.$ We conclude from \eqref{xAyA1} the following \begin{equation} \begin{array}{l} x_i(s,t,w)=s_i-a_i(w_0-1)\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\\ \qquad\qquad +b_i(w_0-1)\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\),\\[2mm] x_{h+a}(s,t,w)=w_a\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\), \end{array} \end{equation} \begin{equation}\notag \begin{aligned} &y_i(s,t,w)=t_i-a_i(w_0-1)\(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\\ &\qquad\qquad +b_i(w_0-1)\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\),\\& y_{h+a}(s,t,w)=w_a\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\). \end{aligned} \end{equation} Computing now $x_i+\j y_i$ and $x_{h+a}+\j y_{h+a}$ one gets \eqref{case1}. \smallskip Let us consider the second situation when $\Na$ is the hyperbolic space ${\mathbb{H}}^p$. On ${\mathbb{H}}^p$ consider coordinates $u=(u_1,u_2,\ldots,u_p)$ such that the metric $g_\perp$ is expressed by \begin{equation} \label{metricHp} g_\perp=du_1^2+\sinh^2u_1\left(du_2^2+\cos^2u_2 du_3^2+\ldots+ \cos^2u_2\ldots\cos^2u_{p-1}du_p^2\right), \end{equation} and the warped metric on $M$ is given by $g=g_\top(s,t)+f^2(s,t)g_\perp(u). 
$ Then the Levi Civita connection $\nabla$ of $g$ satisfies \begin{subequations} \renewcommand{\theequation}{\theparentequation .\alph{equation}} \label{eq:LC:Hp} \begin{align} \label{H4} & \nabla_{\dsi}\dsj=0\, ,\ \nabla_{\dsi}\dtj=0\, ,\ \nabla_{\dti}\dtj=0,\\ \label{H5} &\nabla_{\dsi}\dua=\frac{f_{s_i}}f~\dua\, ,\ \nabla_{\dti}\dua=\frac{f_{t_i}}f~\dua,\\ \label{H6} &\nabla_{\partial_{u_1}}\dub=\coth u_1\dub\quad(1<b),\\ \label{H7} &\nabla_{\dua}\dub=-\tan u_a\dub\quad(1<a<b), \\ \label{H8} &\nabla_{\partial_{u_1}}\partial_{u_1}=\sum\limits_{i=1}^h\big(ff_{s_i}\dsi-ff_{t_i}\dti\big), \\\label{H9} & \nabla_{\dua}\dua=\sinh^2u_1\prod\limits_{b=2}^{a-1}\cos^2u_b\sum\limits_{i=1}^h\big(ff_{s_i}\dsi-ff_{t_i}\dti\big)\\ \nonumber & \qquad\qquad -\sinh u_1\cosh u_1\prod\limits_{b=2}^{a-1}\cos^2u_b~\partial_{u_1} \\ \nonumber & \qquad + \sum\limits_{b=2}^{a-1}\big(\sin u_b\cos u_b\cos^2u_{b+1}\ldots\cos^2u_{a-1}\big)\dub,\;\; (1<a) \end{align} \end{subequations} for any $ i,j =1,\ldots,h$ and $a, b=1,\ldots,p$. In the following we will proceed in the same way as in the previous case. Since some computations are very similar we will skip them, and we will focus only on the major differences between the two cases. The function $\psi$ is obtained from Proposition~\ref{PDEsyst} (case 1 in the proof): $$ \psi=\frac12\ln\left[\langle \bar v,z\rangle^2-\langle \j \bar v,z\rangle^2\right], $$ where $v=(a_1,a_2,\ldots,a_h,0,b_2,\ldots,b_h)$, with $a_1>0$, is a constant vector. Applying Gauss' formula $\widetilde \nabla_{\Phi_*U}\Phi_*V=\Phi_*\nabla_UV+\sigma(U,V)$ for $U,V\in\D$, respectively for $U\in\D$ and $V\in\Dp$ we may write \eqref{sol:DDp}. Using Gauss' formula for $U=V=\partial_{u_1}$, we find $$ \begin{array}{l} \dfrac{\partial^2\lambda_A}{\partial u_1^2}+\langle v,v\rangle\lambda_A-D_A=0\quad: \quad D_A=\sum a_jc^j_A-\sum b_j\tilde c^j_A\\[2mm] \dfrac{\partial^2\rho_A}{\partial u_1^2}+\langle v,v\rangle\rho_A-\tilde D_A=0\quad: \quad \tilde D_A=\sum a_j\tilde c^j_A-\sum b_jc^j_A. 
\end{array} $$ Here $\langle v,v\rangle={||\nabla f||}_2=-1$ and consequently \begin{equation} \label{H13} \begin{array}{l} \lambda_A(u)=\cosh u_1 D_A^{(0)}(u_2,\ldots,u_p)+\sinh u_1 \Theta_A^{(0)}(u_2,\ldots,u_p)-D_A\, ,\\[2mm] \rho_A(u)=\cosh u_1 \tilde D_A^{(0)}(u_2,\ldots,u_p)+\sinh u_1 \tilde \Theta_A^{(0)}(u_2,\ldots,u_p)-\tilde D_A. \end{array} \end{equation} Taking $U=\partial_{u_1}$ and $V=\dub$, ($b>1$) we find that $D_A^{(0)}$ and $\tilde D_A^{(0)}$ are constants. Next, applying the Gauss formula for $U=V=\partial_{u_2}$ and respectively for $U=\partial_{u_2}$ and $V=\dub$, ($b>2$) we get $$ \begin{array}{l} \Theta_A^{(0)}=\cos u_2 \Theta_A^{(1)}(u_3,\ldots,u_p)+ D_A^{(1)}\sin u_2\, ,\\[2mm] \tilde\Theta_A^{(0)}=\cos u_2 \tilde \Theta_A^{(1)}(u_3,\ldots,u_p)+ \tilde D_A^{(1)}\sin u_2,\quad D_A^{(1)},\tilde D_A^{(1)}\in{\mathbb{R}}. \end{array} $$ Continuing the procedure sufficiently many times we finally get $$ \begin{array}{l} \lambda_A=-D_A+D_A^{(0)}\cosh u_1+D_A^{(1)}\sinh u_1\sin u_2+D_A^{(2)}\sinh u_1\cos u_2\sin u_3+\cdots\\ \qquad +D_A^{(p-1)}\sinh u_1\cos u_2\cdots\cos u_{p-1}\sin u_p+D_A^{(p)}\sinh u_1\cos u_2\cdots \cos u_p\, , \\[2mm] \rho_A=-\tilde D_A+\tilde D_A^{(0)}\cosh u_1+\tilde D_A^{(1)}\sinh u_1\sin u_2+\tilde D_A^{(2)}\sinh u_1\cos u_2\sin u_3+\cdots\\ \qquad +\tilde D_A^{(p-1)}\sinh u_1\cos u_2\cdots\cos u_{p-1}\sin u_p+\tilde D_A^{(p)}\sinh u_1\cos u_2\cdots \cos u_p. 
\end{array} $$ Considering the hyperbolic space ${\mathbb{H}}^p$ embedded in ${\mathbb{R}}^{p+1}_1$ with coordinates \begin{equation} \label{eq:wHp} \begin{array}{l} w_0=\cosh u_1\\ w_1=\sinh u_1\sin u_2\\ w_2=\sinh u_1\cos u_2\sin u_3\\ \ldots \ldots \ldots \\ w_{p-1}=\sinh u_1\cos u_2\ldots\cos u_{p-1}\sin u_p\\ w_p=\sinh u_1\cos u_2\ldots\cos u_{p-1}\cos u_p\, , \end{array} \end{equation} we may express $\lambda_A$ and $\rho_A$ in terms of $w=(w_0,w_1,\ldots,w_p)$: \begin{equation} \label{H17} \begin{array}{l} \lambda_A=-D_A+D_A^{(0)}w_0+D_A^{(1)}w_1+\ldots+D_A^{(p)}w_p\, ,\\[2mm] \rho_A=-\tilde D_A+\tilde D_A^{(0)}w_0+\tilde D_A^{(1)}w_1+\ldots+\tilde D_A^{(p)}w_p\, . \end{array} \end{equation} After a rescaling with the factor $a_1\neq0$ we may write $$ \begin{array}{l} x_A(s,t,w)=\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\sum\limits_{a=0}^pD_A^{(a)}w_a\quad\\[2mm] \qquad + \(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\sum\limits_{a=0}^p\tilde D_A^{(a)}w_a+ \sum\limits_{j=1}^h(\alpha_A^js_j+\beta_A^jt_j)\, ,\\[3mm] y_A(s,t,w)=\(a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j\)\sum\limits_{a=0}^p\tilde D_A^{(a)}w_a\quad\\[2mm] \qquad + \(a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j\)\sum\limits_{a=0}^p D_A^{(a)}w_a+ \sum\limits_{j=1}^h(\tilde \alpha_A^js_j+\tilde\beta_A^jt_j) \end{array} $$ which is similar to \eqref{xAyA}. From now on we will put \begin{equation} \label{SandT} S=a_1s_1+\sum\limits_2^ha_js_j+\sum\limits_2^hb_jt_j \quad {\rm and} \quad T=a_1t_1+\sum\limits_2^ha_jt_j+\sum\limits_2^hb_js_j. 
\end{equation} Choose the initial point $s_{\rm{init}}=(1,0,\ldots,0)$, $t_{\rm{init}}=(0,0,\ldots,0)$, $u_{\rm{init}}=(\omega,0,\ldots,0)$ with $\omega\neq0$ and the initial conditions $$ \begin{array}{l} \Phi_*\partial_{s_i}(1,0,\cdots,0,\omega,0,\cdots,0)=(0,\cdots,0,\stackrel{(i)}{1},0,\cdots,0,0,\cdots,0)\,,\\ \Phi_*\partial_{t_i}(1,0,\cdots,0,\omega,0,\cdots,0)=(0,\cdots,0,0,\cdots,\stackrel{(m+i)}{1},0,\cdots,0)\, ,\ i=1,\cdots,h\, ,\\ \Phi_*\partial_{u_1}(1,0,\cdots,0,\omega,0,\cdots,0)=(0,\cdots,0,0,\cdots,\stackrel{(m+h+1)}{a_1 ,}0,\cdots,0)\, , \\ \Phi_*\partial_{u_b}(1,0,\cdots,0,\omega,0,\cdots,0)=(0,\cdots,0,0,\cdots,\stackrel{(m+h+b)}{a_1\sinh\omega ,}0,\cdots,0),\ b=2,\cdots,p\,. \end{array} $$ Straightforward computations, similar to the previous case, yield $$ \begin{array}{l} x_i(s,t,w)=s_i+a_i\big(W_0-1\big){S}- b_i\big(W_0-1\big){T}\, ,\\[2mm] x_{h+1}(s,t,w)=W_p{T},\;\; x_{h+a}(s,t,w)=w_{a-1}{T}\ , \;\; a=2,\ldots,p\, ,\\[2mm] y_i(s,t,w)=t_i+a_i\big(W_0-1\big){T}- b_i\big(W_0-1\big){S}\, ,\\[2mm] y_{h+1}(s,t,w)=W_p{S}\, ,\;\; y_{h+a}(s,t,w)=w_{a-1}{S}\ , \;\; a=2,\ldots,p\, , \end{array} $$ where $W_0=w_0\cosh\omega-w_p\sinh\omega$ and $W_p=-w_0\sinh\omega+w_p\cosh\omega$. Moreover, since $W_0^2-W_p^2=w_0^2-w_p^2$, it follows that $(W_0,w_1,\ldots,w_{p-1},W_p)\in{\mathbb{H}}^p$ and after relabeling we write $$ \begin{array}{l} x_i(s,t,w)=s_i+a_i\big(w_0-1\big){S}- b_i\big(w_0-1\big){T}\, ,\\[2mm] x_{h+a}(s,t,w)=w_{a}{T}\, , \quad a=1,\ldots,p\, ,\\[2mm] y_i(s,t,w)=t_i+a_i\big(w_0-1\big){T}- b_i\big(w_0-1\big){S}\, ,\\[2mm] y_{h+a}(s,t,w)=w_{a}{S}\, , \quad a=1,\ldots,p\, , \end{array} $$ where $(w_0,w_1,\ldots,w_p)\in{\mathbb{H}}^p$. Computing now $x_i+\j y_i$ and $x_{h+a}+\j y_{h+a}$ gives \eqref{case2}. 
\smallskip Let us consider the third situation when $\Na$ is the flat space ${\mathbb{E}}^p$, on which we take coordinates $u=(u_1,u_2,\ldots,u_p)$ such that the metric $g_\perp$ is expressed by \begin{equation} \label{metricEp} g_\perp=du_1^2+\ldots+du_p^2. \end{equation} Then the warped metric on $M$ is given by $g=g_\top(s,t)+f^2(s,t)g_\perp(u).$ The Levi Civita connection $\nabla$ of $g$ satisfies \begin{subequations} \renewcommand{\theequation}{\theparentequation .\alph{equation}} \label{eq:LC:Ep} \begin{align} \label{F4} & \nabla_{\dsi}\dsj=0\ ,\ \nabla_{\dsi}\dtj=0\ ,\ \nabla_{\dti}\dtj=0\, ,\\ \label{F5} &\nabla_{\dsi}\dua=\frac{f_{s_i}}f~\dua\, ,\ \nabla_{\dti}\dua=\frac{f_{t_i}}f~\dua,\\ \label{F7} &\nabla_{\dua}\dub=0\, ,\ (a\neq b)\,,\\ \label{F8} &\nabla_{\partial_{u_a}}\partial_{u_a}=\sum\limits_{i=1}^h\big(ff_{s_i}\dsi-ff_{t_i}\dti\big)\, , \end{align} \end{subequations} for any $ i,j =1,\ldots,h$ and $a, b=1,\ldots,p$. In the following we will proceed in the same way as in the previous cases. Again, we skip most computations, emphasizing only the major differences appearing in this situation. The function $\psi$ is obtained from Proposition~\ref{PDEsyst} (case 1 in the proof): $$ \psi=\frac12\ln\left[\langle \bar v,z\rangle^2-\langle \j \bar v,z\rangle^2\right], $$ where $v=(a_1,\ldots,a_h,0,b_2,\ldots,b_h)$, $a_1>0,$ is a constant vector. Applying Gauss' formula $\widetilde \nabla_{\Phi_*U}\Phi_*V=\Phi_*\nabla_UV+\sigma(U,V)$ for $U,V\in\D$, respectively for $U\in\D$ and $V\in\Dp$ we may write \eqref{sol:DDp}. Using Gauss' formula for $U=V=\partial_{u_1}$, we find $$ \begin{array}{l} \dfrac{\partial^2\lambda_A}{\partial u_1^2}+\langle v,v\rangle\lambda_A-D_A=0\quad: \quad D_A=\sum a_jc^j_A-\sum b_j\tilde c^j_A\\[2mm] \dfrac{\partial^2\rho_A}{\partial u_1^2}+\langle v,v\rangle\rho_A-\tilde D_A=0\quad: \quad \tilde D_A=\sum a_j\tilde c^j_A-\sum b_jc^j_A\, . \end{array} $$ Here $\langle v,v\rangle={||\nabla f||}_2=0$. 
Taking $U=\partial_{u_1}$ and $V=\dub$ ($b>1$) we find that $\frac{\partial^2\lambda_A}{\partial u_1\partial u_b}=0$ and $\frac{\partial^2\rho_A}{\partial u_1\partial u_b}=0$. As a consequence, $$ \begin{array}{l} \lambda_A(u)=\frac{D_A}2~u_1^2+D_A^{(1)}u_1+\Theta_A^{(1)}(u_2,\ldots,u_p)\, ,\\[2mm] \rho_A(u)=\frac{\tilde D_A}2~u_1^2+\tilde D_A^{(1)}u_1+\tilde \Theta_A^{(1)}(u_2,\ldots,u_p)\, , \end{array} $$ where $D_A^{(1)},\tilde D_A^{(1)}$ are constants. Continuing the computations in the same manner it turns out that \begin{equation} \label{F13} \begin{array}{l} \lambda_A(u)=\frac{D_A}2~\sum\limits_{a=1}^pu_a^2+\sum\limits_{a=1}^pD_A^{(a)}u_a+D_A^{(0)},\\[2mm] \rho_A(u)=\frac{\tilde D_A}2~\sum\limits_{a=1}^pu_a^2+\sum\limits_{a=1}^p\tilde D_A^{(a)}u_a+\tilde D_A^{(0)}, \end{array} \end{equation} where $D_A^{(0)},\tilde D_A^{(0)}$ and $D_A^{(a)},\tilde D_A^{(a)}$, $a=1,\ldots,p$ are constants. Choosing suitable initial conditions and taking into account the fact that $\Phi$ is an isometric immersion, straightforward computations yield \begin{equation} \label{F18} \begin{array}{l} x_i=s_i+\frac12\big(a_i{S} -b_i{T}\big){\sum\limits_{1}^pu_a^2}\, ,\;\; x_{h+b}=u_b{T}\, ,\\[2mm] y_i=t_i+\frac12\big(a_i{T}- b_i{S}\big){\sum\limits_{1}^pu_a^2}\, ,\;\; y_{h+b}=u_b{S}\, , \end{array} \end{equation} where $S$ and $T$ are as in \eqref{SandT}. Computing now $x_i+\j y_i$ and $x_{h+b}+\j y_{h+b}$ one gets \eqref{case3}. Finally, consider $\Na^0=\{(s_0,t_0)\}\times{\mathbb{E}}^p$, where $(s_0,t_0)$ is a fixed point in ${\mathbb{E}}^{2h}_h$. If $\sigma_\perp^0$ is the second fundamental form of $\Na^0$ in ${\mathbb{E}}^{2m}_m$, we find ${||\sigma_\perp^0(\dua,\dua)||}_2=0.$ So, the mean curvature vector of $\Na^0$ is a light-like vector, hence it is nowhere zero. If $h=1$, then $v=(a_1,0)$. Thus $||v||_2<0$. Hence, $N_\perp$ is an open part of the hyperbolic space $\mathbb H^p$. So, we obtain item {\bf 2}. \smallskip Let us now consider the case $p=1$. 
In this case $\Na$ is a curve, which can be supposed to be parameterized by the arc-length $u$. Hence its metric is $g_\perp=du^2$. We can make the same computations as in the previous cases such that \eqref{sol:DDp} holds. Yet, a first difference appears: we are not able to say anything about the value of ${||\nabla f||}_2=-\sum\limits_{i=1}^ha_i^2+\sum\limits_{i=1}^hb_i^2$. Using as usual Gauss' formula (for $U=V=\dua$) one gets $$ \frac{\partial^2\lambda_A}{\partial u^2}+\langle v,v\rangle\lambda_A-D_A=0\, , \quad \frac{\partial^2\rho_A}{\partial u^2}+\langle v,v\rangle\rho_A-\tilde D_A=0\, , $$ where $D_A, \tilde D_A\in{\mathbb{R}}$. Since $\langle v,v\rangle=-\sum\limits_{i=1}^ha_i^2+\sum\limits_{i=1}^hb_i^2$ is an arbitrary constant, we have to distinguish three different cases: {\bf Case (i)} $\langle v,v\rangle=-r^2$, {\bf Case (ii)} $\langle v,v\rangle=r^2$ and {\bf Case (iii)} $\langle v,v\rangle=0$ ($r>0$). Solving the ordinary differential equations and doing the computations in the same manner as in the case when $p>1$, and after a re-scaling of the vector $v$, we obtain the first three cases stated in the theorem. \smallskip At this point we recall that the PDE system in Proposition~\ref{PDEsyst} also has other solutions. When Case 2a from the proof is considered, doing similar computations we easily get item {\bf 4} of the theorem. Much more interesting is to consider Case 2b in the proof of Proposition~\ref{PDEsyst}. We have to examine again the three situations, namely when $\Na$ is ${\mathbb{S}}^p$, ${\mathbb{H}}^p$ or ${\mathbb{E}}^p$. In the following we give only a few details for the case $M={\mathbb{E}}^{2h}_h\times_f{\mathbb{S}}^p$, the other two being very similar. Here the warping function is $f=\sqrt{AB}$, where $$ A=\sum\limits_{k=1}^ha_k(s_k+\epsilon t_k)\ ,\quad B=\sum\limits_{k=1}^hb_k(s_k-\epsilon t_k)\, , $$ $\epsilon=\pm1$, $a_1=0$, $b_1=1$, $a_2\neq0$. Moreover, by Proposition~\ref{prop_v} we get $\sum\limits_{k=1}^ha_kb_k=-1$. 
Direct computations, analogous to those done in the first part of the proof, yield \begin{equation} \label{F19} \begin{array}{l} x_i=s_i+\dfrac{w_0-1}2\big(b_i{A}+a_i{B}\big)\, ,\;\; x_{h+b}=\dfrac{w_b}2\big(A-B)\, ,\\[2mm] y_i=t_i+\epsilon \dfrac{w_0-1}2\big(b_i{A}-a_i{B}\big)\, ,\;\; y_{h+b}=\epsilon\dfrac{w_b}2\big(A+B)\, , \end{array} \end{equation} where $(w_0,w_1,\ldots,w_p)\in{\mathbb{S}}^p$. Put $v_k=\frac\epsilon2(a_k+b_k)+\frac 12\j(a_k-b_k)$. We have $\langle v,v\rangle=1$, where $v=(v_1,\ldots,v_h)$. Computing $x_i+\j y_i$ and $x_{h+b}+\j y_{h+b}$ we obtain \eqref{case1}. Moreover, the warping function could be written as $f=\sqrt{\langle \bar v,z\rangle^2-\langle \j \bar v,z\rangle^2}$. So, we obtain again item {\bf 1} of the theorem. The converse follows from direct computations. \end{proof} \begin{remark} \rm In case 3 of the previous proof, if we choose $(s_0,t_0)=(1,0,\ldots,0)$, and $v=(1,0,\ldots,0,\sqrt{3}+2\j)$, we obtain the ``initial'' leaf $\Na^0$ given by $$ \begin{array}{l} \Phi(1,0,u)=\(1+\frac12\sum u_a^2,0,\ldots,0,\stackrel{(h)}{\frac{\sqrt{3}}2\sum u_a^2},0,\ldots,0,\stackrel{(m+h)}{-\sum u_a^2},u_1,\ldots,u_p\)\, . \end{array} $$ After a translation along the $x_1$ axis, followed by a rotation in the 2-plane $(x_1,x_h)$ through a suitable angle, we obtain $$ \begin{array}{l} \Phi(1,0,u)=\(0,\ldots,0,\stackrel{(h)}{-\sum u_a^2},0,\ldots,0,\stackrel{(m+h)}{-\sum u_a^2},u_1,\ldots,u_p\), \end{array} $$ which represents the submanifold given in \cite[Proposition 3.6]{cm:Chen11} up to reordering of coordinates. \end{remark} \begin{remark} {\rm By applying the same method we may also classify all time-like $\p R$-warped products $\Ni\times_f\Na$ in the para-K\"ahler $(h+p)$-plane $\p^{h+p}$ satisfying $h=\frac{1}{2}\dim \Ni$, $p=\dim \Na$ and $S_\sigma= 2p{||\nabla\ln f||}_2$.} \end{remark} {\bf Acknowledgement.} The second author is supported by Fulbright Grant no. 498/2010 as a Fulbright Senior Researcher at Michigan State University, U.S.A.
\section{Introduction} In a widely cited series of papers \cite{absorbing1,absorbing2,absorbing3,absorbing4,absorbing5}, Dickman, Mu{\~n}oz, Vespignani, and Zapperi (DMVZ) developed a theory of self-organized criticality as a relationship between driven dissipative systems and systems with conservation. This theory predicts a specific relationship between the abelian sandpile model of Bak, Tang, and Wiesenfeld \cite{BTW}, a driven system in which particles added at random dissipate across the boundary, and the corresponding ``fixed-energy sandpile,'' a closed system in which the total number of particles is conserved. After defining these two models and explaining the conjectured relationship between them in the DMVZ paradigm of self-organized criticality, we present data from large-scale simulations which strongly indicate that this conjecture is false on the two-dimensional square lattice. We then examine the conjecture on some simpler families of graphs in which we can provably refute it. Early experiments \cite{GM} already identified a discrepancy, at least in dimensions 4 and higher, but later work focused on dimension 2 and missed this discrepancy (it is very small). Some recent papers (e.g., \cite{BM}) restrict their study to stochastic sandpiles because deterministic sandpiles belong to a different universality class, but there remains a widespread belief in the DMVZ paradigm for both deterministic and stochastic sandpiles \cite{VD,CVSD}. Despite our contrary findings, we believe that the central idea of the DMVZ paradigm is a good one: the dynamics of a driven dissipative system should in some way reflect the dynamics of the corresponding conservative system. Our results point to a somewhat different relationship than that posited in the DMVZ series of papers: the driven dissipative model exhibits a second-order phase transition at the threshold density of the conservative model. 
Bak, Tang, and Wiesenfeld~\cite{BTW} introduced the abelian sandpile as a model of self-organized criticality; for mathematical background, see~\cite{frank}. The model begins with a collection of particles on the vertices of a finite graph. A vertex having at least as many particles as its degree \emph{topples\/} by sending one particle along each incident edge. A subset of the vertices are distinguished as sinks: they absorb particles but never topple. A single time step consists of adding one particle at a random site, and then performing topplings until each non-sink vertex has fewer particles than its degree. The order of topplings does not affect the outcome~\cite{dhar}. The set of topplings caused by addition of a particle is called an avalanche. Avalanches can be decomposed into a sequence of ``waves'' so that each site topples at most once during each wave. Over time, sandpiles evolve toward a stationary state in which the waves exhibit power-law statistics \cite{KLGP} (though the full avalanches seem to exhibit multifractal behavior \cite{MST,KMS}). Power-law behavior is a hallmark of criticality, and since the stationary state is reached apparently without tuning of a parameter, the model is said to be \emph{self-organized critical}. To explain how the sandpile model self-organizes to reach the critical state, Dickman \textit{et al.}~\cite{absorbing1, absorbing3} introduced an argument which soon became widely accepted: see, for example, \cite[Ch.~15.4.5]{sornette} and \cite{quant,feyredig,RS}. Despite the apparent lack of a free parameter, they argued, the dynamics implicitly involve the tuning of a parameter to a value where a phase transition takes place. The phase transition is between an active state, where topplings take place, and a quiescent ``absorbing'' state. The parameter is the \emph{density}, the average number of particles per site. When the system is quiescent, addition of new particles increases the density. 
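To make the toppling rule concrete, here is a minimal illustrative implementation (our own sketch, not code from any of the cited papers) of the driven dissipative model's stabilization step on an $n\times n$ grid, where particles sent off the boundary are absorbed by the sink:

```python
def stabilize(grid):
    """Topple every unstable site of an n x n open grid until each site
    holds fewer than 4 particles; particles pushed off the boundary are
    lost to the sink.  Returns the stable grid and the toppling count."""
    n = len(grid)
    topplings = 0
    # start from all initially unstable sites
    unstable = [(i, j) for i in range(n) for j in range(n) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue          # became stable since it was queued
        grid[i][j] -= 4       # topple: send one particle along each edge
        topplings += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:   # off-grid particles vanish
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
        if grid[i][j] >= 4:   # may need to topple again
            unstable.append((i, j))
    return grid, topplings
```

Because the model is abelian, the final configuration and the number of topplings at each site do not depend on the order in which unstable sites are popped from the stack.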
When the system is active, particles are lost to the sinks via toppling, decreasing the density. The dynamical rule ``add a particle when all activity has died out'' ensures that these two density changing mechanisms balance one another out, driving the system to the threshold of instability. To explore this idea, DMVZ introduced the \emph{fixed-energy sandpile\/} model (FES), which involves an explicit free parameter $\zeta$, the density of particles. On a graph with $N$ vertices, the system starts with $\zeta N$ particles at vertices chosen independently and uniformly at random. Unlike the driven dissipative sandpile described above, there are no sinks and no addition of particles. Subsequently the system evolves through toppling of unstable sites. Usually the parallel toppling order is chosen: at each time step, all unstable sites topple simultaneously. Toppling may persist forever, or it may stop after some finite time. In the latter case, we say that the system \emph{stabilizes}; in the terminology of DMVZ, it reaches an ``absorbing state.'' A common choice of underlying graph for FES is the $n\times n$ square grid with periodic boundary conditions. It is believed, and supported by simulations \cite{stairs}, that there is a \emph{threshold density\/} $\zeta_c$, such that for $\zeta<\zeta_c$, the system stabilizes with probability tending to~$1$ as $n \to \infty$; and for $\zeta>\zeta_c$, with probability tending to~$1$ the system does not stabilize. \section{The Density Conjecture} For the driven dissipative sandpile on the $n\times n$ square grid with sinks at the boundary, as $n \to \infty$ the stationary measure has an infinite-volume limit \cite{AJ}, which is a measure on sandpiles on the infinite grid $\Z^2$. 
One gets the same limiting measure whether the grid has periodic or open boundary conditions, and whether there is one sink vertex or the whole boundary serves as a sink \cite{AJ} (see also \cite{pemantle} for the corresponding result on random spanning trees). The statistical properties of this limiting measure have been much studied \cite{MD,priezzhevheights,JPR}. Grassberger conjectured that the expected number of particles at a fixed site is $17/8$, and it is now known to be $17/8\pm 10^{-12}$ \cite{JPR}. We call this value the \emph{stationary density\/} $\zeta_s$ of $\Z^2$. DMVZ believed that the combination of driving and dissipation in the classical abelian sandpile model should push it toward the threshold density $\zeta_c$ of the fixed-energy sandpile. This leads to a specific testable prediction, which we call the Density Conjecture. \vspace{2pt} \noindent \textbf{Density Conjecture \cite{absorbing4}.} On the square grid, $\zeta_c = 17/8$. More generally, $\zeta_c=\zeta_s$. \vspace{2pt} Vespignani \textit{et al.}~\cite{absorbing4} write of FES on the square grid, ``the system turns out to be critical only for a particular value of the energy density equal to that of the stationary, slowly driven sandpile.'' They add that the threshold density $\zeta_c$ of the fixed energy sandpile is ``the only possible stationary value for the energy density'' of the driven dissipative model. In simulations they find $\zeta_c = 2.1250(5)$, adding in a footnote ``It is likely that, in fact, 17/8 is the exact result.'' Other simulations to estimate $\zeta_c$ also found the value very close to $17/8$ \cite{absorbing1,absorbing2}. Our goal in the present paper is to demonstrate that the density conjecture is more problematic than it first appears. Table~\ref{table:Z2} presents data from large-scale simulations indicating that $\zeta_c(\Z^2)$ is $2.125288$ to six decimals; close to but not exactly equal to $17/8$. 
In each trial, we added particles one at a time at uniformly random sites of the $n\times n$ torus. After each addition, we performed topplings until either all sites were stable, or every site toppled at least once. For deterministic sandpiles on a connected graph, if every site topples at least once, the system will never stabilize \cite{tardos,FMR,FLP}. We recorded $m/n^2$ as an empirical estimate of the threshold density $\zeta_c(\Z_n^2)$, where~$m$ was the maximum number of particles for which the system stabilized. We averaged these empirical estimates over many independent trials. We used a random number generator based on the Advanced Encryption Standard (AES-256), which has been found to exhibit excellent statistical properties \cite{HW:AES,TestU01}. Our simulations were conducted on a High Performance Computing (HPC) cluster of computers. \begin{table} \begin{center} \parbox{\columnwidth}{ \scriptsize \begin{minipage}[b]{1.75in} \begin{ruledtabular} \begin{tabular}{rcc} $n$ & trials & estimate of $\zeta_c(\Z_n^2)$ \\ \hline $64$ & $2^{28}$ & $2.1249561 \pm 4\times 10^{-7}$\\ $128$ & $2^{26}$ & $2.1251851 \pm 4\times 10^{-7}$\\ $256$ & $2^{24}$ & $2.1252572 \pm 4\times 10^{-7}$\\ $512$ & $2^{22}$ & $2.1252786 \pm 4\times 10^{-7}$\\ $1024$ & $2^{20}$ & $2.1252853 \pm 4\times 10^{-7}$\\ $2048$ & $2^{18}$ & $2.1252876 \pm 4\times 10^{-7}$\\ $4096$ & $2^{16}$ & $2.1252877 \pm 4\times 10^{-7}$\\ $8192$ & $2^{14}$ & $2.1252880 \pm 4\times 10^{-7}$\\ $16384$ & $2^{12}$ & $2.1252877 \pm 4\times 10^{-7}$\\ \end{tabular} \end{ruledtabular} \end{minipage} \hfill \psfrag{n}[cr][cr]{$n$} \psfrag{zc(Zn)}[bl][cl]{$\zeta_c(\Z_n^2)$} \psfrag{2.125288}[Bl][Bl]{$2.125288$} \psfrag{2.125}[Bl][Bl]{$2.125000000000$} \includegraphics[width=0.45\columnwidth]{grid-fit.eps} } \end{center} \caption{ Fixed-energy sandpile simulations on $n\times n$ tori~$\Z_n^2$.
The third column gives our empirical estimate of the threshold density~$\zeta_c(\Z_n^2)$ for $\Z_n^2$. The standard deviation in each of our estimates of $\zeta_c(\Z_n^2)$ is $4\times 10^{-7}$. To six decimals, the values of $\zeta_c(\Z_{2048}^2),\dots,\zeta_c(\Z_{16384}^2)$ are all the same. The data from $n=64$ to $n=16384$ are well approximated by $\zeta_c(\Z_n^2) = 2.1252881\pm 3\times 10^{-7} - (0.390\pm 0.001) n^{-1.7}$, as shown in the graph. (The error bars are too small to be visible, so the data are shown as points.) The rapid convergence is due in part to periodic boundary conditions. We conclude that the asymptotic threshold density $\zeta_c(\Z^2)$ is $2.125288$ to six decimals. In contrast, the stationary density $\zeta_s(\Z^2)$ is $2.125000000000$ to twelve decimals. } \label{table:Z2} \end{table} \section{Phase transition at \texorpdfstring{$\zeta_c$}{threshold}} We consider the density conjecture on several other families of graphs, including some for which we can determine the exact values~$\zeta_c$ and~$\zeta_s$ analytically. Dhar \cite{dhar} defined recurrent sandpile configurations and showed that they form an abelian group. A consequence of his result is that the stationary measure for the driven dissipative sandpile on a finite graph $G$ with sinks is the uniform measure on recurrent configurations. The \emph{stationary density\/} $\zeta_s(G)$ is the expected total number of particles in a uniform random recurrent configuration, divided by the number of non-sink vertices in $G$. The threshold density~$\zeta_c$ and stationary density~$\zeta_s$ for different graphs are summarized in Table~\ref{table:summary}. The only graph on which the two densities are known to be equal is $\Z$ \cite{quant,feyredig,FMR}. On all other graphs we examined, with the possible exception of the $3$-regular Cayley tree, it appears that $\zeta_c \neq \zeta_s$.
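The trial protocol behind Table~\ref{table:Z2} can be sketched in a few lines. This is an illustrative reimplementation on a small torus, not the code used for the table; the function names are ours. It exploits the cited fact that once every site has toppled at least once, the system will never stabilize.

```python
import random

def stabilize(chips, n):
    """Parallel toppling on the n-by-n torus. Returns True if the
    configuration stabilizes; returns False as soon as every site has
    toppled at least once (after which it provably never stabilizes)."""
    toppled = [False] * (n * n)
    while True:
        unstable = [v for v in range(n * n) if chips[v] >= 4]
        if not unstable:
            return True
        if all(toppled):
            return False
        for v in unstable:
            chips[v] -= 4  # send one particle to each of the 4 neighbors
            toppled[v] = True
            i, j = divmod(v, n)
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                chips[(ni % n) * n + (nj % n)] += 1

def threshold_trial(n, rng):
    """One trial: add particles at uniformly random sites until
    stabilization fails; return the empirical density m / n^2, where m is
    the maximum number of particles for which the system stabilized."""
    chips = [0] * (n * n)
    m = 0
    while True:
        chips[rng.randrange(n * n)] += 1
        if not stabilize(chips, n):
            return m / (n * n)
        m += 1
```

Averaging `threshold_trial` over many independent trials, and over a ladder of sizes $n$, gives the estimates reported in the table; the production runs of course use much larger $n$ and a cryptographic-quality random number generator.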
Each row of Table~\ref{table:summary} represents an infinite family of graphs~$G_n$ indexed by an integer~$n \geq 1$. For example, for~$\Z^2$ we take~$G_n$ to be the~$n \times n$ square grid, and for the regular trees we take~$G_n$ to be a finite tree of depth~$n$. As sinks in~$G_n$ we take the set of boundary sites~$G_n \setminus G_{n-1}$ (note that on trees this corresponds to wired boundary conditions). The value of~$\zeta_s$ reported is $\lim_{n \to \infty} \zeta_s(G_n)$. \begin{table}[t!] \begin{center} \begin{ruledtabular} \begin{tabular}{ccc} graph & $\zeta_s$ & $\zeta_c$ \\ \hline $\Z$ & {\bf 1} & {\bf 1} \\ $\Z^2$ & $\mathbf{17/8}=2.125$ & $2.125288\ldots$ \\ bracelet & $\mathbf{5/2} = 2.5$ & $\mathbf{2.496608\ldots}$ \\ flower graph & $\mathbf{5/3} = 1.666667\ldots$ & $\mathbf{1.668898\ldots}$ \\ ladder graph & $\mathbf{\frac74 - \frac{\sqrt{3}}{12}} = 1.605662\ldots$ & $1.6082\ldots$ \\ complete graph & $\mathbf{1/2}\times n + O(\sqrt{n})$ & $\mathbf{1}\times n-O(\sqrt{n \log n})$ \\ 3-regular tree & $\mathbf{3/2}$ & 1.50000\dots \\ 4-regular tree & $\mathbf{2}$ & 2.00041\dots \\ 5-regular tree & $\mathbf{5/2}$ & $2.51167\dots$ \\ \end{tabular} \end{ruledtabular} \end{center} \caption{Stationary and threshold densities for different graphs. Exact values are in bold. } \label{table:summary} \end{table} The exact values of $\zeta_s$ for regular trees (Bethe lattices) were calculated by Dhar and Majumdar \cite{DM}. The corresponding values of $\zeta_c$ we report come from simulations \cite{FLW:approach}. We derive or simulate the values of $\zeta_s$ and $\zeta_c$ for the bracelet, flower, ladder, and complete graphs in \cite{FLW:approach}. As an example, consider the \emph{bracelet graph\/} $B_n$, which is a cycle of $n$ vertices, with each edge doubled (see Figure~\ref{fig:graphs}). A site topples by sending out $4$ particles: $2$ to each of its two neighbors. One site serves as the sink. 
To compare the densities $\zeta_c$ and $\zeta_s$, we consider the driven dissipative sandpile before it reaches stationarity, by running it for time $\lambda$. More precisely, we place $\lambda n$ particles uniformly at random, stabilize the resulting sandpile, and let $\rho_n(\lambda)$ denote the expected density of the resulting stable configuration. In the long version of this paper \cite{FLW:approach} we prove \begin{theorem}[\cite{FLW:approach}] \label{braceletmain} For the bracelet graph $B_n$, in the limit as $n\to \infty$, \begin{enumerate} \item The threshold density $\zeta_c$ is the unique positive root of $\zeta = \frac52 - \frac12 e^{-2\zeta}$ (numerically, $\zeta_c = 2.496608$). \item The stationary density $\zeta_s$ is $5/2$. \item The final density $\rho_n(\lambda)$, as a function of initial density $\lambda$, converges pointwise to a limit $\rho(\lambda)$, where \[ \rho(\lambda) = \min\left (\lambda, \frac{5-e^{-2\lambda}}{2}\right) = \begin{cases} \lambda, & \lambda \leq \zeta_c \\ \frac{5-e^{-2\lambda}}{2}, & \lambda>\zeta_c. \end{cases} \] \end{enumerate} \end{theorem} Part~3 of this theorem shows that despite the inequality $\zeta_s \neq \zeta_c$, a connection remains between the driven dissipative dynamics used to define $\zeta_s$ and the conservative dynamics used to define $\zeta_c$: since the derivative $\rho'(\lambda)$ is discontinuous at $\lambda=\zeta_c$, the driven sandpile undergoes a second-order phase transition at density $\zeta_c$. For $\lambda<\zeta_c$, the driven sandpile loses very few particles to the sink, and the final density equals the initial density $\lambda$; for $\lambda > \zeta_c$ it loses a macroscopic proportion of particles to the sink, so the final density is strictly smaller than~$\lambda$. As Figure~\ref{fig:density} shows, the sandpile continues to evolve as $\lambda$ increases beyond~$\zeta_c$; in particular, its density keeps increasing. 
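As a quick numerical check of part 1 of the theorem (our own illustration, not part of the proof), the fixed point of $\zeta = \frac52 - \frac12 e^{-2\zeta}$ can be located by bisection, since $f(\zeta) = \frac52 - \frac12 e^{-2\zeta} - \zeta$ is strictly decreasing for $\zeta > 0$ with $f(0) > 0 > f(5/2)$:

```python
import math

def bracelet_threshold(tol=1e-10):
    """Bisection for the unique positive root of z = 5/2 - exp(-2z)/2."""
    f = lambda z: 2.5 - 0.5 * math.exp(-2.0 * z) - z
    lo, hi = 0.0, 2.5  # f(0) = 2 > 0 and f(2.5) = -exp(-5)/2 < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The root is $\zeta_c \approx 2.496608$, matching the numerical value quoted in the theorem.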
\begin{figure}[t] \begin{center} \includegraphics[width=0.3\columnwidth]{grid.eps} \hspace{0.1\columnwidth} \includegraphics[width=0.3\columnwidth]{bracelet.eps} \end{center} \begin{center} \includegraphics[width=0.3\columnwidth]{flower.eps} \hspace{0.1\columnwidth} \includegraphics[width=0.3\columnwidth]{complete.eps} \end{center} \begin{center} \includegraphics[width=\columnwidth]{bethe.eps} \end{center} \begin{center} \includegraphics[width=0.8\columnwidth]{ladder.eps} \end{center} \caption{The graphs on which we compare $\zeta_s$ and $\zeta_c$: the grid (upper left), bracelet graph (upper right), flower graph (2$^{\text{nd}}$ row left), complete graph (2$^{\text{nd}}$ row right), Cayley trees (Bethe lattices) of degree $d=3,4,5$ (3$^{\text{rd}}$ row), and ladder graph (bottom).} \label{fig:graphs} \end{figure} We are also able to prove that a similar phase transition occurs on the \emph{flower graph}, shown in Figure~\ref{fig:graphs}. Interestingly, the final density $\rho(\lambda)$ for the flower graph is a \emph{decreasing\/} function of $\lambda > \zeta_c$ (Figure~\ref{fig:density} bottom). Our proofs make use of local toppling invariants on these graphs. On the bracelet graph, since particles always travel in pairs, the parity of the number of particles on a single vertex is conserved. On the flower graph, the difference modulo~$3$ of the number of particles on the two vertices in a single ``petal'' is conserved. 
\begin{figure} \psfrag{r}{$\rho$} \psfrag{l}{$\lambda$} \psfrag{rc}[Bc][Bc][0.8]{$\zeta_c$} \psfrag{rs=5/2}[Bc][cc][0.8]{\hspace{30pt} $\zeta_s=5/2$} \psfrag{curve}[Bc][Bc][0.8]{\hspace{12pt}$\frac{5-e^{-2\lambda}}{2}$} \includegraphics[width=\columnwidth]{bracelet-density.eps} \\[36pt] \psfrag{rs=5/3}[cc][Bc][0.8]{\hspace{20pt}$\zeta_s=5/3$} \psfrag{curve}[Bc][tc][0.8]{$\frac{5+e^{-3\lambda}}{3}$} \includegraphics[width=\columnwidth]{flower-density.eps} \caption{Density $\rho(\lambda)$ of the final stable configuration as a function of initial density $\lambda$ on the bracelet graph (top row) and flower graph (bottom row) as the graph size tends to infinity. A phase transition occurs at $\lambda=\zeta_c$. At first glance (left panels) it appears that the driven sandpile reaches its stationary density $\zeta_s$ at this point, but closer inspection (right panels) reveals that the final density $\rho(\lambda)$ continues to change as $\lambda$ increases beyond~$\zeta_c$. These graphs are exact.} \label{fig:density} \end{figure} One might guess that the failure of the density conjecture is due only to the existence of local toppling invariants, or else to boundary effects from the sinks. The ladder graph (Figure \ref{fig:graphs}) has no local toppling invariants; moreover, it is essentially one-dimensional, so the bulk of the graph is well insulated from the sinks at the boundary. Nevertheless, we find \cite{FLW:approach} a small but undeniable difference between $\zeta_s$ and $\zeta_c$ on the ladder graph: from the work of J\'{a}rai and Lyons~\cite{JL} and the Parry formula~\cite{parry}, we compute $\zeta_s = \frac74 - \frac{\sqrt{3}}{12}$ (numerically, $\zeta_s = 1.6057$), while our large-scale simulations ($2^{12}$ trials with $n=2^{18}$) indicate that $\zeta_c$ is about $1.6082$. \section{Complete Graph} The minimum density of a recurrent configuration on the complete graph $K_n$ is $\frac{n-2}{2}$, and the maximum density is $n-2$. Given these bounds, we show that $\zeta_s$ and $\zeta_c$ are nearly as far apart as they can be. Let $G$ be a graph on $n$ vertices with $m$ edges and sink of degree $d$. Using a theorem of Merino Lopez \cite{MerinoLopez}, we can express the stationary density $\zeta_s(G)$ in terms of the Tutte polynomial $T(x,y)$ of $G$: \[ \zeta_s(G) = \frac1n \left( m-d + \frac{\frac{\partial T}{\partial y}(1,1)}{T(1,1)} \right) = \frac1n \left( m-d + \frac{u(G)}{\kappa(G)} \right). \] Here~$u(G)$ is the number of spanning unicyclic subgraphs of~$G$, and $\kappa(G)$ is the number of spanning trees of~$G$. In particular, for the complete graph $K_n$ it follows (see~\cite{wright}) that \[ \zeta_s(K_n) = \frac{n}{2} + \sqrt{\frac{\pi n}{8}} + o(n^{1/2}). \] On the other hand, $\zeta_c(K_n) \geq n - 2 n^{1/2} \log n$: indeed, at this density, with high probability no sites in the initial configuration are unstable. \section{Conclusions} The conclusion of \cite{absorbing5} that ``FES are shown to exhibit an absorbing state transition with critical properties coinciding with those of the corresponding sandpile model'' should be re-evaluated. One hope of the DMVZ paradigm was that critical features of the driven dissipative model, such as the exponents governing the distribution of avalanche sizes and the decay of correlations, might be more easily studied in the FES by examining the scaling behavior of these observables as $\zeta \uparrow \zeta_c$. However, the failure of the density conjecture, and the continued evolution of driven dissipative sandpiles beyond $\zeta_c$, suggest that the two models may not share the same critical features.
In response to this article, several researchers have suggested to us that perhaps the density conjecture holds for stochastic sandpiles even if not for deterministic ones. This hypothesis deserves some scrutiny. For the driven dissipative sandpile, there is a transition point at the threshold density of the FES, beyond which a macroscopic amount of sand begins to dissipate. The continued evolution of the sandpile beyond $\zeta_c$ shows that driven sandpiles have (at least) a one-parameter family of distinct critical states. While the stationary state has rightly been the object of intense study, we suggest that these additional critical states deserve further attention. \\[-10pt]
\section{Introduction} Knowing the topology of the underlying network can provide several advantages to the communicating hosts. For example, the topology can be used to improve the throughput and robustness of the network \cite{ron, jones}, and it can be a necessary part of identifying bottlenecks and critical links in the network \cite{survivability}. It can also be used to monitor the network or to simply get a picture of the underlying system. However, the owners of the network often keep the topology information hidden due to privacy and security concerns \cite{sarac}. This has led to a significant amount of research on topology discovery. We develop a new method that can be used to identify general network topologies. This method only requires the interference pattern of the paths in the network, which can be inferred from the data available at the end nodes. Prior work on topology discovery can be divided into two main categories: algorithms that require cooperation from the internal nodes and algorithms that do not. Many algorithms for topology discovery, usually designed for the purpose of mapping the Internet, use ICMP-based tools such as traceroute \cite{rocketfule, crovella, sarac}. These methods require some level of cooperation from the network providers. The other methods, which fall under the category of network tomography \cite{vardi,survey}, use data that can be measured directly at the end nodes. Our method falls under this category, as we do not seek any information from the internal nodes. In the network tomography literature, significant attention has been given to the discovery of tree networks. Papers such as \cite{nowak2, towsley, yang} use probing mechanisms to infer single-source multiple-destination trees. There is also some work on combining these single-source trees to form a multi-source multi-destination network \cite{nowak3, sattari}.
In \cite{singh}, the authors provide a method for identifying minimal trees with multiple sources and multiple destinations by using distance measurements. In \cite{kelner}, the authors develop an algorithm called RGD1 that attempts to discover a general network topology. It uses sets of four nodes that share a link, called quatrets, and uses them to build an approximation of the entire network. The discovery of the quatrets and the placement of the nodes in the topology require the shortest path distance between the nodes, which is inferred using packet delay. The RGD1 algorithm is the closest to ours in terms of its objective; hence, we will compare its performance against ours via simulation. In order to obtain the interference pattern, we provide a simple method based on linear regression. This method uses the number of in-flight packets in the paths and the delay experienced by the packets to determine whether a given pair of paths interfere with each other. Using the resulting interference information, we formulate the topology inference problem as an integer program. We develop polynomial-time algorithms to solve it optimally for networks with special topologies, namely tree or ring topologies. Both of these algorithms obtain the minimal version of the network, even when the original network is not minimal. We also develop a heuristic that attempts to recover any general topology in polynomial time. The main contributions of this paper can be summarized as follows: \begin{itemize} \item We use the interference pattern of the paths to formulate an integer linear program (ILP) that obtains the network with the fewest links that supports the given interference pattern. The solution provides a new method to discover a general network topology. \item We provide an upper bound, a lower bound and a sufficient condition for optimality for the ILP.
\item We design two polynomial-time algorithms to recover tree and ring networks and show that if the network is in fact a tree or a ring, the algorithms solve the ILP optimally. \item Building upon the tree and ring algorithms, we develop a polynomial-time heuristic to identify general networks. Using simulations, we show that this method outperforms the RGD1 algorithm of \cite{kelner}. \end{itemize} \section{Model} \subsection{Network Model} We model the network as a graph $G=(N,E),$ where $N$ is the set of nodes and $E$ is the set of edges. We assume that all the links in the network are bidirectional and have unit capacity. Each bidirectional link $\{i,j\}$ is composed of two directed links $(i,j)$ and $(j,i)$. The network has two types of nodes: the overlay nodes, which represent hosts and can be controlled, and the underlay nodes, which represent routers that are uncontrollable and do not provide any direct feedback. We represent the set of overlay nodes by $\mathcal{O}$ and the set of underlay nodes by $U$, and $N = \mathcal{O} \cup U$. We further assume that each overlay node is connected to only one underlay node. Other than this, we do not have any knowledge of the structure of the network. The main goal of this paper is to recover the graph $G$ from data measured at the overlay nodes. All the overlay nodes are connected to each other by tunnels, which are paths that go through the underlay nodes. A tunnel $l = (l_1, l_2, ..., l_{|l|})$ consists of overlay nodes $l_1$ and $l_{|l|}$, while the rest of the nodes are underlay. Since $l_1$ and $l_{|l|}$ are each connected to only one underlay node, we will often refer to node $l_2$ as the parent of node $l_1$, $p(l_1)$, and node $l_{|l|-1}$ as the parent of node $l_{|l|}$, $p(l_{|l|})$. There are a total of $L = |\mathcal{O}| \times (|\mathcal{O}|-1)$ tunnels in the network. We also assume that each node $i \in N$ maintains a queue for each of its outgoing links $(i,j) \in E$.
Packets from all the tunnels that use the link $(i,j)$ get enqueued in this queue when they reach node $i$ and are served on a first-come-first-served basis. \subsection{The Interference Matrix $\mathcal{F}$} Our algorithm for recovering the graph $G$ is based on whether or not any two tunnels between the overlay nodes intersect with each other. In order to identify this, we propose a simple method based on linear regression. We note that depending on the measurements available, other methods such as the ones from \cite{nowak,kelner} can also be used to derive this information. Let $d_l(t)$ represent the delay experienced by a packet that enters tunnel $l$ at time $t$. Tunnels in the network can intersect with each other; hence, the path traversed by a tunnel can carry packets belonging to itself and packets from other tunnels. Let $h_l(t)$ represent the number of packets that belong to tunnel $l$ that are still in the tunnel at time $t$. We will refer to these packets as the packets in flight of tunnel $l$. The delay experienced by a packet entering tunnel $l$ at time $t$ is affected by the number of packets in that tunnel and in other tunnels that intersect with it. Considering only a pair of tunnels $k$ and $l$, we can model the relationship between the packets in flight and the delay as a linear function: $$d_l(t) = h_l(t) + \alpha_{kl} h_k(t) + \eta_l.$$ Here $\alpha_{kl}$ represents the fraction of packets of tunnel $k$ that are in the path traversed by tunnel $l$, and $\eta_l$ is a random perturbation (noise). By injecting randomly generated traffic into each pair of tunnels and measuring the packet delay and packets in flight, it is possible to determine if two tunnels intersect. In particular, using linear regression it is possible to calculate the parameter $\alpha_{kl}$ that minimizes the noise for each pair of tunnels $(k,l)$.
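Under the linear model above, $\alpha_{kl}$ is simply the least-squares slope of the residual $d_l(t) - h_l(t)$ against $h_k(t)$, with an intercept absorbing $\eta_l$. The sketch below uses synthetic traces of our own invention (a real implementation would use the measured delays and in-flight counts):

```python
import random

def estimate_alpha(d, h_l, h_k):
    """Least-squares slope of the residual (d - h_l) against h_k,
    with an intercept for the noise term eta_l."""
    r = [di - hi for di, hi in zip(d, h_l)]
    mean_r = sum(r) / len(r)
    mean_k = sum(h_k) / len(h_k)
    cov = sum((k - mean_k) * (ri - mean_r) for k, ri in zip(h_k, r))
    var = sum((k - mean_k) ** 2 for k in h_k)
    return cov / var

# Synthetic traces: in-flight packet counts for tunnels l and k.
rng = random.Random(0)
h_l = [rng.randrange(1, 10) for _ in range(300)]
h_k = [rng.randrange(1, 10) for _ in range(300)]
noise = [rng.uniform(-0.2, 0.2) for _ in range(300)]
d_shared = [a + b + e for a, b, e in zip(h_l, h_k, noise)]  # tunnels intersect
d_disjoint = [a + e for a, e in zip(h_l, noise)]            # tunnels disjoint
```

On the shared trace the estimated $\alpha_{kl}$ comes out near 1, and on the disjoint trace near 0, which is exactly the separation used to threshold the entries of $\mathcal{F}$.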
When tunnels $l$ and $k$ do not intersect, the number of packets in tunnel $k$ does not affect the delay of packets entering tunnel $l$; hence, $\alpha_{kl} \approx 0.$ Otherwise, $\alpha_{kl}$ is closer to 1. We use these $\alpha_{kl}$ values to create the $L \times L$ binary interference matrix $\mathcal{F}$. If $\alpha_{kl} \approx 0$ then $\mathcal{F}_{kl} = 0$, and $\mathcal{F}_{kl} = 1$ otherwise. Moreover, $\mathcal{F}$ is symmetric, implying $\mathcal{F}_{kl} = \mathcal{F}_{lk}$. We will use the graph representation of $\mathcal{F}$ in some of our results. We refer to such a graph as the interference graph of the network, $G_{\mathcal{F}}(N_{\mathcal{F}}, E_{\mathcal{F}})$. This graph is simply the graph formed by using $\mathcal{F}$ as an adjacency matrix, where $N_{\mathcal{F}}$ consists of the tunnels and an edge exists between tunnels that interfere with each other. An example of an interference matrix and its corresponding graph is given in Figure \ref{fig: minimality}. \subsection{Minimal topology} \label{sec: minimality} There exist many networks that produce the same interference matrix $\mathcal{F}$; hence, these networks are indistinguishable by our method. For example, each tunnel in the two networks shown in Figure \ref{fig: minimality} faces the same interference: the tunnel $(1,...,2)$ only interferes with tunnel $(1,...,3)$ in both networks, and so on. Hence, they produce the same $\mathcal{F}$ matrix. We are interested in the smallest network, in terms of the number of links, that produces the given $\mathcal{F}$ matrix. We will call such a topology the minimal network topology.
\begin{figure}[h] \centering \subfigure[Not minimal]{ \centering \includegraphics[scale=.6]{figures/not_minimal.pdf} \label{fig: not_minimal} } \subfigure[Minimal]{ \centering \includegraphics[scale=.6]{figures/minimal.pdf} \label{fig: minimal} } \subfigure[The interference matrix $\mathcal{F}$.]{ \centering \includegraphics[scale=.7]{figures/F.pdf} \label{fig: F} } \subfigure[The interference graph $G_\mathcal{F}$. Each node $(i,j)$ represents a tunnel $(i,...,j)$.]{ \centering \includegraphics[scale=.7]{figures/Fgraph.pdf} \label{fig: Fgraph} } \caption{The two topologies in Figures \ref{fig: not_minimal} and \ref{fig: minimal} produce the same $\mathcal{F}$ matrix. The white nodes are overlay and the gray nodes are underlay, and the network uses shortest path routing.} \label{fig: minimality} \end{figure} A necessary condition for a network to be minimal was identified in \cite{kelner}. Specifically, all underlay nodes must have at least three neighbors. If an underlay node has only one neighbor, we can simply remove it to obtain a smaller network that is indistinguishable from the original network using only the measurements available at the overlay. If an underlay node has two neighbors, we can connect its two neighbors and remove the node in order to obtain a smaller network with the same properties. We note that this condition is not sufficient for minimality in general. For example, in Figure \ref{fig: not_minimal}, all the underlay nodes have three neighbors but the topology is not minimal. We will provide a sufficient condition for minimality, and show that the necessary condition from \cite{kelner} is also sufficient for specific topologies, namely trees and rings. In this paper we assume that the $\mathcal{F}$ matrix for a network $G(N,E)$ is given (i.e., obtained via measurements, as described earlier) and focus on obtaining the minimal network $\hat{G}^*(\hat{N}^*, \hat{E}^*)$ that supports this interference pattern.
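The two reduction rules above (delete degree-1 underlay nodes, splice out degree-2 underlay nodes) can be sketched as a simple graph transformation. This is our own illustration, assuming a simple graph stored as an adjacency dictionary of neighbor sets:

```python
def simplify(adj, overlay):
    """Repeatedly delete underlay nodes of degree 1 and splice out
    underlay nodes of degree 2 (connecting their two neighbors).
    adj: dict node -> set of neighbors (undirected, simple graph).
    overlay: set of overlay nodes, which are never removed."""
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in overlay or v not in adj:
                continue
            nbrs = adj[v]
            if len(nbrs) == 1:            # dangling underlay node: delete
                (u,) = nbrs
                adj[u].discard(v)
                del adj[v]
                changed = True
            elif len(nbrs) == 2:          # pass-through underlay node: splice
                u, w = nbrs
                adj[u].discard(v)
                adj[w].discard(v)
                adj[u].add(w)
                adj[w].add(u)
                del adj[v]
                changed = True
    return adj
```

Running this until no rule applies enforces the necessary condition that every remaining underlay node has at least three neighbors; as noted above, the result is not guaranteed to be minimal in general.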
\section{Integer Programming Formulation} We formulate the problem of finding the minimal network that supports the given path interference pattern as an integer linear program (ILP). Although solving this ILP is computationally intractable for large networks, studying this formulation will provide us with useful insights into the problem. Also, when the network is small, we are able to solve it optimally. \subsection{Integer program} Let us consider a network with $|\hat{N}|$ nodes. Nodes $1,..., |\mathcal{O}|$ are overlay nodes and nodes $|\mathcal{O}| + 1, ..., |\hat{N}|$ are underlay nodes. Note that the set $\mathcal{O}$ is known a priori. Let $x_{ij}^l \in \{0,1\}$ represent whether link $(i,j)$ is used by tunnel $l$, for $1\le i,j \le |\hat{N}|$, $1 \le l \le L$, and $x^l_{ii} = 0 \, \forall l$. For notational simplicity, we define another variable $x_{ij}$ which represents whether the link $\{i,j\}$ is used by any tunnel in either direction. Hence, \begin{equation} x_{ij} = \lor_{l}\left(x_{ij}^l \lor x_{ji}^l\right), \qquad\qquad \forall i,j \end{equation} Here ``$\lor$'' is the logical OR operator. Note that such logical constraints can easily be transformed into a set of linear (integer) constraints \cite{optimization_book}. The objective function can be written as $$\text{minimize} \sum_{ij} x_{ij}.$$ Our network model assumes that each overlay node is connected to exactly one underlay node. This can be enforced by using the following constraint: \begin{equation} \sum_j x_{ij} = 1, \qquad i = 1, ..., |\mathcal{O}|. \end{equation} Again, to simplify the notation, we define two new variables. Let $s(l,j) \in \{0,1\}$ represent whether tunnel $l$ begins at node $j$, and let $d(l,j)$ represent whether tunnel $l$ ends at node $j$. These values are known a priori, so we can replace these variables with their respective values while formulating a specific problem. Now we can write the next set of constraints, which are essentially the flow conservation constraints.
These constraints guarantee that each tunnel has a set of connected links in the network, starting and ending at its respective overlay nodes. \begin{align} \sum_i x^l_{ij} + s(l,j) = \sum_i x^l_{ji} + d(l,j), \qquad &j = 1, ..., |\hat{N}|, \nonumber\\ & l = 1, ..., L \end{align} We can see that the flow conservation constraints above allow loops to be formed in the network. Unlike max-flow type problems, where loops can be removed in post-processing without harming feasibility, removing them in our case can change the interference pattern of the tunnels. Hence, we need to add constraints to avoid the formation of loops. Similar problems arise in the ILP formulation of the Travelling Salesman Problem (TSP). We use the technique originally proposed by Miller, Tucker and Zemlin in \cite{TSP} to resolve this issue in the TSP and add the following constraints: \begin{align} u^l_i - u^l_j + |\hat{N}| x^l_{ij} \le |\hat{N}| -1, \qquad \forall i\ne j, l = 1, ..., L. \end{align} Here, the variables $u^l_i \ge 0$ are used to assign an order to each node $i$ in each tunnel $l$. If $x^l_{ij} =1,$ then $u^l_j \ge u^l_i + 1,$ so the next node $j$ is assigned a higher value than node $i$. Otherwise, the constraint reduces to $u_i^l - u^l_j \le |\hat{N}| - 1,$ which is always satisfiable; this ensures that there are enough values to assign to all the nodes that the tunnel might pass through. Finally, we consider the interference constraints. For each tunnel pair $(k,l)$ we add a set of constraints depending on whether tunnels $k$ and $l$ interfere with each other. If tunnels $k$ and $l$ do not interfere, we have the following constraints: \begin{align} x^k_{ij} + x^l_{ij} \le 1, \qquad \forall i,j, \text{ and } k,l: \mathcal{F}_{kl} = 0. \end{align} This ensures that two tunnels that do not interfere with each other are never assigned to the same link. If $\mathcal{F}_{kl} = 1$, then tunnels $k$ and $l$ must appear together on at least one of the links.
We enforce this with the following constraints: \begin{align} \sum_{i,j} x^k_{ij} \land x^l_{ij} \ge 1, \qquad \forall k,l: \mathcal{F}_{kl} = 1. \end{align} Here ``$\land$'' is the logical AND operator, and these constraints can also be transformed into a set of linear (integer) constraints. The objective function along with the constraints (1) through (6) give the required ILP for identifying a minimal network. After solving the ILP, the graph can be recovered from the links $\{i,j\}$ for which $x_{ij} = 1$. A node that is not used by any of the tunnels can simply be removed from the recovered network. \subsection{Example} We consider a network where 6 underlay nodes are arranged to form a $3\times 2$ grid, and an overlay node is attached to each underlay node. The network uses shortest path routing. The $30 \times 30$ interference matrix $\mathcal{F}$ is generated by determining whether two paths intersect with each other. We formulate the ILP with $|\hat{N}| = 12$ and then solve it using the Gurobi solver \cite{gurobi}. \begin{figure}[h] \centering \subfigure[Original graph $G$.]{ \centering \includegraphics[scale=.7]{figures/grid32.pdf} \label{fig: original1} } \subfigure[Recovered graph $\hat{G}$.]{ \centering \includegraphics[scale=.7]{figures/grid32_recovered.pdf} \label{fig: recovered1} } \caption{Recovering a network topology by solving the ILP.} \label{fig: example1} \end{figure} The original and the recovered networks are shown in Figure~\ref{fig: example1}. The recovered network has fewer nodes and edges than the original network. Link $\{7,8\}$ in the original network is used only by tunnels $(1,8,7,4)$ and $(4,7,8,1)$ in different directions. Hence, there is no interference on this link, and it can be removed without changing the interference matrix. For the same reason, link $\{11,12\}$ can be removed to obtain the minimal network. Even after the removal of these links, we can see that the recovered network looks quite similar to the original.
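The role of the Miller-Tucker-Zemlin constraints (4) can be sanity-checked by brute force on a toy instance (our own illustration, not part of the formulation): a tunnel whose links form a directed cycle admits no valid node ordering $u$, while a simple path does.

```python
from itertools import product

def mtz_feasible(n_nodes, arcs):
    """Brute-force search for integer orders u_i in {0, ..., n-1} with
    u_i - u_j + n <= n - 1 (i.e. u_j >= u_i + 1) for every used arc (i, j).
    For a directed cycle of length c, summing the constraints around the
    cycle gives n*c <= (n-1)*c, a contradiction, so cycles are infeasible
    even over the reals; for a path, taking u as the visit order works."""
    n = n_nodes
    for u in product(range(n), repeat=n):
        if all(u[i] - u[j] + n <= n - 1 for (i, j) in arcs):
            return True
    return False
```

For example, the path `[(0, 1), (1, 2)]` is feasible (take $u = (0, 1, 2)$), while the cycle `[(0, 1), (1, 2), (2, 0)]` is not, which is exactly why constraints (4) rule out loops in the recovered tunnels.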
\subsection{Upper bound} We provide an upper bound on the solution of the ILP in the previous section by using the simple procedure given in Algorithm \ref{alg: feasible}. This algorithm produces a feasible solution to the ILP by assigning each pair of interfering tunnels to a new link in $\hat{G}$. This algorithm can be suboptimal because, in an optimal solution, many tunnels can interfere at the same link. \begin{algorithm}[h!] \caption{FeasibleGraph$(\mathcal{F}, \mathcal{O})$ for obtaining a feasible network $\hat{G}$} \label{alg: feasible} \flushleft{ \begin{enumerate} \item Create a graph $\hat{G}$ with $|E_\mathcal{F}|$ edges in a line. \item For each link $\{k,l\} \in E_\mathcal{F}$ assign tunnels $k$ and $l$ to a unique edge in $\hat{G}$. All the tunnels traverse the line graph in the same direction. \item Connect links in $\hat{G}$ that have the same tunnel, if they are not already connected, such that the tunnels form a loop-free path. This can require either a single link, or a node and two links; see the example in Figure \ref{fig: eg_greedy}. \item Add nodes $\hat{\mathcal{O}} = \{\hat{o}_1, \hat{o}_2, ..., \hat{o}_{|\mathcal{O}|}\}$ to $\hat{G}$. Each node $\hat{o}_i \in \hat{\mathcal{O}}$ corresponds to an overlay node $o_i \in \mathcal{O}.$ \item For each node $\hat{o}_i$ add a parent node $p(\hat{o}_i)$ and edge $\{\hat{o}_i, p(\hat{o}_i) \}$ to $\hat{G}$. \item For each tunnel $l$ that starts at $o_i$ assign tunnel $l$ to link $(\hat{o}_i, p(\hat{o}_i))$. \item For each tunnel $l$ that ends at $o_i$ assign tunnel $l$ to link $(p(\hat{o}_i), \hat{o}_i)$. \item Complete the tunnels by connecting $p(\hat{o}_i)$ to the partial tunnels formed in Step 3. \end{enumerate} } \end{algorithm} Algorithm \ref{alg: feasible} starts with a $\hat{G}$ that is a line graph with $|E_\mathcal{F}|$ edges, then maps each link in the interference graph $G_{\mathcal{F}}$ to a link in $\hat{G}$.
Each edge in $G_{\mathcal{F}}$ represents two tunnels that pass through the same edge in $G$, so if there is an edge between tunnels $k$ and $l$ in $G_{\mathcal{F}}$, then tunnels $k$ and $l$ are assigned to one of the links in $\hat{G}$. When all the interferences are assigned, it is likely that the same tunnel gets assigned to links that are not attached to each other. In such a case, new links are added to create complete tunnels. An example of this process (Steps 1-3) is given in Figure \ref{fig: eg_greedy}. At the end of Step 3, all the interference constraints are satisfied. Steps 4-8 add the overlay nodes and make sure that each overlay node is connected to a single underlay node. \begin{figure}[h] \centering \subfigure[Interference graph for tunnels $a$, $b$, $c$ and $d$.]{ \centering \includegraphics[scale=.6]{figures/interference_graph1.pdf} \label{fig: interference_greedy} } \subfigure[$\hat{G}$ after Step 2. Every pair of interfering tunnels are assigned to some link in the graph.]{ \centering \includegraphics[scale=.8]{figures/Ghat1.pdf} \label{fig: step2_greedy} } \subfigure[$\hat{G}$ after Step 3. Tunnels that are disconnected are connected using extra nodes and edges. ]{ \centering \includegraphics[scale=.8]{figures/Ghat2.pdf} \label{fig: step3_greedy} } \caption{Example of an execution of Steps 1 to 3 of Algorithm 1.} \label{fig: eg_greedy} \end{figure} We give the following lemma to show that Algorithm \ref{alg: feasible} produces a feasible solution to the ILP. Then Theorem \ref{thm: upper bound} establishes the upper bound on the number of links used by this algorithm. \begin{lemma} Algorithm \ref{alg: feasible} obtains a feasible solution to the ILP in Section 1. \end{lemma} \begin{proof} The proof of this lemma is given in Appendix \ref{app: lemma 1}. \end{proof} \begin{theorem} \label{thm: upper bound} The number of edges required for a feasible solution of the ILP, $|\hat{E}| \le |E_\mathcal{F}| + 2L|E_{\mathcal{F}}| + |O| + 2L$.
\end{theorem} \begin{proof} The proof of this theorem is given in Appendix \ref{app: theorem 1}. \end{proof} \subsection{Lower bound} We establish a lower bound on the number of edges in the minimal graph using the properties of the interference graph. In order to minimize the number of links, we want to assign as many interfering tunnels as possible to the same link. However, two tunnels cannot be assigned to the same link if they do not interfere with each other. This property is nicely abstracted by the cliques in the interference graph $G_\mathcal{F}$. The tunnels, represented by the nodes in $G_\mathcal{F}$, that are in the same clique interfere with each other. So we can assign all of them to the same link. A lower bound is given by the minimum number of cliques required to cover all the edges. For example, two cliques are needed to cover all the edges of the interference graph in Figure \ref{fig: interference_greedy}, so we need at least two links in $\hat{G}$ to represent all the interferences. In graph theory the smallest such set is known as the {\em minimum edge clique cover}\footnote{This is different from the minimum node clique cover, which is the smallest set of cliques required to cover all the nodes.}, and the size of such a set is known as the {\em intersection number} of the graph \cite{roberts}. Computing the minimum edge clique cover of a graph is known to be NP-hard, so it might not be useful for the purpose of comparing our solutions. However, in the next subsection we will use it to derive conditions under which a recovered graph achieves the lower bound and guarantees optimality. The following lemma presents the lower bound result in terms of the number of directed links required to have a feasible solution. Theorem \ref{thm: cliques bound} extends this result to the case with undirected links, which is the setup in this paper.
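For very small interference graphs the intersection number can be computed by brute force. The following sketch (exponential time, illustration only; the instance is a hypothetical interference graph, not one from the paper) enumerates cliques and covers:

```python
from itertools import combinations

def is_clique(adj, nodes):
    return all(v in adj[u] for u, v in combinations(nodes, 2))

def min_edge_clique_cover(adj):
    """Size of the smallest set of cliques covering every edge (brute force)."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    nodes = list(adj)
    cliques = [c for r in range(2, len(nodes) + 1)
               for c in combinations(nodes, r) if is_clique(adj, c)]
    for size in range(1, len(edges) + 1):
        for cover in combinations(cliques, size):
            covered = {frozenset(e) for c in cover for e in combinations(c, 2)}
            if edges <= covered:
                return size
    return 0

# Hypothetical interference graph: tunnels a, b, c pairwise interfere,
# and tunnel d interferes only with c.  Two cliques ({a,b,c} and {c,d})
# cover every edge, so at least two links are needed to host all interferences.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
C = min_edge_clique_cover(adj)
```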
\begin{lemma} \label{lem: lower bound} Let $|\hat{E}_D|$ be the number of directed links required for a feasible solution of the ILP. Let $C$ be the size of the minimum edge clique cover for the interference graph $G_\mathcal{F}$. Then $|\hat{E}_D | \ge C$. \end{lemma} \begin{proof} The proof of this lemma is given in Appendix \ref{app: lemma 2}. \end{proof} \begin{theorem} \label{thm: cliques bound} Let $|\hat{E}|$ be the number of undirected links required for a feasible solution of the ILP. Let $C$ be the size of the minimum edge clique cover for the interference graph $G_\mathcal{F}$. Then, $$|\hat{E}| \ge {C \over 2}.$$ \end{theorem} \begin{proof} The proof of this theorem is given in Appendix \ref{app: theorem 2}. \end{proof} \subsection{A sufficient condition for optimality} We give a condition under which the recovered network has the same number of edges as the original network. When this condition is satisfied, the interference pattern cannot be achieved in a smaller network, so this result also provides a sufficient condition for minimality of a network. We prove this result by showing that if the condition is satisfied, then the recovered network achieves the lower bound developed in the previous subsection. We use this result in the subsequent sections to show that our polynomial time algorithms optimally solve the ILP for special networks. The main result of this subsection says that a given network is minimal if every directed edge in the network is associated with a unique interference (interfering pair of tunnels). Intuitively, this condition seems reasonable because if it is satisfied then each directed link in the graph creates a unique clique in the minimum edge clique cover of the interference graph.
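This condition can be checked mechanically when the tunnel paths are known. A sketch (function and variable names are ours):

```python
def directed_links(path):
    # directed links traversed by a tunnel, e.g. (1, 2, 3) -> {(1,2), (2,3)}
    return set(zip(path, path[1:]))

def satisfies_minimality_condition(edges, tunnels):
    """edges: directed links (i, j) of G; tunnels: node sequences.
    True iff every directed link has a tunnel pair meeting there and nowhere else."""
    for e in edges:
        found = False
        for a in range(len(tunnels)):
            for b in range(a + 1, len(tunnels)):
                shared = directed_links(tunnels[a]) & directed_links(tunnels[b])
                if shared == {e}:  # the pair intersects at e and nowhere else
                    found = True
        if not found:
            return False
    return True
```

For example, tunnels $(1,2,3)$ and $(4,1,2)$ share only the directed link $(1,2)$, so that link passes the test; if the second tunnel were $(4,1,2,3)$ instead, the pair would share two links and the test would fail.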
\begin{lemma} \label{lem: cliques equality} The size of the minimum edge clique cover of $G_\mathcal{F}$, $C=2|E|$ if and only if for each directed edge $(i,j)$ there exists a pair of tunnels $k^{ij}$ and $l^{ij}$ such that they intersect at link $(i,j)$ and nowhere else. \end{lemma} \begin{proof} The proof of this lemma is given in Appendix \ref{app: lemma 3}. \end{proof} \begin{theorem} \label{thm: minimality} Let $C$ be the size of the minimum edge clique cover for the interference graph $G_\mathcal{F}$. Let $\hat{G}^*(\hat{N}^*, \hat{E}^*)$ be the optimal network obtained by solving the ILP. If every directed link $(i,j)\in E$ has a pair of tunnels $k^{ij}$ and $l^{ij}$ such that they intersect at link $(i,j)$ and nowhere else, then $\hat{G}^*$ has the same number of edges as the original network, i.e. $|\hat{E}^*| = |E|$. \end{theorem} \begin{proof} The proof of this theorem is given in Appendix \ref{app: theorem 3}. \end{proof} Note that this theorem provides a sufficient condition but it may not be necessary. That is, there may be graphs where $C < 2|E|$ but the ILP still produces a graph with $|E|$ edges. Also, if the number of edges in the optimal network obtained by solving the ILP is the same as that of the original network, then we know that both networks are minimal. Hence, we can use the condition in the theorem as a sufficient condition for minimality of a network. \begin{corollary} A network $G(N,E)$ is minimal if every directed link $(i,j)\in E$ has a pair of tunnels $k^{ij}$ and $l^{ij}$ such that they intersect at link $(i,j)$ and nowhere else. \end{corollary} \section{Identifying Trees} \label{sec: tree} We design a polynomial time algorithm to recover a tree network. If $G$ is a minimal tree, i.e., every non-leaf node has at least three neighbors and all the leaf nodes are overlay nodes, then this algorithm recovers the tree exactly.
A similar result on recovering trees by using the distance between the leaf nodes is given in \cite{singh}; however, the algorithm of \cite{singh} requires the network to be minimal. In the situation when the network $G$ is a non-minimal tree, our algorithm produces a $\hat{G}$ that is the minimal tree corresponding to $G$, since both networks have the same $\mathcal{F}$ matrix. Note that there is a unique minimal tree corresponding to each non-minimal tree, which can be obtained by using the process discussed in Section \ref{sec: minimality}. \subsection{Algorithm} The tree identification algorithm is given in Algorithm \ref{alg: tree}. The algorithm uses the interference matrix $\mathcal{F}$ to obtain a tree graph $\hat{G}$ with the same $\mathcal{F}$. It begins by initializing the graph $\hat{G}$ and checking for terminating conditions in Steps 1 to 3. In Step 4, the algorithm identifies a node $k_1^*$ such that when all its siblings along with itself are removed, its parent becomes a leaf node. This property will later help us compute a new $\mathcal{F}$ matrix of the reduced graph. In Step 5, the algorithm finds a group of nodes $X_{k^*}$ that consists of all the sibling nodes of $k_1^*$. Procedure 3 is used to identify such nodes; see Lemma \ref{lem: same parent} for the proof. These nodes are then added to the recovered graph $\hat{G}$ in Step 6 by assigning them a common parent node, $p(X_{k^*})$. Step 7 removes the sibling nodes in $X_{k^*}$ from the original network $G$. Since the graph $G$ is not available, the removal is done indirectly by removing the corresponding tunnels from the $\mathcal{F}$ matrix. Note that node $k_1^*$ is not removed; instead it is renamed as the parent of the group $p(X_{k^*})$ in Step 8. This works because when all the siblings of $k_1^*$ are removed, the interference of the tunnels that start or end at $k_1^*$ is the same as that of the tunnels that start or end at its parent node.
The algorithm is iteratively applied to the reduced $\mathcal{F}$ matrix until only one or two leaf nodes remain. \begin{algorithm}[h!] \caption{IdentifyTree$(\mathcal{F}, \mathcal{O})$ for recovering a tree network} \label{alg: tree} \begin{enumerate} \item Add the nodes in $\mathcal{O}$ to $\hat{G}$. \item If $|\mathcal{O}| = 1$ return $\hat{G}$. \item If $|\mathcal{O}| = 2$, add an edge between the two nodes in $\hat{G}$ and return $\hat{G}$. \item Identify the tunnel $k^*$ that interferes with the largest number of other tunnels, $k^* = \argmax_k \sum_l \mathcal{F}_{kl}$. Let $k^*_1$ be the first node of tunnel $k^*$. \item For each node $i \in \mathcal{O}$, use Procedure \ref{alg: isSibling} to decide whether it has the same parent as $k^*_1$. Let $X_{k^*}$ be the set of nodes that successfully pass the test. \item Add a new node $p({X_{k^*}})$ to $\hat{G}$. Connect $p({X_{k^*}})$ to the nodes in $X_{k^*}$ in $\hat{G}$. \item For each node $i\in X_{k^*}, i\ne k^*_1$: \begin{itemize} \item Remove rows and columns corresponding to all the tunnels starting or ending at $i$ from $\mathcal{F}$. \item Remove node $i$ from $\mathcal{O}$. \end{itemize} \item Rename node $k^*_1$ to $p({X_{k^*}})$ so that any tunnel in $\mathcal{F}$ starting or ending at $k^*_1$ starts or ends at $p({X_{k^*}})$ respectively. \item Goto Step 2. \end{enumerate} \end{algorithm} The graphs created after the first and the second iterations of this algorithm are shown in Figure \ref{fig: tree example}. In the first iteration, Step 4 identifies one of the tunnels that intersect with the largest number of other tunnels, $(5,...,1)$. So $X_{k^*} = \{5,6,7\}$ is obtained in Step 5. This avoids obtaining sibling groups such as $\{3,4\}$, which, when removed, do not make their parent a leaf node. Step 6 produces the $\hat{G}$ shown in Figure \ref{fig: recovered tree}, and Steps 7 and 8 result in the reduced tree shown in Figure \ref{fig: tree 1iteration}.
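The $\mathcal{F}$-matrix bookkeeping in Steps 7 and 8 amounts to deleting rows and columns and renaming tunnel endpoints. A sketch with our own data layout (tunnels represented only by their overlay endpoints; names are hypothetical):

```python
def reduce_interference(F, endpoints, removed, k1, parent):
    """F: 0/1 matrix, one row per tunnel; endpoints: (start, end) pairs;
    removed: sibling nodes to delete; k1: the kept sibling, renamed to parent."""
    # Step 7: keep only tunnels that touch no removed sibling
    keep = [t for t, (s, e) in enumerate(endpoints)
            if s not in removed and e not in removed]
    # Step 8: rename the surviving sibling k1 to the new parent node
    new_endpoints = [(parent if endpoints[t][0] == k1 else endpoints[t][0],
                      parent if endpoints[t][1] == k1 else endpoints[t][1])
                     for t in keep]
    new_F = [[F[a][b] for b in keep] for a in keep]
    return new_F, new_endpoints

# Tunnels among overlay nodes 5, 6 and 1; siblings 6 (and 7) are removed
# and node 5 is renamed to the group parent.
endpoints = [(5, 6), (6, 5), (5, 1), (1, 5), (6, 1), (1, 6)]
F = [[0] * 6 for _ in range(6)]
F[2][3] = 1  # arbitrary interference entry, kept for illustration
new_F, new_endpoints = reduce_interference(F, endpoints, {6, 7}, 5, 'p567')
```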
The $\mathcal{F}$ matrix of the reduced tree is obtained by removing all the tunnels with nodes 6 and 7, then renaming node 5 to the parent node $p(5,6,7)$. Similarly the result of the second iteration is shown in Figures \ref{fig: tree 2iteration} and \ref{fig: recovered tree2}. Since there is only one group of siblings left in the graph after this iteration, the third iteration results in the $G$ with only one node. Also, the third iteration produces the $\hat{G}$ that is identical to the original graph in Figure \ref{fig: original tree}. \begin{figure}[h] \centering \subfigure[Original network $G$.]{ \centering \includegraphics[scale=.7]{figures/tree_example.pdf} \label{fig: original tree} } \\ \subfigure[Graph $G$ (implied by $\mathcal{F}$) after the first iteration.]{ \centering \includegraphics[scale=.7]{figures/tree_exampleG.pdf} \label{fig: tree 1iteration} } \subfigure[Recovered graph $\hat{G}$ after the first iteration.]{ \centering \includegraphics[scale=.7]{figures/tree_exampleGhat.pdf} \label{fig: recovered tree} } \subfigure[Graph $G$ (implied by $\mathcal{F}$) after the second iteration.]{ \centering \includegraphics[scale=.7]{figures/tree_exampleG2.pdf} \label{fig: tree 2iteration} } \subfigure[Recovered graph $\hat{G}$ after the second iteration.]{ \centering \includegraphics[scale=.7]{figures/tree_exampleGhat2.pdf} \label{fig: recovered tree2} } \caption{First two iterations of the tree identification algorithm. The third iteration (not shown) recovers the complete graph in Figure \ref{fig: original tree}.} \label{fig: tree example} \end{figure} \subsection{Analysis} In order to prove that Algorithm \ref{alg: tree} obtains the minimal tree, we first show that Step 4 identifies a node $k^*_1$ whose parent becomes a leaf node when we perform the node removal in Step 7. 
In Step 8 of the algorithm, this allows us to use the interference properties of the tunnels starting or ending at $k^*_1$ to obtain the interference of the tunnels of the parent node. \begin{lemma} \label{lem: tunnel id} Let $l = (l_1, l_2, ..., l_{|l|})$ be the tunnel that interferes with the largest number of other tunnels. When all the leaf nodes connected to $l_2$ are removed, $l_2$ becomes a leaf node in the resulting graph. \end{lemma} \begin{proof} The proof of the lemma is given in Appendix \ref{app: lemma 4}. \end{proof} The following lemma shows that Procedure 3 identifies the nodes that share the same parent. The main idea behind the proof is that a path between two nodes that share the same parent interferes with only the tunnels starting or ending at these nodes. \begin{lemma} \label{lem: same parent} Two leaf nodes of a tree $i$ and $j$ share the same parent if and only if the tunnel from $i$ to $j$ does not interfere with any tunnel $l$ such that $l_1 \ne i$ and $l_{|l|} \ne j$. \end{lemma} \begin{proof} The proof of this lemma is given in Appendix \ref{app: lemma 5}. \end{proof} \floatname{algorithm}{Procedure} \begin{algorithm}[h] \caption{AreSiblings$(\mathcal{F}, i,j)$ for checking whether two nodes $i$ and $j$ share the same parent} \label{alg: isSibling} \begin{enumerate} \item Let $k$ be the tunnel going from node $i$ to $j$. \item For each tunnel $l$ in the network: \qquad If $\mathcal{F}_{kl} = 1$ and $l_1 \ne i$ and $l_{|l|} \ne j$ \qquad \qquad Nodes $i$ and $j$ do not share the same parent. \qquad \qquad Return. \item Let $k$ be the tunnel from $j$ to $i$ and repeat Step 2. \item Nodes $i$ and $j$ share the same parent. \end{enumerate} \end{algorithm} \floatname{algorithm}{Algorithm} Now we prove the following theorem that shows that the algorithm recovers the minimal tree network. \begin{theorem} If a given network $G$ is a minimal tree, then Algorithm \ref{alg: tree} recovers the network.
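Procedure \ref{alg: isSibling} translates directly into code. In the sketch below (helper names are ours) interference is recomputed from explicit tunnel paths instead of being read off a precomputed $\mathcal{F}$:

```python
def shared_links(p, q):
    # directed links common to two tunnels
    links = lambda t: set(zip(t, t[1:]))
    return links(p) & links(q)

def are_siblings(tunnels, i, j):
    """tunnels: node sequences covering both directions of every overlay pair."""
    for a, b in ((i, j), (j, i)):
        k = next(p for p in tunnels if p[0] == a and p[-1] == b)
        for l in tunnels:
            # every tunnel interfering with k must start at a or end at b
            if l is not k and shared_links(k, l) and l[0] != a and l[-1] != b:
                return False
    return True

# Minimal tree: root R with leaf 5, child A with leaves 1,2, child B with leaves 3,4.
paths = [(1, 'A', 2), (1, 'A', 'R', 'B', 3), (1, 'A', 'R', 'B', 4),
         (1, 'A', 'R', 5), (2, 'A', 'R', 'B', 3), (2, 'A', 'R', 'B', 4),
         (2, 'A', 'R', 5), (3, 'B', 4), (3, 'B', 'R', 5), (4, 'B', 'R', 5)]
tunnels = paths + [tuple(reversed(p)) for p in paths]
```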
\end{theorem} \begin{proof} The proof of this theorem is given in Appendix \ref{app: theorem 4}. \end{proof} Note that not only is the recovered graph $\hat{G}$ isomorphic to $G$, but the relative positions of the overlay nodes are also the same. That is, if the overlay nodes $i$ and $j$ share the same parent in $G$, they also share the same parent in $\hat{G}$. Also, because the $\mathcal{F}$ matrix for a non-minimal tree is the same as that of the minimal version of the tree, and the minimal tree is unique for any non-minimal tree, we get the following corollary. \begin{corollary} If a given network $G$ is a non-minimal tree, then the tree $\hat{G}$ recovered by Algorithm \ref{alg: tree} is the unique minimal tree for $G$. \end{corollary} The following corollary states that the graph generated by the tree algorithm solves the ILP optimally. This is true simply because all minimal trees satisfy the condition of Theorem \ref{thm: minimality}. \begin{corollary} If the interference pattern in a $\mathcal{F}$ matrix can be represented in a tree, Algorithm \ref{alg: tree} produces a $\hat{G}$ that solves the ILP optimally. \end{corollary} Note that even when $G$ is not a tree, Algorithm \ref{alg: tree} can produce a tree as long as the interference can be represented by a tree. However, if the interference pattern cannot be represented by a tree, this algorithm will either fail at Step 4, or the algorithm terminates but the recovered $\hat{G}$ has a different interference matrix than $\mathcal{F}$. \section{Identifying Rings} We now consider a situation when the $\mathcal{F}$ matrix cannot be represented in a tree. Specifically we consider a graph $G$ where the underlay nodes are arranged in a ring, and each underlay node is attached to exactly one overlay node. Also, we will assume that the network uses a shortest path routing algorithm, hence, the tunnels take the shortest paths between the overlay nodes.
If the $\mathcal{F}$ matrix can be represented in a ring, our algorithm identifies the order of the overlay nodes. Note that knowing the order of the nodes gives more information than just recovering an isomorphic graph, as in, e.g., \cite{kelner}. Just like the tree discovery algorithm in the previous section, this algorithm can also be used to show that a particular network is not a ring. \subsection{Algorithm} The ring identification algorithm is given in Algorithm \ref{alg: ring}. This algorithm builds the ring in an incremental fashion. First, in Step 1 an overlay node $i$ and its parent node $p(i)$ are added to $\hat{G}$. The key idea behind the algorithm is in Step 2. It uses the $\mathcal{F}$ matrix to identify two overlay nodes in the ring that are closest to node $i$, i.e. two nodes such that their parents are neighbors of $p(i)$. In Steps 3 to 5 we attach the two nodes to their parents, and connect the parents to $p(i)$. \begin{algorithm}[h!] \caption{IdentifyRing$(\mathcal{F}, \mathcal{O})$ for recovering a ring network} \label{alg: ring} For each overlay node $i \in \mathcal{O}$: \begin{enumerate} \item If $i$ is not in $\hat{G}$, add two nodes $i$ and $p(i)$ to $\hat{G}$. Add an edge $\{i, p(i)\}$ to $\hat{G}$. \item Identify two tunnels starting at node $i$ that interfere with the least number of other tunnels. Call these tunnels $k^*$ and $l^*$. \item If node $k^*_{|k^*|}$ is not in $\hat{G}$, add two nodes $k^*_{|k^*|}$ and $p(k^*_{|k^*|})$. Add edge $\{k^*_{|k^*|}, p(k^*_{|k^*|})\}$. \item Add edge $\{p(i), p(k^*_{|k^*|})\}$ to $\hat{G}$ if it does not exist. \item Repeat Steps 3 and 4 for node $l^*_{|l^*|}$. \end{enumerate} \end{algorithm} \subsection{Analysis} We will show that Algorithm \ref{alg: ring} is guaranteed to recover the correct ring if $|\mathcal{O}| \ge 5$. For $|\mathcal{O}| = 3$, any ordering of the nodes is the same because the network links are bidirectional, so using the algorithm is unnecessary.
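The key step of the algorithm, picking the two least-interfering tunnels out of each node, can be illustrated on a hypothetical five-node ring (all helpers below are ours; interference is recomputed from the shortest paths rather than read from $\mathcal{F}$):

```python
def links(p):
    return set(zip(p, p[1:]))

def shortest_ring_path(i, j, n):
    """Overlay i -> overlay j through the underlay ring u0..u(n-1)."""
    fwd = [(i + s) % n for s in range((j - i) % n + 1)]
    bwd = [(i - s) % n for s in range((i - j) % n + 1)]
    arc = fwd if len(fwd) <= len(bwd) else bwd
    return (i,) + tuple('u%d' % v for v in arc) + (j,)

n = 5
tunnels = [shortest_ring_path(i, j, n)
           for i in range(n) for j in range(n) if i != j]

def ring_neighbors(i):
    """End nodes of the two least-interfering tunnels out of overlay node i."""
    mine = [p for p in tunnels if p[0] == i]
    count = lambda p: sum(1 for q in tunnels
                          if q is not p and links(p) & links(q))
    mine.sort(key=count)
    return {mine[0][-1], mine[1][-1]}
```

On this instance the two least-interfering tunnels out of node 0 end at nodes 1 and 4, whose parents are indeed the ring neighbors of $p(0)$.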
The algorithm might not produce the correct result for a network with $|\mathcal{O}| = 4$ if the tunnels between the nodes on opposite sides pass through the same set of nodes. The networks in both of these situations, with 3 or 4 overlay nodes in a ring, are not minimal. \begin{lemma} Let $G$ be a graph where the underlay nodes are arranged in a ring, and each underlay node is connected to exactly one overlay node. Let $|\mathcal{O}| \ge 5$. Let $i$ and $j$ be two overlay nodes and $l$ be the tunnel from $i$ to $j$. Underlay nodes $p(i)$ and $p(j)$ are neighbors if and only if tunnel $l$ interferes with the fewest number of other tunnels. \end{lemma} \begin{proof} The proof of this lemma is given in Appendix \ref{app: lemma 6}. \end{proof} Algorithm \ref{alg: ring} identifies overlay nodes whose parent nodes are neighbors and pieces them together into a ring. Hence, Theorem \ref{thm: ring} follows directly from the lemma above. \begin{theorem} \label{thm: ring} If the given network $G$ is a minimal ring, Algorithm \ref{alg: ring} recovers the network. \end{theorem} Similar to the tree identification algorithm, this algorithm will produce a corresponding minimal ring if the original network is a non-minimal ring. This is true because both the rings have the same $\mathcal{F}$ matrix. Also, because a minimal ring satisfies the sufficient condition for minimality, this algorithm optimally solves the ILP for ring networks. Hence we get the following two corollaries. \begin{corollary} If a given network $G$ is a non-minimal ring with $|\mathcal{O}| \ge 5$, then the ring $\hat{G}$ recovered by Algorithm \ref{alg: ring} is the unique minimal ring for $G$. \end{corollary} \begin{corollary} If the interference pattern in the $\mathcal{F}$ matrix with $|\mathcal{O}| \ge 5$ can be represented in a ring, Algorithm \ref{alg: ring} produces a $\hat{G}$ that solves the ILP optimally.
\end{corollary} \section{Identifying General Networks} Inspired by the algorithms for identifying trees and rings in the previous sections, we develop a scheme for identifying general networks. A network can consist of trees and rings connected to each other. Our algorithm assumes that the network uses shortest path routing, and attempts to separate the trees from the rest of the graph and identify these components separately. We will use Algorithm \ref{alg: tree} for recovering the trees, and we will design a new algorithm inspired by Algorithm \ref{alg: ring} for the non-tree components. Finally, we will combine the discovered components to obtain the full network. This scheme is largely a heuristic; hence, we will compare its performance against another algorithm that also discovers general graphs. \subsection{Algorithm} We first present Algorithm \ref{alg: multi ring}, which is designed to recover a graph where every underlay node is part of one or more cycles and only one overlay node is attached to each underlay node. The algorithm works in a similar fashion to the ring recovery algorithm from the previous section. The difference is that now each underlay node can have more than two underlay neighbors. So, for each overlay node $i$, the algorithm attempts to find all the overlay nodes whose parents are neighbors of $p(i)$. For clarity, we present this part of the algorithm separately in Procedure 6. The main idea behind Procedure 6 is shown in an example in Figure \ref{fig: three neighbors}. For node 1, the procedure first identifies two neighbors of $p(1)$ using the tunnels that start at 1 and intersect with the fewest number of other tunnels. The intuition behind this is the same as for the ring algorithm from the previous section; however, when there is more than one ring, it is not guaranteed that the shortest tunnels have the fewest number of interferences. It is possible that the tunnel (1,...,5) intersects with the same number of tunnels as (1,...,3).
After identifying the two neighbors, the procedure avoids any tunnels that pass through these neighbors and identifies other shortest tunnels. \begin{figure}[h] \centering \includegraphics[scale=.6]{figures/cycle_heuristic.pdf} \caption{Example of Procedure 6 at work. Node $p(1)$ has three neighbors $p(2), p(3)$ and $p(4)$. Procedure 6 first attempts to identify two nodes, e.g. 2 and 4, by minimizing the number of tunnel intersections. Then node 3 is identified by using the property that tunnel (3,...,1) does not interfere with the tunnels (2,...,4) or (4,...,2). } \label{fig: three neighbors} \end{figure} \begin{algorithm}[h!] \caption{IdentifyRings$(\mathcal{F}, \mathcal{O})$ for recovering a non-tree network} \label{alg: multi ring} Initialize $\hat{G}$ to an empty graph.\\ For each overlay node $i \in \mathcal{O}$: \begin{enumerate} \item Obtain the neighboring nodes, $R$ = allNeighbors($i$). \item For each $r \in R$: \begin{enumerate} \item If node $r$ is not in $\hat{G}$ add nodes $r$ and $p(r)$ and edge $\{r, p(r)\}$ to $\hat{G}$. \item Add edge $\{p(i), p(r)\}$ to $\hat{G}$ if it does not exist. \end{enumerate} \end{enumerate} \end{algorithm} \floatname{algorithm}{Procedure} \begin{algorithm} \label{proc: allNeighbors} \caption{allNeighbors$(\mathcal{F}, \mathcal{O},i)$ for finding all $j$ such that $p(i)$ and $p(j)$ are neighbors} \begin{enumerate} \item Identify two tunnels starting at node $i$ that interfere with the least number of other tunnels. Add the end nodes of these tunnels to set $R$. \item For each $n \in ({\mathcal{O} \backslash R})$, find a tunnel $k=(i,...,n)$ such that it interferes with the fewest number of tunnels and does not interfere with any tunnel $l$ such that $l_1, l_{|l|} \in R$. \item If tunnel $k$ exists, add $k_{|k|}$ to $R$ and goto Step 2. \item Return $R$. \end{enumerate} \end{algorithm} \floatname{algorithm}{Algorithm} Finally, we present Algorithm \ref{alg: general} for identifying networks with multiple rings and trees.
In Step 2, this algorithm identifies sets of overlay nodes that could be a part of a tree using Procedure 3. Step 2(i) identifies the siblings, $X$, of node $i$. Step 2(ii) obtains the siblings of all the nodes in $X$. If $j$ is a sibling of $i$, then $i$ must also be a sibling of $j$. Using this property, Step 2(iii) attempts to reduce false positives. Step 2(iv) adds the nodes that are identified as part of a tree into the set of existing nodes. If some part of the tree containing the nodes in $S$ has already been identified, then these nodes must have one node in common with $S$, i.e. $S'$ exists. In such a case, nodes in $S$ are added to $S'$; otherwise, $S$ is added as a new element of $\mathcal{C}$. The tunnels belonging to all but one node in $S$ are removed from $\mathcal{F}$, and Step 2 is repeated on this new interference matrix. The completion of Step 2 produces the set $\mathcal{C}$ such that each element of $\mathcal{C}$ is a set of nodes that belong to the same tree. Step 3 of the algorithm retrieves the original $\mathcal{F}$ matrix. Then in Step 4, Algorithm \ref{alg: tree} is used on the elements of $\mathcal{C}$ to discover their corresponding trees. If the tree identification algorithm completes successfully, then all but one of the overlay nodes belonging to the tree are removed from the $\mathcal{F}$ matrix. The node that is not removed acts as an anchor node while combining the trees with the rest of the graph. In Step 5, the resulting $\mathcal{F}$ matrix is then used in Algorithm 5 to recover the non-tree part of the graph. In order to combine a tree with the non-tree graph, in Step 6, the anchor node corresponding to the tree is found in the graph. Then in Steps 6(ii) and 6(iii), attempts are made to connect the tree to the anchor node at different locations in the tree. The algorithm keeps the connection that minimizes the difference between the interference matrix of the resulting graph $\hat{G}$ and the original $\mathcal{F}$ matrix. \begin{algorithm}[h!]
\caption{IdentifyGeneral$(\mathcal{F}, \mathcal{O})$ for recovering general networks} \label{alg: general} Initialize $\hat{G}$ to empty graph. \begin{enumerate} \item Let $\mathcal{C}$ be an empty set. Let $\mathcal{F'} = \mathcal{F}$. \item For each $i \in \mathcal{O}$ : \begin{enumerate}[i.] \item Use Procedure 3 to find the set of nodes that share the same parent as $i$. Let $X$ be the set. \item For each $j \in X$ use Procedure 3 to find the set of nodes that share the same parent as $j$. Let $X_j$ be the set. \item Let $S = X \cap (\cap_j X_j)$ \item If $|S| > 1$, \begin{enumerate}[a)] \item Let $S' \in \mathcal{C}$ be a set of nodes such that $S' \cap S \ne \{\}$. \item If such $S'$ exists, $S' := S' \cup S$. Otherwise, $\mathcal{C} := \mathcal{C} \cup \{S\}$ \item Let $x$ be an arbitrary node in $S$. Let $S := S \backslash \{x\}$. \item Remove tunnels $l$ from $\mathcal{F}$ if $l_1 \in S$ or $l_{|l|} \in S$. Let $\mathcal{O} := \mathcal{O} \backslash S$. \item Restart Step 2. \end{enumerate} \end{enumerate} \item Let $\mathcal{F} := \mathcal{F'}$. \item For each $S \in \mathcal{C}$: \begin{enumerate}[i.] \item Use Algorithm \ref{alg: tree} on the nodes in $S$. Let $T$ be the corresponding tree. \item If the algorithm fails to produce a tree, continue. \item Remove tunnels $l$ from $\mathcal{F}$ if $l_1 \in S$ or $l_{|l|} \in S$. Let $\mathcal{O} := \mathcal{O} \backslash S$. \end{enumerate} \item Use Algorithm 5 on the remaining $\mathcal{F}$ to obtain $\hat{G}$. \item For each tree $T \in \mathcal{T}$: \begin{enumerate}[i.] \item Find the overlay node $i$ that is common to $T$ and $\hat{G}$. \item For each underlay node $j$ of $T$, add $T$ to $\hat{G}$ by replacing $i$ in $\hat{G}$ by node $j$ of the tree. Calculate the interference matrix for each $j$. \item For each underlay node $j$ of $T$ add $T$ to $\hat{G}$ by replacing $p(i)$ in $\hat{G}$ by node $j$ of the tree. Calculate the interference matrix for each $j$.
\item Keep the $\hat{G}$ that produces the interference matrix closest to $\mathcal{F}$ in Steps ii and iii. \end{enumerate} \end{enumerate} \end{algorithm} \subsection{Simulation results} We compare the performance of Algorithm \ref{alg: general} against that of the RGD1 algorithm from \cite{kelner}. For the implementation of RGD1, we obtain the exact length of each path by using a shortest path algorithm. All links are assumed to have unit length. We choose the parameter $Rg + \tau$ to be 4. We also tried the values of 3 and 5 for this parameter; however, the performance was not as good. The graphs used to obtain the data for the simulation were generated to be similar to the random graphs considered in \cite{kelner}. We first generate an Erd\H{o}s-R\'enyi random graph with parameters $\mathcal{G}(n,2/n)$. Then we find the largest connected component of the graph, and remove all the other nodes that do not belong to this component. We then attach overlay nodes to 80\% of the remaining nodes uniformly at random. Finally, we remove any underlay nodes that have degree less than 3 by using the process discussed in Section \ref{sec: minimality}. We generate 100 networks for each value of $n$, where $n = 10, 20, ..., 50$ and obtain the measurements required for both algorithms: distances for RGD1 and the $\mathcal{F}$ matrix for our algorithm. Finally, we use the measurements to recover the graphs. The performance of the two algorithms was measured by computing the {\em edit distance} between the original graph $G$ and the recovered graph $\hat{G}$. Edit distance measures the number of links in $\hat{G}$ that need to be added or removed in order to make it isomorphic to $G$. This metric is similar to the metric used in \cite{kelner} to obtain the asymptotic bounds of RGD1. Unfortunately, calculating the graph edit distance is an NP-hard problem, so we use an open source tool called GEDEVO \cite{gedevo} to approximate it.
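The random instance generation described above can be sketched with the standard library only (the final pruning of underlay nodes with degree less than 3 via the process of Section \ref{sec: minimality} is omitted, and all names are ours):

```python
import random

def generate_network(n, seed=0):
    rng = random.Random(seed)
    # Erdos-Renyi underlay G(n, 2/n)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < 2.0 / n:
                adj[u].add(v)
                adj[v].add(u)
    # keep only the largest connected component
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        comps.append(comp)
    giant = max(comps, key=len)
    underlay = {v: adj[v] & giant for v in giant}
    # attach overlay nodes to 80% of the underlay nodes, uniformly at random
    hosts = rng.sample(sorted(giant), max(1, int(0.8 * len(giant))))
    overlay = {('o', v): v for v in hosts}
    return underlay, overlay

underlay, overlay = generate_network(30, seed=1)
```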
\begin{figure} \centering \subfigure[Edit distances for all the iterations with n=10.]{ \centering \includegraphics[scale=.36]{figures/n10.pdf} \label{fig: n10} } \subfigure[Edit distances for all the iterations with n=50.]{ \centering \includegraphics[scale=.36]{figures/n50.pdf} \label{fig: n50} } \subfigure[Average edit distance for each value of $n$.]{ \centering \includegraphics[scale=.36]{figures/avg.pdf} \label{fig: average final} } \caption{Comparison of Algorithm \ref{alg: general} and RGD1.} \label{fig: simulation final} \end{figure} The results of the simulations are given in Figure \ref{fig: simulation final}. Figures \ref{fig: n10} and \ref{fig: n50} show the performance of the two algorithms for each of the 100 graphs that were generated. We can see that in most of the cases, our algorithm outperforms RGD1. Figure \ref{fig: average final} shows the average performance of the two algorithms across different values of $n$. Again, we can see that our algorithm outperforms RGD1. \section{Conclusion} We developed a new method for discovering the topology of a network. It uses the path interference information, which can be obtained by using the measurements available at the end nodes. Using the path interference, we formulated an integer linear program that finds a minimal graph that can contain all the interferences. We then developed polynomial time algorithms that solve the ILP for the special cases when the network is a tree or a cycle. Finally, we developed a heuristic for identifying general networks and compared its performance to a well known algorithm. Future research in the area will focus on developing better heuristics for general networks and providing performance guarantees.
\section{Introduction} \label{sec-intro} One of the outstanding issues in string theory is the problem of finding realistic string compactifications and connecting them to cosmological observations. It requires several steps such as (i) choosing an appropriate setup for moduli stabilization, (ii) obtaining a meta-stable vacuum with a positive cosmological constant, and (iii) producing an inflationary model. Each of these steps is highly non-trivial and has its own obstructions. Despite many years of research and the extensive literature on the subject, meta-stable de Sitter (dS) vacua still appear to be very difficult to get in string theory. There are no robust predictions about inflation, and no compelling inflationary model has been derived from string theory yet. Both dS vacua and inflation are usually obtained in string theory at the price of adding effects which can spoil moduli stabilization (see \cite{Baumann:2014nda} for a recent review). Furthermore, most of the scenarios in string theory cannot be considered as derived from first principles, because of at least one of the following reasons: \begin{itemize} \item the lack of precise knowledge about quantum corrections, \item splitting the procedure of moduli stabilization into several steps, which may result in overlooking tachyonic directions that spoil meta-stability, \item the necessity to introduce additional uplifting mechanisms, \item disregarding back reaction effects. \end{itemize} The first of these issues is particularly important. While it is possible to stabilize all moduli at the classical level \cite{DeWolfe:2005uu}, several no-go theorems forbid dS vacua in the simplest such supergravity compactifications \cite{Maldacena:2000mw,Ivanov:2000fg}.
To avoid them, it is necessary to include either quantum corrections, both perturbative and non-perturbative, or non-geometric fluxes (see, for instance, \cite{Kachru:2003aw,Balasubramanian:2005zx,Conlon:2005ki,Westphal:2006tn,deCarlos:2009qm,Louis:2012nb,Danielsson:2012by,Blaback:2013qza,Hassler:2014mla}). The significance of explicit examples of truly ``quantum" calculations in string theory goes well beyond the problem of the cosmological constant. Indeed, it is precisely such string-theoretic computations of quantum gravity corrections that are usually put ``out of brackets" in modern phenomenologically oriented theoretical cosmology. Taking into account non-perturbative corrections is necessary to stabilize all moduli, to resolve unphysical singularities in moduli spaces, and to ensure string dualities. The very possibility of explicit (or exact) non-perturbative calculations is highly non-trivial in string theory, and the known examples are very rare. One example, where such calculations have become possible, is the case of type II string compactifications on Calabi-Yau (CY) threefolds. In this case the low energy effective action (LEEA) in four dimensions preserves $N=2$ local supersymmetry (8 supercharges) and is completely determined by the geometry of its moduli space spanned by the scalar fields of $N=2$ vector and hypermultiplets. While the vector multiplet moduli space was described in full detail using mirror symmetry long ago (see, e.g., \cite{VanProeyen:1995sw} for a review), understanding of the quantum corrected hypermultiplet moduli space was very limited until recently. The advance of twistorial techniques drastically changed the situation and allowed us to get an {\it exact} description of most of the quantum effects --- at present, amongst all quantum corrections, only the so-called NS5-brane instantons remain out of control (see \cite{Alexandrov:2011va,Alexandrov:2013yva} and references therein).
Thus, it is natural to apply these exact results in a more general context of moduli stabilization. Of course, this requires extending them beyond the class of compactifications where they were initially derived. In particular, the phenomenologically interesting compactifications include fluxes, localized sources such as D-branes and orientifold planes, and preserve only $N=1$ local supersymmetry (4 supercharges) in four dimensions. However, at present, quantum corrections are beyond control in such cases. On the other hand, it is possible to generate a non-trivial scalar potential for moduli stabilization in a unique way already in $N=2$ supergravity. This can be achieved by adding NS- and RR-fluxes leading to the gauging of some of the isometries of the moduli space of the original fluxless compactification. In fact, the integrated Bianchi identities give rise to certain tadpole cancellation conditions, which in the presence of fluxes generically can be satisfied only by adding orientifolds reducing supersymmetry to $N=1$ \cite{Giddings:2001yu}. However, in type IIA string theory it is possible to choose the fluxes such that the tadpole cancellation condition holds automatically. This motivates us to consider $N=2$ gauged supergravity, which results from the type IIA CY compactifications with the NS $H$-fluxes and the RR $F_4$- and $F_6$-fluxes, provided one ignores their back reaction. Such a setup was already studied in \cite{Kachru:2004jr}. We go beyond the earlier studies and compute the quantum corrected scalar potential in the gauged supergravity including the non-perturbative terms, which come from the instanton corrections to the geometry of the moduli space known exactly in the absence of fluxes. The idea behind this computation is that the preserved $N=2$ supersymmetry protects the quantum corrections, so that the exact non-perturbative potential, where the back reaction effects are taken into account, should not differ too much from the one obtained here.
In this paper we restrict ourselves to the case of a {\it rigid} CY threefold $\mathfrak{Y}$. Such a manifold has vanishing Hodge number $h^{2,1}(\mathfrak{Y})=0$, so that the LEEA is described by $N=2$ supergravity interacting with a single hypermultiplet, called the {\it universal hypermultiplet} (UH), and some number $h^{1,1}(\mathfrak{Y})> 0$ of vector multiplets. This leads to various simplifications, such as the absence of complex structure moduli, which allow us to make our analysis very explicit. Actually, one of our original motivations was to find a setup for flux compactifications which takes into account quantum corrections and, at the same time, can be treated as explicitly as possible. It should be mentioned that several attempts to take into account instanton corrections in compactifications on rigid CY have already appeared in the literature, most notably, in \cite{Davidse:2005ef}. However, the analysis of \cite{Davidse:2005ef} did not include contributions of vector multiplets and, as it turned out later, was based on a misleading ansatz for D-instantons. In contrast, we consider here the full scalar potential including all moduli. Moreover, we do not assume that there exists a hierarchy allowing us to perform moduli stabilization in a step-by-step procedure, but analyze all equations on critical points on the same footing. One of our results is a simple condition on the flux parameters (see \eqref{relflux}) which allows us to find a set of {\it exact} solutions to the quantum corrected equations for all axion fields, i.e. the periods of the $B$-field and the RR 3-form potential along 2- and 3-cycles of $\mathfrak{Y}$, respectively. The role of the worldsheet and D-instanton corrections for the existence of these solutions is pivotal. Unfortunately, the equations we get for the remaining scalars, namely the dilaton and K\"ahler moduli, are too complicated to be treated in full generality.
Therefore, we first restrict our attention to the perturbative approximation, where all instanton contributions are neglected but perturbative $\alpha'$- and $g_s$-corrections, controlled by the Euler characteristic of $\mathfrak{Y}$, are retained. We obtain bounds on the values of the dilaton and the CY volume which admit the existence of critical points. In particular, we find that this class of compactifications does not allow critical points with both large volume and small string coupling, i.e. in the only region where all quantum corrections can be neglected. This can be contrasted with the result of \cite{DeWolfe:2005uu} that a more general choice of fluxes provides moduli stabilization at the classical level, but this choice must be supplemented by an orientifold projection to satisfy the tadpole cancellation condition mentioned above and leads to AdS vacua. To further analyze the critical points, we first restrict ourselves to the case with one K\"ahler modulus, i.e. to a CY with $h^{1,1}=1$. Since no CY with such Hodge numbers has been found so far, this case should only be viewed as a convenient model for testing moduli stabilization, not as having a string theory realization. In this special case we find two critical points, which both lead to a positive potential, but both turn out to be unstable. Then we turn to the general case, where we directly address the problem of stability of critical points, without trying to find them explicitly. To this end, we analyze the matrix of the second derivatives and show that it cannot be positive definite, which means that there are no meta-stable vacua. Thus, in the perturbative approximation, these simple models cannot provide stabilization of all moduli. Finally, we attempt to take into account the contributions of worldsheet and D-instantons in the simplest case of $h^{1,1}=1$.
As before, we perform a numerical analysis of the second derivative matrix, which shows again that in the physical region the matrix is never positive definite on mass shell. This result appears to be extremely non-trivial, given the very complicated analytical form of the second derivatives. The effect of instantons on the perturbative analysis for $h^{1,1}>1$ will be investigated elsewhere. The paper is organized as follows. In the next section we review some basic information about CY string compactifications, their moduli spaces, and the effect of fluxes, and provide a formula for the scalar potential induced by the gauging in $N=2$ supergravity. We also compute this potential explicitly, including perturbative and non-perturbative quantum corrections, in the gauged supergravity inspired by the class of compactifications we concentrate on. In section \ref{sec-modstab} we discuss the equations on critical points and find a solution for all axion fields. In section \ref{sec-pert} we study the perturbative approximation. First, we derive general bounds on critical points, then analyze in detail the case with one K\"ahler modulus, and finally perform a stability analysis in the generic case with an arbitrary number of moduli. In section \ref{sec-inst} we present the results of our numerical analysis of the one-modulus case in the presence of instantons. Section \ref{sec-concl} is devoted to a discussion of our results. Several appendices contain details about special and quaternionic geometries, the metrics on $N=2$ vector and hypermultiplet moduli spaces, and our stability analysis of critical points. \section{Scalar potential from gauging} \label{sec-potential} \subsection{$N=2$ gauged supergravity and its scalar potential} \label{subsec-SUGRA} The four-dimensional LEEA of type II strings compactified on a Calabi-Yau threefold $\mathfrak{Y}$ is given by $N=2$ supergravity coupled to $N=2$ vector and hypermultiplets.
In the two-derivative approximation, where one ignores the higher curvature terms appearing as $\alpha'$-corrections, the bosonic part of the action comprises only kinetic terms for the metric, vector and scalar fields arising after compactification. The couplings of these kinetic terms are, however, non-trivial, being restricted by $N=2$ supersymmetry in terms of the metrics on the vector and hypermultiplet moduli spaces, $\mathcal{M}_V$ and $\mathcal{M}_H$, parametrized by the scalars of the corresponding multiplets. Furthermore, $N=2$ supersymmetry restricts $\mathcal{M}_V$ to be a special K\"ahler manifold, with a K\"ahler potential $\mathcal{K}(z^i,\bar z^{\bar \imath})$ (with $i=1,\dots, h^{1,1}$ in type IIA) determined by a holomorphic prepotential $F(X^I)$ (with $I=(0,i)=0,\dots ,h^{1,1}$ and $z^i=X^i/X^0$), a homogeneous function of degree 2. Similarly, $\mathcal{M}_H$ must be a quaternion-K\"ahler (QK) manifold of dimension $4(h^{2,1}+1)$ \cite{Bagger:1983tt}. We denote the metrics on the two moduli spaces by $\mathcal{K}_{i\bar \jmath}$ and $g_{uv}$, respectively. The resulting theory is, however, not appropriate from the phenomenological point of view since it does not have a scalar potential, so that all moduli remain unfixed. This gives rise to the problem of {\it moduli stabilization}, i.e. generating a potential for the moduli with a local minimum and no flat directions. Local $N=2$ supersymmetry does allow a non-trivial scalar potential, but this requires considering $N=2$ {\it gauged} supergravity. The latter can be constructed from the usual ungauged supergravity when the moduli space $\mathcal{M}_V\times \mathcal{M}_H$ has some isometries, which are to be gauged with respect to the vector fields $A^I$ comprising, besides those of vector multiplets, the gravi-photon $A^0$ of the gravitational multiplet. Physically, this means that the scalar fields affected by the isometries acquire charges under the vector fields used in the gauging.
The charges are proportional to the components of the Killing vectors $k_\alpha$ corresponding to the gauged isometries. In general, the gauge group must be a subgroup of the isometry group, but in this paper we deal only with abelian gaugings of isometries of the hypermultiplet moduli space $\mathcal{M}_H$. Then the charges are characterized by the vectors ${\boldsymbol k}_I=\Theta_I^\alpha k_\alpha\in T\mathcal{M}_H$, where $\Theta_I^\alpha$ is known as the {\it embedding tensor}. It is remarkable that in $N=2$ gauged supergravity the geometry of the moduli space together with the charge vectors {\it completely} fix the scalar potential. Explicitly, it is given by \cite{D'Auria:1990fj,Andrianopoli:1996cm,deWit:2001bk}\footnote{Our conventions and normalizations are explained in Appendix \ref{ap-norm}. Note that the potential appears in the literature in two possible forms, which are both given in \eqref{scpot-generic} and are simply related by Eq. \eqref{relcND}. In the presence of non-abelian gaugings the potential acquires additional terms, which we, however, omit.} \begin{eqnarray} V &=& 4 e^\mathcal{K} {\boldsymbol k}^u_I {\boldsymbol k}^v_J g_{uv} X^I \bar X^J + e^\mathcal{K} \(\mathcal{K}^{i\bar \jmath}D_i X^I D_{\bar \jmath}\bar X^J-3 X^I \bar X^J\)\(\vec\mu_I\cdot \vec \mu_J\), \label{scpot-gen} \end{eqnarray} where $D_i X^I=(\partial_i+\partial_i \mathcal{K})X^I$ and $\vec\mu_I$ is the triplet of moment maps which the quaternionic geometry of $\mathcal{M}_H$ assigns to each isometry ${\boldsymbol k}_I$ \cite{MR872143}. This result gives us an opportunity to search for potentials ensuring moduli stabilization, using the geometric data from the ungauged theory as an input.
In particular, here we employ the exact results on the non-perturbative description of $\mathcal{M}_V$ and $\mathcal{M}_H$ in type II CY compactifications, described below in subsection \ref{subsec-qcor}, to infer the impact of quantum corrections on the potential \eqref{scpot-gen} and on the stabilization of moduli. \subsection{Flux compactifications} \label{subsec-flux} In string theory, $N=2$ gauged supergravity can be obtained by adding closed string fluxes to a CY compactification (see \cite{Grana:2005jc} for a review). In fact, fluxes back react on the background geometry, so that the simple direct product $M_4\times \mathfrak{Y}$ is not a solution of the (classical) equations of motion anymore. To get a solution, one has to add a warp factor and to consider internal manifolds with torsion \cite{Strominger:1986uh,Polchinski:1995sm,Michelson:1996pn}. Although such backgrounds are nicely described in the framework of generalized geometry \cite{Hitchin:2004ut}, the corresponding effective actions are poorly understood. For this reason, we adopt the common strategy (see, for instance, \cite{Louis:2002ny,Giryavets:2003vd,Kachru:2004jr,DeWolfe:2005uu}) and ignore the back reaction, assuming that the compactification manifold is still a Calabi-Yau.\footnote{It should be mentioned that in the type IIA theory under consideration in this paper, this assumption is less justified than in type IIB. In the latter case, some choices of fluxes allow vacua where the internal manifold is a {\it conformal} Calabi-Yau space, which is not too different from the usual Calabi-Yau manifolds.
In contrast, in the type IIA case the equations of motion require the compactification manifold to be either non-K\"ahler or even non-complex.} The LEEA for flux compactifications on CY was found in \cite{Louis:2002ny} and was shown to fit perfectly into the framework of $N=2$ gauged supergravity.\footnote{More precisely, in the presence of the so-called magnetic fluxes, it should be generalized to incorporate massive tensors. In the absence of fluxes, these tensor fields are massless and can be dualized to the scalars contributing to the hypermultiplet moduli space. After receiving a mass, they are instead dual to massive vector fields.\label{foot-magnetic}} In particular, given the LEEA, one can read off the embedding tensor $\Theta_I^\alpha$ providing a map between the fluxes and the gauged isometries. Let us briefly review these results. First, we recall the field content of the moduli spaces. In type IIA, the vector multiplet moduli space $\mathcal{M}_V$ describes the complexified K\"ahler moduli of $\mathfrak{Y}$ parametrizing deformations of the K\"ahler structure and the periods of the $B$-field along two-dimensional cycles, $z^i=b^i+\mathrm{i} t^i$. The hypermultiplet moduli space $\mathcal{M}_H$ consists of \begin{itemize} \item $u^a$ --- complex structure moduli of $\mathfrak{Y}$ ($a=1,\dots,h^{2,1}$), \item $\zeta^\Lambda,\tilde\zeta_\Lambda$ --- RR-scalars given by periods of the RR 3-form potential along three-dimensional cycles of $\mathfrak{Y}$ ($\Lambda=(0,a)=0,\dots,h^{2,1}$), \item $\sigma$ --- NS-axion, dual to the 2-form $B$-field in four dimensions, \item $\phi$ --- dilaton, determining the value of the four-dimensional string coupling, $g_s^{-2}=e^{\phi}\equiv r$. \end{itemize} The Kaluza-Klein reduction from ten dimensions, performed in \cite{Louis:2002ny}, leads to the classical metrics on $\mathcal{M}_V$ and $\mathcal{M}_H$.
The former is the special K\"ahler metric $\mathcal{K}_{i\bar \jmath}$ given by the derivatives of the K\"ahler potential \begin{equation} \mathcal{K}=-\log\[ \mathrm{i}\(\bar X^I F^{\rm cl}_I-X^I\bF^{\rm cl}_I\)\], \end{equation} where $F^{\rm cl}_I=\partial_{X^I}F^{\rm cl}$ are the derivatives of the classical holomorphic prepotential \begin{equation} F^{\rm cl}(X)=-\kappa_{ijk}\, \frac{X^iX^j X^k}{6X^0}, \label{Fcl} \end{equation} which is determined by the triple intersection numbers $\kappa_{ijk}$ of $\mathfrak{Y}$. The hypermultiplet metric is given by the so-called {\it c-map} \cite{Cecotti:1989qn} which produces a QK metric out of another holomorphic prepotential characterizing the complex structure moduli. We omit its explicit expression, but mention the crucial fact that it carries a Heisenberg group of continuous isometries acting by shifts on the RR-scalars and the NS-axion. The corresponding Killing vectors are \begin{equation} \label{heis0} k^\Lambda=\partial_{\tilde\zeta_\Lambda}-\zeta^\Lambda\partial_\sigma, \qquad \tilde k_\Lambda=\partial_{\zeta^\Lambda}+\tilde\zeta_\Lambda\partial_\sigma, \qquad k_\sigma=2\partial_\sigma. \end{equation} It is these isometries that are gauged by adding fluxes. In general, type IIA strings on CY admit NS-fluxes incorporated by the following field strength of the $B$-field: \begin{equation} H^{\rm flux}_3=h^\Lambda\tilde\alpha_\Lambda-\tilde h_\Lambda\alpha^\Lambda, \end{equation} where $(\alpha^\Lambda,\tilde\alpha_\Lambda)$ is a symplectic basis of harmonic 3-forms, and RR-fluxes given by the 2- and 4-form field strengths \begin{equation} F^{\rm flux}_2=-m^i\tilde\omega_i, \qquad F^{\rm flux}_4=e_i\omega^i, \end{equation} where $\tilde\omega_i$ and $\omega^i$ are bases of $H^2(\mathfrak{Y})$ and $H^4(\mathfrak{Y})$, respectively. Besides, there are two additional parameters, $m^0$ and $e_0$. 
The first one is the Romans mass, which gives a consistent deformation of ten-dimensional type IIA supergravity \cite{Romans:1985tz}, and the second one is a constant arising after dualization of the 3-form RR potential \cite{Louis:2002ny}. They can be viewed as the fluxes $F^{\rm flux}_0$ and $F^{\rm flux}_6$, and also lead to a gauging in the effective action. Although the effective action was found in \cite{Louis:2002ny} in the presence of all these flux parameters, we set the ``magnetic" fluxes $m^I$ to zero in what follows. The reason is twofold. First, this allows us to avoid complications with the simultaneous appearance of electric and magnetic charges of the NS-axion as well as massive vector fields (see footnote \ref{foot-magnetic}). Second, the vanishing of the Romans mass $m^0$ allows us to avoid adding orientifold planes, otherwise needed to satisfy the D6-brane tadpole cancellation condition \cite{Kachru:2004jr}. This also allows us to keep $N=2$ supersymmetry unbroken, which partially justifies our use of the results obtained for fluxless CY compactifications. With this restriction, the gauging induced by the fluxes is characterized by the following charges \cite{Louis:2002ny}: \begin{equation} {\boldsymbol k}_0=h^\Lambda \tilde k_\Lambda+\tilde h_\Lambda k^\Lambda+e_0k_\sigma, \qquad {\boldsymbol k}_i=e_i k_\sigma, \label{ch-gauge} \end{equation} written down here as linear combinations of the Killing vectors \eqref{heis0}. \subsection{Quantum corrections} \label{subsec-qcor} The scalar potential obtained in \cite{Louis:2002ny} was found by the Kaluza-Klein reduction and, therefore, resulted from gauging of the isometries of the {\it classical} moduli space. However, both $\mathcal{M}_V$ and $\mathcal{M}_H$ are known to receive {\it quantum} corrections. Unfortunately, one has a very limited understanding of the impact of fluxes on these corrections. On the other hand, for fluxless CY compactifications the situation is much better, as we now describe.
We have full control over the metric on $\mathcal{M}_V$: it receives the $\alpha'$-corrections, which are all captured by a modification of the holomorphic prepotential \eqref{Fcl} \cite{Candelas:1990rm,Hosono:1993qy} \begin{equation} F(X)=F^{\rm cl}(X)+\chi_\mathfrak{Y}\,\frac{\mathrm{i}\zeta(3)(X^0)^2}{16\pi^3} -\frac{\mathrm{i}(X^0)^2}{8\pi^3}\sum_{k_i\gamma^i\in H_2^+(\mathfrak{Y})}n_{k}^{(0)} {\rm Li}_3\(e^{2\pi\mathrm{i} k_iX^i/X^0}\), \label{Ffull} \end{equation} where $\chi_\mathfrak{Y}=2(h^{1,1}-h^{2,1})$ is the Euler characteristic of the CY, $n_{k}^{(0)}$ are the genus-zero Gopakumar-Vafa invariants, and the sum goes over the effective homology classes, i.e. $k_i\ge 0$ for all $i$, with not all of them vanishing simultaneously. The two additional terms correspond to a perturbative correction and a contribution of worldsheet instantons, respectively. As regards $\mathcal{M}_H$, though its complete non-perturbative description is still beyond reach, significant progress in this direction was recently achieved by using twistorial methods (see \cite{Alexandrov:2011va,Alexandrov:2013yva} for reviews). In contrast to $\mathcal{M}_V$, the hypermultiplet metric is exact in $\alpha'$, but receives $g_s$-corrections. At the perturbative level, it is known explicitly \cite{Alexandrov:2007ec} and is given by a one-parameter deformation of the classical c-map metric, whose deformation parameter is controlled by $\chi_\mathfrak{Y}$ \cite{Antoniadis:1997eg,Antoniadis:2003sw,RoblesLlana:2006ez}. At the non-perturbative level, the metric gets instanton contributions coming from D2-branes wrapping 3-cycles (and, hence, parametrized by a charge $\gamma=(p^\Lambda, q_\Lambda)$) and NS5-branes wrapping the whole CY.
The D-instantons were incorporated to all orders using the twistor description of QK manifolds \cite{RoblesLlana:2006is,Alexandrov:2008nk,Alexandrov:2008gh,Alexandrov:2009zh}, so that only NS5-instanton contributions still remain unknown (see, however, \cite{Alexandrov:2010ca,Alexandrov:2014mfa,Alexandrov:2014rca} for recent progress on the type IIB side). Though the twistor description is rather implicit, encoding the metric in the holomorphic data on the twistor space of $\mathcal{M}_H$, in the case when only the D-instantons with ``mutually local charges" $\langle\gamma,\gamma'\rangle=0$\footnote{We use the skew symmetric product defined by $\langle\gamma,\gamma'\rangle=q_\Lambda p'^\Lambda-q'_\Lambda p^\Lambda$. The mutual locality is equivalent to the condition that there is a symplectic frame where all charges are purely electric, i.e. $p^\Lambda=0$.} are taken into account, the metric was explicitly computed in \cite{Alexandrov:2014sya}. Thus, it is natural to use these exact results for analyzing the scalar potential \eqref{scpot-gen}. Of course, it would be naive to expect that they are not going to be affected by fluxes and, eventually, by their back reaction via torsion, and it is an open question whether in such a situation one can trust the quantum corrections computed before the fluxes were switched on. However, the presence of $N=2$ supersymmetry allows us to think that the back reaction effects should not be too strong. Indeed, most of the results mentioned above were obtained by using only the requirements of supersymmetry and a few discrete symmetries expected to survive at the non-perturbative level. Besides, this expectation is supported by the recent results about perturbative $\alpha'$- and $g_s$-corrections for compactifications on manifolds with $SU(3)$ structure \cite{Grana:2014vva}.
In the worst case, if our expectation does turn out to be wrong, the gauged supergravity obtained in this approximation and studied in this paper should only be considered as inspired by string theory. It should be noted that instanton corrections break the continuous isometries of the classical hypermultiplet moduli space: a D-instanton of charge $\gamma$ comes with a factor $e^{2\pi\mathrm{i}(p^\Lambda\tilde\zeta_\Lambda-q_\Lambda\zeta^\Lambda)}$ and, therefore, breaks a linear combination of $k^\Lambda$ and $\tilde k_\Lambda$, whereas NS5-brane instantons break all isometries of \eqref{heis0}. This raises the question of how such instantons can be consistent with the gauging induced by fluxes, since the latter can only be performed in the presence of continuous isometries. This problem was solved in \cite{KashaniPoor:2005si}, where it was shown that fluxes protect from the instanton corrections precisely those isometries that are to be gauged. Applying this result to type IIA string theory on CY with $H_3$, $F_4$ and $F_6$ fluxes, one concludes from \eqref{ch-gauge} that the fluxes exclude NS5-instantons and allow only D-instantons with charges satisfying $h^\Lambda q_\Lambda-\tilde h_\Lambda p^\Lambda=0$. \subsection{Scalar potential from fluxes on rigid CY} \label{subsec-fluxrigid} In this paper we restrict our attention to flux compactifications on a {\it rigid} Calabi-Yau manifold, i.e. one for which $\mathfrak{Y}$ has vanishing $h^{2,1}$ and thus does not have complex structure deformations. As a result, the capital Greek indices $\Lambda,\Sigma,\dots$ take only one value and, therefore, can be safely dropped. In the case of rigid CY, $\mathcal{M}_H$ has the lowest possible dimension, and thus this case represents a nice laboratory to study quantum corrections, gaugings, fluxes, etc. (see, for instance, \cite{Strominger:1997eb,Gutperle:2000sb,Ceresole:2001wi,Antoniadis:2003sw,Davidse:2004gg,Kachru:2004jr,Bao:2009fg,Catino:2013syn}).
Moreover, the metric on four-dimensional QK spaces allows an explicit parametrization \cite{Przanowski:1991ru,MR1423177}, which reduces it to a solution of an integrable system. In particular, in the presence of one continuous isometry, it is encoded in a solution of the integrable {\it Toda} equation. This fact was extensively used in several studies of instantons and their impact on moduli stabilization \cite{Ketov:2001ky,Ketov:2001gq,Ketov:2002vr,Davidse:2005ef,Alexandrov:2006hx,Alexandrov:2012np}. Here we use the explicit results of \cite{Alexandrov:2014sya} providing the {\it exact} metric on $\mathcal{M}_H$ corrected by D-instantons with mutually local charges, which was shown to be consistent with the description based on the Toda equation. As explained in the end of the previous subsection, the $H$-fluxes protect one linear combination of the isometries $k$ and $\tilde k$. Since in the rigid case the D-instanton charge is a two-dimensional vector, $\gamma=(p,q)$, the charges of the allowed D-instantons are necessarily mutually local. Thus, the metric computed in \cite{Alexandrov:2014sya} contains {\it all}\; instantons allowed by the fluxes. Explicitly, this metric is given by \begin{equation} \mathrm{d} s^2= \frac{2}{r^2}\[\(1-\frac{2r}{\mathcal{R}^2\mathbf{U}}\) \((\mathrm{d} r)^2+\frac{\mathcal{R}^2}{4}\,|\mathcal{Y}|^2\) +\frac{1}{64}\(1-\frac{2r}{\mathcal{R}^2\mathbf{U}}\)^{-1}\(\mathrm{d} \sigma +\tilde\zeta \mathrm{d} \zeta-\zeta\mathrm{d} \tilde\zeta+\cV_{(\sigma)} \)^2\], \label{mett-UHMmain} \end{equation} where all notations, such as $\mathcal{R}$, $\mathbf{U}$, $\mathcal{Y}$, $\cV_{(\sigma)}$, are explained in Appendix \ref{subap-metric}. The charge vectors \eqref{ch-gauge} corresponding to our choice of fluxes are given by \begin{equation} \begin{split} {\boldsymbol k}_0=&\, \tilde h\partial_{\tilde\zeta}+h\partial_\zeta +\(2e_0+h\tilde\zeta-\tilde h\zeta\) \partial_\sigma\, , \\ {\boldsymbol k}_i=&\, 2e_i\partial_\sigma\, . 
\end{split} \label{kilv-H} \end{equation} They generate isometries of the metric \eqref{mett-UHMmain} provided that the D-instanton charges are restricted to satisfy $hq=\tilde h p$. The associated moment maps $\vec\mu_I$ are computed in Appendix \ref{subap-moment} with the following result: \begin{equation} \begin{split} \mu_i^+=&\, 0, \qquad\qquad\qquad\quad \mu_i^3=\frac{e_i}{2r}, \\ \mu_0^+=&\, \frac{\mathrm{i}\mathcal{R}}{2r}\(\tilde h-\lambda h\), \qquad \mu_0^3=\frac{1}{2r}\(e_0+h\tilde\zeta-\tilde h\zeta\). \end{split} \label{momentmap-main} \end{equation} Thus, the only effect of instantons on the moment maps is contained in the function $\mathcal{R}$ determined by the equation \eqref{r-UHM}. Now we use all these data to compute the scalar potential \eqref{scpot-gen}. A simple calculation gives \begin{equation} \begin{split} V=&\, \frac{e^{\mathcal{K}}}{4r^2}\Biggl[ \frac{2|E+\mathcal{E}|^2}{1-\frac{2r}{\mathcal{R}^2\mathbf{U}}} +\mathcal{K}^{i\bar \jmath}\(e_i+E\mathcal{K}_i\)\(e_j+\bar E\mathcal{K}_{\bar \jmath}\)-3|E|^2 +4\mathcal{R}^2|\tilde h-\lambda h|^2\(\mathcal{K}^{i\bar \jmath}\mathcal{K}_i\mathcal{K}_{\bar \jmath}-1-\frac{4r}{\mathcal{R}^2\mathbf{U}}\) \Biggr], \end{split} \label{potential-main} \end{equation} where $\mathcal{K}_i=\partial_i\mathcal{K}$ and we have denoted \begin{equation} \begin{split} E=&\, e_0+h\tilde\zeta-\tilde h\zeta+e_i z^i, \\ \mathcal{E}=&\, {1\over 2}\(h\iota_{\partial_\zeta}+\tilde h\iota_{\partial_{\tilde\zeta}}\)\cV_{(\sigma)}. \end{split} \label{defE} \end{equation} Note that both the metric and the potential are invariant under the symplectic transformations induced by a change of basis of 3-cycles on $\mathfrak{Y}$. This invariance can be used to put $h$-flux to zero, which we assume from now on. In this symplectic frame, only electrically charged instantons contribute to the potential. 
Using this simplification, one can show that \begin{equation} \mathcal{E}=\frac{4\tilde h r\bar \vl}{\mathcal{R} (|M|^2+|v|^2)}\, , \end{equation} where the quantities appearing on the r.h.s., initially introduced in Appendix \ref{subap-metric}, can now be computed explicitly as \begin{eqnarray} v &=& 384 c\sum_{q>0}s(q) q^2 \sin(2\pi q\zeta)K_1(4\pi q\mathcal{R}), \nonumber\\ M &=& 2\lambda_2+384 c\sum_{q>0}s(q) q^2 \cos(2\pi q\zeta)K_0(4\pi q\mathcal{R}), \label{res-electric}\\ r &=& \frac{\lambda_2\mathcal{R}^2}{2}-c-\frac{24c\mathcal{R}}{\pi}\sum_{q>0}s(q) q \cos(2\pi q\zeta)K_1(4\pi q\mathcal{R}), \nonumber \end{eqnarray} whereas $\mathbf{U}$, also appearing in the potential \eqref{potential-main}, is still given by \eqref{Ab-UHM}. Here we have introduced the divisor function \begin{equation} s(q)\equiv\sigma_{-2}(q)=\sum_{d|q}d^{-2}, \end{equation} and, using \eqref{Omq} and \eqref{def-c}, expressed the DT invariants, counting the D-instantons, via the parameter $c$. As a result, all $g_s$-corrections affecting the scalar potential are controlled by just one topological number! This is in contrast to the $\alpha'$-corrections, which require knowledge of an infinite set of genus-zero Gopakumar-Vafa invariants. \section{Moduli stabilization} \label{sec-modstab} Given the scalar potential \eqref{potential-main}, we can investigate whether it has local minima where all moduli are stabilized. If such minima exist, the sign of the potential evaluated at these points indicates whether they correspond to a de Sitter or an anti-de Sitter vacuum. At $h=0$ the potential explicitly depends on the dilaton $r$, the K\"ahler moduli $t^i$, the periods $b^i$ of the $B$-field, and the RR scalar $\zeta$, and is independent of the other RR scalar $\tilde\zeta$ and the NS-axion $\sigma$. This fact, however, is not a problem for moduli stabilization since these are the scalars which are used for the gauging.
In the effective action, one can redefine some of the gauge fields to absorb these scalars. In such a frame the scalars are ``eaten up'' and thus disappear from the spectrum, whereas the corresponding gauge fields become massive. It is also important to note that in the perturbative approximation the potential depends on the fields $b^i$ and $\zeta$, known as {\it axions},\footnote{The axions also include the ``eaten up'' fields $\tilde\zeta$ and $\sigma$.} only through the combination $e_i b^i-\tilde h\zeta$ appearing in \eqref{defE}. Thus, the other $h^{1,1}$ independent combinations of these fields enter the potential only via instanton corrections: $b^i$ and $\zeta$ appear in the imaginary part of the worldsheet and the D-instanton actions, respectively. This shows that the instanton corrections are indispensable for the stabilization of all moduli.\footnote{In the given case of a rigid CY, this argument does not allow us to conclude that D-instantons are truly necessary, since worldsheet instantons together with the combination $e_i b^i-\tilde h\zeta$ lead to a dependence on all axions. However, when $h^{2,1}>0$, it is still true that only one combination of RR-scalars appears in the perturbative potential \cite{Louis:2002ny}, so that D-instantons must be taken into account to stabilize all moduli.} The instanton-corrected potential \eqref{potential-main} leads to a very complicated system of equations on its extrema. However, if one assumes that the fluxes satisfy the relation \begin{equation} e_0=(n\tilde h-\ell^i e_i)/2, \qquad n,\ell^i\in\mathbb{Z}, \label{relflux} \end{equation} there exists a very simple solution for the axions, \begin{equation} \zeta=n/2, \qquad b^i=\ell^i/2.
\label{vanishsol} \end{equation} Indeed, using the expressions for the inverse metric $\mathcal{K}^{i\bar \jmath}$ \eqref{invcK} and the first derivative of the K\"ahler potential $\mathcal{K}_i$ \eqref{derK}, the scalar potential can be rewritten as \begin{equation} \begin{split} \hspace{-0.5cm} V=&\, \frac{e^{\mathcal{K}}}{4r^2}\Biggl[ \frac{2|E+\mathcal{E}|^2}{1-\frac{2r}{\mathcal{R}^2\mathbf{U}}}-2|E|^2 -e^{-\mathcal{K}}\hat N^{ij}e_i e_j +\frac{\Bigl[\,{\rm Re}\,\Bigl(E+e^{-\mathcal{K}}\hat N^{kl}\mathcal{K}_k e_l\Bigr)\Bigr]^2 \!\! +4\tilde h^2\mathcal{R}^2}{e^{-\mathcal{K}}\hat N^{i\bar \jmath}\mathcal{K}_i\mathcal{K}_{\bar \jmath}-1} -\frac{16\tilde h^2 r}{\mathbf{U}} \Biggr] \!, \end{split} \label{potall} \end{equation} where $\hat N^{ij}$ is the inverse of $N_{ij}=-2\,{\rm Im}\, F_{ij}$. Besides, it is straightforward to verify by using the explicit formulae \eqref{res-electric} and \eqref{res-FVM} that at the point \eqref{vanishsol} all the following quantities vanish: \begin{equation} \,{\rm Re}\,(E), \quad v, \quad \mathcal{E}, \quad \,{\rm Re}\,\mathcal{K}_i, \quad \partial_\zeta\mathcal{R}, \quad \partial_\zeta M, \quad \partial_\zeta \mathbf{U}, \quad \partial_{b^i} N_{jk}, \quad \partial_{b^i}\mathcal{K}. \label{quant-vanish} \end{equation} Taking also into account that $\hat N^{j\bar k}\mathcal{K}_j\mathcal{K}_{\bar k}=\hat N^{jk}\,{\rm Re}\,\mathcal{K}_j\,{\rm Re}\,\mathcal{K}_{k}+e^{2\mathcal{K}}N_{ij}t^it^j$, these results imply that the potential \eqref{potall} satisfies \begin{equation} \left.\partial_{\zeta}V\right|_{\zeta=n/2\atop b^i=\ell^i/2}=0, \qquad \left.\partial_{b^i}V\right|_{\zeta=n/2\atop b^i=\ell^i/2}=0. \label{bzvac-many} \end{equation} Thus, given the fluxes satisfying \eqref{relflux}, half-integer axions are always a solution of (at least, half of) the equations on critical points. 
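A heuristic symmetry argument complements this explicit check (a sketch only, not a substitute for the computation above). Given the flux relation \eqref{relflux}, the point \eqref{vanishsol} is a fixed point of the reflection
\begin{equation}
\iota:\quad \zeta\mapsto n-\zeta, \qquad b^i\mapsto \ell^i-b^i.
\end{equation}
All instanton series depend on the axions through the factors $\cos(2\pi q\zeta)$, $\sin(2\pi q\zeta)$ and their worldsheet analogues in $b^i$, which are respectively even and odd under $\iota$, whereas $\,{\rm Re}\, E\mapsto -\,{\rm Re}\, E$ as a consequence of \eqref{relflux}. One can check that the potential is then invariant, $V\circ\iota=V$, so that every derivative of $V$ which is odd in the axions must vanish at the fixed point, reproducing \eqref{bzvac-many} as well as the vanishing of the mixed second derivatives encountered below.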
Of course, there is no guarantee that sticking to this solution would allow one to stabilize the remaining moduli and to obtain a local minimum, rather than a saddle point, of the potential. Note, however, that the above properties also imply that the mixed second derivatives vanish, \begin{equation} \left.\partial_{\varphi^I}\partial_{\psi^J}V\right|_{\zeta=n/2\atop b^i=\ell^i/2}=0, \label{bzvac-many2} \end{equation} where we have introduced the collective notation for the axions, $\psi^I=(\zeta,b^i)$, and for the remaining fields, $\varphi^I=(r,t^i)$. This result means that the matrix of the second derivatives has a block-diagonal form, \begin{equation} \partial\p V=\( \begin{array}{cc} \partial_{\varphi^I}\partial_{\varphi^J}V \ &\ 0 \\ 0 \ &\ \partial_{\psi^I}\partial_{\psi^J}V \end{array}\), \label{2derM} \end{equation} so that the condition of having a local minimum gives rise to two independent conditions: the positive definiteness of $\partial_{\varphi^I}\partial_{\varphi^J}V$ and that of $\partial_{\psi^I}\partial_{\psi^J}V$. Furthermore, the integers $n$ and $\ell^i$ control the signs of instanton contributions. One may expect that, by changing these integers, it is possible to adjust the signs in such a way that the matrix $\partial_{\psi^I}\partial_{\psi^J}V$ becomes positive definite, thus providing a local minimum in the subspace spanned by the axions, whereas the positive definiteness of $\partial_{\varphi^I}\partial_{\varphi^J}V$ would impose certain restrictions on the critical points in the remaining subspace. Thus, in the following, we choose to work with the solution \eqref{vanishsol}. Having restricted ourselves to this solution, we can significantly simplify the potential.
Using the vanishing of \eqref{quant-vanish}, we find \begin{equation} \begin{split} V^{(\varphi)}(r,t^i)\equiv \left.V\right|_{\zeta=n/2\atop b^i=\ell^i/2} =&\, \frac{e^{\mathcal{K}}}{4r^2}\[ \frac{4r(et)^2}{\mathcal{R}^2M-2r} -e^{-\mathcal{K}}\hat N^{ij}e_i e_j +\frac{4\tilde h^2\mathcal{R}^2}{e^{\mathcal{K}}N_{ij}t^i t^j-1}-\frac{16\tilde h^2 r}{M} \]. \end{split} \label{potallzero} \end{equation} Having fixed the axions, we still have to stabilize the four-dimensional dilaton $r$ and the K\"ahler moduli $t^i$. To this end, we need to solve the equations obtained by variation of the potential \eqref{potallzero} with respect to these moduli. However, we find it more natural to consider the potential as a function of $\mathcal{R}$ rather than of the dilaton $r$ because $\mathcal{R}(r)$ is defined only implicitly: see the last equation in \eqref{res-electric} where $\cos(2\pi q\zeta)$ should now be replaced by $(-1)^{nq}$. Proceeding this way and using that \begin{equation} \partial_\mathcal{R} r=\frac{\mathcal{R}}{4}\(M+2\lambda_2\), \end{equation} we obtain the following equations: \begin{subequations} \begin{eqnarray} \partial_\mathcal{R} V^{(\varphi)} &=& \frac{e^{\mathcal{K}}}{4r^2}\Biggl[\frac{\mathcal{R}}{2r}\(M+2\lambda_2\) e^{-\mathcal{K}}\hat N^{ij}e_i e_j \Biggr. \nonumber\\ && -\frac{(et)^2\mathcal{R}}{\(\mathcal{R}^2M-2r\)^2}\(\mathcal{R}^2M^2+2\lambda_2\(\mathcal{R}^2M-4r\)+4r\(M+\mathcal{R}\partial_\mathcal{R}M\)\) \nonumber\\ && \Biggl. +\frac{2\tilde h^2\mathcal{R}\(4r-\mathcal{R}^2(M+2\lambda_2)\)}{r\(e^{\mathcal{K}}N_{ij}t^i t^j-1\)} +\frac{4\tilde h^2}{M^2}\(\mathcal{R}M(M+2\lambda_2)+4r\partial_\mathcal{R}M\) \Biggr]=0, \label{derR-potallzero} \\ \partial_{t^i} V^{(\varphi)} &=& -\frac{1}{2r^2}\[ 4r e^{2\mathcal{K}}\( \frac{(et)}{\mathcal{R}^2M-2r}\( (et)N_{ij}t^j-e^{-\mathcal{K}}e_i\)-\frac{4\tilde h^2}{M}\,N_{ij}t^j\) \right. \nonumber\\ && \left. 
+\,{\rm Re}\, F_{ijk}\(\hat N^{jm}e_m\hat N^{kn} e_n-\frac{4e^{2\mathcal{K}}\tilde h^2\mathcal{R}^2 t^j t^k}{\(e^{\mathcal{K}}N_{ij}t^i t^j-1\)^2}\) \]=0. \label{derz-potallzero} \end{eqnarray} \label{der-potallzero} \end{subequations} Unfortunately, in their full generality, these equations are too complicated for an analytic treatment. Therefore, they should be studied either numerically or perturbatively. For instance, we can first analyze them by neglecting all non-perturbative corrections, and then add the terms with worldsheet and D-brane instantons. In the next section, we perform the first step, and then in section \ref{sec-inst} attempt the second step in the special case $h^{1,1}=1$. It is important to note that the fields to be stabilized cannot take arbitrary values, being restricted to certain physical domains. These restrictions typically appear due to various approximations used to derive the scalar potential, while approaching a boundary of a physical domain corresponds to a failure of one of these approximations. The physical domains are defined by the following conditions: \begin{itemize} \item The K\"ahler moduli $t^i$ must belong to the K\"ahler cone of $\mathfrak{Y}$ and be such that the K\"ahler potential is well defined, which implies that $e^{-\mathcal{K}}>0$. This quantity is explicitly computed in \eqref{resK}. Typically, its positivity is ensured by the instanton contributions, but in the perturbative approximation with $t^i$ sufficiently small, one can reach a point where the negative perturbative correction becomes dominant over the classical volume term. This indicates the breakdown of the perturbative approximation and puts a bound on the domain of the K\"ahler moduli. \item Similarly, the K\"ahler moduli must be such that $\,{\rm Im}\,\mathcal{N}_{IJ}$, defined in \eqref{defcN} and determining the kinetic terms of the gauge fields, and its inverse computed in \eqref{relNN}, are negative definite.
\item The four-dimensional dilaton $r=e^\phi$, besides being positive, should satisfy an additional bound. In \cite{Alexandrov:2014sya} it was shown that the metric \eqref{mett-UHMmain} has a curvature singularity at the hypersurface determined by the equation $r={1\over 2}\, \mathcal{R}^2\mathbf{U}$. However, the metric on the physical moduli space must be regular. Thus, the curvature singularity is an artefact of an approximation: in the case of fluxless CY compactifications, it is believed that it should be resolved by NS5-brane instantons \cite{Alexandrov:2009qq}, whereas in our case it should probably disappear after taking into account the back reaction of fluxes. This implies that close to the singularity the metric \eqref{mett-UHMmain} and, hence, the corresponding scalar potential cannot be trusted. In other words, we should require that $r>r_{\rm cr}$. In the perturbative approximation one has $r_{\rm cr}=-2c$. \end{itemize} \section{Perturbative approximation} \label{sec-pert} After dropping all instanton corrections, the scalar potential \eqref{potallzero} takes the following form: \begin{equation} V^{(\varphi)}\approx \frac{e^{\mathcal{K}}}{8r^2}\[ \frac{16\tilde h^2}{\lambda_2}\,\frac{(1-\gamma)r+2c}{1+\gamma} +\frac{4r(et)^2}{r+2c}-e^{-\mathcal{K}}\kappa^{ij}e_i e_j \], \label{pertpot} \end{equation} where the sign $\approx$ means that the equation holds in the perturbative approximation, $\kappa^{ij}$ is the inverse of $\kappa_{ij}\equiv\kappa_{ijk}t^k$, and we have introduced \begin{equation} \gamma= 3 C e^\mathcal{K}=\frac{3\chi_\mathfrak{Y}}{4\pi^3}\, \zeta(3) e^\mathcal{K} \label{deggp} \end{equation} as the variable encoding the volume $\mathcal{V}$ of the Calabi-Yau space since $e^{-\mathcal{K}}\approx 8\mathcal{V}-C$ due to \eqref{resK}. Note that both $\kappa^{ij}$ and $\gamma$ are functions of the K\"ahler moduli. 
The equations on critical points \eqref{der-potallzero} simplify as \begin{subequations} \begin{eqnarray} e^{-\mathcal{K}}\kappa^{ij}e_i e_j&\approx & \frac{4(et)^2 r(r+c)}{(r+2c)^2}+ \frac{8\tilde h^2}{\lambda_2}\,\frac{(1-\gamma)r+4c}{1+\gamma}\, , \label{eqdil-pertall} \\ \kappa_{ijk}\kappa^{jm}e_m\kappa^{kn}e_n &\approx & \frac{8r e^\mathcal{K}(et)}{r+2c}\(2e^\mathcal{K} (et)\kappa_{ij}t^j -e_i\) +\frac{64\tilde h^2}{\lambda_2}\, e^{2\mathcal{K}}\kappa_{ij}t^j\(\frac{2(r+c)}{(1+\gamma)^2}-r\). \label{stabK-allpert} \end{eqnarray} \label{eq-pertall} \end{subequations} The main complication here comes from the presence of the inverse matrix $\kappa^{ij}$ that introduces a non-polynomial dependence on the K\"ahler moduli. It is, however, possible to get at least one equation without such dependence. To this end, let us contract \eqref{stabK-allpert} with $t^i$. This gives \begin{equation} e^{-\mathcal{K}}\kappa^{ij}e_i e_j\approx \frac{4r (et)^2(1+\gamma)}{r+2c} +\frac{16\tilde h^2}{\lambda_2}(3+\gamma)\(\frac{2(r+c)}{(1+\gamma)^2}-r\). \label{proj-stabKt} \end{equation} Combining this equation with \eqref{eqdil-pertall} leads to \begin{equation} \frac{r(et)^2}{(r+2c)^2}\approx \frac{2\tilde h^2}{\lambda_2}\,\frac{\(2\gamma^3+9\gamma^2 +10\gamma-5\)r-8c}{(1+\gamma)^2\(\gamma(r+2c)+c\)}\, , \label{eq-et} \end{equation} which is a cubic equation on the dilaton $r$. Furthermore, substituting \eqref{eqdil-pertall} and \eqref{eq-et} into the perturbative potential \eqref{pertpot}, we find the following result for its value at critical points: \begin{equation} \begin{split} V^{(\varphi)}_{\rm cr} \approx &\, \frac{e^\mathcal{K}}{r}\[ \frac{\tilde h^2}{\lambda_2}\,\frac{1-\gamma}{ 1+\gamma} +\frac{c(et)^2}{2(r+2c)^2}\] \\ \approx &\, \frac{e^\mathcal{K}\tilde h^2}{\lambda_2 r^2}\, \frac{\gamma(1-\gamma^2)r^2-4c(1-3\gamma-2\gamma^2) r-8c^2 }{(1+\gamma)^2\(\gamma(r+2c)+c\)}\, . 
\end{split} \label{extvalue} \end{equation} In principle, one can solve the cubic equation \eqref{eq-et} to express $r$ in terms of the combination $e_it^i$ and the Calabi-Yau volume encoded in $\gamma$. The solution $r(t)$ is to be substituted into \eqref{stabK-allpert}, which leads to a complicated system of equations on the K\"ahler moduli. But even without explicitly solving this system, it turns out to be possible to derive some bounds on its solutions. \subsection{Bounds on perturbative solutions} \label{subsec-bounds} As we noticed at the end of section \ref{sec-modstab}, the possible values of the scalar fields are restricted to satisfy certain conditions. In the perturbative approximation, two of them put simple bounds on the lowest values of the dilaton (inversely proportional to the string coupling) and the volume of the CY, \begin{equation} r>2|c|, \qquad \mathcal{V}>C/8, \label{bound-rV} \end{equation} whereas the third one demands that $\,{\rm Im}\,\mathcal{N}_{IJ}$ is negative definite. The last condition is equivalent to $\,{\rm Im}\,\mathcal{N}^{IJ}v_Iv_J<0$ for any real vector $v_I$. Let us take $v_I=(-(eb),e_i)$. Then, using the perturbative result \eqref{pert-cNinv}, we arrive at the following condition: \begin{equation} \frac{e^{-\mathcal{K}}\kappa^{ij}e_i e_j}{(et)^2}<4. \label{condN} \end{equation} Let us now apply this condition to the extrema of the potential. Using equations \eqref{eqdil-pertall} and \eqref{eq-et}, we find that \begin{equation} e^{-\mathcal{K}}\kappa^{ij}e_i e_j-4(et)^2\approx \frac{8\tilde h^2}{\lambda_2 r}\,\frac{\gamma(1-\gamma^2)r^3+8c\(2-3\gamma-3\gamma^2-\gamma^3\)r^2+4c^2\(12-7\gamma-7\gamma^2-2\gamma^3\) r+32 c^3} {(1+\gamma)^2\(\gamma(r+2c)+c\)}. \label{exact-cond} \end{equation} Then \eqref{condN} implies that the r.h.s. of \eqref{exact-cond} must be negative. This severely restricts the regions in the $\gamma$-$r$ plane where the potential can have critical points.
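The sign of the r.h.s. of \eqref{exact-cond} is easy to scan numerically. A minimal sketch (setting $c=-1$ to fix the scale; the sample points below are read off Fig. \ref{fig-region} and are illustrative only):

```python
def exclusion_factor(gamma, r, c=-1.0):
    """The gamma- and r-dependent factor on the r.h.s. of eq. (exact-cond).
    The overall prefactor 8*h~^2/(lambda_2*r) is positive, so critical
    points compatible with the bound (condN) require this to be negative."""
    num = (gamma * (1 - gamma**2) * r**3
           + 8 * c * (2 - 3*gamma - 3*gamma**2 - gamma**3) * r**2
           + 4 * c**2 * (12 - 7*gamma - 7*gamma**2 - 2*gamma**3) * r
           + 32 * c**3)
    den = (1 + gamma)**2 * (gamma * (r + 2*c) + c)
    return num / den
```

For instance, the large-$r$, small-$\gamma$ corner gives a positive value and is therefore excluded, while points inside the narrow allowed strip give a negative one.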
Furthermore, the positivity of \eqref{eq-et} gives another condition of the same kind. Fig. \ref{fig-region} shows the regions allowed by the two conditions, as well as those where the potential \eqref{extvalue} is positive. We observe that there is a narrow region where all conditions are satisfied so that they do not exclude the existence of meta-stable dS vacua, although they put a strong upper bound on the dilaton. \twofig{The plane $\gamma$-$(r/|c|)$ and its regions where various conditions are satisfied: the condition \eqref{condN} corresponds to the dark grey region with the blue boundary, positivity of \eqref{eq-et} holds in the pink region with the purple boundary, and the potential at the extremum \eqref{extvalue} is positive in the light grey region with the brown boundary. The right picture magnifies the region close to the bifurcation point corresponding to $ \(\gamma_\star=\frac14\, (\sqrt{17}-3),r_\star=\frac{|c|}{2}\, (\sqrt{17}+7 )\). $ All three conditions are satisfied only in the very narrow region which ends at this point. If one drops the positivity of the potential, the region of large $\gamma$ and $r$ is also allowed.} {paper-regions1-small.eps}{paper-regions2-small.eps}{8.5cm}{fig-region}{0.1cm}{0.1cm} An important feature of our results presented in Fig. \ref{fig-region} is that the above conditions do {\it not} allow solutions which have {\it both} large $r$ (small string coupling) and small $\gamma$ (large volume). This conclusion can actually be derived analytically. Indeed, it is enough to get a milder consequence of \eqref{condN} than the negativity of \eqref{exact-cond}. For instance, one can note that the first term in \eqref{eqdil-pertall} is larger than $4(et)^2$. Then \eqref{condN} implies that the second term must be negative, which is equivalent to \begin{equation} \gamma > 1+\frac{4c}{r} \quad \Rightarrow \quad \(1+\frac{c}{r}\)\(1-\frac{C}{8\mathcal{V}}\)<\frac{3}{4}\, .
\label{second-cond2} \end{equation} When both $r$ and $\mathcal{V}$ are large, which corresponds to the classical limit, this condition is clearly violated. This shows that for the set of fluxes under consideration, it is impossible to stabilize the string coupling and the volume in the region where all quantum corrections become irrelevant. Note also that the bound does not depend on the values of fluxes, which means that it is impossible to tune them in order to get arbitrarily large $r$ and $\mathcal{V}$. \subsection{One-modulus case} \label{subsec-one} Given the complicated structure of the equations on critical points of the scalar potential even in the perturbative approximation, it is natural to consider particular cases with a small number of moduli. First, we concentrate on the simplest case with a single K\"ahler modulus, corresponding to a CY with Hodge numbers $(h^{1,1},h^{2,1})=(1,0)$. To the best of our knowledge, no CY manifolds with such topological characteristics have been constructed so far, so that this case represents a fictional geometry and the corresponding gauged supergravity has no direct connection to string theory. Nevertheless, it is instructive to study it because the resulting equations allow an analytic treatment. In the one-modulus case, we find an additional relation, \begin{equation} \frac{e^{-\mathcal{K}}\kappa^{ij} e_i e_j}{(et)^2}\approx \frac{4}{3+\gamma}\, , \label{onemodN} \end{equation} where $\kappa^{ij} e_i e_j=\frac{e_1^2}{\kappa_{111}t^1}$. It allows us to rewrite the cubic equation on the dilaton in a form where the coefficients are functions of $\gamma$ only. Namely, combining \eqref{eqdil-pertall}, \eqref{eq-et} and \eqref{onemodN}, we find \begin{equation} (5+\gamma)(2-4\gamma-5\gamma^2-\gamma^3)r^3+4(2+\gamma-4\gamma^2-\gamma^3)cr^2-8(5-\gamma)c^2r-32c^3=0.
\end{equation} Remarkably, this equation can be factorized so that all three roots can be found explicitly as \begin{equation} \begin{split} r_0 =&\, \frac{4|c|}{5+\gamma}\, , \qquad r_\pm = \frac{2|c|\(\gamma\pm \sqrt{(2+\gamma)(2-5\gamma-2\gamma^2)}\)}{2-\gamma(1+\gamma)(4+\gamma)}\, . \end{split} \label{roots} \end{equation} However, not all of them are relevant to us. First, we observe that $r_0<|c|$ and, hence, this root violates the bound \eqref{bound-rV}. The other two roots are real only when \begin{equation} \gamma<\gamma_{(1)}=\frac14\(\sqrt{41}-5\)\approx 0.3508. \label{r-bound1} \end{equation} However, in this region we have $r_-<0$. Thus, only $r_+$ should be considered, whose positivity puts a stronger bound than \eqref{r-bound1}, namely,\footnote{$\gamma_{(2)}$ is one of the roots of the denominator in \eqref{roots}.} \begin{equation} \gamma<\gamma_{(2)}\approx 0.3429. \label{r-bound2} \end{equation} We should also check the two conditions mentioned in the previous subsection: \eqref{condN} and the positivity of \eqref{eq-et}. The first one is automatically satisfied due to \eqref{onemodN}, whereas the second one leads to an even stronger bound,\footnote{$\gamma_\star$ is one of the roots of the denominator in \eqref{eq-et} after the substitution $r=r_+$, namely, it solves $\gamma(r_+(\gamma)+2c)+c=0$. It coincides with the bifurcation point in Fig. \ref{fig-region}, which is independent of the number of moduli. Note also that for $\gamma_\star<\gamma<\gamma_{(2)}$, the r.h.s. of \eqref{exact-cond} is positive, which seems to contradict \eqref{condN}. In fact, there is no contradiction because in this domain one already violates the bound \eqref{r-bound3}.} \begin{equation} \gamma<\gamma_\star\approx 0.2808, \label{r-bound3} \end{equation} where $\gamma_\star$ was defined in the caption to Fig. \ref{fig-region}.
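These bounds are straightforward to check numerically from the closed-form roots \eqref{roots}. A small sketch (with $|c|=1$ fixing the scale):

```python
import math

def dilaton_roots(gamma, abs_c=1.0):
    """The three roots of the cubic for the dilaton, eq. (roots), written
    for c = -|c|; returns (r0, r_plus, r_minus), with None for the pair
    r_pm when it is complex."""
    r0 = 4 * abs_c / (5 + gamma)
    disc = (2 + gamma) * (2 - 5 * gamma - 2 * gamma**2)
    den = 2 - gamma * (1 + gamma) * (4 + gamma)
    if disc < 0:
        return r0, None, None
    rp = 2 * abs_c * (gamma + math.sqrt(disc)) / den
    rm = 2 * abs_c * (gamma - math.sqrt(disc)) / den
    return r0, rp, rm
```

One can verify, for instance, that $r_0<|c|$ throughout, that $r_-<0$ below the bound \eqref{r-bound1}, and that $r_+(\gamma_\star)$ reproduces exactly the value $r_\star$ of the bifurcation point quoted in the caption of Fig. \ref{fig-region}.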
\twofig{The graphs represent the same quantity $\frac{\lambda_2}{|c|\tilde h^2}(et)^2|_{r=r_+(\gamma)}$ evaluated in the one-modulus case as a function of the parameter $\gamma$ in two ways: the blue curve represents the function \eqref{eq-et} and the red curve represents the function $f^{-1}(1+3\gamma^{-1})^{2/3}\sim t^2$ obtained by using \eqref{t-one}. The parameter $f$ controls the height of the second curve. For large $f$ the curves intersect at two points (the left picture with $f=26$) corresponding to two extrema of the potential, whereas for small $f$ there are no intersections (the right picture with $f=6.5$).} {paper-solve1.eps}{paper-solve2.eps}{9cm}{fig-solve}{0.0cm}{-0.0cm} Having verified all our conditions, it remains to solve the equation fixing the modulus $\gamma$. The easiest way to obtain such an equation is to take \eqref{eq-et}, where one should substitute $r=r_+(\gamma)$ and \begin{equation} t^1=\(\frac{3C}{4\kappa_{111}}\(1+3\gamma^{-1}\)\)^{1/3}. \label{t-one} \end{equation} Unfortunately, a solution can be found only numerically, and it is controlled by the parameter \begin{equation} f=\frac{|c|\tilde h^2}{\lambda_2 e_1^2}\(\frac{4\kappa_{111}}{3C}\)^{2/3} =\frac{\pi\tilde h^2}{24\lambda_2 e_1^2}\(\frac{\kappa_{111}}{3\zeta(3)}\)^{2/3}, \label{param-f} \end{equation} where we have used that we are considering the case with $\chi_\mathfrak{Y}=2$. One can show that for \begin{equation} f >f_{\rm crit} \approx 9.8 \label{f-bound} \end{equation} the equation always has two solutions, and has none in the opposite case. The situation is demonstrated in Fig. \ref{fig-solve} which represents the two sides of Eq. \eqref{eq-et} as functions of $\gamma$. For the parameters satisfying \eqref{f-bound}, the two curves have two intersection points, but once $f$ decreases and reaches the critical value, they do not intersect anymore. \lfig{The profile of the potential on the plane $\gamma$-$(r/|c|)$.
There is a local maximum at $\gamma\approx 0.27$, $r\approx 5.18|c|$ and a saddle point at $\gamma\approx 0.14$, $r\approx 2.66|c|$. The profile corresponds to the choice $f=26$, and the potential is rescaled by the factor $\frac{3\lambda_2 |c|C}{\tilde h^2}$.} {paper-potential-small.eps}{9.5cm}{fig-potential}{-0.2cm} Thus, if the $H$-flux is sufficiently large compared to the $F_4$-flux, the potential has two critical points. Remarkably, for both of them the potential turns out to be positive (the curve $r_+(\gamma)$ drawn on the $\gamma$-$r$ plane precisely fits the narrow region identified in Fig. \ref{fig-region}). Unfortunately, neither critical point corresponds to a local minimum. As can be seen in Fig. \ref{fig-potential}, the solution with larger $\gamma$ and $r$ corresponds to a local maximum, whereas the one with smaller parameters corresponds to a saddle point. This is also confirmed by our analysis of the matrix of the second derivatives of the potential performed in Appendix \ref{ap-second}. As a result, we conclude that in the one-modulus case the perturbative potential does not have meta-stable vacua. \subsection{Generic case: stability analysis} \label{subsec-two} Next, it is natural to analyze the case with two K\"ahler moduli. Remarkably, a Calabi-Yau manifold with Hodge numbers $(h^{1,1},h^{2,1})=(2,0)$ was constructed a few years ago in \cite{Freitag:2011}. Thus, in contrast to the one-modulus case, this one does have a mathematical realization. The intersection numbers of this CY were recently calculated in \cite{Freitag:2015}, and are given by\footnote{We are very grateful to Eberhard Freitag for informing us about his calculations.} \begin{equation} \kappa_{111}=344, \qquad \kappa_{112}=492, \qquad \kappa_{122}=600, \qquad \kappa_{222}=440. \label{internum} \end{equation} Unfortunately, these numbers do not have any particular symmetry which could help us in solving our equations.
Furthermore, although it is possible to explicitly invert the $2\times 2$ matrix $\kappa_{ij}$ entering these equations, they still remain unsuited to an analytic treatment. For these reasons, instead of solving the equations on critical points, we proceed directly to the analysis of meta-stability. Remarkably, it turns out that this analysis can be carried out for the general case with {\it any} number of K\"ahler moduli. The meta-stability of a vacuum requires that the matrix of the second derivatives $\partial_{\varphi^I}\partial_{\varphi^J}V^{(\varphi)}$ at the corresponding critical point is positive definite. To understand whether this can be the case for our potential, we apply the following trick. First, we note that the signature of any linear operator does not depend on the choice of a basis in the space where it acts. Therefore, we can rotate the derivatives $\partial_{t^i}$ by an invertible matrix ${\mathbf{m}_i}^j$. We choose \begin{equation} {\mathbf{m}_1}^j=t^j, \qquad {\mathbf{m}_2}^j=n^j\equiv \frac{\kappa^{jk}e_k}{e^{\mathcal{K}}(et)}\, , \end{equation} and ${\mathbf{m}_i}^j$ with $i>2$ such that together with $t^j$ and $n^j$ they form a set of linearly independent vectors. Thus, instead of $\partial_{\varphi^I}\partial_{\varphi^J}V^{(\varphi)}$, we are going to analyze a matrix of the following form: \begin{equation} \mathbf{M}=\(\begin{array}{cccc} \partial_r^2 V^{(\varphi)} & t^i\partial_{t^i}\partial_r V^{(\varphi)} & n^i\partial_{t^i}\partial_r V^{(\varphi)} \ &\ \\ t^i\partial_{t^i}\partial_r V^{(\varphi)}\ &\ t^i t^j\partial_{t^i}\partial_{t^j} V^{(\varphi)}\ &\ n^i t^j\partial_{t^i}\partial_{t^j} V^{(\varphi)}\ &\ \cdots\ \\ n^i\partial_{t^i}\partial_r V^{(\varphi)}\ &\ n^i t^j\partial_{t^i}\partial_{t^j} V^{(\varphi)}\ &\ n^i n^j\partial_{t^i}\partial_{t^j} V^{(\varphi)}\ &\ \\ & \cdots & \ &\ \cdots\ \end{array}\).
\label{matder} \end{equation} Since $\mathbf{M}$ is a Hermitian matrix, we can apply Sylvester's criterion which tells us that $\mathbf{M}$ is positive definite {\it if and only if} all its leading principal minors are positive. In other words, all matrices $\Mbi{k}$ given by the upper left $k$-by-$k$ corner of $\mathbf{M}$ must have a positive determinant, i.e. $\Delta_k\equiv\,{\rm det}\, \Mbi{k}>0$. In particular, a {\it necessary} condition for $\mathbf{M}$ to be positive definite is the positivity of $\Delta_k$, $k=1,2,3$.\footnote{In the following, whenever $\Delta_k$ is mentioned, the condition $k=1,2,3$ is implied.} The crucial fact is that it is possible to express all elements of the matrix $\Mbi{3}$, and hence $\Delta_k$, in terms of $\gamma$ and $r$ only. Indeed, contracting the vector-like equation \eqref{stabK-allpert} with $n^i$, we obtain \begin{eqnarray} && \kappa_{ijk}\,\kappa^{il}e_l\,\kappa^{jm}e_m\,\kappa^{kn}e_n \approx \frac{8r e^{2\mathcal{K}}(et)}{r+2c}\(2(et)^2- e^{-\mathcal{K}} \kappa^{ij}e_ie_j\) +\frac{64\tilde h^2}{\lambda_2}\, e^{2\mathcal{K}}(et)\(\frac{2(r+c)}{(1+\gamma)^2}-r\) \label{proj-stabKn}\\ &&\approx \frac{32\tilde h^2}{\lambda_2}\, \frac{(et)e^{2\mathcal{K}}}{r+2c}\,\frac{ (5-10\gamma-13\gamma^2-2\gamma^3)r^3-2c(1-8\gamma+3\gamma^2)r^2 -4c^2 (9-8\gamma)r-8c^3(3-2\gamma)}{(1+\gamma)^2\(c+\gamma(r+2c)\)}, \nonumber \end{eqnarray} where we have used \eqref{proj-stabKt} and \eqref{eq-et} to get the second line. Then, as shown in Appendix \ref{ap-second}, using the equations \eqref{proj-stabKt}, \eqref{eq-et} and \eqref{proj-stabKn}, we can express all independent structures appearing in the entries of $\Mbi{3}$ in terms of only two variables $\gamma$ and $r$. As a result, it becomes possible to search for regions in the $\gamma$-$r$ plane where $\Delta_k$ are all positive. 
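In practice, Sylvester's criterion takes only a few lines to implement; the following generic sketch (not tied to the specific entries of $\Mbi{3}$, which are too lengthy to reproduce here) is the kind of routine used in our scans of the $\gamma$-$r$ plane:

```python
import numpy as np

def leading_minors(M):
    """Determinants Delta_k of the upper-left k-by-k corners of M."""
    M = np.asarray(M, dtype=float)
    return [np.linalg.det(M[:k, :k]) for k in range(1, M.shape[0] + 1)]

def is_positive_definite(M):
    """Sylvester's criterion for a real symmetric matrix: positive
    definite iff all leading principal minors are positive."""
    return all(d > 0 for d in leading_minors(M))
```

A single negative minor at a candidate critical point already rules out a local minimum there, which is what makes the necessary conditions $\Delta_k>0$ so convenient.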
It is important to emphasize that, due to the use of the equations on critical points, all the parameters $\lambda_2$, $\kappa_{ijk}$, the fluxes $e_i$ and $\tilde h$ conspire into the same positive multiplicative factor in all entries of the matrix $\Mbi{3}$, and, hence, in all minors $\Delta_k$, so that the stability analysis does {\it not} depend on particular values of these parameters. The details of this analysis are presented in Appendix \ref{ap-second}. We find that there are no regions in the $\gamma$-$r$ plane where all three minors $\Delta_k$ are positive. This implies that the matrix $\mathbf{M}$ \eqref{matder} {\it cannot} be positive definite and, hence, the perturbative potential {\it cannot} have local minima for any number of K\"ahler moduli. \section{Instanton contributions in the one-modulus case} \label{sec-inst} Given the results of the previous section about the absence of meta-stable vacua in the perturbative approximation, it is natural to ask whether such vacua exist after taking into account the non-perturbative corrections generated by worldsheet and D-brane instantons. In this section we study this question in the simplest case of a fictional CY with $h^{1,1}=1$. Thus, given all our approximations, the potential analyzed here should be viewed only as inspired by string theory, rather than realizing one of its compactifications. Nevertheless, we expect that it captures the main features of the cases which do have such realization. In the presence of the non-perturbative corrections it seems to be impossible to solve our equations analytically, and we have to rely on numerical calculations. 
Our basic idea is to evaluate the matrix of the second derivatives of the non-perturbative scalar potential on-shell, so that the dependence on the flux parameters is completely factorized, and then look for regions in the $t$-$\mathcal{R}$ plane\footnote{In this section we drop the index $i$ on quantities like the K\"ahler moduli since it takes only one value.} such that (i) they contain the curve of possible critical points, and (ii) the resulting matrix is positive definite. More precisely, we perform the following steps: \begin{enumerate} \item Solve \eqref{derz-potallzero}, which in this case is a single equation, with respect to $(et)^2$. The solution can be represented as \begin{equation} (et)^2=\tilde h^2 \mathscr{E}(t,\mathcal{R}) \label{et2} \end{equation} with some function $\mathscr{E}(t,\mathcal{R})$. Note that this function, as well as all other functions below, also depends on the signs $(-1)^n$ and $(-1)^l$ determined by the values of the axion fields. Thus, each function appears in four different copies corresponding to four different choices of these signs. It is enough to get a local minimum with one of these copies. \item Substitute \eqref{et2} into \eqref{derR-potallzero} so that the dependence on $\tilde h^2$ is factored out and the equation reduces to \begin{equation} \mathscr{Q}(t,\mathcal{R})=0, \label{eq-factor} \end{equation} where the function $\mathscr{Q}(t,\mathcal{R})$ is independent of the flux parameters. \item Calculate the matrix \eqref{2derM}\footnote{More precisely, in the upper left entry we evaluate the derivatives with respect to $(\mathcal{R},\log t)$ instead of $(r,t)$.} and substitute \eqref{et2}, so that the dependence on $\tilde h^2$ is also factored out and the matrix takes the form \begin{equation} \partial\p V=\tilde h^2\( \begin{array}{cc} \Phi_{IJ}(t,\mathcal{R}) \ &\ 0 \\ 0 \ &\ \Psi_{IJ}(t,\mathcal{R}) \end{array}\). \label{2derV} \end{equation} \item All the steps above can be done analytically.
To proceed further, we have to stick to a numerical analysis. To this end, we fix a finite number of instantons $N_{\rm inst}$ to be taken into account, and choose some values for $\lambda_2$, $\kappa$ and Gopakumar-Vafa invariants $n_{k}^{(0)}$, $k\le N_{\rm inst}$. We recall that we take a fictional CY, so that all these numbers can be chosen at will. \item Find the lower bounds $\mathcal{R}_{\rm cr}$ and $t_{\rm cr}$ by demanding \begin{equation} r(\mathcal{R})> -2c, \qquad e^{-\mathcal{K}(t)}> 0, \qquad \,{\rm Im}\,\mathcal{N}_{IJ}(t)\ \mbox{is negative definite}. \label{condnum} \end{equation} Under the second condition, the last one can be shown to be equivalent to (see Appendix \ref{ap-cN}) \begin{equation} N t^2>e^{-\mathcal{K}}\qquad \mbox{or}\qquad N<0, \label{condNt} \end{equation} where $N\equiv N_{11}(t)$. The subsequent analysis is concentrated on the region $(\mathcal{R}>\mathcal{R}_{\rm cr},t>t_{\rm cr})$. \item \label{mainstep} Draw the curve $\mathscr{Q}(t,\mathcal{R})=0$ on the $t$-$\mathcal{R}$ plane, and identify the parts of this curve belonging to the regions where \begin{itemize} \item $\mathscr{E}(t,\mathcal{R})>0$, \item the matrix $\Psi_{IJ}(t,\mathcal{R})$ is positive definite, \item the matrix $\Phi_{IJ}(t,\mathcal{R})$ is positive definite. \end{itemize} \item Should such parts exist, it means that there is a range of the flux parameters that allows the existence of a local minimum of the scalar potential. This range corresponds to those values of $e$ and $\tilde h$ for which the two equations, \eqref{et2} and \eqref{eq-factor}, have a common solution. The fact that such values exist is ensured by the positivity of $\mathscr{E}(t,\mathcal{R})$. \end{enumerate} \threefigmod{The left picture displays the $t$-$\mathcal{R}$ plane and its regions where $\mathscr{E}(t,\mathcal{R})>0$ (blue) and $\Psi_{IJ}(t,\mathcal{R})$ is positive definite (pink).
The red curve is a curve of solutions of $\mathscr{Q}(t,\mathcal{R})=0$, while the horizontal and vertical green lines correspond to $\mathcal{R}=\mathcal{R}_{\rm cr}$ and $t=t_{\rm cr}$, respectively. One can see that a part of the red curve belongs to the region where both conditions are satisfied. The right pictures display the same plane and the curve of solutions together with the regions of positive trace (blue) and positive determinant (pink) of $\Phi_{IJ}(t,\mathcal{R})$. The lower picture magnifies the part where the two regions are close to each other, in order to make clear that they indeed do not intersect. Thus, $\Phi_{IJ}$ is not positive definite near the red curve. The parameters are chosen as $N_{\rm inst}=4$, $\lambda_2=0.1$, $\kappa=10$, $n_{k}^{(0)}=100 k$.} {paper-2der-psi-inst.eps}{paper-2der-phi-inst1.eps}{paper-2der-phi-inst2.eps}{9cm}{7cm}{fig-inst-regions}{0.2cm} For practical purposes, it is convenient to split step \ref{mainstep} into two steps: first, impose the positivity of $\mathscr{E}$ and the positive definiteness of $\Psi_{IJ}$, and only afterwards analyze $\Phi_{IJ}$. Then, typically, at the first stage we can exclude $(-1)^l=1$ and identify a finite part of the curve $\mathscr{Q}(t,\mathcal{R})=0$, not too far from the critical values, as a candidate for the position of the minima. However, for all choices of the parameters we considered, it turns out that the matrix $\Phi_{IJ}$ is {\it not} positive definite in the region around the candidate part. A typical situation is demonstrated in Fig. \ref{fig-inst-regions}. It is striking that in all our examples the regions of the positive trace and the positive determinant of $\Phi_{IJ}$ approach each other, with their boundaries running almost parallel, but {\it never} intersect.
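The numerical part of this procedure amounts to a grid scan of the $t$-$\mathcal{R}$ plane. A minimal sketch in Python is given below; the functions $\mathscr{E}$, $\mathscr{Q}$, $\Phi_{IJ}$ and $\Psi_{IJ}$ are replaced by simple placeholders, since the actual on-shell expressions are too lengthy to reproduce here.

```python
import numpy as np

# Placeholder stand-ins for the on-shell functions E(t,R), Q(t,R) and the
# diagonal blocks Phi(t,R), Psi(t,R) of the second-derivative matrix;
# the true expressions follow from the equations above.
def E(t, R):
    return R - 1.0 / t                      # hypothetical

def Q(t, R):
    return R * t - 2.0                      # hypothetical

def Phi(t, R):
    return np.array([[t, R], [R, 1.0]])     # hypothetical symmetric block

def Psi(t, R):
    return np.array([[R, 0.1], [0.1, t]])   # hypothetical symmetric block

def is_pos_def(M):
    # a symmetric matrix is positive definite iff all eigenvalues are > 0
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

def candidate_minima(ts, Rs, tol=1e-2):
    """Grid scan of the t-R plane: keep points lying (approximately) on the
    curve Q = 0 where E > 0 and both blocks are positive definite."""
    hits = []
    for t in ts:
        for R in Rs:
            if (abs(Q(t, R)) < tol and E(t, R) > 0
                    and is_pos_def(Phi(t, R)) and is_pos_def(Psi(t, R))):
                hits.append((t, R))
    return hits
```

In the actual analysis the scan is of course performed with the exact expressions for these functions, and the curve $\mathscr{Q}=0$ is drawn rather than sampled.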
Given the highly non-trivial dependence of these functions on $t$, $\mathcal{R}$ and all the parameters, this observation calls for a deeper analytical explanation.\footnote{Actually, we found that in the deep quantum region (with small $\mathcal{R}$ and $t$) it is possible to have $\Phi_{IJ}$ positive definite and the non-perturbative scalar potential does have local minima. However, these minima violate at least one of the conditions \eqref{condnum} and, therefore, are non-physical.} We conclude that in the one-modulus case the instanton corrections do not lead to meta-stable vacua. \section{Conclusions} \label{sec-concl} In this paper we considered a simple class of flux compactifications which preserve $N=2$ local supersymmetry in the four-dimensional low energy effective action. Ignoring the back reaction of fluxes and using the recent results on the non-perturbative description of fluxless CY compactifications, we derived a scalar potential which takes into account not only perturbative corrections, but also worldsheet and D-brane instantons. Extremizing this potential, we found that the axion fields are fixed to half-integer values, provided the fluxes satisfy the simple constraint \eqref{relflux}. The axion stabilization greatly simplifies the scalar potential and the equations for its critical points, and also leads to a factorization of the matrix of its second derivatives, which allows us to disentangle the issue of stability into two independent problems in the subspaces spanned by the axions and the remaining moduli, respectively. Whereas the stability in the axion subspace is easy to achieve, our results on the stabilization of the dilaton and the K\"ahler moduli are largely negative.
First, we found the bound \eqref{second-cond2} on the critical values of the CY volume and the dilaton, which shows that the scalar potential does not have critical points in the large volume, weak coupling region of the moduli space where both $\alpha'$ and $g_s$-corrections can be neglected. Second, we investigated these critical points in the perturbative approximation, but found that all of them are {\it not} stable (i.e. not local minima). Furthermore, in the case of one K\"ahler modulus, corresponding to the fictional case of a rigid CY with $h^{1,1}=1$, we extended this result to the non-perturbative level by taking into account all instanton contributions. Thus, in all these cases not all of the moduli are stabilized by the chosen set of fluxes. The direction of instability lies in the subspace spanned by the dilaton and the K\"ahler moduli. This shows the existence of a non-trivial mixing between different moduli, and a failure of the approximation where they are supposed to be stabilized in a step-by-step procedure. Our results can be compared to the no-go theorems in the literature that forbid the existence of dS vacua. For instance, \cite{Cremmer:1984hj} proves such a theorem in the approximation where the coupling of hypermultiplets is ignored, i.e. when {\it only} abelian $N=2$ vector multiplets are taken into account, whereas \cite{GomezReino:2008bi} proves a similar statement in the opposite case where {\it only} hypermultiplets are present. The main differences from these papers are: (i) we take into account {\it both} types of $N=2$ matter multiplets, and (ii) we obtain the stronger result that not only dS, but {\it any} vacua are unstable. At the same time, our results only apply either to the perturbative level or to CY's with Hodge numbers $(h^{1,1},h^{2,1})=(1,0)$.
It was argued in \cite{Catino:2013syn} that meta-stable dS vacua can be obtained in $N=2$ gauged supergravity with a single hypermultiplet and a single vector multiplet, by gauging an abelian isometry of the hypermultiplet moduli space. This claim was based on the observation that the bound of \cite{GomezReino:2008bi} on (scalar) sGoldstini masses is relaxed in that case. Our results in the one-modulus case are not in tension with these findings because \cite{Catino:2013syn} studied the most general metrics on $\mathcal{M}_V$ and $\mathcal{M}_H$, which are consistent with the special K\"ahler and quaternion-K\"ahler properties, respectively, whereas we restricted them to those resulting from the fluxless CY compactifications. Rather, our results imply that the vacua of \cite{Catino:2013syn} are not expected to arise in string theory, at least if the back reaction does not change the situation drastically. It is also worth mentioning that dS vacua are known to arise after the gauging of {\it non-abelian} isometries \cite{Fre:2002pd,Ceresole:2014vpa,Fre:2014pca}. Non-abelian isometries do exist in {\it classical} supergravity, where the hypermultiplet moduli space can be taken to be a quaternionic homogeneous space\footnote{For instance, the universal hypermultiplet moduli space, appearing in CY compactifications in the classical approximation, is given by the symmetric coset space $SU(2,1)/SU(2)\times U(1)$.} $G/H$ with a semi-simple stability group $H$, which allows one to introduce the so-called de Roo--Wagemans angles, which play a crucial role in the construction of the classical dS vacua.
However, any {\it quantum} correction, either perturbative or non-perturbative, breaks the non-abelian symmetries of the hypermultiplet moduli space, so that the non-abelian gaugings do not apply in quantum theory.\footnote{That is why we omitted the non-abelian contributions in the basic equation \eqref{scpot-gen} of the scalar potential in $N=2$ gauged supergravity.} Thus, the vacua constructed in \cite{Fre:2002pd,Ceresole:2014vpa,Fre:2014pca} do not appear to be relevant in the context of full string theory where quantum corrections are not ignored. Returning to our results, we note that they do not fully exclude the class of flux compactifications which inspired our potential: it remains to understand what happens at the full non-perturbative level for CY's with $h^{1,1}>1$ (i.e. in all non-fictional cases), and whether the picture we found still persists. In fact, there is a serious obstacle along this way due to the absence of any knowledge about Gopakumar-Vafa invariants for rigid CY manifolds. Usually, these invariants are calculated by using mirror symmetry \cite{Candelas:1990rm,Hosono:1993qy}. However, rigid CY's do not have mirror duals (since $h^{1,1}$ cannot be zero). It is an outstanding mathematical problem to find the non-perturbative holomorphic prepotential for such manifolds. Because of this problem, it might be reasonable to drop the assumption of rigidity and consider more general CY threefolds. Since the D-instanton corrected metric on the hypermultiplet moduli space is known for any CY \cite{Alexandrov:2014sya}, it may not be difficult to generalize the derivation of the non-perturbative potential \eqref{potential-main} to a generic case. However, then both the metric and the scalar potential would become even more complicated by acquiring extra dependence on the complex structure moduli, which also have to be stabilized.
Finally, it should be emphasized that we considered a very restricted set of fluxes, with all magnetic fluxes, including Romans mass, being set to zero. It was chosen to preserve $N=2$ local supersymmetry that, in turn, was needed to take into account non-perturbative contributions, which are known only under very special circumstances. Of course, from both phenomenological and purely theoretical viewpoints, it would be desirable to extend our analysis to more general flux compactifications when $N=2$ local supersymmetry is broken to $N=1$. This, however, would require a much better understanding of quantum effects in $N=1$ flux compactifications, beyond the current level. Whereas their direct calculation from first principles is hardly possible, one may hope that a combination of string dualities with geometry of the moduli spaces will become as powerful in the $N=1$ case as it turned out to be in the $N=2$ case. \acknowledgments It is our pleasure to thank Sibasish Banerjee, Eric Bergshoeff, Renata Kallosh, Amir-Kian Kashani-Poor, Ruben Minasian, Ulrich Theis and Stefan Vandoren for valuable discussions and correspondence. We are particularly grateful to Eberhard Freitag for sharing with us his results about the intersection numbers of a rigid Calabi-Yau manifold with Picard number two. SVK is also grateful to the University of Montpellier for kind hospitality extended to him during part of this investigation. SVK was supported by a Grant-in-Aid of the Japan Society for the Promotion of Science (JSPS) under No.~26400252, the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and the Competitiveness Enhancement Program of Tomsk Polytechnic University in Russia.
\section{Introduction} In this paper, we consider general minimax problems of the form ($n, d_1, d_2\in\mathbb{N}^+$): \begin{equation}\label{eq:main_problems} \min_{x\in \mathbb{R}^{d_1}}\max_{y\in \mathbb{R}^{d_2}}\ f(x,y), \end{equation} as well as their finite-sum counterpart \begin{equation}\label{eq:main_problems_2} \min_{x\in \mathbb{R}^{d_1}}\max_{y\in \mathbb{R}^{d_2}}\ f(x,y)\triangleq\frac{1}{n}\sum_{i=1}^{n}f_i(x,y), \end{equation} where $f,f_i$ are continuously differentiable and $f$ is $L$-Lipschitz smooth jointly in $x$ and $y$. We focus on the setting where the function $f$ is $\mu$-strongly concave in $y$ and possibly nonconvex in $x$, i.e., $f$ is \textit{nonconvex-strongly-concave (NC-SC)}. Such problems arise ubiquitously in machine learning, e.g., GANs with regularization \citep{sanjabi2018convergence,lei2020sgd}, Wasserstein robust models \citep{sinha2018certifying}, robust learning over multiple domains \citep{qian2019robust}, and off-policy reinforcement learning \citep{dai2017learning,dai2018sbeed,huang2020convergence}. Since the problem is nonconvex in general, a natural goal is to find an approximate stationary point $\Bar{x}$, such that $\autonorm{\nabla\Phi(\Bar{x})}\leq\epsilon$, for a given accuracy $\epsilon$, where $\Phi(x)\triangleq\max_y f(x,y)$ is the primal function. This goal is meaningful for the aforementioned applications, e.g.,~in adversarial models the primal function quantifies the worst-case loss for the learner, with respect to the adversary's actions. There exist a number of algorithms for solving NC-SC problems in the general setting, including GDmax \citep{nouiehed2019solving}, GDA~\citep{lin2020gradient}, alternating GDA~\citep{yang2020global,boct2020alternating,xu2020unified}, and Minimax-PPA~\citep{lin2020near}.
Specifically, GDA and its alternating variant both achieve the complexity of $ O(\kappa^2 \Delta L\epsilon^{-2}) $ \citep{lin2020gradient,yang2020global}, where $\kappa\triangleq \frac{L}{\mu}$ is the condition number and $\Delta\triangleq\Phi(x_0)-\inf_{x}\Phi(x)$ is the initial function gap. Recently, \citet{lin2020near} provided the best-known complexity of $ O\autopar{\sqrt{\kappa} \Delta L \epsilon^{-2}\cdot \log^2(\frac{\kappa L}{\epsilon})}$ achieved by Minimax-PPA, which improves the dependence on the condition number but suffers from an extra poly-logarithmic factor in $\frac{1}{\epsilon}$. In the finite-sum setting, several algorithms have been proposed recently, e.g., SGDmax~\citep{jin2020local}, PGSMD~\citep{rafique2018non}, Stochastic GDA~\citep{lin2020gradient}, SREDA and its variants~\citep{luo2020stochastic,xu2020enhanced}. In particular, \citet{lin2020gradient} proved that Stochastic GDA attains the complexity of $O(\kappa^3\epsilon^{-4})$. \citet{luo2020stochastic} recently showed the state-of-the-art result achieved by SREDA: when $n \geq \kappa^2$, the complexity is $\tilde{O}\autopar{n\log\frac{\kappa}{\epsilon}+\sqrt{n}\kappa^2 \Delta L\epsilon^{-2}}$, which is sharper than the batch Minimax-PPA algorithm; when $n \leq \kappa^2$, the complexity is $O\autopar{\autopar{n\kappa+\kappa^2} \Delta L\epsilon^{-2}}$, which is sharper than Stochastic GDA.
\begin{table*} \centering \small \renewcommand{\arraystretch}{1.5} \begin{threeparttable}[b] \begin{tabular}{l | c | c | c} \hline \hline \textbf{Setting} & \textbf{Our Lower Bound} & \textbf{Our Upper Bound} & \textbf{Previous Upper Bound} \\ \hline \hline \multirow{2}{*}{NC-SC, general} & \multirow{2}{*}{ \makecell[c]{ $ \Omega\big(\sqrt{\kappa} \Delta L\epsilon^{-2}\big) $\vspace{0.1em}\\ Theorem \ref{THM:LB_NCSC_DETER} } } & \multirow{2}{*}{ \makecell[c]{$ \Tilde{O}(\sqrt{\kappa} \Delta L\epsilon^{-2}) $ \vspace{0.1em}\\ Section \ref{subsec algorithms} } } & \multirow{2}{*}{ \makecell[c]{ $ O(\kappa^2 \Delta L\epsilon^{-2}) $ \citep{lin2020gradient}\vspace{0.25em}\\ $ \tilde{O}\autopar{\sqrt{\kappa} \Delta L \epsilon^{-2}\log^2\frac{1}{\epsilon}} $ \citep{lin2020near} } } \\ & & & \\ \hline \multirow{3}{*}{NC-SC, FS, AS\tnote{1}\ \ } & \multirow{3}{*}{ \makecell[c]{ $ \Omega\autopar{n+\sqrt{n\kappa} \Delta L \epsilon^{-2}} $\vspace{0.1em}\\ Theorem \ref{THM:LB_NCSC_FS_AS} } } & \multirow{3}{*}{\makecell[c]{$ \tilde{O}\autopar{\autopar{n+n^{\frac{3}{4}}\sqrt{\kappa}} \Delta L \epsilon^{-2}} $\vspace{0.1em} \\ Section \ref{subsec algorithms}}} & \multirow{3}{*}{ \makecell[c]{ $ \begin{cases} \tilde{O}(n+\sqrt{n}\kappa^2 \Delta L\epsilon^{-2}) & n \geq \kappa^2\\ O\autopar{\autopar{n\kappa+\kappa^2} \Delta L\epsilon^{-2}} & n \leq \kappa^2 \end{cases} $\vspace{0.1em}\\ \citep{luo2020stochastic,xu2020enhanced} } } \\ & & & \\ & & & \\ \hline \hline \end{tabular} \begin{tablenotes} \item[1] FS: finite-sum, AS: averaged smooth; see Section \ref{SEC:MAIN_PRELIM} for definitions. \end{tablenotes} \end{threeparttable} \caption{Upper and lower complexity bounds for finding an approximate stationary point. Here $\tilde{O}(\cdot)$ hides poly-logarithmic factor in $L,\mu$ and $\kappa$. 
$L$: Lipschitz smoothness parameter, $\mu$: strong concavity parameter, $\kappa$: condition number $\frac{L}{\mu}$; $\Delta$: initial gap of the primal function.} \label{table:summary_results} \end{table*} Despite this active line of research, whether these state-of-the-art complexity bounds can be further improved remains elusive. As a special case obtained by restricting the domain of $y$ to a singleton, lower bounds for nonconvex smooth minimization, e.g., \citep{carmon2019lower,carmon2019lowerII,fang2018spider,zhou2019lower,arjevani2019lower}, hardly capture the dependence on the condition number $\kappa$, which plays a crucial role in the complexity for general NC-SC smooth minimax problems. In many of the aforementioned machine learning applications, the condition number is often proportional to the inverse of the regularization parameter, and could be quite large in practice. For example, in statistical learning, where $n$ represents the sample size, the optimal regularization parameter (i.e.~with optimal empirical/generalization trade-off) leads to $\kappa=\Omega(\sqrt{n})$ \citep{shalev2014understanding}. This motivates the following fundamental questions: \textit{What is the complexity limit for NC-SC problems in the general and finite-sum settings? Can we design new algorithms to meet the performance limits and attain optimal dependence on the condition number?}
In the finite-sum setting, we prove an $ \Omega\autopar{n+\sqrt{n\kappa} \Delta L \epsilon^{-2}} $ lower complexity bound (when $\kappa=\Omega(n)$)\footnote{A concurrent work by \cite{han2021lower} appeared on arXiv two weeks ago, and provided a similar lower bound result for finite-sum NC-SC problems under probabilistic arguments based on geometric random variables.} for the class of {\em averaged smooth functions} and arbitrary linear-span algorithms interacting with a (randomized) incremental first-order oracle (precise definitions in Sections \ref{SEC:MAIN_PRELIM} and \ref{sec:main_LB_NCSC}). Our lower bounds build upon two main ideas: first, we start from an NC-SC function whose primal function mimics the lower bound construction in smooth nonconvex minimization \citep{carmon2019lower}. Crucially, the smoothness parameter of this primal function is boosted by an $\Omega(\kappa)$ factor, which strengthens the lower bound. Second, the NC-SC function has an alternating zero-chain structure, as utilized in lower bounds for convex-concave settings \citep{ouyang2019lower}. The combination of these features leads to a hard instance for our problem. \item To bridge the gap between the lower bounds and existing upper bounds in both settings, we introduce a generic Catalyst acceleration framework for NC-SC minimax problems, inspired by~\citep{lin2018catalyst,yang2020catalyst}, which applies existing gradient-based methods to solving a sequence of crafted strongly-convex-strongly-concave (SC-SC) minimax subproblems. When combined with the extragradient method, the resulting algorithm achieves an $ \tilde{O}(\sqrt{\kappa} \Delta L\epsilon^{-2}) $ complexity in terms of gradient evaluations, which tightly matches the lower bound in the general setting (up to logarithmic terms in constants) and shaves off the extra poly-logarithmic term in $\frac{1}{\epsilon}$ required by the state-of-the-art \citep{lin2020near}. 
When combined with a stochastic variance-reduced method, the resulting algorithm achieves an overall $ \tilde{O}\autopar{(n+n^{3/4}\sqrt{\kappa}) \Delta L \epsilon^{-2}} $ complexity for averaged smooth finite-sum problems, which has nearly-tight dependence on the condition number and improves on the best-known upper bound when $n\leq \kappa^4$. \end{itemize} \subsection{Related Work} \paragraph{Lower bounds for minimax problems.} Information-based complexity (IBC) theory \citep{traub1988information}, which derives the minimal number of oracle calls to attain an approximate solution with a desired accuracy, is often used in lower bound analysis of optimization algorithms. Unlike the case of minimization \citep{blair1985problem,nesterov2018lectures,agarwal2009information,woodworth2016tight,foster2019complexity,carmon2019lower,carmon2019lowerII,fang2018spider,zhou2019lower,arjevani2019lower}, lower bounds for minimax optimization are far less understood; only a few recent works provided lower bounds for finding an approximate saddle point of (strongly)-convex-(strongly)-concave minimax problems \citep{ouyang2019lower,zhang2019lower, ibrahim2020linear,xie2020lower,yoon2021accelerated}. Instead, this paper considers lower bounds for finding an approximate stationary point of NC-SC minimax problems, which requires different techniques for constructing zero-chain properties. Note that there exists another line of research on the purely stochastic setting, e.g., \citep{rafique2018non,luo2020stochastic,xu2020enhanced}; constructing lower bounds in that setting is outside the scope of this paper. \paragraph{Complexity of making gradient small.} In nonconvex optimization, most lower and upper complexity bound results are presented in terms of the gradient norm (see a recent survey \citep{danilova2020recent} and references therein for more details). For convex optimization, the optimality gap based on the objective value is commonly used as the convergence criterion.
Convergence in terms of the gradient norm, albeit easier to check, was far less studied in the literature until recently; see e.g., \citep{nesterov2012make,allen2018make,foster2019complexity,carmon2019lowerII, diakonikolasguzman2021} for convex minimization and \citep{diakonikolas2020halpern,diakonikolas2021potential,yoon2021accelerated} for convex-concave smooth minimax problems. \paragraph{Nonconvex minimax optimization.} In the NC-SC setting, as we mentioned, there have been several substantial works. Among them, \citet{lin2020near} achieved the best dependence on the condition number by combining the proximal point algorithm with accelerated gradient descent. \citet{luo2020stochastic} introduced a variance-reduction algorithm, SREDA, and \citet{xu2020enhanced} enhanced the analysis to allow larger stepsizes. \citet{yuan2021federated,guo2020communication} provided algorithms for the NC-SC minimax formulation of AUC maximization problems under the additional assumption that the primal function satisfies the Polyak-{\L}ojasiewicz condition. In addition, nonconvex-concave minimax optimization, i.e., where the function $f$ is only concave in $y$, has been extensively explored by \citep{zhang2020single, ostrovskii2020efficient, thekumparampil2019efficient, zhao2020primal,nouiehed2019solving,yang2020catalyst}. Recently, \citet{daskalakis2020complexity} showed that for general smooth nonconvex-nonconcave objectives the computation of approximate first-order locally optimal solutions is intractable. Therefore, another line of research is devoted to searching for solutions under additional structural properties \citep{yang2020devil,zhou2017stochastic, yang2020global, song2020optimistic,mertikopoulos2019optimistic,malitsky2019golden,diakonikolas2020efficient, lin2018solving}.
\paragraph{Catalyst acceleration.} The catalyst framework was initially studied in \citep{lin2015universal} for convex minimization and extended to nonconvex minimization in \citep{paquette2018catalyst} to obtain accelerated algorithms. A similar idea to accelerate SVRG appeared in \citep{frostig2015regularizing}. These works are rooted in the proximal point algorithm (PPA) \citep{rockafellar1976monotone,guler1991convergence} and inexact accelerated PPA \citep{guler1992new}. Recently, \citet{yang2020catalyst} generalized the idea and obtained state-of-the-art results for solving strongly-convex-concave and nonconvex-concave minimax problems. In contrast, this paper introduces a new catalyst acceleration scheme in the nonconvex-strongly-concave setting, which relies on completely different parameter settings and a different stopping criterion. \section{Preliminaries} \label{SEC:MAIN_PRELIM} \paragraph{Notations} Throughout the paper, we use $\dom F$ as the domain of a function $F$, $\nabla F=\autopar{\nabla_x F, \nabla_y F}$ as the full gradient, and $\autonorm{\cdot}$ as the $\ell_2$-norm. We use $0$ to represent zero vectors or scalars, and $e_i$ to represent the unit vector with the $i$-th element being $1$. For nonnegative functions $f(x)$ and $g(x)$, we say $f=O\autopar{g}$ if $f(x)\leq cg(x)$ for some $c>0$, and further write $f=\tilde{O}\autopar{g}$ to omit poly-logarithmic terms in the constants $L,\mu$ and $\kappa$, while $f=\Omega\autopar{g}$ if $f(x)\geq cg(x)$ (see more in Appendix \ref{sec:Apdx_notations}). We introduce definitions and assumptions used throughout. \begin{definition}[Primal and Dual Functions] For a function $f(x,y)$, we define $\Phi(x)\triangleq\max_y f(x,y)$ as the primal function, and $\Psi(y)\triangleq\min_x f(x,y)$ as the dual function. We also define the primal-dual gap at a point $(\Bar{x},\Bar{y})$ as $ \gap_f(\Bar{x}, \Bar{y}) \triangleq \max_{y \in\mathbb{R}^{d_2}} f(\Bar{x},y) - \min_{x \in \mathbb{R}^{d_1}} f(x,\Bar{y}).
$ \end{definition} \begin{definition}[Lipschitz Smoothness] We say a function $f(x,y)$ is $L$-Lipschitz smooth ($L$-S) jointly in $x$ and $y$ if it is differentiable and for any $(x_1,y_1), (x_2,y_2)\in\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$, $\autonorm{\nabla_x f(x_1,y_1)-\nabla_x f(x_2,y_2)}\leq L(\|x_1-x_2\|+\|y_1-y_2\|)$ and $\autonorm{\nabla_y f(x_1,y_1)-\nabla_y f(x_2,y_2)}\leq L(\|x_1-x_2\|+\|y_1-y_2\|)$, for some $L>0$. \end{definition} \begin{definition}[Average / Individual Smoothness] \label{defn:AS_IS} We say $f(x,y)=\frac{1}{n}\sum_{i=1}^{n}f_i(x,y)$ or $\{f_i\}_{i=1}^n$ is $L$-averaged smooth ($L$-AS) if each $f_i$ is differentiable, and for any $(x_1,y_1), (x_2,y_2)\in\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$, we have \begin{equation} \begin{split} \frac{1}{n}\sum_{i=1}^n\autonorm{\nabla f_i(x_1,y_1)-\nabla f_i(x_2,y_2)}^2\leq L^2\autopar{\autonorm{x_1-x_2}^2+\autonorm{y_1-y_2}^2}. \end{split} \end{equation} We say $f$ or $\autobigpar{f_i}_{i=1}^n$ is $L$-individually smooth ($L$-IS) if each $f_i$ is $L$-Lipschitz smooth. \end{definition} Average smoothness is a weaker condition than the common Lipschitz smoothness assumption of each component in finite-sum~/~stochastic minimization \citep{fang2018spider,zhou2019lower}. Similarly, in minimax problems, the following proposition summarizes the relationship among these different notions of smoothness. \begin{prop} \label{prop regularized smooth} Let $f(x,y)=\frac1n\sum_{i=1}^n f_i(x,y)$. Then we have: (a) If the function $f$ is $L$-IS or $L$-AS, then it is $L$-S. (b) If $f$ is $L$-IS, then it is $(2L)$-AS. (c) If $f$ is $L$-AS, then $f(x,y) + \frac{\tau_x}{2}\|x-\Tilde{x}\|^2 - \frac{\tau_y}{2}\|y-\Tilde{y}\|^2$ is $\sqrt{2}(L+\max\{\tau_x, \tau_y\})$-AS for any $\Tilde{x}$ and $\Tilde{y}$. \end{prop} \begin{definition}[Strong Convexity] A differentiable function $g:\mathbb{R}^{d_1}\rightarrow\mathbb{R}$ is convex if $g(x_2)\geq g(x_1)+\autoprod{\nabla g(x_1), x_2-x_1}$ for any $x_1, x_2\in\mathbb{R}^{d_1}$.
Given $\mu\geq 0$, we say $g$ is $\mu$-strongly convex if $ g(x)-\frac{\mu}{2}\autonorm{x}^2 $ is convex, and it is $\mu$-strongly concave if $-g$ is $\mu$-strongly convex. \end{definition} Next, we introduce the main assumptions used throughout this paper. \begin{assume}[Main Settings] \label{main assumption} We assume that $f(x,y)$ in \eqref{eq:main_problems} is a \textit{nonconvex-strongly-concave (NC-SC)} function such that $f$ is $L$-S, and $f(x,\cdot)$ is $\mu$-strongly concave for any fixed $x\in\mathbb{R}^{d_1}$; for the finite-sum case, we further assume that $\autobigpar{f_i}_{i=1}^n$ is $L$-AS. We assume that the initial primal suboptimality is bounded: $\Phi(x_0)-\inf_{x}\Phi(x)\leq \Delta$. \end{assume} Under Assumption \ref{main assumption}, the primal function $\Phi(\cdot)$ is differentiable and $2\kappa L$-Lipschitz smooth \citep[Lemma 23]{lin2020near} where $\kappa\triangleq\frac{L}{\mu}$. Throughout this paper, we use the stationarity of the primal function as the convergence criterion. \begin{definition}[Convergence Criterion] For a differentiable function $\Phi$, a point $\Bar{x}\in\dom\Phi$ is called an $\epsilon$-stationary point of $\Phi$ if $\autonorm{\nabla\Phi(\Bar{x})}\leq\epsilon$. \end{definition} Another commonly used criterion is the stationarity of $f$, i.e., $\autonorm{\nabla_x f(\Bar{x},\Bar{y})}\leq \epsilon, \autonorm{\nabla_y f(\Bar{x},\Bar{y})}\leq \epsilon$. This is a weaker convergence criterion. We refer readers to \citep[Section 4.3]{lin2020gradient} for the comparison of these two criteria. \section{Lower Bounds for NC-SC Minimax Problems} \label{sec:main_LB_NCSC} In this section, we establish lower complexity bounds (LB) for finding approximate stationary points of NC-SC minimax problems, in both general and finite-sum settings. We first present the basic components of the oracle complexity framework \citep{blair1985problem} and then proceed to the details for each case.
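Before turning to the constructions, the primal-function machinery of Section \ref{SEC:MAIN_PRELIM} can be illustrated on a toy NC-SC instance. The example below is purely illustrative (it is not an instance used in our proofs): for $f(x,y)=\sin(x)\,y-\frac{\mu}{2}y^2$, the inner maximizer is $y^*(x)=\sin(x)/\mu$, the primal function is $\Phi(x)=\sin^2(x)/(2\mu)$, and by Danskin's theorem $\nabla\Phi(x)=\nabla_x f(x,y^*(x))$, which can be verified numerically.

```python
import numpy as np

mu = 0.5  # strong-concavity parameter of f(x, .)

def f(x, y):
    # toy NC-SC instance: nonconvex in x, mu-strongly concave in y
    return np.sin(x) * y - 0.5 * mu * y ** 2

def y_star(x):
    # unique inner maximizer: d f / d y = sin(x) - mu * y = 0
    return np.sin(x) / mu

def Phi(x):
    # primal function Phi(x) = max_y f(x, y) = sin(x)^2 / (2 mu)
    return f(x, y_star(x))

def grad_Phi(x):
    # Danskin's theorem: grad Phi(x) = grad_x f(x, y*(x))
    return np.cos(x) * y_star(x)

# finite-difference check of the Danskin gradient at an arbitrary point
x0, h = 0.7, 1e-6
fd = (Phi(x0 + h) - Phi(x0 - h)) / (2 * h)
assert abs(fd - grad_Phi(x0)) < 1e-5
```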
For simplicity, in this section only, we denote $x_d$ as the $d$-th coordinate of $x$ and $x^t$ as the variable $x$ in the $t$-th iteration. \subsection{Framework and Setup} We study the lower bound for finding a primal stationary point under the well-known oracle complexity framework \citep{blair1985problem}; we first present the basics of this framework. \paragraph{Function class} We consider the \textit{nonconvex-strongly-concave (NC-SC)} function class, as defined in Assumption \ref{main assumption}, with parameters $L,\mu,\Delta>0$, denoted by $ \mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta} $. \paragraph{Oracle class} We consider different oracles for the general and finite-sum settings. Define $z\triangleq(x,y)$. \begin{itemize} \item For the general setting, we consider the \textit{first-order oracle (FO)}, denoted as $ \mathbb{O}_{\mathrm{FO}}(f,\cdot) $, which, for each query at a point $ z $, returns the gradient $ \mathbb{O}_{\mathrm{FO}}(f,z)\triangleq\autopar{\nabla_x f(x,y), \nabla_y f(x,y)}. $ \item For the finite-sum setting, the \textit{incremental first-order oracle (IFO)} is often used in lower bound analysis \citep{agarwal2015lower}. This oracle, for a function $f(x,y)=\frac{1}{n}\sum_{i=1}^nf_i(x,y)$, is such that for each query at a point $ z $ and index $i\in[n]$, it returns the gradient of the $i$-th component, i.e., $ \mathbb{O}_{\mathrm{IFO}}(f,z,i)\triangleq\autopar{\nabla_x f_i(x,y), \nabla_y f_i(x,y)} $. Here, we consider the \textit{averaged smooth IFO} and \textit{individually smooth IFO}, denoted as $ \mathbb{O}_{\mathrm{IFO}}^{L,\mathrm{AS}}(f) $ and $ \mathbb{O}_{\mathrm{IFO}}^{L,\mathrm{IS}}(f) $, where $\autobigpar{f_i}_{i=1}^n$ is $L$-AS or $L$-IS, respectively. \end{itemize} \paragraph{Algorithm class} In this work, we consider the class of {\em linear-span algorithms} interacting with oracle $\mathbb{O}$, denoted as ${\cal A}(\mathbb{O})$.
These algorithms satisfy the following property: if we let $(z^t)_t$ be the sequence of queries by the algorithm, where $z^t=(x^t,y^t)$, then for all $t$, we have \begin{equation} \label{eq:alg_protocol_deter} z^{t+1}\in\mathrm{Span}\autobigpar{z^0,\cdots,z^t;\mathbb{O}\autopar{f,z^0},\cdots,\mathbb{O}\autopar{f,z^t}}. \end{equation} For the finite-sum case, the above protocol fits with many existing deterministic and randomized linear-span algorithms. We distinguish the general and finite-sum settings by specifying the oracle used, which is $\mathbb{O}_{\mathrm{FO}}$ or $\mathbb{O}_{\mathrm{IFO}}$, respectively. Most existing first-order algorithms, including simultaneous and alternating update algorithms, can be formulated as linear-span algorithms. It is worth pointing out that typically the linear-span assumption is used without loss of generality, since there is a standard reduction from deterministic linear-span algorithms to arbitrary oracle-based deterministic algorithms \citep{Nemirovski:1991, Nemirovski:1992, ouyang2019lower}. We defer this extension to future work. \paragraph{Complexity measures} The efficiency of algorithms is quantified by the \textit{oracle complexity} \citep{blair1985problem} of finding an $ \epsilon $-stationary point of the primal function: for an algorithm $\mathtt{A}\in\mathcal{A}(\mathbb{O})$ interacting with an oracle $\mathbb{O}$ and an instance $f\in\mathcal{F}$, we define \begin{equation} T_{\epsilon}(f,\mathtt{A})\triangleq\inf \autobigpar{T\in\mathbb{N}|\|\nabla \Phi\autopar{x^T}\|\leq\epsilon} \end{equation} as the minimum number of oracle calls $\mathtt{A}$ makes to reach the stationarity criterion. For the general case, we define the {\em worst-case complexity} \begin{equation} \label{eq:defn_LB_deter} \mathrm{Compl}_\epsilon\autopar{\mathcal{F},\mathcal{A},\mathbb{O}} \triangleq \underset{f\in\mathcal{F}}{\sup}\ \underset{\mathtt{A}\in{\mathcal{A}(\mathbb{O})}}{\inf}\ T_{\epsilon}(f,\mathtt{A}).
\end{equation} For the finite-sum case, we consider the {\em randomized complexity} \citep{braun2017lower}: \begin{equation} \label{eq:defn_LB_FS} \mathrm{Compl}_{\epsilon}\autopar{\mathcal{F},\mathcal{A},\mathbb{O}} \triangleq \underset{f\in\mathcal{F}}{\sup}\ \underset{\mathtt{A}\in{\mathcal{A}(\mathbb{O})}}{\inf}\ \mathbb{E}\ T_{\epsilon}(f,\mathtt{A}). \end{equation} Following the motivation discussed in Section \ref{sec:contribution}, we will use the zero-chain argument in our analysis. First, we define the notions of (first-order) zero-chain \citep{carmon2019lowerII} and activation as follows. \begin{definition}[Zero-Chain, Activation] A function $ f:\mathbb{R}^d\rightarrow\mathbb{R} $ is a first-order zero-chain if for any $ x\in\mathbb{R}^d $, \begin{equation} \mathrm{supp}\{x\}\subseteq\{1,\cdots,i-1\}\ \Rightarrow\ \mathrm{supp}\{\nabla f(x)\}\subseteq\{1,\cdots,i\}, \end{equation} where $ \mathrm{supp}\{x\}\triangleq\{i\in[d]\ | \ x_i\neq 0\} $ and $ [d]=\{1,\cdots,d\} $. For an algorithm initialized at $0\in\mathbb{R}^d$, with iterates $\{x^t\}_t$, we say coordinate $i$ is activated at $x^t$ if $x_i^t\neq 0$ and $x_i^s= 0$ for any $s<t$. \end{definition} \subsection{General NC-SC Problems} First, we consider \textit{general NC-SC (Gen-NC-SC)} minimax optimization problems. Following the above framework, we choose the function class $\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta}$, the oracle $\mathbb{O}_{\mathrm{FO}}$, and the linear-span algorithm class ${\cal A}$, and we analyze the complexity defined in \eqref{eq:defn_LB_deter}.
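Before constructing the hard instance, here is a minimal numerical illustration of the zero-chain mechanism defined above (our own toy quadratic chain, not the NC-SC instance constructed next): started at zero, a linear-span method can activate at most one new coordinate per oracle call.

```python
import numpy as np

# Toy zero-chain: f(x) = 0.5*(x_1 - 1)^2 + 0.5 * sum_i (x_{i+1} - x_i)^2.
# Its gradient extends the support of x by at most one coordinate, so any
# method whose queries stay in the span of past gradients activates
# coordinates one at a time when initialized at x = 0.
def grad(x):
    g = np.zeros_like(x)
    g[0] = x[0] - 1.0
    diffs = np.diff(x)          # x_{i+1} - x_i
    g[:-1] -= diffs
    g[1:] += diffs
    return g

x = np.zeros(8)
supports = []
for _ in range(5):              # plain gradient descent, a linear-span method
    x = x - 0.3 * grad(x)
    supports.append(set(np.flatnonzero(x != 0).tolist()))

# after t+1 steps, only coordinates 0, ..., t can be nonzero
print(all(s <= set(range(t + 1)) for t, s in enumerate(supports)))  # True
```

The same bookkeeping, applied alternately to $x$ and $y$, is exactly what drives the lower bound arguments below.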
\paragraph{Hard instance construction} Inspired by the hard instances constructed in~\citep{ouyang2019lower,carmon2019lowerII}, we introduce the following function $ F_d:\mathbb{R}^{d+1}\times\mathbb{R}^{d+2}\rightarrow\mathbb{R} $ ($d\in\mathbb{N}^+$), defined as \begin{equation} \label{eq:LB_hard_instance_deter} \begin{split} F_d(x,y;\lambda,\alpha) \triangleq \lambda_1\autoprod{B_dx,y}- \lambda_2\|y\|^2-\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,x}+ \frac{\lambda_1^2\alpha}{2\lambda_2}\sum_{i=1}^{d}\Gamma(x_i)- \frac{\lambda_1^2\alpha}{4\lambda_2}x_{d+1}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}, \end{split} \end{equation} where $ \lambda=(\lambda_1,\lambda_2)\in\mathbb{R}^2 $ is the parameter vector, $ e_1\in\mathbb{R}^{d+1} $ is the unit vector whose only non-zero entry is in the first coordinate, and $ \Gamma:\mathbb{R}\rightarrow\mathbb{R} $ and $B_d\in\mathbb{R}^{(d+2)\times(d+1)}$ are given by \begin{equation} B_d= \begin{bmatrix} & & & & 1\\ & & & 1 & -1\\ & & \iddots & \iddots &\\ & 1 & -1 & &\\ 1 & -1 & & &\\ \sqrt[4]{\alpha} & & & & \end{bmatrix}, \quad \Gamma(x) = 120\int_{1}^{x}\frac{t^2(t-1)}{1+t^2}dt. \end{equation} Matrix $ B_d $ essentially triggers the activation of variables at each iteration, and function $ \Gamma $ introduces nonconvexity in $x$ into the instance.
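As a sanity check on this construction, the sketch below instantiates one concrete indexing of the schematic matrix $B_d$ (our own realization, consistent with the display above) and verifies numerically that $B_d^\top B_d - e_{d+1}e_{d+1}^\top$ is the tridiagonal matrix $A_d$ appearing in the primal function below:

```python
import numpy as np

def make_B(d, alpha):
    # One concrete realization of B_d in R^{(d+2) x (d+1)}: an anti-diagonal
    # difference pattern plus a last row alpha^{1/4} * e_1^T.
    M = np.zeros((d + 2, d + 1))
    M[0, d] = 1.0
    for i in range(1, d + 1):
        M[i, d - i] = 1.0
        M[i, d - i + 1] = -1.0
    M[d + 1, 0] = alpha ** 0.25
    return M

d, alpha = 6, 0.01
B = make_B(d, alpha)
e_last = np.eye(d + 1)[-1]
A = B.T @ B - np.outer(e_last, e_last)

# A should be tridiagonal with diagonal (1 + sqrt(alpha), 2, ..., 2, 1)
# and off-diagonal entries -1
expected = (np.diag([1 + alpha ** 0.5] + [2.0] * (d - 1) + [1.0])
            + np.diag([-1.0] * d, 1) + np.diag([-1.0] * d, -1))
print(np.allclose(A, expected))  # True
```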
By the first-order optimality condition of $ F_d(x,\cdot;\lambda,\alpha) $, we can compute its primal function $\Phi_d$: \begin{equation} \label{eq:LB_hard_instance_deter_Phi} \begin{split} \Phi_d(x;\lambda,\alpha) \triangleq\ & \max_{y\in\mathbb{R}^{d+2}}F_d\left(x,y;\lambda,\alpha\right)\\ =\ & \frac{\lambda_1^2}{2\lambda_2} \autopar{ \frac{1}{2}x^\top A_dx- \sqrt{\alpha}x_1+ \frac{\sqrt{\alpha}}{2}+ \alpha\sum_{i=1}^{d}\Gamma(x_i)+ \frac{1-\alpha}{2}x_{d+1}^2 }, \end{split} \end{equation} where $A_d\in\mathbb{R}^{(d+1)\times(d+1)}$ is \begin{equation} \label{eq:matrix_A_d_definition} \begin{split} A_d = \left(B_d^\top B_d-e_{d+1}e_{d+1}^\top\right) = \begin{bmatrix} 1+\sqrt{\alpha} & -1 & & & \\ -1 & 2 & -1 & & & \\ & -1 & 2 & \ddots & & \\ & & \ddots & \ddots & -1 & \\ & & & \ddots & 2 & -1 \\ & & & & -1 & 1 \end{bmatrix}. \end{split} \end{equation} The resulting primal function resembles the worst-case functions used in the lower bound analysis of minimization problems \citep{nesterov2018lectures,carmon2019lowerII}. \paragraph{Zero-chain construction} First, we summarize the key properties of the instance and its zero-chain mechanism. We further denote $\hat{e}_i\in\mathbb{R}^{d+2}$ as the $i$-th unit vector for the variable $y$ and define ($k\geq 1$) \begin{equation} \begin{split} \mathcal{X}_k&\triangleq\mathrm{Span}\{e_1,e_2,\cdots,e_k\},\ \ \mathcal{X}_0\triangleq\{0\}, \\ \mathcal{Y}_k&\triangleq\mathrm{Span}\{\hat{e}_{d+2},\hat{e}_{d+1},\cdots,\hat{e}_{d-k+2}\},\ \ \mathcal{Y}_0\triangleq\{0\}, \end{split} \end{equation} then we have the following properties for $F_d$. \begin{lemma}[Properties of $ F_d $] \label{LM:NCSC_LB_F_D} For any $ d\in\mathbb{N}^+ $ and $\alpha\in\automedpar{0,1}$, $ F_d(x,y;\lambda,\alpha) $ in \eqref{eq:LB_hard_instance_deter} satisfies: \begin{itemize} \item[(i)] The function $ F_d(\cdot,\cdot;\lambda,\alpha) $ is $ L_F $-Lipschitz smooth where $L_F=\max\autobigpar{\frac{200\lambda_1^2\alpha}{\lambda_2},2\lambda_1,2\lambda_2}$.
\item[(ii)] For each fixed $ x\in\mathbb{R}^{d+1} $, $ F_d(x,\cdot;\lambda,\alpha) $ is $ \mu_F $-strongly concave where $ \mu_F=2\lambda_2 $. \item[(iii)] The following properties hold: \begin{enumerate} \item[a)] $ x=y=0\quad \Longrightarrow\quad \nabla_x F_d\in\mathcal{X}_1,\ \nabla_y F_d=0 $. \item[b)] $ x\in\mathcal{X}_k,\ y\in\mathcal{Y}_k\quad \Longrightarrow\quad \nabla_x F_d\in\mathcal{X}_{k+1},\ \nabla_y F_d\in\mathcal{Y}_k $. \item[c)] $ x\in\mathcal{X}_{k+1},\ y\in\mathcal{Y}_k\quad \Longrightarrow\quad \nabla_x F_d\in\mathcal{X}_{k+1},\ \nabla_y F_d\in\mathcal{Y}_{k+1} $. \end{enumerate} \item[(iv)] For $ L\geq\mu>0 $, if $ \lambda=\lambda^*=(\lambda_1^*,\lambda_2^*)=(\frac{L}{2},\frac{\mu}{2}) $ and $ \alpha\leq\frac{\mu}{100L} $, then $ F_d $ is $ L $-Lipschitz smooth. Moreover, for any fixed $ x\in\mathbb{R}^{d+1} $, $ F_d(x,\cdot;\lambda,\alpha) $ is $ \mu $-strongly concave. \end{itemize} \end{lemma} The proof of Lemma \ref{LM:NCSC_LB_F_D} is deferred to Appendix \ref{sec:Apdx_LM_NCSC_LB_F_D}. The first two properties show that the function $F_d$ is Lipschitz smooth and NC-SC; the third property suggests that, starting from $ (x,y)=(0,0) $, the activation process follows an ``alternating zero-chain'' form \citep{ouyang2019lower}. That is, for a linear-span algorithm, when $ x\in\mathcal{X}_k,\ y\in\mathcal{Y}_k$, the next iterate can activate at most the $(k+1)$-th coordinate of $x$ while keeping $y$ fixed; similarly, when $x\in\mathcal{X}_{k+1},\ y\in\mathcal{Y}_k$, the next iterate can activate at most the $(d-k+1)$-th coordinate of $y$. We need the following properties of $ \Phi_d $ for the lower bound argument. \begin{lemma}[Properties of $ \Phi_d $] \label{LM:NCSC_LB_PHI} For any $ \alpha\in\automedpar{0,1} $ and $ x\in\mathbb{R}^{d+1} $, if $ x_d=x_{d+1}=0 $, we have: \begin{itemize} \item[(i)] $ \autonorm{\nabla\Phi_d(x;\lambda,\alpha)}\geq\frac{\lambda_1^2}{8\lambda_2}\alpha^{3/4} $.
\item[(ii)] $\Phi_d\autopar{0;\lambda,\alpha} - \inf_{x\in\mathbb{R}^{d+1}}\Phi_d(x;\lambda,\alpha) \leq \frac{\lambda_1^2}{2\lambda_2}\left(\frac{\sqrt{\alpha}}{2}+10\alpha d\right)$. \end{itemize} \end{lemma} We defer the proof of Lemma \ref{LM:NCSC_LB_PHI} to Appendix \ref{sec:Apdx_LM_NCSC_LB_PHI}. This lemma indicates that, starting from $ (x,y)=(0,0) $ with appropriate parameter settings, the primal function $ \Phi_d $ does not approach stationarity until the last two coordinates are activated. Now we are ready to present our final lower bound result for the general NC-SC case. \begin{theorem}[LB for Gen-NC-SC] \label{THM:LB_NCSC_DETER} For the linear-span first-order algorithm class $ \mathcal{A} $, parameters $ L,\mu,\Delta>0 $, and accuracy $\epsilon$ satisfying $\epsilon^2\leq\min\left(\frac{\Delta L}{6400}, \frac{\Delta L\sqrt{\kappa}}{38400}\right)$, we have \begin{equation} \mathrm{Compl}_\epsilon \autopar{ \mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta},\mathcal{A},\mathbb{O}_{\mathrm{FO}} } = \Omega\autopar{\sqrt{\kappa} \Delta L \epsilon^{-2}}. \end{equation} \end{theorem} The hard instance in the proof is built upon $F_d$ in \eqref{eq:LB_hard_instance_deter}. We choose the scaled function $f(x,y)=\eta^2F_d(\frac{x}{\eta},\frac{y}{\eta};\lambda^*,\alpha)$ as the final hard instance, which preserves the smoothness and strong concavity (by Lemma \ref{lm:scaling}), while an appropriate choice of $\eta$ fulfills the requirements on the initial gap and on the large gradient norm (before thorough activation) of the primal function. The detailed statement and proof of Theorem \ref{THM:LB_NCSC_DETER} are presented in Appendix \ref{sec:Apdx_THM_LB_NCSC_DETER}.
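For intuition, the effect of this scaling can be checked directly from the definitions (a brief sketch of the mechanism behind Lemma \ref{lm:scaling}): since $f(x,y)=\eta^2F_d(\frac{x}{\eta},\frac{y}{\eta};\lambda^*,\alpha)$, we have \begin{equation*} \nabla f(x,y)=\eta\,\nabla F_d\autopar{\tfrac{x}{\eta},\tfrac{y}{\eta};\lambda^*,\alpha}, \qquad \nabla^2 f(x,y)=\nabla^2 F_d\autopar{\tfrac{x}{\eta},\tfrac{y}{\eta};\lambda^*,\alpha}, \end{equation*} so $f$ inherits the $L$-smoothness and the $\mu$-strong concavity in $y$ of $F_d$. Meanwhile, the primal function of $f$ satisfies $\Phi_f(x)=\eta^2\Phi_d(\frac{x}{\eta};\lambda^*,\alpha)$, so the initial gap scales as $\eta^2$ while the gradient norm $\autonorm{\nabla\Phi_f(x)}=\eta\autonorm{\nabla\Phi_d(\frac{x}{\eta};\lambda^*,\alpha)}$ scales as $\eta$; tuning $\eta$ thus trades the two requirements against each other.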
\begin{remark}[Tightness of Theorem \ref{THM:LB_NCSC_DETER}] The best-known upper bounds for general NC-SC problems are $ O(\Delta L\kappa^2\epsilon^{-2}) $ \citep{lin2020gradient, boct2020alternating} and $ \Tilde{O}\autopar{\Delta \sqrt{\kappa}L\epsilon^{-2}\log^2\frac{1}{\epsilon}} $ \citep{lin2020near}. Therefore, significant gaps remain between these upper bounds and our lower bound in terms of the dependence on $ \epsilon $ and $ \kappa $. To mitigate these gaps, we propose faster algorithms in Section \ref{sec:main_Catalyst}. On the other hand, compared to the $\Omega(\Delta L\epsilon^{-2})$ lower bound for nonconvex smooth minimization \citep{carmon2019lower}, our result reveals an explicit dependence on $\kappa$. \end{remark} \subsection{Finite-Sum NC-SC Problems} The second case we consider is \textit{finite-sum NC-SC (FS-NC-SC)} minimax problems, with the function class $\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta}$, the linear-span algorithm class $\mathcal{A}$, and the averaged smooth IFO class $\mathbb{O}_{\mathrm{IFO}}^{L,\mathrm{AS}}$; the complexity is defined in \eqref{eq:defn_LB_FS}. \paragraph{Hard instance construction} To derive the finite-sum hard instance, we modify $F_d$ in \eqref{eq:LB_hard_instance_deter} using orthogonal matrices defined as follows. \begin{definition}[Orthogonal Matrices] For positive integers $a,b,n\in\mathbb{N}^+$, we say a matrix sequence $ \{\mathbf{U}^{(i)}\}_{i=1}^n \in \mathrm{\mathbf{Orth}}(a,b,n) $ if, for all $ i, j\in[n] $ with $ i\neq j $, we have $ \mathbf{U}^{(i)},\mathbf{U}^{(j)}\in\mathbb{R}^{a\times b} $, $ \mathbf{U}^{(i)}(\mathbf{U}^{(i)})^\top=\mathbf{I}\in\mathbb{R}^{a\times a} $, and $\mathbf{U}^{(i)}(\mathbf{U}^{(j)})^\top=\mathbf{0}\in\mathbb{R}^{a\times a}$.
\end{definition} The intuition for the finite-sum hard instance is to combine $n$ independent copies of the hard instance from the general case \eqref{eq:LB_hard_instance_deter}; appropriate orthogonal matrices then stack the $n$ independent pairs of variables of dimensions $d+1$ and $d+2$ into a single pair of variables of dimensions $n(d+1)$ and $n(d+2)$, which yields the desired hard instance. To preserve the zero-chain property, for $\{\mathbf{U}^{(i)}\}_{i=1}^n \in \mathrm{\mathbf{Orth}}(d+1,n(d+1),n)$, $\{\mathbf{V}^{(i)}\}_{i=1}^n \in \mathrm{\mathbf{Orth}}(d+2,n(d+2),n)$, $\forall n,d\in\mathbb{N}^+$ and $x\in\mathbb{R}^{n(d+1)}$, $y\in\mathbb{R}^{n(d+2)}$, we set $\mathbf{U}^{(i)}$ and $\mathbf{V}^{(i)}$ by concatenating $n$ blocks: \begin{equation} \label{eq:LB_FS_Orthogonal_Matrix} \begin{split} \mathbf{U}^{(i)} =\ & \begin{bmatrix} \mathbf{0}_{d+1} & \cdots & \mathbf{0}_{d+1} & \mathbf{I}_{d+1} & \mathbf{0}_{d+1} & \cdots & \mathbf{0}_{d+1} \end{bmatrix},\\ \mathbf{V}^{(i)} =\ & \begin{bmatrix} \mathbf{0}_{d+2} & \cdots & \mathbf{0}_{d+2} & \mathbf{I}_{d+2} & \mathbf{0}_{d+2} & \cdots & \mathbf{0}_{d+2} \end{bmatrix}, \end{split} \end{equation} where $\mathbf{0}_d, \mathbf{I}_d\in\mathbb{R}^{d\times d}$ are the zero and identity matrices, respectively, and the identity block appears in the $i$-th position. Hence, $\mathbf{U}^{(i)}x$ consists of the $((i-1)(d+1)+1)$-th to the $(i(d+1))$-th elements of $x$; a similar property holds for $\mathbf{V}^{(i)}y$. The construction here follows the idea of the deterministic hard instance \eqref{eq:LB_hard_instance_deter}: the basic motivation is that the resulting primal function will be a finite-sum version of the primal function $\Phi_d$ defined in \eqref{eq:LB_hard_instance_deter_Phi}.
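A minimal numerical check of these block-selector matrices (our own small sketch, with illustrative sizes $n=4$, $d=3$) confirms the orthogonality relations and the block-extraction property:

```python
import numpy as np

def U(i, block, n):
    # block selector U^{(i)} in R^{block x (n*block)}: identity block in
    # the i-th position (i is 1-based), zeros elsewhere
    M = np.zeros((block, n * block))
    M[:, (i - 1) * block:i * block] = np.eye(block)
    return M

n, d = 4, 3
block = d + 1
x = np.arange(n * block, dtype=float)
ok_rows = all(np.allclose(U(i, block, n) @ U(i, block, n).T, np.eye(block))
              for i in range(1, n + 1))                    # U U^T = I
ok_orth = all(np.allclose(U(i, block, n) @ U(j, block, n).T, 0.0)
              for i in range(1, n + 1) for j in range(1, n + 1) if i != j)
ok_pick = all(np.allclose(U(i, block, n) @ x,
                          x[(i - 1) * block:i * block])    # i-th block of x
              for i in range(1, n + 1))
print(ok_rows and ok_orth and ok_pick)  # True
```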
We choose the following functions $H_d:\mathbb{R}^{d+1}\times\mathbb{R}^{d+2}\rightarrow\mathbb{R}$ and $\Gamma_d^n:\mathbb{R}^{n(d+1)}\rightarrow\mathbb{R}$, defined as \begin{equation} \begin{split} H_d(x,y;\lambda,\alpha) \triangleq\ & \lambda_1\autoprod{B_dx,y}- \lambda_2\|y\|^2-\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,x}- \frac{\lambda_1^2\alpha}{4\lambda_2}x_{d+1}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2},\\ \Gamma_d^n(x) \triangleq\ & \sum_{i=1}^n\sum_{j=i(d+1)-d}^{i(d+1)-1}\Gamma(x_j). \end{split} \end{equation} Then, for $\{\mathbf{U}^{(i)}\}_{i=1}^n \in \mathrm{\mathbf{Orth}}(d+1,n(d+1),n)$ and $\{\mathbf{V}^{(i)}\}_{i=1}^n \in \mathrm{\mathbf{Orth}}(d+2,n(d+2),n)$, we define $\bar{f}_i, \bar{f}: \mathbb{R}^{n(d+1)}\times\mathbb{R}^{n(d+2)}\rightarrow\mathbb{R}$ as \begin{equation} \label{eq:LB_hard_instance_FS} \begin{split} \bar{f}_i(x,y) \triangleq\ & H_d\autopar{\mathbf{U}^{(i)}x,\mathbf{V}^{(i)}y;\lambda,\alpha}+\frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n(x),\\ \bar{f}(x,y) \triangleq\ & \frac{1}{n}\sum_{i=1}^{n}\bar{f}_i(x,y) = \frac{1}{n}\sum_{i=1}^{n}\automedpar{H_d\autopar{\mathbf{U}^{(i)}x,\mathbf{V}^{(i)}y;\lambda,\alpha}+\frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n(x)}. \end{split} \end{equation} Note that, by denoting $\Gamma_d(x)\triangleq\sum_{i=1}^{d}\Gamma(x_i)$, it is easy to verify that \begin{equation} \label{eq:Gamma_nd_equivalent} \Gamma_d^n(x)=\sum_{i=1}^n\sum_{j=i(d+1)-d}^{i(d+1)-1}\Gamma(x_j)=\sum_{i=1}^n\Gamma_d\autopar{\mathbf{U}^{(i)}x}=\sum_{i=1}^n\sum_{j=1}^{d}\Gamma\autopar{\autopar{\mathbf{U}^{(i)}x}_j}. \end{equation} Defining $ u^{(i)}\triangleq\mathbf{U}^{(i)}x $, we summarize the properties of the above functions in the following lemma.
\begin{lemma}[Properties of $\bar{f}$] \label{lm:properties_hard_instance_FS_AS} The functions $\{\bar{f}_i\}_i$ and $\bar{f}$ in \eqref{eq:LB_hard_instance_FS} satisfy: \begin{itemize} \item[(i)] $\{\bar{f}_i\}_i$ is $L_F$-AS where $L_F=\sqrt{\frac{1}{n}\max\autobigpar{16\lambda_1^2+8\lambda_2^2, \frac{C_\gamma^2\lambda_1^4\alpha^2}{n\lambda_2^2}+\frac{\lambda_1^4\alpha^2}{\lambda_2^2}+8\lambda_1^2}}$. \item[(ii)] $\bar{f}$ is $\mu_F$-strongly concave in $y$ where $\mu_F=\frac{2\lambda_2}{n}$. \item[(iii)] For $n\in\mathbb{N}^+$ and $ L\geq 2n\mu>0 $, if we set $ \lambda=\lambda^*=(\lambda_1^*,\lambda_2^*)=\autopar{\sqrt{\frac{n}{40}}L,\frac{n\mu}{2}} $ and $\alpha=\frac{n\mu}{50L}\in\automedpar{0,1}$, then $\{\bar{f}_i\}_i$ is $L$-AS and $\bar{f}$ is $\mu$-strongly concave in $y$. \item[(iv)] Define $ \bar{\Phi}(x)\triangleq \max_y \bar{f}(x,y)$; then we have \begin{equation} \bar{\Phi}(x)=\frac{1}{n}\sum_{i=1}^{n}\bar{\Phi}_i(x), \quad \text{where}\quad \bar{\Phi}_i(x)\triangleq\Phi_d(\mathbf{U}^{(i)}x), \end{equation} and $\Phi_d$ is defined in \eqref{eq:LB_hard_instance_deter_Phi}. \end{itemize} \end{lemma} We defer the proof of Lemma \ref{lm:properties_hard_instance_FS_AS} to Appendix \ref{sec:Apdx_hard_instance_FS_AS_properties}. From Lemma \ref{LM:NCSC_LB_PHI}, we have \begin{equation} \label{eq:hard_instance_FS_initial_gap} \begin{split} \bar{\Phi}(0)-\inf_{x\in\mathbb{R}^{n(d+1)}}\bar{\Phi}(x) =\ & \sup_{x\in\mathbb{R}^{n(d+1)}}\frac{1}{n}\sum_{i=1}^{n}\autopar{\bar{\Phi}(0)-\bar{\Phi}_i(x)} \leq \frac{1}{n}\sum_{i=1}^{n}\sup_{x\in\mathbb{R}^{n(d+1)}}\autopar{\bar{\Phi}(0)-\bar{\Phi}_i(x)}\\ =\ & \frac{1}{n}\sum_{i=1}^{n}\autopar{\sup_{u\in\mathbb{R}^{d+1}}\autopar{\Phi_d(0)-\Phi_d(u)}} \leq \frac{\lambda_1^2}{2\lambda_2}\autopar{\frac{\sqrt{\alpha}}{2}+10\alpha d}, \end{split} \end{equation} where the last equality uses $\bar{\Phi}(0)=\Phi_d(0)$ and the surjectivity of $x\mapsto\mathbf{U}^{(i)}x$. Define the index set $ \mathcal{I}\subseteq[n] $ as the set of indices $i$ such that $ u^{(i)}_d=u^{(i)}_{d+1}=0 $.
Suppose that $ |\mathcal{I}|>\frac{n}{2} $; then, by orthogonality and Lemma \ref{LM:NCSC_LB_PHI}, we have \begin{equation} \label{eq:LB_FS_nonconvergence} \begin{split} \autonorm{\nabla \bar{\Phi}(x)}^2 =\ & \autonorm{\frac{1}{n}\sum_{i=1}^n\nabla\bar{\Phi}_i(x)}^2 = \autonorm{\frac{1}{n}\sum_{i=1}^n\nabla\autopar{\Phi_d\autopar{\mathbf{U}^{(i)}x}}}^2 = \frac{1}{n^2}\autonorm{\sum_{i=1}^n\autopar{\mathbf{U}^{(i)}}^\top\nabla\Phi_d\autopar{\mathbf{U}^{(i)}x}}^2\\ =\ & \frac{1}{n^2}\sum_{i=1}^n\autonorm{\autopar{\mathbf{U}^{(i)}}^\top\nabla\Phi_d\autopar{\mathbf{U}^{(i)}x}}^2 = \frac{1}{n^2}\sum_{i=1}^n\autonorm{\nabla\Phi_d\autopar{u^{(i)}}}^2 \geq \frac{1}{n^2}\sum_{i\in\mathcal{I}}\autonorm{\nabla\Phi_d\autopar{u^{(i)}}}^2\\ \geq\ & \frac{1}{n^2}\frac{n}{2}\autopar{\frac{\lambda_1^2}{8\lambda_2}\alpha^{\frac{3}{4}}}^2 = \frac{\lambda_1^4}{128n\lambda_2^2}\alpha^{\frac{3}{2}}. \end{split} \end{equation} Now we arrive at our final theorem for the averaged smooth FS-NC-SC case. \begin{theorem}[LB for AS FS-NC-SC] \label{THM:LB_NCSC_FS_AS} For the linear-span algorithm class $ \mathcal{A} $, parameters $ L,\mu,\Delta>0 $, and component size $n\in\mathbb{N}^+$, if $ L\geq 2n\mu>0 $ and the accuracy $\epsilon$ satisfies $ \epsilon^2 \leq \min\autopar{ \frac{\sqrt{\alpha}L^2\Delta}{76800n\mu}, \frac{\alpha L^2\Delta}{1280n\mu}, \frac{L^2\Delta}{\mu} } $ where $\alpha=\frac{n\mu}{50L}\in\automedpar{0,1}$, then we have \begin{equation} \mathrm{Compl}_\epsilon\autopar{\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta},\mathcal{A},\mathbb{O}_{\mathrm{IFO}}^{L,\mathrm{AS}}} = \Omega\autopar{n+\sqrt{n\kappa} \Delta L \epsilon^{-2}}.
\end{equation} \end{theorem} The theorem above indicates that for any $\mathtt{A}\in\mathcal{A}$, we can construct a function $f(x,y)=\frac{1}{n}\sum_{i=1}^{n}f_i(x,y)$ such that $ f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta} $ and $ \{f_i\}_i $ is $ L $-AS, and $ \mathtt{A} $ requires at least $\Omega\autopar{n+\sqrt{n\kappa} \Delta L \epsilon^{-2}}$ IFO calls to attain an approximate stationary point of its primal function (in expectation). The hard instance construction is based on $\bar{f}$ and $\bar{f}_i$ in \eqref{eq:LB_hard_instance_FS}, combined with a scaling trick similar to the one in the general case. We also remark that the lower bound holds for small enough $\epsilon$, and the requirement on $\epsilon$ is comparable to those in the existing literature, e.g., \citep{zhou2019lower,han2021lower}. The detailed statement and proof of the theorem are deferred to Appendix \ref{sec:Apdx_THM_LB_NCSC_FS_AS}. \begin{remark}[Tightness of Theorem \ref{THM:LB_NCSC_FS_AS}] The state-of-the-art upper bound for NC-SC finite-sum problems is $\tilde{O}(n+\sqrt{n}\kappa^2 \Delta L\epsilon^{-2})$ when $n \geq \kappa^2$ and $O\autopar{\autopar{n\kappa+\kappa^2} \Delta L\epsilon^{-2}}$ when $n \leq \kappa^2$ \citep{luo2020stochastic,xu2020enhanced}. Note that there is still a large gap between the upper and lower bounds in the dependence on $\kappa$ and $n$, which motivates the design of faster algorithms for the FS-NC-SC case; we address this in Section \ref{sec:main_Catalyst}.
Note that a weaker result on the lower bound of nonconvex finite-sum averaged smooth minimization is $\Omega(\sqrt{n}\Delta L\epsilon^{-2})$ \citep{fang2018spider,zhou2019lower,li2020page}; here, our result presents the dependence on $\kappa$ explicitly. \end{remark} \section{Faster Algorithms for NC-SC Minimax Problems} \label{sec:main_Catalyst} In this section, we introduce a generic Catalyst acceleration scheme that turns existing optimizers for (finite-sum) SC-SC minimax problems into efficient, near-optimal algorithms for (finite-sum) NC-SC minimax optimization. Rooted in the inexact accelerated proximal point algorithm, the idea of Catalyst acceleration was introduced in \cite{lin2015universal} for convex minimization and later extended to nonconvex minimization in \cite{paquette2018catalyst} and to nonconvex-concave minimax optimization in~\cite{yang2020catalyst}. In contrast, we focus on NC-SC minimax problems. The backbone of our Catalyst framework is to repeatedly solve regularized subproblems of the form: $$\min_{x\in \mathbb{R}^{d_1}}\max_{y\in \mathbb{R}^{d_2}} f(x, y) + L\|x-\Tilde{x}_t\|^2 - \frac{\tau}{2}\|y-\Tilde{y}_t\|^2,$$ where $\Tilde{x}_t$ and $\Tilde{y}_t$ are carefully chosen prox-centers, and the parameter $\tau\geq 0$ is selected such that the condition numbers of the $x$-component and the $y$-component of these subproblems are well-balanced. Since $f$ is $L$-Lipschitz smooth and $\mu$-strongly concave in $y$, the above auxiliary problem is $L$-strongly convex in $x$ and $(\mu+\tau)$-strongly concave in $y$. Therefore, it can be easily solved by a wide family of off-the-shelf first-order algorithms with a linear convergence rate.
Our Catalyst framework, presented in Algorithm \ref{catalyst ncc 1}, consists of three crucial components: an inexact proximal point step for the primal update, an inexact accelerated proximal point step for the dual update, and a linearly-convergent algorithm for solving the subproblems. \paragraph{Inexact proximal point step in the primal.} The $x$-update in the outer loop, $\{x_0^t\}_{t=1}^T$, can be viewed as applying an inexact proximal point method to the primal function $\Phi(x)$, which requires solving the following sequence of auxiliary problems: \begin{equation*} \label{auxiliary prob ncc} \min_{x\in \mathbb{R}^{d_1}}\max_{y\in \mathbb{R}^{d_2}} \left[\hat{f}_{t}(x,y)\triangleq f(x,y) + L\Vert x - x^t_{0}\Vert^2 \right]. \tag{$\star$} \end{equation*} Inexact proximal point methods have been explored in minimax optimization in several works, e.g., \citep{lin2020near, rafique2018non}. Our scheme is distinct from these works in two aspects: (i) we introduce a new subroutine to approximately solve the auxiliary problems (\ref{auxiliary prob ncc}) with near-optimal complexity, and (ii) the inexactness is measured by an adaptive stopping criterion using the gradient norms: \begin{equation} \label{ncc criteion} \|\nabla \hat{f}_t(x^{t+1}_{0}, y^{t+1}_{0})\|^2 \leq \alpha_t\|\nabla \hat{f}_t(x^t_{0}, y^t_{0})\|^2, \end{equation} where $\{\alpha_t\}_t$ is carefully chosen. Using the adaptive stopping criterion significantly reduces the complexity of solving the auxiliary problems: we will show that the number of steps required is only logarithmic in $L$ and $\mu$, without any dependence on the target accuracy $\epsilon$. Although the auxiliary problem is $(L, \mu)$-SC-SC and can be solved with linear convergence by algorithms such as extragradient and OGDA, these algorithms are not optimal in terms of the dependence on the condition number when $L>\mu$ \citep{zhang2019lower}.
\paragraph{Inexact accelerated proximal point step in the dual.} To solve the auxiliary problem with optimal complexity, we introduce an inexact accelerated proximal point scheme. The key idea is to add an extra regularization in $y$ to the objective such that the strong convexity and strong concavity are well-balanced. Therefore, we propose to iteratively solve the subproblems: \begin{equation*} \label{subprob} \min_{x\in \mathbb{R}^{d_1}}\max_{y\in \mathbb{R}^{d_2}} \left[\Tilde{f}_{t,k}(x,y)\triangleq \hat{f}_t(x,y) - \frac{\tau}{2}\Vert y - z_k\Vert^2 \right], \tag{$\star\star$} \end{equation*} where $\{z_k\}_k$ is updated analogously to Nesterov's accelerated method \citep{nesterov2005smooth} and $\tau\geq 0$ is the regularization parameter. For example, by setting $\tau=L-\mu$, the subproblems become $(L,L)$-SC-SC and can be approximately solved by the extragradient method with optimal complexity, as discussed in more detail in the next section. Finally, when solving these subproblems, we use the stopping criterion $\|\nabla \Tilde{f}_{t,k}(x, y)\|^2 \leq \epsilon^t_k$ with a time-varying accuracy $\epsilon^t_k$ that decays exponentially in $k$. \begin{algorithm}[t] \caption{Catalyst for NC-SC Minimax Problems} \setstretch{1.25} \begin{algorithmic}[1] \label{catalyst ncc 1} \REQUIRE objective $f$, initial point $(x_0, y_0)$, smoothness constant $L$, strong-concavity const.~$\mu$, and param.~$\tau>0$. \STATE Let $(x_{0}^0, y_{0}^0) = (x_0, y_0)$ and $q = \frac{\mu}{\mu+\tau}$. \FORALL{$t = 0,1,..., T$} \STATE Let $z_1 = y^t_{0}$ and $k=1$. \STATE Let $\hat{f}_{t}(x,y)\triangleq f(x,y) + L\Vert x - x^t_{0}\Vert^2$.
\REPEAT \STATE Find an inexact solution $(x^t_{k}, y^t_{k})$ to the problem below by algorithm $\mathcal{M}$ with initial point $(x^t_{k-1},y^t_{k-1})$: \begin{equation*} \begin{split} \min_{x\in \mathbb{R}^{d_1}}\max_{y\in \mathbb{R}^{d_2}} \automedpar{\Tilde{f}_{t,k}(x,y)\triangleq f(x,y) +L\|x-x^t_{0}\|^2- \frac{\tau}{2}\Vert y - z_k\Vert^2} \end{split}\tag{$\star\star$} \end{equation*} such that $\|\nabla \Tilde{f}_{t,k}(x^t_{k}, y^t_{k})\|^2 \leq \epsilon^t_k$. \STATE Let $z_{k+1} = y^t_{k} + \frac{\sqrt{q}-q}{\sqrt{q}+q}(y^t_{k}-y^t_{k-1}), k=k+1$. \UNTIL{$\|\nabla \hat{f}_t(x^t_{k}, y^t_{k})\|^2 \leq \alpha_t\|\nabla \hat{f}_t(x^t_{0}, y^t_{0})\|^2$} \STATE Set $(x^{t+1}_{0}, y^{t+1}_{0}) = (x^t_{k}, y^t_{k})$. \ENDFOR \ENSURE $\hat{x}_{T}$, which is uniformly sampled from $x^1_{0},...,x^T_{0}$. \end{algorithmic} \end{algorithm} \paragraph{Linearly-convergent algorithms for SC-SC subproblems.} Let $\mathcal{M}$ be any algorithm that solves the subproblem (\ref{subprob}) (denoting $(x^*, y^*)$ as the optimal solution) at a linear convergence rate such that after $N$ iterations: \begin{align} \Vert x_N - x^*\Vert^2+\Vert y_N-y^*\Vert^2 \leq \left(1-\frac{1}{\Lambda^{\mathcal{M}}_{ \mu, L}(\tau)}\right)^N[\Vert x_0-x^*\Vert^2 + \Vert y_0-y^*\Vert^2], \end{align} if $\mathcal{M}$ is a deterministic algorithm; if $\mathcal{M}$ is randomized, the same bound holds with the left-hand side replaced by its expectation. The choices for $\mathcal{M}$ include, but are not limited to, extragradient (EG) \citep{tseng1995linear}, optimistic gradient descent ascent (OGDA) \citep{gidel2018variational}, SVRG \citep{balamurugan2016stochastic}, SPD1-VR \citep{tan2018stochastic}, SVRE \citep{chavdarova2019reducing}, Point-SAGA \citep{luo2019stochastic}, and the variance reduced prox-method \citep{carmon2019variance}. For example, in the case of EG, $\Lambda^{\mathcal{M}}_{\mu, L}(\tau) = \frac{L+\max\{2L,\tau\}}{4\min\{L, \mu+\tau\}}$~\citep{tseng1995linear}.
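As a concrete illustration of such a subroutine $\mathcal{M}$, the sketch below runs extragradient on a toy SC-SC saddle problem (a hedged, self-contained example; the step size is an illustrative choice, not the tuned value from our analysis):

```python
import numpy as np

# Extragradient (EG) on the toy SC-SC saddle problem
#   min_x max_y  0.5*x^2 + x*y - 0.5*y^2,
# whose unique saddle point is (0, 0).
def field(z):
    x, y = z
    return np.array([x + y, y - x])   # (grad_x f, -grad_y f)

eta = 0.3                             # illustrative step size
z = np.array([1.0, 1.0])
for _ in range(200):
    z_half = z - eta * field(z)       # extrapolation step
    z = z - eta * field(z_half)       # update using the extrapolated point
print(np.linalg.norm(z) < 1e-6)  # True: iterates converge to the saddle point
```

Each extragradient step makes two oracle calls: one at the current point and one at the extrapolated point, which is what yields linear convergence on SC-SC problems.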
\subsection{Convergence Analysis} In this section, we analyze the complexity of each of the three components discussed above. Let $T$ denote the outer-loop complexity, $K$ the inner-loop complexity, and $N$ the number of iterations for $\mathcal{M}$ (the expected number if $\mathcal{M}$ is randomized) to solve subproblem (\ref{subprob}). The total complexity of Algorithm \ref{catalyst ncc 1} is computed by multiplying $T$, $K$, and $N$. Later, we will provide a guideline for choosing the parameter $\tau$ to achieve the best complexity, given an algorithm $\mathcal{M}$. \begin{theorem}[Outer loop] \label{THM CATALYST NCSC} Suppose the function $f$ is NC-SC with strong concavity parameter $\mu$ and is $L$-Lipschitz smooth. If we choose $\alpha_t = \frac{\mu^5}{504L^5}$ for $t>0$ and $\alpha_0 = \frac{\mu^5}{576\max\{1,L^7\}}$, the output $\hat{x}_T$ from Algorithm \ref{catalyst ncc 1} satisfies \begin{align} \label{ncsc out complexity} \mathbb{E}\|\nabla\Phi(\hat{x}_T)\|^2 \leq \frac{268L}{5T}\Delta + \frac{28L}{5T}D_y^0, \end{align} where $\Delta = \Phi(x_0) -\inf_{x}\Phi(x), D_y^0 = \|y_0 - y^*(x_0)\|^2 $ and $y^*(x_0)=\argmax_{y\in\mathbb{R}^{d_2}} f(x_0, y)$. \end{theorem} This theorem implies that the algorithm finds an $\epsilon$-stationary point of $\Phi$ after inexactly solving (\ref{auxiliary prob ncc}) $T = O\left( L(\Delta+D_y^0)\epsilon^{-2} \right)$ times. The dependence on $D_y^0$ can be eliminated if we select the initialization $y_0$ close enough to $y^*(x_0)$, which only requires an additional logarithmic cost of maximizing a strongly concave function.
\begin{theorem}[Inner loop] \label{thm catalyst scsc} Under the same assumptions as in Theorem \ref{THM CATALYST NCSC}, if we choose $\epsilon^t_k = \frac{\sqrt{2}\mu}{2}(1-\rho)^k\gap_{\hat{f}_t}(x^t_{0}, y^t_{0})$ with $\rho < \sqrt{q}=\sqrt{\frac{\mu}{\mu+\tau}}$, we have \begin{align*} \nonumber \|\nabla \hat{f}_t(x^t_{k}, y^t_{k})\|^2 \leq \automedpar{\frac{5508L^2}{\mu^2(\sqrt{q}-\rho)^2} + \frac{18\sqrt{2}L^2}{\mu}}(1-\rho)^{k}\|\nabla \hat{f}_t(x^t_{0}, y^t_{0})\|^2. \end{align*} \end{theorem} In particular, setting $\rho = 0.9\sqrt{q}$, Theorem \ref{thm catalyst scsc} implies that after inexactly solving (\ref{subprob}) $K = \Tilde{O}\left(\sqrt{(\tau+\mu)/\mu}\log\frac{1}{\alpha_t}\right)$ times, the stopping criterion (\ref{ncc criteion}) is satisfied. This complexity shrinks as $\tau$ decreases. However, we should not choose $\tau$ too small, because the smaller $\tau$ is, the harder it is for $\mathcal{M}$ to solve (\ref{subprob}). The following theorem captures the complexity for algorithm $\mathcal{M}$ to solve the subproblem. \begin{theorem}[Complexity of solving subproblems ($\star\star$)] \label{thm catalyst inner} Under the same assumptions as in Theorem \ref{THM CATALYST NCSC} and with the choice of $\epsilon_k^t$ in Theorem \ref{thm catalyst scsc}, the number of iterations (expected number of iterations if $\mathcal{M}$ is stochastic) for $\mathcal{M}$ to solve (\ref{subprob}) such that $\|\nabla \Tilde{f}_{t,k}(x, y)\|^2 \leq \epsilon^t_k$ is $$N = O \left(\Lambda^{\mathcal{M}}_{\mu, L}(\tau) \log\left(\frac{\max\{1,L,\tau\}}{\min\{1,\mu \}} \right)\right).$$ \end{theorem} The above result implies that the subproblems can be solved within a constant number of iterations that depends only on $L, \mu, \tau$ and $\Lambda^{\mathcal{M}}_{\mu, L}$. This largely benefits from the use of warm-starting and of a stopping criterion with time-varying accuracy. In contrast, other inexact proximal point algorithms in minimax optimization, such as \citep{yang2020catalyst, lin2020near}, fix the target accuracy, so their complexity of solving the subproblems usually carries an extra logarithmic factor in $1/\epsilon$. The overall complexity of the algorithm follows immediately by combining the above three theorems. \begin{corollary} \label{THM CATALYST TOTAL} Under the same assumptions as in Theorem \ref{THM CATALYST NCSC} and the setting in Theorem \ref{thm catalyst scsc}, the total number (expected number if $\mathcal{M}$ is randomized) of gradient evaluations for Algorithm \ref{catalyst ncc 1} to find an $\epsilon$-stationary point of $\Phi$ is \begin{equation} \Tilde{O}\left(\frac{\Lambda^{\mathcal{M}}_{\mu, L}(\tau)L(\Delta+D_y^0) }{\epsilon^2}\sqrt{\frac{\mu+\tau}{\mu}} \right). \end{equation} \end{corollary} In order to minimize the total complexity, we should choose the regularization parameter $\tau$ that minimizes $\Lambda^{\mathcal{M}}_{\mu, L}(\tau)\sqrt{\mu+ \tau}$.
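This one-dimensional trade-off can be checked numerically. The sketch below grid-searches $\tau\mapsto\Lambda^{\mathcal{M}}_{\mu,L}(\tau)\sqrt{\mu+\tau}$ using the EG rate constant quoted earlier, for illustrative values of $L$ and $\mu$ (a numerical illustration, not a proof):

```python
import numpy as np

# Grid search over tau for Lambda_EG(tau) * sqrt(mu + tau), with
# Lambda_EG(tau) = (L + max(2L, tau)) / (4 * min(L, mu + tau)) as quoted
# in the text for extragradient.  L and mu below are illustrative values.
L, mu = 10.0, 0.5
taus = np.linspace(0.0, 4 * L, 100001)
lam = (L + np.maximum(2 * L, taus)) / (4 * np.minimum(L, mu + taus))
obj = lam * np.sqrt(mu + taus)
tau_star = taus[np.argmin(obj)]
print(abs(tau_star - (L - mu)) < 1e-2)  # True: minimizer sits at tau = L - mu
```

The minimizer $\tau = L - \mu$ recovered here is exactly the choice made for Catalyst-EG/OGDA in the next subsection.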
\subsection{Specific Algorithms and Complexities} \label{subsec algorithms} In this subsection, we discuss specific choices for $\mathcal{M}$ and the corresponding optimal choices of $\tau$, as well as the resulting total complexities for solving NC-SC problems. \paragraph{Catalyst-EG/OGDA algorithm.} When solving NC-SC minimax problems in the general setting, we set $\mathcal{M}$ to be either the extra-gradient method (EG) or optimistic gradient descent ascent (OGDA). Hence, we have $\Lambda^{\mathcal{M}}_{\mu, L}(\tau) = \frac{L+\max\{2L,\tau\}}{4\min\{L, \mu+\tau\}}$~\citep{tseng1995linear, gidel2018variational, azizian2020tight}. Minimizing $\Lambda^{\mathcal{M}}_{\mu, L}(\tau)\sqrt{\mu+\tau}$ yields that the optimal choice for $\tau$ is $L - \mu$. This leads to a total complexity of \begin{equation} \Tilde{O}\left(\sqrt{\kappa}L(\Delta+D_y^0)\epsilon^{-2}\right). \end{equation} \begin{remark} The above complexity matches the lower bound in Theorem \ref{THM:LB_NCSC_DETER}, up to a logarithmic factor in $L$ and $\kappa$. It improves over Minimax-PPA \citep{lin2020near} by a factor of $\log^2(1/\epsilon)$ and over GDA \citep{lin2020gradient} by a factor of $\kappa^{\frac{3}{2}}$, and therefore achieves the best of both worlds in terms of dependency on $\kappa$ and $\epsilon$. In addition, our Catalyst-EG/OGDA algorithm does not require the bounded domain assumption on $y$, unlike \citep{lin2020near}. \end{remark} \paragraph{Catalyst-SVRG/SAGA algorithm.} When solving NC-SC minimax problems in the averaged smooth finite-sum setting, we set $\mathcal{M}$ to be either SVRG or SAGA. Hence, we have $\Lambda^{\mathcal{M}}_{\mu, L}(\tau) \propto n+ \big(\frac{L+\sqrt{2}\max\{2L,\tau\}}{\min\{L, \mu+\tau\}}\big)^2$~\citep{balamurugan2016stochastic}\footnote{Although \cite{balamurugan2016stochastic} assumes individual smoothness, their analysis can be extended to average smoothness.}.
Minimizing $\Lambda^{\mathcal{M}}_{\mu, L}(\tau)\sqrt{\mu+\tau}$ shows that the best choice for $\tau$ is (proportional to) $\max\Big\{{\frac{L}{\sqrt{n}}}-\mu, 0 \Big\}$, which leads to the total complexity of \begin{equation} \tilde{O}\left(\left(n+n^{\frac{3}{4}}\sqrt{\kappa} \right)L(\Delta+D_y^0)\epsilon^{-2} \right). \end{equation} \begin{remark} According to the lower bound established in Theorem \ref{THM:LB_NCSC_FS_AS}, the dependency on $\kappa$ in the above upper bound is nearly tight, up to logarithmic factors. Recall that SREDA \citep{luo2020stochastic} and SREDA-boost \citep{xu2020enhanced} achieve the complexity of $\Tilde{O}\left(\kappa^{2} \sqrt{n} \epsilon^{-2}+n+(n+\kappa) \log (\kappa)\right)$ for $n \geq \kappa^{2}$ and $O\left(\left(\kappa^{2}+\kappa n\right) \epsilon^{-2}\right)$ for $n \leq \kappa^{2}$. Hence, our Catalyst-SVRG/SAGA algorithm attains better complexity in the regime $n\leq \kappa^4$. Particularly, in the critical regime $\kappa = \Omega(\sqrt{n})$ arising in statistical learning \citep{shalev2014understanding}, our algorithm performs strictly better. \end{remark} \section{Conclusion} \label{sec:main_conclusion} In this work, we take an initial step towards understanding the fundamental limits of minimax optimization in the nonconvex-strongly-concave setting for both general and finite-sum cases, and bridge the gaps between lower and upper bounds. It remains interesting to investigate whether the dependence on $n$ can be further tightened in the complexity for finite-sum NC-SC minimax optimization. \section{Notations} \label{sec:Apdx_notations} For convenience, we summarize some of the notations used in the paper. \begin{itemize}[itemsep=2pt] \item SC / C / NC / WC: strongly convex, convex, nonconvex, weakly convex. \item FS: finite-sum. \item $L$-S: $L$-Lipschitz smooth. $L$-IS / AS: $L$-Lipschitz individual / averaged smoothness.
\item SOTA: state-of-the-art, LB / UB: lower / upper bound. \item FO / IFO: first-order oracle, incremental first-order oracle, denoted by $\mathbb{O}_{\mathrm{FO}}$ and $\mathbb{O}_{\mathrm{IFO}}$. \item $\mathcal{A}$: the linear-span first-order algorithm class. \item $\Phi(x)$, $\Psi(y)$: primal and dual functions of $f(x,y)$. \item $ \nabla_x f $, $ \nabla_y f $: gradients of a function $f$ with respect to $x$ and $y$. Also we set $\nabla f=\autopar{\nabla_x f, \nabla_y f}$. \item $ \nabla_{xx}^2 f $, $ \nabla_{xy}^2 f $, $ \nabla_{yx}^2 f $, $ \nabla_{yy}^2 f $: the Hessians of $f(x,y)$ with respect to different components. \item $ \{\mathbf{U}^{(i)}\}_{i=1}^n \in \mathrm{\mathbf{Orth}}(a,b,n) $: a matrix sequence such that for each $ i, j\in[1, n] $ with $ i\neq j $, $ \mathbf{U}^{(i)},\mathbf{U}^{(j)}\in\mathbb{R}^{a\times b} $ and $ \mathbf{U}^{(i)}(\mathbf{U}^{(i)})^\top=\mathbf{I}\in\mathbb{R}^{a\times a} $ and $\mathbf{U}^{(i)}(\mathbf{U}^{(j)})^\top=\mathbf{0}\in\mathbb{R}^{a\times a}$. Sometimes we use $u^{(i)}\triangleq\mathbf{U}^{(i)}x$. \item $e_i$: unit vector with the $i$-th element equal to $1$. \item $0$: the zero scalar or vector. \item $ \mathcal{X}_k=\mathrm{Span}\{e_1,e_2,\cdots,e_k\}, \mathcal{Y}_k=\mathrm{Span}\{e_{d+1},e_d,\cdots,e_{d-k+2}\}, \mathcal{X}_0=\mathcal{Y}_0=\{0\} $. \item $a\vee b\triangleq\max\autobigpar{a, b}$, $a\wedge b\triangleq\min\autobigpar{a, b}$. \item $\autonorm{\cdot}$: $\ell_2$-norm. \item $\mathbb{N}^+$: all positive integers. \item $\mathbb{N}$: all nonnegative integers. \item $\dom f$: the domain of a function $f$. \item $d_1, d_2\in\mathbb{N}^+$: dimensions of $x$ and $y$.
\item $x_d$: the $d$-th coordinate of $x$, $x^t$: the variable $x$ in the $t$-th iteration (in Section \ref{sec:main_LB_NCSC} and Appendix \ref{sec:Apdx_THM_LB_NCSC} only). \end{itemize} \section{Useful Lemmas and Proofs of Section \ref{SEC:MAIN_PRELIM}} \begin{lemma} [Lemma B.2 \citep{lin2020near}] \label{lin's lemma} Assume $f(\cdot, y)$ is $\mu_x$-strongly convex for all $y\in\mathbb{R}^{d_2}$ and $f(x, \cdot)$ is $\mu_y$-strongly concave for all $x\in\mathbb{R}^{d_1}$ (we will later refer to this as $(\mu_x, \mu_y)$-SC-SC) and $f$ is $L$-Lipschitz smooth. Then we have \begin{enumerate}[label=\alph*)] \item $y^*(x) = \argmax_{y\in\mathbb{R}^{d_2}} f(x,y)$ is $\frac{L}{\mu_y}$-Lipschitz; \item $\Phi(x) = \max_{y\in\mathbb{R}^{d_2}} f(x,y)$ is $\frac{2L^2}{\mu_y}$-Lipschitz smooth and $\mu_x$-strongly convex with $\nabla\Phi(x) = \nabla_xf(x, y^*(x))$; \item $x^*(y) = \argmin_{x\in\mathbb{R}^{d_1}} f(x,y)$ is $\frac{L}{\mu_x}$-Lipschitz; \item $\Psi(y) = \min_{x\in\mathbb{R}^{d_1}} f(x,y)$ is $\frac{2L^2}{\mu_x}$-Lipschitz smooth and $\mu_y$-strongly concave with $\nabla\Psi(y) = \nabla_yf(x^*(y), y)$. \end{enumerate} \end{lemma} \begin{lemma} \label{criterion relation} Under the same assumptions as Lemma \ref{lin's lemma}, we have \begin{enumerate}[label=\alph*)] \item $\gap_f(x,y) \leq \frac{L^2}{\mu_y}\|x-x^*\|^2 + \frac{L^2}{\mu_x}\|y-y^*\|^2 $, where $(x^*, y^*)$ is the optimal solution to $\min_{x\in \mathbb{R}^{d_1}}\max_{y\in \mathbb{R}^{d_2}}\ f(x,y)$. \item $\gap_f(x,y) \leq \frac{1}{2\mu_x}\|\nabla_x f(x,y)\|^2 + \frac{1}{2\mu_y}\|\nabla_y f(x,y)\|^2$. \item $\frac{\mu_x}{2}\|x-x^*\|^2 + \frac{\mu_y}{2}\|y-y^*\|^2 \leq \gap_f(x,y). $ \item $\|\nabla_x f(x, y)\|^2 + \|\nabla_y f(x,y)\|^2 \leq 4L^2(\|x-x^*\|^2+\|y-y^*\|^2)$.
\end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[label=\alph*)] \item Because $\Phi(x)$ is $\frac{2L^2}{\mu_y}$-smooth by Lemma \ref{lin's lemma} and $\nabla\Phi(x^*) = 0$, we have $ \Phi(x) - \Phi(x^*) \leq \frac{L^2}{\mu_y}\|x-x^*\|^2. $ Similarly, because $\Psi(y)$ is $\frac{2L^2}{\mu_x}$-smooth and $\nabla\Psi(y^*) = 0$, we have $\Psi(y^*)-\Psi(y) \leq \frac{L^2}{\mu_x}\|y-y^*\|^2$. We reach the conclusion by noting that $\gap_f(x,y) = \Phi(x) - \Psi(y) $ and $\Phi(x^*) =\Psi(y^*)$. \item Because $f(\cdot, y)$ is $\mu_x$-strongly convex and $\nabla_x f(x^*(y), y) = 0$, we have $f(x,y) - \min_x f(x,y) \leq \langle \nabla_x f(x^*(y), y), x-x^*(y)\rangle + \frac{1}{2\mu_x}\|\nabla_x f(x, y)-\nabla_x f(x^*(y), y)\|^2\leq\frac{1}{2\mu_x}\|\nabla_x f(x, y)\|^2$. Similarly, we have $\max_y f(x,y) - f(x,y) \leq \frac{1}{2\mu_y}\|\nabla_y f(x,y)\|^2$. Then we note that $\gap_f(x,y)=\max_y f(x,y) - f(x,y) + f(x,y) - \min_x f(x,y)$. \item Because $\Phi(x)$ is $\mu_x$-strongly convex and $\nabla\Phi(x^*) = 0$, we have $\Phi(x)\geq \Phi(x^*) +\frac{\mu_x}{2}\|x-x^*\|^2$. Similarly, because $\Psi(y)$ is $\mu_y$-strongly concave and $\nabla\Psi(y^*) = 0$, we have $\Psi(y^*)-\Psi(y) \geq \frac{\mu_y}{2}\|y-y^*\|^2$. \item By definition of Lipschitz smoothness, $\|\nabla_x f(x, y)\|^2 = \|\nabla_x f(x, y)-\nabla_x f(x^*, y^*)\|^2 \leq L^2(\|x-x^*\|+\|y-y^*\|)^2\leq 2L^2(\|x-x^*\|^2+\|y-y^*\|^2)$ and $\|\nabla_y f(x, y)\|^2 = \|\nabla_y f(x, y)-\nabla_y f(x^*, y^*)\|^2 \leq L^2(\|x-x^*\|+\|y-y^*\|)^2\leq 2L^2(\|x-x^*\|^2+\|y-y^*\|^2)$. \end{enumerate} \end{proof} \noindent\textbf{Proof of Proposition \ref{prop regularized smooth}} \begin{proof} (a) and (b) directly follow from the definitions of averaged smoothness and individual smoothness.
\\ \iffalse (a) If we assume $f$ is $L$-IS, then \begin{align*} \left\Vert\frac1n\sum_{i=1}^n \nabla_xf_i(x_1,y_1) - \frac1n\sum_{i=1}^n \nabla_xf_i(x_2,y_2)\right\Vert \leq\ & \frac1n \sum_{i=1}^n\|\nabla_x f_i(x_1,y_1)-\nabla_x f_i(x_2,y_2)\| \\ \leq\ & L(\|x_1-x_2\|+\|y_1-y_2\|),\\ \left\Vert\frac1n\sum_{i=1}^n \nabla_yf_i(x_1,y_1) - \frac1n\sum_{i=1}^n \nabla_yf_i(x_2,y_2)\right\Vert \leq\ & \frac1n \sum_{i=1}^n\|\nabla_y f_i(x_1,y_1)-\nabla_y f_i(x_2,y_2)\| \\ \leq\ & L(\|x_1-x_2\|+\|y_1-y_2\|). \end{align*} If we assume $f$ is $L$-AS, then \begin{align*} \left\Vert\frac1n\sum_{i=1}^n \nabla_xf_i(x_1,y_1) - \frac1n\sum_{i=1}^n \nabla_xf_i(x_2,y_2)\right\Vert^2 \leq\ & \frac{1}{n} \sum_{i=1}^n\autonorm{\nabla_x f_i(x_1,y_1)-\nabla_x f_i(x_2,y_2)}^2\\ \leq\ & L^2\autopar{\autonorm{x_1-x_2}^2+\autonorm{y_1-y_2}^2},\\ \left\Vert\frac1n\sum_{i=1}^n \nabla_yf_i(x_1,y_1) - \frac1n\sum_{i=1}^n \nabla_yf_i(x_2,y_2)\right\Vert^2 \leq\ & \frac{1}{n} \sum_{i=1}^n\|\nabla_y f_i(x_1,y_1)-\nabla_y f_i(x_2,y_2)\|^2 \\ \leq\ & L^2\autopar{\autonorm{x_1-x_2}^2+\autonorm{y_1-y_2}^2}. \end{align*} where we apply Jensen's Inequality and note that $\autonorm{x_1-x_2}^2+\autonorm{y_1-y_2}^2\leq (\autonorm{x_1-x_2}+\autonorm{y_1-y_2})^2$.\\ \noindent(b) If we assume $f$ is $L$-IS, then \begin{align*} & \frac{1}{n}\sum_{i=1}^n\autonorm{\nabla f_i(x_1,y_1)-\nabla f_i(x_2,y_2)}^2\\ =\ & \frac{1}{n}\sum_{i=1}^n\autonorm{\nabla_x f_i(x_1,y_1)-\nabla_x f_i(x_2,y_2)}^2 + \frac{1}{n}\sum_{i=1}^n\autonorm{\nabla_y f_i(x_1,y_1)-\nabla_y f_i(x_2,y_2)}^2 \\ \leq\ & L^2(\|x_1-x_2\|+\|y_1-y_2\|)^2 + L^2(\|x_1-x_2\|+\|y_1-y_2\|)^2\\ \leq\ & 4L^2(\|x_1-x_2\|^2+\|y_1-y_2\|^2). 
\end{align*} \fi \noindent(c) Denote $$\Bar{f}(x, y)=f(x,y) + \frac{\tau_x}{2}\|x-\Tilde{x}\|^2 - \frac{\tau_y}{2}\|y-\Tilde{y}\|^2 = \frac{1}{n}\sum_{i=1}^n \left[f_i(x,y) + \frac{\tau_x}{2}\|x-\Tilde{x}\|^2 - \frac{\tau_y}{2}\|y - \Tilde{y}\|^2 \right] \triangleq \frac{1}{n}\sum_{i=1}^n \Bar{f}_i(x, y),$$ where $\Bar{f}_i(x, y) = f_i(x,y) + \frac{\tau_x}{2}\|x-\Tilde{x}\|^2 - \frac{\tau_y}{2}\|y - \Tilde{y}\|^2$. Note that for any $(x_1, y_1)$ and $(x_2, y_2)$, \begin{align*} \|\nabla_x \Bar{f}_i(x_1, y_1) - \nabla_x \Bar{f}_i(x_2, y_2) \|^2\leq 2\|\nabla_x f_i(x_1, y_1) - \nabla_x f_i(x_2, y_2)\|^2 + 2\tau_x^2\|x_1-x_2\|^2,\\ \|\nabla_y \Bar{f}_i(x_1, y_1) - \nabla_y \Bar{f}_i(x_2, y_2) \|^2\leq 2\|\nabla_y f_i(x_1, y_1) - \nabla_y f_i(x_2, y_2)\|^2 + 2\tau_y^2\|y_1-y_2\|^2. \end{align*} Therefore, \begin{align*} \frac{1}{n}\sum_{i=1}^n\autonorm{\nabla \Bar{f}_i(x_1,y_1)-\nabla \Bar{f}_i(x_2,y_2)}^2\leq& \frac{2}{n}\sum_{i=1}^n\autonorm{\nabla f_i(x_1,y_1)-\nabla f_i(x_2,y_2)}^2 + 2[\tau_x^2\|x_1-x_2\|^2 +\tau_y^2\|y_1-y_2\|^2]\\ \leq &\left(2L^2+2\max\{\tau_x^2,\tau_y^2\}\right) \left(\|x_1-x_2\|^2+\|y_1-y_2\|^2 \right). \end{align*} \end{proof} An important trick to transform the basic hard instance into the final hard instance is scaling, which preserves the smoothness of the original function while extending its domain to a higher dimension, i.e., enlarging $d$, which helps to increase the lower bound. The properties of scaling are summarized in the following lemma. \begin{lemma}[Scaling and Smoothness] \label{lm:scaling} For a function $\Bar{g}(x,y)$ defined on $\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$, if $\Bar{g}$ is $L$-smooth, then the scaled function \begin{equation} g(x,y)=\eta^2\Bar{g}\autopar{\frac{x}{\eta},\frac{y}{\eta}} \end{equation} is also $L$-smooth.
Furthermore, if the function $\Bar{g}$ has the finite-sum form $\Bar{g}(x,y)=\frac{1}{n}\sum_{i=1}^n\Bar{g}_i(x,y)$ and $\{\Bar{g}_i\}_{i=1}^n$ is $L$-averaged smooth, then for the following functions: \begin{equation} g_i(x,y)=\eta^2\Bar{g}_i\autopar{\frac{x}{\eta},\frac{y}{\eta}}, \quad \text{and} \quad g(x,y)= \frac{1}{n}\sum_{i=1}^n g_i(x,y) = \frac{1}{n}\sum_{i=1}^n\eta^2\Bar{g}_i\autopar{\frac{x}{\eta},\frac{y}{\eta}}, \end{equation} $\{g_i\}_{i=1}^n$ is also $L$-averaged smooth. If we further assume $\{\Bar{g}_i\}_{i=1}^n$ is $L$-individually smooth, then $\{g_i\}_{i=1}^n$ is also $L$-individually smooth. \end{lemma} \begin{proof} For the first statement, note that $\nabla g\autopar{x,y}=\eta\nabla\Bar{g}\autopar{\frac{x}{\eta},\frac{y}{\eta}}$, so for any $(x_1,y_1), (x_2,y_2)\in\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$, \begin{equation} \begin{split} \autonorm{\nabla_x g(x_1,y_1)-\nabla_x g(x_2,y_2)} =\ & \autonorm{\eta\nabla_x \Bar{g}\autopar{\frac{x_1}{\eta},\frac{y_1}{\eta}}-\eta\nabla_x \Bar{g}\autopar{\frac{x_2}{\eta},\frac{y_2}{\eta}}}\\ \leq\ & \eta L\autopar{\autonorm{\frac{x_1}{\eta}-\frac{x_2}{\eta}}+\autonorm{\frac{y_1}{\eta}-\frac{y_2}{\eta}}} = L\autopar{\autonorm{x_1-x_2}+\autonorm{y_1-y_2}}, \end{split} \end{equation} and a similar conclusion holds for $\nabla_y g$, which verifies the first statement.
For the averaged smooth finite-sum statement, note that $\nabla g_i\autopar{x,y}=\eta\nabla\Bar{g}_i\autopar{\frac{x}{\eta},\frac{y}{\eta}}$, so for any $(x_1,y_1), (x_2,y_2)\in\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$, \begin{equation} \begin{split} &\mathbb{E}\automedpar{ \autonorm{\nabla g_i(x_1,y_1)-\nabla g_i(x_2,y_2)}^2 }\\ =\ & \mathbb{E}\automedpar{\autonorm{\eta\nabla\Bar{g}_i\autopar{\frac{x_1}{\eta}, \frac{y_1}{\eta}}-\eta\nabla\Bar{g}_i\autopar{\frac{x_2}{\eta}, \frac{y_2}{\eta}}}^2}\\ =\ & \eta^2\mathbb{E}\automedpar{\autonorm{\nabla\Bar{g}_i\autopar{\frac{x_1}{\eta}, \frac{y_1}{\eta}}-\nabla\Bar{g}_i\autopar{\frac{x_2}{\eta}, \frac{y_2}{\eta}}}^2}\\ \leq\ & \eta^2L^2\autopar{\autonorm{\frac{x_1}{\eta}-\frac{x_2}{\eta}}^2+\autonorm{\frac{y_1}{\eta}-\frac{y_2}{\eta}}^2} = L^2\autopar{\autonorm{x_1-x_2}^2+\autonorm{y_1-y_2}^2}, \end{split} \end{equation} so $\{g_i\}_{i=1}^n$ is $L$-averaged smooth. For the individual smoothness statement, note that each $g_i$ is a scaled version of $\Bar{g}_i$, which is $L$-smooth; the conclusion of the first statement then implies that $g_i$ is also $L$-smooth, which concludes the proof. \end{proof} \section{Proof of NC-SC Lower Bound} \label{sec:Apdx_THM_LB_NCSC} Similar to Section \ref{sec:main_LB_NCSC} in the main text, here in this section only, we denote $x_d$ as the $d$-th coordinate of $x$ and $x^t$ as the variable $x$ in the $t$-th iteration. \subsection{Deterministic NC-SC Lower Bound} We start with the proofs of several important lemmas, then proceed to the analysis of Theorem \ref{THM:LB_NCSC_DETER}.
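Before diving into the lemmas, the scaling lemma from the previous section admits a quick numerical sanity check. The snippet below (illustrative only, using a one-dimensional $\bar g$ chosen for the demonstration) estimates the gradient-Lipschitz constant of $\bar g(x)=\cos(x)$ and of its scaling $g(x)=\eta^2\bar g(x/\eta)$ over random point pairs; both empirical constants stay below $L=1$, as the lemma predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 7.3  # arbitrary nonzero scaling factor

g_bar_grad = lambda x: -np.sin(x)             # gradient of cos(x), 1-smooth
g_grad = lambda x: eta * g_bar_grad(x / eta)  # gradient of eta^2 * cos(x/eta)

# Empirical Lipschitz constant of a gradient map over random point pairs.
pairs = rng.uniform(-20.0, 20.0, size=(5000, 2))
lip = lambda grad: max(abs(grad(a) - grad(b)) / abs(a - b) for a, b in pairs)

L_bar, L_scaled = lip(g_bar_grad), lip(g_grad)
```

Both estimates are bounded by the original smoothness constant, matching the claim that scaling by $\eta^2\bar g(\cdot/\eta)$ preserves $L$-smoothness.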
\subsubsection{Proof of Lemma \ref{LM:NCSC_LB_F_D}} \label{sec:Apdx_LM_NCSC_LB_F_D} \begin{proof} Recall the definition of $F_d$ in \eqref{eq:LB_hard_instance_deter}, define $\Gamma_d(x)\triangleq\sum_{i=1}^{d}\Gamma(x_i)$, note that $x_i^2=x^\top e_i e_i^\top x$, and \begin{equation} \begin{split} \nabla_xF_d(x,y;\lambda,\alpha) =\ & \lambda_1 B_d^\top y- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}e_1+ \frac{\lambda_1^2\alpha}{2\lambda_2}\nabla\Gamma_d(x)- \frac{\lambda_1^2\alpha}{2\lambda_2}e_{d+1}e_{d+1}^\top x \\ \nabla_yF_d(x,y;\lambda,\alpha) =\ & \lambda_1 B_dx-2\lambda_2y, \end{split} \end{equation} where $ \nabla\Gamma_d(x)=(\nabla\Gamma(x_1), \nabla\Gamma(x_2), \cdots, \nabla\Gamma(x_d))^\top $. Then for the matrix norm of $B_d$, note that $\alpha\in\automedpar{0,1}$ and \begin{equation} \begin{split} &\autonorm{B_dx}=\sqrt{x_{d+1}^2+(x_d-x_{d+1})^2+\cdots+(x_1-x_2)^2+(\sqrt[4]{\alpha}x_1)^2}\\ \leq\ & \sqrt{x_{d+1}^2+2\autopar{x_d^2+x_{d+1}^2+x_{d-1}^2+x_{d}^2+\cdots+x_2^2+x_3^2+x_1^2+x_2^2}+x_1^2}\\ \leq\ & \sqrt{4\autopar{x_{d+1}^2+x_{d}^2+x_{d-1}^2+\cdots+x_2^2+x_1^2}} = 2\autonorm{x}, \end{split} \end{equation} similarly we have $\autonorm{B_d^\top y}\leq 2\autonorm{y}$. Denote $C_\gamma\triangleq 360,$\footnote{The choice of $C_\gamma$ follows the setting in \citep[Proposition 3.11]{zhou2019lower}, which is an upper bound of the Lipschitz smoothness parameter of $ \Gamma_d(x) $ in \cite[Lemma 2]{carmon2019lowerII}.} so because $0\leq\alpha\leq 1$ and $\|B_d\|\leq 2$, we have ($\|\cdot\|$ here denotes the spectral norm of a matrix) \begin{equation} \|\nabla_{xx}^2F_d\|\leq \frac{\lambda_1^2}{2\lambda_2}(C_\gamma\alpha+\alpha)\leq \frac{400\lambda_1^2\alpha}{2\lambda_2}=\frac{200\lambda_1^2\alpha}{\lambda_2}, \quad \|\nabla_{xy}^2F_d\|\leq 2\lambda_1, \quad \|\nabla_{yx}^2F_d\|\leq 2\lambda_1, \quad \|\nabla_{yy}^2F_d\|= 2\lambda_2, \end{equation} which proves the first two statements (i) and (ii).
For (iii), due to the structure of $ B_d $ and concerning the activation status defined in $\mathcal{X}_k$ and $\mathcal{Y}_k$, it is easy to verify that if $ x\in\mathcal{X}_{k_1}, y\in\mathcal{Y}_{k_2} $ for $k_1, k_2\in\mathbb{N}$ and $k_1, k_2\leq d$, we have $$B_dx\in\mathcal{Y}_{k_1},\quad B_d^\top y\in\mathcal{X}_{k_2+1}.$$ Since the remaining components in the gradient do not affect the activation with the initial point $ (0,0)\in\mathbb{R}^{d+1}\times\mathbb{R}^{d+2} $, this proves (iii). For (iv), by substituting the parameter settings, we have $\frac{200\lambda_1^2\alpha}{\lambda_2}= L$, $ 2\lambda_1=L $ and $ 2\lambda_2=\mu $, so the function $F_d$ is $\mu$-strongly concave in $y$ and $L$-Lipschitz smooth, which concludes the proof. \end{proof} \subsubsection{Proof of Lemma \ref{LM:NCSC_LB_PHI}} \label{sec:Apdx_LM_NCSC_LB_PHI} \begin{proof} Recall the primal function $\Phi_d$ of $F_d$ \eqref{eq:LB_hard_instance_deter}: \begin{equation} \Phi_d(x;\lambda,\alpha) = \underbrace{\frac{\lambda_1^2}{2\lambda_2} \left( \frac{1}{2}x^\top A_dx- \sqrt{\alpha}x_1+ \frac{\sqrt{\alpha}}{2}+ \alpha\sum_{i=1}^{d}\Gamma(x_i)\right)}_{\triangleq\Phi_{d1}(x)}+ \underbrace{\frac{(1-\alpha)\lambda_1^2}{4\lambda_2}x_{d+1}^2}_{\triangleq\Phi_{d2}(x)}. \end{equation} For the first statement, because $ x_d=x_{d+1}=0 $, we have \begin{equation} \nabla\Phi_d(x;\lambda,\alpha) = \nabla\Phi_{d1}(x;\lambda,\alpha)+\nabla\Phi_{d2}(x;\lambda,\alpha) = \nabla\Phi_{d1}(x;\lambda,\alpha), \end{equation} which corresponds to the hard instance in \cite[Equation 9]{carmon2019lowerII} with an extra coefficient $\frac{\lambda_1^2}{2\lambda_2}$; we then apply \cite[Lemma 3]{carmon2019lowerII} to attain the desired large gradient norm result, i.e., \begin{equation} \autonorm{\nabla\Phi_d(x;\lambda,\alpha)} \geq \frac{\lambda_1^2}{2\lambda_2}\times \frac{\alpha^{\frac{3}{4}}}{4} = \frac{\lambda_1^2}{8\lambda_2}\alpha^{\frac{3}{4}}.
\end{equation} For the second statement, we have \begin{equation} \begin{split} &\Phi_d(0;\lambda,\alpha)-\inf_{x\in\mathbb{R}^{d+1}}\Phi_d(x;\lambda,\alpha)\\ =\ & \Phi_{d1}(0;\lambda,\alpha)-\inf_{x\in\mathbb{R}^{d+1}}\automedpar{\Phi_{d1}(x;\lambda,\alpha)+\Phi_{d2}(x;\lambda,\alpha)}\\ \leq\ & \Phi_{d1}(0;\lambda,\alpha)-\inf_{x\in\mathbb{R}^{d+1}}\Phi_{d1}(x;\lambda,\alpha)\\ \leq\ & \frac{\lambda_1^2}{2\lambda_2}\left(\frac{\sqrt{\alpha}}{2}+10\alpha d\right), \end{split} \end{equation} where the first inequality uses that $\Phi_{d2}(x;\lambda,\alpha)\geq 0$ because $ \alpha\in[0,1] $, and the last inequality applies \cite[Lemma 4]{carmon2019lowerII}, which proves the second statement. \end{proof} \subsubsection{Proof of Theorem \ref{THM:LB_NCSC_DETER}} \label{sec:Apdx_THM_LB_NCSC_DETER} The complexity for deterministic nonconvex-strongly-concave problems is defined as \begin{equation} \begin{split} \mathrm{Compl}_\epsilon \autopar{ \mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta},\mathcal{A},\mathbb{O}_{\mathrm{FO}} } \triangleq\ & \underset{f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta}}{\sup}\ \underset{\mathtt{A}\in{\mathcal{A}\autopar{\mathbb{O}_{\mathrm{FO}}}}}{\inf}\ T_{\epsilon}(f,\mathtt{A}) \\ =\ & \underset{f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta}}{\sup}\ \underset{\mathtt{A}\in{\mathcal{A}\autopar{\mathbb{O}_{\mathrm{FO}}}}}{\inf}\ \inf \autobigpar{T\in\mathbb{N}\ \Big|\ \autonorm{\nabla \Phi\autopar{x^T}}\leq\epsilon}. \end{split} \end{equation} As a helper lemma, we first discuss the primal function of the scaled hard instance. 
\begin{lemma}[Primal of the Scaled Hard Instance] \label{lm:Primal_Scaled_Deter_Hard_Instance} With the function $F_d$ defined in \eqref{eq:LB_hard_instance_deter}, $\Phi_d$ defined in \eqref{eq:LB_hard_instance_deter_Phi} and any $\eta\in\mathbb{R}$, consider the function \begin{equation} f(x,y)=\eta^2F_d\left(\frac{x}{\eta},\frac{y}{\eta};\lambda,\alpha\right). \end{equation} Then its primal function $\Phi(x)\triangleq\max_{y\in\mathbb{R}^{d+2}}f(x,y)$ satisfies \begin{equation} \Phi(x)=\eta^2\Phi_d\autopar{\frac{x}{\eta};\lambda,\alpha}. \end{equation} \end{lemma} \begin{proof} Expanding the scaled function, \begin{equation} \label{eq:LB_hard_instance_deter_scaled} \begin{split} & f(x,y)\\ =\ & \eta^2 \Bigg( \lambda_1\autoprod{B_d\frac{x}{\eta},\frac{y}{\eta}}- \lambda_2\autonorm{\frac{y}{\eta}}^2-\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\frac{x}{\eta}}+ \frac{\lambda_1^2\alpha}{2\lambda_2}\sum_{i=1}^{d}\Gamma\autopar{\frac{x_i}{\eta}} -\frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{x_{d+1}}{\eta}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2} \Bigg)\\ =\ & \lambda_1\autoprod{B_dx,y}- \lambda_2\autonorm{y}^2 + \eta^2 \Bigg( -\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\frac{x}{\eta}}+ \frac{\lambda_1^2\alpha}{2\lambda_2}\sum_{i=1}^{d}\Gamma\autopar{\frac{x_i}{\eta}} -\frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{x_{d+1}}{\eta}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2} \Bigg) , \end{split} \end{equation} setting the gradient with respect to $y$ to zero and solving for $y^*(x)$, we have \begin{equation} \nabla_y f(x,y^*(x))=\lambda_1B_dx-2\lambda_2y^*(x)=0 \quad\Longrightarrow \quad y^*(x)=\frac{\lambda_1}{2\lambda_2}B_dx, \end{equation} so the primal function is \begin{equation} \begin{split} & \Phi(x)=f\autopar{x,y^*(x)}\\ =\ & \lambda_1\autoprod{B_dx,y^*(x)}- \lambda_2\autonorm{y^*(x)}^2 + \eta^2 \autopar{ -\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\frac{x}{\eta}} +
\frac{\lambda_1^2\alpha}{2\lambda_2}\sum_{i=1}^{d}\Gamma\autopar{\frac{x_i}{\eta}} - \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{x_{d+1}}{\eta}}^2 + \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2} }\\ =\ & \frac{\lambda_1^2}{4\lambda_2}\autonorm{B_dx}^2 + \eta^2 \autopar{ -\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\frac{x}{\eta}}+ \frac{\lambda_1^2\alpha}{2\lambda_2}\sum_{i=1}^{d}\Gamma\autopar{\frac{x_i}{\eta}} - \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{x_{d+1}}{\eta}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2} }\\ =\ & \eta^2 \autopar{ \frac{\lambda_1^2}{4\lambda_2}\autonorm{B_d\frac{x}{\eta}}^2 -\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\frac{x}{\eta}}+ \frac{\lambda_1^2\alpha}{2\lambda_2}\sum_{i=1}^{d}\Gamma\autopar{\frac{x_i}{\eta}} - \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{x_{d+1}}{\eta}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2} }\\ =\ & \eta^2\Phi_d\autopar{\frac{x}{\eta};\lambda,\alpha}, \end{split} \end{equation} which concludes the proof. \end{proof} Now we come to the formal statement and proof of the main theorem. 
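Before that, note that the closed form $y^*(x)=\frac{\lambda_1}{2\lambda_2}B_dx$ and the resulting $\frac{\lambda_1^2}{4\lambda_2}\|B_dx\|^2$ term are easy to sanity-check numerically for the generic structure $f(x,y)=\lambda_1\langle Bx,y\rangle-\lambda_2\|y\|^2+h(x)$; the matrix $B$ and the $x$-only part $h$ in the sketch below are arbitrary stand-ins, not the actual $B_d$ and $\Gamma$ terms.

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, d = 0.8, 1.5, 6

B = rng.normal(size=(d + 2, d + 1))     # stand-in for B_d
h = lambda x: float(np.sum(np.cos(x)))  # stand-in for the x-only terms

f = lambda x, y: lam1 * (B @ x) @ y - lam2 * y @ y + h(x)

x = rng.normal(size=d + 1)
y_star = lam1 / (2 * lam2) * (B @ x)  # claimed maximizer over y
# Claimed primal value Phi(x) = lam1^2/(4 lam2) ||B x||^2 + h(x).
Phi_x = lam1**2 / (4 * lam2) * np.linalg.norm(B @ x)**2 + h(x)

# f(x, .) is concave in y, so any perturbation of y_star decreases f.
worse = max(f(x, y_star + rng.normal(size=d + 2)) for _ in range(200))
```

The maximizer and the primal value agree with the closed forms used in the proof, and random perturbations of $y^*$ only decrease $f$.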
\begin{theorem}[Lower Bound for General NC-SC, Restate Theorem \ref{THM:LB_NCSC_DETER}] For any linear-span first-order algorithm $ \mathtt{A}\in\mathcal{A} $ and parameters $ L,\mu,\Delta>0 $, with a desired accuracy $ \epsilon>0 $, for the following function $ f:\mathbb{R}^{d+1}\times\mathbb{R}^{d+2}\rightarrow\mathbb{R} $: \begin{equation} f(x,y)\triangleq\eta^2F_d\left(\frac{x}{\eta},\frac{y}{\eta};\lambda^*,\alpha\right), \end{equation} where $F_d$ is defined in \eqref{eq:LB_hard_instance_deter}, with a primal function $ \Phi(x)\triangleq\max_{y\in\mathbb{R}^{d+2}}f(x,y) $, for a small enough $ \epsilon>0 $ satisfying $$ \epsilon^2\leq\min\left(\frac{\Delta L}{64000}, \frac{\Delta L\sqrt{\kappa}}{38400}\right), $$ if we set \begin{equation} \lambda^*=\left(\frac{L}{2},\frac{\mu}{2}\right), \quad \eta=\frac{16\mu}{L^2}\alpha^{-3/4}\epsilon, \quad \alpha=\frac{\mu}{100L}\in\automedpar{0,1}, \quad d=\left\lfloor\frac{\Delta L\sqrt{\kappa}}{12800}\epsilon^{-2}\right\rfloor\geq 3, \end{equation} we have \begin{itemize} \item The proposed function $ f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta} $. \item To obtain a point $ \hat{x}\in\mathbb{R}^{d+1} $ such that $ \|\nabla\Phi(\hat{x})\|\leq\epsilon $, the number of FO queries required by the algorithm $ \mathtt{A}\in\mathcal{A} $ is at least $2d-1=\Omega\left(\sqrt{\kappa} \Delta L \epsilon^{-2}\right)$, namely, \begin{equation} \mathrm{Compl}_\epsilon \autopar{ \mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta},\mathcal{A},\mathbb{O}_{\mathrm{FO}} } = \Omega\autopar{\sqrt{\kappa} \Delta L \epsilon^{-2}}. \end{equation} \end{itemize} \end{theorem} \begin{proof} First, we verify the smoothness and strong concavity of the function $f$. According to Lemma \ref{LM:NCSC_LB_F_D}, $ \alpha\leq\frac{\mu}{100L} $ implies that $ F_d(x,y;\lambda^*,\alpha) $ is $ L $-smooth and $ \mu $-strongly concave in $ y $.
Given that $ f $ is a scaled version of $ F_d $, by Lemma \ref{lm:scaling}, it is easy to verify that $ f $ is also $ L $-smooth and $ \mu $-strongly concave in $ y $. Then by Lemma \ref{lm:Primal_Scaled_Deter_Hard_Instance}, we have \begin{equation} \Phi(x)=\eta^2\Phi_d\autopar{\frac{x}{\eta};\lambda^*,\alpha}, \end{equation} where $\Phi_d$ is defined in \eqref{eq:LB_hard_instance_deter_Phi}. Next we check the initial primal function gap, by Lemma \ref{LM:NCSC_LB_PHI} and parameter substitution, \begin{equation} \Phi(0)-\inf_x\Phi(x)=\eta^2\left(\Phi_d(0)-\inf_x\Phi_d(x)\right) \leq \frac{\eta^2L^2}{4\mu}\left(\frac{\sqrt{\alpha}}{2}+10\alpha d\right) = \frac{64\mu}{L^2}\autopar{\frac{1}{2\alpha}+\frac{10d}{\sqrt{\alpha}}}\epsilon^2, \end{equation} by substituting $ \alpha $ and $ d $ into the RHS above, we have \begin{equation} \begin{split} \frac{64\mu}{L^2}\autopar{\frac{1}{2\alpha}+\frac{10d}{\sqrt{\alpha}}}\epsilon^2 \leq\ & \frac{64\mu}{L^2}\left(\frac{50L}{\mu}+100\sqrt{\frac{L}{\mu}}\cdot\frac{\Delta L\sqrt{\kappa}}{12800}\epsilon^{-2}\right)\epsilon^2\\ \leq\ & \frac{64}{L}\left(50+\frac{\Delta L}{128}\epsilon^{-2}\right)\epsilon^2 \leq \frac{64}{L}\left(\frac{\Delta L}{64}\epsilon^{-2}\right)\epsilon^2=\Delta. \end{split} \end{equation} The last inequality holds because $\epsilon^2\leq\frac{\Delta L}{6400}$, which is implied by the assumption on $\epsilon$. We conclude that $ f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta} $. We now discuss the lower bound argument. Based on Lemma \ref{LM:NCSC_LB_PHI} and the setting of $ \eta $, we have when $ x_d=x_{d+1}=0 $, \begin{equation} \label{eq:NCSC_LB_nonconvergence} \left\|\nabla\Phi(x)\right\| = \eta\left\|\nabla \Phi_d\left(\frac{x}{\eta};\lambda^*,\alpha\right)\right\| \geq \frac{\eta L^2}{16\mu}\alpha^{3/4}=\epsilon. \end{equation} So starting from $ (x,y)=(0,0)\in\mathbb{R}^{d+1}\times\mathbb{R}^{d+2} $, we cannot reach primal stationarity at least until $ x_d\neq 0 $.
By the ``alternating zero-chain" mechanism\footnote{Also known as the ``Domino argument" in \cite{ibrahim2020linear}.} in Lemma \ref{LM:NCSC_LB_F_D}, each FO query made by the linear-span algorithm activates exactly one coordinate, alternating between $ x $ and $ y $. Therefore the algorithm $ \mathtt{A} $ requires at least $ 2d-1 $ queries to FO to activate the $d$-th element of $x$, i.e., $x_d$, which implies the lower bound is (note that $\epsilon$ is small enough such that $d\geq 3$) \begin{equation} 2d-1=\Omega\autopar{\sqrt{\kappa} \Delta L \epsilon^{-2}}, \end{equation} which concludes the proof. Notice that this argument works even for randomized algorithms, as long as they satisfy the linear-span assumption. \end{proof} \subsection{Averaged Smooth Finite-Sum NC-SC Lower Bound} \label{sec:Apdx_LM_NCSC_LB_FS_AS} Similar to the deterministic NC-SC case, here we still start from several important lemmas and proceed to the proof of Theorem \ref{THM:LB_NCSC_FS_AS}.
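Both ingredients of the deterministic argument above can be sanity-checked numerically. The sketch below is illustrative only: it reconstructs a matrix with the bidiagonal structure implied by the norm computation in the proof of Lemma \ref{LM:NCSC_LB_F_D} (an assumption about the exact indexing of $B_d$), verifies the bound $\|B_d\|\leq 2$, and simulates the domino-style support growth of one $x$-coordinate per $x$/$y$ alternation, seeding the chain at the first coordinate activated by the $e_1$ term of the gradient.

```python
import numpy as np

d, alpha = 12, 0.25
# B in R^{(d+2) x (d+1)}, reconstructed from the norm computation:
# its rows read off x_{d+1}, the differences x_i - x_{i+1}, and alpha^{1/4} x_1.
B = np.zeros((d + 2, d + 1))
B[0, d] = 1.0
for i in range(1, d + 1):
    B[i, d - i], B[i, d - i + 1] = 1.0, -1.0
B[d + 1, 0] = alpha ** 0.25

spec_norm = np.linalg.norm(B, 2)  # largest singular value, claimed <= 2

# Domino progress: starting with only x_1 active, each y-update (B x)
# followed by an x-update (B^T y) activates exactly one new x-coordinate.
support = lambda v: int(np.count_nonzero(np.abs(v) > 1e-12))
x = np.zeros(d + 1)
x[0] = 1.0  # x_1, activated by the e_1 term of the gradient
sizes = []
for _ in range(d):
    y = B @ x        # activates the corresponding y-coordinates
    x = x + B.T @ y  # activates one new x-coordinate
    sizes.append(support(x))
```

The recorded support sizes grow by exactly one per alternation, which is the mechanism behind the $2d-1$ query count.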
\subsubsection{Hard Instance Construction} \label{sec:Apdx_hard_instance_FS_AS_properties} Recall the (unscaled) hard instance in averaged smooth finite-sum case in \eqref{eq:LB_hard_instance_FS}: $H_d:\mathbb{R}^{d+2}\times\mathbb{R}^{d+1}\rightarrow\mathbb{R}$, $\Gamma_d^n:\mathbb{R}^{n(d+1)}\rightarrow\mathbb{R}$ and \begin{equation} \begin{split} H_d(x,y;\lambda,\alpha) \triangleq\ & \lambda_1\autoprod{B_dx,y}- \lambda_2\|y\|^2-\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,x}- \frac{\lambda_1^2\alpha}{4\lambda_2}x_{d+1}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2},\\ \Gamma_d^n(x) \triangleq\ & \sum_{i=1}^n\sum_{j=i(d+1)-d}^{i(d+1)-1}\Gamma(x_j), \end{split} \end{equation} then $\bar{f}_i, \bar{f}: \mathbb{R}^{n(d+1)}\times\mathbb{R}^{n(d+2)}\rightarrow\mathbb{R}$, $\{\mathbf{U}^{(i)}\}_{i=1}^n \in \mathrm{\mathbf{Orth}}(d+1,n(d+1),n)$, $\{\mathbf{V}^{(i)}\}_{i=1}^n \in \mathrm{\mathbf{Orth}}(d+2,n(d+2),n)$ and \begin{equation} \begin{split} \bar{f}_i(x,y) \triangleq\ & H_d\autopar{\mathbf{U}^{(i)}x,\mathbf{V}^{(i)}y;\lambda,\alpha}+\frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n(x),\\ \bar{f}(x,y) \triangleq\ & \frac{1}{n}\sum_{i=1}^{n}\bar{f}_i(x,y) = \frac{1}{n}\sum_{i=1}^{n}\automedpar{H_d\autopar{\mathbf{U}^{(i)}x,\mathbf{V}^{(i)}y;\lambda,\alpha}+\frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n(x)}. 
\end{split} \end{equation} i.e., by denoting $u^{(i)}\triangleq\mathbf{U}^{(i)}x$ and note that $\autonorm{y}^2=\sum_{i=1}^{n}\autonorm{\mathbf{V}^{(i)}y}^2$, \begin{equation} \label{eq:LB_AS_FS_Hard_Instance_Detailed} \begin{split} &\bar{f}(x,y)\\ =\ & \frac{1}{n}\sum_{i=1}^{n} \Bigg[ \lambda_1\autoprod{B_d\mathbf{U}^{(i)}x,\mathbf{V}^{(i)}y}- \lambda_2\|\mathbf{V}^{(i)}y\|^2-\frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}x}+ \frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n(x)- \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{u^{(i)}_{d+1}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}\Bigg]\\ =\ & -\frac{\lambda_2}{n}\|y\|^2 + \frac{1}{n}\sum_{i=1}^{n} \Bigg[\lambda_1\autoprod{B_d\mathbf{U}^{(i)}x,\mathbf{V}^{(i)}y}- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}x}+ \frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n(x)- \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{u^{(i)}_{d+1}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}\Bigg] , \end{split} \end{equation} so $\bar{f}$ is $\frac{2\lambda_2}{n}$-strongly concave in $y$. Recall the gradient of $f_i$: \begin{equation} \begin{split} \nabla_x f_i(x,y)=\ & \lambda_1\autopar{\mathbf{U}^{(i)}}^\top B_d^\top \mathbf{V}^{(i)}y- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autopar{\mathbf{U}^{(i)}}^\top e_1+ \frac{\lambda_1^2\alpha}{2n\lambda_2}\nabla\Gamma_d^n(x) -\frac{\lambda_1^2\alpha}{2\lambda_2}\autopar{\mathbf{U}^{(i)}}^\top e_{d+1}e_{d+1}^\top\mathbf{U}^{(i)}x,\\ \nabla_y f_i(x,y)=\ & \lambda_1\autopar{\mathbf{V}^{(i)}}^\top B_d\mathbf{U}^{(i)}x-2\lambda_2\autopar{\mathbf{V}^{(i)}}^\top\mathbf{V}^{(i)}y, \end{split} \end{equation} then we discuss the smoothness of $\{\bar{f}_i\}_i$. 
\begin{lemma}[Properties of $\bar{f}$] \label{lm:LB_FS_Hard_Instance_bar_f_base_AS} For $n\in\mathbb{N}^+$, $ L\geq 2n\mu>0 $, if we set \begin{equation} \lambda=\lambda^*=(\lambda_1^*,\lambda_2^*)=\autopar{\sqrt{\frac{n}{40}}L,\frac{n\mu}{2}} \quad\text{ and }\quad \alpha=\frac{n\mu}{50L}, \end{equation} then the functions $\{\bar{f}_i\}_i$ are $L$-averaged smooth, and $ \bar{f}(x,\cdot) $ is $ \mu $-strongly concave for any fixed $ x\in\mathbb{R}^{n(d+1)} $. \end{lemma} \begin{proof} For the strong concavity, note that $\bar{f}$ is $\frac{2\lambda_2}{n}$-strongly concave, so by substitution $\bar{f}$ is $\mu$-strongly concave in $y$. Then for the average smoothness, by definition, we have for any $(x_1,y_1), (x_2,y_2)\in\mathbb{R}^{n(d+1)}\times\mathbb{R}^{n(d+2)}$, \begin{equation} \begin{split} &\frac{1}{n}\sum_{i=1}^{n}\autonorm{\nabla f_i(x_1,y_1)-\nabla f_i(x_2,y_2)}^2\\ =\ & \frac{1}{n}\sum_{i=1}^{n}\automedpar{\autonorm{\nabla_x f_i(x_1,y_1)-\nabla_x f_i(x_2,y_2)}^2+\autonorm{\nabla_y f_i(x_1,y_1)-\nabla_y f_i(x_2,y_2)}^2}, \end{split} \end{equation} then note that $\Gamma_d^n$ and $\Gamma_d$ enjoy the same Lipschitz smoothness parameter as that of $\Gamma$, so we have \begin{equation} \begin{split} &\autonorm{\nabla_x f_i(x_1,y_1)-\nabla_x f_i(x_2,y_2)}^2\\ \leq\ & 4\autonorm{\lambda_1\autopar{\mathbf{U}^{(i)}}^\top B_d^\top \mathbf{V}^{(i)}\autopar{y_1-y_2}}^2+ 4\autonorm{\frac{\lambda_1^2\alpha}{2n\lambda_2}\autopar{\nabla\Gamma_d^n(x_1)-\nabla\Gamma_d^n(x_2)}}^2\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad + 4\autonorm{\frac{\lambda_1^2\alpha}{2\lambda_2}\autopar{\mathbf{U}^{(i)}}^\top e_{d+1}e_{d+1}^\top\mathbf{U}^{(i)}\autopar{x_1-x_2}}^2\\ =\ & 4\lambda_1^2\autonorm{B_d^\top \mathbf{V}^{(i)}\autopar{y_1-y_2}}^2 + \frac{\lambda_1^4\alpha^2}{n^2\lambda_2^2}\autonorm{\nabla\Gamma_d^n(x_1)-\nabla\Gamma_d^n(x_2)}^2 + \frac{\lambda_1^4\alpha^2}{\lambda_2^2}\autonorm{e_{d+1}e_{d+1}^\top\mathbf{U}^{(i)}\autopar{x_1-x_2}}^2\\ \leq\ &
16\lambda_1^2\autonorm{\mathbf{V}^{(i)}\autopar{y_1-y_2}}^2 + \frac{C_\gamma^2\lambda_1^4\alpha^2}{n^2\lambda_2^2}\autonorm{x_1-x_2}^2 + \frac{\lambda_1^4\alpha^2}{\lambda_2^2}\autonorm{\mathbf{U}^{(i)}\autopar{x_1-x_2}}^2, \end{split} \end{equation} and \begin{equation} \begin{split} &\autonorm{\nabla_y f_i(x_1,y_1)-\nabla_y f_i(x_2,y_2)}^2 =\ \autonorm{\lambda_1\autopar{\mathbf{V}^{(i)}}^\top B_d\mathbf{U}^{(i)}\autopar{x_1-x_2}-2\lambda_2\autopar{\mathbf{V}^{(i)}}^\top\mathbf{V}^{(i)}\autopar{y_1-y_2}}^2\\ \leq\ & 2\autonorm{\lambda_1\autopar{\mathbf{V}^{(i)}}^\top B_d\mathbf{U}^{(i)}\autopar{x_1-x_2}}^2 + 2\autonorm{2\lambda_2\autopar{\mathbf{V}^{(i)}}^\top\mathbf{V}^{(i)}\autopar{y_1-y_2}}^2\\ \leq\ & 8\lambda_1^2\autonorm{\mathbf{U}^{(i)}\autopar{x_1-x_2}}^2 + 8\lambda_2^2\autonorm{\mathbf{V}^{(i)}\autopar{y_1-y_2}}^2, \end{split} \end{equation} so we have \begin{equation} \begin{split} &\frac{1}{n}\sum_{i=1}^{n}\autonorm{\nabla f_i(x_1,y_1)-\nabla f_i(x_2,y_2)}^2\\ \leq\ & \frac{1}{n}\sum_{i=1}^{n} \automedpar{\autopar{16\lambda_1^2+8\lambda_2^2}\autonorm{\mathbf{V}^{(i)}\autopar{y_1-y_2}}^2 + \autopar{\frac{\lambda_1^4\alpha^2}{\lambda_2^2}+8\lambda_1^2}\autonorm{\mathbf{U}^{(i)}\autopar{x_1-x_2}}^2+\frac{C_\gamma^2\lambda_1^4\alpha^2}{n^2\lambda_2^2}\autonorm{x_1-x_2}^2}\\ =\ & \frac{1}{n}\autopar{16\lambda_1^2+8\lambda_2^2}\sum_{i=1}^{n}\automedpar{\autonorm{\mathbf{V}^{(i)}\autopar{y_1-y_2}}^2}+\frac{1}{n}\autopar{\frac{\lambda_1^4\alpha^2}{\lambda_2^2}+8\lambda_1^2}\sum_{i=1}^{n}\automedpar{\autonorm{\mathbf{U}^{(i)}\autopar{x_1-x_2}}^2}+\frac{C_\gamma^2\lambda_1^4\alpha^2}{n^2\lambda_2^2}\autonorm{x_1-x_2}^2\\ =\ & \frac{1}{n}\autopar{16\lambda_1^2+8\lambda_2^2}\autonorm{y_1-y_2}^2+\frac{1}{n}\autopar{\frac{\lambda_1^4\alpha^2}{\lambda_2^2}+8\lambda_1^2}\autonorm{x_1-x_2}^2+\frac{C_\gamma^2\lambda_1^4\alpha^2}{n^2\lambda_2^2}\autonorm{x_1-x_2}^2\\ \leq\ & \frac{1}{n}\max\autobigpar{16\lambda_1^2+8\lambda_2^2, 
\frac{C_\gamma^2\lambda_1^4\alpha^2}{n\lambda_2^2}+\frac{\lambda_1^4\alpha^2}{\lambda_2^2}+8\lambda_1^2}\autopar{\autonorm{x_1-x_2}^2+\autonorm{y_1-y_2}^2}, \end{split} \end{equation} then note that $\alpha\in\automedpar{0,1}$ because we set $ L\geq 2n\mu \geq \frac{1}{50}n\mu$, so substitute parameters into the above, we have \begin{equation} \begin{split} & \frac{1}{n}\max\autobigpar{16\lambda_1^2+8\lambda_2^2, \frac{C_\gamma^2\lambda_1^4\alpha^2}{n\lambda_2^2}+\frac{\lambda_1^4\alpha^2}{\lambda_2^2}+8\lambda_1^2}\\ =\ & \frac{1}{n}\max\autobigpar{16\lambda_1^2+2n^2\mu^2, \frac{4C_\gamma^2\lambda_1^4\alpha^2}{n^3\mu^2}+\frac{4\lambda_1^4\alpha^2}{n^2\mu^2}+8\lambda_1^2}\\ \leq\ & \frac{1}{n}\max\autobigpar{16\lambda_1^2+2n^2\mu^2, 1000000\alpha^2\frac{\lambda_1^4}{n^3\mu^2}+8\lambda_1^2}\\ =\ & \frac{1}{n}\max\autobigpar{\frac{16nL^2}{40}+2n^2\mu^2, 1000000\cdot\frac{n^2\mu^2}{2500L^2}\cdot\frac{n^2L^4}{1600n^3\mu^2}+\frac{8nL^2}{40}}\\ \leq\ & \max\autobigpar{\frac{2L^2}{5}+\frac{L^2}{2}, \frac{L^2}{4}+\frac{L^2}{5}}\\ \leq\ & \max\autobigpar{\frac{9L^2}{10}, \frac{9L^2}{20}} \leq L^2, \end{split} \end{equation} where the first inequality is attained by the computation with the value of $C_\gamma=360$, the second inequality comes from the assumption $L\geq 2n\mu\geq 2\sqrt{n}\mu$; the last equality is attained by parameter substitution, which verifies the conclusion. \end{proof} Next we discuss the primal function of the finite-sum hard instance. \begin{lemma}[Primal of Averaged Smooth Finite-Sum Hard Instance] \label{lm:primal_hard_instance_FS_AS} For the function $\bar{f}=\frac{1}{n}\sum_{i=1}^n\bar{f}_i$ defined in \eqref{eq:LB_hard_instance_FS}, define $ \bar{\Phi}(x)\triangleq \max_y \bar{f}(x,y)$, then we have \begin{equation} \bar{\Phi}(x)=\frac{1}{n}\sum_{i=1}^{n}\bar{\Phi}_i(x), \quad \text{where}\quad \bar{\Phi}_i(x) \triangleq \Phi_d\autopar{\mathbf{U}^{(i)}x}, \end{equation} while $\Phi_d$ is defined in \eqref{eq:LB_hard_instance_deter_Phi}. 
\end{lemma} \begin{proof} By the expression of $\bar{f}$ in \eqref{eq:LB_AS_FS_Hard_Instance_Detailed}, take the gradient over $y$ and set it as $0$, denote the maximizer as $y^*(x)$, we have \begin{equation} -\frac{2\lambda_2}{n}y^*(x)+\frac{1}{n}\sum_{i=1}^{n} \lambda_1\autopar{\mathbf{V}^{(i)}}^\top B_d\mathbf{U}^{(i)}x=0 \quad\Longrightarrow\quad y^*(x)=\frac{\lambda_1}{2\lambda_2}\sum_{i=1}^{n}\autopar{\mathbf{V}^{(i)}}^\top B_d\mathbf{U}^{(i)}x, \end{equation} so we have \begin{equation} \begin{split} &\bar{\Phi}(x)=\bar{f}\autopar{x,y^*(x)}\\ =\ & \frac{1}{n}\sum_{i=1}^{n} \automedpar{\frac{\lambda_1^2}{4\lambda_2}\autonorm{B_d\mathbf{U}^{(i)}x}^2- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}x}+ \frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n(x)- \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{u^{(i)}_{d+1}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}}\\ =\ & \frac{1}{n}\sum_{i=1}^{n} \automedpar{\frac{\lambda_1^2}{4\lambda_2}\autonorm{B_d\mathbf{U}^{(i)}x}^2- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}x}+ \frac{\lambda_1^2\alpha}{2n\lambda_2}\sum_{j=1}^{n}\Gamma_d\autopar{\mathbf{U}^{(j)}x}- \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{u^{(i)}_{d+1}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}}\\ =\ & \frac{1}{n}\sum_{i=1}^{n} \automedpar{\frac{\lambda_1^2}{4\lambda_2}\autonorm{B_d\mathbf{U}^{(i)}x}^2- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}x}+ \frac{\lambda_1^2\alpha}{2\lambda_2}\Gamma_d\autopar{\mathbf{U}^{(i)}x}- \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{u^{(i)}_{d+1}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}}\\ =\ & \frac{1}{n}\sum_{i=1}^{n} \automedpar{\frac{\lambda_1^2}{2\lambda_2} \autopar{\frac{1}{2}\autopar{\mathbf{U}^{(i)}x}^\top A_d\mathbf{U}^{(i)}x- \sqrt{\alpha}\autoprod{e_1,\mathbf{U}^{(i)}x}+ \alpha\Gamma_d\autopar{\mathbf{U}^{(i)}x} + \frac{1-\alpha}{2}\autopar{u^{(i)}_{d+1}}^2+ \frac{\sqrt{\alpha}}{2}} }\\ =\ & 
\frac{1}{n}\sum_{i=1}^{n}\Phi_d\autopar{\mathbf{U}^{(i)}x}, \end{split} \end{equation} where the third equality follows from \eqref{eq:Gamma_nd_equivalent}, and $A_d$ and $\Phi_d$ are defined in \eqref{eq:matrix_A_d_definition} and \eqref{eq:LB_hard_instance_deter_Phi}, which concludes the proof. \end{proof} The above two lemmas prove the statements in Lemma \ref{lm:properties_hard_instance_FS_AS}. Before we present the main theorem, we first discuss the behavior of the scaled hard instance, which will be used in the final lower bound analysis. \begin{lemma}[Primal of the Scaled Finite-Sum Hard Instance] \label{lm:Primal_Scaled_FS_AS_Hard_Instance} With the functions $\bar{f}(x,y)$ and $\bar{f}_i(x,y)$ defined in \eqref{eq:LB_hard_instance_FS} and $ \bar{\Phi}(x)\triangleq \max_y \bar{f}(x,y)$, for any $\eta\neq 0$ consider the following function: \begin{equation} f(x,y)=\frac{1}{n}\sum_{i=1}^{n}f_i(x,y) =\frac{1}{n}\sum_{i=1}^{n}\eta^2\bar{f}_i\autopar{\frac{x}{\eta},\frac{y}{\eta}}. \end{equation} Then its primal function $\Phi(x)\triangleq\max_{y\in\mathbb{R}^{d+1}}f(x,y)$ satisfies \begin{equation} \Phi(x)=\frac{1}{n}\sum_{i=1}^{n}\Phi_i(x), \quad \text{where}\quad \Phi_i(x) = \eta^2\bar{\Phi}_i\autopar{\frac{x}{\eta}} = \eta^2\Phi_d\autopar{\frac{1}{\eta}\mathbf{U}^{(i)}x}.
\end{equation} \end{lemma} \begin{proof} Based on \eqref{eq:LB_AS_FS_Hard_Instance_Detailed}, we can write out the formulation of $f$: \begin{equation} \label{eq:LB_AS_FS_Hard_Instance_Scaled_Detailed} \begin{split} &f(x,y) = \eta^2\bar{f}\autopar{\frac{x}{\eta},\frac{y}{\eta}}\\ =\ & \eta^2\Bigg( -\frac{\lambda_2}{n}\autonorm{\frac{y}{\eta}}^2 + \frac{1}{n}\sum_{i=1}^{n} \Bigg[\lambda_1\autoprod{B_d\mathbf{U}^{(i)}\frac{x}{\eta},\mathbf{V}^{(i)}\frac{y}{\eta}}- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}\frac{x}{\eta}}+ \frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n\autopar{\frac{x}{\eta}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad - \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{u^{(i)}_{d+1}}{\eta}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}\Bigg]\Bigg)\\ =\ & -\frac{\lambda_2}{n}\autonorm{y}^2 + \frac{1}{n}\sum_{i=1}^{n} \lambda_1\autoprod{B_d\mathbf{U}^{(i)}x,\mathbf{V}^{(i)}y} + \eta^2\Bigg( \frac{1}{n}\sum_{i=1}^{n} \Bigg[- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}\frac{x}{\eta}} + \frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n\autopar{\frac{x}{\eta}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad - \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{u^{(i)}_{d+1}}{\eta}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}\Bigg]\Bigg) , \end{split} \end{equation} check the gradient over $y$ and set it to be $0$ to solve for $y^*(x)$, we have \begin{equation} \nabla_y f(x,y^*(x))=-\frac{2\lambda_2}{n}y^*(x)+\frac{\lambda_1}{n}\sum_{i=1}^{n} \autopar{\mathbf{V}^{(i)}}^\top B_d\mathbf{U}^{(i)}x=0 \quad\Longrightarrow \quad y^*(x)=\frac{\lambda_1}{2\lambda_2}\sum_{i=1}^{n} \autopar{\mathbf{V}^{(i)}}^\top B_d\mathbf{U}^{(i)}x, \end{equation} which implies that \begin{equation} \begin{split} \mathbf{V}^{(i)}y^*(x) =\ & \frac{\lambda_1}{2\lambda_2}\sum_{j=1}^{n} \mathbf{V}^{(i)}\autopar{\mathbf{V}^{(j)}}^\top B_d\mathbf{U}^{(j)}x = \frac{\lambda_1}{2\lambda_2} 
B_d\mathbf{U}^{(i)}x\\ \autonorm{y^*(x)}^2 =\ & \frac{\lambda_1^2}{4\lambda_2^2}\sum_{i=1}^{n}\autonorm{B_d\mathbf{U}^{(i)}x}^2, \end{split} \end{equation} so the primal function is \begin{equation} \begin{split} &\Phi(x)=f(x,y^*(x)) = \eta^2\bar{f}\autopar{\frac{x}{\eta},\frac{y^*(x)}{\eta}}\\ =\ & \frac{\lambda_1^2}{4\lambda_2n}\sum_{i=1}^{n}\autonorm{B_d\mathbf{U}^{(i)}x}^2 + \frac{\eta^2}{n}\sum_{i=1}^{n} \Bigg[- \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}\frac{x}{\eta}} + \frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n\autopar{\frac{x}{\eta}} - \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{u^{(i)}_{d+1}}{\eta}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}\Bigg]\\ =\ & \frac{\eta^2}{n}\sum_{i=1}^{n} \Bigg[ \frac{\lambda_1^2}{4\lambda_2}\autonorm{B_d\mathbf{U}^{(i)}\frac{x}{\eta}}^2 - \frac{\lambda_1^2\sqrt{\alpha}}{2\lambda_2}\autoprod{e_1,\mathbf{U}^{(i)}\frac{x}{\eta}} + \frac{\lambda_1^2\alpha}{2n\lambda_2}\Gamma_d^n\autopar{\frac{x}{\eta}} - \frac{\lambda_1^2\alpha}{4\lambda_2}\autopar{\frac{u^{(i)}_{d+1}}{\eta}}^2+ \frac{\lambda_1^2\sqrt{\alpha}}{4\lambda_2}\Bigg]\\ =\ & \frac{1}{n}\sum_{i=1}^{n}\autopar{\eta^2\Phi_d\autopar{\frac{1}{\eta}\mathbf{U}^{(i)}x}} , \end{split} \end{equation} where the last equality directly applies the conclusion in Lemma \ref{lm:primal_hard_instance_FS_AS}, which concludes the proof. 
\end{proof} \subsubsection{Proof of Theorem \ref{THM:LB_NCSC_FS_AS}} \label{sec:Apdx_THM_LB_NCSC_FS_AS} Recall that the complexity for averaged smooth finite-sum nonconvex-strongly-concave problems is defined as \begin{equation} \begin{split} \mathrm{Compl}_\epsilon\autopar{\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta},\mathcal{A},\mathbb{O}_{\mathrm{IFO}}^{L,\mathrm{AS}}} \triangleq\ & \underset{f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta}}{\sup}\ \underset{\mathtt{A}\in{\mathcal{A}\autopar{\mathbb{O}_{\mathrm{IFO}}^{L,\mathrm{AS}}}}}{\inf}\ \mathbb{E}\ T_{\epsilon}(f,\mathtt{A})\\ =\ & \underset{f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta}}{\sup}\ \underset{\mathtt{A}\in{\mathcal{A}\autopar{\mathbb{O}_{\mathrm{IFO}}^{L,\mathrm{AS}}}}}{\inf}\ \mathbb{E}\ \inf \autobigpar{T\in\mathbb{N}\ \Big|\ \autonorm{\nabla \Phi\autopar{x^T}}\leq\epsilon}. \end{split} \end{equation} Based on the discussion of the properties of the hard instance, we come to the final statement and proof of the theorem. 
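As a sanity check on Lemma \ref{lm:LB_FS_Hard_Instance_bar_f_base_AS}, the parameter computation can also be verified numerically. The following minimal sketch (the sample values of $(n,\mu,L)$ are our own illustrative choices subject to $L\geq 2n\mu$, with $C_\gamma=360$ as used in the proof) checks that the max expression bounding the averaged-smoothness constant never exceeds $L^2$:

```python
import math

def averaged_smoothness_bound(n, mu, L, C_gamma=360.0):
    """Evaluate the max expression bounding the averaged-smoothness
    constant of {f_i} under the parameter choice of the lemma."""
    assert L >= 2 * n * mu > 0
    lam1 = math.sqrt(n / 40.0) * L       # lambda_1^*
    lam2 = n * mu / 2.0                  # lambda_2^*
    alpha = n * mu / (50.0 * L)
    term_y = 16 * lam1**2 + 8 * lam2**2
    term_x = (C_gamma**2 * lam1**4 * alpha**2 / (n * lam2**2)
              + lam1**4 * alpha**2 / lam2**2
              + 8 * lam1**2)
    return max(term_y, term_x) / n

# the bound should not exceed L^2 for any admissible (n, mu, L)
for n, mu, L in [(1, 1.0, 2.0), (4, 1.0, 8.0), (10, 0.1, 50.0)]:
    assert averaged_smoothness_bound(n, mu, L) <= L**2
```

Such a check only probes finitely many parameter settings, of course; the lemma itself covers all admissible ones.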
\begin{theorem}[Lower Bound for Finite-Sum AS NC-SC, Restate Theorem \ref{THM:LB_NCSC_FS_AS}] For any linear-span first-order algorithm $ \mathtt{A}\in\mathcal{A} $, and parameters $ L,\mu,\Delta>0 $ with a desired accuracy $ \epsilon>0 $, for the following function $ f:\mathbb{R}^{(d+1)}\times\mathbb{R}^{(d+1)}\rightarrow\mathbb{R} $: \begin{equation} f_i(x,y)=\eta^2\bar{f}_i\left(\frac{x}{\eta},\frac{y}{\eta}\right), \quad f(x,y)=\frac{1}{n}\sum_{i=1}^{n}f_i(x,y) \end{equation} where $\bar{f}_i$ is defined as \eqref{eq:LB_hard_instance_FS} and $\autobigpar{\mathbf{U}^{(i)}}_{i=1}^n\in\mathrm{\mathbf{Orth}}\autopar{d+1,(d+1)n,n}$ is defined in \eqref{eq:LB_FS_Orthogonal_Matrix}, with its primal function $ \Phi(x)\triangleq\max_{y\in\mathbb{R}^{d+1}}f(x,y) $, for small enough $ \epsilon>0 $ satisfying \begin{equation} \epsilon^2\leq\min\autopar{ \frac{\sqrt{\alpha}L^2\Delta}{76800n\mu}, \frac{\alpha L^2\Delta}{1280n\mu}, \frac{L^2\Delta}{\mu} }, \end{equation} if we set $L\geq 2n\mu>0$ and \begin{equation} \lambda^*=\autopar{\sqrt{\frac{n}{40}}L,\frac{n\mu}{2}}, \quad \eta=\frac{160\sqrt{2n}\mu}{L^2}\alpha^{-\frac{3}{4}}\epsilon, \quad \alpha=\frac{n\mu}{50L}, \quad d=\left\lfloor\frac{\sqrt{\alpha}L^2\Delta}{25600n\mu}\epsilon^{-2}\right\rfloor\geq 3, \end{equation} we have \begin{itemize} \item The function $ f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta} $, $ \{f_i\}_{i=1}^n $ is $ L $-averaged smooth. \item In the worst case, the algorithm $ \mathtt{A} $ requires at least $\Omega\autopar{n+\sqrt{n\kappa} \Delta L \epsilon^{-2}}$ IFO calls to attain a point $ \hat{x}\in\mathbb{R}^{d+1} $ such that $ \mathbb{E}\|\nabla\Phi(\hat{x})\|\leq\epsilon $, i.e., \begin{equation} \mathrm{Compl}_\epsilon\autopar{\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta},\mathcal{A},\mathbb{O}_{\mathrm{IFO}}^{L,\mathrm{AS}}}=\Omega\autopar{n+\sqrt{n\kappa} \Delta L \epsilon^{-2}}. \end{equation} \end{itemize} \end{theorem} \begin{proof} We divide our proof into two cases.
\paragraph{Case 1} The first case builds an $\Omega(n)$ lower bound from a special case of NC-SC function. Consider the following function: $x,y\in\mathbb{R}^d$ and \begin{equation} \label{eq:NCSC_FS_hard_instance_Case_1} h_i(x,y)\triangleq \theta\autoprod{v_i,x}+L\autoprod{x,y}-\frac{\mu}{2}\|y\|^2, \quad h(x,y)\triangleq\frac{1}{n}\sum_{i=1}^n h_i(x,y), \end{equation} where $\theta\leq\sqrt{\frac{2L^2n^2\Delta}{\mu d}}$, $0<\mu\leq L$, the dimension $d$ is a multiple of $n$, and $v_i\in\mathbb{R}^d$ is defined as \begin{equation} v_i\triangleq \begin{bmatrix} 0 & \cdots & 0 & 1 & \cdots & 1 & 0 & \cdots & 0 \end{bmatrix}^\top, \end{equation} such that the elements with indices from $\frac{i-1}{n}d+1$ to $\frac{i}{n}d$ are $1$ and the others are all $0$; namely, there are $\frac{d}{n}$ non-zero elements. It is easy to see that the function $h_i$ is $\mu$-strongly concave in $y$ and $L$-smooth in both $x$ and $y$. For the initial value gap, denote $\varphi\triangleq\max_y h$. We have \begin{equation} \varphi(x) = \frac{1}{n}\sum_{i=1}^n\left(\theta\autoprod{v_i,x}+\frac{L^2}{2\mu}\|x\|^2\right) = \frac{L^2}{2\mu}\|x\|^2+\frac{\theta}{n}\sum_{i=1}^n\autoprod{v_i,x}, \end{equation} which is a strongly convex function, and its optimal point $x^*$ is \begin{equation} x^*=-\frac{\mu\theta}{L^2n}\sum_{i=1}^n v_i, \quad \varphi^*=-\frac{\mu\theta^2}{2L^2n^2}\left\|\sum_{i=1}^n v_i\right\|^2. \end{equation} Based on the setting of $\theta$, \begin{equation} \varphi(0)-\varphi^*=\frac{\mu\theta^2}{2L^2n^2}\left\|\sum_{i=1}^n v_i\right\|^2 = \frac{\mu\theta^2d}{2L^2n^2} \leq \Delta. \end{equation} Hence, we have $h\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta}$.
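The identities behind the Case 1 construction are easy to verify numerically. The following minimal sketch (the sample values of $n,d,\mu,L,\Delta$ are our own illustrative choices, with $\theta$ taken at its largest admissible value) checks the initial gap $\varphi(0)-\varphi^*\leq\Delta$ and the gradient lower bound at a point supported on only half of the blocks:

```python
import numpy as np

n, d = 4, 8                        # d is a multiple of n
mu, L, Delta = 0.5, 2.0, 1.0
theta = np.sqrt(2 * L**2 * n**2 * Delta / (mu * d))
eps = np.sqrt(L**2 * Delta / mu)   # largest admissible eps: eps^2 <= L^2 Delta / mu

# block indicator vectors v_1, ..., v_n
V = np.zeros((n, d))
for i in range(n):
    V[i, i * d // n:(i + 1) * d // n] = 1.0

ones = V.sum(axis=0)               # sum of all v_i = all-ones vector
phi = lambda x: (L**2 / (2 * mu)) * x @ x + (theta / n) * ones @ x
grad_phi = lambda x: (L**2 / mu) * x + (theta / n) * ones

# initial optimality gap equals mu * theta^2 * d / (2 L^2 n^2) <= Delta
x_star = -(mu * theta) / (L**2 * n) * ones
assert phi(np.zeros(d)) - phi(x_star) <= Delta + 1e-9

# a point in the span of only n/2 of the v_i has >= d/2 zero coordinates,
# so the primal gradient stays large on the untouched blocks
x_t = 0.3 * V[0] - 1.7 * V[1]
assert np.linalg.norm(grad_phi(x_t)) >= (theta / n) * np.sqrt(d / 2) - 1e-9
assert (theta / n) * np.sqrt(d / 2) >= eps - 1e-9
```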
Then based on the expression of $\nabla_x h_i$ and $\nabla_y h_i$, we have that, starting from $(x,y)=(0,0)$ and denoting $\{i_1, i_2, \cdots, i_t\}$ as the indices of the IFO sequence for $t$ queries, the output $(\hat{x}_t,\hat{y}_t)$ satisfies \begin{equation} \hat{x}_t,\hat{y}_t\in\mathrm{Span}\{v_{i_1}, v_{i_2}, \cdots, v_{i_t}\}. \end{equation} Then note that each $v_i$ contains only $\frac{d}{n}$ non-zero elements, so by the expression of the gradient of the primal function $\nabla \varphi$, if $t\leq n/2$, then there must be at least $\frac{n}{2}\times\frac{d}{n}=\frac{d}{2}$ zero elements in $\hat{x}_t$, which implies that for $\epsilon^2\leq\frac{L^2\Delta}{\mu}$, \begin{equation} \|\nabla \varphi(\hat{x}_t)\| = \left\|\frac{L^2}{\mu}\hat{x}_t+\frac{\theta}{n}\sum_{i=1}^n v_i\right\| \geq \frac{\theta}{n}\sqrt{\frac{d}{2}} \geq \epsilon, \end{equation} where we take $\theta=\sqrt{\frac{2L^2n^2\Delta}{\mu d}}$, its largest admissible value. Hence it requires $\Omega(n)$ IFO calls to find an $\epsilon$-stationary point. \paragraph{Case 2} The second case provides an $\Omega(\sqrt{n\kappa}\Delta L\epsilon^{-2})$ lower bound concerning the second term in the result. Throughout the case, we assume $L\geq 2n\mu>0$ as in Lemma \ref{lm:LB_FS_Hard_Instance_bar_f_base_AS}. Here we still use the hard instance constructed in \eqref{eq:LB_hard_instance_FS}. Note that $ \nabla f_i(x,y)=\eta\nabla \bar{f}_i\autopar{\frac{x}{\eta},\frac{y}{\eta}} $, i.e., $f_i$ is a scaled version of $\bar{f}_i$, which is $L$-averaged smooth by Lemma \ref{lm:LB_FS_Hard_Instance_bar_f_base_AS}, so by Lemma \ref{lm:scaling} we have that $ \{f_i\}_i $ is also $ L $-averaged smooth. Then for the strong concavity, note that $\bar{f}$ is $ \mu $-strongly concave in $ y $, so as its scaled version, $f$ is also $ \mu $-strongly concave in $ y $.
Then for the primal function of $f$, let $ \Phi(x)\triangleq\max_y f(x,y) $. By Lemma \ref{lm:primal_hard_instance_FS_AS} and Lemma \ref{lm:Primal_Scaled_FS_AS_Hard_Instance}, we have \begin{equation} \Phi(x) = \eta^2\bar{\Phi}\autopar{\frac{x}{\eta}} = \frac{1}{n}\sum_{i=1}^{n}\eta^2\bar{\Phi}_i\autopar{\frac{x}{\eta}}, \end{equation} where $\bar{\Phi}$ and $\bar{\Phi}_i$ follow the definitions in Lemma \ref{lm:primal_hard_instance_FS_AS}. We first justify the lower bound argument by lower bounding the norm of the gradient. Recall the definition of $\mathcal{I}$ (see \eqref{eq:LB_FS_nonconvergence}), which is the index set such that $ u^{(i)}_d=u^{(i)}_{d+1}=0,\ \forall i\in\mathcal{I} $, where $ u^{(i)}=\mathbf{U}^{(i)}x $. By substituting the parameters in the statement above into \eqref{eq:LB_FS_nonconvergence} and Lemma \ref{LM:NCSC_LB_PHI}, we have that when the size of the set satisfies $ |\mathcal{I}|>n/2 $ (note that scaling does not affect the activation status), \begin{equation} \begin{split} \|\nabla \Phi(x)\|^2 =\ & \autonorm{\eta\nabla \bar{\Phi}\autopar{\frac{x}{\eta}}}^2 = \eta^2\autonorm{\nabla \bar{\Phi}\autopar{\frac{x}{\eta}}}^2\\ \geq\ & \frac{51200n\mu^2}{L^4}\alpha^{-\frac{3}{2}}\epsilon^2\cdot \frac{\lambda_1^4}{128n\lambda_2^2}\alpha^{\frac{3}{2}}\\ =\ & \frac{51200n\mu^2}{L^4}\alpha^{-\frac{3}{2}}\epsilon^2\cdot \frac{L^4}{51200n\mu^2}\alpha^{\frac{3}{2}} = \epsilon^2. \end{split} \end{equation} Next, we upper bound the starting optimality gap.
Substituting the parameter settings and the initial gap of $\bar{\Phi}$ in \eqref{eq:hard_instance_FS_initial_gap}, and recalling the setting of $\epsilon$, we have \begin{equation} \begin{split} \Phi(0)-\Phi^* =\ & \eta^2\left(\bar{\Phi}(0)-\inf_{x\in\mathbb{R}^{d+1}}\bar{\Phi}(x)\right) = \frac{51200n\mu^2}{L^4}\alpha^{-\frac{3}{2}}\epsilon^2\cdot\frac{nL^2}{40n\mu}\left(\frac{\sqrt{\alpha}}{2}+10\alpha d\right)\\ =\ & \frac{1280n\mu}{L^2}\left(\frac{1}{2\alpha}+\frac{10d}{\sqrt{\alpha}}\right)\epsilon^2 = \frac{640n\mu\epsilon^2}{\alpha L^2}+\frac{12800n\mu d\epsilon^2}{L^2\sqrt{\alpha}}\\ \leq\ & \frac{640n\mu}{\alpha L^2}\cdot\frac{\alpha L^2\Delta}{1280n\mu}+\frac{12800n\mu\epsilon^2}{L^2\sqrt{\alpha}}\cdot\frac{\sqrt{\alpha}L^2\Delta}{25600n\mu}\epsilon^{-2}\\ \leq\ & \frac{\Delta}{2}+\frac{\Delta}{2} = \Delta, \end{split} \end{equation} so we conclude that $ f\in\mathcal{F}_{\mathrm{NCSC}}^{L,\mu,\Delta} $, i.e., the function class requirement is satisfied. To show the lower bound, by the previous analysis and the choice of \eqref{eq:LB_FS_Orthogonal_Matrix}, the activation process for each component will also mimic the ``alternating zero-chain'' mechanism (see Lemma \ref{LM:NCSC_LB_F_D}) independently. So by the lower bound argument \eqref{eq:LB_FS_nonconvergence}, primal stationarity convergence requires activating at least half of the components up to their $d$-th elements (equivalently, as long as at least half of $\{u^{(i)}\}_i$ are not activated up to the $d$-th element, the gradient norm stays above $\epsilon$; note that each $u^{(i)}$ corresponds to a unique part of $x$ of length $(d+1)$), which takes (using $2\lfloor x\rfloor-1\geq x$ for $x\geq 3$) \begin{equation} T=\frac{n}{2}(2d-1) \geq \frac{n}{2}\cdot\frac{\sqrt{\alpha}L^2\Delta}{25600n\mu}\epsilon^{-2} = \Omega\autopar{\sqrt{\alpha}\Delta L\kappa\epsilon^{-2}} = \Omega\autopar{\sqrt{n \kappa}\Delta L\epsilon^{-2}} \end{equation} IFO oracle queries.
So for any fixed index sequence $\{i_t\}_{t=1}^T$, the output $ z^{T+1} $ from a randomized algorithm\footnote{Note that randomization does not affect the lower bound, as long as the algorithm satisfies the linear-span assumption.} cannot be an $\epsilon$-approximate stationary point, which verifies the $\Omega\autopar{n\vee \sqrt{n\kappa}\Delta L\epsilon^{-2}}$, i.e., $\Omega\autopar{n+\sqrt{n\kappa}\Delta L\epsilon^{-2}}$, lower bound by combining the two cases discussed above. Finally, by Yao's minimax theorem \citep{yao1977probabilistic}, the lower bound also holds for a randomized index sequence incurred by the IFOs, which concludes the proof. \end{proof} \section{Proof of NC-SC Catalyst} \subsection{Outer-loop Complexity} In this section, we first introduce a few useful definitions. The Moreau envelope of a function $F$ with a parameter $\lambda>0$ is: $$ F_{\lambda}(x) = \min_{z\in\mathbb{R}^{d_1}} F(z) + \frac{1}{2\lambda}\Vert z - x \Vert^2.$$ We also define the proximal point of $x$: $$\operatorname{prox}_{\lambda F}(x)=\argmin_{z\in\mathbb{R}^{d_1}}\left\{F(z)+\frac{1}{2 \lambda}\|z-x\|^{2}\right\}.$$ When $F$ is differentiable and $\ell$-weakly convex, for $\lambda \in(0,1 / \ell)$ we have \begin{equation} \nabla F(\prox_{\lambda F} (x)) = \nabla F_\lambda(x) = \lambda^{-1}(x - \prox_{\lambda F}(x)). \end{equation} Thus a small gradient $\|\nabla F_\lambda (x)\|$ implies that $x$ is near a point $\prox_{\lambda F}(x)$ that is nearly stationary for $F$. Therefore $\|\nabla F_\lambda(x)\|$ is also commonly used as a measure of stationarity. We refer readers to \citep{drusvyatskiy2019efficiency} for more discussion on the Moreau envelope. In this subsection, we use $(x^t, y^t)$ as a shorthand for $(x^t_0, y^t_0)$.
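The Moreau-envelope identity above can be checked in closed form for a differentiable convex quadratic, since $\operatorname{prox}_{\lambda F}(x)=(I+\lambda A)^{-1}x$ when $F(z)=\frac{1}{2}z^\top Az$ with $A\succ 0$. A minimal numerical sketch (the quadratic instance is an illustrative choice of ours, not part of the analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 5, 0.1
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)         # positive definite => F convex and smooth
F_grad = lambda z: A @ z        # gradient of F(z) = 0.5 z^T A z

x = rng.standard_normal(d)
# prox_{lam F}(x) solves the optimality condition A z + (z - x)/lam = 0
prox = np.linalg.solve(np.eye(d) + lam * A, x)

# two equivalent expressions for the gradient of the Moreau envelope:
# lam^{-1}(x - prox) and grad F evaluated at the proximal point
g_env = (x - prox) / lam
assert np.allclose(g_env, F_grad(prox))
```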
We will denote $(\hat{x}^t, \hat{y}^t)$ as the optimal solution to the auxiliary problem (\ref{auxiliary prob ncc}) at the $t$-th iteration: $\min_{x\in \mathbb{R}^{d_1}}\max_{y\in \mathbb{R}^{d_2}} \left[\hat{f}_{t}(x,y)\triangleq f(x,y) + L\Vert x - x^t\Vert^2 \right]$. It is easy to observe that $\hat{x}^t = \prox_{\Phi/2L}(x^t)$. Define $\hat{\Phi}_t(x) = \max_y f(x,y) + L\|x-x^t\|^2 $. In the following theorem, we show the convergence of the Moreau envelope gradient $\|\nabla\Phi_{1/2L}(x)\|^2$ when we replace the inexactness measure (\ref{ncc criteion}) by another inexactness measure $\gap_{\hat{f}_{t}}(x^{t+1}, y^{t+1})\leq \beta_t(\|x^t-\hat{x}^t\|^2+\|y^t-\hat{y}^t\|^2)$. Later we will show that this inexactness measure is implied by (\ref{ncc criteion}) with our choice of $\beta_t$ and $\alpha_t$. \begin{theorem} \label{ncc moreau complexity} Suppose the function $f$ is NC-SC with strong concavity parameter $\mu$ and is $L$-Lipschitz smooth. If we replace the stopping criterion (\ref{ncc criteion}) by $\gap_{\hat{f}_{t}}(x^{t+1}, y^{t+1})\leq \beta_t(\|x^t-\hat{x}^t\|^2+\|y^t-\hat{y}^t\|^2)$ with $\beta_t = \frac{\mu^4}{28L^3}$ for $t>0$ and $\beta_0 = \frac{\mu^4}{32L^4\max\{1,L\}}$, then the iterates from Algorithm \ref{catalyst ncc 1} satisfy \begin{align} \label{ncsc moreau sum} \sum_{t=0}^{T-1} \|\nabla\Phi_{1/2L}(x^t)\|^2 \leq \frac{87L}{5}\Delta_0 + \frac{7L}{5}D_y^0, \end{align} where $D_y^0 = \|y^0 - y^*(x^0)\|^2$ and $\Delta_0 = \Phi(x^0) -\inf_{x}\Phi(x) $. \end{theorem} \begin{proof} Define $b_{t+1} = \gap_{\hat{f}_t}(x^{t+1}, y^{t+1})$.
By Lemma 4.3 in \citep{drusvyatskiy2019efficiency}, \begin{align} \nonumber \Vert \nabla \Phi_{1/2L}(x^t)\Vert^2 = 4L^2 \Vert x^t - \prox_{\Phi/2L}(x^t)\Vert^2 \leq & 8L [\hat{\Phi}_t(x^t) - \hat{\Phi}_t(\prox_{\Phi/2L}(x^t))] \\ \nonumber \leq & 8L [\hat{\Phi}_t(x^t) -\hat{\Phi}_t(x^{t+1}) + b_{t+1}] \\ \nonumber = & 8L\big\{\Phi(x^t) - \left[\Phi(x^{t+1})+L\Vert x^{t+1}-x^t\Vert^2 \right]+b_{t+1}\big\}\\ \label{bound x to prox 3} \leq & 8L [\Phi(x^t) - \Phi(x^{t+1}) +b_{t+1}], \end{align} where in the first inequality we use the $L$-strong convexity of $\hat{\Phi}_t$. Then, for $t\geq 1$, \begin{align*} \|y^t - \hat{y}^t\|^2 \leq& 2\|y^t - \hat{y}^{t-1}\|^2 + 2\|y^*(\hat{x}^{t-1})-y^*(\hat{x}^t)\|^2 \\ \leq & 2\|y^t - \hat{y}^{t-1}\|^2 + 2\left(\frac{L}{\mu}\right)^2\|\hat{x}^t-\hat{x}^{t-1}\|^2\\ \leq & 2\|y^t - \hat{y}^{t-1}\|^2 + 4\left(\frac{L}{\mu}\right)^2\|\hat{x}^t-x^t\|^2 + 4\left(\frac{L}{\mu}\right)^2\|x^t-\hat{x}^{t-1}\|^2 \\ \leq & \frac{8L}{\mu^2}b_{t} + 4\left(\frac{L}{\mu}\right)^2\|\hat{x}^t-x^t\|^2, \end{align*} where we use Lemma \ref{lin's lemma} in the second inequality, and the $(L,\mu)$-SC-SC property of $\hat{f}_{t-1}(x,y)$ and Lemma \ref{criterion relation} in the last inequality. Therefore, \begin{equation} \label{ncc dist dif} \|x^t - \hat{x}^t\|^2 + \|y^t - \hat{y}^t\|^2 \leq \frac{8L}{\mu^2}b_{t} + \left(\frac{4L^2}{\mu^2} + 1\right)\|\hat{x}^t-x^t\|^2. \end{equation} By our stopping criterion and $ \Vert \nabla \Phi_{1/2L}(x^t)\Vert^2 = 4L^2 \Vert x^t - \hat{x}^t\Vert^2$, for $t\geq 1$, \begin{equation*} b_{t+1} \leq \beta_t \left[ \|x^{t} - \hat{x}^t\|^2 + \|y^{t} - \hat{y}^t\|^2 \right] \leq \frac{8L \beta_t}{\mu^2}b_{t} + \beta_t\left(\frac{1}{\mu^2} + \frac{1}{4L^2}\right)\Vert \nabla \Phi_{1/2L}(x^t)\Vert^2. \end{equation*} Define $\theta = \frac{2}{7}$ and $w = \frac{5\mu^2}{112L^3}$.
It is easy to verify that as $\beta_t = \frac{\mu^4}{28L^3}$, then $\frac{8L \beta_t}{\mu^2}\leq \theta$ and $\beta_t\left(\frac{1}{\mu^2} + \frac{1}{4L^2}\right)\leq w$. We conclude the following recursive bound \begin{equation}\label{eqn:rec_outer_loop} b_{t+1} \leq \theta b_t + w\Vert \nabla \Phi_{1/2L}(x^t)\Vert^2. \end{equation} For $t=0$, \begin{equation} \label{ncc y0 bound} \|y^0-\hat{y}^0\|^2\leq 2\|y^0-y^*(x^0)\|^2 + 2\|\hat{y}^0-y^*(x^0)\|^2\leq 2\|y^0-y^*(x^0)\|^2 + 2\left(\frac{L}{\mu}\right)^2\|x^0-\hat{x}^0\|^2. \end{equation} Because $\Phi(x)+L\|x-x^0\|^2$ is $L$-strongly convex, we have \begin{equation*} \left(\Phi(\hat{x}^0) +L\|\hat{x}^0-x^0\|^2 \right) + \frac{L}{2}\|\hat{x}^0-x^0\|^2\leq \Phi(x^0) = \Phi^* + (\Phi(x^0)-\Phi^*) \leq \Phi(\hat{x}^0) + (\Phi(x^0)-\Phi^*). \end{equation*} This implies $\|\hat{x}^0-x^0\|^2\leq \frac{L}{2}(\Phi(x^0)-\Phi^*)$. Then combining with (\ref{ncc y0 bound}) \begin{equation*} \|y^0-\hat{y}^0\|^2 + \|x^0-\hat{x}^0\|^2 \leq \left(\frac{L^3}{\mu^2}+\frac{L}{2}\right)(\Phi(x^0)-\Phi^*) + 2\|y^0-y^*(x^0)\|^2. \end{equation*} Hence, by the stopping criterion, \begin{equation*} b_1 \leq \beta_0\left(\frac{L^3}{\mu^2}+\frac{L}{2}\right)(\Phi(x^0)-\Phi^*) + 2\beta_0\|y^0-y^*(x^0)\|^2. \end{equation*} Define $\theta_0 = \frac{\mu^2}{16L^2}$ . With $\beta_0 = \frac{\mu^4}{32L^4\max\{1,L\}}$, $\beta_0\left(\frac{L^3}{\mu^2}+\frac{L}{2}\right)\leq \theta_0$ and $2\beta_0\leq \theta_0$. So we can write $$b_1 \leq \theta_0(\Phi(x^0)-\Phi^*) + \theta_0\|y^0-y^*(x^0)\|^2.$$ Unravelling \eqref{eqn:rec_outer_loop}, we have for $t\geq1$ \begin{align} b_{t+1} \leq \theta^tb_1 + w\sum_{k=1}^t\theta^{t-k}\|\nabla\Phi_{1/2L}(x_k)\|^2\leq \theta^{t}\theta_0(\Phi(x^0)-\Phi^*) + \theta^{t}\theta_0\|y^0-y^*(x^0)\|^2 +w\sum_{k=1}^t\theta^{t-k}\|\nabla\Phi_{1/2L}(x_k)\|^2. 
\end{align} Summing from $t=0$ to $T-1$, \begin{align} \nonumber \sum_{t=0}^{T-1}b_{t+1} &= \sum_{t=1}^{T-1}b_t + b_1\\ \nonumber &\leq \theta_0\sum_{t=0}^{T-1}\theta^t[\Phi(x^0)-\Phi^*] +\theta_0\sum_{t=0}^{T-1}\theta^t\|y^0-y^*(x^0)\|^2 + w\sum_{t=1}^{T-1}\sum_{k=1}^t\theta^{t-k}\|\nabla\Phi_{1/2L}(x_k)\|^2 \\ \label{rewrite b_t} &\leq \theta_0\sum_{t=0}^{T-1}\theta^t[\Phi(x^0)-\Phi^*] +\theta_0\sum_{t=0}^{T-1}\theta^t\|y^0-y^*(x^0)\|^2 +w\sum_{t=1}^{T-1}\frac{1}{1-\theta}\|\nabla\Phi_{1/2L}(x^t)\|^2, \end{align} where we use $\sum_{t=1}^{T-1}\sum_{k=1}^t\theta^{t-k}\|\nabla\Phi_{1/2L}(x_k)\|^2 = \sum_{k=1}^{T-1}\sum_{t=k}^{T-1}\theta^{t-k}\|\nabla\Phi_{1/2L}(x_k)\|^2 \leq \sum_{k=1}^{T-1}\frac{1}{1-\theta}\|\nabla\Phi_{1/2L}(x_k)\|^2$. Now, by telescoping (\ref{bound x to prox 3}), \begin{equation*} \frac{1}{8L}\sum_{t=0}^{T-1}\|\nabla\Phi_{1/2L}(x^t)\|^2 \leq \Phi(x^0)-\Phi^* + \sum_{t=0}^{T-1}b_{t+1}. \end{equation*} Plugging (\ref{rewrite b_t}) in, \begin{equation} \frac{1}{8L}\sum_{t=0}^{T-1}\|\nabla\Phi_{1/2L}(x^t)\|^2 -w\sum_{t=1}^{T-1}\frac{1}{1-\theta}\|\nabla\Phi_{1/2L}(x^t)\|^2 \leq \left(1+\frac{\theta_0}{1-\theta}\right)[\Phi(x^0)-\Phi^*]+\frac{\theta_0}{1-\theta}\|y^0-y^*(x^0)\|^2. \end{equation} Plugging in $w\leq \frac{5}{112L}$, $\frac{1}{1-\theta}=\frac{7}{5}$, and $\theta_0\leq \frac{1}{16}$, \begin{align*} \frac{1}{16L}\sum_{t=0}^{T-1}\|\nabla\Phi_{1/2L}(x^t)\|^2 \leq \frac{87}{80}[\Phi(x^0)-\Phi^*]+\frac{7}{80}\|y^0-y^*(x^0)\|^2. \end{align*} \end{proof} \noindent\textbf{Proof of Theorem \ref{THM CATALYST NCSC}} \begin{proof} We first show that criterion (\ref{ncc criteion}) implies the criterion in Theorem \ref{ncc moreau complexity}.
By Lemma \ref{criterion relation}, as $\hat{f}_t$ is $(L, \mu)$-SC-SC and $3L$-smooth, \begin{equation*} 2\mu \gap_{\hat{f}_{t}}(x^{t+1}, y^{t+1})\leq \|\nabla \hat{f}_t(x^{t+1}, y^{t+1})\|^2 \leq \alpha_t\|\nabla \hat{f}_t(x^t, y^t)\|^2 \leq 36L^2\alpha_t(\|x^t-\hat{x}^t\|^2+\|y^t-\hat{y}^t\|^2), \end{equation*} therefore, \begin{equation*} \gap_{\hat{f}_{t}}(x^{t+1}, y^{t+1}) \leq \frac{18L^2\alpha_t}{\mu}(\|x^t-\hat{x}^t\|^2+\|y^t-\hat{y}^t\|^2), \end{equation*} which implies $\gap_{\hat{f}_{t}}(x^{t+1}, y^{t+1})\leq \beta_t(\|x^t-\hat{x}^t\|^2+\|y^t-\hat{y}^t\|^2)$ by our choice of $\{\beta_t\}_t$ and $\{\alpha_t\}_t$. We still use $b_{t+1} = \gap_{\hat{f}_t}(x^{t+1}, y^{t+1})$ as in the proof of Theorem \ref{ncc moreau complexity}. First, note that \begin{align} \nonumber \|\nabla\Phi(x^{t+1})\|^2 &\leq 2\|\nabla\Phi(x^{t+1})-\nabla \Phi(\hat{x}^t) \|^2 + 2\|\nabla\Phi(\hat{x}^t)\|^2\\ \nonumber &\leq 2\left(\frac{2L^2}{\mu} \right)^2\|x^{t+1}-\hat{x}^t\|^2 + 2\|\nabla \Phi_{1/2L}(x^t)\|^2 \\ &\leq \frac{16L^3}{\mu^2}b_{t+1} + 2\|\nabla \Phi_{1/2L}(x^t)\|^2, \end{align} where in the second inequality we use Lemma \ref{lin's lemma} and Lemma 4.3 in \citep{drusvyatskiy2019efficiency}. Summing from $t=0$ to $T-1$, we have \begin{equation} \label{moreau to primal convergence} \sum_{t=0}^{T-1} \|\nabla\Phi(x^{t+1})\|^2 \leq \frac{16L^3}{\mu^2}\sum_{t=0}^{T-1} b_{t+1} + 2\sum_{t=0}^{T-1} \|\nabla \Phi_{1/2L}(x^t)\|^2. \end{equation} Applying (\ref{rewrite b_t}), we have \begin{align*} \frac{16L^3}{\mu^2}\sum_{t=0}^{T-1} b_{t+1} \leq \frac{16L^3\theta_0}{\mu^2}\sum_{t=0}^{T-1}\theta^t[\Phi(x^0)-\Phi^*] +\frac{16L^3\theta_0}{\mu^2}\sum_{t=0}^{T-1}\theta^t\|y^0-y^*(x^0)\|^2 +\frac{16L^3w}{\mu^2}\sum_{t=1}^{T-1}\frac{1}{1-\theta}\|\nabla\Phi_{1/2L}(x^t)\|^2.
\end{align*} Plugging in $\theta_0 = \frac{\mu^2}{16L^2}$, $\theta = \frac{2}{7}$ and $w = \frac{5\mu^2}{112L^3}$, \begin{equation*} \frac{16L^3}{\mu^2}\sum_{t=0}^{T-1} b_{t+1} \leq \frac{7L}{5}[\Phi(x^0)-\Phi^*] + \frac{7L}{5}\|y^0-y^*(x^0)\|^2 + \sum_{t=1}^{T-1}\|\nabla\Phi_{1/2L}(x^t)\|^2. \end{equation*} Plugging back into (\ref{moreau to primal convergence}), \begin{equation*} \sum_{t=0}^{T-1} \|\nabla\Phi(x^{t+1})\|^2 \leq \frac{7L}{5}[\Phi(x^0)-\Phi^*] + \frac{7L}{5}\|y^0-y^*(x^0)\|^2 + 3\sum_{t=0}^{T-1}\|\nabla\Phi_{1/2L}(x^t)\|^2. \end{equation*} Applying Theorem \ref{ncc moreau complexity}, \begin{equation*} \frac{1}{T}\sum_{t=1}^T \|\nabla\Phi(x^{t+1})\|^2 \leq \frac{268L}{5T}[\Phi(x^0)-\Phi^*] + \frac{28L}{5T}\|y^0-y^*(x^0)\|^2. \end{equation*} \end{proof} \subsection{Complexity of solving auxiliary problem (\ref{auxiliary prob ncc}) and proof of Theorem \ref{thm catalyst scsc}} In this layer, we apply an inexact proximal point algorithm to solve the $(L,\mu)$-SC-SC and $3L$-smooth auxiliary problem: $\min_x\max_y \hat{f}_t(x,y)$. Throughout this subsection, we suppress the outer-loop index $t$ without confusion, i.e. we use $\hat{f}$ instead of $\hat{f}_t$ and $\Tilde{f}_{k} = \hat{f}(x,y) - \frac{\tau}{2}\Vert y - z_k\Vert^2$ instead of $\Tilde{f}_{t,k}$. Accordingly, we also omit the superscript in $(x^t_k, y^t_k)$ and $\epsilon^t_k$. Before we prove Theorem \ref{thm catalyst scsc}, we present a lemma from \citep{lin2018catalyst}. The inner loop to solve (\ref{auxiliary prob ncc}) can be considered as applying Catalyst for strongly-convex minimization in \citep{lin2018catalyst} to the function $-\hat{\Psi}(y) = -\min_{x\in\mathbb{R}^{d}}\hat{f}(x, y)$. The following lemma captures the convergence of Catalyst framework in minimization, which we present in Algorithm \ref{catalyst min}. 
\begin{algorithm}[ht] \caption{ Catalyst for Strongly-Convex Minimization} \setstretch{1.25} \begin{algorithmic}[1] \label{catalyst min} \REQUIRE function $h$, initial point $x_0$, strong-convexity constant $\mu$, parameter $\tau>0$ \STATE Initialization: $q = \frac{\mu}{\mu +\tau}, z_1 = x_0$, $\alpha_1 = \sqrt{q}$. \FORALL{$k = 1,2,..., K$} \STATE Find an inexact solution $x_k$ to the following problem with algorithm $\mathcal{M}$ \begin{equation*} \min_{x\in\mathbb{R}^d}\Tilde{h}_k(x) \triangleq \left[h(x) + \frac{\tau}{2}\Vert x - z_k\Vert^2 \right] \end{equation*} such that \begin{align} \Tilde{h}_k(x_k) - \min_{x\in\mathbb{R}^d}\Tilde{h}_k(x)\leq \epsilon_k. \end{align} \STATE Choose $\alpha_{k+1}\in [0,1]$ such that $ \alpha_{k+1}^2 = (1-\alpha_{k+1})\alpha_k^2 + q\alpha_{k+1}. $ \STATE $z_{k+1} = x_k + \beta_k(x_k-x_{k-1})$ where $\beta_k = \frac{\alpha_k(1-\alpha_k)}{\alpha_k^2 + \alpha_{k+1}}.$ \ENDFOR \ENSURE $x_K$. \end{algorithmic} \end{algorithm} \begin{lemma} [\citep{lin2018catalyst}] \label{lemma catalyst min} Consider the problem $\min_{x\in\mathbb{R}^d} h(x)$. Assume function $h$ is $\mu$-strongly convex. Define $A_k = \prod_{i=1}^k (1-\alpha_i)$, $\eta_k = \frac{\alpha_k-q}{1-q}$ and a sequence $\{v_k \}_k$ with $v_0 = x_0$ and $v_k = x_{k-1} + \frac{1}{\alpha_k}(x_k- x_{k-1})$ for $k\geq 1$. Consider the potential function: $S_k = h(x_k) - h(x^*) + \frac{\eta_{k+1}\alpha_{k+1}\tau}{2(1-\alpha_{k+1})}\|x^*-v_k\|^2$, where $x^*$ is the optimal solution. After running Algorithm \ref{catalyst min} for $K$ iterations, we have \begin{equation} \frac{1}{A_K}S_K \leq \left(\sqrt{S_0}+ 2\sum_{k=1}^K \sqrt{\frac{\epsilon_k}{A_k}}\right)^2. \end{equation} \end{lemma} \bigskip Before we step into the proof of Theorem \ref{thm catalyst scsc}, we introduce several notations. We denote the dual function of $\hat{f}$ by $\hat{\Psi}(y) = \min_x \hat{f}(x, y)$.
We denote the dual function of $\Tilde{f}_k(x,y)$ by $\Tilde{\Psi}_k(y) =\min_x\Tilde{f}_k(x,y)= \min_{x}\hat{f}(x, y) - \frac{\tau}{2}\Vert y-z_k\Vert^2 = \hat{\Psi}(y) - \frac{\tau}{2}\Vert y-z_k\Vert^2$. Let $y_k^* = \argmax_y \Tilde{\Psi}_k(y)$. We also define $(x^*, y^*)$ as the optimal solution to $\min_x\max_y \hat{f}(x,y)$. \bigskip {\noindent \textbf{Proof of Theorem \ref{thm catalyst scsc}}}\\ \begin{proof} When the criterion $\|\nabla \Tilde{f}_{k}(x_{k}, y_{k})\|^2 \leq \epsilon_k$ is satisfied, by Lemma \ref{criterion relation}, $$\gap_{\Tilde{f}_k}(x_k, y_k) \leq \frac{1}{2\mu}\|\nabla \Tilde{f}_{k}(x_{k}, y_{k})\|^2 \leq \frac{1}{2\mu}\epsilon_k =\frac{\sqrt{2}}{4}(1-\rho)^k\gap_{\hat{f}}(x_0, y_0) =\hat{\epsilon}_k,$$ where we define $\hat{\epsilon}_k = \frac{\sqrt{2}}{4}(1-\rho)^k\gap_{\hat{f}}(x_0, y_0) $. The auxiliary problem (\ref{auxiliary prob ncc}) can be considered as $\max_{y} \hat{\Psi}(y)$. We see $\gap_{\Tilde{f}_k}(x_k, y_k) \leq \hat{\epsilon}_k$ implies $\max_{y} \Tilde{\Psi}_k(y) - \Tilde{\Psi}_k(y_k) \leq \hat{\epsilon}_k$. By choosing $\alpha_1 = \sqrt{q}$ in Algorithm \ref{catalyst min}, it is easy to check that $\alpha_k = \sqrt{q}$ and $\beta_k = \frac{\sqrt{q}-q}{\sqrt{q}+q}$, for all $k$. So this inner loop can be considered as applying Algorithm \ref{catalyst min} to $-\hat{\Psi}(y)$ and Lemma \ref{lemma catalyst min} can guarantee the convergence of the dual function. Define $S_k = \hat{\Psi}(y^*) - \hat{\Psi}(y_k) + \frac{\eta_{k+1}\alpha_{k+1}\tau}{2(1-\alpha_{k+1})}\|y^*-v_k\|^2$ with $\eta_k = \frac{\alpha_k-q}{1-q}$. Lemma \ref{lemma catalyst min} gives rise to \begin{equation} \label{dec 5} \frac{1}{A_K}S_K \leq \left(\sqrt{S_0}+ 2\sum_{k=1}^K \sqrt{\frac{\hat{\epsilon}_k}{A_k}}\right)^2.
\end{equation} Note that $A_k = \prod_{i=1}^k (1-\alpha_i) = (1-\sqrt{q})^k$ and $$\frac{\eta_k\alpha_k\tau}{2(1-\alpha_k)} = \frac{\sqrt{q}-q}{1-q}\frac{\sqrt{q}\tau}{2(1-\sqrt{q})} = \frac{\sqrt{q}-q}{\tau/(\mu+\tau)}\frac{\sqrt{q}\tau}{2(1-\sqrt{q})} = \frac{q(\mu+\tau)}{2} = \frac{\mu}{2}.$$ So $S_0 = \hat{\Psi}(y^*) - \hat{\Psi}(y_0) + \frac{\mu}{2}\|y^* - y_0\|^2 \leq 2(\hat{\Psi}(y^*) - \hat{\Psi}(y_0))$. Then, since $\hat{\epsilon}_k = \frac{\sqrt{2}}{4}(1-\rho)^k\gap_{\hat{f}}(x_0, y_0)$, we have \begin{align} \text{Right-hand side of } (\ref{dec 5}) \leq &\left( \sqrt{2(\hat{\Psi}(y^*) - \hat{\Psi}(y_0))} + \sum_{k=1}^K\sqrt{2\left(\frac{1-\rho}{1-\sqrt{q}}\right)^k\gap_{\hat{f}}(x_0, y_0)}\right)^2 \\ \leq & 2\left(1+ \sum_{k=1}^K\left( \sqrt{\frac{1-\rho}{1-\sqrt{q}}}\right)^k \right)^2 \gap_{\hat{f}}(x_0, y_0) \\ \leq & 2\left( \frac{\left(\sqrt{\frac{1-\rho}{1-\sqrt{q}}}\right)^{K+1}}{\sqrt{\frac{1-\rho}{1-\sqrt{q}}}-1} \right)^2 \gap_{\hat{f}}(x_0, y_0) \leq 2\left( \frac{\sqrt{\frac{1-\rho}{1-\sqrt{q}}}}{\sqrt{\frac{1-\rho}{1-\sqrt{q}}}-1} \right)^2 \left( \frac{1-\rho}{1-\sqrt{q}} \right)^K \gap_{\hat{f}}(x_0, y_0). \end{align} Plugging back into (\ref{dec 5}), \begin{equation} \label{dec 6} S_K \leq 2\left( \frac{1}{\sqrt{1-\rho} - \sqrt{1-\sqrt{q}}} \right)^2 (1-\rho)^{K+1} \gap_{\hat{f}}(x_0, y_0) \leq \frac{8}{(\sqrt{q}-\rho)^2}(1-\rho)^{K+1} \gap_{\hat{f}}(x_0, y_0), \end{equation} where the second inequality is due to the fact that $\sqrt{1-x} + \frac{x}{2}$ is decreasing on $[0,1]$. Note that \begin{align} \nonumber \|x_K-x^*\|^2 \leq & 2\|x_K - x^*(y_K)\|^2 + 2\|x^*(y_K)-x^*(y^*)\|^2 \\ \nonumber \leq &\frac{4}{L}[\hat{f}(x_K, y_K) - \hat{f}(x^*(y_K),y_K)] + 18\|y_K-y^*\|^2 \\ \leq & \frac{4}{L}\hat{\epsilon}_K + 18\|y_K-y^*\|^2, \end{align} where in the second inequality we use Lemma \ref{lin's lemma}. Then, \begin{align} \|x_K-x^*\|^2 + \|y_K-y^*\|^2 \leq 19\|y_K-y^*\|^2 +\frac{4}{L}\hat{\epsilon}_K.
\end{align} Because $\|y_K-y^*\|^2 \leq \frac{2}{\mu}[\hat{\Psi}(y^*)-\hat{\Psi}(y_K)] \leq \frac{2}{\mu}S_K$, by plugging in (\ref{dec 6}) and the definition of $\hat{\epsilon}_k$, we get \begin{align} \nonumber \Vert x_K-x^*\Vert^2 + \Vert y_K- y^*\Vert^2 \leq \left( \frac{306}{\mu(\sqrt{q}-\rho)^2} + \frac{\sqrt{2}}{L} \right)(1-\rho)^{K}\gap_{\hat{f}}(x_0, y_0). \end{align} By Lemma \ref{criterion relation}, we have $$ \Vert x_K-x^*\Vert^2 + \Vert y_K- y^*\Vert^2 \geq \frac{1}{36L^2}\|\nabla \hat{f}(x_{K}, y_{K})\|^2 \quad \text{ and }\quad \gap_{\hat{f}}(x_0, y_0) \leq \frac{1}{2\mu}\|\nabla \hat{f}(x_{0}, y_{0})\|^2. $$ This finishes the proof. \end{proof} \subsection{Complexity of solving subproblem (\ref{subprob}) and proof of Theorem \ref{thm catalyst inner}} As in the previous subsection, we suppress the outer-loop index $t$. Define $\hat{\Phi}(x) = \max_y\hat{f}(x,y)$, $\hat{\Psi}(y) = \min_x \hat{f}(x,y)$ and $\hat{\Phi}^* = \min_x \hat{\Phi}(x) = \max_y\hat{\Psi}(y) = \hat{\Psi}^*$. We still define $\Tilde{\Psi}_k(y) =\min_x\Tilde{f}_k(x,y)= \min_{x}\hat{f}(x, y) - \frac{\tau}{2}\Vert y-z_k\Vert^2 = \hat{\Psi}(y) - \frac{\tau}{2}\Vert y-z_k\Vert^2$, and $\Tilde{\Phi}_k(x) = \max_y \Tilde{f}_k(x,y)$. Let $(x^*, y^*)$ be the optimal solution to $\min_x\max_y \hat{f}(x,y)$ and $(x_k^*, y_k^*)$ the optimal solution to $\min_x\max_y \Tilde{f}_k(x,y)$. Also, in this subsection, we denote $x^*(y) = \argmin_x \hat{f}(x, y)$ and $y^*(x) = \argmax_y \hat{f}(x,y)$. Recall that we defined a potential function $S_k = \hat{\Psi}(y^*) - \hat{\Psi}(y_k) + \frac{\mu}{2}\|y^*-v_k\|^2$ in the proof of Theorem \ref{thm catalyst scsc}. The following lemma shows that the initial point we choose to solve (\ref{subprob}) for $\mathcal{M}$ at iteration $k$ is not far from the optimal solution of (\ref{subprob}) if the stopping criterion is satisfied for every iteration before $k$.
\begin{lemma} [Initial distance of the warm-start] \label{warm start scsc} Under the same assumptions as Theorem \ref{thm catalyst scsc}, with accuracy $\epsilon_k$ specified in Theorem \ref{thm catalyst scsc}, we assume that for all $i<k$, $\|\nabla \Tilde{f}_{i}(x_{i}, y_{i})\|^2 \leq \epsilon_i$. At iteration $k$, solving the subproblem \eqref{subprob} from initial point $(x_{k-1}, y_{k-1})$, we have \begin{equation*} \|x_{k-1} - x_k^*\|^2 + \|y_{k-1} - y_k^*\|^2 \leq C_k \epsilon_k, \end{equation*} where $C_1 = \left[ \frac{72\sqrt{2}}{\mu^2} + \frac{74\sqrt{2}}{(2\tau+\mu)\mu}\right] \frac{1}{1-\rho}$, $C_k = \frac{2}{\mu\min\{L, \mu+\tau \}}\frac{1}{1-\rho} + \frac{288\sqrt{2}\tau^2\max\{40L^2, 9\tau^2 + 4L^2 \}}{(\mu+\tau)^2L^2\mu^2(\sqrt{q}-\rho)^2} \frac{1}{(1-\rho)^2} $ for $k>1$. \end{lemma} \begin{proof} We separate the proof into two cases: $k=1$ and $k>1$. \\ \textbf{Case $k=1$: } Note that $z_1 = y_0$, and therefore the subproblem at the first iteration is \begin{equation} \min_{x}\max_{y}\left[ \Tilde{f}_1(x, y) = \hat{f}(x,y) - \frac{\tau}{2}\|y-y_0\|^2\right]. \end{equation} Since $x_1^* = \argmin_x \Tilde{f}_1(x, y_1^*) = \argmin_x \hat{f}(x, y_1^*)$ and $x^* = \argmin_x \hat{f}(x, y^*)$, by Lemma \ref{lin's lemma} we have $\|x^* - x_1^*\| \leq 3\|y^* - y_1^*\|$. Furthermore, \begin{align}\nonumber \|x_0 - x_1^*\|^2+\|y_0-y_1^*\|^2 \leq & 2\|x_0-x^*\|^2 + 2\|x^* - x_1^*\|^2 + \|y_0-y_1^*\|^2 \\ \nonumber \leq & 2\|x_0-x^*\|^2 + 18\|y^* - y_1^*\|^2 + \|y_0-y_1^*\|^2 \\ \nonumber \leq & 2\|x_0-x^*\|^2 + 36\|y_0 - y^*\|^2 + 37\|y_0 - y_1^*\|^2 \\ \label{initial dist iter 1} \leq & \frac{72}{\mu}\gap_{\hat{f}}(x_0, y_0) + 37\|y_0 - y_1^*\|^2, \end{align} where in the last inequality we use Lemma \ref{criterion relation}. It remains to bound $\|y_0 - y_1^*\|$.
Since $\hat{\Psi}(y) - \frac{\tau}{2}\|y - y_0\|^2$ is $(\mu+\tau)$-strongly concave w.r.t.~$y$, we have \begin{equation} \left( \hat{\Psi}(y_1^*) - \frac{\tau}{2}\|y_1^* - y_0\|^2 \right) - \frac{\tau+\mu}{2}\|y_1^* - y_0\|^2 \geq \hat{\Psi}(y_0) = \hat{\Psi}^* - [\hat{\Psi}^* - \hat{\Psi}(y_0)] \geq \hat{\Psi}(y_1^*) - [\hat{\Psi}^* - \hat{\Psi}(y_0)]. \end{equation} It further implies \begin{equation} \left(\tau + \frac{\mu}{2}\right) \|y_1^* - y_0\|^2 \leq \hat{\Psi}^* - \hat{\Psi}(y_0) \leq \gap_{\hat{f}}(x_0, y_0). \end{equation} Plugging back into (\ref{initial dist iter 1}), we have \begin{align*} \|x_0 - x_1^*\|^2+\|y_0-y_1^*\|^2 \leq & \left[ \frac{72}{\mu} + \frac{74}{2\tau+\mu}\right]\gap_{\hat{f}}(x_0, y_0) \\ \leq & \left[ \frac{72\sqrt{2}}{\mu^2} + \frac{74\sqrt{2}}{(2\tau+\mu)\mu}\right] \frac{1}{1-\rho}\epsilon_1. \end{align*} {\noindent\textbf{Case $k>1$:}} From the proof of Theorem \ref{thm catalyst scsc}, we see that $\|\nabla \Tilde{f}_{i}(x_{i}, y_{i})\|^2 \leq \epsilon_i$ implies $\gap_{\Tilde{f}_i}(x_i, y_i) \leq \hat{\epsilon}_i$ where $\hat{\epsilon}_i =\frac{\sqrt{2}}{4}(1-\rho)^i\gap_{\hat{f}}(x_0, y_0)$. Note that $\Tilde{f}_k$ is $(L, \mu+\tau)$-SC-SC and $(L+\max\{2L,\tau\})$-smooth. Then \begin{equation} \label{initial dist x} \begin{split} \Vert x_{k-1} - x_k^*\Vert^2 \leq\ & 2\Vert x_{k-1} - x^*(y_{k-1}^*)\Vert^2 + 2\Vert x^*(y_{k-1}^*)-x^*(y_k^*)\Vert^2 \\ \leq\ & 2\Vert x_{k-1} - x_{k-1}^*\Vert^2 + 2\left(\frac{L+\max\{2L,\tau\}}{L}\right)^2 \Vert y_k^* - y_{k-1}^*\Vert^2.
\end{split} \end{equation} Furthermore, \begin{align} \nonumber & \|x_{k-1} - x_k^*\|^2+ \|y_{k-1}- y_k^*\|^2 \leq \|x_{k-1} - x_k^*\|^2+ 2\|y_{k-1}- y_{k-1}^*\|^2 +2\|y_{k-1}^*- y_k^*\|^2\\ \nonumber \leq & 2\Vert x_{k-1} - x_{k-1}^*\Vert^2 + 2\|y_{k-1}- y_{k-1}^*\|^2 + 2\left[\left(\frac{L+\max\{2L,\tau\}}{L}\right)^2+1\right] \Vert y_k^* - y_{k-1}^*\Vert^2 \\\label{initial dist bound} \leq & \frac{4\hat{\epsilon}_{k-1}}{\min\{L, \mu+\tau \}} + \max\left\{20, \frac{9\tau^2}{2L^2}+2 \right\} \Vert y_k^* - y_{k-1}^*\Vert^2. \end{align} Now we want to bound $\|y_{k-1}^* - y_k^*\|$. By the optimality conditions, we have for all $y$, \begin{equation} (y-y_k^*)^\top \nabla \Tilde{\Psi}_k(y_k^*) \leq 0, \quad (y-y_{k-1}^*)^\top \nabla \Tilde{\Psi}_{k-1}(y_{k-1}^*) \leq 0. \end{equation} Choosing $y$ in the first inequality to be $y_{k-1}^*$ and $y$ in the second inequality to be $y_k^*$, and summing them together, we have \begin{equation} (y_k^* - y_{k-1}^*)^\top (\nabla \Tilde{\Psi}_{k-1}(y_{k-1}^*) - \nabla \Tilde{\Psi}_k (y_k^*))\leq 0. \end{equation} Using $\nabla \Tilde{\Psi}_k(y) = \nabla_y \hat{f}(x^*(y), y) - \tau(y-z_k)$, we have \begin{equation} \label{y optimality} (y_k^* - y_{k-1}^*)^\top (\nabla_y \hat{f}(x^*(y_{k-1}^*), y_{k-1}^*) - \tau(y_{k-1}^* - z_{k-1}) - \nabla_y \hat{f}(x^*(y_k^*), y_k^*) + \tau(y_k^* -z_k))\leq 0. \end{equation} By strong concavity of $\hat{\Psi}(y) = \min_{x}\hat{f}(x, y)$, we have \begin{equation} (y_k^* - y_{k-1}^*)^\top (\nabla \hat{\Psi}(y_k^*) - \nabla \hat{\Psi}(y_{k-1}^*)) \leq -\mu\|y_k^* - y_{k-1}^*\|^2. \end{equation} Adding to (\ref{y optimality}), we have \begin{equation} (y_k^* - y_{k-1}^*)^\top [\tau(y_k^* - z_k) - \tau(y_{k-1}^* - z_{k-1})]\leq -\mu\|y_k^* - y_{k-1}^*\|^2. \end{equation} Rearranging, \begin{equation} \frac{\tau}{\mu+\tau}(y_k^*-y_{k-1}^*)^\top(z_k-z_{k-1}) \geq \|y_k^* - y_{k-1}^*\|^2.
\end{equation} Combining with the Cauchy--Schwarz inequality $(y_k^*-y_{k-1}^*)^\top(z_k-z_{k-1}) \leq \|y_k^*-y_{k-1}^*\|\|z_k-z_{k-1}\| $, we have \begin{equation} \label{optimal y dist bound by z} \|y_k^* - y_{k-1}^*\| \leq \frac{\tau}{\mu+\tau}\|z_k-z_{k-1}\|. \end{equation} From the updates of $\{z_k\}_k$, we have for $k>2$ \begin{align} \nonumber \|z_k - z_{k-1}\| = &\left\Vert y_{k-1} + \frac{\sqrt{q}-q}{\sqrt{q}+q}(y_{k-1}-y_{k-2}) - y_{k-2} - \frac{\sqrt{q}-q}{\sqrt{q}+q}(y_{k-2}-y_{k-3})\right\Vert \\ \nonumber \leq & \left(1+\frac{\sqrt{q}-q}{\sqrt{q}+q}\right)\|y_{k-1} - y_{k-2}\| + \frac{\sqrt{q}-q}{\sqrt{q}+q}\|y_{k-2}-y_{k-3}\| \\ \nonumber \leq & 2\|y_{k-1} - y_{k-2}\| + \|y_{k-2}-y_{k-3}\| \\ \leq & 6\max\{\|y_{k-1} - y^*\|, \|y_{k-2} - y^*\| , \|y_{k-3} - y^*\| \}. \end{align} Therefore, \begin{align*} \|z_k - z_{k-1}\|^2 \leq & 36\max\{\|y_{k-1} - y^*\|^2, \|y_{k-2} - y^*\|^2 , \|y_{k-3} - y^*\|^2 \} \\ \leq & \frac{72}{\mu}\max\{\hat{\Psi}^*-\hat{\Psi}(y_{k-1}), \hat{\Psi}^*-\hat{\Psi}(y_{k-2}), \hat{\Psi}^*-\hat{\Psi}(y_{k-3})\} \\ \leq & \frac{72}{\mu} \max\{S_{k-1}, S_{k-2},S_{k-3}\}, \end{align*} where in the second inequality we use strong concavity of $\hat{\Psi}$ and in the last we use $\hat{\Psi}^* - \hat{\Psi}(y_k) \leq S_k$. Combining with (\ref{optimal y dist bound by z}) and (\ref{initial dist bound}), we have \begin{equation} \|x_{k-1} - x_k^*\|^2+ \|y_{k-1} - y_k^*\|^2 \leq \frac{4\hat{\epsilon}_{k-1}}{\min\{L, \mu+\tau \}} + \frac{36\tau^2\max\{40L^2, 9\tau^2 + 4L^2 \}}{(\mu+\tau)^2L^2\mu} \max\{S_{k-1}, S_{k-2},S_{k-3}\}.
\end{equation} Plugging in $S_{k} \leq \frac{8}{(\sqrt{q}-\rho)^2}(1-\rho)^{k+1} \gap_{\hat{f}}(x_0, y_0)$ from the proof of Theorem \ref{thm catalyst scsc} and the definitions of $\epsilon_k$ and $\hat{\epsilon}_k$, we have \begin{equation} \|x_{k-1}- x_k^*\|^2+ \|y_{k-1} - y_k^*\|^2 \leq \bigg\{ \frac{2}{\mu\min\{L, \mu+\tau \}}\frac{1}{1-\rho} + \frac{288\sqrt{2}\tau^2\max\{40L^2, 9\tau^2 + 4L^2 \}}{(\mu+\tau)^2L^2\mu^2(\sqrt{q}-\rho)^2} \frac{1}{(1-\rho)^2} \bigg\} \epsilon_k. \end{equation} It remains to discuss the case $k=2$. Similarly, we have \begin{equation*} \|z_2 - z_{1}\| = \left\Vert y_{1} + \frac{\sqrt{q}-q}{\sqrt{q}+q}(y_{1}-y_{0}) - y_0\right\Vert = \left(1+\frac{\sqrt{q}-q}{\sqrt{q}+q}\right)\|y_{1} - y_{0}\| \leq 4\max\{\|y_{1} - y^*\|, \|y_{0} - y^*\| \}. \end{equation*} Then \begin{equation*} \begin{split} & \|z_2 - z_{1}\|^2 \leq 16\max\{\|y_{1} - y^*\|^2, \|y_{0} - y^*\|^2 \}\\ \leq\ & \frac{32}{\mu}\max\{\hat{\Psi}^*-\hat{\Psi}(y_{1}), \hat{\Psi}^*-\hat{\Psi}(y_{0}) \} \leq \frac{32}{\mu} \max\{S_1, \gap_{\hat{f}}(x_0, y_0) \}. \end{split} \end{equation*} Combining with (\ref{optimal y dist bound by z}) and (\ref{initial dist bound}), we have \begin{equation} \|x_1 - x_2^*\|^2+ \|y_1 - y_2^*\|^2 \leq \frac{4\hat{\epsilon}_{1}}{\min\{L, \mu+\tau \}} + \frac{16\tau^2\max\{40L^2, 9\tau^2 + 4L^2 \}}{(\mu+\tau)^2L^2\mu} \max\{S_1, \gap_{\hat{f}}(x_0, y_0)\}. \end{equation} Plugging in $S_{1} \leq \frac{8}{(\sqrt{q}-\rho)^2}(1-\rho)^{2} \gap_{\hat{f}}(x_0, y_0)$ and the definitions of $\epsilon_2$ and $\hat{\epsilon}_1$, we have \begin{equation} \|x_1 - x_2^*\|^2+ \|y_1 - y_2^*\|^2 \leq \bigg\{ \frac{2}{\mu\min\{L, \mu+\tau \}}\frac{1}{1-\rho} + \frac{128\sqrt{2}\tau^2\max\{40L^2, 9\tau^2 + 4L^2 \}}{(\mu+\tau)^2L^2\mu^2(\sqrt{q}-\rho)^2} \bigg\} \epsilon_2.
\end{equation} \end{proof} \iffalse \begin{lemma}[Inner-loop complexity for SC-SC objectives] \label{inner-loop complexity scsc} Under the same assumptions as Theorem \ref{thm catalyst scsc}, we assume algorithm $\mathcal{M}$ can solve the subproblem (\ref{subprob}) when $f$ is $(\mu_x,\mu)$-SC-SC and $L$-Lipschitiz at a linear convergence rate depending on $\tau$: (here $(x_k, y_k)$ and $(x^*, y^*)$ denote iterate and optimal solution in solving the subproblem) \begin{equation} \label{deter M rate} \Vert x_k - x^*\Vert^2+\Vert y_k-y^*\Vert^2 \leq \left(1-\frac{1}{\Lambda^{\mathcal{M}}_{\mu_x, \mu, L}(\tau)}\right)^k[\Vert x_0-x^*\Vert^2 + \Vert y_0-y^*\Vert^2] \end{equation} if $\mathcal{M}$ is deterministic, and \begin{equation} \label{stoc M rate} \mathbb{E}[\Vert x_k - x^*\Vert^2+\Vert y_k-y^*\Vert^2]\leq \left(1-\frac{1}{\Lambda^{\mathcal{M}}_{\mu_x, \mu, L}(\tau)}\right)^k[\Vert x_0-x^*\Vert^2 + \Vert y_0-y^*\Vert^2] \end{equation} if $\mathcal{M}$ is stochastic. Let $K_t(\epsilon)$ denote the number of iterations (expected number of iterations if $\mathcal{M}$ is stochastic) for $\mathcal{M}$ to find a point $(x,y)$ such that $\gap_{\Tilde{f}_t}(x, y) \leq \epsilon$ for subproblem (\ref{subprob}) at $t$-th iteration. Then $K_t(\epsilon^{(t)})$ is $O \left(\Lambda^{\mathcal{M}}_{L, \mu, \Tilde{L}}(\tau) \log\left(\frac{\max\{1,\ell,\tau\}}{\min\{1,\mu_x,\mu \}} \right)\right).$ \end{lemma} \fi {\noindent\textbf{Proof of Theorem \ref{thm catalyst inner}}}\\ \begin{proof} We separate our arguments for the deterministic and stochastic settings. Inside this proof, $(x_{(i)}, y_{(i)})$ denotes the $i$-th iterate of $\mathcal{M}$ in solving the subproblem: $\min_x\max_y \Tilde{f}_k(x,y)$. We use $(x_k^*, y_k^*)$ to denote the optimal solution as before. We pick $(x_{(0)}, y_{(0)})$ to be $(x_{k-1}, y_{k-1})$. \paragraph{Deterministic setting.} The subproblem is $(L+\max\{2L, \tau\})$-Lipschitz smooth and $(L,\mu+\tau)$-SC-SC. 
By Lemma \ref{criterion relation}, after $N$ iterations of algorithm $\mathcal{M}$, \begin{align*} \|\nabla\Tilde{f}_k(x_{(N)}, y_{(N)})\|^2 &\leq 4(L+\max\{2L, \tau\})^2[\Vert x_{(N)}-x_k^*\Vert^2 + \Vert y_{(N)}-y_k^*\Vert^2] \\ \label{linear convergence after t} & \leq 4(L+\max\{2L, \tau\})^2 \left(1-\frac{1}{\Lambda^\mathcal{M}_{ \mu, L}(\tau)} \right)^N[\Vert x_{k-1}-x_k^*\Vert^2 + \Vert y_{k-1}-y_k^*\Vert^2]. \end{align*} Choosing \begin{align*} N = &\Lambda^{\mathcal{M}}_{\mu, L}(\tau)\log\frac{4(L+\max\{2L, \tau\})^2 (\Vert x_{k-1}-x_k^*\Vert^2 +\Vert y_{k-1}-y_k^*\Vert^2)}{\epsilon_k} \\ \leq & \Lambda^{\mathcal{M}}_{\mu, L}(\tau)\log\frac{4(L+\max\{2L, \tau\})^2 C_k\epsilon_k}{\epsilon_k} = \Lambda^{\mathcal{M}}_{\mu, L}(\tau)\log\left( 4(L+\max\{2L, \tau\})^2C_k \right) , \end{align*} where $C_k$ is specified in Lemma \ref{warm start scsc}, we have $\|\nabla\Tilde{f}_k(x_{(N)}, y_{(N)})\|^2 \leq \epsilon_k$. \paragraph{Stochastic setting.} With the same reasoning as in the deterministic setting and applying Appendix B.4 of \citep{lin2018catalyst}, after $$N = \Lambda^{\mathcal{M}}_{\mu, L}(\tau)\log\frac{4(L+\max\{2L, \tau\})^2 (\Vert x_{k-1}-x_k^*\Vert^2 +\Vert y_{k-1}-y_k^*\Vert^2)}{\epsilon_k}+1 $$ iterations of $\mathcal{M}$, we have $\|\nabla\Tilde{f}_k(x_{(N)}, y_{(N)})\|^2 \leq \epsilon_k$. \end{proof} \subsection{Total complexity} \textbf{Proof of Corollary \ref{THM CATALYST TOTAL}} \begin{proof} From Theorem \ref{THM CATALYST NCSC}, the number of outer-loop calls to find an $\epsilon$-stationary point of $\Phi$ is $T = O\left( L(\Delta+D_y^0)\epsilon^{-2} \right)$. From Theorem \ref{thm catalyst scsc}, by picking $\rho =0.9\sqrt{q}= 0.9\sqrt{\mu/(\mu +\tau)} $, we have \begin{equation} \|\nabla \hat{f}_t(x^t_{k}, y^t_{k})\|^2 \leq \left[ \frac{5508L^2}{\mu^2(\sqrt{q}-\rho)^2} + \frac{18\sqrt{2}L^2}{\mu}\right](1-\rho)^{k}\|\nabla \hat{f}_t(x^t_{0}, y^t_{0})\|^2.
\end{equation} Therefore, to achieve $ \|\nabla \hat{f}_t(x^{t}_{K}, y^{t}_{K})\|^2 \leq \alpha_t\|\nabla \hat{f}_t(x^t_{0}, y^t_{0})\|^2$, we need to solve (\ref{subprob}) $$K = \frac{10}{9}\sqrt{(\tau+\mu)/\mu}\log\frac{\left[ \frac{5508L^2}{\mu^2(\sqrt{q}-\rho)^2} + \frac{18\sqrt{2}L^2}{\mu}\right]}{\alpha_t} = O\left(\sqrt{(\tau+\mu)/\mu} \log\left(\frac{\max\{1,L,\tau\}}{\min\{1,\mu \}} \right)\right)$$ times, where $\alpha_t$ is defined as in Theorem \ref{THM CATALYST NCSC}. Finally, Theorem \ref{thm catalyst inner} implies that solving (\ref{subprob}) needs $N = O\left(\Lambda^{\mathcal{M}}_{\mu, L}(\tau) \log\left(\frac{\max\{1,L,\tau\}}{\min\{1,\mu \}} \right)\right)$ gradient oracles. The total complexity is \begin{equation} T\cdot K\cdot N = O\left(\frac{\Lambda^{\mathcal{M}}_{\mu, L}(\tau)L(\Delta+D_y^0) }{\epsilon^2}\sqrt{\frac{\mu+\tau}{\mu}} \log^2\left(\frac{\max\{1,L,\tau\}}{\min\{1,\mu \}}\right)\right). \end{equation} \end{proof}
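The proofs above repeatedly use the claim that choosing $\alpha_1 = \sqrt{q}$ in Algorithm \ref{catalyst min} makes the extrapolation sequence constant: $\alpha_k = \sqrt{q}$ and $\beta_k = \frac{\sqrt{q}-q}{\sqrt{q}+q}$ for all $k$. This can be sanity-checked numerically; the sketch below (plain Python, with an illustrative value of $q$) iterates the recursion $\alpha_{k+1}^2 = (1-\alpha_{k+1})\alpha_k^2 + q\alpha_{k+1}$ by taking the positive root of the corresponding quadratic.

```python
import math

q = 0.1                      # q = mu / (mu + tau); any value in (0, 1) works
alpha = math.sqrt(q)         # alpha_1 = sqrt(q)
betas = []
for _ in range(50):
    # Positive root of  a^2 + (alpha_k^2 - q) a - alpha_k^2 = 0,
    # i.e. the update  alpha_{k+1}^2 = (1 - alpha_{k+1}) alpha_k^2 + q alpha_{k+1}.
    b = alpha**2 - q
    alpha_next = (-b + math.sqrt(b * b + 4 * alpha**2)) / 2
    betas.append(alpha * (1 - alpha) / (alpha**2 + alpha_next))
    alpha = alpha_next

drift_alpha = abs(alpha - math.sqrt(q))
drift_beta = max(abs(be - (math.sqrt(q) - q) / (math.sqrt(q) + q)) for be in betas)
```

Since the linearized map has slope $1-\sqrt{q}<1$ at the fixed point, the fixed point is attracting and floating-point rounding does not accumulate along the iteration.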
\section{Introduction} Many experiments and numerical simulations point to a wide-spread (if not universal) behavior of displacements of tracers in different complex systems: The mean squared displacement (MSD) of the tracers grows linearly in time, \begin{equation} \langle \mathbf{x}^2 \rangle = 2 d D_0 t \label{eq:Fick} \end{equation} (with $D_0$ being the diffusion coefficient and $d$ being the dimension of space), like in the normal, Fickian diffusion; the probability density function (PDF) of the displacements is, however, strongly non-Gaussian. Such non-Gaussian behavior, accompanied by MSD growth linear in time, was first observed in a broad class of materials close to glass and jamming transitions, such as a binary Lennard-Jones mixture \cite{Kob1997}, dense colloidal hard sphere suspensions \cite{Kegel2000,Weeks2000}, silica melt \cite{Berthier2007}, and bidisperse sheared granular materials \cite{Marty2005}. Later it was found that the PDF in most of these cases follows the exponential (Laplace) pattern \begin{equation} P(\mathbf{x},t) \propto \exp\left( - \frac{|\mathbf{x}|}{l(t)} \right) \label{eq:LaplDis} \end{equation} with the parameter $l(t)$ characterizing the width of the distribution \cite{Stariolo2006}. In complex fluids this dependence was explained by the \textit{dynamic heterogeneity of the ensemble of tracers}, which results in the intermittent nature of particles' trajectories \cite{Chaudhury2007}; see Ref. \cite{BerthierRMP2011} for a review. Later on, a very similar behavior was observed in a large number of systems of a very different nature. Thus, in Wang et al. \cite{Wang1,Wang2}, such a behavior was observed for the motion of colloidal beads on phospholipid tubes and in entangled actin suspensions, and later in the motion of tracer particles in mucin gels \cite{Wagner}, cases very different from the systems discussed above. In \cite{Wang1} this type of behavior was termed Brownian yet non-Gaussian (BnG) diffusion.
More experimental examples are mentioned in \cite{Seno}; for recent simulation results see \cite{Miotto}. In \cite{Wang2} the behavior was attributed to the \textit{heterogeneity of the medium}. Each tracer moves in the environment with its own diffusivity $D$, i.e., the PDF of each tracer's displacement follows \begin{equation} G(x,t|D) = \frac{1}{\sqrt{4 \pi D t}} \exp\left(- \frac{x^2}{4 D t} \right) \label{eq:Gauss} \end{equation} (in one dimension, or in the projection on the $x$-direction). The PDF of displacements in an ensemble of tracers is then given by \begin{equation} P(x,t) = \int_0^\infty G(x,t|D) p(D) dD, \label{eq:compound} \end{equation} where $p(D)$ is the probability density of the distribution of the diffusivities $D$. If $p(D)$ is exponential, then $P(x,t)$ is a Laplace distribution. Different diffusivities for different members of the ensemble may be connected with the fact that the properties of the medium slowly change in space or in time. The heterogeneity can also be attributed to the tracers themselves. \textit{Heterogeneous ensembles of tracers} were invoked in several works. Thus, analyzing the consequences of the fundamental observation in population biology that individuals of the same species are not identical, it was shown in \cite{Petrovsky} that, e.g., a Maxwell-type distribution of speeds of flying insects may lead to an exponential distribution of diffusivities. Analyzing the movement data of parasitic nematodes, Hapca et al. \cite{Hapca} showed that there is a significant variation of the effective diffusion coefficient within the population, and that the distribution of the diffusion coefficients follows a gamma distribution. This again leads to exponential tails of the displacement distribution. In mathematical statistics the procedure given by Eq. (\ref{eq:compound}) is called compounding \cite{Dubey}.
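As a concrete illustration of Eq. (\ref{eq:compound}): in one dimension, compounding the Gaussian (\ref{eq:Gauss}) with the exponential diffusivity density $p(D) = D_0^{-1}\exp(-D/D_0)$ yields exactly the Laplace form $P(x,t) = (2\sqrt{D_0 t})^{-1}\exp(-|x|/\sqrt{D_0 t})$, of the type of Eq. (\ref{eq:LaplDis}). The sketch below (plain NumPy; parameter values are illustrative) checks this identity by direct quadrature over $D$.

```python
import numpy as np

D0, t = 1.0, 1.0
D = np.linspace(1e-4, 60.0, 400_000)        # quadrature grid over diffusivities
dD = D[1] - D[0]
pD = np.exp(-D / D0) / D0                   # exponential density of diffusivities

def p_compound(x):
    """Eq. (compound) in d=1: average of Gaussians G(x,t|D) over p(D)."""
    gauss = np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)
    return float(np.sum(gauss * pD) * dD)   # simple Riemann sum

def p_laplace(x):
    """Predicted Laplace distribution with width l(t) = sqrt(D0*t)."""
    return np.exp(-abs(x) / np.sqrt(D0 * t)) / (2.0 * np.sqrt(D0 * t))

max_err = max(abs(p_compound(x) - p_laplace(x)) for x in (0.5, 1.0, 2.0, 4.0))
```

The agreement is limited only by the quadrature resolution; the same numerical compounding can be reused for other mixing densities $p(D)$, e.g. the gamma distribution of \cite{Hapca}.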
A slightly different procedure, formulated in the language of random variables, is associated with the concept of generalized grey Brownian motion (ggBm) \cite{Mura1,Mura2,Gianni,Vittoria,Sliusarenko}. For Brownian diffusion characterized by the Gaussian displacement distribution, Eq. (\ref{eq:Gauss}), this procedure leads to a scheme similar to simple compounding. In physics, such procedures fall under the notion of superstatistics \cite{Beck1,Beck2,BeckCohen}. All the above means that BnG diffusion appears in many different physical systems and mathematical models, and, in physics, it is probably not a monocausal phenomenon. Similar effects are also seen in anomalous diffusion \cite{Spakowitz,Metzler,Stylianidou,MetzAcX,Krapf}, a case which, however, is not discussed in the present work. In many cases the form of the PDF changes at longer times to the normal, Gaussian one, see e.g. \cite{Wang1,Wang2}. Such a change cannot be described within the tracers' heterogeneity model unless the properties of the tracers change in time (which is not the case in experiments using well-controlled tracers). In the models of heterogeneous media, the exponential-to-Gaussian transition in the PDF may stem from slow temporal fluctuations of the diffusion coefficient (a \textit{diffusing diffusivity} model \cite{ChuS,Sebastian,Seno,Cherayil,Lano}). As mentioned in \cite{Hapca} and \cite{Seno}, such a transition may also stem from the \textit{quenched spatial heterogeneity} of the medium. At short times the motion of a tracer is confined to a domain with some diffusivity $D$, while at longer times it travels between domains with different diffusivities, so that the values of the diffusion coefficient fluctuate along the trajectory. In the present work we consider the motion of tracers in a medium with a quenched distribution of diffusion coefficients slowly varying in space.
Thus, at short times different tracers see different diffusion coefficients, while at long times (and correspondingly large scales) homogenization sets in, and the motion of each tracer can be described as taking place in a homogeneous medium characterized by some effective diffusion coefficient $D^*$. BnG diffusion implies that the mean diffusion coefficient $D_0$ sampled at short times is equal to the effective one $D^*$ in the homogenized regime. The main topic of the present discussion is to find out when this is likely to be the case, and, if it is the case, how the transition between the short-time Laplace and the long-time Gaussian PDF shapes takes place. \section{Position-dependent diffusion coefficient} Let us assume that the BnG diffusion phenomenon in some specific medium is fully due to the spatial heterogeneity of the local diffusion coefficient $D(\mathbf{x})$, which is position-dependent. In dimensions higher than $d=1$ the system will be assumed to be isotropic on average, which considerably simplifies the further discussion. In order to observe heterogeneity at times which are not very short (much longer than the experimental sampling time for the trajectory), the local diffusion coefficient $D(\mathbf{x})$ has to vary only slowly in space. The correlation length $\lambda$ of $D(\mathbf{x})$ is therefore mesoscopic, and the characteristic time of the transition from the short-time inhomogeneous (superstatistical) to the homogenized behavior is $t_H \sim \lambda^2 / D_0$.
Therefore, we assume that the particle's motion in our system is described by the Langevin equation \begin{equation} \dot{\mathbf{x}} = \sqrt{2 D(\mathbf{x})} \mbox{\boldmath $\xi$ \unboldmath}(t) \label{eq:Lang} \end{equation} with the Gaussian noise $\mbox{\boldmath $\xi$ \unboldmath}(t)$ fulfilling $\langle \mbox{\boldmath $\xi$ \unboldmath} \rangle=0$ and $\langle \xi_\beta (t) \xi_\gamma(t') \rangle = \delta_{\beta \gamma } \delta(t-t')$ with $\beta, \gamma$ denoting Cartesian components. The knowledge of the diffusion coefficient $D(\mathbf{x})$ as a function of the coordinates is not enough to uniquely define the properties of this diffusion. Systems with exactly the same $D(\mathbf{x})$, the ones described by the Langevin equation Eq.(\ref{eq:Lang}), are described by different Fokker-Planck equations (FPEs), depending on what interpretation of the stochastic integral is assumed. The corresponding FPEs for a given realization of $D(\mathbf{x})$ read \begin{equation} \frac{\partial P(\mathbf{x},t)}{\partial t} = \nabla \left[(1 - \alpha) \nabla D(\mathbf{x}) + D(\mathbf{x}) \nabla \right] P(\mathbf{x},t), \label{eq:FPE} \end{equation} with the interpretation parameter $\alpha$ taking the values in the interval $0 \leq \alpha \leq 1$. The typical interpretations, the Ito, Stratonovich and the H\"anggi-Klimontovich (HK) ones, correspond to $\alpha = 0, 1/2$ and 1, respectively. Each of these schemes may appear in different physical situations (see e.g. Ref.\cite{Sokolov}). The equilibrium concentration profile $n(\mathbf{x}) \propto P(\mathbf{x}, t \to \infty)$ corresponding to the vanishing flux in a closed system is given by the solution of the equation $ (1 - \alpha) n(\mathbf{x}) \nabla D(\mathbf{x})+ D(\mathbf{x}) \nabla n(\mathbf{x}) = 0$ and reads \begin{equation} n(\mathbf{x}) = C \cdot D^{\alpha -1}(\mathbf{x}) \label{purediff} \end{equation} with the constant $C > 0$ depending on the size of the system and on the number of particles therein. 
The HK case is the only one in which this profile is flat. The Fokker-Planck equation for the HK interpretation corresponds to the phenomenological second Fick's law \begin{equation} \frac{\partial n(\mathbf{x},t)}{\partial t} = \nabla \left[ D(\mathbf{x}) \nabla n(\mathbf{x},t) \right]. \label{eq:HK} \end{equation} The Ito interpretation leads to the Fokker-Planck equation \begin{equation} \frac{\partial n(\mathbf{x},t)}{\partial t} = \Delta \left[ D(\mathbf{x}) n(\mathbf{x},t) \right], \label{eq:Ito} \end{equation} which, in some cases, better reproduces the experimental results for heterogeneous diffusion and is connected with the continuous time random walk (CTRW) scheme \cite{Milligan}. In what follows we will estimate the long-time diffusion coefficient for different values of the interpretation parameter $\alpha$ under different additional conditions and discuss the question of which model is the best candidate for reproducing the BnG behavior. We will moreover present simulation results for systems corresponding to the HK and Ito models with respect to the time dependence of the diffusion coefficient and the corresponding PDF. \section{Sampled diffusion coefficients and local diffusivities} In heterogeneous media, equilibrium and non-equilibrium situations may show very different properties \cite{ACe,Meroz}. The first, equilibrium, situation corresponds to the case when the system (medium, already containing tracers) was prepared long before starting the observation. The position of the tracer is monitored from the beginning of the observation ($t=0$) on. If there are several tracers in the system, the ones to observe are chosen at random. In this case the probability density to find a tracer at position $\mathbf{x}$ at $t=0$ is proportional to the equilibrium density $n(\mathbf{x})$ of tracers at the corresponding position. If $n(\mathbf{x})$ is not constant, the space is not sampled homogeneously.
The second, opposite, situation corresponds to the case when the tracers were introduced at random exactly at the beginning of the observation, and the diffusivity values at initial particles' positions are sampled according to the distribution of $D(\mathbf{x})$. We will call the first and the second situations the \textit{equilibrium sampling}, and the \textit{homogeneous sampling}, respectively. In \cite{Wagner}, a moving time averaging over the long data acquisition time is used to get both the displacement's PDFs and the MSDs, which assumes that the system has enough time to equilibrate. In \cite{Wang1} the ensemble average was used, but the time between the preparation and the beginning of the observation was not specified. The situation will be equilibrated if this time is much larger than $t_H$. \subsection{Distribution of sampled diffusion coefficients} At short times, i.e. in the superstatistical regime, the PDF of particles' displacements $p(\mathbf{x},t)$ follows by averaging the Gaussian PDF of particles' displacements in a patch with local value of the diffusion coefficient $D(\mathbf{x})$ over the distribution of these local diffusivities close to the particles' initial positions. Therefore, the PDF in the superstatistical regime (when each particle can be considered as moving with its own diffusion coefficient) is given by \begin{equation} p(\mathbf{x},t) = \int_0^\infty \frac{1}{(4 \pi D t)^{d/2}} \exp\left( - \frac{|\mathbf{x}|^2}{4 D t} \right) p_S(D) dD , \label{Trafo} \end{equation} where $p_S(D)$ is the PDF of sampled diffusion coefficients. This PDF is \textit{defined} by Eq.(\ref{Trafo}) and may or may not be equal to the one-point PDF $p(D)$ of the random field $D(\mathbf{x})$, depending on the type of sampling (equilibrium or homogeneous) implied in the experiment. 
Let us take as a ``stylized fact'' that the PDF of displacements at short times has the exponential form \begin{equation} p(\mathbf{x},t) = A(t) \exp(-| \mathbf{x}| / l (t) ), \end{equation} with $l(t)$ defining the width of the distribution and $A(t)$ being the normalization constant. Requiring that \[ \int p(\mathbf{x},t) d\mathbf{x} = 1 \] and that \[ \int \mathbf{x}^2 p(\mathbf{x},t) d\mathbf{x} = 2 d D_0 t \] in $d$ dimensions, we obtain the explicit form of the displacements' PDFs \begin{center} \begin{tabular}{ll} $\displaystyle p(x,t) = \frac{1}{2 \sqrt{D_0 t}} \exp\left(-\frac{|x|}{\sqrt{D_0 t}} \right) $& in $d=1$\\ $\displaystyle p(\mathbf{x},t) = \frac{3}{4 \pi D_0 t} \exp \left(- \sqrt{\frac{3}{2}} \frac{|\mathbf{x}|}{\sqrt{D_0 t}} \right) $ & in $d=2$ \\ $\displaystyle p(\mathbf{x},t) = \frac{1}{\pi (2D_0 t)^{3/2}} \exp \left(- \sqrt{2} \frac{|\mathbf{x}|}{\sqrt{D_0 t}} \right) $ & in $d=3$. \end{tabular} \end{center} Passing to the Fourier transforms in the spatial coordinate in Eq.(\ref{Trafo}) and taking into account the central (rotational) symmetry of the PDF we get \[ \tilde{p}(\mathbf{k},t) = \int_0^\infty \exp\left( - D t k^2 \right) p_S(D) dD, \] where $k = |\mathbf{k}|$, and therefore see that $\tilde{p}(\mathbf{k},t)$ is the Laplace transform of $p_S(D)$ taken at the value of the Laplace variable $s= tk^2$, and that $p_S(D)$ thus follows as the corresponding inverse Laplace transform. The values of $\tilde{p}(\mathbf{k},t)$ are \[ \tilde{p}(\mathbf{k},t) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{1 + D_0 t k^2} & \mbox{in } d=1 \\ \displaystyle \frac{1}{(1 + \frac{2}{3} D_0 t k^2)^{3/2} } & \mbox{in } d=2 \\ \displaystyle \frac{1}{(1 + \frac{1}{2} D_0 t k^2)^2} & \mbox{in } d=3 , \end{array} \right. 
\] and therefore $p_S(D)$, following as the inverse Laplace transform of these equations in $s=tk^2$, are: \[ p_S(D) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{D_0} \exp \left(- \frac{D}{D_0} \right) & \mbox{in } d=1 \\ \displaystyle \frac{3^{3/2}}{\sqrt{2 \pi} D_0} \sqrt{\frac{D}{D_0}} \exp \left(- \frac{3}{2} \frac{D}{D_0} \right) & \mbox{in } d=2 \\ \displaystyle \frac{4}{D_0} \frac{D}{D_0} \exp \left(- 2 \frac{D}{D_0} \right) & \mbox{in } d=3. \end{array} \right. \] These distributions possess the form of a Gamma-distribution \begin{equation} p_S(D) =\frac{\beta^\beta}{\Gamma(\beta)} \frac{1}{\overline{D}} \left(\frac{D}{\overline{D}} \right)^{\beta-1} \exp \left(- \beta \frac{D}{\overline{D}} \right) \label{eq:Sampled} \end{equation} with the shape parameter $\beta=(d+1)/2$ (with $d$ being the dimension of space) and with the mean $\overline{D} = D_0$. The first inverse moment of the distribution $\langle D^{-1} \rangle = D_0^{-1} \beta \Gamma(\beta-1)/\Gamma(\beta) = D_0^{-1} \beta/(\beta-1)$, which will be of use in what follows, diverges in $d=1$ and is finite in higher dimensions: \begin{equation} \langle D^{-1} \rangle = \left\{ \begin{array}{ll} 3/D_0 & \mbox{in } d=2 \\ 2/D_0 & \mbox{in } d=3. \end{array} \right. \label{eq:FIM} \end{equation} \subsection{Distribution of local diffusion coefficients} In the superstatistical regime corresponding to short times the particles move in different areas with different local diffusion coefficients $D(\mathbf{x})$. Since the patches do not have to be sampled homogeneously, the distribution of sampled diffusion coefficients might differ from that of $D(\mathbf{x})$. In situations corresponding to homogeneous sampling, the two PDFs, $p(D)$ and $p_S(D)$ coincide, so that $p(D)$ is given by Eq.(\ref{eq:Sampled}). The same is true also for the HK interpretation under equilibrium sampling, when the concentration profile given by Eq.(\ref{purediff}) is flat. 
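The inverse-Laplace connection between the exponential displacement PDF and the Gamma form of $p_S(D)$ can be verified numerically. The Python sketch below (the values of $D_0$ and $t$ are arbitrary illustrative choices) averages the Gaussian propagator over $p_S(D) = D_0^{-1}\exp(-D/D_0)$ and compares the result with the two-sided exponential in $d=1$.

```python
import numpy as np
from scipy.integrate import quad

D0, t = 1.0, 0.7          # arbitrary illustrative values

def p_mix(x):
    # Gaussian propagator averaged over p_S(D) = exp(-D/D0)/D0, Eq. (Trafo), d = 1
    f = lambda D: (np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)
                   * np.exp(-D / D0) / D0)
    return quad(f, 0.0, np.inf)[0]

def p_exp(x):
    # Two-sided exponential with second moment 2 D0 t
    return np.exp(-abs(x) / np.sqrt(D0 * t)) / (2.0 * np.sqrt(D0 * t))

for x in (0.1, 0.5, 1.0, 2.0):
    assert abs(p_mix(x) - p_exp(x)) < 1e-6
```

The agreement is exact up to quadrature accuracy, since the superstatistical average here is nothing but the Laplace transform evaluated in closed form.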
The situation for other interpretations under equilibrium sampling is different. The probability density to find a domain with diffusivity $D$ by picking up a particle at random is proportional to the particles' density in the corresponding region and to the probability of finding a region with density $n$ among all regions with the given diffusivity, i.e. to $n p(n,D)$, where $p(n,D)$ is the joint probability density of $n(\mathbf{x})$ and $D(\mathbf{x})$. Thus \begin{eqnarray} p_S(D) &=& \mathcal{N} \int_0^\infty n p(n,D) dn \nonumber \\ &=& \mathcal{N} \int_0^\infty n p(n|D) p(D) dn, \label{eq:SampGeo} \end{eqnarray} where $\mathcal{N}$ is the normalization constant, and, in the second line, $p(n|D)$ is a probability density of the particle concentration conditioned on the diffusion coefficient in the corresponding domain. The normalization constant $\mathcal{N}$ is therefore given by \begin{eqnarray*} \mathcal{N}^{-1} &=& \int_0^\infty \left[ \int_0^\infty n p(n|D) dn \right] p(D) dD \\ &=& \int_0^\infty \langle n | D \rangle p(D) dD \end{eqnarray*} where $\langle n | D \rangle$ in the last line is the first conditional moment of the particle concentration. This gives us a physical interpretation of the normalization constant $\mathcal{N}$: it is the inverse of the concentration averaged over all possible landscapes, which coincides with a volume mean $\langle n(\mathbf{x})\rangle$ in the thermodynamical limit. Therefore, it is useful to introduce the normalized equilibrium concentration at a position $\mathbf{x}$ \begin{equation} \nu(\mathbf{x}) = \frac{n(\mathbf{x})}{\langle n(\mathbf{x}) \rangle}, \label{Nu1} \end{equation} and put down Eq.(\ref{eq:SampGeo}) as \begin{equation} p_S(D) = \int_0^\infty \nu p(\nu|D) p(D) d\nu \label{eq:psnu} \end{equation} (note that $\mathcal{N} n = \nu$ and $p(n | D) dn = p(\nu | D) d \nu$). 
According to Eq.(\ref{purediff}) the connection between $n(\mathbf{x})$ and $D(\mathbf{x})$ is deterministic, $n(\mathbf{x}) = n(D(\mathbf{x})) = C \cdot D^{\alpha - 1}(\mathbf{x})$. Therefore the normalized concentration is \begin{equation} \nu(\mathbf{x}) = \frac{D^{\alpha-1}(\mathbf{x})}{\langle D^{\alpha-1}(\mathbf{x}) \rangle}, \label{Nu} \end{equation} and the conditional probability density $p(\nu|D)$ is given by \begin{equation} p(\nu | D) = \delta(\nu - D^{\alpha - 1} \langle D^{\alpha - 1} \rangle^{-1}). \label{eq:pnuD} \end{equation} Substituting Eq.(\ref{eq:pnuD}) into Eq.(\ref{eq:psnu}) we get \begin{equation} p_S(D) = \frac{D^{\alpha - 1}}{\langle D^{\alpha-1}\rangle} p(D), \label{eq:psDpD} \end{equation} with the additional requirement that \begin{equation} \langle D^{\alpha-1}\rangle = \int_0^\infty D^{\alpha-1} p(D) dD. \label{eq:Norm2} \end{equation} Equations (\ref{eq:psDpD}) and (\ref{eq:Norm2}) are easily solved by noting that the first one gives the form of $p(D)$ up to the normalization constant $\langle D^{\alpha - 1} \rangle$: \begin{eqnarray} p(D) &=& \langle D^{\alpha - 1} \rangle D^{1-\alpha} p_S(D) \nonumber \\ &=& \langle D^{\alpha - 1} \rangle \frac{\beta^\beta}{\Gamma(\beta)} D_0^{-\beta} D^{\beta-\alpha} \exp \left(- \beta \frac{D}{D_0} \right) \label{PDD} \end{eqnarray} where in the second line we simply substitute the expression for $p_S(D)$ as following from Eq.(\ref{eq:Sampled}). Requiring the normalization of the l.h.s. we get \begin{equation} \langle D^{\alpha - 1} \rangle= \frac{\Gamma(\beta)}{\beta^{\alpha-1}\Gamma(\beta - \alpha +1)} D_0^{\alpha-1}. \label{NormEq} \end{equation} Substituting this expression into Eq.(\ref{PDD}) we obtain \[ p(D) = \frac{D_0^{\alpha - \beta -1} \beta^{\beta - \alpha + 1}}{\Gamma(\beta - \alpha + 1)} D^{\beta - \alpha} e^{-\beta \frac{D}{D_0}} \] and recognize on the r.h.s. 
a $\Gamma$-distribution \begin{equation} p(D) =\frac{\beta'^{\beta'}}{\Gamma(\beta')} \frac{1}{\overline{D}} \left(\frac{D}{\overline{D}} \right)^{\beta'-1} \exp \left(- \beta' \frac{D}{\overline{D}} \right) \label{eq:PDFloc} \end{equation} with the shape parameter $\beta'= \beta - \alpha + 1$ and with a mean \[ \overline{D} = \frac{\beta-\alpha+1}{\beta} D_0 \] different from $D_0$. Therefore the PDF of local diffusion coefficients is again given by a Gamma distribution. For future convenience we repeat the expressions for the parameters of the distribution of local diffusion coefficients as functions of the dimension of space for homogeneous sampling ($p(D) = p_S(D)$ with $p_S(D)$ given by Eq.(\ref{eq:Sampled})) and for equilibrium sampling, Eq.(\ref{eq:PDFloc}), in the table below. \begin{table}[h!] \caption{Parameters of the Gamma distribution \label{Tab:Dist} } \begin{tabular}{|c|c|c|} \hline sampling type & shape parameter & mean \\ \hline homogeneous & $\beta = \frac{d+1}{2}$ & $\overline{D} = D_0$ \\ \hline equilibrium & $\beta' = \frac{d+3}{2} - \alpha$ & $\overline{D} = \left[ 1 + \frac{2(1-\alpha)}{d+1} \right] D_0$ \\ \hline \end{tabular} \end{table} The cumulative distribution function corresponding to the PDFs, Eqs. (\ref{eq:Sampled}) and (\ref{eq:PDFloc}), is \begin{equation} F_z(D) = \int_0^D p(D') dD' = \frac{1}{\Gamma(z)} \gamma \left(z, z \frac{D}{\overline{D}} \right) \label{CDF} \end{equation} with $z=\beta$ or $z=\beta'$ respectively, and $\gamma(z,x)$ being the lower incomplete $\Gamma$-function. This form will be used for generating diffusivity landscapes in simulations, as discussed in Section \ref{Sec:Sim}. In the HK case and in the Ito case for homogeneous sampling the parameters are $\beta =(d+1)/2$ and $\overline{D}=D_0$; in the equilibrated Ito case they are $\beta' = (d+3)/2$ and $\overline{D}=\frac{d+3}{d+1} D_0$. 
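The relation Eq.(\ref{eq:psDpD}) between the two Gamma distributions, with the parameters of the table above, can be checked numerically. The Python sketch below does so for the Ito case ($\alpha=0$) in $d=2$; the chosen parameter values are illustrative only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma as gamma_dist

D0, d, alpha = 1.0, 2, 0.0            # Ito interpretation in d = 2 (illustrative)
beta = (d + 1) / 2                    # shape for homogeneous sampling
beta_p = beta - alpha + 1             # shape for equilibrium sampling
Dbar = beta_p / beta * D0             # mean of the local-diffusivity PDF

p_S = gamma_dist(beta, scale=D0 / beta).pdf          # sampled diffusivities
p_loc = gamma_dist(beta_p, scale=Dbar / beta_p).pdf  # local diffusivities

# <D^(alpha-1)> over the local PDF, Eq. (eq:Norm2)
mom = quad(lambda D: D**(alpha - 1) * p_loc(D), 0.0, np.inf)[0]

# Check Eq. (eq:psDpD): p_S(D) = D^(alpha-1) p(D) / <D^(alpha-1)>
for D in (0.2, 0.5, 1.0, 2.0):
    assert abs(D**(alpha - 1) * p_loc(D) / mom - p_S(D)) < 1e-8
```

For these parameters the normalization $\langle D^{\alpha-1}\rangle$ evaluates to unity, in agreement with Eq.(\ref{NormEq}).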
Calculating $\langle D^{\alpha -1} \rangle = \int_0^\infty D^{\alpha -1} p(D) dD$ for the case of homogeneous sampling we can put down the expression for $\nu(D)$ which will be useful when calculating the effective medium properties: \begin{equation} \nu(D) = \frac{\beta^{\alpha-1}\Gamma(\beta)}{\Gamma(\beta + \alpha -1)} \left( \frac{D}{D_0} \right)^{\alpha-1}. \label{eq:nuhom} \end{equation} For the case of equilibrium sampling $\langle D^{\alpha -1} \rangle$ is given by Eq.(\ref{NormEq}), and \begin{equation} \nu(D) = \frac{\beta^{\alpha-1}\Gamma(\beta - \alpha +1)}{\Gamma(\beta)} \left( \frac{D}{D_0} \right)^{\alpha-1}. \label{eq:nueq} \end{equation} \section{Long-time behavior: Homogenization} Now let us consider the long-time behavior of the diffusion coefficient corresponding to its homogenization. At long times particles sample large domains of the system and feel some effective diffusion coefficient $D^*$. The problem of finding effective large-scale characteristics of an inhomogeneous medium is an old one. The simplest approaches correspond to static, time-independent problems, and the best-investigated situations are pertinent to the homogenization of the electric conductance, and to mathematically similar cases of homogenization of dielectric or magnetic susceptibility, see e.g. \cite{Beran}. In the language of conductivity the situation is as follows: One considers a large piece of medium (inhomogeneous conductor), say in a form of a slab, with given boundary conditions for the potential (for example, with two opposite sides connected to a battery by highly conducting electrodes which are thus kept at constant potential difference $\Delta \phi$) and measures the total current through the system. 
Locally the current density follows Ohm's law $\mathbf{j} (\mathbf{x}) = \sigma(\mathbf{x}) \mathbf{E} (\mathbf{x})$ giving a linear connection between a solenoidal field $\mathbf{j}(\mathbf{x})$ (for which $\mathrm{div} \; \mathbf{j} = 0$) and a potential field $\mathbf{E}(\mathbf{x}) = - \mathrm{grad} \; \phi(\mathbf{x})$ (for which in the static case $\mathrm{rot} \; \mathbf{E} = 0$ holds). What is sought is the connection between the volume mean current density $\overline{\mathbf{j} }= \frac{1}{V} \int_V \mathbf{j} (\mathbf{x}) d \mathbf{x}$ and the mean electric field $\overline{\mathbf{E}} = \frac{1}{V} \int_V \mathbf{E} (\mathbf{x}) d \mathbf{x}$ in the thermodynamical limit $V \to \infty$: $\overline{\mathbf{j}} = \sigma^* \overline{\mathbf{E}}$, where $\sigma^*$ is the effective conductance. The mean values of the current density and of the electric field are immediately connected with the total current through the system and to the potential difference provided by a battery. A review of methods for calculating $\sigma^*$ for a variety of cases is given e.g. in Ref. \cite{Sahimi}. Standard procedures of finding $\sigma^*$ can be applied to finding the effective long-time diffusion coefficient in the HK case, but not in the cases with inhomogeneous equilibrium. The reason is that the standard phenomenological Fick's equation corresponding to the HK case, Eq. (\ref{eq:HK}), can be considered as a combination of a local continuity equation $\frac{\partial}{\partial t} p(\mathbf{x}) = - \mathrm{div}\; \mathbf{j(\mathbf{x})}$, with $\mathbf{j}(\mathbf{x})$ being now the probability or particles' flux, and the linear relation between $\mathbf{j(\mathbf{x})}$ and an obviously potential field $\mathrm{grad}\; p(\mathbf{x})$, namely the first Fick's law $\mathbf{j} (\mathbf{x}) = - D(\mathbf{x}) \; \mathrm{grad} \;p(\mathbf{x})$ serving as an analogue of Ohm's law. 
At long times the probability distribution spreads, and the process of diffusion slows down, so that $\frac{\partial}{\partial t} p(\mathbf{x}) = - \mathrm{div}\; \mathbf{j(\mathbf{x})} \to 0$, and the methods used in the time-independent electrical case can be safely applied. In the cases corresponding to inhomogeneous equilibrium the methods cannot be applied immediately. Although the corresponding diffusion equations can still be considered as a combination of a local continuity equation $\frac{\partial}{\partial t} p(\mathbf{x}) = - \mathrm{div}\; \mathbf{j(\mathbf{x})}$ with some linear response law, the latter does not have the form of Ohm's law. For example, for the Ito case, Eq.(\ref{eq:Ito}), we have $\mathbf{j} (\mathbf{x}) = - \mathrm{grad} \left[ D(\mathbf{x}) p(\mathbf{x}) \right]$. The homogenization of the diffusion coefficient in a heterogeneous medium is still similar to the one for the electric conductance \cite{Beran,Sahimi}, but with an important difference discussed below. Let us assume that we are able to calculate the effective conductance $\sigma^*$ of an inhomogeneous system with local conductances $\sigma(\mathbf{x})$ and denote this as a special type of an average, the homogenization mean, by $\sigma^* = \langle \sigma(\mathbf{x}) \rangle_H$. Then the effective diffusion coefficient in the homogenized regime for the general diffusive case is given by the same type of the average: \begin{equation} D^* = \frac{\langle D(\mathbf{x}) n (\mathbf{x})\rangle_H}{\langle n (\mathbf{x}) \rangle} = \langle D(\mathbf{x}) \nu(\mathbf{x}) \rangle_H =\langle \kappa(\mathbf{x}) \rangle_H, \label{HomAv} \end{equation} where $\kappa(\mathbf{x}) = D(\mathbf{x}) \nu (\mathbf{x})$. This result follows by considering the stationary flow through a large piece of the medium with concentrations kept constant at its boundaries \cite{Camboni}. An alternative approach can follow the lines of \cite{Dean} where some specific situations were considered. 
Here we give a simple explanation, not following the lines of the proofs but giving a physical intuition. The explanation relies on the property $\langle a \sigma(\mathbf{x}) \rangle_H = a \langle \sigma(\mathbf{x}) \rangle_H$ which is evident from the initial definition of $\sigma^*$ as connecting the mean current density with the mean electric field. Let us assume that the diffusing particles carry a charge $q$, and move in electric field $\mathbf{E}$ under the force $\mathbf{f} = q \mathbf{E}$. Then the local current density is $\mathbf{j}(\mathbf{x}) = q^2 n(\mathbf{x}) \mu(\mathbf{x}) \mathbf{E}(\mathbf{x})$, with $\mu(\mathbf{x}) = D(\mathbf{x})/k_B T$ being the local mobility, giving $\sigma(\mathbf{x}) = q^2 D(\mathbf{x})n(\mathbf{x}) /k_B T$. Now we calculate the large scale conductance $\sigma^* = \langle \sigma(\mathbf{x}) \rangle_H$ and assume that it is connected with the homogenized diffusion coefficient and the mean particles' density by the same Nernst-Einstein relation $\sigma^* = q^2 D^* \langle n(\mathbf{x}) \rangle /k_B T$. Cancelling all constant prefactors we get $\langle D(\mathbf{x}) n(\mathbf{x}) \rangle_H = D^* \langle n(\mathbf{x}) \rangle$, which is our Eq.(\ref{HomAv}). The homogenized conductance possesses upper and lower bounds, of which the Wiener bounds, see e.g. \cite{Beran}, are the most general ones \cite{Bounds}. In our case they read \begin{equation} \left \langle \kappa^{-1}(\mathbf{x}) \right \rangle^{-1} \leq \langle \kappa(\mathbf{x}) \rangle_H \leq \langle \kappa(\mathbf{x}) \rangle. \label{Wiener} \end{equation} The averages in the bounds in Eq.(\ref{Wiener}) can be considered either as volume means or averages over landscapes. In $d=1$ the homogenization mean corresponds \textit{exactly} to the lower bound \begin{equation} \langle \kappa(x) \rangle_H = \left \langle \kappa^{-1}(x) \right \rangle^{-1}, \label{oned} \end{equation} which for conductance follows immediately from Ohm's law. 
Far from the percolation transition $\sigma^*$ is typically well reproduced by the effective medium approximation (EMA), see \cite{Sahimi} for the discussion. Within this approximation $D^* = \langle \kappa \rangle_H$ is given by the solution of the equation \begin{equation} \left \langle \frac{D^* - \kappa}{(d-1)D^* + \kappa} \right\rangle = 0, \label{EMA} \end{equation} where the average again can be considered either as a volume average or as an average over the distribution of $\kappa$. We note that for $d=1$ EMA reproduces the exact result, Eq.(\ref{oned}). Eq.(\ref{EMA}) is pertinent to square and cubic lattices in $d=2$ and 3 \cite{Kirkpatrick}, and to continuous $d$-dimensional systems \cite{Sahimi}. For known $p(D)$ and $\nu(D)$ Eq.(\ref{EMA}) takes the form \begin{equation} \int_0^\infty \frac{D^* - D \nu(D)}{(d-1)D^* + D \nu(D)} p(D) dD = 0 \end{equation} and can be solved numerically by using the parameters of $p(D)$ given in Table \ref{Tab:Dist} and $\nu(D)$ given by Eq.(\ref{eq:nuhom}) and Eq.(\ref{eq:nueq}) for homogeneous and equilibrium sampling, respectively. The results of such a numerical solution for $d=2$ are shown in Fig. \ref{fig_EMA}. \begin{figure} \centering% \includegraphics[width=\columnwidth]{Fig2new}% \caption{The EMA predictions for the values of $D^*/D_0$ in $d=2$ as a function of the parameter $\alpha$ defining the interpretation. The upper (dashed) curve represents the results for the equilibrated case, the lower (full) one for the case of homogeneous sampling. \label{fig_EMA}} \end{figure} We note that for the Ito cases, $\alpha = 0$, Eqs.(\ref{eq:nuhom}) and (\ref{eq:nueq}) give $\nu(D) \propto D^{-1}$, so that the values of $\kappa = D \nu(D)$ do not fluctuate. According to these equations we have $\kappa = D \nu(D)=\frac{\beta-1}{\beta} D_0$ for homogeneous sampling (so that $\kappa = \frac{1}{3} D_0$ in $d=2$ and $\kappa = \frac{1}{2} D_0$ in $d= 3$), and $\kappa = D_0$ for equilibrium sampling in any dimension. 
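A minimal numerical solution of the EMA self-consistency equation can be sketched as follows (the bracketing interval for the root search is an ad hoc choice). For the Ito cases it reproduces the exact constants $\frac{1}{3} D_0$ (homogeneous sampling) and $D_0$ (equilibrium sampling) in $d=2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gamma as G

D0, d = 1.0, 2
beta = (d + 1) / 2

def D_star(alpha, sampling):
    # Parameters of p(D) (see the table) and the prefactor of nu(D)
    if sampling == "homogeneous":
        shape, mean = beta, D0
        c_nu = beta**(alpha - 1) * G(beta) / G(beta + alpha - 1)
    else:                                     # equilibrium sampling
        shape = beta - alpha + 1
        mean = shape / beta * D0
        c_nu = beta**(alpha - 1) * G(beta - alpha + 1) / G(beta)
    p = lambda D: ((shape / mean)**shape / G(shape)
                   * D**(shape - 1) * np.exp(-shape * D / mean))
    kappa = lambda D: D * c_nu * (D / D0)**(alpha - 1)

    def ema(Ds):  # l.h.s. of the EMA self-consistency condition
        f = lambda D: (Ds - kappa(D)) / ((d - 1) * Ds + kappa(D)) * p(D)
        return quad(f, 0.0, np.inf)[0]

    return brentq(ema, 1e-3, 10.0)            # ad hoc bracketing interval

print(D_star(1.0, "homogeneous"))             # HK case in d = 2
print(D_star(0.0, "homogeneous"))             # Ito, homogeneous: 1/3
print(D_star(0.0, "equilibrium"))             # Ito, equilibrium: 1
```

For the HK case ($\alpha = 1$, where $\nu \equiv 1$) this procedure yields the value of $D^*/D_0$ quoted in the text.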
Therefore the corresponding predictions of EMA for Ito cases are essentially \textit{exact}. \subsection{Systems with homogeneous equilibrium.} For systems with homogeneous equilibrium (the HK interpretation, or the random diffusivity model of Ref. \cite{Dean}) $\nu = 1$ and there is no difference between equilibrium and homogeneous sampling situations: the distributions of $\kappa$ and of $D$ coincide. The upper Wiener bound corresponds to $\overline{D} =D_0$. In $d=1$ the first inverse moment $\int_0^\infty D^{-1} p(D) d D$ of the PDF Eq.(\ref{eq:Sampled}) diverges, and the effective diffusion coefficient $D^*$ given by Eqs. (\ref{HomAv}) and (\ref{oned}) vanishes, giving rise to anomalous diffusion \cite{Camboni}. In $d=2$ and $d=3$ the value of $\langle D^{-1} \rangle$ given by Eq. (\ref{eq:FIM}) is finite, and the lower Wiener bounds $\langle D^{-1} \rangle^{-1}$ are $D_0/3$ and $D_0/2$, respectively. The result of EMA from Eq.(\ref{EMA}) is $D^* = 0.719 D_0$ in $d=2$ and $D^* = 0.852 D_0$ in $d=3$, both smaller than $D_0$. Independently of the quality of the approximation given by the EMA we note that the EMA result is realizable in the continuum case \cite{Milton}: the ensemble of all ``disordered'' configurations contains realizations with the effective conductance (diffusivity) equal to the one predicted by EMA. The lower Wiener bounds are also realizable \cite{Beran}. Therefore situations leading to the BnG diffusion are highly unlikely. The ensemble of disordered systems should indeed contain the realizations with diffusivities close to the upper bound $D_0$, but also the ones with considerably smaller diffusivities given by the EMA and by the lower bound. Therefore $D^*$ will typically be lower than $D_0$. In Fig. \ref{fig:MSDs} we present the full time dependence of the diffusion coefficient in the HK interpretation as following from numerical simulations. 
The figure shows $D(t) = \frac{1}{4} \frac{d}{dt}\langle |\mathbf{x}|^2(t) \rangle$ normalized to $D_0$ in $d=2$. The details of our simulation approach are given in Sec. \ref{Sec:Sim}. One readily infers that the diffusion coefficient decays with time, so that no BnG diffusion is observed. The value of the terminal diffusion coefficient $D^*$ obtained in simulations agrees well with the EMA prediction. \begin{figure} \centering% \includegraphics[width=\columnwidth]{Fig1new}% \caption{The behavior of $D(t)/D_0$ for the HK interpretation and the two Ito cases for the same values of $\lambda=10$ and $D_0 = 1$ in $d=2$. The upper dashed-dotted line corresponds to the equilibrated Ito case and stays horizontal within the statistical error. The error bars show standard deviations of the mean in 1500 realizations. The lower dashed line gives the time-dependent diffusion coefficient in the HK interpretation. The lowest full line corresponds to the Ito situation under homogeneous sampling. For these lines the statistical errors are of the order of the lines' thickness. The dotted horizontal lines correspond to asymptotic predictions of Fig. \ref{fig_EMA}. \label{fig:MSDs}} \end{figure} \subsection{Systems with inhomogeneous equilibrium.} For systems with inhomogeneous equilibrium the situations under homogeneous and equilibrium sampling are different. We start our discussion using the hints given by the EMA, as shown in Fig. \ref{fig_EMA}. For homogeneous sampling, we see that the difference between the short time diffusion coefficient $D_0$ and the long-time one $D^*$ increases when $\alpha$ decreases from 1. Parallel to the discussion above, there is no reason to expect the BnG behavior. The difference is maximal for the Ito case, when in $d=2$ the terminal diffusion coefficient is exactly one third of the short-time one (in $d=3$ it is one half of the initial one). The result of EMA for equilibrium sampling is shown in Fig. \ref{fig_EMA} as the upper curve. 
We see that the behavior is opposite to the one for homogeneous sampling, and that for $\alpha \to 0$ the difference between $D^*$ and $D_0$ vanishes, a result which again does not depend on the EMA. Moreover, this behavior persists even in $d=1$. The simulation results for the Ito cases are shown in Fig. \ref{fig:MSDs} along with the one for the HK. The effective medium approximation only allows for comparing the initial and terminal values of the diffusion coefficients, but gives neither the full time-dependence of the MSD (and therefore of $D(t)$) nor the forms of the corresponding PDFs (essentially, no analytical method is known to reliably reproduce such PDFs in the intermediate time domain). Therefore here we have to rely on the results of numerical simulations. Fig. \ref{fig_Ito_P_equilib} displays such PDFs for the equilibrated Ito case at different times. These exhibit the transition from an exponential to a Gaussian distribution, showing a pronounced central peak at intermediate times, which is well known from the experimental realizations \cite{Wang1,Wang2,Wagner}. \begin{figure} \centering% \includegraphics[width=\columnwidth]{Fig3new}% \caption{The PDFs $p(x)$ in the equilibrated Ito case in projection on the $x$-axis for $t = 1,10,100$ and 500 in a system with $\lambda=10$ and $D_0 = 1$ in $d=2$, see Sec. \ref{Sec:Sim} for details of simulations. The figure demonstrates the transition from the double-sided exponential to the Gaussian form in a BnG situation. \label{fig_Ito_P_equilib} }% \end{figure} \section{Simulations of the PDF and MSD in the pure diffusion cases} \label{Sec:Sim} In our simulations we start from a discretized model and consider the situation described by the master equation \[ \frac{d}{dt} p_i = \sum_j \left(w_{ij} p_j - w_{ji} p_i \right) \] where $i$ and $j$ number the sites of a square or cubic lattice with lattice constant $a=1$. Only the transitions between neighboring sites are possible. 
This master equation describes a random walk scheme and can be considered as a spatial discretization scheme for the corresponding Fokker-Planck equations, Eq.(\ref{eq:HK}) or Eq.(\ref{eq:Ito}). The transition rates follow the distribution similar to the distribution of local diffusivities $p(D)$ given by Eq.(\ref{eq:PDFloc}); the local diffusivity for the rates which vary slowly in space is simply $D = a^2 w = w$. For the HK case, Eq.(\ref{eq:HK}), the rates $w_{ij}=w_{i \leftarrow j}$ satisfy the condition of the detailed balance, i.e., in the absence of the external force $w_{ij}=w_{ji}$ as discussed in Ref. \cite{Sokolov}. The rates follow from the PDF $p(w)$ similar to $p(D)$. To simulate the Ito situations, Eq.(\ref{eq:Ito}), we assume that the transition rates from each site to all its neighbors are the same. The transition rate from the site $j$ to any of its neighboring sites $i$ is $w_{ij} = w_j$ and this $w_j$ is distributed according to the corresponding $p(w)$ \cite{Sokolov}. The transitions now are asymmetric: $w_{ij} \neq w_{ji}$, which makes a difference. To simulate the two situations we generate two or three dimensional arrays of correlated transition rates $w_{ij}$. For the Ito cases all $w_{ij} = w_j$ are defined on the sites $j$ of a simple square or simple cubic lattice, corresponding to the lattice of sites at which the probabilities $p_j$ are defined. For the HK case the transition rates $w_{ij} = w_{ji}$ are defined at the midpoints of bonds of the corresponding lattice of $p_j$. In $d=2$ this lattice of midpoints is a square lattice with the lattice constant equal to $a/\sqrt{2}$ and with main axes rotated by $\pi/4$ with respect to the axes of the $p_j$-lattice, but can also be considered as a square lattice with lattice constant $a$ with a basis (with an additional site placed at a center of a square), which is shifted by $a/2$ with respect to the lattice of $p_j$. 
For the three-dimensional case the lattice of the midpoints of bonds of a simple cubic lattice is an octahedral lattice, again considered as a simple cubic lattice with a basis. Note that the arrays used for simulation of HK and Ito cases are different in size: For example, for simulating a 2d lattice with $N=l \times l$ sites we need $l^2$ different values of transition rates for the Ito cases and $2l(l-1)$ (i.e. approximately twice as many) different rates for the HK case. The correlated random variables $w_{ij}$ on the corresponding lattices of transition rates can be easily obtained by a probability transformation. Let us call $F^{-1}_{\beta}(y)$ the function inverse to $F_\beta(D)$, as given by Eq.(\ref{CDF}). Then the probability transformation \[ w(z) = F^{-1}_{\beta} \left[\frac{1}{2} \mbox{erfc} \left(\frac{z}{\sqrt{2}} \right) \right] \] transforms the Gaussian variable $z$ with zero mean and unit variance into a $\Gamma$-distributed $w$ with shape parameter $\beta$ and unit mean. The corresponding function $F_\beta(x)$ can be easily inverted (there exists a standard MATLAB implementation for this inverse), and therefore the corresponding fields can be easily simulated for any given two-point correlation function. In our simulations we use independent Gaussian variables for the uncorrelated case, or a correlated Gaussian landscape with Gaussian correlation function $\langle z_i z_j \rangle = \exp(- r^2_{ij}/2\lambda^2)$ with $r_{ij}$ being the distance between the sites $i$ and $j$ on the corresponding lattice, with $\lambda$ being the correlation length. Such a correlated Gaussian array is easily obtained by filtering of the initial array of independent Gaussian variables with their subsequent renormalization necessary to keep $\langle z^2 \rangle = 1$ (note that the initial arrays of independent Gaussian variables must be sufficiently larger than the ``internal'' part used in simulations). 
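The probability transformation described above can be implemented in a few lines. The Python sketch below (the field size, filter width and padding are illustrative choices) generates a correlated Gamma-distributed landscape with unit mean; the Gaussian filter width $\lambda/\sqrt{2}$ is chosen so that the correlation function of the filtered noise is approximately $\exp(-r^2/2\lambda^2)$.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import erfc, gammaincinv

rng = np.random.default_rng(1)
L, lam, beta = 256, 10.0, 1.5     # field size, correlation length, Gamma shape

# Correlated Gaussian field: filter white noise with a Gaussian kernel of
# width lam/sqrt(2), then drop the padding rim and renormalize
pad = 4 * int(lam)
z = gaussian_filter(rng.standard_normal((L + 2 * pad, L + 2 * pad)),
                    lam / np.sqrt(2.0))
z = z[pad:-pad, pad:-pad]
z = (z - z.mean()) / z.std()

# Probability transformation: Gaussian -> uniform -> Gamma with unit mean
u = 0.5 * erfc(z / np.sqrt(2.0))
w = gammaincinv(beta, u) / beta   # inverse of the regularized Gamma CDF, Eq. (CDF)

assert abs(w.mean() - 1.0) < 0.1
```

Here `gammaincinv` plays the role of $F^{-1}_\beta$: the CDF of a Gamma variable with shape $\beta$ and unit mean is the regularized lower incomplete Gamma function evaluated at $\beta w$.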
The corresponding example of the diffusivity landscape as generated by this method is shown in Fig. \ref{fig_landscape}. \begin{figure}[h!] \centering% \includegraphics[width=\columnwidth]{sFig1new} \caption{A single realization of a landscape of diffusion coefficients (color coded) with $\lambda = 10$ lattice units obtained by the method outlined. \label{fig_landscape} } \end{figure} The simulations of the situations under homogeneous sampling follow by numerical solution of the master equation for a particle starting at the origin. In 2d the system is of the size $(2L+1) \times (2L+1)$ (i.e. one has $-L \leq k,l \leq L$) where $L=256$ is used in simulations. The master equation is solved by a forward Euler integration scheme to get $p_i(t)$ for each site $i$ characterized by coordinates $\mathbf{r}_i = (k,l)$. Fig. \ref{fig_PDFs} shows the exemplary distributions of $p_i(t)$ for the HK and Ito situations, which allow one to grasp the differences between the cases. Note that the corresponding figures use different realizations of the landscape. The probabilities $p_j(t) = p_{(k,l)}(t)$ are then used for plotting the corresponding PDFs. The MSD for a particle starting at the origin ($k=l=0$) is given by $\langle \mathbf{x}^2(t) \rangle = \sum_{k,l = -L}^L (k^2 + l^2) p_{(k,l)}(t)$. The PDFs and MSDs are then averaged over different realizations of landscapes $w_{ij}$ (typically 1500 realizations). In the Ito case under equilibrium sampling the corresponding probabilities and MSDs are weighted with the inverse transition rate from the origin $w_0^{-1}$, which is proportional to $D^{-1}(\mathbf{0})$ and therefore to the equilibrium concentration at the origin. Note that since the mean local diffusivity in this case is not equal to $D_0$, additional normalization is applied to keep $D_0=1$. The PDFs $p(x,t)$ shown in Figs. \ref{fig_Ito_P_equilib}, \ref{fig_PDF} and \ref{fig_PDFI} depict such PDFs for $\mathbf{r}_i = (k,0)$. 
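The forward Euler scheme and the MSD evaluation can be sketched as follows (Python; Ito-type rates, i.e. the rate of leaving a site is the same in all directions). The uniform-rate landscape serves only as a sanity check, for which the MSD must grow as $4 D_0 t$ in $d=2$; periodic boundaries are used, which is harmless as long as the probability stays far from the rim.

```python
import numpy as np

def msd_ito(w, t_max, dt):
    """Forward-Euler solution of the master equation on a 2d lattice with
    Ito-type rates: the rate of leaving site j to each neighbour is w[j].
    Returns the MSD for a particle starting at the central site."""
    L = w.shape[0] // 2
    p = np.zeros_like(w)
    p[L, L] = 1.0                                  # particle starts at the origin
    k = np.arange(-L, L + 1)
    r2 = k[:, None]**2 + k[None, :]**2             # squared distance to the origin
    for _ in range(int(round(t_max / dt))):
        flux = w * p                               # outgoing flux per direction
        gain = (np.roll(flux, 1, 0) + np.roll(flux, -1, 0) +
                np.roll(flux, 1, 1) + np.roll(flux, -1, 1))
        p += dt * (gain - 4.0 * flux)              # dp/dt = sum_j (w_ij p_j - w_ji p_i)
    return float((r2 * p).sum())

# Sanity check: uniform rates w = D0 = 1 give free diffusion with MSD = 4 D0 t
msd = msd_ito(np.ones((65, 65)), t_max=1.0, dt=0.01)
assert abs(msd - 4.0) < 1e-3
```

A heterogeneous landscape generated as in the previous section can be passed as `w` instead of the uniform array; the averaging over landscape realizations and the equilibrium weighting by $w_0^{-1}$ are then applied on top of this routine.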
The approach in 3d is exactly the same (except for the fact that $\mathbf{r}_i = (k,l,m)$) but the size of the system is smaller: $L=64$. \begin{figure}[h!] \centering% \includegraphics[width=\columnwidth]{PHK}\\ \includegraphics[width=\columnwidth]{PIto} \caption{Single realizations of probabilities to find particles at corresponding sites of the landscapes with $\lambda=10$ for simulation time $t=0.3$ for the H\"anggi-Klimontovich (upper panel) and Ito (lower panel) cases. Note the difference in color coding in the two panels. One readily infers more or less homogeneous spreading of probabilities for the HK case, and a very granular structure corresponding to probabilities' concentration in the regions with lower diffusivities for the Ito case. \label{fig_PDFs} } \end{figure} The results for the time-dependent diffusivity $D(t)$ are obtained by numerical differentiation of the corresponding MSD: $D(t) = \frac{1}{2d} \frac{d}{dt} \langle |\mathbf{x} |^2(t)\rangle$. The results for the HK case are shown in Fig. \ref{fig_dDdT}. Similar results for the Ito case are presented in Fig. \ref{fig_MSDI}. The results shown in Fig. \ref{fig:MSDs} in the previous section are the curve for $\lambda = 10$ from Fig. \ref{fig_dDdT}, and the two corresponding curves for the homogeneous and equilibrium sampling from Fig. \ref{fig_MSDI}. To show that the behavior in $d=3$ is similar we present in Fig. \ref{fig_3d} the results for the time-dependent diffusion coefficient in $d=3$ for $\lambda = 3$. The EMA prediction for the HK case corresponds to $D^*/D_0 \approx 0.852$, and for the Ito case under homogeneous sampling the analytical prediction is $D^*/D_0 = 0.5$. \begin{figure}[h!]
\centering% \scalebox {0.40}{\includegraphics{sFig2new}}% \caption{Time-dependent diffusion coefficients in the HK case as obtained by numerical differentiation of the MSD for diffusivity landscapes with different correlation lengths in $d=2$: for uncorrelated transition rates (solid black curve), and for diffusivity landscapes with correlation length $\lambda = 5$ (dashed blue line) and $\lambda = 10$ (red dashed-dotted line). The deviation of the curves in the limit of short times gives an impression of the typical statistical error of the simulation. The parameters of the simulations imply $D_0 =\overline{D} =1$. The simulation results at long times reproduce those of the EMA within the statistical accuracy (i.e. within a few percent). Thus the terminal diffusion coefficient for the uncorrelated case is $D^*=0.718$ vs. the EMA prediction of 0.719, showing the high accuracy of the EMA prediction. The diffusion coefficient is not constant over time, as it would have to be for BnG diffusion. \label{fig_dDdT}} \end{figure} \begin{figure}[h!] \centering% \scalebox {0.40}{\includegraphics{sFig3new}}% \caption{The probability density functions $p(x,t)$ of the displacements for the HK case with $\lambda = 10$ in $d=2$. The times are (from top to bottom) $t=0.1, 1, 10$ and 200. Note the logarithmic scale. The figure again clearly demonstrates the transition between the double-sided exponential and the Gaussian form. Note that the peak at the mode is much less pronounced than in the Ito cases, Fig. \ref{fig_Ito_P_equilib} and Fig. \ref{fig_PDFI}. \label{fig_PDF}}% \end{figure} \begin{figure}[htb] \centering% \scalebox {0.40}{\includegraphics{sFig4new}}% \caption{Time-dependent diffusion coefficients in diffusivity landscapes with different correlation lengths in $d=2$ for the Ito case. The lower bundle of curves corresponds to homogeneous sampling and shows the considerable decay of the diffusion coefficient with time.
The theoretical estimate for the terminal diffusion coefficient is $D^* = \frac{1}{3} D_0 $, and the simulation results evidently approach this value. The upper bundle of curves corresponds to the equilibrium sampling, with the diffusion coefficient staying constant. The deviations from the constant diffusivity stay within the statistical error. The coding of the curves is the same as in Fig. \ref{fig_dDdT}.} \label{fig_MSDI}% \end{figure} \begin{figure}[htb] \centering% \scalebox {0.40}{\includegraphics{sFig5new}}% \caption{The probability density functions $p(x,t)$ for $\lambda = 10$ for the Ito case under homogeneous sampling. The times are (from top to bottom) $t=1, 10, 100$ and 250. Note the logarithmic scale. \label{fig_PDFI}}% \end{figure} \begin{figure}[h!] \centering% \scalebox {0.40}{\includegraphics{sFig6new}}% \caption{The behavior of $D(t)/D_0$ for the HK case and the two Ito cases for the same value of $\lambda=3$ in $d=3$. The upper dashed-dotted line corresponds to the equilibrated Ito case and stays horizontal within the statistical error. The lower dashed line gives the time-dependent diffusion coefficient in the HK case. The lowest full line corresponds to the Ito situation under homogeneous sampling. \label{fig_3d}} \end{figure} \section{Discussion} The properties of diffusion in random diffusivity landscapes strongly depend on the model adopted, but still share some similarities. The PDFs averaged over the realizations of diffusivity landscapes show similar features in all cases: the gradual transition from an exponential to a Gaussian form, which happens at the wings of the distribution, while all distributions still show a cusp at the origin at intermediate times. These findings are similar to what is seen in experiments \cite{Wang1,Wang2,Wagner,Larrat}, and in theoretical models of disordered systems like the trap model of Refs. \cite{Traps,Luo}, close in spirit to our Ito model with homogeneous sampling, or a barrier model of Ref.
\cite{Stylianidou}, close in spirit to the HK one. The behavior of the MSD, however, differs between systems: it may show BnG diffusion, a crossover between different types of diffusive behavior, or even anomalous diffusion. As we have already seen, within the model class adopted (spatially inhomogeneous systems with slowly varying diffusion coefficient), the equilibrated Ito model is the only promising candidate for a model showing BnG diffusion, i.e. the diffusion coefficient staying constant over time. The Ito interpretation relies on the martingale property, which, in the Gaussian case, means that the increments of the process during small time intervals are symmetric \cite{Stroock}. A random walk interpretation of this process can be a continuous time random walk with locally symmetric steps in space, in which the spatial change of the diffusivity is attributed to coordinate-dependent waiting times. Such a random walk scheme corresponds to a trap model \cite{Sokolov}, which is thus the most prominent candidate for modeling BnG diffusion. In higher dimensions (in $d=3$, and in $d=2$ approximately, up to logarithmic corrections which are hard to detect) trap models may be mapped to CTRW under disorder averaging \cite{KlaSo}. CTRW is a process subordinated to a simple random walk (in a continuous limit -- a process subordinated to Brownian motion). In our case, however, the waiting times of the CTRW would be correlated, which differs from the standard CTRW schemes. The diffusing diffusivity model \cite{Seno} is also a representative of the class of models subordinated to Brownian motion, and shares some properties with the corresponding correlated CTRWs. We note, however, that this diffusing diffusivity model shows a different kind of transition from an exponential to a Gaussian PDF, which does not lead to a cusp at the origin. \section{Acknowledgements} EBP is supported by the Russian Science Foundation, project 19-15-00201.
AC acknowledges the support by the Deutsche Forschungsgemeinschaft within the project ME1535/7-1.
\section{Introduction} Controlling the polarization of X-ray FEL pulses is critical for many experiments. In particular, a number of FEL applications in the soft X-ray range of the electromagnetic spectrum require the possibility of switching between left- and right-handed circularly polarized pulses with a high, stable degree of polarization. However, presently, most XFELs are based on planar undulators, meaning that the output radiation is mainly linearly polarized in the direction orthogonal to the electron acceleration, with a very small power fraction in the other direction~\cite{Geloni2015c}. A straightforward solution to obtain full polarization control is to rely on APPLE-like undulators~\cite{Sasaki1994,Hwang1999}, as was done for example at FERMI~\cite{Allaria2012}. This solution is nevertheless not convenient in the case of facilities already built or under construction, and is also more expensive. A way around this issue is to use planar undulators only for inducing bunching in the electron beam. The bunched beam can then be sent through a short ``afterburner'' undulator with polarization-control capabilities. In this case, the afterburner acts as a coherent radiator emitting, to fix ideas, circularly polarized light, while the linearly polarized radiation emitted during the bunching process is only a detrimental byproduct, and should be separated from the main circularly polarized pulse, or suppressed before reaching the sample. It is natural to start the afterburner before the bunching saturates, so that the circularly polarized pulse still undergoes FEL amplification (albeit limited) in the afterburner, and will thus have a higher energy than the linearly polarized one. However, even accounting for a few extra gain-lengths in the afterburner, the power ratio between the circularly polarized and the linearly polarized pulses only amounts to several units.
In contrast, a ratio of about a thousand is desirable, to be sure that the purest possible degree of polarization is achieved. Several approaches have been developed to address this issue. One may think of exploiting the bunching at higher harmonics that develops in the electron beam near saturation. In this case, the planar undulator preceding the afterburner is tuned at a subharmonic of the target wavelength and the two pulses are separated in wavelength. However, depending on the experiment, one may still need to spectrally filter the output radiation. Moreover, this scheme cannot be used to reach the longest possible wavelengths for which the XFEL is designed. For example, if the lowest photon energy achievable is about $250$~eV, as in the case of SASE3 at the European XFEL~\cite{XTDR}, using this method one could control the polarization starting from the second harmonic only, i.e. $500$~eV. Several other approaches to reduce the power of the linearly polarized radiation have been proposed. In~\cite{Geloni2011b} it was proposed to produce the bunching well upstream of the afterburner. Since here we discuss soft X-rays, one usually needs only a few XFEL segments to reach the optimum bunching, and the ultrarelativistic energy in the multi-GeV range guarantees that the bunched beam can be transported without deterioration up to the afterburner. Then, due to divergence, the difference in the radiation spot sizes relevant to linearly and circularly polarized pulses can be large enough to spatially filter the linearly polarized pulse using a thin slotted foil, while the electron beam can bypass the foil through a chicane before being dumped. In this way it was previously demonstrated that a power ratio of the order of $10^3$ can be achieved. An elegant alternative was proposed in~\cite{Schneidmiller2013}. In that paper an asymptotic solution of the FEL equations for large negative values of the detuning parameter is used.
It maximizes the electron beam bunching while minimizing the output radiation. This works, in practice, by increasing the value of the undulator parameter~$K$ while the electron beam progresses through the planar undulator -- hence the name ``inverse tapering''. At the entrance of the afterburner the beam is strongly bunched due to FEL interaction, but the linearly polarized pulse is strongly suppressed. The method was tested at the LCLS~\cite{Lutman2016}, where a contrast factor of~$20$ was obtained. Similar tests at FLASH~\cite{Schneidmiller2016} demonstrated a suppression by a factor of $200$, which should also be obtainable at the European XFEL. The main reason for the different performance is ascribed to the sensitivity of the inverse tapering method to the electron energy spread. In this note we study the performance of the inverse tapering method for the SASE3 beamline of the European XFEL using the simulation code GENESIS~\cite{GENE}. In this way we can include electron beam distributions in current, emittance, energy and energy spread obtained from start-to-end beam dynamics simulations~\cite{mpy_web}, and an undulator lattice with intersections~\cite{Tschentscher2011}. We confirm that the method is capable of yielding a suppression factor of the order of a thousand. Furthermore, we complement our studies with several techniques, based on electron and photon beam optics readily available at SASE3, to further decrease the density of the linearly polarized radiation at the sample position. \section{FEL Simulation Results} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{el_current.eps} \includegraphics[width=0.45\linewidth]{el_emittance.eps} \includegraphics[width=0.45\linewidth]{el_energy.eps} \includegraphics[width=0.45\linewidth]{el_enspread.eps} \includegraphics[width=0.45\linewidth]{el_wake.eps} \caption{Results from electron beam start-to-end simulations at the entrance of SASE3. (First Row, Left) Current profile.
(First Row, Right) Normalized emittance as a function of the position inside the electron beam. (Second Row, Left) Energy profile along the beam. (Second Row, Right) Electron beam energy spread profile. (Bottom row) Resistive wakefields in the SASE3 undulator.} \label{fig:ebeam} \end{figure} As discussed above, the inverse tapering technique allows one to obtain a high degree of electron density modulation in the beam, while significantly reducing the FEL radiation power. In this study we used the electron beam obtained from start-to-end simulations, shown in Figure~\ref{fig:ebeam}. We found that the optimal inverse tapering strategy for our electron beam parameters and the SASE3 undulator at the European XFEL ($ \lambda_w=68 $~mm, 21 segments, each 5~m long with 1.1~m intersections) is to start the SASE FEL process with a uniform undulator up to the point where the bunching reaches a value of $0.025-0.05$ (the radiation power is then 3-4 orders of magnitude below saturation). Then a linear inverse tapering law is introduced, such that the undulator K value is increased by about 3\% per undulator section. In this case the radiation power growth is suppressed, while the bunching grows linearly. In Figure~\ref{fig:el_ph_sp} we show the evolution of the electron phase space for different undulator configurations. These configurations lead to a comparable bunching at the fundamental harmonic, but qualitatively different electron phase space distributions in the bucket at the undulator end. Time-dependent simulations were run for the nominal 20~pC electron beam with an energy of 14~GeV. The beam was propagated through the SASE3 undulator segments resonant at 800~eV photon energy. When no inverse tapering is applied (Fig.~\ref{fig:el_ph_sp}, top row), a bunching value of~0.6 is reached within 5 undulator segments, generating 40~GW FEL power.
If inverse tapering is introduced after the $3^{rd}$ undulator (Fig.~\ref{fig:el_ph_sp}, middle row), a total of 8 undulator segments is required to reach the same bunching value with 100~MW radiation power. Finally, introducing inverse tapering after the $4^{th}$ undulator (Fig.~\ref{fig:el_ph_sp}, bottom row), only a total of 6 undulator segments is needed to reach maximum bunching, at the expense of a higher FEL output (2~GW) and a larger electron beam energy spread. \begin{figure} \centering \includegraphics[angle=90,width=1.1\textwidth]{invtap_hpspace2.pdf} \caption{Electron phase space at various positions inside the SASE3 undulator resonant at 800~eV photon energy.} \label{fig:el_ph_sp} \end{figure} We now proceed to investigate the performance of the baseline SASE3 beamline of the European XFEL. Four 2~m-long segments of a helical undulator with $ \lambda_w=80 $~mm period are assumed to be installed at the end of the planar baseline undulator\footnote{We assume that the helical undulator segments are installed in the space of the last two baseline segments, which in turn are reinstalled at the beginning of SASE3 (see Figure~\ref{fig:schemes}-b)}. As a simplifying assumption, we consider the radiation from a helical undulator to be perfectly circularly polarized. For our case study we set the electron beam to 12~GeV energy for simulating photon energies of 500~eV, 1~keV and 2~keV, and to 8.5~GeV for the 250~eV photon energy. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{run.10.s1.gout_und_quad--rad_pow_en--el_bunching.png} \includegraphics[width=0.49\textwidth]{run.10.s4.gout_und_quad--rad_pow_en--el_bunching.png} \caption{Evolution of the electron beam and radiation parameters within the planar (left subfigure) and helical (right subfigure) undulators for 2000~eV photon energy. The top subplots represent undulator K values and quadrupole fields.
The next subplots provide the radiation peak power and integrated energy of the pulse as a function of the undulator length. The last subplots show the evolution of the maximum (black line) and average (grey line) bunching along the beam.} \label{fig:2000eV_evo} \end{figure} The electron beam and radiation evolution inside the inverse tapered planar undulator are presented in the left plot of Figure~\ref{fig:2000eV_evo} for the 2000~eV photon energy case. The electron beam distribution with developed microbunching is dumped and used as an input for the next simulation stage -- radiating in the helical undulator. We found that a free-space drift of the electron beam between the inverse tapered undulator and the helical one does not significantly affect the electron beam properties, and is therefore ignored in our numerical simulations. In the helical undulator the already-bunched electron beam quickly reaches saturation, therefore an appropriate linear post-saturation tapering is applied. Simulation results for the helical radiator are presented in the right plot of Figure~\ref{fig:2000eV_evo}. Numerical studies indicate that by means of inverse tapering of the baseline undulator one can substantially decrease the linearly polarized radiation output power. The energy of the 2~keV radiation pulse produced in the helical undulator (0.11~mJ) exceeds the linearly polarized one (0.1~$\mu$J) by more than 3 orders of magnitude (see Table~\ref{table:results}, last column). A high degree of circular polarization of the total radiation pulse is thus obtained. Results for the other energy points are summarized in Table~\ref{table:results} (see inverse tapered contribution). It is important to remark that at low photon energies there are several FEL gain lengths within a single undulator segment, and it becomes challenging to stop the FEL amplification at a certain desired power level, should it be reached in the middle of an undulator.
When we calculated our results we did not account for this effect. However, in order to reach better performance one may detune the first undulator segment. It is also worth mentioning that an increase of the output linearly polarized radiation divergence takes place when the number of uniform undulator segments is reduced. \section{Complementary methods to increase the degree of circular polarization} As discussed above, the inverse tapering technique is expected to be very efficient at the SASE3 line of the European XFEL. In this section we discuss complementary but independent methods to disentangle the residual linearly polarized component in the output radiation pulse. These methods can be used in combination with inverse tapering to guarantee the stable delivery of a high degree of circular polarization to the SASE3 scientific instruments. We also investigate a way to optimize the ratio of the circularly polarized component of the photon density at the sample to the linearly polarized one. Simulations in this section are based on the wavefront propagation technique, implemented in the code Synchrotron Radiation Workshop~\cite{Chubar1999}. The FEL radiation sources are modeled as Gaussian sources, based on the FEL radiation divergence. In order to disentangle the effects of our methods from those of inverse tapering, we model the sources of linearly and circularly polarized pulses with the same intensity, therefore assuming no linear polarization suppression via inverse tapering. The radiation is propagated through the optics beamline layout down to the interaction region~\textit{f1} of the SQS instrument where the sample would be introduced (see Figure~\ref{fig:schemes}). No imperfections of the optical components were assumed during the simulations, in order to study solely the effects of the linearly polarized background suppression methods.
Only the focusing optical components were modeled for radiation propagation, such as the offset mirror M1, the monochromator pre-mirrors M3a and M3b and the SQS KB mirror pair. The KB mirrors of the SQS instrument are tuned such that the radiation originating in the helical undulator is focused on the sample. Locations and parameters of the optical components that we use for the simulations are provided in Table~\ref{table:optics}. We assume that no circular birefringence effects take place after radiation reflection from the beamline components. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{schemes4.png} \caption{Schematic illustration of methods to generate and deliver circularly polarized radiation to the sample.} \label{fig:schemes} \end{figure} \begin{table} \makegapedcells \setcellgapes{3pt} \scriptsize \centering \begin{tabular}{cc} \hline\hline Parameter & Value \\ \hline \multicolumn{2}{c}{\textbf{M1 - plane mirror (ignored)}}\\ \hline \multicolumn{2}{c}{\textbf{M2 - bendable mirror}}\\ \hline focusing plane & horizontal \\ \hline position$^1$ & 272~m \\ \hline length & 850~mm \\ \hline radius of curvature & \makecell{8200~m (LE$^2$)\\ 27333~m (HE$^3$)} \\ \hline inc. angle & \makecell{20~mrad (LE)\\ 6~mrad (HE)} \\ \hline \multicolumn{2}{c}{\textbf{M3 - cylindrical mirror}}\\ \hline focusing plane & vertical \\ \hline position & 288~m \\ \hline length & 580~mm \\ \hline radius of curvature & \makecell{7482~m (LE)\\ 16710~m (HE)} \\ \hline inc. angle & \makecell{20~mrad (LE)\\ 9~mrad (HE)} \\ \hline \end{tabular} \hspace{1cm} \begin{tabular}{cc} \hline \multicolumn{2}{c}{\textbf{M4 - plane mirror (ignored)}}\\ \hline \multicolumn{2}{c}{\textbf{Exit slit (opened)}}\\ \hline position & 388~m \\ \hline \multicolumn{2}{c}{\textbf{VKB - elliptical adaptive mirror}}\\ \hline focusing plane & vertical \\ \hline position & 430.6~m \\ \hline length & 800~mm \\ \hline inc.
angle & 9~mrad \\ \hline \multicolumn{2}{c}{\textbf{HKB - elliptical adaptive mirror}}\\ \hline focusing plane & horizontal \\ \hline position & 431.4~m \\ \hline length & 800~mm \\ \hline inc. angle & 9~mrad \\ \hline \multicolumn{2}{c}{\textbf{f1 - image plane}}\\ \hline position & 433.2~m \\ \hline \multicolumn{2}{c}{\textbf{f2 - image plane}}\\ \hline position & 435.23~m \\ \hline\hline \end{tabular} \caption{Optical system parameters used for the calculations, based on \cite{Mazza2014a} \newline $^1$ distance from the last undulator segment end \newline $^2$ low energy operation geometry \newline $^3$ high energy operation geometry} \label{table:optics} \end{table} \subsection{Defocusing approach} In order to carry out certain experiments, it may be enough to \textit{reduce the photon density} of the linearly polarized radiation on the sample below a certain threshold, obtaining an acceptable degree of circular polarization. Since the planar and helical undulators may be separated spatially by a distance much larger than the Rayleigh length of the emitted radiation, one can obtain two separate sources of linearly and circularly polarized radiation. We separately studied the two cases in which an intermediate focus (IMF) is present or not (see Fig.~\ref{fig:schemes} - b and c). Let us first consider the case where no IMF is present (Fig.~\ref{fig:schemes}-b). The two sources will inevitably be focused by the SQS KB system to two separate images, located nearly 2~mm apart. At the position where one of the images is focused (in our case, the circularly polarized radiation), the other would be out of focus, forming a plateau with significantly lower photon density, as presented in Figure~\ref{fig:Defocused} (first row) for the case of 250~eV. Hereinafter we refer to this method of reducing the photon density of the linearly polarized background due to defocusing as the \textit{defocusing approach}, since the linearly polarized radiation is out-of-focus at the sample plane.
The resulting photon density ratio of the two polarization components is $ 4.9\cdot10^{-2} $ for 250~eV (see Table~\ref{table:results}, photon density ratio, no kick, no IMF). At the higher energy of 2~keV, instead, the same ratio amounts to $ 1.7\cdot10^{-3} $. \begin{figure} \scriptsize \centering \hspace{0.8cm} circular at sample \hspace{1.4cm} linear at sample \hspace{1.3cm} linear at its waist \\ \rotatebox{90}{\hspace{1cm}without IMF} \includegraphics[width=0.25\textwidth]{250_circ_noap_2_cut.png} \includegraphics[width=0.25\textwidth]{250_lin_noap_2_cut.png} \includegraphics[width=0.25\textwidth]{250_lin_noap_2_ref_cut.png} \rotatebox{90}{\hspace{1cm}with IMF} \includegraphics[width=0.25\textwidth]{250_circ_leap_1_cut.png} \includegraphics[width=0.25\textwidth]{250_lin_leap_1_cut.png} \includegraphics[width=0.25\textwidth]{250_lin_leap_1_ref_cut.png} \caption{ Transverse intensity distributions of the 250~eV radiation, obtained without (first row) and with (second row) introduction of an intermediate focus (IMF), with transverse sizes of $1.1\times1.5~\mu m^{2}$ and $1.3\times2.3~\mu m^{2}$ respectively (full width at half maximum). First column - circularly polarized radiation distribution at the sample position; second column - out-of-focus linearly polarized radiation distribution at the same position. Third column - the linearly polarized radiation distribution at its minimum waist, located 2~mm upstream of the sample if no IMF is introduced, and 22~mm upstream in the other case. \newline The photon density ratio between the circularly and linearly polarized radiation at the sample is approximately $5\cdot10^{-2}$ without IMF and $1\cdot10^{-3}$ with it. Mirror height error effects are ignored. } \label{fig:Defocused} \end{figure} From these numbers we see that the efficiency of the defocusing approach changes with the photon energy.
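The scale of such ratios can be estimated with elementary Gaussian-beam optics: the peak intensity of a beam observed a distance $\Delta z$ away from its waist is suppressed by $1/[1+(\Delta z/z_R)^2]$, with $z_R = \pi w_0^2/\lambda$ the Rayleigh length. The following fragment is a rough order-of-magnitude sketch under the assumption of ideal Gaussian beams with equal waists; it does not reproduce the SRW wavefront-propagation results above, only their scaling with wavelength and waist separation:

```python
import math

def defocus_ratio(wavelength, w0, dz):
    """Peak-intensity suppression of an ideal Gaussian beam observed a
    distance dz away from its waist, relative to the in-focus peak:
        I(dz) / I(0) = 1 / (1 + (dz / z_R)^2),  z_R = pi * w0^2 / wavelength.
    All lengths are in meters."""
    z_r = math.pi * w0 ** 2 / wavelength
    return 1.0 / (1.0 + (dz / z_r) ** 2)
```

For instance, with a ~1~$\mu$m waist at 250~eV ($\lambda \approx 5$~nm) and the ~2~mm waist separation quoted above, the out-of-focus peak density is suppressed by roughly an order of magnitude. The lengthening of $z_R$ by the finite KB clear aperture is what degrades the contrast at low photon energies, and the IMF restores it by increasing the separation $\Delta z$ measured in units of $z_R$. All numeric inputs here are illustrative.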
In fact, the geometrical transmission of the SASE3-SQS beamline varies: it reaches 80\% above 2~keV photon energy, but deteriorates down to 5\% at 250~eV, since the KB focusing mirrors become overfilled with highly divergent radiation. These values are slightly lower than the nominal ones presented in~\cite{Mazza2012} (Fig.~3.2.3, red curve), due to the larger-than-nominal divergence of the inverse-tapered FEL radiation. The finite size of the projected clear aperture of the KB mirrors increases the Rayleigh length of the focused radiation waist or, in other words, the depth of focus. When the Rayleigh length of the waists becomes comparable to or smaller than their longitudinal separation, the defocusing approach is not effective any more. Fortunately, a beam transport scheme with an IMF is foreseen via insertion of mirrors M3-M4 into the beam path and changing the curvature of the initially plane mirror M2. While mirrors M1 and M4 remain flat and only direct the FEL radiation, mirrors M2 and M3 focus the radiation in the horizontal and vertical planes, creating the IMF (see Fig.~\ref{fig:schemes}-c). The IMF introduction allows one \begin{itemize} \item to transport long-wavelength radiation through the KB mirrors much more efficiently; \item to increase the distance between images of circularly and linearly polarized radiation. \end{itemize} The introduction of an IMF increases the distance between waists of different polarizations in terms of their Rayleigh lengths. It results in a larger spot size difference, as presented in Figure~\ref{fig:Defocused}, bottom row. Therefore, the IMF allows one to increase the ratio of the photon density of different polarizations by more than one order of magnitude: from $ 4.9\cdot10^{-2} $ to $ 9.2\cdot10^{-4} $ at 250~eV (see Table~\ref{table:results}, photon density ratio, no kick, IMF). The photon density ratio is also presented graphically in Figure~\ref{fig:ph_dens} for the different scenarios.
At higher photon energies the benefit of the IMF is not as strong: the radiation divergence is small and the beamline geometrical transmission is large even without it. These findings, obtained for the~\textit{f1} image plane, also apply to the image plane~\textit{f2}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Ph_dens.eps} \caption{Reduction of the peak photon density at the sample due to the defocusing effect for different radiation propagation scenarios: ``on-axis'' stands for the nominal SASE3 radiation propagation scenario with neither an e-beam kick nor an IMF; ``IMF'' indicates the presence of the intermediate focus in the radiation propagation geometry; the electron beam, ``kicked'' transversely in the undulator by the maximum possible angle, radiates in the propagation direction, reducing the geometrical beamline transmission. No inverse tapering contribution is assumed: sources of equal intensities are modeled. } \label{fig:ph_dens} \end{figure} \subsection{Beam split approach} The defocusing method is based on the exploitation of existing components and we expect it can be routinely implemented during European XFEL operations. However, for some applications it may be important to prevent the linearly polarized background from entering the experimental area or to reduce its pulse energy at the sample position. This implies some kind of spatial filtering. The first aperture (the ``COLB-1'' element) that can potentially be used for spatial filtering of the linearly polarized background is located 187~m downstream of the helical undulator. At that position, both circularly and linearly polarized radiation pulses diverge to a comparable transverse size. Hence, the method proposed in~\cite{Geloni2011b} is not applicable anymore. However, if the circularly and linearly polarized radiation pulses are emitted at a certain angle with respect to each other, they may be separated in the far zone.
To this end, the electron beam should be transversely deflected somewhere between the baseline and helical undulators. Then the linearly polarized background may be blocked with any beamline aperture: in fact, all apertures are located in the far zone of the radiation. This possibility was investigated earlier in terms of designing a beam transport system capable of transporting the electron beam through the bend to the next undulator while preserving the microbunching~\cite{Li2010}. In the light of the Delta undulator commissioning at LCLS~\cite{Nuhn2014}, it was found that a transverse deflection of the electron beam does not significantly deteriorate the FEL power radiated in a downstream undulator. In the current approach we take advantage of this effect, a theoretical explanation of which is proposed in~\cite{Geloni2015,Geloni2015a,Geloni2016,Geloni2016b}. In order to effectively separate the two pulses one should introduce a transverse deflection of the electron beam larger than the FEL radiation divergence. We can define a criterion of effective spatial separation of the two beams by requiring a deflection angle larger than the sum of the full widths at half maximum of the two radiation beams' divergences. For example, the average 500~eV radiation divergence is 42~$\mu$rad FWHM; therefore, in order to disentangle the two pulses spatially in the far zone, an 84~$\mu$rad transverse kick should be applied to the electrons (see Table~\ref{table:results}). The divergence of the SASE3 radiation pulses is presented in Figure~\ref{fig:rad_div} for two distant photon energies.
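The separation criterion amounts to a simple comparison, which the following sketch makes explicit (function names are ours; the numbers in the usage example are the 500~eV values quoted above, with both cones taken at the same FWHM):

```python
def required_kick(fwhm_lin, fwhm_circ):
    """Separation criterion for the beam-split approach: the transverse
    electron-beam kick must exceed the sum of the FWHM divergences of the
    linearly and circularly polarized radiation cones (all angles in the
    same units, e.g. microradians)."""
    return fwhm_lin + fwhm_circ

def is_separable(kick_max, fwhm_lin, fwhm_circ):
    """True if the maximum achievable kick fulfills the criterion."""
    return kick_max >= required_kick(fwhm_lin, fwhm_circ)
```

With both cones at 42~$\mu$rad FWHM, `required_kick` gives 84~$\mu$rad; a maximum kick of 58~$\mu$rad (as available at 500~eV, see below) therefore fails the criterion.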
In the SASE radiation mode the divergence may fluctuate significantly, and the linearly polarized radiation is more divergent on average (see Figure~\ref{fig:div_stat}). \begin{figure} \centering \includegraphics[trim= 50 300 730 60,clip,height=0.4\textwidth]{250_v0_radiation_distribution_ff_run.0.s4.gout.png} \includegraphics[trim= 100 300 730 60,clip,height=0.4\textwidth]{2000_v4_radiation_distribution_ff_run.2.s1.gout.png} \caption{Angular intensity distribution in the far zone of the radiation from the inverse tapered undulator for 250~eV (left plot) and 2000~eV (right plot) photon energies. Simulation results are provided for ``typical'' shots.} \label{fig:rad_div} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{div_stat_new.eps} \caption{Divergence of the radiation originating from the inverse tapered planar undulator (linearly polarized) and from the helical undulator for various photon energies.} \label{fig:div_stat} \end{figure} Once spatially separated, the linearly polarized radiation spot may be blocked with an aperture as discussed above. However, at low photon energies the required deflection of the electron beam becomes significant (see Table~\ref{table:results}). The effective opening angle of the SASE3 beam transport aperture is comparable with the divergence of the FEL radiation at small photon energies (below 1~keV). Therefore, radiation propagating at an angle larger than 35~$\mu$rad with respect to the optical axis will be blocked by the 20~mm ``COLA1'' collimator located 3~m upstream of the first offset mirror M1. The circularly polarized radiation at long wavelengths is then inevitably blocked if the orbit kick is applied within the helical undulator.
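A rough geometric consistency check of the quoted 35~$\mu$rad acceptance (a sketch under an assumption of ours: the paper does not state the source-to-collimator distance, so the $\approx$285~m used below is purely illustrative):

```python
# Geometric acceptance of a centered circular collimator: the half-aperture
# divided by the propagation distance from the source gives the maximum
# accepted propagation angle.

def acceptance_urad(aperture_mm, distance_m):
    """Maximum accepted angle (urad) for a centered aperture at a given distance."""
    half_aperture_m = aperture_mm / 2 * 1e-3
    return half_aperture_m / distance_m * 1e6

# 20 mm "COLA1" collimator; the 285 m distance is an assumed, illustrative value.
theta_max = acceptance_urad(20.0, 285.0)  # ~35 urad
```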
Therefore, we limit ourselves to the scenario in which the circularly polarized radiation propagates parallel to the design optical axis and the transverse kick is applied to the electron beam within the inverse tapered undulator segments, directing the linearly polarized radiation accordingly (see Figure~\ref{fig:schemes}-d). The kick of the electron beam orbit implies an appropriate re-arrangement of the electron beam focusing system, namely the quadrupoles, in accordance with the new electron orbit. The quadrupoles should be shifted transversely according to a linear law to ensure that the electron beam passes through the optical center of each quadrupole. Due to the quadrupole displacement constraints (1.4~mm off the optical axis in either direction), we assume the maximum still-reasonable electron beam offset in both dimensions to be 2.5~mm (2$\times$1.25~mm). Based on our numerical simulations, the required inverse tapered undulator length is, e.g., 42.8~meters for the 500~eV photon energy case. This corresponds to a maximum possible deflection angle of 58~$\mu$rad, which is not enough for effective elimination of the linearly polarized background, while at 1~keV a maximum deflection of 26~$\mu$rad is just enough to fulfill our separation criterion (see Table~\ref{table:results}). Consequently, the beam split approach is best used at photon energies above 1~keV, where a maximum circular polarization can be reached (see Figure~\ref{fig:Transm_ratio}). Finally, at photon energies around 2~keV it may already be feasible to introduce a kick to the electron beam inside the helical undulator, accompanied by an appropriate transverse arrangement of the M1/M2 offset mirror pair, which would significantly reduce the machine tuning time.
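The maximum deflection angle follows from simple geometry; the sketch below (names and the linear offset-over-length relation are our reading of the text, not code from the paper) reproduces the 500~eV number:

```python
# With quadrupoles shifted along a linear law, the usable transverse offset
# budget divided by the length over which the kicked orbit develops limits
# the achievable deflection angle.

def max_kick_urad(offset_budget_mm, orbit_length_m):
    """Maximum kick angle (urad) allowed by the quadrupole displacement limit."""
    return offset_budget_mm * 1e-3 / orbit_length_m * 1e6

# 500 eV case: 2 x 1.25 mm = 2.5 mm offset budget over 42.8 m of inverse
# tapered undulator gives ~58 urad, below the 84 urad required for separation.
theta_500 = max_kick_urad(2.5, 42.8)
```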
\begin{landscape} \begin{table} \makegapedcells \setcellgapes{3pt} \scriptsize \centering \begin{tabular}{ccccccccc} \hline\hline Photon energy [eV] & \multicolumn{2}{c}{ 250} & \multicolumn{2}{c}{500} & \multicolumn{2}{c}{1000} & \multicolumn{2}{c}{2000} \\ \hline Electron energy [GeV]& \multicolumn{2}{c}{ 8.5} & \multicolumn{2}{c}{12} & \multicolumn{2}{c}{12} & \multicolumn{2}{c}{12} \\ \hline & no IMF & IMF & no IMF & IMF & no IMF & IMF & no IMF & IMF \\ \hline & \multicolumn{8}{c}{\textbf{Inverse tapering contribution}}\\ \hline Inversely tapered planar undulators & \multicolumn{2}{c}{5 (30~m)} & \multicolumn{2}{c}{7 (42~m)} & \multicolumn{2}{c}{ 8 (48~m)}& \multicolumn{2}{c}{ 9 (54~m)} \\ \hline Helical undulators & \multicolumn{2}{c}{ 4 (8~m)} & \multicolumn{2}{c}{ 4 (8~m)} & \multicolumn{2}{c}{ 4 (8~m) } & \multicolumn{2}{c}{ 4 (8~m) } \\ \hline Radiation power ratio lin./circ. [W] & \multicolumn{2}{c}{\makecell{$2.4\cdot10^{8}/1.7\cdot10^{11}$\\ ($1.6\cdot10^{-3}$)}}& \multicolumn{2}{c}{\makecell{$8.7\cdot10^{7}/ 2.1\cdot10^{11}$ \\ ($4\cdot10^{-4}$)}} & \multicolumn{2}{c}{ \makecell{$3.3\cdot10^{7}/1.4\cdot10^{11}$ \\ ($2.3\cdot10^{-4}$)}} & \multicolumn{2}{c}{ \makecell{$1.4\cdot10^{8}/1.1\cdot10^{11}$ \\ ($1.2\cdot10^{-3}$)}} \\ \hline Radiation pulse energy ratio lin./circ.
[J] & \multicolumn{2}{c}{\makecell{$5.3\cdot10^{-7}/3.0\cdot10^{-4}$ \\ ($1.7\cdot10^{-3}$)}}& \multicolumn{2}{c}{\makecell{$1.6\cdot10^{-7}/2.9\cdot10^{-4}$ \\ ($5.5\cdot10^{-4}$)}}& \multicolumn{2}{c}{\makecell{$5.4\cdot10^{-8}/1.2\cdot10^{-4}$ \\ ($4.5\cdot10^{-4}$)}} & \multicolumn{2}{c}{\makecell{$1.0\cdot10^{-7}/1.1\cdot10^{-4}$ \\ ($9\cdot10^{-4}$)}} \\ \hline & \multicolumn{8}{c}{\textbf{Optical beam transport geometrical transmission, no inverse tapering assumed$^1$}}\\ \hline Radiation divergence, averaged [$\mu$rad]$^2$ & \multicolumn{2}{c}{64} & \multicolumn{2}{c}{42} & \multicolumn{2}{c}{26} & \multicolumn{2}{c}{13} \\ \hline Required electron beam kick [$\mu$rad]& \multicolumn{2}{c}{128} & \multicolumn{2}{c}{84} & \multicolumn{2}{c}{52}& \multicolumn{2}{c}{26} \\ \hline Maximum electron beam kick [$\mu$rad]& \multicolumn{2}{c}{81} & \multicolumn{2}{c}{58}& \multicolumn{2}{c}{52} & \multicolumn{2}{c}{45} \\ \hline Transmission, circ. & 5\% & 38\% & 12\% & 68\% &28\% & 34\% &75\% & 81\% \\ \hline Transmission, lin.
& 3\% & 11\% & 8.3\% & 42\% & 20\% & 21\% & 62\% & 61\% \\ \hline Transmission, lin., maximum kick & 0.07\% & 1\% & 0.06\% & 0.3\% & 0.001\% & 0.001\% & 0\% & 0\% \\ \hline & \multicolumn{8}{c}{\textbf{Defocusing approach contribution, no inverse tapering assumed$^1$}}\\ \hline Photon density ratio, without kick & $ 4.9\cdot10^{-2} $ & $ 9.2\cdot10^{-4}$ & $ 2\cdot10^{-2} $ & $ 3.3\cdot10^{-4}$ & $5.5\cdot10^{-3}$ &4.5$\cdot10^{-3} $ & $1.7\cdot10^{-3}$ &$ 2\cdot10^{-3} $ \\ \hline Photon density ratio, maximum kick & $ 1.1\cdot10^{-3} $ & $ 1\cdot10^{-4} $ & $ 2\cdot10^{-4}$ & $ 1.6\cdot10^{-5} $ & $7.3\cdot10^{-7}$ & $7.2\cdot10^{-7}$ & 0 & 0 \\ \hline & \multicolumn{8}{c}{\textbf{Total background reduction in combination with inverse tapering}}\\ \hline Pulse energy ratio, maximum kick & $2.4\cdot10^{-5}$ & $ 4.5\cdot10^{-5}$ & $2.7\cdot10^{-6}$ & $2.4\cdot10^{-7}$ & $1.6\cdot10^{-8}$ & $1.3\cdot10^{-8}$ & 0 & 0 \\ \hline Photon density ratio, without kick & $8.3\cdot10^{-5}$ & $1.5\cdot10^{-6}$ & $1.1\cdot10^{-5}$ & $1.8\cdot10^{-7}$ & $2.5\cdot10^{-6}$ & $2\cdot10^{-6}$ & $1.5\cdot10^{-6}$ & $1.8\cdot10^{-6}$ \\ \hline Photon density ratio, maximum kick & $1.9\cdot10^{-6}$ & $1.7\cdot10^{-7}$ & $1.1\cdot10^{-7}$ & $8.8\cdot10^{-9}$ & $3.3\cdot10^{-10}$ & $3\cdot10^{-10}$ & 0 & 0 \\ \hline\hline \end{tabular} \caption{Summary of the obtained results for 4 photon energies. In both beam split and defocusing approaches the same energy per pulse for both polarizations is assumed. \newline $^1$ Model Gaussian sources of equal intensities for both linearly and circularly polarized radiation. \newline $^2$ Average over 30 shots, intensity full width at half maximum} \label{table:results} \end{table} \end{landscape} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Geom_transm.eps} \caption{The ratio of beamline geometrical transmission of the linear over circular polarization for the maximum electron orbit kick inside the inverse tapered undulator. 
Equal source intensities are assumed.} \label{fig:Transm_ratio} \end{figure} \section{Conclusions} We confirm via numerical simulations that the inverse tapering technique~\cite{Schneidmiller2013,Schneidmiller2016} is expected to yield a high electron density modulation while significantly reducing the FEL radiation power at the SASE3 line of the European XFEL. In this way, the installation of a helical radiator would allow one to reach a high degree of circular polarization: the contribution of the linearly polarized background to the total FEL radiation intensity would be on the order of~0.1\% over the design SASE3 photon energy range. Several complementary methods can be used to further decrease the intensity of the linearly polarized background. The beam split approach allows one to spatially separate the background and collimate a large portion of it. It is most effective above 1~keV photon energy, where it allows one to obtain a degree of circular polarization limited only by the helical undulator properties. Another way to effectively reduce the background contribution is to decrease its photon density on the sample with the defocusing approach. Being very simple in practice, it introduces a photon density reduction factor of 100 up to 2000 on top of the inverse tapering contribution. We conclude that even in the event of lower-than-expected performance of the inverse tapering technique, synergy with the described methods will allow a sufficient contrast between the linear and circular polarizations. \section{Acknowledgments} We wish to thank Michael Meyer, Suren Karabekyan, Daniele La Civita, Nina Golubeva and Nikolay Smolyakov for useful discussions and Serguei Molodtsov for his interest in this work.
\section{Introduction}If a representation$$\overline{\rho}:G_\mathbb{Q}\to\operatorname{GL}_2(\overline{\F}_p)$$ is continuous, odd, and irreducible, then a conjecture of Serre (now a theorem of Khare-Wintenberger and Kisin) predicts that $\overline{\rho}$ is modular. More precisely, Serre predicted a minimal weight $k(\overline{\rho})$ and a minimal level $N(\overline{\rho})$ for a modular form giving rise to $\overline{\rho}$. It is natural to try to extend these results to totally real fields $F$. The natural generalisation of Serre's conjecture is to conjecture that if $$\overline{\rho}:G_F\to\operatorname{GL}_2(\overline{\F}_p)$$is continuous, irreducible and totally odd, then it is modular (in the sense that it arises from a Hilbert modular form). It is straightforward to generalise the definition of $N(\overline{\rho})$ to this setting, and there has been much progress on ``level-lowering'' for Hilbert modular forms. It is, however, much harder to generalise the definition of $k(\overline{\rho})$. For example, there is no longer a total ordering on the weights, and the $p$-adic Hodge theory is much more complicated than in the classical case. Suppose that $p$ is unramified in $F$. Recently (see \cite{bdj}), Buzzard, Diamond and Jarvis have proposed a conjectural set $W(\overline{\rho})$ of weights attached to $\overline{\rho}$, from which in the classical case one can deduce the weight part of Serre's conjecture (see \cite{bdj} for more details). In this paper we prove many cases of a closely related conjecture (we work with a definite, rather than indefinite quaternion algebra; as we discuss below, it should be straightforward to prove the corresponding results in the setting of \cite{bdj}). 
To be precise, a weight is an irreducible $\overline{\F}_p$-representation of $\operatorname{GL}_2(\mathcal{O}_F/p)$, and such a representation factors as a tensor product $$\otimes_{v|p}\sigma_{\vec{a},\vec{b}}$$where $\vec{a}$, $\vec{b}$ are $[k_v:\F_p]$-tuples indexed by embeddings $\tau:k_v\hookrightarrow\overline{\F}_p$, and $0\leq a_\tau\leq p-1$, $1\leq b_\tau\leq p$. Then we say that a weight is \emph{regular} if in fact $2\leq b_\tau\leq p-2$ for all $\tau$. Our main theorem requires a technical condition which we prefer to define later, that of a weight being partially ordinary of type $I$ for $\overline{\rho}$, $I$ a set of places of $F$ dividing $p$; see section \ref{2}. \begin{thmn}Suppose that $\overline{\rho}$ is modular, that $p\ge 5$, and that $\overline{\rho}|_{G_{F(\zeta_{p})}}$ is irreducible. Then if $\sigma$ is a regular weight and $\overline{\rho}$ is modular of weight $\sigma$ then $\sigma\in W(\overline{\rho})$. Conversely, if $\sigma\in W(\overline{\rho})$ and $\sigma$ is non-ordinary for $\overline{\rho}$, then $\overline{\rho}$ is modular of weight $\sigma$. If $\sigma$ is partially ordinary of type $I$ for $\overline{\rho}$ and $\overline{\rho}$ has a partially ordinary modular lift of type $I$ then $\overline{\rho}$ is modular of weight $\sigma$.\end{thmn} Before we discuss the proof, we make some remarks about the assumptions in the theorem. The assumption that $\overline{\rho}$ is modular is essential to our methods. The assumption that $p\ge 5$ is needed in order for there to be any regular weights at all; it is possible that this could be relaxed in future work, as there is no essential obstruction to the application of the techniques that we employ in characteristic $3$. In characteristic $2$ the results from $2$-adic Hodge theory that we would require have not yet been developed in sufficient generality, but this too does not appear to be an insurmountable difficulty. 
The assumption that $\overline{\rho}|_{G_{F(\zeta_p)}}$ is irreducible, and the assumption on partial ordinarity, are needed in order to apply $R=T$ theorems. The main idea of our proof is the same as that for our proof of a companion forms theorem for totally real fields (see \cite{gee051}), namely that we use a lifting theorem to construct lifts of $\overline{\rho}$ satisfying certain local properties at places $v|p$, and then use a modularity lifting theorem of Kisin to prove that these representations are modular. In fact, Kisin's theorem is not general enough for our applications, and we need to use the main theorem of \cite{gee052}. The arguments are much more complicated than those in \cite{gee051} because we need to construct liftings with more delicate local properties; rather than just considering ordinary lifts, we must consider potentially Barsotti-Tate lifts of specified type. The other complication which intervenes is that the connection between being modular of a certain weight and having a lift of a certain type is rather subtle, and this is the reason for our hypothesis that the weight be regular. One needs to consider many liftings for each weight, and we have only obtained the necessary combinatorial results in the case where the weight is regular. However, while these results appear to hold for most non-regular weights, there are cases where they do not hold, so it seems that it is not possible to give a general proof that the list of weights is correct by simply considering the types of potentially Barsotti-Tate lifts. It is possible to give a complete proof in the case where $p$ splits completely in $F$, and we do this in \cite{gee061}. We now outline the structure of the paper. Rather than working with the ``geometric'' conventions of \cite{bdj}, we prefer to work with more ``arithmetic'' ones. In particular, we work with automorphic forms on definite quaternion algebras. 
We set out our conventions in section \ref{2}, and we state the appropriate reformulation of the conjectures of \cite{bdj} here. In section \ref{reducible} we carry out the required local analysis in the case where the local representation is reducible. Sections \ref{brmod} and \ref{deform} use Breuil modules and strongly divisible modules to determine when reducible representations arise as the generic fibres of certain finite flat group schemes. In section \ref{fl} we relate these finite flat group schemes to certain crystalline representations considered in \cite{bdj}, and in section \ref{reducibletypes} we prove the necessary combinatorial results relating types and regular weights. We then repeat this analysis in the irreducible case in section \ref{irreducible}, and finally in section \ref{global} we combine these results with the lifting theorems mentioned above to deduce our main results. Firstly, we use our local results to show that if $\overline{\rho}$ is modular of weight $\sigma$ with $\sigma$ regular, then $\sigma\in W(\overline{\rho})$. For each regular weight $\sigma\in W(\overline{\rho})$ we then produce a modular lift of $\overline{\rho}$ which is potentially Barsotti-Tate of a specific type, so that $\overline{\rho}$ must be modular of some weight occurring in the mod $p$ reduction of this type. We then check that $\sigma$ is the only element of $W(\overline{\rho})$ occurring in this reduction, so that $\overline{\rho}$ is modular of weight $\sigma$, as required. In fact, we do not quite do this; the combinatorics is slightly more involved, and we are forced to make use of a notion of a ``weakly regular'' weight. See section \ref{global} for the details. It is a pleasure to thank Fred Diamond for numerous helpful discussions regarding this work; without his patient advice this paper could never have been written. 
We would like to thank David Savitt for pointing out several errors and omissions in an earlier version of this paper, and for writing \cite{sav06}. We would like to thank Florian Herzig for pointing out an inconsistency between our conventions and those of \cite{bdj}, which led to the writing of section \ref{2}. We are extremely grateful to Xavier Caruso and Christophe Breuil for their many helpful comments and corrections; in particular, the material in section \ref{fl} owes a considerable debt to Caruso's efforts to correct a number of inaccuracies, and the proof of Lemma \ref{nicebasis} is based on an argument of his. We would also like to thank the anonymous referee for a careful reading, and for pointing out a number of serious errors in an earlier version of the paper. \section{Definitions}\label{2}\subsection{}Rather than use the conventions of \cite{bdj}, we choose to state a closely related variant of their conjectures by working on totally definite quaternion algebras. This formulation is more suited to applications to modularity lifting theorems, and indeed to the application of modularity lifting theorems to proving cases of the conjecture. We begin by recalling some standard facts from the theory of quaternionic modular forms; see either \cite{taymero}, section 3 of \cite{kis04} or section 2 of \cite{kis06} for more details, and in particular the proofs of the results claimed below. We will follow Kisin's approach closely. We fix throughout this paper an algebraic closure $\overline{\Q}$ of $\mathbb{Q}$, and regard all algebraic extensions of $\mathbb{Q}$ as subfields of $\overline{\Q}$. For each prime $p$ we fix an algebraic closure ${\overline{\Q}_p}$ of ${{\mathbb Q}_p}$, and we fix an embedding $\overline{\Q}\hookrightarrow{\overline{\Q}_p}$. In this way, if $v$ is a finite place of a number field $F$, we have a homomorphism $G_{F_v}\hookrightarrow G_F$. 
Let $F$ be a totally real field in which $p>2$ is unramified, and let $D$ be a quaternion algebra with center $F$ which is ramified at all infinite places of $F$ and at a set $\Sigma$ of finite places, which contains no places above $p$. Fix a maximal order $\mathcal{O}_D$ of $D$ and for each finite place $v\notin\Sigma$ fix an isomorphism $(\mathcal{O}_D)_v\stackrel{\sim}{\To} M_2(\mathcal{O}_{F_v})$. For any finite place $v$ let $\pi_v$ denote a uniformiser of $F_v$. Let $U=\prod_v U_v\subset (D\otimes_F\mathbb{A}^f_F)^\times$ be a compact subgroup, with each $U_v\subset (\mathcal{O}_D)^\times_v$. Furthermore, assume that $U_v=(\mathcal{O}_D)_v^\times$ for all $v\in\Sigma$, and that $U_v=\operatorname{GL}_2(\mathcal{O}_{F_v})$ if $v|p$. Take $A$ a topological $\mathbb{Z}_p$-algebra. For each $v|p$, fix a continuous representation $\sigma_v:U_v\to\operatorname{Aut}(W_{\sigma_v})$ with $W_{\sigma_v}$ a finite free $A$-module. Write $W_\sigma=\otimes_{v|p,A}W_{\sigma_v}$ and let $\sigma=\prod_{v|p}\sigma_v$. We regard $\sigma$ as a representation of $U$ in the obvious way (that is, we let $U_v$ act trivially if $v\nmid p$). Fix also a character $\psi:F^\times\backslash(\mathbb{A}^f_F)^\times\to A^\times$ such that for any place $v$ of $F$, $\sigma|_{U_v\cap\mathcal{O}_{F_v}^\times}$ is multiplication by $\psi^{-1}$. Then we can think of $W_\sigma$ as a $U(\mathbb{A}_F^f)^\times$-module by letting $(\mathbb{A}_F^f)^\times$ act via $\psi^{-1}$. Let $S_{\sigma,\psi}(U,A)$ denote the set of continuous functions $$f:D^\times\backslash(D\otimes_F\mathbb{A}_F^f)^\times\to W_\sigma$$ such that for all $g\in (D\otimes_F\mathbb{A}_F^f)^\times$ we have $$f(gu)=\sigma(u)^{-1}f(g)\text{ for all }u\in U,$$ $$f(gz)=\psi(z)f(g)\text{ for all }z\in(\mathbb{A}_F^f)^\times.$$We can write $(D\otimes_F\mathbb{A}_F^f)^\times=\coprod_{i\in I}D^\times t_iU(\mathbb{A}_F^f)^\times$ for some finite index set $I$ and some $t_i\in(D\otimes_F\mathbb{A}_F^f)^\times$. 
Then we have $$S_{\sigma,\psi}(U,A)\stackrel{\sim}{\To}\oplus_{i\in I}W_\sigma^{(U(\mathbb{A}_F^f)^\times\cap t_i^{-1}D^\times t_i)/F^\times},$$the isomorphism being given by the direct sum of the maps $f\mapsto\{f(t_i)\}$. From now on we make the following assumption:$$\text{For all }t\in(D\otimes_F\mathbb{A}_F^f)^\times\text{ the group }(U(\mathbb{A}_F^f)^\times\cap t^{-1}D^\times t)/F^\times=1.$$ One can always replace $U$ by a subgroup (obeying the assumptions above) for which this holds (cf. section 3.1.1 of \cite{kis07}). Under this assumption, which we make from now on, $S_{\sigma,\psi}(U,A)$ is a finite free $A$-module, and the functor $W_\sigma\mapsto S_{\sigma,\psi}(U,A)$ is exact in $W_\sigma$. We now define some Hecke algebras. Let $S$ be a set of finite places containing $\Sigma$, the places dividing $p$, and the primes of $F$ such that $U_v$ is not a maximal compact subgroup of $ D_v^\times$. Let $\mathbb{T}^{\operatorname{univ}}_{S,A}=A[T_v]_{v\notin S}$ be the commutative polynomial ring in the formal variables $T_v$. Consider the left action of $(D\otimes_F\mathbb{A}_F^f)^\times$ on $W_\sigma$-valued functions on $(D\otimes_F\mathbb{A}_F^f)^\times$ given by $(gf)(z)=f(zg)$. For each finite place $v$ of $F$ we fix a uniformiser $\pi_v$ of $F_v$. Then we make $S_{\sigma,\psi}(U,A)$ a $\mathbb{T}^{\operatorname{univ}}_{S,A}$-module by letting $T_v$ act via the double coset $U\bigl( \begin{smallmatrix} \pi_v&0\\0&1 \end{smallmatrix} \bigr)U$. These are independent of the choices of $\pi_v$. We will write $\mathbb{T}_{\sigma,\psi}(U,A)$ or $\mathbb{T}_{\sigma,\psi}(U)$ for the image of $\mathbb{T}^{\operatorname{univ}}_{S,A}$ in $\operatorname{End} S_{\sigma,\psi}(U,A)$. Let $\mathfrak{m}$ be a maximal ideal of $\mathbb{T}^{\operatorname{univ}}_{S,A}$. We say that $\mathfrak{m}$ is in the support of $(\sigma,\psi)$ if $S_{\sigma,\psi}(U,A)_\mathfrak{m}\neq 0$. 
Now let $\mathcal{O}$ be the ring of integers in ${\overline{\Q}_p}$, with residue field $\F=\overline{\F}_p$, and suppose that $A=\mathcal{O}$ in the above discussion, and that $\sigma$ has open kernel. Consider a maximal ideal $\mathfrak{m}\subset\mathbb{T}^{\operatorname{univ}}_{S,\mathcal{O}}$ which is induced by a maximal ideal of $\mathbb{T}_{\sigma,\psi}(U,\mathcal{O})$. Then there is a semisimple Galois representation $\overline{\rho}_\mathfrak{m}:G_F\to\operatorname{GL}_2(\F)$ associated to $\mathfrak{m}$ which is characterised up to equivalence by the property that if $v\notin S$ and $\Frob_v$ is an arithmetic Frobenius at $v$, then the trace of $\overline{\rho}_\mathfrak{m}(\Frob_v)$ is the image of $T_v$ in $\F$. We are now in a position to define what it means for a Galois representation to be modular of some weight. Let $v|p$ be a place of $F$, let $F_v$ have ring of integers $\mathcal{O}_v$ and residue field $k_v$, and let $\sigma$ be an irreducible $\overline{\F}_p$-representation of $G:=\prod_{v|p}\operatorname{GL}_2(k_v)$. We also denote by $\sigma$ the representation of $\prod_{v|p}\operatorname{GL}_2(\mathcal{O}_{v})$ induced by the surjections $\mathcal{O}_{v}\twoheadrightarrow k_v$. \begin{defn} We say that an irreducible representation $\overline{\rho}:G_F\to\operatorname{GL}_2(\overline{\F}_p)$ is modular of weight $\sigma$ if for some $D$, $S$, $U$, $\psi$, and $\mathfrak{m}$ as above we have $S_{\sigma,\psi}(U,\F)_\mathfrak{m}\neq 0$ and $\overline{\rho}_\mathfrak{m}\cong\overline{\rho}$. \end{defn} We now show how one can gain information about the weights associated to a particular Galois representation by considering lifts to characteristic zero. \begin{lemma} \label{432} Let $\psi:F^{\times}\backslash(\mathbb{A}_{F}^f)^{\times}\to\mathcal{O}^{\times}$ be a continuous character, and write $\overline{\psi}$ for the composite of $\psi$ with the projection $\mathcal{O}^{\times}\to \F^{\times}$. 
Fix a representation $\sigma$ of $\prod_{v|p}U_v$ on a finite free $\mathcal{O}$-module $W_{\sigma}$, and an irreducible representation $\sigma'$ on a finite free $\F$-module $W_{\sigma'}$. Suppose that for each $v|p$ we have $\sigma|_{U_v\cap\mathcal{O}_{F_v}^\times}=\psi^{-1}|_{U_v\cap\mathcal{O}_{F_v}^\times}$ and $\sigma'|_{U_v\cap\mathcal{O}_{F_v}^\times}=\overline{\psi}^{-1}|_{U_v\cap\mathcal{O}_{F_v}^\times}$. Let $\mathfrak{m}$ be a maximal ideal of $\mathbb{T}_{S,\mathcal{O}}^{\textrm{univ}}$. Suppose that $W_{\sigma'}$ occurs as a $\prod_{v|p}U_v$-module subquotient of ${W}_{\overline{\sigma}}:=W_\sigma\otimes\F$. If $\mathfrak{m}$ is in the support of $(\sigma',\overline{\psi})$, then $\mathfrak{m}$ is in the support of $(\sigma,\psi)$. Conversely, if $\mathfrak{m}$ is in the support of $(\sigma,\psi)$, then $\mathfrak{m}$ is in the support of $(\sigma',\overline{\psi})$ for some irreducible $\prod_{v|p}U_v$-module subquotient $W_{\sigma'}$ of ${W}_{\overline{\sigma}}$. \end{lemma} \begin{proof} The first part is proved just as in Lemma 3.1.4 of \cite{kis04}, and the second part follows from Proposition 1.2.3 of \cite{as86}. \end{proof} We note a special case of this result, relating the existence of potentially Barsotti-Tate lifts of a particular tame type to information about Serre weights. Firstly, we recall some particular representations of $\operatorname{GL}_{2}(k_{v})$.
For any pair of distinct characters $\chi_1$, $\chi_2:k_{v}^\times\to\mathcal{O}^\times$ we let $I(\chi_1,\chi_2)$ denote the irreducible $(q+1)$-dimensional ${\overline{\Q}_p}$-representation of $\operatorname{GL}_2(k_{v})$ induced from the character of $B$ (the upper triangular matrices in $\operatorname{GL}_2(k_{v})$) given by $$\left(\begin{array}{cc} x & w \\ 0 & y \\ \end{array} \right)\mapsto\chi_1(x)\chi_2(y).$$ We let $\sigma_{\chi_{1},\chi_{2}}$ denote the representation of $\operatorname{GL}_{2}(k_{v})$ on an $\mathcal{O}$-lattice in $I(\chi_1,\chi_2)$; we also regard this as a representation of $\operatorname{GL}_{2}(\mathcal{O}_{v})$ via the natural projection. Let $\tau(\sigma_{\chi_{1},\chi_{2}})$ be the inertial type $\chi_{1}\oplus\chi_{2}$ (regarded as a representation of $I_{F_{v}}$ via local class field theory, normalised so that a uniformiser corresponds to a geometric Frobenius element). Let $k_{v}'$ be the quadratic extension of $k_{v}$. For any character $\theta:k_{v}'^\times\to\mathcal{O}^{\times}$ which does not factor through the norm $k_{v}'^\times\to k_v^\times$, there is an irreducible $(q-1)$-dimensional cuspidal representation $\Theta(\theta)$ of $\operatorname{GL}_2(k_v)$ (see Section 1 of \cite{dia05} for the definition of $\Theta(\theta)$). Let $\sigma_{\Theta(\theta)}$ denote the representation of $\operatorname{GL}_{2}(k_{v})$ on an $\mathcal{O}$-lattice in $\Theta(\theta)$; we also regard this as a representation of $\operatorname{GL}_{2}(\mathcal{O}_{v})$ via the natural projection. Let $q_{v}$ be the cardinality of $k_{v}$, and let $\tau(\sigma_{\Theta(\theta)})$ be the inertial type $\theta\oplus\theta^{q_{v}}$ (again regarded as a representation of $I_{F_{v}}$ via local class field theory). \begin{defn}\label{defn:barsotti tate lifts} Let $\tau$ be an inertial type, and let $v|p$ be a place of $F$. 
We say that a lift $\rho$ of $\overline{\rho}|_{G_{F_v}}$ is \emph{potentially Barsotti-Tate of type} $\tau$ if $\rho$ is potentially Barsotti-Tate, has determinant a finite order character of order prime to $p$ times the cyclotomic character, and the corresponding Weil-Deligne representation (see Appendix B of \cite{cdt}), when restricted to $I_{F_v}$, is isomorphic to $\tau$. \end{defn} \begin{lemma}\label{lem:local langlands version of lifting}For each $v|p$, fix a representation $\sigma_{v}$ of the type just considered (that is, isomorphic to $\sigma_{\chi_{1},\chi_{2}}$ or to $\sigma_{\Theta(\theta)}$). Let $\tau_{v}=\tau(\sigma_{v})$ be the corresponding inertial type. Suppose that $\overline{\rho}$ is modular of weight $\sigma$, and that $\sigma$ is a $\prod_{v|p}\operatorname{GL}_{2}(k_{v})$-subquotient of $\otimes_{v|p}\sigma_{v}\otimes_{\mathcal{O}}\F$. Then $\overline{\rho}$ lifts to a modular Galois representation which is potentially Barsotti-Tate of type $\tau_{v}$ for each $v|p$. Conversely, suppose that $\overline{\rho}$ lifts to a modular Galois representation which is potentially Barsotti-Tate of type $\tau_{v}$ for each $v|p$. Then $\overline{\rho}$ is modular of weight $\sigma$ for some $\prod_{v|p}\operatorname{GL}_{2}(k_{v})$-subquotient $\sigma$ of $\otimes_{v|p}\sigma_{v}\otimes_{\mathcal{O}}\F$. \end{lemma} \begin{proof}This follows from Lemma \ref{432}, the Jacquet-Langlands correspondence, and the compatibility of the local and global Langlands correspondences at places dividing $p$ (see \cite{kis06}). \end{proof} We now state a conjecture on Serre weights, following \cite{bdj}. Note that our conjecture is only valid for regular weights (a notion which we will define shortly); there are some additional complications when dealing with non-regular weights. Let $\overline{\rho}:G_F\to\operatorname{GL}_2(\overline{\F}_p)$ be modular. We propose a conjectural set of regular weights $W(\overline{\rho})$ for $\overline{\rho}$. 
In fact, for each place $v|p$ we propose a set of weights $W(\overline{\rho}|_{G_{F_v}})$, and we define $$W(\overline{\rho}):=\bigl\{\otimes_{v|p}\sigma_v | \sigma_v\in W(\overline{\rho}|_{G_{F_v}})\bigr\}.$$ Let $S_v$ be the set of embeddings $k_v\hookrightarrow \overline{\F}_p$. A weight for $\operatorname{GL}_2(k_v)$ is an isomorphism class of irreducible $\overline{\F}_p$-representations of $\operatorname{GL}_2(k_{v})$, which automatically contains one of the form $$\sigma_{\vec{a},\vec{b}}=\otimes_{\tau\in S_{v}}\det{}^{a_\tau}\operatorname{Sym}^{b_\tau-1}k_{v}^2\otimes_\tau\overline{\F}_p,$$with $0\leq a_\tau\leq p-1$ and $1\leq b_\tau\leq p$ for each $\tau\in S_{v}$. We demand further that some $a_\tau<p-1$, in which case the representations $\sigma_{\vec{a},\vec{b}}$ are pairwise non-isomorphic. \begin{defn}We say that a weight $\sigma_{\vec{a},\vec{b}}$ is \emph{regular} if $2\leq b_\tau\leq p-2$ for all $\tau$. We say that it is \emph{weakly regular} if $1\leq b_\tau\leq p-1$ for all $\tau$.\end{defn} For each $\tau\in S_v$ we have the fundamental character $\omega_\tau$ of $I_{F_v}$ given by composing $\tau$ with the homomorphism $I_{F_v}\to k_v^\times$ given by local class field theory, normalised so that uniformisers correspond to geometric Frobenius elements. Let $k'_v$ denote the quadratic extension of $k_v$. Let $S'_{v}$ denote the set of embeddings $\sigma:k_v'\hookrightarrow\overline{\F}_p$, and let $\omega_\sigma$ denote the fundamental character corresponding to $\sigma$. Suppose firstly that $\overline{\rho}|_{G_{F_v}}$ is irreducible. There is a natural $2-1$ map $\pi:S'_v\to S_{v}$ given by restriction to $k_{v}$, and we say that a subset $J\subset S'_{v}$ is a \emph{full subset} if $|J|=|\pi(J)|=|S_{v}|$. Then we have \begin{defn}Let $\sigma_{\vec{a},\vec{b}}$ be a regular weight for $\operatorname{GL}_2(k_v)$. 
Then $\sigma_{\vec{a},\vec{b}}\in W(\overline{\rho}|_{G_{F_v}})$ if and only if there exists a full subset $J\subset S'_v$ such that $$\overline{\rho}|_{I_{F_v}}\sim\prod_{\tau\in S_v}\omega_\tau^{a_\tau}\left(\begin{array}{cc} \prod_{\sigma\in J}\omega_{\sigma}^{b_{\sigma|_{k_{v}}}} & 0 \\ 0 & \prod_{\sigma\notin J}\omega_{\sigma}^{b_{\sigma|_{k_{v}}}} \\ \end{array}\right).$$\end{defn} Suppose now that $\overline{\rho}|_{G_{F_v}}$ is reducible, say $\overline{\rho}|_{G_{F_v}}\sim\bigl(\begin{smallmatrix}\psi_1&*\\0&\psi_2\end{smallmatrix}\bigr)$. We define the set $W(\overline{\rho}|_{G_{F_v}})$ in two stages. Firstly, define a set $W(\overline{\rho}|_{G_{F_v}})'$ of regular weights as follows. \begin{defn}\label{defn:weight set in reducible case}Let $\sigma_{\vec{a},\vec{b}}$ be a regular weight for $\operatorname{GL}_2(k_v)$. Then $\sigma_{\vec{a},\vec{b}}\in W(\overline{\rho}|_{G_{F_v}})'$ if and only if there exists $J\subset S_v$ such that $\psi_1|_{I_{F_v}}=\prod_{\tau\in S_v}\omega_\tau^{a_\tau}\prod_{\tau\in J}\omega_\tau^{b_\tau}$ and $\psi_{2}|_{I_{F_v}}=\prod_{\tau\in S_v}\omega_\tau^{a_\tau}\prod_{\tau\notin J}\omega_\tau^{b_\tau}$. We say that $\sigma_{\vec{a},\vec{b}}\in W(\overline{\rho}|_{G_{F_v}})'$ is \emph{ordinary for} $\overline{\rho}$ if furthermore $J=S_v$ or $J=\emptyset$ (note that the set $J$ is uniquely determined, because $\sigma_{\vec{a},\vec{b}}$ is regular).\end{defn} Suppose that we have a regular weight $\sigma_{\vec{a},\vec{b}}\in W(\overline{\rho}|_{G_{F_v}})'$ and a corresponding subset $J\subset S_v$. We now define crystalline lifts $\widetilde{\psi}_1$, $\widetilde{\psi}_2$ of $\psi_1$, $\psi_2$. 
If $\psi:G_{F_v}\to{\overline{\Q}_p}^\times$ is a crystalline character, and $\tau:F_{v}\hookrightarrow {\overline{\Q}_p}$, we say that the Hodge-Tate weight of $\psi$ with respect to $\tau$ is the $i$ for which $gr^{-i}((\psi\otimes_{{{\mathbb Q}_p}}B_{dR})^{G_{F_v}}\otimes_{{\overline{\Q}_p}\otimes_{{\mathbb Q}_p} F_v,1\otimes\tau}{\overline{\Q}_p})\neq 0$. Then we demand that for some fixed Frobenius element $\Frob_v$ of $G_{F_{v}}$, $\widetilde{\psi}_i(\Frob_v)$ is the Teichm\"{u}ller lift of $\psi_i(\Frob_v)$, and that: \begin{itemize}\item $\widetilde{\psi}_1$ is crystalline, and the Hodge-Tate weight of $\widetilde{\psi}_1$ with respect to $\tau$ is $a_\tau+b_\tau$ if $\tau\in J$, and $a_\tau$ if $\tau\notin J$. \item $\widetilde{\psi}_2$ is crystalline, and the Hodge-Tate weight of $\widetilde{\psi}_2$ with respect to $\tau$ is $a_\tau+b_\tau$ if $\tau\notin J$, and $a_\tau$ if $\tau\in J$.\end{itemize} The existence and uniqueness (for our fixed choice of $\Frob_v$) of $\widetilde{\psi}_1$, $\widetilde{\psi}_2$ is straightforward (see \cite{bdj}). Then we have \begin{defn}$\sigma_{\vec{a},\vec{b}}\in W(\overline{\rho}|_{G_{F_v}})$ if and only if $\overline{\rho}|_{G_{F_v}}$ has a lift to a crystalline representation $\bigl(\begin{smallmatrix}\widetilde{\psi}_1&*\\0&\widetilde{\psi}_2\end{smallmatrix}\bigr)$.\end{defn}Note that by Remark 3.10 of \cite{bdj}, and the regularity of $\sigma_{\vec{a},\vec{b}}$, this definition is independent of the choice of $\Frob_v$. For future reference, we say that a weight $\sigma$ is partially ordinary of type $I$ for $\overline{\rho}$ if $I$ is the set of places $v|p$ for which $\sigma_v$ is ordinary for $\overline{\rho}$. We say that $\overline{\rho}$ has a partially ordinary modular lift of type $I$ if it has a potentially Barsotti-Tate modular lift which is potentially ordinary at precisely the places in $I$.
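To orient the reader, we record what this recipe amounts to in the simplest case; this discussion is purely illustrative, and is not needed in the sequel. Note first that with the above normalisation the cyclotomic character $\varepsilon$ is crystalline with Hodge-Tate weight $1$ with respect to each $\tau$, and $\overline{\varepsilon}|_{I_{F_v}}=\prod_{\tau\in S_v}\omega_\tau$. Suppose now that $F_v={\mathbb Q}_p$, so that $k_v=\F_p$ and $|S_v|=1$. Then a weight is of the form $\sigma_{a,b}=\det{}^{a}\operatorname{Sym}^{b-1}\F_p^2\otimes\overline{\F}_p$ with $0\leq a<p-1$ and $1\leq b\leq p$, and a full subset of $S'_v$ is a single embedding of $k'_v=\F_{p^2}$. If $\overline{\rho}|_{G_{{\mathbb Q}_p}}$ is irreducible, then a regular weight $\sigma_{a,b}$ lies in $W(\overline{\rho}|_{G_{{\mathbb Q}_p}})$ if and only if $$\overline{\rho}|_{I_{{\mathbb Q}_p}}\sim\omega^{a}\left(\begin{array}{cc} \omega_2^{b} & 0 \\ 0 & \omega_2^{pb} \\ \end{array}\right),$$ where $\omega$ is the mod $p$ cyclotomic character and $\omega_2$ denotes either fundamental character of niveau $2$ (that is, $\omega_\sigma$ for $\sigma\in S'_v$); this is the familiar condition from Serre's original conjecture. In the reducible case, $\sigma_{a,b}\in W(\overline{\rho}|_{G_{{\mathbb Q}_p}})'$ if and only if $\{\psi_1|_{I_{{\mathbb Q}_p}},\psi_2|_{I_{{\mathbb Q}_p}}\}=\{\omega^{a+b},\omega^{a}\}$, and (since $|S_v|=1$, so that any $J\subset S_v$ is $S_v$ or $\emptyset$) every such weight is ordinary for $\overline{\rho}$.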
\subsection{Relation to the Buzzard-Diamond-Jarvis conjecture}Our conjectured sets of regular weights are exactly the same as the regular weights predicted in \cite{bdj}. However, they work with indefinite quaternion algebras rather than the definite ones of this paper, and in the absence of a mod $p$ Jacquet-Langlands correspondence our results do not automatically prove cases of their conjectures. That said, our arguments are for the most part purely local, with the only global input being in characteristic zero, where one does have a Jacquet-Langlands correspondence. In particular, given the analogue of Lemma \ref{lem:local langlands version of lifting} in the setting of \cite{bdj} (cf. Proposition 2.10 of \cite{bdj}), our arguments will go over unchanged to their setting. \section{Local analysis - the reducible case}\label{reducible} \subsection{Breuil Modules}\label{brmod}Let $p>2$ be prime, let $k$ be a finite extension of $\F_p$, let $K_0=W(k)[1/p]$, and let $K$ be a finite Galois totally tamely ramified extension of $K_0$, of degree $e$. Fix a subfield $M$ of $K_0$, and assume that there is a uniformiser $\pi$ of $\mathcal{O}_{K}$ such that $\pi^{e}\in M$, and fix such a $\pi$. Since $K/M$ is tamely ramified (and automatically Galois), the category of Breuil modules with coefficients and descent data is easy to describe (see \cite{sav06}). Let $k\in[2,p-1]$ be an integer (there will never be any ambiguity in our two uses of the symbol $k$, one being a finite field and the other a positive integer). Let $E$ be a finite extension field of ${\mathbb F}_p$. The category $\operatorname{BrMod}^{k-1}_{dd,M}$ consists of quintuples $(\mathcal{M},\mathcal{M}_{k-1},\phi_{k-1},\hat{g},N)$ where: \begin{itemize}\item $\mathcal{M}$ is a finitely generated $(k\otimes_{\F_p}E)[u]/u^{ep}$-module, free over $k[u]/u^{ep}$. \item $\mathcal{M}_{k-1}$ is a $(k\otimes_{\F_p}E)[u]/u^{ep}$-submodule of $\mathcal{M}$ containing $u^{e(k-1)}\mathcal{M}$.
\item $\phi_{k-1}:\mathcal{M}_{k-1}\to\mathcal{M}$ is $E$-linear and $\phi$-semilinear (where $\phi:k[u]/u^{ep}\to k[u]/u^{ep}$ is the $p$-th power map) with image generating $\mathcal{M}$ as a $(k\otimes_{\F_p}E)[u]/u^{ep}$-module. \item $N:\mathcal{M}\to u\mathcal{M}$ is $(k\otimes_{\F_{p}}E)$-linear and satisfies $N(ux)=uN(x)-ux$ for all $x\in\mathcal{M}$, $u^{e}N(\mathcal{M}_{k-1})\subset\mathcal{M}_{k-1}$, and $\phi_{k-1}(u^{e}N(x))=(-\pi^e/p)N(\phi_{k-1}(x))$ for all $x\in\mathcal{M}_{k-1}$. \item $\hat{g}:\mathcal{M}\to\mathcal{M}$ are additive bijections for each $g\in\operatorname{Gal}(K/M)$, preserving $\mathcal{M}_{k-1}$, commuting with the $\phi_{k-1}$-, $E$-, and $N$-actions, and satisfying $\hat{g}_1\circ \hat{g}_2=\widehat{g_1\circ g}_2$ for all $g_1,g_2\in\operatorname{Gal}(K/M)$, and $\hat{1}$ is the identity. Furthermore, if $a\in k\otimes_{\F_{p}}E$, $m\in\mathcal{M}$ then $\hat{g}(au^{i}m)=g(a)((g(\pi)/\pi)^{i}\otimes 1)u^{i}\hat{g}(m)$.\end{itemize} We will omit $M$ from the notation in the case $M=K_{0}$. We write $\operatorname{BrMod}_{dd,M}=\operatorname{BrMod}^{1}_{dd,M}$. The category $\operatorname{BrMod}_{dd,M}$ is equivalent to the category of finite flat group schemes over $\mathcal{O}_K$ together with an $E$-action and descent data on the generic fibre from $K$ to $M$ (this equivalence depends on $\pi$). In the case $k=2$ it follows from the other axioms that there is always a unique $N$ which satisfies the required properties, and we will frequently omit the details of this operator when we are working in this case. In section \ref{fl} we will also use the case $k=p-1$, and here we will make the operators $N$ explicit.
We choose in this paper (except in section \ref{fl}) to adopt the conventions of \cite{bm} and \cite{sav04}, rather than those of \cite{bcdt}; thus rather than working with the usual contravariant equivalence of categories, we work with a covariant version of it, so that our formulae for generic fibres will differ by duality and a twist from those following the conventions of \cite{bcdt}. To be precise, we obtain the associated $G_{M}$-representation (which we will refer to as the generic fibre) of an object of $\operatorname{BrMod}_{dd}$ via the functor $T_{st,2}^{M}$, which is defined in section 4 of \cite{sav04}. Let $\rho:G_{K_0}\to\operatorname{GL}_2(E)$ be a continuous representation. We assume from now on that $E$ contains $k$. Suppose for the rest of this section that $\rho$ is reducible but not scalar, say $\rho\sim \bigl(\begin{smallmatrix}\psi_1&*\\0&\psi_2\end{smallmatrix}\bigr)$. Fix $\pi=(-p)^{1/(p^r-1)}$, where $r=[k:\F_p]$, and fix $K=K_0(\pi)$, so that $\pi$ is a uniformiser of $\mathcal{O}_K$, the ring of integers of $K$. By class field theory $\psi_{1}|_{I_K}$ and $\psi_{2}|_{I_{K}}$ are trivial. We fix some general notation for elements of $\operatorname{BrMod}_{dd}$. Let $S$ denote the set of embeddings $\tau:k\hookrightarrow E$. We have an isomorphism $k\otimes_{\F_p}E\stackrel{\sim}{\To}\oplus_{S}E_{\tau}$, where $E_{\tau}:=k\otimes_{k,\tau}E$, and we let $\epsilon_\tau$ denote the idempotent corresponding to the embedding $\tau$. Then any element $\mathcal{M}$ of $\operatorname{BrMod}_{dd}$ can be decomposed into $E[u]/u^{ep}$-modules $\mathcal{M}^{\tau}:=\epsilon_\tau\mathcal{M}$, $\tau\in S$, so that $\hat{g}:\mathcal{M}^\tau\to\mathcal{M}^\tau$ and $\phi_1:\mathcal{M}^\tau_1\to\mathcal{M}^{\tau\circ\phi^{-1}}$; in particular, $\mathcal{M}$ is free over $(k\otimes_{{\mathbb F}_p} E)[u]/u^{ep}$. We now write $S=\{\tau_1,\dots,\tau_r\}$, numbered so that $\tau_{i+1}=\tau_{i}\circ\phi^{{-1}}$, where we identify $\tau_{r+1}$ with $\tau_1$.
In fact, it will often be useful to consider the indexing set of $S$ to be $\mathbb{Z}/r\mathbb{Z}$, and we will do so without further comment. Fix $J\subset S$. We wish to single out particular representations $\rho$ depending on $J$. Firstly, we need some notation. Recall that (as in appendix B of \cite{cdt}) if $\rho':G_{K_0}\to\operatorname{GL}_2(\mathcal{O}_L)$ is potentially Barsotti-Tate, where $L$ is a finite extension of $W(E)[1/p]$, then there is a Weil-Deligne representation $WD(\rho'):W_{K_0}\to\operatorname{GL}_2(\overline{\mathbb{Q}}_p)$ associated to $\rho'$, and we say that $\rho'$ has type $WD(\rho')|_{I_{K_0}}$. \begin{defn}We say that $\rho$ has a \emph{lift of type} $J$ if there is a representation $\rho':G_{K_0}\to\operatorname{GL}_2(\mathcal{O}_L)$ lifting $\rho$, where $L$ is a finite extension of $W(E)[1/p]$, such that $\rho'$ becomes Barsotti-Tate over $K$, with $\varepsilon^{-1}\det\rho'$ equal to the Teichm\"{u}ller lift of $\varepsilon^{-1}\det\rho$ (with $\varepsilon$ denoting the cyclotomic character) and $\rho'$ has type $\widetilde{\psi}_1|_{I_{K_{0}}}\prod_{\tau\in J}\widetilde{\omega}_\tau^{-p}\oplus\widetilde{\psi}_2|_{I_{K_{0}}}\prod_{\tau\notin J}\widetilde{\omega}_\tau^{-p}$. 
Here a tilde denotes the Teichm\"{u}ller lift.\end{defn} \begin{defn}For any subset $H\subset S$, we say that an element $\mathcal{M}$ of $\operatorname{BrMod}_{dd}$ is of class $H$ if it is of rank one, and for all $\tau\in S$ we can choose a basis $e_\tau$ of $\mathcal{M}^{\tau}$ such that $\mathcal{M}_1^{\tau}$ is generated by $u^{j_\tau}e_\tau$, where $$j_\tau=\left\{\begin{array}{c} 0\text{ if }\tau\circ\phi^{-1}\notin H \\ e\text{ if }\tau\circ\phi^{-1}\in H \\ \end{array}\right.$$ \end{defn} \begin{defn}We say that an element $\mathcal{M}$ of $\operatorname{BrMod}_{dd}$ is of type $J$ if $\mathcal{M}$ is an extension of an element of class $J^c$ by an element of class $J$, and we say that $\rho$ has a model of type $J$ if there is an element of $\operatorname{BrMod}_{dd}$ of type $J$ with generic fibre $\rho$. \end{defn} We will also refer to finite flat group schemes with descent data as being of class $J$ or of type $J$ if they correspond to Breuil modules with descent data of this kind. The notions of having a model of type $J$ and having a lift of type $J$ are closely related, although not in general equivalent. We will see in section \ref{subsec: models of type J} that in sufficiently generic cases, if $\rho$ has a model of type $J$ then it has a lift of type $J$, and in section \ref{reducibletypes} we prove a partial converse (see Proposition \ref{prop:liftimpliesmodel}). \subsection{Strongly divisible modules}\label{deform}In this section and the next we prove that, under a mild regularity hypothesis, if $\rho$ has a model of type $J$ then it has a lift of type $J$. We begin by recalling the definition and basic properties of strongly divisible modules from \cite{sav04}. For the purpose of giving these definitions we return briefly to the general setting of $K_0$ an unramified finite extension of $\mathbb{Q}_p$ and $K$ a totally tamely ramified Galois extension of $K_0$ of degree $e$, with uniformiser $\pi$, satisfying $\pi^{e}\in M$ for some subfield $M$ of $K_{0}$.
Let $L$ be a finite extension of $\mathbb{Q}_p$ with ring of integers $\mathcal{O}_L$ and residue field $E$. Let $S_{K}$ be the ring $$\left\{\sum_{j=0}^\infty r_j\frac{u^j}{\lfloor j/e\rfloor !}\text{, }r_j\in W(k)\text{, }r_j\to 0\ p\text{-adically as }j\to\infty\right\},$$ and let $S_{K,\mathcal{O}_{L}}=S_{K}\otimes_{\mathbb{Z}_{p}}\mathcal{O}_{L}$. Let $\operatorname{Fil}^{1}S_{K,\mathcal{O}_L}$ be the $p$-adic completion of the ideal generated by $E(u)^j/j!$, $j\geq 1$, where $E(u)$ is the minimal polynomial of $\pi$ over $K_0$. Let $\phi:S_{K,\mathcal{O}_L}\to S_{K,\mathcal{O}_L}$ be the unique $\mathcal{O}_L$-linear, $W(k)$-semilinear ring homomorphism with $\phi(u)=u^p$, and let $N$ be the unique $W(k)\otimes \mathcal{O}_L$-linear derivation such that $N(u)=-u$ (so that $N\phi=p\phi N$). One can check that $\phi(\operatorname{Fil}^{1}S_{K,\mathcal{O}_L})\subset p S_{K,\mathcal{O}_L}$, and we define $\phi_{1}:\operatorname{Fil}^{1}S_{K,\mathcal{O}_L}\to S_{K,\mathcal{O}_L}$ by $\phi_{1}=(\phi|_{\operatorname{Fil}^{1}S_{K,\mathcal{O}_L}})/p$. One can check (see section 4 of \cite{sav04}) that if $I$ is an ideal of $\mathcal{O}_{L}$, then $IS_{K,\mathcal{O}_L}\cap\operatorname{Fil}^{1}S_{K,\mathcal{O}_L}=I\operatorname{Fil}^{1}S_{K,\mathcal{O}_L}$. We give $S_{K}$ an action of $\operatorname{Gal}(K/M)$ by ring automorphisms, via the usual action on $W(k)$ and by letting $\hat{g}(u)=(g(\pi)/\pi)u$. We extend this action $\mathcal{O}_L$-linearly to $S_{K,\mathcal{O}_L}$. We now define the category $\mathcal{O}_{L}-\operatorname{Mod}^1_{cris,dd,M}$, the category of strongly divisible $\mathcal{O}_{L}$-modules with descent data from $K$ to $M$.
\begin{defn}\label{strongdivis}A strongly divisible $\mathcal{O}_{L}$-module with descent data from $K$ to $M$ is a finitely generated free $S_{K,\mathcal{O}_L}$-module $\mathcal{M}$, together with a sub-$S_{K,\mathcal{O}_L}$-module $\operatorname{Fil}^{1}\mathcal{M}$ and a map $\phi:\mathcal{M}\to\mathcal{M}$, and additive bijections $\hat{g}:\mathcal{M}\to\mathcal{M}$ for each $g\in\operatorname{Gal}(K/M)$, satisfying the following conditions:\begin{enumerate}\item $\operatorname{Fil}^{1}\mathcal{M}$ contains $(\operatorname{Fil}^{1}S_{K,\mathcal{O}_L})\mathcal{M}$, \item $\operatorname{Fil}^{1}\mathcal{M}\cap I\mathcal{M}=I\operatorname{Fil}^{1}\mathcal{M}$ for all ideals $I$ in $\mathcal{O}_{L}$,\item $\phi(sx)=\phi(s)\phi(x)$ for $s\in S_{K,\mathcal{O}_L}$ and $x\in\mathcal{M}$,\item $\phi(\operatorname{Fil}^{1}\mathcal{M})$ is contained in $p\mathcal{M}$ and generates it over $S_{K,\mathcal{O}_L}$,\item $\hat{g}(sx)=\hat{g}(s)\hat{g}(x)$ for all $s\in S_{K,\mathcal{O}_L}$, $x\in \mathcal{M}$, $g\in\operatorname{Gal}(K/M)$,\item $\hat{g}_1\circ\hat{g}_2=\widehat{g_1\circ g}_2$ for all $g_1$, $g_2\in\operatorname{Gal}(K/M)$,\item $\hat{g}(\operatorname{Fil}^{1}\mathcal{M})\subset\operatorname{Fil}^{1}\mathcal{M}$ for all $g\in\operatorname{Gal}(K/M)$, and \item $\phi$ commutes with $\hat{g}$ for all $g\in\operatorname{Gal}(K/M)$.\end{enumerate} \end{defn} Note that it is not immediately obvious that this definition is equivalent to Definition 4.1 of \cite{sav04}, as we have made no mention of the operator $N$ of \emph{loc. cit.} However, since $\mathcal{O}_{L}$ is finite over $\mathbb{Z}_{p}$, it follows from part (1) of Proposition 5.1.3 of \cite{MR1804530} that any such operator $N$ is unique. The existence of an operator $N$ satisfying all of the conditions of Definition 4.1 of \cite{sav04} except possibly for $\mathcal{O}_{L}$-linearity follows from the argument at the beginning of section 3.5 of \cite{sav04}. 
To check $\mathcal{O}_{L}$-linearity it is enough (by $\mathbb{Z}_{p}$-linearity) to check that $N$ is compatible with the action of the units in $\mathcal{O}_{L}$, but this is clear from the uniqueness of $N$. By Proposition 4.13 of \cite{sav04} (and the remarks immediately preceding it), there is a functor $T^{M}_{st,2}$ from the category $\mathcal{O}_L-\operatorname{Mod}^1_{cris,dd,M}$ to the category of $G_{M}$-stable $\mathcal{O}_{L}$-lattices in representations of $G_{M}$ which become Barsotti-Tate on restriction to $G_{K}$. This functor preserves dimensions in the obvious sense. Recall also from section 4.1 of \cite{sav04} that there is a functor $T_0$, compatible with $T_{st,2}^{M}$, from $\mathcal{O}_L-\operatorname{Mod}^1_{cris,dd,M}$ to $\operatorname{BrMod}_{dd,M}$. The functor $T_0$ is given by $\mathcal{M}\mapsto(\mathcal{M}/\mathfrak{m}_{L}\mathcal{M})\otimes_{S_K} k[u]/u^{ep}$. \subsection{Models of type $J$}\label{subsec: models of type J} We now wish to discuss the relationships between models of type $J$ and lifts of type $J$. With an eye to our future applications, we will often make a simplifying assumption. \begin{defn} Say that $\rho$ is \emph{$J$-regular} if $\psi_{1}\psi_{2}^{-1}|_{I_{K_{0}}}=\prod_{\tau\in J}\omega_{\tau}^{b_{\tau}}\prod_{\tau\in J^{c}}\omega_{\tau}^{-b_{\tau}}$ for some $2\leq b_{\tau}\leq p-2$. \end{defn} Suppose now that $\rho$ has a model of type $J$. Recall that this means that, with the notation of Section \ref{brmod}, we can write down a Breuil module $\mathcal{M}$ with descent data whose generic fibre is $\rho$, which is an extension of a Breuil module with descent data $\mathcal{B}$ by a Breuil module with descent data $\mathcal{A}$, where $\mathcal{A}$ is of class $J$ and $\mathcal{B}$ is of class $J^{c}$. Let $\psi'_{i}$ denote $\psi_{i}|_{I_{K_{0}}}$ regarded as a character of $\operatorname{Gal}(K/K_{0})$.
By Theorem 3.5 and Example 3.7 of \cite{sav06} we see that we can choose bases for $\mathcal{A}$ and $\mathcal{B}$ so that they take the following form: $$\mathcal{A}^{\tau_i}=E[u]/u^{ep}\cdot e_{\tau_i}$$ $$\mathcal{A}_1^{\tau_i}=E[u]/u^{ep}\cdot u^{j_{\tau_i}} e_{\tau_i}$$ $$\phi_1(u^{j_{\tau_i}} e_{\tau_i})=(a^{-1})_{i}e_{\tau_{i+1}} $$ $$\hat{g}(e_{\tau_i})=\left(\left(\psi_1'\prod_{\sigma\in J}\omega_\sigma^{-p}\right)(g)\right)e_{\tau_i}$$ $$\mathcal{B}^{\tau_i}=E[u]/u^{ep}\cdot \overline{f}_{\tau_i}$$ $$\mathcal{B}_1^{\tau_i}=E[u]/u^{ep}\cdot u^{e-j_{\tau_i}} \overline{f}_{\tau_i}$$ $$\phi_1(u^{e-j_{\tau_i}} \overline{f}_{\tau_i})=(b^{-1})_{i}\overline{f}_{\tau_{i+1}} $$ $$\hat{g}(\overline{f}_{\tau_i})=\left(\left(\psi_2'\prod_{\sigma\notin J}\omega_\sigma^{-p}\right)(g)\right)\overline{f}_{\tau_i}$$ where $a$, $b\in E^{\times}$, the notation $(x)_{i}$ means $x$ if $i=1$ and $1$ otherwise, and $$j_{\tau_i}=\left\{\begin{array}{c} e\text{ if }\tau_{i+1}\in J \\ 0\text{ if }\tau_{i+1}\notin J. \\ \end{array}\right. $$ We now seek to choose a basis for $\mathcal{M}$ extending the basis $\{e_{\tau}\}$ for $\mathcal{A}$. Such a basis will be given by lifting the $\overline{f}_{\tau}$ to elements $f_{\tau}$ (where we mean lifting under the map $e_{\tau}\mapsto 0$). \begin{lemma}\label{nicebasis}Assume that $\rho$ is $J$-regular and has a model $\mathcal{M}$ of type $J$. 
Then for some choice of basis, we can write $$\mathcal{M}^{\tau_i}=E[u]/u^{ep}\cdot e_{\tau_i}+E[u]/u^{ep}\cdot f_{\tau_i}$$ $$\mathcal{M}_1^{\tau_i}=E[u]/u^{ep}\cdot u^{j_{\tau_i}} e_{\tau_i}+E[u]/u^{ep}\cdot (u^{e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})$$ $$\phi_1(u^{j_{\tau_i}} e_{\tau_i})=(a^{-1})_ie_{\tau_{i+1}} $$ $$\phi_1(u^{e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})=(b^{-1})_if_{\tau_{i+1}}$$ $$\hat{g}(e_{\tau_i})=\left(\left(\psi_1'\prod_{\sigma\in J}\omega_\sigma^{-p}\right)(g)\right)e_{\tau_i}$$ $$\hat{g}(f_{\tau_i})=\left(\left(\psi_2'\prod_{\sigma\notin J}\omega_\sigma^{-p}\right)(g)\right)f_{\tau_i}$$where $\lambda_{\tau_i}\in E$, with $\lambda_{\tau_i}=0$ if $\tau_{i+1}\notin J$, the $i_{\tau_i}$ are such that $\mathcal{M}_1$ is Galois-stable and $0\leq i_{\tau_i}\leq e-1$, and $$j_{\tau_i}=\left\{\begin{array}{c} e\text{ if }\tau_{i+1}\in J \\ 0\text{ if }\tau_{i+1}\notin J. \\ \end{array}\right. $$ \end{lemma} \begin{proof}Assume firstly that $J\neq S$, and choose $k$ so that $\tau_{k+1}\notin J$. One can lift $\overline{f}_{\tau_{k}}$ to an element $f_{\tau_{k}}$ of $\phi_{1}(\mathcal{M}_{1}^{\tau_{k-1}})$, and in fact one can choose $f_{\tau_{k}}$ so that for all $g\in \operatorname{Gal}(K/{K_{0}})$ we have $$\hat{g}(f_{\tau_k})=\left(\left(\psi_2'\prod_{\sigma\notin J}\omega_\sigma^{-p}\right)(g)\right)f_{\tau_k}$$(the obstruction to doing this is easily checked to vanish, as the degree of $K/K_{0}$ is prime to $p$). As $\tau_{k+1}\notin J$, we have $j_{\tau_{k}}=0$, so that $e_{\tau_{k}}$ and $u^{e}f_{\tau_{k}}$ must generate $\mathcal{M}^{\tau_{k}}_{1}$. Now, suppose inductively that for some $i$ we have chosen $f_{\tau_{i}}$ and $\lambda_{\tau_{i}}$ so that $\mathcal{M}_1^{\tau_i}$ is generated by $u^{j_{\tau_i}} e_{\tau_i}$ and $(u^{e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})$.
Then we put $f_{\tau_{i+1}}=\phi_{1}(u^{e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})/(b^{-1})_{i}$. Then $f_{\tau_{i+1}}$ is a lift of $\overline{f}_{\tau_{i+1}}$, and the commutativity of $\phi_{1}$ and the action of $\operatorname{Gal}(K/K_{0})$ ensures that $$\hat{g}(f_{\tau_{i+1}})=\left(\left(\psi_2'\prod_{\sigma\notin J}\omega_\sigma^{-p}\right)(g)\right)f_{\tau_{i+1}}.$$ Then the fact that $\mathcal{M}_{1}$ is $\operatorname{Gal}(K/K_{0})$-stable ensures that for some $\lambda_{\tau_{i+1}}\in E$ we must have that $u^{j_{\tau_{i+1}}} e_{\tau_{i+1}}$ and $(u^{e-j_{\tau_{i+1}}}f_{\tau_{i+1}}+\lambda_{\tau_{i+1}}u^{i_{\tau_{i+1}}}e_{\tau_{i+1}})$ generate $\mathcal{M}_{1}^{\tau_{i+1}}$, and of course if $\tau_{i+2}\notin J$ we can take $\lambda_{\tau_{i+1}}=0$. So, beginning at $k$ we inductively define $f_{\tau_{i}}$ and $\lambda_{\tau_{i}}$ for all $i$, which automatically satisfy all the required properties, except that we do not know that $$\phi_1(u^{e-j_{\tau_{k-1}}}f_{\tau_{k-1}}+\lambda_{\tau_{k-1}}u^{i_{\tau_{k-1}}}e_{\tau_{k-1}})=(b^{-1})_{k-1} f_{\tau_{k}}.$$However, because $\tau_{k+1}\notin J$, we may replace $f_{\tau_{k}}$ with $\phi_1(u^{e-j_{\tau_{k-1}}}f_{\tau_{k-1}}+\lambda_{\tau_{k-1}}u^{i_{\tau_{k-1}}}e_{\tau_{k-1}})/(b^{-1})_{k-1}$ without altering the fact that $$\phi_1(u^{e}f_{\tau_{k}})=(b^{-1})_{k}f_{\tau_{k+1}},$$ so we are done. Suppose now that $J=S$.
Then we may carry out a similar inductive procedure starting with $\tau_{1}$, and we again define $f_{\tau_{i}}$ and $\lambda_{\tau_{i}}$ for all $i$, satisfying all the required properties, except that we do not know that $$\phi_{1}(f_{\tau_{r}}+\lambda_{\tau_{r}}u^{i_{\tau_{r}}}e_{\tau_{r}})=f_{\tau_{1}}.$$ We wish to redefine $f_{\tau_{1}}$ to be $\phi_{1}(f_{\tau_{r}}+\lambda_{\tau_{r}}u^{i_{\tau_{r}}}e_{\tau_{r}})$, and we claim that doing so does not affect the relation $$\phi_{1}(f_{\tau_{1}}+\lambda_{\tau_{1}}u^{i_{\tau_{1}}}e_{\tau_{1}})=b^{-1}f_{\tau_{2}}.$$ To see this, note that we are modifying $f_{\tau_{1}}$ by a multiple of $e_{\tau_{1}}$ which is in the image of $\phi_{1}$, which by considering the action of $\operatorname{Gal}(K/K_{0})$ must in fact be of the form $\theta u^{pi_{\tau_{r}}}e_{\tau_{1}}$, with $\theta\in E$ and $pi_{\tau_{r}}\equiv i_{\tau_{1}}\textrm{ mod }e.$ Now, the assumption that $\rho$ is $S$-regular means that $i_{\tau_{1}}=e-\sum_{l=1}^{r}p^{r-l}(b_{\tau_{l+1}}-1)\equiv -b_{\tau_{1}}\textrm{ mod }p$, with $2\leq b_{\tau_{l}}\leq p-2$. Now, if we write $pi_{\tau_{r}}=i_{\tau_{1}}+me$, we see that $m\equiv i_{\tau_{1}}\equiv -b_{\tau_{1}}\textrm{ mod }p$, and since $2\leq b_{\tau_{1}}\leq p-2$ we see that $m\geq 2$. But then $\phi_{1}(\theta u^{pi_{\tau_{r}}}e_{\tau_{1}})=\phi_{1}(\theta u^{i_{\tau_{1}}+(m-1)e}u^{e}e_{\tau_{1}})$ is divisible by $u^{p(m-1)e}$ and is thus $0$, as required. \end{proof} \begin{thm}\label{localdef}Assume that $\rho$ is $J$-regular and has a model of type $J$. Then $\rho$ has a lift of type $J$, which is potentially ordinary if and only if $J=S$ or $J=\emptyset$. \end{thm} \begin{proof}We will write down an element $\mathcal{M}_{J}$ of $W(E)-\operatorname{Mod}^1_{cris,dd,K_{0}}$ such that $T_{0}(\mathcal{M}_{J})=\mathcal{M}$, where $\mathcal{M}$ is as in Lemma \ref{nicebasis}.
We can write $S_{K,W(E)}$ as $\oplus_{\tau\in S} S_{K}$, and we then define $$\mathcal{M}_J^{\tau_i}=S_{K,W(E)}\cdot e_{\tau_i}+S_{K,W(E)}\cdot f_{\tau_i}$$ $$\hat{g}(e_{\tau_i})=\left(\left(\widetilde{\psi}_1'\prod_{\sigma\in J}\widetilde{\omega}_\sigma^{-p}\right)(g)\right)e_{\tau_i}$$ $$\hat{g}(f_{\tau_i})=\left(\left(\widetilde{\psi}_2'\prod_{\sigma\notin J}\widetilde{\omega}_\sigma^{-p}\right)(g)\right)f_{\tau_i}$$ If $\tau_{i+1}\in J$, $$\operatorname{Fil}^1\mathcal{M}_J^{\tau_i}=\operatorname{Fil}^1S_{K,W(E)}\cdot\mathcal{M}^{\tau_i}_J+S_{K,W(E)}\cdot (f_{\tau_i}+\tilde{\lambda}_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})$$ $$\phi(e_{\tau_i})=(\tilde{a}^{-1})_i e_{\tau_{i+1}}$$ $$\phi(f_{\tau_i}+\tilde{\lambda}_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})=(\tilde{b}^{-1})_i pf_{\tau_{i+1}}$$ If $\tau_{i+1}\notin J$, $$\operatorname{Fil}^1\mathcal{M}_J^{\tau_i}=\operatorname{Fil}^1S_{K,W(E)}\cdot\mathcal{M}^{\tau_i}_J+S_{K,W(E)}\cdot e_{\tau_i}$$ $$\phi(e_{\tau_i})=(\tilde{a}^{-1})_i p e_{\tau_{i+1}}$$ $$\phi(f_{\tau_i})=(\tilde{b}^{-1})_i f_{\tau_{i+1}}$$ Here a tilde denotes a Teichm\"{u}ller lift. Firstly we verify that this really is an element of $W(E)-\operatorname{Mod}^1_{cris,dd,K_{0}}$. Of the properties in Definition \ref{strongdivis}, the only non-obvious points are that $\operatorname{Fil}^1\mathcal{M}_J\cap I\mathcal{M}_J=I\operatorname{Fil}^1\mathcal{M}_J$ for all ideals $I$ of $\mathcal{O}_{L}$, and that $\phi(\operatorname{Fil}^1\mathcal{M}_J)$ is contained in $p\mathcal{M}_J$ and generates it over $S_{K,W(E)}$. But these are both straightforward; that $\operatorname{Fil}^1\mathcal{M}_J\cap I\mathcal{M}_J=I\operatorname{Fil}^1\mathcal{M}_J$ follows at once from the definition of $\operatorname{Fil}^1\mathcal{M}_J$ and the corresponding assertion for $S_{K}$, and that $\phi(\operatorname{Fil}^1\mathcal{M}_J)$ is contained in $p\mathcal{M}_J$ and generates it over $S_{K,W(E)}$ follows by inspection and the corresponding assertions for $S_K$. 
It is immediate from the definition of $T_0$ that $T_0(\mathcal{M}_J)\simeq\mathcal{M}$. To see that $T^{K_0}_{st,2}(\mathcal{M}_J)$ is a lift of $\rho$ of type $J$, note firstly that the Hodge-Tate weights of $T^{K_0}_{st,2}(\mathcal{M}_J)$ can be read off from the form of the filtration, exactly as in the last two paragraphs of the proof of Theorem 6.1 of \cite{geesavquatalg}. This shows that the determinant is a finite order character times the cyclotomic character, and it also shows that the representation is potentially ordinary if and only if $J=S$ or $J=\emptyset$. That the lift is of type $J$ is then immediate from the form of the $\operatorname{Gal}(K/K_{0})$-action and Proposition 5.1 of \cite{geesavquatalg}.\end{proof} \subsection{Breuil modules and Fontaine-Laffaille theory}\label{fl}In this section we relate the notion of having a model of type $J$ to that of possessing a certain crystalline lift. Suppose as usual that $\rho\sim \bigl(\begin{smallmatrix}\psi_1&*\\0&\psi_2\end{smallmatrix}\bigr)$, and that we can write $\psi_1|_{I_{K_0}}=\prod_{\tau\in J}\omega_{\tau}^{b_\tau}$, $\psi_2|_{I_{K_0}}=\prod_{\tau\notin J}\omega_{\tau}^{b_\tau}$ with $2\leq b_\tau\leq p-2$ (note that for a fixed $J$ it is \emph{not} always possible to do this, even after twisting; indeed, up to twisting it is equivalent to $\rho$ being $J$-regular). In this case we define canonical crystalline lifts $\psi_{1,J}$, $\psi_{2,J}$ of $\psi_1$, $\psi_2$, as in section \ref{2}. That is, we demand that for some choice of a Frobenius element $\Frob_{K_0}\in G_{K_0}$, ${\psi}_{i,J}(\Frob_{K_0})$ is the Teichm\"{u}ller lift of $\psi_i(\Frob_{K_0})$, and that: \begin{itemize}\item ${\psi}_{1,J}$ is crystalline, and the Hodge-Tate weight of ${\psi}_{1,J}$ with respect to $\tau$ is $b_\tau$ if $\tau\in J$, and $0$ if $\tau\notin J$.
\item ${\psi}_{2,J}$ is crystalline, and the Hodge-Tate weight of ${\psi}_{2,J}$ with respect to $\tau$ is $b_\tau$ if $\tau\notin J$, and $0$ if $\tau\in J$.\end{itemize} The main result of this section is \begin{prop}\label{h1f}Under the above hypotheses, $\rho$ has a model of type $J$ if and only if $\rho$ has a lift to a crystalline representation $\bigl(\begin{smallmatrix}\psi_{1,J}&*\\0&\psi_{2,J}\end{smallmatrix}\bigr).$\end{prop} \begin{proof}The idea of the proof is to express both the condition of having a model of type $J$ and the condition of having a crystalline lift of the prescribed type in terms of conditions on strongly divisible modules. In fact, we already have a description of the general model of type $J$ in terms of Breuil modules with descent data, and it is easy to write down the general crystalline representation $\bigl(\begin{smallmatrix}\psi_{1,J}&*\\0&\psi_{2,J}\end{smallmatrix}\bigr)$ in terms of Fontaine-Laffaille theory. The only difficulty comes in relating the generic fibres of the Breuil modules to the generic fibres of the Fontaine-Laffaille modules, as the image of the functors describing passage to the generic fibre is in general too complicated to describe directly. Fortunately, it is relatively easy to compare the two generic fibres we obtain, without explicitly determining either. Let $\mathcal{M}\in\operatorname{BrMod}^{k-1}_{dd}$ for some $k\in [2,p-1]$. Let $\hat{A}$ be the filtered ring defined in section 2.1 of \cite{carusomodp}. There is a contravariant functor $T_{st}^{*}$ from $\operatorname{BrMod}^{k-1}_{dd}$ to the category of $E$-representations of $G_{K_{0}}$ given by $$T_{st}^{*}(\mathcal{M}):=\operatorname{Hom}_{k[u]/u^{ep},\phi_{k-1},N,\operatorname{Fil}^\cdot}(\mathcal{M},\hat{A})$$(where compatibility with $\operatorname{Fil}^{\cdot}$ means that the image of $\mathcal{M}_{k-1}$ is contained in $\operatorname{Fil}^{k-1}\hat{A}$). 
The action of $G_{K_0}$ on $T_{st}^{*}(\mathcal{M})$ is given by \[(gf)(x):=gf(\widehat{\overline{g}}^{-1}(x)),\]where $\overline{g}$ is the image of $g$ in $\operatorname{Gal}(K/K_0)$, and the action of $\operatorname{Gal}(K/K_0)$ on $\hat{A}=\hat{A}_K$ is defined in section 4.2 of \cite{carusomodp}. For the compatibility of this definition with those used in \cite{MR1681105}, \cite{MR2543474} and \cite{sav04}, see Lemma 3.3.1.2 of \cite{MR1621389}. This functor is exact and faithful, and preserves dimension in the obvious sense. To see these properties, it is enough to work with the category $\operatorname{BrMod}^{k-1}$ without descent data, and it is also straightforward to see that it suffices to consider the case $E={\mathbb F}_p$. In this case, the fact that $T_{st}^*$ is faithful is Corollary 2.3.3 of \cite{MR2543474}, and exactness follows from Proposition 2.3.1 of \cite{carusomodp} and the duality explained in section 2.1 of \cite{carusomodp}. The preservation of dimension is Lemma 2.3.1.2 of \cite{MR1681105}. We will see below that by Breuil's generalisation of Fontaine-Laffaille theory (see \cite{MR1621389}) there are objects of $\operatorname{BrMod}^{p-2}_{dd}$ which correspond via $T_{st}^{*}$ to the reductions mod $p$ of crystalline representations with Hodge-Tate weights in $[0,p-2]$. In order to compare the generic fibres of these Breuil modules to those of finite flat group schemes with descent data, we need to be able to compare elements of $\operatorname{BrMod}_{dd}^{1}$ and $\operatorname{BrMod}_{dd}^{p-2}$. This is straightforward: it is easy to check that there is a fully faithful functor from $\operatorname{BrMod}_{dd}^{1}$ to $\operatorname{BrMod}_{dd}^{p-2}$, given by defining (for $\mathcal{M}\in\operatorname{BrMod}_{dd}^{1}$) $\mathcal{M}_{p-2}:=u^{e(p-3)}\mathcal{M}_{1}$, $\phi_{p-2}(u^{e(p-3)}x)=\phi_{1}(x)$ for all $x\in\mathcal{M}_{1}$, and leaving the other structures unchanged. This functor commutes with $T_{st}^{*}$.
Because we are now using the functor $T_{st}^{*}$ rather than $T_{st,2}^{K_{0}}$, the form of the Breuil modules (and in particular their descent data) corresponding to models of type $J$ under $T_{st}^{*}$ is slightly different. We will simultaneously write it as an element of $\operatorname{BrMod}_{dd}^1$ and $\operatorname{BrMod}_{dd}^{p-2}$ (making use of the fully faithful functor of the previous paragraph), by specifying $\mathcal{M}_1$, $\mathcal{M}_{p-2}$, $\phi_1$ and $\phi_{p-2}$. Explicitly, we see (recalling that the operator $N$ is uniquely determined for an element of $\operatorname{BrMod}^1_{dd}$, so it suffices to check that it satisfies $N(\mathcal{M})\subset u\mathcal{M}$ and the commutation relations with $\phi_{1}$ and $\hat{g}$, which we will check below) from Lemma \ref{nicebasis} that $\rho$ has a model of type $J$ if and only if there are $\lambda_{\tau_{i}}\in E$ with $\lambda_{\tau_{i}}=0$ if $\tau_{i+1}\notin J$, and elements $a$, $b\in E^{\times}$ such that $\rho\cong T_{st}^{*}(\mathcal{M})$, where $$\mathcal{M}^{\tau_i}=E[u]/u^{ep}\cdot e_{\tau_i}+E[u]/u^{ep}\cdot f_{\tau_i}$$ $$\mathcal{M}_{1}^{\tau_i}=E[u]/u^{ep}\cdot u^{j_{\tau_i}} e_{\tau_i}+E[u]/u^{ep}\cdot (u^{e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})$$ $$\mathcal{M}_{p-2}^{\tau_i}=E[u]/u^{ep}\cdot u^{(p-3)e+j_{\tau_i}} e_{\tau_i}+E[u]/u^{ep}\cdot (u^{(p-2)e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{(p-3)e+i_{\tau_i}}e_{\tau_i})$$ $$\phi_{1}(u^{j_{\tau_i}} e_{\tau_i})=(a^{-1})_ie_{\tau_{i+1}} $$ $$\phi_{1}(u^{e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})=(b^{-1})_if_{\tau_{i+1}}$$ $$\phi_{p-2}(u^{(p-3)e+j_{\tau_i}} e_{\tau_i})=(a^{-1})_ie_{\tau_{i+1}} $$ $$\phi_{p-2}(u^{(p-2)e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{(p-3)e+i_{\tau_i}}e_{\tau_i})=(b^{-1})_if_{\tau_{i+1}}$$ $$\hat{g}(e_{\tau_i})=\left(\left(\prod_{\sigma\notin J}\omega_\sigma^{p-b_{\sigma}}\right)(g)\right)e_{\tau_i}$$ $$\hat{g}(f_{\tau_i})=\left(\left(\prod_{\sigma\in 
J}\omega_\sigma^{p-b_{\sigma}}\right)(g)\right)f_{\tau_i}$$ $$N(e_{\tau_{i}})=0 $$ $$N(f_{\tau_{i}})=-\frac{(b)_{i-1}}{(a)_{i-1}}i_{\tau_{i-1}}\lambda_{\tau_{i-1}}u^{pi_{\tau_{i-1}}}e_{\tau_{i}} $$ where $\lambda_{\tau_i}\in E$, with $\lambda_{\tau_i}=0$ if $\tau_{i+1}\notin J$, the $i_{\tau_i}$ are such that $\mathcal{M}_{p-2}$ is Galois-stable and $0\leq i_{\tau_i}\leq e-1$, and $$j_{\tau_i}=\left\{\begin{array}{c} e\text{ if }\tau_{i+1}\in J \\ 0\text{ if }\tau_{i+1}\notin J. \\ \end{array}\right. $$ To see that $N(\mathcal{M})\subset u\mathcal{M}$, it is enough to check that $i_{\tau_i}>0$ for all $i$. In fact, we claim that we have $pi_{\tau_i}\ge e$ for all $i$. To see this, note that by definition we have $i_{\tau_{i+1}}\equiv pi_{\tau_i}\pmod{e}$. If $pi_{\tau_i}<e$ for some $i$, then this congruence forces $i_{\tau_{i+1}}=pi_{\tau_i}$. However, it is easy to check that since $2\le b_{\tau_i}\le p-2$ for all $i$, no $i_{\tau_i}$ is divisible by $p$ (for example, by (6) below we have \[i_{\tau_i}\equiv b_{\tau_i}(\delta_{J^c}(\tau_i)-\delta_J(\tau_i))-\delta_{J^c}(\tau_{i+1})\pmod{p},\]which is never $0\pmod{p}$), so the claim follows. The compatibility of $N$ with $\hat{g}$ is evident from the definition of $i_{\tau_i}$. 
To see that $u^eN(\mathcal{M}_{1})\subset\mathcal{M}_{1}$, and that $\phi_{1}(u^eN(x))=N(\phi_{1}(x))$ for all $x\in\mathcal{M}_{1}$, we compute as follows (recalling that the Leibniz rule implies that $N(u^ix)=u^iN(x)-iu^ix$): \[N(u^{j_{\tau_i}}e_{\tau_i})=-j_{\tau_i}u^{j_{\tau_i}}e_{\tau_i}\in\mathcal{M}_{1},\]so that \[\phi_{1}(u^eN(u^{j_{\tau_i}}e_{\tau_i}))=0=N((a^{-1})_ie_{\tau_{i+1}})=N(\phi_{1}(u^{j_{\tau_i}}e_{\tau_i})).\]Similarly, we have \begin{align*}N(u^{e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})&=-\frac{(b)_{i-1}}{(a)_{i-1}}i_{\tau_{i-1}}\lambda_{\tau_{i-1}}u^{e-j_{\tau_i}+pi_{\tau_{i-1}}}e_{\tau_i} \\&- \left((e-j_{\tau_i})u^{e-j_{\tau_i}}f_{\tau_i}+i_{\tau_i}\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i}\right) \end{align*} Recalling that if $y\in\mathcal{M}_{1}$ then $\phi_{1}(u^ey)=0$, we see that it is enough to compute the right hand side modulo $\mathcal{M}_{1}$. Since $pi_{\tau_{i-1}}\ge e$, the exponent of $u$ in the first term on the right hand side is at least $2e-j_{\tau_i}\ge j_{\tau_i}$, so this term is contained in $\mathcal{M}_{1}$. We thus see that modulo $\mathcal{M}_{1}$, the right hand side is congruent to \[ (e-j_{\tau_i}-i_{\tau_i})\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i}=-i_{\tau_i}\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i} \](since if $\lambda_{\tau_i}\ne 0$, we have $\tau_{i+1}\in J$, so that $j_{\tau_i}=e$). Then \begin{align*} \phi_{1}(-u^ei_{\tau_i}\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})&=\phi_{1}(-i_{\tau_i}\lambda_{\tau_i}u^{i_{\tau_i}}u^{j_{\tau_i}}e_{\tau_i})\\ &= -i_{\tau_i}\lambda_{\tau_i}u^{pi_{\tau_i}}(a^{-1})_ie_{\tau_{i+1}}\\ &=(b^{-1})_iN(f_{\tau_{i+1}})\\ &= N(\phi_{1}(u^{e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{i_{\tau_i}}e_{\tau_i})) \end{align*}as required. It is an easy exercise to write down the reductions mod $p$ of the strongly divisible modules corresponding to crystalline representations $\bigl(\begin{smallmatrix}\psi_{1,J}&*\\0&\psi_{2,J}\end{smallmatrix}\bigr)$, as we now explain. 
Firstly, we must recall one of the main results of \cite{MR707328}. Let $L$ be a finite extension of ${{\mathbb Q}_p}$ with residue field $E$. An \emph{admissible $\mathcal{O}_L$-lattice} is a finite free $(\mathcal{O}_{K_0}\otimes_{\mathbb{Z}_p}\mathcal{O}_L)$-module $M$ together with a decreasing filtration $\operatorname{Fil}^iM$ by $\mathcal{O}_{K_0}$-direct summands and $\phi$-linear, $\mathcal{O}_L$-linear maps $\phi_i:\operatorname{Fil}^iM\to M$ for all $0\le i\le p-2$ such that \begin{itemize} \item $\operatorname{Fil}^0M=M$ and $\operatorname{Fil}^{p-1}M=0$. \item For all $0\le i\le p-3$, $\phi_i|_{\operatorname{Fil}^{i+1}M}=p\phi_{i+1}$. \item $\sum_{i=0}^{p-2}\phi_i(\operatorname{Fil}^iM)=M$. \end{itemize} There is an exact functor $T^*_{cris}$ from the category of admissible $\mathcal{O}_L$-lattices to the category of $G_{K_0}$-representations on free $\mathcal{O}_L$-lattices defined by \[T^*_{cris}(M)=\operatorname{Hom}_{\mathcal{O}_{K_0},\operatorname{Fil}^\cdot,\phi}(M,A_{cris}).\]This gives an equivalence of categories between the category of admissible $\mathcal{O}_L$-lattices and the category of $G_{K_0}$-stable $\mathcal{O}_L$-lattices in crystalline $L$-representations of $G_{K_0}$ with all Hodge-Tate weights in $[0,p-2]$. In particular, one can easily write down the form of the rank one $\mathcal{O}_L$-lattices corresponding to the characters $\psi_{1,J}$ and $\psi_{2,J}$, and we must then compute the possible form of extensions of these two lattices. As usual, we decompose $M$ as a direct sum of $\mathcal{O}_L$-modules $M^{\tau_i}$.
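For orientation, we record the rank one case (a sketch; here $h_{\tau_i}\in[0,p-2]$ denotes the $\tau_i$-labelled Hodge-Tate weight of the character in question, and the unit $\tilde{a}\in\mathcal{O}_L^\times$ depends on the character): the corresponding admissible $\mathcal{O}_L$-lattice takes the form \[M^{\tau_i}=\mathcal{O}_LE_{\tau_i},\qquad \operatorname{Fil}^{j}M^{\tau_i}=\left\{\begin{array}{ll}M^{\tau_i}&\text{ if }0\le j\le h_{\tau_i} \\ 0&\text{ if }j>h_{\tau_i} \\ \end{array}\right.\qquad \phi_{h_{\tau_i}}(E_{\tau_i})=(\tilde{a}^{-1})_iE_{\tau_{i+1}},\]the remaining $\phi_j$ being determined by the requirement that $\phi_j|_{\operatorname{Fil}^{j+1}M}=p\phi_{j+1}$. The extension computed below is built from two such lattices, with weights $b_{\tau_i}\delta_J(\tau_i)$ and $b_{\tau_i}\delta_{J^c}(\tau_i)$ respectively.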
We obtain the following general form: \[M^{\tau_i}=\mathcal{O}_L E_{\tau_i}+\mathcal{O}_L F_{\tau_i}\] \[\operatorname{Fil}^0M^{\tau_i}=M^{\tau_i}\] \[\operatorname{Fil}^{b_{\tau_i}+1}M^{\tau_i}=0\] \[\text{ if }\tau_i\in J\text{, }\operatorname{Fil}^{j}M^{\tau_i}=\mathcal{O}_L F_{\tau_i}\text{ for all }1\le j\le b_{\tau_i}\] \[\text{ if }\tau_i\in J^c\text{, }\operatorname{Fil}^{j}M^{\tau_i}=\mathcal{O}_L E_{\tau_i}\text{ for all }1\le j\le b_{\tau_i}\] \[\text{ if }\tau_i\in J\text{, }\phi_0(E_{\tau_i})=(\tilde{a}^{-1})_iE_{\tau_{i+1}}\text{ and }\phi_{b_{\tau_i}}(F_{\tau_i})=(\tilde{b}^{-1})_i(F_{\tau_{i+1}}-\lambda'_{\tau_i}E_{\tau_{i+1}}) \] \[\text{ if }\tau_i\notin J\text{, }\phi_{b_{\tau_i}}(E_{\tau_i})=(\tilde{a}^{-1})_iE_{\tau_{i+1}}\text{ and }\phi_{0}(F_{\tau_i})=(\tilde{b}^{-1})_i(F_{\tau_{i+1}}-\lambda'_{\tau_i}E_{\tau_{i+1}}) \] where $\tilde{a}$, $\tilde{b}\in\mathcal{O}_L^\times$, $\lambda'_{\tau_i}\in\mathcal{O}_L$, and $\lambda'_{\tau_i}=0$ if $\tau_{i+1}\notin J$. To see this, note that the form of the filtration is easily deduced from the relationship between the filtration of a Fontaine-Laffaille module and the Hodge-Tate weights of the corresponding Galois representation, and the form of the Frobenius action on the $E_{\tau_i}$ is also determined. To see that we can arrange the Frobenius action as claimed, suppose firstly that $J\neq\emptyset$, and choose $\tau_i\in J$. Then $F_{\tau_i}$ is determined (by the form of the filtration) up to an element of $\mathcal{O}_L^\times$, and we fix a choice of $F_{\tau_i}$. If $\tau_{i+1}\notin J$, we can simply define $F_{\tau_{i+1}}=(\tilde{b})_i\phi_{b_{\tau_i}}(F_{\tau_i})$.
If $\tau_{i+1}\in J$, then there is a unique $\lambda_{\tau_i}'\in\mathcal{O}_L$ such that \[\operatorname{Fil}^{b_{\tau_{i+1}}}M^{\tau_{i+1}}=\mathcal{O}_L((\tilde{b})_i\phi_{b_{\tau_i}}(F_{\tau_i})+\lambda'_{\tau_i}E_{\tau_{i+1}}),\] and we set \[F_{\tau_{i+1}}=(\tilde{b})_i\phi_{b_{\tau_i}}(F_{\tau_i})+\lambda'_{\tau_i}E_{\tau_{i+1}}.\]We can then continue in the same fashion, defining $F_{\tau_{i+2}}$ and so on, and the fact that $\tau_i\in J$ gives us the freedom to choose $\lambda'_{\tau_{i-1}}$ so that \[\phi_{\delta_J(\tau_{i-1})b_{\tau_{i-1}}}(F_{\tau_{i-1}})=(\tilde{b}^{-1})_{i-1}(F_{\tau_{i}}-\lambda'_{\tau_{i-1}}E_{\tau_{i}}).\]The case $J=\emptyset$ is similar, except that one may need to modify the initial choice of $F_{\tau_i}$; the argument is very similar to that used in the case $J=S$ in the proof of Lemma \ref{nicebasis}. In this case one also needs to use the fact that $\tilde{b}^{-1}-\tilde{a}^{-1}p^{\sum_{\tau\in S}b_\tau}\in\mathcal{O}_L^\times$, which holds as $\sum_{\tau\in S}b_\tau>0$. Breuil's generalisation of Fontaine-Laffaille theory (\cite{MR1621389}) allows us to reduce these modules mod $\pi_L$ and obtain the corresponding elements of the category $\operatorname{BrMod}_{dd,K_0}^{p-2}$ of Breuil modules with descent data for the case $K=K_0$ (so that the descent data is trivial, as the group $\operatorname{Gal}(K/K_0)$ is trivial).
We find that they are of the form: $$\mathcal{Q}^{\tau_i}=E[u]/u^{p}\cdot E_{\tau_i}+E[u]/u^{p}\cdot F_{\tau_i}$$ $$\mathcal{Q}_{p-2}^{\tau_i}=E[u]/u^{p}\cdot u^{(p-2-b_{\tau_{i}}\delta_{J^{c}}(\tau_{i}))} E_{\tau_i}+E[u]/u^{p}\cdot u^{(p-2-b_{\tau_{i}}\delta_{J}(\tau_{i}))}F_{\tau_i}$$ $$\phi_{p-2}(u^{(p-2-b_{\tau_{i}}\delta_{J^{c}}(\tau_{i}))} E_{\tau_i})=(a^{-1})_iE_{\tau_{i+1}} $$ $$\phi_{p-2}(u^{(p-2-b_{\tau_{i}}\delta_{J}(\tau_{i}))}F_{\tau_i})=(b^{-1})_i(F_{\tau_{i+1}}-\lambda_{\tau_{i}}'E_{\tau_{i+1}})$$ $$N(E_{\tau_{i}})=0 $$ $$N(F_{\tau_{i}})=0 $$ where $\lambda_{\tau_i}'\in E$, with $\lambda'_{\tau_i}=0$ if $\tau_{i+1}\notin J$. Of course, we wish to know the corresponding objects of $\operatorname{BrMod}_{dd}^{p-2}$. This is straightforward: by Proposition 4.2.2 of \cite{carusomodp}, and the discussion preceding and following it, we see that we can obtain the requisite modules by simply taking the extension of scalars $k[u]/u^{p}\to k[u]/u^{ep}$ given by $u\mapsto u^e$, and allowing $\operatorname{Gal}(K/K_0)$ to act via its action on $k[u]/u^{ep}$. We obtain the following general form: $$\mathcal{N}^{\tau_i}=E[u]/u^{ep}\cdot E_{\tau_i}+E[u]/u^{ep}\cdot F_{\tau_i}$$ $$\mathcal{N}_{p-2}^{\tau_i}=E[u]/u^{ep}\cdot u^{e(p-2-b_{\tau_{i}}\delta_{J^{c}}(\tau_{i}))} E_{\tau_i}+E[u]/u^{ep}\cdot u^{e(p-2-b_{\tau_{i}}\delta_{J}(\tau_{i}))}F_{\tau_i}$$ $$\phi_{p-2}(u^{e(p-2-b_{\tau_{i}}\delta_{J^{c}}(\tau_{i}))} E_{\tau_i})=(a^{-1})_iE_{\tau_{i+1}} $$ $$\phi_{p-2}(u^{e(p-2-b_{\tau_{i}}\delta_{J}(\tau_{i}))}F_{\tau_i})=(b^{-1})_i(F_{\tau_{i+1}}-\lambda_{\tau_{i}}'E_{\tau_{i+1}})$$ $$\hat{g}(E_{\tau_i})=E_{\tau_i}$$ $$\hat{g}(F_{\tau_i})=F_{\tau_i}$$ $$N(E_{\tau_{i}})=0 $$ $$N(F_{\tau_{i}})=0 $$ where $\lambda_{\tau_i}'\in E$, with $\lambda'_{\tau_i}=0$ if $\tau_{i+1}\notin J$. 
We claim that if for each $i$ we have \begin{equation}\label{eqn:341} \lambda_{\tau_{i}}(b)_{i}=\lambda'_{\tau_{i}}(a)_{i} \end{equation} then $T_{st}^{*}(\mathcal{M})\cong T_{st}^{*}(\mathcal{N})$. This is of course enough to demonstrate the proposition, as given any set of $\lambda_{\tau_{i}}$ (respectively $\lambda_{\tau_{i}}'$) such that $\lambda_{\tau_i}=0$ (respectively $\lambda'_{\tau_i}=0$) if $\tau_{i+1}\notin J$, we may choose a set of $\lambda_{\tau_{i}}'$ (respectively $\lambda_{\tau_{i}}$) so that (\ref{eqn:341}) holds. Assume now that (\ref{eqn:341}) holds. Note that we may write both $\mathcal{M}$ and $\mathcal{N}$ as extensions $$0\to\mathcal{M}''\to\mathcal{M}\to\mathcal{M}'\to 0 $$ $$0\to\mathcal{N}''\to\mathcal{N}\to\mathcal{N}'\to 0 $$with $T_{st}^{*}(\mathcal{M}'')\cong T_{st}^{*}(\mathcal{N}'')\cong\psi_{2}$, $T_{st}^{*}(\mathcal{M}')\cong T_{st}^{*}(\mathcal{N}')\cong\psi_{1}$. To prove that $T_{st}^{*}(\mathcal{M})\cong T_{st}^{*}(\mathcal{N})$, we will construct a commutative diagram $$\xymatrix{0\ar[r] & \mathcal{M}''\ar[r]\ar[d]^{f_{\mathcal{M}''}} &\mathcal{M}\ar[r]\ar[d]^{f_{\mathcal{M}}} & \mathcal{M}'\ar[r]\ar[d]^{f_{\mathcal{M}'}} & 0 \\ 0\ar[r] & \mathcal{P}''\ar[r] &\mathcal{P}\ar[r] & \mathcal{P}'\ar[r] & 0\\ 0\ar[r] & \mathcal{N}''\ar[u]_{f_{\mathcal{N}''}}\ar[r] &\mathcal{N}\ar[r]\ar[u]_{f_{\mathcal{N}}} & \mathcal{N}'\ar[r]\ar[u]_{f_{\mathcal{N}'}} & 0}$$such that each of $T_{st}^{*}(f_{\mathcal{M}''})$, $T_{st}^{*}(f_{\mathcal{M}'})$, $T_{st}^{*}(f_{\mathcal{N}''})$ and $T_{st}^{*}(f_{\mathcal{N}'})$ are isomorphisms. From the five lemma it then follows that $T_{st}^{*}(f_{\mathcal{M}})$ and $T_{st}^{*}(f_{\mathcal{N}})$ are isomorphisms, and we will be done. 
In fact, we take $$\mathcal{P}^{\tau_i}=E[u]/u^{ep}\cdot e'_{\tau_i}+E[u]/u^{ep}\cdot f'_{\tau_i}$$ $$\mathcal{P}_{p-2}^{\tau_i}=E[u]/u^{ep}\cdot u^{n_{\tau_{i}}} e'_{\tau_i}+E[u]/u^{ep}\cdot (u^{n'_{\tau_{i}}}f'_{\tau_i}+\lambda_{\tau_i}u^{n_{\tau_{i}}-\beta_{\tau_{i+1}}}e'_{\tau_i})$$ $$\phi_{p-2}(u^{n_{\tau_{i}}} e'_{\tau_i} )=(a^{-1})_ie'_{\tau_{i+1}} $$ $$\phi_{p-2}(u^{n'_{\tau_{i}}}f'_{\tau_i}+\lambda_{\tau_i}u^{n_{\tau_{i}}-\beta_{\tau_{i+1}}}e'_{\tau_i})=(b^{-1})_if'_{\tau_{i+1}}$$ $$\hat{g}(e'_{\tau_i})=\nu_{1,\tau_{i}}(g)e'_{\tau_i}$$ $$\hat{g}(f'_{\tau_i})=\nu_{2,\tau_{i}}(g)f'_{\tau_i}$$ $$N(e'_{\tau_{i}})=0 $$ $$N(f'_{\tau_{i}})=-\frac{(b)_{i-1}}{(a)_{i-1}}i_{\tau_{i-1}}\lambda_{\tau_{i-1}}u^{pi_{\tau_{i-1}}-p \alpha_{\tau_{i}}}e'_{\tau_{i}} $$where $$\alpha_{\tau_{i}}=\sum_{j=0}^{r-1}p^{r-1-j}\left(b_{\tau_{i+j}}\delta_{J^{c}}(\tau_{i+j})-\delta_{J^{c}}(\tau_{i+j+1})\right) $$ $$\beta_{\tau_{i}}=\sum_{j=0}^{r-1}p^{r-1-j}\left(b_{\tau_{i+j}}\delta_{J}(\tau_{i+j})-\delta_{J}(\tau_{i+j+1})\right) $$ $$\nu_{1,\tau_{i}}(g)=\left\{\begin{array}{ll}\prod_{\sigma\notin J}\omega_\sigma^{p-b_{\sigma}}(g)&\text{ if }\tau_i\notin J \\ 1 &\text{ if }\tau_i\in J \end{array}\right.$$ $$\nu_{2,\tau_{i}}(g)=\left\{\begin{array}{ll}\prod_{\sigma\in J}\omega_\sigma^{p-b_{\sigma}}(g)&\text{ if }\tau_i\in J \\ 1 &\text{ if }\tau_i\notin J \end{array}\right.$$ $$n_{\tau_{i}}= (p-2-\delta_{J^{c}}(\tau_{i})b_{\tau_{i}})e+p\delta_{J^{c}}(\tau_{i})\alpha_{\tau_{i}}-\delta_{J^{c}}(\tau_{i+1})\alpha_{\tau_{i+1}}$$ $$n'_{\tau_{i}}= (p-2-\delta_{J}(\tau_{i})b_{\tau_{i}})e+p\delta_{J}(\tau_{i})\beta_{\tau_{i}}-\delta_{J}(\tau_{i+1})\beta_{\tau_{i+1}}$$We then define $f_{\mathcal{M}}$ and $f_{\mathcal{N}}$ by $$f_{\mathcal{M}}(e_{\tau_{i}})=u^{-p\alpha_{\tau_{i}}\delta_{J}(\tau_{i})}e_{\tau_{i}}' $$ $$f_{\mathcal{M}}(f_{\tau_{i}})=u^{-p\beta_{\tau_{i}}\delta_{J^{c}}(\tau_{i})}f_{\tau_{i}}' $$ $$f_{\mathcal{N}}(E_{\tau_{i}})=u^{p\alpha_{\tau_{i}}\delta_{J^{c}}(\tau_{i})}e_{\tau_{i}}' 
$$ $$f_{\mathcal{N}}(F_{\tau_{i}})=u^{p\beta_{\tau_{i}}\delta_{J}(\tau_{i})}f_{\tau_{i}}' $$We define $\mathcal{P}'$ to be the submodule generated by the $e'_{\tau_{i}}$, and $\mathcal{P}''$ to be the quotient obtained by $e'_{\tau_{i}}\mapsto 0$. The remaining maps are then defined by the commutativity of the diagram. Before we verify that this construction behaves as claimed, we pause to record a number of useful identities and inequalities. \begin{enumerate} \item If $\tau_{i+1}\notin J$, then $\lambda_{\tau_i}=\lambda'_{\tau_i}=0$ by definition. \item $$p\alpha_{\tau_{i}}-\alpha_{\tau_{i+1}}=e(b_{\tau_{i}}\delta_{J^{c}}(\tau_{i})-\delta_{J^{c}}(\tau_{i+1})), $$ $$p\beta_{\tau_{i}}-\beta_{\tau_{i+1}}=e(b_{\tau_{i}}\delta_{J}(\tau_{i})-\delta_{J}(\tau_{i+1})). $$These both follow immediately from the definitions of $\alpha_{\tau_i}$, $\beta_{\tau_i}$. \item \[n_{\tau_i}=\alpha_{\tau_{i+1}}\delta_J(\tau_{i+1})-p\alpha_{\tau_i}\delta_J(\tau_i)+e(p-3)+e\delta_J(\tau_{i+1}),\] \[n'_{\tau_i}=\beta_{\tau_{i+1}}\delta_{J^c}(\tau_{i+1})-p\beta_{\tau_i}\delta_{J^c}(\tau_i)+e(p-3)+e\delta_{J^c}(\tau_{i+1}).\]These both follow from the definitions of $n_{\tau_i}$, $n_{\tau_i}'$ and property (2) above. \item We have $\tau_{i}\in J$ if and only if $\beta_{\tau_{i}}>0$ if and only if $\alpha_{\tau_{i}}\leq 0$. To see this, note that from the definition, the sign of $\alpha_{\tau_{i}}$ is determined by the sign of the first non-zero term in the sum (this uses that $2\leq b_{\tau_{j}}\leq p-2$). If $\tau_{i}\notin J$ then the first term is positive, and thus so is the whole sum. If $\tau_{i}\in J$ then either every term in the sum is zero, or the first non-zero term must be negative. A similar analysis applies to the sign of $\beta_{\tau_{i}}$. \item $$-e/(p-1)<\alpha_{\tau_{i}},\ \beta_{\tau_{i}}< e(p-2)/(p-1).$$ This is immediate from the definitions, and the fact that $2\leq b_{\tau_{j}}\leq p-2$ for all $j$. 
\item \[i_{\tau_{i-1}}=\alpha_{\tau_i}-\beta_{\tau_i}+e\delta_J(\tau_i).\] It follows straightforwardly from the forms of the $\hat{g}$-actions that the two sides are congruent modulo $e$, so it suffices to check that the right hand side is an element of $[0,e-1]$. This follows from points (4) and (5). \item \[0\le n_{\tau_i},\ n'_{\tau_i}\le (p-2)e.\] We demonstrate these inequalities for $n_{\tau_i}$, the argument for $n'_{\tau_i}$ being formally identical after exchanging $\alpha_{\tau_j}$ and $\beta_{\tau_j}$, $J$ and $J^c$. We examine four cases in turn. If $\tau_{i}\in J$ and $\tau_{i+1}\in J$, then $n_{\tau_i}=(p-2)e$ and there is nothing to prove. If $\tau_i\in J$ and $\tau_{i+1}\notin J$, then $n_{\tau_i}=(p-2)e-\alpha_{\tau_{i+1}}$, and the inequalities follow from points (4) and (5) above. If $\tau_i\notin J$ and $\tau_{i+1}\in J$ then by point (3) above we have $n_{\tau_i}=(p-2)e+\alpha_{\tau_{i+1}}$, and the inequalities follow from (4) and (5). Finally, if $\tau_i\notin J$ and $\tau_{i+1}\notin J$, then $n_{\tau_i}=(p-2-b_{\tau_i})e+p\alpha_{\tau_i}-\alpha_{\tau_{i+1}}=e(p-3)$ by (2). \item If $\tau_{i+1}\in J$, we have \[n_{\tau_i}-\beta_{\tau_{i+1}}=e(p-3)-p\alpha_{\tau_i}\delta_J(\tau_i)+i_{\tau_i}.\]This follows from (3) and (6) above. \item If $\tau_{i+1}\in J$, then \[n_{\tau_i}-\beta_{\tau_{i+1}}\equiv n_{\tau_i}'+ i_{\tau_i}\pmod{p}.\]This follows from (3) and (8). \item If $\tau_{i+1}\in J$, then $n_{\tau_i}\ge \beta_{\tau_{i+1}}$. This follows from (4) and (8). \item If $\tau_{i+1}\in J$, then \[n'_{\tau_i}+\beta_{\tau_{i+1}}\le e(p-2).\]From (2) and (3), we obtain \[n'_{\tau_{i}}+\beta_{\tau_{i+1}}=e(p-2)+\delta_J(\tau_i)(p\beta_{\tau_i}-eb_{\tau_i}),\]so we must check that if $\tau_i\in J$, then $p\beta_{\tau_i}-eb_{\tau_i}\le 0$.
But by the definition of $\beta_{\tau_i}$, if $\tau_i\in J$ then we have \[p\beta_{\tau_i}-eb_{\tau_i}=-p^r+b_{\tau_i}+\sum_{j=1}^{r-1}p^{r-j}\left(b_{\tau_{i+j}}\delta_{J}(\tau_{i+j})-\delta_{J}(\tau_{i+j+1})\right)\]and the result follows as $2\le b_{\tau_j}\le p-2$ for all $j$. \item If $\tau_i\in J$, then \[n_{\tau_i}'+pi_{\tau_{i-1}}-p\alpha_{\tau_i}\ge n_{\tau_i}.\]To see this, by (2), (3) and (6) we have that if $\tau_{i+1}\in J$, then \begin{align*}n_{\tau_i}'+pi_{\tau_{i-1}}-p\alpha_{\tau_i}-n_{\tau_i}&=(p-1)e-p\beta_{\tau_i}\\ &\ge\left((p-1)-\frac{p(p-2)}{p-1}\right)e\\&\ge 0\end{align*}by (5). If on the other hand $\tau_{i+1}\notin J$, we find that \begin{align*}n_{\tau_i}'+pi_{\tau_{i-1}}-p\alpha_{\tau_i}-n_{\tau_i}&=(p+1-b_{\tau_i})e+p\alpha_{\tau_i}\\ &\ge 3e+p\alpha_{\tau_i} \\&\ge\left(3-\frac{p}{p-1}\right)e\\&\ge 0\end{align*} by (5). \end{enumerate} We now verify that $\mathcal{P}$ is indeed an object of $\operatorname{BrMod}_{dd}^{p-2}$. \begin{itemize} \item To see that we have defined a $(k\otimes_{{\mathbb F}_p} E)[u]/u^{ep}$-module, we must check that all of the exponents of $u$ in the definition are nonnegative; so we need to check the inequalities $n_{\tau_i}\ge 0$, $n'_{\tau_i}\ge 0$, and if $\lambda_{\tau_i}\ne 0$ we need to verify that $n_{\tau_i}\ge\beta_{\tau_{i+1}}$. These follow from (1), (7) and (10) above. \item To see that $u^{e(p-2)}\mathcal{P}\subset\mathcal{P}_{p-2}$, we need to verify that $(p-2)e\ge n_{\tau_i}$, that $(p-2)e\ge n'_{\tau_i}$, and if $\lambda_{\tau_i}\ne 0$ then $(p-2)e\ge n'_{\tau_i}+\beta_{\tau_{i+1}}$. These follow from (1), (7) and (11). \item To see that $N(\mathcal{P})\subset u\mathcal{P}$, we need to check that if $\lambda_{\tau_{i-1}}\ne 0$ then $i_{\tau_{i-1}}>\alpha_{\tau_i}$. This follows from (1), (5) and (6).
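In slightly more detail (a sketch of that deduction): if $\lambda_{\tau_{i-1}}\ne 0$ then $\tau_{i}\in J$ by (1), so that (6) gives \[i_{\tau_{i-1}}-\alpha_{\tau_{i}}=e-\beta_{\tau_{i}}>e-\frac{e(p-2)}{p-1}=\frac{e}{p-1}>0\]by (5), and hence the exponent $pi_{\tau_{i-1}}-p\alpha_{\tau_{i}}$ appearing in $N(f'_{\tau_{i}})$ is strictly positive.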
\item To see that $u^e N(\mathcal{P}_{p-2})\subset \mathcal{P}_{p-2}$, we note that by the Leibniz rule we have \[N(u^{n_{\tau_i}}e'_{\tau_i})=-n_{\tau_i}u^{n_{\tau_i}}e'_{\tau_i}\in \mathcal{P}_{p-2}\]and \begin{align*}N(u^{n'_{\tau_{i}}}f'_{\tau_i}+\lambda_{\tau_i}u^{n_{\tau_{i}}-\beta_{\tau_{i+1}}}e'_{\tau_i})&= -\frac{(b)_{i-1}}{(a)_{i-1}}i_{\tau_{i-1}}\lambda_{\tau_{i-1}} u^{n'_{\tau_{i}}+pi_{\tau_{i-1}}-p \alpha_{\tau_{i}}}e'_{\tau_{i}} \\&-n'_{\tau_i}u^{n'_{\tau_i}}f'_{\tau_i}-\lambda_{\tau_i}(n_{\tau_{i}}-\beta_{\tau_{i+1}})u^{n_{\tau_{i}}-\beta_{\tau_{i+1}}}e'_{\tau_i}.\end{align*}By (1) and (12), the first term on the right hand side is contained in $\mathcal{P}_{p-2}$. By (9), the remaining terms are equal to \[-n'_{\tau_i}(u^{n'_{\tau_i}}f'_{\tau_i}+\lambda_{\tau_i}u^{n_{\tau_{i}}-\beta_{\tau_{i+1}}}e'_{\tau_i})-\lambda_{\tau_i}i_{\tau_{i}}u^{n_{\tau_{i}}-\beta_{\tau_{i+1}}}e'_{\tau_i}.\]The first term is contained in $\mathcal{P}_{p-2}$ by definition, and if we multiply the second term by $u^e$, we obtain \[-\lambda_{\tau_i}i_{\tau_{i}}u^{n_{\tau_{i}}+e-\beta_{\tau_{i+1}}}e'_{\tau_i},\]which is contained in $\mathcal{P}_{p-2}$ by (5). \item To see that $\phi_{p-2}(u^eN(x))=N(\phi_{p-2}(x))$ for all $x\in \mathcal{P}_{p-2}$, we recall that $\phi_{p-2}(u^ey)=0$ if $y\in\mathcal{P}_{p-2}$.
Thus \[\phi_{p-2}(u^eN(u^{n_{\tau_i}}e'_{\tau_i}))=0=N((a^{-1})_ie'_{\tau_{i+1}})=N(\phi_{p-2}(u^{n_{\tau_i}}e'_{\tau_i})).\] We also have, using (1), (6) and the calculation of the previous bullet point, \begin{align*}\phi_{p-2}(u^eN(u^{n'_{\tau_{i}}}f'_{\tau_i}+\lambda_{\tau_i}u^{n_{\tau_{i}}-\beta_{\tau_{i+1}}}e'_{\tau_i}))&=\phi_{p-2}(-\lambda_{\tau_i}i_{\tau_{i}}u^{n_{\tau_{i}}+e-\beta_{\tau_{i+1}}}e'_{\tau_i})\\ &=-\lambda_{\tau_i}i_{\tau_i}u^{p(e-\beta_{\tau_{i+1}})}(a^{-1})_{i}e'_{\tau_{i+1}}\\ &=-\lambda_{\tau_i}i_{\tau_i}u^{p(i_{\tau_i}-\alpha_{\tau_{i+1}})}(a^{-1})_{i}e'_{\tau_{i+1}}\\ &=(b^{-1})_{i}N(f'_{\tau_{i+1}})\\&=N(\phi_{p-2}(u^{n'_{\tau_{i}}}f'_{\tau_i}+\lambda_{\tau_i}u^{n_{\tau_{i}}-\beta_{\tau_{i+1}}}e'_{\tau_i})).\end{align*} \item That $\mathcal{P}_{p-2}$ is $\hat{g}$-stable follows directly from the definitions of $\beta_{\tau_i}$, $n_{\tau_i}$ and $n'_{\tau_i}$. \item That the action of $\hat{g}$ commutes with $\phi_{p-2}$ follows from the definition of $n_{\tau_i}$ and $n'_{\tau_i}$. \item That the action of $\hat{g}$ commutes with $N$ follows from (6). \end{itemize} We now verify the claimed properties of $f_{\mathcal{M}}$ and $f_{\mathcal{N}}$. \begin{itemize} \item In order that the maps $f_{\mathcal{M}}$ and $f_{\mathcal{N}}$ be defined, it is necessary that the exponents of $u$ in their definition be non-negative. This follows from (4). \item To see that $f_{\mathcal{M}}(\mathcal{M}_{p-2})\subset\mathcal{P}_{p-2}$ and $f_{\mathcal{N}}(\mathcal{N}_{p-2})\subset\mathcal{P}_{p-2}$, we compute as follows. \begin{align*}f_{\mathcal{M}}(u^{(p-3)e+j_{\tau_i}}e_{\tau_i})&=u^{(p-3)e+j_{\tau_i}-n_{\tau_i}-p\alpha_{\tau_i}\delta_J(\tau_i)}u^{n_{\tau_i}}e_{\tau_i}'\\&=u^{-\delta_J(\tau_{i+1})\alpha_{\tau_{i+1}}}(u^{n_{\tau_i}}e_{\tau_i}')\end{align*}by (3) and the definition of $j_{\tau_i}$.
Similarly, by using (1), (3) and (6), we find that \[f_{\mathcal{M}}(u^{(p-2)e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{(p-3)e+i_{\tau_i}}e_{\tau_i})=u^{-\beta_{\tau_{i+1}}\delta_{J^c}(\tau_{i+1})}(u^{n'_{\tau_i}}f_{\tau_i}'+\lambda_{\tau_i}u^{n_{\tau_i}-\beta_{\tau_{i+1}}}e_{\tau_i}').\]In the same way, using (1) and the definitions of $n_{\tau_i}$ and $n'_{\tau_i}$, \[f_{\mathcal{N}}(u^{e(p-2-b_{\tau_i}\delta_{J^c}(\tau_i))}E_{\tau_i})=u^{\delta_{J^c}(\tau_{i+1})\alpha_{\tau_{i+1}}}(u^{n_{\tau_i}}e_{\tau_i}'),\] \[f_{\mathcal{N}}(u^{e(p-2-b_{\tau_i}\delta_J(\tau_i))}F_{\tau_i})=u^{\delta_{J}(\tau_{i+1})\beta_{\tau_{i+1}}}(u^{n'_{\tau_i}}f'_{\tau_i}+\lambda_{\tau_i}u^{n_{\tau_i}-\beta_{\tau_{i+1}}}e'_{\tau_{i}})-\lambda_{\tau_i}u^{n_{\tau_i}}e'_{\tau_i}. \]The result then follows from (4). \item To check that $f_{\mathcal{M}}$ and $f_{\mathcal{N}}$ commute with $\phi_{p-2}$, we again compute directly. We have \begin{align*}f_{\mathcal{M}}(\phi_{p-2}(u^{(p-3)e+j_{\tau_i}}e_{\tau_i}))&=f_{\mathcal{M}}((a^{-1})_ie_{\tau_{i+1}})\\&=(a^{-1})_iu^{-p\alpha_{\tau_{i+1}}\delta_J(\tau_{i+1})}e'_{\tau_{i+1}},\end{align*}while \begin{align*}\phi_{p-2}(f_{\mathcal{M}}(u^{(p-3)e+j_{\tau_i}}e_{\tau_i}))&= \phi_{p-2}(u^{-\delta_J(\tau_{i+1})\alpha_{\tau_{i+1}}}(u^{n_{\tau_i}}e_{\tau_i}'))\\&=(a^{-1})_iu^{-p\alpha_{\tau_{i+1}}\delta_J(\tau_{i+1})}e'_{\tau_{i+1}}.\end{align*}Similarly, we find \begin{align*}f_{\mathcal{M}}(\phi_{p-2}(u^{(p-2)e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{(p-3)e+i_{\tau_i}}e_{\tau_i}))&=f_{\mathcal{M}}((b^{-1})_if_{\tau_{i+1}})\\&=(b^{-1})_iu^{-p\beta_{\tau_{i+1}}\delta_{J^c}(\tau_{i+1})}f'_{\tau_{i+1}},\end{align*} \begin{align*}\phi_{p-2}(f_{\mathcal{M}}(u^{(p-2)e-j_{\tau_i}}f_{\tau_i}+\lambda_{\tau_i}u^{(p-3)e+i_{\tau_i}}e_{\tau_i}))&=
\phi_{p-2}(u^{-\beta_{\tau_{i+1}}\delta_{J^c}(\tau_{i+1})}(u^{n'_{\tau_i}}f_{\tau_i}'+\lambda_{\tau_i}u^{n_{\tau_i}-\beta_{\tau_{i+1}}}e_{\tau_i}'))\\&=(b^{-1})_iu^{-p\beta_{\tau_{i+1}}\delta_{J^c}(\tau_{i+1})}f'_{\tau_{i+1}},\end{align*} \begin{align*}f_{\mathcal{N}}(\phi_{p-2}(u^{e(p-2-b_{\tau_i}\delta_{J^c}(\tau_i))}E_{\tau_i}))&=f_{\mathcal{N}}((a^{-1})_iE_{\tau_{i+1}})\\&=(a^{-1})_iu^{p\delta_{J^c}(\tau_{i+1})\alpha_{\tau_{i+1}}}e'_{\tau_{i+1}}, \end{align*} \begin{align*}\phi_{p-2}(f_{\mathcal{N}}(u^{e(p-2-b_{\tau_i}\delta_{J^c}(\tau_i))}E_{\tau_i}))&= \phi_{p-2}(u^{\delta_{J^c}(\tau_{i+1})\alpha_{\tau_{i+1}}}(u^{n_{\tau_i}}e_{\tau_i}'))\\&=(a^{-1})_iu^{p\delta_{J^c}(\tau_{i+1})\alpha_{\tau_{i+1}}}e'_{\tau_{i+1}},\end{align*} \begin{align*}f_{\mathcal{N}}(\phi_{p-2}(u^{e(p-2-b_{\tau_i}\delta_J(\tau_i))}F_{\tau_i}))&=f_{\mathcal{N}}((b^{-1})_i(F_{\tau_{i+1}}-\lambda'_{\tau_{i}}E_{\tau_{i+1}}))\\&=(b^{-1})_i(u^{p\delta_{J}(\tau_{i+1})\beta_{\tau_{i+1}}}f'_{\tau_{i+1}}-\lambda'_{\tau_i}u^{p\delta_{J^c}(\tau_{i+1})\alpha_{\tau_{i+1}}}e'_{\tau_{i+1}}), \end{align*} \begin{align*} \phi_{p-2}(f_{\mathcal{N}}(u^{e(p-2-b_{\tau_i}\delta_J(\tau_i))}F_{\tau_i}))&=\phi_{p-2}(u^{\delta_{J}(\tau_{i+1})\beta_{\tau_{i+1}}}(u^{n'_{\tau_i}}f'_{\tau_i}+\lambda_{\tau_i}u^{n_{\tau_i}-\beta_{\tau_{i+1}}}e'_{\tau_{i}})-\lambda_{\tau_i}u^{n_{\tau_i}}e'_{\tau_i})\\&=u^{p\delta_{J}(\tau_{i+1})\beta_{\tau_{i+1}}}(b^{-1})_if'_{\tau_{i+1}}-\lambda_{\tau_i}(a^{-1})_ie'_{\tau_{i+1}}.\end{align*} The result follows, because $\lambda_{\tau_i}(a^{-1})_i=\lambda'_{\tau_i}(b^{-1})_i$ by (\ref{eqn:341}), and if $\lambda_{\tau_i}\ne 0$ then $\delta_{J^c}(\tau_{i+1})=0$ by (1). \item To check that $f_{\mathcal{M}}$ and $f_{\mathcal{N}}$ commute with $N$, we again compute directly.
We have \begin{align*}N(f_{\mathcal{M}}(e_{\tau_i}))&=N(u^{-p\alpha_{\tau_i}\delta_{J}(\tau_i)}e'_{\tau_i})\\&=-p\alpha_{\tau_i}\delta_{J}(\tau_i)u^{-p\alpha_{\tau_i}\delta_{J}(\tau_i)}e'_{\tau_i}\\&=0\\&=f_{\mathcal{M}}(N(e_{\tau_i})). \end{align*} Similarly,\begin{align*} N(f_{\mathcal{M}}(f_{\tau_i}))&=N(u^{-p\beta_{\tau_i}\delta_{J^c}(\tau_i)}f'_{\tau_i})\\&=u^{-p\beta_{\tau_i}\delta_{J^c}(\tau_i)}N(f'_{\tau_i})\\&= -\frac{(b)_{i-1}}{(a)_{i-1}}i_{\tau_{i-1}}\lambda_{\tau_{i-1}}u^{pi_{\tau_{i-1}}-p \alpha_{\tau_{i}}-p\beta_{\tau_i}\delta_{J^c}(\tau_i)}e'_{\tau_{i}},\end{align*}while \begin{align*}f_{\mathcal{M}}(N(f_{\tau_i}))&=f_{\mathcal{M}}\left(-\frac{(b)_{i-1}}{(a)_{i-1}}i_{\tau_{i-1}}\lambda_{\tau_{i-1}}u^{pi_{\tau_{i-1}}}e_{\tau_{i}} \right)\\&=-\frac{(b)_{i-1}}{(a)_{i-1}}i_{\tau_{i-1}}\lambda_{\tau_{i-1}}u^{pi_{\tau_{i-1}}-p\alpha_{\tau_{i}}\delta_{J}(\tau_{i})}e'_{\tau_{i}}, \end{align*}and these two expressions are equal by (1). In the same fashion, we find that \begin{align*}N(f_{\mathcal{N}}(E_{\tau_i}))=f_{\mathcal{N}}(N(E_{\tau_i}))=0, \end{align*}while \begin{align*}f_{\mathcal{N}}(N(F_{\tau_i}))=f_{\mathcal{N}}(0)=0,\end{align*}and \begin{align*}N(f_{\mathcal{N}}(F_{\tau_i}))=N(u^{p\beta_{\tau_i}\delta_J(\tau_i)}f'_{\tau_i})= -\frac{(b)_{i-1}}{(a)_{i-1}}i_{\tau_{i-1}}\lambda_{\tau_{i-1}}u^{pi_{\tau_{i-1}}-p \alpha_{\tau_{i}}+p\beta_{\tau_i}\delta_J(\tau_i)}e'_{\tau_{i}}. \end{align*}If $\tau_i\notin J$, this expression is $0$ by (1). On the other hand, if $\tau_i\in J$, then the exponent of $u$ in this expression is $p(i_{\tau_{i-1}}-\alpha_{\tau_i}+\beta_{\tau_i})=pe$ by (6), so the expression is again $0$, as required. \item Finally, that $f_{\mathcal{M}}$ and $f_{\mathcal{N}}$ commute with $\hat{g}$ follows directly from the definitions of $\alpha_{\tau_i}$ and $\beta_{\tau_i}$. \end{itemize} It is clear from the construction that the maps $f_{\mathcal{M}''}$, $f_{\mathcal{M}'}$, $f_{\mathcal{N}''}$ and $f_{\mathcal{N}'}$ are nonzero.
Since $T_{st}^*$ is faithful, the maps $T_{st}^{*}(f_{\mathcal{M}''})$, $T_{st}^{*}(f_{\mathcal{M}'})$, $T_{st}^{*}(f_{\mathcal{N}''})$ and $T_{st}^{*}(f_{\mathcal{N}'})$ are all nonzero, and are thus isomorphisms (as they are maps between one-dimensional $E$-vector spaces). The result follows.\end{proof} \subsection{Weights and types}\label{reducibletypes}We recall some definitions and results from \cite{dia05}. Fix, as ever, $\rho\sim \bigl(\begin{smallmatrix}\psi_1&*\\0&\psi_2\end{smallmatrix}\bigr)$. We make the following definitions: \begin{defn}A weight $\sigma_{\vec{a},\vec{b}}$ is \emph{compatible} with $\rho$ (via $J$) if and only if there exists a subset $J\subset S$ so that $$\psi_1|_{I_{K_0}}=\prod_{\tau\in S}\omega_\tau^{a_\tau}\prod_{\tau\in J}\omega_\tau^{b_\tau},\ \psi_2|_{I_{K_0}}=\prod_{\tau\in S}\omega_\tau^{a_\tau}\prod_{\tau\notin J}\omega_\tau^{b_\tau}. $$ \end{defn}Suppose that these equations hold. We define $$c_{\tau_i}=\left\{\begin{array}{ll} {b_{\tau_i}}-\delta_J(\tau_{i+1})&\text{ if }\tau_i\in J \\ p-b_{\tau_i}-\delta_J(\tau_{i+1})&\text{ if }\tau_i\notin J \\ \end{array}\right.$$where $\delta_J$ is the characteristic function of $J$. Define a character $\chi_{\vec{a},\vec{b},J}$ by $$\chi_{\vec{a},\vec{b},J}=\prod_{\tau_i\in S}\omega_{\tau_i}^{a_{\tau_i}}\prod_{\tau_i\notin J}\omega_{\tau_i}^{b_{\tau_i}-p}.$$Suppose that the $c_\tau$ are not all equal to either $0$ or $p-1$.
Then we define a representation $I_{\vec{a},\vec{b},J}$ of $\operatorname{GL}_2(k)$ and a type $\tau_{\vec{a},\vec{b},J}$ by $$I_{\vec{a},\vec{b},J}=I\left(\tilde{\chi}_{\vec{a},\vec{b},J},\tilde{\chi}_{\vec{a},\vec{b},J}\prod_{\tau\in S}\tilde{\omega}_\tau^{c_\tau}\right)$$ $$\tau_{\vec{a},\vec{b},J}=\tilde{\chi}_{\vec{a},\vec{b},J}\oplus\tilde{\chi}_{\vec{a},\vec{b},J}\prod_{\tau\in S}\tilde{\omega}_\tau^{c_\tau}.$$Note that if $\rho$ is compatible with $\sigma_{\vec{a},\vec{b}}$, then a lift of type $J$ is precisely a lift of type $\tau_{\vec{a},\vec{b},J}$ with specified determinant. \begin{prop}\label{typeswts}Suppose that $\sigma_{\vec{a},\vec{b}}$ is regular. If $\rho$ is compatible with $\sigma_{\vec{a},\vec{b}}$ via $J$, then $\rho$ is compatible with precisely one of the Jordan-H\"{o}lder factors of the reduction mod $p$ of $I_{\vec{a},\vec{b},J}$, and that factor is isomorphic to $\sigma_{\vec{a},\vec{b}}$. \end{prop} \begin{proof}We use the explicit computations of \cite{dia05}. Firstly, note that reduction mod $p$ and the notion of compatibility both commute with twisting, so we may replace $\rho$ by $\rho\otimes\chi_{\vec{a},\vec{b},J}^{-1}$.
By Proposition 1.1 of \cite{dia05}, we have $\overline{I}_{\vec{a},\vec{b},J}\sim\bigoplus_{K\subset S}\sigma_{\vec{a}_K,\vec{b}_K}$ where $\vec{a}_K$ and $\vec{b}_K$ are defined as follows: $$a_{K,\tau_i}=\left\{\begin{array}{ll} 0&\text{ if }\tau_i\in K \\ c_{\tau_i}+\delta_K(\tau_{i+1})&\text{ if }\tau_i\notin K \\ \end{array}\right.$$ $$b_{K,\tau_i}=\left\{\begin{array}{ll} {c_{\tau_i}}+\delta_K(\tau_{i+1})&\text{ if }\tau_i\in K \\ p-c_{\tau_i}-\delta_K(\tau_{i+1})&\text{ if }\tau_i\notin K \\ \end{array}\right.$$ By the definition of the $c_\tau$, we see at once that $\sigma_{\vec{a}_J,\vec{b}_J}=\sigma_{\vec{a},\vec{b}}$, and in fact $$\psi_1|_{I_{K_0}}=\prod_{\tau\in S}\omega_\tau^{a_{J,\tau}}\prod_{\tau\in J}\omega_\tau^{b_{J,\tau}},\ \psi_2|_{I_{K_0}}=\prod_{\tau\in S}\omega_\tau^{a_{J,\tau}}\prod_{\tau\notin J}\omega_\tau^{b_{J,\tau}}.$$ If $\rho$ is compatible with another Jordan-H\"{o}lder factor, there are subsets $J'$, $K'\subset S$, $J'\neq J$ such that $$\psi_1|_{I_{K_0}}=\prod_{\tau\in S}\omega_\tau^{a_{J,\tau}}\prod_{\tau\in J}\omega_\tau^{b_{J,\tau}}=\prod_{\tau\in S}\omega_\tau^{a_{J',\tau}}\prod_{\tau\in K'}\omega_\tau^{b_{J',\tau}},$$ $$\psi_2|_{I_{K_0}}=\prod_{\tau\in S}\omega_\tau^{a_{J,\tau}}\prod_{\tau\notin J}\omega_\tau^{b_{J,\tau}}=\prod_{\tau\in S}\omega_\tau^{a_{J',\tau}}\prod_{\tau\notin K'}\omega_\tau^{b_{J',\tau}}.$$ Using the formulae above, the first equation simplifies to $$\prod_{\tau_i\in S}\omega_{\tau_i}^{c_{\tau_i}+\delta_J(\tau_{i+1})}=\prod_{\tau_i\in (J'\cap K')\cup(J'^c\cap K'^c)}\omega_{\tau_i}^{c_{\tau_i}+\delta_{J'}(\tau_{i+1})}\prod_{\tau_{i+1}\in K'\cap J'^c}\omega_{\tau_i}.$$ By the assumption that $\sigma_{\vec{a},\vec{b}}$ is regular, we have $1\leq c_{\tau_i}\leq p-2$ and $2\leq c_{\tau_i}+\delta_{J}(\tau_{i+1})\leq p-2$ for each $i$. Then we see that we can equate the exponents of $\omega_{\tau_i}$ on each side of each equation, and we easily obtain $(J'\cap K')\cup(J'^c\cap K'^c)=S$, whence $J'=K'$.
But then the equation becomes $$\prod_{\tau_i\in S}\omega_{\tau_i}^{\delta_J(\tau_{i+1})}=\prod_{\tau_i\in S}\omega_{\tau_i}^{\delta_{J'}(\tau_{i+1})},$$whence $J=J'$, a contradiction. \end{proof} \begin{rem}\label{weaklyregular}Note that it follows from the formulae in the proof of Proposition \ref{typeswts} that if $\sigma_{\vec{a},\vec{b}}$ is regular, then all the Jordan-H\"{o}lder factors of the reduction mod $p$ of $I_{\vec{a},\vec{b},J}$ are weakly regular.\end{rem} \begin{prop}\label{jhcompat}Let $\theta_{1}$, $\theta_{2}$ be two tamely ramified characters of $I_{K_{0}}$ which extend to $G_{K_{0}}$. If $\rho$ has a potentially Barsotti-Tate lift (with determinant equal to a finite order character times the $p$-adic cyclotomic character) of type $\theta_1\oplus\theta_2$, then $\rho$ is compatible with some weight occurring in the mod $p$ reduction of $I(\theta_1,\theta_2)$.\end{prop} \begin{proof}This follows easily from consideration of the possible Breuil modules corresponding to the $\pi_{L}$-torsion in the $p$-divisible group of such a lift (where the corresponding Galois representation is valued in $\mathcal{O}_{L}$, and $\pi_{L}$ is a uniformiser of $\mathcal{O}_{L}$). The case $\theta_1=\theta_2$ is easier, so from now on we assume that $\theta_1\neq\theta_2$. The $\pi_L$-torsion must contain a closed sub-group-scheme (with descent data) with generic fibre $\psi_1$. Suppose that this group scheme corresponds to a Breuil module with descent data $\mathcal{M}$. Then we can choose a basis so that $\mathcal{M}$ takes the following form: $$\mathcal{M}^{\tau_i}=E[u]/u^{ep}\cdot x_{\tau_i} $$ $$\mathcal{M}_1^{\tau_i}=E[u]/u^{ep}\cdot u^{r_i}x_{\tau_i}$$ $$\phi_1(u^{r_i} x_{\tau_i})=(a^{-1})_ix_{\tau_{i+1}}$$ $$\hat{g}(x_{\tau_i})=\theta^i(g)x_{\tau_i}$$ Here $0\leq r_i\leq e$ is an integer, and $\theta^i:\operatorname{Gal}(K/K_0)\to E^\times$ is a character. 
Now, by Corollary 5.2 of \cite{geesavquatalg}, because the lift is of type $\theta_1\oplus\theta_2$, we must have $\theta^i=\theta_1$ or $\theta_2$ for each $i$ (here and below we denote the reduction mod $p$ of the $\theta_{i}$ by the same symbol). Define subsets $Y$, $Z$ by $$Y=\{\tau_i\in S| \theta^i\neq\theta^{i+1}\},$$ $$Z=\{\tau_i\in S| \theta^i=\theta_1\}.$$ Because $\theta_1\neq\theta_2$, if $i\in Y$ then the compatibility of the $\phi_1$- and $\operatorname{Gal}(K/K_{0})$-actions determines $r_i$ uniquely, and if $i\in Y^c$ then we can take either $r_i=0$ or $r_i=e$. Having written down all possible $\mathcal{M}$, we now need to determine their generic fibres. This is a straightforward calculation using Example 3.7 of \cite{sav06}. Without loss of generality, we may twist and assume that $\theta_1=\prod_{\tau_i\in S}\omega_{\tau_i}^{c_i}$, $\theta_2=1$, with $0\leq c_i\leq p-1$. Then one easily obtains $$\psi_1|_{I_{K_0}}=\omega_{\tau_1}^{m_1+n_1}\prod_{\tau_i\in Y^c,\, r_i=e}\omega_{\tau_i}\prod_{\tau_i\in Y\cap Z}\omega_{\tau_i},$$where $$m_1=\left\{\begin{array}{ll} 0&\text{ if }\tau_1\notin Z \\ c_1+pc_r+\dots+p^{r-1}c_2&\text{ if }\tau_1\in Z \\ \end{array}\right.$$ \begin{align*}n_1=&\frac{1}{e}\sum_{\tau_i\in Y\cap Z^c}p^{r-i}(p^ic_1+p^{i+1}c_r+\dots+p^rc_i+c_{i+1}+\dots+p^{i-1}c_2)\\ &-\frac{1}{e}\sum_{\tau_i\in Y\cap Z}p^{r-i}(p^ic_1+p^{i+1}c_r+\dots+p^rc_i+c_{i+1}+\dots+p^{i-1}c_2).\end{align*}Now, consider the coefficient of $c_1$ in $n_1$. The sets $Y\cap Z^c$ and $Y\cap Z$ have equal cardinality, so this coefficient is in fact zero. Thus the coefficient of $c_1$ in $m_1+n_1$ is $1$ if $\tau_1\in Z$, and $0$ otherwise. By cyclic symmetry, we obtain $$\psi_1|_{I_{K_0}}=\prod_{\tau_i\in Z}\omega_{\tau_i}^{c_i}\prod_{\tau_{i}\in X}\omega_{\tau_i},$$ where $$X=\{\tau_i\in Y^c|r_i=e\}\cup(Y\cap Z).$$ We wish to show that $\rho$ is compatible with some weight in the reduction mod $p$ of $I(\theta_1,\theta_2)$. 
It is easy to check that the determinant of $\rho$ is correct, so it suffices to examine $\psi_1$; in the notation of Proposition \ref{typeswts}, we see that $\rho$ is compatible with $\sigma_{\vec{a}_K,\vec{b}_K}$ via $L$ if and only if $$\psi_1|_{I_{K_0}}=\prod_{\tau_i\in (K^{c}\cap L)\cup(K\cap L^c)}\omega_{\tau_i}^{c_i+\delta_{K^{c}}(\tau_{i+1})}\prod_{\tau_i\in S}\omega_{\tau_i}^{\delta_{K\cap L}(\tau_{i+1})}$$(note that our convention that $\theta_{2}=1$ causes $K^{c}$ to appear in this formula rather than $K$). The result now follows upon taking, for example, $$K=\{\tau_i|\tau_{i-1}\in(X^c\cap Y^c\cap Z)\cup(X\cap Y^c\cap Z^c)\}$$ and $$L=(K^c\cap Z)\cup(K\cap Z^c).$$\end{proof} \begin{prop}\label{prop:liftimpliesmodel} Suppose that $\sigma_{\vec{a},\vec{b}}$ is regular. If $\rho$ is compatible with $\sigma_{\vec{a},\vec{b}}$ via $J$, and $\rho$ has a lift of type $J$, then $\rho$ has a model of type $J$. \end{prop} \begin{proof}\label{pf:liftimpliesmodel}This follows from similar considerations to those involved in the proof of Proposition \ref{jhcompat}. Consider the $\pi_{L}$-torsion in the $p$-divisible group corresponding to the lift of type $J$. It contains a closed sub-group-scheme (with descent data) with generic fibre $\psi_1$. Suppose that this group scheme corresponds to a Breuil module with descent data $\mathcal{M}$. 
Then we can choose a basis so that $\mathcal{M}$ takes the following form: $$\mathcal{M}^{\tau_i}=E[u]/u^{ep}\cdot x_{\tau_i} $$ $$\mathcal{M}_1^{\tau_i}=E[u]/u^{ep}\cdot u^{r_i}x_{\tau_i}$$ $$\phi_1(u^{r_i} x_{\tau_i})=(a^{-1})_ix_{\tau_{i+1}}$$ $$\hat{g}(x_{\tau_i})=\theta^i(g)x_{\tau_i}$$ Again, by Corollary 5.2 of \cite{geesavquatalg} and the definition of a lift of type $J$, for each $i$ we must have $\theta^i=\theta_1$ or $\theta^{i}=\theta_{2}$ where $$\theta_{1}=\prod_{\tau\in S}\omega_{\tau}^{a_{\tau}}\prod_{\tau\in J}\omega_{\tau}^{b_{\tau}-p},$$ $$\theta_{2}=\prod_{\tau\in S}\omega_{\tau}^{a_{\tau}}\prod_{\tau\in J^{c}}\omega_{\tau}^{b_{\tau}-p}.$$ Note that $\psi_1|_{I_{K_0}}=\theta_{1}\prod_{\tau_{i}\in S}\omega_{\tau_{i}}^{\delta_{J}(\tau_{i+1})}$. Without loss of generality, we can twist so that $\theta_1=\prod_{\tau_i\in S}\omega_{\tau_i}^{c_i}$, $\theta_2=1$, with $0\leq c_i\leq p-1$. Then we obtain $$\theta_{1}=\theta_{1}\theta_{2}^{-1}=\prod_{\tau_{i}\in J}\omega_{\tau_{i}}^{b_{\tau_{i}}-\delta_{J}(\tau_{i+1})}\prod_{\tau_{i}\in J^{c}}\omega_{\tau_{i}}^{p-b_{\tau_{i}}-\delta_{J}(\tau_{i+1})}. $$Since $0\leq c_{i}\leq p-1$ and $2\leq b_{\tau_{i}}\leq p-2$, we obtain $$c_{i}=\left\{\begin{array}{ll} {b_{\tau_i}}-\delta_J(\tau_{i+1})&\text{ if }\tau_i\in J \\ p-b_{\tau_i}-\delta_J(\tau_{i+1})&\text{ if }\tau_i\notin J \\ \end{array}\right.$$Note that this implies that $2\leq c_{i}+\delta_{J}(\tau_{i+1})\leq p-2$. Now, using the same definitions of $X$, $Y$ and $Z$ as in the proof of Proposition \ref{jhcompat}, we can compare the two expressions we have for $\psi_{1}|_{I_{K_0}}$ to obtain $$\prod_{\tau_{i}\in S}\omega_{\tau_{i}}^{c_{i}+\delta_{J}(\tau_{i+1})}=\prod_{\tau_i\in Z}\omega_{\tau_i}^{c_i}\prod_{\tau_{i}\in X}\omega_{\tau_i}. $$ Since $2\leq c_{i}+\delta_{J}(\tau_{i+1})\leq p-2$, this gives $Z=S$, and $X=\{\tau_{i}|\tau_{i+1}\in J\}$. 
Since $Z=S$, we have $Y=\emptyset$, and thus the fact that $X=\{\tau_{i}|\tau_{i+1}\in J\}$ means that $\mathcal{M}$ is in fact of class $J$. It is then clear that the $\pi_{L}$-torsion is a model of $\rho$ of type $J$, as required. \end{proof} \section{Local analysis - the irreducible case}\label{irreducible}\subsection{}We now prove the analogues of some of the results of section \ref{reducible} in the case where $\rho$ is irreducible. We assume that $\rho$ is irreducible from now on. In addition to the assumptions made at the beginning of section \ref{reducible}, we now also assume that $\F_{p^2}\subset E$, where $\rho:G_{K_0}\to\operatorname{GL}_2(E)$. Let $k'$ be the (unique) quadratic extension of $k$. Label the embeddings $k'\hookrightarrow\overline{\F}_p$ as $S'=\{\sigma_{i}\}$, $0\leq i\leq 2r-1$, so that $\sigma_{i+1}=\sigma_{i}\circ\phi^{-1}$, and $\sigma_{i}|_{k}=\tau_{\pi(i)}$, where $\pi:\mathbb{Z}/2r\mathbb{Z}\to\mathbb{Z}/r\mathbb{Z}$ is the natural surjection. For simplicity of notation we will sometimes refer to the elements of $S'$ as elements of $\mathbb{Z}/2r\mathbb{Z}$, and the elements of $S$ as elements of $\mathbb{Z}/r\mathbb{Z}$. Recall that we say that a subset $H\subset S'$ is a \emph{full subset} if $|H|=|\pi(H)|=r$. \begin{defn}We say that $\rho$ is \emph{compatible} with a weight $\sigma_{\vec{a},\vec{b}}$ (via $J$) if there exists a full subset $J\subset S'$ such that $$\rho|_{I_{K'_0}}\sim\prod_{\sigma\in S'}\omega_\sigma^{a_\sigma}\left(\begin{array}{cc} \prod_{\sigma\in J}\omega_\sigma^{b_\sigma} & 0 \\ 0 & \prod_{\sigma\notin J}\omega_\sigma^{b_\sigma} \\ \end{array}\right),$$where we write $a_{\sigma}$, $b_{\sigma}$ for $a_{\pi(\sigma)}$, $b_{\pi(\sigma)}$ respectively.\end{defn} Note that the predicted set of weights $W(\overline{\rho})$ is just the set of compatible weights; this is one way in which the irreducible case is simpler than the reducible one. 
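As an illustrative aside (not part of the argument), the full subsets of $S'$ are easy to enumerate by brute force for small $r$: since $|H|=|\pi(H)|=r$ forces $\pi$ to be injective on $H$, a full subset picks exactly one element from each fibre $\{i,i+r\}$ of $\pi$, so there are $2^r$ of them. The following sketch checks this.

```python
from itertools import combinations

def full_subsets(r):
    """Enumerate the full subsets of S' = Z/2rZ: the H with |H| = r
    and |pi(H)| = r, where pi: Z/2rZ -> Z/rZ is reduction mod r."""
    return [set(H) for H in combinations(range(2 * r), r)
            if len({i % r for i in H}) == r]

# H is full exactly when it meets each fibre {i, i + r} of pi in a
# single element, so there are 2^r full subsets.
for r in (1, 2, 3, 4):
    subsets = full_subsets(r)
    assert len(subsets) == 2 ** r
    assert all(len(H & {i, i + r}) == 1 for H in subsets for i in range(r))
print(sorted(sorted(H) for H in full_subsets(2)))
```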
Given a regular weight $\sigma_{\vec{a},\vec{b}}$ and a full subset $J\subset S'$, we wish to define a representation and a type. Let $K_{J}=\pi(J\cap\{1,\dots,r\})$. Then let $$c_{i}=\left\{\begin{array}{llll} {b_{i}}+\delta_{K_{J}}({1})-1&\text{ if }0=i\in K_{J} \\ p-b_{i}+\delta_{K_{J}}({1})-1&\text{ if }0=i\notin K_{J} \\ {b_{i}}-\delta_{K_{J}}({i+1})&\text{ if }0\neq i\in K_{J} \\ p-b_{i}-\delta_{K_{J}}({i+1})&\text{ if }0\neq i\notin K_{J} \\ \end{array}\right.$$ Define a character $$\psi_{\vec{a},\vec{b},J}=\omega_{\tau_{0}}^{-\delta_{K_{J}}(1)}\prod_{\tau\in S}\omega_{\tau}^{a_{\tau}}\prod_{\tau\notin {K}_{J}}\omega_{\tau}^{b_{\tau}-p}.$$Then we define $$I'_{\vec{a},\vec{b},J}=\Theta\left(\tilde{\psi}_{\vec{a},\vec{b},J}\tilde{\omega}_{\sigma_{r}}\prod_{i=1}^{r}\tilde{\omega}_{\sigma_{i}}^{c_{i}}\right)$$ $$\tau'_{\vec{a},\vec{b},J}=\tilde{\psi}_{\vec{a},\vec{b},J}\tilde{\omega}_{\sigma_{r}}\prod_{i=1}^{r}\tilde{\omega}_{\sigma_{i}}^{c_{i}}\oplus\left(\tilde{\psi}_{\vec{a},\vec{b},J}\tilde{\omega}_{\sigma_{r}}\prod_{i=1}^{r}\tilde{\omega}_{\sigma_{i}}^{c_{i}}\right)^{p^r}$$ \begin{prop}\label{typeswts2}Recall that $\sigma_{\vec{a},\vec{b}}$ is regular. If $\rho$ is compatible with $\sigma_{\vec{a},\vec{b}}$ via $J$, then $\rho$ is compatible with precisely one of the Jordan-H\"{o}lder factors of the reduction mod $p$ of $I'_{\vec{a},\vec{b},J}$, and that factor is isomorphic to $\sigma_{\vec{a},\vec{b}}$. \end{prop} \begin{proof}We may twist and assume without loss of generality that $\psi_{\vec{a},\vec{b},J}=1$. 
Then by Proposition 1.3 of \cite{dia05} (note here that Diamond's conventions on the numbering of the elements of $S'$ are the opposite of ours, so that his $\sigma_{i}$ is our $\sigma_{-i}$), the Jordan-H\"{o}lder factors of the reduction mod $p$ of $I'_{\vec{a},\vec{b},J}$ are $\{\sigma_{\vec{a}_{K},\vec{b}_{K}}\}_{{K\subset S}}$, where $$a_{K,\tau_{i}}=\left\{\begin{array}{llll} \delta_{K}({1})&\text{ if }0=i\in K\\ c_{i}+1&\text{ if }0=i\notin K \\ 0&\text{ if }0\neq i\in K \\ c_{i}+\delta_{K}({i+1})&\text{ if }0\neq i\notin K \\ \end{array}\right.$$ $$b_{K,\tau_{i}}=\left\{\begin{array}{llll} {c_{i}}+1-\delta_{K}({1})&\text{ if }0=i\in K \\ p-c_{i}+\delta_{K}({1})-1&\text{ if }0=i\notin K \\ {c_{i}}+\delta_{K}({i+1})&\text{ if }0\neq i\in K \\ p-c_{i}-\delta_{K}({i+1})&\text{ if }0\neq i\notin K \\ \end{array}\right.$$From the definition of the $c_{i}$ and of $\psi_{\vec{a},\vec{b},J}$, we have $\sigma_{\vec{a}_{K_{J}},\vec{b}_{K_{J}}}=\sigma_{\vec{a},\vec{b}}$. Suppose that $\rho$ is compatible with $\sigma_{\vec{a}_{K'},\vec{b}_{K'}}$ via $J'$. Then, replacing $J'$ by $(J')^{c}$ if necessary, we must have $$\prod_{i\in S'}\omega_{\sigma_{i}}^{a_{K_{J},i}}\prod_{i\in J}\omega_{\sigma_{i}}^{b_{K_{J},i}}=\prod_{i\in S'}\omega_{\sigma_{i}}^{a_{K',i}}\prod_{i\in J'}\omega_{\sigma_{i}}^{b_{K',i}}. 
$$ Using the formulae above, this becomes \begin{align}\label{407eqn} \omega_{\sigma_{0}}^{\delta_{J',K'}(1)}\omega_{\sigma_{r}}^{\delta_{J',K'}(r+1)}\prod_{i\in T'}\omega_{\sigma_{i}}^{c_{i}+\delta_{K'}(i+1)}\prod_{i\in S'}\omega_{\sigma_{i}}^{\delta_{{J'\cap\pi^{-1}((K')^{c})}}(i+1)}\notag \\ = \omega_{\sigma_{0}}^{\delta_{J,K_{J}}(1)}\omega_{\sigma_{r}}^{\delta_{J,K_{J}}(r+1)}\prod_{i\in T}\omega_{\sigma_{i}}^{c_{i}+\delta_{K_{J}}(i+1)}\prod_{i\in S'}\omega_{\sigma_{i}}^{\delta_{{J\cap\pi^{-1} (K_{J}^{c})}}(i+1)}, \end{align} where $$T=(J\cap\pi^{-1}(K_{J}))\cup(J^{c}\cap\pi^{-1}(K_{J}^{c}))=\{1,\dots,r\}, $$ $$T'=(J'\cap\pi^{-1}(K'))\cup((J')^{c}\cap\pi^{-1}((K')^{c})), $$ $$\delta_{J,K_{J}}(i+1)=\left\{\begin{array}{ll} 1-\delta_{K_{J}}({i+1})&\text{ if }i\in T\\ \delta_{K_{J}}(i+1)&\text{ if }i\notin T, \\ \end{array}\right. $$ $$\delta_{J',K'}(i+1)=\left\{\begin{array}{ll} 1-\delta_{K'}({i+1})&\text{ if }i\in T'\\ \delta_{K'}(i+1)&\text{ if }i\notin T'. \\ \end{array}\right. $$Note that (since $\sigma_{\vec{a},\vec{b}}$ is regular) all the exponents on the right hand side of (\ref{407eqn}) are in the range $[0,p-1]$. On the left hand side, this is true except possibly for the exponents of $\omega_{\sigma_{0}}$, $\omega_{\sigma_{r}}$. Since $T=\{1,\dots,r\}$, it is easy to see that the only opportunity for this not to hold is for the exponent of $\omega_{\sigma_{0}}$ to be $p$ on the left hand side and $0$ on the right hand side. However, in order for the exponent of $\omega_{\sigma_{0}}$ to be $p$ on the left hand side we require $c_{0}=p-2$, which requires that $1\in K_{J}$. But then the exponent of $\omega_{\sigma_{0}}$ on the right hand side is 1, a contradiction. Thus we may equate exponents on each side of (\ref{407eqn}). In particular, if $i\neq 0$, we have (again because $\sigma_{\vec{a},\vec{b}}$ is regular) $c_{i}+\delta_{K_{J}}(i+1)\in [2,p-2]$, so that we must have $\{1,\dots,r-1\}\subset T'$. We also have $c_{0}\in[1,p-2]$. 
If $0\in T'$, we see that the exponent of $\omega_{\sigma_{0}}$ on the left hand side of (\ref{407eqn}) is $c_{0}+1+\delta_{J'\cap\pi^{-1}((K')^{c})}(1)=c_{0}+1$ (because $1\in T'$), which is at least 2. However the exponent of $\omega_{\sigma_{0}}$ on the right hand side of (\ref{407eqn}) is 0 or 1, as $0\notin T$, which is a contradiction. Thus $T'=T=\{1,\dots,r\}$. Then (\ref{407eqn}) simplifies to $$\prod_{i=0}^{r-1}\omega_{\sigma_{i}}^{\delta_{K'}(i+1)}\prod_{i=r}^{2r-1}\omega_{\sigma_{i}}^{\delta_{(K')^{c}}(i+1)}=\prod_{i=0}^{r-1}\omega_{\sigma_{i}}^{\delta_{K_{J}}(i+1)}\prod_{i=r}^{2r-1}\omega_{\sigma_{i}}^{\delta_{K_{J}^{c}}(i+1)}, $$whence $K'=K_{J}$, as required. \end{proof} \begin{rem}\label{weaklyregular2}Note that it follows easily from the formulae in the proof of Proposition \ref{typeswts2} that if $\sigma_{\vec{a},\vec{b}}$ is regular, then all the Jordan-H\"{o}lder factors of the reduction mod $p$ of $I'_{\vec{a},\vec{b},J}$ are weakly regular.\end{rem} \begin{thm}\label{localdef2}Assume that $\sigma_{\vec{a},\vec{b}}$ is regular and that $\rho$ is compatible with $\sigma_{\vec{a},\vec{b}}$ via $J$. Then $\rho$ has a lift of type $\tau'_{\vec{a},\vec{b},J}$ which is not potentially ordinary. \end{thm} \begin{proof}A simple computation shows that we in fact have $$\tau'_{\vec{a},\vec{b},J}=\prod_{\tau\in S}\omega_{\tau}^{a_{\tau}}\prod_{\sigma\in J}\omega_{\sigma}^{b_{\sigma}-p}\oplus\prod_{\tau\in S}\omega_{\tau}^{a_{\tau}}\prod_{\sigma\notin J}\omega_{\sigma}^{b_{\sigma}-p}. $$This means that we only need to make a very minor modification to the proof of Theorem \ref{localdef}. Let $K_{0}'=W(k')[1/p]$. Fix $\pi'=(-p)^{1/(p^{2r}-1)}$, and let $K'=K_0'(\pi')$. Let $g_{\phi}$ be the nontrivial element of $\operatorname{Gal}(K'/K_{0})$ which fixes $\pi'$. 
It is clear from the proof of Theorem \ref{localdef} that for some choice of $a\in W(E)^{\times}$ the following object of $W(E)-\operatorname{Mod}^{1}_{cris,dd,K_{0}}$ provides us with the required lift. $$\mathcal{M}_J^{\sigma_i}=S_{K}\cdot e_{\sigma_i}+S_{K}\cdot f_{\sigma_i}$$ $$\hat{g}_{\phi}(e_{\sigma_i})=f_{\sigma_{i+r}}$$ $$\hat{g}_{\phi}(f_{\sigma_i})=e_{\sigma_{i+r}}$$ If $g\in\operatorname{Gal}(K'/K_{0}')$, $$\hat{g}(e_{\sigma_i})=\left(\left(\prod_{\tau\in S}\widetilde{\omega}_{\tau}^{a_{\tau}}\prod_{\sigma\in J}\widetilde{\omega}_{\sigma}^{b_{\sigma}-p}\right)(g)\right)e_{\sigma_i}$$ $$\hat{g}(f_{\sigma_i})=\left(\left(\prod_{\tau\in S}\widetilde{\omega}_{\tau}^{a_{\tau}}\prod_{\sigma\notin J}\widetilde{\omega}_{\sigma}^{b_{\sigma}-p}\right)(g)\right)f_{\sigma_i}$$ If $\sigma_{i+1}\in J$, $$\operatorname{Fil}^1\mathcal{M}_J^{\sigma_i}=\operatorname{Fil}^1S_{K}\cdot\mathcal{M}^{\sigma_i}_J+S_{K}\cdot f_{\sigma_i}$$ $$\phi(e_{\sigma_i})=(a^{-1})_i e_{\sigma_{i+1}}$$ $$\phi(f_{\sigma_i})=({a}^{-1})'_i pf_{\sigma_{i+1}}$$ If $\sigma_{i+1}\notin J$, $$\operatorname{Fil}^1\mathcal{M}_J^{\sigma_i}=\operatorname{Fil}^1S_{K}\cdot\mathcal{M}^{\sigma_i}_J+S_{K}\cdot e_{\sigma_i}$$ $$\phi(e_{\sigma_i})=({a}^{-1})_i p e_{\sigma_{i+1}}$$ $$\phi(f_{\sigma_i})=({a}^{-1})'_i f_{\sigma_{i+1}}$$Here the notation $(x)'_{i}$ means $x$ if $i=r+1$ and $1$ otherwise. \end{proof} \section{Global Results}\label{global}\subsection{}We now show how the local results obtained in the previous sections can be combined with lifting theorems to prove results about the possible weights of mod $p$ Hilbert modular forms. Firstly, we show that if $\overline{\rho}$ is modular of some regular weight, then $\overline{\rho}$ is compatible with that weight, by making use of Lemma \ref{lem:local langlands version of lifting} and Proposition \ref{jhcompat}. We then turn this analysis around. 
We take a conjectural regular weight $\sigma$ for $\overline{\rho}$, and using modularity lifting theorems we produce a modular lift of $\overline{\rho}$ of a specific type, which is enough to prove that $\overline{\rho}$ is modular of weight $\sigma$ by Propositions \ref{typeswts} and \ref{typeswts2}. Assume now that $F$ is a totally real field in which $p>2$ is unramified, and that $\overline{\rho}:G_F\to\operatorname{GL}_2(E)$ is a continuous representation, known to be modular, where $E$ is a finite extension of $\F_p$. Let $W(\overline{\rho})$ be the conjectural set of Serre weights for $\overline{\rho}$, as defined in Section \ref{2}. Recall that the elements of $W(\overline{\rho})$ are just the tensor products of elements of $W_v(\overline{\rho})$, for $v|p$, and that such elements are of the form $\sigma_{\vec{a},\vec{b}}$, as described above. We say that a weight is (weakly) regular if and only if it is a tensor product of (weakly) regular weights. The following argument is based on an argument of Michael Schein (c.f. Proposition 5.11 of \cite{sch06}), and is due to him in the case that $\overline{\rho}|_{G_{F_v}}$ is irreducible. \begin{lemma}\label{compat} Suppose that $p\geq 3$, that $\overline{\rho}$ is modular of weight $\sigma=\otimes_v \sigma^v_{\vec{a},\vec{b}}$, and that $\sigma$ is weakly regular. Then for each $v$, either $\overline{\rho}|_{G_{F_v}}$ is compatible with $\sigma^v_{\vec{a},\vec{b}}$, or $\sigma^v_{\vec{a},\vec{b}}$ is not regular and $\overline{\rho}|_{G_{F_v}}$ is not compatible with any regular weight.\end{lemma} \begin{proof} Suppose firstly that $\overline{\rho}|_{G_{F_v}}$ is reducible. We will assume for the rest of this proof that $F_v\neq\mathbb{Q}_p$; the argument needed when $F_v={{\mathbb Q}_p}$ is slightly different, although much simpler, and the result follows from Lemma 4.4.6 of \cite{gee061}. 
We will also assume that there is at least one $b_{\tau_i}\ne 1$; the case where all $b_{\tau_i}=1$ is much easier, and we leave it to the reader. Then for any type $\tau=\chi_1\oplus\chi_2$ (with $\chi_{1}\ne\chi_{2}$ tame characters of $I_{F_{v}}$ which extend to $G_{F_{v}}$) such that $\sigma^v_{\vec{a},\vec{b}}$ occurs in the reduction of $I(\chi_1,\chi_2)$, it follows from Lemma \ref{lem:local langlands version of lifting} and Proposition \ref{jhcompat} that there must be a weight $\sigma^v_{\vec{a'},\vec{b'}}$ in the reduction of $I(\chi_1,\chi_2)$ which is compatible with $\overline{\rho}|_{G_{F_v}}$. Since we are working purely locally, we return to the notation of section \ref{reducibletypes}. Twisting, we may without loss of generality suppose that $a_\tau=0$ for all $\tau$. By Proposition 1.1 of \cite{dia05} (and the fact that $\sigma$ is weakly regular, with at least one $b_{\tau_i}\ne 1$) there is for each $J\subset S$ a unique pair of characters $\prod_{\tau\in S}\tilde{\omega}_\tau^{c^J_\tau}$, $\prod_{\tau\in S}\tilde{\omega}_\tau^{d^J_\tau}$ (with $0\leq c^{J}_{\tau}, d^{J}_{\tau}\leq p-1$) such that if we define $$\sigma^J=I\left(1,\prod_{\tau\in S}\tilde{\omega}_\tau^{d^J_\tau}\right)\otimes\prod_{\tau\in S}\tilde{\omega}_{\tau}^{c^{J}_{\tau}}\circ\det$$then, with the same notation for reductions as in \cite{dia05}, extended to be compatible with twisting, $\sigma^J_J\sim \sigma_{\vec{a},\vec{b}}$. Then there must (by the argument above) be some subset $K_J\subset S$ such that $\sigma^J_{K_J}$ is compatible with $\rho$. 
If $\sigma^J_{K_J}\sim \sigma_{\vec{m}^J_{K_J},\vec{n}^J_{K_J}}$ this means that there must be a subset $L_J\subset S$ such that $$\psi_1|_{I_{K_0}}=\prod_{\tau\in S}\omega_\tau^{{m_{K_{J},\tau}^J}}\prod_{\tau\in L_J}\omega_\tau^{{n_{K_{J},\tau}^J}}.$$ By Proposition 1.1 of \cite{dia05}, this is equal to $$\prod_{\tau_i\in S}\omega_{\tau_i}^{c^J_{\tau_i}}\prod_{\tau_i\in L_J\cap K_J^c}\omega_{\tau_i}^p\prod_{\tau_i\in(L_J\cap K_J)\cup(L_J^c\cap K_J^c)}\omega_{\tau_i}^{d^J_{\tau_i}+\delta_{K_J}(\tau_{i+1})}.$$ Now, since $\sigma^J_J\sim \sigma_{\vec{a},\vec{b}}$, we have $$\prod_{\tau_i\in S}\omega_{\tau_i}^{c^J_{\tau_i}}\prod_{\tau_i\notin J}\omega_{\tau_i}^{d^J_{\tau_i}+\delta_J(\tau_{i+1})}=\prod_{\tau_i\in S}\omega_{\tau_i}^{a_{\tau_i}}=1,$$ by the assumption that $a_{\tau}=0$ for all $\tau$, so that in fact $$\psi_1|_{I_{K_0}}=\prod_{\tau_i\in J^c}\omega_{\tau_i}^{-(d^J_{\tau_i}+\delta_J(\tau_{i+1}))}\prod_{\tau_i\in L_J\cap K_J^c}\omega_{\tau_i}^p\prod_{\tau_i\in(L_J\cap K_J)\cup(L_J^c\cap K_J^c)}\omega_{\tau_i}^{d^J_{\tau_i}+\delta_{K_J}(\tau_{i+1})}.$$ Since $\sigma^J_J\sim \sigma_{\vec{a},\vec{b}}$, we have $$d^J_{\tau_i}=\left\{\begin{array}{ll} b_{\tau_i}-\delta_J(\tau_{i+1})&\text{ if }\tau_i\in J \\ p-b_{\tau_i}-\delta_J(\tau_{i+1})&\text{ if }\tau_{i}\notin J \\ \end{array}\right.$$ Substituting, we see that $$\psi_1|_{I_{K_0}}=\prod_{\tau_i\in (T_J\cap J)\cup(T_J^c\cap J^c)}\omega_{\tau_i}^{b_{\tau_i}}\prod_{\tau_i\in S}\omega_{\tau_i}^{\delta_{L_J\cap K_J^c}(\tau_{i+1})-\delta_{T^c_J\cap J^c}(\tau_{i+1})} \prod_{\tau_i\in T_J}\omega_{\tau_i}^{\delta_{K_J}(\tau_{i+1})-\delta_{J}(\tau_{i+1})},$$ where we write $T_J=(K_J\cap L_J)\cup(K^c_J\cap L^c_J)$. 
Putting $J=S$, we obtain \begin{align}\psi_1|_{I_{K_0}}&=\prod_{\tau_i\in T_S}\omega_{\tau_i}^{b_{\tau_i}}\prod_{\tau_i\in S}\omega_{\tau_i}^{\delta_{L_S\cap K_S^c}(\tau_{i+1})} \prod_{\tau_i\in T_S}\omega_{\tau_i}^{\delta_{K_S}(\tau_{i+1})-1}\notag\\ &=\prod_{\tau_{i}\in T_{S}}\omega_{\tau_{i}}^{b_{\tau_{i}}-\delta_{K_{S}^{c}\cap L_{S}^{c}}(\tau_{i+1})}\prod_{\tau_{i}\in T^{c}_{S}}\omega_{\tau_{i}}^{\delta_{L_{S}\cap K_{S}^{c}}(\tau_{i+1})}.\label{Sexpression} \end{align} Now, suppose that $\sigma_{\vec{a},\vec{b}}$ is \emph{not} compatible with $\rho$, so that for all $J$ we have $K_{J}\neq J$. We can uniquely write $$\psi_1|_{I_{K_0}}=\prod_{\tau_i\in S}\omega_{\tau_i}^{c_{\tau_i}}$$ with $0\leq c_{\tau_i}\leq p-1$ not all equal to $p-1$ (in fact, an examination of the product just written shows that the exponents are already in this range). Examining the formula just established, we see that after possibly exchanging $\psi_1$ and $\psi_2$ (which we can do, as the definition of ``compatible'' is unchanged by this exchange), there must be some $j$ such that $b_{\tau_j}\neq 1$, $c_{\tau_j}=b_{\tau_j}-1$, $\tau_j\in T_S$, and $\tau_{j+1}\in K_S^c\cap L_S^c\subset T_S$ (else $\rho$ would be compatible with $\sigma_{\vec{a},\vec{b}}$). Now take $J=\{\tau_{j}\}$, so that \begin{align}\psi_1|_{I_{K_0}}&=\prod_{\tau_i\in (T_{\{\tau_{j}\}}\cap \{\tau_{j}\})\cup(T_{\{\tau_{j}\}}^{c}\cap \{\tau_{j}\}^c)}\omega_{\tau_i}^{b_{\tau_i}}\prod_{\tau_i\in S}\omega_{\tau_i}^{\delta_{L_{\{\tau_{j}\}}\cap K_{\{\tau_{j}\}}^{c}}(\tau_{i+1})-\delta_{T^c_{\{\tau_{j}\}}\cap \{\tau_{j}\}^c}(\tau_{i+1})}\cdot\notag\\& \prod_{\tau_i\in T_{\{\tau_{j}\}}}\omega_{\tau_i}^{\delta_{K_{\{\tau_{j}\}}}(\tau_{i+1})-\delta_{\{\tau_{j}\}}(\tau_{i+1})}.\label{jexpression}\end{align}It is easy to see that the exponent of $\omega_{\tau_{i}}$ in this product is always between 0 and $p-1$, unless $i=j-1$ or $i=j$. 
If the exponent is always between $0$ and $p-1$, then we have a contradiction, because we already know that $c_{\tau_j}=b_{\tau_j}-1$, but from (\ref{jexpression}) we see that the exponent of $\omega_{\tau_{j}}$ can only be $0$, $b_{\tau_{j}}$ or $b_{\tau_{j}}+1$. So, at least one of the exponents of $\omega_{\tau_{j-1}}$ and $\omega_{\tau_{j}}$ must be $-1$ or $p$. We now analyse when this can occur. It is easy to see that the exponent of $\omega_{\tau_{j}}$ is $-1$ if and only if $\tau_{j}\notin T_{\{\tau_{j}\}}$ and $\tau_{j+1}\in L^{c}_{\{\tau_{j}\}}\cap K_{\{\tau_{j}\}}$, and it is $p$ if and only if $b_{\tau_{j}}=p-1$, $\tau_{j}\in T_{\{\tau_{j}\}}$ and $\tau_{j+1}\in L_{\{\tau_{j}\}}\cap K_{\{\tau_{j}\}}$. Similarly, the exponent of $\omega_{\tau_{j-1}}$ is $-1$ if and only if $\tau_{j-1}\in T_{\{\tau_{j}\}}$ and $\tau_{j}\in L^{c}_{\{\tau_{j}\}}\cap K^{c}_{\{\tau_{j}\}}$, and it is $p$ if and only if $b_{\tau_{j-1}}=p-1$, $\tau_{j-1}\in T^{c}_{\{\tau_{j}\}}$ and $\tau_{j}\in L_{\{\tau_{j}\}}\cap K^{c}_{\{\tau_{j}\}}$. Thus it is impossible for both exponents to be $p$, or both to be $-1$. Suppose now that the exponent of $\omega_{\tau_{j}}$ in (\ref{jexpression}) is $-1$. If we multiply each of the expressions (\ref{Sexpression}), (\ref{jexpression}) by $\omega_{\tau_{j}}$, write each side as a product $\prod_{\tau}\omega_{\tau}^{n_{\tau}}$ with $0\leq n_{\tau}\leq p-1$ and equate coefficients of $\omega_{\tau_{j}}$ in the resulting expression, we obtain $b_{\tau_{j}}=0$ or $1$ (the second case only a possibility when the exponent of $\omega_{\tau_{j-1}}$ in (\ref{jexpression}) is $p$), a contradiction. Suppose that the exponent of $\omega_{\tau_{j}}$ in (\ref{jexpression}) is $p$. Then we again easily see that $p-2=b_{\tau_{j}}-1=0$ or $1$. Thus $p-2=1$, and we additionally need to have $(T_{\{\tau_j\}}\cap\{\tau_j\})\cup(T_{\{\tau_j\}}^c\cap\{\tau_j\}^c)=S$, so that $T_{\{\tau_j\}}=\{\tau_j\}$. 
But for the exponent of $\omega_{\tau_j}$ to be $p$ we need that $\tau_{j+1}\in L_{\{\tau_j\}}\cap K_{\{\tau_j\}}\subset T_{\{\tau_j\}}$, a contradiction. Suppose that the exponent of $\omega_{\tau_{j-1}}$ in (\ref{jexpression}) is $p$. Then in the same fashion we obtain $b_{\tau_{j}}-1=0$ or $1$. The only possibility is that $b_{\tau_{j}}=2$, when we in addition (in order that the necessary carrying should occur) require that $b_{\tau_{i}}=p-1$ for all $i\neq j$. Finally, suppose that the exponent of $\omega_{\tau_{j-1}}$ in (\ref{jexpression}) is $-1$. Multiply each of (\ref{Sexpression}), (\ref{jexpression}) by $\omega_{\tau_{j-1}}$. Then we see that the only way for equality to hold is again if $b_{\tau_{i}}=p-1$ for all $i \neq j$. So, we have deduced that $b_{\tau_{i}}=p-1$ for all $i \neq j$, so that $\sigma_{\vec{a},\vec{b}}$ is certainly not regular. It now remains to show that $\rho$ is not compatible with any regular weight. Examining the above argument, we see that we have in fact deduced that (again, after possibly exchanging $\psi_{1}$, $\psi_{2}$) $$\psi_{1}|_{I_{K}}=\omega_{\tau_{j}}^{b_{\tau_{j}}-1}\prod_{i\neq j}\omega_{\tau_{i}}^{p-1}, $$ $$\psi_{2}|_{I_{K}}=\omega_{\tau_{j}}, $$ with $2\leq b_{\tau_{j}}\leq p-1$. If $\rho$ is compatible with some regular weight, then we have by definition that $$\psi_{1}|_{I_{K}}\psi_{2}|_{I_{K}}^{{-1}}=\prod_{\tau\in J}\omega_{\tau}^{n_{\tau}}\prod_{\tau\in J^{c}}\omega_{\tau}^{-n_{\tau}} $$for some $J\subset S$ and $2\leq n_{\tau}\leq p-2$. Substituting, we obtain $$\omega_{\tau_{j-1}}\prod_{\tau\in J}\omega_{\tau}^{n_{\tau}}=\omega_{\tau_{j}}^{b_{\tau_{j}}-1}\prod_{\tau\in J^{c}}\omega_{\tau}^{n_{\tau}}.$$If $\tau_{j}\in J$ then we can immediately equate coefficients of $\omega_{\tau_{j-1}}$ and deduce a contradiction. If not, then because $n_{\tau_{j}}+b_{\tau_{j}}<2p$ we see that we can still equate coefficients of $\omega_{\tau_{j-1}}$ to obtain a contradiction. 
The proof in the irreducible case is very similar, and rather simpler, as less ``carrying'' is possible. In fact, the argument gives the stronger result that $\overline{\rho}|_{G_{F_{v}}}$ is compatible with $\sigma^{v}_{\vec{a},\vec{b}}$ for all $v$. A proof is given in the proof of Proposition 5.11 of \cite{sch06}; note that \cite{sch06} works in the setting of \cite{bdj} (using indefinite quaternion algebras), but the proof of Proposition 5.11 is purely local (using Raynaud's theory of finite flat group schemes of type $(p,\dots,p)$ in place of the Breuil module calculations used in this paper), and applies equally well in our setting.\end{proof} The following theorem is due to Michael Schein in the case that $\overline{\rho}|_{G_{F_v}}$ is irreducible for all places $v|p$ (see \cite{sch06}). \begin{thm}If $\overline{\rho}$ is modular of weight $\sigma$, and $\sigma$ is regular, then $\sigma\in W(\overline{\rho})$. \end{thm} \begin{proof}Suppose that $\sigma=\otimes_v \sigma^v_{\vec{a},\vec{b}}$, so that we need to show that $\sigma^v_{\vec{a},\vec{b}}\in W_v(\overline{\rho})$ for all $v|p$. By Lemma \ref{compat}, $\sigma^v_{\vec{a},\vec{b}}$ is compatible with $\overline{\rho}|_{G_{F_v}}$, via $J$, say. If $\overline{\rho}|_{G_{F_v}}$ is irreducible, we are done, so assume that it is reducible. By Lemma \ref{lem:local langlands version of lifting}, $\overline{\rho}|_{G_{F_{v}}}$ has a lift to a potentially Barsotti-Tate representation of type $\tau_{\vec{a},\vec{b},J}$. By definition, this is, up to an unramified twist, a lift of type $J$. By Proposition \ref{prop:liftimpliesmodel}, $\overline{\rho}|_{G_{F_{v}}}$ has a model of type $J$. Twisting, we may without loss of generality suppose that each $a_\tau=0$. 
Then by Proposition \ref{h1f}, and the definition of $W_v(\overline{\rho})$, we see that $\sigma^v_{\vec{a},\vec{b}}\in W_v(\overline{\rho})$, as required.\end{proof} \begin{thm}If $\sigma\in W({\overline{\rho}})$ is a regular weight, and $\sigma$ is non-ordinary, then $\overline{\rho}$ is modular of weight $\sigma$. If $\sigma\in W({\overline{\rho}})$ is regular, $\sigma$ is partially ordinary of type $I$, and $\overline{\rho}$ has a partially ordinary modular lift of type $I$, then $\overline{\rho}$ is modular of weight $\sigma$. \end{thm} \begin{proof}Suppose that $\sigma=\otimes_v \sigma^v_{\vec{a},\vec{b}}$, so that $\sigma^v_{\vec{a},\vec{b}}\in W_v(\overline{\rho})$ for all $v|p$. Firstly, we note that (by the definition of $W_v(\overline{\rho})$) $\sigma^v_{\vec{a},\vec{b}}$ is compatible with $\overline{\rho}|_{G_{F_v}}$, via $J_v$, say. Consider firstly the case where $\overline{\rho}|_{G_{F_v}}$ is reducible. We claim that $\overline{\rho}|_{G_{F_v}}$ has a model of type $J_v$. To see this, we may twist, and without loss of generality suppose that $a_\tau=0$ for all $\tau$, so that $\overline{\rho}|_{G_{F_{v}}}\sim \bigl(\begin{smallmatrix}\psi_1&*\\0&\psi_2\end{smallmatrix}\bigr)$, with $\psi_1|_{I_{F_v}}=\prod_{\tau\in J_v}\omega_{\tau}^{b_\tau}$, $\psi_2|_{I_{F_v}}=\prod_{\tau\notin J_v}\omega_{\tau}^{b_\tau}$. Now, by Proposition \ref{h1f} (and the definition of $W_v(\overline{\rho})$) $\overline{\rho}|_{G_{F_v}}$ has a model of type $J_v$, as required. Then Theorem \ref{localdef} shows that $\overline{\rho}|_{G_{F_v}}$ has a potentially Barsotti-Tate deformation of type $\tau_{\vec{a},\vec{b},J_v}$. If $\overline{\rho}|_{G_{F_v}}$ is irreducible, then Theorem \ref{localdef2} shows that $\overline{\rho}|_{G_{F_v}}$ has a potentially Barsotti-Tate deformation of type $\tau'_{\vec{a},\vec{b},J_v}$. 
By Corollary 3.1.7 of \cite{gee061} there is a modular lift $\rho:G_F\to \operatorname{GL}_2({\overline{\Q}_p})$ of $\overline{\rho}$ which is potentially Barsotti-Tate of type $\tau_{\vec{a},\vec{b},J_v}$ for each $v|p$ for which $\overline{\rho}|_{G_{F_{v}}}$ is reducible, and of type $\tau'_{\vec{a},\vec{b},J_v}$ for each $v|p$ for which $\overline{\rho}|_{G_{F_{v}}}$ is irreducible. Then by Lemma \ref{lem:local langlands version of lifting}, $\overline{\rho}$ is modular of weight $\sigma'$ for some Jordan-H\"{o}lder constituent $\sigma'$ of the reduction modulo $p$ of $\otimes_v I_v$, where $I_{v}=I_{\vec{a},\vec{b},J_v}$ if $\overline{\rho}|_{G_{F_{v}}}$ is reducible, and $I_{v}=I'_{\vec{a},\vec{b},J_v}$ otherwise. The result then follows from Propositions \ref{typeswts} and \ref{typeswts2}, Remarks \ref{weaklyregular} and \ref{weaklyregular2}, and Lemma \ref{compat}.\end{proof} \bibliographystyle{amsalpha}
\section{Introduction}\label{sec:Introduction} Convolutional neural networks (CNNs) have achieved state-of-the-art performance across a variety of tasks involving natural images, including object and action recognition. Convolutional layers come with a host of benefits, including a more efficient usage of parameters compared to standard neural networks and the ability to learn patterns at multiple scales. In addition, convolutional layers allow the network to learn spatially invariant relationships between input and output. This is particularly useful in vision tasks involving natural images, in which an object's identity is often independent from its location in the image. Researchers from the neuroimaging community have recently begun exploring the utility of CNNs applied to publicly available brain-image datasets, predicting neurological conditions like Alzheimer's \citep{sarraf2016deepad} and autism \citep{anirudh2017bootstrapping}, as well as age \citep{cole2017predicting} and survival time of high-grade glioma patients \citep{nie20163d}. Although structural brain scans and natural images share similarities, there are several key differences that make adapting CNNs to brain images non-trivial. Much of the existing work on CNNs has focused on 2-dimensional (2D) images, but structural MRI scans of the human brain are 3-dimensional (3D). Some previous work has treated 3D volumes as a set of independent 2D slices \citep{sarraf2016deepad,farooq2017deep}, but doing so fails to fully leverage the spatial structure of the brain. A recent review \citep{bernal2017deep} suggests that 3D models outperform 2D models on several neuroimaging tasks. This increase in performance, however, comes at the cost of increased computation. The computational complexity involved in applying CNNs to 3D volumes of the brain makes it difficult to explore architectures even on small datasets. In \cite{cole2017predicting}, a 10-layer 3D CNN took 83 hours to train on only 2000 images. 
Existing applications of 3D convolutions to neuroimaging data \citep{cole2017predicting, cao2016mental} use architectures based on existing 2D CNN architectures. However, choices like i) the size and number of convolutional filters, ii) the type and size of pooling, and iii) the relationship between pooling, convolutional and fully connected layers, are among a myriad of choices that, while extensively studied for natural images, have not been well studied in the context of 3D structural neuroimages. Architecture design choices are often inspired by known invariances in the data. The types of invariances in natural images are typically affine, as evidenced by the data augmentation techniques, such as translations and reflections, used in existing applications of CNNs \citep{krizhevsky2012imagenet}. However, it is unclear which of these invariances arise in neuroimaging data. Given that structural brain images are aligned, translation invariance may not be as important in brain images as it is in natural images. In fact, if local patterns in the brain are location-dependent, then standard convolutions may be inappropriate. In this work, we begin to explore CNN architectural choices in the context of structural neuroimaging data. We propose two modifications to existing CNN architectures. Our modifications produce a network that is able to better learn patterns from brain images and trains faster than architectures developed on natural image tasks. Furthermore, these modifications are straightforward to implement in Tensorflow \citep{abadi2016tensorflow}. By sharing our implementation\footnote{\url{https://gitlab.eecs.umich.edu/mld3/brain_age_prediction}}, we hope to inspire other researchers to continue to challenge current assumptions. We summarize both the technical and clinical significance of our work below.
\paragraph{Technical Significance} Based on the structure of neuroimaging data, we present two modifications to existing CNN architectures: 1) learning different parameters in different regions of the brain, and 2) applying more filters in the early stages of the network. These modifications enable learning distinct patterns in different brain regions. By designing these modifications to target 3D brain images, we demonstrate improved performance and training times compared to a 3D CNN baseline when predicting age from brain scans. These improvements are robust to changes in the amount of training data and the number of network parameters. Our work suggests a greater space of improvements can be achieved by redesigning CNNs for brain images. \paragraph{Clinical Relevance} Neuroimages are a rich form of data from which a wealth of clinically relevant labels can be predicted. We focus on the task of predicting age from neuroimages. Age prediction from neuroimaging data allows researchers to quantify the difference between predicted age and chronological age, which can be a powerful marker of deviation from expected aging trajectories. There are a number of psychiatric and neurological conditions that are closely linked to such age-related deviations, including attention-deficit/hyperactivity disorder \citep{Shaw19649} and Alzheimer's \citep{stern2012cognitive}. Differences in predicted and chronological age have already been shown to be reliable and heritable biomarkers for a variety of neurodegenerative conditions \citep{cole2017predicting}. In addition, age prediction from brain scans enables further investigation into mechanistic factors correlated with accelerated or decelerated brain aging and associated changes in cognitive function. Our work takes the first step toward designing deep learning models that capture the invariances present in structural brain scans.
We demonstrate that our ideas improve age prediction from these scans, but we hypothesize that the proposed ideas could generalize to other neuroimaging tasks. \paragraph{} The rest of this paper is organized as follows. Section \ref{sec:RelatedWork} presents related work, both in developing CNNs on neuroimaging data and in generalizing CNNs for data with different types of invariances. Section \ref{sec:Methods} discusses the proposed architectural modifications in more detail. Section \ref{sec:ExperimentalSetup} details the dataset we use for evaluation, and our experimental setup. In Section \ref{sec:Results}, we present an empirical evaluation of our proposed approach as well as several follow-up analyses. \section{Related Work}\label{sec:RelatedWork} There is a broad scope of work in applying CNNs to brain images. The most common tasks include anomaly detection, which covers tumor and micro-bleed detection; segmentation, which includes skull stripping and tumor segmentation; and label prediction. Label prediction involves disease prediction, as well as the task we focus on: age prediction. Predicting age from brain images is an important step towards understanding typical brain development and further understanding developmental disorders \citep{dosenbach2010prediction}. While there has been substantial progress in developing machine learning techniques for predicting age from brain images \citep{franke2012brain}, here we focus primarily on those works that utilize CNNs. \cite{cole2017predicting} propose a CNN architecture with repeated blocks of 3D, $3\times3\times3$ convolutions. Their focus is not on comparing different CNN architectures, but rather comparing the performance of their proposed architecture to a Gaussian Process model. Few works exist that explicitly attempt to adapt CNNs to brain image tasks like age prediction or disease prediction.
\cite{meszlenyi2017resting} and \cite{kawahara2017brainnetcnn} propose novel architectures to predict mild cognitive impairment and age, respectively, from functional connectivity networks extracted from functional MRI. They model the network as a graph and apply convolutions across the edges. Neither considers applying CNNs directly to brain scans. \cite{zheng2017novel} introduce a method involving a new pooling operation to predict HIV and ADHD directly from functional imaging. However, their method requires an ensemble of CNNs and a separate algorithm to mix the ensemble, which increases computational complexity during training. Many of the works that focus on learning from brain data do not discuss architectural modifications. As described above, those that do either focus on learning from functional connectivity networks rather than actual images, or require computationally expensive models. More generally, there is a large body of work that aims to tailor CNNs to data with different structural properties. \cite{dieleman2016exploiting} introduce four new operations to help CNNs learn invariance to rotations. \cite{gens2014deep} and \cite{cohen2016group} attempt to generalize CNNs to arbitrary groups of affine transformations. \cite{ngiam2010tiled} modify convolutions in a way that helps CNNs learn invariances present in the training data. These works focus on natural images. None consider 3D spatial data. In contrast, we investigate a different and relatively unexplored class of images. We propose a novel architecture based on the structure of 3D brain volumes. Our approach increases performance without sacrificing computational efficiency, and is straightforward to implement. \section{Methods}\label{sec:Methods} This section describes two proposed modifications to existing architecture choices. The first, regional segmentation, is motivated by the idea that a CNN should learn different parameters in different regions of the brain.
We segment the brain into consecutive regions and treat each region as an input channel. This encourages the network to learn region-dependent patterns. However, region-specific information is lost after the initial convolutional layers. To address this, the second modification applies more filters earlier in the network and fewer filters later. This is in contrast to the existing practice of learning fewer filters early on in the network and more in the later layers. Combined, these two modifications both improve performance on age prediction and decrease training time. \subsection{Regional Segmentation} CNNs are designed to capture spatially invariant patterns \citep{lecun1998gradient}. This model works well for natural images, since most objects in images retain their identity regardless of where they appear in an image. In contrast, brain images are typically aligned: each brain in a given dataset occupies the same spatial region. Complete spatial invariance is thus unnecessary since the network is not required to deduce the location of objects of interest. Furthermore, across different regions of the brain, the same pattern may have different meaning. Requiring a network to learn the same parameters over the entirety of the brain image may force those parameters to be too general and lose region-specific information. To better capture region-specific information, a CNN should learn different parameters in different brain regions. To address this, we propose the following modification: before applying convolutions to brain images, divide the brain image into distinct regions and concatenate those regions along the channel dimension. Learned weights will not be shared across regions, allowing the network to learn region-specific patterns. Given that we aim to learn different patterns in different regions, how should these regions be chosen? It may seem appealing to segment the brain into known anatomical regions. 
However, following known anatomical regions is challenging. Anatomical regions in the brain may vary in size and shape, but it is computationally desirable to have regions with equal dimensions. Formatting these regions to share the same dimensions may lead to large quantities of zero padding and wasted computation. \begin{figure}[htbp] \centering \begin{adjustbox}{center} \includegraphics[width=5in]{RegionalSegmentationNew.png} \end{adjustbox} \caption{A depiction of regional segmentation. The operation is simple: it divides an input volume into separate, consecutive regions and then treats each region as a channel. Different colors in the figure indicate different parameters. The convolutional filter does not share parameters across regions, but applies the separate parameters simultaneously over separate regions. } \label{fig:Segmentation} \end{figure} In light of the difficulty of using expert-defined anatomical regions, we segment the brain into consecutive, equally-sized cubes. More formally: let $I$ be a 3D, single-channel volume with dimensions $(X, Y, Z)$. Let $k$ be an integer regional segmentation rate, a hyperparameter. Regional segmentation divides $I$ into $k^3$ adjacent regions of size $(\lfloor \frac{X}{k} \rfloor, \lfloor \frac{Y}{k} \rfloor, \lfloor \frac{Z}{k} \rfloor)$. The regions are then concatenated along the channel dimension. In practice, each region is given an additional 3-voxel boundary in each dimension to avoid losing information along the edges of the regions. This approach, depicted in Figure \ref{fig:Segmentation} is easy to implement, requires no prior knowledge, and allows for different levels of granularity. Setting $k$ to be large corresponds to a finer segmentation of the brain, with less spatial parameter sharing; conversely, setting $k$ to be small corresponds to more spatial parameter sharing. 
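Regional segmentation is, at its core, a simple array manipulation. The following NumPy sketch (our own illustration, not the paper's released code; it omits the 3-voxel boundary padding mentioned above) shows the reshaping from a single-channel volume to a $k^3$-channel stack:

```python
import numpy as np

def regional_segmentation(volume, k):
    """Split a single-channel 3D volume into k^3 adjacent regions of size
    (X//k, Y//k, Z//k) and stack them along a new channel axis.
    The 3-voxel boundary padding used in the paper is omitted here."""
    X, Y, Z = volume.shape
    rx, ry, rz = X // k, Y // k, Z // k
    regions = [
        volume[i * rx:(i + 1) * rx, j * ry:(j + 1) * ry, l * rz:(l + 1) * rz]
        for i in range(k) for j in range(k) for l in range(k)
    ]
    return np.stack(regions, axis=-1)  # shape (X//k, Y//k, Z//k, k^3)
```

For $k=2$, a $40 \times 48 \times 40$ volume becomes a $20 \times 24 \times 20 \times 8$ array, so subsequent convolutions see each region as a separate input channel.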
Existing architectures set $k = 1$, while fully connected networks set $k$ to be the input dimension, treating each voxel as independent of the others. Separate parameters are learned for each region since regions are no longer spatially connected. Given that these regions are now treated as separate input channels, depthwise convolutions may be more appropriate than standard convolutions, because they perform completely separate convolutions across channels \citep{chollet2016xception}. However, current deep learning libraries lack implementations of 3D depthwise convolutions, and non-native implementations are computationally expensive. Normal convolutions do learn different parameters over different channels, but combine information across channels after convolution. We mitigate this problem in two ways. First, we rotate regions into the orientation that minimizes spatial overlap across the channel dimension as depicted in Figure \ref{fig:RotationRegions}, ensuring that as much region-specific information as possible is retained during convolutions. Second, we alter the number of filters in the network to focus on the earlier layers, as discussed in the next section. \begin{figure}[htbp] \centering \begin{adjustbox}{center} \includegraphics[width=4in]{RotatingRegions.png} \end{adjustbox} \caption{ An illustration of the overlap in the channel dimension between regions of a center-aligned image after regional segmentation with $k=2$. The original image is depicted on the left. The top pathway depicts the default rotation of the regions, which minimizes spatial overlap. The bottom pathway depicts the rotation that maximizes spatial overlap. For illustration purposes, this image depicts a 2D circle, but the same principle applies for 3D brain scans. } \label{fig:RotationRegions} \end{figure} \subsection{Filter Layouts} Existing CNN architectures have a common trend: the number of filters in convolutional layers starts small and increases with each layer.
This mirrors human intuition: at finer detail, objects share local patterns like edges and corners, but with increasing abstraction, objects become more and more distinct. Having a large number of specialized filters in later layers of a CNN is thus critical for an object recognition task with a large number of distinct classes \citep{zeiler2014visualizing}. The input to our network, however, consists of segmented regions of the brain concatenated across the channel dimension. This poses a problem: the first convolutional operation across the input image combines information from all regions. With each convolution, we lose information distinguishing regions. To mitigate this issue, we propose reversing the traditional filter scheme: instead of starting with a small number of filters and increasing them, start with a large number of filters and decrease that number in later layers. We hypothesize that it is critical to extract as much information as possible in the earlier stages of the network, and less important to learn information in the later layers, when information from all regions is blended together. Our approach forces the network to learn more parameters at the lower levels, since these levels contain more information from distinct regions. This approach also bottlenecks the representation of these regions at later layers in the network, which acts as a form of regularization. Usually, it is difficult to experiment with a large number of filters early on in a network, because earlier filters are applied to larger images. Applying more filters to larger images results in reduced training speed and increased memory demands. However, by first applying regional segmentation and then reversing the filter layout, we find that the training time is still decreased relative to a baseline, as discussed in Section \ref{sec:Results}.
\section{Experimental Setup \& Baselines}\label{sec:ExperimentalSetup} In this section, we describe the brain image dataset and task we used to evaluate our proposed modifications. We detail our exclusion criteria, the baseline we compare our proposed method to, and the specifics of how our models are trained. We also describe the architectural details of both the baseline and our proposed approach. \subsection{Dataset \& Task} We evaluate our modifications using data from the Philadelphia Neurodevelopmental Cohort (PNC) study \citep{satterthwaite2014neuroimaging}. The PNC study is a population-based sample of children ages 8-21 ($\mu$=15.48, $\sigma$=3.19) living in the greater Philadelphia area. Neuroimaging data were collected from 1,445 subjects. Data from 997 of those subjects were made publicly available. These data were initially downloaded and preprocessed for the purposes of functional connectivity analysis. Therefore, data quality inclusion criteria follow functional connectivity guidelines (low motion, at least 4min usable data). Data from 724 of the publicly available subjects met these criteria and were included. For our analysis, we use T1-weighted structural images (voxel size $0.94 \times 0.94 \times 1$, FOV dimensions $196 \times 256 \times 160$). These images were preprocessed using the DARTEL toolbox in SPM8. Preprocessing steps included: bias field correction, brain extraction, and spatially normalizing to MNI152 template space. After preprocessing, the structural images have dimensions $121 \times 145 \times 121$. From these images, we aim to predict subject age. To learn a model mapping 3D images to age, we randomly split the subjects into training (n=524), validation (n=100) and test (n=100) sets. Validation data were used for model selection and stopping criteria. All reported performance metrics pertain to the test set unless otherwise specified. 
\subsection{Models}\label{sec:Models} We compare a standard baseline architecture from \cite{cole2017predicting} to an architecture with our proposed modifications. Both the baseline and our proposed network are made up of repeated blocks of (convolution, convolution, batch normalization, pooling), as depicted in Figure \ref{fig:Block}. All architectures consist of 4 blocks total, followed by a hidden layer with 256 units, and then a single unit to output age prediction. All convolutional and hidden layers are followed by the ELU activation function. All models use $3 \times 3 \times 3$ convolutions of stride length 1. Before convolution, the inputs are padded so that the dimensions of the outputs are equal to the dimensions of the inputs, \textit{i.e.} ``same'' padding in Tensorflow. Pooling layers are all $2 \times 2 \times 2$ max pooling with stride length 2. \begin{figure}[htbp] \centering \includegraphics[width=6in]{BlockConv.png} \caption{ \small Both the baseline and proposed method use repeated copies of the convolutional block above. The convolutional layers within a block always have the same number of filters. } \label{fig:Block} \end{figure} \begin{table}[htbp] \centering \caption{ \small Architectures of the baseline and proposed CNNs, from left to right. Blocks 1 through 4 are copies of the block above. Both architectures have a single hidden layer with 256 units. The hidden layer uses the same activation function as the convolutional layers, namely ELU.
} \begin{adjustbox}{center} \scalebox{0.8}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline &&& \multicolumn{4}{c|}{\textbf{\# Filters}} & & \\ \cline{4-7} \textbf{Approach} & \textbf{Segmentation} & \makecell{\textbf{Input} \\ \textbf{Channels}} & \textbf{Block 1} & \textbf{Block 2} & \textbf{Block 3} & \textbf{Block 4} & \makecell{\textbf{Hidden} \\ \textbf{Units}} & \makecell{\textbf{Output} \\ \textbf{Units}} \\ \hline Baseline & No & 1 & 8 & 16 & 32 & 64 & 256 & 1\\ \hline Proposed & Yes & $k^3$ & 64 & 32 & 16 & 8 & 256 & 1 \\ \hline \end{tabular} } \end{adjustbox} \label{tab:Arch} \end{table} The \textit{Baseline} architecture has 8 filters in its first block of convolutions, and double that number in every successive block, for a total of 64 filters in the final block. Our \textit{Proposed} approach has 64 filters in its first block, and half that every block after, for a total of 8 filters in the final block. A summary of both architectures is presented in Table \ref{tab:Arch}. Each layer of convolutional filters can be thought of as a 5-D tensor of shape $C \times 3 \times 3 \times 3 \times M$, where $C$ is the output dimension of the previous layer, and $M$ is the number of filters (the output dimension of the current layer). For example, in the first layer of the baseline network, the original input image is convolved with a 5-D tensor of shape $1 \times 3 \times 3 \times 3 \times 8$, while in the first layer of the proposed network, the segmented input image is convolved with a 5-D tensor of shape $k^3 \times 3 \times 3 \times 3 \times 64$. \subsection{Implementation Details}\label{sec:implementation} For all experiments, every model is trained five times with different random initializations until performance on the validation set no longer improves, approximately 700 epochs. Weights are initialized using Xavier initialization \citep{glorot2010understanding}. 
For each run, the model weights that achieved the best performance on the validation set during training were saved. These weights were restored after each run and the model was evaluated on the test set. We report the average test performance (and standard deviation) over the different initializations. For computational efficiency, all images are pooled by a factor of 3 in each dimension before being fed into the network, resulting in images of size $41 \times 49 \times 41$. We pool the images by averaging consecutive $3 \times 3 \times 3$ blocks; the specific aggregation function performed is discussed in the next section. We train all models with the Adam optimizer \citep{kingma2014adam}. We found no significant performance improvement for any model tested with non-default parameters, so we keep the defaults ($\alpha=0.001, \beta_1=0.9, \beta_2=0.999, \epsilon=10^{-8}$). We optimize with respect to mean squared error (MSE) on the training set. We report both MSE and mean absolute error (MAE) on the held-out test set. We implemented all models using Tensorflow \citep{abadi2016tensorflow} and ran all experiments on a GeForce GTX 1080 Ti GPU. To select the hyperparameter $k$ for our proposed model, we searched in the range $[1,2,3,4]$, where 1 corresponds to no segmentation and 4 corresponds to 64 regions. $k$ was chosen based on the model that achieved the lowest average MSE on the validation set.
\subsection{Prediction Error} In Table \ref{tab:PNC1} we report the MSE and the MAE for the task of predicting age from structural brain images. As described in Section \ref{sec:implementation}, reported performances below represent the average and standard deviation of five different initializations of each model evaluated on the held-out test set. \begin{table}[htbp] \centering \caption{\small Comparison of model performances for the task of predicting age on the PNC dataset. The regional segmentation rate $k$, as described in Section 3, refers to how finely the brain is segmented before filters are learned in each region. We find that 2 is the optimal setting for this task. The filter layout refers to either increasing the number of filters (8-16-32-64), or decreasing the number of filters (64-32-16-8), as the depth of the network continues.} \begin{adjustbox}{center} \begin{tabular}{|c|c|c|c|c|c|}\hline \textbf{Approach} & \textbf{Filter Layout} & \textbf{$k$} & \textbf{MSE} & \textbf{MAE} & \textbf{\makecell{Training Time\\ (minutes)}} \\ \hline Baseline & 8-16-32-64 & - & $3.72 \pm 0.20$ & $1.58 \pm 0.06$ & 52.5\\ \hline Proposed & 64-32-16-8 & 2 & $\mathbf{3.03 \pm 0.15}$ & $\mathbf{1.43 \pm 0.03}$ & 40 \\ \hline Segmentation only & 8-16-32-64 & 2 & $3.76 \pm 0.08$ & $1.59 \pm 0.03$ & \textbf{30} \\ \hline Reverse layout only & 64-32-16-8 & - & $3.64 \pm 0.35$ & $1.53 \pm 0.05$ & 117 \\ \hline \end{tabular} \end{adjustbox} \label{tab:PNC1} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=4in]{TestValdkPlot.png} \caption{\small Performance of proposed model plotted against the regional segmentation rate, $k$. The points are offset from integer values to better display both test and validation performance. Each point is the mean of five runs, and the error bars are the standard deviation across those runs. This chart demonstrates that for this particular problem, $k=2$ is a locally optimal hyperparameter. 
} \label{fig:TestKPlot} \end{figure} For the proposed model, we selected a regional segmentation rate of $k=2$ based on validation performance, resulting in eight different regions. Our experiments show that the combination of segmenting the brain into eight separate regions and reversing the order of filters results in improved age prediction from structural brain data. Results indicate that it is the combination of both modifications that improves performance; neither modification alone lowers the prediction error from the baseline. To understand how model performance varies with $k$, we plot both the validation and test performance of our proposed method for each setting of $k$ in Figure \ref{fig:TestKPlot}. Each point is the average performance of the model across five initializations, and error bars represent the standard deviation. For our proposed architecture, segmenting the brain too finely or leaving the brain whole results in decreased performance. We only used validation performances to do hyperparameter selection, but the same trend holds for both test and validation data. \subsection{Training Time} Table \ref{tab:PNC1} also displays the training time of the evaluated models. Training time refers to wall clock time, in minutes, per 700 epochs, the number of epochs taken to train each model. Each model was trained on the same GPU under the same load, for fair comparison. The table shows that reversing the layout of the filters in a CNN is a computationally expensive modification. Doing so more than doubles the training time from the equivalent baseline. Placing more filters earlier in the network means more filters are applied while the image data is larger and convolutions are more expensive. Table \ref{tab:PNC1} also shows that applying regional segmentation nearly halves training time.
This is due in part to the reduced number of fully connected parameters in the proposed network, as discussed in Section \ref{sec:fc}, and in part to the size of the feature maps in the network after the first convolution. A similar volume of data is input to the first convolution of the baseline and the segmentation only network. In the former, the images are arranged as ($41 \times 49 \times 41 \times 1$) arrays. In the latter, the images are arranged as ($23 \times 27 \times 23 \times 8$) arrays\footnote{Note that the dimensions of the segmented image are not exactly equal to ($\lfloor \frac{41}{2} \rfloor \times \lfloor \frac{49}{2} \rfloor \times \lfloor \frac{41}{2} \rfloor \times 8$) due to the padding boundary described in Section \ref{sec:Methods}.}. After the first convolution, the number of feature maps in each section of the network is the same, but the maps are roughly half as large in each spatial dimension in the segmentation only network, meaning the segmentation only network performs less computation in the remaining convolutional layers. The underlying implementation of the convolution operation also affects training time. Consider an input image of shape ($N \times X \times Y \times Z \times C$), where $N$ is the number of images in a batch, $X$, $Y$, and $Z$ are spatial dimensions, and $C$ is the number of channels in the image, and a convolutional filter tensor of shape ($C \times \ell \times \ell \times \ell \times M$), where $\ell$ is the size of the convolutional filter and $M$ is the number of filters. The convolution can be thought of as a matrix multiplication between a filter matrix of shape ($M \times C\ell^3$) and an input matrix of shape\footnote{This assumes that the output dimension is made to be equal to the input dimension using \textit{same} padding.} ($C\ell^3 \times NXYZ$).
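As a concrete illustration (our own sketch, not the paper's code), the two inner matrix dimensions of this multiplication can be computed for the first convolutional layer as a function of the segmentation rate $k$, ignoring the boundary padding:

```python
def gemm_dims(k, N=4, ell=3, X=41, Y=49, Z=41):
    """Inner dimensions of the convolution-as-matrix-multiplication for the
    first layer: the filter matrix has C * ell^3 columns and the input
    matrix has N * X * Y * Z columns, where C = k^3 channels after regional
    segmentation and each spatial dimension shrinks by a factor of k."""
    C = k ** 3
    return C * ell ** 3, N * (X // k) * (Y // k) * (Z // k)
```

With no segmentation, `gemm_dims(1)` returns $(27, 329476)$; at $k=2$ it returns $(216, 38400)$, bringing the two products much closer in magnitude.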
This multiplication is more efficient if the products $C \ell^3$ and $NXYZ$ are similar in magnitude, rather than one being significantly larger than the other \citep{chetlur2014cudnn}. If the input image is a single-channel, 3D brain image, and the filter size is small, the product $C\ell^3$ will be much smaller than $NXYZ$. In our baseline network, $C\ell^3 = 27$, while $NXYZ=4*41*49*41=329476$. Regional segmentation increases $C$ while decreasing $XYZ$ by a factor of $k^3$, making the products $C\ell^3$ and $NXYZ$ closer in magnitude and resulting in a more efficient convolution. To demonstrate how regional segmentation impacts the speed of convolutions, we simulated the forward pass of the convolution operation on a randomly-generated $4\times \frac{72}{k} \times \frac{72}{k} \times \frac{72}{k} \times k^3$ image. We convolved the image with a randomly generated $k^3 \times 3\times3\times3 \times 8$ filter, similar to the first convolutional layer of our baseline architecture. We varied the segmentation rate $k$ in the range $[1,2,3,4,6,8,9,12,18,24,36]$, choosing the divisors of 72 and forgoing the voxel boundary at the edges of regions to ensure that each convolution was applied over exactly the same amount of data. Each data-point of Figure \ref{fig:SimulatedConvolutions} displays the result of timing 500 forward-pass convolutions at the indicated segmentation rate. Figure \ref{fig:SimulatedConvolutions} demonstrates that initially as the segmentation rate increases and the products $C\ell^3$ and $NXYZ$ become closer in magnitude, the convolution speeds up. But, as the image becomes too finely segmented, $C\ell^3$ exceeds $NXYZ$ and the convolution slows down. \subsection{Robustness to Pooling Strategy} In both the baseline and proposed approach, we average pool the images before feeding them into the network to speed up training. Average pooling smooths out signals within the original images, but other pooling strategies may have different properties. 
To better understand how this smoothing affects the performance of our proposed approach relative to the baseline, we evaluate both models on two other pooling strategies. \textit{Max} pooling corresponds to taking the maximum value of every consecutive $3 \times 3 \times 3$ block. \textit{Naive} pooling refers to taking the first voxel and discarding the rest in every consecutive $3 \times 3 \times 3$ block. While average pooling smooths signals, max pooling emphasizes strong signals and naive pooling arbitrarily selects voxels based on order. Given these differences, we hypothesized that relative performance may vary based on the pooling type. To test this hypothesis, we train the baseline and our proposed model separately on the training images pooled with each strategy. We evaluate the MSE of each model on the held-out test set pooled with the corresponding strategy (Figure \ref{fig:PNC_pooling_strategy}). Average pooling is the best strategy for both the baseline model and our proposed method. Our proposed method, however, is more robust to the choice of pooling strategy than the baseline. Most importantly, our proposed method achieves a lower MSE under every pooling strategy than the baseline does under any pooling strategy. \begin{figure*} \centering \begin{subfigure}{0.54\textwidth} \centering \includegraphics[width=1\linewidth]{SimulatedConvolutions.png} \caption{}\label{fig:SimulatedConvolutions} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=1\linewidth]{TwoSampleComp.png} \caption{} \label{fig:PNC_pooling_strategy} \end{subfigure} \caption{\small Convolution time vs. segmentation rate on the left, and performance based on pooling type on the right. \textbf{(a)} Time to compute the forward pass of a convolution on a 3D, single-channel image batch after regional segmentation. The plot shows that segmenting the image decreases training time up to a point, but too finely segmenting the image results in increased training time.
\textbf{(b)} Plot of performance based on pooling type. \textit{Average} and \textit{Max} refer to average and max pooling, respectively, while \textit{Naive} refers to downsampling the image by keeping the first voxel in every consecutive $3 \times 3 \times 3$ block. Average pooling is the best pooling type for both the proposed model and the baseline. The proposed model outperforms the baseline using any pooling type, and is less sensitive to how the images are pooled. } \label{fig:TimeAndPooling} \end{figure*} \subsection{Sensitivity to Number of Fully Connected Parameters}\label{sec:fc} Our proposed model and the baseline model have a similar number of convolutional parameters (233,280 compared to 219,672). However, because the proposed model has fewer convolutional filters in later layers, it also has fewer parameters in the fully connected layers, despite having the same number of hidden units. Specifically, the proposed model has 16,640 weights in its fully connected layers, while the baseline has 590,080 such weights, despite both having 256 hidden units. In order to examine how much the number of fully connected parameters impacts performance, we varied the number of units in the hidden layer in our proposed approach in the range $[256, 2496, 4736, 6976, 9216]$. We varied the number of hidden units in our baseline in the range $[7, 69, 131, 194, 256]$. These ranges were chosen because they result in a similar number of fully connected parameters for each model, within 1\% of difference. Note that $256$ is the default number of hidden units for both models. The resulting performance of each model across these different settings is plotted in Figure \ref{fig:PNC_fc_param}. Having fewer fully connected parameters hurts the baseline. In contrast, the proposed model remains within the margin of error regardless of the number of fully connected parameters.
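The reported weight counts can be reproduced with a small sketch. They are consistent with flattened feature vectors of 64 (proposed) and 2304 (baseline) values entering the hidden layer; these feature sizes are our inference from the reported counts, not stated in the text:

```python
def fc_weights(n_features, n_hidden):
    """Weights in a single hidden layer plus a scalar output layer
    (biases excluded): n_features*n_hidden + n_hidden*1."""
    return n_features * n_hidden + n_hidden

print(fc_weights(64, 256))    # 16640  (proposed model)
print(fc_weights(2304, 256))  # 590080 (baseline model)
```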
These results suggest that the difference in performance between the proposed approach and the baseline is not due to the proposed approach having fewer fully connected parameters. \subsection{Varying the Amount of Training Data} Lack of data is a large concern in MRI studies, because gathering research-grade MRI scans is expensive and time-consuming. It is not uncommon for researchers to work with datasets of fewer than 200 patients \citep{zheng2017novel, meszlenyi2017resting}. Compared to other single-site datasets, the PNC dataset is relatively large. In our setup, we have 524 training examples. To better understand the impact of small training sizes on our conclusions, we varied the number of training examples in the range $[100, 200, 300, 400, 524]$. For each number of training examples, we independently and randomly selected examples without replacement from the original training set of 524. In Figure \ref{fig:PNC_training_set_size}, we plot the number of training examples vs. model performance in terms of MSE for both the baseline and the proposed approach. The line in purple plots the absolute difference in performance between the two models. The performance of both approaches declines as we decrease the number of training examples, as expected. However, the difference between the two models increases as the number of training examples decreases. This suggests that our method is more robust to small training sets. Furthermore, our proposed approach exhibits less variation in performance, indicating that it is potentially more robust to weight initialization.
\begin{figure*} \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1\linewidth]{FCPlot.png} \caption{} \label{fig:PNC_fc_param} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1\linewidth]{NumberExamples.png} \caption{} \label{fig:PNC_training_set_size} \end{subfigure} \caption{\small Investigating the robustness of observed improvements. \textbf{(a)} Plot of performance vs. number of parameters in the fully connected layers. The left side, 16,640 parameters, is the default for the proposed model. The right side, 590,080 parameters, is the default for the baseline model. We vary the number of fully connected parameters by changing the number of units in the fully connected layer. Reducing the number of parameters for the baseline model does not help performance: rather, it hurts it. \textbf{(b)} Plot of performance vs. training set size. The full training set, 524 examples, is on the right. The performance gap between our model and the baseline increases as the number of training examples decreases. } \label{fig:ParamsAndTraining} \end{figure*} \section{Conclusions}\label{sec:Conclusions} In this paper, we introduced two novel modifications to existing CNN architectures, inspired by the structure of brain image data. First, we apply different learned parameters to different regions in the brain. Second, we start with a large number of filters in the first layer and decrease the number of filters after each convolutional block. These modifications encourage the network to learn region-specific patterns in the brain. Combined, they are simple and easy to implement, and result in a model that trains faster than a comparable CNN baseline. In addition, using our proposed architecture, we consistently observed improved age prediction across multiple downsampling types and varying amounts of training data.
Our work suggests that there is a larger space of possible architectural choices to explore when adapting CNNs to brain imaging tasks. Due to fundamental differences between brain images and natural images, previous findings about how to build CNNs may not apply to neuroimaging data, or to other types of data outside the domain of natural images. This work provides a starting point to challenge assumptions that existing architectures make when applying CNNs to brain images. There exists a variety of ways to expand this work. We did not focus on increasing the depth of the neural network, or on techniques associated with deeper networks, like inception modules \citep{szegedy2015going} or skip connections \citep{he2016deep}. However, if publicly available brain image datasets increase by an order of magnitude, such techniques may be of interest. Our work also did not focus on comparing pooling types within the network, activation functions, or larger convolution sizes. However, a better understanding of how to best design architectures for 3D brain volumes demands investigation in these areas as well. We hope that our work inspires more questions into how to best design CNNs for neuroimaging learning tasks. Understanding how to challenge and adapt ideas from existing CNN architectures will be critical to better predicting a multitude of clinically relevant labels, from age to Alzheimer's. \acks{This work was supported by the Exercise and Sports Science Initiative (grant no. U056408), and the National Science Foundation (NSF award no. IIS-1553146). The views and conclusions in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Exercise and Sports Science Initiative or the NSF.}
\section{Introduction} Shock waves are one of the most destructive phenomena occurring during the collapse of cavitation bubbles, and therefore a topic of long-standing interest. The associated pressures, reaching values on the order of GPa~\cite{Pecha2000,Akhatov2001}, are able to wear metallic surfaces, which is a classic concern for ship propellers and hydraulic turbines~\cite{Silverrad1912,Arndt1981,Vogel1989,Escaler2006}. Further victims of cavitation-induced damage are, for example, artificial heart valves~\cite{Rambod1999}, liquid-propelled rocket engines~\cite{Jakobsen1971} and the prey of a mantis shrimp~\cite{Patek2005}. The damaging power can also be exploited for beneficial uses such as in medical~\cite{Brennen2015} (e.g.~shock wave lithotripsy~\cite{Field1991,Sass1991}, cancer therapy~\cite{Yu2004,Brennen2015}) and cleaning~\cite{Song2004} applications. However, predictive tools to characterize the key properties of cavitation-driven shocks are limited. In the quest to mitigate the harm they may cause or to maximize their benefit, we here make detailed observations of shocks of single cavitation bubbles and propose a framework to predict their `strengths'. Much progress has been made in the prediction of the damage potential of shock waves emitted by spherically collapsing bubbles~\cite{Hickling1964,Fujikawa1979,Akhatov2001,Fuster2010,Magaletti2015}. However, doing so for non-spherically collapsing bubbles is still an open problem. Bubbles may deform under the effect of, for example, nearby surfaces, inertial forces such as gravity, or passing shock waves. The collapse shock wave strengths have been shown, both experimentally and numerically, to vary with the bubble sphericity for bubbles collapsing near a rigid wall~\cite{Vogel1988,Ohl1999,Hsiao2014,Wang2016b}. Shocks from bubbles collapsing under the effect of a passing shock wave have been shown to be sensitive to the latter's timing and strength~\cite{Sankin2005}.
The shocks emitted at the collapse of an individual bubble are often referred to as a single event, yet it is known that deformed bubbles that are pierced by high-speed micro-jets produce several shock waves from multiple locations upon collapse~\cite{Ohl1999,Lindau2003,Supponen2015}. However, an understanding of the contribution of each shock emission mechanism to the final damage characteristics, as well as a systematic study of the influence of the bubble deformation on them, is still lacking, as was recently pointed out by Lauterborn~\cite{Lauterborn2013}. Although numerical simulations offer an excellent means to reproduce complex shock wave scenarios associated with non-spherical collapses~\cite{Johnsen2009,Chahine2015,Koukouvinis2016,Koukouvinis2016b}, observations for their validation are limited. Furthermore, we still lack an understanding of how the shocks from bubbles deformed by distinct sources differ. In this work, shock wave energies and pressures are systematically measured as a function of the various bubble parameters and asymmetries. The objective is to understand how the deformation of bubbles affects their detailed collapse shock wave emission. In particular, we aim to estimate, through visualizations and pressure measurements, the strengths and the timings of the distinct shock waves produced at the collapse of bubbles with geometries varying from highly spherical to strongly deformed by a nearby free surface. These data are then compared to bubbles deformed by a nearby rigid surface and by the hydrostatic pressure gradient, which is modulated in variable gravity aboard parabolic flights (60th and 62nd European Space Agency parabolic flight campaigns and the 1st Swiss parabolic flight). The advantage of a gravity-induced pressure gradient to deform bubbles is its uniformity in time and space, which leads to similar bubble `collapse shapes' across a wide range of bubble asymmetries~\cite{Supponen2016}.
Furthermore, any smooth pressure field can be approximated to first order by such a uniform pressure gradient. We exploit the large number of data and a broad parameter space to reach an empirical model for predicting the shock strengths for non-spherical bubbles, which is backed-up by theoretical arguments. This model applies the scaling laws for micro-jets, which we have recently developed in detail~\cite{Supponen2016}, to the shock wave emission of deformed cavitation bubbles. The deformation of bubbles collapsing near surfaces is usually quantified by the stand-off parameter $\gamma = h/R_{0}$, where $h$ is the distance between the bubble center and the surface and $R_{0}$ is the maximum bubble radius. Deformations caused by near surfaces and gravity can be compared by using the vector-parameter $\boldsymbol{\zeta}$~\cite{Supponen2016,Obreschkow2011}: \begin{equation} \boldsymbol{\zeta} = \left \{ \begin{array}{l l} -\rho\mathbf{g}R_{0}\Delta p^{-1} & \quad \text{gravitational field}\\ +0.195\gamma^{-2}\mathbf{n} & \quad \text{flat free surface}\\ -0.195\gamma^{-2}\mathbf{n} & \quad \text{flat rigid surface} \end{array} \right. \label{eq:zeta} \end{equation} where $\rho$ is the liquid density, $\mathbf{g}$ is the gravitational acceleration, $\Delta p = p_{0}-p_{v}$ is the driving pressure (where $p_{0}$ is the static pressure of the unperturbed liquid at the location of the bubble and $p_{v}$ is the vapor pressure) and $\mathbf{n}$ is the unit vector normal to the surface, in the direction from the surface to the bubble. $\boldsymbol{\zeta}$ is essentially the dimensionless equivalent of the Kelvin impulse, which is the linear momentum acquired by the liquid during the growth and the collapse of the bubble~\cite{Blake1988}. 
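Equation~(\ref{eq:zeta}) translates directly into small helpers for the anisotropy magnitude (a sketch; the water density and standard gravity values are assumptions of ours, and the function names are not from the text):

```python
RHO = 998.0   # water density [kg/m^3] (assumed)
G = 9.81      # standard gravity [m/s^2]

def zeta_gravity(R0, delta_p, g=G, rho=RHO):
    """|zeta| for a bubble of maximum radius R0 [m] deformed by gravity."""
    return rho * g * R0 / delta_p

def zeta_surface(gamma):
    """|zeta| for a bubble at stand-off gamma from a flat free or rigid
    surface; same magnitude for both, with opposite direction of zeta."""
    return 0.195 / gamma ** 2

print(zeta_surface(7.2))  # ~3.8e-3
```

As a consistency check, $\gamma = 7.2$ gives $\zeta \approx 3.8\times10^{-3}$ and $\gamma = 5$ gives $\zeta \approx 7.8\times10^{-3}$, matching the values quoted later in the text.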
A higher $\zeta \equiv |\boldsymbol{\zeta}|$ causes a more pronounced bubble deformation and delineates key parameters of the micro-jet, such as the jet speed or the jet impact timing, almost irrespective of the source of deformation for $\zeta<0.1$~\cite{Supponen2016}. We henceforth primarily use $\zeta$ to quantify bubble deformation, but also display the equivalent $\gamma$ for convenience. This paper is structured as follows. Firstly, Section~\ref{s:experiment} presents the experimental methods, describing the setup and the relevant calibrations. Section~\ref{s:details} shows detailed observations of single and multiple shock waves emitted by bubbles near a free surface. A framework for predicting shock peak pressures and energies is then proposed in Section~\ref{s:models}, along with comparisons between shocks from bubbles deformed by different sources (free/rigid surface and gravity). Finally, the results are discussed in Section~\ref{s:discussion}. \section{Experimental methods} \label{s:experiment} \begin{figure}[b] \begin{center} \includegraphics[width=\textwidth]{Fig1.pdf} \end{center} \caption{Top and side view schematics of the experimental setup. The dimensions are given in mm.} \label{fig:experiment} \end{figure} The central components of our experimental setup are shown in Fig.~\ref{fig:experiment}. A pulsed laser is expanded and focused in demineralized water by an immersed parabolic mirror, which produces a point-like plasma and thereby an initially highly spherical bubble~\cite{Obreschkow2013} that grows and, subsequently, collapses. The bubble and the associated shock waves are visualized using shadowgraphy with an ultra-high-speed camera (Shimadzu HPV-X2) reaching filming speeds up to 10~million frames per second with a 50~ns exposure time and a collimated backlight beam from a light emitting diode. 
The driving pressure $\Delta p$ can be adjusted by varying the static pressure $p_{0}$ in the test chamber between 0.08 and 1~bar with a vacuum pump. Tuning the laser power generates bubbles of energies $E_{0} = (4\pi/3)R_{0}^{3}\Delta p$ ranging from 0.1 to 28~mJ. This parameter space leads to a wide range of maximum bubble radii, $R_{0}=1$--$10$~mm, which are large enough for viscosity and surface tension to have a negligible effect on the bubble dynamics~\cite{Levkovskii1968}. To modulate the bubble deformation, we vary the bubble's distance to a surface ($h \sim 3$--$30$~mm) and/or the perceived gravity ($|\mathbf{g}|\sim 0$--$2$~$g$, where $g=9.81$~m\,s$^{-2}$), in addition to varying $R_{0}$ and $\Delta p$. The maximum radii are obtained from the recorded collapse time $T_{c}$ (i.e.~half oscillation time) of the bubble as $R_{0}=1.093T_{c}(\Delta p/\rho)^{1/2}\kappa^{-1}$~\cite{Rayleigh1917}, where $\kappa$ is a factor depending on the source and level of deformation. For bubbles collapsing near a free surface, $\kappa$ is a lifetime-shortening factor that can be approximated as $\kappa \approx 1-0.102\gamma^{-1}$~\cite{Gregorcic2007}. The bubbles deformed by gravity or a nearby rigid surface in this work are at $\zeta<10^{-2}$, and these deformations are therefore weak enough to justify the assumption $\kappa\approx1$. All measurements are made at room temperature. Additional details on our experimental setup and the parabolic flights may be found in ref.~\citep{Obreschkow2013}. \begin{figure} \begin{center} \includegraphics[width=.6\textwidth, trim=0.2cm 0cm 0.9cm 0.1cm, clip]{Fig2.pdf} \end{center} \caption{A typical hydrophone pressure signal of the shock wave emitted at the bubble generation. $t=0$~$\mu$s corresponds to the time instant of bubble generation.} \label{fig:gen} \end{figure} A needle hydrophone (75 $\mu$m sensor, manufactured by Precision Acoustics) is used to record the pressure of the shock waves.
The bandwidth of this hydrophone is guaranteed to extend above 30~MHz, and is thus capable of a detailed sampling of the shock waveform and of disentangling multiple fronts. The rise time upper bound is found to be approximately 15~ns, estimated from the time it takes for the pressure signal of the steep shock wave produced at the explosive bubble generation (Figure~\ref{fig:gen}) to rise from 10\% to 90\% of its maximum amplitude. The actual rise time of the shock wave is likely to be even shorter~\cite{Vogel1996}. The pressure signal, represented by an electrical voltage, is amplified and recorded at 100~MHz sampling frequency by an oscilloscope. The hydrophone sensor is located at a distance of $d = 44.5$~mm from the bubble center at an angle of 30$^{\circ}$ below the horizontal plane with a planar incidence of the shock wave onto the sensor. The shock waves take approximately 30~$\mu$s to reach the hydrophone after being generated. Being thin (needle thickness is 0.3~mm) and located far relative to the bubble size, the presence of the hydrophone needle is assumed to have a negligible effect on the bubble dynamics. \begin{figure} \begin{center} \includegraphics[width=.9\textwidth]{Fig3.pdf} \caption{Energies of shock waves emitted at bubble generation (left) and collapse (right) for various bubble energies $E_{0}$. The colors indicate the level of $\zeta$. The solid lines show $E_{S}=E_{0}$.} \label{fig:E0_ES} \end{center} \end{figure} We assume spherical propagation of the shock waves, and estimate their energies as \begin{equation} E_{S}=aU_{\rm max}^{b}\int U(t)^{2}{\rm d}t \label{eq:ES} \end{equation} where $U(t)$~[V] is the hydrophone voltage signal (containing the full shock wave scenario in the case of multiple collapse shocks, but excluding any reflections from boundaries), $U_{\rm max}$ is the maximum value of $U(t)$ and $a$ and $b$ are calibration constants. 
If the shock propagated with no energy dissipation, then $a=4\pi d^{2}\left(\rho c\right)^{-1}G^{-2}$~\cite{Vogel1988} (where $c$ is the sound speed in the liquid and $G$ is the gain in units of $[V/Pa]$) and $b=0$. An exponent $b>0$ is used to approximately compensate for non-linear dissipation (e.g.~due to inelastic heating, induced micro-cavitation, etc.), whose relative effect increases with pressure. As the precise gain $G$ is unknown in our current setup, and non-linear dissipation is expected, we treat $a$ and $b$ as positive free parameters. We fit these parameters to simultaneously satisfy two conditions: (1) The energy of the laser-induced shock at the bubble generation $E_{\rm S,gen}$ scales linearly with the bubble energy $E_{0}$~\cite{Vogel1988}, and (2) The total energy of the shock(s) emitted at the bubble collapse $E_{\rm S,coll}$ is bounded by the difference between the bubble energy $E_{0}$ and the rebound energy $E_{\rm reb}$. For bubbles that collapse spherically ($\zeta < 10^{-3}$) and produce no jets, we assume $E_{\rm S,coll}\approx E_{0}-E_{\rm reb}$~\cite{Tinguely2012}. We find that $a$ is such that $E_{\rm S,gen}/E_{0} \approx 0.75$ (i.e.~43\% of the absorbed laser energy goes into the generation shock and 57\% goes into the bubble) and $b\approx 0.45$, indicating slight non-linear dissipation. Figure~\ref{fig:E0_ES} displays the calibrated energies both for bubble generation and collapse shocks waves for various $E_{0}$ and $\zeta$, clearly showing the linear relationship between $E_{\rm S,gen}$ and $E_{0}$ and that the collapse shock energies tend to be lower for increasing $\zeta$. Pressures are then computed from the calibrated energies as $p(t)=U(t)/G$, where the gain $G$ is determined for each individual bubble separately as $G^{2}=4\pi d^{2}\left(\rho c\right)^{-1}\int U(t)^{2}{\rm d}t/E_{S}$. 
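The calibration in Eq.~(\ref{eq:ES}) and the per-bubble gain can be sketched numerically as follows (trapezoidal integration of a sampled signal; $b\approx0.45$ is the fitted value from the text, while the function names and material constants are ours):

```python
import numpy as np

def integrate_sq(t, U):
    """Trapezoidal integral of U(t)^2 over the recorded signal."""
    U2 = U ** 2
    return float(np.sum(0.5 * (U2[1:] + U2[:-1]) * np.diff(t)))

def shock_energy(t, U, a, b=0.45):
    """E_S = a * U_max^b * integral U(t)^2 dt  (Eq. ES)."""
    return a * np.max(U) ** b * integrate_sq(t, U)

def gain(t, U, E_S, d=44.5e-3, rho=998.0, c=1483.0):
    """Per-bubble gain G [V/Pa]: G^2 = 4*pi*d^2/(rho*c) * int U^2 dt / E_S."""
    return np.sqrt(4.0 * np.pi * d ** 2 / (rho * c) * integrate_sq(t, U) / E_S)
```

In the dissipation-free limit ($b=0$ and $a = 4\pi d^{2}(\rho c)^{-1}G^{-2}$), the recovered gain reduces to the constant $G$, as expected.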
Using a variable $G$ allows us to compare the signals obtained in different conditions, for which the recorded pressures are differently affected by the shock's non-linear dissipation. \section{Detailed observations} \label{s:details} \subsection{Spherical collapse} \label{s:spherical} \begin{figure} \begin{center} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{Fig4a.jpg} \caption{ } \label{fig:spherical_visu} \end{subfigure} \begin{subfigure}{0.58\textwidth} \includegraphics[width=\textwidth]{Fig4b.pdf} \caption{ } \label{fig:spherical_hydro} \end{subfigure} \end{center} \caption{Cavity of $R_{0} = 3.8$~mm collapsing spherically at $\zeta < 10^{-3}$ and emitting a single shock wave: (a)~High-speed shadowgraph visualization. The interframe time is 100~ns and the black bar shows the 1~mm scale. See supplementary movie \textit{Video\_Fig4.avi}. (b)~Pressure recorded by the hydrophone. The inset shows the whole bubble oscillation, where the orange and blue circles refer to generation and collapse shock wave peak pressures, respectively. The dashed line shows $p(t)-p_{0}$ where $p(t)$ is the Rayleigh pressure model computed from Eq.~(\ref{eq:pre}) up to the shock peak, and the dotted line extends the curve to the time at which the bubble is estimated to reach a radius of $R=100$~$\mu$m.} \label{fig:spherical} \end{figure} A spherical bubble collapse emits a single shock front that is spherically symmetrical, as visualized in Figure~\ref{fig:spherical_visu}. This shock is well studied and arises from the compression of the incondensable gases inside the bubble overcoming the high pressures in the liquid around the bubble in the final collapse stage, which makes the liquid rapidly invert its motion as the bubble rebounds~\cite{Hickling1964}.
The gases inside the bubble are compressed so violently that they heat up to temperatures reaching levels of visible light emission, a phenomenon known as luminescence, which is visible in frame 5 of Figure~\ref{fig:spherical_visu} and implies that the bubble reaches its minimum size during the 50~ns exposure time of this image. The rebound bubble then forms a compression wave that propagates outwards and quickly steepens to form a shock front, as seen in frames 6-8. The corresponding hydrophone measurement of the shock wave is shown in Figure~\ref{fig:spherical_hydro}. Assuming $1/r$ spreading of the spherical wave and taking the luminescence spot in Figure~\ref{fig:spherical_visu} as the minimum bubble size ($R_{\rm min}\approx100$~$\mu$m), the lower bound for the peak pressure at the bubble wall at minimum bubble radius is estimated as $2$~GPa, which is in agreement with previously estimated values~\cite{Lauterborn2013}. The actual value is likely much higher, because we overestimate the minimum bubble radius, which our apparatus is not able to capture since the luminescence and the dark region around the bubble hide this information. Using the Keller-Miksis model~\cite{Keller1980}, with the initial gas pressure adjusted by numerically fitting the model to the observed radial evolution of the bubble (first and second oscillation), we would expect a minimum bubble radius of $R_{\rm min}\approx15~\mu$m, and thereby a peak pressure of $12$~GPa. In agreement with previous research, we find that the most energetic shock waves are emitted by highly spherical collapses, reaching up to about 90\% of the initial bubble energy. The bubbles here are found to emit a single shock front at anisotropies up to $\zeta \approx 10^{-3}$ (equivalent to $\gamma \approx 14$), which is also the approximate limit for the appearance of a micro-jet piercing the bubble in our setup~\cite{Supponen2016}.
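The 2~GPa lower bound follows directly from the assumed $1/r$ decay between the minimum bubble radius and the hydrophone; a minimal sketch (the far-field peak of 4.5~MPa used here is a representative value of ours, not a quoted measurement):

```python
def wall_peak_pressure(p_far, d, R_min):
    """Lower bound on the peak pressure at radius R_min, assuming a
    spherical shock decaying as 1/r out to the hydrophone distance d."""
    return p_far * d / R_min

# Representative numbers: a few MPa measured at d = 44.5 mm, R_min ~ 100 um.
print(wall_peak_pressure(4.5e6, 44.5e-3, 100e-6) / 1e9)  # ~2 GPa
```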
In the last stages of the collapse, the pressure in the liquid near the bubble wall increases to values so high that it deflects light, producing the shaded ring around the bubble in Figure~\ref{fig:spherical_visu} (frames 2-4). This pressure has previously been predicted to reach thousands of bars~\cite{Rayleigh1917,Hickling1964} and experimentally detected using Mach-Zehnder interferometry~\cite{Ward1991} or elevated ambient pressures~\cite{Sukovich2017}. However, it is interesting that our setup is able to visualize it using simple shadowgraphy at atmospheric pressure. This is due to the bubble's high initial sphericity allowing it to reach very small radii upon its exceptionally spherical collapse. The incompressible model for the pressure distribution around the bubble, developed by Rayleigh a century ago, is given as follows~\cite{Rayleigh1917}: \begin{equation} \frac{p}{p_{0}} = 1+\frac{R}{3r}\left(\frac{R_{0}^{3}}{R^{3}}-4\right)-\frac{R^{4}}{3r^{4}}\left(\frac{R_{0}^{3}}{R^{3}}-1\right) \label{eq:pre} \end{equation} where $r$ is the radial distance from the bubble center. Considering the lower bound for the compression ratio of the bubble in Figure~\ref{fig:spherical_visu} ($R_{0}/R_{\rm min} > 40$), we expect the maximum peak pressure to be in the order of GPa in the incompressible framework. The pressure buildup is visible in the hydrophone signal in Figure~\ref{fig:spherical_hydro} as a relatively slow rise preceding the peak pressure of the shock. We may compute the pressure evolution in time from Eq.~(\ref{eq:pre}) at the radial distance where the hydrophone is located ($r=44.5$~mm), assuming the time evolution of the bubble radius to follow the analytical approximation $R(t)\approx R_{0}\left(1-t^{2}\right)^{2/5}$~\cite{Obreschkow2012} (where $t$ is the time normalized to collapse time $T_{c}$), down to $R_{\rm min}\approx 100$~$\mu$m. 
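Equation~(\ref{eq:pre}) and the analytical radius approximation translate into two short helpers (a sketch; note that at the bubble wall, $r=R$, the expression correctly reduces to $p=0$ for an empty cavity):

```python
def rayleigh_pressure_ratio(R, r, R0):
    """p/p0 in the liquid at radius r >= R around an empty cavity of
    instantaneous radius R and maximum radius R0 (Rayleigh, Eq. pre)."""
    x = (R0 / R) ** 3
    return 1.0 + R / (3.0 * r) * (x - 4.0) - R ** 4 / (3.0 * r ** 4) * (x - 1.0)

def bubble_radius(t_norm, R0):
    """Analytical collapse approximation R(t) = R0*(1 - t^2)^(2/5),
    with t normalized to the collapse time."""
    return R0 * (1.0 - t_norm ** 2) ** 0.4
```

Evaluated at $R=100$~$\mu$m, $r=44.5$~mm and $R_{0}=3.8$~mm, the predicted overpressure at the hydrophone is a few MPa, on the order of the value quoted in the text.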
The computed pressures from Eq.~(\ref{eq:pre}) can be roughly compared with the hydrophone signal if the delay in the far-field caused by the finite sound speed is accounted for. Furthermore, the shock pressure peak is assumed to represent a time approximately 100~ns preceding the final collapse instant, since the shock wave is expected to propagate over the first $\sim 300$~$\mu$m at supersonic speed~\cite{Vogel1996}. The average shock speed during the exposure of the first frame after the collapse is estimated at approximately 3000~m\,s$^{-1}$ from Figure~\ref{fig:spherical_visu}, and therefore the shock wave is indeed estimated to reach the hydrophone $\Delta t \approx 102$~ns earlier than the pressure buildup, whose information is assumed to propagate at the sound speed. As seen in Figure~\ref{fig:spherical_hydro}, the computed (dashed line) and measured (solid line) pressure evolutions almost up to the signal peak are surprisingly similar. The good agreement is remarkable considering our unconventional pressure calibration. The model is not able to reproduce the shock wave because it is incompressible (dotted line), and when the bubble reaches a radius of $R=100$~$\mu$m, the predicted pressure at the hydrophone location is $p-p_{0}=3.8$~MPa, which is close to the measured peak pressure, very likely by coincidence. The pressure rise, in addition to the tensile part of the shock wave tail, is the clearest difference between the measured waveform from a spherical collapse and that of the bubble generation (Figure~\ref{fig:gen}). \subsection{Non-spherical collapse: Bubbles near a free surface} \begin{figure}[b] \begin{center} \includegraphics[width=\textwidth]{Fig5.pdf} \caption{Illustration of the bubble shapes at jet impact for different stand-off distances $\gamma$ from the free surface. The corresponding values for $\zeta$ from left to right are $\zeta=0.54$, 0.30, 0.20, 0.14, 0.10 and 0.076. The shapes of the free surface are shown as a dotted line.
The shapes have been obtained numerically using potential flow theory.} \label{fig:freeshapes} \end{center} \end{figure} The dynamics of bubbles near free surfaces has been extensively studied in the past experimentally, theoretically and numerically~\cite{Chahine1977,Blake1981,Blake1987,Wang1996,Robinson2001,Pearson2004,Zhang2016,Koukouvinis2016}, yet no study to date has focused specifically on their shock wave emission. The advantage of studying bubbles near a free surface is the contact avoidance between the bubble and the surface, thus allowing free collapse and rebound dynamics, as the bubble migration and the micro-jet are directed away from the surface (in contrast to a rigid surface). While bubbles near a free surface form micro-jets that have similar characteristics to bubbles deformed by a rigid surface~\cite{Supponen2016}, their shapes at the final collapse stages have significant differences, which may give us some further insight into the distinct shock wave emission mechanisms. In particular, for $\gamma=1$--3, the micro-jet formed during the collapse is broad and impacts the opposite bubble wall on a ring rather than a single point, some examples being illustrated in Figure~\ref{fig:freeshapes}. At lower values of $\gamma$, the micro-jet becomes narrow and the spike formed on the free surface increases in height. The shapes in Figure~\ref{fig:freeshapes} were obtained numerically using potential flow theory (boundary integral method~\cite{Supponen2016,Taib1983,Blake1987,Robinson2001}\footnote{The code for the numerical simulations is available online at \url{https://obreschkow.shinyapps.io/bubbles}~\cite{Supponen2016}.}) and have previously been validated by their good agreement with experiments~\cite{Supponen2016}. We now present observations of shock waves from bubbles collapsing near a free surface at different levels of $\zeta$.
Non-spherically collapsing bubbles that produce micro-jets generate multiple shock waves, which are clearly observed on the shadowgraph images at $\zeta > 10^{-3}$. However, they only become clearly distinct events on the hydrophone signal beyond $\zeta \sim 8\times10^{-3}$ ($\gamma \sim 5$). \begin{figure} \begin{center} \begin{subfigure}{0.4\textwidth} \begin{overpic}[width=\textwidth]{Fig6a.jpg} \end{overpic} \caption{ } \label{fig:weak_visu2} \end{subfigure} \begin{subfigure}{0.58\textwidth} \includegraphics[width=\textwidth]{Fig6b.pdf} \caption{ } \label{fig:weak_hydro2} \end{subfigure} \end{center} \caption{Cavity of $R_{0} = 4.1$~mm at $\zeta = 3.8\times10^{-3}$ ($\gamma = 7.2$): (a)~High-speed shadowgraph visualization. The interframe time is 100~ns and the black bar shows the 1~mm scale. See supplementary movie \textit{Video\_Fig6.avi} (b)~Pressure recorded by the hydrophone. The inset shows the whole bubble oscillation, where the orange and blue circles refer to generation and collapse shock wave peak pressures, respectively.} \label{fig:intermediate_lum} \end{figure} Figure~\ref{fig:intermediate_lum} shows selected shadowgraph images and the corresponding hydrophone pressures for a bubble collapsing at $\zeta = 3.8\times10^{-3}$. The first sign of asymmetry in the bubble collapse, together with the bubble's displacement, is the shaded region appearing near the upper bubble wall where the downward micro-jet is forming (starting from frame 2 in Figure~\ref{fig:weak_visu2}). It is similar to the gradual pressure buildup observed for the spherical collapse in Figure~\ref{fig:spherical_visu}, but not spherically symmetric. It is also in agreement with reported numerical simulations of jetting bubbles, finding higher pressures at the root of the jet relative to the rest of the pressure field~\cite{Chahine2015,Koukouvinis2016,Koukouvinis2016b,Li2016}. 
The shaded region eventually surrounds most of the bubble in frame 5, and two clear shock fronts are visible in frame 6 following the collapse. We observe luminescence at the tip of the bubble in frame 5, which also appears to be the center of the most pronounced shock wave visible in the subsequent images. Although it is much weaker compared to the light emitted in the spherical case, the observed flash suggests a high gas compression between the jet and the opposite bubble wall. Interestingly, the first shock front in Figure~\ref{fig:weak_visu2} is produced on the side of the bubble where the initial pressure rise in the liquid occurred. The hydrophone is unable to distinguish the first shock wave from the rest due to its location and temporal resolution, but it records the gradual pressure rise occurring on the sides of the bubble preceding the main shock wave (Figure~\ref{fig:weak_hydro2}). \begin{figure}[b] \begin{center} \begin{subfigure}{0.4\textwidth} \begin{overpic}[width=\textwidth]{Fig7a.jpg} \put (27,27) {\textbf{1}} \put (52,27) {\textbf{2}} \put (2,2) {\textbf{3}} \put (27,2) {\textbf{4}} \end{overpic} \caption{The interframe time is 200~ns.} \label{fig:jet_visu} \end{subfigure} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\textwidth]{Fig7b.pdf} \caption{ } \label{fig:jet_hydro} \end{subfigure} \begin{subfigure}{0.4\textwidth} \begin{overpic}[width=\textwidth]{Fig7c.jpg} \put (26,26) {\textbf{1}} \put (76,26) {\textbf{2}} \put (1,1) {\textbf{3}} \put (51,1) {\textbf{4}} \end{overpic} \caption{The interframe time is 300~ns.} \label{fig:tip_visu} \end{subfigure} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\textwidth]{Fig7d.pdf} \caption{ } \label{fig:tip_hydro} \end{subfigure} \begin{subfigure}{0.4\textwidth} \begin{overpic}[width=\textwidth]{Fig7e.jpg} \put (26,26) {\textbf{1}} \put (76,26) {\textbf{3}} \put (26,1) {\textbf{2}} \put (76,1) {\textbf{5}} \end{overpic} \caption{The interframe time is 600~ns.} \label{fig:tip_visu2} 
\end{subfigure} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\textwidth]{Fig7f.pdf} \caption{ } \label{fig:tip2_hydro} \end{subfigure} \end{center} \caption{Caption on next page} \end{figure} \begin{figure} \begin{center} \ContinuedFloat \captionsetup{list=off} \begin{subfigure}{0.4\textwidth} \begin{overpic}[width=\textwidth]{Fig7g.jpg} \put (26,26) {\textbf{1}} \put (76,26) {\textbf{3}} \put (1,1) {\textbf{5}} \put (51,1) {\textbf{2}} \put (0,20) {+15~$\mu$s} \put (25,20) {+13~$\mu$s} \end{overpic} \caption{The interframe time is 400~ns unless otherwise indicated.} \label{fig:torus_visu} \end{subfigure} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\textwidth]{Fig7h.pdf} \caption{ } \label{fig:torus_hydro} \end{subfigure} \end{center} \caption{(Continued) Selected images (left) and hydrophone signal (right) for cavities of (a),(b) $R_{0} = 3.6$~mm at $\zeta = 2.9\times10^{-2}$ ($\gamma = 2.6$), (c),(d) $R_{0} = 3.6$~mm at $\zeta = 4.6\times10^{-2}$ ($\gamma = 2.1$), (e),(f) $R_{0} = 3.2$~mm at $\zeta = 0.19$ ($\gamma = 1$), and (g),(h) $R_{0} = 3.0$~mm at $\zeta = 0.33$ ($\gamma = 0.77$). The shock waves are denoted as: 1) jet impact, 2) torus collapse, 3) tip bubble collapse, 4) second torus collapse and 5) second tip bubble collapse shock waves. The black bars show the 1~mm scale. The insets show the whole bubble oscillation, where the orange and blue circles refer to generation and collapse shock wave peak pressures, respectively. See supplementary movies \textit{Video\_Fig7a.avi}, \textit{Video\_Fig7c.avi}, \textit{Video\_Fig7e.avi} and \textit{Video\_Fig7g.avi} (\textit{Video\_Fig7g.avi} combines films made of two separate bubbles due to the long duration of the events and the limited number of frames captured by the camera.
The events are highly repetitive.)} \label{fig:intermediate1} \end{figure} Figures~\ref{fig:jet_visu}--\ref{fig:torus_hydro} show images and the corresponding measured shock pressures for more deformed bubbles, collapsing at different distances from the free surface at $\zeta = 2.9\times10^{-2}$, $4.6\times10^{-2}$, 0.19 and 0.33. The recorded peak pressures are significantly lower than in the more spherical cases, and many distinct shock wave events are observed. The first pressure peak in all cases corresponds to the water hammer induced by the jet impact. Such a shock has been observed in the past for non-spherically collapsing bubbles both experimentally~\cite{Ohl1999,Lindau2003,Supponen2015} and numerically~\cite{Johnsen2009,Koukouvinis2016}. It produces a torus-like shock wave because its contact on the opposite bubble wall is not a single point but a circular line (see Figure~\ref{fig:freeshapes}), clearly visible on the images as two shock source `points' on the sides of the bubble. If the jet is broad enough, the hydrophone may detect two individual pressure peaks, such as in Figure~\ref{fig:tip2_hydro}, owing to such a torus-like shock having two fronts on the hydrophone axis that reach the sensor. Subsequently, the jet separates a part of the vapor at the `tip' from the rest of the bubble, with this separation being particularly clear in Figures~\ref{fig:tip_visu} and \ref{fig:tip_visu2} as a horizontal line that cuts the bubble and implies that the vapor in that zone has disappeared. It is difficult to tell with certainty that the first shock wave results from a jet impact in Figure~\ref{fig:jet_visu} due to the short time intervals between the distinct events. However, observing several bubbles between $\zeta=2.9\times10^{-2}$ and $4.6\times10^{-2}$ (of which the results are summarized later in Section~\ref{s:dis}), a systematic variation of the shock timings and strengths with $\zeta$ was noted.
The identification of each peak in Figure~\ref{fig:jet_hydro} was therefore done accordingly. The peak pressure associated with the jet impact decreases with increasing $\zeta$, and is barely detected at $\zeta =0.33$. At $\zeta = 2.9\times10^{-2}$ and $4.6\times10^{-2}$ (Figures \ref{fig:jet_visu}--\ref{fig:tip_hydro}), the jet impact is followed by the collapse of the toroidal bubble. The associated shocks are torus-like and meet on the jet axis in the middle of the bubble, which is known to sometimes produce a `counter-jet', a vertical column-like cluster of micro-cavities~\cite{Lindau2003,Supponen2016}. The torus collapse shock may also yield two individual peaks in the pressure signal, such as in Figures~\ref{fig:tip_hydro} and \ref{fig:tip2_hydro}. The peak pressure of the torus collapse shock first decreases with increasing $\zeta$ (Figures~\ref{fig:jet_hydro} and \ref{fig:tip_hydro}), and then increases again slightly (Figures~\ref{fig:tip2_hydro} and \ref{fig:torus_hydro}). The next pressure peak in Figures~\ref{fig:jet_hydro}~and~\ref{fig:tip_hydro} corresponds to the tip bubble collapse. It appears to be the dominant shock in the collapse scenario at these $\zeta$. The tip bubble collapse shock triggers a second collapse of the rebounding toroidal bubble, which emits a further shock wave manifested as the fourth pressure peak in the signal. The second torus collapse pressure peak is considerable at $\zeta = 2.9\times10^{-2}$ but barely detected by the hydrophone at $\zeta = 4.6\times10^{-2}$. As seen in Figures~\ref{fig:tip_visu2}~and~\ref{fig:torus_visu}, at a higher $\zeta$ the tip bubble collapse and the torus collapse change order. In Figure~\ref{fig:torus_visu} the tip bubble is very small and its collapse follows the jet impact so closely that it is difficult to distinguish the shocks they emit.
At $\zeta = 0.19$ it is the torus collapse that triggers a second collapse of the tip bubble, while at $\zeta = 0.33$ the tip bubble is able to collapse naturally a second time long before the torus collapse. In Figure~\ref{fig:torus_visu}, the compression of the toroidal bubble is highly non-uniform, yielding multiple peaks that generate a noisy hydrophone signal (Figure~\ref{fig:torus_hydro}). \begin{figure} \begin{center} \begin{subfigure}{0.33\textwidth} \begin{overpic}[width=\textwidth]{Fig8a.jpg} \put (2,93) {\textcolor{white}{$t=0.46$~ms}} \put (29,17) {\textbf{1}} \put (35,17) {$\boldsymbol{\rightarrow}$} \put (19,5) {\textbf{2}} \put (25,5) {$\boldsymbol{\rightarrow}$} \put (63,19) {\textbf{3}} \put (63,11) {$\boldsymbol{\downarrow}$} \end{overpic} \caption{ } \label{fig:Sec_cav1} \end{subfigure} \begin{subfigure}{0.33\textwidth} \begin{overpic}[width=\textwidth]{Fig8b.jpg} \put (2,93) {\textcolor{white}{$t=0.49$~ms}} \put (33,56) {\textbf{3}} \put (39,56) {$\boldsymbol{\rightarrow}$} \put (69,33) {\textbf{2}} \put (61,39) {$\boldsymbol{\nwarrow}$} \put (22,19) {\textbf{1}} \put (22,28) {$\boldsymbol{\uparrow}$} \put (45,1) {\textbf{4}} \put (35,3) {$\boldsymbol{\nwarrow}$} \put (51,3) {$\boldsymbol{\nearrow}$} \end{overpic} \caption{ } \label{fig:Sec_cav2} \end{subfigure} \end{center} \caption{Visualization of secondary cavitation resulting from the passage of rarefaction waves for the same bubble as in Figure~\ref{fig:torus_visu} at two different instants: (a)~Secondary cavitation (1) below the bubble, generated by the tip bubble collapse shock wave (2) turned into a rarefaction wave (3) after reflecting at the bubble's interface; and (b)~Secondary cavitation visible in the pre-heated cone-shaped zone in the laser path (1), as streamers along the micro-jet flow (2) and as a vertical column (3), generated by the rarefaction waves (4) caused by the reflection of torus collapse shock waves at the free surface. 
These are selected images from the supplementary movie \textit{Video\_Fig7g.avi}.} \label{fig:secondary} \end{figure} The shock wave strengths are also visible as the darkness levels of the corresponding image pixels owing to their ability to deflect light, which is seen, for example, in Figure~\ref{fig:tip_visu} where the tip bubble shock wave is clearly the most pronounced of all the events. The time intervals between each event substantially increase with $\zeta$. When the bubble collapses very close to the free surface, the hydrophone also detects the reflected rarefaction waves following closely the original shocks and contributing to the noise in the signal of Figure~\ref{fig:torus_hydro}. These waves are visible in all movies of Figure~\ref{fig:intermediate1} and, due to their negative pressure resulting from the reflection at the free surface, they generate secondary cavitation in the bubble's neighborhood, as shown in Figure~\ref{fig:secondary}. The secondary cavities are visible as clusters of micro-bubbles most prominently in the path of the focused laser, where the liquid is pre-heated and thereby the nucleation of cavities is facilitated, and between the bubble and the free surface (Figure~\ref{fig:Sec_cav2}). Interestingly, some of these clusters, arranged in streamers towards the central axis of the toroidal bubble, delineate the flow induced by the formation of the micro-jet. The vertical column of micro-bubbles between the toroidal bubble and the free surface in Figure~\ref{fig:Sec_cav2} appears to result from the confluence of the rarefaction waves that are the reflections of the shocks initially emitted by the torus collapse. For the same bubble, secondary cavitation resulting from the shock emitted at the first tip bubble collapse is also observed below the bubble, right after the jet impact, as seen in Figure~\ref{fig:Sec_cav1}. 
Here the negative pressure results from the reflection at the bubble interface, and the rarefaction wave closely follows the original shock wave, which explains the significant tensile tail of the tip bubble collapse peak captured by the hydrophone in Figure~\ref{fig:torus_hydro}. \subsection{Energy distribution and event timings} \label{s:dis} The observations of the distinct shock wave events and their corresponding pressures show important variations with different bubble asymmetries. The energy of the observed shock waves can be estimated from the hydrophone pressure signal via Eq.~(\ref{eq:ES}), where the integration range is selected by identifying the pressures associated with each individual event from the high-speed visualizations. It should be noted that this method assumes spherically symmetric propagation of the shock wave. Some shocks, especially the jet impact shock, might have some directionality, biasing their energy measurement. Indeed, it has been shown numerically that jet impact-induced shocks are dependent on the orientation with respect to the jet close to the bubble~\cite{Johnsen2009,Hsiao2014}. However, the symmetric shock shadings seen in the high-speed visualizations far from the bubble center (not shown in figures) suggest that this directionality must be sub-dominant. The shock pressure dependence on orientation likely reduces as the wave propagates and decreases in amplitude. We nonetheless caution that directionality is a potential source of systematic error, which might be reduced in future experiments by using multiple hydrophones in different directions. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Fig9.pdf} \end{center} \caption{Normalized shock wave energy for each shock emission mechanism from bubbles deformed by a near free surface, as a function of $\zeta$ (and corresponding $\gamma$, top axis).
Numerically calculated bubble shapes at jet impact are shown for $\zeta= 10^{-2}$, $6\times10^{-2}$ and 0.3.} \label{fig:z_E_all} \end{figure} The fraction of the bubble's initial energy $E_{0}$ distributed to the distinct shock waves for bubbles collapsing near a free surface is shown in Figure~\ref{fig:z_E_all} as a function of the anisotropy parameter $\zeta$ (and the equivalent $\gamma$). We only measured bubbles up to $\zeta\sim0.3$ ($\gamma \sim 0.8$), beyond which the free surface resulted in severe perturbations in the hydrophone signal due to the reflected rarefaction waves. The driving pressure was kept at $\Delta p > 75$~kPa in order to avoid simultaneous deformations by the free surface and gravity, which could lead to more complex shapes at the bubble collapse (e.g.~bubble splitting or annular jets~\cite{Cui2016}). The energies of the three main shock waves, i.e.\ the jet impact, tip bubble collapse and torus collapse shocks, vary as functions of $\zeta$. Interestingly, each of them dominates a certain range of $\zeta$, as seen in Figure~\ref{fig:z_E_all}. For bubbles that produce jets, the jet impact shock appears to dominate up to $\zeta \sim 2\times10^{-2}$. The tip bubble shock wave clearly dominates in the range $2\times10^{-2} < \zeta < 0.15$. Beyond $\zeta \sim 0.15$, the torus collapse shock wave is the most energetic, yet weak in relative terms with less than $10\%$ of the initial bubble energy. The torus collapse energy is particularly low in the range $2\times10^{-2}<\zeta < 0.1$, coinciding with the domination of the tip bubble. The second torus collapse and the second tip bubble collapse emit shock waves with negligible energy compared to the others, which is why they have been excluded from the figures.
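The shock energies in Figure~\ref{fig:z_E_all} follow from integrating the hydrophone trace over each event's time window via Eq.~(\ref{eq:ES}), $E_{S} = (4\pi d^{2}\rho^{-1}c^{-1})\int p^{2}\,\text{d}t$. This estimate can be sketched numerically as follows; the water properties, the hydrophone distance and the synthetic triangular pulse are illustrative assumptions rather than measured values:

```python
import math

# Illustrative sketch of the shock-energy estimate
# E_S = (4*pi*d^2/(rho*c)) * integral(p^2 dt),
# evaluated with a simple rectangle rule over one event's time window.
# The material constants and the synthetic pulse below are assumptions.

RHO = 998.0   # water density [kg/m^3]
C = 1483.0    # speed of sound in water [m/s]

def shock_energy(pressure, dt, d):
    """Acoustic energy of a shock sampled at spacing dt [s], measured at
    distance d [m], assuming spherically symmetric propagation."""
    integral = sum(p * p for p in pressure) * dt
    return 4.0 * math.pi * d ** 2 * integral / (RHO * C)

# Example: a 100-ns triangular pulse peaking at 1 MPa, recorded at d = 45 mm
dt = 1e-9
pulse = [1e6 * (1.0 - abs(i - 50) / 50.0) for i in range(101)]
E = shock_energy(pulse, dt, 0.045)   # on the order of a millijoule
```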
The domination of the tip bubble in the range $2\times10^{-2} < \zeta < 0.15$ is explained through its large volume relative to the rest of the bubble at the moment of the jet impact, its spherical topology that allows an effective gas compression during its collapse, and/or the further compression provided by the pushing jet. The large volume of the tip bubble and the small volume of the torus in this range result from the characteristic shape the jet assumes for bubbles collapsing near a free surface (see Figure~\ref{fig:freeshapes}). Beyond $\zeta \sim 0.1$ however, the torus becomes relatively larger again at the moment of jet impact, as the bubble shape at $\zeta=0.3$ in Figure~\ref{fig:z_E_all} suggests, and the torus is able to compress the gases it contains more effectively. This explains the slight rise of the torus collapse shock energy for $\zeta > 0.1$. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Fig10.pdf} \end{center} \caption{Normalized total collapse shock wave energy $E_{S}/E_{0}$ for bubbles deformed by a near free surface, as a function of $\zeta$ (and $\gamma$, top axis).} \label{fig:z_E_tot} \end{figure} When the energies of the different collapse shock waves are summed, an overall decrease of the total shock energy is observed, as seen in Figure~\ref{fig:z_E_tot}. Here data for lower $\zeta$ have been added, including energies from pressure measurements for which it was not possible to distinguish the different shock wave events. Interestingly, the total shock energy varies as a function of $\zeta$ independently of the bubble maximum radius and driving pressure within the ranges covered here ($R_{0}=1$--$4$~mm, $\Delta p = 0.75$--1~bar). A major part of the collapse shock energy decrease occurs within the range $10^{-3}<\zeta<2\times10^{-2}$, where the jet impact hammer shock is expected to dominate. 
As the bubble deforms, the liquid inflow towards the bubble center becomes anisotropic, and as a result, the level of compression of the bubble's enclosed gases is reduced, yielding weaker shock wave emission. As less energy is radiated away by the shock waves for increasing $\zeta$, more energy is distributed to the motion of the liquid forming the micro-jet and to the rebound bubble, both of which are observed to grow with $\zeta$. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Fig11.pdf} \caption{Time differences between the jet impact and the tip bubble collapse, torus collapse and the second torus collapse as a function of $\zeta$ (and $\gamma$, top axis), normalized with bubble collapse time $T_{c}$. The time between jet impact and torus collapse is modeled as $\Delta T/T_{c} = 0.15\zeta^{5/3}$~\cite{Supponen2016}.} \label{fig:z_deltaT} \end{center} \end{figure} The timing of the distinct events in the shock wave scenario also appears to vary with the level of deformation of the bubble. Figure~\ref{fig:z_deltaT} displays the time difference $\Delta T$ between the jet impact, which generally emits the first shock wave, and the other observed events, normalized to the bubble collapse time $T_{c}$. The experiments are displayed together with our previously established model estimating the normalized time between the jet impact and torus collapse $\Delta T/T_{c} = 0.15\zeta^{5/3}$~\cite{Supponen2016}. Only data for $\zeta>10^{-2}$ are displayed as the temporal resolution of our apparatus is not sufficient for identifying the exact shock timings of more spherical bubbles. The jet impact occurs within the last 1\% of the bubble's collapse time up to $\zeta \approx 0.2$, followed very closely by the other events. The torus collapse precedes the tip bubble collapse up to $\zeta \approx 0.14$, beyond which they change order.
The second torus collapse occurs right after the tip bubble collapse up to this limit, as the rebounding torus compresses under the effect of the shock wave produced by the latter, which is seen as an almost constant time difference between the two events in Figure~\ref{fig:z_deltaT}. The normalized timings of each shock wave are independent of the maximum bubble radii and driving pressures covered here. \section{Models for shock energy and pressure} \label{s:models} \begin{figure} \begin{center} \begin{overpic}[width=0.55\textwidth, trim=0cm 1.93cm 0cm 0cm, clip]{Fig12.pdf} \put (14,42) {(a)} \put (41.5,42) {(b)} \put (69.5,42) {(c)} \end{overpic} \end{center} \caption{Examples of hydrophone pressure signals of shock waves measured at the collapse of bubbles deformed by gravity at (a) $\zeta<10^{-3}$, (b) $\zeta = 3.8\times10^{-3}$ and (c) $\zeta = 10^{-2}$. The corresponding shadowgraph images with an exposure of 50~ns are shown on top. The black bars show the 1~mm scale.} \label{fig:hydros} \end{figure} We now investigate shock waves from non-spherically collapsing bubbles at a more general level with the aim of developing an empirical model to predict their strengths. For this purpose, we look at shock waves from bubbles deformed by different sources, in particular by the gravity-induced uniform pressure gradient. Examples of measured shock waves from bubbles deformed by gravity are shown in Fig.~\ref{fig:hydros}. A spherical collapse (Fig.~\ref{fig:hydros}a) produces a single shock, as observed previously in Section~\ref{s:spherical}. Non-spherical collapses (Fig.~\ref{fig:hydros}b,c) generate multiple shocks, and the associated peak pressures clearly decrease with increasing bubble deformation, similarly to bubbles deformed by a free surface. 
However, the characteristic shape of bubbles collapsing in uniform pressure gradients is such that the radii of curvature of the jet tip and the opposite bubble wall at their impact are very similar for a wide range of $\zeta$ according to potential flow theory~\cite{Supponen2016}, as illustrated in Figure~\ref{fig:shapes} for $\zeta=10^{-2}$. As a consequence, the volumes of the `tip bubble' and the toroidal bubble remain relatively small and the associated shocks are barely distinguishable. We therefore analyze the collapse shock as one event, expected to be dominated by the jet impact (as suggested by Figure~\ref{fig:z_E_all} for bubbles near a free surface at $\zeta<10^{-2}$), without resolving its sub-structure in the following analyses. \begin{figure} \begin{center} \includegraphics[width=.7\textwidth, trim=0cm 2.3cm 0cm 0cm, clip]{Fig13.pdf} \end{center} \caption{Bubble shapes at jet impact for bubbles deformed by a uniform pressure gradient, a near rigid and a near free surface, predicted by potential flow theory~\cite{Supponen2016} for $\zeta=10^{-2}$. $\boldsymbol{\zeta}$ is directed downward.} \label{fig:shapes} \end{figure} We first consider the variation of the peak pressures $p_{\rm max}$ measured by the hydrophone as a function of $\zeta$. Figure~\ref{fig:z_hammer_g} shows this function for bubbles deformed by the gravity-induced pressure gradient (varied parameters: $R_{0}=1.5$--10~mm, $\Delta p=6$--98~kPa, at normal gravity). Clearly, the relation between $p_{\rm max}$ and $\zeta$ depends on $\Delta p$. 
We can build a model for the relationship between $p_{\rm max}$, $\Delta p$ and $\zeta$, based on the simplistic assumptions of scale-free micro-jets and shocks resulting from a water hammer pressure caused by the jet impact~\cite{Field1991,Johnsen2009}: \begin{equation} p_{h} = \frac{1}{2}\rho c U_{\rm jet} = 0.45\left(\rho c^{2}\Delta p\right)^{1/2} \zeta^{-1} \label{eq:p_hammer} \end{equation} where $U_{\rm jet}$ is the micro-jet speed at its impact on the opposite bubble wall. The scaling model for the micro-jet speed, $U_{\rm jet} = 0.9 \left(\Delta p/\rho\right)^{1/2} \zeta^{-1}$, has previously been established by combining numerical simulations and analytical arguments with experimental observations, and is a valid approximation for jets driven by gravity and near surfaces at $\zeta < 0.1$~\cite{Supponen2016}. We can therefore expect also the resulting hammer pressures to be similar for these different sources of bubble deformation and to decrease with $\zeta$ for a given $\Delta p$ (with constant $\rho$ and $c$). The scaling factor in Eq.~(\ref{eq:p_hammer}) could be different if the jet impact is not the dominant shock mechanism, but this is irrelevant in the following derivation because of the free parameter $\alpha$ discussed hereafter. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth, trim=0cm 0cm 0cm 0cm, clip]{Fig14.pdf} \end{center} \caption{Measured shock peak pressures as a function of $\zeta$ (and $\gamma$, top axis) for bubbles deformed by gravity. The dashed lines represent the model in Eq.~(\ref{eq:pmax}). The colors indicate different driving pressures $\Delta p$. 
The symbol sizes portray the different maximum bubble radii.} \label{fig:z_hammer_g} \end{figure} The equivalent observational proxy for $p_{h}$ is expressed as \begin{equation} p_{h} = p_{\rm max} \left(\frac{d}{r_{\rm shock}}\right)^{\beta} = \alpha p_{\rm max}\left(\frac{d}{R_{0}}\right)^{\beta} \zeta^{-2\beta/3} \label{eq:exp_p} \end{equation} where $p_{\rm max}$ is the peak pressure measured by the hydrophone, $d$ is the distance between the bubble center and the hydrophone sensor, $r_{\rm shock}$ is the shock-emitting radius, assumed to scale as the radius of the jet tip (see schematic in Fig.~\ref{fig:shapes}) and thereby as the bubble's characteristic length at jet impact $s \propto \zeta^{2/3}R_{0}$ as predicted by potential flow theory for $\zeta\ll1$~\cite{Supponen2016}, and $\alpha$ and $\beta$ are free parameters. $\alpha$ represents the unknown scaling of $r_{\rm shock}\propto \zeta^{2/3}R_{0}$. $\beta$ would equal 1 for negligible shock dissipation and spreading of the shock width, yet in reality non-linearities are present and result in a higher exponent, typically about 2 in the near field and $\sim1.1$ in the far field of the emission center~\cite{Schoeffmann1988,Doukas1991,Vogel1996,Pecha2000}. Equating Eq.~(\ref{eq:p_hammer}) and~(\ref{eq:exp_p}) gives \begin{equation} p_{\rm max} = \frac{0.45}{\alpha} \left(\rho c^{2}\Delta p\right)^{1/2}\left(\frac{R_{0}}{d}\right)^{\beta}\zeta^{2\beta/3-1}. \label{eq:pmax} \end{equation} We fit $\alpha$ and $\beta$ simultaneously to a sample of 931 bubbles deformed by gravity to minimize the $\chi^{2}$ deviation between the left and right hand sides of Eq.~(\ref{eq:pmax})~\footnote{A fit with the exponent of $\rho c^{2} \Delta p$ as a free parameter was also performed; it consistently gave $0.506 \pm 0.006$, which is why we kept this exponent at 1/2.}.
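The simultaneous $\chi^{2}$ fit of $\alpha$ and $\beta$ can be sketched as follows. For illustration only, noise-free synthetic data are generated from Eq.~(\ref{eq:pmax}) itself with the fitted values $\alpha = 0.277$ and $\beta = 1.249$, and a coarse grid search stands in for the actual minimizer; the water properties and the hydrophone distance $d = 45$~mm are assumptions, not values taken from this fit:

```python
import math
import random

# Sketch of the simultaneous fit of alpha and beta in the model
# p_max = (0.45/alpha) * sqrt(rho*c^2*dp) * (R0/d)^beta * zeta^(2*beta/3 - 1).
# Synthetic noise-free data are generated from the model itself, so the
# grid search below must recover the generating parameters.

RHO, C, D = 998.0, 1483.0, 0.045   # assumed water properties and hydrophone distance

def model_pmax(zeta, r0, dp, alpha, beta):
    return (0.45 / alpha) * math.sqrt(RHO * C ** 2 * dp) \
        * (r0 / D) ** beta * zeta ** (2.0 * beta / 3.0 - 1.0)

random.seed(0)
samples = [(10 ** random.uniform(-3, -1),   # zeta
            random.uniform(1.5e-3, 1e-2),   # R0 [m]
            random.uniform(6e3, 9.8e4))     # driving pressure dp [Pa]
           for _ in range(100)]
data = [(z, r0, dp, model_pmax(z, r0, dp, 0.277, 1.249)) for z, r0, dp in samples]

def chi2(alpha, beta):
    # least squares in log space, mimicking a relative-error chi^2
    return sum((math.log(model_pmax(z, r0, dp, alpha, beta)) - math.log(p)) ** 2
               for z, r0, dp, p in data)

best = min(((0.26 + 0.001 * i, 1.22 + 0.001 * j)
            for i in range(40) for j in range(60)),
           key=lambda ab: chi2(*ab))   # recovers the generating (0.277, 1.249)
```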
The resulting fitted parameters are $\alpha = 0.277 \pm 0.006$ and $\beta=1.249 \pm 0.003$, and the corresponding determination coefficient is $R^{2}=0.93$. $\beta$ lies between 1 and 2 as expected. In the case of bubbles deformed by gravity, there is a unique relation between $R_{0}$, $\Delta p$ and $\zeta$, as shown by Eq.~(\ref{eq:zeta}). Substituting $R_{0}$ from this relation into Eq.~(\ref{eq:pmax}) makes $p_{\rm max}$ a function of only $\Delta p$ and $\zeta$. These relations are plotted as dashed lines in Fig.~\ref{fig:z_hammer_g} and show excellent agreement with the measurements. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth, trim=0cm 0cm 0cm 0cm, clip]{Fig15.pdf} \end{center} \caption{Measured shock wave peak pressures as a function of the model given in Eq.~(\ref{eq:pmax}) for bubbles deformed by gravity, a rigid and a free surface.} \label{fig:z_hammer_all} \end{figure} The lines in Fig.~\ref{fig:z_hammer_g} can be collapsed to a single relationship by plotting the measured peak pressures $p_{\rm max}$ directly against the model in Eq.~(\ref{eq:pmax}), which is shown in Fig.~\ref{fig:z_hammer_all}. We now also apply this simple model to predict the shock pressures of non-spherical bubbles with different sources of deformation (free and rigid surface), where the unique relationship between $R_{0}$, $\Delta p$ and $\zeta$ no longer holds because of the additional dependence on the distance $h$ to the surface, as shown by Eq.~(\ref{eq:zeta}). These data also coincide with the model, as seen in Fig.~\ref{fig:z_hammer_all}, confirming that the hammer pressure model can be used to estimate shock pressures produced by a non-spherical bubble collapse. The pressures $p_{h}$ at the source, estimated using Eq.~(\ref{eq:p_hammer}) and~(\ref{eq:exp_p}), range from 100~MPa to 10~GPa at $\zeta > 10^{-3}$.
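As a quick numerical check of these magnitudes, Eq.~(\ref{eq:p_hammer}) and the jet-speed scaling can be evaluated directly; the water properties below are assumptions, since $\rho$ and $c$ are not specified in this section:

```python
import math

# Quick numerical check of the hammer-pressure magnitudes quoted above,
# assuming rho = 998 kg/m^3 and c = 1483 m/s for water.

RHO, C = 998.0, 1483.0

def hammer_pressure(dp, zeta):
    """Water hammer pressure p_h = 0.45*sqrt(rho*c^2*dp)/zeta [Pa]."""
    return 0.45 * math.sqrt(RHO * C ** 2 * dp) / zeta

def jet_speed(dp, zeta):
    """Micro-jet impact speed U_jet = 0.9*sqrt(dp/rho)/zeta [m/s]."""
    return 0.9 * math.sqrt(dp / RHO) / zeta

# At atmospheric driving (dp ~ 98 kPa) and zeta = 1e-3 the model gives ~6.6 GPa,
# within the 100 MPa - 10 GPa source-pressure range quoted in the text:
p_h = hammer_pressure(98e3, 1e-3)

# The jet formally reaches the speed of sound (U_jet = c) at
zeta_sonic = 0.9 * math.sqrt(98e3 / RHO) / C   # ~6e-3
```

The value of `zeta_sonic` anticipates the $\zeta \lesssim 0.006$ sonic-jet limit discussed in the Discussion section.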
Figure~\ref{fig:z_E_general} displays the normalized collapse shock wave energy for bubbles deformed by gravity, a nearby rigid and a free surface as a function of $\zeta$. All the measured shock energies generally decrease with increasing $\zeta$ independently of $R_{0}$ and $\Delta p$. For gravity-deformed bubbles, most of the decrease happens in the range $10^{-3}<\zeta<10^{-2}$, reaching values down to about 10\% of initial bubble energy $E_{0}$ at $\zeta \sim 10^{-2}$. These values differ significantly from bubbles deformed by a rigid and a free surface that respectively have shock energies as high as 30\% and 40\% of the initial bubble energy $E_{0}$ at $\zeta \sim 10^{-2}$ ($\gamma \sim 4.4$). Shocks from bubbles deformed by a near rigid and a free surface experience a decrease in energy with $\zeta$ that is similar to the gravity-deformed cases, but which occurs at a higher $\zeta$. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Fig16.pdf} \end{center} \caption{The normalized total collapse shock wave energy for bubbles deformed by gravity, a near rigid and a near free surface as a function of $\zeta$ (and $\gamma$, top axis). Averaged shock energies measured in micro-gravity $\mu$g ($0\pm0.02$~$g$) and hyper-gravity hg ($1.66\pm0.093$~$g$) at three different $\Delta p$ are also displayed. The gray scale indicates different driving pressures $\Delta p$ for bubbles deformed by gravity. The models in solid lines show the fits $0.0073\zeta^{-2/3}$, $0.011\zeta^{-2/3}$ and $0.016\zeta^{-2/3}$ for bubbles deformed by gravity, rigid surface and free surface, respectively. The mean error of $E_{S}/E_{0}$ is 0.04.} \label{fig:z_E_general} \end{figure} It should be noted that the expression of $\zeta$ for gravity-induced bubble deformations (Eq.~(\ref{eq:zeta})) includes $\Delta p$, making $\Delta p$ correlate with $\zeta$ in our data obtained on-ground (see gray scale in Fig.~\ref{fig:z_E_general}). 
However, the data in micro-gravity ($0\pm0.02$~$g$), which were obtained aboard ESA parabolic flights, confirm that the bubble deformation is the main cause of the observed shock energy variations, rather than $\Delta p$. For example, bubbles collapsing at $\Delta p\approx20$~kPa in our experiment on-ground emit low-energy shocks ($E_{S}/E_{0}<30\%$), yet in micro-gravity at the same driving pressure $E_{S}/E_{0}>75\%$~\footnote{The presence of the closest surface to the bubble, i.e.~the parabolic mirror, is accounted for when determining $\zeta$ for bubbles collapsing in micro-gravity.}. Some data for bubbles collapsing at higher gravity levels ($1.66\pm0.093$~$g$) are also displayed in Figure~\ref{fig:z_E_general}, showing reasonable agreement with the general shock energy trend with $\zeta$. Since the measured peak pressures for deformed bubbles are well approximated with the hammer pressure model, we aim to estimate their shock energies using the same approach. We recall the shock energy $E_{S} = (4\pi d^{2}\rho^{-1}c^{-1})\int p^{2}\text{d}t$ from ref.~\cite{Vogel1988}, as in Eq.~(\ref{eq:ES}). If the pressure profile in time is represented by a hammer pressure $p_{h}$ being applied for a time $\Delta t=\Delta d c^{-1}$, where $\Delta d$ denotes the thickness of the shock, the energy reads $E_{S} = (4\pi d^{2}\rho^{-1}c^{-1})p_{h}^{2} \Delta t$. The shock wave energy is therefore alternatively expressed as \begin{equation} E_{S} = \frac{\Delta V p_{h}^{2}}{\rho c^{2}}, \label{eq:ES2} \end{equation} where $\Delta V=4\pi d^{2}\Delta d$ is the volume of the compressed liquid. As mentioned before, the characteristic length of the bubble at the jet impact scales as $s/R_{0} \propto \zeta^{2/3}$. As the surface area of contact of the jet onto the opposite bubble wall is two-dimensional and the compressed liquid volume is assumed to be proportional to that area, we have $\Delta V/R_{0}^{3} \propto s^{2}/R_{0}^{2} \propto \zeta^{4/3}$.
With this model plugged into Eq.~(\ref{eq:ES2}) and $p_{h}$ substituted using Eq.~(\ref{eq:p_hammer}), we obtain \begin{equation} \frac{E_{S}}{E_{0}} \propto \frac{\Delta V}{R_{0}^{3}\zeta^{2}} \propto \zeta^{-2/3}. \label{eq:eps} \end{equation} The missing scaling factor for Eq.~(\ref{eq:eps}) comes from the unknown size of the compressed liquid region. An analytical evaluation of this unknown is difficult and would have to account for the non-uniform liquid compression by the curved jet tip. The scaling factor is expected to vary for the distinct sources of deformations, since the jet shapes are different for each case and leave gas/vapor pockets of dissimilar sizes between the jet and the opposite bubble wall, as illustrated in Fig.~\ref{fig:shapes} for $\zeta = 10^{-2}$. These vapor pockets are rather large for bubbles collapsing near a rigid or a free surface, while gravity-induced jets hit the opposite bubble wall in a highly uniform way, thereby resulting in the smallest scaling factor. When minimizing the $\chi^{2}$ deviation between the measurements $E_{S}/E_{0}$ for bubbles deformed by gravity at $\zeta > 10^{-3}$ and a model in the form $f = a\zeta^{b}$ with free parameters $a$ and $b$, we find $a = 0.0078$ and $b = -0.66$. When imposing $b=-2/3$ to conform with Eq.~(\ref{eq:eps}), the best fit for $a$ is 0.0073. The corresponding fitted scaling factors for the rigid and free surfaces are $a=0.011$ and 0.016, respectively. Equation~(\ref{eq:eps}) with these fitted scaling factors is plotted as solid lines for bubbles deformed by gravity, a free surface and a rigid surface in Fig.~\ref{fig:z_E_general}, and agrees reasonably well with the experimental data. \section{Discussion} \label{s:discussion} There are several limitations in the presented shock models worth addressing.
The micro-jet is expected to reach the speed of sound for a bubble collapsing at $\zeta \lesssim 0.9(\Delta p/\rho)^{1/2}c^{-1}$ ($\zeta \lesssim 0.006 $ at $\Delta p = 98$~kPa), below which the model in Eq.~(\ref{eq:p_hammer}) may no longer be able to estimate the jet hammer pressures. Furthermore, our model neglects the gas inside the bubble. Compressed and heated gases within highly spherically collapsing bubbles can potentially slow down and destroy the jet and/or delay or prevent its formation. These effects naturally decrease with increasing $\zeta$, since at higher $\zeta$ the jet forms earlier in the bubble evolution, when the gases are less compressed. We estimate the bubble gas to seriously hamper the jet for $\zeta<10^{-3}$, where no observable jets are formed in the bubble rebound in our current setup~\cite{Supponen2016}. This is the likely explanation for the sudden curvature change in the shock energy trend for bubbles deformed by gravity at $\zeta \sim 10^{-3}$, as seen in Fig.~\ref{fig:z_E_general}. Below this approximate threshold (at which $p_{h}\sim7$~GPa for bubbles collapsing here at atmospheric pressure), the shock pressures predicted by the model are overestimated. This threshold value is consistent with previous findings for a spherical collapse at atmospheric pressure, both in our setup (Section~\ref{s:spherical}) and in the literature~\cite{Pecha2000,Akhatov2001,Holzfuss1998}. The shock energies of bubbles collapsing near a rigid surface show important differences when compared with the measurements performed by Vogel and Lauterborn~\cite{Vogel1988}. Although they observed, similarly to us with bubbles near a free surface, a clear minimum in shock energies at $\gamma = 1$, they also measured shocks beyond $\gamma \sim 3$ to have the same energies as those emitted in a spherical collapse, while at $\gamma=3$ we measure barely $20\%$ of a typical shock energy from a spherical collapse. 
This suggests that the experimental conditions play an important role in the collapse shock wave characteristics, including the initial bubble sphericity, which differs strongly between parabolic mirror and lens-based laser focusing methods. Indeed, in Vogel's study the stand-off was varied only up to $\gamma \sim 3$, beyond which a spherical collapse was assumed, while we still find important shock energy variations between $\gamma \sim 5$ and 10. \section{Conclusion} \label{s:conclusion} We have presented detailed observations of shock wave emissions from the collapse of bubbles with various levels of deformation, quantified by the anisotropy parameter $\zeta$, using simultaneous time-resolved shadowgraphy and needle hydrophone pressure measurements. A gradual pressure rise in the liquid near the bubble wall was observed in the last collapse stage of nearly spherically collapsing bubbles, in agreement with the century-old predictions of Lord Rayleigh. Non-spherical bubble collapses produced multiple shock waves associated with different processes, such as the jet impact and the individual collapses of the various separated parts of the bubble. When quantifying these distinct shocks for bubbles collapsing near a free surface, the jet impact shock was found to dominate up to $\zeta \sim 2\times10^{-2}$, the bubble tip collapse in the range $2\times10^{-2} < \zeta < 0.15$ and the torus collapse at $\zeta > 0.15$. The timings of the individual events, normalized with the bubble collapse time, were also found to vary with $\zeta$. Models predicting the shock peak pressure and energy were proposed based on the assumption that the shock wave is generated by a jet impact hammer pressure. The pressure model showed excellent agreement with the observed data in the range $10^{-3}<\zeta<10^{-2}$ for all three sources of bubble deformation used here (gravity, rigid and free surface), and the energy model captured the approximate trend of the measured energies.
The total collapse shock wave energy, normalized to the total bubble energy, generally decreased with increasing $\zeta$. However, we found differences between the shock energies from bubbles deformed by different sources, which likely result from the small variations in the jet shapes at their impact onto the opposite bubble wall. Interestingly, these differences do not seem to affect the shock peak pressures, which could be due to the jet speed at the moment of impact, to which the hammer pressure is proportional, being nearly identical for the three sources of bubble deformation in this range of $\zeta$. \acknowledgements{We acknowledge the support of the Swiss National Science Foundation (Grant no. 513234), the University of Western Australia Research Collaboration Award obtained by co-authors DO and MF, the European Space Agency for the 60th and 62nd parabolic flight campaigns and Prof.\ Ullrich for the 1st Swiss parabolic flight.}
\section{Introduction} Pre-trained models for vision-and-language tasks have made remarkable progress recently~\cite{NEURIPS2019_c74d97b0, Su2020VL-BERT:,10.1007/978-3-030-58577-8_7}. Existing pre-trained models focus on either text-to-image synthesis or image-to-text generation~\cite{DBLP:journals/corr/abs-2102-12092,pmlr-v139-cho21a}. These models are often pre-trained on semantically aligned image-text pairs. However, due to limitations of their model structures, existing models cannot be adapted from one task to the other. In addition, pre-training objectives are designed either for text generation conditioned on the image or for image generation conditioned on the text, preventing the model from learning better semantic alignment through bi-directional generation~\cite{xu-etal-2021-e2e, DBLP:journals/corr/abs-2105-13290}. We argue that \textit{image-to-text and text-to-image generation are dual tasks, which both require strong visual and textual representations aligned in the same semantic space.} Images and text descriptions differ in information quantity and density: images often contain more information but with heavy redundancy, while text descriptions are semantically condensed but may neglect details. A uni-directional generation paradigm may induce the model to amplify this asymmetry. Take Fig.~\ref{fig:intro} as an example: the uni-directional model may fail to capture details. Inspired by this observation, we propose to utilize bi-directional generation objectives to learn more general image and text representations. \begin{figure}[t] \centering \includegraphics[width=\columnwidth,trim=0 0 0cm 0, clip]{figure/intro_fig.pdf} \captionof{figure}{An example from the COCO dataset. For image captioning, our system generates informative captions, with key words highlighted in \textbf{\textcolor{red}{bold}}. Incorrect information is \underline{\textcolor{blue}{underlined}}.
For text-to-image generation, our system synthesizes vivid images aligned with captions. } \label{fig:intro} \vspace{-2mm} \end{figure} To this end, we present \textbf{DU-VLG}, a framework with \textbf{DU}al sequence-to-sequence pre-training for \textbf{V}ision-and-\textbf{L}anguage \textbf{G}eneration. Under the encoder-decoder Transformer framework, our model takes text and raw images as inputs and generates text and images autoregressively. Concretely, images are represented as continuous patch features in the encoder and as discrete visual tokens in the decoder. With this hybrid image embedding schema, DU-VLG is able to unify vision-and-language generation in a single model. In order to utilize the duality of image-text pairs, we further propose \textbf{two pairs of dual pre-training tasks}: the multi-modal denoising autoencoder task and the modality translation task. For the multi-modal denoising autoencoder task, our model takes image-text pairs with some image patches or words randomly masked as inputs and learns image-text alignment through reconstruction of the corrupted modality. For the modality translation tasks, we form image captioning and text-to-image generation as dual pre-training tasks, which further enhance the model's ability to align semantics. Different from existing multi-modal pre-trained models, our model learns image-text alignment through bi-directional generation objectives. Moreover, we propose \textbf{a novel commitment loss} to drive the model to acquire better image representations. Concretely, the commitment loss is designed to connect visual embeddings in the decoder to patch-based features in the encoder. In tandem with our model design, the commitment loss aims to unify image understanding and generation in a single model, which allows for better utilization of bi-directional generation objectives. We conduct experiments on various vision-and-language generation tasks. We first study the effects of the dual pre-training tasks and the commitment loss.
On both image captioning and text-to-image generation tasks, DU-VLG outperforms its variants without the commitment loss or with only uni-directional generation objectives. For image captioning, we achieve better BLEU-4 and CIDEr than existing pre-trained models on the COCO dataset~\cite{10.1007/978-3-319-10602-1_48}. For text-to-image generation, our model achieves better results than both Transformer-based and GAN-based methods on the COCO and CUB datasets~\cite{WelinderEtal2010}. Human judges confirm that our model generates captions and images of high quality. Importantly, we test our model on a challenging vision-and-language generation task: visual commonsense reasoning~\cite{park2020visualcomet}. Results demonstrate that our model is able to handle challenging multi-modal generation tasks effectively. The main contributions of DU-VLG are as follows: \noindent $\bullet$ We unify vision-and-language generation tasks with a single model, DU-VLG. With an encoder-decoder Transformer, DU-VLG is able to handle various vision-and-language generation tasks. \noindent $\bullet$ DU-VLG is pre-trained with novel dual pre-training tasks, which utilize the duality of image-text pairs. DU-VLG yields better or comparable results than existing state-of-the-art methods on three vision-and-language generation tasks. \noindent $\bullet$ We further propose a new commitment loss, which aims to bridge the gap between image understanding and generation within our proposed dual paradigm. Experimental results show that it further enhances the dual tasks. The rest of the paper is organized as follows. We describe our model in \S~\ref{sec:model} and introduce our proposed pre-training tasks and commitment loss in \S~\ref{sec:task}. Training details are presented in \S~\ref{sec:exp}. In \S~\ref{sec:results}, we discuss experimental results. Related work is listed in \S~\ref{sec:related} and we finally draw our conclusion in \S~\ref{sec:conclusion}.
\section{Related Work} \label{sec:related} \smallskip \noindent \textbf{Vision-and-Language Pre-training for Image-to-Text Generation Tasks.} Transformer backbones have achieved great success in language pre-training~\cite{devlin-etal-2019-bert,lewis-etal-2020-bart,liu2020roberta}. In order to adapt Transformers to multi-modal pre-training, previous work mainly focuses on (1) obtaining better image features and (2) designing pre-training tasks~\cite{NEURIPS2019_c74d97b0, DBLP:journals/corr/abs-1908-03557}. To obtain high-quality image features, image region features extracted from an object detection model are widely adopted in multi-modal pre-training~\cite{Zhou_Palangi_Zhang_Hu_Corso_Gao_2020,10.1007/978-3-030-58577-8_8,Zhang_2021_CVPR}. ~\newcite{pmlr-v139-kim21k} points out that the two-stage method is time-consuming and that the trained object detector may fail on unlabeled domains~\cite{jiang2021decoupled}. To that end, ~\newcite{DBLP:journals/corr/abs-2004-00849} feeds raw images to convolutional backbones such as ResNets~\cite{7780459} and takes the outputs as image features. ~\newcite{pmlr-v139-kim21k} uses linear projection to obtain patch-based image features. However, end-to-end image feature extraction methods currently cannot match two-stage methods on image captioning. To learn image-text alignment, masked token prediction, which masks a portion of text or image tokens and predicts the masked positions conditioned on the context, is widely used as a pre-training task~\cite{DBLP:journals/corr/abs-2003-01473}. ~\newcite{10.1007/978-3-030-58577-8_8} designs an image-text matching task, which predicts whether the image and the text are paired or not. ~\newcite{li-etal-2021-unimo} proposes special self-attention masks to unify text understanding and generation. ~\newcite{xu-etal-2021-e2e} includes image captioning and object detection as pre-training objectives to enhance the decoder.
However, current methods for generation tasks are limited to text generation and struggle to learn fine-grained image-text alignment. In this paper, we introduce a hybrid image embedding schema to connect image understanding and generation, which unifies image and text generation via sequence-to-sequence pre-training. Concretely, we enhance image-text alignment with novel dual pre-training tasks. Our model outperforms state-of-the-art pre-trained systems on image captioning. \smallskip \noindent \textbf{Vision-and-Language Pre-training for Text-to-Image Generation Tasks.} To generate images autoregressively, images are represented as discrete tokens. X-LXMERT~\cite{cho-etal-2020-x} partitions image grid features into clusters and obtains visual tokens via nearest-neighbor search. However, X-LXMERT needs to train an image generator from scratch to synthesize images from visual tokens, which accumulates errors during training. ~\newcite{DBLP:journals/corr/abs-2105-13290,DBLP:journals/corr/abs-2102-12092} use discrete visual tokens from a trained vector-quantised variational autoencoder (VQ-VAE)~\cite{10.5555/3295222.3295378} for text-to-image generation. However, their models consist of billions of parameters and require a huge corpus to pre-train (more than 100 million image-text pairs). In this paper, we present a relatively small model (about 200M parameters) with better generation quality on the COCO dataset. In particular, we offer a detailed analysis of the inference speed and the model size in the appendices. \section{Model} \label{sec:model} \input{004Model_figure} In this section, we describe our proposed model. Overall, our model design is mainly inspired by two observations: (1) sharing parameters that play the same role boosts model performance~\cite{pmlr-v80-xia18a} and (2) image understanding and generation require representing image features at different granularities~\cite{cho-etal-2020-x}.
Hence, we use a standard Transformer with the encoder-decoder structure~\cite{NIPS2017_3f5ee243}, as illustrated in Fig.\ref{fig:model}. Our model takes images and text as inputs and treats image and text generation as sequence generation problems. Importantly, we propose to use a hybrid image embedding schema in the encoder and the decoder. \subsection{Encoder} In the encoder, images and text are first passed to embedding layers to obtain text embeddings $\mathbf{x}_{\rm text}$ and image embeddings $\mathbf{x}_{\rm image}$. For text embedding, we follow RoBERTa and tokenize inputs into BPEs~\cite{liu2020roberta}. Each BPE token is represented as the summation of a word embedding and a position embedding. Unlike text, images are represented as pixels in a continuous semantic space. However, using pixels as image tokens results in a huge computational cost, since the model needs to process very long sequences. In order to maintain semantic information while reducing the computational cost, we split raw images into a grid of patches. \smallskip \noindent \textbf{Image Embedding for Encoder.} In the encoder, image inputs are flattened into a sequence of patches, with each patch representing the features of $p$ $\times$ $p$ pixels. To obtain patch embeddings, we pass input images to a trained Vision Transformer (ViT)~\cite{dosovitskiy2021an} and take the hidden states of the last layer $\mathbf{x}_{\rm image}$ as image patch embeddings. Image and text embeddings are then concatenated and fed into the encoder self-attention layers. If either image or text is missing from the input, we use an \texttt{[IMAGEPAD]} or \texttt{[TEXTPAD]} token as the placeholder. \subsection{Decoder} In the decoder, we use two embeddings: the text embedding, which shares weights with the text embedding in the encoder, and the image embedding, which maps discrete visual tokens to embedding vectors.
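The patch splitting used on the encoder side can be illustrated with a short sketch (our own illustration, not the authors' code); it flattens a $224 \times 224$ RGB image into a sequence of $p \times p$ patches with $p = 16$, matching the configuration above. Feeding the patches through the ViT is omitted.

```python
import numpy as np

def patchify(image, p=16):
    """Split an H x W x C image into a sequence of flattened p x p patches,
    mirroring the encoder-side patch tokenization described in the paper."""
    H, W, C = image.shape
    assert H % p == 0 and W % p == 0, "image size must be divisible by p"
    gh, gw = H // p, W // p
    # (gh, p, gw, p, C) -> (gh, gw, p, p, C) -> (gh*gw, p*p*C)
    patches = image.reshape(gh, p, gw, p, C).swapaxes(1, 2)
    return patches.reshape(gh * gw, p * p * C)

img = np.zeros((224, 224, 3), dtype=np.float32)
seq = patchify(img)  # 196 patches of 16*16*3 = 768 values each
```

With the paper's $224 \times 224$ input and $p = 16$, this yields a sequence of $14 \times 14 = 196$ patch tokens.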
To enable autoregressive generation, we add \texttt{[BOI]} and \texttt{[EOI]} tokens to denote the start and the end of the image sequence. \smallskip \noindent \textbf{Discrete Visual Tokens for Decoder.} In the decoder, the model generates a sequence of discrete visual tokens recurrently. During training, ground-truth visual tokens are obtained by a Vector Quantised Variational Autoencoder (VQ-VAE)~\cite{10.5555/3295222.3295378}. The VQ-VAE contains two modules, an image tokenizer and a visual decoder. The image tokenizer first extracts grid features from raw images and maps them to discrete tokens $\mathbf{y}_{\rm image}$. The visual decoder reconstructs the original image from the discrete visual tokens. The image tokenizer represents each $p$ $\times$ $p$ pixels as a visual token, with a vocabulary size of $|\mathcal{V}|$. Therefore, the number of decoder visual tokens is the same as the number of encoder patch tokens. We refer to the original paper for more details. Importantly, during testing, the model first generates a sequence of image tokens recurrently and then reconstructs the image with the visual decoder. \section{Dual Pre-training Tasks and Pre-training Objectives} \label{sec:task} \begin{figure}[t] \centering \includegraphics[width=\columnwidth,trim=0 0 0cm 0, clip]{figure/tasks.pdf} \captionof{figure}{ An illustration of our proposed dual pre-training tasks. The model reconstructs the image or text conditioned on its visual and textual context. } \label{fig:task} \end{figure} Next, we introduce our pre-training method. The pre-training corpus consists of millions of aligned image-text pairs. In order to effectively learn vision-and-language understanding and generation, we propose dual pre-training tasks, which drive the model to reconstruct the image or text description based on the given context.
We propose two pairs of pre-training tasks: (1) the multi-modal denoising autoencoder task (\S~\ref{task:dae}) and (2) the modality translation task (\S~\ref{task:mt}), as shown in Fig.\ref{fig:task}. In \S~\ref{task:commitment}, we formulate a commitment loss to connect image understanding and generation. \subsection{Multi-modal Denoising Autoencoder Task} \label{task:dae} Given an image-text pair $(V,W)$ from the training set $D$, we first obtain image patch embeddings $\mathbf{x}_{\rm image}$ computed by ViT layers and text embeddings $\mathbf{x}_{\rm text}$. To encourage the model to learn cross-modal contextualized embeddings, we propose two dual tasks: 1) a text-driven image inpainting task, which aims to reconstruct the original image, and 2) an image-driven text infilling task, which aims to reconstruct the original text. \smallskip \noindent \textbf{Text-Driven Image Inpainting.} Given image patch embeddings $\mathbf{x}_{\rm image}$, we replace 50 percent of the image patches with the same number of trainable \texttt{[MASK]} embeddings, producing the masked image sequence $\mathbf{\Tilde{x}}_{\rm image}$. We use the blockwise masking algorithm~\cite{DBLP:journals/corr/abs-2106-08254} to randomly select patches. Meanwhile, we feed the input image to the image tokenizer to produce a sequence of visual tokens $\mathbf{y}_{\rm image}$. The model is trained to reconstruct the image by optimizing the negative log-likelihood loss of the ground-truth visual tokens: {\fontsize{10}{11}\selectfont \begin{flalign} \mathcal{L}^{\rm DAE}_{\rm image} &= - \sum_{(V, W) \in D} {\log{p(\mathbf{y}_{\rm image}\, | \mathbf{\Tilde{x}}_{\rm image}, \mathbf{x}_{\rm text})}} \end{flalign} \label{eq:imageloss} } \vspace{-3mm} \smallskip \noindent \textbf{Image-Driven Text Infilling.} Inspired by text infilling~\cite{lewis-etal-2020-bart}, we randomly sample text spans with lengths drawn from a Poisson distribution ($\lambda = 3$) and replace each span with a single \texttt{[MASK]}.
Different from text infilling, we randomly mask 50 percent of the tokens, since we additionally include the image as visual context. The model is trained to optimize the negative log-likelihood loss of the original text tokens: {\fontsize{10}{11}\selectfont \begin{flalign} \mathcal{L}^{\rm DAE}_{\rm text} &= - \sum_{(V, W) \in D} {\log{p(\mathbf{x}_{\rm text}\, | \mathbf{\Tilde{x}}_{\rm text}, \mathbf{x}_{\rm image})}} \end{flalign} \label{eq:textloss} } \vspace{-3mm} where $\mathbf{\Tilde{x}}_{\rm text}$ represents the corrupted text sequence. \subsection{Modality Translation Task} \label{task:mt} In addition to the denoising autoencoder task, we further enhance the model with the modality translation task, which drives the model to learn the mapping from one modality to the other. Given an image-text pair, we form the modality translation task as two dual tasks: 1) image captioning and 2) text-to-image synthesis. \smallskip \noindent \textbf{Image Captioning.} Given an image as input, the model first produces image patch embeddings $\mathbf{x}_{\rm image}$ from the ViT and encodes the image features with the encoder self-attentions. The decoder is trained to generate text based on the image features. The loss function can be defined as: {\fontsize{10}{11}\selectfont \begin{flalign} \mathcal{L}^{\rm MT}_{\rm text} &= - \sum_{(V, W) \in D} {\log{p(\mathbf{x}_{\rm text}\, | \mathbf{x}_{\rm image})}} \end{flalign} \label{eq:textloss_2} } \vspace{-3mm} \smallskip \noindent \textbf{Text-to-Image Synthesis.} Given a visual description as input, the model encodes it with the encoder, and the decoder generates discrete visual tokens $\textbf{y}_{\rm image}$ recurrently. During training, the ground-truth visual tokens are computed by the image tokenizer.
The loss function can be defined as: {\fontsize{10}{11}\selectfont \begin{flalign} \mathcal{L}^{\rm MT}_{\rm image} &= - \sum_{(V, W) \in D} {\log{p(\mathbf{y}_{\rm image}\, | \mathbf{x}_{\rm text})}} \end{flalign} \label{eq:imageloss_2} } \vspace{-3mm} \subsection{Connecting Image Embedding between Encoder and Decoder} \label{task:commitment} In the encoder-decoder structure, the text embedding is often shared among the encoder, the decoder and the token generation layer~\cite{paulus2018a}, which allows the model to learn better syntactic and semantic information. For image embedding, since we use a hybrid embedding schema in the encoder and the decoder, we propose a commitment loss to connect image understanding and generation during training. Intuitively, the decoder visual token embeddings $\mathbf{y}_{\rm image}$ should commit to the corresponding patch embeddings $\mathbf{x}_{\rm image}$ in the encoder. Therefore, the commitment loss uses a square loss to connect the encoder and the decoder: {\fontsize{10}{11}\selectfont \begin{flalign} \mathcal{L}_{\rm com} &= \sum_{V \in D} {\parallel {\rm sg}[\mathbf{x}_{\rm image}] - \mathbf{y}_{\rm image} \parallel^{2}} \end{flalign} \label{eq:commitment} } \vspace{-3mm} where ${\rm sg}$ denotes the stop-gradient operator, which is the identity in the forward pass but has zero partial derivatives in the backward pass. The commitment loss is applied to the text-driven image inpainting objective and the text-to-image synthesis objective. During training, for each instance, we randomly select one pair of objectives from the denoising autoencoder and modality translation tasks, choosing the denoising autoencoder with probability 0.6 in all experiments.
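A minimal NumPy sketch of the commitment loss follows (our illustration; in an autodiff framework the stop-gradient would be `detach()` or `tf.stop_gradient`, here the encoder features are simply treated as constants, and the embedding shapes are placeholders):

```python
import numpy as np

def commitment_loss(x_image, y_image):
    """L_com = || sg[x_image] - y_image ||^2, summed over the sequence.
    x_image: encoder patch features, treated as constants (the sg[.] term);
    y_image: decoder visual-token embeddings being pulled toward them."""
    diff = x_image - y_image  # gradient would flow only through y_image
    return float(np.sum(diff ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(196, 768))      # patch features (placeholder shape)
y = x.copy()                         # perfectly committed embeddings
loss_zero = commitment_loss(x, y)    # zero when embeddings match features
loss_pos = commitment_loss(x, y + 1.0)
```

Because the gradient flows only into the decoder embeddings, the loss pulls them toward the (fixed) encoder features rather than the other way around.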
Therefore, for each batch, the pre-training loss is a combination of three losses: {\fontsize{10}{11}\selectfont \begin{flalign} \mathcal{L}_{\rm total} &= \mathcal{L}_{\rm text} + \alpha \mathcal{L}_{\rm image} \\ \mathcal{L}_{\rm image} &= \mathcal{L}^{\rm DAE}_{\rm image} + \mathcal{L}^{\rm MT}_{\rm image} + \beta \mathcal{L}_{\rm com} \\ \mathcal{L}_{\rm text} &= \mathcal{L}^{\rm DAE}_{\rm text} + \mathcal{L}^{\rm MT}_{\rm text} \end{flalign} \label{eq:ptloss} } \vspace{-3mm} where $\alpha$ and $\beta$ are hyperparameters that control the scale of the image loss and the commitment loss. \section{Experimental Setup} \label{sec:exp} \subsection{Pre-training} \smallskip \noindent \textbf{Pre-training Corpus.} We train our model on four existing datasets of image-text pairs: 1) Common Objects in Context (COCO)~\cite{10.1007/978-3-319-10602-1_48}, 2) Conceptual Captions (CC)~\cite{sharma-etal-2018-conceptual}, 3) SBU Captioned Photo (SBU)~\cite{Ordonez:2011:im2text} and 4) Visual Genome (VG)~\cite{Krishna2016VisualGC}. For the Visual Genome dataset, since captions are collected for image regions, we use image regions and their captions as pairs. We additionally filter out captions with fewer than five words. We end up with a collection of about 5 million image-text pairs. \smallskip \noindent \textbf{Implementation Detail.} We report results for two model sizes: 1) a base version with 6 layers each for the encoder and decoder and 2) a large version with 12 layers each. For each model size, we report results with two input image resolutions: 224 $\times$ 224 and 384 $\times$ 384. Following ViT, we use a patch size of $p=16$ for all experiments. For the VQ-VAE, we take the off-the-shelf VQ-GAN~\cite{Esser_2021_CVPR}, a variant of VQ-VAE. The VQ-GAN maps each $16 \times 16$ pixel region to a discrete visual token, with a vocabulary size of $|\mathcal{V}| = 16384$.
For the base and large models, we use \texttt{ViT-base} and \texttt{ViT-large} with a patch size of $p=16$ to extract image patch embeddings. ViT weights are frozen during pre-training. Since image sequences are longer than text sequences, we set $\alpha = 0.05$ and $\beta=1$ for all experiments. For model optimization, we utilize the Adam optimizer with gradient clipping of 1.0 and an effective batch size of 1024. \subsection{Fine-tuning on Downstream Tasks} In order to evaluate the model's capability on vision-and-language generation tasks, we test on three downstream tasks: 1) text-to-image generation, 2) image captioning and 3) visual commonsense reasoning. Here we mainly introduce the evaluation metrics. For additional fine-tuning details, we refer to the appendices. \smallskip \noindent \textbf{Text-to-Image Generation.} We experiment with two popular text-to-image generation datasets: the Caltech-UCSD Birds 200 (CUB) dataset and the Common Objects in Context (COCO) dataset. The CUB dataset contains 200 bird categories with 11,788 images. Each image has ten text descriptions. We follow the standard split, which uses 150 categories with 8,855 images for training and the remaining 50 categories with 2,933 images for testing. The COCO dataset contains 82,784 images for training and 40,505 for testing. Each image has five text descriptions. We fine-tune the pre-trained model with a learning rate of 1e-4 for 300 epochs on both datasets. Similar to ~\newcite{DBLP:journals/corr/abs-2102-12092}, we sample 16 images per caption with the nucleus sampling strategy~\cite{Holtzman2020The}. During testing, we first sample 16 images per caption and rerank the generated images with a CLIP model~\cite{pmlr-v139-radford21a}, which selects the best image based on its correlation with the text description. We include two widely used evaluation metrics: 1) Inception Score (IS)~\cite{NIPS2016_8a3363ab} and 2) Fréchet Inception Distance (FID)~\cite{NIPS2017_8a1d6947}.
The IS score computes the KL-divergence between the conditional class distribution and the marginal class distribution obtained by a pre-trained Inception v3 model~\cite{7780677}. The FID computes the Fréchet distance between ground-truth images and generated images based on features obtained by the Inception v3 model. Higher IS scores and lower FID scores denote that the images synthesized by the model are of better quality. Previous work~\cite{Li_2019_CVPR} reports that the IS score fails to evaluate the quality of images on the COCO dataset; hence, we do not report the IS score on COCO. For fair comparison, we resize our model outputs to $256 \times 256$ before calculating FID and IS scores. \smallskip \noindent \textbf{Image Captioning.} For image captioning, we test our model on the COCO dataset and report four metrics based on word overlap: 1) BLEU-4~\cite{papineni-etal-2002-bleu}, 2) METEOR~\cite{lavie-agarwal-2007-meteor}, 3) CIDEr~\cite{Vedantam_2015_CVPR} and 4) SPICE~\cite{johnson-etal-2020-spice}. We follow the Karpathy split~\cite{Karpathy_2015_CVPR}, which has 113,287, 5,000 and 5,000 images for training, validation and test, respectively. Each image has 5 human-written captions. During inference, we generate a caption for each image and evaluate against the five references. We fine-tune on COCO with a learning rate of 3e-5. Vision Transformer layers are trainable during fine-tuning. Following ~\newcite{10.1007/978-3-030-58577-8_8}, we add object labels detected by the object detection model as additional text inputs. We find that object labels improve CIDEr and BLEU scores by at least 1 and 0.3 points, respectively. During testing, we use beam search with a beam size of 5. \smallskip \noindent \textbf{Visual Commonsense Reasoning.} Besides image captioning and text-to-image generation, which only require the model to encode one modality, we further test our model on a more challenging dataset, VisualCOMET~\cite{park2020visualcomet}.
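The sample-and-rerank decoding used for text-to-image generation (sample 16 candidates per caption, keep the one the CLIP model scores highest) reduces to a generic best-of-$N$ selection, sketched below; `score_fn` is a placeholder for the CLIP image-text similarity, which is not reproduced here.

```python
def rerank(candidates, score_fn):
    """Return the candidate with the highest scorer output.
    In the paper, candidates are 16 sampled images per caption and the
    scorer is a CLIP image-text similarity; both are abstracted here."""
    best, best_score = None, float("-inf")
    for cand in candidates:
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best

# toy usage: "images" are integers, the scorer prefers values closest to 10
cands = [3, 7, 12, 9]
best = rerank(cands, score_fn=lambda c: -abs(c - 10))
```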
VisualCOMET is a visual commonsense reasoning task which provides the model with an image and the event that happens at present. The model is required to infer what may happen next, what happened before, and people's intents at present. VisualCOMET requires the model to jointly comprehend image and text and generate reasonable inferences. Similar to image captioning, we use BLEU-2, METEOR and CIDEr as metrics. \section{Results} \label{sec:results} In this section, we start by comparing our proposed pre-training objectives in \S~\ref{results:sec1}. We then conduct automatic evaluation on three vision-and-language generation tasks (\S~\ref{results:sec2}) and further report human evaluation on both caption and synthesized image quality (\S~\ref{results:sec2}). Finally, we investigate the inference speed of our proposed model (\S~\ref{results:sec3}). \begin{table}[t] \centering \fontsize{10}{11}\selectfont \setlength{\tabcolsep}{0.5mm} \begin{tabular}{@{}lcccc@{}} \toprule Image $\rightarrow$ Text & \multicolumn{4}{c}{COCO Caption} \\ \textbf{System} & \textbf{BLEU-4} & \textbf{CIDEr} & \textbf{METEOR} & \textbf{SPICE} \\ \midrule \textsc{DU-VLG}$_{\text{B}-224}$ & \textbf{38.8} & \textbf{124.8} & \textbf{29.2} & \textbf{22.0} \\ w/o $L_{\rm image}$ & 36.9 & 118.8 & 28.4 & 20.5 \\ w/o $L_{\rm text}$ & 35.2 & 112.8 & 27.4 & 19.6 \\ w/o $L_{\rm com}$ & 38.4 & 123.1 & 28.8 & 21.7 \\ \toprule Text $\rightarrow$ Image & \multicolumn{2}{c}{CUB} & COCO & \\ \textbf{System} & \textbf{IS}$\uparrow$ & \textbf{FID}$\downarrow$ & \textbf{FID}$\downarrow$ & \\ \textsc{DU-VLG}$_{\text{B}-224}$ & \textbf{5.14} & \textbf{23.78} & \textbf{26.82} & \\ w/o $L_{\rm image}$ & 4.84 & 25.28 & 36.59 & \\ w/o $L_{\rm text}$ & 5.03 & 24.68 & 29.64 & \\ w/o $L_{\rm com}$ & 5.08 & 24.44 & 27.92 & \\ \bottomrule \end{tabular} \vspace{-2mm} \caption{Ablation study on pre-training tasks and objectives. The best result per metric per dataset is \textbf{bolded}.
\textsc{DU-VLG}$_{\text{B}-224}$ yields significantly higher scores than the other comparisons under an approximate randomization test ($p < 0.0005$). } \label{tab:ablation_study} \vspace{-1mm} \end{table} \subsection{Comparing Pre-training Objectives} \label{results:sec1} \smallskip \noindent \textbf{Comparisons.} We first investigate whether our proposed dual pre-training tasks and commitment loss improve generation quality. We fine-tune on two downstream tasks: image captioning and text-to-image generation. We report our base model with an input image resolution of $224 \times 224$ (\textsc{DU-VLG}$_{\text{B}-224}$). We compare it with three variants: 1) the model trained without the text-driven image inpainting and text-to-image synthesis tasks (w/o $L_{\rm image}$), 2) the model trained without the image-driven text infilling and image captioning tasks (w/o $L_{\rm text}$) and 3) the model trained without the commitment loss (w/o $L_{\rm com}$). \smallskip \noindent \textbf{Results.} As displayed in Tab.\ref{tab:ablation_study}, \textit{our model with dual pre-training tasks performs the best on both image captioning and text-to-image generation tasks}, demonstrating the benefit of the dual pre-training tasks and the commitment loss. For image captioning, compared with the variant without image generation objectives, our model with dual pre-training tasks significantly improves the automatic metrics, which indicates that image generation objectives can boost visual understanding. For text-to-image generation, our model yields better FID and IS scores than the variant without text generation objectives on both the CUB and COCO datasets, demonstrating that text generation objectives can guide better semantic interpretation of text content. Moreover, our model outperforms the variant trained without the commitment loss on both downstream tasks, which further illustrates that the commitment loss improves model performance on both image understanding and generation.
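The significance test cited in the table caption (approximate randomization) can be sketched as a generic paired two-sided procedure; the score lists, trial count and seed below are illustrative placeholders, not the authors' exact protocol.

```python
import random

def approx_randomization_pvalue(scores_a, scores_b, trials=10000, seed=0):
    """Paired approximate randomization test: under the null hypothesis the
    two systems are exchangeable per example, so we randomly swap paired
    scores and count how often the shuffled mean difference is at least as
    extreme as the observed one."""
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    hits = 0
    for _ in range(trials):
        da = db = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:
                a, b = b, a
            da += a
            db += b
        if abs(da - db) / n >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)  # add-one smoothing

# identical systems -> large p; clearly separated systems -> small p
p_same = approx_randomization_pvalue([1, 2, 3, 4], [1, 2, 3, 4], trials=200)
p_diff = approx_randomization_pvalue([10] * 20, [0] * 20, trials=200)
```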
\subsection{Automatic Evaluation} \label{results:sec2} \smallskip \noindent \textbf{Comparisons.} We then compare our model with other vision-and-language models. For image captioning, we include state-of-the-art vision-and-language pre-trained models: (1) object-semantics aligned pre-training (\textsc{OSCAR})~\cite{10.1007/978-3-030-58577-8_8}, (2) unified modal understanding and generation pre-training (\textsc{UNIMO})~\cite{li-etal-2021-unimo}, (3) improving visual representations for vision-and-language pre-training (\textsc{VinVL})~\cite{Zhang_2021_CVPR} and (4) end-to-end vision-and-language pre-training (\textsc{E2E-VLP})~\cite{xu-etal-2021-e2e}. For \textsc{OSCAR} and \textsc{VinVL}, we report their results with cross-entropy optimization for fair comparison. For text-to-image generation, we include four Transformer-based models: (1) \textsc{X-LXMERT}, which has 228 million parameters and is trained on 9 million image-text pairs, (2) \textsc{DALLE}, which has 12 billion parameters and is trained on 250 million text-image pairs~\cite{DBLP:journals/corr/abs-2102-12092}, (3) \textsc{Cogview}, which has 4 billion parameters and is trained on 30 million text-image pairs~\cite{DBLP:journals/corr/abs-2105-13290} and (4) \textsc{NUWA}, which has 870 million parameters and is trained on a mixture of text-image pairs and text-video pairs~\cite{DBLP:journals/corr/abs-2111-12417}. We further compare our model with three traditional methods based on generative adversarial networks (GANs): (1) \textsc{DM-GAN}~\cite{Zhu_2019_CVPR}, (2) \textsc{DF-GAN}~\cite{DBLP:journals/corr/abs-2008-05865} and (3) \textsc{XMC-GAN}~\cite{zhang2021cross}. For visual commonsense reasoning, we include the Vision-Language Transformer (\textsc{V-L Transformer})~\cite{park2020visualcomet} as a baseline, which fuses region-based visual features into a pre-trained GPT-2~\cite{radford2019language}.
\smallskip \noindent \textbf{Results.} For image captioning, our model achieves better scores than both the end-to-end method and the two-stage methods. In Tab.\ref{tab:caption}, DU-VLG outperforms the previous state-of-the-art pre-trained model \textsc{VinVL}, e.g., improving BLEU-4 and CIDEr by more than 1 and 3 points, respectively. Moreover, for text-to-image generation tasks, our model achieves state-of-the-art IS and FID on the CUB dataset, as displayed in Tab.\ref{tab:text2image}, outperforming traditional GAN-based methods. Compared with Transformer-based methods, our model yields better or comparable FID scores on the COCO dataset. It is worth noting that our model has fewer parameters and uses less training data than \textsc{DALLE}, \textsc{Cogview} and \textsc{NUWA}. This demonstrates the effectiveness of our proposed framework. In addition, we study the effect of different input image resolutions. We compare two different resolutions of the input images: $224 \times 224$ and $384 \times 384$. In Tab.\ref{tab:caption} and Tab.\ref{tab:text2image}, we find that higher input resolution leads to better results on both image-to-text and text-to-image generation tasks. This observation underlines the importance of fine-grained image representations. \input{007_1_i2t} We then evaluate our model on a more challenging vision-and-language task, visual commonsense reasoning. As shown in Tab.\ref{tab:vcr}, our model significantly outperforms the \textsc{V-L Transformer}, which is fine-tuned from a language model, GPT-2. This demonstrates that our model is able to jointly comprehend image and text inputs and generate informative inferences.
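The FID scores reported here compare Gaussians fitted to Inception activations of real and generated images. A minimal numpy sketch of the underlying Frechet distance (assuming the activation means and covariances have already been estimated; the symmetric-form trick below avoids a general matrix square root):

```python
import numpy as np

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet Inception Distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    diff = mu1 - mu2
    # Tr((S1 S2)^{1/2}) via the symmetric form S1^{1/2} S2 S1^{1/2}
    w, v = np.linalg.eigh(sigma1)
    s1_half = (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T
    m = s1_half @ sigma2 @ s1_half
    tr_sqrt = np.sqrt(np.clip(np.linalg.eigvalsh(m), 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_sqrt)
```

For identical Gaussians the distance is zero; shifting one mean by a unit vector (with identity covariances) gives exactly 1.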
\subsection{Human Evaluation} \label{results:sec3} \input{007_2_t2i} \input{007_3_add} \begin{figure}[t] \centering \includegraphics[width=\columnwidth,trim=0 0 0cm 0, clip]{figure/humaneval.pdf} \captionof{figure}{Human evaluation on the COCO dataset. \textsc{DU-VLG} yields significantly higher scores than other systems on fidelity, relevance, informativeness and faithfulness ($p < 0.05$). } \label{fig:humaneval} \vspace{-1mm} \end{figure} We conduct human evaluation to analyze the generation quality of images and text. For both image captioning and text-to-image generation, we select 100 samples from the COCO test set and hire three annotators to rate captions and images. For image captioning, we include three systems: (1) the best-performing pre-trained model \textsc{VinVL}, (2) our model that removes the dual pre-training, \textsc{DU-VLG} w/o $L_{\rm image}$, and (3) our best-performing model \textsc{DU-VLG}. For text-to-image generation, we compare three models: (1) \textsc{X-LXMERT}, a Transformer-based model pre-trained on about 9 million image-text pairs, (2) our model trained without text generation objectives, \textsc{DU-VLG} w/o $L_{\rm text}$, and (3) \textsc{DU-VLG}. For our model, we use the large version with an input image resolution of $384 \times 384$. For image captioning, human judges are asked to rate on two aspects: \textbf{informativeness}---whether the caption covers important objects from the image and \textbf{faithfulness}---whether the caption correctly describes the image. For text-to-image generation, we consider two aspects: \textbf{fidelity}---whether the image is realistic and \textbf{relevance}---whether the image matches the caption. All aspects are rated on a Likert scale from 1 (poor) to 5 (good). \smallskip \noindent \textbf{Results.} From Fig.\ref{fig:humaneval}, we find that our \textsc{DU-VLG} model obtains better scores in relevance, fidelity, informativeness and faithfulness than the variant that removes the dual pre-training tasks.
This confirms our claim that bi-directional generation objectives improve semantic alignment between images and text. Meanwhile, compared with the well-performing models \textsc{VinVL} and \textsc{X-LXMERT}, our model yields better scores on all four aspects. This implies that our model generates more informative captions committed to the input images and synthesizes more realistic images aligned with the captions compared to the state-of-the-art pre-trained models. Interestingly, image captioning models yield higher scores than text-to-image generation models, closer to the top of the scale. After inspection, we find that our model yields near-perfect captions compared to human-written ones, while the generated images sometimes fail in synthesizing details. For example, the shape of a banana may be distorted, limiting the fidelity of the image. \subsection{Inference Efficiency} \label{results:sec4} Next, we compare the inference speed and the number of model parameters with existing models. For image captioning, we compare our model with the two best-performing pre-trained models: the base versions of \textsc{UNIMO} and \textsc{VinVL}. For text-to-image generation, we compare with two large Transformer-based models, \textsc{DALLE} and \textsc{Cogview}. For our model, we report the base version. We test speed on the COCO test set with one 32GB NVIDIA TESLA V100. We include the visual decoder when calculating the inference speed. In Tab.\ref{tab:speed_comparison}, we find our model is roughly 7$\times$ faster than two-stage methods on image captioning. This is mainly because extracting image features with ViT is much faster than object detection. Importantly, our model has a comparable number of parameters to \textsc{UNIMO} and \textsc{VinVL}. For text-to-image generation, our model is roughly 400$\times$ faster than the large model \textsc{Cogview} with only 5 percent of its parameters. This further demonstrates the efficiency of our unified framework.
\begin{table}[t] \centering \fontsize{11}{11}\selectfont \setlength{\tabcolsep}{2mm} \begin{tabular}{@{}lcc@{}} \toprule \textbf{System} & \textbf{Time(s)} & \textbf{\# Param. (M)} \\ \midrule \multicolumn{2}{@{}l}{\bf Image Captioning} & \\ \textsc{UNIMO}$_{\text{B}}$ & 0.88$+$0.12 & 172 \\ \textsc{VINVL}$_{\text{B}}$ & 0.90$+$0.12 & 187 \\ \textsc{DU-VLG}$_{\text{B-224}}$ & 0.14 & 228 \\ \midrule \multicolumn{2}{@{}l}{\bf Text-to-Image Generation} & \\ \textsc{DALLE} & -- & 12,000 \\ \textsc{Cogview} & 300 & 4,000 \\ \textsc{DU-VLG}$_{\text{B-224}}$ & 0.76 & 228 \\ \bottomrule \end{tabular} \caption{Comparing inference speed (time) and number of parameters (\# Param.) on different tasks. For the two-stage methods \textsc{UNIMO} and \textsc{VINVL}, we report image feature extraction and beam search time separately. } \label{tab:speed_comparison} \vspace{-3mm} \end{table} \section{Conclusion} \label{sec:conclusion} We presented a novel framework, DU-VLG, which unifies vision-and-language generation tasks with an encoder-decoder Transformer. We propose to use a hybrid image embedding schema in the encoder and decoder. In addition, we pre-train the model with novel dual pre-training tasks, along with a new commitment loss, to guide better image and text understanding and generation. Experiments show that our proposed dual pre-training objectives significantly improve performance on three vision-and-language generation tasks. Human evaluation further confirms that our model with dual pre-training tasks improves generation quality on image captioning and text-to-image generation. \section{Additional Evaluation} We include five examples from the COCO dataset for the image captioning and text-to-image generation tasks. In Fig.\ref{fig:image2text} and Fig.\ref{fig:text2image}, we find that DU-VLG generates captions and images of high quality.
\section{Human Evaluation Guideline} In human evaluation, each annotator is presented with 100 model-generated images and 100 model-generated captions from 3 systems (in random order). For text-to-image generation, the human judges are asked to evaluate fidelity and relevance on a scale of 1 to 5 (1 being poor and 5 being good). Here are descriptions of the two aspects: $\bullet$ \textbf{Fidelity}: Whether the image is realistic and looks like a real photo. $\bullet$ \textbf{Relevance}: Whether the image provides necessary content coverage from the text description. For image captioning, the human annotators are asked to evaluate faithfulness and informativeness on a scale of 1 to 5 (1 being poor and 5 being good). Here are detailed descriptions of the two aspects: $\bullet$ \textbf{Faithfulness}: Whether the caption correctly describes main objects in the image. $\bullet$ \textbf{Informativeness}: Whether the caption covers enough information from the image. The definitions of the four aspects can be found in Tab.\ref{tab:guideline}. \input{012_guideline} ~\\ \input{011_case} \section{Additional Training Details} \subsection{Pre-training} Following ViT, we use a patch size of $p=16$ for all the experiments. For the VQ-VAE, we take the off-the-shelf VQ-GAN~\cite{Esser_2021_CVPR}, which is a variant of VQ-VAE. The VQ-GAN maps each $16 \times 16$ pixel patch to a discrete visual token, with a vocabulary size of $|\mathcal{V}| = 16384$. For the base and large models, we use \texttt{ViT-base} and \texttt{ViT-large} with a patch size of $p=16$ to extract image patch embeddings. ViT weights are kept frozen during pre-training. Since image sequences are longer than text sequences, we set $\alpha = 0.05$ and $\beta=1$ for all experiments. For model optimization, we utilize the Adam optimizer with a gradient clipping of 1.0 and an effective batch size of 1024.
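With a patch size of $p=16$, every $16 \times 16$ block of pixels becomes one patch embedding (for ViT) or one discrete visual token (for VQ-GAN), so a $224 \times 224$ image yields $14 \times 14 = 196$ tokens and a $384 \times 384$ image yields $24 \times 24 = 576$. A minimal sketch of the non-overlapping patch extraction (illustrative only; the embedding and quantization steps themselves are omitted):

```python
import numpy as np

def patchify(image, p=16):
    """Split an H x W x C image into non-overlapping p x p patches,
    the unit that ViT embeds and that VQ-GAN maps to one visual token."""
    h, w, c = image.shape
    assert h % p == 0 and w % p == 0, "image side must be a multiple of p"
    patches = image.reshape(h // p, p, w // p, p, c).swapaxes(1, 2)
    return patches.reshape(-1, p, p, c)  # (num_tokens, p, p, C), row-major order

print(patchify(np.zeros((224, 224, 3))).shape[0])  # -> 196 visual tokens
```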
\subsection{Downstream Tasks} \noindent \textbf{Text-to-Image Generation.} The CUB dataset contains 200 bird categories with 11,788 images. Each image has ten text descriptions. We follow the standard split which uses 150 categories with 8,855 images for training and the remaining 50 categories with 2,933 images for testing. The COCO dataset contains 82,784 images for training and 40,505 for testing. Each image has five text descriptions. We fine-tune the pre-trained model with a learning rate of 1e-4 for 300 epochs on both datasets. Similar to ~\newcite{DBLP:journals/corr/abs-2102-12092}, we sample 16 images per caption with the nucleus sampling strategy~\cite{Holtzman2020The}. During testing, we first sample 16 images per caption and rerank the generated images with a CLIP model~\cite{pmlr-v139-radford21a}. The CLIP model selects the best image based on its correlation with the text description. \noindent \textbf{Image Captioning.} For the COCO dataset, we follow the Karpathy split, which has 113,287, 5,000 and 5,000 images for training, validation and testing, respectively. Each image has 5 human-written captions. During inference, we generate a caption for each image and evaluate against the five references. We fine-tune on the COCO dataset with a learning rate of 3e-5. Vision Transformer layers are trainable during fine-tuning. Following ~\newcite{10.1007/978-3-030-58577-8_8}, we add object labels detected by the object detection model as additional text inputs. We find object labels improve CIDEr and BLEU scores by at least 1 point and 0.3 points, respectively. During testing, we use beam search with a beam size of 5. \noindent \textbf{Visual Commonsense Reasoning.} For visual commonsense reasoning, we fine-tune with a learning rate of 3e-5 and set the Vision Transformer layers trainable. During testing, we use top-k sampling with $k=50$. We find top-k sampling yields more meaningful responses compared to beam search.
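The CLIP reranking step described above reduces to scoring each sampled image against the caption and keeping the best one. A minimal sketch (the scoring function is a placeholder standing in for the CLIP image-text similarity):

```python
def rerank(images, caption, score_fn):
    """Return the sampled image whose image-text score is highest.

    `score_fn(image, caption)` is a placeholder for a model-based similarity,
    e.g. the CLIP cosine score between image and text embeddings.
    """
    best, best_score = None, float("-inf")
    for img in images:
        s = score_fn(img, caption)
        if s > best_score:
            best, best_score = img, s
    return best
```

In the paper's setting, `images` would hold the 16 nucleus-sampled candidates for one caption.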
\section{Acknowledgments} This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900). \section{Ethics Statement} Large models that are pre-trained on heterogeneous data can be potentially harmful to marginalized populations. Along with the improved controllability, we also recognize that our system might be misused to create offensive or fabricated content. We therefore advocate cautious usage in real-world deployment. \end{document}
\section{Introduction} Quantum field theories are plagued by divergences in their continuum formulations. Regularisation on a lattice renders a theory well-defined. The renormalisation process, when successful, finds fixed points of such a lattice theory, defined where the correlation length diverges. Each of these is thought to correspond to a well-defined theory in the continuum limit. One may alternatively search directly for a critical point through the identification and measurement of a (continuous) order parameter, such as the bilinear condensate, which breaks a symmetry of the theory at the critical point. Further, we may look at the eigenvalues of the Dirac operator. This preliminary work continues the exploration of the Thirring model in 2+1D \cite{Hands2015,Hands2016,Hands2019} focusing on (bulk) domain wall fermions \cite{Kap92,Chiu2003,Bor2000}, equivalent to (truncated) overlap fermions \cite{Nar95,Neu98}, which permit the continuum $U(2)\to U(1)\times U(1)$ symmetry breaking rather than the $U(1)\times U(1) \to U(1)$ found with staggered fermions. In this framework we look at condensates and eigenvalues, with the eigenvalue analysis of \cite{Nar2021} in mind. Numerical aspects are considered in particular at this stage. The Euclidean continuum formulation of the Thirring model is given by \begin{equation} S[\psi,\bar{\psi}]=\int d^3 x \left[\bar{\psi}(\gamma_\mu \partial_\mu +m)\psi+\frac{g^2}{2}(\bar{\psi}\gamma_\mu\psi)^2\right] \end{equation} The self-interaction term may be reformulated with an auxiliary field and the usual gauge-interacting Dirac term $S[\psi,\bar{\psi}]=S_F[\psi,\bar{\psi},A]+S_G[A]$: \begin{equation} S_F[\psi,\bar{\psi},A]=\int d^3 x \bar{\psi}(\gamma_\mu (\partial_\mu +iA_\mu)+m)\psi \end{equation} \begin{equation} S_G[A]=\frac{1}{g^2}\int d^3x A_\mu^2 \end{equation} This formulation allows Monte Carlo methods to be used in calculations.
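The auxiliary-field rewriting is a Hubbard--Stratonovich transformation: integrating out a Gaussian vector field coupled linearly to the current regenerates the four-fermion term. Schematically, writing $j_\mu = \bar{\psi}\gamma_\mu\psi$ and completing the square,
\begin{equation}
\int \mathcal{D}A\, \exp\Big(-\frac{1}{2\lambda}\int d^3x\, A_\mu^2 - i\int d^3x\, j_\mu A_\mu\Big) \propto \exp\Big(-\frac{\lambda}{2}\int d^3x\, j_\mu^2\Big),
\end{equation}
so that choosing $\lambda \propto g^2$ reproduces the four-fermion interaction; the precise factor is fixed by the normalization convention adopted for $S_G$.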
\section{Lattice Dirac Formulations in 2+1D} Domain wall fermions \cite{Kap92} and subsequently overlap fermions \cite{Nar95} were developed in an attempt to capture the chiral anomaly on the lattice in even dimensions. Domain wall fermions add an extra dimension to the Dirac operator in such a way that chiral fermions are found on the walls, which are separated by the extra dimension. Overlap fermions, formally equivalent to bulk formulations of the domain wall fermions in the infinite limit of the extent of the extra dimension, eliminate the requirement of an extra dimension, and can be expressed compactly utilising the matrix sign function \cite{Neu98}. We consider Shamir ($D^S_{DW}$) and Wilson ($D^W_{DW}$) domain wall fermions, both of which are instances of Mobius fermions \cite{Brow2017}. We want to express them in the form $D=D_0+mD_m$. With the extent of the extra dimension set to $L_s=4$ the massless components $D^{S}_{0,DW}$ and $D^{W}_{0,DW}$ may be expressed by \begin{equation} D_{0,DW}^{S}= \begin{pmatrix} D_W^+ & -P_- & 0 & 0 \\ -P_+ & D_W^+ & -P_- & 0 \\ 0 & -P_+ & D_W^+ & -P_- \\ 0 & 0 & -P_+ & D_W^+ \\ \end{pmatrix}, D_{0,DW}^{W}= \begin{pmatrix} D_W^+ & D_W^-P_- & 0 & 0 \\ D_W^-P_+ & D_W^+ & -P_- & 0 \\ 0 & D_W^-P_+ & D_W^+ & D_W^-P_- \\ 0 & 0 & D_W^-P_+ & D_W^+ \\ \end{pmatrix} \end{equation} where $D_W^\pm=D_W\pm I$, $D_W$ is the usual Wilson Dirac operator, with a negative mass term, $M$, known as the domain wall height. The usual bare mass term $m$ is incorporated on the domain walls. 
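The $L_s \times L_s$ block structure of $D^{S}_{0,DW}$ above can be assembled mechanically from $D_W$ and the chiral projectors. A toy dense-matrix sketch (illustrative only: gauge links are assumed folded into the supplied $D_W$, and the mass blocks coupling the two walls are omitted):

```python
import numpy as np

def shamir_dw0(DW, Pp, Pm, Ls):
    """Assemble the massless Shamir domain-wall operator as an Ls x Ls block
    matrix: D_W + 1 on the diagonal, -P_- on the upper off-diagonal,
    -P_+ on the lower off-diagonal."""
    n = DW.shape[0]
    DWp = DW + np.eye(n)           # D_W^+ = D_W + I
    D = np.zeros((Ls * n, Ls * n))
    for s in range(Ls):
        D[s*n:(s+1)*n, s*n:(s+1)*n] = DWp
        if s + 1 < Ls:
            D[s*n:(s+1)*n, (s+1)*n:(s+2)*n] = -Pm
            D[(s+1)*n:(s+2)*n, s*n:(s+1)*n] = -Pp
    return D
```

The Wilson variant would multiply the off-diagonal projectors by $D_W^- = D_W - I$, following the $D^{W}_{0,DW}$ pattern above.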
\begin{equation} D_{m1,DW}^{S}= \begin{pmatrix} 0 & 0 & 0 & P_+ \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ P_- & 0 & 0 & 0\\ \end{pmatrix}, D_{m1,DW}^{W}= \begin{pmatrix} 0 & 0 & 0 & -D_W^-P_+ \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -D_W^-P_- & 0 & 0 & 0\\ \end{pmatrix} \end{equation} In 2+1D we further have the anti-Hermitian mass terms \begin{equation} D_{m3,DW}^{S}= \begin{pmatrix} 0 & 0 & 0 & i\gamma_3 P_+ \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ i\gamma_3 P_- & 0 & 0 & 0\\ \end{pmatrix}, D_{m3,DW}^{W}= \begin{pmatrix} 0 & 0 & 0 & -iD_W^-P_+\gamma_3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -i D_W^-P_- \gamma_3 & 0 & 0 & 0\\ \end{pmatrix} \end{equation} which eliminate an error term associated with the Hermitian mass terms and enable $L_s\to\infty$ measurements to be accurately approximated at significantly lower $L_s$ values \cite{Hands2015}. For the overlap operator we have \begin{equation} \label{EQN::ol} \begin{split} D^I_{OL} & = \frac{1+V}{2}+m\frac{1-V}{2}\\ D^{G3}_{OL} & = \frac{1+V}{2}+im\frac{1-V}{2}\gamma_3\\ \end{split} \end{equation} in which $V =\gamma_3 \text{sgn} (H)$ and we consider the kernel $H$ to be either the Shamir kernel $H_S$ or the Wilson kernel $H_W$: \begin{equation} \begin{split} H_W & =\gamma_3 D_W \\ H_S & =\gamma_3 \frac{D_W}{2+D_W} \\ \end{split} \end{equation} where $\gamma_3 V \gamma_3 = V^\dagger$ and $D_W\equiv D_W(-M)$ again. In 2+1D the $\gamma_5$ \emph{may} be replaced with $\gamma_3$, as has been done above. Even though it is sufficient to show the equivalence of the theories through the equality of the determinants \cite{Ken2006, Hands2016}, since all measurable quantities can be derived from the partition function, and the partition function evaluates to the determinant, it is nevertheless instructive to see the precise relation between the full operator matrices.
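For small toy problems the overlap operator of eqn.~\ref{EQN::ol} can be evaluated exactly via an eigendecomposition of the Hermitian kernel. A dense sketch (illustrative only: it assumes $H$ has no zero eigenvalues, and in production the matrix sign function is replaced by the HT or Zolotarev rational approximation at finite $L_s$):

```python
import numpy as np

def overlap(H, g3, m):
    """D_OL^I = (1+V)/2 + m (1-V)/2 with V = g3 sgn(H); the matrix sign
    function is evaluated exactly from the eigendecomposition of Hermitian H."""
    w, U = np.linalg.eigh(H)
    sgnH = (U * np.sign(w)) @ U.conj().T   # U diag(sgn(w)) U^dagger
    V = g3 @ sgnH
    I = np.eye(H.shape[0])
    return 0.5 * (I + V) + 0.5 * m * (I - V)
```

Two easy checks: at $m=1$ the operator collapses to the identity for any $V$, and a kernel that commutes with $\gamma_3$ gives $V$ diagonal.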
Defining $K_{DW} \equiv C^\dagger D_{DW}^{-1}(1)D_{DW}(m)C$, we have \begin{equation} K_{DW}= \begin{pmatrix} D_{OL}(m) & 0 & 0 & \cdots \\ -(1-m)\triangle_2^R & 1 & 0 & \\ -(1-m)\triangle_3^R & 0 & \ddots & \\ \end{pmatrix} \end{equation} For further details the reader is referred to \cite{Brow2017}. Similar relations can be demonstrated for the 2+1D variants above, although it should be noted that introducing Zolotarev coefficients in the Shamir domain wall formulation breaks this relation \cite{Chiu2003}. \section{Results} The auxiliary fields are generated using a rational hybrid Monte Carlo (RHMC) approach \cite{Cla2004}, necessary for exploration with a single Dirac field. In all cases these are generated using the Shamir domain wall formulation with coefficients fixed to one, corresponding to the hyperbolic tanh formulation of the overlap operator, and the anti-Hermitian mass terms. We then conduct measurements with overlap operators of different types and $L_s$ values. As such we are generally looking at partially quenched results, although we emphasise that when only the $L_s$ value differs in the methodology of the generation of the auxiliary field and the measurements, then the full physics should be achieved in the simulation if both $L_s$ values are large enough. \subsection{Kernel Spectra and Condition Number} Both the Wilson ($H_W=\gamma_3 D_W$) and Shamir ($H_S=\gamma_3 D_W / (2+D_W)$) kernels appear to have minimum and maximum eigenvalues independent of $L_s$, at least above a certain unexplored cutoff. This is shown in the left panel of figure \ref{FIG::KLsInd}. This hints that the auxiliary field structure may be retained even at small values of $L_s$, eliminating the requirement for more arduous calculations in the dynamic creation of the auxiliary fields through the RHMC method.
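The spectral quantities discussed next ($\lambda_{\text{min}}(H)$, $\lambda_{\text{max}}(H)$ and the condition number $\kappa(H)$) can be obtained for a toy Hermitian kernel by direct diagonalisation. A sketch (production runs would use an iterative eigensolver; the extrema of $|\lambda|$ are the relevant quantities when approximating the sign function):

```python
import numpy as np

def spectral_range(H):
    """Extrema of |eigenvalues| and the condition number of a Hermitian
    kernel H, the quantities that fix the interval on which a Zolotarev
    (or HT) approximation to sgn(H) must be accurate."""
    w = np.abs(np.linalg.eigvalsh(H))
    lam_min, lam_max = w.min(), w.max()
    return lam_min, lam_max, lam_max / lam_min
```

A large $\kappa(H)$, as seen for the non-compact formulation near the critical point, forces a wider approximation interval or a higher-order rational approximation.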
We need to be aware of the spectral range of the kernels, that is the eigenvalue extrema $\lambda_{\text{min}}(H)$ and $\lambda_{\text{max}}(H)$, and also the condition number $\kappa(H)$, to enable a wise choice of Zolotarev or HT parameters. In the latter case a rescaling \cite{Brow2017} may sometimes be possible, but this acceleration technique appears inapplicable to the non-compact Shamir formulation due to the unbounded upper eigenvalue. The upper eigenvalue is similarly unbounded for the non-compact Wilson formulation, but in practice the maximum eigenvalue does not grow so prohibitively, as also indicated in figure \ref{FIG::KLsInd}. A bounded value of a compact case is also shown (artificial, as it was created from a non-compact auxiliary field). The condition number is plotted against $\beta=\frac{1}{g^2}$ in the second panel of figure \ref{FIG::KLsInd} and the increased (numerical) challenge of the non-compact formulation around the critical point is in evidence. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{spectraLsInd.pdf} \includegraphics[scale=0.5]{spectraCondNo.pdf} \caption{LHS: Maximum and minimum eigenvalues for different kernels - non-compact (NC), Wilson (W) and Shamir (S) kernels - produced with $L_s$ values of 20 or 60, plotted against coupling strength $\beta$. A compact (C) case is also plotted for the maximum eigenvalue only. Note that markers and lines represent different, nearly overlapping, curves in this plot. RHS: The condition number $\kappa(H)$ for compact and non-compact formulations, plotted against coupling strength $\beta=\frac{1}{g^2}$, for different kernels $H$, all using auxiliary (A) fields generated with $L_s=20$.} \label{FIG::KLsInd} \end{center} \end{figure} Often, the physics of interest is determined by the smallest eigenvalues.
Although it is not formally the case, for the smallest eigenvalues we have the approximation \begin{equation} \label{EQN::eig} \text{eig}[H_S]\approx\frac{\text{eig}[H_W]}{2+\text{eig}[H_W]} \end{equation} as demonstrated in figure \ref{FIG::KSrelW}. The large eigenvalues have no such approximation. It will be interesting to see if this relation can be exploited in the evaluation of the overlap operator and whether it justifies the interchange of different kernels for sea and valence fermions. \begin{figure} \begin{center} \includegraphics[scale=0.6]{spectraRel.pdf} \caption{The lowest eigenvalues are plotted for the Wilson and Shamir kernels for a range of auxiliary fields. A derived eigenvalue S[Wilson] calculated according to eqn. \ref{EQN::eig} is also plotted, showing the accuracy of the approximation.} \label{FIG::KSrelW} \end{center} \end{figure} \subsection{Condensate} The independence of the spectral range from the value of $L_s$ shown in figure \ref{FIG::KLsInd} suggests that it may be sufficient to use auxiliary fields generated with sea fermions at a lower $L_s$ value. This is explored via the bilinear condensate, defined by $C \equiv \frac{\partial \ln Z}{\partial m} = \frac{1}{Z} \braket{\frac{\partial Z_F}{\partial m}}_G$, where $\braket{O}_G \equiv \int \mathcal{D}[U] O[U] \text{exp}(-S_G[U])$ and \begin{equation} \frac{\partial Z_F}{\partial m} = \text{Tr} [D^m D^{-1}] \equiv C_F \end{equation} For the overlap operators we have \begin{equation} \begin{split} C_{F,OL}^{M1} & =\text{Tr}[\frac{1}{1-m}((D_{OL}^{I})^{-1}-1)] \\ C_{F,OL}^{M3} & = \text{Tr}[\frac{-1}{i\gamma_3+m}((D_{OL}^{G3})^{-1}-1)] \\ \end{split} \end{equation} corresponding to the forms given in eqn \ref{EQN::ol}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{condensateA.pdf} \caption{Condensate evaluated with $L_s$ values for the generation of the auxiliary fields distinct from the $L_s$ values used for the condensate measurement.
The coupling strength is $\beta=0.25$, in the broken phase.} \label{FIG::cond} \end{center} \end{figure} Figure \ref{FIG::cond} shows condensates over a range of mass values. The curves show different $L_s$ values used for the auxiliary fields and the condensate measurement, both using Shamir kernels, and indicate that the results are predominantly determined by the $L_s$ value of the measurement rather than that of the auxiliary field, as hoped. There is a clear difference between the results measured with $L_s=60$ on auxiliary fields of $L_s=20$ and $L_s=60$, but this appears to be significantly smaller than the error from not having reached the $L_s$ limit in the measurement. We argue this makes a strong case for the decoupling of the $L_s$ values in fully dynamic fermions, although what would constitute suitable $L_s$ values would be context dependent. \subsection{Overlap Spectra} We consider the convergence of the lowest eigenvalues of the (Hermitian) overlap operator as found by both forms $D_{OL}^\dagger D_{OL} =\frac{1}{4}(2+V+V^\dagger)$ and $D_{OL}^\dagger D_{OL} =\frac{1}{4}(1+V+V^\dagger+V^\dagger V)$, where the former holds for the exact overlap operator and the latter for the truncated overlap operator; the two coincide when $V^\dagger V=1$, i.e.\ as the approximation to the sign function becomes exact. We refer to the second formulation as the alternative formulation, denoted Alt in the plots. The auxiliary fields used are generated with $L_s=20$ Shamir kernels, and zero mass. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{spectraOLWHT40.pdf} \includegraphics[scale=0.5]{spectraOLSHT40.pdf} \caption{LHS: The lowest 3 eigenvalues (e1, e3, e5) of the Wilson overlap operator using the hyperbolic tanh (HT) approximation with $L_s=40$.
RHS: The lowest 2 eigenvalues (e1, e3) of the Shamir overlap operator using the hyperbolic tanh (HT) approximation with $L_s=40$.} \label{FIG::OLWSHT40} \end{center} \end{figure} Figure \ref{FIG::OLWSHT40} shows the 1st, 3rd, and 5th lowest eigenvalues, since each eigenvalue occurs twice. The first panel shows Wilson kernel results. These very preliminary results may be compared with the quenched compact and non-compact results using the Wilson kernel of \cite{Nar2021}. Their compact case gives an S-curve which we see with our Wilson results, although the cases are not directly comparable since ours is a partially quenched non-compact case. Our Shamir case shows an upturn in the minimum eigenvalue, which perhaps corresponds to the more complex curve found for the non-compact case in \cite{Nar2021}, and is more directly comparable. However, more results need to be obtained. That $L_s$ convergence has not been attained is evidenced by the difference between the standard and Alt cases. Figure \ref{FIG::OLW} shows the minimum eigenvalue for Wilson formulations including the Zolotarev formulation. Using an $L_s$ value of 20 with the alternative formulation and the HT approximation gives a better (although still far from) converged result at strong coupling than the $L_s=40$ default case. The Zolotarev formulation with $L_s=40$ is visually converged (when compared with $L_s=50,60$, not shown here), and it remains to be seen if using the alternative formulation will allow for a lower $L_s$ value. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{spectraOLW.pdf} \caption{The lowest eigenvalue using the Wilson overlap operator using different $L_s$ values and approximation schemes.
HT denotes the hyperbolic tanh approximation, and Z denotes the Zolotarev approximation on the spectral range $[10^{-4},10]$.} \label{FIG::OLW} \end{center} \end{figure} \section{Concluding Remarks} We have demonstrated the potential for lower $L_s$ valued sea fermions, and highlighted the challenge posed by the unbounded maximum eigenvalue of the non-compact formulation in the overlap kernels in the strongly coupled region. In practical simulations the maximum eigenvalue of the Shamir kernel was significantly higher than for the Wilson kernel, supporting the use of the Wilson kernel in the strongly coupled region if possible. Given the relation between the smallest eigenvalues of the Shamir and Wilson kernels, we hope the partially quenched interchange of kernels is justified, and will continue to investigate in this direction. We will continue examining the eigenvalues of the overlap operators. \section*{Acknowledgements} We thank Rajamani Narayanan for helpful discussions and support. This work entailed the use of the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. DiRAC is part of the National e-Infrastructure. Further work was performed on the Sunbird facility of Supercomputing Wales. The work of JW was supported by an EPSRC studentship, and that of SJH by STFC grant ST/L000369/1.
\section{Introduction} Electrohydrodynamics deals with the coupled motion of fluids and ions~\cite{theoretical_microfluidics}. Such coupling has relevant effects in microfluidics, allowing manipulation of fluids or dispersed particles using electrical stimuli~\cite{electrohydrodynamics_microsystems}. A specific range of applications in which electrohydrodynamic effects are of crucial importance are nanopore systems~\cite{dekker2007solid,gubbiotti2021electroosmosis}. When a fluid containing ions is located in a confined region such as a nanopore, a non-uniform distribution of ions may arise in a region whose size depends on the ionic concentration in the bulk, the so-called Debye layer~\cite{theoretical_microfluidics}. This inhomogeneity in the ionic distribution may be generated as a consequence of the interaction of the ions with the nanopore walls, as in the case of a charged nanopore surface~\cite{ma2019nanopore,laohakunakorn2014electroosmotic}. However, it may also be induced by ion-specific interactions with neutral walls, as has been studied in the case of hydrophobic nanopores~\cite{kim2009high}. Ionic inhomogeneity can also be achieved via an external gating voltage applied to electrodes embedded in the nanopore~\cite{bai2014fabrication,cantley2019voltage} or via an induced-charge mechanism, where the same external electric field that drives the ions through the pore also polarizes the solid membrane, inducing a surface potential that, in turn, alters the ion distribution in the nanopore~\cite{di2021geometrically,yao2020induced}. In all these cases, the ionic distributions near the walls are generally different for positive and negative ions. As a consequence, the zone near the confining walls is electrically charged, and the fluid in that region can be put in motion by an external electric field.
The resulting flow is known as electroosmotic flow and it has been shown to take place both in synthetic~\cite{yusko2009electroosmotic,laohakunakorn2014electroosmotic,balme2015ionic} and in biological~\cite{bonome2017electroosmotic,boukhet2016probing,huang2017electro} nanopores. Electroosmotic flow can be the dominant effect governing the translocation of particles or molecules through a nanopore~\cite{malgaretti2014entropic,asandei2016electroosmotic,boukhet2016probing,huang2020electro}, and can generate interesting phenomena such as current rectification~\cite{yusko2009electroosmotic} or complex velocity profiles~\cite{chinappi2018charge,chinappi2020analytical}. In all the mentioned examples, the modeling of confined systems has to combine electrohydrodynamic phenomena with thermal fluctuations, which are especially important in nanopores~\cite{marbach2021intrinsic}. For this reason, an extensively used technique to simulate nanopores is all-atom Molecular Dynamics (MD)~\cite{maffeo2012modeling,bonome2017electroosmotic,chinappi2018charge}, which naturally includes all the relevant effects. For systems beyond the typical length and time scales accessible to Molecular Dynamics, mesoscale models which reduce the degrees of freedom while properly modeling the thermal fluctuations of the system are needed; for a review of computational methods to study electrohydrodynamics at the nanoscale see, among others, Refs.~\cite{rotenberg2013electrokinetics,gubbiotti2021electroosmosis}. A technique which has been widely used to simulate mesoscale systems is Dissipative Particle Dynamics (DPD)~\cite{espanol2017perspective}. In the DPD framework, the fluid is represented by a system of pairwise interacting particles~\cite{hoogerbrugge1992simulating,espanol1995statistical}.
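The pairwise interactions underlying DPD combine a soft conservative repulsion with dissipative and random forces tied together by a fluctuation-dissipation relation, $\sigma^2 = 2\gamma k_B T$. A minimal sketch in the canonical Groot--Warren form (a generic illustration with placeholder parameters, not the specific EH-DPD forces introduced in this work):

```python
import numpy as np

def dpd_pair_force(ri, rj, vi, vj, a=25.0, gamma=4.5, kT=1.0, rc=1.0,
                   dt=0.01, rng=None):
    """Force on particle i from particle j: conservative + dissipative +
    random parts, with sigma^2 = 2 gamma kT (fluctuation-dissipation)."""
    rng = rng or np.random.default_rng(0)
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= rc or r == 0.0:
        return np.zeros_like(ri)
    e = rij / r
    w = 1.0 - r / rc                                   # linear weight function
    fc = a * w * e                                     # soft conservative repulsion
    fd = -gamma * w**2 * np.dot(e, vi - vj) * e        # dissipative (friction)
    sigma = np.sqrt(2.0 * gamma * kT)
    fr = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # random kicks
    return fc + fd + fr
```

The $1/\sqrt{dt}$ factor on the random term is the usual discretisation of pairwise white noise; the weight functions satisfy $w^R = w$, $w^D = w^2$.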
The original DPD model was developed to study the rheology of colloidal suspensions~\cite{hoogerbrugge1992simulating,bolintineanu2014particle}, but in the last two decades it has been extended in many different ways in order to simulate increasingly complex systems~\cite{espanol2003smoothed,pagonabarraga2001dissipative,avalos1997dissipative,espanol1997dissipative,li2015transport,deng2016cdpd}. Physical systems studied with DPD or derived methods include blood~\cite{katanov2015microvascular,blumers2017gpu}, polymers~\cite{kreer2016polymer}, biomolecules~\cite{peter2015polarizable}, biological membranes~\cite{sevink2014efficient}, and droplets~\cite{wang2015droplets}. DPD simulations including electrostatic interactions have also been performed, considering either DPD particles with fixed charge~\cite{groot2003electrostatic,gonzalez2006electrostatic,smiatek2011mesoscopic} or charged polyelectrolytes~\cite{sindelka2014dissipative,lisal2016self}. A different approach considers the concentrations of ionic species as additional scalar variables associated with each DPD particle, modelling the fluxes of concentration between them~\cite{deng2016cdpd}. This approach is promising, since it allows one to rescale the system size without having to explicitly parametrize the ion-solvent interaction. However, a link between the equation of state of the electrolyte solution and the ionic transport equations is still lacking, restricting the fluxes which can be simulated to advection plus Fickian diffusion. Here, we propose a mesoscale model based on DPD which is able to simulate the electrohydrodynamic phenomena taking place in nanofluidic systems. We will refer to this method as electrohydrodynamic-DPD, or EH-DPD.
\footnote{We refer to electrohydrodynamics in its broader meaning of phenomena involving charge transport coupled to fluid motion~\cite{saville1997electrohydrodynamics}.} The dissolved ions are represented by adding two degrees of freedom to each meso-particle, and the exchange of ions between meso-particles depends on their difference of chemical potential. Although only two charged species are considered here, the model can be easily generalized to include electrolyte solutions with more species. In Section~\ref{sec:equations} the equations for the dynamics of the system are reported, and it is shown (Section~\ref{sec:equilibrium}) that, if appropriate fluctuation-dissipation conditions are satisfied, the proposed dynamics admits an equilibrium distribution. Since the equilibrium distribution of a system is related to its thermodynamic potential, this provides a link between the terms arising in the equations of motion and the thermodynamic properties of the meso-particles, allowing a consistent definition of pressure and chemical potential. In Section~\ref{sec:model} the physical model used to derive the forces and the ionic exchange rates between meso-particles is described. The electrostatic interactions are computed considering the charge carried by each meso-particle to be distributed as a Gaussian of constant variance centered on the particle. The chemical potential used in the model is that of a perfect gas plus a contribution due to the electrostatic interactions. The ionic exchange rates between the meso-particles are modelled as dependent on the local ionic quantities, so as to obtain a conductance which depends linearly on the average ionic concentration. It is shown that this dependence requires an additional drift term in order for the system to reach the desired equilibrium distribution.
In Section~\ref{sec:validation} the model is tested against analytical results for planar electroosmotic flow, finding excellent agreement with the theoretical prediction for both the cases of overlapping and non-overlapping electric double layers. As an example of the applicability of the presented approach to the simulation of a more complex fluid, a Van der Waals equation of state is also used, simulating ion-specific effects such as excluded volume. The possibility of simulating different equations of state for the electrolyte solution is promising for the study of current and mass transport in systems in which phase transitions and ion-specific effects are relevant, such as hydrophobic nanopores and hydrophobic nanoporous materials~\cite{tinti2017intrusion,camisasca2020gas,trick2017voltage,polster2020gating}. \section{Electro-Hydrodynamics: DPD formulation} \label{sec:equations} The fluid consists of $N$ meso-particles of equal mass $m$. The state of the $i^{\mathrm{th}}$ meso-particle is described by its position $\boldsymbol{x}_i$, velocity $\boldsymbol{v}_i$, quantity of cations $n^c_i$ and quantity of anions $n^a_i$.
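To fix ideas, the per-particle state just listed can be collected into arrays; the following sketch (variable names are ours, not the paper's) counts the resulting $3+3+1+1=8$ degrees of freedom per meso-particle:

```python
import numpy as np

# Hypothetical container for the EH-DPD state: each of the N meso-particles
# carries a 3D position, a 3D velocity, and the scalar quantities of
# cations and anions, i.e. 3 + 3 + 1 + 1 = 8 degrees of freedom.
def make_state(N, M, seed=0):
    rng = np.random.default_rng(seed)
    return {
        "x": rng.random((N, 3)),      # positions x_i
        "v": np.zeros((N, 3)),        # velocities v_i
        "n_c": np.full(N, 0.1 * M),   # cations n^c_i per particle
        "n_a": np.full(N, 0.1 * M),   # anions n^a_i per particle
    }

state = make_state(N=10, M=1000)
dof = sum(a.size for a in state.values())
print(dof)  # 8 * N = 80
```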
The state vector of the entire system therefore has dimension $8N$, and the equations for its evolution are \begin{align} \label{eq:ehdpd_1} &\mathrm{d}\boldsymbol{x}_i= \boldsymbol{v}_i\mathrm{d}t\;,\\ \label{eq:ehdpd_2} &m\mathrm{d}\boldsymbol{v}_i=\boldsymbol{f}^{\boldsymbol{C}}_{i}\mathrm{d}t+ \sum\limits_{j\ne i}\left[ \gamma w^D_{ij}\boldsymbol{v}_{ji}\cdot\boldsymbol{e}_{ij}\mathrm{d}t+ \sigma w^R_{ij}\mathrm{d}W^v_{ij} \right]\boldsymbol{e}_{ij}\;,\\ \label{eq:ehdpd_3} &\mathrm{d}n^c_i= \sum\limits_{j\ne i}\left[ \gamma^c w^D_{ij}h^{c}_{ij}\mathrm{d}t+ \sigma^c w^R_{ij}\mathrm{d}W^c_{ij}\right]\;,\\ \label{eq:ehdpd_4} &\mathrm{d}n^a_i= \sum\limits_{j\ne i}\left[ \gamma^aw^D_{ij} h^{a}_{ij}\mathrm{d}t+ \sigma^aw^R_{ij}\mathrm{d}W^a_{ij}\right]\;, \end{align} where $\boldsymbol{e}_{ij}=\left(\boldsymbol{x}_i-\boldsymbol{x}_j\right)/|\boldsymbol{x}_i-\boldsymbol{x}_j|$ is the unit vector pointing along the particle-particle direction. The $\mathrm{d}W_{ij}$ are independent increments of the Wiener process, three for each pair of particles, satisfying \begin{align} \label{eq:dWv} &\mathrm{d}W^{v}_{ij}=\mathrm{d}W^{v}_{ji}\;,\\ \label{eq:dWc} &\mathrm{d}W^{c}_{ij}=-\mathrm{d}W^{c}_{ji}\;,\\ \label{eq:dWa} &\mathrm{d}W^{a}_{ij}=-\mathrm{d}W^{a}_{ji}\;. \end{align} Equations~\eqref{eq:ehdpd_1} and \eqref{eq:ehdpd_2} have the structure of the standard DPD equations~\cite{groot1997dissipative}, where $\boldsymbol{f}^{\boldsymbol{C}}_{i}$ is a conservative force which depends on the physical model chosen and will be specified in Section~\ref{sec:model}. The parameters $\gamma$ and $\sigma$ control the intensity of the respective forces, i.e.
the dissipative force \begin{equation} \boldsymbol{f}^{\boldsymbol{D}}_{ij}=\gamma w^D_{ij}\left(\boldsymbol{v}_{ji}\cdot\boldsymbol{e}_{ij}\right)\boldsymbol{e}_{ij}\;, \end{equation} and the random force \begin{equation} \boldsymbol{f}^{\boldsymbol{R}}_{ij}=\sigma w^R_{ij}\xi^{v}_{ij}\boldsymbol{e}_{ij}\;, \end{equation} where $\boldsymbol{v}_{ji}=\boldsymbol{v}_j-\boldsymbol{v}_i$ is the velocity difference between two interacting meso-particles and $\xi^{v}_{ij}$ is a white noise stochastic process such that $\mathrm{d}W^{v}_{ij}=\xi^{v}_{ij}\mathrm{d}t$. The functions $w^D$ and $w^R$ are weight functions which depend only on the interparticle distance $r_{ij}=|\boldsymbol{x}_i-\boldsymbol{x}_j|$. Such weight functions are maximum for $r_{ij}=0$ and vanish if the interparticle distance is larger than a cutoff radius $r_c$. There is no prescribed functional form for $w^D$ and $w^R$; here the Lucy function~\cite{espanol2003smoothed} is used for $w^D$, \begin{equation}\label{eq:wd} w^D_{ij}=w^D(r_{ij})= \left(1+3\frac{r_{ij}}{r_c}\right)\left(1-\frac{r_{ij}}{r_c}\right)^3\;, \end{equation} for $r_{ij}<r_c$, while the other weight function is such that $w^R_{ij}=\left(w^D_{ij}\right)^{1/2}$. Equations~\eqref{eq:ehdpd_3} and~\eqref{eq:ehdpd_4} represent the main novelty of the present paper and describe the rate at which the quantity of cations (and anions) carried by the meso-particle changes, $\dot{n}_i^c$ (and $\dot{n}_i^a$) respectively. This rate is the sum of the contributions from all the pairs, and can be divided into two terms, the dissipative rates \begin{align} \label{eq:dissipative_rate} J^D_{ij}=\gamma^cw^D_{ij}h^c_{ij}\;, \end{align} and the random rates \begin{align} J^R_{ij}=\sigma^cw^R_{ij}\xi^c_{ij}\;, \end{align} where analogous expressions hold for the anions. The same weight functions $w^D$ and $w^R$ employed in Eq.~\eqref{eq:ehdpd_2} are used.
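The weight functions can be sketched directly from these definitions; the following snippet (our code, not from the paper) implements the unnormalized Lucy function and checks that $w^R=(w^D)^{1/2}$, that $w^D$ is maximal at $r=0$, and that it vanishes at and beyond the cutoff:

```python
import numpy as np

# Sketch of the DPD weight functions used in the text: the (unnormalized)
# Lucy function w^D(r) = (1 + 3 r/r_c)(1 - r/r_c)^3 for r < r_c, zero
# beyond the cutoff, and w^R = sqrt(w^D) as required later by the
# fluctuation-dissipation relations.
def w_D(r, r_c=1.0):
    r = np.asarray(r, dtype=float)
    s = r / r_c
    return np.where(r < r_c, (1.0 + 3.0 * s) * (1.0 - s) ** 3, 0.0)

def w_R(r, r_c=1.0):
    return np.sqrt(w_D(r, r_c))

# maximal (equal to 1) at r = 0, zero at the cutoff, and w^R squared is w^D
print(w_D(0.0), w_D(1.0), np.isclose(w_R(0.5) ** 2, w_D(0.5)))
```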
The quantities $\gamma^c$, $\sigma^c$ ($\gamma^a$, $\sigma^a$) determine the magnitude of the dissipative and random rate at which two meso-particles exchange cations (anions). The random rates depend on the white noise processes $\xi^c_{ij}$ and $\xi^a_{ij}$, corresponding to the Wiener increments $\mathrm{d}W^c_{ij}=\xi^c_{ij}\mathrm{d}t$ and $\mathrm{d}W^a_{ij}=\xi^a_{ij}\mathrm{d}t$. The quantities $h^c_{ij}$ and $h^a_{ij}$ determine the dissipative rates for the cations and anions, as a function of the state of the system. Like the conservative force $\boldsymbol{f}^{\boldsymbol{C}}_{i}$, the quantities $h^c_{ij}$ and $h^a_{ij}$ are specified in Section~\ref{sec:model}, where they are shown to be related to the chemical potentials of the meso-particle for the cations and the anions, $\mu^c$ and $\mu^a$ respectively. The conditions of Eq.~\eqref{eq:dWc} and Eq.~\eqref{eq:dWa} imply that $J^R_{ij}=-J^R_{ji}$ for both ionic species. Assuming that the additional conditions \begin{align}\label{eq:h_antisymm} &h^c_{ij}=-h^c_{ji}\;,\\ &h^a_{ij}=-h^a_{ji}\; \end{align} are satisfied, we have also $J^D_{ij}=-J^D_{ji}$. If such conditions hold, an important consequence is that the total quantity of both species, and hence the total charge of the system, is strictly conserved during the dynamics. The dynamics of ionic fluxes between particles is sketched in Fig.~\ref{fig:ion_sketch}. \begin{figure} \includegraphics[width=0.5\linewidth]{figure1.pdf} \caption{\label{fig:ion_sketch} Sketch of the exchange of positive ions $n^c$ between two meso-particles. The two particles, labeled $1$ and $2$, have different chemical potentials $\mu^c$, due to different ion concentrations and electrostatic potentials. The relation between the chemical potentials of the meso-particles and the quantities $h^c_{ij}$ is given in Section~\ref{sec:equilibrium}.
Two different types of fluxes arise: a dissipative flux proportional to the difference of chemical potential, and a random flux proportional to a white noise process. The same applies to negative ions $n^a$. } \end{figure} \section{Equilibrium distribution and Fluctuation Dissipation relations (FDR)} \label{sec:equilibrium} The system of Equations~(\ref{eq:ehdpd_1}-\ref{eq:ehdpd_4}) can be written in the compact form of a Langevin equation \begin{equation} \label{eq:langevin_ehdpd} \boldsymbol{\dot{y}}=\boldsymbol{u}(\boldsymbol{y})+\boldsymbol{G}(\boldsymbol{y})\boldsymbol{\xi}\;, \end{equation} where \begin{equation} \label{y-def} \boldsymbol{y}= \begin{pmatrix} \boldsymbol{x}_1\\ \ldots \\ \boldsymbol{x}_N\\ \boldsymbol{v}_1\\ \ldots \\ \boldsymbol{v}_N\\ n^c_1\\ \ldots \\ n^c_N\\ n^a_1\\ \ldots \\ n^a_N\\ \end{pmatrix} \end{equation} is the state vector, which has dimension $8N$. The standard hydrodynamics setting, with no ion transport, is recovered by reducing the state vector to its $x$- and $v$-components. In the following, vectors such as $\boldsymbol{u}$ are identified by lowercase bold letters, while matrices such as $\boldsymbol{G}$ are identified by uppercase bold letters. The drift term $\boldsymbol{u}=\left(\boldsymbol{u}^x,\boldsymbol{u}^v,\boldsymbol{u}^c,\boldsymbol{u}^a\right)^T$ includes all the deterministic terms in Eqs.~(\ref{eq:ehdpd_1}-\ref{eq:ehdpd_4}), i.e.
\begin{equation} \label{u-def} \boldsymbol{u}(\boldsymbol{y})=\begin{pmatrix} \boldsymbol{v}_1\\ \ldots \\ \boldsymbol{v}_N\\ m^{-1}\left[\boldsymbol{f}^C_{1}+\sum\limits_{j\ne 1} \gamma w^D_{1j}\left(\boldsymbol{v}_{j1}\cdot\boldsymbol{e}_{1j}\right)\boldsymbol{e}_{1j}\right]\\ \ldots\\ m^{-1}\left[\boldsymbol{f}^C_{N}+\sum\limits_{j\ne N} \gamma w^D_{Nj}\left(\boldsymbol{v}_{jN}\cdot\boldsymbol{e}_{Nj}\right)\boldsymbol{e}_{Nj}\right]\\ \sum\limits_{j\ne 1} \gamma^cw^D_{1j}h_{1j}^c\\ \ldots \\ \sum\limits_{j\ne N} \gamma^cw^D_{Nj}h_{Nj}^c\\ \sum\limits_{j\ne 1} \gamma^aw^D_{1j}h_{1j}^a \\ \ldots \\ \sum\limits_{j\ne N} \gamma^aw^D_{Nj}h_{Nj}^a \end{pmatrix}\;.\end{equation} The stochastic vector \begin{equation} \label{w-def} \boldsymbol{\xi}= \begin{pmatrix} \xi^v_{12}\\ \ldots \\ \xi^v_{(N-1)N}\\ \xi^c_{12}\\ \ldots \\ \xi^c_{(N-1)N}\\ \xi^a_{12}\\ \ldots \\ \xi^a_{(N-1)N}\\ \end{pmatrix} \end{equation} is composed of independent white noise processes and has dimension $3N(N-1)/2$, i.e. three times the total number of particle pairs. The matrix $\boldsymbol{G}$ therefore has dimension $8N\times 3N(N-1)/2$, and is composed of the following blocks \begin{equation} \label{g-def} \boldsymbol{G}(\boldsymbol{y})=\begin{pmatrix} \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{G^v} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{G^c} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{G^a} \end{pmatrix}\;.\end{equation} The matrix $\boldsymbol{G^v}$ has dimension $3N\times N(N-1)/2$ and can be written in compact form by considering it to be composed of $N\times N(N-1)/2$ vectors of dimension 3, $\boldsymbol{g}^v_{i\alpha}$, each one containing the stochastic force acting on particle $i$ due to the process $\xi^v_\alpha$, where $\alpha$ is an index which spans all the particle pairs, i.e. $\alpha=\alpha(p,q)$ with $p\in[1,N-1]$ and $q\in[p+1,N]$.
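The pair bookkeeping behind $\boldsymbol{G}$ can be illustrated with a small (0-based) sketch of the index map $\alpha(p,q)$ and of the sign factor $f_{i\alpha}$ defined below; the names are ours, not the paper's:

```python
# Illustrative (0-based) mapping between the flat pair index alpha and the
# particle pair (p, q), p < q, together with the sign factor f_{i,alpha}
# used to assemble the noise matrix G.
def pairs(N):
    return [(p, q) for p in range(N) for q in range(p + 1, N)]

def f(i, pair):
    p, q = pair
    return 1 if i == p else (-1 if i == q else 0)

N = 4
P = pairs(N)
print(len(P))  # N(N-1)/2 = 6

# f_{i,alpha} f_{j,alpha} equals -1 only on the pair alpha = (i, j), i != j
i, j = 1, 3
prods = [f(i, a) * f(j, a) for a in P]
print(prods.count(-1), prods.count(1))  # 1 0
```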
Hence, $r_\alpha=r_{pq}$, $w^R_\alpha=w^R_{pq}$, $\boldsymbol{e}_\alpha=\boldsymbol{e}_{pq}$, and, using this compact notation, $\boldsymbol{g}^v_{i\alpha}=m^{-1}f_{i\alpha}\sigma w^R_\alpha\boldsymbol{e}_\alpha$, where \begin{equation} f_{i\alpha}=\begin{cases} \quad 0 &\quad\mathrm{if}\,i\ne p\,\mathrm{and}\,i\ne q\;,\\ \quad 1 &\quad\mathrm{if}\,i=p\;,\\ \quad -1 &\quad\mathrm{if}\,i=q\;. \end{cases}\end{equation} The matrices $\boldsymbol{G^c}$ and $\boldsymbol{G^a}$ have dimension $N\times N(N-1)/2$ and their elements are, respectively, $g^c_{i\alpha}=f_{i\alpha}\sigma^c w^R_\alpha$ and $g^a_{i\alpha}=f_{i\alpha}\sigma^a w^R_\alpha$. The trajectories obtained from the integration of the Langevin Equation~\eqref{eq:langevin_ehdpd} can be equivalently described as the evolution of a probability distribution for the state variables $\boldsymbol{y}$ obeying a Fokker-Planck equation~\cite{lau2007state,gubbiotti2019confinement}. With the definitions~\eqref{y-def}--\eqref{g-def}, the Fokker-Planck equation associated with Eq.~\eqref{eq:langevin_ehdpd} reads \begin{equation} \label{eq:fokker-planck_ehdpd} \frac{\partial P(\boldsymbol{y},t)}{\partial t}=\boldsymbol{\nabla}_y\cdot\left[\left(-\boldsymbol{u}+\frac{1}{2}\boldsymbol{\nabla}_y \cdot\boldsymbol{GG}^T+\frac{1}{2}\boldsymbol{GG}^T \cdot \boldsymbol{\nabla}_y \right)P(\boldsymbol{y})\right]\;, \end{equation} where $\boldsymbol{\nabla}_y = \left(\boldsymbol{\nabla}_x, \boldsymbol{\nabla}_v, \boldsymbol{\nabla}_{c}, \boldsymbol{\nabla}_{a} \right)^T$ is the $8N$-dimensional gradient built with the derivatives with respect to the components of $\boldsymbol{y}$, Eq.~\eqref{y-def}.
It is convenient to introduce the matrix $\boldsymbol{D}=\boldsymbol{GG}^T/2$, which has dimension $8N\times 8N$ and can be decomposed into blocks \begin{equation} \boldsymbol{D}=\begin{pmatrix} \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{D^v} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{D^c} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{D^a} \end{pmatrix}\;,\end{equation} where $\boldsymbol{D}^{\boldsymbol v}$ is a $3N\times 3N$ submatrix, and both $\boldsymbol{D^c}$ and $\boldsymbol{D^a}$ are $N\times N$ submatrices. The matrix $\boldsymbol{D}^{\boldsymbol v}$ can be further decomposed into $N\times N$ blocks of $3\times3$ submatrices $\boldsymbol{D}_{ij}^v$. Their expressions are as follows \begin{equation}\label{eq:d_blocks}\begin{aligned} \boldsymbol{D}_{ij}^v=& \frac{1}{2} \sum\limits_{\alpha } \boldsymbol{g}^v_{i\alpha}\otimes\boldsymbol{g}^v_{j\alpha}=\frac{1}{2m^2}\sum\limits_{\alpha } f_{i\alpha}f_{j\alpha}\left(\sigma w^R_\alpha\right)^2\boldsymbol{e}_\alpha\otimes\boldsymbol{e}_\alpha\;,\\ D^c_{ij}=&\frac{1}{2} \sum\limits_{\alpha } g^c_{i\alpha}g^c_{j\alpha}=\frac{1}{2} \sum\limits_{\alpha } f_{i\alpha}f_{j\alpha}\left(\sigma^c w^R_\alpha\right)^2\;,\\ D^a_{ij}=&\frac{1}{2} \sum\limits_{\alpha } g^a_{i\alpha}g^a_{j\alpha}=\frac{1}{2} \sum\limits_{\alpha } f_{i\alpha}f_{j\alpha}\left(\sigma^a w^R_\alpha\right)^2\;, \end{aligned}\end{equation} where the summation over the particle-pair index $\alpha$ is explicitly indicated. These expressions can be simplified by accounting for the properties of the product $f_{i\alpha}f_{j\alpha}$, i.e. $f_{i\alpha}f_{j\alpha} = 0$ except in two special cases: i) $f_{i\alpha}f_{j\alpha} = 1$ for $i = j$ and $\alpha = \alpha(i,q)$ or $\alpha = \alpha(p,i)$ for any $p$ and $q$; ii) $f_{i\alpha}f_{j\alpha} = -1$ for $i \ne j$ and $\alpha = \alpha(i,j)$ or $\alpha = \alpha(j,i)$.
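As a toy numerical check (our construction, with arbitrary parameters), one can assemble $\boldsymbol{G^c}$ for a random configuration and verify that $\boldsymbol{D^c}=\boldsymbol{G^c}(\boldsymbol{G^c})^T/2$ has exactly this pair structure: a positive diagonal collecting all pairs containing $i$, and $-\tfrac{1}{2}(\sigma^c w^R_{ij})^2$ off the diagonal:

```python
import numpy as np

# Build G^c from the pair noise amplitudes sigma^c w^R and compare
# D^c = (1/2) G^c (G^c)^T with the closed-form block structure.
rng = np.random.default_rng(0)
N, r_c, sigma_c = 5, 1.0, 0.7
x = rng.random((N, 3))

def w_R(r):
    s = np.minimum(r / r_c, 1.0)          # zero beyond the cutoff
    return np.sqrt((1.0 + 3.0 * s) * (1.0 - s) ** 3)

r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
wR = w_R(r)
np.fill_diagonal(wR, 0.0)

pairs = [(p, q) for p in range(N) for q in range(p + 1, N)]
G = np.zeros((N, len(pairs)))
for a, (p, q) in enumerate(pairs):
    G[p, a] = sigma_c * wR[p, q]          # f_{p,alpha} = +1
    G[q, a] = -sigma_c * wR[p, q]         # f_{q,alpha} = -1

D_from_G = 0.5 * G @ G.T
S2 = (sigma_c * wR) ** 2
D_closed = -0.5 * S2                      # off-diagonal part
np.fill_diagonal(D_closed, 0.5 * S2.sum(axis=1))  # diagonal part
print(np.allclose(D_from_G, D_closed))  # True
```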
With this in mind, Eq.~\eqref{eq:d_blocks} is rewritten as \begin{align} \label{eq:d_blocks_v} \boldsymbol{D}^v_{ij}=& \frac{1}{2m^2}\delta_{ij}\sum\limits_{k\ne i}\left(\sigma w^R_{ik}\right)^2\boldsymbol{e}_{ik}\otimes\boldsymbol{e}_{ik}+\frac{1}{2m^2}\left(\delta_{ij}-1\right)\left(\sigma w^R_{ij}\right)^2\boldsymbol{e}_{ij}\otimes\boldsymbol{e}_{ij}\;,\\ \label{eq:d_blocks_c} D^c_{ij}=&\frac{1}{2}\delta_{ij}\sum\limits_{k\ne i}\left(\sigma^c w^R_{ik}\right)^2+\frac{1}{2}\left(\delta_{ij}-1\right)\left(\sigma^c w^R_{ij}\right)^2\;,\\ \label{eq:d_blocks_a} D^a_{ij}=&\frac{1}{2}\delta_{ij}\sum\limits_{k\ne i}\left(\sigma^a w^R_{ik}\right)^2+\frac{1}{2}\left(\delta_{ij}-1\right)\left(\sigma^a w^R_{ij}\right)^2\; . \end{align} In the following it is assumed that the dynamics generated by Eq.~\eqref{eq:langevin_ehdpd} admits an equilibrium distribution \begin{equation} \label{eq:boltzmann_eq} P_{eq}(\boldsymbol{y}) = C \exp\left[S(\boldsymbol{y})/k_B\right]\; , \end{equation} where $S(\boldsymbol{y}) = k_B (\ln \left[ P_{eq}(\boldsymbol{y}) \right] - \ln C)$ is an appropriate thermodynamic potential depending on the coarse-grained variables $\boldsymbol{y}$ and $k_B$ is the Boltzmann constant. In the present context, dealing with an isolated system, $S$ can be understood as the (coarse-grained) entropy of the system. The equilibrium distribution must be the solution of the Fokker-Planck Eq.~\eqref{eq:fokker-planck_ehdpd}, i.e. \begin{equation} \label{eq:fokker-planck_equilibrium} 0=\boldsymbol{\nabla}_y \cdot\left[\left(-\boldsymbol{u}+\boldsymbol{\nabla}_y \cdot\boldsymbol{D}+\frac{1}{k_B}\boldsymbol{D} \cdot \boldsymbol{\nabla}_y S\right)P_{eq}(\boldsymbol{y})\right]\; .
\end{equation} For the following calculations it is convenient to split the drift term $\boldsymbol{u}$ into two parts, the conservative drift \begin{equation} \label{eq:conservative_drift} \boldsymbol{u}^{\boldsymbol{C}}=\begin{pmatrix} \boldsymbol{v}_1\\ \ldots \\ \boldsymbol{v}_N\\ m^{-1}\boldsymbol{f}^{\boldsymbol{C}}_{1}\\ \ldots \\ m^{-1}\boldsymbol{f}^{\boldsymbol{C}}_{N}\\ \boldsymbol{0}\\ \boldsymbol{0} \end{pmatrix}\;,\end{equation} and the dissipative drift, \begin{equation} \label{eq:dissipative_drift} \boldsymbol{u}^{\boldsymbol{D}}=\begin{pmatrix} \boldsymbol{0}\\ m^{-1}\sum\limits_{j\ne 1} \gamma w^D_{1j}\left(\boldsymbol{v}_{j1}\cdot\boldsymbol{e}_{1j}\right)\boldsymbol{e}_{1j}\\ \ldots\\ m^{-1}\sum\limits_{j\ne N} \gamma w^D_{Nj}\left(\boldsymbol{v}_{jN}\cdot\boldsymbol{e}_{Nj}\right)\boldsymbol{e}_{Nj}\\ \sum\limits_{j\ne 1} \gamma^cw^D_{1j}h_{1j}^c\\ \ldots \\ \sum\limits_{j\ne N} \gamma^cw^D_{Nj}h_{Nj}^c\\ \sum\limits_{j\ne 1} \gamma^aw^D_{1j}h_{1j}^a \\ \ldots \\ \sum\limits_{j\ne N} \gamma^aw^D_{Nj}h_{Nj}^a \end{pmatrix}\;,\end{equation} such that $\boldsymbol{u}=\boldsymbol{u^C}+\boldsymbol{u^D}$. \subsection{FDR for hydrodynamics} The Fluctuation Dissipation Relation (FDR) for pure hydrodynamics is discussed here, in order to recover within the present framework the classical DPD expression. The requirement that the conservative interactions alone should leave the equilibrium distribution unchanged implies that \begin{equation} \label{eq:Conservative} \boldsymbol{\nabla}_y\cdot\left(\boldsymbol{u^C}P_{eq}\right)= \boldsymbol{u^C}\cdot\boldsymbol{\nabla}_yP_{eq}= \left[\boldsymbol{u^C}\cdot\boldsymbol{\nabla}_y\left(\frac{S}{k_B}\right)\right]P_{eq}=0\;, \end{equation} where $\boldsymbol{\nabla}_y \cdot \boldsymbol{u^C} = 0$ and $\boldsymbol{\nabla}_y P_{eq} = P_{eq} \boldsymbol{\nabla}_y S/k_B$ have been used.
This condition is satisfied if \begin{align} \label{eq:cond_S_1}\frac{\partial S}{\partial \boldsymbol{x}_i}\propto\boldsymbol{f}^{\boldsymbol C}_i\;,\\ \label{eq:cond_S_2}\frac{\partial S}{\partial \boldsymbol{v}_i}\propto - m\boldsymbol{v}_i\;. \end{align} As will be clear from the following, the proportionality constant is assumed to be $1/T$, with $T$ the temperature, taken to be uniform throughout the system. Based on Eq.~\eqref{eq:Conservative} and Eq.~\eqref{eq:fokker-planck_equilibrium}, the expression \begin{equation} \label{eq:drift_expression} \boldsymbol{u^D}=\boldsymbol{\nabla}_y\cdot\boldsymbol{D}+\frac{1}{k_B}\boldsymbol{D} \cdot \boldsymbol{\nabla}_yS\; \end{equation} gives a condition for the Fokker-Planck equation~\eqref{eq:fokker-planck_ehdpd} to have the equilibrium solution of Eq.~\eqref{eq:boltzmann_eq} with $S$ satisfying Eq.~\eqref{eq:cond_S_1} and Eq.~\eqref{eq:cond_S_2}. The Fokker-Planck equation can then be rewritten as follows \begin{equation}\label{eq:fokkerplanck} \dot{P}=-\boldsymbol{\nabla}\cdot\left[\left(\boldsymbol{u}^C+\frac{1}{k_B}\boldsymbol{D}\cdot\boldsymbol{\nabla}S-\boldsymbol{D}\cdot\boldsymbol{\nabla}\right)P\right]\;. \end{equation} Using Eq.~\eqref{eq:dissipative_drift}, as well as the definition of the dissipative drift, Eq.~\eqref{eq:drift_expression}, gives for the velocity component of meso-particle $i$, \begin{equation} \label{eq:drift_velocity} m^{-1}\sum\limits_{j\ne i} \gamma w^D_{ij}\left(\boldsymbol{v}_{ji}\cdot\boldsymbol{e}_{ij}\right)\boldsymbol{e}_{ij}= \sum\limits_{j\ne i} \frac{1}{2m^2}\left(\sigma w^R_{ij}\right)^2\boldsymbol{e}_{ij}\otimes\boldsymbol{e}_{ij}\frac{1}{k_B}\left(\frac{\partial S}{\partial\boldsymbol{v}_i}-\frac{\partial S}{\partial \boldsymbol{v}_j}\right)\;, \end{equation} where we used the fact that the matrix $\boldsymbol{D}$ does not depend on the velocity, and hence $\boldsymbol{\nabla}_v\cdot\boldsymbol{D}=0$.
Considering Eq.~\eqref{eq:cond_S_2}, Eq.~\eqref{eq:drift_velocity} is identically satisfied if the following two conditions are met \begin{equation}\label{eq:fluctuation-dissipation_velocity}\begin{aligned} \gamma=\frac{\beta\sigma^2}{2}\;,\\ w^D_{ij}=\left(w^R_{ij}\right)^2 \;,\end{aligned}\end{equation} with $\beta^{-1}= k_B T$. These are the same FDRs found by Espa\~nol and Warren~\cite{espanol1995statistical} for hydrodynamics. \subsection{FDR for Ionic transport} The Fluctuation Dissipation Relations (FDRs) discussed in the previous section for pure hydrodynamics are here extended to include ionic transport. Clearly, all the results previously established still hold in the presence of transported charges. It is instructive to introduce the problem by considering the simplest case of constant noise intensities $\sigma^c$ and $\sigma^a$, which imply $\boldsymbol{\nabla}_c\cdot\boldsymbol{D}^c=0$ and $\boldsymbol{\nabla}_a\cdot\boldsymbol{D}^a=0$. Using Eqs.~\eqref{eq:dissipative_drift} and~\eqref{eq:drift_expression}, and using the expressions for $\boldsymbol{D}^c$ and $\boldsymbol{D}^a$ from Eq.~\eqref{eq:d_blocks_c} and Eq.~\eqref{eq:d_blocks_a}, we obtain \begin{equation}\label{eq:drift_charge_1}\begin{aligned} \sum\limits_{j\ne i} \gamma^cw^D_{ij}h_{ij}^c= \sum\limits_{j\ne i}\frac{1}{2}\left(\sigma^c w^R_{ij}\right)^2\frac{1}{k_B}\left(\frac{\partial S}{\partial n^c_i}-\frac{\partial S}{\partial n^c_j}\right)\;,\\ \sum\limits_{j\ne i} \gamma^aw^D_{ij}h_{ij}^a= \sum\limits_{j\ne i}\frac{1}{2}\left(\sigma^a w^R_{ij}\right)^2\frac{1}{k_B}\left(\frac{\partial S}{\partial n^a_i}-\frac{\partial S}{\partial n^a_j}\right) \;.\end{aligned}\end{equation} Exploiting the second condition in Eq.~\eqref{eq:fluctuation-dissipation_velocity} for the weight functions, we obtain \begin{equation}\label{eq:fluctuation-dissipation_charge}\begin{aligned} &\gamma^c=\frac{\beta}{2} \left(\sigma^c\right)^2\;,\\ &\gamma^a=\frac{\beta}{2} \left(\sigma^a\right)^2 \;\end{aligned}\end{equation} and \begin{equation}\begin{aligned}
h^c_{ij}=&T\left(\frac{\partial S}{\partial n^c_i}-\frac{\partial S}{\partial n^c_j}\right)=\mu^c_j-\mu^c_i\;,\\ h^a_{ij}=&T\left(\frac{\partial S}{\partial n^a_i}-\frac{\partial S}{\partial n^a_j}\right)=\mu^a_j-\mu^a_i \;,\end{aligned}\end{equation} where the chemical potentials of the meso-particle are defined as \begin{equation}\begin{aligned} \label{eq:chemical_S} \mu^c_i=&-T\frac{\partial S}{\partial n^c_i}\;,\\ \mu^a_i=&-T\frac{\partial S}{\partial n^a_i}\;. \end{aligned}\end{equation} Removing the assumption of constant noise intensity, $\sigma^c$ and $\sigma^a$ can now depend on the quantities of cations and anions carried by the interacting particles, i.e. $\sigma^c=\sigma^c(n^c_i,n^c_j)=\sigma^c_{ij}$ and $\sigma^a=\sigma^a(n^a_i,n^a_j)=\sigma^a_{ij}$, with $\sigma^c_{ij}=\sigma^c_{ji}$ and $\sigma^a_{ij}=\sigma^a_{ji}$. Hence $\boldsymbol{\nabla}_c\cdot\boldsymbol{D}^c$ and $\boldsymbol{\nabla}_a\cdot\boldsymbol{D}^a$ do not vanish, in general, and must be considered in the drift term, Eq.~\eqref{eq:drift_expression}.
From the expressions for $\boldsymbol{D}^c$ and $\boldsymbol{D}^a$, Eq.~\eqref{eq:d_blocks_c} and Eq.~\eqref{eq:d_blocks_a}, the dissipative drifts now read \begin{equation}\label{eq:drift_charge_1bis}\begin{aligned} \sum\limits_{j\ne i} \gamma^cw^D_{ij}h_{ij}^c= \frac{1}{2}\sum\limits_{j\ne i}\left(\sigma^c_{ij} w^R_{ij}\right)^2\left[\frac{1}{\left(\sigma^c_{ij}\right)^2}\left(\frac{\partial \left(\sigma^c_{ij}\right)^2}{\partial n^c_i}-\frac{\partial \left(\sigma^c_{ij}\right)^2}{\partial n^c_j}\right)+ \frac{1}{k_B}\left(\frac{\partial S}{\partial n^c_i}-\frac{\partial S}{\partial n^c_j}\right)\right]\;,\\ \sum\limits_{j\ne i} \gamma^aw^D_{ij}h_{ij}^a= \frac{1}{2}\sum\limits_{j\ne i}\left(\sigma^a_{ij} w^R_{ij}\right)^2\left[\frac{1}{\left(\sigma^a_{ij}\right)^2}\left(\frac{\partial \left(\sigma^a_{ij}\right)^2}{\partial n^a_i}-\frac{\partial \left(\sigma^a_{ij}\right)^2}{\partial n^a_j}\right)+ \frac{1}{k_B}\left(\frac{\partial S}{\partial n^a_i}-\frac{\partial S}{\partial n^a_j}\right)\right]\;,\end{aligned}\end{equation} which, using Eq.~\eqref{eq:fluctuation-dissipation_velocity} for the weight functions together with Eq.~\eqref{eq:fluctuation-dissipation_charge} and the definitions \eqref{eq:chemical_S}, gives the explicit expression for $h^c_{ij}$ and $h^a_{ij}$ in the general case \begin{equation}\label{eq:h}\begin{aligned} h^c_{ij}=\frac{1}{\beta\gamma^c_{ij}}\left(\frac{\partial \gamma^c_{ij}}{\partial n^c_i}-\frac{\partial\gamma^c_{ij}}{\partial n^c_j}\right)+\mu^c_{ji}\;,\\ h^a_{ij}=\frac{1}{\beta\gamma^a_{ij}}\left(\frac{\partial \gamma^a_{ij}}{\partial n^a_i}-\frac{\partial\gamma^a_{ij}}{\partial n^a_j}\right)+\mu^a_{ji}\;, \end{aligned}\end{equation} where $\mu^c_{ji}=\mu^c_j-\mu^c_i$ and $\mu^a_{ji}=\mu^a_j-\mu^a_i$. It is worth noting that the antisymmetry conditions~\eqref{eq:h_antisymm} are satisfied, implying conservation of the total quantity of each ionic species.
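The structure of the ionic exchange can be illustrated with a minimal two-particle sketch (our simplification: unit weight functions, constant $\sigma^c$, a single uncharged species, no electrostatics), showing that the antisymmetric dissipative-plus-random update conserves the total ion content:

```python
import numpy as np

# Euler-Maruyama sketch of dn^c = gamma^c h^c dt + sigma^c dW between two
# particles, with h^c_12 = mu^c_2 - mu^c_1 and the FDR gamma^c = beta sigma^2/2.
# The antisymmetric update conserves n^c_1 + n^c_2 up to roundoff.
rng = np.random.default_rng(1)
beta, M = 1.0, 1000.0
sigma_c = 0.05
gamma_c = 0.5 * beta * sigma_c**2

def mu(n):  # perfect-gas chemical potential of a single ionic species
    return np.log(n / (M - n)) / beta

n = np.array([100.0, 300.0])
total0 = n.sum()
dt = 0.1
for _ in range(1000):
    h = mu(n[1]) - mu(n[0])                  # drives ions toward equal mu
    dW = np.sqrt(dt) * rng.standard_normal()
    dn = gamma_c * h * dt + sigma_c * dW     # flux received by particle 1
    n += np.array([dn, -dn])                 # antisymmetric exchange

print(np.isclose(n.sum(), total0))  # True
```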
As is typical when the noise intensity depends on the state variables, additional drift terms appear in the Langevin equation; see also \cite{lau2007state,gubbiotti2019confinement}. \section{Physical model} \label{sec:model} In the previous section it was shown that if the conditions~\eqref{eq:fluctuation-dissipation_charge}, \eqref{eq:fluctuation-dissipation_velocity}, \eqref{eq:cond_S_1} and \eqref{eq:cond_S_2} are met, the system~(\ref{eq:ehdpd_1}-\ref{eq:ehdpd_4}) admits a stationary equilibrium solution given by~\eqref{eq:boltzmann_eq}. The conservative force $\boldsymbol{f}^C_i$, the particle velocity $\boldsymbol{v}_i$ and the chemical potentials $\mu^c_i$ and $\mu^a_i$ are related to the entropy of the system $S$ through Eqs.~\eqref{eq:cond_S_1}, \eqref{eq:cond_S_2} and \eqref{eq:chemical_S}. The total energy of the system, $E_{TOT}$, may be expressed as \begin{equation}\label{eq:toteng} E_{TOT}=U_E+\sum\limits_{i=1}^N\left(\frac{m}{2}\boldsymbol{v}_i^2+U_i\right)\;, \end{equation} where $U_E$ is the electrostatic energy of the system. The sum includes the portion of kinetic energy and internal energy associated with particle $i$, $m\boldsymbol{v}^2_i/2$ and $U_i$, respectively. Under the assumption of local equilibrium, the particle entropy can be expressed in terms of the Helmholtz free energy $A_i$ and the internal energy, \begin{equation} S_i=\frac{1}{T}\left(U_i-A_i\right)\;. \end{equation} Hence, the total entropy reads \begin{equation}\label{eq:entropy} S=\sum\limits_{i=1}^NS_i=\sum\limits_{i=1}^N\frac{1}{T}\left(U_i-A_i\right)=\frac{1}{T}\left[E_{TOT}-U_{E}-\sum\limits_{i=1}^N \left(A_i+\frac{m}{2}\boldsymbol{v}^2_i\right)\right]\; . \end{equation} For an isolated system $E_{TOT}=\mathrm{const}$, and the entropy $S$ is fully specified once the free energy $A_i$ and the electrostatic energy $U_E$ are given in terms of the coarse-grained variables. In general terms, the Helmholtz free energy density (per unit mass) is a function of specific volume, temperature and number densities.
As a consequence, the free energy of the meso-particle depends on the particle volume $V_i$, the (uniform) temperature, and the composition given in terms of the number of atoms (in the sense of indivisible particles) belonging to the meso-particle. In the following the composition is specified in terms of $n^s_i$, $n^c_i$, $n^a_i$, the numbers of solvent, cationic and anionic atoms, respectively. Hence, $A_i = A_i(V_i,T,n_i^s, n_i^c, n_i^a)$. Before specifying the free energy in detail, it is worth defining first the meso-particle volume, \begin{equation}\label{eq:density} V_i^{-1}=\sum\limits_{j=1}^Nw(r_{ij})\;, \end{equation} where $w(r)$ is a differentiable, compactly supported, positive function with a (single) maximum at $r=0$ and integral normalized to $1$, that vanishes identically for $r$ larger than a cutoff $r_c$. Hereafter, $w(r)$ is specified as the Lucy function commonly used in the context of Smoothed Dissipative Particle Dynamics~\cite{espanol2003smoothed}, \begin{equation} w\left({r}/{r_c}\right)=\begin{cases} \frac{\displaystyle 105}{\displaystyle 16 \pi \, r_c^3}\left(1+3r/r_c\right)\left(1-r/r_c\right)^3\quad\mathrm{if}\quad r/r_c<1\;, \\0\quad\mathrm{if}\quad r/r_c>1\; . \end{cases} \end{equation} \subsection{Free energy} As partially anticipated, the meso-particle is considered to consist of $M$ atoms of three different kinds and equal mass, namely $n^c_i$ cations, $n^a_i$ anions and $n^s_i$ solvent atoms. The numbers of cations, anions and solvent atoms may change during the dynamics under the constraint of constant total mass, \begin{equation}\label{eq:constant} n^c_i+n^a_i+n^s_i=M=\mathrm{const}\;.
\end{equation} A simple model for the particle free energy $A_i$ follows by considering a system comprising three non-interacting species (perfect gas model)~\cite{statistical_mechanics} \begin{equation} \label{eq:free-energy} A_i= \frac{n^c_i}{\beta}\left[\log\left(\frac{n^c_i\lambda^3}{V_i}\right)-1\right] +\frac{n^a_i}{\beta}\left[\log\left(\frac{n^a_i\lambda^3}{V_i}\right)-1\right]+\frac{n^s_i}{\beta}\left[\log\left(\frac{n^s_i\lambda^3}{V_i}\right)-1\right]\;, \end{equation} where $\lambda$, depending on the temperature and the atomic masses, is the de Broglie thermal wavelength. In Eq.~\eqref{eq:free-energy} the dependence on $n^s_i$ may be eliminated in favor of the total number of atoms forming the meso-particle, Eq.~\eqref{eq:constant}, while the temperature, and hence $\lambda$, is assumed to be the same in all the meso-particles. Using the constraint of Eq.~\eqref{eq:constant} to eliminate the number of solvent atoms $n^s_i$, Eq.~\eqref{eq:free-energy} reads \begin{equation}\label{eq:free_energy} A_i(V_i,n^c_i,n^a_i)=\frac{n^c_i}{\beta}\left[\log\left(\frac{n^c_i}{M-n^c_i-n^a_i}\right)-1\right]+\frac{n^a_i}{\beta}\left[\log\left(\frac{n^a_i}{M-n^c_i-n^a_i}\right)-1\right]+\frac{M}{\beta}\log\left(\frac{M-n^a_i-n^c_i}{V_i}\right)\;, \end{equation} where inessential constant terms have been omitted. Notice that the particle pressure, related to the derivative of the free energy with respect to volume, turns out to depend on the total number of atoms $M$, see \S~\ref{sec:conservative} below. \subsection{Electrostatics of EH-DPD particles} The expression~\eqref{eq:entropy} for the entropy requires an explicit form for the electrostatic energy. The coarse-grained variables $n^c_i$ and $n^a_i$ provide the number of cations and anions carried by the meso-particle. Its charge is then \begin{equation}\label{eq:charge} q_i=q_cn^c_i-q_an^a_i\;, \end{equation} with $q_c$ and $q_a$ the (absolute values of the) charges of a single cation and anion, respectively.
The charge distribution associated to each meso-particle is given as a Gaussian function centered in $\boldsymbol{x}_i$ with constant variance $s^2$, i.e. \begin{equation}\label{eq:gaussian} \rho_i(\boldsymbol{r})=\rho(\boldsymbol{r},\boldsymbol{x}_i,q_i)=\frac{q_i}{\left(2\pi s^2\right)^{3/2}}\exp{\frac{-|\boldsymbol{r}-\boldsymbol{x}_i|^2}{2s^2}}\; . \end{equation} The energy ascribed to the interaction between meso-particles, $i\ne j$, is then~\cite{kiss2014efficient} \begin{equation} U^E_{ij} = U^E_{ji} = \frac{1}{2} \int\int\frac{\rho_i(\boldsymbol{r})\rho_j(\boldsymbol{r'})}{|\boldsymbol{r}-\boldsymbol{r'}|}\mathrm{d}\boldsymbol{r}\mathrm{d}\boldsymbol{r'} =\frac{q_iq_j}{2r_{ij}}\erf\left(\frac{r_{ij}}{2s}\right)\;, \end{equation} where the interaction energy of the couple is $U^E_{ij} + U^E_{ji}$. Introducing the self-energy ($i=j$), \begin{equation} U^E_{ii}= \frac{1}{2}\int\int\frac{\rho_i(\boldsymbol{r})\rho_i(\boldsymbol{r'})}{|\boldsymbol{r}-\boldsymbol{r'}|}\mathrm{d}\boldsymbol{r}\mathrm{d}\boldsymbol{r'} =\frac{q^2_i}{2s\sqrt{\pi}}\;. \end{equation} The self-energy does not contribute to the electrostatic force, since it is independent of the relative positions of the meso-particles. However, it does contribute to the total electrostatic potential of the meso-particle, which can be defined as \begin{equation}\label{eq:electrostatic_potential} \Phi_i=\frac{\partial U^E}{\partial q_i}=\frac{q_i}{s\sqrt\pi}+\sum\limits_{j\ne i}^N \frac{q_j}{r_{ij}}\erf\left(\frac{r_{ij}}{2s}\right)\;. \end{equation} Since $\lim\limits_{r\to 0}\erf(r/(2s))/r=1/(s\sqrt\pi)$, the above expression can be rewritten in compact form as \begin{equation}\label{eq:particle_potential} \Phi_i=\sum\limits_{j=1}^N \frac{q_j}{r_{ij}}\erf\left(\frac{r_{ij}}{2s}\right)\;, \end{equation} where now the summation also includes the term $j=i$ ($r_{ii}=0$).
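A minimal sketch of these electrostatic quantities follows; the $O(N^2)$ double loop, the function names and the value $s=0.5$ are illustrative assumptions (the actual simulations use Ewald summation). It exercises the $r\to 0$ limit of the screened kernel and checks that a single particle carries only its self-energy $q^2/(2s\sqrt\pi)$.

```python
import numpy as np
from math import erf, pi, sqrt

def gaussian_pair_potential(r, s=0.5):
    """erf(r/(2s))/r, the interaction kernel of two unit Gaussian charges;
    the r -> 0 limit, 1/(s sqrt(pi)), reproduces the self-energy term."""
    if r == 0.0:
        return 1.0 / (s * sqrt(pi))
    return erf(r / (2.0 * s)) / r

def potentials(positions, q, s=0.5):
    """Phi_i = sum_j q_j erf(r_ij/(2s))/r_ij, with the j = i term included."""
    N = len(q)
    phi = np.zeros(N)
    for i in range(N):
        for j in range(N):
            r = float(np.linalg.norm(positions[i] - positions[j]))
            phi[i] += q[j] * gaussian_pair_potential(r, s)
    return phi

def electrostatic_energy(positions, q, s=0.5):
    """U_E = (1/2) sum_i q_i Phi_i, self-energy included."""
    return 0.5 * float(np.dot(q, potentials(positions, q, s)))

# a single particle carries only its self-energy q^2/(2 s sqrt(pi))
U_self = electrostatic_energy(np.zeros((1, 3)), np.array([2.0]), s=0.5)
```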
Finally, the total electrostatic energy of the system can be expressed as \begin{equation}\label{eq:electrostatic_energy} U_E=\frac{1}{2}\sum\limits_{i=1}^Nq_i\Phi_i \; . \end{equation} \subsection{Chemical potential and conservative force} \label{sec:conservative} Specifying the electrostatic energy completes the expression of the entropy, Eqs.~\eqref{eq:entropy}, providing the chemical potential, Eqs.~\eqref{eq:chemical_S}. Its explicit expression for cations and anions is \begin{equation}\begin{aligned}\label{eq:chemical} \mu^c_i=-T\frac{\partial S}{\partial n^c_i}=\frac{\partial \left(A_i+U_E\right)}{\partial n^c_i}=\frac{1}{\beta}\log\left(\frac{n^c_i}{M-n^c_i-n^a_i}\right)+q_c\Phi_i\;,\\ \mu^a_i=-T\frac{\partial S}{\partial n^a_i}=\frac{\partial \left(A_i+U_E\right)}{\partial n^a_i}=\frac{1}{\beta}\log\left(\frac{n^a_i}{M-n^c_i-n^a_i}\right)-q_a\Phi_i\;,\\ \end{aligned}\end{equation} respectively, where constant terms have been omitted, since the dynamics depends only on chemical potential differences. The effect of electrostatic interactions, proportional to the particle electrostatic potential, adds to the familiar contribution coming from the (perfect gas) equation of state. The conservative component of the force acting on the particle is obtained by differentiating the entropy $S$ with respect to the particle position, Eq.~\eqref{eq:cond_S_1}, \begin{equation} \boldsymbol{f}^{\boldsymbol {C}}_i=T\frac{\partial S}{\partial\boldsymbol{x}_i}=-\frac{\partial\left(U_E+A\right)}{\partial\boldsymbol{x}_i}=\boldsymbol{f}^{\boldsymbol {E}}_i+\boldsymbol{f}^{\boldsymbol {P}}_i\;, \end{equation} where $A=\sum\limits_{j=1}^NA_j$ is the system free energy. As for the chemical potential, the conservative force comes from two contributions. The origin of the electrostatic force $\boldsymbol{f}^{\boldsymbol {E}}_i$ is immediately clear.
It can be computed using Eqs.~\eqref{eq:electrostatic_energy} and \eqref{eq:electrostatic_potential}, giving \begin{equation} \boldsymbol{f}^{\boldsymbol{E}}_i=-\sum\limits_{j\ne i}q_i\frac{\partial\Phi_i}{\partial r_{ij}}\boldsymbol{e}_{ij}=\sum\limits_{j\ne i}q_iq_j\frac{\sqrt\pi s\erf\left(r_{ij}/(2s)\right)-r_{ij}\exp\left(-r_{ij}^2/(4s^2)\right)}{s\sqrt\pi r^2_{ij}}\boldsymbol{e}_{ij}\;. \end{equation} The second contribution follows from the expression for the free energy $A$, Eq.~\eqref{eq:free_energy}, giving \begin{equation}\label{eq:pforce} \boldsymbol{f}^{\boldsymbol{P}}_i=-\sum\limits_{j=1}^N\frac{\partial A_j}{\partial{V_j}}\frac{\partial V_j}{\partial \boldsymbol{x}_i}=\sum\limits_{j\ne i}\frac{M}{\beta}\left(V_j+V_i\right)w'_{ij}\boldsymbol{e}_{ij}\;, \end{equation} where Eq.~\eqref{eq:density} has been used, $w'_{ij}$ is the derivative of the weight function $w(r_{ij})$, and the sum in the intermediate expression includes the $j=i$ term, since $V_i$ depends on the positions of the neighboring particles. It could be noted that $- \partial A_i/\partial V_i = M/(\beta V_i)$ is the meso-particle pressure $p_i$, providing the standard interpretation of $\boldsymbol{f}^{\boldsymbol{P}}_i$ as the pressure force. \begin{figure} \includegraphics[width=0.5\linewidth]{figure2.pdf} \caption{ \label{fig:cond2} Ionic current density $J$, as defined in Appendix~\ref{sec:current}, in the direction parallel to an applied electric field $E=10$. The current density was computed for different values of the dissipative parameters $\gamma^a_0=\gamma^c_0=\gamma_0$ and of the average concentration $c_0$. The electric current density (and hence the conductivity) shows a linear dependence on the ion concentration ${c_0}$ in the range of concentrations simulated. For these simulations a value of $q^c=q^a=q=0.03635$ has been used. } \end{figure} \subsection{Ionic exchange between particles} \label{sec:dissipative_factors} The coefficients $\gamma^{c/a}$ control the ionic exchange between particles and do not affect the equilibrium distribution.
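Before moving on, the analytic radial force derived above for the Gaussian pair interaction can be verified against a finite difference of the pair energy. The sketch below uses unit charges and the illustrative values $s=0.5$, $r=1.3$ (these, and the function names, are assumptions for the check, not parameters of the paper).

```python
from math import erf, exp, pi, sqrt

def pair_energy(r, s=0.5):
    """Electrostatic pair energy for unit charges: erf(r/(2s))/r."""
    return erf(r / (2.0 * s)) / r

def pair_force(r, s=0.5):
    """Analytic radial force of the text (unit charges):
    [sqrt(pi) s erf(r/(2s)) - r exp(-r^2/(4 s^2))] / (s sqrt(pi) r^2)."""
    num = sqrt(pi) * s * erf(r / (2.0 * s)) - r * exp(-r**2 / (4.0 * s**2))
    return num / (s * sqrt(pi) * r**2)

# centered-difference check: the analytic force equals -d(pair_energy)/dr
r0, h = 1.3, 1e-6
f_num = -(pair_energy(r0 + h) - pair_energy(r0 - h)) / (2.0 * h)
```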
Instead, they control the cationic and anionic conductivity of the solution, see Appendix~\ref{sec:conductivity}. The following expression was chosen for the coefficients $\gamma^{c/a}$ \begin{equation}\begin{aligned}\label{eq:gamma0} &\gamma^c(n^c_i,n^c_j)=\gamma^c_0\sqrt{n^c_in^c_j}\\ &\gamma^a(n^a_i,n^a_j)=\gamma^a_0\sqrt{n^a_in^a_j}\;, \end{aligned}\end{equation} where $\gamma^{c/a}_0$ are parameters which control the ionic current between particles at a given concentration. The resulting ionic conductivity shows a linear dependence on the ionic concentration ${c_0}$ and on the parameters $\gamma^{c/a}_0$, at least in the explored range of parameters, see Fig.~\ref{fig:cond2}. As previously shown, Eqs.~\eqref{eq:h}, the use of parameters $\gamma^{c/a}$ which depend on the concentration gives rise to an additional exchange of ions between the particles, independent of the chemical potential, \begin{equation}\begin{aligned} &h^c_{ij}=\frac{1}{\beta}\left(\frac{1}{n^c_i}-\frac{1}{n^c_j}\right)+\mu^c_{ji}\\ &h^a_{ij}=\frac{1}{\beta}\left(\frac{1}{n^a_i}-\frac{1}{n^a_j}\right)+\mu^a_{ji} \; . \end{aligned}\end{equation} \section{Validation} \label{sec:validation} As a case study to validate the EH-DPD model, a system consisting of a planar channel of height $h$ with a given surface charge at the two walls is simulated. Before describing the actual set-up, it is instructive to review classical solutions for electroosmotic flows based on the Debye approximation $\beta q\zeta \ll 1$, where $\zeta$ is the wall electric potential and $q=q_c=q_a$ is the charge of the ions, the electrolyte being assumed symmetric.
In this case, the 1-D version of the Poisson-Boltzmann equation reads~\cite{theoretical_microfluidics} \begin{equation}\label{eq:poissonboltzmann} \frac{\mathrm{d}^2\phi}{\mathrm{d}z^2}= \frac{2qc_0}{\varepsilon}\sinh(\beta q\phi(z)) \simeq \frac{\phi(z)}{\lambda_D^2} \;, \end{equation} where $z$ is the coordinate orthogonal to the channel walls, $\phi$ is the electrostatic potential, $c_0$ the concentration at zero potential and $\varepsilon$ the dielectric constant. In the linearized form on the right hand side of the equation, $\lambda_D=\left(2 \beta q^2c_0/\varepsilon\right)^{-1/2}$ is the Debye length. The boundary conditions for this equation relate the derivative of the potential at the walls to the wall charge, i.e., assuming a vanishing electric field outside the channel, \begin{align} \frac{\mathrm{d}\phi}{\mathrm{d}z}\biggr\lvert_{z=h/2}=\frac{\sigma_{up}}{\varepsilon}\;, \qquad\qquad\qquad\qquad \frac{\mathrm{d}\phi}{\mathrm{d}z}\biggr\lvert_{z=-h/2}=-\frac{\sigma_{low}}{\varepsilon}\;, \end{align} where $\sigma_{up}$ and $\sigma_{low}$ are, respectively, the surface charges of the upper and lower walls. We consider two different scenarios: i) the symmetric case (suffix $S$), with both walls carrying the same surface charge, $\sigma_{up}=\sigma_{low}=\sigma$, and ii) the antisymmetric case (suffix $A$), where the two walls are oppositely charged, $\sigma_{up}=-\sigma_{low}=\sigma$. In the two cases, the analytical solutions of Eq.~\eqref{eq:poissonboltzmann} read~\cite{micronanofluid} \begin{align} \phi^S=\zeta^S\frac{\cosh(z/\lambda_D)}{\cosh(h/(2\lambda_D))}\;, \qquad\qquad\qquad\qquad \phi^A=\zeta^A\frac{\sinh(z/\lambda_D)}{\sinh(h/(2\lambda_D))}\;, \label{eq:phi} \end{align} and the wall potentials ($\zeta$-potentials) in the symmetric and antisymmetric case are, respectively, \begin{align} \zeta^S=\frac{\lambda_D\sigma}{\varepsilon\tanh(h/2\lambda_D)}\;, \qquad\qquad\qquad\qquad \zeta^A=\frac{\lambda_D\sigma\tanh(h/2\lambda_D)}{\varepsilon}\;.
\end{align} The resulting cationic and anionic concentrations are $c^c(z)=c_0\exp\left(-\beta q\phi\right)$ and $c^a(z)=c_0\exp\left(\beta q\phi\right)$, respectively. The electroosmotic velocity profile which arises after the application of an electric field $E$ parallel to the walls follows from the Stokes equation endowed with no-slip conditions at the walls, \begin{align} \mu \frac{d^2u}{dz^2} + qE (c^c - c^a) = 0\, , \end{align} as \begin{align} u^S&=v_{eo}^S\left(1-\frac{\phi^S}{\zeta^S}\right)\;,\\ u^A&=v_{eo}^A\left(\frac{2z}{h}-\frac{\phi^A}{\zeta^A}\right)\;, \end{align} where $v_{eo}=\varepsilon \zeta E/\mu$ is the electroosmotic velocity and $\mu$ is the dynamic viscosity. \subsection{Simulation set up} For the validation simulations, we set the mass of the meso-particles $m$, the cutoff radius $r_c$ and the thermal energy $k_BT$ equal to 1. The remaining free parameters of the EH-DPD system are the number of atoms in the meso-particle $M$, the ion charge $q_a=q_c=q$, the parameter $s$ of the Gaussian used to model electrostatic interactions, the dissipative coefficient $\gamma$ and the corresponding coefficients for the ionic transport, $\gamma^c_0$ and $\gamma^a_0$, appearing in Eq.~(\ref{eq:gamma0}). Setting $\gamma=1000$ leads to a viscosity of $\mu=86$, as computed by imposing a Poiseuille flow and measuring the resulting velocity profile~\cite{boromand2015viscosity}. The parameters $\gamma^c_0$ and $\gamma^a_0$ were calibrated (Fig.~\ref{fig:cond2}) by estimating the electric conductivity obtained by applying a constant electric field to a triply-periodic EH-DPD system and measuring the resulting electric current density, as defined in Appendix~\ref{sec:current}. We did so for different values of $\gamma^c_0$ and $\gamma^a_0$ and for different ionic concentrations $c_0$.
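The analytical profiles recalled above are easily coded. The following sketch (function names and the test values $h=10$, $\lambda_D=1$, $\sigma=0.3$, $\varepsilon=1$ are illustrative assumptions) evaluates the linearized potentials and the electroosmotic velocities, checking Gauss's law at the upper wall and the no-slip condition at both walls.

```python
import numpy as np

def zeta_potentials(h, lam_D, sigma, eps=1.0):
    """zeta-potentials for equal (S) and opposite (A) wall charges."""
    th = np.tanh(h / (2.0 * lam_D))
    return lam_D * sigma / (eps * th), lam_D * sigma * th / eps

def debye_profiles(z, h, lam_D, sigma, eps=1.0):
    """Linearized Poisson-Boltzmann potentials phi^S and phi^A."""
    zeta_S, zeta_A = zeta_potentials(h, lam_D, sigma, eps)
    phi_S = zeta_S * np.cosh(z / lam_D) / np.cosh(h / (2.0 * lam_D))
    phi_A = zeta_A * np.sinh(z / lam_D) / np.sinh(h / (2.0 * lam_D))
    return phi_S, phi_A

def eo_velocity(z, h, lam_D, v_eo=1.0):
    """Electroosmotic profiles u^S and u^A; only the ratios phi/zeta enter,
    so the zeta-potentials drop out of the expressions."""
    u_S = v_eo * (1.0 - np.cosh(z / lam_D) / np.cosh(h / (2.0 * lam_D)))
    u_A = v_eo * (2.0 * z / h - np.sinh(z / lam_D) / np.sinh(h / (2.0 * lam_D)))
    return u_S, u_A

# Gauss-law check at the upper wall: d(phi)/dz(h/2) = sigma/eps in both cases
h_ch, lam, sig, dz = 10.0, 1.0, 0.3, 1e-6
pS1, pA1 = debye_profiles(h_ch / 2.0, h_ch, lam, sig)
pS0, pA0 = debye_profiles(h_ch / 2.0 - dz, h_ch, lam, sig)
dphiS, dphiA = (pS1 - pS0) / dz, (pA1 - pA0) / dz

# no-slip check: both velocity profiles vanish at z = +/- h/2
uS_top, uA_top = eo_velocity(h_ch / 2.0, h_ch, lam)
uS_bot, uA_bot = eo_velocity(-h_ch / 2.0, h_ch, lam)
```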
We also estimated the conductivity of the fluid by an independent approach based on linear response theory, see Appendix~\ref{sec:conductivity}, finding a good agreement with the nonequilibrium simulations. Typical values used in the following subsection are $\gamma^c_0 = \gamma^a_0 = 16$. In the set of Eqs.~\eqref{eq:ehdpd_1} there is no guarantee that the quantities $n^c$ and $n^a$ remain positive. In fact, due to the stochastic nature of the equations, infrequent, strong events can lead to a negative number of ions in the particle. The chemical potential of Eq.~\eqref{eq:chemical} is not defined for negative $n^a_i$ and $n^c_i$, thus a limiting value of $\mu_{limit}=-10$ was used. Also, since $\gamma^a$ and $\gamma^c$ are not defined for negative $n^a_i$ and $n^c_i$, a lower cutoff $\bar{n}$ is assumed such that $n^a_i\ge\bar{n}$ and $n^c_i\ge\bar{n}$ (typically $\bar{n}=0.00223$ has been used in the simulations to be discussed). The electrostatic interactions are handled with the Ewald summation algorithm~\cite{kiss2014efficient} and the model was implemented using the DPD package of LAMMPS~\cite{lammps}. The Euler-Maruyama algorithm~\cite{higham2001algorithmic} was used to integrate Eq.~\eqref{eq:ehdpd_1} in the It\^o formalism using a time step of $\Delta t=10^{-5}$. \subsection{Electroosmotic flow} \begin{figure} \centering\includegraphics[width=0.8\linewidth]{figure3.pdf} \caption{ Ionic density profiles for the cations (blue) and the anions (red) compared with the analytical solutions (black lines). Figures a) and c) are cases with symmetric wall charges, while figures b) and d) are cases with antisymmetric wall charges. In the two top plots, two different values of the Debye length are considered and $\zeta=13.7$, while in the bottom plots the zeta potential is changed and $\lambda_D=1$. } \label{fig:charge} \end{figure} We simulated a planar channel with a height of 10, ranging from $z=-5$ to $z=5$, where $z$ is the coordinate orthogonal to the walls.
The walls were modelled using fixed meso-particles of constant charge distributed randomly in two layers of width 1 on each side. The first layer, in direct contact with the fluid, has a particle number density of $\rho_1 = 3$, while the second layer, also of width 1, has a particle number density of $\rho_2 = 6$. In a previous work~\cite{gubbiotti2019confinement}, we showed that, in the DPD context, walls constituted by fixed random particles of variable density are suitable to guarantee impermeability and a low slip while controlling density fluctuations due to fluid layering. A similar model was employed here, the main difference being that now the wall is constituted by two layers with different densities. The one exposed towards the liquid mainly controls the slippage, while the external one guarantees wall impermeability. The wall particles interact with the fluid particles via the multi-body potential which defines the pressure force, see Eq.~\eqref{eq:pforce}, where for the wall particle volume a constant value of $V_{wall}=0.8$ has been used for the inner layer and $V_{wall}=10$ for the external one. When the wall is charged, the electrostatic forces are also included. We measured the slip length of the resulting fluid-wall system by imposing a Poiseuille flow, observing an acceptably low slip length, below $2\%$ of the channel height. The ionic charge $q$, considered equal for both species, has been tuned to adjust the Debye screening length, using $c_0=30$. The concentrations of the two ionic species are set to guarantee the global electroneutrality of the system, with the additional condition $\sqrt{c^cc^a}=c_0$. An external electric field $E$ parallel to the walls forces the electroosmotic flow in the channel. The intensity of the electric field has been tuned to control the electroosmotic velocity. Finally, the wall charge $\sigma$ is tuned to control the $\zeta$-potential. After equilibration, a Debye layer sets in at the walls.
Ten systems were simulated, five each for the symmetric and antisymmetric settings, corresponding to different values of the Debye length and surface charge. The cationic and anionic density profiles are plotted in Fig.~\ref{fig:charge}, together with the particle electrostatic potential, for several symmetric and antisymmetric systems in comparison with the predictions of the linearized Poisson-Boltzmann model. The simulated ionic density is in good agreement with the analytical predictions for all simulations, except for slight differences at the largest $\zeta$-potentials (see Fig.~\ref{fig:charge}c). This is not unexpected, since the Debye approximation is bound to fail at large $\beta q\zeta$. \label{sec:electroosmosis} The electroosmotic flow generated by the external electric field through the charge imbalance near the walls is plotted in Fig.~\ref{fig:velocity} for different Debye lengths and electroosmotic velocities, in comparison with the analytical predictions. \begin{figure} \centering\includegraphics[width=0.8\linewidth]{figure4.pdf} \caption{ Velocity profile of the electroosmotic flow generated by a constant electric field parallel to the walls compared with the analytical solutions (black lines). Figures a) and c) correspond to cases with both walls positively charged, while figures b) and d) are cases with a positive charge in the lower wall, $z=-5$, and a negative charge in the upper wall, $z=5$. In the two top plots, two different values of the Debye length are considered, while in the bottom plots the electroosmotic velocity is changed by changing the electric field and $\lambda_D=1$. In all the plots, $\zeta=13.7$.
} \label{fig:velocity} \end{figure} \subsection{Mapping to dimensional units} \label{sec:parameters} After the previous discussion on the general features of the proposed approach, in this section we consider the specific case of a channel with height $h = 10 \, {\rm nm}$, Debye length $\lambda_D = 4.28 \, {\rm nm}$, mass density $10^3 \, {\rm kg/m^3}$, electric conductivity $\kappa = 70 \, {\rm mS/m}$ and viscosity $\mu = 6.5\cdot 10^{-4}\,{\rm Pa\cdot s}$. The reference dimensional quantities are: length $L_{ref}=0.5\,{\rm nm}$, energy $E_{ref}=1.38\cdot10^{-23}\,{\rm J}$, and mass $M_{ref}=3.33\cdot 10^{-25}\,{\rm kg}$, which corresponds to the mass of a single EH-DPD particle. The reference charge is set to $Q_{ref}=7.6\cdot10^{-21}\,{\rm C}$, leading to a relative dielectric constant $\epsilon_r = 75$. The assigned Debye length is obtained by using a dimensionless ion concentration $c_0 = 1.875$ together with a dimensionless charge $q = 0.3$. The particle interaction cutoff is set to $r_c=1\, {\rm nm}$. The dimensionless meso-particle density $\rho=0.375$ provides the target mass density of the solution. The remaining physical parameters to be mapped are the solution viscosity and conductivity. This requires preliminary calibration simulations to determine their dependence on the model parameters $\gamma$, $\gamma^c_0$ and $\gamma^a_0$. In principle, $\gamma^c_0$ and $\gamma^a_0$ can be used to independently reproduce the anion and cation conductivities. Limiting, for simplicity, the analysis to symmetric solutions, we assume $\gamma^c_0 = \gamma^a_0$. Figure~\ref{fig:dimensional}a provides the solution viscosity as a function of $\gamma$ for fixed $\gamma^c_0 = \gamma^a_0$. We computed the viscosity by imposing a Poiseuille flow~\cite{boromand2015viscosity} and measuring the average velocity obtained at a given pressure difference.
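The reference units quoted above fix the conversion factors for viscosity and conductivity. The short sketch below reproduces this arithmetic; the variable names are illustrative, and the dimensionless targets in the last two lines are simply the quoted physical values divided by the derived reference units.

```python
# reference quantities quoted in the text (SI units)
L_ref = 0.5e-9            # m
E_ref = 1.38e-23          # J
M_ref = 3.33e-25          # kg
Q_ref = 7.6e-21           # C

# derived reference units for viscosity and electric conductivity
mu_ref = (M_ref * E_ref) ** 0.5 / L_ref**2                    # ~ 8.57e-6 Pa*s
kappa_ref = Q_ref**2 / (L_ref**2 * (E_ref * M_ref) ** 0.5)    # ~ 107.8 S/m

# dimensionless targets implied by the physical viscosity and conductivity
mu_target = 6.5e-4 / mu_ref       # ~ 76
kappa_target = 70e-3 / kappa_ref  # ~ 6.5e-4
```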
In the investigated range of parameters, the viscosity is found to be almost independent of $\gamma^c_0$ and $\gamma^a_0$, which control the conductivity. From the $\mu-\gamma$ curve, we find that $\gamma=6500$ provides the target viscosity. Analogously, Fig.~\ref{fig:dimensional}b provides the solution conductivity as a function of the common value $\gamma^c_0 = \gamma^a_0 = \gamma_0$. From the figure, $\gamma_0=0.015$ yields the assigned electrical conductivity. We estimated the electric conductivity by applying a constant electric field to a bulk system and measuring the resulting current. Moreover, since the model parameters are changed with respect to the cases reported in Figs.~\ref{fig:charge} and \ref{fig:velocity}, the wall model needs to be recalibrated in order to achieve a no-slip boundary condition. Here, we used $V_{wall}=6.4\cdot 10^{-4}$ for the inner layer and $V_{wall} = 0.064$ for the external one. The resulting electroosmotic flow in a $10\, {\rm nm}$ planar channel is shown in Fig.~\ref{fig:dimensional}c, while the ionic density is shown in Fig.~\ref{fig:dimensional}d. The zeta potential is such that $\zeta=-0.5 (q\beta)^{-1}$. It is apparent that in the middle of the channel the concentrations of anions and cations differ, as expected when the electric double layers overlap. \begin{figure} \centering\includegraphics[width=0.8\linewidth]{figure5.pdf} \caption{Electroosmotic flow for a $10\, {\rm nm}$ planar channel. a) Viscosity of the fluid as a function of the parameter $\gamma$, with $\mu_{ref} = M_{ref}^{1/2}E_{ref}^{1/2}L_{ref}^{-2}=8.57\cdot 10^{-6}\, {\rm Pa\cdot s}$. The value highlighted with a circle was used for the electroosmotic flow simulation. b) Conductivity of the fluid as a function of the parameters $\gamma^a_0=\gamma^c_0=\gamma_0$, with $\kappa_{ref}=Q_{ref}^2L_{ref}^{-2}E_{ref}^{-1/2}M_{ref}^{-1/2}=107.8\, {\rm S/m}$. The value highlighted with a circle was used for the electroosmotic flow simulation.
c) Velocity and d) density profiles of cations (blue) and anions (red). The Debye length was set to $4.28\, {\rm nm}$. } \label{fig:dimensional} \end{figure} \subsection{Excluded volume effects} \begin{figure} \centering\includegraphics[width=\linewidth]{vdw.pdf} \caption{ Simulation results for a fluid with perfect gas and Van der Waals equations of state. In all the panels, the dashed lines refer to the perfect gas model, while the continuous lines refer to the Van der Waals model of Eq.~\eqref{eq:free_energy-vdw}, with $a=10$, $b^s=0.01$, $b^c=0.02$ and $b^a=0.04$. All the remaining parameters are set as described in Section~\ref{sec:parameters}, with the only difference of $\gamma=4500$ in the perfect gas case, chosen to match the viscosity of the corresponding Van der Waals model, which has $\gamma=6500$. The Debye length was set to $2.15$ in all the cases by tuning the charge of the ions. a-b) Cation (blue) and anion (red) concentration profiles and concentration difference (black) for the perfect gas case (dashed lines) and the Van der Waals model (continuous lines), for a planar channel with symmetric (a) and antisymmetric (b) charges in the walls. c-d) Velocity profiles corresponding to panel a, symmetric case (c), and panel b, antisymmetric case (d). } \label{fig:vdw} \end{figure} To test the capability of EH-DPD to simulate fluids with different equations of state, we performed planar electroosmosis simulations similar to those described above, using a Van der Waals equation of state, \textit{i.e.} \begin{equation} P=\frac{k_BT}{v-b}-\frac{a}{v^2}\;, \end{equation} where $v$ is the volume divided by the total number of particles, $a$ is a parameter modeling the attraction between fluid particles and $b$ is the excluded volume of the single particle. For a multicomponent fluid, the parameters $a$ and $b$ are usually expressed as combinations of the respective single-component parameters according to mixing rules~\cite{kwak1986van}.
In the case of a fluid with three components (\textit{i.e.} $n_c$ cations, $n_a$ anions and $n_s$ solvent atoms), for the meso-particle $i$, $b_i=n^c_i b^c+n^a_i b^a+ n^s_i b^s$ and $a_i=n^c_in^c_i a^{cc}+n^a_in^a_i a^{aa}+n^s_in^s_i a^{ss}+2n^c_in^a_i a^{ca}+2n^c_in^s_i a^{cs}+2n^s_in^a_i a^{sa}$, where $b^c$, $b^a$ and $b^s$ are the excluded volumes of each species and $a^{aa}$, $a^{cc}$, $a^{ss}$, $a^{ca}$, $a^{cs}$ and $a^{sa}$ are parameters controlling the attractive force between the species. As in the perfect gas case, we assumed $n^c+n^a+n^s=M$ to hold for each particle, leaving only $n^c$ and $n^a$ as independent variables. The free energy is hence \begin{equation}\label{eq:free_energy-vdw} A_i(V_i,n^c_i,n^a_i)=\frac{n^c_i}{\beta}\left[\log\left(\frac{n^c_i}{M-n^c_i-n^a_i}\right)-1\right]+\frac{n^a_i}{\beta}\left[\log\left(\frac{n^a_i}{M-n^c_i-n^a_i}\right)-1\right]+\frac{M}{\beta}\log\left(\frac{M-n^a_i-n^c_i}{V_i-b_i}\right)-\frac{a_i}{V_i}\;, \end{equation} where the attraction enters as $-a_i/V_i$ so that $P=-\partial A_i/\partial V_i$ reproduces the Van der Waals pressure. We chose to focus on the effect of the excluded volume parameters $b^s$, $b^c$ and $b^a$, keeping the parameter $a_i=a=10$ constant and independent of the local fluid composition, such that the fluid is single phase. In this framework, the chemical potential for cations in particle $i$ reads \begin{equation} \mu^c_i=\frac{1}{\beta}\left[\log\left(\frac{n^c_i}{M-n^c_i-n^a_i}\right)+\frac{M \left(b^c-b^s\right)}{V_i-b_i}\right]+q_c\Phi_i\;, \end{equation} where the $b^s$ contribution arises from the elimination of $n^s_i$ through the mass constraint, and similarly for the anions. The results of the simulations are displayed in Fig.~\ref{fig:vdw}. In Figs.~\ref{fig:vdw}a and b, the ionic concentration profiles are shown for two cases, corresponding to the symmetric (panel a) and antisymmetric (panel b) settings. As reference cases, the perfect gas model reported above is considered (dashed lines), with the same parameters reported in Section~\ref{sec:parameters}, with the only difference that $\gamma=4500$, which was adopted to obtain a viscosity equal to that of the simulated Van der Waals fluid.
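A sketch of the Van der Waals free energy and the cation chemical potential follows; the parameter values ($M=10$, $\beta=1$, the $b$'s of the figure, and the state point used in the check) are illustrative, and the attraction term is written as $-a/V$ under the assumption that $P=-\partial A/\partial V$ must reproduce the Van der Waals pressure with constant $a$. The analytic constrained derivative, including the $b^c-b^s$ combination coming from the mass constraint, is checked against a centered difference of $A$.

```python
import numpy as np

def vdw_free_energy(V, n_c, n_a, M=10.0, beta=1.0, a=10.0,
                    b_c=0.02, b_a=0.04, b_s=0.01):
    """Van der Waals meso-particle free energy with constant attraction a;
    b_i follows the linear mixing rule over the three species."""
    n_s = M - n_c - n_a
    b_i = n_c * b_c + n_a * b_a + n_s * b_s
    return (n_c * (np.log(n_c / n_s) - 1.0)
            + n_a * (np.log(n_a / n_s) - 1.0)
            + M * np.log(n_s / (V - b_i))) / beta - a / V

def mu_cation(V, n_c, n_a, M=10.0, beta=1.0,
              b_c=0.02, b_a=0.04, b_s=0.01):
    """Constrained derivative dA/dn_c (the constant -1/beta is kept here so the
    value matches the numerical derivative exactly); note the b_c - b_s
    combination produced by eliminating n_s through the mass constraint."""
    n_s = M - n_c - n_a
    b_i = n_c * b_c + n_a * b_a + n_s * b_s
    return (np.log(n_c / n_s) - 1.0 + M * (b_c - b_s) / (V - b_i)) / beta

# check the analytic chemical potential against a centered difference of A
V0, n_c0, n_a0, h = 2.0, 1.0, 1.5, 1e-6
mu_num = (vdw_free_energy(V0, n_c0 + h, n_a0)
          - vdw_free_energy(V0, n_c0 - h, n_a0)) / (2.0 * h)
```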
The applied electric field was such that $v_{eo}=2$. The continuous lines refer to a Van der Waals model with $b^s=0.01$, $b^c=0.02$ and $b^a=0.04$. The effect of the excluded volume is to decrease the ion concentration near the walls, generating a larger charge density near negatively charged surfaces and a smaller charge density near positively charged surfaces, due to the larger excluded volume of anions with respect to cations. The difference in charge distribution (in particular close to the walls) leads to a more pronounced electroosmotic flow in the symmetric case, reported in Fig.~\ref{fig:vdw}c, while a net flow is observed in the case of antisymmetrically charged walls, due to the different ion accumulation at the two walls, as reported in Fig.~\ref{fig:vdw}d. \section{Conclusions} The EH-DPD (ElectroHydrodynamic Dissipative Particle Dynamics) model illustrated in the present paper is an extension of the DPD model which can be used to simulate the dynamics of electrolyte solutions at mesoscopic scales. The meso-particles carry and exchange among them two ionic species under the collective motion induced by their mutual interactions. The forces acting between the meso-particles and the ionic exchange rates are determined by the specific fluid model which, in the EH-DPD framework, amounts to defining the free energy of the meso-particle system. The model has been validated by simulating the electroosmotic flow in a planar nanochannel with charged walls, and a good agreement is obtained both in the case of a Debye length comparable with the channel height and in the case of a small Debye length. This approach can be used to study fluid systems where thermal fluctuations are crucial at scales larger than those affordable with Molecular Dynamics, such as nanoparticle and biomolecule sensing, or systems with membranes for desalination and energy harvesting.
\newtr{We validated our method against analytical results for electroosmotic flows in a planar channel, obtained from the commonly adopted linearized Poisson-Boltzmann solution of the Poisson-Nernst-Planck-Navier-Stokes equations. This was accomplished using the free energy of a perfect gas to model the interparticle interactions. It has also been shown that different free energies, representing more complex fluids, can be used. As an example, we employed the Van der Waals equation of state, which introduces ion-specific effects such as the excluded volume. As a final remark, although in this work we focused on a single phase fluid, the same equation of state allows phase transitions and introduces an energetic cost for the creation of interfaces between different phases, making it possible to deal with multiphase systems. In general, the possibility of providing the equation of state as an input of the simulation is promising for the study of systems with ion-specific effects driven by electric fields or concentration gradients, also in the presence of phase transitions as occurring, \textit{e.g.}, in hydrophobic channels and hydrophobic nanoporous materials.}
\section{Introduction}\label{intro} Shear flows are ubiquitous in astrophysical problems, such as jet propagation in the interstellar medium \citep{ferrari1980,begelman1984,bodo1994}, the dynamics of spiral arms in galaxies \citep{dwarkadas1996}, cometary tails \citep{ershkovich1973,brandt1979} and differential rotation in accretion disks \citep{balbus1998}. It is also relevant in a variety of space physics problems, such as zonal flows in the atmospheres of rotating planets like Jupiter \citep{hasegawa1985}, the solar wind \citep{poedts1998} or the Earth's magnetopause \citep{parker1958}. Shear flows often give rise to the well-known Kelvin-Helmholtz (KH) instability \citep{helmholtz1868,kelvin1871}, which has been extensively studied by \citet{chandra1961}. It is an ideal hydrodynamic instability that converts the energy of the large-scale velocity gradients into kinetic and/or magnetic energy at much smaller scales. The presence of a magnetic field component parallel to the shear flow has a stabilizing effect, and can even stall the instability if the parallel component of the Alfven velocity becomes larger than one half of the shear velocity jump \citep{lau1980,miura1982}. A similar instability condition was anticipated by \citet{ershkovich1973} in connection with observational evidence of KH in comet tails. On the other hand, an external magnetic field pointing in any direction perpendicular to the shear flow has no effect on the linear regime of the instability, and it is simply advected by the flow. The first observations of a KH pattern in the solar corona were reported by \citet{foullon2011} for a 2010 November 3 event using data from the Atmospheric Imaging Assembly (AIA) on board the {\it Solar Dynamics Observatory} (SDO). \citet{ofman2011} also reported observations of a KH pattern obtained by AIA/SDO for a 2010 April 8 event.
AIA produces high spatial resolution (pixel size of 0.6 arcsec) and high temporal cadence (10-20 sec) images of the Sun in several bandpasses covering white light, ultraviolet and extreme ultraviolet. The Kelvin-Helmholtz pattern observed by \citet{foullon2011} extends from about $70~Mm$ up to about $180~Mm$ above the solar surface ($1~Mm = 10^3~km$). When a coronal mass ejection (CME) expands supersonically upwards from the solar surface, a bow shock is formed ahead of the CME and a strong shear flow develops across the contact discontinuity separating the shocked ambient plasma from the ejected material. A similar configuration arises at the flanks of the Earth's magnetopause, where the KH instability has also been observed and studied \citep{fujimoto1995,fairfield2000,nykyri2001}. More recently it was observed in connection with the magnetopause of other planets, such as Saturn \citep{masters2010} and Mercury \citep{sundberg2011}. When the supersonic solar wind impinges on these magnetized planets, it first crosses a bow shock (and becomes subsonic in the reference frame of the planet) and then circumvents the planet, slipping along the outer part of a surface of tangential discontinuity known as the {\it magnetopause}, where a strong shear flow is generated. The ambient corona is expected to be turbulent, as evidenced by measurements of non-thermal broadenings of highly ionized spectral lines. The most recent observations of nonthermal broadenings obtained by the Extreme-ultraviolet Imaging Spectrometer (EIS) on board {\it Hinode} correspond to nonthermal motions in the range of $20-60~km.s^{-1}$ \citep{doschek2014}. The typical sizes of these nonthermal motions are sufficiently small to remain unresolved by EIS, whose pixel size is $2~arcsec$, and therefore their only manifestation is an excess in the Doppler broadening of spectral lines (i.e. beyond the thermal Doppler broadening).
The goal of the present paper is to study the interaction between these two rather dissimilar features: the large-scale laminar pattern generated by the ongoing KH instability, and the small-scale nonthermal motions presumably corresponding to a well developed turbulence. With this goal in mind, we set up three-dimensional simulations of the MHD equations to study the evolution of the KH instability in the presence of a turbulent ambient background. \citet{nykyri2013} presented results from a large number of compressible 2.5D MHD simulations (without a turbulent background) for parameter values compatible with the observations of the 2010 November 3 event. This comparison is consistent with a magnetic field almost perpendicular to the flow plane, and therefore we make this assumption in our simulations. When a small-scale turbulent background is considered, its expected role on a large-scale flow is to produce the effect of an enhanced diffusivity, which can be characterized through an effective or turbulent viscosity. The effect of this extra diffusivity on an ongoing instability of the large-scale flow, as is the case here for KH, is to reduce its growth rate or even to switch off the instability completely. We test and basically confirm this theoretical picture with a series of simulations of a KH-unstable shear flow superimposed on a small-scale turbulent background with different turbulence intensities. The AIA observations showing a KH pattern are described in \S~\ref{aia} and the observed features of small-scale turbulence are summarized in \S~\ref{bro}. We introduce the MHD equations in \S~\ref{mhd} and describe the basic properties of the Kelvin-Helmholtz instability in \S~\ref{khinst}. The characteristic features of the turbulent background generated in our simulations are discussed in \S~\ref{turb} and our numerical results are shown in \S~\ref{num}.
The potential consequences of the results presented in this paper are discussed in \S~\ref{discu}, and our conclusions are listed in \S~\ref{conclu}. \section{Observations}\label{obs} \subsection{AIA observations}\label{aia} The coronal mass ejection (CME) that occurred on 2010 November 3 near the southeast solar limb showed the characteristic pattern of the KH instability on AIA images. This pattern has only been observed in the highest-temperature channel, centered at the $131~\AA$ bandpass at $1.1\times 10^7~K$. The sequence of AIA images shows a regular array of three to four vortex-like structures on the northern flank of the CME, which were interpreted by \citet{foullon2011} as the manifestation of an ongoing KH instability. The geometrical setup of a CME expanding upwards from the solar surface is similar to the one taking place at the Earth's magnetopause \citep{foullon2011}. In view of this similarity, these authors termed the surface of tangential discontinuity that separates the plasma of the ejecta from the shocked plasma of the ambient corona the \textit{CME-pause}. From these observations, \citet{foullon2013} were able to estimate several of the relevant physical parameters for this instability, while the values of other parameters were inferred under different assumptions discussed in their subsection 5.3. The observational values for these various parameters are listed in \textit{Table 2} of \citet{foullon2013}. Among the most important parameters, they estimated a wavelength for the observed KH pattern of $\lambda = 18.5 \pm 0.5~Mm$ and an instability growth rate of $\gamma_{KH} = 0.033 \pm 0.012~s^{-1}$, which was driven by the velocity jump across the shear layer of $680 \pm 92~km.s^{-1}$. These numbers are in good agreement with the dispersion relationship of the KH instability (see \S~\ref{khinst} below). 
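A quick back-of-the-envelope consistency check of these numbers is sketched below (this is not the full analysis of \citet{foullon2013}, only an order-of-magnitude comparison). It uses the vortex-sheet rate $\gamma = k\,\Delta U/2$ for a discontinuous jump, and the finite-thickness maximum $\gamma_{max}\approx 0.2\,U_0/\Delta$ with $\lambda_{max}\approx 15.7\,\Delta$ quoted in \S~\ref{khinst}, under the assumption that the observed wavelength is the fastest-growing one:

```python
import math

# Observed parameters for the 2010 November 3 event (Foullon et al. 2013).
lam = 18.5e6          # KH wavelength [m]
dU = 680e3            # velocity jump across the shear layer [m/s]
gamma_obs = 0.033     # observed growth rate [1/s]

k = 2 * math.pi / lam # wavenumber of the observed pattern [1/m]
U0 = dU / 2           # half the velocity jump [m/s]

# (a) Discontinuous (vortex-sheet) estimate: gamma = k*dU/2.
gamma_sheet = k * U0                      # ~0.12 1/s

# (b) Finite-thickness estimate: ASSUME the observed wavelength is the
# fastest-growing one, lambda_max ~ 15.7*Delta, so Delta ~ lambda/15.7,
# with gamma_max ~ 0.2*U0/Delta (dispersion relation discussed later).
Delta = lam / 15.7                        # inferred layer half-thickness [m]
gamma_thick = 0.2 * U0 / Delta            # ~0.06 1/s
```

Both estimates bracket the observed $0.033 \pm 0.012~s^{-1}$ to within a factor of a few, as expected for such a crude comparison.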
The total magnetic field reported by \citet{foullon2013} at the CME-pause is sufficiently strong to correspond to Alfv\'en speeds comparable to the velocity jump across the shear layer. However, as noted by these authors, the field is largely tangential to the interface and normal to the KH flow. As a result, this large Alfv\'en speed does not play any significant role in the development of the instability. In a series of compressible 2.5D MHD simulations, \citet{nykyri2013} managed to approximately reproduce the observed features of this KH event (see more details in \S~\ref{khinst}). \citet{ofman2011} also reported observational evidence of the occurrence of the KH instability at the interface between a CME and the surrounding corona. Their event took place on 2010 April 8; it was the first to be detected in EUV in the solar corona and was clearly observed by six out of the seven wave bands of AIA/SDO. The velocity jump across the shear layer for this event was estimated in the range of $6-20~km.s^{-1}$, while the wavelength of the observed KH pattern was $\lambda \simeq 7~Mm$, based on the size of the initial ripples. From the dispersion relationship corresponding to an incompressible fluid with a discontinuous velocity jump, an instability growth rate of $\gamma_{KH} \simeq 0.005~s^{-1}$ can be obtained. This value shows a reasonable agreement with the approximately $14~min$ over which the KH pattern was observed to grow and reach saturation \citep{ofman2011}. The KH features, however, were observed to last for as long as $107~min$. These observations were also compared with the results of compressible 2.5D MHD simulations, showing a good qualitative agreement during the nonlinear stage as well. Another KH event took place on 2011 February 24 in connection with a CME. \citet{mostl2013} reported the quasi-periodic vortex structures observed by AIA/SDO and interpreted these observations with the aid of 2.5D MHD simulations. 
They find a reasonable agreement between the numerical results and the observations, assuming that the ejecta is about ten times denser than the surrounding ambient plasma. \subsection{Turbulent broadening}\label{bro} Spectroscopic studies of coronal spectral lines show quantitative evidence of the existence of spatially unresolved fluid motions through the nonthermal broadening effect on these lines. Early observations were performed by a number of instruments, such as the slit spectrograph aboard Skylab \citep{mariska1992}, the High Resolution Telescope Spectrograph rocket \citep{bartoe1982}, the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) instrument aboard the Solar and Heliospheric Observatory \citep{teriaca1999}, or the various Solar Extreme Ultraviolet Research Telescope and Spectrograph flights between 1991 and 1997 \citep{coyner2011}. More recently, \citet{doschek2014} reported nonthermal motions with velocities between $20$ and $60~km.s^{-1}$ obtained by EIS on \textit{Hinode}, corresponding to regions at the loop tops and above the loop tops during several flares. EIS obtains images in the following two wavelength bands: $170-213~\AA$ and $250-290~\AA$. The angular resolution for the flare observations performed by \citet{doschek2014} is about $2~arcsec$. The line-of-sight motions responsible for these nonthermal broadenings correspond to plasma at temperatures in the range of $11-15~MK$, and they increase with the height above the flare loops. These fluid motions have also been observed with EIS/Hinode in non-flaring active region loops \citep{doschek2008}. In that case, the fluid motions are carried by plasma at temperatures of about $1.2-1.4~MK$ with particle densities spanning the range of $5\times 10^8-10^{10}~cm^{-3}$. The rms values for the fluid velocities were in the range of $20-90~km.s^{-1}$. Outflow velocities in the range of $20-50~km.s^{-1}$ have also been detected through net blueshifts of the same spectral lines. 
The magnitude of the outflow velocities was found to be positively correlated with the rms velocity. \citet{brooks2011} performed a detailed study on active region AR 10978 using EIS/Hinode during a time span of five days in 2007 December. Persistent outflows were observed to take place at the edges of this active region, with an average speed of $22~km.s^{-1}$ and average rms velocities of $43~km.s^{-1}$. More recently, \citet{tian2012} studied upflows in connection to dimming regions generated by CMEs, and reported velocities of up to $100~km.s^{-1}$. It is speculated that these persistent outflows can be a significant source for the slow solar wind. \section{Magnetohydrodynamic description}\label{mhd} The incompressible MHD equations for a fully ionized hydrogen plasma are the Navier-Stokes equation and the induction equation \begin{eqnarray} \frac{\partial \mbox{\boldmath $ U $}}{\partial t} & = & - \left( \mbox{\boldmath $ U $} \cdot \mbox{\boldmath $ \nabla $} \right) \mbox{\boldmath $ U $} + v_A^2 \left( \mbox{\boldmath $ \nabla $}\mbox{\boldmath $ \times $}\mbox{\boldmath $ B $}\right) \mbox{\boldmath $ \times $} \mbox{\boldmath $ B $} - \mbox{\boldmath $ \nabla $} P + \nu \nabla^2 \mbox{\boldmath $ U $} + \mbox{\boldmath $ F $} \label{eq:NS}\\ \frac{\partial\mbox{\boldmath $ B $}}{\partial t} & = & \mbox{\boldmath $ \nabla $}\mbox{\boldmath $ \times $}\left( \mbox{\boldmath $ U $}\mbox{\boldmath $ \times $}\mbox{\boldmath $ B $}\right) + \eta \nabla^2 \mbox{\boldmath $ B $} \label{eq:ind}\ . \end{eqnarray} The velocity $\mbox{\boldmath $ U $}$ is expressed in units of a characteristic speed $U_0$, the magnetic field $\mbox{\boldmath $ B $}$ is in units of $ B_0$, and we also assume a characteristic length scale $L_0$ and a spatially uniform particle density $n_0$. In general terms, the assumption of incompressibility is valid provided that the plasma velocity associated with the instabilities being considered (i.e. 
the fluctuating part of the velocity profile) remains significantly smaller than the speed of sound. Note that only the inhomogeneous part of the velocity field needs to be much smaller than the speed of sound. This might be a good assumption for some KH events, while others might require the inclusion of compressible effects. Notwithstanding, in the present paper we adopt incompressibility as a simplifying assumption. Because of quasi-neutrality, the electron and the proton particle densities are equal, i.e., $n_e = n_i = n_0$. The (dimensionless) Alfv\'en speed is then $v_A = B_0/(\sqrt{4\pi m_i n_0}\, U_0)$, while $\eta $ and $\nu$ are respectively the dimensionless magnetic diffusivity and kinematic viscosity. Note that for simplicity we assume isotropic expressions for both dissipative effects, even though in the presence of magnetic fields a tensor representation would be a more appropriate model \citep{braginskii1965}. These equations are complemented by the solenoidal conditions for both vector fields, i.e., \begin{equation} \mbox{\boldmath $ \nabla $}\cdot\mbox{\boldmath $ B $} = 0 = \mbox{\boldmath $ \nabla $}\cdot\mbox{\boldmath $ U $}\ . 
\label{eq:div} \end{equation} \section{Kelvin-Helmholtz instability}\label{khinst} Let us assume that the plasma is subjected to an externally applied shear flow given by \begin{equation} \mbox{\boldmath $ U $}_0 = U_{0y}(x) \mbox{\boldmath $\hat y$} , \label{eq:shear} \end{equation} so that the total velocity field is now $\mbox{\boldmath $ U $}_0 + \mbox{\boldmath $ u $}$, where \begin{equation} U_{0y}(x) = U_0 \left[\tanh\left(\frac{x-\frac{\pi}{2}}{\Delta}\right)-\tanh\left(\frac{x-\frac{3\pi}{2}}{\Delta}\right)-1\right]\ . \label{eq:tanh-shear} \end{equation} The velocity profile given in Eqn.~(\ref{eq:tanh-shear}) simulates the encounter of largely uniform flows of intensities $+ U_0 \mbox{\boldmath $\hat y$}$ and $- U_0 \mbox{\boldmath $\hat y$}$ through a parallel interface of thickness $2\Delta $. The numerical setup is sketched in Figure~\ref{fig:fig1}, where the jump provided by the hyperbolic tangent is duplicated to satisfy periodic boundary conditions throughout the numerical box. Also, we assume the presence of an external and uniform magnetic field $\mbox{\boldmath $ B $}_0$ tangential to the interface and almost perpendicular to the shear flow (see Fig.~\ref{fig:fig1}), so that the total magnetic field is $\mbox{\boldmath $ B $}_0 + \mbox{\boldmath $ b $}$. The assumption of a hyperbolic tangent velocity profile is often adopted \citep{drazin1958,chandra1961,miura1992} as a way to model shear flows with a finite thickness. The velocity profile given in Eqn.~(\ref{eq:tanh-shear}) is an exact equilibrium of Eqs.~(\ref{eq:NS})-(\ref{eq:ind}) obtained through the application of the stationary external force $\mbox{\boldmath $ F $}_0 = -\nu\nabla^2 U_{0y}(x)\mbox{\boldmath $\hat y$}$ (see also \citet{gomez2014}), and it is therefore numerically implemented simply by the application of the volume force $\mbox{\boldmath $ F $}_0$. 
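The double-tanh construction can be verified directly. The sketch below (plain NumPy, taking the simulation value $\Delta = 0.1$ and code units $U_0 = 1$ for illustration) checks that the profile is periodic on $[0, 2\pi]$ and reaches $\mp U_0$ in the uniform regions between the two shear layers:

```python
import numpy as np

U0, Delta = 1.0, 0.1   # illustrative code units; Delta = 0.1 is the simulation value

def shear_profile(x, U0=U0, Delta=Delta):
    """Double hyperbolic-tangent shear U_0y(x) of Eqn (tanh-shear)."""
    return U0 * (np.tanh((x - np.pi / 2) / Delta)
                 - np.tanh((x - 3 * np.pi / 2) / Delta) - 1.0)

x = np.linspace(0.0, 2.0 * np.pi, 257)
u = shear_profile(x)

# u[0] ~ u[-1] ~ -U0: the duplicated jump makes the profile periodic,
# with -U0 near the box boundaries and +U0 in the central region,
# separated by two shear layers of thickness ~2*Delta each.
```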
In the KH event that took place at one of the flanks of the 2010 November 3 CME, the fluid is observed to move along the contact discontinuity, albeit at very different speeds on either side. We choose to describe the development of the KH instability from a reference frame moving along the interface at the average between these two speeds. In this reference frame, the flow will display a hyperbolic tangent type of profile, for which the parameter $U_0$ (see Eqn.~(\ref{eq:tanh-shear})) will be equal to one half of the relative velocity. A shear flow such as the one given by Eqn.~(\ref{eq:tanh-shear}) is subjected to the well known Kelvin-Helmholtz instability, which is of a purely hydrodynamic nature, i.e. it occurs even in the absence of any magnetic field. Within the framework of MHD, the stability of a tangential velocity discontinuity (i.e. in the limit of $\Delta =0$) was first studied by \citet{chandra1961}. For the case of an external magnetic field aligned with the shear flow, the mode is stabilized by the magnetic field, unless the velocity jump exceeds twice the Alfv\'en speed. For the case at hand, we assume the parallel component of the external magnetic field to be sufficiently weak (i.e. $v_A^\parallel < 1$), since otherwise the instability pattern would not have been observed in AIA images. A stability analysis of a sheared MHD flow of finite thickness (i.e., $\Delta \ne 0$) in a compressible plasma has also been performed \citep{miura1982}, confirming the result of the purely hydrodynamic case. Compressibility has a stabilizing effect in the sense that the growth rate is reduced as the velocity jump approaches the speed of sound, and even stalls the instability when the Mach number becomes unity \citep{miura1982}. From {\it Table 2} of \citet{foullon2013}, we derive a shear flow amplitude $U_0 = 340~km.s^{-1}$, which remains below the speed of sound at both sides of the CME-pause. 
For the sake of simplicity, we neglect the effect of compressibility, which would bring an extra parameter to the problem: the Mach number. Yet another effect that might become relevant for the evolution of the KH instability is the presence of a density contrast between both sides of the shear flow \citep{prialnik1986,gonzalez1994,wyper2013}. However, for the particular event under consideration it is not expected to play a role, since the mass density at both sides of the CME-pause remains virtually the same \citep{foullon2013}. If we approximate the hyperbolic tangent profile given in Eqn.~(\ref{eq:tanh-shear}) by piecewise linear functions, the instability growth rate $\gamma_{KH}$ arising from the linearized version of Eqs.~(\ref{eq:NS})-(\ref{eq:ind}) is (for details, see \citet{drazin1981}) \begin{equation} \left(\frac{\gamma_{KH}\Delta}{U_0}\right)^2 = \frac{1}{4}\left(e^{-4k_y\Delta} - (2k_y\Delta - 1)^2\right) , \label{eq:gamma} \end{equation} which attains its maximum at $\lambda_{max}\approx 15.7\ \Delta$ and $\gamma_{max}\approx 0.2 U_0/\Delta$, as shown in Figure~\ref{fig:fig2}. More importantly, Figure~\ref{fig:fig2} also shows that the KH instability only arises for large-scale modes, i.e. such that $k_y\Delta\le 0.64$, corresponding to $\lambda \ge 9.82\Delta$. We perform numerical integrations of Eqs.~(\ref{eq:NS})-(\ref{eq:ind}) subjected to the external force $\mbox{\boldmath $ F $}_0 = -\nu\nabla^2 U_{0y}(x)\mbox{\boldmath $\hat y$}$ (where $U_{0y}(x)$ is given in Eqn.~(\ref{eq:tanh-shear})) on the cubic box of linear size $2\pi$ sketched in Figure~\ref{fig:fig1}, assuming periodic boundary conditions in all three directions. The number of gridpoints is $256^3$ and the dimensionless Alfv\'en speed was set at $v_A^\parallel = 0.2$ in all our simulations, indicating that the component of the external magnetic field parallel to the flow (i.e. 
$B_{0y}$, see Fig.~\ref{fig:fig1}) is such that its associated Alfv\'en velocity component remains smaller than the maximum velocity $U_0$ of the shear profile, and it is therefore unable to quench the instability. This is indeed the case for the 2010 November 3 KH event. \citet{nykyri2013} performed a series of 2.5D MHD simulations seeking to match the time development of the KH pattern observed by AIA/SDO. Their numerical quest is consistent with slightly different magnetic field intensities at either side of the shear layer within the range of 8-11 G, forming small angles with the $\hat{z}$-direction (between $1^\circ$ and $10^\circ$, see Fig.~\ref{fig:fig1}), which leads to values of $v_A^\parallel$ in the range of $v_A^\parallel \approx 0.04-0.31$. In our simulations, we use a pseudospectral method to perform the spatial derivatives and a second order Runge-Kutta scheme for the time integration (see a detailed description of the code in \citet{gomez2005}). For the viscosity and resistivity coefficients we chose $\nu = \eta = 2\times 10^{-3}$, which are small enough to produce energy dissipation only at very small scales, comparable to the Nyquist wavenumber. In particular, dissipative effects are certainly negligible for all wavenumbers becoming unstable due to KH (see Eqn.~(\ref{eq:gamma}) and the text right below it). The values of all the dimensionless parameters adopted for our simulations are summarized in Table~\ref{table}. In all simulations, the pressure in Eqn.~(\ref{eq:NS}) is obtained self-consistently by taking the divergence of the equation, using the incompressibility condition, and solving at each time step the resulting Poisson equation for the pressure. The evolution of the $\mbox{\boldmath $\hat z$}$-component of vorticity is shown in Figure~\ref{fig:fig3} at three different times, displaying the characteristic pattern of the KH instability. 
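The quoted properties of the dispersion relation Eqn.~(\ref{eq:gamma}) are easy to verify numerically. The sketch below (a simple grid search, taking the simulation values $\Delta = 0.1$ and $U_0 = 1$ in code units) recovers $\lambda_{max}\approx 15.7\,\Delta$, $\gamma_{max}\approx 0.2\,U_0/\Delta$, the marginal mode $k_y\Delta \approx 0.64$, the growth rate $\gamma_{KH}(k_y=1)\approx 0.87$, and the negligible size of the viscous correction $\nu k_y^2$:

```python
import numpy as np

U0, Delta, nu = 1.0, 0.1, 2e-3   # dimensionless simulation values from the text

def gamma_kh(ky, U0=U0, Delta=Delta):
    """KH growth rate from Eqn (gamma); zero where the mode is stable."""
    x = ky * Delta
    f = np.exp(-4.0 * x) - (2.0 * x - 1.0) ** 2
    return 0.5 * (U0 / Delta) * np.sqrt(np.maximum(f, 0.0))

# Grid search over k*Delta for the maximum and the marginal wavenumber.
x = np.linspace(1e-4, 1.0, 200001)
g = gamma_kh(x / Delta)

x_max = x[np.argmax(g)]                   # ~0.40
lam_max_over_Delta = 2.0 * np.pi / x_max  # ~15.7: fastest-growing wavelength
gamma_max = g.max() * Delta / U0          # ~0.20 (in units of U0/Delta)
x_marginal = x[np.where(g > 0)[0].max()]  # ~0.64: larger k*Delta is stable

gamma_ky1 = gamma_kh(1.0)                 # ~0.87, the mode seen in Fig. 3
viscous_correction = nu * 1.0 ** 2        # 0.002 << 0.87: negligible
```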
The observed frame corresponds to the right half of the numerical box displayed in Figure~\ref{fig:fig1}, which covers the shear layer centered at $x_0=3\pi/2$, and has been rotated for better viewing. The observed pattern shows the presence of the largest Fourier mode in our numerical box, characterized by $k_y=1$, whose growth rate according to Eqn.~(\ref{eq:gamma}) is $\gamma_{KH}(k_y = 1)\simeq 0.87$. At the same time, the presence of harmonics is also apparent, judging by the smaller scale patterns showing up as the instability progresses. In fact, from Eqn.~(\ref{eq:gamma}) (see also Figure~\ref{fig:fig2}) we can anticipate which Fourier modes will grow. To estimate the instability growth rate, we use the component $u_x(x_0,y,z)$ evaluated at $x_0=\pi/2, 3\pi/2$ (i.e., in the central part of the shear flows) as a proxy (see Figure~\ref{fig:fig4}). A Fourier analysis performed on $u_x(x_0,y,z)$ for any fixed value $z$ confirms that the exponentially growing modes belong to the interval $k_y=1,\dots , 6$, which is consistent with the theoretical prediction shown in Figure~\ref{fig:fig2} for $\Delta = 0.1$. Since the KH pattern is a two-dimensional flow taking place at $z=constant$ planes, we take the maximum velocity of the profile $u_x(x_0,y,z)$ at any given value of $z$, and then average in the $\mbox{\boldmath $\hat z$}$-direction, i.e. \begin{equation}\label{eq:uxmax} U_{x,max}=\int_0^{2\pi} \frac{dz}{2\pi} \max \left[ u_x(x_0,y,z), 0\le y < 2\pi \right] \ . \end{equation} In Figure~\ref{fig:fig5} we show the maximum value of the $u_x(x_0,y,z)$ profile (averaged with respect to the $\mbox{\boldmath $\hat z$}$-direction) for both $x_0=\pi/2$ and $x_0=3\pi/2$, although as expected the two curves are indistinguishable. 
The straight gray line corresponds to the theoretically predicted growth rate $\gamma_{KH}\simeq 0.87$ for the Fourier mode $k_y=1$ (using Eqn.~(\ref{eq:gamma})), which is the one observed in the time sequence shown in Fig.~\ref{fig:fig3}. The fact that our empirical determination of the growth rate so strongly resembles $\gamma_{KH} (k_y=1)$, even though (as mentioned) the observed pattern is more complex than a single Fourier mode, arises as the combined result of the $z$-averaging and our choice of the maximum of the velocity profile, as defined in Eqn.~(\ref{eq:uxmax}). Note that even though the simulations include dissipative effects and the theoretical prediction does not, the coincidence between both curves during the linear regime of the instability is nonetheless remarkable. Considering that the attenuation effect of viscosity can be estimated by $\gamma \simeq \gamma_{KH}-\nu k_y^2$, we can easily verify that the dissipative correction is absolutely negligible for the evolution of the KH instability, as expected. \section{The turbulent corona}\label{turb} To generate a turbulent background in our simulations, we apply a stationary force to all modes within a thin spherical shell of radius $k_{turb}=1/l_{turb}$ in Fourier space, consisting of a superposition of harmonic modes with random phases. The nonlinear interactions between these Fourier modes, which are being externally driven with a force of intensity $f_{turb}$, develop a stationary turbulent regime with its associated energy cascade involving all wavenumbers $k\ge k_{turb}$. To make sure that it is a small-scale turbulence, we chose $l_{turb}$ to be much smaller than the wavelength observed for the KH pattern, and even somewhat smaller than the thickness $\Delta$ of the shear layer (i.e. $l_{turb} < \Delta$). The pattern of vorticity obtained when only the turbulent forcing is applied (i.e. a simulation with no KH driving) is shown in Figure~\ref{fig:fig6}. 
The observed pattern corresponds to a turbulent regime which is statistically stationary, homogeneous and isotropic. Even though all spatial scales from $l_{turb}$ down to the smallest scales available in the simulation participate in the dynamics and in the ensuing energy cascade, only those vortices of sizes comparable to $l_{turb}$ can be identified, which is to be expected for a power-law power spectrum with a negative index such as Kolmogorov's. Therefore, these concentrations of vorticity can safely be associated with the energy-containing eddies of the turbulence. As mentioned in \S~\ref{intro}, the expected effect of this small-scale turbulence on a larger scale flow is an effective or enhanced diffusivity. In the case at hand, its effect on the instability growth rate is expected to be \begin{equation}\label{eq:gamma-turb} \gamma(k) = \gamma_{KH}(k) - \nu_{turb} k^2\ , \end{equation} where $\gamma_{KH}(k)$ is given in Eqn.~(\ref{eq:gamma}) and $\nu_{turb}$ is the aforementioned effective or turbulent viscosity. The effect of increasing turbulent viscosity on the instability growth rate is illustrated in Figure~\ref{fig:fig7}, showing that not only is the growth rate reduced but the range of unstable wavenumbers is also narrowed. We performed simulations applying both the large-scale force $\mbox{\boldmath $ F $}_0$ to drive the KH instability and the small-scale force of intensity $f_{turb}$ to drive the turbulent regime. In Figure~\ref{fig:fig8} we show the resulting distribution of the vorticity component $\omega_z(x,y)$, which can be compared with the one shown in Figure~\ref{fig:fig3} for the KH instability on a laminar background, and the one shown in Figure~\ref{fig:fig6} for the purely turbulent run, with no KH pattern. We can qualitatively see that the role of turbulence is in fact to attenuate the growth of the instability. 
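The attenuation predicted by Eqn.~(\ref{eq:gamma-turb}) can be quantified with a short sketch (again taking the assumed code-unit values $\Delta = 0.1$ and $U_0 = 1$): as $\nu_{turb}$ grows, the band of unstable integer modes $k_y = 1,\dots,6$ in the box shrinks from the high-$k$ end, and the instability is fully suppressed once $\nu_{turb}$ exceeds $\gamma_{KH}(k_y=1)\approx 0.87$:

```python
import math

U0, Delta = 1.0, 0.1   # dimensionless simulation values assumed in the text

def gamma_kh(ky, U0=U0, Delta=Delta):
    """Laminar KH growth rate from Eqn (gamma); zero for stable modes."""
    x = ky * Delta
    f = math.exp(-4.0 * x) - (2.0 * x - 1.0) ** 2
    return 0.5 * (U0 / Delta) * math.sqrt(max(f, 0.0))

def unstable_modes(nu_turb, ky_max=10):
    """Integer modes with gamma_KH(ky) - nu_turb*ky**2 > 0, Eqn (gamma-turb)."""
    return [ky for ky in range(1, ky_max + 1)
            if gamma_kh(ky) - nu_turb * ky ** 2 > 0.0]

# Increasing the turbulent viscosity narrows the unstable band from the
# high-k end, until even ky = 1 is quenched for nu_turb > ~0.87.
counts = {nu: len(unstable_modes(nu)) for nu in (0.0, 0.1, 0.3, 0.6, 0.9)}
# counts -> {0.0: 6, 0.1: 4, 0.3: 2, 0.6: 1, 0.9: 0}
```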
One of the observable consequences of this turbulent regime is the nonthermal broadening of coronal spectral lines caused by the turbulent motion of the fluid elements emitting these (optically thin) spectral lines. Once this turbulence reaches a Kolmogorov stationary regime, the rms value of the turbulent velocity $u_{turb}$ is given by \begin{equation}\label{eq:Eturb} E_{turb} = \frac{u_{turb}^2}{2}=\int_{1/l_{turb}}dk\ \epsilon^{2/3}\ k^{-5/3} \propto (\epsilon\ l_{turb})^{2/3}\ , \end{equation} where $E_{turb}$ is the (dimensionless) kinetic energy density of the turbulence and $\epsilon$ is its energy dissipation rate. Note that neither $\epsilon$ nor $E_{turb}$ are known a priori, since they arise as a result of the stationary regime attained by the turbulence. However, using heuristic arguments we can find how these quantities scale with the input parameters of this turbulence, namely $l_{turb}$ and $f_{turb}$. The fluid is energized by the work done per unit time by the external force of intensity $f_{turb}$ at scale $l_{turb}$; energy then cascades down to smaller scales and is dissipated by viscosity at the rate $\epsilon$ at dissipative scales. In a stationary regime, the power delivered by the external force should match the energy dissipation rate, i.e. \begin{equation}\label{eq:eps} \epsilon \propto f_{turb}\ u_{turb}\ . \end{equation} Equations~(\ref{eq:Eturb})-(\ref{eq:eps}) can be combined to obtain both $u_{turb}$ and $\epsilon$ in terms of $f_{turb}$ and $l_{turb}$, \begin{equation}\label{eq:eps-f} \epsilon \propto ( f_{turb}^3\ l_{turb} )^{1/2}\ , \end{equation} \begin{equation}\label{eq:uturb-f} u_{turb} \propto (f_{turb}\ l_{turb})^{1/2}\ . \end{equation} On dimensional grounds, the turbulent viscosity introduced in Eqn.~(\ref{eq:gamma-turb}) has to be proportional to the turbulent velocity $u_{turb}$ times the typical scale $l_{turb}$, i.e. 
$\nu_{turb}\propto u_{turb}\ l_{turb}$, which, considering Eqn.~(\ref{eq:uturb-f}), becomes \begin{equation}\label{eq:nuturb} \nu_{turb} = C ( f_{turb}\ l_{turb}^3 )^{1/2}\ . \end{equation} \section{Numerical results}\label{num} To quantify the role of turbulence in the evolution of the KH instability, we performed a sequence of simulations for which the only parameter being changed is the turbulent forcing $f_{turb}$. As the parameter $f_{turb}$ is gradually increased, the corresponding turbulent velocity $u_{turb}$ (observationally perceived as nonthermal broadening of spectral lines) is also increased, which in turn raises the turbulent viscosity $\nu_{turb}$. As a result, the instability growth rate (see Eqn.~(\ref{eq:gamma-turb})) is expected to be reduced. To estimate the instability growth rate from our simulations, we follow the same procedure described in \S~\ref{khinst}, which amounts to following the temporal evolution of the profile $u_x(y)$ for the gridpoints centered at the shear layer. Note, however, that now the velocity at each grid point can be split into a part corresponding to the large-scale KH evolution plus another part corresponding to the turbulence. Because of the geometrical setup of our simulations, the large-scale parts of the flow at all $z=constant$ planes are exact replicas of one another (KH is a two-dimensional flow), while the turbulent part is not, since it is a fully three-dimensional flow. The averaging procedure in the $\mbox{\boldmath $\hat z$}$-direction described in Eqn.~(\ref{eq:uxmax}) gets rid of the turbulent part of the flow, since the mean velocity of this turbulence is exactly zero. We can also compute the rms deviation of the velocity when averaging in the $\mbox{\boldmath $\hat z$}$-direction, which should exactly correspond to $u_{turb}$, since the KH part of the flow is identical for all $z=constant$ planes. Therefore, this statistical strategy allows us to obtain the main features of both the large-scale (i.e. 
the KH instability) and small-scale (the turbulence) components of this complex flow. Figure~\ref{fig:fig9} shows the main result of the present study, which is the value of $U_{x,max}$ (defined in Eqn.~(\ref{eq:uxmax})) as a function of time in a lin-log plot, for runs corresponding to different turbulent intensities. The thick black lines correspond to $U_{x,max}(t)$ for each simulation, the thin black lines indicate one standard deviation with respect to the average (i.e. $U_{x,max}\pm u_{turb}$), and the straight gray lines are the theoretical predictions for each case, as emerges from Eqn.~(\ref{eq:gamma-turb}). Note that the theoretical slopes (i.e. the gray lines in Figure~\ref{fig:fig9}) are not best fits to each of the simulations, but the result arising from Eqn.~(\ref{eq:gamma-turb}), which contains only one free parameter for the whole set of simulations, namely the constant $C$. This constant is the only dimensionless parameter that remains undetermined by the dimensional analysis described above. We find that the value of $C$ that best fits all our simulations is $C\approx 18.8$. \section{Discussion}\label{discu} In the previous section, we presented results from numerical simulations showing the role of a background turbulence in reducing the growth rate of an ongoing KH instability. These numerical results are intended to simulate the KH instability being developed at the interface between some CMEs and the ambient corona, a phenomenon which has recently been reported in the literature. There is also mounting observational evidence for the turbulent nature of the solar corona, mostly related to spatially unresolved motions leading to measurable nonthermal broadenings in coronal spectral lines. To numerically model this turbulent background, we made a number of simplifying assumptions. For instance, we assume the turbulent regime to be spatially homogeneous, isotropic and stationary. 
We maintain this turbulent state throughout the whole simulation by applying a stationary stirring force of intensity $f_{turb}$ at a well defined lengthscale $l_{turb}$. We deliberately chose this lengthscale to be much smaller than the wavelength of the KH unstable mode, since the AIA images reporting the KH pattern do not show any observable evidence of a turbulent background. Also, the rotation period of the energy-containing vortices is of the order of $\tau_{turb}\simeq l_{turb}/u_{turb}$, which remains shorter than the instability growth time for all the cases considered. The properties of this turbulent regime are therefore determined by only two input parameters: $l_{turb}$, which is kept fixed throughout the whole study, and $f_{turb}$, which is varied to give rise to cases with different turbulent velocities ($u_{turb}$) and effective viscosities ($\nu_{turb}$). We can use Eqs.~(\ref{eq:uturb-f})-(\ref{eq:nuturb}) to express the effective viscosity $\nu_{turb}$ in terms of two measurable quantities such as $u_{turb}$ and $l_{turb}$. A crude estimate of the dimensionless constant in Eqn.~(\ref{eq:uturb-f}) leads to $u_{turb} \approx 0.22\ (f_{turb}\ l_{turb})^{1/2}$ and therefore \begin{equation}\label{eq:nuturb-uturb} \nu_{turb} \approx 85.4\ u_{turb}\ l_{turb}\ . \end{equation} If we refer, for instance, to the KH event that occurred on 2010 November 3 and was reported by \citet{foullon2011}, these authors estimate a shear flow amplitude at the interface of $U_0 = 340\ km.s^{-1}$ (one half of the velocity jump) and a wavelength for the KH pattern of $\lambda = 2\pi L_0 = 18.5\ Mm$ (corresponding to a length unit of $L_0 = 3\ Mm$ and $k_y = 1$ in our simulations). For $k_y = 1$, the dispersion relation reduces to $\gamma \approx 0.87 - \nu_{turb}$, as shown in Eqn.~(\ref{eq:gamma-turb}). The instability growth rate estimated by \citet{foullon2013} for this event is $\gamma \approx 0.033\ s^{-1}$, which in our dimensionless units becomes $\gamma L_0/U_0 = 0.29 = 0.87 - \nu_{turb}$. 
From this expression we can estimate the value of $\nu_{turb}$ required to explain the growth rate observed for this particular KH event. More interestingly, using Eqn.~(\ref{eq:nuturb-uturb}) we can obtain a level of turbulent velocity of $u_{turb}\approx 47~km.s^{-1}$ (for the value of $l_{turb}$ used in our simulations), which is well within the range reported by \citet{doschek2014} from Hinode observations. It is important to recall that other effects besides turbulence might contribute to reduce the instability growth rate. Depending on the parameter values of the particular KH event being considered, the compressibility of the plasma or the strength of the magnetic field component along the shear flow might play a role. Another consequence that we can derive from the present analysis is that, given the fact that the turbulence did not completely suppress the KH instability, we can in principle use Equations~(\ref{eq:gamma-turb})-(\ref{eq:nuturb-uturb}) to estimate an upper bound for $l_{turb}$ for any observed value of $u_{turb}$. For the turbulent attenuation to be negligible (i.e. $\nu_{turb}\ll 0.87$) and assuming a turbulent velocity of $60\ km.s^{-1}$ (see \citet{doschek2014}), we obtain for $l_{turb}$ an upper bound of $170\ km$. In general, \begin{equation}\label{eq:lturb} l_{turb}\ll 170\ km\ \left(\frac{u_{turb}}{60\ km.s^{-1}}\right)^{-1}\ . \end{equation} In summary, in order for the invoked turbulent state to produce nonthermal broadening of spectral lines of the order of $u_{turb}$ and at the same time not to affect the observed KH event in any appreciable manner, the typical size $l_{turb}$ of its energy-containing eddies should satisfy Eqn.~(\ref{eq:lturb}). \section{Conclusions}\label{conclu} The study presented in this paper was motivated by two relatively recent observational findings on the nature of the solar corona. 
One of them is the apparent development of the Kelvin-Helmholtz instability as some CMEs expand in the ambient corona, as shown by AIA/SDO images \citep{foullon2011,foullon2013,ofman2011}. The second one is that the coronal plasma seems to be in a turbulent state, as evidenced by the nonthermal broadening of coronal spectral lines measured from EIS/Hinode data \citep{doschek2008,brooks2011,tian2012,doschek2014}. Our main goal has been to study the feasibility for these two apparently dissimilar features to coexist: namely, the large-scale laminar pattern observed for the KH instability, and the small-scale spatially unresolved turbulent motions leading to the observed nonthermal broadenings. We therefore performed three-dimensional simulations of the MHD equations, to study the evolution of the KH instability in the presence of a turbulent ambient background for different intensities of this turbulence. Theoretically, the effect of a small-scale turbulence on a large-scale flow is to produce an enhanced diffusivity which can be modeled by an effective or turbulent viscosity. The impact of this small-scale turbulence on an ongoing large-scale instability such as KH would then be a reduction of its growth rate, as emerges from Eqn.~(\ref{eq:gamma-turb}). The degree of this reduction is controlled by the turbulent viscosity $\nu_{turb}$, which we obtained from a dimensional analysis to be $\nu_{turb} = C (f_{turb}\ l_{turb}^3 )^{1/2}$ (see Eqn.~(\ref{eq:nuturb})), leaving only the dimensionless constant $C$ undetermined. The comparison between the instability growth rates obtained from our simulations and the ones arising from Eqn.~(\ref{eq:gamma-turb}) essentially confirms this theoretical scenario, while providing an empirical determination for the dimensionless constant $C$, which amounts to $C\approx 18.8$. 
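The chain of estimates underlying these conclusions can be reproduced in a few lines. The sketch below recovers the coefficient $85.4 \approx C/0.22$ of Eqn.~(\ref{eq:nuturb-uturb}), the turbulent velocity of $\approx 47~km.s^{-1}$ inferred for the 2010 November 3 event, and the upper bound of $\approx 170~km$ for $u_{turb} = 60~km.s^{-1}$; note that the value $l_{turb} = 0.05$ (in box units) is an ASSUMED illustrative choice, since the text only constrains $l_{turb} < \Delta = 0.1$:

```python
# Dimensional-analysis chain from the Discussion section.
C = 18.8                 # fitted constant in nu_turb = C*(f_turb*l_turb**3)**0.5
A = 0.22                 # crude estimate: u_turb ~ A*(f_turb*l_turb)**0.5

coeff = C / A            # nu_turb = coeff * u_turb * l_turb  ->  ~85.4

# 2010 November 3 event, in the paper's dimensionless units.
U0 = 340.0               # velocity unit [km/s]
L0 = 3.0e3               # length unit [km] (L0 ~ 3 Mm)
gamma_obs = 0.29         # observed growth rate, in units of U0/L0
nu_turb = 0.87 - gamma_obs   # required turbulent viscosity -> 0.58

# Turbulent velocity implied by nu_turb = coeff*u*l. The value l_turb = 0.05
# is an ASSUMPTION for illustration (the text only requires l_turb < 0.1).
l_turb = 0.05
u_turb_kms = nu_turb / (coeff * l_turb) * U0   # ~46 km/s, cf. ~47 in the text

# Upper bound on l_turb for the turbulence not to affect the instability
# (nu_turb << 0.87), for a nonthermal velocity of 60 km/s (Doschek et al.).
u60 = 60.0 / U0
l_bound_km = 0.87 / (coeff * u60) * L0          # ~173 km, cf. Eqn (lturb)
```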
Perhaps more importantly, since $\nu_{turb} \propto u_{turb}\ l_{turb}$ and given the fact that the instability has not been completely quenched by the turbulence (otherwise it would not have been observed), observational determinations of $u_{turb}$ from nonthermal broadenings pose an upper limit to the correlation length of the turbulence $l_{turb}$. For observational values of $u_{turb} \approx 20-60\ km.s^{-1}$, the correlation length of turbulence is expected to be smaller than about $l_{turb}\approx 170-510\ km$, which is consistent with not having been spatially resolved by current coronal imaging spectrometers such as EIS aboard Hinode. \acknowledgments DG and EED acknowledge financial support from grant SP02H1701R from Lockheed-Martin to SAO. DG also acknowledges support from PICT grant 0454/2011 from ANPCyT to IAFE and PDM acknowledges support from PICTs 2011-1529 and 2011-1626 from ANPCyT to IFIBA (Argentina).
\section{Introduction.} The only phenomenon of $CP$ violation observed so far \cite{ChristensonCroninFitchTurlay} \cite{Argus}, called ``indirect'' $CP$ violation \cite{Nir}, is that some electroweak mass eigenstates are not $CP$ eigenstates. The unavoidable presence, for a number of generations $N/2 \geq 3$, of a complex number among the entries of the quark mixing matrix \cite{KobayashiMaskawa} is the preferred mechanism to trigger it in the framework of a $SU(2)_L \times U(1)$ electroweak gauge theory for quarks \cite{GlashowSalamWeinberg}. Other possibilities require enlarging the scalar sector of the model \cite{LeeWeinberg}. We shall deal here with mesons only, and stick to their interpretation as composite di-quark fields, first proposed by Gell-Mann \cite{GellMann} for the case of three flavours; the success of the $SU(3)$ classification of the corresponding eigenstates was next extended to $N$ flavours, with a similar role played by the diagonal subgroup of the chiral $U(N)_L \times U(N)_R$ group. The importance of the latter and of its breaking, especially as far as strong interactions are concerned, was put forward long ago \cite{CurrentAlgebra}; we shall refer to the corresponding eigenstates as the ``flavour'' or ``strong'' eigenstates (strong interactions are considered to be flavour independent). The quarks being in the fundamental representation of $U(N)$, one is naturally led to consider mesons as $N \times N$ matrices \cite{Machet1}; each of them is given in addition a quantum number $(+1)$ or $(-1)$ when acted upon by the parity-changing operator $\cal P$, such that their total number $(2N^2)$ of degrees of freedom matches that of scalar and pseudoscalar $J=0$ mesons. The action on them of the $U(N) \times U(N)$ generators, which are $N \times N$ matrices, too, is defined inside the associative algebra that they form. 
Fermions can then be forgotten, though the group action as defined in \cite{Machet1} can be easily recovered by acting with the chiral group on both fermionic components of the mesonic wave function and introducing the appropriate ``left'' and ``right'' projectors, with a $\gamma_5$ matrix, respectively for the generators of $U(N)_L$ and $U(N)_R$. The extension of the Glashow-Salam-Weinberg model \cite{GlashowSalamWeinberg} to $J=0$ mesons that I proposed in \cite{Machet1} is thus a $SU(2)_L \times U(1)$ gauge theory of matrices. As the action of the gauge group can only be defined if its generators are also $N \times N$ matrices, it is considered as a subgroup of the chiral group. Its orientation within the latter has to be compatible with the customary action of the electroweak group on fermions, and is determined by a unitary $N/2 \times N/2$ matrix which is nothing else than the Cabibbo-Kobayashi-Maskawa mixing matrix $\Bbb K$ \cite{Cabibbo} \cite{KobayashiMaskawa}. The $SU(2)_L$ generators are \footnote{This construction of course requires an even number $N$ of flavours.} \begin{equation} {\Bbb T}^3_L = {1\over 2}\left(\begin{array}{rrr} {\Bbb I} & \vline & 0\\ \hline 0 & \vline & -{\Bbb I} \end{array}\right),\ {\Bbb T}^+_L = \left(\begin{array}{ccc} 0 & \vline & {\Bbb K}\\ \hline 0 & \vline & 0 \end{array}\right),\ {\Bbb T}^-_L = \left(\begin{array}{ccc} 0 & \vline & 0\\ \hline {\Bbb K}^\dagger & \vline & 0 \end{array}\right), \label{eq:SU2L} \end{equation} and act trivially on the $N$-vector of quarks \begin{equation} \Psi = \left( \begin{array}{c} u\\ c\\ \vdots \\d\\ s\\ \vdots \end{array} \right). \label{eq:psi}\end{equation} $\Bbb I$ is the $N/2 \times N/2$ identity matrix. 
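As a consistency check (an added sketch, not part of the original construction), one can verify numerically that the generators of Eqn.~(\ref{eq:SU2L}) close on the $su(2)$ commutation relations for any unitary mixing matrix; below, $N=4$ and ${\Bbb K}$ is taken to be a Cabibbo rotation with an arbitrary angle.

```python
import numpy as np

# Sketch: verify that T3, T+, T- of Eqn. (eq:SU2L) satisfy the su(2) relations
# [T3, T+-] = +-T+-  and  [T+, T-] = 2 T3, for N = 4 and a Cabibbo matrix K.
theta = 0.227  # Cabibbo angle in radians; any value works, K only needs to be unitary
c, s = np.cos(theta), np.sin(theta)
K = np.array([[c, s], [-s, c]])
I2, Z2 = np.eye(2), np.zeros((2, 2))

T3 = 0.5 * np.block([[I2, Z2], [Z2, -I2]])
Tp = np.block([[Z2, K], [Z2, Z2]])
Tm = np.block([[Z2, Z2], [K.conj().T, Z2]])

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(T3, Tp), Tp)
assert np.allclose(comm(T3, Tm), -Tm)
assert np.allclose(comm(Tp, Tm), 2 * T3)
print("su(2) algebra verified for unitary K")
```

The check only uses the unitarity of $\Bbb K$ (${\Bbb K}{\Bbb K}^\dagger = {\Bbb I}$), as in the text.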
The $U(1)$ generator satisfies the Gell-Mann-Nishijima relation (written in its ``chiral'' form) \begin{equation} ({\Bbb Y}_L,{\Bbb Y}_R) = ({\Bbb Q}_L,{\Bbb Q}_R) - ({\Bbb T}^3_L,0), \label{eq:GMN}\end{equation} and the customary electric charge operator \begin{equation} {\Bbb Q} = \left(\begin{array}{ccc} 2/3 & \vline & 0\cr \hline 0 & \vline & -1/3 \end{array}\right), \end{equation} yields back the usual expressions for the ``left'' and ``right'' hypercharges \begin{equation} {\Bbb Y}_L = {1\over 6}{\Bbb I}, \quad {\Bbb Y}_R = {\Bbb Q}_R. \end{equation} $\Bbb Q$ turns out to be the ``third'' generator of the custodial $SU(2)_V$ symmetry uncovered in \cite{Machet1}. The electroweak eigenstates can be classified into two types of quadruplets, respectively ``even'' and ``odd'' by the parity changing operator $\cal P$. Both read \vbox{ \begin{eqnarray} & &\Phi(\Bbb D)= ({\Bbb M}\,^0, {\Bbb M}^3, {\Bbb M}^+, {\Bbb M}^-)(\Bbb D)\cr = & & \left[ {1\over \sqrt{2}}\left(\begin{array}{ccc} {\Bbb D} & \vline & 0\\ \hline 0 & \vline & {\Bbb K}^\dagger\,{\Bbb D}\,{\Bbb K} \end{array}\right), {i\over \sqrt{2}} \left(\begin{array}{ccc} {\Bbb D} & \vline & 0\\ \hline 0 & \vline & -{\Bbb K}^\dagger\,{\Bbb D}\,{\Bbb K} \end{array}\right), i\left(\begin{array}{ccc} 0 & \vline & {\Bbb D}\,{\Bbb K}\\ \hline 0 & \vline & 0 \end{array}\right), i\left(\begin{array}{ccc} 0 & \vline & 0\\ \hline {\Bbb K}^\dagger\,{\Bbb D} & \vline & 0 \end{array}\right) \right],\cr & & \label{eq:reps} \end{eqnarray} } where $\Bbb D$ is a real $N/2 \times N/2$ matrix. That the entries ${\Bbb M}^+$ and ${\Bbb M}^-$ are, up to a sign, hermitian conjugate ({\em i.e.} charge conjugate) requires that the $\Bbb D$'s are restricted to symmetric or antisymmetric matrices. Because of the presence of an ``$i$'' for ${\Bbb M}^{3,\pm}$ and not for ${\Bbb M}^0$, the quadruplets always mix entries of different behaviour by hermitian (charge) conjugation, and are consequently not hermitian representations. 
Each of them is the sum of two doublets of $SU(2)_L$, and also the sum of one singlet plus one triplet of the custodial diagonal $SU(2)_V$. The $\cal P$-even and $\cal P$-odd quadruplets do not transform in the same way by $SU(2)_L$ (the Latin indices $i,j,k$ run from $1$ to $3$); for ${\cal P}$-even quadruplets, one has \begin{eqnarray} {\Bbb T}^i_L\,.\,{\Bbb M}^j_{{\cal P}even} &=& -{i\over 2}\left( \epsilon_{ijk} {\Bbb M}^k_{{\cal P}even} + \delta_{ij} {\Bbb M}_{{\cal P}even}^0 \right),\cr {\Bbb T}^i_L\,.\,{\Bbb M}_{{\cal P}even}^0 &=& {i\over 2}\; {\Bbb M}_{{\cal P}even}^i; \label{eq:actioneven} \end{eqnarray} while ${\cal P}$-odd quadruplets transform according to \begin{eqnarray} {\Bbb T}^i_L\,.\,{\Bbb M}_{{\cal P}odd}^j &=& -{i\over 2}\left( \epsilon_{ijk} {\Bbb M}_{{\cal P}odd}^k - \delta_{ij} {\Bbb M}_{{\cal P}odd}^0 \right),\cr {\Bbb T}^i_L\,.\,{\Bbb M}_{{\cal P}odd}^0 &=& \hskip 5mm -{i\over 2}\; {\Bbb M}_{{\cal P}odd}^i, \label{eq:actionodd} \end{eqnarray} and only representations transforming alike, $\cal P$-even or $\cal P$-odd, can be linearly mixed. The (diagonal) charge operator acts indifferently on both types of representations by: \begin{eqnarray} {\Bbb Q}\,.\,{\Bbb M}^i &=& -i\,\epsilon_{ij3} {\Bbb M}^j,\cr {\Bbb Q}\,.\,{\Bbb M}^0 &=& 0. \label{eq:chargeaction} \end{eqnarray} The misalignment of ``strong'' and electroweak eigenstates, resulting from the one of the electroweak group with respect to the chiral group, is conspicuous from the presence of the mixing matrix in the definition (\ref{eq:reps}). 
By adding or subtracting eqs.~(\ref{eq:actioneven}) and (\ref{eq:actionodd}), and defining scalar ($\Bbb S$) and pseudoscalar ($\Bbb P$) fields by \begin{equation} ({\Bbb M}_{{\cal P}even} + {\Bbb M}_{{\cal P}odd}) = {\Bbb S}, \label{eq:scalar} \end{equation} and \begin{equation} ({\Bbb M}_{{\cal P}even} - {\Bbb M}_{{\cal P}odd}) = {\Bbb P}, \label{eq:pseudo} \end{equation} one finds two new types of stable quadruplets which include objects of different parities, but which now correspond to a given $CP$ quantum number, depending in particular on whether $\Bbb D$ is a symmetric or skew-symmetric matrix \cite{Machet1} \begin{equation} ({\Bbb M}\,^0, \vec {\Bbb M}) = ({\Bbb S}^0, \vec {\Bbb P}), \label{eq:SP} \end{equation} and \begin{equation} ({\Bbb M}\,^0, \vec {\Bbb M}) = ({\Bbb P}\,^0, \vec {\Bbb S}); \label{eq:PS} \end{equation} they transform in the same way by the gauge group, according to eq.~(\ref{eq:actioneven}), and thus can be linearly mixed. As they span the whole space of $J=0$ mesons too, this last property makes them especially convenient to build an electroweak gauge theory. Taking the hermitian conjugate of any representation $\Phi$ swaps the relative sign between ${\Bbb M}^0$ and $\vec{\Bbb M}$; as a consequence, $\Phi^\dagger_{{\cal P}even}$ transforms by $SU(2)_L$ as would formally do a ${\cal P}$-odd representation, and vice-versa; on the other hand, the quadruplets (\ref{eq:reps}) are also representations of $SU(2)_R$, the action of which is obtained by swapping eqs.~(\ref{eq:actioneven}) and (\ref{eq:actionodd}) \cite{Machet1}; so, the hermitian conjugate of a given representation of $SU(2)_L$ is a representation of $SU(2)_R$ with the same law of transformation, and vice-versa. The same result holds for any (complex) linear representation $U$ of quadruplets transforming alike by the gauge group. 
The link with usually defined $J=0$ ``strong'' mesonic eigenstates proceeds as follows: consider for example the case $N=4$, for which $\Bbb K$ shrinks back to the Cabibbo mixing matrix; the pseudoscalar $\pi^+$ meson is represented in our notation, up to a scaling factor (see below), by the matrix \begin{equation} \Pi^+ = \left( \begin{array}{rrcrr} & &\vline & 1 & 0 \\ & &\vline & 0 & 0 \\ \hline & &\vline & & \\ & &\vline & & \end{array} \right), \end{equation} since, sandwiched between two 4-vectors $\Psi$ of quarks (\ref{eq:psi}), it gives \begin{equation} \overline\Psi\ \Pi^+\ \Psi = \bar u d, \end{equation} which indeed corresponds, according to the classification by flavour $SU(4)$, to the $(+1)$ charged pion. One identifies similarly the other strong pseudoscalar mesons, for example $K^+ = \bar u s$, $D^+ = \bar c d$, $D_s^+ = \bar c s$. So, for example, with the scaling that has to be introduced (see \cite{Machet2} \cite{Machet3} \cite{Machet1}, where I show that it leads in particular to the correct leptonic decays), the pseudoscalar entry ${\Bbb P}^+$ with charge $(+1)$ \begin{equation} {\Bbb P}^+ = i \left(\begin{array}{rrcrr} & &\vline & c_\theta & s_\theta \\ & &\vline &-s_\theta & c_\theta \\ \hline & &\vline & & \\ & &\vline & & \end{array} \right), \end{equation} corresponding to the matrix \begin{equation} {\Bbb D}_1 = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right), \end{equation} represents the following linear combination of pseudoscalar mesons \begin{equation} {\Bbb P}^+({\Bbb D}_1) = i{f\over \langle H\rangle}\left(c_\theta (\pi^+ + D_s^+) + s_\theta (K^+ -D^+)\right), \end{equation} where $f$ is the leptonic decay constant of the mesons, that we consider to be the same for all of them, and $H$ is the Higgs boson (see the remark at the end of Appendix A). 
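The identification can be checked mechanically (an illustrative sketch added here; the dictionary between matrix entries and meson names is the one given above): expanding the upper-right block ${\Bbb D}_1{\Bbb K}$ of ${\Bbb P}^+$ on the matrices representing $\pi^+$, $K^+$, $D^+$ and $D_s^+$ reproduces the Cabibbo-rotated combination.

```python
import numpy as np

# Sketch: read off the flavour content of the charged entry P+ for N = 4.
# The upper-right 2x2 block has rows (u, c) and columns (d, s), so its entries
# multiply pi+ = ubar d, K+ = ubar s, D+ = cbar d and Ds+ = cbar s respectively.
theta = 0.227  # Cabibbo angle in radians
c, s = np.cos(theta), np.sin(theta)
K = np.array([[c, s], [-s, c]])
D1 = np.eye(2)

block = D1 @ K  # upper-right block of P+ (up to the overall i f/<H> factor)
content = {"pi+": block[0, 0], "K+": block[0, 1],
           "D+": block[1, 0], "Ds+": block[1, 1]}
# Expected combination: c_theta (pi+ + Ds+) + s_theta (K+ - D+)
assert np.isclose(content["pi+"], c) and np.isclose(content["Ds+"], c)
assert np.isclose(content["K+"], s) and np.isclose(content["D+"], -s)
print(content)
```

The same bookkeeping applies to any other $\Bbb D$ matrix and, with larger blocks, to $N=6$.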
\section{Quadratic invariants.} To every representation is associated a quadratic expression invariant by the electroweak gauge group $SU(2)_L \times U(1)$ \begin{equation} {\cal I} = ({\Bbb M}^0, \vec {\Bbb M})\otimes ({\Bbb M}^0, \vec {\Bbb M})= {\Bbb {\Bbb M}}\,^0 \otimes {\Bbb {\Bbb M}}\,^0 + \vec {\Bbb M} \otimes \vec {\Bbb M}; \label{eq:invar} \end{equation} the ``$\otimes$'' product is a tensor product, not the usual multiplication of matrices and means the product of fields as functions of space-time; $\vec {\Bbb M} \otimes \vec {\Bbb M}$ stands for $\sum_{i=1,2,3} {\Bbb M}\,^i \otimes {\Bbb M}\,^i$. For the relevant cases $N=2,4,6$, there exists a set of $\Bbb D$ matrices (see appendix A) such that the algebraic sum (specified below) of invariants extended over all representations defined by (\ref{eq:SP},\ref{eq:PS},\ref{eq:reps}) \begin{eqnarray} && {1\over 2} \left((\sum_{symmetric\ {\Bbb D}} - \sum_{skew-symmetric\ {\Bbb D}}) \left( ({\Bbb S}^0, \vec {\Bbb P})({\Bbb D}) \otimes ({\Bbb S}^0, \vec {\Bbb P})({\Bbb D}) - ({\Bbb P}^0, \vec {\Bbb S})({\Bbb D}) \otimes ({\Bbb P}^0, \vec {\Bbb S})({\Bbb D}) \right)\right)\cr &=& {1\over 4} \left((\sum_{symmetric\ {\Bbb D}} - \sum_{skew-symmetric\ {\Bbb D}}) \left(\Phi_{{\cal P}even}({\Bbb D})\otimes\Phi^\dagger_{{\cal P}odd}({\Bbb D}) +\Phi_{{\cal P}odd}({\Bbb D})\otimes \Phi^\dagger_{{\cal P}even}({\Bbb D}) \right) \right)\cr && \label{eq:Idiag}\end{eqnarray} is diagonal both in the electroweak basis and in the basis of strong eigenstates: in the latter basis, all terms are normalized alike to $(+1)$ (including the sign). 
Note that two ``$-$'' signs occur in eq.~(\ref{eq:Idiag}) \footnote{Eq.~(\ref{eq:Idiag}) specifies eq.~(25) of \cite{Machet1}, in which the ``$-$'' signs were not explicitly written.} :\\ - the first between the $({\Bbb P}^0, \vec{\Bbb S})$ and $({\Bbb S}^0, \vec{\Bbb P})$ quadruplets, because, as seen in eq.~(\ref{eq:reps}), the ${\Bbb P}^0$ entry of the former has no ``$i$'' factor, while the $\vec{\Bbb P}$'s of the latter do have one; as we define all pseudoscalars without an ``$i$'' (like $\pi^+ = \bar u d$), a $(\pm i)$ relative factor has to be introduced between the two types of representations, yielding a ``$-$'' sign in eq.~(\ref{eq:Idiag});\\ - the second for the representations corresponding to skew-symmetric $\Bbb D$ matrices, which have an opposite behaviour by charge conjugation ({\em i.e.} hermitian conjugation) as compared to the ones with symmetric ${\Bbb D}$'s. The kinetic part of the $SU(2)_L \times U(1)$ Lagrangian for $J=0$ mesons is built from the same combination (\ref{eq:Idiag}) of invariants, now used for the covariant derivatives of the fields with respect to the gauge group; it is thus diagonal in both the strong and electroweak basis, too. Other invariants can be built as tensor products of two representations transforming alike by the gauge group: two $\cal P$-odd or two $\cal P$-even, two $({\Bbb S}^0,\vec {\Bbb P})$, two $({\Bbb P}^0,\vec {\Bbb S})$, or one $({\Bbb S}^0,\vec {\Bbb P})$ and one $({\Bbb P}^0,\vec {\Bbb S})$; for example such is \begin{equation} {\cal I}_{1\tilde 2} = ({\Bbb S}^0,\vec {\Bbb P})({\Bbb D}_1) \otimes ({\Bbb P}^0,\vec {\Bbb S})({\Bbb D}_2) ={\Bbb S}^0({\Bbb D}_1) \otimes {\Bbb P}^0({\Bbb D}_2) + \vec {\Bbb P}({\Bbb D}_1) \otimes \vec {\Bbb S}({\Bbb D}_2). \label{eq:I12}\end{equation} According to the remark made in the previous section, all the above expressions are also invariant by the action of $SU(2)_R$. 
They naturally enter the mass terms in the Lagrangian, and there are {\em a priori} as many $(N^2/2)$ independent mass scales as there are independent representations. Introduced in a gauge invariant way, they share with the leptonic case the same arbitrariness; the ratios of mesonic masses have here the same status as the one between the muon and the electron. Note that we have given a purely electroweak origin to the mass splittings, since, from the diagonalization property of eq.~(\ref{eq:Idiag}), equal electroweak mass terms also correspond to equal mass terms for strong eigenstates. \subsection{The basic property of the quadratic invariants.} The quadratic $SU(2)_L$ invariants are not {\em a priori} self conjugate expressions \footnote{The hermitian combination (\ref{eq:Idiag}), used to build the kinetic terms, is special in this respect too.} and have consequently no definite property by hermitian conjugation; in particular, the one associated with a given representation $U$ is $U \otimes U$ and {\em not} $U \otimes U^\dagger$ (we have seen in the previous section that $U$ and $U^\dagger$ do not transform alike by the gauge group). As far as one only deals with representations of the type of eqs.~(\ref{eq:reps},\ref{eq:SP},\ref{eq:PS}), it has no consequence since each of their entries has a well defined behaviour by hermitian conjugation: the associated quadratic invariants are then always hermitian. But electroweak mass eigenstates are in general (complex) linear combinations of them with, consequently, no definite behaviour by hermitian (charge) conjugation. \section{Two results concerning \boldmath{$CP$} violation.} Let us use the invariants associated to the $N^2/4$ quadruplets (\ref{eq:SP}) and $N^2/4$ quadruplets (\ref{eq:PS}), which all transform by (\ref{eq:actioneven}), to construct a $SU(2)_L \times U(1)$ gauge Lagrangian for the $2N^2$ scalar and pseudoscalar $J=0$ mesons. 
\subsection{A first result.} Unitarity compels this Lagrangian to be hermitian, in particular its quadratic part. Suppose that it has been diagonalized and let us restrict for the sake of simplicity to a subsystem of two non-degenerate electroweak mass eigenstates $U$ and $V$; they are in general complex linear combinations of quadruplets (\ref{eq:SP}) and (\ref{eq:PS}), and transform by $SU(2)_L$ according to (\ref{eq:actioneven}). $\cal L$ reads, for example \begin{equation} {\cal L} = {1\over 2}(\partial_\mu U\otimes \partial^\mu U -\partial_\mu V\otimes \partial^\mu V - m_U^2 U\otimes U + m_V^2 V\otimes V +\cdots), \end{equation} with $m_U^2 \not = m_V^2$. Hermiticity yields the following two equations, coming respectively from the kinetic and mass terms \begin{equation}\left\{\begin{array}{l} (U\otimes U - V\otimes V)^\dagger = U\otimes U - V\otimes V,\cr (m_U^2 U\otimes U - m_V^2 V\otimes V)^\dagger =m_U^2 U\otimes U - m_V^2 V\otimes V, \end{array}\right. \end{equation} which, if we reject complex values of the (mass)$^2$, entail \begin{equation} U = \pm U^\dagger,\quad V=\pm V^\dagger; \end{equation} unitarity thus requires that the electroweak mass eigenstates be also $C$ eigenstates. 
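To make the passage to this conclusion fully explicit (a short intermediate step spelled out here for clarity): subtracting $m_V^2$ times the first condition from the second one yields $$(m_U^2-m_V^2)\,\bigl((U\otimes U)^\dagger - U\otimes U\bigr)=0,$$ so that, $m_U^2$ and $m_V^2$ being distinct, $U\otimes U$ and, in the same way, $V\otimes V$ must be separately hermitian; and a quadratic invariant $U\otimes U$ can only be hermitian if $U$ coincides, up to a sign, with its hermitian conjugate.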
Consequence: {\em if electroweak mass eigenstates are observed not to be $CP$ eigenstates, they can only be mixtures of states with different parities.} \subsection{A second result.} Suppose that we have a complex mixing matrix $\Bbb K$; the following Lagrangian for $J=0$ mesons, where the sum is extended to all representations defined by eqs.~(\ref{eq:SP},\ref{eq:PS},\ref{eq:reps}), is nevertheless hermitian, ($D_\mu$ is the covariant derivative with respect to $SU(2)_L \times U(1)$) \vbox{ \begin{eqnarray} {\cal L}= &&{1\over 2}\sum_{symmetric\ {\Bbb D}} \left(D_\mu ({\Bbb S}^0, \vec {\Bbb P})(\Bbb D) \otimes D^\mu ({\Bbb S}^0, \vec {\Bbb P})(\Bbb D) - m_D^2 ({\Bbb S}^0, \vec {\Bbb P})(\Bbb D) \otimes ({\Bbb S}^0, \vec {\Bbb P})(\Bbb D) \right.\cr &&\hphantom{{1\over 2}\sum_{symmetric\ {\Bbb D}}} \left. -\left(D_\mu ({\Bbb P}^0, \vec {\Bbb S})(\Bbb D) \otimes D^\mu ({\Bbb P}^0, \vec {\Bbb S})(\Bbb D) - \tilde m_D^2 ({\Bbb P}^0, \vec {\Bbb S})(\Bbb D) \otimes ({\Bbb P}^0, \vec {\Bbb S})(\Bbb D) \right) \right)\cr - &&{1\over 2}\sum_{skew-symmetric\ {\Bbb D}} \left(D_\mu ({\Bbb S}^0, \vec {\Bbb P})(\Bbb D) \otimes D^\mu ({\Bbb S}^0, \vec {\Bbb P})(\Bbb D) - m_D^2 ({\Bbb S}^0, \vec {\Bbb P})(\Bbb D) \otimes ({\Bbb S}^0, \vec {\Bbb P})(\Bbb D) \right.\cr &&\hphantom{{1\over 2}\sum_{symmetric\ {\Bbb D}}} \left.-\left(D_\mu ({\Bbb P}^0, \vec {\Bbb S})(\Bbb D) \otimes D^\mu ({\Bbb P}^0, \vec {\Bbb S})(\Bbb D) - \tilde m_D^2 ({\Bbb P}^0, \vec {\Bbb S})(\Bbb D) \otimes ({\Bbb P}^0, \vec {\Bbb S})(\Bbb D)\right)\right),\cr & & \end{eqnarray} } and its mass eigenstates, being the $({\Bbb S}^0, \vec {\Bbb P})$ and $({\Bbb P}^0, \vec {\Bbb S})$ representations given by (\ref{eq:SP},\ref{eq:PS}) are $CP$ eigenstates \cite{Machet1}. It is of course straightforward to also build hermitian $SU(2)_L \times U(1)$ invariant quartic terms. 
Consequence: {\em The existence of a complex phase in the mixing matrix for quarks is not a sufficient condition for the existence of electroweak mass eigenstates for $J=0$ mesons different from $CP$ eigenstates}. \section{Conclusion.} Until we observe direct $CP$ violation \cite{Nir}, and if we stick to a $SU(2)_L \times U(1)$ gauge theory of particles, the origin of observed features of $CP$ violation for $J=0$ mesons transforming like composite di-quark fields by the chiral group $U(N)_L \times U(N)_R$ should be looked for into a mixture of scalar and pseudoscalar states, and be interpreted as a simple effect of parity violation at the mesonic level. \bigskip \begin{em} \end{em} \newpage\null {\Large\bf Appendix}
\section{Introduction} Classical theory of compact Riemann surfaces has an algebraic counterpart, the theory of algebraic functions in one variable over an arbitrary constant field, developed by R. Dedekind and H. Weber. The introduction of differentials into the algebraic theory by E. Artin and H. Hasse, and of ideles and adeles by C. Chevalley and A. Weil, opened the way for the application of infinite-dimensional methods to the theory of algebraic curves. Classic examples of using such methods are J.-P. Serre's adelic interpretation of cohomology \cite{Serre}, and J. Tate's proof of the general residue theorem \cite{Tate}. In 1987, E. Arbarello, C. de Concini and V. Kac \cite{Arbarello} interpreted Tate's approach in terms of central extensions of infinite-dimensional Lie algebras, and proved the celebrated A. Weil reciprocity law on algebraic curves by using the infinite-wedge representation. In 1987, D. Kazhdan \cite{Kazhdan} and E. Witten \cite{Witten-1} proposed an adelic formulation of the quantum field theory of one-component free fermions on an algebraic curve, and in \cite{Witten-2} E. Witten outlined the approach toward other quantum field theories. Let $X$ be an algebraic curve over an algebraically closed constant field $k$, and let $L$ be a spin structure on $X$. Denote by $\mathcal{M}(L)$ the infinite-dimensional $k$-vector space of meromorphic sections of $L$ over $X$, and by $\mathcal{M}_{P}$ --- the completions of $\mathcal{M}(L)$ at all points $P\in X$. The approach in \cite{Kazhdan, Witten-1} can be succinctly summarized as follows. \begin{itemize} \item The global Clifford algebra $\mathrm{Cl}_{X}$ on $X$, a restricted direct product over all points $P\in X$ of local Clifford algebras $\mathrm{Cl}_{P}$, associated with the $k$-vector spaces $\mathcal{M}_{P}$ by the residue maps $\Res_{P}(fg)$. 
\item The adelic Clifford module --- global fermion Fock space $\mathfrak{F}_{X}$ --- a restricted $\field{Z}/2\field{Z}$-graded symmetric tensor product of the local Clifford modules $\mathfrak{F}_{P}$ over all $P\in X$. \item The ``expectation value'' functional, the linear map $\langle\,\cdot\,\rangle : \mathfrak{F}_{X}\rightarrow k$, satisfying \begin{equation} \label{fermi-1} \langle f\cdot u\rangle=0\quad\text{for all}\quad f\in\mathcal{M}(L)\subset \mathrm{Cl}_{X},\;u\in\mathfrak{F}_{X}, \end{equation} where the vector space $\mathcal{M}(L)$ is embedded diagonally into the global Clifford algebra $\mathrm{Cl}_{X}$. \end{itemize} In this pure algebraic formulation of one-component free fermions on an algebraic curve, ``products of field operators inserted at points $P\in X$'' are replaced by vectors $u=\hat{\otimes}_{P\in X}u_{P}\in\mathfrak{F}_{X}$, and the linear map $\langle\,\cdot\,\rangle$ is a mathematical way of defining ``correlation functions of quantum fields'', which at the physical level of rigor are introduced by Feynman path integral. The vector space $\mathcal{M}(L)$ acts on $\mathfrak{F}_{X}$ by ``global symmetries'', and invariance of the quantum theory of free fermions with respect to this symmetry is expressed by ``quantum conservation laws'' \eqref{fermi-1}, also called the additive Ward identities. It is proved in \cite{Witten-1, Witten-2} that if the spin structure $L$ has no global holomorphic sections, then the additive Ward identities determine the expectation value functional $\langle\,\cdot\,\rangle$ uniquely. Moreover, \eqref{fermi-1} is compatible with the global residue theorem \begin{equation*} \sum_{P\in X}\Res_{P}(fdg)=0,\quad f,g\in\mathcal{M}(L). \end{equation*} In \cite{Witten-2}, E. Witten developed the rudiments of quantum field theories associated with a ``current algebra on an algebraic curve'', and mentioned the theories associated with a ``loop group on an algebraic curve''. 
Corresponding global symmetries of these theories are, respectively, rational maps of an algebraic curve $X$ into a finite-dimensional semi-simple Lie algebra over the field $k$, and rational maps of $X$ into the corresponding Lie group. In the latter case, the analog of quantum conservation laws \eqref{fermi-1} was called ``multiplicative Ward identities'' in \cite{Witten-2}. However, as it was emphasized in \cite[Sect. IV]{Witten-2}, when the genus of $X$ is greater than zero, the Ward identities, even in the Lie-algebraic case, do not determine the expectation value functional $\langle\,\cdot\,\rangle$ uniquely. Thus the main problem of constructing quantum field theories on an algebraic curve is to find additional conditions which would determine the linear functional $\langle\,\cdot\,\rangle$ uniquely. In \cite{Takhtajan} we announced a solution of this problem for the theories in the simplest scalar case when the finite-dimensional Lie algebra is the abelian Lie algebra $k$, and the corresponding Lie group is the multiplicative group $k^{\ast}=k\setminus\{0\}$. We call these theories, respectively, quantum field theories of additive and multiplicative bosons. The proposed solution in \cite{Takhtajan} was to enlarge the global symmetries by considering algebraic analogs of the vector space of ``multi-valued additive functions on a Riemann surface'' --- the analogs of classical abelian integrals of the second kind with zero $a$-periods --- and the group of ``multi-valued multiplicative functions on a Riemann surface'' --- the analogs of exponentials of abelian integrals of the third kind with zero $a$-periods. Though the classical theory of abelian integrals on a compact Riemann surface was already developed by Riemann (see, e.g., \cite{Iwasawa} and \cite{Kra} for the modern exposition), the corresponding algebraic theory --- ``integral calculus on algebraic curves'' --- has not been fully developed. 
In the present paper we fill this gap for the case when the constant field $k$ has characteristic zero, and give an explicit construction of quantum field theories of additive and multiplicative bosons on an algebraic curve. These theories are naturally associated with the algebraic de Rham theorem and A. Weil reciprocity law, and their corresponding global symmetries are, respectively, the vector space of additive multi-valued functions and the group of multiplicative multi-valued functions. Here is the more detailed content of the paper. For the convenience of the reader, in Section \ref{Basic facts} we recall necessary facts of the theory of algebraic curves. Namely, let $X$ be an algebraic curve of genus $g$ over an algebraically closed constant field $k$, $F=k(X)$ be the field of rational functions on $X$, and $F_{P}$ be the corresponding local fields --- completions of the field $F$ with respect to the regular discrete valuations $v_{P}$ corresponding to the discrete valuation rings at points $P\in X$. In Section \ref{Definitions} we introduce the ring of adeles $$\field{A}_{X}=\coprod_{P\in X}F_{P}$$ --- a restricted direct product of the local fields $F_{P}$ --- and present Serre's adelic interpretation of the cohomology. In Section \ref{differentials} we recall the definitions of the $F$-module $\Omega^1_{F/k}$ of K\"{a}hler differentials on $X$, of the corresponding $\field{A}_{X}$-module of differential adeles $\bm{\Omega}_{X}$, and of the differential and residue maps $d:\field{A}_{X}\rightarrow\bm{\Omega}_{X}$ and $\Res: \bm{\Omega}_{X}\rightarrow k$. In Section \ref{R-R} we present the Serre duality and the Riemann-Roch theorem, and in Section \ref{tame} we define the group of ideles $\field{J}_{X}$, the local and global tame symbols, and state A. Weil reciprocity law. 
In Section \ref{Calculus}, assuming that the constant field $k$ has characteristic zero, we recall the ``differential calculus'' on an algebraic curve $X$ --- the structure theory of the $k$-vector space of K\"{a}hler differentials $\Omega^1_{F/k}$ on $X$ --- and develop the corresponding ``integral calculus''. Namely, in Section \ref{A-functions}, following Chevalley \cite{Chevalley} and Eichler \cite{Eichler}, for the $k$-vector space $\Omega^{(2\text{nd})}$ of the differentials of the second kind --- differentials on $X$ with zero residues --- we introduce the skew-symmetric bilinear form $(\omega_1,\omega_2)_X$ by \begin{equation*} (\omega_1,\omega_2)_X=\sum_{P\in X} \Res_P(d^{-1}\omega_1\,\omega_2),\;\;\omega_1,\omega_2\in \Omega^{(\mathrm{2nd})}. \end{equation*} The main result of the differential calculus is Theorem \ref{Chevalley}, the algebraic version of the de Rham theorem. It goes back to Chevalley and Eichler, and for the algebraic curve $X$ of genus $g\geq 1$\footnote{The case $g=0$ is trivial.} states that the $2g$-dimensional $k$-vector space $\Omega^{(2\text{nd})}/dF$ is a symplectic vector space with the symplectic form $(~,~)_X$. Moreover, for every choice of a degree $g$ non-special effective divisor $D=P_{1}+\dots +P_{g}$ on $X$, together with the uniformizers $t_{i}$ at the points $P_{i}$, there is an isomorphism $$\Omega^{(\mathrm{2nd})}/dF\simeq\Omega^{(\mathrm{2nd})}\cap \Omega^1_{F/k}(-2D)$$ and a symplectic basis $\{\theta_{i},\omega_{i}\}_{i=1}^{g}$ of $\Omega^{(\mathrm{2nd})}\cap \Omega^1_{F/k}(-2D)$, consisting, respectively, of differentials of the first and second kinds $\theta_{i}$ and $\omega_{i}$ with the following properties. Differentials $\theta_{i}$ vanish at all points $P_{j}$ for $j\neq i$, and $\theta_{i}=(1+O(t_{i}))dt_{i}$ at $P_{i}$, whereas differentials $\omega_{i}$ are regular at all points $P_{j}$ for $j\neq i$, and $\omega_{i}=(t_{i}^{-2}+O(t_{i}))dt_{i}$ at $P_{i}$. 
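A familiar special case may help to fix ideas (an illustrative example added here, using the standard Weierstrass parametrization): let $X$ be the elliptic curve $y^{2}=4x^{3}-g_{2}x-g_{3}$ with nonzero discriminant, so that $g=1$, and take $D=P_{\infty}$, the point at infinity, which is non-special. In the formal Weierstrass parametrization $x=\wp(t)=t^{-2}+O(t^{2})$, $y=\wp'(t)$, the parameter $t$ is a uniformizer at $P_{\infty}$ and $dx/y=dt$. Then $$\theta_{1}=\frac{dx}{y}=dt,\qquad \omega_{1}=\frac{x\,dx}{y}=\bigl(t^{-2}+O(t^{2})\bigr)\,dt$$ have the normalizations of Theorem \ref{Chevalley} at $P_{\infty}$, and since both are regular away from $P_{\infty}$, $$(\theta_{1},\omega_{1})_{X}=\Res_{P_{\infty}}\bigl(d^{-1}\theta_{1}\,\omega_{1}\bigr)=\Res_{P_{\infty}}\Bigl(\bigl(t+O(t^{3})\bigr)\bigl(t^{-2}+O(t^{2})\bigr)\,dt\Bigr)=1,$$ so that $\{\theta_{1},\omega_{1}\}$ is a symplectic basis of $\Omega^{(\mathrm{2nd})}/dF$.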
The differentials $\theta_i$ and $\omega_i$ are, respectively, algebraic analogs of differentials of the first kind on a compact Riemann surface with normalized ``$a$-periods'', and differentials of the second kind with second order poles, ``zero $a$-periods'' and normalized ``$b$-periods''. Algebraically, the $a$-periods of $\omega\in\Omega^{(\mathrm{2nd})}$ are defined by $(\omega,\omega_{i})_{X}$, $i=1,\dots,g$, and we denote by $\Omega^{(\mathrm{2nd})}_{0}$ the isotropic subspace of $\Omega^{(\mathrm{2nd})}$ consisting of differentials of the second kind with zero $a$-periods. According to Proposition \ref{Second}, \begin{equation} \label{2-a} \Omega^{(\mathrm{2nd})}_0 = k\cdot\omega_1\oplus\dots\oplus k\cdot\omega_g\oplus dF. \end{equation} In Section \ref{A-functions} we also introduce the algebraic notion of additive multi-valued functions on $X$. By definition, the $k$-vector space of additive multi-valued functions is a subspace $\mathcal{A}(X)$ of the adele ring $\field{A}_X$ satisfying $F\subset \mathcal{A}(X)$ and $d\mathcal{A}(X)\subset \Omega^1_{F/k}$, and the additional property that if $a\in\mathcal{A}(X)$ is such that $da=df$ for some $f\in F$, then $a-f=c\in k$. The main result of the integral calculus for differentials of the second kind with zero $a$-periods is the explicit construction of the vector space $\mathcal{A}(X;D)$ in Example \ref{Additive}, which plays a fundamental role in the theory of additive bosons. It is parametrized by the choice of the degree $g$ non-special divisor $D=P_1 + \dots + P_g$ on $X$, the uniformizers $t_{i}$ at the points $P_{i}$, and the solutions in $\field{A}_{X}$ of the equations $d\eta_i = \omega_i$ (with any fixed choice of local additive constants). It is defined by \begin{equation*} \mathcal{A}(X;D) =k\cdot\eta_1\oplus\dots\oplus k\cdot\eta_g\oplus F\subset\field{A}_X \end{equation*} and satisfies the property $d(\mathcal{A}(X;D))=\Omega^{(\mathrm{2nd})}_0$. 
Finally, we introduce additive multi-valued functions $\eta_{P}^{(n)}\in\mathcal{A}(X;D)$ with a single pole at $P\in X$ of any given order $n$, and in Lemma \ref{reciprocity} prove that every $f\in F$ admits a unique partial fraction expansion with simple fractions given by these $\eta_{P}^{(n)}$. Section \ref{M-functions} is devoted to the differential and integral calculus for the differentials of the third kind on $X$ --- K\"{a}hler differentials with only simple poles. We define the $a$-periods of a third kind differential $\omega$ by $$-(\omega_{i},\omega)_{X}=-\sum_{P\in X}\Res_{P}(\eta_{i}\omega),\quad i=1,\dots,g,$$ and prove that there is a choice of local additive constants in the definition of $\eta_{i}=d^{-1}\omega_{i}$ such that all logarithmic differentials $\displaystyle{d\log f=\frac{df}{f}}$, $f\in F^{\ast}$, have zero $a$-periods; all such choices are parametrized by $g$-tuples of elements of $\Hom(\mathrm{Pic}_{0}(X),k)$. For every $P,Q\in X$, $P\neq Q$, denote by $\omega_{PQ}$ the unique differential of the third kind with simple poles at the points $P$ and $Q$ with respective residues $1$ and $-1$ and zero $a$-periods. The differentials $\omega_{PQ}$ span the vector space $\Omega^{(\mathrm{3rd})}_{0}$ of the differentials of the third kind with zero $a$-periods, and \begin{equation} \label{dlogf-representation-1} d\log f=\sum_{i=1}^{n}\omega_{P_{i}Q_{i}}, \end{equation} where $(f)=\sum_{i=1}^{n}(P_{i}-Q_{i})$ is the divisor of $f\in F^{\ast}$. The main result of the integral calculus for the differentials of the third kind, summarized in Proposition \ref{Mult-Existence}, is the existence of an algebraic analog of the classical Schottky-Klein prime form on a compact Riemann surface --- the family of ideles $e_{Q}=\{e_{Q,P}\}_{P\in X}\in\field{J}_{X}$ parametrized by points $Q\in X$ with the following properties.
\begin{itemize} \item For all $P\in X$, the elements $e_{Q,P}\in F_{P}^{\ast}$ satisfy $v_{P}(e_{Q,P})=0$ when $P\neq Q$, and $v_{Q}(e_{Q,Q})=1$. \item For all $P,Q\in X$, $P\neq Q$, the constants $c_{Q,P}=e_{Q,P}\!\!\!\mod\mathfrak{p}\in k^{\ast}$ satisfy $c_{Q,P}=-c_{P,Q}$. \item For all $P,Q\in X$, $P\neq Q$, $$\displaystyle{\omega_{PQ}=d\log f_{PQ},\quad\text{where}\quad f_{PQ}=\frac{e_{P}}{e_{Q}}\in\field{J}_{X}.}$$ \item For every $f\in F^{\ast}$ with $(f)=\sum_{i=1}^{n}(P_{i}-Q_{i})$, \begin{equation*} f=c\prod_{i=1}^{n}f_{P_{i}Q_{i}},\quad\text{where}\quad c=c(f)\in k^{\ast}. \end{equation*} \end{itemize} The latter property is a unique factorization of rational functions on $X$ into products of ``elementary functions'' $f_{PQ}$, which should be considered as an integral form of the differential property \eqref{dlogf-representation-1}. Correspondingly, the algebraic analogs of the exponentials of abelian integrals of the third kind with zero $a$-periods are defined by $$\exp\int_{P}^{Q}\omega_{RS}=\frac{f_{RS}(Q)}{f_{RS}(P)}.$$ They satisfy the classical ``exchange law of variable and parameter'', proved in Lemma \ref{Law}. In Section \ref{M-functions} we also introduce the algebraic notion of multiplicative multi-valued functions on $X$. By definition, a group of multiplicative multi-valued functions on $X$ is a subgroup $\mathcal{M}(X)$ of the idele group $\field{J}_X$ satisfying $F^{\ast}\subset\mathcal{M}(X)$ and $\displaystyle{d\log m=\frac{dm}{m}=\omega\in\Omega^1_{F/k}}$ for every $m \in\mathcal{M}(X)$, and the additional property that if $m \in\mathcal{M}(X)$ is such that $d\log m=d\log f$ for some $f\in F^{\ast}$, then $m=cf$, $c\in k^{\ast}$. The main result of the integral calculus for differentials of the third kind with zero $a$-periods is the construction of the subgroup $\mathcal{M}(X,D)$ in Example \ref{2}, which plays a fundamental role in the theory of multiplicative bosons.
Namely, it is a subgroup of $\field{J}_{X}$ generated by the ideles $f_{PQ}$ for all $P,Q\in X$, $P\neq Q$, and it is associated with the vector space $\mathcal{A}(X,D)$, defined in Example \ref{Additive}. Finally, in Proposition \ref{general-weil} we show that the restriction of the global tame symbol to the subgroup $\mathcal{M}(X,D)\times\mathcal{M}(X,D)$ of $\field{J}_{X}\times\field{J}_{X}$ is the identity, which can be considered as a generalized A. Weil reciprocity law for multiplicative functions. In Section \ref{Local} we formulate local quantum field theories of additive, charged and multiplicative bosons. The local theory of additive bosons is associated with the representation theory of the local Heisenberg algebra $\mathfrak{g}_{P}$ --- a one-dimensional central extension of the abelian Lie algebra $F_{P}$, $P\in X$, by the $2$-cocycle $c_{P}(f,g)=-\Res_{P}(fdg)$. In Section \ref{Heisenberg-Lie-local} we introduce the highest weight representation $\rho$ of $\mathfrak{g}_{P}$ in the local Fock space $\curly{F}_{P}$, and define the corresponding contragredient representation $\rho^{\vee}$ of $\mathfrak{g}_{P}$ in the dual local Fock space $\curly{F}_{P}^{\vee}$. In Section \ref{lattice algebra-local} we define a local lattice algebra $\mathfrak{l}_{P}$ --- a semi-direct sum of the local Heisenberg algebra $\mathfrak{g}_{P}$ and the abelian Lie algebra $k[\field{Z}]$, the group algebra of $\field{Z}$. The corresponding irreducible highest weight $\mathfrak{l}_{P}$-module is the local Fock space $\curly{B}_{P}$ of ``charged bosons'' --- a symmetric tensor product of $k[\field{Z}]$ and $\curly{F}_{P}$. The material in Sections \ref{Heisenberg-Lie-local} and \ref{lattice algebra-local} is essentially standard and can be found in \cite{kac, ben-zvi-frenkel}. Finally, in Section \ref{Heisenberg system} we, following H. Garland and G. Zuckerman \cite{garland-zuckerman}, introduce the local quantum field theory of multiplicative bosons.
It is given by the so-called Heisenberg system --- a triple $(G_{P},\mathfrak{g}_{P},\mathrm{Ad})$ --- where $G_{P}$ is the central extension of the multiplicative group $F^{\ast}_{P}$ by the local tame symbol, and $\mathrm{Ad}$ stands for a certain ``adjoint action'' of $G_{P}$ on $\mathfrak{g}_{P}$ ($\mathrm{Ad}$ is well-defined despite the fact that the Heisenberg algebra $\mathfrak{g}_{P}$ is not the Lie algebra of $G_{P}$). A representation of the Heisenberg system $(G_{P},\mathfrak{g}_{P},\mathrm{Ad})$ in a $k$-vector space $V$ is a pair $(R_{P},dR_{P})$, where $R_{P}$ is a representation of the group $G_{P}$, and $dR_{P}$ is a representation of the Lie algebra $\mathfrak{g}_{P}$, satisfying $dR_{P}(\mathrm{Ad}\,g \cdot x)=R_{P}(g)dR_{P}(x)R_{P}(g)^{-1}$. Following \cite{garland-zuckerman}, we define a representation of the local Heisenberg system $(G_{P},\mathfrak{g}_{P},\mathrm{Ad})$ in the local Fock space $\curly{B}_{P}$ of charged bosons, as well as the corresponding contragredient representation in the dual Fock space $\curly{B}_{P}^{\vee}$. Finally, in Section \ref{Global} we formulate global quantum field theories, starting in Section \ref{AB} with the theory of additive bosons on an algebraic curve $X$. The latter is associated with a ``current algebra on an algebraic curve'' --- the global Heisenberg algebra $\mathfrak{g}_X$ --- the one-dimensional central extension of the abelian Lie algebra $\mathfrak{g}\mathfrak{l}_1(\field{A}_X)=\field{A}_X$ by the $2$-cocycle $c_X=\sum_{P\in X} c_P$. Since $\Omega^{(\mathrm{2nd})}_{0}$ is an isotropic subspace with respect to the bilinear form $(~,~)_X$, we have \begin{equation} \label{general-res} c_{X}(a_{1}, a_{2})=0\;\;\text{for all}\;\;a_{1}, a_{2}\in\mathcal{A}(X,D)\subset \field{A}_{X}, \end{equation} which can be considered as a generalized residue theorem for the additive multi-valued functions.
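The local Heisenberg cocycle $c_{P}(f,g)=-\Res_{P}(f\,dg)$ admits a one-line illustration: identifying $F_{P}\simeq k((t))$, for the monomials $t^{n}$ and $t^{m}$ one gets $$c_{P}(t^{n},t^{m})=-m\Res_{P}(t^{n+m-1}dt)=n\,\delta_{n,-m},$$ which is the standard Heisenberg commutation relation $[a_{n},a_{m}]=n\,\delta_{n,-m}\,\bm{I}$ underlying the local Fock space $\curly{F}_{P}$.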
The irreducible highest weight module of the global Heisenberg algebra $\mathfrak{g}_{X}$ is the global Fock space $\curly{F}_{X}$, a restricted symmetric tensor product of local Fock spaces $\curly{F}_P$ over all points $P\in X$. The global Fock space can be considered as ``the space of observables of the quantum field theory of additive bosons'' on an algebraic curve. In Theorem \ref{additive theorem} we prove that there exists a unique normalized expectation value functional $\langle\, \cdot\,\rangle:\curly{F}_X\rightarrow k$, characterized by the global symmetries \begin{equation} \label{AWI} \langle\rho(a)v\rangle=0\quad\text{for all}\quad a\in \mathcal{A}(X;D)\;\;\text{and}\;\;v\in\curly{F}_{X}, \end{equation} where the subspace $\mathcal{A}(X;D)\subset\field{A}_{X}$ is the vector space of additive multi-valued functions on $X$ defined in Section \ref{A-functions}, and $\rho:\mathfrak{g}_{X}\rightarrow\End\curly{F}_{X}$ is the corresponding representation of the global Heisenberg algebra. Specifically, we show that \begin{equation*} \langle v\rangle=(\Omega_{X},v)\;\;\text{for all}\;\;v\in\curly{F}_{X}, \end{equation*} where $\Omega_{X}\in\curly{F}_{X}^{\vee}$ --- the dual Fock space to $\curly{F}_{X}$ --- satisfies an infinite system of equations \begin{equation}\label{system-1} \Omega_{X}\cdot\rho^{\vee}(a)=0\;\;\text{for all}\;\;a\in\mathcal{A}(X,D). \end{equation} The vector $\Omega_{X}$ is given by an explicit formula (see Theorem \ref{additive theorem}), which encodes all ``correlation functions of the quantum field theory of additive bosons'' on an algebraic curve $X$. The reciprocity law for the differentials of the second kind with zero $a$-periods, proved in Lemma \ref{reciprocity}, plays the fundamental role of ensuring the compatibility of the system \eqref{system-1}. The additive Ward identities \eqref{AWI} are also compatible with the generalized residue theorem.
Namely, since $[\rho(x),\rho(y)]=c_{X}(x,y)\bm{I}$ for $x,y\in\field{A}_{X}$, where $\bm{I}$ is the identity operator in $\curly{F}_{X}$, we get from \eqref{AWI} that for $a_{1}, a_{2}\in\mathcal{A}(X,D)$, $$0=\langle(\rho(a_{1})\rho(a_{2})-\rho(a_{2})\rho(a_{1}))v\rangle=c_{X}(a_{1},a_{2})\langle v\rangle\;\;\text{for all}\;\;v\in\curly{F}_{X},$$ which gives \eqref{general-res}. In Section \ref{CB} we define a global lattice algebra $\mathfrak{l}_X$ as a semi-direct sum of the global Heisenberg algebra $\mathfrak{g}_{X}$ and the abelian Lie algebra $k[\Div_{0}(X)]$ with generators $e_{D}$, $D\in\Div_{0}(X)$ --- the group algebra of the additive group $\Div_{0}(X)$ of degree $0$ divisors on $X$. Its irreducible highest weight module is the global Fock space $\curly{B}_X$ of charged bosons --- the symmetric tensor product of the group algebra $k[\Div_0(X)]$ and the Fock space of additive bosons $\curly{F}_X$. The main result of this section is Theorem \ref{charged theorem}. It states that there is a unique expectation value functional $ \langle\, \cdot\,\rangle:\curly{B}_X\rightarrow k$, which is normalized with respect to the action of the group algebra $k[\Div_0(X)]$ and satisfies the additive Ward identities \eqref{AWI} with respect to the action of the global symmetries --- additive multi-valued functions $\mathcal{A}(X,D)$ --- in the global Fock space $\curly{B}_{X}$. It has the form $\langle v\rangle=(\hat{\Omega}_{X},v)$, where $\hat{\Omega}_{X}\in\curly{B}_{X}^{\vee}$ --- the dual Fock space to $\curly{B}_{X}$ --- is given by an explicit formula (see Theorem \ref{charged theorem}), which encodes all ``correlation functions of the quantum field theory of charged additive bosons'' on an algebraic curve $X$. In the final Section \ref{MB} we formulate the quantum field theory of multiplicative bosons.
We define the global Heisenberg system $(G_{X},\mathfrak{g}_{X},\Ad)$, where $G_{X}$ is a central extension of the group of ideles $\field{J}_{X}$ by the global tame symbol $\tau_{X}$, and in Theorem \ref{G-Z global} we construct its representation $(R_{X}, dR_{X})$ in the global Fock space $\tilde{\curly{B}}_{X}$--- the symmetric tensor product of the group algebra $k[\Div(X)]$ and $\curly{F}_{X}$. For the subgroup $G_{X}^{0}$ --- the central extension of the subgroup $\field{J}_{X}^{0}$ of degree $0$ ideles --- the representation $R_{X}$ has an invariant subspace $\curly{B}_{X}$. In Theorem \ref{MB} we prove that there is a unique normalized expectation value functional $\langle\, \cdot\,\rangle:\curly{B}_X\rightarrow k$, satisfying global symmetries \begin{equation} \label{MWI} \langle dR_{X}(a)v\rangle =0\;\;\text{and}\;\; \langle R_{X}(m)v\rangle =\langle v\rangle \end{equation} for all $a\in \mathcal{A}(X;D)\subset\field{A}_{X}$, $m\in \mathcal{M}(X;D)\subset\field{J}_{X}^{0}$, and $v\in\curly{B}_{X}$. The latter relations in \eqref{MWI} are the multiplicative Ward identities in the sense of E. Witten \cite{Witten-2} (see also \cite{Sen-Raina}). As in previous sections, the expectation value functional has the form $\langle v\rangle=(\bm{\Omega}_{X},v)$, where $\bm{\Omega}_{X}\in\curly{B}_{X}^{\vee}$ is given by an explicit formula (see Theorem \ref{MB}), which encodes all ``correlation functions of the quantum field theory of multiplicative bosons'' on an algebraic curve $X$. The property $c_{P,Q}=-c_{Q,P}$ of the algebraic analog of the prime form, which is fundamental for the exchange law of variable and parameter, proved in Lemma \ref{Law}, ensures the compatibility of the infinite system of equations for determining $\bm{\Omega}_{X}$. The multiplicative Ward identities are also compatible with the generalized A. Weil reciprocity law for multiplicative multi-valued functions, proved in Proposition \ref{general-weil}. 
Namely, since $R_{X}(ab)=\tau_{X}(a,b)R_{X}(a)R_{X}(b)$ for $a,b\in\field{J}_{X}$, we get from \eqref{MWI} that for all $m_{1}, m_{2}\in\mathcal{M}(X,D)$ and $v\in\curly{B}_{X}$, $$\langle v\rangle = \langle R_{X}(m_{1}m_{2}) v\rangle =\tau_{X}(m_{1},m_{2}) \langle R_{X}(m_{1})R_{X}(m_{2})v\rangle = \tau_{X}(m_{1},m_{2})\langle v\rangle,$$ so that $\tau_{X}(m_{1},m_{2})=1$. Finally, we note that our construction of quantum field theories on algebraic curves can be considered as an algebraic counterpart of the geometric realization of conformal field theories on Riemann surfaces in \cite{KNTY}. In particular, the explicit formula for the vector $\bm{\Omega}_{X}\in\curly{B}_{X}^{\vee}$ in the quantum field theory of multiplicative bosons contains all correlation functions of vertex operators in \cite[Sect. 6A]{KNTY}. \subsection*{Acknowledgments} I am grateful to A.L. Smirnov for the suggestion to use $\mathrm{Ext}^{1}(\,\cdot\,,k)$ in the proof of Lemma \ref{Normal} in Section \ref{M-functions}. This work was partially supported by the NSF grants DMS-0204628 and DMS-0705263. \section{Basic Facts\label{Basic facts}} Here we recall the necessary facts from the theory of algebraic curves. The material is essentially standard and can be found in \cite{Chevalley,Serre,Iwasawa}. \subsection{Definitions\label{Definitions}} An algebraic curve $X$ over an algebraically closed field $k$ is an irreducible, nonsingular, projective variety over $k$ of dimension 1, equipped with the Zariski topology. The field $F=k(X)$ of rational functions on $X$ is a finitely-generated extension of the field $k$ of transcendence degree $1$. Conversely, every finitely-generated extension of $k$ of transcendence degree $1$ corresponds to a unique, up to isomorphism, algebraic curve over $k$. Closed points $P$ on $X$ correspond to discrete valuation rings $O_P$, subrings of the field $F$.
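For instance, when $X$ is the projective line, so that $F=k(t)$, the point $P_{0}$ given by $t=0$ corresponds to the discrete valuation ring $O_{P_{0}}$ of rational functions regular at $t=0$, and $v_{P_{0}}(f)$ is the order of vanishing of $f$ at $t=0$: e.g., $v_{P_{0}}\bigl(t^{2}/(1-t)\bigr)=2$, whereas at the point $P_{\infty}$, with the local coordinate $t^{-1}$, one has $v_{P_{\infty}}\bigl(t^{2}/(1-t)\bigr)=-1$.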
The rings $O_P$ for all points $P\in X$ form a sheaf of rings over $X$ --- the structure sheaf $\mathcal{O}_X$, a subsheaf of the constant sheaf $\underline{F}$. For every point $P\in X$ let $v_P$ be the regular discrete valuation of the field $F$ over $k$, corresponding to the discrete valuation ring $O_P$. The completion of the field $F$ with respect to $v_P$ is the complete local field $F_P$ with the valuation ring $\mathcal{O}_P$ --- the completed local ring at $P$, the prime ideal $\mathfrak{p}$, and the residue class field $k =\mathcal{O}_P/\mathfrak{p}$. The ring of adeles $\field{A}_X$ of an algebraic curve $X$, \begin{equation*} \field{A}_X=\coprod_{P\in X}F_P, \end{equation*} is a restricted direct product over all points $P\in X$ of the local fields $F_P$ with respect to the local rings $\mathcal{O}_P$. By definition, \begin{displaymath} x=\{x_P\}_{P\in X}\in \field{A}_X~\text{if}~x_P\in\mathcal{O}_P~ \text{for all but finitely many}~P\in X. \end{displaymath} The field $F$ is contained in all local fields $F_P$ and is diagonally embedded into $\field{A}_X$ by \begin{displaymath} F\ni f\mapsto\{f|_{P}\}_{P\in X}\in\field{A}_X. \end{displaymath} The divisor group $\Div(X)$ of $X$ is the free abelian group generated by the points $P\in X$. By definition, \begin{displaymath} D=\sum_{P\in X}n_P\cdot P\in\Div(X) \end{displaymath} if $n_P=v_P(D)\in\field{Z}$, and $n_P=0$ for all but finitely many $P\in X$. The divisors of the form \begin{equation*} (f)=\sum_{P\in X} v_P(f)\cdot P\in\Div(X), \end{equation*} where $f\in F^{\ast}=F\setminus\{0\}$, the multiplicative group of the field $F$, are called principal divisors. The principal divisors form a subgroup $\mathrm{PDiv}(X)\simeq F^{\ast}/k^{\ast}$ of the divisor group $\Div(X)$. The degree of a divisor $D$ is \begin{equation*} \deg D=\sum_{P\in X}n_P=\sum_{P\in X} v_P(D)\in\field{Z}, \end{equation*} and $\deg(f)=0$ for $f\in F^{\ast}$. A divisor $D$ is said to be effective if $v_P(D)\geq 0$ for all $P\in X$.
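To illustrate, for the projective line with $F=k(t)$ the function $f=(t^{2}-1)/t$ has the principal divisor $$(f)=P_{1}+P_{-1}-P_{0}-P_{\infty},$$ where $P_{1}$, $P_{-1}$, $P_{0}$, $P_{\infty}$ are the points $t=1$, $t=-1$, $t=0$, $t=\infty$, so that $\deg(f)=1+1-1-1=0$, in accordance with $\deg(f)=0$ for $f\in F^{\ast}$.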
By definition, divisors $D_1$ and $D_2$ are linearly equivalent, $D_1\sim D_2$, if $D_1 - D_2 = (f),\,f\in F^{\ast}$. The equivalence classes of divisors form the divisor class group $\mathrm{Cl}(X)=\Div(X)/\mathrm{PDiv}(X)$. For every divisor $D$ the subspace $\field{A}_X(D)$ of the $k$-vector space $\field{A}_X$ is defined by \begin{equation*} \field{A}_X(D)=\{x\in\field{A}_X : v_P(x_P)\geq -v_{P}(D)~\text{for all}~ P\in X\}\,. \end{equation*} The ring of adeles $\field{A}_X$ is a topological ring with the product topology. The base of neighborhoods of 0 is given by the subspaces $\field{A}_X(D),\,D\in\Div(X)$, and $\field{A}_X$ is a $k$-vector space with linear topology in the sense of Lefschetz \cite[Ch.~II, \S 6]{Lefschetz}. Every subspace $\field{A}_X(D)$ is linearly compact, so that $\field{A}_X$ is locally linearly compact. The $k$-vector space $F=k(X)$ is discrete in $\field{A}_X$ and the quotient space $\field{A}_X/F$ is linearly compact \cite[App., \S3] {Iwasawa}. To every divisor $D$ there corresponds an algebraic coherent sheaf $\mathcal{F}(D)$ on $X$ --- a subsheaf of the constant sheaf $\underline{F}$ whose stalk at each point $P\in X$ is \begin{equation*} \mathcal{F}(D)_P=\{f\in F : v_P(f)\geq - v_P(D)\}. \end{equation*} Linearly equivalent divisors correspond to isomorphic sheaves. Denote by $H^i(X,\mathcal{F}(D))$ the \v{C}ech cohomology groups of the sheaf $\mathcal{F}(D)$ --- finite-dimensional vector spaces over $k$ that vanish for $i>1$ --- and put $h^i(D)=\dim_k H^i(X,\mathcal{F}(D))$. The zero divisor $D=0$ corresponds to the structure sheaf $\mathcal{O}_X$. In this case, $h^0(0)=1$ and $h^1(0)=g$ --- the arithmetic genus of the algebraic curve $X$. One has \begin{equation*} H^0(X,\mathcal{F}(D))= \field{A}_X(D)\cap F \end{equation*} and \begin{equation*} H^1(X,\mathcal{F}(D))\simeq\field{A}_X/(\field{A}_X(D)+ F), \end{equation*} which is Serre's adelic interpretation of the cohomology \cite[Ch. II, \S5]{Serre}.
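As a simple example, for the projective line with $F=k(t)$ and $D=n\cdot P_{\infty}$, $n\geq 0$, the space $H^{0}(X,\mathcal{F}(D))=\field{A}_{X}(D)\cap F$ consists of the polynomials in $t$ of degree at most $n$, so that $h^{0}(D)=n+1$, while every adele differs from an element of $F$ --- the sum of the principal parts of its components --- by an adele in $\field{A}_{X}(0)\subset\field{A}_{X}(D)$, so that $h^{1}(D)=0$, in agreement with $g=0$ for the projective line.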
\subsection{Differentials and residues\label{differentials}} The $F$-module of K\"{a}hler differentials on $X$ is the module $\Omega^1_{F/k}$, which is universal with respect to the following properties. \begin{itemize} \item[\textbf{K1}] There exists a $k$-linear map $d: F\rightarrow \Omega^1_{F/k}$ satisfying the Leibniz rule $d(fg)=fdg + gdf$. \item[\textbf{K2}] The $F$-module $\Omega^1_{F/k}$ is generated by the elements $df,\,f\in F$. \end{itemize} Since $X$ is an algebraic curve, $\dim_F \Omega^1_{F/k}=1$. Let $t\in F$ be a Zariski local coordinate at point $P$ --- a rational function on $X$ satisfying $v_P(t)=1$. Then $dt$ is a generating element of the $F$-module $\Omega^1_{F/k}$, i.e., every K\"{a}hler differential can be written as $\omega=fdt$ for some $f\in F$. The order of $\omega\in\Omega^1_{F/k}$ at $P$ is defined by \begin{equation*} v_P(\omega)=v_P(f). \end{equation*} The order does not depend on the choice of a local coordinate at $P$ and defines a valuation on $\Omega^1_{F/k}$. The $O_P$-modules $\Omega^1_{O_P/k}$ for all points $P\in X$ form an algebraic coherent sheaf $\underline{\Omega}$ --- a subsheaf of the constant sheaf $\underline{\Omega^1_{F/k}}$. Moreover, \begin{equation*} \Omega^1_{F/k} = \Omega^1_{O_P/k}\underset{{O}_P}{\otimes} F. \end{equation*} In the case when the field $k$ has characteristic 0, the $F_P$-module $\Omega^1_{F_P/k}$ for every $P\in X$ is an infinite-dimensional $F_P$-vector space (the mapping $d$ is not continuous with respect to the $\mathfrak{p}$-adic topology of $F_P$). Following Serre, one defines \begin{equation*} \tilde\Omega^1_{F_P/k} = \Omega^1_{F_P/k}/\mathcal{Q}, \end{equation*} where $\mathcal{Q}=\cap_{n\geq 0}\,\mathfrak{p}^n d(\mathcal{O}_P)$, so that $\dim_{F_P}\tilde\Omega^1_{F_P/k}=1$ (see \cite[Ch. II, \S 11]{Serre}). The $F_P$-module $\tilde\Omega^1_{F_P/k}$ is the completion of the $F$-module $\Omega^1_{F/k}$ with respect to the valuation $v_P$. 
The completion of the $O_P$-module $\Omega^1_{O_P/k}$ is an $\mathcal{O}_P$-module $\tilde\Omega^1_{\mathcal{O}_P/k}$ and \begin{equation*} \tilde\Omega^1_{F_P/k} = \tilde\Omega^1_{\mathcal{O}_P/k}\underset{\mathcal{O}_P}{\otimes} F_P. \end{equation*} The $\field{A}_X$-module of adeles $\bm{\Omega}_X$ of the sheaf $\underline{\Omega}$, \begin{equation*} \bm{\Omega}_X=\coprod_{P\in X}\tilde\Omega^1_{F_P/k}, \end{equation*} is a restricted direct product over all points $P\in X$ of the $F_P$-modules $\tilde\Omega^1_{F_P/k}$ with respect to the $\mathcal{O}_P$-modules $\tilde\Omega^1_{\mathcal{O}_P/k}$. The $F$-module $\Omega^1_{F/k}$ is contained in all $F_P$-modules $\tilde\Omega^1_{F_P/k}$ and is diagonally embedded into $\bm{\Omega}_X$ by \begin{equation*} \Omega^1_{F/k}\ni \omega\mapsto\{\omega|_{P}\}_{P\in X} \in\bm{\Omega}_X. \end{equation*} The $k$-vector space $\bm{\Omega}_X$ has a linear topology with the base of the neighborhoods of zero given by the subspaces $\bm{\Omega}_X(D)$ for all $D\in\Div(X)$, where \begin{equation*} \bm{\Omega}_X(D)= \{\omega=\{\omega_P\}_{P\in X} \in\boldsymbol{\Omega}_X : v_P(\omega_P)\geq v_P(D)\;\;\text{for all}\;\; P\in X\}\,, \end{equation*} and is locally linearly compact. The maps $d: F_P\rightarrow\tilde\Omega^1_{F_P/k}$ for all $P\in X$ give rise to the continuous map $d:\field{A}_X\rightarrow \bm{\Omega}_X$, satisfying the Leibniz rule. \begin{remark} The $\field{A}_X$-module $\bm{\Omega}_X$ is essentially the set of principal part systems of degree $1$ on $X$ in the sense of Eichler (see \cite[Ch.~III, \S 5.2]{Eichler}). \end{remark} Let $\omega \in\tilde\Omega^1_{F_P/k}$, and let $t$ be a local parameter of the field $F_P$, so that $dt$ is a basis of the $F_P$-module $\tilde\Omega^1_{F_P/k}$.
The residue map $\Res_P: \tilde\Omega^1_{F_P/k}\rightarrow k$ is defined by \begin{equation*} \Res_P(\omega)=c_{-1},\quad\text{where}\quad\omega=\sum_{n\gg -\infty}^{\infty}c_n t^n dt, \end{equation*} and the symbol $n\gg-\infty$ indicates that the summation goes over only finitely many negative values of $n$. The definition of the residue does not depend on the choice of a local parameter. The residue map is continuous with respect to the $\mathfrak{p}$-adic topology on $\tilde\Omega^1_{F_P/k}$ and the discrete topology on $k$. The local residue maps $\Res_P$ give rise to the global residue map $\Res: \bm{\Omega}_X\rightarrow k$, \begin{equation*} \Res\,\omega=\sum_{P\in X}\Res_P(\omega_P),\;\;\omega=\{\omega_P\}_{P\in X} \in\bm{\Omega}_X. \end{equation*} The global residue map is well-defined, continuous, and satisfies the following fundamental property. \begin{theorem}[The residue formula] For every $\omega\in\Omega^1_{F/k}$, \begin{displaymath} \Res\,\omega=\sum_{P\in X}\Res_P(\omega|_P) = 0. \end{displaymath} \end{theorem} \subsection{Serre's duality and Riemann-Roch theorem\label{R-R}} Let \begin{equation*} \Omega^1_{F/k}(D)= \Omega^1_{F/k}\cap\bm{\Omega}_X(D)= \{\omega\in\Omega^1_{F/k} : v_P(\omega)\geq v_P(D)~\text{for all}~P\in X\}. \end{equation*} Define the residue pairing $(~,~):\bm{\Omega}_X\otimes_k\field{A}_X\rightarrow k$ by \begin{equation*} (\omega,x)=\sum_{P\in X}\Res_P(x_P\omega_P),~\text{where}~\omega\in\bm{\Omega}_X,\, x\in\field{A}_X. \end{equation*} The residue pairing has the following properties. \begin{enumerate} \item[\textbf{P1}] $(\omega,x)=0$ if $\omega\in\Omega^1_{F/k}$ and $x\in F$. \item[\textbf{P2}] $(\omega,x)=0$ if $\omega\in\bm{\Omega}_X(D)$ and $x\in\field{A}_X(D)$.
\end{enumerate} It follows from \textbf{P1}-\textbf{P2} that for every $D\in\Div(X)$ the formula $\imath(\omega)(x)=(\omega,x)$ defines a $k$-linear map \begin{equation*} \imath: \Omega^1_{F/k}(D)\rightarrow \left(\field{A}_X/(\field{A}_X(D) + F)\right)^\vee, \end{equation*} where $V^\vee=\Hom(V,k)$ is the topological dual of a $k$-vector space $V$ with the linear topology. \begin{theorem}[Serre's duality] For every $D\in\Div(X)$ the mapping $\imath$ is an isomorphism, i.e., the finite-dimensional $k$-vector spaces $\Omega^1_{F/k}(D)$ and $\field{A}_X/(\field{A}_X(D) + F)$ are dual with respect to the residue pairing. \end{theorem} \begin{corollary}[The strong residue theorem] \mbox{} \begin{enumerate} \item[(i)] An adele $x\in\field{A}_X$ corresponds to a rational function on $X$ under the embedding $F\hookrightarrow\field{A}_X$ if and only if \begin{equation*} (\omega,x)=0~\text{for all}~\omega\in\Omega^1_{F/k}. \end{equation*} \item[(ii)] A differential adele $\omega\in\bm{\Omega}_X$ corresponds to a K\"{a}hler differential on $X$ under the embedding $\Omega^1_{F/k}\hookrightarrow\bm{\Omega}_X$ if and only if \begin{equation*} (\omega,f)=0~\text{for all}~f\in F. \end{equation*} \end{enumerate} \end{corollary} \begin{proof} To prove part (i), observe that by Serre's duality $x\in\field{A}_X(D) + F$ for every $D\in\Div(X)$, and $F\cap\field{A}_X(D)=0$ for $D<0$ gives $x\in F$. To prove part (ii), let $\omega_0\in\Omega^1_{F/k},\,\omega_0\neq 0$. Setting $x=\omega/\omega_0\in\field{A}_X$ we get $0=(\omega,f)=(f\omega_0,x)$ for all $f\in F$, so that $x\in F$ by part (i). \end{proof} \begin{remark} In a slightly different form, the strong residue theorem can be found in \cite[Ch. III, \S 5.3]{Eichler}. \end{remark} For $\omega\in\Omega^1_{F/k}$ set \begin{equation*} (\omega)=\sum_{P\in X}v_P(\omega)\cdot P\in\Div(X).
\end{equation*} Since $\dim_F \Omega^1_{F/k}=1$, divisors $(\omega)$ are linearly equivalent and define the divisor class $K\in\text{Cl}(X)$, called the canonical class. Combining the Riemann-Roch formula for the Euler characteristic of the divisor $D$, \begin{equation*} \chi(D)=h^0(D) - h^1(D)=\deg D + 1 - g, \end{equation*} with Serre's duality and Serre's adelic interpretation of cohomology, one gets the following result. \begin{theorem}[Riemann-Roch theorem] For every $D\in\Div(X)$, \begin{equation*} h^0(D) - h^0(K-D) = \deg D + 1-g. \end{equation*} \end{theorem} An effective divisor $D$ on $X$ is called non-special if $h^0(K-D)=0$. It follows from the Riemann-Roch theorem that an effective divisor $D$ of degree $g$ is non-special if and only if $h^0(D)=1$. Equivalently, the only rational functions whose poles are contained in a non-special effective divisor of degree $g$ are the constant functions. \subsection{The tame symbol\label{tame}} The group of ideles $\field{J}_X$ is the group of invertible elements of $\field{A}_X$. Equivalently, \begin{equation*} \field{J}_X=\coprod_{P\in X}F_P^\ast \end{equation*} --- a restricted direct product of the multiplicative groups $F_P^\ast$ with respect to the subgroups $U_P=\mathcal{O}_{P}^{\ast}$ of invertible elements in $\mathcal{O}_{P}$. By definition, \begin{displaymath} a=\{a_P\}_{P\in X}\in\field{J}_{X}~\text{if}~a_P\in U_P~ \text{for all but finitely many}~P\in X. \end{displaymath} The multiplicative group $F^\ast$ is embedded diagonally into the group of ideles $\field{J}_X$, \begin{displaymath} F^\ast\ni f\mapsto\{\left.f\right |_{P}\}_{P\in X}\in\field{J}_X. \end{displaymath} The global residue map defines the pairing $\Res_{X}: \field{A}_{X}\otimes\field{J}_{X}\rightarrow k$ by \begin{displaymath} \Res_{X}(x,a)=\sum_{P\in X}\Res_{P}\Big(x_{P}\frac{da_{P}}{a_{P}}\Big), \end{displaymath} and by the residue theorem, \begin{equation} \label{add-res} \Res_{X}(f,g)=0\quad\text{for all}\quad f\in F,\,g\in F^{\ast}.
\end{equation} The tame symbol (or Tate symbol) for the field $F_{P}$ is defined by $$ \tau_{P}(f,g)=(-1)^{mn}\frac{f^{n}}{g^{m}}\!\!\!\mod\mathfrak{p}\in k^{\ast},$$ where $f,g\in F_{P}^{\ast}$ and $m=v_{P}(f)$, $n=v_{P}(g)$ (see, e.g., \cite[Ch. III, \S1.3]{Serre}). It satisfies the following properties: \begin{itemize} \item[\textbf{T1}] $\tau_{P}(f,g_{1}g_{2})=\tau_{P}(f,g_{1})\tau_{P}(f,g_{2})$. \item[\textbf{T2}] $\tau_{P}(f,g)\tau_{P}(g,f)=1$. \end{itemize} Since $\tau_{P}(f,g)=1$ when $f,g\in U_{P}$, the global tame symbol $$\tau_{X}(a,b)=\prod_{P\in X}\tau_{P}(a_{P},b_{P}),\quad a,b\in\field{J}_{X}$$ is a well-defined map $\tau_{X}:\field{J}_{X}\times\field{J}_{X}\rightarrow k^{\ast}$ satisfying the properties \textbf{T1}--\textbf{T2}. The classical A. Weil reciprocity law is the following statement: \begin{equation} \label{Weil-reciprocity} \tau_{X}(f,g)=1\quad\text{for all}\quad f,g\in F^{\ast} \end{equation} (see \cite{Weil}, \cite[Ch.~III, \S1.4]{Serre} and \cite{Porras} for a modern exposition and non-abelian generalizations). It can be considered as a non-trivial multiplicative analog of the corresponding additive result \eqref{add-res}. \section{Differential and Integral Calculus\label{Calculus}} Starting from this section, we assume that the algebraically closed field $k$ has characteristic 0, and the algebraic curve $X$ has genus $g\geq 1$. \subsection{Differentials of the second kind and ``additive functions''\label{A-functions}} \label{second kind} Following classical terminology, a K\"{a}hler differential $\omega\in\Omega^1_{F/k}$ is said to be of the second kind if $\Res_P \omega = 0$ for all $P\in X$. The $k$-vector space $\Omega^{(2\text{nd})}$ of differentials of the second kind on $X$ carries a canonical skew-symmetric bilinear form $(~,~)_X$ defined as follows.
For every $\omega\in\Omega^{(\mathrm{2nd})}$ let $x=\{x_P\}_{P\in X}\in\field{A}_X$ be an adele satisfying \begin{equation*} d\,x_P = \left.\omega\right|_P\;\;\text{for all}\;\;P\in X. \end{equation*} For every $P\in X$ such $x_P\in F_P$ exists, is defined up to an additive constant from $k$, and $x_P\in\mathcal{O}_P$ for all but finitely many $P\in X$. Denote $x=d^{-1}\omega$ and set \begin{equation*} (\omega_1,\omega_2)_X=\sum_{P\in X} \Res_P(d^{-1}\omega_1\,\omega_2),\;\;\omega_1,\omega_2\in \Omega^{(\mathrm{2nd})}. \end{equation*} The bilinear form $(~,~)_X$ does not depend on the choices of additive constants in the definition of $d^{-1}$ and is skew-symmetric. The infinite-dimensional $k$-vector space $\Omega^{(\mathrm{2nd})}$ has a $g$-dimensional subspace $\Omega^{(\mathrm{1st})}=\Omega^1_{F/k}(0)$ consisting of differentials of the first kind. The infinite-dimensional subspace $\Omega^{(\mathrm{1st})}\oplus dF$ of $\Omega^{(\mathrm{2nd})}$ is isotropic with respect to the bilinear form $(~,~)_X$. Since there is no canonical choice of the complementary isotropic subspace to $\Omega^{(\mathrm{1st})}\oplus dF$ in $\Omega^{(\mathrm{2nd})}$, the exact sequence \begin{equation*} 0\rightarrow \Omega^{(\mathrm{1st})}\oplus dF\rightarrow\Omega^{(\mathrm{2nd})} \rightarrow\Omega^{(\mathrm{2nd})}/(\Omega^{(\mathrm{1st})}\oplus dF)\rightarrow 0 \end{equation*} does not split canonically. Still, the following fundamental result holds (see \cite[Ch. VI, \S 8]{Chevalley} and \cite[Ch. III, \S\S 5.3-5.4]{Eichler}), which can be considered as an algebraic de Rham theorem (see \cite[Ch. III, \S5]{Griffiths-Harris}). \begin{theorem} \label{Chevalley} \mbox{} \begin{enumerate} \item[(i)] The restriction of the bilinear form $(~,~)_X$ to $\Omega^{(\mathrm{2nd})}/dF$ is non-degenerate and \begin{displaymath} \dim_k \Omega^{(\mathrm{2nd})}/dF =2g. 
\end{displaymath} \item[(ii)] Every choice of a non-special effective divisor $D$ of degree $g$ on $X$ defines an isomorphism \begin{equation*} \Omega^{(\mathrm{2nd})}/dF\simeq\Omega^{(\mathrm{2nd})}\cap \Omega^1_{F/k}(-2D). \end{equation*} \item[(iii)] Let $D=P_1 + \dots +P_g$, where the points $P_{i}\in X,\; i=1,\dots,g$, are all distinct, be a non-special divisor. For every choice of the uniformizers $t_i$ at $P_i$, the $k$-vector space $\Omega^{(\mathrm{2nd})}\cap \Omega^1_{F/k}(-2D)$ has the basis $\{\theta_i,\omega_i\}_{i=1}^g$, symplectic with respect to the bilinear form $(~,~)_X$, \begin{equation*} (\theta_i,\theta_j)_X=(\omega_i,\omega_j)_X=0,\;\;(\theta_i,\omega_j)_X= \delta_{ij},\quad i,j=1,\dots,g. \end{equation*} This basis consists of differentials of the first kind $\theta_i$ and differentials of the second kind $\omega_i$, uniquely characterized by the conditions \begin{equation*} v_{P_i}\left(\theta_j -\delta_{ij}dt_i\right)>0\;\;\text{and}\;\; v_{P_i}\left(\omega_j -\delta_{ij}t^{-2}_i dt_i\right)>0 \end{equation*} for $i,j=1,\dots,g$. \item[(iv)] The subspace $k \cdot\omega_1\oplus\dots\oplus k\cdot\omega_g$ is a complementary isotropic subspace to $\Omega^{(\mathrm{1st})}\oplus dF$ in $\Omega^{(\mathrm{2nd})}$. \end{enumerate} \end{theorem} \begin{proof} Let $(\omega)_{\infty}=n_{1}Q_{1}+\dots+n_{l}Q_{l}$ be the polar divisor of $\omega\in\Omega^{(\mathrm{2nd})}$. Since $\mathrm{char}\,k=0$, for every $Q_{i}$, $i=1,\dots,l$, there exists $f_{i}\in F$ such that $v_{Q_{i}}(\omega-df_{i})\geq 0$. Now define $x=\{x_{P}\}_{P\in X}\in\field{A}_{X}$ by \begin{equation*} x_{P}=\begin{cases} \left. f_{i}\right|_{Q_{i}} & P=Q_{i},\quad i=1,\dots,l ,\\ 0 &\text{otherwise.} \end{cases} \end{equation*} Since the divisor $D$ of degree $g$ is non-special, we have $\Omega^{1}_{F/k}(D)=\{0\}$, and by Serre duality $\field{A}_{X}(D)+F=\field{A}_{X}$. Thus there exists $f\in F$ with the property $v_{P}(f-x)\geq -v_{P}(D)$ for all $P\in X$, so that $(\omega-df)\geq -2D$.
Since $D$ is non-special, such $f$ is unique up to an additive constant, and this proves part (ii). To show that $\dim_{k}\Omega^{(\mathrm{2nd})}/dF=2g$ we observe that $\dim_{k}\Omega^{1}_{F/k}(-2D)=3g-1$ and $\dim_{k}\Omega^{1}_{F/k}(-D)=2g-1$, as follows from the Riemann-Roch theorem. Denote by $\Omega^{(\mathrm{3rd})}$ the $k$-vector space of the differentials of the third kind --- the subspace of $\Omega^{1}_{F/k}$ consisting of differentials with only simple poles. Since $\Omega^{(\mathrm{2nd})}\cap\Omega^{(\mathrm{3rd})}=\Omega^{(\mathrm{1st})}$ and $\Omega^{(\mathrm{3rd})}\cap \Omega^1_{F/k}(-2D)=\Omega^1_{F/k}(-D)$, we conclude \begin{gather*} \dim_{k}\Omega^{(\mathrm{2nd})}\cap\Omega^{1}_{F/k}(-2D)+\dim_{k}\Omega^1_{F/k}(-D) \\=\dim_{k} \Omega^1_{F/k}(-2D)+\dim_{k}\Omega^{(\mathrm{1st})}, \end{gather*} so that by part (ii), $$\dim_{k}\Omega^{(\mathrm{2nd})}/dF=(3g-1)-(2g-1)+g=2g.$$ To finish the proof, consider the $k$-linear mapping $$L: \Omega^{(\mathrm{2nd})}\cap \Omega^1_{F/k}(-2D)\rightarrow k^{2g},$$ defined by $L(\omega)=(\alpha_{1}(\omega),\dots,\alpha_{g}(\omega),\beta_{1}(\omega),\dots,\beta_{g}(\omega))$, where \begin{equation*} v_{P_i}\bigl(\omega -(\alpha_i(\omega)t^{-2}_i + \beta_i(\omega))\,dt_i\bigr) > 0,\quad i=1,\dots,g. \end{equation*} Since the divisor $D$ is non-special, the mapping $L$ is injective and, therefore, an isomorphism. The differentials $\omega_i$ and $\theta_i$ are obtained by choosing the only non-zero components of $L$ to be, respectively, $\alpha_i=1$ and $\beta_i=1$. \end{proof} \begin{remark} The choice of a non-special effective divisor $D=P_{1}+\dots +P_{g}$ on $X$ with distinct points $P_{i}$ and the uniformizers $t_{i}$ can be considered as an algebraic analog of the choice of ``$a$-cycles'' on a compact Riemann surface of genus $g\geq 1$.
Correspondingly, differentials $\theta_i$ are analogs of the differentials of the first kind with normalized ``$a$-periods'', and differentials $\omega_i$ are analogs of the differentials of the second kind with second order poles, ``zero $a$-periods'' and normalized ``$b$-periods''. The symplectic property of the basis $\{\theta_i,\omega_i\}_{i=1}^g$ is an analog of the ``reciprocity law for differentials of the first and the second kind'' (see \cite[Ch.~5, \S1]{Iwasawa} and \cite[Ch.~VI, \S3]{Kra}). \end{remark} \begin{remark} The condition that the effective non-special divisor $D$ consists of $g$ distinct points is not essential. The statement of Theorem \ref{Chevalley}, as well as of all other results in the paper, can be easily modified to include divisors with multiple points. \end{remark} A differential of the second kind $\omega$ is said to have zero $a$-periods if \begin{displaymath} (\omega,\omega_i)_X=0,\quad i=1,\dots,g. \end{displaymath} It follows from Theorem \ref{Chevalley} that a differential of the first kind with zero $a$-periods is zero. The vector space $\Omega^{(\mathrm{2nd})}_0$ of differentials of the second kind with zero $a$-periods has the following properties. \begin{proposition} \label{Second} \mbox{} \begin{itemize} \item[(i)] The $k$-vector space $\Omega^{(\mathrm{2nd})}_0$ is a complementary isotropic subspace to $\Omega^{(\mathrm{1st})}$ in $\Omega^{(\mathrm{2nd})}$ and \begin{equation*} \Omega^{(\mathrm{2nd})}_0 = k\cdot\omega_1\oplus\dots\oplus k\cdot\omega_g\oplus dF. \end{equation*} \item[(ii)] There is a direct sum decomposition \begin{displaymath} \Omega^{(\mathrm{2nd})}_0 = \bigoplus_{P\in X} \Omega_0 (\ast\,P), \end{displaymath} where $\Omega_0(\ast\,P)$ is the subspace of differentials of the second kind with zero $a$-periods whose only pole is at $P\in X$.
\item[(iii)] For every $P\in X$ the $k$-vector space $\Omega_0(\ast\,P)$ has a natural filtration \begin{displaymath} \{0\} = \Omega_0(-P)\subset\Omega_0(-2P)\subset\dots\subset \Omega_0(-n P)\subset\dots, \end{displaymath} where $\Omega_0(-n P)$ is the subspace of differentials of the second kind with zero $a$-periods whose only pole is at $P$, of order not greater than $n$, \begin{displaymath} \dim_k \Omega_0(-n P)=n-1. \end{displaymath} \item[(iv)] Every $\omega\in\Omega_0(-n P)$ admits a unique decomposition \begin{equation*} \omega = d f + \sum_{i=1}^g c_i\omega_i, \end{equation*} where $f\in H^{0}(X,\mathcal{F}(D+(n-1)P))$. \end{itemize} \end{proposition} \begin{proof} Part (i) follows from Theorem \ref{Chevalley} since the divisor $D$ is non-special, and part (ii) is clear. Since $\dim_k \Omega^1_{F/k}(-n P)=n-1+g$, part (iii) follows from the decomposition \begin{displaymath} \Omega^1_{F/k}(-n P) = \Omega_0(-nP)\oplus\Omega^{(\mathrm{1st})}. \end{displaymath} The divisor $D=P_1 +\dots + P_g$ is non-special, so that $h^0(D+(n-1)P)= n$, and part (iv) also follows from Theorem \ref{Chevalley}. \end{proof} \begin{definition} A space of ``additive multi-valued functions on $X$'' (additive functions for brevity) is a subspace $\mathcal{A}(X)\subset\field{A}_X$ with the following properties. \begin{itemize} \item[\textbf{AF1}] $F\subseteq\mathcal{A}(X)$. \item[\textbf{AF2}] For every $a \in\mathcal{A}(X)$, $da=\omega\in\Omega^1_{F/k}$ (and hence $\omega\in\Omega^{(\mathrm{2nd})}$). \item[\textbf{AF3}] If $a\in\mathcal{A}(X)$ satisfies $da=df$ for $f\in F$, then $a-f=c\in k$. \end{itemize} \end{definition} \begin{remark} For every $\omega\in\Omega^{(\mathrm{2nd})}$ the corresponding $a=\{a_P\}_{P\in X}=d^{-1}\omega$ is defined up to the choice of additive constants for every $P\in X$. Condition \textbf{AF3} ensures that for all $f\in F$ these choices are compatible with the equation $f = d^{-1}(d f) +c$.
\end{remark} \begin{example} \label{Additive} For every non-special effective divisor $D$ of degree $g$ on $X$, $D=P_1 + \dots + P_g$ with distinct points $P_{i}$, and a choice of uniformizers $t_{i}$ at $P_{i}$, there is an associated space of additive functions $\mathcal{A}(X;D)$ with ``zero $a$-periods'', defined as follows. Let $\eta_i\in\field{A}_X$ be the solutions of the equations $$d\eta_i = \omega_i, \quad i=1,\dots,g, $$ with any fixed choice of additive constants at all points $P\in X$. Since the divisor $D$ is non-special, the subspaces $k\cdot\eta_1\oplus\dots\oplus k\cdot\eta_g$ and $F$ of the $k$-vector space $\field{A}_{X}$ have zero intersection. Their direct sum, the subspace \begin{equation} \label{additive space} \mathcal{A}(X;D) =k\cdot\eta_1\oplus\dots\oplus k\cdot\eta_g\oplus F \subset\field{A}_X, \end{equation} satisfies properties \textbf{AF1}--\textbf{AF3} and the mapping $d: \mathcal{A}(X;D) \rightarrow\Omega^{(\mathrm{2nd})}_0$ is surjective. Indeed, according to Proposition \ref{Second}, every $\omega\in\Omega^{(\mathrm{2nd})}_0$ admits a unique decomposition \begin{equation} \label{dec-1} \omega = df + \sum_{i=1}^g c_i\omega_i, \end{equation} and \begin{equation} \label{dec-2} a=d^{-1}\omega = f + \sum_{i=1}^g c_i\eta_i + c \in\mathcal{A}(X;D). \end{equation} \end{example} \begin{remark} Additive functions $a=d^{-1}\omega\in\mathcal{A}(X;D)$ are algebraic analogs of abelian integrals of the second kind with zero $a$-periods on a compact Riemann surface of genus $g$ (see, e.g., \cite[Ch.~V, \S2]{Iwasawa}), and we can define $$\int_{P}^{Q}\omega =a(Q)-a(P),$$ where $a(P)=a_{P}\!\!\!\mod\mathfrak{p}\in k$ for every $P\in X$. \end{remark} It is quite remarkable that, using the additive functions introduced in Example \ref{Additive}, one can naturally define uniformizers $t_{P}$ at all points $P\in X$.
These uniformizers are uniquely determined by the following data: a choice of a non-special divisor $D=P_1 + \dots + P_g$ with distinct points, uniformizers $t_{i}$ at $P_{i}$, and additive functions $\eta_{1},\dots,\eta_{g}$. Namely, for every $P\in X$ let $\omega^{(2)}_{P}\in \Omega_{0}(-2P)$ be a differential of the second kind with the second order pole at $P$ and zero $a$-periods, uniquely characterized by the condition \begin{equation} \label{norm} \sum_{i=1}^{g}\big(\theta_{i},\omega^{(2)}_{P}\big)_{X}=1. \end{equation} In particular, $\omega^{(2)}_{P_i}=\omega_i$ for $i=1,\dots, g$. Let $\eta_P=d^{-1}\omega^{(2)}_P\in\mathcal{A}(X;D)$ be the additive function with the only simple pole at $P\in X$. According to \eqref{dec-2}, $\eta_P$ is defined up to an overall additive constant, which we fix by the condition that the sum of constant terms of $\left.\eta_P\right|_{P_{i}}\in k((t_{i}))$ over all $i=1,\dots,g$ is equal to zero. In particular, $\eta_{P_i} =\eta_i+c_{i}$ for some $c_{i}\in k$. For every $P\in X$ the uniformizer $t_{P}$ is defined by $$t_{P}= -\left.\frac{1}{\eta_P}\right|_{P}, $$ and for $\omega^{(2)}_{P}=d\eta_{P}$ we get $$\left.\omega^{(2)}_{P}\right|_{P}=t_{P}^{-2}dt_{P},\quad P\in X.$$ Extending this construction, for every $P\in X$ we choose a basis $\{\omega^{(n+1)}_P\}_{n=1}^\infty$ of the subspace $\Omega_0(\ast\,P)$ which consists of differentials of the second kind with the only pole at $P$ of order $n+1$ and zero $a$-periods, where the differentials $\omega^{(2)}_{P}$ are specified by \eqref{norm}. Let $\eta^{(n)}_P=d^{-1}\omega^{(n+1)}_P\in\mathcal{A}(X;D)$ be the additive function with the only pole at $P\in X$ of order $n$, where the overall additive constant in \eqref{dec-2} is fixed as follows. We set $\eta^{(1)}_P=\eta_{P}$, and for $\eta^{(n)}_P$ with $n>1$ we impose the condition that the constant term of $\left.\eta^{(n)}_P\right|_{P}\in k((t_{P}))$ is zero.
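\begin{remark} As a simple consistency check (a formal genus zero illustration, outside the standing assumption $g\geq 1$), take $X=\mathbb{P}^{1}_{k}$, $F=k(z)$ and $\eta_{P}(z)=-(z-P)^{-1}$ for $P\in k$, as in Remark \ref{0-a} below. Then $$t_{P}= -\left.\frac{1}{\eta_{P}}\right|_{P}=z-P\quad\text{and}\quad \omega^{(2)}_{P}=d\eta_{P}=\frac{dz}{(z-P)^{2}}=t_{P}^{-2}dt_{P},$$ in agreement with the formulas above. \end{remark}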
Introducing for every $P\in X$ the subspace $\mathcal{A}_P(X;D)$ --- the $k$-span of $\eta^{(n)}_P$, $n\in\field{N}$ --- we have the decomposition \begin{equation} \label{AD} \mathcal{A}(X;D)=\left(\bigoplus_{P\in X}\mathcal{A}_P(X;D)\right)\oplus k. \end{equation} The property that the subspace $\Omega_0^{(\mathrm{2nd})}=d\,\mathcal{A}(X;D)$ is isotropic, together with the condition \textbf{AF3}, can be equivalently stated as follows. \begin{lemma} \label{reciprocity}\mbox{} \begin{itemize} \item[(i)] For every $P,Q\in X$ and $m,n\in\field{N}$, \begin{equation*} \Res_P(\eta_P^{(m)}d\,\eta_Q^{(n)}) = \Res_Q(\eta_Q^{(n)}d\,\eta_P^{(m)}). \end{equation*} \item[(ii)] Every $f\in F$ admits a unique ``partial fraction expansion'' \begin{equation*} f = \sum_{i=1}^l\sum_{j=1}^{n_i} c_{ij}\eta^{(j)}_{Q_i} + c, \end{equation*} where $n_1 Q_1 +\dots + n_l Q_l=(f)_\infty$ is the polar divisor of $f$, and $c,c_{ij}\in k$. \end{itemize} \end{lemma} \begin{proof} Since $\Res_{Q}(da)=0$ for every $a\in F_{Q}$, we get for $P\neq Q$, \begin{align*} 0=(\omega_{P}^{(m+1)},\omega_{Q}^{(n+1)})_{X} &=\Res_{P}(\eta_{P}^{(m)}d\eta_{Q}^{(n)}) + \Res_{Q}(\eta_{P}^{(m)}d\eta_{Q}^{(n)}) \\ & =\Res_{P}(\eta_{P}^{(m)}d\eta_{Q}^{(n)}) - \Res_{Q}(\eta_{Q}^{(n)}d\eta_{P}^{(m)}). \end{align*} For $P=Q$ we get $0=(\omega_{P}^{(m+1)},\omega_{P}^{(n+1)})_{X} =\Res_{P}(\eta_{P}^{(m)}d\eta_{P}^{(n)})$ for all $m,n\in\field{N}$. Part (ii) immediately follows from \textbf{AF3} since there are $c_{ij}\in k$ such that \begin{equation*} df- \sum_{i=1}^l\sum_{j=1}^{n_i} c_{ij}\omega^{(j+1)}_{Q_i}\in\Omega_0^{(\mathrm{2nd})}\cap\Omega^{(\mathrm{1st})}=\{0\}. \qedhere \end{equation*} \end{proof} \begin{remark} The first statement of Lemma \ref{reciprocity} is an algebraic analog of the classical ``reciprocity law for the differentials of the second kind with zero $a$-periods'' on a compact Riemann surface (see, e.g., \cite[Ch.~V, \S1]{Iwasawa} and \cite[Ch.~VI, \S3]{Kra}).
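As an illustration, in the formal genus zero specialization of Remark \ref{0-a} below, with $\eta_{P}^{(1)}(z)=-(z-P)^{-1}$ for $P\in k$, both sides of the identity in part (i) with $m=n=1$ and distinct $P,Q\in k$ are equal to the same constant: $$\Res_{P}\Big(\!-\frac{1}{z-P}\,\frac{dz}{(z-Q)^{2}}\Big)=-\frac{1}{(P-Q)^{2}}= \Res_{Q}\Big(\!-\frac{1}{z-Q}\,\frac{dz}{(z-P)^{2}}\Big).$$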
\end{remark} \begin{remark}\label{0-a} In the genus zero case $X=\mathbb{P}^{1}_{k}=k\cup\{\infty\}$, $F=k(z)$, and $$\omega_{P}^{(n+1)}=\frac{dz}{(z-P)^{n+1}}\;\;\text{for}\;\; P\in k,\quad \omega_{P}^{(n+1)}=-z^{n-1}dz\;\;\text{for}\;\; P=\infty.$$ Correspondingly, $$\eta_{P}^{(n)}(z)=-\frac{1}{n(z-P)^{n}}\;\;\text{for}\;\; P\in k,\quad \eta_{P}^{(n)}(z)=-\frac{z^{n}}{n}\;\;\text{for}\;\; P=\infty.$$ \end{remark} \subsection{Differentials of the third kind and ``multiplicative functions''\label{M-functions}} The $k$-vector space $\Omega^{(\mathrm{3rd})}$ of differentials of the third kind contains a $g$-dimensional subspace $\Omega^{(\mathrm{1st})}$ and the subspace $d\log F^\ast$ consisting of logarithmic differentials \begin{equation*} d\log f =\frac{df}{f},\quad f\in F^\ast = F\setminus\{0\}. \end{equation*} Let $D=P_{1}+\dots +P_{g}$ be a non-special divisor of degree $g$ on $X$ with distinct points, and let $\mathcal{A}(X;D)$ be the vector space of additive functions with zero $a$-periods, defined in Example \ref{Additive}. \begin{definition} A differential of the third kind $\omega$ is said to have zero $a$-periods if \begin{equation*} (\omega_{i},\omega)_{X}=\sum_{P\in X}\Res_P\left(\eta_i\,\omega\right) = 0\quad\text{for}\quad i=1,\dots,g, \end{equation*} where $\eta_{i}=d^{-1}\omega_{i}\in \mathcal{A}(X;D)$. \end{definition} It follows from \eqref{additive space} and the residue theorem that every differential of the third kind $\omega$ with zero $a$-periods satisfies \begin{equation}\label{zero a-periods} \sum_{P\in X}\Res_P\left(\eta_P\,\omega|_P\right) = 0 \end{equation} for all $\eta=\{\eta_{P}\}_{P\in X}\in\mathcal{A}(X;D)$. By the Riemann-Roch theorem $\dim_k\Omega^{1}_{F/k}(-P-Q) = g+1$, so that for every $P,Q\in X$, $P\neq Q$, there is a differential of the third kind whose only poles are simple poles at $P$ and $Q$, with residues $1$ and $-1$ respectively.
Such differentials form an affine space over the vector space $\Omega^{(\mathrm{1st})}$ of differentials of the first kind, and there exists a unique differential of the third kind with zero $a$-periods, which we denote by $\omega_{PQ}$. The differentials $\omega_{PQ}$ for all $(P,Q)\in X\times X,\,P\neq Q$, span the $k$-vector space $\Omega^{(\mathrm{3rd})}_0$ of differentials of the third kind with zero $a$-periods, the complementary subspace to $\Omega^{(\mathrm{1st})}$ in $\Omega^{(\mathrm{3rd})}$, \begin{displaymath} \Omega^{(\mathrm{3rd})} = \Omega^{(\mathrm{3rd})}_0\oplus\Omega^{(\mathrm{1st})}. \end{displaymath} Note that so far we have not specified the choice of arbitrary constants in the definition of the additive functions $\eta_i$, $i=1,\dots,g$, so that we cannot guarantee that the logarithmic differentials $d\log f, f\in F^\ast$, have zero $a$-periods. We have the following result. \begin{lemma} \label{Normal} There is a choice of additive constants in the definition of $\eta_i\in\mathcal{A}(X;D)$, $i=1,\dots,g$, such that $d\log F^{\ast}\subset\Omega^{(\mathrm{3rd})}_{0}$, and all such choices are parametrized by $g$ elements in $\mathrm{Hom}(\mathrm{Pic}_{0}(X),k)$. \end{lemma} \begin{proof} Every choice of additive constants for $ \eta_{i}$ --- an element $c_{i}=\{c_{iP}\}_{P\in X}\in\field{A}_{X}$ --- defines a homomorphism $\pi_{i}:\Div_{0}(X)\rightarrow k$ by $$\pi_{i}(D')=\sum_{j}c_{iQ_{j}}n_{j},\quad \text{where}\quad D'=\sum_{j}n_{j}Q_{j}.$$ The condition $$\sum_{P\in X}\Res_P(\eta_i\,d\log f) = 0\quad\text{for all}\quad f\in F^{\ast}$$ implies that the restriction of $\pi_{i}$ to the subgroup $\mathrm{PDiv}(X)$ of principal divisors is given by \begin{equation*} \pi_{i}((f))=\Res_{P_{i}}\frac{d\log f}{t_{i}},\quad i=1,\dots,g. \end{equation*} Since the field $k$ has characteristic zero, it is an injective $\field{Z}$-module and, therefore, $\mathrm{Ext}^{1}(A,k)=0$ for any $\field{Z}$-module $A$.
Applying this to the $\field{Z}$-module $A=\mathrm{Pic}_{0}(X)=\Div_{0}(X)/\mathrm{PDiv}(X)$, we see that the restriction mapping $$\mathrm{Hom}(\mathrm{Div}_{0}(X),k)\rightarrow\mathrm{Hom}(\mathrm{PDiv}(X),k)$$ is surjective. \end{proof} \begin{definition} The space of additive functions $\mathcal{A}(X;D)$ is said to be compatible with the multiplicative group $F^\ast$ of the field $F$ of rational functions on $X$ if $d\log F^\ast\subset\Omega^{(\mathrm{3rd})}_0$. \end{definition} In this case, denoting by $(f)=\sum_{i=1}^{n}(Q_{i}-R_{i})$ the divisor of $f\in F^{\ast}$, we have \begin{equation} \label{dlogf-representation} \frac{df}{f}=\sum_{i=1}^{n}\omega_{Q_{i}R_{i}}. \end{equation} Indeed, $d\log f - \sum_{i=1}^{n}\omega_{Q_{i}R_{i}}=0$ since it is a differential of the first kind with zero $a$-periods. For every $\omega_{QR}\in\Omega^{(\mathrm{3rd})}_{0}$ let $f_{QR}=\{f_{QR,P}\}_{P\in X}\in\field{J}_{X}$ be an idele such that $$\omega_{QR}=d\log f_{QR}=\frac{df_{QR}}{f_{QR}}.$$ It has the property $v_{P}(f_{QR,P})=0$ for all $P\neq Q,R$, $v_{P}(f_{QR,P})=1$ for $P=Q$, and $v_{P}(f_{QR,P})=-1$ for $P=R$. For every $P\in X$, the element $f_{QR,P}\in F_{P}^{\ast}$ is defined up to an arbitrary multiplicative constant $c_{P}\in k^{\ast}$. Since for arbitrary distinct points $Q,R,S\in X$ $$\omega_{QR}+\omega_{RS}=\omega_{QS},$$ we get that for every $P \in X$ and an arbitrary choice of multiplicative constants, \begin{equation} \label{cocycle} \frac{f_{QR,P}f_{RS,P}}{f_{QS,P}}=c_{QRS,P}\in k^{\ast}. \end{equation} Moreover, equation \eqref{cocycle} extends to the case of coincident points $Q,R,S$ if we put $f_{QQ,P}=\alpha_{QQ,P}\in k^{\ast}$, etc.
It follows from \eqref{cocycle} that for every $P\in X$ the elements $c=\{c_{QRS,P}\}$ satisfy the ``$3$-cocycle condition'' $$c_{QRS,P}\,c_{QST,P}=c_{QRT,P}\,c_{RST,P}.$$ Clearly, the $3$-cocycle $c$ is a coboundary: there exists a ``$2$-cochain'' $b=\{b_{QR,P}\}$ such that $$c_{QRS,P}=\frac{b_{QR,P}\,b_{RS,P}}{b_{QS,P}}.$$ This shows that we can choose the multiplicative constants in the definition of $f_{QR,P}$ such that equation \eqref{cocycle} becomes \begin{equation} \label{cocycle-2} \frac{f_{QR,P}f_{RS,P}}{f_{QS,P}}=1. \end{equation} Now it follows from \eqref{cocycle-2} that for every $Q\in X$ there is an idele $e_{Q}=\{e_{Q,P}\}_{P\in X}\in\field{J}_{X}$ such that $v_{P}(e_{Q,P})=0$ for all $P\neq Q$, $v_{P}(e_{Q,P})=1$ for $P=Q$, and $$f_{QR}=\frac{e_{Q}}{e_{R}}.$$ The elements $e_{Q,P}\in F^{\ast}_{P}$ for every $P\in X$ are defined up to multiplicative constants, and we choose them in such a way that the constants $c_{Q,P}=e_{Q,P}\!\!\!\mod\mathfrak{p}\in k^{\ast}$, defined for $P\neq Q$, always satisfy $c_{Q,P}=-c_{P,Q}$. We will use the convenient notation $c_{Q,P}=e_{Q}(P)$. For $P=Q$ we have $e_{Q,Q}=c_{Q,Q}t_{Q}(1+ O(t_{Q}))$, where $c_{Q,Q}\in k^{\ast}$, and $t_{Q}$ are the uniformizers at points $Q\in X$ associated with the space of additive functions $\mathcal{A}(X;D)$, defined in Section \ref{A-functions}. We will also use the notation $c_{Q,Q}=\dot{e}_{Q}(Q)$. For distinct points $P,Q,R,S\in X$ we define an algebraic analog of the exponential of an abelian integral of the third kind on a compact Riemann surface of genus $g$ by $$\exp\int_{P}^{Q}\omega_{RS}=\frac{f_{RS}(Q)}{f_{RS}(P)},$$ where $f_{RS}(P)=f_{RS,P}\!\!\!\mod\mathfrak{p}\in k^{\ast}$, $\displaystyle{f_{RS}(P)=\frac{e_{R}(P)}{e_{S}(P)}}$. The following result is an analog of the reciprocity law for the normalized differentials of the third kind --- the classical ``exchange law of variable and parameter'' (see, e.g., \cite[Ch.~V, \S1]{Iwasawa} and \cite[Ch.~VI, \S3]{Kra}).
\begin{lemma} \label{Law} For distinct points $P,Q,R,S\in X$, $$\exp\int_{R}^{S}\omega_{PQ}=\exp\int_{P}^{Q}\omega_{RS}.$$ \end{lemma} \begin{proof} We have \begin{align*} \exp\int_{R}^{S}\omega_{PQ}&=\frac{f_{PQ}(S)}{f_{PQ}(R)}= \frac{e_{P}(S)e_{Q}(R)}{e_{Q}(S)e_{P}(R)} \\ & = \frac{e_{S}(P)e_{R}(Q)}{e_{S}(Q)e_{R}(P)}= \frac{f_{RS}(Q)}{f_{RS}(P)}=\exp\int_{P}^{Q}\omega_{RS}. \qedhere \end{align*} \end{proof} It follows from \eqref{dlogf-representation} that for $f\in F^{\ast}$ $$\left.f\right|_{P}=c_{P}\prod_{i=1}^{n}f_{Q_{i}R_{i},P}=c_{P}\prod_{i=1}^{n}\frac{e_{Q_{i},P}}{e_{R_{i},P}}$$ for every $P\in X$, where $(f)=\sum_{i=1}^{n}(Q_{i}-R_{i})$. We finalize our choice of multiplicative constants $c_{P,Q}$ by the condition that $c_{P}=c\in k^{\ast}$ --- a constant depending on $f$ --- for all $P\in X$, so that every $f\in F^{\ast}$ can be written in a `factorized form' \begin{equation} \label{f-global} f=c\prod_{i=1}^{n}f_{Q_{i}R_{i}}. \end{equation} \begin{proposition}\label{Mult-Existence} There is a choice of constants $\{c_{P,Q}\}_{P,Q\in X}\in k^{\ast}$ in the definition of the ideles $e_{P}\in\field{J}_{X}$, $P\in X$, satisfying $c_{P,Q}=-c_{Q,P}$ for all $P\neq Q$, and having the property that the factorization formula \eqref{f-global} holds for every $f\in F^{\ast}$. \end{proposition} \begin{proof} For $u\in\mathcal{O}_{P}^{\ast}$ put $u(P)=u\!\!\mod\mathfrak{p}\in k^{\ast}$. We need to show that there exist $c_{P,Q}\in k^{\ast}$ satisfying $c_{P,Q}=-c_{Q,P}$ for $P\neq Q$, such that for every $f\in F^{\ast}$, \begin{displaymath} \prod_{i=1}^{n}\frac{c_{Q_{i},P}}{c_{R_{i},P}}= c\left(\frac{\!\!f}{\;t_{P}^{v_{P}(f)}}\right)(P) \end{displaymath} for all $P\in X$ and some $c=c(f)\in k^{\ast}$, where $(f)=\sum_{i=1}^{n}(Q_{i}-R_{i})$. 
For every $D'\in\mathrm{PDiv}(X)$ choose some $f\in F^{\ast}$ such that $D'=(f)$, put $$c_{1}(D',P)=\left(\frac{\!\!f}{\;t_{P}^{v_{P}(f)}}\right)(P),\quad P\in X,$$ and extend it to a group homomorphism $c_{1}:\mathrm{PDiv}(X)\times\Div(X)\rightarrow k^{\ast}$ by multiplicativity. Similarly, define $c_{2}:\Div(X)\times \mathrm{PDiv}(X)\rightarrow k^{\ast}$ by $$c_{2}(P,D')=(-1)^{v_{P}(f)}c_{1}(D',P),\quad D'\in\mathrm{PDiv}(X).$$ We claim that \begin{equation} \label{1=2} \left. c_{1}\right|_{\mathrm{PDiv}(X)\times\mathrm{PDiv}(X)}=\left. c_{2}\right|_{\mathrm{PDiv}(X)\times\mathrm{PDiv}(X)}. \end{equation} Indeed, $$c_{1}((f),(g))= \prod_{P\in\mathrm{supp}\,(g)}\left(\left(\frac{\!\!f}{\;t_{P}^{v_{P}(f)}}\right)(P)\right)^{v_{P}(g)} =\prod_{P\in X}\left(\frac{\!\!f^{v_{P}(g)}}{\;t_{P}^{v_{P}(f)v_{P}(g)}}\right)(P) $$ and $$c_{2}((f),(g))=(-1)^{\sum_{P\in X}v_{P}(f)v_{P}(g)}c_{1}((g),(f)).$$ As a result, equation \eqref{1=2} takes the form \begin{displaymath} c_{1}((f),(g))=(-1)^{\sum_{P\in X}v_{P}(f)v_{P}(g)}c_{1}((g),(f)), \end{displaymath} which is the A. Weil reciprocity law \eqref{Weil-reciprocity}. Now it follows from equation \eqref{1=2} that there is a group homomorphism $c: \Div(X)\times \Div(X)\rightarrow k^{\ast}$ satisfying \begin{equation*} c(D_{1},D_{2})=(-1)^{\deg D_{1}\deg D_{2} +\sum_{P\in X}v_{P}(D_{1})v_{P}(D_{2})}c(D_{2},D_{1}) \end{equation*} for all $D_{1}, D_{2}\in\Div(X)$, and such that its restrictions to the subgroups $\mathrm{PDiv}(X)\times\Div(X)$ and $\Div(X)\times \mathrm{PDiv}(X)$ coincide, respectively, with $c_{1}$ and $c_{2}$. The homomorphism $c$ necessarily has the form \begin{displaymath} c(D_{1},D_{2})=\prod_{i,j}c(Q_{i},R_{j})^{n_{i}m_{j}},\quad\text{where}\quad D_{1}=\sum_{i}n_{i}Q_{i},\;D_{2}=\sum_{j}m_{j}R_{j}, \end{displaymath} and the constants $c_{P,Q}=c(P,Q)$ satisfy the required conditions.
\end{proof} \begin{remark} The family of ideles $e_{P}\in\field{J}_{X}$, $P\in X$, has the property \begin{equation} \label{omega-prime} \omega_{PQ}=d(\log e_{P}-\log e_{Q}), \end{equation} and can be considered as an algebraic analog of the classical Schottky-Klein prime form --- a special multi-valued function $E(x,y)$ on the complex surface $X\times X$ with a single simple zero along the diagonal $\Delta$, which satisfies \eqref{omega-prime}. We refer to \cite{Fay, Mumford} for the analytic definition of $E(x,y)$, and to \cite{Raina} for the algebraic definition. In the complex analytic case, \begin{equation} \label{Bergmann} \omega_{B}=d_{x}d_{y}\log E(x,y) \end{equation} is the so-called Bergmann kernel --- a symmetric bidifferential on $X\times X$ with a single second-order pole on the diagonal $\Delta$ with the biresidue $1$ and zero $a$-periods. As in \cite{Biswas-Raina}, one can show that there is an algebraic analog of the Bergmann kernel for the algebraic curve $X$. It would also be interesting to introduce an analog of the Schottky-Klein prime form starting from \eqref{Bergmann}. This approach would require using Parshin's adeles \cite{Parshin} for the algebraic surface $X\times X$, and is beyond the scope of this paper. \end{remark} \begin{remark}\label{0-m} In the genus zero case $X=\mathbb{P}^{1}_{k}$, $F=k(z)$, and the family of ideles $e_{P}\in\field{J}_{X}$ is given explicitly by $$e_{P,Q}=\begin{cases}z-P\in F\subset F_{Q}=k((z-Q)) &\text{for}\;\;P,Q\in k,\\ 1-z^{-1}P\in F\subset F_{\infty}=k((z^{-1}))& \text{for}\;\;P\in k,\,Q=\infty,\\ 1\in F\subset F_{Q}=k((z-Q)) &\text{for}\;\;P=\infty,\,Q\in k,\\ z^{-1}\in F\subset F_{\infty}=k((z^{-1})) &\text{for}\;\;P=Q=\infty.\\ \end{cases}$$ Correspondingly, $c_{P,Q}=Q-P$ for $P,Q\in k$, $P\neq Q$, $c_{\infty,P}=-c_{P,\infty}=1$ for $P\in k$, and $c_{P,P}=1$ for all $P\in k\cup\{\infty\}$.
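In particular, one checks directly that the factorization formula \eqref{f-global} holds here with $c=1$: for example, for $f=\dfrac{z-Q}{z-R}$ with distinct $Q,R\in k$ we have $(f)=Q-R$ and $f=f_{QR}=e_{Q}/e_{R}$, since at every point of $k$ the quotient $e_{Q}/e_{R}$ equals $\dfrac{z-Q}{z-R}$, while at $P=\infty$ $$\frac{e_{Q,\infty}}{e_{R,\infty}}=\frac{1-z^{-1}Q}{1-z^{-1}R}=\frac{z-Q}{z-R}\in F_{\infty}=k((z^{-1})).$$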
\end{remark} Similar to the previous section, we define algebraic analogs of multiplicative multi-valued functions on a compact Riemann surface (see, e.g., \cite[Ch.~5, \S2]{Iwasawa} and \cite[Ch.~VI, \S4]{Kra}) as follows. \begin{definition} A group of ``multiplicative multi-valued functions on $X$'' (multiplicative functions for brevity) is a subgroup $\mathcal{M}(X)\subset\field{J}_X$ with the following properties. \begin{itemize} \item[\textbf{MF1}] $F^{\ast}\subseteq\mathcal{M}(X)$. \item[\textbf{MF2}] For every $m \in\mathcal{M}(X)$, $\displaystyle{\frac{dm}{m}=\omega\in\Omega^1_{F/k}}$ (and hence $\omega\in\Omega^{(\mathrm{3rd})}$). \item[\textbf{MF3}] If $m\in\mathcal{M}(X)$ and $f\in F^{\ast}$ satisfy $\displaystyle{\frac{dm}{m}=\frac{df}{f}}$, then $m=cf$, $c\in k^{\ast}$. \end{itemize} \end{definition} \begin{example} \label{2} Let $D=P_{1}+\dots + P_{g}$ be a non-special divisor with distinct points, and let $\mathcal{A}(X;D)$ be the corresponding space of additive functions compatible with the multiplicative group $F^{\ast}$. We define the associated group of multiplicative functions $\mathcal{M}(X,D)$ as the subgroup of $\field{J}_{X}$ generated by the ideles $$f_{PQ}=\frac{e_{P}}{e_{Q}},\quad P\neq Q\in X.$$ Properties \textbf{MF1}--\textbf{MF3} immediately follow from our definition of the family $e_{P}\in\field{J}_{X}$, $P\in X$. The mapping $$\mathrm{Div}_{0}(X)\ni D'=\sum_{i=1}^{n}(Q_{i}-R_{i})\mapsto m_{D'}= \prod_{i=1}^{n}f_{Q_{i}R_{i}}\in\mathcal{M}(X,D)$$ establishes the group isomorphism $\mathrm{Div}_{0}(X)\simeq \mathcal{M}(X,D)/k^{\ast}$. It follows from the Riemann-Roch theorem that every $D'\in\mathrm{Div}_{0}(X)$ can be represented as $$D'=(f)+\sum_{i=1}^{g}(Q_{i}-P_{i}),\quad f\in F,$$ and such a representation is unique if and only if the divisor $Q_{1}+\dots+Q_{g}$ is non-special. Thus for every $m=cm_{D'}\in\mathcal{M}(X,D)$ we have the decomposition $m=cf\prod_{i=1}^{g}f_{Q_{i}P_{i}}.$ \end{example} \begin{remark} In the genus zero case the multiplicative functions $f_{PQ}\in F=k(z)$ are given explicitly by $$f_{PQ}=\frac{z-P}{z-Q}\;\;\text{for}\;\; P,Q\in k,\quad f_{PQ}=\frac{1}{z-Q}\;\;\text{for}\;\; P=\infty,\,Q\in k.$$ \end{remark} For every $P\in X$ put \begin{equation*} F_{P}^{-} =\left.\mathcal{A}_P(X;D)\right|_P \subset F_{P}, \end{equation*} and let $u_{P}^{(n)}$, $n\in\field{N}$, be the basis of $\mathfrak{p}\subset F_{P}$ dual to the basis $v^{(n)}_P=\left.\eta^{(n)}_P\right|_P$ in $F_{P}^{-}$ with respect to the pairing $c: \mathfrak{p}\otimes F_{P}^{-}\rightarrow k$ given by $$c(u,v)=-\Res_{P}(udv).$$ \begin{lemma} \label{prime-taylor} The multiplicative functions $f_{PQ}=\{f_{PQ,R}\}_{R\in X}\in\mathcal{M}(X,D)$ in Example \rm{\ref{2}} are given by \begin{equation*} f_{PQ,R}=\begin{cases} \displaystyle{\frac{e_{P}(R)}{e_{Q}(R)} \exp\left\{\sum_{n=1}^{\infty}u_{R}^{(n)}\big(\eta_{R}^{(n)}(Q)-\eta_{R}^{(n)}(P)\big)\right\}}\quad &\text{if}\quad R\neq P,Q,\\ \displaystyle{\frac{\dot{e}_{P}(P)}{e_{Q}(P)}\,t_{P}\exp\left\{\sum_{n=1}^{\infty} u_{P}^{(n)}\eta_{P}^{(n)}(Q)\right\}}\quad &\text{if}\quad R=P,\\ \displaystyle{\frac{e_{P}(Q)}{\dot{e}_{Q}(Q)}\,t_{Q}^{-1}\exp\left\{-\sum_{n=1}^{\infty}u_{Q}^{(n)}\eta_{Q}^{(n)}(P)\right\}}\quad &\text{if}\quad R=Q. \end{cases} \end{equation*} Here for $R\neq P$ we put $\eta_{R}^{(n)}(P)=\left.\eta_{R}^{(n)}\right|_{P}\!\!\!\mod\mathfrak{p}\in k$, etc. \end{lemma} \begin{proof} Let $\mathfrak{r}$ be the prime ideal of the valuation ring $\mathcal{O}_{R}$.
For $R\neq P,Q$ we have $$f_{PQ,R}=\frac{e_{P}(R)}{e_{Q}(R)}\exp g_{R},\quad \text{where}\quad g_{R}=\sum_{n=1}^{\infty}a_{n}u^{(n)}_{R}\in\mathfrak{r}.$$ It follows from \eqref{zero a-periods} that $$0=(\omega_{R}^{(n+1)},\omega_{PQ})_{X}=\Res_{R}(\eta_{R}^{(n)}dg_{R})+\Res_{P}(\eta_{R}^{(n)}\omega_{PQ})+\Res_{Q}(\eta_{R}^{(n)}\omega_{PQ})$$ and $$a_{n}=c(g_{R},\eta_{R}^{(n)})=-\Res_{R}(g_{R}\,d\eta_{R}^{(n)})=\Res_{R}(\eta_{R}^{(n)}dg_{R})=-\eta_{R}^{(n)}(P)+\eta_{R}^{(n)}(Q).$$ For $R=P$ we have $$f_{PQ,P}=\frac{\dot{e}_{P}(P)}{e_{Q}(P)}\,t_{P}\exp g_{P},\quad \text{where}\quad g_{P}=\sum_{n=1}^{\infty}a_{n}u^{(n)}_{P}\in\mathfrak{p}.$$ It follows from \eqref{zero a-periods}, and the condition that the constant term in the expansion of $\eta_{P}^{(n)}$ at $P$ with respect to $t_{P}$ is zero, that \begin{align*} 0 & =(\omega_{P}^{(n+1)},\omega_{PQ})_{X}=\Res_{P}\Big(\eta_{P}^{(n)}\frac{dt_{P}}{t_{P}}\Big)+ \Res_{P}(\eta_{P}^{(n)}dg_{P})+\Res_{Q}(\eta_{P}^{(n)}\omega_{PQ}) \\ & =-\Res_{P}(g_{P}d\eta_{P}^{(n)})- \eta_{P}^{(n)}(Q), \end{align*} which gives $a_{n}=\eta_{P}^{(n)}(Q)$. The case $R=Q$ is considered similarly. \end{proof} \begin{remark}\label{0-u} In the genus zero case $F=k(z)$ and $u_{P}^{(n)}=-(z-P)^{n}$ for $P\in k$, $u_{\infty}^{(n)}=-z^{-n}$. \end{remark} \begin{proposition} \label{general-weil} Let $\mathcal{M}(X,D)\subset\field{J}_{X}$ be the group of multiplicative functions, defined in Example \rm{\ref{2}}. The restriction of the global tame symbol $\tau_{X}$ to $\mathcal{M}(X,D)\times\mathcal{M}(X,D)$ is trivial: $\tau_{X}(m_{1},m_{2})=1$ for all $m_{1},m_{2}\in\mathcal{M}(X,D)$. \end{proposition} \begin{proof} It is sufficient to show that $\tau_{X}(f_{PQ},f_{RS})=1$ for all $P,Q,R,S\in X$ such that $P\neq Q$ and $R\neq S$.
Indeed, when all points $P,Q,R,S$ are distinct, we have \begin{align*} \tau_{X}(f_{PQ},f_{RS}) & =\tau_{P}(f_{PQ},f_{RS})\tau_{Q}(f_{PQ},f_{RS}) \tau_{R}(f_{PQ},f_{RS})\tau_{S}(f_{PQ},f_{RS}) \\ & =\frac{f_{RS}(Q)f_{PQ}(R)}{f_{RS}(P)f_{PQ}(S)}=1 \end{align*} as in the proof of Lemma \ref{Law}. Similarly, \begin{align*} \tau_{X}(f_{PQ},f_{PS}) & =\tau_{P}(f_{PQ},f_{PS})\tau_{Q}(f_{PQ},f_{PS}) \tau_{S}(f_{PQ},f_{PS}) \\ & =-\frac{f_{PQ}}{f_{PS}}(P)\frac{f_{PS}(Q)}{f_{PQ}(S)} \\ & = -\frac{e_{S}(P)e_{P}(Q)e_{Q}(S)}{e_{Q}(P)e_{S}(Q)e_{P}(S)}=(-1)^{4}=1 \end{align*} since $e_{P}(Q)=-e_{Q}(P)$ for all $P\neq Q$. Finally, \begin{equation*} \tau_{X}(f_{PQ},f_{PQ})=\tau_{P}(f_{PQ},f_{PQ})\tau_{Q}(f_{PQ},f_{PQ}) =(-1)^{2}=1. \qedhere \end{equation*} \end{proof} \begin{remark} Proposition \ref{general-weil} can be considered as a generalized A. Weil reciprocity law for multiplicative functions. \end{remark} \section{Local Theory\label{Local}} Let $K$ be a complete closed field --- a complete discrete valuation field with the valuation ring $\mathcal{O}_K$, the maximal ideal $\mathfrak{p}$, and the algebraically closed residue field $k=\mathcal{O}_K/\mathfrak{p}$. Every choice of a uniformizer defines an isomorphism $K\simeq k((t))$, so that $K$ can be interpreted as a ``geometric loop algebra'' over the field $k$. The field $K=F_P$, where $P$ is a point on an algebraic curve $X$ over $k$, will be our main example. In this section we introduce infinite-dimensional algebras and groups naturally associated with the field $K$, and construct their highest weight modules. For the case $K=F_{P}$ these objects would define local quantum field theories at $P\in X$. Specifically, we consider the following local QFT's. \begin{enumerate} \item[\textbf{1.}] ``QFT of additive bosons'', which corresponds to the Heisenberg Lie algebra $\mathfrak{g}$ --- a one-dimensional central extension of the geometric loop algebra $\mathfrak{g}\mathfrak{l}_1(K)=K$.
\item[\textbf{2.}] ``QFT of lattice bosons'', which corresponds to the lattice Lie algebra $\mathfrak{l}$ associated with the Heisenberg Lie algebra $\mathfrak{g}$ and the lattice $\field{Z}$. \item[\textbf{3.}] ``QFT of multiplicative bosons'', which corresponds to the pair $(G,\mathfrak{g})$, where $G$ is a central extension of the abelian group $\mathrm{GL}_1(K)=K^\ast$ by the tame symbol. \end{enumerate} \subsection{The Heisenberg algebra\label{Heisenberg-Lie-local}} Let $\Omega^{1}_{K/k}$ be the $K$-module of K\"{a}hler differentials, and let $\displaystyle{\tilde{\Omega}^{1}_{K/k}=\Omega^{1}_{K/k}/\mathcal{Q}}$, where $\displaystyle{\mathcal{Q}=\cap_{n\geq 0}\,\mathfrak{p}^n d(\mathcal{O})}$ (see Section \ref{differentials}). The abelian Lie algebra $\mathfrak{g}\mathfrak{l}_1(K)=K$ over the field $k$ is equipped with the bilinear, skew-symmetric form $c: \wedge^2 K\rightarrow k$, \begin{equation*} c(f,g) = -\Res (fdg),\quad f,g\in K, \end{equation*} where $dg\in \tilde{\Omega}^{1}_{K/k}$. The bilinear form $c$ is continuous with respect to the $\mathfrak{p}$-adic topology in $K$ and the discrete topology in $k$, i.e., $c\in H^{2}_{\mathrm{c}}(K, k)\simeq\Hom_{\mathrm{c}}(\wedge^2 K, k)$ --- the group of continuous $2$-cocycles of $K$ with values in $k$. \begin{definition} The Heisenberg Lie algebra $\mathfrak{g}$ is a one-dimensional central extension of $K$ \begin{equation*} 0\rightarrow k\cdot C\rightarrow\mathfrak{g}\rightarrow K\rightarrow 0 \end{equation*} with the $2$-cocycle $c$. \end{definition} Denoting by $[~,~]$ the Lie bracket in $\mathfrak{g}=K\oplus k\cdot C$, we get \begin{equation*} [f+a\,C, g+b\,C]=c(f,g)\,C,\quad f,g\in K,~a,b\in k. \end{equation*} The Lie subalgebra $\mathfrak{g}_+ =\mathcal{O}_K \oplus k\cdot C$ is a maximal abelian subalgebra in $\mathfrak{g}$. \begin{remark} Let $\Aut\mathcal{O} =\{u\in\mathcal{O} : v(u)=1\}$ be the group of continuous automorphisms of the valuation ring $\mathcal{O}=k[[t]]$ (see~\cite{ben-zvi-frenkel}).
It is easy to show that every continuous linear map $l: k((t)) \otimes_k k((t)) \rightarrow k$ which satisfies \begin{equation*} l(f\circ u, g\circ u) = l(f,g) \end{equation*} for all $f,g\in k((t))$ and $u\in\Aut\mathcal{O}$, is a constant multiple of the map $c$. This clarifies the natural role of the $2$-cocycle $c$ of $K$. In particular, every $\Aut\mathcal{O}$-invariant bilinear form $l$ is necessarily skew-symmetric, which can be considered as a simple algebraic version of the ``spin-statistics theorem''. \end{remark} \begin{definition} A $\mathfrak{g}$-module is a $k$-vector space $V$, with the discrete topology, equipped with a $k$-algebra homomorphism $\rho:\mathfrak{g}\rightarrow\End V$ such that the $\mathfrak{g}$-action on $V$ is continuous, and $\rho(C)=\bm{I}$ --- the identity endomorphism of $V$. \end{definition} Equivalently, for every $v\in V$ there exists an open subspace $U$ in $K$, commensurable with $\mathfrak{p}$, that annihilates $v$: $\rho(U)\,v=0$. Setting $\bm{f}=\rho(f)\in\End V$ for every $f\in K$, we get \begin{equation*} [\bm{f},\bm{g}]=c(f,g)\bm{I} \end{equation*} --- a projective representation of the abelian Lie algebra $K$. \begin{remark} Every choice of the uniformizer defines an isomorphism $K\simeq k((t))$ and a basis $\{t^n\}_{n\in\field{Z}}$ for $K$. Denoting $\bm{\alpha}_n=\rho(t^n)$ and using $c(t^m,t^n)=m\delta_{m,-n}$, we get commutation relations of the ``oscillator algebra'' \begin{equation*} [\bm{\alpha}_m,\bm{\alpha}_n]=m\delta_{m,-n}\bm{I}, \end{equation*} that characterize free bosons in two-dimensional QFT. \end{remark} \begin{definition} The highest weight module for the Heisenberg Lie algebra $\mathfrak{g}$ is an irreducible $\mathfrak{g}$-module with the vector $\bm{1}\in V$ annihilated by the abelian subalgebra $\mathcal{O}_K\oplus \{0\}$. \end{definition} The following result is well-known (see, e.g., \cite[Lemma 9.13]{kac}).
\begin{theorem} All irreducible highest weight modules for the Heisenberg Lie algebra $\mathfrak{g}$ are the trivial one-dimensional module $\mathit{k}=k\cdot\bm{1}$ with the highest vector $\bm{1}=1\in k$, and the Fock module \begin{equation*} \curly{F}=\Ind_{\mathfrak{g}_{+}}^{\mathfrak{g}}k, \end{equation*} induced from the one-dimensional $\mathfrak{g}_+$-module $k$. \end{theorem} \begin{remark} Let $U\mathfrak{g}$ be the universal enveloping algebra of the Lie algebra $\mathfrak{g}$. By definition, $$ \curly{F} =U\mathfrak{g} \underset{U\mathfrak{g}_{+}}{\otimes}k,$$ where $U\mathfrak{g}$ is considered as the right $U\mathfrak{g}_{+}$-module. Equivalently, $$\curly{F}=\curly{W}/\curly{D},$$ where $\curly{W}$ is the Weyl algebra of $\mathfrak{g}$ --- a quotient of $U \mathfrak{g}$ by the ideal generated by $C-\bm{1}$, where $\bm{1}$ now stands for the unit in $U\mathfrak{g}$, and $\curly{D}$ is the left ideal in $\curly{W}$ generated by $\mathcal{O}_K\oplus\{0\}$. \end{remark} The explicit realization of the Fock module $\curly{F}$ --- ``the bosonic Fock space'' --- depends on the decomposition of $K$ into a direct sum of isotropic subspaces with respect to the bilinear form $c$, \begin{equation} \label{decomposition} K=K_{+}\oplus K_{-}, \end{equation} where the subspace $K_{+}=\mathcal{O}_K$ is defined canonically. In this case \begin{equation*} \curly{F}\simeq \Symm^{\bullet} K_{-} \end{equation*} --- the symmetric algebra of the $k$-vector space $K_{-}$. The Fock space $\curly{F}$ is a $\field{Z}$-graded commutative algebra \begin{equation*} \curly{F}= \bigoplus_{n=0}^{\infty}\, \curly{F}^{(n)} \end{equation*} where $\curly{F}^{(n)}\simeq \Symm^n K_{-},~\curly{F}^{(0)}= k\cdot\bm{1}$, and $\curly{F}^{(n)}=\{0\}$ for $n<0$.
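As an illustration, for $K\simeq k((t))$ the highest weight condition $\bm{\alpha}_{m}\cdot\bm{1}=0$ for $m\geq 0$, combined with the oscillator relations, already determines the action in low degrees: $$\bm{\alpha}_{1}\cdot(\bm{\alpha}_{-1}\bm{1})=\bm{\alpha}_{-1}\bm{\alpha}_{1}\cdot\bm{1}+[\bm{\alpha}_{1},\bm{\alpha}_{-1}]\cdot\bm{1}=\bm{1},$$ and, more generally, $\bm{\alpha}_{m}\cdot(\bm{\alpha}_{-n}\bm{1})=m\,\delta_{mn}\bm{1}$ for $m,n>0$.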
For every $f=f_{+} + f_{-} \in K$ the operator $\bm{f}=\rho(f)\in \End\curly{F}$ is defined by \begin{equation} \label{action-ab} \bm{f}\cdot v=f_{-}\odot v+\sum_{i=1}^kc(f,v_i)\,v^i=f_{-}\odot v-\sum_{i=1}^k \Res\,(f_{+}\,dv_i)\,v^i, \end{equation} where $v=v_1\odot\cdots\odot v_k\in\curly{F}^{(k)}$ and $ v^i=v_1\odot\cdots\odot\hat{v_i}\odot\cdots\odot v_k\in\curly{F}^{(k-1)},~ i=1,\ldots,k$, and $\odot$ denotes the multiplication in $\Symm^{\bullet}K_{-}$, the symmetric tensor product. In particular, $$\bm{f}\cdot\bm{1}=f_{-}.$$ The Fock module $\curly{F}$ is equipped with the linear topology given by the filtration associated with the $\field{Z}$-grading, which does not depend on the decomposition \eqref{decomposition}. \begin{remark} Every choice of the uniformizer defines the isomorphism $K\simeq k((t))$, and one can choose $K_{-}=t^{-1}k[t^{-1}]$. The mapping \begin{equation*} \curly{F}^{(n)}\ni v=t^{-m_1}\odot\dots \odot t^{-m_n}\mapsto x_{m_1}\dots x_{m_n}\in k[x_1, x_2,\dots] \end{equation*} establishes the isomorphism $\curly{F}\simeq k[x_1, x_2, \dots]$ between the bosonic Fock space and the polynomial ring in infinitely many variables $\{x_n\}_{n\in\mathbb{N}}$. Under this mapping $\bm{\alpha}_n\mapsto n\partial/\partial x_n,~\bm{\alpha}_{-n} \mapsto x_n,~n>0$ --- the operators of multiplication by $x_n$ --- and $\bm{\alpha}_0\mapsto 0$. \end{remark} \begin{remark} \label{splitting} For a general complete closed field $K$ there is no canonical choice of the isotropic subspace $K_-$ complementary to $K_+=\mathcal{O}_K$. However, every choice of an effective non-special divisor $D=P_{1}+\dots +P_{g}$ of degree $g$ on an algebraic curve $X$ and uniformizers $t_{i}$ at $P_{i}$ defines the complementary subspaces $K_{-}$ for all fields $K=F_{P}$, $P\in X$. Namely, let $\mathcal{A}(X,D)$ be the $k$-vector space of additive functions defined in Example \ref{Additive}, and let $\mathcal{A}_P(X,D)$ be the subspace of additive functions with the only pole at $P$.
Set \begin{equation*} K_- =\left.\mathcal{A}_P(X,D)\right|_P \subset K. \end{equation*} According to part (i) of Lemma \ref{reciprocity}, the subspace $K_-$ is isotropic with respect to the bilinear form $c$ and decomposition \eqref{decomposition} holds. The subspace $K_-$ is spanned by $v^{(n)}_P=\left.\eta^{(n)}_P\right|_P,\,n\in\mathbb{N}$, and $d K_- = \left.\Omega_0(\ast P)\right|_P$. \end{remark} The bilinear form $c$ has a one-dimensional kernel $k$. Since $\mathcal{O}_K/k = \mathfrak{p}$, the form $c$ defines a non-degenerate continuous pairing $c:\mathfrak{p}\otimes K_{-}\rightarrow k$, so that $\mathfrak{p}=K^{\vee}_{-}=\Hom(K_{-},k)$ --- the topological dual to the $k$-vector space $K_{-}$. Correspondingly, the topological dual to the bosonic Fock space $\curly{F}$ is the $k$-vector space $\curly{F}^{\vee}=\overline{\Symm^{\bullet} \mathfrak{p}}$ --- the completion of $\Symm^{\bullet} \mathfrak{p}$ with respect to the linear topology given by the filtration $\{F^n \Symm^{\bullet} \mathfrak{p}\}_{n=0}^{\infty}$, \begin{equation*} F^n \Symm^{\bullet} \mathfrak{p} =\oplus_{i=0}^n\Symm^i\mathfrak{p}. \end{equation*} The continuous pairing $(~,~):\curly{F}^{\vee}\otimes\curly{F}\rightarrow k$ is uniquely determined by the pairing between $\Symm^{\bullet} \mathfrak{p}$ and $\curly{F}=\Symm^{\bullet} K_{-}$, which is defined inductively by \begin{equation} \label{dual} (u,v)=\delta_{kl}\sum_{i=1}^{l}c(u_{1},v_{i})(u^{1},v^{i}), \end{equation} where $u=u_1\odot\cdots\odot u_k=u_{1}\odot u^{1}\in\Symm^k\mathfrak{p}$, and $v=v_1\odot\cdots \odot v_l=v_{i}\odot v^{i}\in\curly{F}^{(l)}$. The dual bosonic Fock space $\curly{F}^{\vee}$ is the right $\mathfrak{g}$-module with the lowest weight vector $\bm{1}^{\vee}$ annihilated by the subspace $K_{-}\oplus k$.
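For small degrees the recursion \eqref{dual} unwinds explicitly; for instance, for $k=l=2$, $$(u_{1}\odot u_{2}, v_{1}\odot v_{2})=c(u_{1},v_{1})\,c(u_{2},v_{2})+c(u_{1},v_{2})\,c(u_{2},v_{1}),$$ and in general $(u,v)$ is the permanent of the $k\times k$ matrix $\left(c(u_{i},v_{j})\right)_{i,j=1}^{k}$.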
Explicitly, the representation $\rho$ of $\mathfrak{g}$ in $\curly{F}$ defines a contragradient representation $\rho^{\vee}$ of $\mathfrak{g}$ in $\curly{F}^{\vee}$ by \begin{equation*} (u\cdot \rho^{\vee}(f),v)=(u,\rho(f)\cdot v),~\text{for all}~u\in\curly{F}^{\vee},\,v\in\curly{F}. \end{equation*} Namely, put $f=\tilde{f}_{+}+\tilde{f}_{-}\in K$, where now $\tilde{f}_{+}\in \mathfrak{p}$ and $\tilde{f}_{-}\in K_{-}\oplus k$. It follows from \eqref{action-ab} and \eqref{dual} that the operator $\bm{f}=\rho^\vee(f)\in\End\curly{F}^{\vee}$ is defined by \begin{equation} \label{action-ab-dual} u\cdot\bm{f}=\tilde{f}_{+}\odot u+ \sum_{i=1}^{k}c(u_i,f)u^i =\tilde{f}_{+}\odot u+\sum_{i=1}^{k}\Res\,(\tilde{f}_{-}du_i)u^i, \end{equation} where $u=u_1\odot\cdots\odot u_k\in\Symm^{k}\mathfrak{p}$ and $u^i=u_1\odot\cdots\odot\hat{u_i}\odot\cdots\odot u_k\in\Symm^{k-1}\mathfrak{p}$. \subsection{The lattice algebra\label{lattice algebra-local}} Let $k[\field{Z}]$ be the group algebra of the additive group $\field{Z}$. As a $k$-vector space, $k[\field{Z}]$ has a basis $\{e_n\}_{n\in\field{Z}}$, $e_{m}e_{n}=e_{m+ n}$. For every decomposition \eqref{decomposition}, define the ``constant term'' of $f\in K$ by $f(0)=f_{+}\!\!\!\mod\mathfrak{p}\in k$, so that for $f\in K_{-}$ we have $f(0)=0$. \begin{remark} When $K=F_{P}$ and $K_{-}=\left.\mathcal{A}_{P}(X,D)\right|_{P}$, $f(0)$ is the constant term of the formal Laurent expansion of $f\in k((t_{P}))$ with respect to the uniformizer $t_{P}$ for $K$, defined in Section \ref{A-functions}. \end{remark} \begin{definition} A lattice algebra $\mathfrak{l}$ associated with the decomposition \eqref{decomposition} is a semi-direct sum of the Heisenberg Lie algebra $\mathfrak{g}$ and the abelian Lie algebra $k[\field{Z}]$ with the Lie bracket \begin{equation*} [f+ aC +\alpha e_{m}, g+bC +\beta e_{n}]=c(f,g)C +\alpha mg(0)e_{m}-\beta nf(0)e_{n}, \end{equation*} where $f+ aC, g+bC\in\mathfrak{g}$. 
\end{definition} The corresponding irreducible highest weight module $\curly{B}$ for the lattice algebra $\mathfrak{l}$ is given by \begin{equation*} \curly{B}=k[\field{Z}]\odot\curly{F}, \end{equation*} where $k[\field{Z}]$ acts by multiplication, and $$\bm{f}(e_{n}\odot v)=-nf(0)e_{n}\odot v +e_{n}\odot \bm{f} \cdot v,\quad v\in\curly{F}.$$ The module $\curly{B}$ --- the Fock space of ``charged bosons'' --- is a $\field{Z}$-graded commutative algebra, $$\curly{B}=\bigoplus_{n\in\field{Z}}\curly{B}^{(n)},\quad \curly{B}^{(n)}=k\cdot e_{n}\odot \curly{F}.$$ The elements $e_{n}$, $n\in\field{Z}$, correspond to the shift operators $\bm{e}_{n}=\bm{e}^{n}$ in $\curly{B}$, where $$\bm{e}(e_{n}\odot v)=e_{n+1}\odot v,\quad v\in\curly{F}.$$ \begin{remark} \label{Z} Using the canonical isomorphism $K^{\ast}/\mathcal{O}^{\ast}_{K}\simeq \field{Z}$ given by the valuation map $v: K^{\ast}\rightarrow\field{Z}$, the Fock space $\curly{B}$ can also be defined as the space of all functions $$F: K^{\ast}/\mathcal{O}^{\ast}_{K}\rightarrow\curly{F}$$ with finite support. \end{remark} \begin{remark} For every choice of the uniformizer $t$ for $K$, the mapping $$\curly{B}^{(n)}\ni e_{n}\odot t^{-m_{1}}\odot\cdots\odot t^{-m_{l}}\mapsto e^{nx_{0}}x_{m_{1}}\dots x_{m_{l}}\in e^{nx_{0}}k[x_{1},x_{2},\dots]$$ establishes the isomorphism $\curly{B}\simeq k[e^{x_{0}},e^{-x_{0}},x_{1},x_{2},\dots]$. Under this mapping, $\bm{\alpha}_n\mapsto n\partial/\partial x_n,~\bm{\alpha}_{-n} \mapsto x_n,~n>0$, $\bm{\alpha}_0\mapsto -\partial/\partial x_{0}$, and $\bm{e}\mapsto e^{x_{0}}$ --- the operator of multiplication by the variable $e^{x_{0}}$. \end{remark} The topological dual to $\curly{B}$ is the $k$-vector space $\curly{B}^{\vee}=\oplus_{n\in\field{Z}}\,\,k\!\cdot\! q^{n}\odot\curly{F}^{\vee}$, where $\{q^{n}\}_{n\in\field{Z}}$ is the basis in $k[\field{Z}]^{\vee}$ dual to the basis $\{e_{n}\}_{n\in\field{Z}}$.
The continuous pairing $(~,~):\curly{B}^{\vee}\otimes\curly{B}\rightarrow k$ is given by \begin{displaymath} (q^{m}\odot u,e_{n}\odot v)=(u,v)\delta_{mn},\quad u\in\curly{F}^{\vee}, \;v\in\curly{F}. \end{displaymath} As in the case of the Heisenberg algebra, the representation $\rho$ of $\mathfrak{l}$ in $\curly{B}$ defines the contragradient representation $\rho^{\vee}$ in $\curly{B}^{\vee}$. The dual Fock space $\curly{B}^{\vee}$ is a right $\mathfrak{l}$-module with the lowest weight vector $\bm{1}^{\vee}$ annihilated by $K_{-}$. \subsection{The Heisenberg system\label{Heisenberg system}} The tame symbol for a complete closed field $K$ is defined by the same formula as in Section \ref{tame}, $$ \tau(f,g)=(-1)^{mn}\frac{f^{n}}{g^{m}}\!\!\!\mod\mathfrak{p}\in k^{\ast},$$ where $f,g\in K^{\ast}$ and $m=v(f), n=v(g)$. Let $G$ be the central extension of the multiplicative group $K^{\ast}$ by the tame symbol, $G\simeq K^{\ast}\times k^{\ast}$ with the group law $$(f_{1},\alpha_{1})(f_{2},\alpha_{2})=(f_{1}f_{2},\tau(f_{1},f_{2})^{-1}\alpha_{1}\alpha_{2}),$$ where $f_{1},f_{2}\in K^{\ast}$ and $\alpha_{1},\alpha_{2}\in k^{\ast}$. The group $G$ is a topological group with the topology defined by the decomposition $G\simeq \field{Z}\times\mathcal{O}^{\ast}_{K}\times k^{\ast}$, where $k^{\ast}$ and $\field{Z}$ have discrete topology, and $\mathcal{O}_{K}^{\ast}$ has $\mathfrak{p}$-adic topology. \begin{definition} A $G$-module is a $k$-vector space $V$, with the discrete topology and with a group homomorphism $R:G\rightarrow\End V$, such that $G$-action on $V$ is continuous and $R((1,\alpha))=\alpha\bm{I}$, $\alpha\in k^{\ast}$. \end{definition} For $f\in K^{\ast}$ setting $R(f)=R((f,1))\in\End V$, we get \begin{equation*} R(f_{1})R(f_{2}) =\tau(f_{1},f_{2})R(f_{1}f_{2}) \end{equation*} --- a projective representation of the multiplicative group $K^{\ast}$. 
Continuity means that for every $v\in V$ there exists an open subspace $U$ in $K$, commensurable with $\mathfrak{p}$, such that $U^{\ast}\times\{1\}$ fixes $v$: $R(U^{\ast})v=v$. Though the Heisenberg algebra $\mathfrak{g}$ is not the Lie algebra of the group $G$, there is an ``adjoint action'' of $G$ on $\mathfrak{g}$, defined by $$\mathrm{Ad}\,g\cdot x=x + \Res(f d\log h)\,C,$$ where $g=(h,\alpha)\in G$, $x=f+a\,C \in\mathfrak{g}$. Following Garland and Zuckerman \cite{garland-zuckerman}, we call the triple $(\mathfrak{g}, G, \mathrm{Ad})$ a Heisenberg system. Denote by $G_{+}=\mathcal{O}^{\ast}_{K}\times k^{\ast}$ the maximal abelian subgroup of $G$. Since the field $k$ has characteristic $0$, we have $\mathcal{O}^{\ast}_{K}\simeq k^{\ast}\times \exp\mathfrak{p}$. The representation $\rho$ of $\mathfrak{g}$ in $\curly{F}$, constructed in Section \ref{Heisenberg-Lie-local}, defines a representation $r$ of $G_{+}$ in $\curly{F}$ by the formula $$r(g_{+})=\beta\exp\rho(\varphi)=\beta\exp\bm{\varphi},\quad g_{+}=(\alpha\exp\varphi,\beta)\in G_{+},$$ where $\varphi\in\mathfrak{p}$ and $\alpha,\beta\in k^{\ast}$. Since $\curly{F}$ is the highest weight module with respect to the abelian subalgebra $\mathcal{O}_{K}\oplus\{0\}$ of $\mathfrak{g}$, the operators $r(g_{+})\in\End\curly{F}$ are well-defined. \begin{definition} A representation of the Heisenberg system $(\mathfrak{g}, G, \mathrm{Ad})$ in a vector space $V$ is the pair $(R, dR)$, where $R: G\rightarrow\End V$ is a representation of the group $G$, and $dR: \mathfrak{g}\rightarrow\End V$ is a representation of the Heisenberg Lie algebra $\mathfrak{g}$, satisfying $dR(\mathrm{Ad}\,g \cdot x)=R(g)dR(x)R(g)^{-1}$. \end{definition} Let $R$ be a representation of $G$ induced by the representation $r$ of the subgroup $G_{+}$.
Explicitly, the $G$-module $\Ind_{G_{+}}^{G}\curly{F}$ consists of all functions $F: G\rightarrow\curly{F}$, satisfying $$F(gg_{+})=r(g_{+})^{-1}F(g),\quad g_{+}\in G_{+},\,g\in G,$$ and such that corresponding sections over $G/G_{+}\simeq\field{Z}$ have finite support. The representation $R$ is given by $$R(g)F(h)=F(g^{-1}h),\quad g, h \in G.$$ Define the representation $dR$ of the Heisenberg Lie algebra $\mathfrak{g}$ by $$(dR(x)\cdot F)(g)=\rho(\Ad\,g^{-1}\cdot x)(F(g)),\quad x\in\mathfrak{g},\,g\in G.$$ \begin{theorem}[Garland-Zuckerman] \label{G-Z} The pair $(R,dR)$ is a representation of the Heisenberg system $(\mathfrak{g}, G, \mathrm{Ad})$, and $$\Ind_{G_{+}}^{G}\curly{F}\simeq\curly{B}.$$ \end{theorem} \begin{proof} See \cite[Sec. 3]{garland-zuckerman}. The isomorphism $\Ind_{G_{+}}^{G}\curly{F}\simeq\curly{B}$ follows from the first remark in Section \ref{lattice algebra-local}. \end{proof} \begin{remark} Every choice of the uniformizer for $K$ defines the group isomorphism $K^{\ast}\simeq k((t))^{\ast}=k^{\ast}\times\field{Z}\times tk[[t]]$. Explicitly, every $f\in K^{\ast}$ can be uniquely written in the form $$f=\alpha t^{n}\exp\varphi,\quad \alpha\in k^{\ast},\;\varphi\in tk[[t]]. $$ In particular, when $K=F_{P}$, $P\in X$, there is a natural choice of the uniformizer $t=t_{P}$ associated with the choice of an effective non-special divisor $D=P_{1}+\dots +P_{g}$ of degree $g$ on $X$, uniformizers $t_{i}$ at $P_{i}$, and additive functions $\eta_{i}$ (see Section \ref{A-functions}). Identifying the elements $e_{m}\odot v\in\curly{B}$ with the functions $F:\field{Z}\rightarrow\curly{F}$ defined by $F(n)=\delta_{mn}v$, $n\in\field{Z}$, we obtain by a straightforward computation (see \cite[Sec. 4]{garland-zuckerman}) that for $f=\alpha t^{n}\exp\varphi\in K^{\ast}$, \begin{align*} R(f)(e_{m}\odot v)=(-1)^{mn}\alpha^{-n-2m}e_{m+n}\odot \exp\bm{\varphi}\cdot v. 
\end{align*} Similarly, \begin{align*} dR(f)(e_{m}\odot v)=-mf(0)e_{m}\odot v +e_{m}\odot \bm{f} \cdot v, \end{align*} where now $f\in K$. \end{remark} As in the case of the lattice algebra, the representation $(R,dR)$ in $\curly{B}$ defines the contragradient representation $(R^{\vee}, dR^{\vee})$ in the dual Fock space $\curly{B}^{\vee}$. In particular, \begin{equation*} (q^{m}\odot u)R^{\vee}(f)=(-1)^{(m-n)n}\alpha^{n-2m}q^{m-n}\odot u\cdot \exp\bm{\varphi},\quad f=\alpha t^{n}\exp\varphi\in K^{\ast}. \end{equation*} \section{Global Theory\label{Global}} Here, for an algebraic curve $X$ over an algebraically closed field $k$ of characteristic zero, we define global versions of the local QFT's introduced in the previous section. Succinctly, these global QFT's can be characterized as follows. \begin{enumerate} \item[\textbf{1.}] ``QFT of additive bosons on $X$'', which corresponds to the global Heisenberg algebra $\mathfrak{g}_X$ --- the restricted direct sum of local Heisenberg algebras $\mathfrak{g}_P$ over all points $P\in X$. The global Fock space $\curly{F}_X$ is defined as the restricted symmetric tensor product of local Fock spaces $\curly{F}_P$ over all points $P\in X$. The global Fock space $\curly{F}_X$ is the highest weight $\mathfrak{g}_X$-module, and there exists a linear functional $\langle\,\cdot\,\rangle\,:\curly{F}_X\rightarrow k$ --- ``the expectation value'' functional, uniquely characterized by its normalization and the invariance property with respect to the space of additive functions. \item[\textbf{2.}] ``QFT of charged bosons on $X$'', which corresponds to the global lattice algebra $\mathfrak{l}_X$. The global charged Fock space $\curly{B}_X$ is the highest weight $\mathfrak{l}_X$-module, and there exists a unique expectation value functional $\langle\,\cdot\,\rangle: \curly{B}_X\rightarrow k$ with similar properties.
\item[\textbf{3.}] ``QFT of multiplicative bosons on $X$'', which corresponds to the action of the global Heisenberg system $(G_X,\mathfrak{g}_X, \mathrm{Ad})$ on $\curly{B}_{X}$ with the property that the expectation value functional is invariant under the group of multiplicative functions. The latter is equivalent to the generalized A.~Weil reciprocity law on algebraic curves. \end{enumerate} \subsection{Additive bosons on $X$\label{AB}} The theory consists of the following data. \begin{enumerate} \item[\textbf{AB1}] Non-special effective divisor $D_{\mathrm{ns}}=P_1+\dots + P_g$ of degree $g$ on $X$ with distinct points, uniformizers $t_i$ at $P_i$, and the $k$-vector space of additive functions $\mathcal{A}(X,D_{\mathrm{ns}})$ --- a subspace of $\field{A}_X$ containing $F=k(X)$, introduced in Example \ref{Additive}. \item[\textbf{AB2}] Local QFT's of additive bosons --- highest weight $\mathfrak{g}_P$-modules $\curly{F}_P$ for all points $P\in X$. \item[\textbf{AB3}] Global Heisenberg algebra $\mathfrak{g}_X$ --- a one-dimensional central extension of the abelian Lie algebra $\mathfrak{g}\mathfrak{l}_1(\field{A}_X)=\field{A}_X$ by the cocycle $c_X=\sum_{P\in X} c_P$. \item[\textbf{AB4}] The highest weight $\mathfrak{g}_X$-module --- the global Fock space $\curly{F}_X$ --- a restricted symmetric tensor product of $\curly{F}_P$ over all points $P\in X$. \item[\textbf{AB5}] The expectation value functional --- the linear mapping $\langle\,\cdot\,\rangle: \curly{F}_X\rightarrow k$, satisfying the following properties: \begin{itemize} \item[(i)] $\langle\textbf{1}_{X} \rangle=1$, where $\textbf{1}_{X}\in\curly{F}_X$ is the highest weight vector. \item[(ii)] $\langle \bm{a}\cdot v\rangle=0$ for all $a\in\mathcal{A}(X,D_{\mathrm{ns}})$ and $v\in\curly{F}_X$. \end{itemize} \end{enumerate} Parts \textbf{AB1} and \textbf{AB2} of the theory have been described in Sections \ref{second kind} and \ref{Heisenberg-Lie-local}. 
Here we introduce the global Heisenberg algebra $\mathfrak{g}_X$, construct the corresponding global Fock space $\curly{F}_X$, and prove that the expectation value functional $\langle\,\cdot\,\rangle$ satisfying properties (i) and (ii) exists and is unique. Let $c_X:\field{A}_X \times \field{A}_X\rightarrow k$ be the global bilinear form, \begin{equation*} c_X(x,y)=\sum_{P\in X}c_P(x_P,y_P) = -\sum_{P\in X}\Res_P(x_Pd y_P),\quad x,y\in\field{A}_X. \end{equation*} \begin{definition} The global Heisenberg Lie algebra $\mathfrak{g}_X$ is a one-dimensional central extension of the abelian Lie algebra $\field{A}_X$ \begin{equation*} 0\rightarrow k\,C\rightarrow\mathfrak{g}_X\rightarrow \field{A}_X\rightarrow 0 \end{equation*} by the two-cocycle $c_X$. \end{definition} \noindent The Lie subalgebra $\mathfrak{g}_{X}^{+}=\mathcal{O}_X\oplus kC$, where $\mathcal{O}_X = \prod_{P\in X}\mathcal{O}_P$, is the maximal abelian subalgebra of $\mathfrak{g}_{X}$. \begin{definition} The global Fock space $\curly{F}_X$ is the irreducible $\mathfrak{g}_{X}$-module with the vector $\bm{1}_{X}$ annihilated by the abelian subalgebra $\mathcal{O}_X\oplus \{0\}$. \end{definition} As in the local case, the global Fock module is induced from the one-dimensional $\mathfrak{g}_{X}^{+}$--module, $$\curly{F}_{X}=\Ind_{\mathfrak{g}_{X}^{+}}^{\mathfrak{g}_{X}} k.$$ According to the previous section, for $K=F_{P}$, $P\in X$, we have a decomposition \eqref{decomposition}, where $F_{P}^{(+)}=\mathcal{O}_{P}$ and $F_{P}^{(-)}=\left.\mathcal{A}_{P}(X,D)\right|_{P}$. 
This gives the following decomposition of the $k$-vector space $\field{A}_X$ into the direct sum of the isotropic subspaces with respect to the bilinear form $c_X$, \begin{equation} \label{decomposition-global} \field{A}_X = \mathcal{O}_{X}\oplus\mathcal{F}_{X}^{(-)}, \end{equation} where \begin{equation*} \mathcal{F}_X^{(-)} =\coprod_{P\in X} F_P^{(-)} \end{equation*} --- a restricted direct product over all $P\in X$ with respect to the zero subspaces $\{0\}\subset F_{P}^{(-)}$. The decomposition \eqref{decomposition-global} gives rise to the isomorphism \begin{equation*} \curly{F}_X\simeq\Symm^\bullet\mathcal{F}_X^{(-)}. \end{equation*} The global Fock space $\curly{F}_X$ carries a linear topology given by the natural filtration associated with the $\field{Z}$-grading. Equivalently, $\curly{F}_X$ can be defined as the symmetric tensor product \begin{equation*} \curly{F}_X = \underset{P\in X}{\widehat{\odot}}\curly{F}_P, \end{equation*} restricted with respect to the vectors $\bm{1}_P\in\curly{F}_P$, equipped with the product topology. In other words, $\bm{1}_X =\odot_{P\in X} \bm{1}_P$, and $\curly{F}_X$ is spanned by the vectors $$v= \underset{P\in X}\odot v_P,$$ where $v_P=\bm{1}_P$ for all but finitely many $P\in X$. For every $P\in X$ we have $v=v_{P}\odot v^{P}$, where $v^{P}=\odot_{Q\in X}\tilde{v}_{Q}$, $\tilde{v}_{Q}=v_{Q}$ for $Q\neq P$ and $\tilde{v}_{P}=\bm{1}_{P}$. Denote by $\rho_{P}$ the corresponding representation of $\mathfrak{g}_{P}$ in $\curly{F}_{P}$, $P\in X$, and by $\rho$ --- the representation of $\mathfrak{g}_{X}$ in $\curly{F}_{X}$. Setting $\bm{x}=\rho(x)\in\End\curly{F}_{X}$ for $x=\{x_{P}\}_{P\in X}\in\field{A}_{X}$, we have for $v=\odot_{P\in X} v_P$, $$\bm{x}\cdot v=\sum_{P\in X} \bm{x}_{P}\cdot v_P\odot v^{P},$$ where $\bm{x}_{P}=\rho_{P}(x_{P})\in\End\curly{F}_{P}$. Set \begin{equation*} \mathfrak{P}_X =\prod_{P\in X}\mathfrak{p}.
\end{equation*} The topological dual to the global Fock space $\curly{F}_X$ is the $k$-vector space $\curly{F}_X^{\vee}=\overline{\Symm^{\bullet}\mathfrak{P}_X}$ --- the completion of $\Symm^{\bullet}\mathfrak{P}_X$ with respect to the linear topology given by the natural filtration associated with the $\field{Z}$-grading. The dual global Fock space $\curly{F}_X^{\vee}$ is the right $\mathfrak{g}_X$-module with the lowest weight vector $\bm{1}_X^{\vee}$ annihilated by the abelian subalgebra $\mathcal{F}_X^{(-)}\oplus\{0\}$. Equivalently, \begin{equation*} \curly{F}_X^{\vee} = \overline{\underset{P\in X}{\widehat{\odot}}\curly{F}_P^\vee} \end{equation*} --- the completion of the symmetric tensor product restricted with respect to the vectors $\bm{1}_P^{\vee}$. The completion is taken with respect to the double filtration $\{F^{mn}\Symm^\bullet\mathfrak{P}_X\}$, \begin{equation*} F^{mn}\Symm^\bullet\mathfrak{P}_X= \sum_{i=0}^m\sum_{P_1,\dots,P_i\in X} \left(\bigoplus_{l_1+\dots + l_i=0}^n\Symm^{l_1} \mathfrak{p}_1\odot\dots\odot\Symm^{l_i}\mathfrak{p}_i\right). \end{equation*} In other words, the elements of $\curly{F}_X^\vee$ are infinite sums \begin{equation*} u=\sum_{n=0}^\infty\sum_{P_1,\dots,P_n\in X}a_{\scriptscriptstyle{P_1\dots P_n}}u_{\scriptscriptstyle{P_1\dots P_n}}, \end{equation*} where $u_{\scriptscriptstyle{P_1\dots P_n}}\in \overline{\curly{F}^\vee}_{P_1\dots P_n}$ --- a completion of the symmetric tensor product \begin{displaymath} \curly{F}_{P_1\dots P_n}^\vee=\curly{F}_{P_1}^\vee\odot\dots\odot\curly{F}_{P_n}^\vee \end{displaymath} with respect to the filtration \begin{equation*} F^{m}\curly{F}_{P_1\dots P_n}^\vee = \bigoplus_{l_1+\dots + l_n=0}^m\left(\Symm^{l_1}\mathfrak{p}_1\odot\dots\odot\Symm^{l_n} \mathfrak{p}_n\right).
\end{equation*} Denote by $\{u^{(n)}_P\}_{n\in\field{N}}$ the basis for $\mathfrak{p}$ dual to the basis $\left\{v^{(n)}_P=\left.\eta^{(n)}_P\right|_P\right\}_{n\in\field{N}}$ for $F_P^{(-)}$ with respect to the pairing given by $c_{P}$ (see Section \ref{Heisenberg-Lie-local}). Then we obtain that $\curly{F}_X^\vee$ is a completion of $k[[u_{P}^{(n)}]]$ --- the ring of formal Taylor series in infinitely many variables $u^{(n)}_P,\,P\in X, n\in\field{N}$. This realization of $\curly{F}_X^\vee$ is used to prove the following main result for the QFT of additive bosons. \begin{theorem} \label{additive theorem} There exists a unique linear functional $\langle\, \cdot\,\rangle:\curly{F}_X\rightarrow k$ --- the expectation value functional --- satisfying the following properties. \begin{itemize} \item[\textbf{EV1}] $\langle\bm{1}_X\rangle = 1$. \item[\textbf{EV2}] $\langle\bm{a}\cdot v\rangle =0$ for all $a\in\mathcal{A}(X,D_{\mathrm{ns}})$ and $v\in\curly{F}_X$. \end{itemize} The functional $\langle\, \cdot\,\rangle$ has the form \begin{displaymath} \langle v\rangle =\left(\Omega_X,v\right), \end{displaymath} where \begin{equation*} \Omega_X=\exp\left\{-\frac{1}{2}\sum_{m,n=1}^\infty\sum_{P,Q\in X} c^{(mn)}_{PQ} u^{(m)}_Pu^{(n)}_Q\right\}\in\curly{F}_X^\vee, \end{equation*} and \begin{displaymath} c^{(mn)}_{PQ} = -\Res_Q(\eta^{(m)}_P d\eta^{(n)}_Q). \end{displaymath} \end{theorem} \begin{proof} It follows from the decomposition \eqref{AD} that the linear functional $\langle v\rangle =(\Omega,v)$ satisfies properties \textbf{EV1} and \textbf{EV2} if and only if it is normalized, $(\Omega,\bm{1}_X)=1$, and $\Omega\in\curly{F}_{X}^{\vee}$ satisfies the equations \begin{equation} \label{omega-eta} \Omega\cdot\boldsymbol{\eta^{(n)}_P}=0 \end{equation} for all $P\in X$ and $n\in\field{N}$, where $\boldsymbol{\eta^{(n)}_P}=\rho^{\vee}(\eta^{(n)}_P)$.
Let $\eta^{(n)}_P =\beta^{(n)}_{P} + \gamma^{(n)}_{P}$, where $\beta^{(n)}_{P}=\{\beta^{(n)}_{PQ}\}_{Q\in X}, \gamma^{(n)}_{P}=\{\gamma^{(n)}_{PQ}\}_{Q\in X}\in\field{A}_{X}$ are given by \begin{equation*} \beta^{(n)}_{PQ} =\begin{cases} 0& \text{if $Q=P$}, \\ \left.\eta^{(n)}_P\right|_Q & \text{if $Q\neq P$}, \end{cases} \end{equation*} and \begin{equation*} \gamma^{(n)}_{PQ} =\begin{cases} \left.\eta^{(n)}_P\right|_P& \text{if $Q=P$}, \\ 0& \text{if $Q\neq P$}. \end{cases} \end{equation*} It follows from \eqref{action-ab-dual} that $\bm{\gamma^{(n)}_{P}}$ acts on $\curly{F}_X^\vee$ as differentiation with respect to the variable $u^{(n)}_P$. For $Q\neq P$ we have $$\beta_{PQ}^{(n)}=a^{(n)}_{PQ} + \sum_{m=1}^{\infty}a_{PQ}^{(nm)}u_{Q}^{(m)},$$ where $a^{(n)}_{PQ}\in k$ and $$a_{PQ}^{(nm)}=c(\beta_{PQ}^{(n)},v_{Q}^{(m)})=-\Res_{Q}(\eta_{P}^{(n)}d\eta_{Q}^{(m)})=c_{PQ}^{(nm)}.$$ Since $c_{PP}^{(nm)}=0$ (see Lemma \ref{reciprocity}) we conclude that $\bm{\beta^{(n)}_{P}}$ acts on $\curly{F}_{X}^{\vee}$ as multiplication by $\sum_{m=1}^{\infty}\sum_{Q\in X} c^{(nm)}_{PQ} u^{(m)}_Q$. The equations \eqref{omega-eta} can be rewritten as \begin{equation} \label{system} \left(\frac{\partial}{\partial u^{(n)}_P} + \sum_{m=1}^{\infty}\sum_{Q\in X} c^{(nm)}_{PQ} u^{(m)}_Q\right)\Omega=0, \quad P\in X,\;n\in\field{N}. \end{equation} As it follows from part (i) of Lemma \ref{reciprocity}, $$c_{PQ}^{(mn)}=c_{QP}^{(nm)},$$ so that the system of differential equations \eqref{system} is compatible and $\Omega_X$ is its unique normalized solution. \end{proof} \subsection{Charged additive bosons on $X$\label{CB}} The theory consists of the following data. \begin{enumerate} \item[\textbf{CB1}] Non-special effective divisor $D_{\mathrm{ns}}=P_1+\dots + P_g$ of degree $g$ on $X$ with distinct points, uniformizers $t_i$ at $P_i$, and the $k$-vector space of additive functions $\mathcal{A}(X,D_{\mathrm{ns}})$ --- a subspace of $\field{A}_X$ containing $F=k(X)$, introduced in Example \ref{Additive}.
\item[\textbf{CB2}] Local QFT's of charged additive bosons --- highest weight $\mathfrak{l}_P$-modules $\curly{B}_P$ for all points $P\in X$. \item[\textbf{CB3}] Global lattice algebra $\mathfrak{l}_X$ --- the semi-direct sum of the global Heisenberg algebra $\mathfrak{g}_{X}$ and the abelian Lie algebra $k[\Div_{0}(X)]$ with generators $e_{D}$, $D\in\Div_{0}(X)$ --- the group algebra of the additive group $\Div_{0}(X)$ of degree $0$ divisors on $X$. \item[\textbf{CB4}] The highest weight $\mathfrak{l}_X$-module --- the global Fock space $\curly{B}_X$ with the highest weight vector $\bm{1}_{X}\in\curly{B}_X$. \item[\textbf{CB5}] The expectation value functional --- the linear mapping $\langle\,\cdot\,\rangle: \curly{B}_X\rightarrow k$, satisfying the following properties: \begin{itemize} \item[(i)] $\langle\bm{e}_{\bm{D}}\cdot \bm{1}_{X} \rangle=1$ for all $D\in\Div_{0}(X)$. \item[(ii)] $\langle \bm{a}\cdot u\rangle=0$ for all $a\in\mathcal{A}(X,D_{\mathrm{ns}})$ and $u\in\curly{B}_X$. \end{itemize} \end{enumerate} As a $k$-vector space, the group algebra $k[\Div_0(X)]$ of the additive group $\Div_0(X)$ of degree $0$ divisors on $X$ has a basis $\{e_D\}_{D\in\Div_0(X)}$, $e_{D_1}e_{D_2}=e_{D_1 + D_2}$. For every $x=\{x_P\}\in\field{A}_X$ and $D=\sum_{P\in X} n_P\,P\in\Div_0(X)$ we put \begin{displaymath} x(D) = \sum_{P\in X} n_P x_P(0)\in k, \end{displaymath} where $x_P(0)=x_P^{+}\!\!\mod\mathfrak{p}\in k$ is the constant term of $x_P\in F_P$, defined by the decomposition \eqref{decomposition} associated with the non-special divisor $D_{\mathrm{ns}}$ (see Section \ref{lattice algebra-local}). 
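For instance, for the simplest degree zero divisor $D=P-Q$ we have $$x(D)=x_{P}(0)-x_{Q}(0),\quad x=\{x_{P}\}_{P\in X}\in\field{A}_{X},$$ so that $x(D)$ depends only on the constant terms of the adele $x$ at the points of $D$.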
\begin{definition} The global lattice algebra $\mathfrak{l}_{X}$ is a semi-direct sum of the global Heisenberg algebra $\mathfrak{g}_X$ and the abelian Lie algebra $k[\Div_0(X)]$ with the Lie bracket \begin{displaymath} [x + \alpha C+ \gamma e_{D_1}, y+\beta C + \delta e_{D_2}] = c_X(x,y)C +y(D_1)\gamma e_{D_1}- x(D_2)\delta e_{D_2} , \end{displaymath} where $x+\alpha C, y+\beta C\in\mathfrak{g}_X$, $\gamma,\delta\in k$. \end{definition} The global Fock space $\curly{B}_X$ is a symmetric tensor product of the group algebra $k[\Div_0(X)]$ and the Fock space of additive bosons $\curly{F}_X$, \begin{equation*} \curly{B}_X =k[\Div_0(X)]\odot \curly{F}_X = \bigoplus_{D\in\Div_0(X)}\curly{B}_X^D, \end{equation*} where \begin{displaymath} \curly{B}_X^D=k\cdot e_D\odot\curly{F}_X. \end{displaymath} The global Fock space $\curly{B}_X$ is the irreducible $\mathfrak{l}_X$-module, where $k[\Div_{0}(X)]$ acts by multiplication, \begin{align} \bm{e}_{\bm{D_{1}}}(e_{D_{2}}\odot v) & =e_{D_{1}+D_{2}}\odot v,\quad v\in\curly{F}_{X} \label{A-1} \\ \intertext{and} \bm{x}(e_{D}\odot v) & =-x(D)e_{D}\odot v +e_{D}\odot\bm{x}\cdot v,\quad v\in\curly{F}_{X}. \label{A-2} \end{align} For every $D =\sum_{P\in X}n_P\,P \in\Div_0(X)$ the subspace $\curly{B}_X^D$ is the irreducible $\mathfrak{g}_X$-module. It has the property that for $x=\{x_P\}_{P\in X}\in\field{A}_X$ such that $x_P\in k$ for all $P\in X$, the restriction of the operator $\bm{x}$ to the subspace $\curly{B}_X^D$ is equal to $-x(D)\bm{I}$, where $\bm{I}$ is the identity operator. In particular, when $x=c$ is a constant, $x(D)=c\deg D=0$, and $\bm{x}$ acts by zero in $\curly{B}_{X}$. 
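Note that formulas \eqref{A-1}--\eqref{A-2} are consistent with the Lie bracket in $\mathfrak{l}_{X}$: for $x\in\field{A}_{X}$ and $D,D'\in\Div_{0}(X)$ a direct computation gives
\begin{align*}
[\bm{x},\bm{e}_{\bm{D}}](e_{D'}\odot v) &=\bm{x}(e_{D+D'}\odot v)-\bm{e}_{\bm{D}}\left(-x(D')e_{D'}\odot v+e_{D'}\odot\bm{x}\cdot v\right)\\
&=\left(x(D')-x(D+D')\right)e_{D+D'}\odot v=-x(D)\,\bm{e}_{\bm{D}}(e_{D'}\odot v),
\end{align*}
since $x(D+D')=x(D)+x(D')$, in agreement with $[x,e_{D}]=-x(D)e_{D}$.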
\begin{remark} One can also define the extended global lattice algebra $\tilde{\mathfrak{l}}_{X}$ as a semi-direct sum of the global Heisenberg algebra $\mathfrak{g}_X$ and the abelian Lie algebra $k[\Div(X)]$, as well as its irreducible module --- the extended Fock space \begin{equation*} \tilde{\curly{B}}_{X}=k[\Div(X)]\odot \curly{F}_X = \bigoplus_{D\in\Div(X)}\curly{B}_X^D. \end{equation*} The action of $\tilde{\mathfrak{l}}_{X}$ in $\tilde{\curly{B}}_{X}$ is given by the same formulas \eqref{A-1}--\eqref{A-2}, where now the constant adele $x=c$ acts in $\curly{B}_{X}^{D}$ by $(c\deg D)\bm{I}$. \end{remark} The dual Fock space $\curly{B}^\vee_X$ is defined as a completion of the direct sum of the dual spaces to $\curly{B}_X^D$ over $D\in\Div_{0}(X)$, allowing formal infinite sums. Explicitly, \begin{equation*} \curly{B}^\vee_X =\overline{ \bigoplus_{D\in\Div_0(X)}\curly{B}_X^\vee(D)}, \end{equation*} where \begin{displaymath} \curly{B}_X^\vee(D) =k\cdot q^D\odot\curly{F}_X^\vee, \end{displaymath} $q^D\in k[\Div_{0}(X)]^{\vee}$ are dual to $e_D$, and $\curly{F}_{X}^{\vee}$ was defined in Section \ref{AB}. \begin{theorem} \label{charged theorem} There exists a unique linear functional $\langle\,\cdot\,\rangle:\curly{B}_X \rightarrow k$ --- the expectation value functional --- satisfying the following properties: \begin{itemize} \item[\textbf{EV1}] $\langle\bm{e}_{\bm{D}}\cdot \bm{1}_X\rangle = 1$ for all $D\in\Div_{0}(X)$. \item[\textbf{EV2}] $\langle\bm{a}\cdot v\rangle =0$ for all $a\in\mathcal{A}(X,D_{\mathrm{ns}})$ and $v\in\curly{B}_X$. \end{itemize} The functional $\langle\,\cdot\,\rangle$ has the form \begin{displaymath} \langle v\rangle =(\hat{\Omega}_X,v), \end{displaymath} where \begin{equation*} \hat{\Omega}_X=\sum_{D\in\Div_0(X)} q^D\odot \exp\left\{\sum_{n=1}^\infty\sum_{P\in X}\eta^{(n)}_{P}(D) u^{(n)}_P\right\}\Omega_X \in\curly{B}_X^\vee, \end{equation*} and $\Omega_{X}$ is given in Theorem \rm{\ref{additive theorem}}.
\end{theorem} \begin{proof} As in the proof of Theorem \ref{additive theorem}, put \begin{displaymath} \Omega=\sum_{D\in\Div_0(X)} q^D\odot \Omega_D,\quad\Omega_D\in\curly{F}_X^\vee. \end{displaymath} Condition $(\Omega, e_{D}\odot\bm{1}_{X})=1$ for all $D\in\Div_0(X)$ is equivalent to the normalization $(\Omega_{D}, \bm{1}_{X})=1$. The constants act by zero in $\curly{B}_{X}$, so it is sufficient to verify the equations \begin{equation} \label{vanish} (q^D\odot \Omega_D)\cdot\bm{\eta^{(n)}_P}=0 \end{equation} for all $D=\sum_{Q\in X}n_Q\,Q\in\Div_0(X)$ and $P\in X$. Since \begin{equation*} q^D\cdot\bm{\eta^{(n)}_P} =-\eta^{(n)}_{P}(D)\,q^{D}=-\sum_{Q\in X}n_Q\left.\eta^{(n)}_P\right|_Q(0) \,q^{D} \end{equation*} (note that, by definition in Section \ref{lattice algebra-local}, $\left.\eta^{(n)}_P\right|_P(0)=0$), we get from \eqref{vanish} that $\Omega_D$ satisfies the following system of differential equations \begin{equation*} \left(\frac{\partial}{\partial u^{(n)}_P} - \sum_{Q\in X}n_Q\left.\eta^{(n)}_P\right|_Q(0) +\sum_{m=1}^{\infty}\sum_{Q\in X} c^{(nm)}_{PQ} u^{(m)}_Q \right)\Omega_D =0, \end{equation*} which has a unique normalized solution given by \begin{equation*} \Omega_{D}=\exp\left\{\sum_{n=1}^\infty\sum_{P\in X}\eta^{(n)}_{P}(D) u^{(n)}_P-\frac{1}{2}\sum_{m,n=1}^\infty\sum_{P,Q\in X} c^{(mn)}_{PQ} u^{(m)}_P u^{(n)}_Q\right\}. \qedhere \end{equation*} \end{proof} \subsection{Multiplicative bosons on $X$\label{MB}} The theory consists of the following data. \begin{enumerate} \item[\textbf{MB1}] Non-special effective divisor $D_{\mathrm{ns}}=P_1+\dots + P_g$ of degree $g$ on $X$ with distinct points, uniformizers $t_i$ at $P_i$, and the group of multiplicative functions $\mathcal{M}(X,D_{\mathrm{ns}})\subset \field{J}_{X}$ associated with the $k$-vector space of additive functions $\mathcal{A}(X,D_{\mathrm{ns}})$, introduced in Example \ref{2}.
\item[\textbf{MB2}] Local QFT's of multiplicative bosons --- representations $(R_{P},dR_{P})$ in $\curly{B}_{P}$ of the local Heisenberg systems $(G_{P},\mathfrak{g}_{P},\mathrm{Ad})$ for all points $P\in X$. \item[\textbf{MB3}] The global group $G^{0}_X$ --- a one-dimensional central extension of the subgroup $\field{J}_{X}^{0}$ of degree zero ideles by the global tame symbol $\tau_{X}$ --- and the global Heisenberg system $(G^{0}_{X}, \mathfrak{g}_{X},\mathrm{Ad})$. \item[\textbf{MB4}] The representation $(R_{X}, dR_{X})$ of the global Heisenberg system in the global Fock space $\curly{B}_X$. \item[\textbf{MB5}] The expectation value functional --- the linear mapping $\langle\,\cdot\,\rangle: \curly{B}_X\rightarrow k$, satisfying the following properties: \begin{itemize} \item[(i)] $\langle\bm{1}_{X} \rangle=1$. \item[(ii)] $\langle\bm{x}\cdot v\rangle =0$ for all $x\in\mathcal{A}(X,D_{\mathrm{ns}})$ and $v\in\curly{B}_X$. \item[(iii)] $\langle \bm{m}\cdot v\rangle=\langle v\rangle$ for all $m\in\mathcal{M}(X,D_{\mathrm{ns}})$ and $v\in\curly{B}_X$. \end{itemize} \end{enumerate} Parts \textbf{MB1} and \textbf{MB2} were described in Sections \ref{second kind} and \ref{Heisenberg system}. \begin{definition} The global group $G_{X}$ is a central extension of the group of ideles $\field{J}_{X}$ by the global tame symbol $\tau_{X}$ --- $G_{X}\simeq \field{J}_{X}\times k^{\ast}$ --- with the group law $$(a,\alpha)(b,\beta)=(ab, \tau_{X}(a,b)^{-1}\alpha\beta),\quad a, b\in \field{J}_{X},\;\alpha, \beta\in k^{\ast},$$ where $$\tau_{X}(a,b)=\prod_{P\in X}\tau_{P}(a_{P}, b_{P}).$$ \end{definition} The ``adjoint action'' of the global group $G_{X}$ on the global Heisenberg algebra $\mathfrak{g}_{X}$ is defined by $$\Ad g\cdot\tilde{x}=\tilde{x}+\sum_{P\in X}\Res_{P}(x_{P}d\log a_{P}) C,$$ where $g=(\{a_{P}\}_{P\in X},\alpha)\in G_{X}$, $\tilde{x}=x+\gamma C\in\mathfrak{g}_{X}$ and $x=\{x_{P}\}_{P\in X}\in\field{A}_{X}$. 
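Note that $\Ad$ is indeed an action of $G_{X}$ on $\mathfrak{g}_{X}$: for $g_{1}=(\{a_{P}\}_{P\in X},\alpha)$ and $g_{2}=(\{b_{P}\}_{P\in X},\beta)$ we have
\begin{displaymath}
\Ad g_{1}\cdot(\Ad g_{2}\cdot\tilde{x})=\tilde{x}+\sum_{P\in X}\Res_{P}\bigl(x_{P}\,d\log(a_{P}b_{P})\bigr)C=\Ad(g_{1}g_{2})\cdot\tilde{x},
\end{displaymath}
where we used $d\log(a_{P}b_{P})=d\log a_{P}+d\log b_{P}$ and the fact that $\Ad$ does not change the $\field{A}_{X}$-component of $\tilde{x}$.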
The triple $(G_{X},\mathfrak{g}_{X},\Ad)$ is called the global Heisenberg system. By definition, a representation of the global Heisenberg system $(G_{X},\mathfrak{g}_{X},\Ad)$ is a pair $(R_{X}, dR_{X})$, where $R_{X}$ is a representation of the group $G_{X}$, and $dR_{X}$ is a representation of the Lie algebra $\mathfrak{g}_{X}$ satisfying $dR_{X}(\Ad g\cdot\tilde{x})=R_{X}(g)dR_{X}(\tilde{x})R_{X}(g)^{-1}$. As in the local case (see Section \ref{Heisenberg system}), a representation $(R_{X}, dR_{X})$ of the global Heisenberg system $(G_{X},\mathfrak{g}_{X},\Ad)$ is induced by a representation of the abelian subgroup $G^{+}_{X}=\prod_{P\in X}\mathcal{O}_{P}^{\ast}\times k^{\ast}$. Namely, every $g_{+}\in G_{X}^{+}$ can be written as $$g_{+}=(\{\alpha_{P}\exp \varphi_{P}\}_{P\in X},\beta),$$ where $\alpha_{P},\beta\in k^{\ast}$, $\varphi_{P}\in\mathfrak{p}$ for all $P\in X$, and we define a representation $r_{X}$ of $G_{X}^{+}$ in $\curly{F}_{X}$ by $$r_{X}(g_{+})=\beta\exp\rho(\varphi)=\beta\exp\bm{\varphi},\quad \bm{\varphi}=\{\bm{\varphi_{P}}\}_{P\in X}\in\End\curly{F}_{X}.$$ Explicitly, $$\exp\bm{\varphi}\cdot v =\underset{P\in X}\odot \exp\bm{\varphi_{P}}\cdot v_{P},\quad v=\underset{P\in X}\odot v_{P}\in\curly{F}_{X}.$$ As in Section \ref{Heisenberg system}, the operators $r_{X}(g_{+})\in\End\curly{F}_{X}$ are well-defined since $\varphi_{P}\in\mathfrak{p}$ for all $P\in X$. The $G_{X}$-module $\Ind_{G^{+}_{X}}^{G_{X}}\curly{F}_{X}$ consists of all functions $F: G_{X}\rightarrow\curly{F}_{X}$ satisfying $$F(gg_{+})=r_{X}(g_{+})^{-1}F(g),\quad g_{+}\in G_{X}^{+},\; g\in G_{X}$$ and such that the corresponding sections over $G_{X}/G_{X}^{+}\simeq \Div(X)$ have finite support.
The representation $R_{X}$ is given by $$(R_{X}(g)F)(h)=F(g^{-1}h),\quad g,h\in G_{X},$$ and the corresponding representation $dR_{X}$ of the global Heisenberg algebra $\mathfrak{g}_{X}$ is given by $$(dR_{X}(\tilde{x})F)(g)=\rho(\Ad g^{-1}\cdot\tilde{x})(F(g)),\quad \tilde{x}\in \mathfrak{g}_{X},\; g\in G_{X}.$$ We summarize these results as the global analog of Theorem \ref{G-Z}. \begin{theorem}\label{G-Z global} The pair $(R_{X}, dR_{X})$ is a representation of the global Heisenberg system $(G_{X},\mathfrak{g}_{X},\Ad)$, and $$\Ind_{G_{X}^{+}}^{G_{X}}\curly{F}_{X}\simeq\tilde{\curly{B}}_{X}.$$ \end{theorem} The explicit construction of the representations $R_{X}$ and $dR_{X}$ is as follows. We identify the elements $e_{D}\odot v\in\tilde{\curly{B}}_{X}$ with the functions $F:\Div(X)\rightarrow\curly{F}_{X}$ defined by \begin{equation*} F(D')= \begin{cases} v, &\text{if}\;\; D'=D,\\ 0, &\text{otherwise} \end{cases} \end{equation*} and using the uniformizers $t_{P}$, introduced in Section \ref{A-functions}, represent every idele $a=\{a_{P}\}_{P\in X} \in \field{J}_{X}$ as $$a=\Big\{\alpha_{P} t_{P}^{v_{P}(a_{P})}\exp\varphi_{P}\Big\}_{P\in X},\quad\text{where}\quad \alpha_{P}\in k^{\ast},\;\varphi_{P}\in\mathfrak{p}.$$ Then the representation $R_{X}$ is defined by \begin{gather} \bm{a}\cdot(e_{D}\odot v)= R_{X}((a,1))(e_{D}\odot v) \nonumber \\ =(-1)^{\sum_{P\in X} v_{P}(D_{a})v_{P}(D)} \prod_{P\in X}\alpha_{P}^{-v_{P}(D_{a})-2v_{P}(D)} e_{D+D_{a}}\odot\exp\bm{\varphi}\cdot v, \label{R-1} \end{gather} where we put $D_{a}=\sum_{P\in X}v_{P}(a_{P})\cdot P\in \Div(X)$. In particular, when $a=\alpha$ is a constant idele, then the restriction of $R_{X}(\alpha)$ to the subspace $\curly{B}^{D}_{X}$ is $\alpha^{-2\deg D}\bm{I}$. Similarly, the representation $dR_{X}$ is given by \begin{align*} dR_{X}(x)(e_{D}\odot v)=-x(D)e_{D}\odot v +e_{D}\odot \bm{x} \cdot v,\quad x\in\field{A}_{X}.
\end{align*} Let $\field{J}_{X}^{0}=\{a\in\field{J}_{X} : \deg D_{a}=0\}$ be the subgroup of degree $0$ ideles, and let $G_{X}^{0}=\field{J}_{X}^{0}\times k^{\ast}$ be the corresponding subgroup of $G_{X}$. In particular, the group of multiplicative functions $\mathcal{M}(X,D_{\mathrm{ns}})$, defined in Example \ref{2}, is a subgroup of $\field{J}_{X}^{0}$. The restriction of the representation $R_{X}$ to $G_{X}^{0}$ preserves the subspace $\curly{B}_{X}$ of $\tilde{\curly{B}}_{X}$, and for $a\in \field{J}_{X}^{0}$ the restriction of $R_{X}((a,1))$ to $\curly{B}_{X}$ is given by the same formula \eqref{R-1}. In particular, constant ideles $a=\alpha$ act by identity in $\curly{B}_{X}$. From now on we will consider only this representation of $G_{X}^{0}$ in $\curly{B}_{X}$, and will continue to denote it by $R_{X}$. Denote by $R_{X}^{\vee}$ the contragredient representation of $G_{X}^{0}$ in $\curly{B}^{\vee}_{X}$. We get from \eqref{R-1}, \begin{gather} (q^{D}\odot u)\cdot \bm{a}=(q^{D}\odot u)\cdot R^{\vee}_{X}((a,1))\nonumber \\ =(-1)^{\sum_{P\in X} v_{P}(D_{a})v_{P}(D)} \prod_{P\in X}\alpha_{P}^{v_{P}(D_{a})-2v_{P}(D)} q^{D-D_{a}}\odot u\cdot \exp\bm{\varphi}, \label{R-2} \end{gather} where we have used that $\sum_{P\in X} v_{P}(D)^{2} \equiv0\!\!\!\mod 2$ when $\deg D=0$. \begin{theorem} There exists a unique linear functional $\langle\,\cdot\,\rangle:\curly{B}_X \rightarrow k$ --- the expectation value functional --- satisfying the following properties: \begin{itemize} \item[\textbf{EV1}] $\langle\bm{1}_{X}\rangle =1$. \item[\textbf{EV2}] $\langle\bm{x}\cdot v\rangle =0$ for all $x\in\mathcal{A}(X,D_{\mathrm{ns}})$ and $v\in\curly{B}_X$. \item[\textbf{EV3}] $\langle\bm{m}\cdot v\rangle =\langle v\rangle$ for all $m\in\mathcal{M}(X,D_{\mathrm{ns}})$ and $v\in\curly{B}_X$.
\end{itemize} It has the form \begin{displaymath} \langle v\rangle =(\bm{\Omega}_X,v), \end{displaymath} where \begin{equation*} \bm{\Omega}_X=\sum_{D\in\Div_0(X)}c(D)\, q^D\odot \exp\left\{\sum_{n=1}^\infty\sum_{P\in X}\eta^{(n)}_{P}(D) u^{(n)}_P\right\}\Omega_X \in\curly{B}_X^\vee. \end{equation*} Here $$c(D)=\prod _{P,Q\in X}c(P,Q)^{v_{P}(D)v_{Q}(D)},\quad D\in\Div_{0}(X),$$ where $c(P,Q)=c_{P,Q}\in k^{\ast}$ are given in Proposition \rm{\ref{Mult-Existence}}, and $\Omega_{X}$ --- in Theorem \rm{\ref{additive theorem}}. \end{theorem} \begin{proof} It follows from the proof of Theorem \ref{charged theorem} that conditions \textbf{EV1}--\textbf{EV2} ensure that $\bm{\Omega}_{X}$ has the form given above with some coefficients $c(D)\in k^{\ast}$. Since the constants act by identity in $\curly{B}_{X}$, it is sufficient to verify condition \textbf{EV3} for basic multiplicative functions $m=f_{P,Q}$. As in the proof of Theorem \ref{charged theorem}, we put \begin{equation*} \Omega_{D}=\exp\left\{\sum_{n=1}^\infty\sum_{R\in X}\eta^{(n)}_{R}(D) u^{(n)}_R\right\}\Omega_{X}, \end{equation*} so that $$\bm{\Omega}_{X}=\sum_{D\in\Div_0(X)}c(D)\, q^D\odot\Omega_{D}.$$ Using Lemma \ref{prime-taylor} and \eqref{R-2}, we obtain the following formula for the action of $\bm{f_{P,Q}}=R_{X}^{\vee}(f_{P,Q})$ on $q^{D}\odot\Omega_{D}$, \begin{equation*} (q^{D}\odot\Omega_{D})\cdot\bm{f_{P,Q}}=h(P,Q;D)\,q^{D+Q-P}\odot\Omega_{D+Q-P}, \end{equation*} where $$h(P,Q;D)=(-1)^{v_{P}(D)+v_{Q}(D)}\prod_{R\in X}\left(\frac{c(P,R)}{c(Q,R)}\right)^{-2v_{R}(D)}\frac{c(P,P)c(Q,Q)}{c(P,Q)c(Q,P)}.$$ Now the equations $$\bm{\Omega}_{X}\cdot \bm{f_{P,Q}}=\bm{\Omega}_{X}\;\; \text{for all}\;\; P,Q\in X,\; P\neq Q$$ are equivalent to the equations \begin{equation} \label{rec-c} c(D+Q-P)=h(P,Q;D)c(D)\;\;\text{for all}\;\; D\in\Div_{0}(X). 
\end{equation} It is easy to see, using the property $c(P,Q)=-c(Q,P)$ for $P\neq Q$, that the unique solution of the recurrence relations \eqref{rec-c} satisfying $c(0)=1$ is given by \begin{equation*} c(D)=\prod_{i,j=1}^{m}c(R_{i},R_{j})^{n_{i}n_{j}},\;\;\text{where} \;\;D=\sum_{i=1}^{m}n_{i}R_{i}\in\Div_{0}(X). \qedhere \end{equation*} \end{proof} \begin{remark} All results of this section trivially hold for the case when $X$ has genus 0. Using Remarks \ref{0-a}, \ref{0-m} and \ref{0-u}, one gets explicit elementary formulas for the expectation value functional $\langle\,\cdot\,\rangle$ for the quantum field theories of additive, charged, and multiplicative bosons on $\mathbb{P}^{1}_{k}$. \end{remark} \bibliographystyle{amsalpha}
\section{Introduction} \label{S-Introduction} The recent opening and publication by Courtillot, Le Mou\"el and Lopes (\citealp{malburet2019}) of a \textit{pli cachet\'e} (sealed letter), entrusted to the French Academy of Science by Jean Malburet in 1918 \citep{malburet1918}, highlights the early interest in the search for a link between solar cycles and tides. In fact, the work of Malburet was already known from a detailed and luminous report he wrote (in French) for the journal \textit{`L'Astronomie'} in 1925 \citep{malburet1925}\footnote{This article is available online from the Biblioth\`eque de France via Gallica: https://gallica.bnf.fr/ark:/12148/bpt6k9628963x/f369.item.}. In that report, he correctly estimates the tidal forcing of planets on the Sun as being proportional to $m_p/d_p^3$ where $m_p$ and $d_p$ are the mass of a planet, and its distance from the Sun, respectively. This scaling yields tidal forcings proportional to $\sim4.0$, $\sim3.8$, $\sim1.8$, and $\sim1.7$, for Jupiter, Venus, Earth and Mercury, respectively (the contributions of the other planets are at least 10 times smaller). With this in mind, Malburet shows a good correspondence between the dates of solar maxima and the dates of `weak deviations from Jupiter-Venus-Earth syzygies\footnote{alignment of all these planets with the Sun.}'. Malburet's idea was later taken up and extended by \citet{wood1972} and \citet{okhlopkov2016}. There are however two serious problems with this idea: \begin{enumerate} \item The amplitude of the tidal forcing on the Sun is extremely small ($<10^{-8}$ kg m$^{-3}$), yielding accelerations one thousand times smaller than observed in the convective zone of the Sun \citep{deJager2005}. The forcing is {\bf 100\,000 times smaller} than the tidal forcing of the four Galilean satellites on Jupiter (which is similar to the tidal forcing of the Moon on Earth).
\item The $\sim$11.2-year period inferred from the `weak deviations from Jupiter-Venus-Earth syzygies' is an artificial construction which has no signature in the complete tidal signal, as demonstrated by \citet{okal1975}, and illustrated in Appendix \ref{A-tide}. \end{enumerate} It is therefore surprising that this idea receives renewed attention \citep{scafetta2012,okhlopkov2016,baidolda2017,courtillot2021,charbonneau2022}. Even more surprising is the enthusiasm shown by Stefani and colleagues who have published no less than 7 articles exploiting this idea (see \citet{stefani2021} and references therein). It seems that the main reasons that give confidence to these authors are their demonstration that the solar cycle is \textit{clocked}, and probably the belief that this property requires a clocked forcing that only planetary motions can provide. Their demonstration, presented in \citet{stefani2019}, rests upon the computation of `Dicke's ratio' of a 1000-year long time series of solar minima, which favors a clocked origin over a random-walk type origin. The main objective of the present article is to show that the demonstration of \citet{stefani2019} is invalid. I also show examples of fluid instabilities that naturally produce clocked-looking time series. \section{The demonstration of Stefani \textit{et al} (2019)} \label{S-Stefani} \citet{stefani2019} picked up an idea that \citet{dicke1978} proposed for testing whether there is a \textit{``chronometer hidden deep in the Sun''}. The aim of Dicke was to distinguish a clocked behaviour from an \textit{`eruption hypothesis'}, in which solar cycles would appear with a random phase. While Dicke restricted his analysis to the post-1705 time series of 25 solar maxima, \citet{stefani2019} extend it to 92 solar cycles starting in AD 1000, in an attempt to obtain a better statistical significance.
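Before turning to Dicke's ratio, note that Malburet's $m_p/d_p^3$ scaling quoted in the Introduction is easy to reproduce numerically. A minimal Python sketch (nominal planetary masses in kg and mean Sun-planet distances in AU; values are normalized so that Jupiter reads 4.0, which is the convention behind the numbers quoted above):

```python
# Relative tidal forcing m_p / d_p^3 of the four dominant planets,
# normalized so that Jupiter's value is 4.0.
masses = {            # nominal masses in kg
    "Jupiter": 1.898e27,
    "Venus":   4.867e24,
    "Earth":   5.972e24,
    "Mercury": 3.301e23,
}
distances = {         # mean Sun-planet distances in AU
    "Jupiter": 5.203,
    "Venus":   0.723,
    "Earth":   1.000,
    "Mercury": 0.387,
}

def relative_tidal_forcings():
    raw = {p: masses[p] / distances[p] ** 3 for p in masses}
    scale = 4.0 / raw["Jupiter"]          # normalize Jupiter to 4.0
    return {p: raw[p] * scale for p in raw}

print(relative_tidal_forcings())
# Jupiter 4.0, Venus ~3.8, Earth ~1.8, Mercury ~1.7
```

Rounded to one decimal, this reproduces the $\sim$4.0, 3.8, 1.8 and 1.7 values quoted in the Introduction.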
\subsection{Dicke's ratio} \label{S-Dicke} \citet{dicke1978} noticed that a succession of 3 very short cycles starting in 1755 was followed by a very long cycle, as if the Sun was trying to keep up with some internal clock. He proposed several statistical tools to assess the existence of such a clock. Consider a time series $t(i)$ of $N$ events $i$. In a perfectly clocked time series, all event dates $t(i)$ are evenly spaced. When gaussian distributed noise is added, each event date is displaced from the regular grid by some random time, yielding a corresponding distribution of cycle durations $d(i) = t(i) - t(i-1)$. In contrast, when events occur with a random phase, cycle durations $d(i)$ have a gaussian distribution, and event dates are obtained as $t(i) = t(i-1)+d(i)$. Clocked and random-walk time series can yield the same mean cycle duration $\bar{d}$ and standard deviation $\sigma$, but their statistical properties are not all identical. \citet{dicke1978} introduced a ratio that measures this difference, which \citet{stefani2019} refer to as `Dicke's ratio'. Dicke's ratio $Di(n)$ computed for subsets of $n \le N$ consecutive events is defined by: \begin{equation} Di(n) = \frac{\sum_{i=2}^n \delta_n^2(i)}{\sum_{i=2}^n(\delta_n(i)-\delta_n(i-1))^2}, \end{equation} where $\delta_n(i)=t(i)-f_n(i)$ is the deviation of event date $t(i)$ from a best linear fit $f_n(i) = a_n i+b_n$ of the $n$ dates. According to \citet{dicke1978}, the expectation of Dicke's ratio is: \begin{equation} \mathbb{E}(Di_{clock}(n)) = \frac{n^2-1}{2(n^2+2n+3)} {\longrightarrow} \frac{1}{2} \textrm{ when } n \to \infty \label{E-clock} \end{equation} for a clocked time series, while it is: \begin{equation} \mathbb{E}(Di_{rand}(n)) = \frac{(n+2)(n^2-1)}{3(5n^2+6n-3)} {\longrightarrow} \frac{1}{15}n \textrm{ when } n \to \infty \label{E-random} \end{equation} for a random-walk time series. 
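The computation of $Di(n)$ can be made concrete with a short script. A minimal Python sketch (the helper name is illustrative): it removes a best linear fit from the event dates and forms the ratio defined above.

```python
import numpy as np

def dicke_ratio(dates):
    """Dicke's ratio Di(n) of a series of n event dates.

    Deviations delta(i) are taken from the best linear fit of the
    dates; both sums run over i = 2..n, as in the definition above.
    """
    t = np.asarray(dates, dtype=float)
    i = np.arange(1, len(t) + 1)
    a, b = np.polyfit(i, t, 1)            # best linear fit f(i) = a*i + b
    delta = t - (a * i + b)               # deviations from the fit
    num = np.sum(delta[1:] ** 2)          # sum_{i=2}^n delta(i)^2
    den = np.sum(np.diff(delta) ** 2)     # sum_{i=2}^n (delta(i)-delta(i-1))^2
    return num / den

# Illustration: a noisy clocked series stays near 1/2, while a
# random-walk series drifts to much larger values (expectation ~ n/15).
rng = np.random.default_rng(0)
n = 92
clocked = 11.1 * np.arange(n) + rng.normal(0.0, 1.5, n)
random_walk = np.cumsum(rng.normal(11.1, 2.0, n))
print(dicke_ratio(clocked), dicke_ratio(random_walk))
```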
The expectation of $Di(n)$ is independent of $\bar{d}$ and $\sigma$ for both families, but the spread around the expectation does depend upon $\sigma$. \citet{dicke1978} applied this statistical tool to the time series of sunspot numbers starting in 1705. Due to the limited number of cycles, he could not reach a very definitive conclusion. \subsection{Schove's solar cycle time series} \label{S-Schove} To reach a firmer conclusion, \citet{stefani2019} complemented the post-1705 solar minima series with solar minima dates as far back as AD 1000, following \citet{schove1955}. Indeed, Schove published in 1955 the outcome of a very ambitious venture: dating maxima and minima of the solar cycle from 653 BC to AD 2025! Pre-1705 observations of sunspots being very rare, he mostly relied on reports of the observation of polar aurorae. In order to make up for the limited amount of reliable data, \citet{schove1955} explicitly mentions (p.131) that he made use of two {\bf assumptions} to build his 26-century-long table. These assumptions are reproduced in Figure \ref{F-Schove}. \begin{figure} \centerline{\includegraphics[width=0.8\textwidth,clip=]{Schove_assumptions.png} } \caption{The two assumptions made by \citet{schove1955} to construct his series of solar maxima, as displayed in his article p.131. } \label{F-Schove} \end{figure} \section{Arguments for a rebuttal} \label{S-invalidation} Schove's assumption $(b)$ as listed in Figure \ref{F-Schove} clearly suggests that his time series of solar maxima is {\bf clocked by construction}. In order to be more specific, I have built synthetic solar cycles to test the impact of Schove's assumptions on the character of the resulting time series. \subsection{Synthetic solar cycles} \label{S-Synthetic} The well-documented 24 solar cycles from 1755 yield a cycle duration (time between maxima) of $11.0 \pm 2.0$ years. Extending backwards to AD 1000 with Schove's dates yields 92 maxima separated by $11.1 \pm 2.2$ years.
I have built two different families of synthetic cycles: a random-walk family, and a clocked family. Both retain the post-1755 dates of solar maxima, as distributed by WDC-SILSO, Royal Observatory of Belgium, Brussels \citep{sidc2022}. \begin{itemize} \item The random-walk series are built by drawing at random normally distributed cycle durations, with a mean of $11.1$ years and a standard deviation of $2.0$ years. The dates of maxima are then constructed by cumulative difference from the date of the oldest post-1755 maximum. \item The clocked series are built by extending the post-1755 dates of maxima backwards in time with a constant duration of $11.1$ years, and then adding to the obtained dates a normal random noise with zero-mean and a standard deviation of $1.5$ years, this value providing the desired standard deviation of $2.0$ years for cycle durations. \end{itemize} \begin{figure} \centerline{\hspace*{0.015\textwidth} \includegraphics[width=0.415\textwidth,clip=]{pdf.pdf} \hspace*{-0.03\textwidth} \includegraphics[width=0.415\textwidth,clip=]{clock_pdf.pdf} } \vspace{-0.29\textwidth} \centerline{\bf \hspace{0.16 \textwidth} \color{black}{(a)} \hspace{0.34\textwidth} \color{black}{(b)} \hfill} \vspace{0.26\textwidth} \caption{Probability density function of the duration of synthetic solar cycles. (a) Random-walk synthetics: gaussian distribution of all 10\,000 realizations (teal), and pdf of the 3 realizations that comply with Schove's assumptions (magenta). (b) Clocked synthetics: nearly gaussian distribution of the 10\,000 realizations (teal), pdf of the 42 Schove-compliant realizations (magenta), and pdf of Schove cycle durations (red). } \label{F-pdf} \end{figure} Figure \ref{F-pdf} displays the probability density function (pdf) obtained with 10\,000 realizations, for both the random-walk series (Figure \ref{F-pdf}a) and the clocked series (Figure \ref{F-pdf}b). The pdf of Schove series is also drawn in Figure \ref{F-pdf}b. 
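The two construction recipes above can be sketched in a few lines of Python (a minimal illustration; the anchor date t0 is a placeholder, not the actual SILSO date of the oldest post-1755 maximum):

```python
import numpy as np

def random_walk_series(t0, n, rng, mean=11.1, sd=2.0):
    """Random-walk synthetics: normally distributed cycle durations,
    cumulated backwards in time from the anchor date t0."""
    durations = rng.normal(mean, sd, n)
    return t0 - np.cumsum(durations)[::-1]

def clocked_series(t0, n, rng, period=11.1, jitter=1.5):
    """Clocked synthetics: a regular grid of maxima extended backwards
    from t0, with zero-mean Gaussian jitter added to each date."""
    grid = t0 - period * np.arange(n, 0, -1)
    return grid + rng.normal(0.0, jitter, n)

# t0 = 1755.2 is a placeholder anchor date, for illustration only.
rng = np.random.default_rng(1)
rw = random_walk_series(1755.2, 68, rng)
ck = clocked_series(1755.2, 68, rng)
```

Both families yield cycle durations with a mean near 11.1 years and a standard deviation near 2 years (for the clocked series, the jitter of 1.5 years on the dates gives $\sqrt{2}\times 1.5 \simeq 2.1$ years on the durations), as stated above.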
\subsection{The impact of Schove's assumptions} \label{S-impact} I have examined which of the 10\,000 realizations of both families comply with the two assumptions of \citet{schove1955} recalled in Figure \ref{F-Schove}. I find only 3 random-walk realizations, and 42 clocked realizations. In other words, the {\bf assumptions} used by \citet{schove1955} practically exclude random variations of the duration between solar maxima. The cycle duration pdfs of the Schove-compliant realizations are shown in Figure \ref{F-pdf}, while their time series and deviations from a linear fit are displayed in Appendix \ref{A-deviations} together with those of Schove's series. \begin{figure} \centerline{\hspace*{0.015\textwidth} \includegraphics[width=0.7\textwidth,clip=]{Dicke_maxima_10000_2years.pdf} } \vspace{-0.48\textwidth} \centerline{\bf \hspace{0.16 \textwidth} \color{black}{(a)} \hfill} \vspace{0.435\textwidth} \centerline{\hspace*{0.015\textwidth} \includegraphics[width=0.7\textwidth,clip=]{clock_Dicke_maxima_10000_2years.pdf} } \vspace{-0.48\textwidth} \centerline{\bf \hspace{0.16 \textwidth} \color{black}{(b)} \hfill} \vspace{0.435\textwidth} \caption{Dicke's ratios of (a) random-walk and (b) clocked synthetic solar cycles. Random selection of 100 realizations (grey), mean of all 10\,000 realizations (thick black), Schove-compliant realizations (red) and their mean (thick red), Schove time series (blue squares). The thick green and magenta lines show the expectation of Dicke's ratio for a random-walk (equation \ref{E-random}) and for a clocked law (equation \ref{E-clock}), respectively. Note that all synthetics share the same post-1755 reliable time series. } \label{F-Dicke} \end{figure} Figure \ref{F-Dicke} shows Dicke's ratios\footnote{Note that all Dicke's ratios are calculated backwards in time as in \citet{stefani2019}, since more recent dates are considered more reliable.} for a random selection of 100 realizations of both families.
The thick black line shows the mean of Dicke's ratio for all realizations, while the thick red line is the mean of the Schove-compliant realizations. Note that Dicke's ratio of the rare Schove-compliant random-walk series is much lower than the mean of all realizations, and that individual realizations can plot far from the mean. \section{Quasi-clocked magnetohydrodynamic instabilities} \label{S-instabilities} The original goal of \citet{dicke1978} was to test the compatibility of solar cycle time series with an `eruption hypothesis' expressed by \citet{kiepenheuer1959} as `each cycle represents an independent eruption of the Sun which takes about 11 years to die down'. Fluid dynamic and magnetohydrodynamic instabilities do not necessarily behave that way, even when strong turbulence is present. For example, quasi-periodic magnetic oscillations have been reported in the VKS dynamo experiment \citep{berhanu2009} at Reynolds numbers above $10^7$. Nonetheless, we have seen that Dicke's ratio yields a more stringent measure of the clocked behaviour of a time series than that provided by visual inspection or pdfs. The Grenoble DTS$\Omega$ liquid sodium experiment exhibits magnetic fluctuations that are often quite regular \citep{schmitt2008}. From one such experiment, I have extracted time series of maximum induced magnetic intensity (see Appendix \ref{A-DTS} for details), and computed Dicke's ratio of several sequences of 100 consecutive maxima. Figure \ref{F-DTS-Dicke} shows that the behaviour of these magnetic fluctuations is quasi-clocked, even though the Reynolds number is $\simeq 8 \times 10^6$ and the standard deviation of the fluctuations is about $30\%$. \begin{figure} \centerline{\includegraphics[width=0.7\textwidth,clip=]{Dicke_lat9_av1.pdf}} \caption{Dicke's ratio of magnetic fluctuations in the DTS$\Omega$ liquid sodium experiment (see Appendix \ref{A-DTS} for details).
Three consecutive series of 100 maxima are plotted (peaks 101 to 200, 201 to 300, and 301 to 400). The green and magenta lines show the expectation of Dicke's ratio for a random-walk (equation \ref{E-random}) and for a clocked law (equation \ref{E-clock}), respectively. } \label{F-DTS-Dicke} \end{figure} \section{Conclusion} \label{S-conclusion} The demonstration by \citet{stefani2019} of a clocked behaviour for solar cycles is invalid because the 1000-year long time series they use \citep{schove1955} is {\bf clocked by construction}. The astrological quest for a link between solar cycles and planetary tides remains as unfounded as ever. Magnetohydrodynamic instabilities can produce quasi-periodic fluctuations that appear almost clocked. \begin{acks} I thank Andr\'e Giesecke and Frank Stefani for providing clarifications on their computation of Dicke's ratio. I thank my colleagues of the geodynamo team of ISTerre for useful discussions and encouragements, and an anonymous reviewer for helpful suggestions. This article is dedicated to Emile Okal and to the memory of Don L. Anderson. \end{acks} \begin{materialsavailability} All Matlab scripts and data used to produce the figures of this article are available as supplementary material. \end{materialsavailability} \begin{conflict} The author declares that he has no conflicts of interest. \end{conflict}
\section{Six-Quark Models for the d'-Dibaryon} \noindent In the bag-string model, dibaryons are described as rotating strings with colored quark clusters at the ends \cite{Mul78}. This model leads to a linear Regge trajectory of excited states. It predicts that the lowest $L=1$ excited state is obtained for a diquark-tetraquark configuration at around 2100 MeV \cite{Mul78}. \begin{figure}[htb] \label{Fig.1} $$\mbox{ \epsfxsize 10.0 true cm \epsfysize 5.5 true cm \setbox0= \vbox{ \hbox { \centerline{ \epsfbox{fig1ep.ps} } } } \box0 } $$ \vspace{0.1cm} \caption[The d'-dibaryon in the quark cluster model] {The d'-dibaryon in the $q^4$-$q^2$ quark cluster model.} \end{figure} The drawback of this approach is that it does not respect the Pauli principle. Only the quarks within the individual clusters are antisymmetrized but not the quarks belonging to different clusters. This is a good approximation for high angular momentum states because in this case the system is fairly elongated and the probability of cluster overlap is small. On the other hand, for a low lying $L=1$ excitation, such as the $d'$ one expects a considerable amount of quark exchange between the two clusters. \begin{table}[htb] \caption{The mass ($M_{d'}$) and size ($b_6$) of the $d'$ in the quark cluster model. The masses of the diquark and tetraquark are also given. Masses are in MeV and $b_6$ is in fm.} \begin{center} \begin{tabular}{|c||c|c|c|c|}\hline & $M_{2q}$ & $M_{4q}$ & $M_{d'}$ & $b_6$\\ \hline Set I & 645 & 1455 & 2440 & 0.76 \\ Set II & 637 & 1501 & 2634 & 0.70 \\ Set III & 621 & 1309 & 2111 & 0.95 \\ \hline \end{tabular} \end{center} \end{table} The confining forces between the colored quarks prevent large separations of the clusters and the typical size of such a system is expected to be about 1 fm. {}From our experience with the $NN$ system we know that the Pauli principle plays an important role at such distances \cite{Yam91}.
\vspace{0.5cm} \noindent {\it 3.1~ The Quark Cluster Model of the $d'$-Dibaryon } \vspace{0.5cm} \nobreak \noindent In this model, the $d'$ is described as a nonrelativistic six-quark system in which the quarks interact via the two-body potentials of eq.(\ref{Ham}). Tensor and spin-orbit interactions have been omitted since it has previously been shown that they give a negligible contribution to the $d'$ mass \cite{Glo94,Wag95}. The six-quark wave function is expanded into the cluster basis \begin{eqnarray} \label{rgmwf} \mid \Psi_{d'}^{J=0,T=0}> & = & {\cal A} {\Bigl \vert} \Biggl [ \Bigl [ \Phi_{T}^{S_T=1,T_T=0} (\b{\rho}_{T},\b{\lambda}_{T},\b{\eta}_T) \times \ \Y{211}_C \nonumber \\ & & \otimes\Phi_{D}^{S_D=0,T_D=0} (\b{\rho}_{D}) \times \Y{11}_C \Bigr ]^{S=1,T=0} \otimes\chi_{L=1}({\bf R}) \Biggr ]^{J=0,T=0} \ \Y{222}_C \Bigr >, \end{eqnarray} where $\Phi_T^{S_T=1,T_T=0}(\b{\rho}_{T},\b{\lambda}_{T},\b{\eta}_T)$ and $\Phi_D^{S_D=0,T_D=0}(\b{\rho}_{D})$ are the internal wave functions of the tetraquark (T) and diquark (D) clusters, respectively, and $\chi_{L=1}({\bf R})$ is the wave function of the relative motion of the two clusters. We use the same harmonic oscillator parameter for the internal and relative motion wave functions. The Young diagrams in eq.(\ref{rgmwf}) show that two color triplet clusters $[211]_C$ and $[11]_C$ are coupled to a $[222]_C$ color-singlet six-quark state. Furthermore, they show that the tetraquark and $d'$ wave functions are not fully antisymmetric but have mixed symmetry in color space. This nonfactorizability of the color space considerably complicates the calculation. \begin{figure}[htb] \label{Fig.2} $$\mbox{ \epsfxsize 11.5 true cm \epsfysize 6.5 true cm \setbox0= \vbox{ \hbox { \centerline{ \epsfbox{fig2ep.ps} } } } \box0 } $$ \vspace{0.2cm} \caption[Potential matrix elements] {The direct, one-quark and two-quark exchange diagrams that have to be evaluated for each two-body potential. 
The horizontal bars indicate the confinement, the one-gluon, one-pion, or one-sigma exchange interactions in eq.( \ref{Ham}).} \end{figure} \par The advantage of the cluster model is that it provides a continuous transition from the $q^6$ six-quark state to the $q^4-q^2$ clusterized state by smoothly going through all intermediate configurations. There is no rigid and artificial boundary between these extreme configurations; everything is contained in one and the same RGM wave function. This important property is a direct consequence of the Pauli principle on the quark level, which is ensured by the antisymmetrizer ${\cal A}$ \begin{equation} \label{ant} {\cal A}= 1 - 8P_{46}^{OSTC} +6P_{35}^{OSTC} P_{46}^{OSTC}, \end{equation} where $P_{ij}^{OSTC}$ is the permutation operator of the i-th and j-th quark in orbital (O) spin-isospin (ST) and color space (C), respectively. The direct, as well as the one- and two-quark exchange contributions for the two-body potential of eq.(\ref{Ham}) are depicted in fig.2. The solution for the unknown relative wave function $\chi_L({\bf R})$ and the unknown eigenenergy is obtained from the variational principle \begin{equation} \delta\left[{ \langle\Psi_{d'}\vert H-E\vert \Psi_{d'} \rangle \over\langle\Psi_{d'}\vert \Psi_{d'}\rangle }\right ]=0, \end{equation} where the variation is with respect to the relative wave function $\chi_L ({\bf R}) $. The results for the energy (mass) of the $d'$ as well as for the harmonic oscillator parameter $b_6$ which minimizes the $d'$ mass are shown in table 2 for the parameter sets of table 1. \vspace{0.5cm} \noindent {\it 3.2~ Shell-Model Calculation for a J$^P$=0$^-$, T=0 six-quark system} \vspace{0.5cm} \noindent Next, we calculate the mass of the $d'$-dibaryon in the translationally invariant shell-model (TISM) \cite{Glo94,Wag95}. Due to the negative parity of the $d'$, only an odd number of oscillator quanta $N=1,3,5,...$ is allowed. 
There is only one $N=1$ state which is compatible with $J^P=0^-,T=0$ \begin{equation} \label{smgs} \mid \Psi_{d'_{g.s.}}> = \mid N=1, [51]_O, (\lambda\mu)=(10), L=1, S=1, T=0, [321]_{ST}>. \end{equation} For an unambiguous classification of TISM states one has to specify the number of internal excitation quanta $N$, the Elliott symbol $(\lambda\mu)$, the Young pattern $[f]_O$ of the spatial permutational $S_6$-symmetry, as well as the total orbital angular momentum $L$, total spin $S$ and total isospin $T$ of the system. The specification of the intermediate $SU(4)_{ST}$ symmetry is necessary because in general, the same symmetry in $STC$ space can be obtained from several states with different intermediate $ST$ symmetries. The mass of the $d'$ is then given in first order perturbation theory by the expectation value of the Hamiltonian between the lowest harmonic oscillator state of eq.(\ref{smgs}) \begin{equation} \label{smm} M_{d'}(b_6) = <\Psi_{d'_{g.s.}} \mid H \mid \Psi_{d'_{g.s.}}>. \end{equation} In order to estimate the effect of configuration mixing with excited shell model states we include in addition ten $N=3$ states with orbital $[42]_O$ symmetry \cite{Wag95}. In this case also the $[51]_{ST}$, $[411]_{ST}$, $[33]_{ST}$, $[321]_{ST}$, and $[2211]_{ST}$ $S_6$ permutational symmetries are allowed. \par With fixed parameters of the quark-quark interaction determined from eq.(\ref{constraints}) we minimize the $d'$ mass with respect to the harmonic oscillator parameter $b_6$ in the six-quark wave function. Note that the harmonic oscillator parameters of the single baryon ($b$) and the $d'$ ($b_6$) wave functions are different. The value of $b_6$ which minimizes the $d'$ mass is a measure of the size of the system and is also given in table 3. 
\begin{table}[htb] \caption{The mass ($M_{d'}$) and size ($b_6$) of the $d'$ in the six-quark shell model without and with configuration mixing.} \begin{center} \begin{tabular}{|c||c|c||c|c|}\hline & \multicolumn{2}{c||}{$N=1$} & \multicolumn{2}{c|}{$N=1$ $\&$ $N=3$ } \\ \hline & $M^{(N=1)}_{d'}$ [MeV] & $b_6$ [fm] & $M_{d'}$ [MeV] & $b_6$ [fm] \\ \hline Set I & 2484 & 0.78 & 2413 & 0.78 \\ Set II & 2636 & 0.72 & 2553 & 0.73 \\ Set III & 2112 & 0.95 & 2063 & 0.96 \\ \hline \end{tabular} \end{center} \end{table} \section{ Discussion and Summary} \noindent As is evident from tables 2 and 3 the $d'$ mass of the cluster model is lower than the single $N=1$ shell-model mass of eq.(\ref{smm}) but higher than the shell-model result with configuration mixing. Note that the $d'$ mass calculated with set II (without pion and sigma-exchange between quarks) is some 150-200 MeV higher than the result with chiral interactions (set I). In any case, the calculated mass is about 350 MeV higher than the value required by experiment. However, the confinement strength $a_c$ in the three-quark and six-quark system need not be the same. If we assume (set III) that $a_c$ in the six-quark system is weaker than in the nucleon one obtains considerably smaller values for $M_{d'}$. This assumption is supported by the harmonic oscillator relation for $a_c$ \begin{equation} \label{confstr} a_c \propto {1\over m_q b^4} {1\over N} \end{equation} which is inversely proportional to the number of quarks $N$ in the system. A weaker confinement strength is also expected due to the larger hadronic size of the $d'$ ($b_6$) as compared to the hadronic size of the nucleon $(b)$. Set III differs from Set I of table 1 only in the strength of the parameter $a_c$ for which we take the value $a_c=5.0$ MeV/fm$^2$ in the six-quark calculation. Finally, both calculations give similar results for the $d'$ mass and for its size. Let us briefly discuss the reasons for this. 
The outer product of the orbital $[4]_O$ (tetraquark) and $[2]_O$ (diquark) symmetries gives the following six-quark symmetries \begin{equation} [4]_O \otimes [2]_O = [42]_O \ \ \oplus \ \ [51]_O \ \ \oplus \ \ [6]_O. \end{equation} With the exception of the $[6]_O$ symmetry, which is incompatible with the $d'$ quantum numbers, these are also included in the enlarged $N=3$ shell-model basis \cite{Wag95}. Analogously, the outer product of the two clusters in spin-isospin space leads to \begin{equation} \label{STSYM} [31]_{ST} \otimes [2]_{ST} = [51]_{ST} \ \ \oplus \ \ [42]_{ST} \ \ \oplus \ \ [33]_{ST} \ \ \oplus \ \ [411]_{ST} \ \ \oplus \ \ [321]_{ST}. \end{equation} Comparison with eq.(10) in ref.\cite{Glo94} shows that the $q^4-q^2$ cluster model wave function comprises the same $S_6$-symmetries in orbital and spin-isospin space (with the exception of the $[2211]_{ST}$ symmetry) as our enlarged shell model basis. Thus the trial function space spanned by both sets of basis functions is not very different. \par In summary, we have calculated the mass of a $J^P=0^-$ $T=0$ six-quark system in the NRQM using two different assumptions for the spatial distribution of the six quarks. The parameters have been determined from the constraints of eq.(\ref{constraints}). As in our previous works \cite{Glo94,Wag95}, our results are typically 300-400 MeV above the required resonance energy. However, for a weaker confinement strength $a_c$ in the six-quark system, as suggested by eq.(\ref{confstr}), we find a mass for the $d'$ that is considerably smaller. The assumption of a weaker confinement strength in the six-quark system does not affect previous results of the model in the $B=2$ sector, such as $NN$ scattering phase shifts or deuteron electromagnetic form factors, which are completely insensitive to the model and strength of confinement \cite{Shi89}.
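As a rough numerical illustration of the scaling in eq.(\ref{confstr}), the sketch below compares the implied six-quark confinement strength to the three-quark one. Only $b_6 = 0.95$ fm (Set III, table 3) appears in the text above; the nucleon oscillator parameter $b = 0.6$ fm used here is a hypothetical placeholder, since table 1 is not reproduced in this section.

```python
# Back-of-envelope sketch of the scaling a_c ~ 1/(m_q * b^4 * N)
# from eq. (confstr).  Only b6 = 0.95 fm (Set III, table 3) comes
# from the text; the nucleon oscillator parameter b = 0.6 fm is a
# hypothetical placeholder, since table 1 is not reproduced here.

def confinement_ratio(n_quarks_ratio, b_ratio):
    """a_c(6q)/a_c(3q) implied by a_c proportional to 1/(N b^4)."""
    return n_quarks_ratio * b_ratio ** 4

# N: 3 -> 6 quarks; b: assumed 0.6 fm (nucleon) -> 0.95 fm (d')
ratio = confinement_ratio(n_quarks_ratio=3 / 6, b_ratio=0.6 / 0.95)

# The 1/N factor alone already halves a_c; the larger hadronic
# size b6 > b suppresses it further, qualitatively supporting the
# weaker six-quark confinement strength assumed in Set III.
```

Even this crude estimate shows that both factors in eq.(\ref{confstr}) push $a_c$ in the six-quark system well below its three-quark value.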
\section{Introduction}\label{sintr} {\em Computability logic} (CoL) \cite{Jap0}-\cite{JapCL12} is an elegant theory of (multi-)agent computability. In CoL, computational problems are seen as games between a machine and its environment, and logical operators stand for operations on games. It understands interaction among agents in its most general --- game-based --- sense. On the other hand, other formalisms such as situation calculus appear to be too rudimentary to represent complex interactions among agents. In particular, CoL supports {\it query/knowledge duality} (which we call `querying knowledge'): what is a query for one agent becomes new knowledge for another agent. This duality leads to dynamic knowledge migration from one agent to another agent. Note that traditional agent/object-oriented approaches \cite{LCF} fail to support this duality. Therefore, CoL provides a promising basis for multiagent programming. In this paper, we discuss a web-based implementation of agent programming based on CoL, which can also be seen as a distributed logic programming (or LogicWeb\cite{Loke}) model with distributed processing. We assume the following in our model: \begin{itemize} \item Each agent corresponds to a web site with a URL. An agent's knowledgebase (KB) is described on its homepage. \item Agents are initially inactive. An inactive agent becomes activated when another agent invokes a query to it. \item Our model supports query/knowledge duality and querying knowledge. That is, knowledge of an agent can be obtained from another agent by invoking queries to the latter. \end{itemize} To make things simple, we choose \mbox{\bf CL1} -- the most basic fragment of CoL -- as our target language. \mbox{\bf CL1}\ is obtained by adding to classical propositional logic two choice operators: disjunction ($\add$) and conjunction ($\adc$). The choice disjunction $\add$ models decision steps by the machine. 
The choice conjunction $\adc$ models decision steps by the environment. For example, $green \add red$ is a game where the machine must choose either $green$ or $red$, while $green \adc red$ is a game where the environment must choose either $green$ or $red$. In this paper, we present \mbox{\bf CL1$^\Omega$}, a web-based implementation of \mbox{\bf CL1}. This implementation is very simple and straightforward, and its correctness is rather obvious. What is interesting is that \mbox{\bf CL1$^\Omega$}\ is a novel distributed logic programming model with no centralized control. It would provide a good starting point for future distributed logic programming as well as high-level web programming. The rest of this paper is organized as follows. Some basic terminology of $\mbox{\bf CL1}$ and $\mbox{\bf CL1$^\Omega$}$ will be reviewed in Section 2. Section 3 describes how a formula $F$ is executed on the basis of its proof. \section{\mbox{\bf CL1$^\Omega$}}\label{s2tb} We review the most basic fragment of propositional computability logic called $\mbox{\bf CL1}$ \cite{JapCL1}. Its language extends that of classical propositional logic by incorporating into it $\adc$ and $\add$. As always, there are infinitely many {\bf atoms} in the language, for which we will be using the letters $p,q,r,\ldots$ as metavariables. The two atoms $\twg$ and $\tlg$ have a special status in that their interpretation is fixed. Formulas of this language, referred to as {\bf $\mbox{\bf CL1}$-formulas}, are built from atoms in the standard way: \begin{definition} The class of $\mbox{\bf CL1}$-formulas is defined as the smallest set of expressions such that all atoms are in it and, if $F$ and $G$ are in it, then so are $\gneg F$, $F\mlc G$, $F \mld G$, $F \mli G$, $F\adc G$, $F \add G$. \end{definition} \begin{definition} Let $F$ be a $\mbox{\bf CL1}$-formula. An interpretation is a function $^*$ which sends $F$ to a game $F^*$. 
$F$ is said to be valid if, for every interpretation $^*$, there is a machine that wins the game $F^*$ for all possible scenarios corresponding to different behaviors by the environment. \end{definition} Now we define $\mbox{\bf CL1$^\Omega$}$, a slight extension to $\mbox{\bf CL1}$ with environment parameters. Let $F$ be a $\mbox{\bf CL1}$-formula. We introduce a new {\it env-annotated} formula $F^\omega$, which reads as `play $F$ against agent $\omega$'. For an $\adc$-occurrence $O$ in $F^\omega$, we say $\omega$ is the {\it matching} environment of $O$. For example, $(p \adc (q \adc r))^{w.com}$ is an agent-annotated formula and $w.com$ is the matching environment of both occurrences of $\adc$. We extend this definition to subformulas and formulas. For a subformula $F'$ of the above $F^\omega$, we say that $\omega$ is the {\it matching} environment of both $F'$ and $F$. In introducing environments to a formula $F$, one issue is whether we allow `env-switching' formulas of the form $(F[R^u])^w$. Here $F[R]$ represents a formula with some occurrence of a subformula $R$. That is, the machine initially plays $F$ against agent $w$ and then switches to play against another agent $u$ in the course of playing $F$. Formulas of this kind are difficult to process. For this reason, in this paper, we focus on non `env-switching' formulas. This leads to the following definition: \begin{definition} The class of $\mbox{\bf CL1$^\Omega$}$-formulas is defined as the smallest set of expressions such that (a) for any $\mbox{\bf CL1}$-formula $F$ and any agent $\omega$, $F^\omega$ is in it, and (b) if $H$ and $J$ are in it, then so are $\gneg H$, $H\mlc J$, $H\mld J$, $H\mli J$. \end{definition} \begin{definition} \noindent Given a $\mbox{\bf CL1$^\Omega$}$-formula $J$, the skeleton of $J$ -- denoted by $skeleton(J)$ -- is obtained by replacing every occurrence $F^\omega$ by $F$. \end{definition} \noindent For example, $skeleton((p \adc (q \adc r))^{w.com}) = p \adc (q \adc r)$. 
We often use $F$ instead of $F^{\omega}$ when it is irrelevant. In addition, we assume that each agent is identified with a physical URL address and the KB of an agent is stored in its homepage. The following definitions come from \cite{JapCL1}. They apply both to $\mbox{\bf CL1}$ and $\mbox{\bf CL1$^\Omega$}$. Understanding $E\mli F$ as an abbreviation of $\neg E \mld F$, a {\bf positive} occurrence of a subformula is one that is in the scope of an even number of $\neg$'s. Otherwise, the occurrence is {\bf negative}. A {\bf surface occurrence} of a subformula means an occurrence that is not in the scope of a choice ($\add$ or $\adc$) operator. A formula is {\bf elementary} iff it does not contain the choice operators. The {\bf elementarization} of a formula is the result of replacing, in it, every surface occurrence of the form $F_1\add ... \add F_n$ by $\oo$, and every surface occurrence of the form $F_1\adc ... \adc F_n$ by $\pp$. A formula is {\bf stable} iff its elementarization is valid in classical logic, otherwise it is {\bf instable}. The $F${\bf -specification} of $O$, where $F$ is a formula and $O$ is a surface occurrence in $F$, is a string $\alpha$ which can be defined by: \begin{itemize} \item The $F$-specification of the occurrence of $F$ in itself is the empty string. \item If $F$ = $\neg G$, then the $F$-specification of an occurrence that happens to be in $G$ is the same as the $G$-specification of that occurrence. \item If $F$ is $G_1\mlc ... \mlc G_n$, $G_1\mld ... \mld G_n$, or $G_1\mli G_2$, then the $F$-specification of an occurrence that happens to be in $G_i$ is the string $i.\alpha$, where $\alpha$ is the $G_i$-specification of that occurrence. 
\end{itemize} The proof system of \mbox{\bf CL1$^\Omega$}\ is identical to that of $\mbox{\bf CL1}$ and has the following two rules, with $H$, $F$ standing for $\mbox{\bf CL1$^\Omega$}$-formulas and $\vec H$ for a set of $\mbox{\bf CL1$^\Omega$}$-formulas: \\ Rule (A): ${\vec H}\vdash F$, where $F$ is stable and, whenever $F$ has a positive (resp. negative) surface occurrence of $G_1\adc ... \adc G_n$ (resp. $G_1\add ... \add G_n$) whose matching environment is $\omega$, for each $i\in \{1,...,n\}$, $\vec H$ contains the result of replacing in $F$ that occurrence by $G_i^\omega$. Rule (B): $H\vdash F$, where $H$ is the result of replacing in $F$ a negative (resp. positive) surface occurrence of $G_1\adc ... \adc G_n$ (resp. $G_1\add ... \add G_n$) whose matching environment is $\omega$ by $G_i^\omega$ for some $i\in \{1,...,n\}$. \begin{examplee}\label{ex01} $\mbox{\bf CL1$^\Omega$} \vdash ((p\adc q)\mlc(p\adc q))\mli (p\adc q)^\omega$ where $p$, $q$ represent distinct non-logical atoms, and $\omega$ is an agent. Note that $\omega$ plays no role in the proof procedure. \end{examplee} \begin{enumerate} \item $(p\mlc p)\mli p^\omega$, rule A, no premise \item $(q\mlc q)\mli q^\omega$, rule A, no premise \item $((q\adc p)\mlc p)\mli p^\omega$, rule B, 1 \item $((p\adc q)\mlc (q\adc p))\mli p^\omega$, rule B, 3 \item $((p\adc q)\mlc q)\mli q^\omega$, rule B, 2 \item $((p\adc q)\mlc (p\adc q))\mli q^\omega$, rule B, 5 \item $((p\adc q)\mlc (p\adc q))\mli (p\adc q)^\omega$, rule A, 4, 6 \end{enumerate} \begin{examplee}\label{ex02} $\mbox{\bf CL1$^\Omega$} \vdash p\mli (q\add p)^\omega$ where $p$, $q$ represent distinct non-logical atoms. \end{examplee} \begin{enumerate} \item $p\mli p^\omega$, rule A, no premise \item $p\mli (q\add p)^\omega$, rule B, 1 \end{enumerate} \section{Execution Phase}\label{s22tog} The machine model of \mbox{\bf CL1}\ is designed to process only one query/formula at a time. 
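The stability test in Rule (A) above is a purely classical computation: elementarize the formula (surface $\adc$-subformulas become $\pp$, surface $\add$-subformulas become $\oo$) and check validity by truth tables. The following minimal sketch illustrates this; the tuple encoding of formulas and all function names are our own illustration, not part of the paper's implementation.

```python
from itertools import product

# Hypothetical sketch of the Rule (A) stability test: elementarize
# (surface choice-conjunctions -> true, surface choice-disjunctions
# -> false), then check classical validity by truth tables.  The
# tuple encoding of formulas is our own, not from the paper:
#   ('atom', 'p'), ('not', F), ('and', F, G), ('or', F, G),
#   ('imp', F, G), ('adc', F, G), ('add', F, G)

def elementarize(f):
    if f[0] == 'adc':          # surface choice conjunction -> top
        return ('const', True)
    if f[0] == 'add':          # surface choice disjunction -> bottom
        return ('const', False)
    if f[0] in ('atom', 'const'):
        return f
    return (f[0],) + tuple(elementarize(g) for g in f[1:])

def atoms(f):
    if f[0] == 'atom':
        return {f[1]}
    if f[0] == 'const':
        return set()
    return set().union(*(atoms(g) for g in f[1:]))

def ev(f, v):
    if f[0] == 'atom':
        return v[f[1]]
    if f[0] == 'const':
        return f[1]
    if f[0] == 'not':
        return not ev(f[1], v)
    a, b = ev(f[1], v), ev(f[2], v)
    return {'and': a and b, 'or': a or b, 'imp': (not a) or b}[f[0]]

def stable(f):
    e = elementarize(f)
    names = sorted(atoms(e))
    return all(ev(e, dict(zip(names, bits)))
               for bits in product([False, True], repeat=len(names)))

p, q = ('atom', 'p'), ('atom', 'q')
# Step 1 of Example 1, (p /\ p) -> p, is stable (a Rule A axiom) ...
step1 = ('imp', ('and', p, p), p)
# ... while step 4, ((p adc q) /\ (q adc p)) -> p, is not stable:
# it must be derived by Rule (B), as in the proof above.
step4 = ('imp', ('and', ('adc', p, q), ('adc', q, p)), p)
```

This matches Example 1: only lines 1, 2, and 7 of that proof pass the stability test and are derived by Rule (A).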
In distributed systems, however, it is natural for an agent to receive/process multiple queries. For this reason, we introduce multiple queries to our machine. What changes are required for the machine to be able to process multiple queries at the same time? The answer is: {\it time slicing}/{\it context switching}. That is, we assume that our machine supports multiprogramming by processing multiple queries in a time-interleaved fashion. Concurrency typically causes a lot of complications, including mutually exclusive access to resources. Fortunately, in our setting, concurrency causes relatively few complications, as there is no interaction between queries. As discussed, the machine for \mbox{\bf CL1$^\Omega$}\ is required to handle multiple queries. To do this, it maintains a queue for storing multiple queries $Q_1,\ldots,Q_n$. We assume that the machine processes $Q_1,\ldots,Q_n$ by executing the following $n$ procedures {\it concurrently}: \[ Exec(KB\mli Q_1), \ldots, Exec(KB\mli Q_n) \] \noindent Here $KB$ is the knowledgebase associated with the machine. Below we will introduce an algorithm that executes a formula $J$. The algorithm contains two stages: \\ {\bf Algorithm Exec(J)}: \% $J$ is a $\mbox{\bf CL1$^\Omega$}$-formula \\ \begin{enumerate} \item The first stage is to initialize a temporary variable $E$ to $J$ and activate all the resource agents specified in $J$ by invoking proper queries to them. That is, for each negative occurrence of an annotated formula $F^\omega$ in $J$, activate $\omega$ by querying $F^\mu$ to $\omega$. Here $\mu$ is the current machine. On the other hand, we assume that all the querying agents -- which appear positively in $J$ -- are already active. 
\item The second stage is to play $J$ according to the following $loop$ procedure (which is from \cite{JapCL1}): \end{enumerate} procedure $loop(Tree)$: \% $Tree$ is a proof tree of $J$ \\ {\bf Case} $E$ is derived by Rule (A): \\ \hspace{3em} Wait for the matching adversary $\omega$ to make a move $\alpha =\beta i$, where $\beta$ \ $E$-specifies a positive (negative) surface occurrence of a subformula $G_1\adc\ldots\adc G_n$ ($G_1\add\ldots\add G_n$) and $i\in\{1,\ldots,n\}$. Let $H$ be the result of substituting in $E$ the above occurrence by $G_i$. Then update $E$ to $H$. \\ {\bf Case} $E$ is derived by Rule (B): \\ \hspace{3em} Let $H$ be the premise of $E$ in the proof. $H$ is the result of substituting, in $E$, a certain negative (resp. positive) surface occurrence of a subformula $G_1\adc\ldots\adc G_n$ (resp. $G_1\add\ldots\add G_n$) by $G_i$ for some $i\in\{1,\ldots,n\}$. Let $\beta$ be the $E$-specification of that occurrence. Then make the move $\beta i$, update $E$ to $H$. Let $\omega$ be the matching environment. Then inform $\omega$ of the move $\beta i$. The following proposition has been proved in \cite{JapCL1}. \begin{proposition}\label{sound} $\mbox{\bf CL1}\vdash F$ iff $F$ is valid {\em (}any $\mbox{\bf CL1}$-formula $F${\em )}. \end{proposition} The following proposition follows easily from Proposition \ref{sound}, together with the observation that $\mbox{\bf CL1}$-proof of $F$ encodes an {\it environment-independent} winning strategy for $F$. \begin{proposition}\label{sound2} $\mbox{\bf CL1$^\Omega$}\vdash J$ iff $skeleton(J)$ is valid {\em (}any $\mbox{\bf CL1$^\Omega$}$-formula $J${\em )}. \end{proposition} \begin{proof} Let $F$ be $skeleton(J)$. It is known from \cite{JapCL1} that every $\mbox{\bf CL1$^\Omega$}$(/$\mbox{\bf CL1}$)-proof of $J$ encodes an environment-independent winning strategy for $J$. It follows that a machine with such a strategy wins $J$ against any environment. Hence $F$ is valid. 
Conversely, suppose there is no $\mbox{\bf CL1$^\Omega$}$/$\mbox{\bf CL1}$-proof of $J$. Since a $\mbox{\bf CL1$^\Omega$}$-proof of $J$ is in fact identical to a $\mbox{\bf CL1}$-proof of $F$, it follows from \cite{JapCL1} that there is no machine that can win $F^*$ for some interpretation $*$. Therefore $F$ is not valid. \end{proof} \section{Examples}\label{sec:modules} \newenvironment{exmple}{ \begingroup \begin{tabbing} \hspace{2em}\= \hspace{3em}\= \hspace{3em}\= \hspace{3em}\= \hspace{3em}\= \hspace{3em}\= \kill}{ \end{tabbing}\endgroup} \newenvironment{example2}{ \begingroup \begin{tabbing} \hspace{8em}\= \hspace{2em}\= \hspace{2em}\= \hspace{10em}\= \hspace{2em}\= \hspace{2em}\= \hspace{2em}\= \kill}{ \end{tabbing}\endgroup} In our context, a $\mbox{\bf CL1$^\Omega$}$-web page corresponds simply to a $\mbox{\bf CL1}$-formula with a URL. An example is provided by the following ``weather'' agent which contains today's weather (sunny or cloudy) and temperature (hot or cold). \begin{exmple} \> $agent\ weather.com$.\\ \>$cloudy$.\\ \>$hot$. \end{exmple} Our language permits `querying knowledge' of the form $Q^\omega$ in KB. This requires the current machine to invoke the query $Q$ to the agent $\omega$. Now let us consider the $dress$ agent which gives advice on dress codes according to the weather conditions. It contains the following four rules and two pieces of querying knowledge, $(cloudy \add sunny)$ and $(hot \add cold)$, relative to the $weather$ agent. \begin{exmple} \> $agent\ dress.com$.\\ \> \% dress codes \\ \>$(cloudy \mlc hot)\mli green$. \\ \>$(sunny \mlc hot) \mli yellow$. \\ \>$(cloudy \mlc cold)\mli blue$. \\ \>$(sunny \mlc cold) \mli red$. \\ \> $(cloudy \add sunny)^{weather.com}$.\\ \> $(hot \add cold)^{weather.com}$.\\ \end{exmple} Now, consider a goal ?- $dress.com \mli green \add blue \add yellow \add red$. 
Solving this goal has the effect of activating $weather.com$, replacing $(cloudy \add sunny)$ with $cloudy$ and $(hot \add cold)$ with $hot$, and eventually answering $green$ to the user. Note that the two queries to $weather.com$ execute concurrently within $weather.com$. \section{Conclusion} \label{s5thr} In this paper, we proposed an agent programming model based on $\mbox{\bf CL1}$. Unlike other formalisms such as LogicWeb\cite{Loke} and distributed logic programming\cite{LCF}, this model does not require any centralized control. Our next goal is to replace $\mbox{\bf CL1}$ with the much more expressive $\mbox{\bf CL12}$\cite{Japtow}. \section{Acknowledgements} We thank Giorgi Japaridze for many helpful comments. \bibliographystyle{ieicetr}
\section{Introduction} The quantum double, or Drinfeld center, $Z(\mathcal{C})$ of a spherical fusion category $\mathcal{C}$ over $\mathbb{C}$ is the category of half-braidings of $\mathcal{C}$ by objects $X \in \mathcal{C}$. $Z(\mathcal{C})$ is itself a fusion category which is Morita equivalent to $\mathcal{C} \otimes \mathcal{C}^{op}$. A remarkable property of the quantum double is that it is a modular tensor category, meaning that it is braided with an invertible S-matrix \cite{MR1966525}. Modular tensor categories appear in a variety of contexts, including conformal field theory, topological quantum field theory, and quantum computation. On the other hand, every fusion category can be thought of as a category of modules over a commutative algebra in its center. Thus the quantum double construction provides a bridge between the theory of ordinary fusion categories and that of modular fusion categories. Some of the most interesting known examples of fusion categories were discovered through the study of finite-index subfactors, and in particular from the classification of small-index subfactors. Subfactors with index less than $4$ have principal graphs which are Dynkin diagrams, and the corresponding fusion categories are related to quantum $SU(2)$. In the paper ``Exotic subfactors with Jones indices $\frac{5+\sqrt{13}}{2}$ and $\frac{5+\sqrt{17}}{2}$'' \cite{MR1686551}, Asaeda and Haagerup constructed two new subfactors, the Haagerup subfactor (index $\frac{5+\sqrt{13}}{2}$) and the Asaeda-Haagerup subfactor (index $\frac{5+\sqrt{17}}{2}$). They called these subfactors exotic since, unlike other known examples, they were not constructed from symmetries of finite or quantum groups. In \cite{MR1228532}, the second-named author developed a general method for constructing subfactors which admit a certain type of group symmetry, using endomorphisms of the Cuntz C$^*$-algebras and their von Neumann algebra completions. 
The method works well for what have come to be known as quadratic fusion categories: fusion categories containing a non-invertible simple object $X$ such that every simple object is either invertible or is isomorphic to an invertible simple object tensored with $X$. A typical example is the principal even part of the Haagerup subfactor, whose subcategory of invertible objects is $\text{Vec}_{\mathbb{Z}/3\mathbb{Z}}$, and which satisfies $\text{dim}(\text{Hom}(gX,Xh))=\delta_{g,h^{-1}}$ for $g$ and $h$ in $\mathbb{Z}/3\mathbb{Z}$. The generalized Haagerup fusion categories are a class of quadratic fusion categories which have a similar structure, but with the group ${\mathbb{Z}/3\mathbb{Z}}$ replaced by other finite Abelian groups. Systems of equations for constructing such categories for groups of odd order were determined in \cite{MR1832764}, and a generalized Haagerup category for $\mathbb{Z}/5\mathbb{Z}$ was constructed by solving these equations. Solutions for several other groups were found by Evans and Gannon in \cite{MR2837122} by exploiting symmetries that they observed in the modular data. The theory of generalized Haagerup subfactors was extended to groups of even order in \cite{IzumiNote}. The situation here is more complicated, with a certain cocycle $\epsilon$, which is absent in the odd case, playing an important and somewhat mysterious role. \textbf{Modular data.} The $S$-matrix of a modular tensor category $\mathcal{C}$ is a matrix with rows and columns indexed by simple objects of $\mathcal{C}$. The entries are given by (normalized) scalar values of Hopf links whose two components are labeled by simple objects of $\mathcal{C}$ and where the crossings correspond to the braiding. The normalization of the $S$-matrix requires taking a square root of the global dimension, so there are in general two choices. However, in the unitary case the global dimension is positive and we take the positive square root. 
The $T$-matrix is a diagonal matrix whose entries are given by scalars corresponding to the twists of the simple objects coming from the braiding. For modular tensor categories over $\mathbb{C}$, the $S$ and $T$ matrices are unitary matrices \cite{MR2183279}, and satisfy \begin{equation} \label{moddata} \alpha(ST)^3=S^2=C=T^{-1}CT \end{equation} for some scalar $\alpha$, where $C$ is the conjugation matrix giving the dual data of simple objects. There is a corresponding projective representation of the modular group $SL_2(\mathbb{Z})$ sending the matrices $$ \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right ) \text{ and } \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right )$$ to $S$ and $T$ respectively \cite{MR1797619}. For the quantum double of a unitary fusion category, the constant $\alpha$ in (\ref{moddata}) is always $1$, and one gets an actual representation of $SL_2(\mathbb{Z})$. The fusion rules of $\mathcal{C}$ are determined from the S-matrix by the Verlinde formula \begin{equation}\label{verlinde} \text{dim}(\text{Hom}(X_i\otimes X_j, X_k))=\sum_{r}\frac{S_{X_i,X_r}S_{X_j,X_r}S_{\overline{X_k},X_r}}{S_{1,X_r}} \end{equation} where the $X_i$ are the simple objects of $\mathcal{C}$ with $X_0=1$, and $\overline{X_r}$ is the dual object to $X_r$ \cite{MR954762, MR1002038, MR1797619}. A pair of unitary matrices $S$ and $T$ satisfying (\ref{moddata}) for some scalar $\alpha$ and order $2$ matrix $C$, and such that the right hand side of (\ref{verlinde}) gives an integer for each $i,j,k$, the collection of which forms consistent structure constants for a based ring, is called modular data. A modular tensor category gives rise to modular data as described above; this modular data is uniquely determined up to a choice of order of the simple objects and a choice of square root of the global dimension. 
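As a concrete sanity check of the Verlinde formula (\ref{verlinde}), the sketch below recovers a fusion rule from a known $S$-matrix. The Ising modular category (simple objects $1, \psi, \sigma$, all self-dual) is used purely as a stand-in example; it is standard data, not taken from this paper.

```python
from math import sqrt, isclose

# Illustrative only: the S-matrix of the Ising modular category
# (objects 1, psi, sigma; all self-dual, so dual data is trivial),
# standard textbook data, not taken from this paper.
S = [[0.5, 0.5, sqrt(2) / 2],
     [0.5, 0.5, -sqrt(2) / 2],
     [sqrt(2) / 2, -sqrt(2) / 2, 0.0]]

def fusion_mult(i, j, k):
    """dim Hom(X_i (x) X_j, X_k) via the Verlinde formula; this
    S-matrix is real, so conjugation of the last factor is omitted."""
    val = sum(S[i][r] * S[j][r] * S[k][r] / S[0][r] for r in range(3))
    assert isclose(val, round(val), abs_tol=1e-9)  # must be an integer
    return round(val)

# sigma (x) sigma decomposes as 1 (+) psi:
sigma_sq = [fusion_mult(2, 2, k) for k in range(3)]
```

The integrality check inside `fusion_mult` mirrors the defining condition on modular data stated above: the right-hand side of (\ref{verlinde}) must be a nonnegative integer for every triple of simple objects.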
Conversely, given modular data, one can ask whether it is realized as an invariant of a modular tensor category, and if so, whether such a modular tensor category is unique. For small rank categories, classification of modular data has proven to be an effective technique for the classification of modular tensor categories (see \cite{MR2544735}). \textbf{Quantum doubles of quadratic fusion categories.} The most basic examples of modular tensor categories are quantum doubles of finite groups and fusion categories associated to quantum groups at roots of unity. Quantum doubles of quadratic fusion categories provide a rich source of new and interesting examples. In \cite{MR1782145,MR1832764}, the second-named author showed how the Cuntz algebra formalism can be used to explicitly describe the quantum double of many quadratic fusion categories. He computed the modular data of the Haagerup subfactor, and showed how to compute the modular data for similar quadratic categories associated to Abelian groups of odd order. In \cite{MR2837122}, Evans and Gannon simplified the modular data of the Haagerup subfactor and computed the modular data of several more generalized Haagerup subfactors. They further argued, based on patterns in the modular data, that the Haagerup subfactor and its generalizations should not be thought of as exotic, but rather as belonging to a well-behaved family. They also generalized the modular data of the Haagerup subfactor in several ways and made conjectures about the categorical realization of these generalized modular data. It still remained unclear how the Asaeda-Haagerup subfactor fit into this picture, or indeed what its modular data is. The quantum double of the Asaeda-Haagerup subfactor was first studied in the dissertation of Asaeda \cite{MR2701681}, but a detailed description was not obtained. 
In \cite{AHcat} it was shown that the Asaeda-Haagerup subfactor is also related to quadratic fusion categories, but in a somewhat more complicated way than in the case of the Haagerup subfactor. The even parts of the Asaeda-Haagerup subfactor are Morita equivalent to three quadratic fusion categories, one of which is a $ \mathbb{Z}/2\mathbb{Z}$-orbifold of a generalized Haagerup category for the group $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. Since the quantum double is an invariant of Morita equivalence, it suffices to consider the latter category to study the quantum double of the Asaeda-Haagerup subfactor. \textbf{Results.} The initial goal of the present work was to compute the modular data of the Asaeda-Haagerup subfactor, which we do. However, we also compute the modular data of several other interesting small index subfactors which arise from generalized Haagerup subfactors for groups of even order and their equivariantizations and de-equivariantizations. In fact, five of the seven known finite-depth subfactor pairs at index $3+\sqrt{5} $, which has recently been the focus of extensive classification work (see for example \cite{1308.5691,1406.3401}), are related to generalized Haagerup subfactors for order $4$ groups \cite{IzumiNote}, and we compute the modular data for all five (which belong to four distinct Morita equivalence classes). Our main result is the computation of the quantum double of the Asaeda-Haagerup subfactor, announced in \cite{AHcat} and proven here. \begin{theorem} The quantum double of the Asaeda-Haagerup subfactor has $22$ simple objects. The eigenvalues of the $T$-matrix are $\{ \pm1,\pm i \} \cup \{ e^{\frac{6l^2 \pi i }{17}} \}_{1 \leq l \leq 8} $. The full $S$-matrix is given in Theorem \ref{ahdouble}. 
\end{theorem} Evans and Gannon made the remarkable observation that the modular data of the Haagerup subfactor and its generalizations can be interpreted as a graft of the modular data of the quantum double of the dihedral group $\mathcal{D}_3$ and modular data associated to $SO(13) $ at level $2$; they generalized this grafted modular data into a series parametrized by the natural numbers, and showed that the first few instances are realized by generalized Haagerup subfactors for cyclic groups of odd order. It would be interesting to find a similar generalization of the Asaeda-Haagerup modular data. While the Asaeda-Haagerup modular data also appears to be composed of two principal blocks, we have not yet found such a generalization. We note however that the $8 \times 8$ block of the $S$-matrix corresponding to the $T$-eigenvalues $e^{\frac{6l^2 \pi i }{17}} $ is very similar to the $6 \times 6 $ block of the $S$-matrix of the Haagerup subfactor corresponding to the $T$-eigenvalues $e^{\frac{12 l^2 \pi i }{13}} $. After our results were announced, Morrison and Walker found a purely combinatorial method to deduce the number of simple objects in the quantum double of the Asaeda-Haagerup subfactor (and various other subfactors), as well as the induction functor giving the underlying objects of the simple half-braidings in the original fusion categories \cite{1404.3955}. However, their method does not give an explicit description of the quantum double or formulas for the modular data. There is another fusion category whose $S$-matrix differs from that of the Asaeda-Haagerup fusion categories in only a few entries. This category is a $\mathbb{Z}/2\mathbb{Z} $-de-equivariantization of a generalized Haagerup category associated to $\mathbb{Z}/8\mathbb{Z} $, and we give its modular data as well. 
We also consider four subfactors of index $3+\sqrt{5}$, which are known as the $3^{\mathbb{Z}/4\mathbb{Z} } $, $3^{\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} } $, $4442 $, and $2D2$ subfactors. The $4442$ subfactor was constructed in \cite{1208.3637}, while the other three subfactors were constructed in \cite{IzumiNote}, which also gave an alternate construction of the $4442$ subfactor. An alternate construction of the $2D2$ subfactor was given in \cite{1406.3401}. The names come from their principal graphs: \begin{center} \includegraphics[width=0.6in]{3333.eps} \quad \includegraphics[width=1in]{4442.eps} \quad \includegraphics[trim= 0 -1in 0 0in, clip, width=1in]{2d2.eps} . \end{center} The principal graphs of the first two subfactors in the preceding list are each the graph on the left; the subfactors are distinguished by the group structure of the subcategory of invertible objects in the principal even part, which is either $\mathbb{Z}/4\mathbb{Z} $ or $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. The principal graphs of the $4442$ and $2D2$ subfactors are the middle and right graphs, respectively. These four subfactors all arise from generalized Haagerup categories associated to order four groups or their orbifolds, so we can compute their quantum doubles as well. \begin{theorem} \begin{enumerate} \item The quantum double of the $3^{\mathbb{Z}/4\mathbb{Z} } $ subfactor has rank $26$. The eigenvalues of the $T$-matrix are $\{ \pm 1,\pm i \} \cup \{ e^{\pm \frac{3l^2 \pi i }{10}} \}_{1 \leq l \leq 4} $. All of the entries of the $S$-matrix are in $\mathbb{Q}(\sqrt{5}) $. \item The quantum double of the $3^{\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} } $ subfactor has rank $40$. The eigenvalues of the $T$-matrix are $\{ \pm 1, \pm e^{\pm \frac{2 \pi i }{5}} \} $. 
There is a modular tensor subcategory of rank $10$, and the modular data decomposes as a tensor product of the modular data of this subcategory and the modular data of a rank $4$ subcategory. All of the entries of the $S$-matrix are in $\mathbb{Q}(\sqrt{5}) $. \item The quantum double of the $2D2$ subfactor has rank $10$. The eigenvalues of the $T$-matrix are $\{ 1,\pm i, e^{\pm \frac{4 \pi i}{5}} \}$. All of the entries of the $S$-matrix are in $\mathbb{Q}(\sqrt{5}, i ) $. \item The quantum double of the $4442$ subfactor is graded by $\mathbb{Z}/3 \mathbb{Z} $, and has rank $48$. The $0$-graded component has rank $24$ and the other two graded components each have rank $12$. The eigenvalues of the $T$-matrix are $\{ \pm 1, e^{\pm \frac{2\pi i}{3}}, \pm e^{\pm \frac{2 \pi i }{5}} , e^{\pm \frac{4 \pi i }{15}}, e^{\pm \frac{14 \pi i }{15}} \} $. Each entry of the $S$-matrix can be written as an element of $\mathbb{Q}(\sqrt{5}) $ multiplied by a cube root of unity. \end{enumerate} \end{theorem} The complete modular data for all of these examples is given below. We note that the modular data of the $2D2$ subfactor is very similar to the modular data of the rank $10$ modular subcategory of the even part of the $3^{\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} } $ subfactor. It is shown in \cite{1308.5723} that the dual even part of the third ``fish subfactor'' is the same as the dual even part of the $3^{\mathbb{Z}/4\mathbb{Z} } $ subfactor. Therefore our results give the quantum double of this subfactor as well. There are only two additional known finite-depth subfactors with index $3+\sqrt{5}$, up to duality, and it has been conjectured that there are no others. One of these subfactors has an even part which is a tensor product of two rank two fusion categories, so its quantum double is known. The other one is the second fish subfactor, whose modular data is still unknown. 
Finally, we mention that it would be desirable to develop general formulas for the modular data of generalized Haagerup subfactors for arbitrary finite Abelian groups, and for the modular data of their orbifolds. Unfortunately, this seems to be out of reach until we achieve a better understanding of the cocycle $\epsilon $ appearing in the structure equations of these categories. We include as an online supplement to this paper the Mathematica notebook ModularData.nb, which contains the modular data of the six examples discussed in this paper and computes the corresponding fusion rules from the Verlinde formula. \textbf{Organization.} The paper is organized as follows. In Section 2 we review some background material on categories of endomorphisms, the quantum double, generalized Haagerup categories, and the orbifold construction. In Section 3 we describe the basic outline of our method to compute the quantum doubles of the generalized Haagerup categories and their orbifolds, which follows the general ideas of \cite{MR1782145,MR1832764}. In Section 4 we first give a general description of the tube algebra of a generalized Haagerup category, including multiplication formulas with respect to a certain basis and a description of the group-related part of the tube algebra. Then we work out the full tube algebra and compute the modular data for the groups $\mathbb{Z}/4\mathbb{Z} $ and $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. In Section 5 we describe the tube algebra for a graded extension of a generalized Haagerup category by a finite-order group automorphism, and use this to compute the modular data of the $4442$ subfactor. In Section 6 we describe the tube algebra of a certain type of $ \mathbb{Z}/2\mathbb{Z}$-de-equivariantization of a generalized Haagerup category, and use this to compute the modular data of the Asaeda-Haagerup subfactor and of the $2D2$ subfactor. Some of our results require complicated and somewhat tedious but elementary calculations. 
We try to strike a balance by including enough detail for the reader to follow the line of argument and reconstruct any calculations without being overly pedantic. \textbf{Acknowledgements.} This work was originally motivated by the results of \cite{AHcat}, which was joint work with Noah Snyder, whom we would also like to thank for helpful conversations. Pinhas Grossman would also like to thank Terry Gannon, David Jordan, and Scott Morrison for helpful conversations. Pinhas Grossman was partially supported by ARC grant DP140100732. Masaki Izumi was supported in part by the Grant-in-Aid for Scientific Research (B) 22340032, JSPS. Part of this work was done during Pinhas Grossman's visit to Kyoto University in 2013, and he is grateful for the hospitality. \section{Preliminaries} \subsection{Categories of finite-index endomorphisms} Let $M$ be a properly infinite factor with separable predual (all factors in this paper will be assumed to have separable predual). Let $\text{End}_0 (M) $ be the set of normal unital $*$-endomorphisms of $M$ whose images have finite index. Then $\text{End}_0 (M) $ is a strict C$^*$-tensor category, where the tensor product is composition of endomorphisms and for any $\rho, \sigma \in \text{End}_0 (M)$, the morphism space is given by $$\text{Hom}(\rho,\sigma)=\{ v \in M : v\rho(x)=\sigma(x)v, \ \forall x \in M \} .$$ The tensor product of morphisms $u\in \text{Hom}(\rho_1, \sigma_1) $ and $v \in \text{Hom}(\rho_2,\sigma_2) $ is given by $$ u \rho_1(v)=\sigma_1(v)u \in \text{Hom} (\rho_1 \circ \rho_2, \sigma_1 \circ \sigma_2).$$ We often suppress ``Hom'' and simply write $(\rho,\sigma) $ for the morphism space. Following common practice, we also sometimes use $ (\rho,\sigma)$ to mean $\text{dim}(\text{Hom}(\rho,\sigma)) $. For $\rho \in \text{End}_0 (M) $, we set $$d(\rho)=[M:\rho(M)]^{\frac{1}{2}} ,$$ the statistical dimension of $\rho $. A sector $[\rho] $ is the isomorphism class of an object $\rho $. 
In this paper we will be interested in full finite tensor subcategories of $\text{End}_0(M) $ which are closed under unitary conjugation and duality. These are unitary fusion categories. \subsection{The quantum double} For a discussion of the quantum double for subfactors, see \cite{MR1782145}; for the categorical context, see \cite{MR1966525}. Let $\mathcal{C} $ be a strict monoidal category. A half-braiding for an object $X \in \mathcal{C} $ is a family of isomorphisms $$e_X(Y):X \otimes Y \rightarrow Y \otimes X, \ Y \in \mathcal{C}$$ satisfying $$ (t \otimes id_X) \circ e_X(Y)=e_X(Z) \circ (id_X \otimes t), \ \forall Y,Z \in \mathcal{C}, \ t:Y \rightarrow Z$$ and $$e_X(Y \otimes Z)=(id_Y \otimes e_X(Z)) \circ (e_X(Y) \otimes id_Z), \ \forall Y,Z \in \mathcal{C}.$$ The quantum double, or Drinfeld center, $Z(\mathcal{C}) $ is the category whose objects are half-braidings $(X, e_X) $ of objects in $\mathcal {C}$ and whose morphisms are given by $$\text{Hom}((X, e_X) ,(Y, e_Y) )=$$ $$\{ t \in \text{Hom}(X,Y): (id_Z \otimes t) \circ e_X(Z) =e_Y(Z) \circ (t \otimes id_Z), \ \forall Z \in \mathcal{C} \} .$$ The quantum double is a braided monoidal category, with the tensor product of $(X, e_X) $ and $(Y,e_Y) $ given by $$(X\otimes Y, e_{X \otimes Y} )$$ $$e_{X \otimes Y}(Z) = (e_X(Z) \otimes id_Y) \circ (id_X \otimes e_Y(Z)), \ \forall Z \in \mathcal{C}.$$ A modular tensor category is a braided spherical fusion category $ \mathcal{C}$ whose $S$-matrix is non-degenerate, where the $S$-matrix is defined by $$S_{X,Y}= \frac{d_X d_Y}{ \sqrt{\text{dim}(\mathcal{C}) }} \text{tr}_{X \otimes Y} (c_{X,Y} \circ c_{Y,X}) ,$$ for simple objects $X$ and $Y$, where $c$ is the braiding on $\mathcal{C} $, $d_X $ is the quantum dimension, $\text{dim} (\mathcal{C}) $ is the global dimension, and $\text{tr}$ is the normalized spherical trace on $\text{End} (X \otimes Y)$. The $S$-matrix is only defined up to the choice of square root of the global dimension. 
The $T$-matrix is defined by $$T_{X,Y}=d_X \delta_{X,Y} \text{tr}_{X \otimes X}(c_{X,X}) , $$ and the conjugation matrix $C$ is defined by $$C_{X,Y} =\delta_{X,\bar{Y}},$$ where $\bar{Y} $ is the dual object of $Y$. For a modular tensor category over $\mathbb{C} $, the $S$-matrix is symmetric, $S$ and $T$ are unitary \cite{MR2183279}, and we have the relations $$\alpha(ST)^3=S^2=C =T^{-1}CT $$ for a scalar $\alpha $ \cite{MR1153682,MR1797619}. If $\mathcal{C} $ is a spherical fusion category over $\mathbb{C} $, then $Z(\mathcal{C}) $ is a modular tensor category. We fix $\sqrt {\text{dim} (Z(\mathcal{C})) }=\text{dim}(\mathcal{C})$, and then $\alpha=1 $ \cite{MR1966525}. \subsection{The tube algebra} In this subsection we summarize the theory developed in \cite{MR1782145}. Let $M$ be a properly infinite factor, and let $\mathcal{C} $ be a finite full monoidal sub-category of $\text{End}_0 (M) $ which is closed under unitary conjugation and taking duals. Let $\Delta= \{ \rho_{\xi} \}_{\xi \in \Delta_0}$ be a set of endomorphisms of $M$ representing the simple objects of $\mathcal{C} $, and containing $\rho_0=id$. The tube algebra of $\Delta $ is an algebra with underlying vector space $$\text{Tube} \ \Delta = \bigoplus_{{\xi}, {\zeta},{\eta} \in \Delta_0} \text{Hom}(\rho_{\xi} \rho_{\zeta}, \rho_{\zeta}\rho_{\eta}) .$$ An element $X \in \text{Hom}(\rho_{\xi} \rho_{\zeta}, \rho_{\zeta}\rho_{\eta}) $ is denoted as an element of the tube algebra by $(\xi \; \zeta|X| \zeta \; \eta ) $. A $*$-algebra structure is defined on Tube $\Delta$ as follows. For each $\xi,\eta, \zeta \in \Delta_0 $, let $N^{\zeta}_{\xi,\eta} =(\rho_{\xi} \rho_{\eta},\rho_{\zeta})$. If $N^{\zeta}_{\xi,\eta} > 0$, we write $\zeta \prec \xi\eta$ and fix a family of isometries $\{(T^{\zeta}_{\xi,\eta})_i \in \text{Hom}(\rho_{\zeta},\rho_{\xi} \rho_{\eta}) \}_{1 \leq i \leq N^{\zeta}_{\xi,\eta} }$ satisfying the Cuntz algebra relations. 
We also fix duality isometries $R_{\zeta} \in \text{Hom}(1,\rho_{\bar{\zeta}}\rho_{\zeta}) $ and $\bar{R}_{\zeta} \in \text{Hom}(1,\rho_{\zeta}\rho_{\bar{\zeta}}) $ for each $\zeta \in \Delta_0 $ (where $\rho_{\bar{\zeta}} $ is the representative in $\Delta $ of the dual of $\rho_{\zeta} $). Define multiplication by \begin{multline}\label{tubemult} (\xi \; \zeta|X| \zeta \; \eta )(\xi' \; \zeta'|Y| \zeta' \; \eta' )=\\ \delta_{\eta,\xi'} \sum_{\nu \prec \zeta \zeta' } \sum_{i=1}^{N^{\nu}_{\zeta,\zeta'}} (\xi \; \nu|(T^{\nu}_{\zeta,\zeta'})_i^* \rho_{\zeta}(Y)X\rho_{\xi}((T^{\nu}_{\zeta,\zeta'})_i) |\nu \; \eta' ) \end{multline} and an involution by \begin{equation} \label{tubeinv} (\xi \; \zeta|X| \zeta \; \eta )^*= d(\zeta)( \eta\; \bar{\zeta}|\rho_{\bar{\zeta}}(\rho_{\xi}(\bar{R}_{\zeta}^*)X^* )R_{\zeta} |\bar{\zeta } \; \xi ). \end{equation} These operations do not depend on the choice of isometries $ (T^{\zeta}_{\xi,\eta})_i$ and $R_{\zeta}$, and make $\text{Tube} \ \Delta $ into a C$^*$-algebra. For each $\xi \in \Delta_0 $, let $1_{\xi} =(\xi \; 0| 1 | 0 \; \xi )$, and let $\mathcal{A}_\xi = 1_{\xi} (\text{Tube} \ \Delta) 1_{\xi}$. Let $$\textbf{t} =\sum_{\xi \in \Delta_0} d(\rho_{\xi}) ( \xi \; \bar{\xi}| R_{\xi} \bar{R}_{\xi}^*| \bar{\xi} \ \xi ) \in \text{Tube} \ \Delta ,$$ and let $T_0$ be the linear operation on $\text{Tube}\ \Delta $ of left multiplication by $\textbf{t} $. Let $S_0 $ be the linear transformation on $\sum_{\xi \in \Delta_0} \mathcal{A}_{\xi} $ defined by $$S_0( ( \xi\; \eta|X | \eta \; \xi ))=(\bar{\eta} \; \xi | R^*_{\eta}\rho_{\bar{\eta}}(X\rho_{\xi}(\bar{R}_{\eta})) | \xi \; \bar{\eta}) .$$ \begin{theorem}\cite{MR1782145} (a) The minimal central projections of $\text{Tube} \ \Delta $ are in bijection with the simple objects of $Z(\mathcal{C}) $. 
If $(\sigma, e_{\sigma}) $ is a simple object of $Z(\mathcal{C}) $ with corresponding minimal central projection $p_{\sigma} \in \text{Tube} \ \Delta $, then for any $\xi \in \Delta_0 $, we have $$(\sigma,\rho_{\xi})=\text{Rank} ( p_{\sigma} 1_{\xi}) .$$ (b) The center of $\text{Tube} \ \Delta $ is invariant under $T_0 $ and $S_0$. Identify the minimal central projections of $\text{Tube} \ \Delta $ with the simple objects of $Z(\mathcal{C}) $, so that each minimal central projection $P_i$ corresponds to a simple object $\tilde{P}_i $ in $Z(\mathcal{C}) $. Introduce the basis $\{Q_i=\frac{\sqrt{\Lambda} }{d (\tilde{P}_i) } P_i \} $, where $\Lambda $ is the global dimension, for the center of Tube $\Delta $. The matrix of $S_0$ with respect to the basis $\{ Q_i\} $ is the $S$-matrix of $Z(\mathcal{C}) $. The matrix of $T_0$ with respect to the basis $\{ P_i\} $ is the $T$-matrix of $Z(\mathcal{C}) $. \end{theorem} \begin{remark} \begin{enumerate} \item Choosing a set of matrix units for the tube algebra determines a unitary half-braiding of $ \mathcal{C}$. \item In \cite{MR1782145} the quantum double was defined using unitary half-braidings; however for unitary fusion categories the unitary quantum double was shown to be equivalent to the ordinary quantum double in \cite{MR1966525}. \end{enumerate} \end{remark} Let $\{P_i \}_{i \in I} $ be the set of minimal central projections in Tube $\Delta$, and let $\tilde{P_i} $ be the corresponding simple objects in $Z(\mathcal{C}) $. Define the linear functional $$\phi_{\Delta}(\xi \; \zeta | X | \zeta \; \eta ) = d(\rho_{\xi})^2 \delta_{\xi,\eta}\delta_{{\zeta},0} X .$$ Then $\phi_{\Delta}(P_i)=\displaystyle \frac{\text{d}(\tilde{P}_i)^2}{\Lambda} $, and we have the following formula for the $S$-matrix: \begin{equation} \label{sform1} S_{\tilde{P_i},\tilde{P_j}}=\frac{\Lambda}{d(\tilde{P_i})d(\tilde{P_j})}\phi_{\Delta}(S_0(P_i)P_j). \end{equation} There is another useful formula for finding entries of the $S$-matrix. 
Fix $i \in I$ and $\eta \in \Delta_0 $ such that $P_i 1_{\eta} \neq 0 $. Let $p_i$ be a minimal projection subordinate to $P_i 1_{\eta}$. Let $\{p_j^{\xi,k} \}_{\xi \in \Delta_0, 1 \leq k \leq \text{rank}(P_j\mathcal{A}_{\xi})} $ be a decomposition of $P_j$ into mutually orthogonal minimal projections. For $\xi \in \Delta_0 $, define the linear functional $$ \phi_{\xi}(x) = R^*_{\xi}\rho_{\bar{\xi}}(x)R_{\xi} , \ x \in M.$$ Then \begin{equation} \label{sform2} S_{\tilde{P_i},\tilde{P_j}}=\frac{\Lambda}{d(\tilde{P_j})} \sum_{\substack{\xi \in \Delta_0 \\ 1 \leq k \leq \text{rank}(P_j\mathcal{A}_{\xi})}} d(\xi) \phi_{\xi}( X^*_{\xi,k} Y^*_{\xi}), \end{equation} where $X_{\xi,k} $ is the component of $p_j^{\xi,k} $ in $(\xi \eta, \eta \xi) $ and $Y_{\xi}$ is the component of $p_i$ in $(\eta \xi,\xi \eta) $ \cite{MR1782145}. This second formula has the advantage that it only requires knowing a single minimal component of $P_i$, and was used extensively in the examples in \cite{MR1832764}. However, for the examples we discuss in this paper, the first formula will often be more useful since the full projections $P_i$ are eigenvectors of $\textbf{t} $ whereas their minimal components are not in general. \subsection{Generalized Haagerup categories and their orbifolds} We recall the following construction from \cite{IzumiNote}. Let $G$ be a finite Abelian group acting by outer automorphisms on an infinite factor $M$; we denote the corresponding automorphisms by $\alpha_g, g \in G $. Let $\rho_0$ be an irreducible self-conjugate finite-index endomorphism satisfying the fusion rules $$[\alpha_g ][\rho_0]=[\rho_0][ \alpha_{-g}] $$ and $$[\rho_0]^2=[id]\bigoplus_{g \in G} [\alpha_g][ \rho_0] .$$ The fusion category tensor generated by $\rho_0 $ and $\alpha_g, \ g \in G $ is called a generalized Haagerup category. 
It was shown in \cite{IzumiNote} that if $H^2(G,\mathbb{T}) =0$, there is an endomorphism $\rho $ in the same sector as $\rho_0 $ and a family of $|G|+1 $ isometries $T_g \in (\alpha_g \rho, \rho^2), \ g \in G $, $S \in (id, \rho^2)$ satisfying the Cuntz algebra relations, such that: \begin{equation} \label{e11} \alpha_g \circ \rho = \rho \circ \alpha_{-g}, \ \forall g \in G \end{equation} \begin{equation}\label{e12} \alpha_g (S)=S, \ \ \alpha_g(T_h)=\epsilon_g(h)T_{h+2g} \, \forall g,h \in G \end{equation} \begin{equation}\label{e13} \rho(S)=\frac{1}{d}S+\frac{1}{\sqrt{d}}\sum_{g \in G} T_g^2 \end{equation} \begin{multline}\label{e14} \rho(T_g)=\epsilon_g(-g)[\eta_{-g}T_{-g}SS^*+\frac{\overline{\eta_{-g}}}{\sqrt{d}}ST_{-g}^* \\ +\sum_{h,k \in G}A_{-g}(h,k)T_{h-g}T_{h+k-g}T_{k-g}^*] ,\, \forall g\in G \end{multline} where $d=d(\rho)$, the $A_g(h,k) $ are complex numbers, the $\epsilon_g(h) $ are signs, and the $\eta_g $ are cube roots of unity satisfying \cite[(3.1)-(3.9)]{IzumiNote}. Conversely, any solution to \cite[(3.1)-(3.9)]{IzumiNote} gives rise to a fusion category on a von Neumann algebra completion of a corresponding Cuntz algebra, which is a generalized Haagerup category if the action of $G $ on the factor is outer. We will denote a generalized Haagerup category by $\mathcal{C}_{G,A,\epsilon, \eta} $, and assume we are given a concrete representation as endomorphisms of an infinite factor $M$ with structure constants $A$, $\epsilon $, and $\eta $ such that $\rho $ and $\alpha_g, \ g \in G $ satisfy (\ref{e11})-(\ref{e14}). We will use the notation $n=|G|$ and $\Lambda=n(1+d^2) $, the global dimension. In all of the examples in this paper, $\eta $ will be identically $1$, so we assume from now on that this is the case and dispense with $\eta$ without further comment. 
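A quick numerical aside (ours, not needed for any proofs): taking statistical dimensions in the fusion rule $[\rho_0]^2=[id]\oplus\bigoplus_{g\in G}[\alpha_g][\rho_0]$ gives $d^2=1+nd$, so $d=(n+\sqrt{n^2+4})/2$, and the values relevant to this paper are easy to verify:

```python
import math

# From [rho]^2 = [id] (+) sum_g [alpha_g rho], taking statistical dimensions
# gives d^2 = 1 + n d with n = |G|, hence d = (n + sqrt(n^2 + 4)) / 2.
def d(n):
    return (n + math.sqrt(n * n + 4)) / 2

def global_dim(n):  # Lambda = n(1 + d^2), as defined in the text
    return n * (1 + d(n) ** 2)

# n = 4 (the groups Z/4 and Z/2 x Z/2 studied below): d = 2 + sqrt(5),
# and d + 1 = 3 + sqrt(5), the index value featuring in the introduction.
assert abs(d(4) - (2 + math.sqrt(5))) < 1e-12
assert abs(d(4) + 1 - (3 + math.sqrt(5))) < 1e-12
# n = 3 recovers the Haagerup dimension d = (3 + sqrt(13)) / 2.
assert abs(d(3) - (3 + math.sqrt(13)) / 2) < 1e-12
# Global dimension for n = 4: Lambda = 4(10 + 4 sqrt(5)) = 40 + 16 sqrt(5).
assert abs(global_dim(4) - (40 + 16 * math.sqrt(5))) < 1e-9
print(round(global_dim(4), 4))  # -> 75.7771
```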
\subsection{Orbifold categories} Given a finite group $G$ acting by tensor autoequivalences on a fusion category $\mathcal{C} $, the equivariantization $\mathcal{C}^G $ is a fusion category which is a categorical analogue of a crossed product. The global dimensions are related by $\text{dim}(\mathcal{C^G})=|G| \text{dim}(\mathcal{C}) $. One can recover $\mathcal{C} $ from $\mathcal{C}^G $ by a de-equivariantization construction. For categories of finite-index endomorphisms of a factor, both equivariantization and de-equivariantization can sometimes be realized by an orbifold construction, in which the von Neumann algebra is enlarged to a crossed product by a finite group action and the endomorphisms are extended to the larger algebra. Let $\mathcal{C}_{G,A,\epsilon} $ be a generalized Haagerup category realized on a factor $M$ containing the Cuntz algebra $\mathcal{O}_{|G|+1} $ with generators $S$ and $T_g $, $g \in G $. Let $\theta $ be an automorphism of $G$ such that $$\epsilon_{\theta(h)}(\theta(g))=\epsilon_h(g), $$ $$ A_{\theta(g)}(\theta(h),\theta(k))=A_g(h,k), \ \forall g,h,k \in G .$$ Define an automorphism $\gamma $ on $M$ by $\gamma(S)=S $ and $\gamma(T_g)=T_{\theta(g)} $, $g \in G $. Then $$\gamma \circ \rho = \rho \circ \gamma $$ and $$\gamma \circ \alpha_g = \alpha_{\theta(g)} \circ \gamma .$$ The automorphism $\gamma $ thus induces an action of $\mathbb{Z}/ m\mathbb{Z} $ on $\mathcal{C}_{G,A,\epsilon} $, where $m$ is the order of $\gamma $. 
Let $P=M \rtimes_{\gamma} \mathbb{Z}/m\mathbb{Z} $ be the crossed product of $M$ by $ \gamma$; $P$ is the von Neumann algebra generated by $M$ and a unitary $\lambda $ satisfying $$\lambda^m=1 \text{ and } \lambda x \lambda^{-1}=\gamma(x), \ \forall x \in M .$$ We can extend $\rho $ to an endomorphism $\tilde{\rho} $ of $P $ by setting $$\tilde{\rho}(\lambda)=\lambda .$$ We denote the category of endomorphisms of $P$ tensor generated by $\tilde{\rho} $ by $\mathcal{C}^{\gamma}_{G,A,\epsilon} $; it is a $\mathbb{Z}/m\mathbb{Z} $ equivariantization of $\mathcal{C}_{G,A,\epsilon} $. The fusion rules of $\mathcal{C}^{\gamma}_{G,A,\epsilon} $ were computed in \cite{IzumiNote}. The category $\mathcal{C}^{\gamma}_{G,A,\epsilon} $ is Morita equivalent to the category of endomorphisms of $M$ tensor generated by $\rho $ and $\gamma $, which is a $\mathbb{Z}/m\mathbb{Z} $-graded extension of $\mathcal{C}_{G,A,\epsilon} $. We will refer to the graded extension as $\mathcal{C}_{G,A,\epsilon} \rtimes_{\gamma} \mathbb{Z}/m\mathbb{Z} $. \begin{example} The even part of the $4442 $ subfactor is a $\mathbb{Z} /3\mathbb{Z} $-equivariantization of a generalized Haagerup category for $G=\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $. Here $\theta $ is a cyclic permutation of the nonzero elements of $G$ \cite{IzumiNote}. \end{example} Now let $\mathcal{C}_{G,A,\epsilon} $ again be a generalized Haagerup category. Let $z \in G $ satisfy $2z=0$ and suppose $\epsilon_z(\cdot) $ is a character satisfying $\epsilon_z(z)=1 $. Let $P=M \rtimes_{\alpha_z} \mathbb{Z}/2\mathbb{Z} $ be the crossed product of $M$ by $ \alpha_z$; $P$ is the von Neumann algebra generated by $M$ and a unitary $\lambda $ satisfying $\lambda^2=1 $ and $\lambda x \lambda^{-1}=\alpha_z(x)$, $\forall x \in M $. 
Each $ \alpha_g$ can be extended to an automorphism $ \tilde{\alpha}_g$ of $P$ by setting $$ \tilde{\alpha}_g(\lambda)=\epsilon_z(g) \lambda .$$ Similarly, $\rho $ can be extended to an endomorphism $\tilde{\rho} $ of $P$ by setting $$\tilde{\rho} (\lambda) =\lambda.$$ Then $g \mapsto \tilde{\alpha}_g$ defines an action of $G$ on $P$, and we have $$\tilde{\alpha}_g \circ \tilde{\rho}=\tilde{\rho} \circ \tilde{\alpha}_{-g}, \ \forall g \in G .$$ Moreover, $$ [\tilde{\alpha}_g]=[\tilde{\alpha}_h] \text{ iff } g-h \in \{ 0,z\} $$ and if $G_0 \subset G$ is a set of representative elements for the $\{0,z\} $-cosets of $G$, we have $$[\tilde{\rho}^2] = [id]\bigoplus_{g \in G_0} 2[\tilde{\alpha}_{g} \tilde{\rho}] .$$ We will refer to an orbifold category of this form as $(\mathcal{C}_{G,A,\epsilon})_z $; it is a $\mathbb{Z}/2\mathbb{Z}$-de-equivariantization of $\mathcal{C}_{G,A,\epsilon} $. \begin{example} \begin{enumerate} \item The principal even part of the $2D2$ subfactor is a $\mathbb{Z} /2\mathbb{Z} $-de-equivariantization of a generalized Haagerup category for $G=\mathbb{Z}/4\mathbb{Z}$ \cite{IzumiNote}. \item The even parts of the Asaeda-Haagerup subfactor are Morita equivalent to a $\mathbb{Z} /2\mathbb{Z} $-de-equivariantization of a generalized Haagerup category for $G=\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $ \cite{AHcat}. \end{enumerate} \end{example} \section{Outline of method} We will compute quantum doubles of several examples of generalized Haagerup categories and their equivariantizations and de-equivariantizations. The tube algebras of the regular (non-orbifold) categories, the equivariantized categories, and the de-equivariantized categories all have different formulas for bases and multiplicative structure constants. But the general method to compute the quantum double is similar, and follows the approach of \cite{MR1782145, MR1832764}. 
In each case we begin by writing down a basis for the tube algebra and formulas for the multiplicative structure constants with respect to that basis. Then to compute the quantum double, we need to find matrix units for the tube algebra. For simplicity we will discuss the non-orbifold case. Since the tube algebra is large, we first consider the group-like part $ \mathcal{A}_G$ of the tube algebra: the span of elements of the form $(\alpha_g \; \xi|X|\xi \; \alpha_h )$. This subalgebra can be analyzed using the group structure. Formulas for the matrix units can be expressed in terms of characters of $G$. (For the orbifold cases it is a little more complicated.) Next, for each $g $ and $h$ in $G$ we consider the subspaces $\mathcal{A}_{\alpha_g,\alpha_h \rho } $ spanned by elements of the form $(\alpha_g \; \xi|X|\xi \; \alpha_h \rho )$. If $v \in \mathcal{A}_{\alpha_g,{\alpha}_h \rho } $ is a partial isometry such that $vv^* $ is a minimal projection $p$ in $\mathcal{A}_{\alpha_g} $, then $v^*v $ is a minimal projection in $\mathcal{A}_{{\alpha}_h \rho} $; these two minimal projections are equivalent in the tube algebra and have the same central cover. In this way we can find the minimal central projections in $\mathcal{A}_{{\alpha}_h \rho} $ whose central cover in the tube algebra is also the central cover of a minimal projection in $\mathcal{A}_{\alpha_g} $. Once we have all of the minimal central projections in the various $\mathcal{A}_{\alpha_g} $ and the corresponding minimal central projections in $\mathcal{A}_{\alpha_h \rho} $, we can combine those which are equivalent in the tube algebra, and we then have all of the minimal central projections in the tube algebra which are not orthogonal to $\mathcal{A}_G $. 
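The character formulas alluded to above can be illustrated in a stripped-down toy model (ours): if one looks only at the purely group-like basis elements $(g\;h|1|h\;g)$ and the cocycle is trivial, each corner $\mathcal{A}_{\alpha_g}$ is just a copy of the group algebra $\mathbb{C}[G]$ under convolution, and the character projections give its minimal idempotents:

```python
import cmath

# Toy model (ours): with trivial cocycle, the elements (g h|1|h g) multiply by
#   (g h|1|h g)(g h'|1|h' g) = (g, h+h'|1|h+h', g),
# so each corner A_{alpha_g} is the group algebra C[G] under convolution.
# Its minimal idempotents are e_{g,chi} = (1/|G|) sum_h conj(chi(h)) (g h|1|h g).
n = 4  # G = Z/4

def chi(j, h):  # the characters of Z/n
    return cmath.exp(2j * cmath.pi * j * h / n)

def conv(a, b):  # convolution of coefficient vectors = product in C[G]
    return [sum(a[h] * b[(m - h) % n] for h in range(n)) for m in range(n)]

idem = [[chi(j, h).conjugate() / n for h in range(n)] for j in range(n)]

# The e_{g,chi} are mutually orthogonal idempotents, one per character.
for j in range(n):
    for k in range(n):
        prod = conv(idem[j], idem[k])
        target = idem[j] if j == k else [0] * n
        assert all(abs(prod[h] - target[h]) < 1e-12 for h in range(n))
print("each corner A_g splits into", n, "minimal idempotents")
```

In the actual tube algebra the cocycle $\epsilon$ twists this convolution and the $\rho$-columns must be adjoined, but the character decomposition above is the organizing principle.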
A significant shortcut in this step is that we only need consider one $g$ out of each pair $\{g,-g \} $ and only one $h $ out of each class $\{ h+2k \}_{k \in G} $, since we have equivalences in the tube algebra between $1_{\alpha_g} $ and $1_{\alpha_{-g}} $ and between $1_{{\alpha}_h \rho } $ and $1_{{\alpha}_{h+2k} \rho} $ for all $k$, implemented by the unitaries $( \alpha_g \; \rho | 1 | \rho \; \alpha_{-g} )$ and $(\alpha_h \rho \; \alpha_k| 1| \alpha_k\; \alpha_{h-2k} \rho) $, respectively. Once we have all of the minimal central projections which are not orthogonal to $\mathcal{A}_G $, we look at the part of the tube algebra which is orthogonal to $\mathcal{A}_G $. The main tool here is diagonalization of the $T$-matrix. For each $h$ (again, up to equivalence by addition by $2k$ for some $k$), we write down the matrix of $\textbf{t}_{{\alpha}_h \rho} $, the component of $\textbf{t} $ in $ \mathcal{A}_{{\alpha}_h \rho}$, with respect to our chosen basis $\mathcal{B}_{{\alpha}_h \rho} $ of $ \mathcal{A}_{{\alpha}_h \rho}$, and find its eigenvalues. Here it is easier to first figure out what the eigenvalues are through numerical calculations, and then verify directly that $\textbf{t}_{{\alpha}_h \rho} $ satisfies the appropriate minimal polynomial. Then we can compute the projections onto the eigenspaces of the $T$-eigenvalues. The linear algebra at this stage tends to get more difficult, even for a computer, since the $T$-eigenvalues seem to be $(|G|^2+4)^{th} $ roots of unity, as opposed to the $|G|^{th} $ roots of unity which appear in $\mathcal{A}_G $. The coefficients of the projections with respect to the basis $\mathcal{B}_{{\alpha}_h \rho} $ may therefore lie in a complicated number field. 
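The observed pattern can at least be cross-checked against the theorems stated in the introduction (a numerical sanity check of ours): for $|G|=4$ one has $|G|^2+4=20$, and in the Asaeda-Haagerup case $|G|=8$ one has $|G|^2+4=68$:

```python
import cmath

# Sanity check (ours): the T-eigenvalues listed in the theorems are
# (|G|^2 + 4)-th roots of unity.
def is_root_of_unity(z, m, tol=1e-9):
    return abs(z ** m - 1) < tol

# |G| = 4, |G|^2 + 4 = 20: eigenvalues exp(+-3 l^2 pi i / 10), l = 1..4,
# together with {+-1, +-i}, which are 4th (hence 20th) roots of unity.
for l in range(1, 5):
    assert is_root_of_unity(cmath.exp(3j * l * l * cmath.pi / 10), 20)

# Asaeda-Haagerup: |G| = 8, |G|^2 + 4 = 68: eigenvalues
# exp(6 l^2 pi i / 17), l = 1..8, together with {+-1, +-i} (4 divides 68).
for l in range(1, 9):
    assert is_root_of_unity(cmath.exp(6j * l * l * cmath.pi / 17), 68)
print("ok")
```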
A significant challenge is deciding when the projections onto the $\textbf{t}_{{\alpha}_h \rho} $-eigenspaces are minimal in $\mathcal{A}_{{\alpha}_h \rho} $ and when minimal central projections in different $\mathcal{A}_{{\alpha}_h \rho} $ with the same $T$-eigenvalues are equivalent in the tube algebra. Sometimes we can figure out the structure of the tube algebra by counting dimensions of intertwiner spaces, but other times more care and creativity are needed. Once we have computed all of the minimal central projections, we can compute the $S$-matrix using (\ref{sform1}) or (\ref{sform2}). Sometimes it will be difficult to find nice expressions for those projections which are orthogonal to $\mathcal{A}_G $, which precludes the possibility of using (\ref{sform1}) or (\ref{sform2}) directly. If the multiplicity of the $T$-eigenvalue of such a projection $P_i$ is $1$, then $P_i$ is a projection onto an eigenspace of $\textbf{t} $, so we can express $P_i$ as a linear combination of powers of $\textbf{t} $. Then we can first calculate $$\phi_{\Delta}(S_0(\textbf{t}^k)\textbf{t}^l)$$ for exponents $k$ and $l$, and use this data to find the $S$-matrix entries for pairs of projections $P_i$ and $P_j$. The advantage here is that the required multiplications in the tube algebra in (\ref{sform1}) can now be carried out in a simpler number field. We use Mathematica to perform arithmetic in the tube algebra. 
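As an aside, the trick of writing a multiplicity-one spectral projection as a linear combination of powers of $\textbf{t}$ is just Lagrange interpolation, as the following generic sketch (ours; a toy matrix, not a tube algebra element) shows:

```python
from fractions import Fraction as F

# Sketch (ours): if t is diagonalizable with eigenvalue list eigs and the
# i-th eigenvalue has multiplicity one, the spectral projection is the
# Lagrange polynomial  P_i = prod_{j != i} (t - eigs[j]) / (eigs[i] - eigs[j]),
# i.e. a linear combination of powers of t.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def spectral_projection(t, eigs, i):
    n = len(t)
    P = [[F(int(r == c)) for c in range(n)] for r in range(n)]  # identity
    for j, lam in enumerate(eigs):
        if j == i:
            continue
        shifted = [[t[r][c] - lam * (r == c) for c in range(n)]
                   for r in range(n)]
        P = [[x / (eigs[i] - lam) for x in row] for row in matmul(P, shifted)]
    return P

# Toy 2x2 example with simple eigenvalues 1 and 2:
t = [[F(1), F(1)], [F(0), F(2)]]
P0 = spectral_projection(t, [F(1), F(2)], 0)
assert matmul(P0, P0) == P0   # idempotent
assert matmul(t, P0) == P0    # t acts as the eigenvalue 1 on its range
```

The same interpolation, applied to $\textbf{t}$ acting on the center of the tube algebra, produces the multiplicity-one projections $P_i$ discussed above.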
The input is the set of structure constants for each $g$ and $h$ (representing the equivalence classes mentioned above) for multiplication in $\mathcal{A}_{\alpha_g} $, multiplication in $\mathcal{A}_{{\alpha}_h \rho} $, multiplication on $\mathcal{A}_{\alpha_g} \times \mathcal{A}_{\alpha_g,{\alpha}_h \rho} $, involution on $\mathcal{A}_{\alpha_g,{\alpha}_h \rho} $, multiplication on $\mathcal{A}_{\alpha_g,{\alpha}_h \rho} \times \mathcal{A}_{{\alpha}_h \rho,\alpha_g} $, and multiplication on $\mathcal{A}_{{\alpha}_h \rho, \alpha_g} \times \mathcal{A}_{\alpha_g,{\alpha}_h \rho} $. This is enough data to follow the outlined steps. However, we emphasize that a large portion of the calculations can be carried out by hand. In particular, formulas for the minimal central projections of the tube algebra which are not orthogonal to $\mathcal{A}_G $ can be computed by hand, at least for the small rank examples we consider here. The corresponding parts of the $S$-matrix can then also be computed. The part of the calculation which requires a computer is the diagonalization of the $T$-matrix on the orthogonal part of the tube algebra, and the calculation of the corresponding block of the $S$-matrix. Finally, we note that it would be desirable to be able to describe the structure of the tube algebra of a generalized Haagerup category for an arbitrary group in terms of properties of the cocycle $\epsilon $. We do not resolve this problem in this paper. \section{Quantum doubles of generalized Haagerup categories} A description of the quantum double of a generalized Haagerup category associated to a group of odd order was given in \cite{MR1832764}. Further examples of such categories were computed and the corresponding modular data was simplified and analyzed in \cite{MR2837122}. In this section we give the multiplication formulas for the tube algebra in the general case, which is somewhat complicated by the presence of the cocycle $\epsilon $.
We then compute the modular data for categories associated to the two order $4$ groups. \subsection{The tube algebra of a generalized Haagerup category} Let $\mathcal{C}_{G,A,\epsilon} $ be a generalized Haagerup category. Let $$\Delta=\{ {\alpha}_g \}_{g \in G} \cup \{ \alpha_g \rho \}_{g \in G} .$$ We will use notation similar to that in \cite{MR1832764}: the group $G$ will be represented additively, and the objects $ \alpha_g$ and $\alpha_g \rho $ will be denoted $g $ and ${}_g \rho $ inside the parentheses for a tube algebra element. We introduce a basis for Tube $\Delta $ as follows. Let $$\mathcal{B}_G=\{ (g\;k | 1 |k \; g ) \}_{g,k \in G} \cup \{ (g\;{}_k \rho| 1 |{}_k \rho \; -g ) \}_{g,k \in G} ,$$ $$\mathcal{B}_{G,{}_G\rho} =\{ (g \; {}_k \rho| T_{2k+g-h} | {}_k \rho \; {}_h \rho) \}_{ g,h,k \in G},$$ $$\mathcal{B}_{{}_G\rho,G} =\{ ({}_h \rho \; {}_k \rho |T_{h-g}^* | {}_k \rho \; g) \}_{ g,h,k \in G},$$ $$ \mathcal{B}_{{}_G\rho} =\{ ({}_{h_1} \rho \; {}_{k} \rho |T_{k-h_2+g}T^*_{h_1-k+g} |{}_{k} \rho \; {}_{h_2} \rho ) \}_{ h_1,h_2,k,g \in G }$$ $$\cup \{ ({}_{h} \rho \; {}_{k} \rho |SS^* |{}_{k} \rho \; {}_{2k- h} \rho ) \}_{h, k \in G} \cup \{ ( {}_{h} \rho \; k |1| k \; {}_{h-2k} \rho )\}_{ h, k \in G} .$$ Then $$\mathcal{B}= \mathcal{B}_{G} \cup \mathcal{B}_{G,{}_G\rho} \cup \mathcal{B}_{{}_G\rho, G} \cup \mathcal{B}_{{}_G\rho}$$ is a basis for Tube $ \Delta$. We will write $\mathcal{A}_G $ for the span of $\mathcal{B}_G $, $\mathcal{A}_{G,{}_{G} \rho} $ for the span of $\mathcal{B}_{G,{}_G \rho}$, $\mathcal{A}_g $ for the span of elements of the tube algebra of the form $(g \; h |1| h \; g) $, and similarly for other subsets and elements of $\Delta $. We can compute the multiplication and involution for Tube $\Delta $ in terms of the basis $\mathcal{B} $, using (\ref{tubemult})-(\ref{tubeinv}) and (\ref{e11})-(\ref{e14}). We first collect some basic facts.
Note that $$ (\alpha_g,\alpha_h)=(\alpha_g \rho, \alpha_h \rho)=\delta_{g,h} \mathbb{C}1, $$ $$ (\alpha_g,\alpha_h \rho^2 )=\delta_{g,h} \mathbb{C}S, \quad (\alpha_g \rho,\alpha_h \rho^2 )=\mathbb{C}T_{g+h}.$$ The tube algebra multiplication formula (\ref{tubemult}) requires summing over $$\{\nu \in \Delta_0 : \nu \prec \zeta \zeta' \} $$ and isometries in each $(\nu, \zeta \zeta') $ satisfying the Cuntz algebra relations. For generalized Haagerup categories, all of the nonzero spaces $(\nu, \zeta \zeta') $ are $1$-dimensional and spanned by either $1$ or a Cuntz algebra generator; we take $1$ or the appropriate Cuntz algebra generator in each case as our canonical isometries for computing the tube algebra multiplication. We will now give formulas for multiplication and involution in the tube algebra in terms of the basis $\mathcal{B} $. We omit a few cases that are not needed in any following computations (namely, multiplication from $\mathcal{A}_{G,{}_{G}\rho} \times \mathcal{A}_{{}_G \rho} $ to $\mathcal{A}_{G,{}_{G}\rho}$ and the involution on $\mathcal{A}_{{}_G \rho} $). The following useful Cuntz algebra calculations are immediate from (\ref{e11})-(\ref{e14}). \begin{lemma}\label{cuntzrel} We have the following identities in the Cuntz algebra.
\begin{equation*}S^*\rho(T_a)S =0, \quad S^*\rho(T_a)T^*_b\rho(S) =\delta_{a,-b} \epsilon_a(-a) \frac{1}{d} \end{equation*} \begin{equation*}S^*\rho(T_aT^*_b) SS^*\rho(S)=\delta_{a,b }\epsilon_{a}(-a)\epsilon_{b}(-b)\frac{1}{d^2} , \quad S^*\rho(SS^*)SS^*\rho(S) =\frac{1}{d^3} \end{equation*} \begin{equation*} S^*\rho(SS^*)T_aT^*_b\rho(S)= \delta_{a,b} \frac{1}{d^2} , \quad T_a\rho(SS^*)SS^*\rho(T_b)= \epsilon_b(-b) \frac{1}{d^2}T_aT^*_{-b} \end{equation*} \begin{equation*}T^*_a\rho(SS^*)T_bT^*_c\rho(T_e)= \epsilon_{e}(-e) \frac{1}{d}A_{-e}(c+e,b-c)T_aT^*_{b-c-e} \end{equation*} \begin{equation*}T^*_a\rho(T_b)T_c = \epsilon_b(-b) A_{-b}(b+a,b+c)T_{a+b+c} \end{equation*} \begin{multline*} S^* \rho(T_aT^*_b)T_cT^*_e\rho(S) =\delta_{ b+c-a,e} \epsilon_{a}(-a) \epsilon_{b}(-b) \frac{1}{d} A_{-b}(b-a,b+c) \end{multline*} \begin{multline*} T_a^*\rho(T_bT^*_c)T_eT^*_f\rho(T_g) =\epsilon_b(-b)\epsilon_c(-c)\epsilon_g(-g)[ \delta_{a,-b}\delta_{c,-e}\delta_{f,-g} SS^* \\+ \sum_{j\in G} \limits A_{-b}(a+b,b-c+j)A_{-g}(f+g,e-f+j)A_{-c}(j,c+e) \\ T_{j+b-c+a} T^*_{j+e-f-g} ] \end{multline*} \begin{multline*} T^*_a\rho(T_b)T^*_c\rho(T_e) = \epsilon_b(-b)\epsilon_e(-e)[\delta_{a,-b}\delta_{c,-e}SS^*\\ +\sum_{j} \limits A_{-b}(a+b,b+c+j)A_{-e}(c+e,j)T_{a+b+c+j}T^*_{j-e}] . \end{multline*} \end{lemma} We can calculate the tube algebra multiplication rules using the formulas in Lemma \ref{cuntzrel}. \begin{lemma} The adjoint operation on $ \mathcal{B}_G$ and $\mathcal{B}_{G,{}_G\rho}$ is as follows.
\begin{enumerate} \item $$(g \; k | 1 | k \; g )^* =(g \; -k | 1 | -k \; g ) $$ \item $$(g\;{}_k \rho |1 |{}_k \rho \; -g )^*=(-g \;{}_k \rho | 1 |{}_k \rho \; g ) $$ \item $$ (g\; {}_{k} \rho| T_{2k+g-h}| {}_k \rho \; {}_{h} \rho)^* = \epsilon_{-k-g+h}(g-h+2k)({}_{h} \rho \; {}_{k} \rho |T_{h-g}^*| {}_{k} \rho \; g) $$ \end{enumerate} \end{lemma} \begin{lemma}\label{Gmult} Multiplication among elements of $ \mathcal{B}_{G} $ is as follows. \begin{enumerate} \item $$(g\;k_1 | 1 |k_1 \; g )(g\;k_2 | 1 |k_2 \; g )=(g\; k_1+ k_2 | 1 | k_1+k_2 \; g )$$ \item $$(g \;k_1 | 1 |k_1 \; g )(g \;{}_{k_2} \rho | 1 |{}_{k_2} \rho \; -g)=(g\;{}_{k_1+ k_2 } \rho | 1 |{}_{k_1+ k_2} \rho \; -g)$$ \item $$(g \;{}_{k_1} \rho | 1 |{}_{k_1} \rho \; -g)(-g \;k_2 | 1 |k_2 \; -g )= (g \;{}_{k_1- k_2} \rho | 1|{}_{k_1- k_2} \rho \; -g)$$ \item $$(g\;{}_{k_1} \rho | 1 |{}_{k_1} \rho \; -g) (-g \; {}_{k_2} \rho |1 |{}_{k_2} \rho \; g )= (g\; k_1- k_2 | 1 | k_1- k_2 \; g ) $$ $$ +\delta_{2g,0} \sum_{r \in G} \epsilon_{g}(r+k_1-k_2)(g \; {}_{r} \rho | 1| {}_{r} \rho \; g )$$ \end{enumerate} \end{lemma} \begin{lemma}\label{GGGrhomult} Multiplication on $\mathcal{B}_G \times \mathcal{B}_{G,{}_G\rho} $ is as follows. \begin{enumerate} \item $(g \; k_1 | 1| k_1 \; g) \cdot (g \; {}_{k_2} \rho | T_{g+2k_2-h} | {}_{k_2} \rho \; {}_h \rho )= $ $$ \epsilon_{k_1}(g-h+2k_2) (g \; {}_{k_1+ k_2} \rho | T_{g+2k_1+2k_2-h} | {}_{k_1+ k_2} \rho \; {}_h \rho ) $$ \item $(g_1 \; {}_{k_1} \rho | 1 | {}_{k_1} \rho \; g_2) \cdot (g_2 \; {}_{k_2} \rho | T_{g_2+2k_2-h} | {}_{k_2} \rho \; {}_h \rho )=$ $$\epsilon_{k_1-g_2-2k_2+h}(g_2+2k_2-h) \sum_{r \in G} \limits \epsilon_{g_1}(r+k_1-k_2) $$ $$A_{2k_1-g_2-2k_2+h}(r-k_1+k_2+g_2-h,r-k_1+k_2+g_2-h+2g_1)$$ $$ (g_1 \; {}_{r} \rho | T_{2r+2g_1+g_2-h} | {}_{r} \rho \; {}_h \rho )$$ \end{enumerate} \end{lemma} \begin{lemma}\label{GGrhoGrhoGmult} Multiplication on $\mathcal{B}_{G,{}_G\rho} \times \mathcal{B}_{{}_G\rho,G} $ is as follows.
$(g_1 \; {}_{k_1} \rho| T_{2k_1+g_1-h} | {}_{k_1} \rho \; {}_{h} \rho) \cdot ({}_{h} \rho \; {}_{k_2} \rho |T_{h-g_2}^*| {}_{k_2} \rho \; g_2)=$ $$\displaystyle{ \epsilon_{k_1-h+g_2}(h-g_2)} [\delta_{g_1-g_2,0}( g_1 \; k_1- k_2 | 1 | k_1- k_2\; g_2 )$$ $$+ \delta_{g_1+g_2,0} \sum_{r \in G} \epsilon_{g_1}(r+k_1-k_2) $$ $$ A_{2k_1-h+g_2} (r-k_1-k_2+h-g_2,g_1-g_2)(g_1 \; {}_{r} \rho | 1 | {}_{ r } \rho \; g_2 )]$$ \end{lemma} \begin{lemma}\label{GrhoGGGrhomult} Multiplication on $\mathcal{B}_{{}_G\rho,G} \times \mathcal{B}_{G,{}_G\rho} $ is as follows. $({}_{h_1} \rho \; {}_{k_1} \rho |T_{h_1-g}^*| {}_{k_1} \rho \; g)\cdot (g \; {}_{k_2} \rho| T_{2k_2+g-h_2} | {}_{k_2} \rho \; {}_{h_2} \rho) =$ $$ \epsilon_{k_1-2k_2-g+h_2}(2k_2+g-h_2)[ \delta_{2k_2-2k_1+h_1-h_2,0} \frac{1}{d} ({}_{h_1} \rho \; k_1- k_2 |1 | k_1- k_2 \; {}_{h_2} \rho)$$ $$+\delta_{2k_1-2k_2-2g+h_1+h_2,0} \epsilon_{-g}(g+h_1)$$ $$({}_{h_1} \rho \; {}_{g+h_1+k_2-k_1} \rho| SS^* | {}_{ g+h_1+k_2-k_1 } \rho \; {}_{h_2} \rho)$$ $$+ \sum_{r,j \in G} \limits \epsilon_{h_1-r-k_1+k_2}(r+k_1-k_2)$$ $$ A_{2k_1-2k_2-g+h_2}(r-k_1+k_2+g-h_2,2k_2-2k_1+h_1-h_2+j) $$ $$A_{2h_1-r-k_1+k_2}(-h_1-g+r+k_1-k_2,j)$$ $$({}_{h_1} \rho \; {}_{r} \rho|T_{j+r-k_1+k_2+h_1-h_2} T^*_{j+2h_1-r-k_1+k_2} | {}_{r} \rho \; {}_{h_2} \rho)]$$ \end{lemma} \begin{lemma}\label{Grhomult} Multiplication among elements of $ \mathcal{B}_{{}_G \rho} $ is given as follows.
\begin{enumerate} \item $$({}_{h_1} \rho \; k_1 |1|k_1 \; {}_{h_2} \rho ) \cdot ({}_{h_2} \rho \; k_2 | 1|k_2 \;{}_{h_3} \rho ) =({}_{h_1} \rho \; k_1+k_2 |1| k_1+ k_2 \; {}_{h_3} \rho)$$ \item $$( {}_{h_1} \rho \; {}_{k_1} \rho | SS^* |{}_{k_1} \rho \; {}_{h_2} \rho ) \cdot ( {}_{h_2} \rho \; k_2 | 1 | k_2 \; {}_{h_3} \rho ) =( {}_{h_1} \rho \; {}_{k_1-k_2} \rho| SS^*| {}_{k_1-k_2} \rho \; {}_{h_3} \rho) $$ \item $$({}_{h_1} \rho \; k_1 | 1 | k_1 \; {}_{h_2} \rho ) \cdot ({}_{h_2} \rho \; {}_{k_2} \rho | SS^* |{}_{k_2} \rho \; {}_{h_3} \rho ) =( {}_{h_1} \rho \; {}_{k_1+ k_2} \rho|SS^* | {}_{k_1+ k_2} \rho \; {}_{h_3} \rho) $$ \item $( {}_{h_1} \rho \; k_1 | 1 |k_1 \; {}_{h_2} \rho ) \cdot ({}_{h_2} \rho \; {}_{k_2} \rho |T_{k_2-h_3+g_2}T^*_{h_2-k_2+g_2} |{}_{k_2} \rho \; {}_{h_3} \rho ) $ $$=\epsilon_{k_1}(k_2-h_3+g_2) \epsilon_{k_1}(h_2-k_2+g_2) $$ $$({}_{h_1} \rho \; {}_{k_1+ k_2} \rho |T_{2k_1+k_2-h_3+g_2}T^*_{2k_1+h_2-k_2+g_2} |{}_{ k_1+ k_2 } \rho \; {}_{h_3} \rho ) $$ \item $({}_{h_1} \rho \; {}_{k_1} \rho |T_{k_1-h_2+g_1}T^*_{h_1-k_1+g_1} |{}_{k_1} \rho \; {}_{h_2} \rho ) \cdot ( {}_{h_2} \rho \; k_2 | 1 |k_2 \; {}_{h_3} \rho ) $ $$=({}_{h_1} \rho \; {}_{k_1-k_2} \rho | T_{k_1-h_2+g_1}T^*_{h_1-k_1+g_1} |{}_{k_1- k_2} \rho \; {}_{h_3} \rho ) $$ \item $( {}_{h_1} \rho \; {}_{k_1} \rho | SS^* |{}_{k_1} \rho \; {}_{h_2} \rho ) \cdot ( {}_{h_2} \rho \; {}_{k_2} \rho |SS^* |{}_{k_2} \rho \; {}_{h_3} \rho ) $ $$= \frac{1}{d^3}({}_{h_1} \rho \; k_1- k_2 |1 | k_1-k_2 \; {}_{h_3} \rho ) $$ $$ +\sum_{r \in G} \limits \frac{1}{d^2} \epsilon_{ h_1-r-k_1+k_2}(r+k_1-k_2) ({}_{h_1} \rho \; {}_{r} \rho | T_{r+k_1-k_2}T^*_{2h_1-r-k_1+k_2} |{}_{r} \rho\; {}_{h_3} \rho ) $$ \item $ ( {}_{h_1} \rho \; {}_{k_1} \rho |T_{k_1-h_2+g_1}T^*_{h_1-k_1+g_1} |{}_{k_1} \rho \; {}_{h_2} \rho ) \cdot ({}_{h_2} \rho \; {}_{k_2} \rho |SS^* |{}_{k_2} \rho \; {}_{2k_2- h_2} \rho )$ $$=[\delta_{2k_1-h_1-h_2,0} \frac{1}{d^2} ({}_{h_1} \rho \; k_1- k_2 | 1 | k_1- k_2 \; {}_{h_2} \rho )$$ $$ +\sum_{r \in
G} \limits \epsilon_{h_1-r-k_1+k_2}(r+k_1-k_2) \frac{1}{d}$$ $$A_{2h_1-r-k_1+k_2}(g_1-h_1+r-k_2,2k_1-h_1-h_2)$$ $$({}_{h_1} \rho \; {}_{r} \rho | T_{r+k_1-k_2}T^*_{-r+k_1+k_2+h_1-h_2} | {}_{r} \rho \; {}_{h_2} \rho )] $$ \item $ ({}_{h_1} \rho \; {}_{k_1} \rho |SS^* |{}_{k_1} \rho \; {}_{h_2} \rho ) \cdot ( {}_{h_2} \rho \; {}_{k_2} \rho |T_{k_2-h_3+g_1}T^*_{h_2-k_2+g_1} |{}_{k_2} \rho \; {}_{h_3} \rho ) $ $$=\epsilon_{k_1-k_2+h_3-g_1}(k_2-h_3+g_1) \epsilon_{k_1-h_2+k_2-g_1}(h_2-k_2+g_1) $$ $$[\delta_{2k_2-h_2-h_3,0}\frac{1}{d^2} ({}_{h_1} \rho \; k_1- k_2 | 1 | k_1-k_2 \; {}_{h_3} \rho ) $$ $$ +\sum_{r \in G} \limits \epsilon_{h_1-r-k_1+k_2}(r+k_1-k_2) \frac{1}{d}$$ $$A_{2k_1-k_2+h_3-g_1}(r-k_1-h_3+g_1+k_2,2k_2-h_3-h_2) $$ $$ ({}_{h_1} \rho \; {}_{r} \rho | T_{r+k_1+k_2-h_2-h_3} T^*_{-r+2h_1-k_1+k_2} |{}_{r} \rho\; {}_{h_3} \rho ) ]$$ \item $$( {}_{h_1} \rho \; {}_{k_1} \rho |T_{k_1-h_2+g_1}T^*_{h_1-k_1+g_1} |{}_{k_1} \rho \; {}_{h_2} \rho ) $$ $$ \cdot ( {}_{h_2} \rho \; {}_{k_2} \rho |T_{k_2-h_3+g_2}T^*_{h_2-k_2+g_2} |{}_{k_2} \rho \; {}_{h_3} \rho ) $$ $$=\epsilon_{k_1-h_2+k_2-g_2}(h_2-k_2+g_2) \epsilon_{k_1-k_2+h_3-g_2}(k_2-h_3+g_2) $$ $$[\delta_{2k_1-2k_2+h_3-h_1,0} \frac{1}{d} $$ $$A_{2k_1-h_2+k_2-g_2}(h_2+h_3-2k_2,g_1+g_2-k_1-k_2+h_2-h_2) $$ $$( {}_{h_1} \rho \; k_1-k_2 | 1 | k_1- k_2 \; {}_{h_3} \rho ) $$ $$+\delta_{k_1+k_2-g_1-g_2,0} \delta_{k_1-k_2+g_1-g_2+h_3-h_1,0} \epsilon_{-k_1+g_1}(h_1+k_1-g_1) $$ $$( {}_{h_1} \rho \; {}_{k_2+h_1-g_1} \rho |SS^* | {}_{k_2+h_1-g_1} \rho \; {}_{h_3} \rho ) $$ $$+\sum_{j,r \in G} \limits \epsilon_{h_1-r-k_1+k_2}(r+k_1-k_2)$$ $$A_{2k_1-k_2+h_3-g_2}(r-k_1-h_3+g_2,j+2k_2-h_3-h_2) $$ $$A_{2h_1-r-k_1+k_2} (g_1-h_1-k_2+r,j+2k_1-h_1-h_2) $$ $$A_{2k_1-h_2+k_2-g_2}(j,-k_1-k_2+g_1+g_2) $$ $$( {}_{h_1} \rho \; {}_{r} \rho | T_{j+k_1+k_2-h_3-h_2+r} T^*_{j-r+k_1+k_2-h_2+h_1} | {}_{r} \rho \; {}_{h_3} \rho )]$$ \end{enumerate} \end{lemma} Finally, we compute the action of $S_0$ on the tube algebra in terms of the basis $\mathcal{B} $.
\begin{lemma} The action of $S_0$ on $\mathcal{B} $ is given as follows: \begin{enumerate} \item $$S_0 [(g \; k| 1 | k \; g ) ] =(-k \; g | 1 | g \; -k ) $$ \item $$S_0[(g \; {}_k \rho | 1|{}_k \rho \; g ) ] =\frac{1}{d}({}_k \rho \; g | 1 | g \; {}_k \rho) $$ \item $$S_0[( {}_{h} \rho \; k|1 | k \; {}_{h} \rho ) ] =d (-k \; {}_{h} \rho |1|{}_{h} \rho \; -k )$$ \item $S_0[( {}_{h} \rho \; {}_{k} \rho |SS^* |{}_{k} \rho \; {}_{h} \rho )] $ $$=\frac{1}{d}[( {}_{k} \rho \; {}_{h} \rho |SS^*|{}_{h} \rho \; {}_{k} \rho )+ \sum_{j \in G} \limits ( {}_{k} \rho \; {}_{h} \rho |T_jT^*_j|{}_{h} \rho \; {}_{k} \rho )]$$ \item $S_0[( {}_{h} \rho \; {}_{k} \rho |T_{k-h+g}T^*_{h-k+g} |{}_{k} \rho \; {}_{h} \rho ) ]=$ $$\epsilon_{-k}(k-h+g)\epsilon_{-k}(h-k+g)\epsilon_{-k-h+g}(k+h-g) \epsilon_{h-3k+g}(-h+3k-g) $$ $$[\delta_{2k-2h,0} ( {}_{k} \rho \; {}_{h} \rho | SS^* |{}_{h} \rho \; {}_{k} \rho )$$ $$+ \sum_{j \in G} \limits A_{-h+3k-g}( 2h-2k,j ) ( {}_{k} \rho \; {}_{h} \rho | T_{j+k+h-g} T^*_{j-h+3k-g}|{}_{h} \rho \; {}_{k} \rho )]$$ \end{enumerate} \end{lemma} We now determine the structure of $\mathcal{A}_G$. For $g \in G $ and $ \tau \in \hat{G}$, let $$p(g,\tau)=\frac{1}{|G|} \sum_{k \in G } \limits \tau(k) (g \; k | 1 | k \; g ) , $$ and let $$E(g,\tau)=\frac{1}{|G|} \sum_{k \in G } \limits \tau(k) (g \; {}_k \rho | 1 | {}_k \rho \; -g ) .$$ \begin{lemma} \begin{enumerate} \item The $p(g,\tau) $ are mutually orthogonal projections which sum to the identity of $\mathcal{A}_G $. \item If $2g \neq 0 $, then $$E(g,\tau)E(g,\tau')^*=\delta_{\tau,\tau'}p(g,\tau) .$$ \item If $2g=0 $ and $\epsilon_g(\cdot)$ is a character (which is always the case if all of the $A_g(h,k) $ are nonzero) then $$E(g,\tau)E(g,\tau')^*=\delta_{\tau,\tau'}[ p(g,\tau) + \delta_{\tau,\epsilon_g} n E(g,\tau) ] .$$ \end{enumerate} \end{lemma} \begin{proof} We prove (2) and (3).
We have $$E(g,\tau)E(g,\tau')^*$$ $$=\left(\frac{1}{|G|} \sum_{k \in G } \limits \tau(k) (g \; {}_k \rho | 1 | {}_k \rho \; -g ) \right) \left( \frac{1}{|G|} \sum_{l \in G } \limits \overline{\tau'(l) } (-g \; {}_l \rho | 1 | {}_l \rho \; g ) \right)$$ $$ =\frac{1}{|G|^2} \sum_{k,l \in G} \limits \tau(k)\overline{\tau'(l)}[ (g\; k-l | S^* \alpha_k \rho(1)1\alpha_g(S)|k-l \; g ) $$ $$+ \sum_{r \in G} \limits (g\; {}_r \rho | T^*_{r+k-l} \alpha_k \rho(1)1\alpha_g(T_{r+k-l})| {}_r \rho \; g ) ] $$ $$ =\frac{1}{|G|^2} \sum_{k,m \in G} \limits \tau(k)\overline{\tau'(k-m)} [ (g\; m | 1 |m \; g ) + \sum_{r \in G} \limits \epsilon_g(r+m) (g\; {}_r \rho | T^*_{r+m} T_{ 2g+r+m}| {}_r \rho \; g ) ] $$ $$= \left( \frac{1}{|G|} \sum_{k\in G} \limits \tau(k) \overline{\tau'(k)} \right)$$ $$\left[ \left( \frac{1}{|G|} \sum_{m \in G} \limits \tau'(m) (g\; m | 1 |m \; g ) \right) + \delta_{2g,0} \frac{1}{|G|} \sum_{r \in G} \limits (\sum_{m \in G} \limits \tau'(m) \epsilon_g(r+m) ) (g\; {}_r \rho | 1| {}_r \rho \; g ) \right]$$ $$=\delta_{\tau,\tau'}\left[ p(g,\tau) + \delta_{2g,0} \frac{1}{|G|} \sum_{r \in G} \limits \left(\sum_{m \in G} \limits \tau(m) \epsilon_g(r+m) \right) (g\; {}_r \rho | 1| {}_r \rho \; g ) \right].$$ If $2g \neq 0 $, then the second term vanishes and we get (2). If $2g=0 $ and $\epsilon_g $ is a character, then $$\frac{1}{|G|} \sum_{r \in G} \limits (\sum_{m \in G} \limits \tau(m) \epsilon_g(r+m) ) (g\; {}_r \rho | 1| {}_r \rho \; g ) =$$ $$\left( \sum_{m \in G} \limits \tau(m) \epsilon_g(m) \right) \left( \frac{1}{|G|} \sum_{r \in G} \limits \epsilon_g(r) (g\; {}_r \rho | 1| {}_r \rho \; g ) \right)$$ $$=\delta_{\epsilon_g,\tau} |G| E(g,\tau) ,$$ and we get (3).
\end{proof} \begin{corollary} \label{gproj} If $\epsilon_g $ is a character for all $g \in G$ such that $2g=0 $, then the minimal central projections of $\mathcal{A}_G $ are: $$p(g,\tau)+p(-g,\bar{\tau} ), \ g \neq -g \in G$$ $$p(g,\tau)^{\pm} = \frac{1}{2}(p(g,\tau) \pm E(g,\tau) ) , \ g=-g \in G, \ \tau=\bar{\tau} \neq \epsilon_g \in \hat{G} $$ $$p(g,\tau)+p(g,\bar{\tau}), \ g=-g \in G, \ \tau \neq \bar{\tau} \in \hat{G} $$ $$p(g)^0=\frac{n}{\Lambda}(p(g, \tau) +d E(g,\tau) ) \text{ and } $$ $$p(g)^1=\frac{n}{\Lambda}( \frac{\Lambda-n}{n} p(g, \tau) -d E(g,\tau) ), \ g=-g \in G, \tau = \epsilon_g \in \hat{G} .$$ \end{corollary} \begin{remark} Corollary \ref{gproj} implies that the number of irreducible half-braidings whose underlying object contains an invertible object is given by $$\frac{1}{2}(|G|^2+3|G_2|^2), $$ where $G_2$ is the subgroup of order two elements of $G$. \end{remark} Finally, we note a relation among the subalgebras $\mathcal{A}_{{}_h \rho} $. For fixed $h,k \in G_0 $, let $$u_{h,k}=( {}_h \rho \; k| 1 | k \; {}_{h-2k} \rho ) .$$ Then $$u_{h,k}^*=( {}_{h-2k} \rho \; -k| 1 | -k \; {}_{h} \rho ) ,$$ and $$u_{h,k}u_{h,k}^*=1_{{}_h \rho} , \quad u_{h,k}^*u_{h,k}=1_{{}_{h-2k} \rho}.$$ Therefore $1_{{}_h \rho} $ is equivalent to $1_{{}_{h-2k} \rho}$ in the tube algebra and $M_{h,k}=\text{Ad}(u_{h,k}) $ maps $\mathcal{A}_{{}_h \rho} $ isomorphically onto $\mathcal{A}_{{}_{h-2k} \rho} $. \subsection{Example: $\mathbb{Z} / 4 \mathbb{Z}$} For $ G=\mathbb{Z}/4\mathbb{Z}$, it was shown in \cite{IzumiNote} that there is a unique generalized Haagerup category $\mathcal{C}_{G,A,\epsilon} $. We label the elements of $G$ by $\{0,1,2,3\}$, in that order. 
The structure constants are as follows: $$\epsilon_1(3)=\epsilon_3(1)=-1, \quad \epsilon_2(g)=(-1)^g, \quad \epsilon_g(h)=1 \text{ otherwise}.$$ Let $$a=-\displaystyle \frac{1+\sqrt{5}}{2} + i \sqrt{\displaystyle \frac{1+\sqrt{5}}{2}}.$$ Define $G \times G$ matrices as follows: $$A=\frac{1}{d-1}\left( \begin{array}{cccc} d-2 & -1 & -1 & -1 \\ -1 & -1 & a & -a \\ -1 & \bar{a} & -1 & a \\ -1 & -\bar{a} & \bar{a} & -1 \\ \end{array} \right) \quad B_1=\left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & -1 \\ 1 & 1 & 1 & -1 \\ 1 & -1 & -1 & 1 \\ \end{array} \right) $$ Set $$A_0(h,k)=A(h,k), \quad A_1(h,k)=B_1(h,k)A(h,k) $$ and define the remaining $A_g(h,k)$ by \begin{equation} \label{a+2} A_{g+2h}(p,q)=\epsilon_h(g)\epsilon_h(g+p)\epsilon_h(g+q)\epsilon_h(g+p+q)A_g(p,q), \end{equation} which is one of the structure equations for a generalized Haagerup category \cite{IzumiNote}. We will describe the tube algebra of $\mathcal{C}_{G,A,\epsilon} $. Using Corollary \ref{gproj} we can list the $14$ minimal central projections in $\mathcal{A}_G$. We label the elements of $\hat{G} $ by their values on $1$, namely $1,-1,i,-i$. Then the minimal central projections of $\mathcal{A}_G $ are: $$\{ p(0)^0, p(0)^1, p(2)^0,p(2)^1, p(0,-1)^\pm, p(2,1)^\pm, p(0,i)+p(0,-i),p(2,i)+p(2,-i), $$ $$p(1,1)+p(3,1), p(1,-1)+p(3,-1),p(1,i)+p(3,-i), p(1,-i)+p(3,i)\} .$$ For $g,h \in G $ and $\tau \in \hat{G} $, let $$J(\tau,g,h)=\frac{1}{|G|}\sum_{k \in G} \limits \tau(k)(g \; {}_k \rho | T_{2k-h+g} | {}_k \rho \; {}_h \rho) .$$ We compute $K(\tau,g,h)=J(\tau,g,h)J(\tau,g,h)^* $ for all $\tau $, for $g=0,1,2 $, and for $h=0,1$. We do not need the other values of $g $ and $h$ because $\mathcal{A}_1 $ and $\mathcal{A}_3 $ are isomorphic, and similarly $\mathcal{A}_{{}_h \rho } $ and $\mathcal{A}_{{}_{h+2} \rho } $ are isomorphic for all $h$. Let $\mu= \Lambda/(\Lambda-4)$. Then $K(\tau,g,h)$ is given by the following tables.
\begin{table}[H] \label{tab1} \caption{$K(\tau,g,h)$ for $h=0$} \centering \begin{tabular}{c | c c c c} \diagbox{g}{$\tau$} & $1$ & $-1$ & $i$ & $-i$ \\ \hline 0 & $\mu \cdot p(0)^1$ & $2 \cdot p(0,-1)^+ $ & $p(0,-i)$ & $p(0,i)$ \\ 1 & $p(1,1)$ & $p(1,-1)$ & $p(1,-i)$ & $p(1,i)$ \\ 2 &$2 \cdot p(2,1)^+ $ & $ \mu \cdot p(2)^1 $ & $p(2,-i) $ & $p(2,i) $ \\ \end{tabular} \end{table} \begin{table}[H] \label{tab2} \caption{$K(\tau,g,h)$ for $h=1$} \centering \begin{tabular}{c | c c c c} \diagbox{g}{$\tau$} & $1$ & $-1$ & $i$ & $-i$ \\ \hline 0 & $\mu \cdot p(0)^1$ & $2 \cdot p(0,-1)^- $ & $p(0,-i)$ & $p(0,i)$ \\ 1 & $p(1,1)$ & $p(1,-1)$ & $p(1,-i)$ & $p(1,i)$ \\ 2 &$2 \cdot p(2,1)^-$ & $ \mu \cdot p(2)^1$ & $p(2,-i) $ & $p(2,i) $ \\ \end{tabular} \end{table} From these tables we can write down the $14$ minimal central projections in the tube algebra corresponding to the minimal central projections of $\mathcal{A}_G $. Let $L(\tau,g,h) =J(\tau,g,h)^*J(\tau,g,h)$, and let $M=M_{0,1}+M_{1,1}$, which maps $\mathcal{A}_{{}_0 \rho}+ \mathcal{A}_{{}_1 \rho}$ isomorphically onto $\mathcal{A}_{{}_2 \rho}+ \mathcal{A}_{{}_3 \rho}$. Then the following are minimal central projections in the tube algebra.
\begin{align*} P_1 = p(0)^0 & & P_2 = p(2)^0 \end{align*} \begin{eqnarray*} P_3 &=& p(0)^1 +\frac{1}{\mu}(id+M)(L(1,0,0)+L(1,0,1)) \end{eqnarray*} \begin{eqnarray*} P_4 &=& p(2)^1 +\frac{1}{\mu}(id+M)(L(-1,2,0)+L(-1,2,1)) \end{eqnarray*} \begin{eqnarray*} P_5 &=&p(0,-1)^+ +\frac{1}{2}(id+M)L(-1,0,0) \end{eqnarray*} \begin{eqnarray*} P_6 &=& p(2,1)^+ +\frac{1}{2}(id+M)L(1,2,0) \end{eqnarray*} \begin{eqnarray*} P_7 &=& p(0,-1)^- +\frac{1}{2}(id+M)L(-1,0,1) \end{eqnarray*} \begin{eqnarray*} P_8 &=& p(2,1)^- +\frac{1}{2}(id+M)L(1,2,1) \end{eqnarray*} \begin{eqnarray*} P_9 &=& p(0,i)+p(0,-i) +(id+M)(L(i,0,0)+L(i,0,1) ) \end{eqnarray*} \begin{eqnarray*} P_{10} &=& p(2,i)+p(2,-i) +(id+M)(L(i,2,0)+L(i,2,1) ) \end{eqnarray*} \begin{eqnarray*} P_{11} &=& p(1,1)+p(3,1) +(id+M)(L(1,1,0)+L(1,1,1) ) \end{eqnarray*} \begin{eqnarray*} P_{12} &=& p(1,-1)+p(3,-1) +(id+M)(L(-1,1,0)+L(-1,1,1) ) \end{eqnarray*} \begin{eqnarray*} P_{13} &=& p(1,i)+p(3,-i) +(id+M)(L(-i,1,0)+L(-i,1,1) ) \end{eqnarray*} \begin{eqnarray*} P_{14} &=& p(1,-i)+p(3,i) +(id+M)(L(i,1,0)+L(i,1,1) ). \end{eqnarray*} This immediately tells us the structure of the corresponding $14$ half-braidings of $\mathcal{C}_{G,A,\epsilon} $. \begin{lemma} \begin{enumerate} \item $\alpha_0 $ and $\alpha_2 $ each have a unique half-braiding. \item $\alpha_g+\sum_{h \in G} \limits \alpha_h \rho $ and $2\alpha_g+\sum_{h \in G} \limits \alpha_h \rho $ each have a unique irreducible half-braiding for $g \in \{0,2 \} $. \item $ \alpha_g+\alpha_h \rho+ \alpha_{h+2} \rho$ has a unique irreducible half-braiding for $g \in \{0,2 \}, \ h \in \{0, 1 \} $. \item $\alpha_1+\alpha_3+\sum_{g \in G} \limits \alpha_g \rho $ has four irreducible half-braidings. \end{enumerate} \end{lemma} \begin{proof} Each $P_i$, $1 \leq i \leq 14 $, corresponds to an irreducible half-braiding, and the multiplicity of a simple object $\xi \in \Delta$ in the underlying object of the half-braiding associated to $P_i $ is given by the rank of $P_i 1_{\xi} $. 
\end{proof} The $\textbf{t}$-eigenvalues of $P_1-P_{14} $ are given by the vector $$ (1,1,1,1,1,1,1,1,1,-1,1,-1,i,-i ).$$ Let $$\textbf{t}_{h} = d( {}_h \rho \; {}_h \rho| SS^* | {}_h \rho \; {}_h \rho ),$$ the component of $\textbf{t} $ in $\mathcal{A}_{{}_h \rho} $. We would like to know the eigenvalues of left multiplication by $\textbf{t}_h $. It is difficult to compute the eigenvalues directly, but we can calculate them approximately by numerical methods, guess their exact values, and then verify directly that these values are correct by checking that $\textbf{t}_h$ satisfies the appropriate minimal polynomial. Let $$q(x)=(x^4-1)(x^2-e^{\frac{3\pi i}{5}})(x^2-e^{-\frac{3\pi i}{5}}) (x-e^{\frac{4\pi i}{5}})(x-e^{-\frac{4\pi i}{5}}) .$$ \begin{lemma} For each $h$, the eigenvalues of $\textbf{t}_{h}$ are $$\{ \pm1, \pm i, \pm e^{\pm \frac{3 \pi i }{10}} , e^{\pm \frac{4\pi i}{5}}\}.$$ \end{lemma} \begin{proof} We write down the matrix of left multiplication by $\textbf{t}_h $ with respect to the basis $\mathcal{B}_{{}_h \rho} $, and check that $q(\textbf{t}_h)=0 $, and that $\textbf{t}_h $ is not annihilated by any proper factor of $q(x) $. \end{proof} For each eigenvalue $\zeta $ of $\textbf{t}_h $, let $q_{\zeta}(x) =q(x)/(x-\zeta)$. Then the projection onto the $\zeta$-eigenspace is given by $$p^{\zeta}_{h} =\displaystyle \frac{q_{\zeta} (\textbf{t}_{h})}{q_{\zeta} (\zeta)} .$$ Note that $\text{dim}(\mathcal{A}_{{}_h \rho}) =20 $ and $\text{dim}(\mathcal{A}_{{}_h \rho, {}_{h+1} \rho }) =16 $ for all $h \in G$. \begin{lemma} \begin{enumerate} \item The projections $p^{ e^{\frac{4\pi i}{5}} }_h , p^{e^{-\frac{4\pi i}{5}}}_{h},p^{i}_h,p^{-i}_h$ are not minimal in $\mathcal{A}_{{}_h \rho} $. \item Each $\mathcal{A}_{{}_h \rho} $ is Abelian.
\item The ten minimal projections in $\mathcal{A}_{{}_h \rho} $ which are orthogonal to $P_1-P_{14}$ have $\textbf{t}_h$-eigenvalues $ \{ \pm i, \pm e^{\pm \frac{3 \pi i}{10}}, e^{\pm \frac{4 \pi i}{5}} \}$, with $ e^{\pm \frac{4 \pi i}{5}}$ each occurring twice. \item The projections $p^i_0-L(-i,1,0) $ and $p^i_1-L(-i,1,1) $ are not equivalent in the tube algebra. Similarly for the projections $p^{-i}_0-L(i,1,0) $ and $p^{-i}_1-L(i,1,1) $. \item For each $\textbf{t}_h $-eigenvalue $\zeta$ which is not a fourth root of unity, $p^{\zeta}_0 $ is equivalent to $p^{\zeta}_1 $ in the tube algebra. \end{enumerate} \end{lemma} \begin{proof} We check directly that $p_h^{\pm i} $ is not equal to the component of $P_{13/14} $ in $\mathcal{A}_{{}_h \rho} $, which shows that $p_h^{\pm i} $ are not minimal. We can also check the action of $p^{ e^{\pm \frac{4\pi i}{5}} }_h $ on the basis $\mathcal{B}_{{}_h \rho} $ and see that the range of each of these projections has dimension greater than one. Therefore, there are at least $10$ mutually orthogonal nonzero projections in $\mathcal{A}_{{}_h \rho} $ for each $h$ which are also orthogonal to $P_1-P_{14} $. Since there are also $10$ mutually orthogonal nonzero projections in each $\mathcal{A}_{{}_h \rho} $ which are subordinate to the sum of $P_1 $ to $P_{14}$, and $\text{dim}(\mathcal{A}_{{}_h \rho} )=20$, this implies that $\mathcal{A}_{{}_h \rho} $ is Abelian, and then we know the $\textbf{t}_h$-eigenvalues of all $20$ minimal central projections. This proves (1)-(3). Let $p$ be the component of $p_h^{\pm i} $ orthogonal to $P_{13/14} $. Then we can compute the entry of the $S$-matrix corresponding to the central cover of $p$ in the tube algebra and the identity using (\ref{sform1}), and this entry is $\frac{2d} { \Lambda} $ for all $h$ and $\pm $.
Therefore the object of the half-braiding corresponding to the central cover of $p$ has dimension $2d$, so $p$ is equivalent to a projection in $\mathcal{A}_{{}_{h+2} \rho} $, and not to any projections in $\mathcal{A}_{{}_{h+1} \rho} $. This shows (4). Finally, since $\text{dim}(\mathcal{A}_{{}_h \rho, {}_{h+1} \rho }) =16 $ we get (5). \end{proof} \begin{corollary} \begin{enumerate} \item The object $\sum_{g \in G} \limits \alpha_g \rho $ has eight irreducible half-braidings. \item The object $\alpha_h \rho + \alpha_{h+2} \rho$ has two irreducible half-braidings for each $h \in \{0,1\} $. \end{enumerate} \end{corollary} To write down all the minimal central projections in the tube algebra, the only remaining task is to decompose each $p^{ e^{ \pm \frac{4\pi i}{5}} }_h$, and for each of these two eigenvalues, to match up the two subprojections for $h=0$ with the two subprojections for $h=1$. The decomposition can be achieved by multiplying each $p^{\zeta}_{h} $ by $x$ and $x^2$, where $x$ is some basis element in $ \mathcal{B}_{{}_h \rho}$, and then solving a quadratic equation in the coefficient vectors of $p^{\zeta}_h$, $x p^{\zeta}_h$, and $x^2 p^{\zeta}_h$ with respect to $ \mathcal{B}_{{}_h \rho}$. This calculation can be done with a computer and we do not have any nice expression for the minimal subprojections of $p^{\zeta}_{h} $. The resulting subprojections can then be matched up by checking which pairings give a consistent $S$-matrix, which we can then compute. \begin{theorem} \begin{enumerate} \item The quantum double of the generalized Haagerup category for $ \mathbb{Z} / 4 \mathbb{Z} $ has $26$ simple objects, which we label by the integers $1,...,26$. 
\item The eigenvalues of the $T$-matrix are given by the vector $$( 1,1,1,1,1,1,1,1,1,-1,1,-1,i,-i,i,-i,i,-i,$$ $$e^{-\frac{3 \pi i}{10}},-e^{-\frac{3 \pi i}{10}} ,e^{\frac{3 \pi i}{10}} ,-e^{\frac{3 \pi i}{10}},e^{-\frac{ 4 \pi i}{5}},e^{-\frac{4 \pi i}{5}}, e^{\frac{ 4\pi i}{5}},e^{\frac{4 \pi i}{5}} ) $$ \item The $S$-Matrix is as follows: \begin{itemize} \item $S_{(1-14)\times(1-14)}$ is the matrix \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{8}\left( \begin{array}{cccccccccccccc} \frac{8}{\Lambda} & \frac{8}{\Lambda} & \frac{8d^2}{\Lambda} & \frac{8d^2}{\Lambda} & 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 \\ \frac{8}{\Lambda} & \frac{8}{\Lambda} & \frac{8d^2}{\Lambda} & \frac{8d^2}{\Lambda} & 1 & 1 & 1 & 1 & -2 & -2 & -2 & -2 & 2 & 2 \\ \frac{8d^2}{\Lambda} & \frac{8d^2}{\Lambda} & \frac{8}{\Lambda} & \frac{8}{\Lambda} & 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 \\ \frac{8d^2}{\Lambda} & \frac{8d^2}{\Lambda} & \frac{8}{\Lambda} & \frac{8}{\Lambda} & 1 & 1 & 1 & 1 & -2 & -2 & -2 & -2 & 2 & 2 \\ 1 & 1 & 1 & 1 & 3 & 3 & -1 & -1 & 2 & 2 & -2 & -2 & -2 & -2 \\ 1 & 1 & 1 & 1 & 3 & 3 & -1 & -1 & -2 & -2 & 2 & 2 & -2 & -2 \\ 1 & 1 & 1 & 1 & -1 & -1 & 3 & 3 & 2 & 2 & -2 & -2 & -2 & -2 \\ 1 & 1 & 1 & 1 & -1 & -1 & 3 & 3 & -2 & -2 & 2 & 2 & -2 & -2 \\ 2 & -2 & 2 & -2 & 2 & -2 & 2 & -2 & 4 & -4 & 0 & 0 & 0 & 0 \\ 2 & -2 & 2 & -2 & 2 & -2 & 2 & -2 & -4 & 4 & 0 & 0 & 0 & 0 \\ 2 & -2 & 2 & -2 & -2 & 2 & -2 & 2 & 0 & 0 & 4 & -4 & 0 & 0 \\ 2 & -2 & 2 & -2 & -2 & 2 & -2 & 2 & 0 & 0 & -4 & 4 & 0 & 0 \\ 2 & 2 & 2 & 2 & -2 & -2 & -2 & -2 & 0 & 0 & 0 & 0 & -4 & 4 \\ 2 & 2 & 2 & 2 & -2 & -2 & -2 & -2 & 0 & 0 & 0 & 0 & 4 & -4 \\ \end{array} \right)$ } \item $S_{(15-26)\times(15-26)}$ is the matrix $$\frac{1}{2 \sqrt{5}}\left( \begin{array}{cccccccccccc} c_3& c_2& c_1 &c_4 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\ c_2&c_3& c_4 &c_1 & -1 & 1 & 1 & -1 & 1 & -1 & -1 & 1 \\ c_1 &c_4 & c_3 & c_2& 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\ c_4 &c_1 & c_2 &c_3& -1 & 1 & 1 & -1 & 1 
& -1 & -1 & 1 \\ 1 & -1 & 1 & -1 & c_2&c_3& c_1 &c_4 & c_2& c_3&c_1 & c_4 \\ -1 & 1 & -1 & 1 &c_3& c_2& c_4 &c_1 & c_2& c_3&c_1 & c_4 \\ -1 & 1 & -1 & 1 &c_1 &c_4 & c_2 &c_3&c_4 & c_1 &c_3& c_2 \\ 1 & -1 & 1 & -1 &c_4 &c_1 & c_3 & c_2&c_4 & c_1 &c_3& c_2 \\ 1 & 1 & 1 & 1 & c_2& c_2& c_4 &c_4 &c_3& c_3&c_1 & c_1 \\ -1 & -1 & -1 & -1 &c_3&c_3& c_1 &c_1 &c_3& c_3&c_1 & c_1 \\ -1 & -1 & -1 & -1 &c_1 &c_1 & c_3 &c_3&c_1 & c_1 &c_3& c_3 \\ 1 & 1 & 1 & 1 &c_4 &c_4 & c_2 & c_2&c_1 & c_1 &c_3& c_3 \\ \end{array} \right),$$ where $c_k=2 \cos \frac{k \pi }{5} \in \{ \pm \frac{1}{2} \pm\frac{\sqrt{5}}{2} \}.$ \item $S_{(1-4)\times(15-26)}=(S_{(15-26) \times (1-4 ) })^T$ is the matrix $$\frac{1}{4 \sqrt{5}}\left( \begin{array}{cccccccccccc} 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\ -1 & -1 & -1 & -1 & -2 & -2 & -2 & -2 & 2 & 2 & 2 & 2 \\ -1 & -1 & -1 & -1 & -2 & -2 & -2 & -2 & -2 & -2 & -2 & -2 \\ 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & -2 & -2 & -2 & -2 \\ \end{array} \right)$$ \item $S_{(5-8)\times(15-18)} =(S_{(15-18)\times(5-8)} )^T$ is the matrix $$\frac{1}{4} \left( \begin{array}{cccc} 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & 1 \\ -1 & -1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ \end{array} \right)$$ \item All other entries are $0$. \end{itemize} \end{enumerate} \end{theorem} \begin{remark} It was observed in \cite{MR2837122} that the $13^{th}$ roots of unity which appear in the $T$-matrix for the quantum double of the Haagerup subfactor (corresponding to the group $ \mathbb{Z}/3\mathbb{Z}$) are $e^{\frac{12 l^2 \pi i}{13}}$, for $1 \leq l \leq 6 $, and the entry of the $S$-matrix corresponding to $l,l'$ is $ -\frac{2}{\sqrt{13}} \cos (\frac{2 l l' \pi}{13}) $. Here the $20^{th}$ roots of unity which appear in the $T$-matrix and are not also fourth roots of unity are $ e^{\pm \frac{6 l^2 \pi i}{20}}$, for $1 \leq l \leq 4 $. However, we did not find a similarly nice expression for the corresponding $8 \times 8 $ block of the $S$-matrix.
\end{remark} \subsection{Example: $\mathbb{Z} / 2 \mathbb{Z} \times \mathbb{Z} / 2 \mathbb{Z}$} For $ G=\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z} / 2 \mathbb{Z}$, it was shown in \cite{IzumiNote} that there is a unique generalized Haagerup category $\mathcal{C}_{G,A,\epsilon} $. The structure constants are as follows. We label the elements of $G$ by $\{0,a,b,c\}$. Set $$\epsilon_a(a)=\epsilon_b(b)=\epsilon_c(c)=\epsilon_a(c)=\epsilon_b(a)=\epsilon_c(b)=-1,$$ and $$\epsilon_g(h)=1$$ otherwise. Note that $\epsilon$ is a bicharacter. Define $G \times G$ matrices as follows: $$A=\frac{1}{d-1}\left( \begin{array}{cccc} d-2 & -1 & -1 & -1 \\ -1 & -1 & \sqrt{d} & \sqrt{d} \\ -1 & \sqrt{d} & -1 & \sqrt{d} \\ -1 & \sqrt{d} & \sqrt{d} & -1 \\ \end{array} \right), \quad B_a=\left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \\ \end{array} \right),$$ $$ B_b=\left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \\ \end{array} \right), B_c=\left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ \end{array} \right).$$ Set $A_0=A$ and $A_x(g,h)=A(g,h)B_x(g,h) $ for $x \in \{ a,b,c\} $. We consider the tube algebra of $\mathcal{C}_{G,A,\epsilon} $. For each $g \in G $, let $\hat{g} \in \hat{G} $ be the character defined by $ \epsilon_g(\cdot)$. Since $G=G_2 $, $\mathcal{A}_G $ is Abelian, and we can write down its $32$ minimal central projections using Corollary \ref{gproj}. They are: $$p(g)^0,p(g)^1, g \in G $$ and $$p(g,\hat{h})^\pm, g \neq h \in G.$$ For $g,h \in G $ and $\tau \in \hat{G} $, let $$J(\tau,g,h)=\frac{1}{|G|}\sum_{k \in G} \limits \tau(k)(g \; {}_k \rho | T_{h+g} | {}_k \rho \; {}_h \rho) .$$ We compute $K(\tau,g,h)=J(\tau,g,h)J(\tau,g,h)^* $ for all $\tau,g,h$. We have $$K(\hat{g},g,h)=\mu \cdot p(g)^1 ,$$ and $$K(\hat{k},g,h)= p(g,\hat{k})+\sigma(k,g,h)E(g,\hat{k}) , \ k \neq g $$ where $\sigma(k,g,h)$ is a sign.
\begin{table}[H] \caption{The sign $\sigma(k,g,h)$} \centering \begin{minipage}{.4 \textwidth} \begin{tabular}{c | c c c} \diagbox{h}{$\hat{k}$} & $\hat{a}$ & $\hat{b}$ & $\hat{c}$ \\ \hline $0$ & $+$ & $+$ & $+$ \\ $a$ & $-$ & $-$ & $+$ \\ $b$ &$+$ & $- $ & $- $ \\ $c$ & $-$ & $+$ & $-$ \\ \end{tabular} \caption*{g=0} \end{minipage} \begin{minipage}{.4 \textwidth} \begin{tabular}{c | c c c} \diagbox{h}{$\hat{k}$} & $\hat{0} $ & $\hat{b}$ & $\hat{c}$ \\ \hline $0$ & $-$ & $+$ & $-$ \\ $a$ & $+$ & $+$ & $+$ \\ $b$ & $-$ & $- $ & $+ $ \\ $c$ & $+$ & $-$ & $-$ \\ \end{tabular} \caption*{g=a} \end{minipage} \end{table} \begin{table}[H] \caption{The sign $\sigma(k,g,h)$} \centering \begin{minipage}{.4 \textwidth} \begin{tabular}{c | c c c} \diagbox{h}{$\hat{k}$} & $\hat{0} $ & $\hat{a}$ & $\hat{c}$ \\ \hline $0$ & $-$ & $-$ & $+$ \\ $a$ & $+$ & $-$ & $-$ \\ $b$ & $+$ & $+ $ & $+ $ \\ $c$ & $-$& $+$ & $-$ \\ \end{tabular} \caption*{g=b} \end{minipage} \begin{minipage}{.4 \textwidth} \begin{tabular}{c | c c c} \diagbox{h}{$\hat{k}$} & $\hat{0} $ & $\hat{a}$ & $\hat{b}$ \\ \hline $0$ & $-$ & $+$ & $-$ \\ $a$ & $-$ & $-$ & $+$ \\ $b$ & $+$ & $- $& $- $ \\ $c$ & $+$& $+$ & $+$ \\ \end{tabular} \caption*{g=c} \end{minipage} \end{table} From these tables, we can write down the corresponding $32$ minimal central projections in the tube algebra as linear combinations of projections in $\mathcal{A}_G $ and the elements $L(\tau,g,h)= J(\tau,g,h)^*J(\tau,g,h)$: $$p(g)^0, \quad p(g)^1+\frac{1}{\mu} \sum_{h \in G} \limits L(\hat{g},g,h) , \ g \in G$$ $$ p(g,\hat{k})^\pm + \frac{1}{2} \sum_{h \in G} \delta_{ \sigma(k,g,h),\pm} L(\hat{k},g,h), \ g \neq k \in G.$$ This tells us the structure of $32$ half-braidings of $\mathcal{C}_{G,A,\epsilon} $. \begin{lemma} \begin{enumerate} \item For each $g \in G$ there is a unique half-braiding for $\alpha_g $. \item For each $g \in G$ there is a unique irreducible half-braiding for $\alpha_g + \sum_{k\in G} \limits \alpha_k \rho $.
\item For each $g, h, k \in G $ with $h \neq k $ there is a unique irreducible half-braiding for $\alpha_g+\alpha_h \rho +\alpha_k \rho $. \end{enumerate} \end{lemma} Each $\mathcal{A}_{{}_h \rho}$ has dimension $24$, and $\mathcal{A}_{{}_h \rho, {}_k \rho} $ has dimension $20$ for $k \neq h $. On the other hand, for each $h$ exactly $16$ of the $32$ known minimal central projections are not orthogonal to $1_{{}_h \rho} $, with an intersection of $12$ for $\mathcal{A}_{{}_h \rho}$ and $\mathcal{A}_{{}_k \rho}$, $k \neq h $. Therefore the subalgebra of $\mathcal{A}_{{}_h \rho} $ which is orthogonal to the known minimal central projections has dimension $8$ for each $h$, and these subalgebras are mutually unitarily equivalent in the tube algebra. To find the remaining minimal central projections, we once again figure out the eigenvalues of $\textbf{t}_h $ with respect to $\mathcal{B}_{{}_h \rho} $ numerically, and then verify the minimal polynomial precisely; it is $$q(x)=(x^2-1)(x^2-e^{\frac{4 \pi i}{5} })(x^2-e^{-\frac{4 \pi i}{5} }) .$$ As before we let $q_{\zeta}(x)=\frac{q(x)}{x -\zeta} $, and $$p^{\zeta}_h=\frac{q_{\zeta}(\textbf{t}_h)}{q_{\zeta}(\zeta)} , \ \zeta \in \{ \pm e^{\pm \frac{2 \pi i}{5}} \} .$$ We define projections $$r(\tau,h)=\frac{1}{4} \sum_{k \in G} \limits \tau(k) ({}_h \rho \; k| 1 | k \; {}_h \rho ), \ h \in G, \ \tau \in \hat{G} .$$ Then for each $h$ and $\zeta \in \{ - e^{\pm \frac{2 \pi i}{5}} \}$, the set $\{ p^{\zeta}_h \cdot r(\tau, h ) \}_{\tau \in \hat{G} } $ contains three distinct nonzero projections. The last remaining step is to match up the components of $p^{\zeta}_h$ for different $h$. \begin{lemma} The object $\sum_{g \in G} \limits {\alpha}_g \rho $ has $8$ half-braidings, one each with $T$-eigenvalues $e^{\pm \frac{2 \pi i}{5}}$ and three each with $T$-eigenvalues $- e^{\pm \frac{2 \pi i}{5}}$. \end{lemma} We can now write down the $40 \times 40$ $S$-matrix using (\ref{sform2}).
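This count is consistent: the $32$ half-braidings corresponding to the previously known minimal central projections, together with the $8$ half-braidings of $\sum_{g \in G} \limits \alpha_g \rho $, give $$32+8=40$$ simple objects in the quantum double.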
However it turns out that in this case the modular data decomposes into a simpler form. \begin{lemma} Consider the unique half-braidings corresponding to the objects $id $, $id+ \sum_{k \in G} \limits \alpha_k \rho $, $\alpha_a + \rho+ \alpha_a \rho $, $\alpha_a +\alpha_b \rho+ \alpha_c \rho $, $\alpha_b + \rho+ \alpha_b \rho $, $\alpha_b +\alpha_a \rho+ \alpha_c \rho $, $\alpha_c + \rho+ \alpha_c \rho $, $\alpha_c +\alpha_a \rho+ \alpha_b \rho $, and the two half-braidings for $ \sum_{g \in G} \limits \alpha_g \rho$ with $ T$-eigenvalues $e^{\pm \frac{2 \pi i}{5}} $. These ten objects of the quantum double generate a modular tensor subcategory. \end{lemma} \begin{proof} We compute the $S$-matrix and check the fusion rules using the Verlinde formula. \end{proof} \begin{theorem} Consider the matrices\\ \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $S_a=\frac{1}{20}\left( \begin{array}{cccccccccc} 5-2 \sqrt{5} & 5+2 \sqrt{5} & 5 & 5 & 5 & 5 & 5 & 5 & 4 \sqrt{5} & 4 \sqrt{5} \\ 5+2 \sqrt{5} & 5-2 \sqrt{5} & 5 & 5 & 5 & 5 & 5 & 5 & -4 \sqrt{5} & -4 \sqrt{5} \\ 5 & 5 & 15 & -5 & -5 & -5 & -5 & -5 & 0 & 0 \\ 5 & 5 & -5 & 15 & -5 & -5 & -5 & -5 & 0 & 0 \\ 5 & 5 & -5 & -5 & 15 & -5 & -5 & -5 & 0 & 0 \\ 5 & 5 & -5 & -5 & -5 & 15 & -5 & -5 & 0 & 0 \\ 5 & 5 & -5 & -5 & -5 & -5 & 15 & -5 & 0 & 0 \\ 5 & 5 & -5 & -5 & -5 & -5 & -5 & 15 & 0 & 0 \\ 4 \sqrt{5} & -4 \sqrt{5} & 0 & 0 & 0 & 0 & 0 & 0 & 10+2 \sqrt{5} & -10+2 \sqrt{5} \\ 4 \sqrt{5} & -4 \sqrt{5} & 0 & 0 & 0 & 0 & 0 & 0 & -10+2 \sqrt{5} & 10+2 \sqrt{5} \\ \end{array} \right)$ } $$S_b=\frac{1}{2}\left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \\ \end{array} \right),$$ and the diagonal matrices $T_a$ and $T_b$ given by $$\text{Diagonal}(T_a)=(1,1,-1,-1,-1,-1,-1,-1,e^{\frac{2 \pi i}{5}},e^{-\frac{2 \pi i}{5}} )$$ and $$\text{Diagonal}(T_b)=(1,-1,-1 ,-1).$$ With an appropriate ordering of the simple objects, the modular data of the generalized Haagerup category
for $ \mathbb{Z} / 2 \mathbb{Z} \times \mathbb{Z} / 2 \mathbb{Z} $ is given by $$S=S_a \otimes S_b , \quad T=T_a \otimes T_b .$$ \end{theorem} The pairs of matrices $(S_a, T_a)$ and $(S_b, T_b) $ each form modular data, with $(ST)^3=-I $ in each case. The rank $4$ modular tensor category of invertible objects of the quantum double, which corresponds to the modular data $(S_b,T_b) $, is related to $D_4 $ (see \cite{MR2544735}); the rank $10$ modular category corresponding to $(S_a, T_a)$ appears to be new and is not realized as a quantum double. \section{The quantum double of an equivariantization of a generalized Haagerup category } We now consider an equivariantization of a generalized Haagerup category coming from the orbifold construction described in Section 2. Let $G,A,\epsilon $ be as above, represented by automorphisms $\alpha_g, g \in G $ and an endomorphism $\rho $ on an infinite factor $M$, satisfying (\ref{e11})-(\ref{e13}), and let $\theta $ be an automorphism of $G$ with period $m$ which leaves $A$ and $\epsilon $ invariant. Let $ \gamma$ be the corresponding automorphism of $M$. Since the $\mathbb{Z}/m\mathbb{Z} $ equivariantization of $\mathcal{C}_{G,A,\epsilon} $ is Morita equivalent to the $\mathbb{Z}/m\mathbb{Z} $-graded extension of $\mathcal{C}_{G,A,\epsilon} $ generated by $\gamma $, to study the quantum double it suffices to consider the latter. \subsection{The tube algebra of the $\mathbb{Z}/m\mathbb{Z} $-graded extension} Let $$\Delta=\{ \gamma^i \alpha_g \}_{ 0 \leq i \leq m-1, \ g \in G} \cup \{\gamma^i \alpha_g \rho \}_{0 \leq i \leq m-1, \ g \in G} .$$ In writing elements of the tube algebra we will represent the automorphism $ \gamma^i \alpha_g $ by the pair $(i,g )$. We introduce a basis for Tube $\Delta $ as follows. Let $H$ be the semidirect product of $G $ with $\mathbb{Z}/m\mathbb{Z} $ determined by $\theta $.
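To fix conventions for $H$: with $\gamma \alpha_g \gamma^{-1}=\alpha_{\theta(g)} $, composing the corresponding automorphisms gives $$\gamma^i \alpha_g \, \gamma^j \alpha_h = \gamma^{i+j} \alpha_{\theta^{-j}(g)} \alpha_h=\gamma^{i+j} \alpha_{\theta^{-j}(g)+h} ,$$ so that in the pair notation $(i,g)(j,h)=(i+j, \theta^{-j}(g)+h) $; this is the composition which appears in the multiplication rules below.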
Let $$\mathcal{B}_H=\{ ( (i,g) \; (j,k) | 1 |(j,k) \; (i, \theta^{-j}(g) ) ) \}_{0 \leq i, j \leq m-1, \ g,k \in G} $$ $$\cup \{ ( (i,g) \;{}_{(j,k)} \rho | 1 |{}_{(j,k)} \rho \; (i,\theta^{-i}(k)-\theta^{-j}(g)-k) ) \}_{0 \leq i, j \leq m-1, \ g,k \in G} ,$$ $$\mathcal{B}_{H,{}_H\rho} = \{ ( (i,g) \;{}_{(j,k)} \rho | T_{\theta^{i} (g)+\theta^j(k)+\theta^{i+j}(k-h) } |{}_{(j,k)} \rho \; {}_{(i,h)} \rho )\}_{0 \leq i, j \leq m-1, \ g,h,k \in G},$$ $$\mathcal{B}_{{}_H\rho, H} =\{ ( {}_{(i,h)} \rho \;{}_{(j,k)} \rho | T^*_{\theta^{i} (h)+\theta^j(k)-\theta^{i+j}(g+k) } |{}_{(j,k)} \rho \; (i,g) ) \}_{0 \leq i, j \leq m-1, \ g,h,k \in G},$$ $$ \mathcal{B}_{{}_H\rho} =\{ ( {}_{(i,g)} \rho \;{}_{(j,k)} \rho | T_{m+\theta^j(k)+\theta^{i+j}(-h)} $$ $$T^*_{m+\theta^{i} (g)+\theta^{i+j}(-k) } |{}_{(j,k)} \rho \; {}_{(i,h)} \rho ) \}_{0 \leq i, j \leq m-1, \ g,h,k,m \in G} ,$$ $$\cup \{ ( {}_{(i,g)} \rho \;{}_{(j,k)} \rho |SS^* |{}_{(j,k)} \rho \; {}_{(i,k+\theta^{-i}(k)-\theta^{-j}(g))} \rho ) \}_{0 \leq i, j \leq m-1, \ g,k \in G} ,$$ $$\cup \{ ( {}_{(i,g)} \rho \;{(j,k)} |1 |{(j,k)} \; {}_{(i,-k-\theta^{-i}(k)+\theta^{-j}(g))} \rho ) \}_{0 \leq i, j \leq m-1, \ g,k \in G} .$$ Then $\mathcal{B}= \mathcal{B}_H \cup \mathcal{B}_{H,{}_H\rho} \cup \mathcal{B}_{{}_H\rho, H} \cup \mathcal{B}_{{}_H\rho} $ is a basis for the tube algebra. We now write down the tube algebra multiplication rules, but for simplicity we only write down the multiplication in each $\mathcal{A}_{{}_{(i,g)} \rho} $, rather than for all of $\mathcal{A}_{{}_H \rho} $. This is all that is needed to compute the modular data in our example below.
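For example, specializing the first family in $\mathcal{B}_H $ to $i=j=0 $ gives the elements $$( (0,g) \; (0,k) | 1 | (0,k) \; (0,g) ) ,$$ which correspond to the elements $(g \; k | 1 | k \; g) $ of the tube algebra of the non-extended category $\mathcal{C}_{G,A,\epsilon} $ sitting inside the $0$-graded component.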
\begin{lemma} Multiplication in $\mathcal{A}_{ H } $ is given as follows: \begin{enumerate} \item $$({(i,g)} \; {(j,h)} | 1 | {(j,h)} \; {(i,l)} ) \cdot ({(i,l)} \; {(k,m)} | 1 | {(k,m)} \; {(i,n)} )$$ $$=( (i,g) \; (j+k, \theta^{-k}(h)+m ) |1|(j+k, \theta^{-k}(h)+m ) \; (i,n ) ).$$ \item $$({(i,g)} \; {(j,h)} | 1 | {(j,h)} \; {(i,l)} ) \cdot ({(i,l)} \; {}_{(k,m)} \rho | 1 | {}_{(k,m)} \rho \; {(i,n)} )$$ $$=( (i,g) \; {}_{(j+k, \theta^{-k}(h)+m )} \rho |1| {}_{ (j+k, \theta^{-k}(h)+m )} \rho \; (i,n ) ).$$ \item $$({(i,g)} \; {}_{(j,h)} \rho | 1 | {}_{(j,h)} \rho \; {(i,l)} ) \cdot ({(i,l)} \; {(k,m)} | 1 | {(k,m)} \; {(i,n)} )$$ $$=( (i,g) \; {}_{(j+k, \theta^{-k}(h)-m ) } \rho|1|{}_{(j+k, \theta^{-k}(h)-m )} \rho \; (i,n ) ).$$ \end{enumerate} \end{lemma} \begin{lemma} Multiplication on $\mathcal{A}_{ H } \times \mathcal{A}_{H,{}_H \rho} $ is given as follows: \begin{enumerate} \item $$({(i,g)} \; {(j,h)} | 1 | {(j,h)} \; {(i,g)} ) \cdot ({(i,g)} \; {}_{(k,l)} \rho | T_{\theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-m)} | {}_{(k,l)} \rho \; {}_{(i,m)} \rho )$$ $$ =\epsilon_h(\theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-m)) $$ $$ ({(i,g)} \; {}_{(j+k, \theta^{-k}(h)+l)} \rho | T_{\theta^j(2h)+\theta^{i+j}(g)+\theta^{j+k}(l)+\theta^{i+j+k}(l-m)} | {}_{(j+k, \theta^{-k}(h)+l)} \rho \; {}_{(i,m)} \rho ) $$ \item $$({(i,g)} \; {}_{(j,h)} \rho | 1 |{}_{(j,h)} \rho \; {(i,g)} ) \cdot ({(i,g)} \; {}_{(k,l)} \rho | T_{\theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-m)} | {}_{(k,l)} \rho \; {}_{(i,m)} \rho )$$ $$= \epsilon_{-h}(\theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-m)) $$ $$ \epsilon_{\theta^j(-2h)+\theta^{j+i}(g)+\theta^{j+k}(l)+\theta^{i+j+k}(l-m)}( \theta^j(2h)-\theta^{j+i}(g)-\theta^{j+k}(l)+\theta^{i+j+k}(m-l)) $$ $$\sum_{c \in G } \limits \epsilon_g(\theta^{j+k} ( c-l )+\theta^j(h) )$$ $$A_{\theta^j(2h)-\theta^{j+i}(g)-\theta^{j+k}(l)+\theta^{i+j+k}(m-l)}(-\theta^j(h)+\theta^{j+k}(c)+\theta^{i+j}(g) +\theta^{i+j+k}(l-m), $$
$$\theta^{i+j}(g+h)+\theta^{i+j+k}(c-m)+\theta^{j+k}(l)+\theta^i(2g)-\theta^j(2h)) $$ $$ ({(i,g)} \; {}_{(j+k,c)} \rho | T_{\theta^i(2g)+\theta^{j+k}(c)+\theta^{i+j+k}(c-m)-\theta^j(h)+\theta^{i+j}(g+h)} | {}_{(j+k,c)} \rho \; {}_{(i,m)} \rho ) $$ \end{enumerate} \end{lemma} \begin{lemma} Multiplication on $\mathcal{A}_{H, {}_H \rho } \times \mathcal{A}_{{}_H \rho, H} $ is given as follows. $$ ({(i,g)} \; {}_{(k,l)} \rho | T_{\theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-m)} | {}_{(k,l)} \rho \; {}_{(i,m)} \rho ) $$ $$\cdot ({}_{(i,m)} \rho \; {}_{(p,q)} \rho | T^*_{\theta^{i}(m)+\theta^{p}(q)+\theta^{i+p}(q-g)} | {}_{(p,q)} \rho \; {(i,g)} )$$ $$= \epsilon_l(\theta^{i}(m)+\theta^{p}(q)+\theta^{i+p}(g-q)) $$ $$\epsilon_{-\theta^k(2l)+\theta^{k+i}(m)+\theta^{k+p}(q)+\theta^{k+i+p}(g-q)}(\theta^k(2l)-\theta^{k+i}(m)-\theta^{k+p}(q)+\theta^{k+i+p}(q-g)) $$ $$[\delta_{\theta^k(2l)-\theta^{i+k}(m)-\theta^{k+p}(q)+\theta^{k+i+p}(q-g), \theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-m)} $$ $$({(i,g)} \; {(k+p,\theta^{-p}(l)-q)} | 1 | {(k+p,\theta^{-p}(l)-q)} \; (i,g) )$$ $$+ \sum_{c \in G} \limits \epsilon_g(\theta^{k+p}(c-q)+\theta^k(l) )$$ $$\delta_{\theta^{k+p}(c)+ \theta^{i+k+p}(g-c)-\theta^i(g) ,0 } $$ $$A_{\theta^k(2l) - \theta^{k+i}(m)-\theta^{k+p}(q)+\theta^{k+i+p}(q-g)}( \theta^{k+p}(c)-\theta^k(l)+ \theta^{i+k}(m)+\theta^{i+k+p}(g-q), $$ $$\theta^{i}(g)-\theta^{k}(l)+\theta^{i+k}(l)+\theta^{k+p}(q)+\theta^{k+i+p}(g-q)) $$ $$ ({(i,g)} \; {}_{(k+p,c)} \rho |1 | {}_{(k+p,c)} \rho \; (i,g) )]$$ \end{lemma} \begin{lemma} Multiplication on $\mathcal{A}_{ {}_H \rho,H } \times \mathcal{A}_{H,{}_H \rho} $ is given as follows.
$$ ({}_{(i,m)} \rho \; {}_{(p,q)} \rho | T^*_{\theta^{i}(m)+\theta^{p}(q)+\theta^{i+p}(q-g)} | {}_{(p,q)} \rho \; {(i,g)} )$$ $$\cdot ({(i,g)} \; {}_{(k,l)} \rho | T_{\theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-m)} | {}_{(k,l)} \rho \; {}_{(i,m)} \rho ) $$ $$=\epsilon_{-q}(\theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-m))$$ $$\epsilon_{-\theta^p(2q)+\theta^{i+p}(g)+\theta^{k+p}(l)+\theta^{i+k+p}(l-m)}(\theta^p(2q)-\theta^{i+p}(g)-\theta^{k+p}(l)+\theta^{i+k+p}(m-l)) [$$ $$\delta_{-\theta^{k+p}(l)+\theta^{i+k+p}(m-l), \theta^{i}(m)-\theta^{p}(q)+\theta^{i+p}(q)} \frac{1}{d} $$ $$({}_{(i,m)} \rho \; {(p+k, \theta^{-k}(q)-l)} |1 | {(p+k, \theta^{-k}(q)-l)} \; {}_{(i,m)} \rho \; ) $$ $$+ \sum_{c \in G } \limits \epsilon_{-m}(\theta^{p+k}(c-l )+\theta^p(q))$$ $$\epsilon_{-\theta^i(2m)+\theta^{i+p+k}(c-l )+\theta^{i+p}(q)}(\theta^i(2m)-\theta^{i+p+k}(c-l )-\theta^{i+p}(q))[ $$ $$\delta_{\theta^{p+k}(c)-\theta^{p}(q) ,-\theta^{i+p}(g)+\theta^{i+k+p}(m-l) } \delta_{ -\theta^{i}(m)+\theta^{p}(q)+\theta^{i+p}(2q-g) , -\theta^{i+p+k}(c-l ) }$$ $$ ({}_{(i,m)} \rho \; {}_{(p+k, c)} \rho |SS^* | {}_{(p+k, c)} \rho \; {}_{(i,m)} \rho \; ) $$ $$+\sum_{r \in G} \limits $$ $$A_{\theta^p(2q)-\theta^{i+p}(g)-\theta^{k+p}(l)+\theta^{i+k+p}(m-l)}(\theta^{p+k}(c)-\theta^{p}(q) +\theta^{i+p}(g)+\theta^{i+k+p}(l-m),$$ $$\theta^{k+p}(l)+\theta^{i+k+p}(l-m)+\theta^{i}(m)-\theta^{p}(q)+\theta^{i+p}(q)+r) $$ $$A_{\theta^i(2m)-\theta^{i+p+k}(c+\theta^{-k}(q)-l )}(-\theta^{i}(m)+\theta^{p}(q)+\theta^{i+p}(2q-g) + \theta^{i+p+k}(c-l ) ,r) $$ $$({}_{(i,m)} \rho \; {}_{(p+k, c)} \rho | T_{r+\theta^i(m)+\theta^{p+k}(c)+\theta^{i+p}(q)+\theta^{i+p+k}(l-m) }$$ $$T^*_{r+\theta^i(2m)+\theta^{i+p+k}(l-c )+\theta^{i+p}(q)} | {}_{(p+k, c)} \rho \; {}_{(i,m)} \rho \; ) ]]$$ \end{lemma} \begin{lemma} Multiplication in $\mathcal{A}_{{}_{(i,g)} \rho} $ is given as follows.
\begin{enumerate} \item $$({}_{(i,g)} \rho \; (j,h) |1 | (j,h) \; {}_{(i,g)} \rho ) \cdot ( {}_{(i,g)} \rho \;(k,l) |1 | (k,l) \; {}_{(i,g)} \rho ) $$ $$= ({}_{(i,g)} \rho \; (j+k,\theta^{-k}(h)+l) |1 | (j+k,\theta^{-k}(h)+l) \; {}_{(i,g)} \rho )$$ \item $$({}_{(i,g)} \rho \; {}_{(j,h)} \rho | SS^* | {}_{(j,h)} \rho \; {}_{(i,g)} \rho ) \cdot ({}_{(i,g)} \rho \;(k,l) |1 | (k,l) \; {}_{(i,g)} \rho ) $$ $$=({}_{(i,g)} \rho \; {}_{(j+k,\theta^{-k}(h)-l)} \rho |SS^* |{}_{(j+k,\theta^{-k}(h)-l)} \rho \; {}_{(i,g)} \rho ) $$ \item $$({}_{(i,g)} \rho \; (j,h) | 1 | (j,h) \; {}_{(i,g)} \rho ) \cdot ({}_{(i,g)} \rho \;{}_{(k,l)} \rho |SS^* | {}_{(k,l)} \rho \; {}_{(i,g)} \rho ) $$ $$ =({}_{(i,g)} \rho \;{}_{(j+k,\theta^{-k}(h)+l)} \rho |SS^* |{}_{(j+k,\theta^{-k}(h)+l)}\rho \; {}_{(i,g)} \rho ) $$ \item $$({}_{(i,g)} \rho \; {}_{(j,h)} \rho | T_{m+\theta^j(h)-\theta^{i+j}(g)}T^*_{m+\theta^i(g)-\theta^{i+j}(h)} | {}_{(j,h)} \rho \; {}_{(i,g)} \rho ) $$ $$\cdot ({}_{(i,g)} \rho \;(k,l) |1 | (k,l) \; {}_{(i,g)} \rho ) $$ $$=({}_{(i,g)} \rho \; {}_{(j+k,\theta^{-k}(h)-l)} \rho | T_{m+\theta^j(h)-\theta^{i+j}(g)}$$ $$T^*_{m+\theta^i(g)-\theta^{i+j}(h)} |{}_{(j+k,\theta^{-k}(h)-l)} \rho \; {}_{(i,g)} \rho ) $$ \item $$({}_{(i,g)} \rho \; (j,h) | 1 | (j,h) \; {}_{(i,g)} \rho ) $$ $$\cdot ({}_{(i,g)} \rho \;{}_{(k,l)} \rho |T_{n+\theta^k(l)-\theta^{i+k}(g)}T^*_{n+\theta^i(g)-\theta^{i+k}(l)} | {}_{(k,l)} \rho \; {}_{(i,g)} \rho ) $$ $$=\epsilon_h(n+\theta^k(l)-\theta^{i+k}(g)) \epsilon_h(n+\theta^i(g)-\theta^{i+k}(l)) $$ $$ ({}_{(i,g)} \rho \;{}_{(j+k,\theta^{-k}(h)+l)} \rho |T_{\theta^j(n+2h)+\theta^{j+k}(l)-\theta^{i+j+k}(g)}$$ $$T^*_{\theta^j(n+2h)+\theta^{i+j}(g)-\theta^{i+j+k}(l)} |{}_{(j+k,\theta^{-k}(h)+l)}\rho \; {}_{(i,g)} \rho ) $$ \item $$({}_{(i,g)} \rho \; {}_{(j,h)} \rho | SS^* | {}_{(j,h)} \rho \; {}_{(i,g)} \rho ) \cdot ({}_{(i,g)} \rho \;{}_{(k,l)} \rho |SS^* | {}_{(k,l)} \rho \; {}_{(i,g)} \rho ) $$ $$= \frac{1}{d^3} ({}_{(i,g)} \rho \;
(j+k, \theta^{-k}(h)-l) |1| (j+k, \theta^{-k}(h)-l) \; {}_{(i,g)} \rho)$$ $$+ \sum_{c} \limits \epsilon_{-g}(\theta^{j+k}(c+ \theta^{-k}(h)-l ) ) $$ $$\epsilon_{\theta^i(-2g)+\theta^{i+j+k}(c+ \theta^{-k}(h)-l )}(-(\theta^i(-2g)+\theta^{i+j+k}(c+ \theta^{-k}(h)-l ))) \frac{1}{d^2}$$ $$ ({}_{(i,g)} \rho \; {}_{(j+k, c)} \rho |T_{\theta^{j+k}(c+ \theta^{-k}(h)-l )}T^*_{\theta^i(2g)-\theta^{i+j+k}(c+ \theta^{-k}(h)-l )}) | {}_{(j+k, c)} \rho \; {}_{(i,g)} \rho)$$ \item $$({}_{(i,g)} \rho \; {}_{(j,h)} \rho | T_{m+\theta^j(h)-\theta^{i+j}(g)}T^*_{m+\theta^i(g)-\theta^{i+j}(h)} | {}_{(j,h)} \rho \; {}_{(i,g)} \rho ) $$ $$\cdot ({}_{(i,g)} \rho \;{}_{(k,l)} \rho |SS^*| {}_{(k,l)} \rho \; {}_{(i,g)} \rho ) $$ $$= \delta_{\theta^i(g)-\theta^j(h) +\theta^{i+j}(g-h),0} \frac{1}{d^2} ({}_{(i,g)} \rho \; (j+k, \theta^{-k}(h)-l) |1 | (j+k, \theta^{-k}(h)-l) \; {}_{(i,g)} \rho)$$ $$+ \sum_{c} \limits \epsilon_{-g}(\theta^{j+k}(c+ \theta^{-k}(h)-l ) ) $$ $$ \epsilon_{\theta^i(-2g)+ \theta^{i+j+k}(c+ \theta^{-k}(h)-l )}(-(\theta^i(-2g)+ \theta^{i+j+k}(c+ \theta^{-k}(h)-l ))) \frac{1}{d}$$ $$A_{\theta^i(2g) +\theta^{i+j+k}(l-c) - \theta^{i+j}(h) }(m-\theta^i(g)+\theta^{i+j+k}(c-l),-\theta^i(g)+\theta^j(h)+\theta^{i+j}(h-g)) $$ $$ ({}_{(i,g)} \rho \; {}_{(j+k, c)} \rho | T_{\theta^{j+k}(c+ \theta^{-k}(h)-l )} T_{\theta^i(g)+\theta^j(h)-\theta^{i+j}(g)+\theta^{i+j+k}(l-c)}^* | {}_{(j+k, c)} \rho \; {}_{(i,g)} \rho)$$ \item $$({}_{(i,g)} \rho \; {}_{(j,h)} \rho | SS^* | {}_{(j,h)} \rho \; {}_{(i,g)} \rho ) $$ $$\cdot ({}_{(i,g)} \rho \;{}_{(k,l)} \rho |T_{n+\theta^k(l)-\theta^{i+k}(g)}T^*_{n+\theta^i(g)-\theta^{i+k}(l)} | {}_{(k,l)} \rho \; {}_{(i,g)} \rho ) $$ $$=\epsilon_{-h}(n+\theta^k(l)-\theta^{i+k}(g)) \epsilon_{-h}(n+\theta^i(g)-\theta^{i+k}(l)) $$ $$\epsilon_{\theta^j(n-2h)+\theta^{j+k}(l)-\theta^{i+j+k}(g)}(-(\theta^j(n-2h)+\theta^{j+k}(l)-\theta^{i+j+k}(g))) $$ $$\epsilon_{\theta^j(n-2h)+\theta^{i+j}(g)-\theta^{i+j+k}(l)}(-(\theta^j(n-2h)+\theta^{i+j}(g)-\theta^{i+j+k}(l))) 
$$ $$[ \delta_{-\theta^{i}(g)+\theta^{k}(l)+\theta^{i+k}(l-g),0} \frac{1}{d^2} $$ $$({}_{(i,g)} \rho \; (j+k, \theta^{-k}(h)-l) |1 | (j+k, \theta^{-k}(h)-l) \; {}_{(i,g)} \rho)$$ $$+ \sum_{c} \limits \epsilon_{-g}(\theta^{j+k}(c+ \theta^{-k}(h)-l ) )$$ $$\epsilon_{\theta^i(-2g)+\theta^{i+j+k}(c+ \theta^{-k}(h)-l )}(-(\theta^i(-2g)+\theta^{i+j+k}(c+ \theta^{-k}(h)-l ))) \frac{1}{d} $$ $$A_{\theta^j(2h-n)-\theta^{j+k}(l)+\theta^{i+j+k}(g)}(\theta^{j+k}(c)+\theta^j(n-h)-\theta^{i+j+k}(g),$$ $$\theta^{j+k}(l)-\theta^{i+j}(g)+\theta^{i+j+k}(l-g)) $$ $$ ({}_{(i,g)} \rho \; {}_{(j+k, c)} \rho |T_{\theta^{j+k}(c) +\theta^j(h)-\theta^{i+j}(g)+\theta^{i+j+k}(l-g) }$$ $$T^*_{\theta^i(2g)+\theta^{i+j+k}(l-c)- \theta^{i+j}(h)} ) | {}_{(j+k, c)} \rho \; {}_{(i,g)} \rho)]$$ \item $$({}_{(i,g)} \rho \; {}_{(j,h)} \rho | T_{m+\theta^j(h)-\theta^{i+j}(g)}T^*_{m+\theta^i(g)-\theta^{i+j}(h)} | {}_{(j,h)} \rho \; {}_{(i,g)} \rho ) $$ $$\cdot ({}_{(i,g)} \rho \; {}_{(k,l)} \rho | T_{n+\theta^k(l)-\theta^{i+k}(g)}T^*_{n+\theta^i(g)-\theta^{i+k}(l)} | {}_{(k,l)} \rho \; {}_{(i,g)} \rho ) $$ $$=\epsilon_{-h}(n+\theta^k(l)-\theta^{i+k}(g)) \epsilon_{-h}(n+\theta^i(g)-\theta^{i+k}(l)) $$ $$\epsilon_{\theta^j(n-2h)+\theta^{j+k}(l)-\theta^{i+j+k}(g)}(\theta^j(2h-n)-\theta^{j+k}(l)+\theta^{i+j+k}(g)) $$ $$\epsilon_{\theta^j(n-2h)+\theta^{i+j}(g)-\theta^{i+j+k}(l)}(\theta^j(2h-n)-\theta^{i+j}(g)+\theta^{i+j+k}(l)) $$ $$[ \delta_{\theta^i(g)-\theta^j(h)-\theta^{i+j}(h)+\theta^{j+k}(l)+\theta^{i+j+k}(l-g),0} $$ $$\frac{1}{d} A_{\theta^j(2h-n)-\theta^{i+j}(g)+\theta^{i+j+k}(l)}(-\theta^{j+k}(l)+\theta^{i+j}(g)+\theta^{i+j+k}(g-l),$$ $$m+\theta^j(n-h)-\theta^{i+j+k}(l))$$ $$({}_{(i,g)} \rho \; (j+k, \theta^{-k}(h)-l) |1 | (j+k, \theta^{-k}(h)-l) \; {}_{(i,g)} \rho)$$ $$+ \sum_{c \in G} \limits \epsilon_{-g}(\theta^{j+k}(c-l )+ \theta^{j}(h) ) $$ $$\epsilon_{-\theta^i(2g)+\theta^{i+j+k}(c-l )+ \theta^{i+j}(h)}(\theta^i(2g)+\theta^{i+j+k}(l-c )- \theta^{i+j}(h) ) $$ $$[ 
\delta_{\theta^{j+k}(c)+\theta^j(n-h)-\theta^{i+j+k}(g),0 }\delta_{m+\theta^j(n-h)-\theta^{i+j+k}(l),0} \delta_{m-\theta^i(g)+\theta^{i+j+k}(c-l),0} $$ $$ ({}_{(i,g)} \rho \; {}_{(j+k, c)} \rho | SS^*| {}_{(j+k, c)} \rho \; {}_{(i,g)} \rho)$$ $$+\sum_{r \in G} \limits A_{\theta^j(2h-n)-\theta^{j+k}(l)+\theta^{i+j+k}(g)}(\theta^{j+k}(c)+\theta^j(n-h)-\theta^{i+j+k}(g), $$ $$\theta^{j+k}(l)-\theta^{i+j}(g)+\theta^{i+j+k}(l-g)+r) $$ $$A_{\theta^{i}(2g)-\theta^{i+j+k}(c+ \theta^{-k}(h)-l )}(m-\theta^i(g)+\theta^{i+j+k}(c-l),-\theta^i(g)+\theta^j(h)+\theta^{i+j}(h-g)+r) $$ $$ A_{\theta^j(2h-n)-\theta^{i+j}(g)+\theta^{i+j+k}(l)}(r,m+\theta^j(n-h)-\theta^{i+j+k}(l)) $$ $$ ({}_{(i,g)} \rho \; {}_{(j+k, c)} \rho | T_{r+\theta^{j+k}(c) +\theta^j(h)-\theta^{i+j}(g)+\theta^{i+j+k}(l-g)} $$ $$T^*_{r + \theta^i(g)- \theta^{i+j}(g) +\theta^j(h)+\theta^{i+j+k}(l-c)} | {}_{(j+k, c)} \rho \; {}_{(i,g)} \rho)]]$$ \end{enumerate} \end{lemma} Formulas for the action of $S_0$ on the basis $\mathcal{B} $ can be computed in a similar way, and we omit them. Note that for any $(i,g), (j,h) \in H$, $\text{Ad} [ ( (i,g) \; (j,h) | 1 | (j,h)\; Ad_{(j,h)} [(i,g)] ) ]$ maps $\mathcal{A}_{(i,g)} $ isomorphically to $\mathcal{A}_{Ad_{(j,h)} [(i,g)]} $, and similarly $\mathcal{A}_{{}_{(i,g)} \rho} $ is isomorphic to $\mathcal{A}_{{}_{(j,h)^{-1}(i,g)(j,-h)} \rho} $. \subsection{Example: the $4442$ subfactor} In this subsection we consider the $\mathbb{Z}/3\mathbb{Z} $-graded extension of the generalized Haagerup category for $G=\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $ coming from an automorphism of $G$ which cyclically permutes the non-trivial elements. This category is Morita equivalent to the even part of the self-dual $4442$ subfactor with principal graph \centerline{ \includegraphics[width=1in]{4442.eps} ,} first constructed using planar algebras methods in \cite{1208.3637}. 
We use the notation of the previous section for $\mathcal{C}_{G,A,\epsilon} $ with $G=\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $, and consider the automorphism $\theta $ satisfying $$\theta(a)=b, \ \theta(b)=c, \ \theta(c)=a .$$ Then $A$ and $\epsilon $ are invariant under $\theta $. Note that $H= (\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}) \rtimes_{\theta} \mathbb{Z}/3\mathbb{Z}$ is isomorphic to the alternating group on four letters. We denote a typical element of $H$ by $(i,g), \ i \in \{0,1,2\}, \ g \in \{0,a,b,c \} $. We consider the tube algebra for $\mathcal{C}_{G,A,\epsilon} \rtimes_{\gamma} \mathbb{Z}/3\mathbb{Z} $, where $\gamma $ is the automorphism of the category coming from $\theta $. The tube algebra inherits the $\mathbb{Z}/3\mathbb{Z} $-grading, and we look for the minimal central projections in the graded components of the tube algebra separately. We first look at the $0$-graded component of the tube algebra, which contains the tube algebra of the regular (non-extended) generalized Haagerup category $\mathcal{C}_{G,A,\epsilon} $. Consider the $32$ minimal central projections of the smaller tube algebra: $p(g)^0$, $p(g)^1$, and $p(g,\hat{h})^{\pm}$ for $g \neq h \in G $. In the larger tube algebra, $\text{dim}(\mathcal{A}_{(0,0)})=24 $ and $\text{dim}(\mathcal{A}_{(0,g)})=8 $ for $g \neq 0 $. Moreover, $ 1_g $ is equivalent to $1_h $ for $g,h \neq 0 \in G $, since $g$ and $h$ are in the same conjugacy class in $H$.
Therefore there are $8$ rank three minimal central projections in $\mathcal{A}_{(0,G \backslash \{0\})} $; they are: $$p(a,\hat{a})^l+p(b,\hat{b})^l+p(c,\hat{c})^l, \ l \in \{0,1\} $$ and $$ p(a,\hat{0} )^{\sigma}+p(b,\hat{0} )^{\sigma}+p(c,\hat{0} )^{\sigma} , \quad p(a,\hat{b} )^{\sigma}+p(b,\hat{c} )^{\sigma}+p(c,\hat{a} )^{\sigma}, $$ $$ p(a,\hat{c} )^{\sigma}+p(b,\hat{a} )^{\sigma}+p(c,\hat{b} )^{\sigma} , \quad \sigma \in \{ \pm \} .$$ For $\mathcal{A}_{(0,0)} $, we also consider the projections $$p^{\omega} =\frac{1}{3}\sum_{i=0}^{2} \omega^i ( (0,0) \; (i,0) | 1 | (i,0) \; (0,0) ),$$ for $\omega $ a cube root of unity. Then the minimal central projections of $\mathcal{A}_{(0,0)} $ are $$p^{\omega} p(0)^0, \ p^{\omega}p(0)^1, \ \omega \in \{1,e^{\frac{2\pi i}{3}},e^{-\frac{2\pi i}{3}} \},$$ which each have rank $1$, and $$p(0,\hat{a})^{\sigma}+p(0,\hat{b})^{\sigma}+p(0,\hat{c})^{\sigma}, \ \sigma \in \{ \pm \}, $$ which each have rank three. Therefore there are $16$ minimal central projections in $\mathcal{A}_{(0,G)} $. To find the corresponding minimal central projections in the entire tube algebra, we consider elements of the form $$J(\tau,g,h,i)=\frac{1}{4}\sum_{k \in G} \tau(k) ( (0,g) \; {}_{(i,k)} \rho |T_{ g+\theta^i(k)+\theta^{i}(k+h) } |{}_{(i,k)} \rho \; {}_{(0,h)} \rho ) $$ for $i \in \mathbb{Z}/3\mathbb{Z}, \ g,h \in G, \tau \in \hat{G} $ and $$J^{\omega}(\tau,g,h)=\frac{1}{3} \sum_{i=0}^2 \omega^i J(\tau,g,h,i),$$ for $ \ g,h \in G, \tau \in \hat{G}, \omega \in \{1,e^{\frac{2\pi i}{3}},e^{-\frac{2\pi i}{3}} \} $. Let $$K(\tau,g,h,i)=J(\tau,g,h,i)J(\tau,g,h,i)^* $$ and $$K^{\omega}(\tau,g,h)=J^{\omega}(\tau,g,h)J^{\omega}(\tau,g,h)^* .$$ First we look at intertwiners from $\mathcal{A}_{(0,0)} $ to $\mathcal{A}_{{}_{(0,h)} \rho} $ for $h=0,a $. \begin{lemma} \begin{enumerate} \item We have $K^{\omega}(\hat{0},0,0 ) = K^{\omega}(\hat{b},0,a ) = \frac{10}{5+2\sqrt{5} } p^{\omega} p(0)^1$.
\item For $g \neq 0 $, we have $K(\hat{g},0,0,0)=2 p(0,\hat{g})^+$. \item For $\tau \neq \tau' $, we have $$J(\tau,0,0,0)J(\tau',0,0,0)^*=J(\tau,0,a,0)J(\tau',0,a,0)^*=0 .$$ \item We have $$K(\hat{a},0,a,0)=2 p(0,\hat{c})^+, \ K(\hat{0},0,a,0)=2p( 0,\hat{b})^- , \ K(\hat{c},0,a,0 )=2p(0,\hat{a})^- .$$ \end{enumerate} \end{lemma} We can now write down all of the minimal central projections in the tube algebra which have a non-trivial component in $\mathcal{A}_{(0,0)} $, using the isomorphisms from $\mathcal{A}_{{}_{(0,g)} \rho} $ to $ \mathcal{A}_{{}_{(0,h) } \rho} $ for $g,h \neq 0$. Corresponding to these projections are $8$ irreducible half-braidings. \begin{lemma} \begin{enumerate} \item The identity object $(0,0) $ and the object $(0,0)+ \sum_{ g \in G} \limits {}_{(0,g)} \rho $ each have three irreducible half-braidings. \item The objects $3(0,0)+2 \sum_{0 \neq g \in G} \limits {}_{(0,g)} \rho $ and $3 ((0,0) +{}_{(0,0)} \rho )+ \sum_{0 \neq g \in G} \limits {}_{(0,g)} \rho $ each have a unique irreducible half-braiding. \end{enumerate} \end{lemma} Next we look at intertwiners from $\mathcal{A}_{(0,a)} $ to $\mathcal{A}_{{}_{(0,h)} \rho} $ for $h=0,a $. \begin{lemma} \begin{enumerate} \item We have \begin{multline*} K(\hat{a},a,0,2 ) =K(\hat{b},a,0,1 )=K(\hat{c},a,0,0)\\= K(\hat{a},a,a,0 ) =K(\hat{0},a,a,1 )=K(\hat{c},a,a,2 )= \frac{10}{5+2\sqrt{5} } p(a)^1. 
\end{multline*} \item We have $$K(\hat{0},a,0,i )=2p(a,\hat{b})^+ , \quad i=0,1,2 ,$$ and $$K(\hat{b},a,a,0) = 2p(a,\hat{b})^+ .$$ \item We have $$K(\hat{a},a,0,0 ) =K(\hat{c},a,0,1 )=K(\hat{b},a,0,2)= K(\hat{0},a,a,2 )=2p(a,\hat{c})^-.$$ \item We have $$K(\hat{b},a,0,0 ) =K(\hat{a},a,0,1 )=K(\hat{c},a,0,2)=K(\hat{c},a,a,1 )= 2p(a,\hat{0})^-.$$ \item We have $$K(\hat{0},a,a,0 ) =K(\hat{a},a,a,2 )=2p(a,\hat{0})^+ $$ $$ K(\hat{a},a,a,1 ) =K(\hat{c},a,a,0 )=2p(a,\hat{c})^+ $$ $$K(\hat{b},a,a,2 ) =K(\hat{b},a,a,1 )= 2p(a,\hat{b})^-.$$ \end{enumerate} \end{lemma} We can now write down the other eight minimal central projections in the tube algebra which have non-trivial components in $\mathcal{A}_{(0,G)} $, and the objects with the corresponding half-braidings. \begin{lemma} \begin{enumerate} \item The objects $\sum_{0 \neq g \in G} \limits ( (0,g) + {}_{(0,g)} \rho )+3 {}_{(0,0)} \rho $ and $\sum_{0 \neq g \in G} \limits ( (0,g) + 2{}_{(0,g)} \rho ) $ each have three irreducible half-braidings. \item The objects $ \sum_{0 \neq g \in G} \limits (0,g)$ and $\sum_{0 \neq g \in G} \limits ( (0,g) + 3{}_{(0,g)} \rho )+3 {}_{(0,0)} \rho $ each have a unique irreducible half-braiding. \end{enumerate} \end{lemma} The dimension of $\mathcal{A}_{{}_{(0,0)} \rho} $ is $72$, and for $h \neq 0$ the dimensions of $\mathcal{A}_{{}_{(0,h)} \rho} $ and $\mathcal{A}_{ {}_{(0,0)} \rho, {}_{(0,h)} \rho} $ are $56$ and $48$, respectively. On the other hand, the dimensions of the subalgebras of $\mathcal{A}_{{}_{(0,0)} \rho} $, $\mathcal{A}_{{}_{(0,h)} \rho} $, and $\mathcal{A}_{ {}_{(0,0)} \rho, {}_{(0,h)} \rho} $ which are supported by the sum of the $16$ minimal central projections computed above are $48$, $32 $, and $24$, respectively. That means that the orthogonal part of $1_{{}_{(0,h)} \rho} $ supports a $24$-dimensional subalgebra for each $h$, and these subalgebras are unitarily equivalent in the tube algebra.
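Explicitly, the dimension count for these orthogonal subalgebras is $$72-48=24, \qquad 56-32=24, \qquad 48-24=24,$$ for $\mathcal{A}_{{}_{(0,0)} \rho} $, $\mathcal{A}_{{}_{(0,h)} \rho} $ with $h \neq 0 $, and $\mathcal{A}_{ {}_{(0,0)} \rho, {}_{(0,h)} \rho} $, respectively.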
As in the case for $\mathcal{C}_{G,A,\epsilon} $, we find that for each $h \in G$ the minimal polynomial of $\textbf{t}_{(0,h)}$ with respect to $\mathcal{B}_{{}_{(0,h)} \rho} $ is $$q(x)=(x^2-1)(x^2-e^{\frac{4 \pi i}{5} })(x^2-e^{\frac{6 \pi i}{5} }) .$$ We let $q_{\zeta}(x)=\frac{q(x)}{x -\zeta} $, and $p^{\zeta}_h=\frac{q_{\zeta}(\textbf{t}_{(0,h)}) }{q_{\zeta}(\zeta)} $ for $\zeta \in \{ \pm e^{\frac{2 \pi i}{5}} , \pm e^{\frac{3 \pi i}{5}} \} $. We find that each $p^{\zeta}_h $ is a rank three projection, which is a minimal central projection in $\mathcal{A}_{{}_{(0,h)} \rho} $ iff $\zeta \in \{ - e^{\frac{2 \pi i}{5}} , e^{\frac{3 \pi i}{5}} \} $. Then to finish writing down the minimal central projections of the $0$-graded component, it remains only to match up the minimal components of $p^{\zeta}_h $ for $h=0$ and $h=a$ for each $\zeta \in \{ e^{\frac{2 \pi i}{5}} , -e^{\frac{3 \pi i}{5}} \} $. \begin{lemma} The object $\sum_{h \in G} \limits {}_{(0,h)} \rho $ has $6$ half-braidings and the object $3 \sum_{h \in G} \limits {}_{(0,h)} \rho$ has $2$ irreducible half-braidings. \end{lemma} Next we consider the $1$-graded component of the tube algebra. The analysis of the $2$-graded component will be identical. Here $\mathcal{A}_{(1,g)} $ is isomorphic to $\mathcal{A}_{(1,h)} $ and similarly $\mathcal{A}_{{}_{(1,g)} \rho} $ is isomorphic to $\mathcal{A}_{{}_{(1,h)} \rho} $ for all $g$ and $h$ in $G$, so it suffices to consider $\mathcal{A}_{(1,0)} $ and $\mathcal{A}_{{}_{(1,0)} \rho} $, which have dimensions $6$ and $54$ respectively. The intertwiner space $\mathcal{A}_{(1,0), {}_{(1,0)} \rho} $ has dimension $12$. The algebra $\mathcal{A}_{(1,0)} $ is Abelian, and the $\omega$-eigenspace of $\textbf{t}_{(1,0)} $ is $2$-dimensional for each cube root of unity $\omega $.
For each $\omega $, let $$r_{\omega}=\frac{1}{3} \sum_{i=0}^2 \omega^i ( (1,0 ) \; (i,0) | 1 |(i,0) \; (1,0) )$$ and let $$s_{\omega}=\frac{1}{3} \sum_{i=0}^2 \omega^i ( (1,0 ) \; {}_{(i,0)} \rho | 1 |{}_{(i,0)} \rho \; (1,0) ) .$$ Then $(s_{\omega})^2 =r_{\omega}+s_{\omega}$ for each $\omega $ and the six minimal projections of $\mathcal{A}_{(1,0)} $ are given by $$p(\omega)^0=\frac{5+\sqrt{5}}{10}r_{\omega}-\frac{1}{\sqrt{5}}s_{\omega} $$ and $$p(\omega)^1= \frac{5-\sqrt{5}}{10}r_{\omega}+\frac{1}{\sqrt{5}}s_{\omega} .$$ The $\textbf{t}_{(1,0)} $ eigenvalue of each $p(\omega)^{i} $ is $\omega $. We next check the intertwiner space $\mathcal{A}_{(1,0), {}_{(1,0)} \rho} $. \begin{lemma} \begin{enumerate} \item The vector space $p(\omega)^0\mathcal{A}_{(1,0), {}_{(1,0)} \rho} $ is $3$-dimensional and the vector space $p(\omega)^1\mathcal{A}_{(1,0), {}_{(1,0)} \rho} $ is $1$-dimensional for each $\omega $. \item Let $$J_{\omega}= \frac{1}{6}\sum_{i=0}^2 \limits \omega^i [ \sqrt{\frac{1+\sqrt{5}}{5} } ( (1,0) \; {}_{(i,0)} \rho | T_0 | {}_{(i,0)} \rho\; {}_{(1,0)} \rho ) $$ $$+\sum_{g \neq 0} \limits \sqrt{\frac{3-\sqrt{5}}{5} } ( (1,0) \; {}_{(i,g)} \rho | T_{\theta^i(g) +\theta^{1+i}(g)} | {}_{(i,g)} \rho\; {}_{(1,0)} \rho ) ] .$$ Then $J_{\omega}J_{\omega}^*=p(\omega)^1 $. \end{enumerate} \end{lemma} We can compute the minimal central projections in the tube algebra corresponding to $p(\omega)^1 $ using $J_{\omega} $. For the minimal central projections $p(\omega)^0 $, one can look for an orthonormal basis for $p(\omega)^0\mathcal{A}_{(1,0), {}_{(1,0)} \rho} $, but it is easier to just diagonalize $\textbf{t}_{{}_{(1,0)} \rho } $. By checking dimensions, we will see that the complement of $J_{\omega}^*J_{\omega} $ in the $\omega $-eigenspace of $\textbf{t}_{{}_{(1,0)} \rho } $ is the right support of $p(\omega)^0\mathcal{A}_{(1,0), {}_{(1,0)} \rho} $. 
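The splitting of $r_{\omega}$ into $p(\omega)^0$ and $p(\omega)^1$ uses only the relation $(s_{\omega})^2 = r_{\omega}+s_{\omega}$, with $r_{\omega}$ acting as the unit of the $2$-dimensional corner it generates with $s_{\omega}$; any $2\times 2$ model of that relation exhibits the same minimal projections. A quick numerical check (the concrete matrix chosen for $s$ is an illustrative stand-in satisfying the relation):

```python
import numpy as np

# r acts as the unit of the 2-dimensional corner algebra; s satisfies
# s^2 = r + s, so s has eigenvalues (1 +/- sqrt(5))/2.
r = np.eye(2)
s = np.array([[0.0, 1.0], [1.0, 1.0]])
assert np.allclose(s @ s, r + s)

sqrt5 = np.sqrt(5.0)
p0 = (5 + sqrt5) / 10 * r - s / sqrt5
p1 = (5 - sqrt5) / 10 * r + s / sqrt5

# p0, p1 are the two minimal projections: idempotent, mutually
# orthogonal, and summing to the unit r.
assert np.allclose(p0 @ p0, p0)
assert np.allclose(p1 @ p1, p1)
assert np.allclose(p0 @ p1, np.zeros((2, 2)))
assert np.allclose(p0 + p1, r)
```

The coefficients $(5\pm\sqrt{5})/10$ and $\pm 1/\sqrt{5}$ are forced: they are the unique solution making $a r + b s$ idempotent with the two eigenvalues of $s$ separated.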
\begin{lemma} The objects $\sum_{g \in G} \limits ( (1,g) + {}_{(1,g)} \rho ) $ and $\sum_{g \in G} \limits ( (1,g) + 3 {}_{(1,g)} \rho ) $ each have three irreducible half-braidings, whose $T$-eigenvalues are the cube roots of unity. Similarly the objects $\sum_{g \in G} \limits ( (2,g) + {}_{(2,g)} \rho ) $ and $\sum_{g \in G} \limits ( (2,g) + 3 {}_{(2,g)} \rho ) $ each have three irreducible half-braidings, whose $T$-eigenvalues are the cube roots of unity. \end{lemma} The orthogonal part of $\mathcal{A}_{{}_{(1,0)} \rho} $ has dimension $24$. We find that the minimal polynomial of $\textbf{t}_{{}_{(1,0)} \rho} $ is $$q(x)=(x^3+1) (x-e^{\frac{2 \pi i}{5}}) (x-e^{-\frac{2 \pi i}{5}}) (x-e^{\frac{4 \pi i}{15}}) (x-e^{-\frac{4 \pi i}{15}}) (x-e^{\frac{14 \pi i}{15}}) (x-e^{-\frac{14 \pi i}{15}}).$$ We compute the corresponding projections $p^{\zeta}_{(1,0)} $ for $\zeta \in \{ e^{\pm \frac{2 \pi i}{5}},e^{ \pm \frac{4 \pi i}{15}}, e^{\pm \frac{14 \pi i}{15} }\} $, which are all rank $2$ projections. \begin{lemma} The objects $2\sum_{g \in G} \limits {}_{(1,g)} \rho $ and $2\sum_{g \in G} \limits {}_{(2,g)} \rho $ each have six irreducible half-braidings, whose $T$-eigenvalues are $ \{ \pm e^{\frac{2 \pi i}{5}},\pm e^{\frac{4 \pi i}{15}},\pm e^{\frac{8 \pi i}{15} }\}$. \end{lemma} We now have formulas for the $48$ minimal central projections in the tube algebra ($24$ in the $0$-graded part and $12$ each in the other parts), and we can compute the $S$-matrix. Each entry can be expressed as an element of $\mathbb{Q}[\sqrt{5}] $ multiplied by a cube root of unity. \begin{theorem} For an appropriate ordering of the simple objects, the modular data is as follows. 
The $T$-matrix has diagonal vector $$(1,1,1,1,1,1,-1,-1,1,1,1,1,1,1,-1,-1,$$ $$e^{-\frac{2\pi i}{5}} ,e^{-\frac{2\pi i}{5}} ,e^{-\frac{2\pi i}{5}} ,e^{\frac{2\pi i}{5}} ,e^{\frac{2\pi i}{5}} ,e^{\frac{2\pi i}{5}},e^{\frac{3\pi i}{5}} ,e^{-\frac{3\pi i}{5}},1,1,1,1,\omega,\omega,\omega,\omega, \omega^2,\omega^2,\omega^2,\omega^2,$$ $$ e^{\frac{4\pi i}{5}},e^{\frac{4\pi i}{5}},e^{-\frac{14\pi i}{15}},e^{-\frac{14\pi i}{15}},e^{\frac{14\pi i}{15}},e^{\frac{14\pi i}{15}},e^{-\frac{4\pi i}{15}},e^{-\frac{4\pi i}{15}},e^{-\frac{2\pi i}{5}},e^{-\frac{2\pi i}{5}},e^{\frac{2\pi i}{5}},e^{\frac{2\pi i}{5}}) .$$ The $S$-matrix is given piecewise as follows: \begin{itemize} \item $S_{(1-8) \times (1-8)}$ is the matrix $$ \frac{1}{\Lambda}\left( \begin{array}{cccccccc} 1 & 1 & 1 &d^2&d^2&d^2&3d^2& 3 \\ 1 & 1 & 1 &d^2&d^2&d^2&3d^2& 3 \\ 1 & 1 & 1 &d^2&d^2&d^2&3d^2& 3 \\ d^2&d^2&d^2& 1 & 1 & 1 & 3 &3d^2\\ d^2&d^2&d^2& 1 & 1 & 1 & 3 &3d^2\\ d^2&d^2&d^2& 1 & 1 & 1 & 3 &3d^2\\ 3d^2&3d^2&3d^2& 3 & 3 & 3 & -3 &-3d^2\\ 3 & 3 & 3 &3d^2&3d^2&3d^2&-3d^2& -3 \\ \end{array} \right) $$ \item $S_{(1-8) \times (9-16)}=(S_{(9-16) \times (1-8)})^T$ is the matrix $$ \frac{1}{8} \left( \begin{array}{cccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ -1 & -1 & -1 & -1 & -1 & -1 & 3 & 3 \\ -1 & -1 & -1 & -1 & -1 & -1 & 3 & 3 \\ \end{array} \right) $$ \item $S_{(1-8) \times (17-24)}=(S_{(17-24) \times (1-8)})^T$ is the matrix $$ \frac{1}{6 \sqrt{5}}\left( \begin{array}{cccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 3 & 3 \\ 1 & 1 & 1 & 1 & 1 & 1 & 3 & 3 \\ 1 & 1 & 1 & 1 & 1 & 1 & 3 & 3 \\ -1 & -1 & -1 & -1 & -1 & -1 & -3 & -3 \\ -1 & -1 & -1 & -1 & -1 & -1 & -3 & -3 \\ -1 & -1 & -1 & -1 & -1 & -1 & -3 & -3 \\ -3 & -3 & -3 & -3 & -3 & -3 & 3 & 3 \\ 3 & 3 & 3 & 3 & 3 & 3 & -3 & -3 \\ \end{array} \right)$$ \item $S_{(1-6) \times (25-36)}=(S_{(25-36) \times 
(1-6)})^T$ is the matrix \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{3 \sqrt{5}}\left( \begin{array}{cccccccccccc} c_1 & c_1 & c_2 & c_2 & c_1 & c_1 & c_2 & c_2 & c_1 & c_1 & c_2 & c_2 \\ \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 \\ \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 \\ c_2 & c_2 & c_1 & c_1 & c_2 & c_2 & c_1 & c_1 & c_2 & c_2 & c_1 & c_1 \\ \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 \\ \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 \\ \end{array} \right)$ } \item $S_{(1-6) \times (37-48)}=(S_{(37-48) \times (1-6)})^T$ is the matrix \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{3 \sqrt{5}} \left( \begin{array}{cccccccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \omega & \omega ^2 & \omega & \omega ^2 & \omega & \omega ^2 & \omega & \omega ^2 & \omega & \omega ^2 & \omega & \omega ^2 \\ \omega ^2 & \omega & \omega ^2 & \omega & \omega ^2 & \omega & \omega ^2 & \omega & \omega ^2 & \omega & \omega ^2 & \omega \\ -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega ^2 \\ -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega \\ \end{array} \right)$ } \item $S_{(9-16) \times (9-16)}$ is the matrix $$ \frac{1}{8} \left( \begin{array}{cccccccc} 5 & -3 & -3 & 1 & 1 & 1 & -3 & 1 
\\ -3 & 5 & -3 & 1 & 1 & 1 & -3 & 1 \\ -3 & -3 & 5 & 1 & 1 & 1 & -3 & 1 \\ 1 & 1 & 1 & 5 & -3 & -3 & 1 & -3 \\ 1 & 1 & 1 & -3 & 5 & -3 & 1 & -3 \\ 1 & 1 & 1 & -3 & -3 & 5 & 1 & -3 \\ -3 & -3 & -3 & 1 & 1 & 1 & 1 & -3 \\ 1 & 1 & 1 & -3 & -3 & -3 & -3 & 1 \\ \end{array} \right)$$ \item $S_{(17-22) \times (17-22)}$ is the matrix $$\frac{1}{6 \sqrt{5}} \left( \begin{array}{cccccc} c_1 & c_1 & c_1 & c_3 & c_3 & c_3 \\ c_1 & c_1 & c_1 & c_3 & c_3 & c_3 \\ c_1 & c_1 & c_1 & c_3 & c_3 & c_3 \\ c_3 & c_3 & c_3 & c_1 & c_1 & c_1 \\ c_3 & c_3 & c_3 & c_1 & c_1 & c_1 \\ c_3 & c_3 & c_3 & c_1 & c_1 & c_1 \\ \end{array} \right)$$ \item $S_{(17-22) \times (25-36)}=(S_{(25-36) \times (17-22)})^T$ is the matrix \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{3 \sqrt{5}} \left( \begin{array}{cccccccccccc} -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 \\ -\omega ^2 & -\omega & \omega ^2 & \omega & -\omega ^2 & -\omega & \omega ^2 & \omega & -\omega ^2 & -\omega & \omega ^2 & \omega \\ -\omega & -\omega ^2 & \omega & \omega ^2 & -\omega & -\omega ^2 & \omega & \omega ^2 & -\omega & -\omega ^2 & \omega & \omega ^2 \\ -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 \\ -\omega ^2 & -\omega & \omega ^2 & \omega & -\omega ^2 & -\omega & \omega ^2 & \omega & -\omega ^2 & -\omega & \omega ^2 & \omega \\ -\omega & -\omega ^2 & \omega & \omega ^2 & -\omega & -\omega ^2 & \omega & \omega ^2 & -\omega & -\omega ^2 & \omega & \omega ^2 \\ \end{array} \right) $ } \item $S_{(17 -22) \times (37-48)}=(S_{(37-48) \times (17-22)})^T$ is the matrix \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{3\sqrt{5}}\left( \begin{array}{cccccccccccc} c_1 & c_1 & c_3 & c_3 & c_1 & c_1 & c_3 & c_3 & c_1 & c_1 & c_3 & c_3 \\ \omega ^2 c_1 & \omega c_1 & \omega ^2 c_3 & \omega c_3 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_3 & \omega c_3 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_3 & \omega c_3 \\ \omega c_1 & \omega ^2 c_1 & \omega c_3 & \omega ^2 c_3 & \omega 
c_1 & \omega ^2 c_1 & \omega c_3 & \omega ^2 c_3 & \omega c_1 & \omega ^2 c_1 & \omega c_3 & \omega ^2 c_3 \\ c_3 & c_3 & c_1 & c_1 & c_3 & c_3 & c_1 & c_1 & c_3 & c_3 & c_1 & c_1 \\ \omega ^2 c_3 & \omega c_3 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_3 & \omega c_3 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_3 & \omega c_3 & \omega ^2 c_1 & \omega c_1 \\ \omega c_3 & \omega ^2 c_3 & \omega c_1 & \omega ^2 c_1 & \omega c_3 & \omega ^2 c_3 & \omega c_1 & \omega ^2 c_1 & \omega c_3 & \omega ^2 c_3 & \omega c_1 & \omega ^2 c_1 \\ \end{array} \right)$ } \item $S_{(23-24) \times (17-24)}=(S_{(17-24) \times (23-24)})^T$ is the matrix $$ \frac{1}{2 \sqrt{5}} \left( \begin{array}{cccccccc} c_1 & c_1 & c_1 & c_3 & c_3 & c_3 & c_4 & c_2 \\ c_3 & c_3 & c_3 & c_1 & c_1 & c_1 & c_2 & c_4 \\ \end{array} \right)$$ \item $S_{(25 -36) \times (25-36)}$ is the matrix \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{3 \sqrt{5}}\left( \begin{array}{cccccccccccc} c_1 & c_1 & c_2 & c_2 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 & \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 \\ c_1 & c_1 & c_2 & c_2 & \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 \\ c_2 & c_2 & c_1 & c_1 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 & \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 \\ c_2 & c_2 & c_1 & c_1 & \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 \\ \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 & \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 & c_1 & c_1 & c_2 & c_2 \\ \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 & c_1 & c_1 & c_2 & c_2 \\ \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 & \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 & c_2 & c_2 & c_1 & c_1 \\ \omega c_2 & \omega ^2 c_2 & 
\omega c_1 & \omega ^2 c_1 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 & c_2 & c_2 & c_1 & c_1 \\ \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 & c_1 & c_1 & c_2 & c_2 & \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 \\ \omega ^2 c_1 & \omega c_1 & \omega ^2 c_2 & \omega c_2 & c_1 & c_1 & c_2 & c_2 & \omega c_1 & \omega ^2 c_1 & \omega c_2 & \omega ^2 c_2 \\ \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 & c_2 & c_2 & c_1 & c_1 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 \\ \omega ^2 c_2 & \omega c_2 & \omega ^2 c_1 & \omega c_1 & c_2 & c_2 & c_1 & c_1 & \omega c_2 & \omega ^2 c_2 & \omega c_1 & \omega ^2 c_1 \\ \end{array} \right)$ } \item $S_{(25-36) \times (37-48)}=(S_{(37-48) \times (25-36)})^T$ is the matrix \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{3 \sqrt{5}} \left( \begin{array}{cccccccccccc} \omega ^2 & \omega & \omega ^2 & \omega & \omega & \omega ^2 & \omega & \omega ^2 & 1 & 1 & 1 & 1 \\ \omega & \omega ^2 & \omega & \omega ^2 & \omega ^2 & \omega & \omega ^2 & \omega & 1 & 1 & 1 & 1 \\ -\omega ^2 & -\omega & -\omega ^2 & -\omega & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -1 & -1 & -1 & -1 \\ -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega ^2 & -\omega & -\omega ^2 & -\omega & -1 & -1 & -1 & -1 \\ \omega & \omega ^2 & \omega & \omega ^2 & 1 & 1 & 1 & 1 & \omega ^2 & \omega & \omega ^2 & \omega \\ \omega ^2 & \omega & \omega ^2 & \omega & 1 & 1 & 1 & 1 & \omega & \omega ^2 & \omega & \omega ^2 \\ -\omega & -\omega ^2 & -\omega & -\omega ^2 & -1 & -1 & -1 & -1 & -\omega ^2 & -\omega & -\omega ^2 & -\omega \\ -\omega ^2 & -\omega & -\omega ^2 & -\omega & -1 & -1 & -1 & -1 & -\omega & -\omega ^2 & -\omega & -\omega ^2 \\ 1 & 1 & 1 & 1 & \omega ^2 & \omega & \omega ^2 & \omega & \omega & \omega ^2 & \omega & \omega ^2 \\ 1 & 1 & 1 & 1 & \omega & \omega ^2 & \omega & \omega ^2 & \omega ^2 & \omega & \omega ^2 & \omega \\ -1 & -1 & -1 & -1 & -\omega ^2 & -\omega & 
-\omega ^2 & -\omega & -\omega & -\omega ^2 & -\omega & -\omega ^2 \\ -1 & -1 & -1 & -1 & -\omega & -\omega ^2 & -\omega & -\omega ^2 & -\omega ^2 & -\omega & -\omega ^2 & -\omega \\ \end{array} \right)$ } \item $S_{(37-48) \times (37-48)}$ is the matrix \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{3 \sqrt{5}} \left( \begin{array}{cccccccccccc} \omega c_4 & \omega ^2 c_4 & \omega c_2 & \omega ^2 c_2 & c_4 & c_4 & c_2 & c_2 & \omega ^2 c_4 & \omega c_4 & \omega ^2 c_2 & \omega c_2 \\ \omega ^2 c_4 & \omega c_4 & \omega ^2 c_2 & \omega c_2 & c_4 & c_4 & c_2 & c_2 & \omega c_4 & \omega ^2 c_4 & \omega c_2 & \omega ^2 c_2 \\ \omega c_2 & \omega ^2 c_2 & \omega c_4 & \omega ^2 c_4 & c_2 & c_2 & c_4 & c_4 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_4 & \omega c_4 \\ \omega ^2 c_2 & \omega c_2 & \omega ^2 c_4 & \omega c_4 & c_2 & c_2 & c_4 & c_4 & \omega c_2 & \omega ^2 c_2 & \omega c_4 & \omega ^2 c_4 \\ c_4 & c_4 & c_2 & c_2 & \omega ^2 c_4 & \omega c_4 & \omega ^2 c_2 & \omega c_2 & \omega c_4 & \omega ^2 c_4 & \omega c_2 & \omega ^2 c_2 \\ c_4 & c_4 & c_2 & c_2 & \omega c_4 & \omega ^2 c_4 & \omega c_2 & \omega ^2 c_2 & \omega ^2 c_4 & \omega c_4 & \omega ^2 c_2 & \omega c_2 \\ c_2 & c_2 & c_4 & c_4 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_4 & \omega c_4 & \omega c_2 & \omega ^2 c_2 & \omega c_4 & \omega ^2 c_4 \\ c_2 & c_2 & c_4 & c_4 & \omega c_2 & \omega ^2 c_2 & \omega c_4 & \omega ^2 c_4 & \omega ^2 c_2 & \omega c_2 & \omega ^2 c_4 & \omega c_4 \\ \omega ^2 c_4 & \omega c_4 & \omega ^2 c_2 & \omega c_2 & \omega c_4 & \omega ^2 c_4 & \omega c_2 & \omega ^2 c_2 & c_4 & c_4 & c_2 & c_2 \\ \omega c_4 & \omega ^2 c_4 & \omega c_2 & \omega ^2 c_2 & \omega ^2 c_4 & \omega c_4 & \omega ^2 c_2 & \omega c_2 & c_4 & c_4 & c_2 & c_2 \\ \omega ^2 c_2 & \omega c_2 & \omega ^2 c_4 & \omega c_4 & \omega c_2 & \omega ^2 c_2 & \omega c_4 & \omega ^2 c_4 & c_2 & c_2 & c_4 & c_4 \\ \omega c_2 & \omega ^2 c_2 & \omega c_4 & \omega ^2 c_4 & \omega ^2 c_2 & 
\omega c_2 & \omega ^2 c_4 & \omega c_4 & c_2 & c_2 & c_4 & c_4 \\ \end{array} \right)$ } \end{itemize} All other entries are $0$. \end{theorem} \section{The quantum double of a $\mathbb{Z}/2\mathbb{Z} $ de-equivariantization of a generalized Haagerup category} In this section we consider $\mathbb{Z}/2\mathbb{Z} $-de-equivariantizations of generalized Haagerup subfactors via the orbifold construction of Section 2. It is shown in \cite{AHcat} that the even parts of the Asaeda-Haagerup subfactor with index $\frac{5+\sqrt{17}}{2} $ and principal graph \centerline{ \includegraphics[width=2in]{AHpg_unlabeled.eps} } are Morita equivalent to a de-equivariantization of a generalized Haagerup category for $G=\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. This allows for the computation of the quantum double, which had been an open problem since the discovery of the Asaeda-Haagerup subfactor in the 1990s \cite{MR1686551}. There is also a de-equivariantization of the generalized Haagerup category for $G=\mathbb{Z}/8\mathbb{Z}$ which has the same fusion rules as the de-equivariantization of the $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ category. We can compute the modular data of this category as well, which is very similar to that of the Asaeda-Haagerup categories. Another interesting example of a $\mathbb{Z}/2\mathbb{Z} $-de-equivariantization is that of the generalized Haagerup subfactor for $G=\mathbb{Z}/4\mathbb{Z} $. Here one gets the even part of a subfactor with index $3+\sqrt{5} $ and principal graph \centerline{ \includegraphics[width=1.5in]{2d2.eps} .} We compute the modular data for this category as well, which has rank $10$. 
\subsection{The tube algebra of the de-equivariantization.} Let $\mathcal{C}_{G,A,\epsilon} $ be given, with endomorphism $\rho $ and $\alpha_g, \ g\in G $ acting on a factor $M$ containing the Cuntz algebra $ \mathcal{O}_{|G|+1}$, and let $z\in G_2$ be given such that $\epsilon_z $ is a character satisfying $\epsilon_z(z)=1 $. Then we perform the orbifold construction to obtain the orbifold endomorphisms $\tilde{\alpha}_g $ and $\tilde{\rho}$ on $P=M\rtimes_{ \alpha_z} \mathbb{Z}/2\mathbb{Z} $, which is generated by $M $ and a unitary $\lambda $ implementing $\alpha_z $. By an abuse of notation, we will often suppress the tilde when referring to orbifold endomorphisms of $P$. Thus whether $\rho $, for example, should be considered as an endomorphism of $M$ or of $P$ will be determined by context. We choose a subset $G_0 \subset G $ of representatives of the $\{0,z\} $-cosets of $G$. Let $\pi: G \rightarrow G_0 $ be the projection function which sends $g \in G $ to the representative of $g+\{0,z \} $ in $G_0$. We also define $w:G \rightarrow \{0,1\}$ by $$w(g)= \begin{cases} 1, & \text{if }g \notin G_0 \\ 0, & \text{if }g \in G_0 \end{cases}.$$ When writing expressions for elements of $G$, it will be useful to have variables which take values in $\{0,z\} $. We use subscripts for such variables, e.g. $z_0,z_1,z_2$. Also, if $z_i \in \{0,z\}$, we set $z'_i =\delta_{z_i,z}$. Let $$\Delta=\{ \tilde{\alpha}_g \}_{g \in G_0} \cup \{ \tilde{\alpha}_g \tilde{\rho} \}_{g \in G_0} .$$ We introduce a basis for Tube $\Delta $ as follows. 
Let $$\mathcal{B}_G=\{ (g\;k | 1 |k \; g ) \}_{g,k \in G_0} \cup \{ (g\;{}_k \rho| \lambda^{w(-g)} |{}_k \rho \; \pi(-g) ) \}_{g,k \in G_0} ,$$ $$\mathcal{B}_{G,{}_G\rho} =\{ (g \; {}_k \rho| T_{2k+g-h+z_0} \lambda^{z'_0}| {}_k \rho \; {}_h \rho) \}_{ g,h,k \in G_0, \ z_0 \in \{ 0,z\}},$$ $$\mathcal{B}_{{}_G\rho,G} =\{ ({}_h \rho \; {}_k \rho |T_{h-g+z_0}^* \lambda^{z'_0}| {}_k \rho \; g) \}_{ g,h,k \in G_0, \ z_0 \in \{ 0,z\}},$$ $$ \mathcal{B}_{{}_G\rho} =\{ ({}_{h_1} \rho \; {}_{k} \rho |T_{k-h_2+g+z_2}T^*_{h_1-k+g+z_1} \lambda^{z'_1+z'_2}|{}_{k} \rho \; {}_{h_2} \rho ) \}_{ h_1,h_2,k,g \in G_0, \ z_1,z_2 \in \{ 0,z\} } ,$$ $$\cup \{ ({}_{h} \rho \; {}_{k} \rho |SS^* \lambda^{w(2k-h)}|{}_{k} \rho \; {}_{\pi(2k- h)} \rho ) \}_{h, k \in G_0} $$ $$\cup \{ ( {}_{h} \rho \; k | \lambda^{w(h-2k)} | k \; {}_{\pi(h-2k)} \rho )\}_{ h, k \in G_0} .$$ Then $$\mathcal{B}= \mathcal{B}_{G} \cup \mathcal{B}_{G,{}_G\rho} \cup \mathcal{B}_{{}_G\rho, G} \cup \mathcal{B}_{{}_G\rho}$$ is a basis for Tube $ \Delta$. We can compute the multiplication and involution for Tube $\Delta $ in terms of the basis $\mathcal{B} $, using (\ref{tubemult})-(\ref{tubeinv}), (\ref{e11})-(\ref{e13}), and the properties of the orbifold construction. \begin{lemma} We have \begin{enumerate} \item $ (\tilde{\alpha}_g,\tilde{\alpha}_h)=(\tilde{\alpha}_g \tilde{\rho}, \tilde{\alpha}_h \tilde{\rho})=\delta_{g,h} \mathbb{C}+\delta_{g,h+z}\mathbb{C} \lambda $ \item $(\tilde{\alpha}_g,\tilde{\alpha}_h \tilde{\rho}^2 )=\delta_{g,h} \mathbb{C}S+\delta_{g,h+z}\mathbb{C} S\lambda $ \item $(\tilde{\alpha}_g \tilde{\rho},\tilde{\alpha}_h \tilde{\rho}^2 )=\mathbb{C}T_{g+h}+\mathbb{C}T_{g+h+z} \lambda$ \item $S\lambda=\lambda S $, \quad $T_g \lambda = \epsilon_z(g) \lambda T_g $ \end{enumerate} \end{lemma} To compute the tube algebra multiplication, we need to choose an orthonormal basis of isometries for each $(\nu, \zeta \zeta'), \ \nu,\zeta,\zeta' \in \Delta $. 
Unlike for a regular generalized Haagerup category, $\Delta $ is not closed under tensor product with invertible objects. There are three cases we need to consider. If $\zeta \zeta' = \tilde{\alpha}_g$, then we take $$ T^{\tilde{\alpha}_{\pi(g)}}_{\zeta,\zeta'}=\lambda^{w(g)} .$$ Similarly, if $\zeta \zeta' = \tilde{\alpha}_g\tilde{\rho} $, then we take $$T^{\tilde{\alpha}_{\pi(g)}\tilde{\rho}}_{\zeta,\zeta'}=\lambda^{w(g)} .$$ Finally, if $\zeta \zeta' = \tilde{\alpha}_g \tilde{\rho}^2 $, then we take $$T^{\tilde{\alpha}_{\pi(g)} }_{\zeta,\zeta'}=S\lambda^{w(g)} $$ and $$(T^{\tilde{\alpha}_{h} \tilde{\rho}}_{\zeta,\zeta'})_1=T_{h+g}, \quad (T^{\tilde{\alpha}_{h} \tilde{\rho}}_{\zeta,\zeta'})_2=T_{h+g+z} \lambda.$$ We can now compute the multiplication rules for the tube algebra, which are somewhat complicated by the presence of $ \lambda$. \begin{lemma} The adjoint operation on $ \mathcal{B}_G$ and $\mathcal{B}_{G,{}_G\rho}$ is as follows. \begin{enumerate} \item $$(g \; k | 1 | k \; g )^* = \epsilon_z({g \cdot w(-k)})(g \; \pi(-k) | 1 | \pi(-k) \; g ) $$ \item $$(g\;{}_k \rho | \lambda^{w(-g)} |{}_k \rho \; \pi(-g) )^*= \epsilon_z({k \cdot w(-g)})(\pi(-g)\;{}_k \rho | \lambda^{w(-g)} |{}_k \rho \; g ) $$ \item $$ (g\; {}_{k} \rho| T_{2k+g-h+z_1} \lambda^{z'_1}| {}_k \rho \; {}_{h} \rho)^* =\epsilon_z({k z'_1}) \epsilon_{-k-g+h+z_1}(g-h+2k+z_1)$$ $$({}_{h} \rho \; {}_{k} \rho | \lambda^{z'_1}T_{h-g+z_1}^*| {}_{k} \rho \; g) $$ \end{enumerate} \end{lemma} \begin{lemma}\label{Gmult} Multiplication among elements of $ \mathcal{B}_{G} $ is as follows: \begin{enumerate} \item $(g\;k_1 | 1 |k_1 \; g )(g\;k_2 | 1 |k_2 \; g )=$ $$\epsilon_z(g \cdot w(k_1+k_2))(g\; \pi(k_1+ k_2) | 1 | \pi(k_1+k_2) \; g )$$ \item $(g \;k_1 | 1 |k_1 \; g )(g \;{}_{k_2} \rho | \lambda^{w(-g)} |{}_{k_2} \rho \; \pi(-g))=$ $$\epsilon_z(g\cdot w(k_1+k_2)+k_1 \cdot w(-g))(g\;{}_{\pi(k_1+ k_2) } \rho | \lambda^{w(-g)} |{}_{\pi(k_1+ k_2)} \rho \; \pi(-g))$$ \item $(g \;{}_{k_1} \rho | \lambda^{w(-g)} 
|{}_{k_1} \rho \; \pi(- g))(\pi(-g) \;k_2 | 1 |k_2 \; \pi(-g) )=$$ $$ \epsilon_z({g\cdot w(k_1-k_2)})(g \;{}_{\pi(k_1- k_2)} \rho | \lambda^{w(-g)} |{}_{\pi(k_1- k_2)} \rho \; \pi(-g))$$ \item $(g\;{}_{k_1} \rho | \lambda^{w(-g)} |{}_{k_1} \rho \; \pi(- g)) (\pi(-g) \; {}_{k_2} \rho | \lambda^{w(-g)} |{}_{k_2} \rho \; g )=$ $$ \epsilon_z({k_1 \cdot w(-g)}) [[\epsilon_z({g\cdot w(k_1-k_2)})(g\; \pi(k_1- k_2) | 1 | \pi(k_1- k_2) \; g ) +$$ $$ \delta_{g,0} \sum_{r \in G} \epsilon_z(g \cdot w(r) )\epsilon_{g}(r+k_1-k_2)(g \; {}_{\pi(r)} \rho | 1| {}_{\pi(r)} \rho \; \pi(-g) )]]$$ \end{enumerate} \end{lemma} \begin{lemma}\label{GGGrhomult} Multiplication on $\mathcal{B}_G \times \mathcal{B}_{G,{}_G\rho} $ is as follows: \begin{enumerate} \item $(g \; k_1 | 1| k_1 \; g) \cdot (g \; {}_{k_2} \rho | T_{g+2k_2-h+z_1}\lambda^{z'_1} | {}_{k_2} \rho \; {}_h \rho )= $ \begin{multline*} \epsilon_z(h\cdot w(k_1+k_2)+k_1 z'_1) \epsilon_{k_1}(g-h+2k_2+z_1)\\ (g \; {}_{\pi(k_1+ k_2)} \rho | T_{g+2k_1+2k_2-h+z_1}\lambda^{z'_1} | {}_{\pi(k_1+ k_2)} \rho \; {}_h \rho ) \end{multline*} \item $(g_1 \; {}_{k_1} \rho | \lambda^{w(-g_1)}| {}_{k_1} \rho \; g_2) \cdot (g_2 \; {}_{k_2} \rho | T_{g_2+2k_2-h+z_1} \lambda^{z'_1} | {}_{k_2} \rho \; {}_h \rho )=$ $$\sum_{r \in G} \limits \epsilon_z(k_1z'_1+(r+k_1+k_2)(z'_1+w(-g_1))) $$ $$\epsilon_{k_1-g_2-2k_2+h-z_1}(g_2+2k_2-h+z_1) $$ $$ \epsilon_z((g_1+g_2+h)\cdot w(r) )\epsilon_{g_1}(r+k_1-k_2) $$ $$A_{2k_1-g_2-2k_2+h-z_1}(r-k_1+k_2+g_2-h+z_1,r-k_1+k_2+g_2-h+2g_1+z_1)$$ $$ (g_1 \; {}_{\pi(r)} \rho | T_{2r+2g_1+g_2-h+z_1}\lambda^{z'_1+w(-g_1)} | {}_{\pi(r)} \rho \; {}_h \rho )$$ \end{enumerate} \end{lemma} \begin{lemma}\label{GGrhoGrhoGmult} Multiplication on $\mathcal{B}_{G,{}_G\rho} \times \mathcal{B}_{{}_G\rho,G} $ is as follows: $(g_1 \; {}_{k_1} \rho| T_{2k_1+g_1-h+z_1} \lambda^{z'_1}| {}_{k_1} \rho \; {}_{h} \rho) \cdot ({}_{h} \rho \; {}_{k_2} \rho |\lambda^{z'_2}T_{h-g_2+z_2}^*| {}_{k_2} \rho \; g_2)=$ $$\displaystyle{\epsilon_z({k_1 z'_2} ) 
\epsilon_{k_1-h+g_2-z_2}(h-g_2+z_2)}$$ $$[\delta_{g_1-g_2,z_1+z_2}\epsilon_z(g_1 w(k_1-k_2))$$ $$( g_1 \; \pi(k_1- k_2) | \lambda^{z'_1+z'_2} | \pi(k_1- k_2)\; g_2 )$$ $$+ \delta_{g_1+g_2,z_1+z_2} \sum_{r \in G} \epsilon_z({(r+k_1+k_2)(z'_1+z'_2)})$$ $$ \epsilon_z({g_1 \cdot w(r) }) \epsilon_{g_1}(r+k_1-k_2) $$ $$ A_{2k_1-h+g_2-z_2} (r-k_1-k_2+h-g_2+z_2,g_1-g_2+z_1+z_2)$$ $$(g_1 \; {}_{\pi(r)} \rho | \lambda^{z'_1+z'_2} | {}_{\pi(r)} \rho \; g_2 )]$$ \end{lemma} \begin{lemma}\label{GrhoGGGrhomult} Multiplication on $\mathcal{B}_{{}_G\rho,G} \times \mathcal{B}_{G,{}_G\rho} $ is as follows: $({}_{h_1} \rho \; {}_{k_1} \rho |\lambda^{z'_1}T_{h_1-g+z_1}^*| {}_{k_1} \rho \; g)\cdot (g \; {}_{k_2} \rho| T_{2k_2+g-h_2+z_2} \lambda^{z'_2}| {}_{k_2} \rho \; {}_{h_2} \rho) =$ $$\epsilon_z({k_1 z'_2+(g+h_1)(z'_1+z'_2)}) \epsilon_{k_1-2k_2-g+h_2-z_2}(2k_2+g-h_2+z_2)$$ $$[ \delta_{2k_2-2k_1+h_1-h_2,z_1+z_2}\epsilon_z({h_2 \cdot w(k_1-k_2)}) \frac{1}{d}$$ $$({}_{h_1} \rho \; \pi(k_1- k_2) | \lambda^{z'_1+z'_2} | \pi(k_1- k_2) \; {}_{h_2} \rho)$$ $$+\delta_{2k_1-2k_2-2g+h_1+h_2,z_1+z_2} $$ $$\epsilon_z({(g+h_1)(z'_1+z'_2)}+h_2 \cdot w(g+h_1+k_2-k_1))$$ $$\epsilon_{-g+z_1}(g+h_1+z_1)$$ $$({}_{h_1} \rho \; {}_{\pi(g+h_1+k_2-k_1)} \rho| SS^*\lambda^{z'_2+z'_1} | {}_{\pi(g+h_1+k_2-k_1)} \rho \; {}_{h_2} \rho)$$ $$+ \sum_{r,j \in G} \limits \epsilon_z({(r+k_1+k_2)(z'_1+z'_2)}+h_2 \cdot w(r)) \epsilon_{h_1-r-k_1+k_2}(r+k_1-k_2)$$ $$ A_{2k_1-2k_2-g+h_2+z_2}(r-k_1+k_2+g-h_2+z_2,2k_2-2k_1+h_1-h_2+z_1+z_2+j) $$ $$A_{2h_1-r-k_1+k_2}(-h_1-g+r+k_1-k_2+z_1,j)$$ $$({}_{h_1} \rho \; {}_{\pi(r)} \rho|T_{j+r-k_1+k_2+h_1-h_2+z_1+z_2} T^*_{j+2h_1-r-k_1+k_2} \lambda^{z'_2+z'_1} | {}_{\pi(r)} \rho \; {}_{h_2} \rho)]$$ \end{lemma} \begin{lemma}\label{Grhomult} Multiplication among elements of $ \mathcal{B}_{{}_G \rho} $ is given as follows: \begin{enumerate} \item $({}_{h_1} \rho \; k_1 |\lambda^{w(h_1-2k_1)}|k_1 \; {}_{h_2} \rho ) \cdot ({}_{h_2} \rho \; k_2 | \lambda^{w(h_2-2k_2)}|k_2 \;{}_{h_3} \rho ) =$ 
$$\epsilon_z({h_1 \cdot w(k_1+k_2) +k_1 \cdot w(h_2-2k_2 )}) $$ $$ ({}_{h_1} \rho \; \pi(k_1+k_2) | \lambda^{w(h_2-2k_2)+w(h_1-2k_1)}| \pi(k_1+ k_2) \; {}_{h_3} \rho)$$ \item $( {}_{h_1} \rho \; {}_{k_1} \rho | SS^* \lambda^{w(2k_1-h_1)} |{}_{k_1} \rho \; {}_{h_2} \rho ) $\\ $ \cdot ( {}_{h_2} \rho \; k_2 | \lambda^{w(h_2-2k_2)} | k_2 \; {}_{h_3} \rho ) =$ $$\epsilon_z({k_1w(h_2-2k_2)+h_1 \cdot w(k_1-k_2)} )$$ $$( {}_{h_1} \rho \; {}_{\pi(k_1-k_2)} \rho| SS^*\lambda^{w(h_2-2k_2)+w(2k_1-h_1)}| {}_{\pi(k_1-k_2)} \rho \; {}_{h_3} \rho) $$ \item $({}_{h_1} \rho \; k_1 | \lambda^{w(h_1-2k_1)} | k_1 \; {}_{h_2} \rho ) \cdot ({}_{h_2} \rho \; {}_{k_2} \rho | SS^* \lambda^{w(2k_2-h_2)}|{}_{k_2} \rho \; {}_{h_3} \rho ) =$ $$\epsilon_z({k_1w(2k_2-h_2)+h_1\cdot w(k_1+k_2)}) $$ $$( {}_{h_1} \rho \; {}_{\pi(k_1+ k_2)} \rho|SS^* \lambda^{w(h_1-2k_1)+w(2k_2-h_2)}| {}_{\pi(k_1+ k_2)} \rho \; {}_{h_3} \rho) $$ \item$( {}_{h_1} \rho \; k_1 | \lambda^{w(h_1-2k_1)}|k_1 \; {}_{h_2} \rho ) $\\ $ \cdot ({}_{h_2} \rho \; {}_{k_2} \rho |T_{k_2-h_3+g_2+z_2}T^*_{h_2-k_2+g_2+z_1} \lambda^{z'_1+z'_2}|{}_{k_2} \rho \; {}_{h_3} \rho ) = $ $$\epsilon_z({(h_1 +h_2+h_3) \cdot w(k_1+k_2)+k_1(z'_1+z'_2)}) $$ $$\epsilon_{k_1}(k_2-h_3+g_2+z_2) \epsilon_{k_1}(h_2-k_2+g_2+z_1) $$ $$({}_{h_1} \rho \; {}_{\pi(k_1+ k_2)} \rho |T_{2k_1+k_2-h_3+g_2+z_2}$$ $$T^*_{2k_1+h_2-k_2+g_2+z_1} \lambda^{z'_1+z'_2 +w(h_1-2k_1)} |{}_{\pi(k_1+ k_2)} \rho \; {}_{h_3} \rho ) $$ \item $({}_{h_1} \rho \; {}_{k_1} \rho |T_{k_1-h_2+g_1+z_2}T^*_{h_1-k_1+g_1+z_1} \lambda^{z'_1+z'_2}|{}_{k_1} \rho \; {}_{h_2} \rho ) $ \\ $ \cdot ( {}_{h_2} \rho \; k_2 | \lambda^{w(h_2-2k_2)} |k_2 \; {}_{h_3} \rho ) =$ $$\epsilon_z({(k_1+h_1+h_2)\cdot w(h_2-2k_2)+h_2 \cdot w(k_1-k_2)}) $$ $$({}_{h_1} \rho \; {}_{\pi(k_1-k_2)} \rho | T_{k_1-h_2+g_1+z_2}$$ $$T^*_{h_1-k_1+g_1+z_1} \lambda^{z'_1+z'_2+w(h_2-2k_2)} |{}_{\pi(k_1- k_2)} \rho \; {}_{h_3} \rho ) $$ \item $( {}_{h_1} \rho \; {}_{k_1} \rho | SS^*\lambda^{w(2k_1-h_1)}|{}_{k_1} \rho 
\; {}_{h_2} \rho ) $\\ $ \cdot ( {}_{h_2} \rho \; {}_{k_2} \rho |SS^* \lambda^{w(2k_2-h_2)} |{}_{k_2} \rho \; {}_{h_3} \rho ) =$ $$\epsilon_z({k_1w(2k_2-h_2)}) $$ $$[\epsilon_z({h_1 \cdot w(k_1-k_2)} )\frac{1}{d^3}$$ $$({}_{h_1} \rho \; \pi(k_1- k_2) |\lambda^{w(2k_2-h_2)+w(2k_1-h_1)} | \pi(k_1-k_2) \; {}_{h_3} \rho ) $$ $$ +\sum_{r \in G} \limits \epsilon_z({(r+k_1-k_2)(w(2k_2-h_2)+w(2k_1-h_1))}+h_1 \cdot w(r))$$ $$\frac{1}{d^2} \epsilon_{ h_1-r-k_1+k_2}(r+k_1-k_2) $$ $$ ({}_{h_1} \rho \; {}_{\pi(r)} \rho | T_{r+k_1-k_2}T^*_{2h_1-r-k_1+k_2}\lambda^{w(2k_2-h_2)+w(2k_1-h_1)} |{}_{\pi(r)} \rho\; {}_{h_3} \rho ) ]$$ \item $ ( {}_{h_1} \rho \; {}_{k_1} \rho |T_{k_1-h_2+g_1+z_2}T^*_{h_1-k_1+g_1+z_1}\lambda^{z'_1+z'_2} |{}_{k_1} \rho \; {}_{h_2} \rho ) $\\ $\cdot ({}_{h_2} \rho \; {}_{k_2} \rho |SS^* \lambda^{w(2k_2-h_2)}|{}_{k_2} \rho \; {}_{\pi(2k_2-h_2)} \rho )=$ $$\epsilon_z({(k_1+h_1+h_2 ) w(2k_2-h_2) })$$ $$[\delta_{2k_1-h_1-h_2,z_1+z_2} \epsilon_z({h_1 \cdot w( k_1-k_2) }) \frac{1}{d^2}$$ $$({}_{h_1} \rho \; \pi(k_1- k_2) | \lambda^{w(2k_2-h_2)+ z'_1+z'_2} | \pi(k_1- k_2) \; {}_{h_2} \rho )$$ $$ +\sum_{r \in G} \limits \epsilon_z({(r+k_1+k_2)(w(2k_2-h_2)+z'_1+z'_2 ) }+h_1 \cdot w(r)) $$ $$\epsilon_{h_1-r-k_1+k_2}(r+k_1-k_2) \frac{1}{d}$$ $$A_{2h_1-r-k_1+k_2}(g_1-h_1+r-k_2+z_1,2k_1-h_1-h_2+z_2-z_1)$$ $$({}_{h_1} \rho \; {}_{\pi(r)} \rho | T_{r+k_1-k_2}T^*_{-r+k_1+k_2+h_1-h_2+z_2-z_1}\lambda^{w(2k_2-h_2)+z'_1+z'_2} | {}_{\pi(r)} \rho \; {}_{h_2} \rho )] $$ \item $ ({}_{h_1} \rho \; {}_{k_1} \rho |SS^* \lambda^{w(2k_1-h_1)}|{}_{k_1} \rho \; {}_{h_2} \rho ) $\\ $ \cdot ( {}_{h_2} \rho \; {}_{k_2} \rho |T_{k_2-h_3+g_1+z_2}T^*_{h_2-k_2+g_1+z_1}\lambda^{z'_1+z'_2} |{}_{k_2} \rho \; {}_{h_3} \rho ) =$ $$ \epsilon_z({k_1(z'_1+z'_2)})$$ $$\epsilon_{k_1-k_2+h_3-g_1-z_2}(k_2-h_3+g_1+z_2) \epsilon_{k_1-h_2+k_2-g_1-z_1}(h_2-k_2+g_1+z_1) $$ $$[\delta_{2k_2-h_2-h_3,z_1+z_2}\epsilon_z({(h_1+h_2+h_3)w(k_1-k_2)})\frac{1}{d^2} $$ $$ ({}_{h_1} \rho \; \pi(k_1- k_2) 
|\lambda^{w(2k_1-h_1)+z'_1+z'_2} | \pi(k_1-k_2) \; {}_{h_3} \rho ) $$ $$ +\sum_{r \in G} \limits \epsilon_z({(r+k_1-k_2)(z'_1+z'_2+w(2k_1-h_1))}+(h_1+h_2+h_3)\cdot w(r))$$ $$\epsilon_{h_1-r-k_1+k_2}(r+k_1-k_2) \frac{1}{d}$$ $$A_{2k_1-k_2+h_3-g_1-z_2}(r-k_1-h_3+g_1+z_2,2k_2-h_3-h_2+z_2-z_1) $$ $$ ({}_{h_1} \rho \; {}_{\pi(r)} \rho | T_{r+k_1+k_2-h_2-h_3+z_2-z_1}$$ $$T^*_{-r+2h_1-k_1+k_2}\lambda^{z'_1+z'_2+w(2k_1-h_1)} |{}_{\pi(r)} \rho\; {}_{h_3} \rho ) ]$$ \item $$( {}_{h_1} \rho \; {}_{k_1} \rho |T_{k_1-h_2+g_1+z_2}T^*_{h_1-k_1+g_1+z_1}\lambda^{z'_1+z'_2} |{}_{k_1} \rho \; {}_{h_2} \rho ) $$ $$ \cdot ( {}_{h_2} \rho \; {}_{k_2} \rho |T_{k_2-h_3+g_2+z_4}T^*_{h_2-k_2+g_2+z_3} \lambda^{z'_3+z'_4} |{}_{k_2} \rho \; {}_{h_3} \rho ) =$$ $$ \epsilon_z({(k_1+h_1+h_2)(z'_3+z'_4)}) $$ $$\epsilon_{k_1-h_2+k_2-g_2-z_3}(h_2-k_2+g_2+z_3) \epsilon_{k_1-k_2+h_3-g_2-z_4}(k_2-h_3+g_2+z_4) $$ $$[\delta_{2k_1-2k_2+h_3-h_1,z_1+z_2+z_3+z_4}\epsilon_z({h_1 \cdot w(k_1-k_2)}) \frac{1}{d} $$ $$A_{2k_1-h_2+k_2-g_2-z_3}(h_2+h_3-2k_2+z_3-z_4,g_1+g_2-k_1-k_2+h_2-h_2+z_3+z_2) $$ $$( {}_{h_1} \rho \; \pi(k_1-k_2) | \lambda^{z'_1+z'_2+z'_3+z'_4} | \pi(k_1- k_2) \; {}_{h_3} \rho ) $$ $$+\delta_{k_1+k_2-g_1-g_2,z_2+z_3} \delta_{k_1-k_2+g_1-g_2+h_3-h_1,z_1+z_4} $$ $$\epsilon_z(h_1 \cdot w(k_2+h_1-g_1+z_1 ) ) $$ $$\epsilon_{-k_1+g_1-z_1}(h_1+k_1-g_1+z_1) $$ $$( {}_{h_1} \rho \; {}_{\pi(k_2+h_1-g_1)} \rho |SS^*\lambda^{z'_1+z'_2+z'_3+z'_4} | {}_{\pi(k_2+h_1-g_1)} \rho \; {}_{h_3} \rho ) $$ $$+\sum_{j,r \in G} \limits \epsilon_z({(r+k_1+k_2)(z'_1+z'_2+z'_3+z'_4)}+h_3 \cdot w(r))$$ $$ \epsilon_{h_1-r-k_1+k_2}(r+k_1-k_2)$$ $$A_{2k_1-k_2+h_3-g_2+z_4}(r-k_1-h_3+g_2+z_4,j+2k_2-h_3-h_2+z_4-z_3) $$ $$A_{2h_1-r-k_1+k_2} (g_1-h_1-k_2+r+z_1,j+2k_1-h_1-h_2+z_1-z_2) $$ $$A_{2k_1-h_2+k_2-g_2+z_3}(j,-k_1-k_2+g_1+g_2+z_2+z_3) $$ $$( {}_{h_1} \rho \; {}_{\pi(r)} \rho | T_{j+k_1+k_2-h_3-h_2+z_4-z_3+r}$$ $$ T^*_{j-r+k_1+k_2-h_2+h_1+z_2-z_1} \lambda^{z'_1+z'_2+z'_3+z'_4} | {}_{\pi(r)} \rho \; {}_{h_3} \rho )]$$ 
\end{enumerate} \end{lemma} \begin{lemma} The action of $S_0$ on $\mathcal{B} $ is given as follows: \begin{enumerate} \item $S_0[ (g \; k| 1 | k \; g ) ] =$ $$\epsilon_z({k+(g+k) \cdot w(-k)})(\pi(-k) \; g | 1 | g \; \pi(-k) ) $$ \item $S_0[(g \; {}_k \rho | \lambda^{w(-g)} |{}_k \rho \; g ) ]=$ $$\frac{1}{d}\epsilon_z({k \cdot w (-g)})({}_k \rho \; g | \lambda^{w(-g)} | g \; {}_k \rho) $$ \item $S_0[( {}_{h} \rho \; k| \lambda^{w(h-2k)} | k \; {}_{h} \rho )] =$ $$d \cdot \epsilon_z({k+h \cdot w(-k)+k \cdot (w(h-2k)+w(-k))})$$ $$(\pi(-k) \; {}_{h} \rho |\lambda^{w(h-2k)}|{}_{h} \rho \; \pi(-k) )$$ \item $S_0[( {}_{h} \rho \; {}_{k} \rho |SS^*\lambda^{w(2k-h)} |{}_{k} \rho \; {}_{h} \rho )] =$ $$\frac{1}{d}\epsilon_z({k w(2k-h)})[( {}_{k} \rho \; {}_{h} \rho |SS^*\lambda^{w(2k-h)}|{}_{h} \rho \; {}_{k} \rho )$$ $$+ \sum_{j \in G} \limits ( {}_{k} \rho \; {}_{h} \rho |T_jT^*_j\lambda^{w(2k-h)}|{}_{h} \rho \; {}_{k} \rho )]$$ \item $S_0[( {}_{h} \rho \; {}_{k} \rho |T_{k-h+g+z_2}T^*_{h-k+g+z_1}\lambda^{z'_1+z'_2} |{}_{k} \rho \; {}_{h} \rho ) ] =$ $$\epsilon_z({k(z'_1+z'_2)}) \epsilon_{-k}(k-h+g+z_2)\epsilon_{-k}(h-k+g+z_1)$$ $$\epsilon_{-k-h+g+z_2}(k+h-g+z_2) \epsilon_{h-3k+g+z_1}(-h+3k-g+z_1) $$ $$[\delta_{2k-2h,z_1+z_2} ( {}_{k} \rho \; {}_{h} \rho | SS^*\lambda^{z'_1+z'_2} |{}_{h} \rho \; {}_{k} \rho )$$ $$+ \sum_{j \in G} \limits A_{-(h-3k+g+z_1)}( 2h-2k+z_1-z_2,j ) $$ $$( {}_{k} \rho \; {}_{h} \rho | T_{j-(-k-h+g+z_2)} T^*_{j-(h-3k+g+z_1)}\lambda^{z'_1+z'_2} |{}_{h} \rho \; {}_{k} \rho )]$$ \end{enumerate} \end{lemma} For fixed $h,g \in G_0 $, let $$u_{h,g}=( {}_h \rho \; g| \lambda^{w(h-2g)} | g \; {}_{\pi(h-2g)} \rho ) .$$ Then $$u_{h,g}^*=\epsilon_z(h\cdot w(-g)+g\cdot w(h-2g)) $$ $$( {}_{\pi(h-2g)} \rho \; \pi(-g)| \lambda^{w(h-2g)} | \pi(-g) \; {}_{h} \rho ) ,$$ and $$u_{h,g}u_{h,g}^*=1_{{}_h \rho} , \quad u_{h,g}^*u_{h,g}=1_{{}_{\pi(h-2g)} \rho}.$$ Therefore $1_{{}_h \rho} $ is equivalent to $1_{{}_{\pi(h-2g)} \rho} $ in the tube algebra, and
$M_{h,g}=Ad(u_{h,g}) $ maps $\mathcal{A}_{{}_h \rho} $ isomorphically onto $\mathcal{A}_{{}_{\pi(h-2g)} \rho} $. \subsection{Example: The Asaeda-Haagerup fusion categories} It was shown in \cite{AHcat} that the even parts of the Asaeda-Haagerup subfactor are Morita equivalent to the de-equivariantization of a generalized Haagerup category for $G=\mathbb{Z} /4 \mathbb{Z} \times \mathbb{Z} /2 \mathbb{Z} $ with the following structure constants. We order $G$ as follows: $$(0,0),(1,0),(2,0),(3,0),(0,1),(1,1),(2,1),(3,1) .$$ Set $$c=\frac{1}{4}(1-d+i\sqrt{10d-2}) , \quad f=\sqrt{\frac{1}{2}(d-1-i\sqrt{26d+2})}, $$ $$ g=\frac{1}{2}\sqrt{-3d-1+i\sqrt{50d+6} }, \quad h=\frac{1}{4}(d+3-i(\sqrt{2d-10})).$$ Define $G \times G$ matrices as follows: $$A=\frac{1}{d-1}\left( \begin{array}{cccccccc} d-2 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & c & c & -f & f & -g & -g \\ -1 & \bar{c} & -1 & c & i \sqrt{d} & h & -i \sqrt{d} & \bar{h} \\ -1 & \bar{c} & \bar{c} & -1 & -\bar{f} & -\bar{g} & \bar{g} & -\bar{f} \\ -1 & -\bar{f} & -i \sqrt{d} & -f & -1 & -f & i \sqrt{d} & -\bar{f} \\ -1 & \bar{f} & \bar{h} & -g & -\bar{f} & -1 & g & -\bar{h} \\ -1 & -\bar{g} & i \sqrt{d} & g & -i \sqrt{d} & \bar{g} & -1 & -g \\ -1 & -\bar{g} & h & -f & -f & -h & -\bar{g} & -1 \\ \end{array} \right)$$ $$ B_{(1,0)}=\left( \begin{array}{cccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 \\ 1 & 1 & 1 & 1 & 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 & -1 & 1 & 1 & -1 \\ 1 & 1 & 1 & -1 & 1 & 1 & 1 & -1 \\ 1 & 1 & -1 & 1 & 1 & 1 & 1 & -1 \\ 1 & -1 & 1 & 1 & 1 & 1 & 1 & -1 \\ 1 & -1 & -1 & -1 & -1 & -1 & -1 & 1 \\ \end{array} \right) $$ $$B_{(0,1)}=\left( \begin{array}{cccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & -1 & 1 & 1 & 1 & -1 \\ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & -1 & 1 & -1 & -1 & -1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & -1 & 1 & 1 & 1 & -1 \\ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & -1 & 1 & -1 & -1 & -1 \\ \end{array} \right)$$
$$B_{(1,1)}=\left( \begin{array}{cccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & 1 & 1 & 1 & 1 & -1 \\ 1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 \\ 1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 \\ 1 & 1 & 1 & -1 & 1 & 1 & 1 & -1 \\ 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 \\ 1 & 1 & -1 & 1 & 1 & -1 & -1 & -1 \\ 1 & -1 & -1 & 1 & -1 & -1 & -1 & -1 \\ \end{array} \right)$$ Set $\epsilon_{(0,1)}((a,b)) =1$ for all $(a,b)$. Set $\epsilon_{(1,0)} ((2,1) ) =\epsilon_{(1,0)}((3,1))=-1$ and $\epsilon_{(1,0)} ((a,b) )=1$ otherwise. Together with the cocycle relation $\epsilon_{h+k}(g)=\epsilon_h(g)\epsilon_k(g+2h) $, this determines $\epsilon$. Set $$A_{(0,0)}(h,k)=A(h,k), \quad A_{(1,0)}(h,k)=B_{(1,0)}(h,k) A(h,k), $$ $$\quad A_{(0,1)}(h,k)=B_{(0,1)}(h,k) A(h,k), \quad A_{(1,1)}(h,k)=B_{(1,1)}(h,k) A(h,k) $$ and use (\ref{a+2}) to define the remaining $A_g(h,k)$. \begin{remark} We emphasize that for a large section of the following computation, all that will be needed is $\epsilon$. \end{remark} Fix a representation of $\mathcal{C}_{G,A,\epsilon} $ on a factor $M$ containing the Cuntz algebra $\mathcal{O}_9 $ with generators $S$ and $T_g$, $g \in G$. Let $z=(0,1) \in G $. Then the category we are interested in is the orbifold category $(\mathcal{C}_{G,A,\epsilon} )_z$. Since $\epsilon_z $ is identically $1$, $\alpha_z $ acts trivially on the Cuntz algebra. Therefore the category generated by $\rho $ and $\alpha_g $ on the closure of the Cuntz algebra is already the orbifold category, and we may dispense with $\lambda $. We take $G_0= \mathbb{Z} /4 \mathbb{Z} \subset G$, so that $G_0$ is a subgroup and $\{ \alpha_g\}_{g \in G_0} $ is closed under composition. All the formulas in the previous section simplify greatly, since $\epsilon_z $ is identically $1$, $w $ is identically $0$ on $G_0$, and we discard powers of $\lambda $. Let $m=|G_0| =4$ and $\Lambda = m(1+d^2)$, the global dimension of the orbifold category.
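The seed values and the cocycle relation stated above fully determine $\epsilon$, and one can check mechanically that they do so consistently. The following short script is a sanity check of ours, not part of the paper's computation; the group law on $G = \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$, the doubling map $h \mapsto 2h$, and the seed values are exactly as stated above.

```python
# Build the cocycle eps_h(g) for G = Z/4 x Z/2 from the stated seeds using
# eps_{h+k}(g) = eps_h(g) * eps_k(g + 2h), then verify that this relation
# holds for every h, k, g (i.e. that the seeds determine eps consistently).

def add(x, y):
    return ((x[0] + y[0]) % 4, (x[1] + y[1]) % 2)

def dbl(x):
    return ((2 * x[0]) % 4, 0)  # 2h in Z/4 x Z/2; doubling kills the Z/2 part

G = [(a, b) for a in range(4) for b in range(2)]

eps = {}
eps[(0, 1)] = {g: 1 for g in G}                                  # eps_{(0,1)} trivial
eps[(1, 0)] = {g: -1 if g in [(2, 1), (3, 1)] else 1 for g in G}  # stated seeds
eps[(0, 0)] = {g: 1 for g in G}                                  # identity element

# extend along (1,0): eps_{h+(1,0)}(g) = eps_h(g) * eps_{(1,0)}(g + 2h)
for a in range(3):
    h = (a, 0)
    eps[(a + 1, 0)] = {g: eps[h][g] * eps[(1, 0)][add(g, dbl(h))] for g in G}
# extend along (0,1): eps_{h+(0,1)}(g) = eps_h(g), since eps_{(0,1)} is trivial
for a in range(4):
    eps[(a, 1)] = dict(eps[(a, 0)])

# consistency: the cocycle relation must hold for all triples
for h in G:
    for k in G:
        for g in G:
            assert eps[add(h, k)][g] == eps[h][g] * eps[k][add(g, dbl(h))]
```

In particular, the script confirms that $\epsilon_z = \epsilon_{(0,1)}$ is identically $1$, which is what allowed $\lambda$ to be discarded above.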
For $(g,\tau) \in G_0 \times \hat{G_0} $, define $$p(g,\tau)=\frac{1}{m}\sum_{h \in G_0} \tau(h) (g \; h|1| h \; g ) $$ $$E(g,\tau)=\frac{1}{m}\sum_{h \in G_0} \tau(h) (g \; {}_h \rho|1| {}_h \rho \; -g ) .$$ \begin{lemma} \begin{enumerate} \item The $p(g,\tau) $ are mutually orthogonal projections which sum to the identity of $\mathcal{A}_G $. \item We have $$E(g,\tau)E(g,\tau')^*=\delta_{\tau,\tau'}p(g,\tau) $$ unless $g=0$ and $\tau $ is the trivial character. \item $E(0,1)E(0,1)^* =p(0,1)+2mE(0,1) $, where the argument $1$ refers to the trivial character. \end{enumerate} \end{lemma} As before, we label the characters of $\mathbb{Z}/4\mathbb{Z} $ by their values on $1$. \begin{corollary} There are $14$ minimal central projections in $\mathcal{A}_G $: $$p(1,\tau)+p(3,\bar{\tau} ), \ \tau \in \{1,i,-1,-i \}$$ $$p(0,-1)^{\pm} = \frac{1}{2}(p(0,-1) \pm E(0,-1) ) $$ $$p(2,\tau)^{\pm} = \frac{1}{2}(p(2,\tau) \pm E(2,\tau) ), \tau \in \{-1,1\} $$ $$p(g,i)+p(g,-i), \ g \in \{ 0,2 \} $$ $$p(0)^0=\frac{m}{\Lambda}(p(0, 1) +d E(0,1) )$$ $$p(0)^1=\frac{m}{\Lambda}( \frac{\Lambda-m}{m} p(0,1) -d E(0,1) ).$$ \end{corollary} For a character $\tau \in \hat{G_0}$, $g,h\in G_0 $, and $z_0 \in \{ 0,z\} $, let $$J(\tau,g,h,z_0)=\frac{1}{4} \sum_{k \in G_0} \limits \tau(k) (g \; {}_{k} \rho| T_{2k+g-h+z_0}| {}_k \rho \; {}_{h} \rho) .$$ Let $$K(\tau,g,h,z_0) =J( \tau,g,h,z_0)J(\tau,g,h,z_0)^*.$$ Let $\mu=\frac{\Lambda}{\Lambda-4} $. Then we have the following tables for $K(\tau,g,h,z_0)$. 
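Before turning to the tables, the last two projections in the corollary can be checked numerically. The sketch below (ours) works in the span of $p = p(0,1)$ and $E = E(0,1)$, assuming $E$ is self-adjoint and dominated by $p$, so that the lemma's third item reads $E^2 = p + 2mE$; it also takes $d = 4+\sqrt{17}$, the larger root of $d^2 = 8d + 1$, as the dimension of $\rho$ — an assumption of ours, consistent with $\Lambda = m(1+d^2)$ and with the $\sqrt{17}$'s appearing in the modular data below.

```python
import math

# Check that p(0)^0 and p(0)^1 are orthogonal idempotents summing to p(0,1).
# An element x = a*p + b*E is represented by the pair (a, b), with the
# multiplication rules p^2 = p, pE = Ep = E, E^2 = p + 2m E (assumptions
# stated in the lead-in; the last is the lemma's relation for self-adjoint E).

m = 4
d = 4 + math.sqrt(17)        # assumed dim(rho): larger root of d^2 = 8d + 1
L = m * (1 + d**2)           # Lambda, the global dimension

def mul(x, y):
    a, b = x
    c, e = y
    # (a p + b E)(c p + e E) = (ac + be) p + (ae + bc + 2m be) E
    return (a * c + b * e, a * e + b * c + 2 * m * b * e)

p0 = (m / L, m / L * d)                     # p(0)^0 = (m/Lambda)(p + d E)
p1 = (m / L * (L - m) / m, -m / L * d)      # p(0)^1

def close(x, y, tol=1e-9):
    return abs(x[0] - y[0]) < tol and abs(x[1] - y[1]) < tol

assert close(mul(p0, p0), p0)                              # idempotent
assert close(mul(p1, p1), p1)                              # idempotent
assert close(mul(p0, p1), (0.0, 0.0))                      # orthogonal
assert close((p0[0] + p1[0], p0[1] + p1[1]), (1.0, 0.0))   # sum to p(0,1)
```

The assertions only balance because $\Lambda = m(1+d^2)$ and $d^2 - 2md - 1 = 0$ hold simultaneously; with any other value of $d$ the idempotency checks fail.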
\begin{table}[H] \label{tab1} \caption{$K(\tau,g,h,z_0)$ for $h=0, z_0=0 $} \centering \begin{tabular}{c | c c c c} \diagbox{g}{$\tau(1)$} & $1$ & $-1$ & $i$ & $-i$ \\ \hline 0 & $\mu p(0)^1$ & $2p(0,-1)^+$ & $p(0,i)$ & $p(0,-i)$ \\ 1 & $p(1,1)$ & $p(1,-1)$ & $p(1,i)$ & $p(1,-i)$ \\ 2 &$2p(2,1)^+ $ & $2p(2,1)^- $ & $p(2,i) $ & $p(2,-i) $ \\ \end{tabular} \end{table} \begin{table}[H] \label{tab2} \caption{$K(\tau,g,h,z_0)$ for $h=0, z_0=z $, part 1} \centering \begin{tabular}{c | c } \diagbox{g}{$\tau(1)$} & $1$ \\ \hline 0 & $\frac{1}{2}(p(0,i)+p(0,-i)+i E(0,i)-i E(0,-i))$ \\ 1 & $\frac{1}{2}(p(1,i)+p(1,-i))$ \\ 2 & $\frac{1}{2}(p(2,i)+p(2,-i)-i E(2,i)+i E(2,-i))$ \\ \end{tabular} \end{table} \begin{table}[H] \label{tab3} \caption{$K(\tau,g,h,z_0)$ for $h=0, z_0=z $, part 2} \centering \begin{tabular}{c | c} \diagbox{g}{$\tau(1)$} & $-1$ \\ \hline 0 & $\frac{1}{2}(p(0,i)+p(0,-i)-i E(0,i)+i E(0,-i))$ \\ 1 & $\frac{1}{2}(p(1,i)+p(1,-i))$ \\ 2 & $\frac{1}{2}(p(2,i)+p(2,-i)+i E(2,i)-i E(2,-i))$ \\ \end{tabular} \end{table} \begin{table}[H] \label{tab4} \caption{$K(\tau,g,h,z_0)$ for $h=0, z_0=z $, part 3} \centering \begin{tabular}{c | c c} \diagbox{g}{$\tau(1)$} &$i$ & $-i$ \\ \hline 0 & $p(0,-1)^+ +\frac{\mu}{2}p(0)^1$ & $p(0,-1)^+ +\frac{\mu}{2}p(0)^1$ \\ 1 & $\frac{1}{2}(p(1,1)+p(1,-1))$ & $\frac{1}{2}(p(1,1)+p(1,-1))$ \\ 2 & $p(2,1)^- + p(2,-1)^-$ & $p(2,1)^- + p(2,-1)^-$ \\ \end{tabular} \end{table} \begin{table}[H] \label{tab5} \caption{$K(\tau,g,h,z_0)$ for $h=1, z_0=0 $} \centering \begin{tabular}{c | c c c c} \diagbox{g}{$\tau(1)$} &$1$&$-1$ & $i$ & $-i$ \\ \hline 0 & $\mu p(0)^1 $ & $2p(0,-1)^-$ & $p(0,i) $ & $p(0,-i)$ \\ 1 & $p(1,1)$ & $p(1,-1)$ & $p(1,i)$ & $p(1,-i)$ \\ 2 &$2p(2,1)^+$ & $2p(2,-1)^-$ & $p(2,i)$ & $p(2,-i)$ \\ \end{tabular} \end{table} \begin{table}[H] \label{tab6} \caption{$K(\tau,g,h,z_0)$ for $h=1, z_0=z $, part 1} \centering \begin{tabular}{c | c} \diagbox{g}{$\tau(1)$} & $1$\\ \hline 0 & $\frac{1}{2}(p(0,i)+p(0,-i)+ E(0,i)+ E(0,-i)) $ 
\\ 1 & $\frac{1}{2} (p(1,i)+p(1,-i))$ \\ 2 &$\frac{1}{2}(p(2,i)+p(2,-i)-E(2,i)-E(2,-i))$\\ \end{tabular} \end{table} \begin{table}[H] \label{tab7} \caption{$K(\tau,g,h,z_0)$ for $h=1, z_0=z $, part 2} \centering \begin{tabular}{c | c} \diagbox{g}{$\tau(1)$} & $-1$\\ \hline 0 & $\frac{1}{2}(p(0,i)+p(0,-i)- E(0,i)- E(0,-i))$ \\ 1 & $\frac{1}{2}(p(1,i)+p(1,-i))$ \\ 2 & $\frac{1}{2}(p(2,i)+p(2,-i)+ E(2,i)+E(2,-i))$\\ \end{tabular} \end{table} \begin{table}[H] \label{tab8} \caption{$K(\tau,g,h,z_0)$ for $h=1, z_0=z $, part 3} \centering \begin{tabular}{c | c c} \diagbox{g}{$\tau(1)$} & $i$ & $-i$ \\ \hline 0 & $p(0,-1)^-+\frac{\mu}{2}p(0)^1 $ & $p(0,-1)^-+\frac{\mu}{2}p(0)^1$ \\ 1 & $\frac{1}{2}(p(1,1)+p(1,-1))$ & $\frac{1}{2}(p(1,1)+p(1,-1))$ \\ 2& $p(2,1)^- + p(2,-1)^+$ & $p(2,1)^- + p(2,-1)^+$ \\ \end{tabular} \end{table} Then as in previous examples, we can write down the $14$ minimal central projections in the tube algebra corresponding to the $14$ minimal central projections in $\mathcal{A}_G $ using the elements \begin{multline*} L(\tau,g,h,z_0)=J( \tau,g,h,z_0)^*J(\tau,g,h,z_0), \\ g,h \in \mathbb{Z}/4\mathbb{Z}, \ \tau \in \hat{\mathbb{Z}/4\mathbb{Z}}, \ z_0 \in \{ 0,z\} . \end{multline*} Here it is slightly more complicated, since many of the $K(\tau,g,h,z_0) $ have rank $2$. For those $ K(\tau,g,h,z_0)$ with distinct $\textbf{t}$-eigenvalues, the corresponding $L(\tau,g,h,z_0) $ can be easily decomposed, while for those rank two $ K(\tau,g,h,z_0)$ with a unique $ \textbf{t}$-eigenvalue, more care is required. In particular, the $K(\tau,1,h,z) $ are each rank two projections in $\mathcal{A}_{1} $. Since all of the minimal projections in $\mathcal{A}_1 $ have different $\textbf{t}_1 $ eigenvalues, we can split the corresponding $L(\tau,1,h,z) $ by taking a linear combination of $L$ and $\textbf{t}_{{}_h \rho} L$. 
The only case for which we can't deduce the necessary decomposition of $L$ from the tables is $K(\pm i, 2,h,z) $, since these are rank two projections in $\mathcal{A}_2 $ which are eigenvectors for $\textbf{t}_2 $. Therefore here we need to consider additional elements in $\mathcal{A}_{G,{}_G \rho} $. Let $$J_h= p(2,1)^-J(i,2,h,z), \ h=0,1.$$ Then we have $$J_hJ_h^*=p(2,1)^- .$$ Thus $L_h=J_h^*J_h$ is equivalent to $p(2,1)^- $ in the tube algebra and $L(i,2,h,z)-L_h $ is equivalent to $p(2,-1)^- $. We can now write down $14$ minimal central projections in the tube algebra. Let $$M=M_{0,1}+M_{1,1} ,$$ so that $M$ maps $\mathcal{A}_{\rho} +\mathcal{A}_{1 \rho} $ isomorphically onto $\mathcal{A}_{2 \rho} +\mathcal{A}_{3 \rho} $. Then the following are minimal central projections in the tube algebra: \begin{eqnarray*} P_1 &=& p(0)^0 \end{eqnarray*} \begin{eqnarray*} P_2 &=& p(0)^1 +(id+M)(\frac{1}{\mu}[L(1,0,0,0)+L(1,0,1,0)] \\ & & + \frac{4}{2\mu-\mu^2}[L(i,0,0,z)+L(i,0,1,z) -L(i,0,0,z)^2-L(i,0,1,z)^2]) \end{eqnarray*} \begin{eqnarray*} P_3 &=& p(0,i)+p(0,-i) +(id+M)\\ & &(L(i,0,0,0)+L(1,0,0,z) +L(i,0,1,0)+L(1,0,1,z)) \end{eqnarray*} \begin{eqnarray*} P_{4} &=& p(2,i)+p(2,-i) +(id+M) \\ & &(L(i,2,1,0)+L(1,2,1,z) + L(i,2,0,0)+L(1,2,0,z)) \end{eqnarray*} \begin{eqnarray*} P_5 &=& p(1,1)+p(3,1) +(id+M)(L(1,1,0,0)+L(1,1,1,0)\\ & & + L(i,1,0,z)+L(i,1,1,z) +\textbf{t}( L(i,1,0,z) +L(i,1,1,z))) \end{eqnarray*} \begin{eqnarray*} P_6 &=& p(1,-1)+p(3,-1) +(id+M)(L(-1,1,0,0)+L(-1,1,1,0)\\ & & + L(i,1,0,z)+L(i,1,1,z) -\textbf{t}( L(i,1,0,z) +L(i,1,1,z))) \end{eqnarray*} \begin{eqnarray*} P_7 &=& p(1,i)+p(3,-i) +(id+M)(L(i,1,0,0)+L(i,1,1,0)\\ & & + L(1,1,0,z)+L(1,1,1,z) -i \textbf{t}( L(1,1,0,z) +L(1,1,1,z))) \end{eqnarray*} \begin{eqnarray*} P_8 &=& p(1,-i)+p(3,i) +(id+M)(L(-i,1,0,0)+L(-i,1,1,0)\\ & & + L(1,1,0,z)+L(1,1,1,z) +i \textbf{t}( L(1,1,0,z) +L(1,1,1,z))) \end{eqnarray*} \begin{eqnarray*} P_9 &=& p(0,-1)^+ + (id+M)\\ & & (\frac{1}{2}L(-1,0,0,0)
+\frac{2}{\mu-2}[\frac{\mu}{2}L(i,0,0,z)-L(i,0,0,z)^2]) \end{eqnarray*} \begin{eqnarray*} P_{10} &=& p(0,-1)^- + (id+M) \\ & & (\frac{1}{2}L(-1,0,1,0) +\frac{2}{\mu-2}[\frac{\mu}{2}L(i,0,1,z)-L(i,0,1,z)^2]) \end{eqnarray*} \begin{eqnarray*} P_{11} &=& p(2,1)^+ +\frac{1}{2}(id+M)(L(1,2,0,0)+L(1,2,1,0)) \end{eqnarray*} \begin{eqnarray*} P_{12} &=& p(2,1)^- + (id+M)(L_0+L_1) \end{eqnarray*} \begin{eqnarray*} P_{13} &=& p(2,-1)^+ +(id+M)( \frac{1}{2}L(-1,2,0,0)+L(i,2,1,z)-L_1) \end{eqnarray*} \begin{eqnarray*} P_{14} &=& p(2,-1)^- +(id+M)(L(i,2,0,z)-L_0+\frac{1}{2}L(-1,2,1,0)) . \end{eqnarray*} \begin{lemma} \begin{enumerate} \item The objects $\alpha_0$, $\alpha_0+2\sum_{g \in G_0} \limits \alpha_g \rho $, $2\alpha_0+2\sum_{g \in G_0} \limits \alpha_g \rho $, $\alpha_0+2(\rho+\alpha_2 \rho) $, $\alpha_0+2(\alpha_1 \rho+\alpha_3 \rho) $, and $2\alpha_2+2\sum_{g \in G_0} \limits \alpha_g \rho $ each have a unique irreducible half-braiding. \item The objects $\alpha_1+\alpha_3+2\sum_{g \in G_0} \limits \alpha_g \rho $ and $\alpha_2+\sum_{g \in G_0} \limits \alpha_g \rho $ each have four irreducible half-braidings. \end{enumerate} \end{lemma} \begin{lemma} The quantum double has rank $22$. The object $2\sum_{g \in G_0} \limits \alpha_g \rho$ has eight irreducible half-braidings. \end{lemma} \begin{proof} We have $dim(\mathcal{A}_{\rho})=dim(\mathcal{A}_{{}_1 \rho})=4+dim(\mathcal{A}_{\rho,{}_1 \rho})=68 $. On the other hand, the dimensions of the subalgebras of $\mathcal{A}_{\rho}$, $\mathcal{A}_{{}_1 \rho}$, and $ \mathcal{A}_{\rho,{}_1 \rho}$ which are orthogonal to the $14$ known minimal central projections are each $32$. This implies in particular that the orthogonal subalgebras of $\mathcal{A}_{{}_{h_1} \rho} $ and $\mathcal{A}_{{}_{h_2} \rho} $ are unitarily equivalent for all $h_1,h_2 $.
Let $$u_2=(\rho \; 2| 1 | 2 \; \rho ) .$$ Then it is readily seen from the multiplication formulas that \begin{multline*} \{u_2 \}'=\text{span} (\{ 1_{\rho}, u_2 \} \cup \{(\rho \; k| SS^*| k \; \rho ) \}_{k \in \{ 0,2\}} \\ \cup \{ (\rho \; k| T_{k+g+z_0}T_{-k+g+z_0}^*| k \; \rho ) \\ +(\rho \; k+2| T_{k+g+z_0}T_{-k+g+z_0}^*| k+2 \; \rho ) \}_{k \in \mathbb{Z}/2\mathbb{Z}, \ g \in \mathbb{Z}/4\mathbb{Z}, \ z_0 \in \{0,z\}}). \end{multline*} Therefore $\{u_2 \}' $ has dimension $20$. Moreover, it can be checked that $\{u_2 \}' $ is Abelian. Therefore $ \mathcal{A}_{\rho}$ has exactly $20$ simple summands, all of which have dimension at most $4$, since the commutant of any self-adjoint unitary in $M_n(\mathbb{C}) $ is noncommutative if $n \geq 3 $. Since exactly $12$ of the known minimal central projections have nonzero components in $\mathcal{A}_{{}_h \rho} $ for each $h$, the orthogonal subalgebras must each have exactly $20-12=8 $ simple summands. Since all of these simple summands have dimension at most $4$ and the dimension of the subalgebra is $32$, each simple summand must be isomorphic to $M_2(\mathbb{C}) $, and the corresponding central projections have rank $2$. \end{proof} As in the previous examples, we first find the eigenvalues of $\textbf{t}_{{}_h \rho} $ numerically and then verify the minimal polynomial. \begin{lemma} The $ \textbf{t}$-eigenvalues corresponding to the half-braidings of $2\sum_{g \in G_0} \limits \alpha_g \rho $ are $$ e^{ \frac{6l^2 \pi i}{17}}, \ 1 \leq l \leq 8.$$ \end{lemma} \begin{remark} The fact that the coefficients of $\pi i $ in the numerators of the exponents form the series $6l^2 $ was guessed by following \cite{MR2837122}, who observed a similar fact in the case of the Haagerup subfactor and its generalizations. We also follow their work in obtaining a simple expression for the corresponding $8 \times 8$ block of the $S$-matrix. 
\end{remark} \begin{theorem} \label{ahdouble} The quantum double of the Asaeda-Haagerup fusion category has rank $22$ and the modular data is as follows. \begin{enumerate} \item The $T$-matrix has diagonal $$(1,1,1,-1,1,-1,i,-i,1,1,1,1,1,1,$$ $$e^{\frac{6\cdot 1^2 \pi i}{17}} ,e^{\frac{6\cdot 2^2 \pi i}{17}},e^{\frac{6\cdot 3^2 \pi i}{17}},e^{\frac{6\cdot 4^2 \pi i}{17}},e^{\frac{6\cdot 5^2 \pi i}{17}},e^{\frac{6\cdot 6^2 \pi i}{17}},e^{\frac{6\cdot 7^2 \pi i}{17}},e^{\frac{6\cdot 8^2 \pi i}{17}}) .$$ \item The matrix $S_{1-14,1-14} $ is \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $\frac{1}{8} \left( \begin{array}{cccccccccccccc} \frac{8}{\Lambda}& \frac{8d^2}{\Lambda} & 2 & 2 & 2 & 2 & 2 & 2 & 1 & 1 & 1 & 1 & 1 & 1 \\ \frac{8d^2}{\Lambda}& \frac{8}{\Lambda}& 2 & 2 & 2 & 2 & 2 & 2 & 1 & 1 & 1 & 1 & 1 & 1 \\ 2 & 2 & 4 & -4 & 0 & 0 & 0 & 0 & 2 & 2 & -2 & -2 & -2 & -2 \\ 2 & 2 & -4 & 4 & 0 & 0 & 0 & 0 & 2 & 2 & -2 & -2 & -2 & -2 \\ 2 & 2 & 0 & 0 & 4 & -4 & 0 & 0 & -2 & -2 & 2 & 2 & -2 & -2 \\ 2 & 2 & 0 & 0 & -4 & 4 & 0 & 0 & -2 & -2 & 2 & 2 & -2 & -2 \\ 2 & 2 & 0 & 0 & 0 & 0 & -4 & 4 & -2 & -2 & -2 & -2 & 2 & 2 \\ 2 & 2 & 0 & 0 & 0 & 0 & 4 & -4 & -2 & -2 & -2 & -2 & 2 & 2 \\ 1 & 1 & 2 & 2 & -2 & -2 & -2 & -2 & 5 & -3 & 1 & 1 & 1 & 1 \\ 1 & 1 & 2 & 2 & -2 & -2 & -2 & -2 & -3 & 5 & 1 & 1 & 1 & 1 \\ 1 & 1 & -2 & -2 & 2 & 2 & -2 & -2 & 1 & 1 & 5 & -3 & 1 & 1 \\ 1 & 1 & -2 & -2 & 2 & 2 & -2 & -2 & 1 & 1 & -3 & 5 & 1 & 1 \\ 1 & 1 & -2 & -2 & -2 & -2 & 2 & 2 & 1 & 1 & 1 & 1 & 5 & -3 \\ 1 & 1 & -2 & -2 & -2 & -2 & 2 & 2 & 1 & 1 & 1 & 1 & -3 & 5 \\ \end{array} \right)$ } \item The matrix $S_{15-22,15-22} $ is given by $$S_{kl}=-\frac{2}{\sqrt{17}} \cos\left(\frac{12 \pi l k }{17} \right) , \quad 1 \leq k,l \leq 8 .$$ \item For $ 15 \leq j \leq 22 $, we have $$S_{1j}=\frac{1}{\sqrt{17}}, \quad S_{2j}=-\frac{1}{\sqrt{17}} , \quad S_{ij}=0, \ 3 \leq i \leq 14 .$$ \end{enumerate} \end{theorem} \begin{remark} There is also an orbifold category of a
generalized Haagerup category for $G=\mathbb{Z}/8\mathbb{Z} $. This category has the same fusion rules as the $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $ orbifold, and we can compute the modular data as well, with some work. The quantum double here also has rank $22$. In this case $e^{\frac{(2k+1)\pi i}{4}} $ occur as $T$-eigenvalues and $\pm i $ do not occur; the rest of the $T$-eigenvalues are the same. With an appropriate ordering, the $T$-matrix has diagonal $$(1,1,1,1, e^{\frac{3\pi i}{4}},e^{-\frac{\pi i}{4}},e^{\frac{\pi i}{4}},e^{-\frac{3\pi i}{4}}, 1,1,-1,-1,-1,-1,$$ $$e^{\frac{6\cdot 1^2 \pi i}{17}} ,e^{\frac{6\cdot 2^2 \pi i}{17}},e^{\frac{6\cdot 3^2 \pi i}{17}},e^{\frac{6\cdot 4^2 \pi i}{17}},e^{\frac{6\cdot 5^2 \pi i}{17}},e^{\frac{6\cdot 6^2 \pi i}{17}},e^{\frac{6\cdot 7^2 \pi i}{17}},e^{\frac{6\cdot 8^2 \pi i}{17}})$$ and the $S$-matrix differs from the Asaeda-Haagerup $S$-matrix above only in the blocks $$S_{5-8,5-8}= \frac{1}{2} \left( \begin{array}{cccc} 0 & 0 & -1 &1\\ 0 & 0 & 1 & -1 \\ -1&1 & 0 & 0 \\ 1 & -1& 0 & 0 \\ \end{array} \right) $$ and $$S_{11-12,11-12}=S_{13-14,13-14}=\frac{1}{8} \left( \begin{array}{cc} -3 & 5 \\ 5 & -3 \\ \end{array} \right). $$ \end{remark} \section{Example: $2D2$} The $2D2$ subfactor is the subfactor with index $3+\sqrt{5} $ whose principal even part is a $\mathbb{Z}/2\mathbb{Z} $ de-equivariantization of the generalized Haagerup category corresponding to $G=\mathbb{Z} /4 \mathbb{Z} $. It was constructed in \cite{IzumiNote}, with an alternative construction using planar algebras given in \cite{1406.3401}. We use the same notation for the generalized Haagerup category $\mathcal{C}_{G,A,\epsilon} $ for $G=\mathbb{Z}/4\mathbb{Z}$ as in Section 2, and take $z=2 \in G$. Again our goal is to compute the quantum double of the orbifold category $(\mathcal{C}_{G,A,\epsilon})_z$ and find its modular data. Let $G_0=\{ 0,1 \} $.
We have $$\mathcal{B}_G=\{ (g\;k | 1 |k \; g ) \}_{g,k \in G_0} \cup \{ (g\;{}_k \rho| \lambda^{w(-g)} |{}_k \rho \; g ) \}_{g,k \in G_0} $$ and $$\mathcal{B}_{G,G\rho} =\{ (g \; {}_k \rho| T_{2k+g-h+z_0}| {}_k \rho \; {}_h \rho) \}_{ g,h,k \in G_0, \ z_0 \in \{ 0,z\}}.$$ For $g \in G_0 $, we order $\mathcal{B}_g $ by first listing the two terms in the left set, with $k=0 $ first, and then the two terms in the right set, again with $k=0$ first. For $g,h \in G_0$, we order $\mathcal{B}_{g,{}_h \rho} $ by first listing the two terms with $z_0=0$, with $k=0 $ first, and then the two terms with $z_0=z $, again with $k=0 $ first. We define elements in the tube algebra by their coordinate vectors with respect to these ordered bases as follows: $$\begin{array}{ccccccc} p(0)_1 &=& \frac{1}{\Lambda}(1,1,d,d)_{\mathcal{B}_{0}} & \quad & p(0)_2 &=& \frac{1}{\Lambda}(\frac{\Lambda-2}{2},\frac{\Lambda-2}{2},-d,-d)_{\mathcal{B}_{0}} \\ p(0)_3 &=& \frac{1}{4}(1,-1,1,-1)_{\mathcal{B}_{0}} &\quad& p(0)_4 &=& \frac{1}{4}(1,-1,-1,1)_{\mathcal{B}_{0}} \\ p(1)_1 &=& \frac{1}{4} (1,i,1,-i)_{\mathcal{B}_{1}} & \quad& p(1)_2 &=& \frac{1}{4} (1,i,-1,i)_{\mathcal{B}_{1}} \\ p(1)_3 &=& \frac{1}{4} (1,-i,1,i)_{\mathcal{B}_{1}} & \quad & p(1)_4 &=& \frac{1}{4} (1,-i,-1,-i)_{\mathcal{B}_{1}} \\ \end{array}.$$ and $$\begin{array}{ccccccc} J(0,0)_1 &=& \sqrt{\frac{\Lambda-2}{4\Lambda} } (1,1,0,0)_{\mathcal{B}_{0, \rho}} & \quad & J(0,0)_2 &=& \sqrt{\frac{\Lambda-2}{4\Lambda} }(0,0,1,-1)_{\mathcal{B}_{0, \rho}} \\ J(0,0)_3 &=& \frac{1}{2\sqrt{2}}(1,-1,0,0)_{\mathcal{B}_{0, \rho}}& \quad & J(0,0)_4 &=& \frac{1}{2\sqrt{2}}(0,0,1,1)_{\mathcal{B}_{0, \rho}} \\ J(0,1)_1 &=& \sqrt{\frac{\Lambda-2}{4\Lambda} } (1,-1,0,0)_{\mathcal{B}_{0, {}_1\rho}} & \quad & J(0,1)_3 &=& \frac{1}{2\sqrt{2}}(1,1,0,0)_{\mathcal{B}_{0, {}_1\rho}} \\ J(0,1)_2 &=& \sqrt{\frac{\Lambda-2}{4\Lambda} }(0,0,1,-1)_{\mathcal{B}_{0, {}_1\rho}} & \quad & J(0,1)_4 &=& \frac{1}{2\sqrt{2}}(0,0,1,1)_{\mathcal{B}_{0, {}_1 \rho}} \\ J(1,0)_1
&=& \frac{1}{4}(1,i,1,i)_{\mathcal{B}_{1, \rho}} & \quad & J(1,0)_3 &=&\frac{1}{4}(1,-i,1,-i)_{\mathcal{B}_{1, \rho}} \\ J(1,0)_2 &=&\frac{1}{4}(1,i,-1,-i)_{\mathcal{B}_{1, \rho}}& \quad & J(1,0)_4 &=& \frac{1}{4}(1,-i,-1,i)_{\mathcal{B}_{1, \rho}} \\ J(1,1)_1 &=&\frac{1}{4}(1,i,-i,-1)_{\mathcal{B}_{1, {}_1\rho}} & \quad & J(1,1)_2 &=&\frac{1}{4}(1,i,i,1)_{\mathcal{B}_{1, {}_1\rho}} \\ J(1,1)_3 &=&\frac{1}{4}(1,-i,i,-1)_{\mathcal{B}_{1, {}_1\rho}} & \quad & J(1,1)_4 &=& \frac{1}{4}(1,-i,-i,1)_{\mathcal{B}_{1, {}_1 \rho}} \\ \end{array}.$$ Let $K(i,j)_k =J(i,j)_k J(i,j)_k^* $ and $L(i,j)_k =J(i,j)_k^*J(i,j)_k$. Then we verify the following by direct calculation. \begin{lemma} \begin{enumerate} \item $\mathcal{A}_{G} $ is Abelian and the $p(i)_j, \ 0 \leq i \leq 1, \ 1 \leq j \leq 4 $ are its minimal projections. \item $K(0,i)_j=p(0)_2, \ 0 \leq i \leq 1, \ 1 \leq j \leq 2 $. \item $K(0,0)_j=p(0)_3, \ 3 \leq j \leq 4 $ \item $K(0,1)_j=p(0)_4, \ 3 \leq j \leq 4 $ \item $K(1,i)_j =p(1)_j, \ 0 \leq i \leq 1, \ 1 \leq j \leq 4 $. \item $J(i,j)_kJ(i',j')_{k'}^*=0 $ if $(i,j,k) \neq (i',j',k') $. \end{enumerate} \end{lemma} We can write down $8$ minimal central projections in the tube algebra. \renewcommand{\arraystretch}{1.5} $\begin{array}{c c c} P_1 &=& p(0)_1\\ P_2 &=& p(0)_2+L(0,0)_1 +L(0,0)_2 +L(0,1)_1 +L(0,1)_2 \\ P_3 &=& p(0)_3 +L(0,0)_3 +L(0,0)_4 \\ P_4 &=& p(0)_4 +L(0,1)_3 +L(0,1)_4 \\ P_5 &=& p(1)_1 + L(1,0)_1+L(1,1)_1\\ P_6 &=& p(1)_2 + L(1,0)_2+L(1,1)_2 \\ P_7 &=& p(1)_3 + L(1,0)_3+L(1,1)_3 \\ P_8 &=& p(1)_4 + L(1,0)_4+L(1,1)_4 \\ \end{array}$ \begin{lemma} \begin{enumerate} \item The objects $ \alpha_0$, $\alpha_0+2\rho $, $\alpha_0+2 \alpha_1 \rho$, $\alpha_0+2\rho+2 \alpha_1 \rho$ each have a unique irreducible half-braiding. \item The object $ \alpha_1+\rho+\alpha_1 \rho$ has four irreducible half-braidings.
\end{enumerate} \end{lemma} The $T$-eigenvalues of these $8$ minimal central projections are given by the vector $$(1,1,1,1,i,i,-i,-i)$$ We can check that there are exactly two other minimal central projections in the tube algebra, which each have a rank two component in each $\mathcal{A}_{h} $. Then we can find the minimal polynomial of $\textbf{t} $ and diagonalize to find these last two projections. \begin{lemma} The object $\rho+\alpha_1 \rho $ has two irreducible half-braidings, with $T$-eigenvalues $e^{\pm \frac{4 \pi i}{5}} $. \end{lemma} Finally, we can compute the $S$-matrix using Formula \ref{sform2}. \begin{theorem} The modular data for the $2D2$ subfactor is as follows. The $T$-matrix has diagonal $$(1,1,1,1,i,i,-i,-i,e^{\frac{4 \pi i}{5}}, e^{-\frac{4 \pi i}{5}} ).$$ The $S$-matrix is:\\ \renewcommand{\arraystretch}{1.5} \resizebox{\linewidth}{!}{% $ \frac{1}{20}\left( \begin{array}{cccccccccc} 5-2 \sqrt{5} & 5+2 \sqrt{5} & 5 & 5 & 5 & 5 & 5 & 5 & 4 \sqrt{5} & 4 \sqrt{5} \\ 5+2 \sqrt{5} & 5-2 \sqrt{5} & 5 & 5 & 5 & 5 & 5 & 5 & -4 \sqrt{5} & -4 \sqrt{5} \\ 5 & 5 & 15 & -5 & -5 & -5 & -5 & -5 & 0 & 0 \\ 5 & 5 & -5 & 15 & -5 & -5 & -5 & -5 & 0 & 0 \\ 5 & 5 & -5 & -5 & -5+10 i & -5-10 i & 5 & 5 & 0 & 0 \\ 5 & 5 & -5 & -5 & -5-10 i & -5+10 i & 5 & 5 & 0 & 0 \\ 5 & 5 & -5 & -5 & 5 & 5 & -5-10 i & -5+10 i & 0 & 0 \\ 5 & 5 & -5 & -5 & 5 & 5 & -5+10 i & -5-10 i & 0 & 0 \\ 4 \sqrt{5} & -4 \sqrt{5} & 0 & 0 & 0 & 0 & 0 & 0 & -10+2 \sqrt{5} & 10+2 \sqrt{5} \\ 4 \sqrt{5} & -4 \sqrt{5} & 0 & 0 & 0 & 0 & 0 & 0 & 10+2 \sqrt{5} & -10+2 \sqrt{5} \\ \end{array} \right) $ } \end{theorem} It is interesting to compare this $S$-matrix with the matrix $S_a $ of the rank $10$ modular tensor subcategory of the quantum double of the $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $ generalized Haagerup category from Section 3. 
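The modular data above can be spot-checked numerically: the short script below (ours; it only transcribes the $S$-matrix from the theorem) verifies that $S$ is symmetric and unitary, as modular data must be.

```python
import math

# Numerical check that the 2D2 S-matrix is symmetric and unitary (S S* = I).
r5 = math.sqrt(5)
S = [[x / 20 for x in row] for row in [
    [5 - 2*r5, 5 + 2*r5, 5, 5, 5, 5, 5, 5, 4*r5, 4*r5],
    [5 + 2*r5, 5 - 2*r5, 5, 5, 5, 5, 5, 5, -4*r5, -4*r5],
    [5, 5, 15, -5, -5, -5, -5, -5, 0, 0],
    [5, 5, -5, 15, -5, -5, -5, -5, 0, 0],
    [5, 5, -5, -5, -5 + 10j, -5 - 10j, 5, 5, 0, 0],
    [5, 5, -5, -5, -5 - 10j, -5 + 10j, 5, 5, 0, 0],
    [5, 5, -5, -5, 5, 5, -5 - 10j, -5 + 10j, 0, 0],
    [5, 5, -5, -5, 5, 5, -5 + 10j, -5 - 10j, 0, 0],
    [4*r5, -4*r5, 0, 0, 0, 0, 0, 0, -10 + 2*r5, 10 + 2*r5],
    [4*r5, -4*r5, 0, 0, 0, 0, 0, 0, 10 + 2*r5, -10 + 2*r5],
]]

for i in range(10):
    for j in range(10):
        assert abs(S[i][j] - S[j][i]) < 1e-12          # symmetric
        dot = sum(S[i][t] * S[j][t].conjugate() for t in range(10))
        assert abs(dot - (1 if i == j else 0)) < 1e-9  # rows orthonormal
```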
\begin{remark} Although the tube algebra in this $\mathbb{Z}/4\mathbb{Z}$ case is smaller than that in the Asaeda-Haagerup ($\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $) case, this computation is in some sense less natural, because here the orbifold breaks the group symmetry. In the Asaeda-Haagerup case, the element $(0,1) $ acts trivially on the Cuntz algebra and the orbifold preserves the group symmetry. In the $\mathbb{Z}/8\mathbb{Z}$ case, the tube algebra is as large as in the $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $ case, but the orbifold breaks the group symmetry, so the computation is ugly, and we have omitted it. \end{remark} \bibliographystyle{alpha}
\section{Introduction}\label{intro} The Internet of Things (IoT) is expected to connect billions of low-end devices to the Internet, thereby drastically increasing communication without a human source or destination. The share of products and businesses that use IoT technologies has increased to about 25 percent, and the number of connected devices is projected to reach 43 billion by 2023\cite{iotg}. Bluetooth has been a significant backbone for most of these connected devices and applications\cite{7000963}. Sniffing Bluetooth traffic has not been straightforward because of the manufacturer-dependent adaptive channel-hopping behavior and the shared 2.4 GHz spectrum in which Bluetooth devices operate. Various approaches have nevertheless predicted hop changes, making users traceable~\cite{10.1145/2906388.2906403}. These hopping challenges, however, mostly concern the private data packets exchanged over Bluetooth. Public packets such as beacons and keep-alive messages, which are emitted on three fixed channels, are much easier to sniff accurately. These beacons reveal the sender's identity in the form of a MAC address. Devices that perform \textit{MAC randomization} can hide the device's identity to some extent. Bluetooth Classic (BT) does not randomize addresses and has already been shown to be de-anonymizable\cite{9152700}. Even MAC address randomization in BLE has been claimed to be defeated, both specifically for Apple devices\cite{martin2019handoff} and for devices in general\cite{ccnc21/JounasVAF21}. The authors of \cite{ccnc21/JounasVAF21} claim 100\% device association for a small set of devices by sniffing public packets in a controlled environment (inside a Faraday cage), as seen in Figure \ref{fig:8}. The addresses shown in Figure \ref{fig:8} are the LAP (Lower Address Part) of the anonymized MAC addresses observed by \cite{ccnc21/JounasVAF21} in the trace.
There is a need to evaluate the performance of \cite{ccnc21/JounasVAF21} for a large population of devices in real-world scenarios. If the results of Figure \ref{fig:8} carry over to realistic environments, BLE poses immense threats to user privacy. \begin{figure}[htbp] \hfill\includegraphics[scale=0.50]{./plots/ground_truth.png}\hspace*{\fill} \caption{Perfect association of MAC addresses achieved by~\cite{ccnc21/JounasVAF21} on sniffing public packets in a controlled environment for BLE with MAC randomization. Each color represents a device broadcasting with anonymized addresses.} \label{fig:8} \end{figure} \textit{Amidst rising privacy-intrusion findings in Bluetooth, there has been an absence of frameworks to test these claims under scalable real-world conditions}. Current BLE simulators mostly focus on throughput, latency, and signal-to-noise ratio (SNR) rather than on the security and privacy aspects of the standard. They are unable to incorporate real-world device parameters into the simulation framework. Without these advancements, it is impossible to generate a realistic BLE trace that accounts for integral factors like MAC address randomization, whose implementation depends on the device manufacturer. The lack of controlled simulated traces presently prevents the retrieval of \textit{ground truth} in large-scale scenarios. \textit{Ground truth} here refers to the knowledge of the set of randomized MAC addresses that were emitted by a particular device. It is needed to successfully evaluate device-fingerprinting solutions and to propose adjustments to the standard that guarantee the user's privacy.\label{bstack} \textit{To the best of our knowledge, none of the currently available BLE simulators support and consider privacy aspects, specifically MAC address randomization}.
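To make the notion concrete, \textit{ground truth} can be represented as a map from each simulated device to the set of randomized addresses it emitted, against which an association algorithm is scored. The following minimal sketch is illustrative only: the device names, addresses, and the per-address accuracy metric are ours, and a real evaluation would first have to match the algorithm's anonymous clusters to devices (e.g. by maximal overlap).

```python
# Illustrative only: ground truth maps each device to the set of randomized
# MAC addresses it actually emitted; an association algorithm outputs its own
# grouping of the observed addresses, and we score the fraction of addresses
# placed in the correct group (after clusters have been matched to devices).

ground_truth = {
    "device_A": {"6a:33:01:aa:10:01", "5e:72:9f:aa:10:02"},
    "device_B": {"7b:12:45:bb:20:01", "4f:08:aa:bb:20:02"},
}

def association_accuracy(predicted, truth):
    """predicted: dict device -> set of addresses produced by the algorithm."""
    total = sum(len(s) for s in truth.values())
    correct = sum(len(predicted.get(dev, set()) & s) for dev, s in truth.items())
    return correct / total

# A perfect association (as in Figure 8) scores 1.0:
assert association_accuracy(ground_truth, ground_truth) == 1.0
```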
The current state-of-the-art open-source framework for simulating wireless communications in general, NS-3\footnote{https://www.nsnam.org/}, offers very weak support for the BLE standard compared to the much more advanced WiFi stack it possesses. In fact, the official release of NS-3 still lacks BLE support. Open-source implementations of the BLE stack without MAC randomization have been released based on the NS-3 framework\cite{ns1,ns2}. There has also been an implementation of BLE in the OMNeT++ framework\footnote{http://cc.oulu.fi/ kmikhayl/BLE.html}. We rigorously tested and chose \cite{ns2} as the base BLE stack (BLE 4.1) of our proposed simulator. This is because, firstly, it is currently the most accurate, efficient, and organized implementation. Secondly, it is in the NS-3 framework, which gives users the freedom to perform BLE experiments co-existing with the latest WiFi standards. Most BLE trace collection targets public packets and is done passively through sniffers. Private packets are mostly encrypted, and capturing them is illegal in many countries. Expensive hardware like Ubertooth One\cite{u1} is required to sniff on data channels. Moreover, as stated earlier, channel hopping in BLE data packets makes capturing even harder. Unfortunately, current simulation tools are not meant for generating sniffed public BLE traces: simulation time explodes with a large number of devices, as the number of simulation events grows when handling inter-node public packets. We are interested in the full processing of broadcast packets only at the sniffer. \textit{SimBle} addresses this issue and proposes optimized sniffing in Section \ref{opt}, which eliminates the exponential run-time while generating the exact same trace. In this paper, we first study and present the different privacy guidelines across released Bluetooth standards.
Then, we develop and introduce the simulator \textit{SimBle}, which incorporates standard-compliant MAC address randomization and is capable of emulating any BLE device. This is made possible as \textit{SimBle} introduces the notion of a \textit{device class}, which differentiates kinds of devices like phones, smartwatches, and headsets based on the frequency of transmitted beacons. \textbf{The four major contributions of this paper are:} \begin{enumerate} \item A study of the privacy features present in the BLE standard that need to be introduced in simulation. \item The architecture and implementation of a new BLE simulation stack, \textit{SimBle}, in NS-3, which considers user privacy and distinguishes the devices spawned in it. \item A case study of the only generic MAC address association algorithm present in the literature, made possible for scalable scenarios by generating the \textit{ground truth} using our solution. \item The release of an open-source simulator along with tools and methods to generate a realistic Bluetooth trace with associated \textit{ground truth}. \end{enumerate} The rest of this paper is organized as follows. Section \ref{back} gives an overview of the privacy measures recommended by the BLE standard. We present our BLE simulation stack, \textit{SimBle}, in Sections \ref{sec3} and \ref{sec4}. Section \ref{valid} validates the functionality of \textit{SimBle}. In Section \ref{cst}, we perform a case study of the generic MAC address association strategy available in the literature using simulated \textit{ground truth}. We show the strategy's effectiveness and then discuss possible amendments to the BLE standard that this case study has forced us to consider. Finally, Section \ref{discussion} discusses the impact of privacy-preserving BLE provisions on other research domains and how real-world traces from \textit{SimBle} would address major challenges. 
We also present the conclusion of our work and look into future directions. \section{Background}\label{back} This section discusses how BLE handles MAC-level addressing. We look into the different addressing modes supported by BLE, with a particular interest in private addresses, as they are fundamental to preserving user privacy. Afterward, we present a study of the privacy provisions currently proposed by the standard. Finally, we identify the factors that must be taken into account when designing a simulator that respects user privacy. \subsection{BLE MAC addressing} Bluetooth has been around for quite some time now, but it is the Bluetooth Low Energy (BLE) variant\cite{btorg} that is used by the majority of IoT devices. When a BLE device communicates, it keeps sending advertising packets on the three public channels specified by the standard. These packets include a link-layer MAC address, which acts as an identifier of the device [\cite{bt41}, p. 69]. To avoid leaking this identifier to the world, recent BLE standards have continuously forced devices to update their publicly advertised MAC addresses. Various addressing modes are specified in the standard [\cite{bt52}, p. 2988], which we briefly describe next. In BLE, devices are identified by a device address together with an address type [\cite{bt52}, p. 2988]. This means that when comparing two device addresses, identical 48-bit addresses do not guarantee the same device, because the two addresses could have different types. The address type is either a public device address or a random device address, both 48 bits long. A device is free to use one or both types of device addresses. Public device addresses are traditional MAC addresses created in accordance with the \textit{Universal addresses} section of the IEEE 802-2014 standard\cite{macstd}. 
They are more prevalent, but it is the random device address that is privacy-preserving. A random device address can be either static or private. A static address is a 48-bit randomly generated address meeting specific standard requirements. Private addresses, on the other hand, are either resolvable or non-resolvable [\cite{bt52}, p. 2991]. These sub-types are identified by the two most significant bits of the random device address, as shown in Table \ref{table:1}. \begin{table}[] \centering \begin{tabular}{ |c|c| } \hline Address [47:46] & Address Sub-Type\\ \hline 0b00 & Non-resolvable private address \\ 0b01 & Resolvable private address \\ 0b10 & Reserved for future use \\ 0b11 & Static device address \\ \hline \end{tabular} \caption{Sub-types of random device addresses} \label{table:1} \end{table} A BLE device's Identity Address is either its public device address or its random static device address. When a device uses resolvable private addresses, it must also possess an Identity Address. \subsection{BLE privacy provisions} \label{ss:blepprovision} The key to the privacy provided by the BLE link layer is the use of private addresses, described in the previous sub-section [\cite{bt52}, p. 3201]. This again reflects the importance of the MAC address randomization introduced by \textit{SimBle}. BLE recommends that devices generate a resolvable private address. The link layer corresponding to the host sets a timer and regenerates a new resolvable private address when the timer expires. Moreover, once the link layer is reset, a new resolvable private address is generated, and the timer is allowed to start with an arbitrary value in the allowed range. To maintain the efficiency of connection establishment, the standard recommends setting the timer to 15 minutes. BLE\cite{bt51}\cite{bt52} does not allow private devices to use their Identity Address in any advertising packet. 
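For illustration, the sub-type mapping of Table \ref{table:1} can be sketched as a small Python helper. The function name and the byte-order convention (most significant byte first) are our own assumptions for this sketch, not part of \textit{SimBle}:

```python
def random_address_subtype(addr: bytes) -> str:
    """Classify a 48-bit random device address by its two most
    significant bits (bits 47:46), following Table 1."""
    if len(addr) != 6:
        raise ValueError("BLE device addresses are 48 bits (6 bytes)")
    top_two_bits = addr[0] >> 6  # first byte holds bits 47:40 (MSB-first)
    return {
        0b00: "non-resolvable private address",
        0b01: "resolvable private address",
        0b10: "reserved for future use",
        0b11: "static device address",
    }[top_two_bits]
```

For example, an address whose first byte is 0x5f (binary 0101\dots) is classified as a resolvable private address, while one beginning with 0xc3 (binary 1100\dots) is a static device address.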
The Host can instruct the Controller to advertise, scan, or initiate a connection using a resolvable private address after enabling the resolving list. The state machine of the BLE link layer consists of various states [\cite{bt52}, p. 2985], and a device can be found in any of them. For instance, the advertising, scanning, and initiating states have different guidelines in the standard. In the advertising state, the link layer may perform device filtering based on the device address of the peer device to minimize the number of devices to which it responds. This is done according to a local \textit{white list}, which contains a set of records comprising both the device address and the device address type (public or random) [\cite{bt52}, p. 3202]. If the device is in the scanning or initiating state, it is recommended to use private addresses: the scanning device should use a resolvable or non-resolvable private address as its device address. Whenever a scanning device receives an advertising packet containing a resolvable private address as the advertiser's device address, the scanner's filter policy decides, after address resolution, whether to respond with a scan request. Having reviewed the privacy-related recommendations of the BLE standard, especially the latest release, BLE 5.2, we proceed in what follows to incorporate the key elements into the simulator. The simulator must not only include resolvable private addresses, which are integral to BLE privacy, but also bring together the other aspects related to MAC address randomization. The proposed simulation stack, \textit{SimBle}, is thus designed in such a manner that adding further privacy-specific features in the future is relatively straightforward. \section{SimBle: Design \& Architecture}\label{sec3} This section provides our solution to the problem of emulating devices that follow the network and device privacy provisions of BLE. 
This step is key to generating realistic traces with associated \textit{ground truth}. If we succeed in building a device-specific, privacy-preserving simulation, we can easily produce traces that resemble real scenarios. This has profound implications: it enables us to practically evaluate any MAC address-based device-fingerprinting or privacy-intrusion solution suggested in the literature. In the following, we introduce our BLE simulation stack, which we call \textit{SimBle}. We first look at the design aspects of \textit{SimBle} and then present its architecture. \subsection{Design considerations} The first aspect to take into consideration is \textbf{device heterogeneity}. Indeed, BLE gives vendors the flexibility to implement privacy features while respecting the guidelines released by the standard. Therefore, different mobile phone manufacturers like Apple and Samsung can have different implementation parameters related to randomization, and even a single vendor can have a range of devices supporting various BLE releases. Hence, device distinction is an essential feature for BLE simulation, currently absent in available simulators. The second aspect to consider is \textbf{privacy provisions}. As we saw in the previous section, the central component of BLE privacy provisioning is the MAC address randomization procedure. If a device violates these recommendations and, for example, advertises its identity address, then the device's, and thus the network's, privacy is compromised, leading to traceability. \textit{SimBle} needs to introduce these provisions, specifically MAC address randomization, in its framework. Finally, the last aspect is the \textbf{flexibility to generate realistic traces}. 
Indeed, one of the significant demands of the research community is the availability of BLE traces that replicate real-world conditions such as mobility, crowd density, and the kinds of devices present in the zone where the trace was collected. Trace collection for a large population is impractical using active means like installing specific applications on user devices. Even passive methods, like the use of sniffers, would require massive deployment and user consent. That is why \textit{SimBle} also aims to include a framework providing a ready-to-use utility for trace generation in various user-specified scenarios. We show a case study of a MAC address association algorithm in Section~\ref{cst} using traces and associated \textit{ground truth} from this framework. In the following subsections, we detail how these design choices are implemented in \textit{SimBle}. \subsubsection{Device heterogeneity}\label{hetero} As discussed in the previous section, vendors have freedom, within some bounds, in implementing the BLE stack in a device. For example, Apple picks a value from the allowed range to decide how frequently a device changes its randomized MAC address. \textit{SimBle} needs to distinguish each device it introduces so that the simulation can replicate its behavior in terms of privacy features. In the following, we define a device's type through two points: the device's class and the supported standard version. \begin{enumerate}[label=(\alph*)] \item \textit{Notion of Device Class:} We need a property that classifies devices into groups whose behavior is similar irrespective of the manufacturer. This property is the \textit{frequency of transmitting beacons}, which is characteristic of a device with a maximum variation of 10~ms~\cite[p.~2751]{bt51}. The base value of the beacon transmission period lies in [20~ms; 10.24~s]. 
Based on this property, we classify BLE devices into the following \textit{device classes}: \begin{itemize} \item \textit{Frequent Emitters}: For this class, the interval between transmitted beacons is drawn from a normal distribution with mean 50~ms and standard deviation 10~ms. This represents a highly active device like earbuds. We expect these kinds of devices to also swap their randomized MAC address quickly. \item \textit{Moderate Emitters}: These are devices with a moderate frequency of advertisements. Their beacon intervals follow a normal distribution with mean 300~ms and standard deviation 25~ms. From our experimentation, most smartphones, especially iPhones, fall into this category. \item \textit{Semi-Moderate Emitters}: These devices are still active in transmitting regular beacons on the broadcast channels. They follow a normal distribution with mean 500~ms and standard deviation 25~ms. This class again mainly includes phones. \item \textit{Low Emitters}: These devices are the least active in sending out advertisements. Their inter-beacon intervals are drawn from a normal distribution with mean 2~s and standard deviation 500~ms. Smartwatches generally fall into this category. \end{itemize} When instantiating a node in \textit{SimBle}, a user can choose any of the stated device classes. If the user enables beacons, nodes automatically adopt the behavior of the specified class. However, we give the flexibility to specify the exact beacon frequency of a device if the user knows it beforehand through experimentation. \item \textit{BLE standards:} The frequency of changing a randomized MAC address also depends on the standard. In BLE 4.0\cite{bt41}, currently the most prevalent release in terms of the number of devices, devices change their MAC addresses every 15 minutes. In recent releases like BLE 5.2, devices are allowed to change their address more often. 
Therefore, it is crucial to specify the standard of a BLE node before using its privacy features in the simulation. \textit{SimBle} gives the user the option to specify the standard to run on top of the declared node, which controls the associated privacy features. \end{enumerate} \subsubsection{Realistic trace generation} \label{opt} One of the major motivations of this paper is to finally address the issue of generating realistic Bluetooth traces. We identify the following components that \textit{SimBle} must take care of to emulate real-world trace collection: \begin{enumerate} \item \textbf{Privacy features:} As stated earlier, \textit{SimBle} not only introduces BLE network and device privacy features like MAC address randomization but also identifies the key parameters necessary to obtain real-world traces. These factors, introduced in Section \ref{sec3}, are \textit{swapDelay}, \textit{randInterval}, the \textit{Device Class}, and the BLE release version. Ensuring correct device-specific parameters enables \textit{SimBle} to emulate the privacy features of any vendor's device. \item \textbf{Passive sniffing:} Trace collection using active methods like user participation is not practical for BLE, as it requires recruiting volunteers and installing a specific application on user devices. There has been rapid growth in contact tracing and trajectory reconstruction using BLE recently, and the research community requires more real-world traces collected through passive sniffing. The capture of BLE packets should fall under the principle of ``legal capture'' in different countries; this is mostly not the case for private packets, which require special authorization. Therefore, BLE passive sniffing generally refers to listening on the public channels. \textit{SimBle} introduces a framework for the user to deploy an arbitrary number of sniffers and nodes in a sniffing zone. 
On top of this, different mobility models can be installed on BLE nodes of varying density, which enables recreating realistic environments. Hence, we can emulate real-world BLE sniffing. \item \textbf{Ground truth:} Introducing privacy into BLE simulation automatically answers the search for \textit{ground truth} in randomized-address traces. \textit{Ground truth} here refers to the knowledge of the history of randomized MAC addresses emitted by a device. We need it to evaluate MAC association algorithms, or device-fingerprinting methods in general, which are increasingly being proposed \cite{ccnc21/JounasVAF21} \cite{martin2019handoff} \cite{9152700}. \textit{SimBle} generates the \textit{ground truth} trace by matching each device's generated private addresses to its \textit{Node ID}, which acts as a unique identifier of the device during simulation time. \end{enumerate} \subsubsection{Optimizing trace generation}\label{optim} As discussed earlier, passive sniffing is the most practical method for BLE trace collection. We identify a major issue in generating real-world traces inside a simulation: as the number of nodes increases, the number of simulation events due to processing inter-node packets increases quadratically. This has a significant impact on the time and resources needed for simulation, yet in the case of public packet capture we are only interested in the node-sniffer interaction. \textit{SimBle} addresses this problem and gives the user the flexibility to set a \textit{flag} in the simulation, which induces filtered and optimized handling of broadcast packets at nodes. This reduces the simulation duration significantly and thus makes trace collection feasible. We discuss this further and evaluate the performance gain in Section~\ref{valid}. \subsection{Architecture} Having settled the design, we take a brief look at the architecture of a BLE \textit{Node} inside \textit{SimBle} in Figure \ref{fig:1}. 
As discussed in Section \ref{bstack}, we use the base BLE stack of \cite{ns2}. The components of \textit{NetDevice} except the \textit{PrivacyManager} were defined in the base stack; \textit{Application} and \textit{Packet socket interface} are NS-3-wide entities not specific to BLE. We created the new component, \textit{PrivacyManager}, which takes care of all BLE privacy features. A node in \textit{SimBle} carries the same meaning as in NS-3: it is a physical entity with a unique integer ID that contains \textit{NetDevices} and \textit{Applications}. In this paper, a \textit{Node} can be thought of as equivalent to a device/hardware in the real world. Figure \ref{fig:1} shows a single instance of \textit{Application} and \textit{NetDevice} for illustration, but in principle there can be multiple. \textit{NetDevice} is an integral object of a node representing a physical interface on it; here, we are interested in the Bluetooth interface. \textit{NetDevice} communicates with the \textit{Application} through interfaces: the \textit{Packet socket interface} connects the application interfaces to the \textit{NetDevice}. An IPv4/IPv6 stack can also be installed by the user on the node in parallel. Let us briefly look at the roles of the other components of NetDevice, which were already present in the base BLE stack\cite{ns2}. \begin{figure}[htbp] \centering \includegraphics[scale=0.80]{./plots/ble_stack-Page-1.pdf} \caption{Architecture of a node in \textit{SimBle}} \label{fig:1} \end{figure} \textit{BroadbandManager} adds a link to the list of links that can be associated with a NetDevice; a link here refers to a BLE association between two nodes. It also checks whether there are new packets in the NetDevice queue and forwards them to the right \textit{LinkManager's} queue. \textit{LinkManager} is the entity associated with a particular BroadbandManager. 
It sets up a link to a specific receiver with the role (Master/Slave) expected at the end of the setup process. LinkManager also manages the \textit{TransmitWindow}, the next time the device can send a packet over the associated link. \textit{LinkController} is mainly responsible for monitoring and handling re-transmissions and state changes of the link: it checks whether an \textit{ACK} was received for the sent packet and fires a list of callbacks to other NetDevice objects if the link changes. Lastly, \textit{PHY} mainly handles link bandwidth, bit-rates, transmission power, and bit-errors. We introduce a new module, \textit{PrivacyManager}, in \textit{SimBle}, which takes care of all the privacy-related aspects of a device. In the forthcoming section, we discuss how MAC address randomization is managed by the \textit{PrivacyManager}. \section{SimBle: Privacy provisions}\label{sec4} Hereafter, we describe the implementation of the \textit{PrivacyManager} or, to be specific, the MAC address randomization of BLE. All the introduced algorithms follow the BLE standard guidelines\cite{bt52}. \begin{figure}[htbp] \hfill\includegraphics[scale=0.60]{./plots/ble_stack-Page-3.pdf}\hspace*{\fill} \caption{\textit{PrivacyManager} in \textit{SimBle}} \label{fig:14} \end{figure} An overview of the \textit{PrivacyManager} is illustrated in Figure \ref{fig:14}. \textit{Main} in the figure represents the base class of the \textit{PrivacyManager} from which member functions are called. We can observe in the figure that the function UPDATE is called on device startup. UPDATE generates new resolvable private addresses for the calling node using the function GENERATE, and recursively calls itself after the expiration of the time associated with the current private address. 
On the event of packet reception, or when checking the existence of a link to a destination, CHECKVALIDATION is called. On every call, it invokes RESOLVE with a particular private address. RESOLVE in turn returns the validity status and the Identity Address of the device that generated the private address. In the following, we describe the functions of the \textit{PrivacyManager} in detail. \subsection{\textbf{Key generation and distribution}} \textit{PrivacyManager} focuses on supporting resolvable private addresses~-- the center of all privacy provisions in the current BLE release\cite{bt52} (cf. Section~\ref{ss:blepprovision}). For a node to generate a resolvable private address, it must have either the Local Identity Resolving Key (IRK) or the Peer Identity Resolving Key (IRK)\label{pair}. This 128-bit key is a proof of possession of a particular private address. In real devices, IRKs are exchanged through specific control messages. In \textit{SimBle}, we generate the IRK randomly at each node when it is initialized in the simulation; the delay caused by the key exchange on real hardware is emulated by the \textit{swapDelay}, which we describe in the next section. Simultaneously, the node also generates an Identity Address, which is a unique identifier of the device. In this paper, the Node and the \textit{NetDevice} essentially mean the same in terms of BLE-associated parameters, because the remaining modules inside the node (i.e., the socket and the application modules) do not depend on the BLE standard itself. Finally, before creating links in \textit{SimBle} and installing an application on top of the declared nodes, each node updates a list in its respective \textit{NetDevice}. This list contains the (IRK : Identity Address) pairs of each of the fellow BLE nodes instantiated in the simulator. \subsection{\textbf{Generation of Randomized MAC}} The format of a resolvable private address is shown in Figure \ref{fig:2}. 
A resolvable private address is generated from the IRK and a 24-bit number known as the \textit{prand}. As the figure shows, it can be divided into two blocks of 24 bits each. The first block consists of the 24-bit hash introduced in [Alg. \ref{AddressGeneration} line \ref{gen:prune}]. \textit{SimBle} incorporates AES (Advanced Encryption Standard) support, as recommended by the standard\cite{bt52}, for encrypting the plain-text data into a ciphered block \cite{aes1} \cite{aes} in the process of randomized MAC address generation. \begin{figure}[htbp] \hfill\includegraphics[scale=0.6]{./plots/ble_stack-Page-2.pdf}\hspace*{\fill} \caption{Format of a Resolvable Private Address} \label{fig:2} \end{figure} The second block consists of the \textit{prand}. In the case of a resolvable private address, the two most significant bits of the \textit{prand} are 0b01, as shown in Figure \ref{fig:2} and Table \ref{table:1}. The random part of the \textit{prand} must contain at least one bit set to 0 and one bit set to 1. We describe in detail the generation of the resolvable private address by the \textit{PrivacyManager} in [Alg. \ref{AddressGeneration}]. 
\begin{algorithm}[htbp] \caption{SimBle's Resolvable Private Address generation} \label{AddressGeneration} \begin{algorithmic}[1] \Procedure{Generate}{$IRK$} \Comment{Input variable} \label{gen}\newline \Comment{Prepare encryption inputs} \State $prand \gets genPrand()$ \label{gen:prand} \State $padding \gets genPaddingBits(104)$ \label{gen:pad} \State $plaintext \gets Concatenate(padding, prand)$ \label{gen:concat}\newline \Comment{AES encryption} \State $aesobj \gets AES(IRK)$ \label{gen:aesob} \State $ciphertext \gets aesobj.getEncrypt(plaintext)$ \newline \label{gen:cipher} \Comment{Getting MAC address} \State $prunedcipher \gets getLeastSigBits(ciphertext, 24)$ \label{gen:prune} \State $macstr \gets Concatenate(prunedcipher, prand)$ \label{gen:macstr} \State $macaddr \gets toMacHex(macstr)$\newline \label{gen:macaddr} \State \textbf{return} $macaddr$ \Comment{Returns a Resolvable Private Address} \EndProcedure \newline \Procedure{Update}{$randInterval, swapDelay, IRK$} \quad \label{up} \Comment{Input variables} \State $roundIndex = getCurrentRoundIndex()$ \label{up:ri} \State $macDevice = \Call{Generate} {IRK}$ \label{up:setmac}\newline \Comment{Check if this call is just after device initialization} \If{$roundIndex == 1$}\newline \Comment{Calculate time offset for recursive callback} \State $nextUpOffset \gets getURV(0, randInterval)\newline+ swapDelay$ \label{up:urv} \Else \State $nextUpOffset \gets randInterval + swapDelay$ \label{up:rintvl} \EndIf\label{SatControlIf} \newline \Comment{Schedule a callback after offset expires} \State $incRoundIndex()$ \label{up:incr} \State Schedule(\Call{Update} {}, nextUpOffset) \label{up:call1} \EndProcedure \end{algorithmic} \end{algorithm} Each node in \textit{SimBle} has an instance of the \textit{PrivacyManager}, as illustrated earlier in Figure \ref{fig:14}. [Alg. \ref{AddressGeneration}] performs two major functions. GENERATE in [Alg. 
\ref{AddressGeneration} line \ref{gen}], takes as input the \textit{IRK} and generates a resolvable private address for that node, while UPDATE [Alg. \ref{AddressGeneration} line \ref{up}] takes care of the calls necessary to update a device's MAC address according to the user-specified BLE standard and device class that we are trying to emulate. Whenever GENERATE is called, we generate a 24-bit value whose two most significant bits are \textit{01}; the remaining bits are random. We use this value as the \textit{prand}, the trailing half of a resolvable private address [Alg. \ref{AddressGeneration} line \ref{gen:prand}]. The generated \textit{prand} is then padded with 104 null bits to form the 128-bit \textit{plaintext} that is given as input to the encryption [Alg. \ref{AddressGeneration} line \ref{gen:concat}]. Then, we create an instance of the AES algorithm initialized with the IRK of the current node [Alg. \ref{AddressGeneration} line \ref{gen:aesob}]. The AES instance then encrypts the \textit{plaintext} to generate 128 bits of \textit{ciphertext} [Alg. \ref{AddressGeneration} line \ref{gen:cipher}]. We take the 24 least significant bits of the \textit{ciphertext} [Alg. \ref{AddressGeneration} line \ref{gen:prune}] and concatenate them with the earlier generated \textit{prand} to obtain a string of 48 bits [Alg. \ref{AddressGeneration} line \ref{gen:macstr}]. The resulting string is finally formatted as an IEEE 802 MAC address to produce a resolvable private address [Alg. \ref{AddressGeneration} line \ref{gen:macaddr}]. Once the randomized MAC address is generated, the next step is to change this address dynamically while respecting the standard. This is done by the UPDATE function of the \textit{PrivacyManager}, which takes three arguments. One of them is the \textit{IRK}, the identity resolving key of the node, which we have already discussed. 
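To make the flow of GENERATE concrete, the following minimal Python sketch mirrors the hash-plus-prand construction of [Alg. \ref{AddressGeneration}]. Since AES-128 is not available in the Python standard library, we substitute a truncated keyed hash (HMAC-SHA256) for the AES encryption step purely to keep the sketch dependency-free; \textit{SimBle} itself uses AES as the standard mandates, and the helper names are ours, not the simulator's:

```python
import hmac
import hashlib
import secrets

def gen_prand() -> bytes:
    """24-bit prand with the two most significant bits fixed for a
    resolvable private address (cf. Table 1)."""
    b = bytearray(secrets.token_bytes(3))
    b[0] = (b[0] & 0x3F) | 0x40  # force the top two bits of prand
    return bytes(b)

def keyed_hash24(irk: bytes, prand: bytes) -> bytes:
    """Stand-in for the AES step: pad prand to 128 bits, run a keyed
    hash under the IRK, and keep 24 bits of the output."""
    plaintext = b"\x00" * 13 + prand  # 104 padding bits + 24-bit prand
    return hmac.new(irk, plaintext, hashlib.sha256).digest()[:3]

def generate_rpa(irk: bytes) -> str:
    """GENERATE: concatenate the 24-bit hash with the prand and format
    the 48 bits as a MAC address string."""
    prand = gen_prand()
    raw = keyed_hash24(irk, prand) + prand
    return ":".join(f"{byte:02x}" for byte in raw)
```

Each call yields a fresh address under the same IRK, and a receiver holding that IRK can re-derive the 24-bit hash from the prand to verify the address, which is the essence of the resolution procedure described next.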
The other two arguments are device-dependent, with the freedom for users to allocate specific values. They are as follows: \begin{itemize} \item \textbf{randInterval:} This is the time after which a specific device generates a new resolvable private address. In the BLE 4.1 standard\cite{bt41}, the most prevalent Bluetooth standard in current mobile devices, this interval is fixed at 15 minutes. In the most recent release, BLE 5.2\cite{bt52}, the vendor is free to randomize the MAC address before the 15-minute mark. However, the standard recommends not updating the addresses too frequently, as this might affect the performance of paired devices due to the increased number of control messages that need to be exchanged after generating a new address. \textit{SimBle} takes the BLE standard and device class as input from the user at node initialization to calculate the respective \textit{randInterval} value. \item \textbf{swapDelay:} This parameter is introduced to emulate the behavior of devices in practice. We observe in experiments that devices take some time before they generate a new randomized address and advertise it. This delay is caused by the resources used in address generation and in updating the current MAC-level state. \textit{swapDelay} can be device-specific; we empirically choose the value to be 10 times the \textit{frequency of transmitting beacons}, after measuring this delay in experiments on a large set of BLE devices broadcasting beacons. \end{itemize} On receiving the input arguments, UPDATE first checks the iteration index of this call and stores it as the \textit{roundIndex} [Alg. \ref{AddressGeneration} line \ref{up:ri}]. For calls to UPDATE, the \textit{roundIndex} has a value greater than or equal to 1. It distinguishes the two states in which a node can generate a new address. The first state (\textit{roundIndex} = 1) is when a node obtains a new address just after spawning inside the simulation. 
The second state (\textit{roundIndex} $>$ 1) is when the node requests an address after the expiration of the old one. GENERATE is called from UPDATE to assign the device a new resolvable private address [Alg. \ref{AddressGeneration} line \ref{up:setmac}]. After assigning the randomized address, UPDATE calculates the duration for which this address will be valid. If the device has called UPDATE for the first round, we calculate this duration by drawing a random value from a uniform distribution over [0, \textit{randInterval}] and adding the \textit{swapDelay} [Alg. \ref{AddressGeneration} line \ref{up:urv}]. We do this to respect the standard guidelines for setting the address expiration timers, as discussed in Section~\ref{ss:blepprovision}. Otherwise, if the device has already changed its MAC address since spawning, we set the offset to the sum of \textit{randInterval} and \textit{swapDelay} [Alg. \ref{AddressGeneration} line \ref{up:rintvl}]. Finally, we increase the \textit{roundIndex} and schedule a recursive callback to UPDATE after the expiration of the offset just calculated [Alg. \ref{AddressGeneration} line \ref{up:call1}], so as to keep obtaining fresh resolvable private addresses during the simulation. \subsection{\textbf{Resolution of Randomized MAC}} Generating the MAC address is not sufficient for a BLE device. The receiving node must be able to ``resolve'', i.e., associate, the private address with the sending device's identity. A resolvable private address may be resolved if the sending device's IRK is available to the receiver. If the address is resolved, the receiving device can associate it with the peer device. To support this privacy-preserving feature, we need to answer two major questions inside a device: how do we resolve the private address of a device, and where do we need to check the validity of the private address in a packet being handled inside \textit{SimBle}? 
The solution to the first question is given by RESOLVE [Alg. \ref{AddressResolution} line \ref{resl}], while CHECKVALIDATION [Alg. \ref{AddressResolution} line \ref{val}] answers the second question raised above. As briefly stated earlier, RESOLVE returns a tuple (\textit{resolved}, \textit{resIDAdd}). Here \textit{resolved} states whether the resolution attempt on the \textit{privateAddress} was successful. If the private address is resolved, \textit{resIDAdd} contains the Identity Address of the node that created the private address; otherwise, it is an empty string in the returned pair. Whenever a node receives a resolvable private address, the corresponding \textit{PrivacyManager} calls RESOLVE with \textit{privateAddress} and \textit{irkIAddPairList} as input. While \textit{privateAddress} is the sending device's randomized MAC address, \textit{irkIAddPairList} is the locally maintained list of (\textit{IRK}, \textit{Identity Address}) pairs at the resolving node, as described in Section~\ref{pair}. RESOLVE first extracts the \textit{hash} and \textit{prand} parts of the private address [Alg. \ref{AddressResolution} line \ref{res:sprand}], as described earlier in Figure~\ref{fig:2}. We pad 104 null bits onto the extracted \textit{senderPrand} such that the least significant byte of \textit{senderPrand} becomes the least significant byte of \textit{plaintext}, the byte array resulting from the padding.
\begin{algorithm}[htbp] \caption{SimBle's Resolvable Private Address resolution} \label{AddressResolution} \begin{algorithmic}[1] \Procedure{Resolve}{$privateAddress, \newline irkIAddPairList$} \label{resl} \Comment{Input variable}\newline \Comment{Extract hash and random part of privateAddress} \State $senderHash \gets extractHash(privateAddress)$ \label{res:shash} \State $senderPrand \gets extractPrand(privateAddress)$ \label{res:sprand} \State $padding \gets genPaddingBits(104)$ \State $plaintext \gets Concatenate(padding, senderPrand)$ \label{res:plain} \State $resolved \gets FALSE$ \State $resIDAdd \gets NULLSTR$\newline \Comment{Check if Sender hash is valid} \For{\texttt{$IRK, IDAdd \quad in \quad irkIAddPairList$}} \State \texttt{$aesobj \gets AES(IRK)$} \label{res:aesob} \State \texttt{$ciphertext \gets aesobj.getEncrypt(plaintext)$} \label{res:cipher} \State \texttt{$localHash \gets getLeastSigBits(ciphertext, 24)$} \State \texttt{$resolved \gets isEqual(localHash, senderHash)$} \State \If{$resolved == TRUE$} \State $resIDAdd \gets IDAdd$ \EndIf \EndFor\newline \Comment{Return resolved status \& Identity Address} \State \textbf{return ($PAIR(resolved, resIDAdd)$)} \EndProcedure \newline \Procedure{CheckValidation}{} \label{val} \newline \Comment{Call RESOLVE to validate private address if any of the function calls below is triggered in \textit{SimBle}} \If{ \State $\textbf{BroadbandManager:} LinkExists(),\newline GetLinkManager(), GetLink()$\label{val:brod} \State $\textbf{LinkController:} CheckReceivedAckPacket()$ \label{val:cond} \newline} \State $\Call{Resolve}{privateAddress, irkIAddPairList}$ \label{val:res} \EndIf \EndProcedure \end{algorithmic} \end{algorithm} Before considering a \textit{privateAddress} to be resolved, the handling node checks the validity of the address. A valid private address is one that was resolved using one of the \textit{IRK}s in the list available at the resolving node.
To perform this verification, we first take an (\textit{IRK} : \textit{Identity Address}) pair from the \textit{irkIAddPairList}. We generate an instance of the AES algorithm initialized with the IRK from the current pair [Alg. \ref{AddressResolution} line \ref{res:aesob}]. The AES instance then encrypts the \textit{plaintext} to generate 128 bits of \textit{ciphertext} [Alg. \ref{AddressResolution} line \ref{res:cipher}]. We take the 24 least significant bits of \textit{ciphertext} to generate the \textit{localHash}. If the value of \textit{localHash} matches the earlier extracted \textit{senderHash} [Alg. \ref{AddressResolution} line \ref{res:shash}] for any of the iterations, RESOLVE successfully returns the (TRUE, \textit{Identity Address}) pair. Otherwise, the resolution is considered a failure and RESOLVE returns the (FALSE, \textit{" "}) pair. After resolving a private address, we look into the framework of \textit{SimBle} to identify the modules that need address resolution. We identify two modules that need to call the \textit{PrivacyManager}'s RESOLVE procedure through CHECKVALIDATION: \textit{BroadbandManager} and \textit{LinkController} [Alg. \ref{AddressResolution} line \ref{val:brod}]. Whenever \textit{BroadbandManager} receives a packet from the \textit{NetDevice}, RESOLVE is called in two cases: first, when it checks for or tries to fetch the link; second, when it requests the \textit{LinkManager} for the destination node. We do this to ensure that the identity address resolved from the destination address matches the identity address of the existing link. Finally, CHECKVALIDATION also needs to check whether the sender address of a correctly received packet at the \textit{LinkController} can be resolved using one of the stored \textit{IRK}s at the receiver~[Alg. \ref{AddressResolution} line \ref{val:cond}].
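To make the generation and resolution steps concrete, the following Python sketch mirrors the padding, concatenation, and 24-bit hash comparison of RESOLVE. The function names are ours, not part of the \textit{SimBle} code base, and since the Python standard library has no AES, a keyed SHA-256 digest stands in for the AES-128 security function used by the actual procedure:

```python
import hashlib

def _e(key: bytes, plaintext: bytes) -> bytes:
    # Stand-in for the AES-128 security function e(k, p): the Python
    # stdlib has no AES, so a keyed SHA-256 digest truncated to 16 bytes
    # substitutes for the 128-bit block cipher in this sketch.
    return hashlib.sha256(key + plaintext).digest()[:16]

def make_private_address(irk: bytes, prand: bytes) -> bytes:
    """Build a 6-byte resolvable private address: a 3-byte prand followed
    by hash = 24 least significant bits of e(IRK, padding || prand)."""
    plaintext = bytes(13) + prand        # 104 zero bits of padding + 24-bit prand
    hash24 = _e(irk, plaintext)[-3:]     # keep the least significant 24 bits
    return prand + hash24

def resolve(private_address: bytes, irk_id_pairs):
    """RESOLVE: try every locally stored (IRK, Identity Address) pair and
    return the (resolved, identity_address) tuple."""
    prand, sender_hash = private_address[:3], private_address[3:]
    plaintext = bytes(13) + prand
    for irk, identity_address in irk_id_pairs:
        local_hash = _e(irk, plaintext)[-3:]
        if local_hash == sender_hash:
            return True, identity_address
    # No stored IRK reproduced the sender's hash: resolution failed.
    return False, ""
```

A receiver holding the sender's IRK recomputes the same 24-bit hash and recovers the Identity Address; any other IRK fails the comparison, which is exactly the validity check described above.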
\section{Validation}\label{valid} For the validation of \textit{SimBle}, it is fundamental to evaluate the functionalities of the introduced \textit{PrivacyManager}. Therefore, resolvable private address generation and resolution must be validated. Specifically, we must show that the generated randomized addresses are very close to what real-world devices advertise. Also, we have to show that BLE data communication continues flawlessly between paired devices even when they change their advertised MAC addresses. In this case, we assume that the devices have exchanged each other's \textit{IRK} during initialization. All the MAC addresses shown in the paper are hashed using SHA-256 and truncated to the first 8 bytes for illustration purposes. \subsection{Validating private address generation} To know whether \textit{SimBle} can emulate a real-world trace, we first collect real traces obtained from real experimentation. Then, we compare real traces obtained by capturing public packets from actual devices with traces generated by initializing devices with similar behavior inside the simulator. This comparison aims to show that \textit{SimBle} can emulate the same behavior in terms of randomized MAC advertisements and the transmission of public packets. \subsubsection{Experimental setup} As a sniffer, we use the Bluetooth chipset of a Raspberry Pi 4B to capture Bluetooth public packets. The capture is done in a controlled environment inside a Faraday cage. We choose two devices, an Apple iPad Pro 3 and an iPad Mini 2, emitting public packets in the cage for 40 minutes using BLE 4.1, captured by the Raspberry Pi. We are mainly interested in the captured timestamps and the LAP (lower address part) of the advertised beacons in the collected traces. The LAP refers to the least significant 24 bits of a BLE MAC address. Even though we do the trace collection in non-public environments, we still present hashed values to protect the devices' privacy.
For the devices inside the simulator, we set the BLE standard at initialization to release 4.1, which fixes the interval of MAC address regeneration to 15 minutes. Afterward, we install a broadcast application on top of the spawned nodes. We set the frequency of beacon transmissions in the application to the mean device broadcast interval observed in the real-world sniffer capture; we found this value to be 2 seconds. Moreover, we place a sniffer at the center of a square area of 10 meters per side in which the initialized emitting devices are statically present. The sniffer captures on the three public BLE channels. The chosen area is kept small to avoid transmission errors due to the distance between the devices and the sniffer, since such errors are not present in the Faraday-cage real-world experiment described earlier. The simulation parameters are listed in Table~\ref{table:2}. \begin{table}[] \centering \begin{tabular}{ |c|c| } \hline Parameter & Value\\ \hline Simulation area & 10$\times$10 m \\ Packet size & 20 bytes\\ Simulation duration & 2410 seconds \\ Packet sending duration & 2400 seconds\\ Path loss model & Nakagami \\ Num of nodes & N \\ Mobility model (nodes) & static \\ Num of sniffers & M \\ Mobility model (sniffer) & static \\ Beacon interval & 2 seconds \\ Connection interval & 6.25 ms \\ Swap delay & 10 $\times$ beacon interval \\ BLE standard & BLE 4.1 \\ \hline \end{tabular} \caption{Simulation parameters for \textit{SimBle} validation} \label{table:2} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}{0.5\textwidth} \hfill\includegraphics[scale=1]{./plots/mass_assoc.pdf}\hspace*{\fill} \caption{Real-World} \label{fig:7a} \end{subfigure} \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=1]{./plots/mass_assoc_ns.pdf}\hspace*{\fill} \caption{SimBle} \label{fig:7b} \end{subfigure} \caption{Observed public packet addresses in real-world vs \textit{SimBle} by two devices.
Each color represents a device broadcasting anonymized addresses.} \label{fig:7} \end{figure} \subsubsection{Observations} The first observation is related to the changing of the MAC addresses. For the real experiments, we turn on the Bluetooth of the two iPad devices at the start of sniffing; otherwise, the first change of MAC address would occur at a random time, making the trace hard to use for validation. As we can see in Figure~\ref{fig:7a}, the randomized MAC addresses change every 15 minutes throughout the capture duration. Like the real iPad devices, the iPads emulated inside the simulation change their MAC addresses after 15 minutes, as shown in Figure~\ref{fig:7b}. \begin{figure}[htbp] \hfill\includegraphics[scale=1]{./plots/pubpackt.pdf}\hspace*{\fill} \caption{Real-world vs SimBle in inter public packet times} \label{fig:5} \end{figure} After validating the role of \textit{PrivacyManager} in private address generation, we validate whether the rest of the BLE stack can emulate the chosen real device. We do this by looking at the inter-packet times of public packets observed at the sniffer inside \textit{SimBle} and in the real world, maintaining the same experimental setup and generated traces. We observe in Figure~\ref{fig:5} that for both devices, the real-world and \textit{SimBle} inter-packet intervals at the sniffer have a mean value of 2 seconds. A deviation of 20 milliseconds is expected, as the sniffers capture on one of the three public BLE channels at random and may miss some public packets on one of the channels: a public packet in Bluetooth is broadcast on all three public channels within a time frame of 20 milliseconds. This validates the overall handling of public packets in \textit{SimBle}.
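The inter-packet interval check above amounts to grouping the sniffed records per advertised address and differencing their timestamps. A minimal sketch (the helper name is ours, and the trace format is a simplified assumption):

```python
def inter_packet_times(trace):
    """Given sniffed (timestamp, mac) records, return per-address lists of
    inter-packet intervals, from which the mean beacon interval follows."""
    by_mac = {}
    for ts, mac in sorted(trace):           # order records by timestamp
        by_mac.setdefault(mac, []).append(ts)
    return {mac: [b - a for a, b in zip(ts_list, ts_list[1:])]
            for mac, ts_list in by_mac.items()}
```

Applied to both the real-world and the simulated capture, the per-device means can then be compared directly, with the expected channel-hopping deviation of up to 20 ms.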
\begin{figure}[htbp] \hfill\includegraphics[scale=1]{./plots/mass_assoc_data.pdf}\hspace*{\fill} \caption{Sent and received data packets by two paired BLE devices inside \textit{SimBle}} \label{fig:6} \end{figure} \subsection{Validating private address resolution} To validate the resolution of private addresses in \textit{SimBle}, we consider a simple scenario in which a transmitter and a receiver node are paired inside it. This allows us to look into the global trace obtained from the send and receive logs and deduce whether data communication was continuous in spite of the sender and receiver changing their MAC addresses. As we can see in Figure~\ref{fig:6}, the sender changes its private address at around 13 minutes. However, the receiver's BLE application continues to process and receive packets, as it can resolve the new private address to the sender's Identity Address, being in possession of its \textit{IRK}. Similarly, at around 32 minutes, we observe that the receiver changes its private address. Still, it is communicated to the sender through beacons, and hence, this time the sender resolves and verifies the receiver's private address. Therefore, the sender can be seen sending its data to the receiver seamlessly. This experiment thus ensures that \textit{SimBle}'s address resolution [Alg. \ref{AddressResolution}] is functional in handling BLE MAC randomization. \subsection{Validating optimized trace-collection} We discussed in Section~\ref{optim} the need to optimize the trace-collection procedure to obtain traces in a reasonable time. We validate the improvement brought by \textit{SimBle} in terms of run-time by increasing the density of devices up to 1 device per square meter around a sniffer, for a simulation duration of 30 seconds. The density is varied by increasing the number of devices up to 100 in 100 square meters around the sniffer. As we can observe in Figure~\ref{fig:9}, optimized sniffing gives a performance gain in simulation run-time of up to a factor of 100.
In conclusion, since we generally have to simulate a considerably longer duration to test BLE privacy provisions, as most MAC addresses change around every 15 minutes, \textit{SimBle} can optimize the sniffing to generate traces in a reasonable amount of time. \begin{figure}[htbp] \hfill\includegraphics[scale=1]{./plots/timp.png}\hspace*{\fill} \caption{Performance gain in run-time with optimized sniffing inside simulation} \label{fig:9} \end{figure} \section{Case Study}\label{cst} MAC address association refers to defeating the anonymization techniques used by devices and being able to track a particular device. Recently, many strategies have been suggested to achieve this goal of associating different private addresses advertised publicly by the same device \cite{ccnc21/JounasVAF21}\cite{becker2019tracking}\cite{celosia2019saving}\cite{martin2019handoff}. For instance, \cite{becker2019tracking}\cite{celosia2019saving} show that manufacturers like Apple and Microsoft leak partial identifiers in the data field of public packets, which can be easily exploited. In \cite{martin2019handoff}, the authors reverse engineer continuity protocol messages of Apple devices. They show that fingerprinting the device, as well as behaviorally profiling users, is possible using the contents of public BLE messages. They also demonstrate that predictable frame sequence numbers in these messages leave open the possibility of tracking Apple devices across space and time. As we mention in Section~\ref{intro}, \cite{9152700} also discuss a de-anonymization strategy. The authors of \cite{9152700} mention that the focus of their solution is Bluetooth Classic (BT), not BLE, because of its absence of MAC address randomization. Besides, the proposed strategy requires specific sniffing devices and targets only private packets. We believe that this approach cannot be considered fully generic and scalable.
Contrary to the above BLE strategies~\cite{becker2019tracking}\cite{martin2019handoff}\cite{celosia2019saving}, which target specific devices like Apple's, \cite{ccnc21/JounasVAF21} propose a method that associates MAC addresses from a device based on emitted public packets. This makes~\cite{ccnc21/JounasVAF21} independent of the device vendor and generic for any BLE device, as it relies only on beacons, whatever the used application. They identify devices across time using an identifier that discriminates a subset of devices at a given time, that is, a weak identifier, and achieve close to $100\%$ accuracy in controlled environments, as shown in Figure~\ref{fig:8}. Therefore, \textit{we decided to implement and study the performance of~\cite{ccnc21/JounasVAF21} when using \textit{SimBle}, since to the best of our knowledge, it is the only generic BLE MAC address association strategy currently available in the literature.} We evaluate it using the traces and the \textit{ground truth} generated by \textit{SimBle}. \subsection{Algorithm Overview} \label{loicalgo} The association strategy proposed in~\cite{ccnc21/JounasVAF21} can be summarized in the following three steps: \begin{enumerate} \item \textit{Identifying the MAC conflicts across time: } Whenever we look at passively sniffed traces of public BLE packets across time, it is very probable that two or more devices change their randomized MAC addresses around the same time. These are identified as \textit{conflicts} by~\cite{ccnc21/JounasVAF21} and are seen over the entire sniffing duration as \textit{conflict clusters}. The authors also define \textit{dswap} as the time that separates consecutive, distinct private addresses from a particular device. For each address change seen in the trace, there is a set of appearing and disappearing MAC addresses within the interval \textit{dswap}.
They are associated using Linear Assignment \cite{martello1987linear}, where the weights of possible associations are chosen as distances between \textit{weak identifiers}, described next. \item \textit{Finding a weak identifier: } A device constant can serve as a weak identifier if it is accessible to the sniffer and splits the device population into a few groups that are distributed as uniformly as possible. \cite{ccnc21/JounasVAF21} choose the fixed part of the time between advertising packets in BLE as the weak identifier and call it the \textit{characteristic time}. \item \textit{Resolving MAC conflicts: } \textit{Union Find} \cite{harfst2000potential} is used to break the conflict clusters into groups of appearing and disappearing MACs. Finally, all conflicts seen in the observed trace are resolved by using the absolute difference between characteristic times as association weights for the Linear Assignment. \end{enumerate} \subsection{Study of the association strategy} \label{study} We identify three aspects to which the association strategy \cite{ccnc21/JounasVAF21} is most sensitive in terms of effectiveness: \begin{enumerate} \item \myitem{Conflict size and the chosen \textit{dswap}: } As the number of devices in the sniffing zone increases, the number of devices that change their private addresses around the same time also increases. We saw in Section~\ref{loicalgo} that the weak identifier is used to resolve conflicts. We define the number of devices in a single conflict as the \textit{conflict size}. Increasing conflict sizes in the conflict cluster have two major consequences for \cite{ccnc21/JounasVAF21}. Firstly, weak identifiers become less effective in resolving conflicts during Linear Assignment, because a large number of devices causes more possible associations to have similar weights. Secondly, we identify the strategy~\cite{ccnc21/JounasVAF21} to be quadratic in run-time.
Thus, using Linear Assignment for the resolution of a huge set of conflicting MAC addresses is practically not feasible for device-tracking purposes. We see \textit{dswap} as a critical parameter in \cite{ccnc21/JounasVAF21}. It cannot be chosen arbitrarily large, as this results in very large conflict clusters containing MAC addresses that probably do not belong to a single conflict. On the contrary, a relatively small value leads to the exclusion of actual conflicts. For the evaluation of the association strategy, we set \textit{dswap} to 10 times the \textit{characteristic time}, as recommended to be optimal by~\cite{ccnc21/JounasVAF21}. \item \myitem{Device diversity in the population: } The effectiveness of the association also depends on the diversity of devices in the sniffed trace. This is because the \textit{characteristic times} of devices vary more with diversity, making it easier for the Linear Assignment to resolve conflict pairs with similar weights. \cite{ccnc21/JounasVAF21} also uses the vendor information in public packets as an identifier while resolving conflicts: filtering out possible associations with different vendors in the advertised packet increases the chance of correct MAC address association. \item \myitem{Mobility observed in the trace: } The \textit{characteristic time}, as a \textit{weak identifier}, is calculated from the sequence of packet timestamps observed in the trace. If there is a high degree of mobility around the sniffer, devices keep entering and leaving the sniffing zone. This introduces errors into the weights chosen by \cite{ccnc21/JounasVAF21} for possible association pairs during conflict resolution. Hence, the accuracy of MAC address association should naturally decrease. \end{enumerate} \subsection{Evaluation} \label{eval} In the following, we evaluate the accuracy of MAC address association and the growth of conflict cluster size for various realistic scenarios.
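Before turning to the scenarios, the conflict-resolution step of Section \ref{loicalgo} can be illustrated with a short Python sketch. It uses a greedy stand-in for the optimal Linear Assignment (the actual strategy of \cite{ccnc21/JounasVAF21}), pairing disappearing and appearing MACs by the smallest absolute difference of their \textit{characteristic times}; function and variable names are ours:

```python
def resolve_conflict(disappearing, appearing):
    """Greedy stand-in for the Linear Assignment step: associate each
    disappearing MAC with the appearing MAC whose characteristic time is
    closest, consuming pairs in order of increasing distance.

    disappearing, appearing -- dicts mapping MAC -> characteristic time (s)
    Returns a list of (old_mac, new_mac) associations.
    """
    # Enumerate all candidate pairs with their characteristic-time distance.
    candidates = sorted(
        (abs(t_old - t_new), old, new)
        for old, t_old in disappearing.items()
        for new, t_new in appearing.items())
    used_old, used_new, pairs = set(), set(), []
    for _, old, new in candidates:
        if old not in used_old and new not in used_new:
            pairs.append((old, new))
            used_old.add(old)
            used_new.add(new)
    return pairs
```

As the conflict size grows, many candidate pairs end up with nearly equal distances, which is exactly why the weak identifier loses its discriminating power in large conflict clusters.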
In \textbf{scenario 1}, we choose \textit{BLE 4.1}, since it is the most prevalent BLE release in devices today. We also choose a \textit{single device class}, smartphones. Smartphones largely fall into the device class of \textit{moderate emitters}, as stated earlier in Section~\ref{hetero}. The randomization interval in BLE 4.1 is set to 15 minutes. For \textbf{scenario 2}, we choose \textit{BLE 4.1} and \textit{multiple device classes}. We emulate an environment with different device classes to include co-existing smartphones, smartwatches, earbuds, etc. Finally, in \textbf{scenario 3}, we consider \textit{BLE 5.2} and \textit{multiple device classes}. Here we emulate a diverse range of devices supporting the latest release, BLE 5.2. We choose this BLE standard because, unlike BLE 4.1, vendors can keep the private address generation interval below 15 minutes, although the standard advises against randomization intervals smaller than 15 minutes, as they could affect performance due to connection times. We deliberately draw the randomization interval from a uniform distribution in the range (3, 15) minutes to observe how \cite{ccnc21/JounasVAF21} performs as more and more vendors start to quicken private address generation. We evaluate all the scenarios for the following mobility-profiles: \begin{enumerate} \item \textit{Static-Confined: } Here the devices are static and always present in the sniffing zone. \item \textit{Mobile-Free: } In this profile, devices are mobile and free to leave and enter the sniffing zone. We mimic human mobility by using a random-walk mobility model with a speed of 1.5 $m/s$ and a direction change every 2 $s$. \end{enumerate} We generate all the traces and the associated \textit{ground truth} by simulating several BLE devices and a sniffer for 40 minutes using \textit{SimBle}.
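The three scenarios can be summarized as a small configuration sketch; the field names and dictionary layout are a hypothetical encoding of ours, while the values are those stated in the text:

```python
import random

# Hypothetical encoding of the three evaluation scenarios.
# rand_interval_min returns the randomization interval in minutes.
SCENARIOS = {
    1: {"standard": "BLE 4.1",
        "device_classes": ["smartphone"],            # moderate emitters
        "rand_interval_min": lambda: 15},            # fixed by BLE 4.1
    2: {"standard": "BLE 4.1",
        "device_classes": ["smartphone", "smartwatch", "earbuds"],
        "rand_interval_min": lambda: 15},
    3: {"standard": "BLE 5.2",
        "device_classes": ["smartphone", "smartwatch", "earbuds"],
        # BLE 5.2 lets vendors randomize earlier: uniform in (3, 15) min
        "rand_interval_min": lambda: random.uniform(3, 15)},
}
```

Each scenario is then run under both mobility-profiles, \textit{Static-Confined} and \textit{Mobile-Free}.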
We prefer a single long run over multiple short simulation runs, as it gives detailed insight into how conflicts evolve with time. It is essential to note how accurately the strategy of Section~\ref{loicalgo} resolves the MAC addresses from a single device over the capture duration. For the \textit{Static-Confined} mobility-profile, we place a sniffer at the center of a square of 100 square meters and vary the number of BLE devices/nodes up to 100. We choose this area to make sure that the nodes are always in range of the sniffer. As shown in Table~\ref{table:2}, we use the \textit{Nakagami} path loss model and consider the successful BLE transmission range to be around 20 meters. In the case of the \textit{Mobile-Free} mobility-profile, we deliberately take a square of 2500 square meters and place the sniffer in the middle of it. The BLE nodes perform a random walk in that area and thus move in and out of the sniffing range. \begin{figure*}[htbp] \centering \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.95]{./plots/accuracypaper1.pdf} \caption{Scenario 1} \label{fig:10a} \end{subfigure}% \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=0.95]{./plots/conflictpaper1.pdf}\hspace*{\fill} \caption{Scenario 1} \label{fig:10b} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.95]{./plots/accuracypaper2.pdf} \caption{Scenario 2} \label{fig:10c} \end{subfigure}% \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=0.95]{./plots/conflictpaper2.pdf}\hspace*{\fill} \caption{Scenario 2} \label{fig:10d} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.95]{./plots/accuracypaper3.pdf} \caption{Scenario 3} \label{fig:10e} \end{subfigure}% \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=0.95]{./plots/conflictpaper3.pdf}\hspace*{\fill} \caption{Scenario 3} \label{fig:10f} \end{subfigure} \caption{ Accuracy of MAC address associations and average conflict size observed by MAC
association strategy~\cite{ccnc21/JounasVAF21} on \textit{SimBle}-generated traces for the \textit{Static-Confined} and \textit{Mobile-Free} mobility-profiles described in Section~\ref{eval}} \label{fig:11} \end{figure*} \subsection{Results and Analysis} \begin{enumerate} \item \textbf{Scenario 1: } First, we observe how well the algorithm~\cite{ccnc21/JounasVAF21} can defeat MAC randomization and correctly associate private addresses for BLE 4.1 with \textit{moderate emitters}. MAC addresses change every 15 minutes in BLE 4.1. For average conflict sizes below 10, we expect the algorithm of Section~\ref{loicalgo} to perform well both in run-time and accuracy. We observe in Figure~\ref{fig:10a} that the accuracy of association is above $98\%$ for the \textit{Static-Confined} mobility-profile. Even in the case of \textit{Mobile-Free} nodes, a minimum accuracy of around $91\%$ is seen for 100 devices. Average conflicts increase with the number of devices, as expected in Figure~\ref{fig:10b}, but they stay well beneath the bound of 10 conflicts. Hence, the accuracy of MAC address association is very high for both mobility-profiles. \item \textbf{Scenario 2: } We just saw how accurately MAC addresses from \textit{moderate emitters}, which are generally mobile phones, are associated. We now present a more realistic scenario, where we allow all device classes (Section \ref{hetero}). This favors MAC association, as described in Section~\ref{study}. We again stick to the privacy behavior of BLE 4.1, as it is the most prevalent standard in current devices. As expected, we observe an increase in accuracy for both mobility-profiles in Figure~\ref{fig:10c}. While MAC addresses of \textit{Static-Confined} nodes are associated with an accuracy close to $100\%$, the minimum accuracy of association for \textit{Mobile-Free} devices also increases to $93\%$. The observed conflict sizes remain small for up to 100 devices, as seen in Figure~\ref{fig:10d}.
\item \textbf{Scenario 3: } Finally, we consider multiple device classes with the privacy behavior of BLE 5.2, which allows vendors to change the private address of the device before the interval of 15 minutes (Section~\ref{eval}). We expect the conflict sizes to rise and hence the accuracy to decrease for a large number of devices. We indeed see a relative decrease in accuracy in Figure~\ref{fig:10e} compared to Figure~\ref{fig:10c}. For 100 devices, the accuracy of MAC address association decreases to around $89\%$ for both mobility-profiles. Conflict sizes increase to a maximum value of 13, as seen in Figure~\ref{fig:10f}, but this is still not large enough to degrade the efficiency of the association strategy~\cite{ccnc21/JounasVAF21}. \end{enumerate} \textit{The results of the case study show that the current MAC address randomization proposed by the BLE standard is not enough to safeguard user-privacy. The association strategy~\cite{ccnc21/JounasVAF21} can successfully defeat the randomization procedure and correctly fingerprint close to $90\%$ of the devices even in highly dense and mobile scenarios. An adversary could set up multiple sniffers strategically and easily track a particular user device.} \label{obs} The high accuracy of MAC address association in the initial case study made us look into methods to avoid device-traceability. We reduced the \textit{randomization interval} of the device population to 3 minutes. Devices changing their private addresses quickly should lead to larger \textit{conflict sizes} and hence a lower accuracy of association by \cite{ccnc21/JounasVAF21}. Using the \textit{Mobile-Free} mobility-profile, we varied the number of devices inside \textit{SimBle} up to 100 for this smaller value of the \textit{randomization interval}; the devices belong to multiple device classes. We observe in Figure~\ref{fig:12} that accuracy indeed decreases to a minimum of around $78\%$, with the \textit{conflict size} growing to 97.
\begin{figure}[htbp] \centering \begin{subfigure}{0.5\textwidth} \hfill\includegraphics[scale=1]{./plots/accuracypaper4.pdf}\hspace*{\fill} \caption{Accuracy} \label{fig:12a} \end{subfigure} \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=1]{./plots/conflictpaper4.pdf}\hspace*{\fill} \caption{Average conflict size} \label{fig:12b} \end{subfigure} \caption{Accuracy of MAC address associations and average conflict size observed by MAC association strategy~\cite{ccnc21/JounasVAF21} on \textit{SimBle}-generated traces for the \textit{Mobile-Free} mobility-profile with a \textit{randomization interval} of 3 minutes} \label{fig:12} \end{figure} With a single device class, \cite{ccnc21/JounasVAF21} might get lower accuracy, but $78\%$ accurate associations are still a threat to user-privacy. Hence, lowering the \textit{randomization interval} is not the only solution the BLE standard should address. Based on the case study, we summarize the following recommendations to possibly lower the accuracy of successful MAC address association: \begin{enumerate} \item The recommended \textit{randomization interval} must be lowered. This might lead to increased connection times. Optimizations in the IRK exchange and in the resolving list at the receiver could allow BLE devices to change addresses frequently without compromising performance. \item The parameter exploited by \cite{ccnc21/JounasVAF21} in Section~\ref{loicalgo} is the \textit{characteristic time}, which acts as a \textit{weak identifier}. This parameter is unique to a device and varies across the device population, which makes the identification of a device easier. We suggest that the standard recommend vendors adopt similar \textit{characteristic times}. \end{enumerate} \section{Final remarks and future steps}\label{discussion} \label{conl} MAC address randomization is indispensable for protecting user-privacy in BLE, as we saw in Section~\ref{back}.
If devices keep advertising their true MAC address, i.e., their \textit{Identity Address}, they can easily be tracked by coordinated passive sniffing. Widespread usage of resolvable private addresses could potentially protect the privacy of users to some extent. On the other side, vendor-dependent MAC address randomization has made the retrieval of realistic BLE traces more and more challenging. The lack of \textit{ground truth} in randomized traces and the impracticality of large-scale passive trace collection make the testing of solutions based on trajectory reconstruction or user identification \cite{8888137}\cite{michau2017bluetooth}\cite{xu2020route}\cite{bhaskar2014bluetooth}\cite{alghamdi2018bluemap}\cite{alhamoud2014presence}\cite{shao2018bledoorguard} almost impossible. \textit{All of the existing and future works based on device identification using MAC addresses in BLE must be revisited with the introduction of BLE privacy-provisions like private addresses.} \textit{SimBle} is the answer to this issue, as researchers can now generate large-scale traces with devices of their interest and use them to validate their works. Sniffers can be deployed accordingly to emulate real-world passive trace-collection for BLE. The works that do BLE MAC address association or device-fingerprinting are threats to the privacy provisions of BLE\cite{ccnc21/JounasVAF21}\cite{becker2019tracking}\cite{celosia2019saving}\cite{martin2019handoff}, as these strategies lead to the tracking of users. \textit{Only \textit{SimBle} can allow the community to compare the effectiveness of any two of these available solutions.} This is because we need identical conditions for comparing the evaluations. Not only is it hard for experiments/test-beds to emulate identical conditions, but they are also not scalable. Moreover, as discussed earlier, finding the \textit{ground truth} for experimentally obtained traces is practically impossible for large-scale testing.
\textit{SimBle} is the first BLE simulation stack capable of generating traces that preserve privacy. It introduces resolvable private addresses, which are the core of BLE device and network privacy provisions. We showed that it is capable of emulating the behavior of any real BLE device/hardware. Users have to choose the appropriate device class they want to test, based on the targeted device. It resolves the lack of \textit{ground truth} for scalable scenarios after the introduction of MAC address randomization. \textit{SimBle} provides the associated \textit{ground truth} with every trace that is generated. We presented a case study of the only generic MAC address association strategy for BLE available in the literature using \textit{SimBle}. Realistic device and mobility scenarios were used in the evaluation. \textit{The case study revealed the user-privacy trade-off even with the usage of MAC address randomization, as close to $90\%$ of private addresses could be associated correctly in the worst case. This enforces the need to revise the recommendations currently proposed in the standard.} Regarding future works, the key distribution could be done by using control messages rather than pre-installation at the node. The BLE stack could be enriched by the addition of different device pairing modes. Also, as one of the aims of \textit{SimBle} is to emulate any real device, more vendor-specific information could be added to facilitate usability. Finally, we aim to evaluate and compare more BLE privacy-related works in the future using \textit{SimBle}. \newpage \section{Introduction}\label{intro} The Internet of Things (IoT) is expected to connect billions of low-end devices to the Internet. It thereby drastically increases communication without a human source or destination.
The share of products and businesses that use IoT technologies has increased to about 25 percent, and the number of connected devices is projected to reach 43 billion by 2023\cite{iotg}. Bluetooth has been a significant backbone for most of these connected devices and applications\cite{7000963}. Sniffing Bluetooth traffic has not been straightforward because of the manufacturer-dependent adaptive channel hopping behavior and the shared 2.4 GHz spectrum of Bluetooth devices. Various approaches have predicted hop changes, allowing the user to be traceable~\cite{10.1145/2906388.2906403}. Nevertheless, these hopping challenges mostly concern the private data packets being exchanged in Bluetooth. As we go for public packets such as beacons and keep-alive messages, which are emitted on three channels, it is much easier to sniff them accurately. These beacons reveal the sender's device identity in the form of a MAC address. Devices that perform \textit{MAC randomization} can hide their identity to some extent. Bluetooth Classic (BT) does not randomize the addresses and has already been shown to be de-anonymized\cite{9152700}. Even MAC address randomization in BLE has been claimed to be defeated, both specifically for Apple devices\cite{martin2019handoff} and for devices in general\cite{ccnc21/JounasVAF21}. \cite{ccnc21/JounasVAF21} claim to get 100\% device association for a small set of devices by sniffing public packets in a controlled environment (inside a Faraday cage), as seen in Figure \ref{fig:8}. The addresses shown in Figure \ref{fig:8} are the LAPs (Lower Address Parts) of anonymized MAC addresses seen by \cite{ccnc21/JounasVAF21} in the trace. There is a need to evaluate the performance of \cite{ccnc21/JounasVAF21} for a large population of devices in real-world scenarios. If the results of Figure \ref{fig:8} are similar in realistic environments, immense threats to user privacy are posed in BLE.
\begin{figure}[htbp] \hfill\includegraphics[scale=0.50]{./plots/ground_truth.png}\hspace*{\fill} \caption{Perfect association of MAC addresses achieved by ~\cite{ccnc21/JounasVAF21} on sniffing public packets in the controlled environment for BLE with MAC randomization. Each color represents a device broadcasting with anonymized addresses} \label{fig:8} \end{figure} \textit{Amidst rising privacy-intrusion findings in Bluetooth, there has been an absence of frameworks to test these suggestions under scalable real-world conditions}. Current BLE simulators mostly focus on throughput, latency, and signal-to-noise ratio (SNR) features rather than the security and privacy aspects of the standard. They are unable to incorporate real-world device parameters into the simulation framework. Without these advancements, it is impossible to generate a realistic BLE trace that considers integral factors like MAC address randomization. This is because the implementation of address randomization depends on the device manufacturer. The lack of controlled simulated traces presently halts the retrieval of \textit{ground truth} in large-scale scenarios. \textit{Ground truth} here refers to the knowledge of the set of randomized MAC addresses that were emitted from a particular device. It is needed to successfully evaluate device fingerprinting solutions and propose adjustments in the standard to guarantee the user's privacy.\label{bstack} \textit{To the best of our knowledge, none of the currently available BLE simulators support and consider privacy aspects, specifically MAC address randomization}. The current state-of-the-art open-source tool for simulating wireless communications in general, NS-3\footnote{https://www.nsnam.org/}, offers very weak support for the BLE standard compared to the much more advanced WiFi stack it possesses. In fact, the official release of NS-3 still lacks BLE support.
Open-source implementations of the BLE stack without MAC randomization have been released based on the NS-3 framework\cite{ns1,ns2}. There has also been an implementation of BLE in the OMNeT++ framework\footnote{http://cc.oulu.fi/ kmikhayl/BLE.html}. We rigorously tested and chose \cite{ns2} as the base BLE stack (BLE 4.1) of our proposed simulator. This is because, firstly, it is currently the most accurate, efficient, and well-organized. Secondly, it is in the NS-3 framework, which gives users the freedom to perform BLE experiments co-existing with the latest WiFi standards. Most BLE trace collection targets public packets and is done passively through sniffers. Private packets are mostly encrypted, and capturing them is illegal in many countries. Expensive hardware like Ubertooth One\cite{u1} is required to sniff on data channels. Moreover, as stated earlier, channel hopping in BLE data packets makes capturing even harder. Unfortunately, current simulation tools are not meant for generating sniffed public BLE traces. This is because simulation time explodes with a large number of devices, as the number of simulation events increases when handling inter-node public packets. We are interested in the full processing of broadcast packets only at the sniffer. \textit{SimBle} addresses this issue and proposes optimized sniffing in Section \ref{opt}, which eliminates the run-time explosion while generating the exact same trace. In this paper, we first study and present the different privacy guidelines across released Bluetooth standards. Then, we develop and introduce the simulator \textit{SimBle}, which incorporates standard-compliant MAC address randomization capable of emulating any BLE device. This is made possible as \textit{SimBle} introduces the notion of \textit{device class}, which differentiates various kinds of devices like phones, smartwatches, and headsets based on the frequency of transmitted beacons.
\textbf{The four major contributions of this paper are:} \begin{enumerate} \item A study of the different privacy features present in the BLE standard that need to be introduced in simulation. \item The architecture and implementation of a new BLE simulation stack, \textit{SimBle}, in NS-3, which considers user privacy and distinguishes the devices spawned in it. \item A case study of the only generic MAC address association algorithm present in the literature, made possible for scalable scenarios after generating the \textit{ground truth} using our solution. \item The release of an open-source simulator along with tools and methods to generate a realistic Bluetooth trace with associated \textit{ground truth}. \end{enumerate} The rest of this paper is organized as follows. Section \ref{back} gives an overview of the different privacy measures recommended by the BLE standard. We present our BLE simulation stack, \textit{SimBle}, in Sections \ref{sec3} and \ref{sec4}. Section \ref{valid} validates the functionality of \textit{SimBle}. In Section \ref{cst}, we perform a case study of the generic MAC address association strategy available in the literature using simulated \textit{ground truth}. We show the strategy's effectiveness and then discuss possible amendments to the BLE standard that this case study compels us to consider. Finally, Section \ref{discussion} discusses the impact of privacy-preserving BLE provisions on other research domains and how real-world traces from \textit{SimBle} would address big challenges. We also conclude our work and look into future directions. \section{Background}\label{back} This section discusses how BLE handles MAC-level addressing. We look into the different addressing modes supported by BLE, but we are mostly interested in private addresses as they are fundamental in preserving user privacy. Afterward, we present a study of the privacy provisions currently proposed by the standard.
Finally, we identify the factors that must be taken into account for designing a simulator that respects user privacy. \subsection{BLE MAC addressing} Bluetooth has been around for quite some time now, but it is the Bluetooth Low Energy (BLE) variant\cite{btorg} that is used by the majority of IoT devices. When a particular BLE device communicates, it keeps sending advertising packets on the three public channels specified by the standard. These packets include a link-layer MAC address, which acts as an identifier of the device [\cite{bt41}, p. 69]. To prevent devices from leaking this identifier to the world, recent BLE standards require devices to continuously update their publicly advertised MAC addresses. Various addressing modes have been specified in the standard [\cite{bt52}, p. 2988], which are briefly described next. In BLE, we identify devices using a device address and an address type [\cite{bt52}, p. 2988]. This means that whenever we compare two device addresses, the same 48-bit address does not guarantee the same device, because the two addresses could have different types. The address type is either a public device address or a random device address, both of which are 48 bits long. A device has the freedom to use at least one or both types of device addresses. Public device addresses are traditional MAC addresses created in accordance with the \textit{Universal addresses} section of the IEEE 802-2014 standard\cite{macstd}. They are more prevalent, but it is the random device address that is privacy-preserving. A random device address can be either static or private. A static address is a 48-bit randomly generated address meeting specific standard requirements. Private addresses, on the other hand, are either resolvable or non-resolvable [\cite{bt52}, p. 2991]. These specific sub-types are identified by the two most significant bits of the random device address, as shown in Table \ref{table:1}.
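As an illustration of Table \ref{table:1}, the sub-type of a random device address can be read directly from its two most significant bits. The following minimal Python sketch (the function name is ours, purely for illustration) performs this classification:

```python
def random_address_subtype(addr48):
    """Classify a 48-bit BLE random device address by bits [47:46] (Table 1)."""
    return {
        0b00: "non-resolvable private",
        0b01: "resolvable private",
        0b10: "reserved",
        0b11: "static",
    }[(addr48 >> 46) & 0b11]
```

For example, any address whose top two bits are 0b11 is a static device address, irrespective of the remaining 46 bits.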
\begin{table}[] \centering \begin{tabular}{ |c|c| } \hline Address [47:46] & Address Sub-Type\\ \hline 0b00 & Non-resolvable private address \\ 0b01 & Resolvable private address \\ 0b10 & Reserved for future use \\ 0b11 & Static device address \\ \hline \end{tabular} \caption{Sub-types of random device addresses} \label{table:1} \end{table} A BLE device's Identity Address is either its public device address or its random static device address. When a device uses resolvable private addresses, it must also possess an Identity Address. \subsection{BLE privacy provisions} \label{ss:blepprovision} The key to the privacy provided by the BLE link layer is the use of private addresses, which we described in the previous sub-section [\cite{bt52}, p. 3201]. This again reflects the importance of the MAC address randomization introduced by \textit{SimBle}. BLE recommends that devices generate a resolvable private address. The link layer corresponding to the host sets a timer and regenerates a new resolvable private address when the timer expires. Moreover, once the Link Layer is reset, a new resolvable private address is generated, and the timer is allowed to start with an arbitrary value in the allowed range. To maintain the efficiency of connection establishment, the standard recommends setting the timer to 15 minutes. BLE\cite{bt51}\cite{bt52} does not allow private devices to use their Identity Address in any advertising packet. The Host could instruct the Controller to advertise, scan, or initiate a connection using a resolvable private address after enabling the resolving list. The state machine of the BLE link layer consists of various states [\cite{bt52}, p. 2985]. A device can be found in any of these states. For instance, the advertising, scanning, and initiation states have different guidelines in the standard.
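The timer-driven regeneration described above can be sketched as follows. This is an illustrative sketch, not SimBle code: the function name is ours, the 15-minute value is the standard's recommendation, and we assume the arbitrary initial timer value is drawn uniformly from the allowed range.

```python
import random

RECOMMENDED_INTERVAL_S = 15 * 60  # 15-minute address lifetime recommended by the standard

def rotation_schedule(n_rotations, first_arbitrary=True, rng=random):
    """Absolute times (seconds) at which a device regenerates its resolvable
    private address. After a Link Layer reset, the first timer may start with
    an arbitrary value in the allowed range; later timers use the recommended
    interval."""
    times, t = [], 0.0
    for i in range(n_rotations):
        if i == 0 and first_arbitrary:
            t += rng.uniform(0, RECOMMENDED_INTERVAL_S)  # arbitrary first offset
        else:
            t += RECOMMENDED_INTERVAL_S
        times.append(t)
    return times
```

Starting the first timer at an arbitrary offset desynchronizes the address changes of devices that were reset at the same moment, which makes linking addresses across devices harder.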
In the advertising state, the link layer is allowed to perform device filtering based on the device address of the peer device to minimize the number of devices to which it responds. This can be done according to a local \textit{white list}, which contains a set of records comprising both the device address and the device address type (public or random) [\cite{bt52}, p. 3202]. If the device is in the scanning or initiating state, it is recommended to use private addresses. The scanning device should use a resolvable or non-resolvable private address as its device address. Whenever a scanning device receives an advertising packet that contains a resolvable private address as the advertiser's device address, after address resolution, the scanner's filter policy decides whether to respond with a scan request or not. Having overviewed the BLE standard's privacy-related recommendations, especially those of the latest release, BLE 5.2, we proceed in what follows to incorporate the key elements into the simulator. The simulator should not only include resolvable private addresses, which are integral to BLE privacy, but also bring together other aspects related to MAC address randomization. The proposed simulation stack, \textit{SimBle}, is thus designed in such a manner that adding further privacy-specific features in the future is relatively straightforward. \section{SimBle: Design \& Architecture}\label{sec3} This section aims at providing the solution to the problem of emulating devices that follow the network and device privacy provisions of BLE. This step is key to generating realistic traces with associated \textit{ground truth}. If we successfully come up with a device-specific privacy-preserving simulation, we can easily produce traces that resemble real scenarios. This has profound implications. It enables us to practically evaluate any MAC address-based device-fingerprinting or privacy-intrusion solution suggested in the literature.
In the following, we introduce our BLE simulation stack, which we call \textit{SimBle}. We first look at the different design aspects of \textit{SimBle} and then present its architecture. \subsection{Design considerations} The first aspect that we should take into consideration is \textbf{device heterogeneity}. Indeed, BLE gives vendors the flexibility to implement privacy features while respecting specific guidelines released by the standard. Therefore, different mobile phone manufacturers like Apple and Samsung could have different implementation parameters related to randomization. Even one vendor could have a range of devices supporting various BLE releases. Hence, device distinction is an essential feature for BLE simulation, which is currently absent in available simulators. The second aspect that we have to consider is \textbf{privacy provisions}. As we saw in the previous section, the central component of BLE privacy provisioning is the MAC address randomization procedure. If devices violate these recommendations and, for example, advertise their Identity Address, then the device and, thus, network privacy is compromised, leading to traceability. \textit{SimBle} needs to introduce these provisions, specifically MAC address randomization, in its framework. Finally, the last aspect is the \textbf{flexibility to generate realistic traces}. Indeed, one of the significant demands of the research community is the availability of BLE traces that can replicate different real-world scenarios in terms of mobility, crowd density, and the kind of devices present in the zone where the trace was collected. Trace collection is impractical for a large population using active means like installing specific applications on user devices. Even passive methods, like the usage of sniffers, would require massive deployment and user consent.
That is why \textit{SimBle} also aims to include a framework for releasing a ready-to-use utility for trace generation in various user-specified scenarios. We show a case study of a MAC address association algorithm in Section~\ref{cst} using traces and associated \textit{ground truth} from this framework. In the following subsections, we detail how these design choices are implemented in \textit{SimBle}. \subsubsection{Device heterogeneity}\label{hetero} As discussed in the previous section, different vendors have freedom, within some bounds, in the implementation of the BLE stack in a device. For example, Apple picks from a range of values to decide how frequently a device changes its randomized MAC address. We need to distinguish each device introduced in \textit{SimBle} so that the simulation can replicate its behavior in terms of privacy features. In the following, we define the device's type through two points: the device's class and the supported standard version. \begin{enumerate}[label=(\alph*)] \item \textit{Notion of Device Class:} We identify a property that classifies devices into groups whose behavior is similar irrespective of the manufacturer. This property is the \textit{frequency of transmitting beacons}, which is characteristic of a device, with a maximum variation of 10~ms~\cite[p.~2751]{bt51}. The base value of the beacon transmission period lies between [20~ms; 10.24~s]. Based on this property, we classify BLE devices into the following \textit{device classes}: \begin{itemize} \item \textit{Frequent Emitters}: For this class, the inter-beacon interval is drawn from a normal distribution with mean 50~ms and standard deviation 10~ms. This represents highly active devices like earbuds. We expect these kinds of devices to also swap their randomized MAC address quickly. \item \textit{Moderate Emitters}: These are devices with a moderate frequency of advertisements.
Their inter-beacon intervals are drawn from a normal distribution with mean 300~ms and standard deviation 25~ms. In our experimentation, most smartphones, especially iPhones, fall into this category. \item \textit{Semi-Moderate Emitters}: These are devices that are still active in transmitting regular beacons on broadcast channels. They follow a normal distribution with mean 500~ms and standard deviation 25~ms. This class again mainly includes phones. \item \textit{Low Emitters}: These are devices that are the least active in sending out advertisements. We define them to have inter-beacon transmission intervals drawn from a normal distribution with mean 2~s and standard deviation 500~ms. Smartwatches generally fall into this category. \end{itemize} A user, when instantiating a node in \textit{SimBle}, could choose any of the stated device classes. If the user enables beacons, nodes automatically set their behavior to that of the specified class. However, we give the flexibility to specify the exact beacon frequency of a device if a user knows it beforehand through experimentation. \item \textit{BLE standards: } The frequency of changing a randomized MAC address depends on the standard. In BLE 4.0, for instance, currently the most prevalent release in terms of the number of devices, devices change their MAC addresses every 15 minutes\cite{bt41}. In recent releases like BLE 5.2, devices are allowed to change their address before the 15-minute mark too. Therefore, it is crucial to specify a BLE node with its standard before using its privacy features in the simulation. \textit{SimBle} gives the user the option to mention the standard they want to run on top of the declared node, which enables controlling the associated privacy features. \end{enumerate} \subsubsection{Realistic trace generation} \label{opt} One of the major motivations of this paper is to finally address the issue of generating realistic Bluetooth traces.
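The device classes defined above can be summarized in a short sketch. This is only an illustration of the sampling behavior (the names and the clamping helper are ours); intervals are clamped to the standard's base range of [20~ms; 10.24~s]:

```python
import random

# (mean, std) of the inter-beacon interval in seconds for each device class,
# taken from the class definitions above.
DEVICE_CLASSES = {
    "frequent":      (0.050, 0.010),   # e.g. earbuds
    "moderate":      (0.300, 0.025),   # most smartphones, especially iPhones
    "semi-moderate": (0.500, 0.025),   # phones
    "low":           (2.000, 0.500),   # e.g. smartwatches
}

def next_beacon_interval(device_class, rng=random):
    """Draw the next advertising interval for a device of the given class,
    clamped to the standard's base range of [20 ms, 10.24 s]."""
    mean, std = DEVICE_CLASSES[device_class]
    return min(max(rng.gauss(mean, std), 0.020), 10.24)
```

A simulated node simply draws a fresh interval from its class distribution before scheduling each advertising event.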
We identify the following components that are essential for \textit{SimBle} to emulate real-world trace collection: \begin{enumerate} \item \textbf{Privacy features:} As stated earlier, \textit{SimBle} not only introduces BLE network and device privacy features like MAC address randomization but also identifies the key parameters that are necessary to obtain real-world traces. These factors, as introduced in Section \ref{sec3}, are \textit{swapDelay}, \textit{randInterval}, \textit{Device Class}, and the BLE release version. As mentioned above, setting correct device-specific parameters enables \textit{SimBle} to emulate any vendor device's privacy features. \item \textbf{Passive sniffing:} Trace collection using active methods like user participation is not practical for BLE. Indeed, we would need to recruit volunteers and install a specific application on user devices. There has been rapid growth in contact tracing and trajectory reconstruction using BLE recently, and the research community requires more real-world traces collected through passive sniffing. The capture of BLE packets must comply with what constitutes ``legal capture'' in different countries; this mostly excludes private packets, whose capture requires special authorization. Therefore, BLE passive sniffing generally refers to listening on public channels. \textit{SimBle} introduces a framework for the user to deploy an arbitrary number of sniffers and nodes to be placed in a sniffing zone. On top of it, different mobility models can be installed on BLE nodes of varying density, which enables recreating realistic environments. Hence, we can emulate real-world BLE sniffing. \item \textbf{Ground truth:} Introducing privacy in BLE simulation automatically answers the search for \textit{ground truth} in randomized-address traces. \textit{Ground truth} here refers to the knowledge of the history of randomized MAC addresses emitted by a device.
We need this to evaluate MAC association algorithms or device fingerprinting methods in general, which are increasingly being proposed \cite{ccnc21/JounasVAF21} \cite{martin2019handoff} \cite{9152700}. \textit{SimBle} generates the \textit{ground truth} trace by matching each device's generated private addresses to the \textit{Node ID}, which acts as a unique identifier of the device during simulation. \end{enumerate} \subsubsection{Optimizing trace generation}\label{optim} As discussed earlier, passive sniffing is the most practical method for BLE trace collection. We identify a major issue in the generation of real-world traces inside a simulation. As the number of nodes increases, the number of simulation events due to processing inter-node packets increases quadratically. This has a significant impact on the time and resources needed for simulation. But we are only interested in the node-sniffer interaction in the case of public packet capture. \textit{SimBle} addresses this problem and gives the user the flexibility to specify a \textit{flag} in simulation, which induces filtered and optimized handling of broadcast packets at nodes. This reduces the simulation duration significantly and thus makes trace collection feasible. We discuss this further and look at the obtained performance gains in Section~\ref{valid}. \subsection{Architecture} Having settled the design, we take a brief look at the architecture of a BLE \textit{Node} inside \textit{SimBle} in Figure \ref{fig:1}. As discussed earlier in Section \ref{bstack}, we use the base BLE stack of \cite{ns2}. The components of \textit{NetDevices} except the \textit{PrivacyManager} were defined in the base stack. \textit{Application} and \textit{Packet socket interface} are NS-3-wide entities not specific to BLE. We created the new component, \textit{PrivacyManager}, which takes care of all BLE privacy features. A node in \textit{SimBle} carries the same meaning as in NS-3.
It is a physical entity with a unique integer ID and contains \textit{NetDevices} and \textit{Applications}. In this paper, we can think of the \textit{Node} as equivalent to a device/hardware in the real world. We show in Figure \ref{fig:1} a single instance of \textit{Application} and \textit{NetDevice} for illustration, but there could be multiple in principle. \textit{NetDevice} is an integral object of a node representing a physical interface on it. Here, we are interested in the Bluetooth interface. \textit{NetDevice} communicates with the \textit{Application} through interfaces. The \textit{Packet socket interface} connects the application interfaces to the \textit{NetDevice} here. An IPv4/IPv6 stack could also be installed by the user on the node in parallel. Let us have a brief look at the roles of the other components of NetDevice that were already present in the base BLE stack\cite{ns2}. \begin{figure}[htbp] \centering \includegraphics[scale=0.80]{./plots/ble_stack-Page-1.pdf} \caption{Architecture of a node in \textit{SimBle}} \label{fig:1} \end{figure} \textit{BroadbandManager} helps add a link to the list of links that can be associated with a NetDevice. A link here refers to a BLE association between two nodes. It also checks whether there are new packets in the NetDevice queue and forwards them to the right \textit{LinkManager's} queue. \textit{LinkManager} is the entity associated with a particular BroadbandManager. It sets up a link to a specific receiver with the role (Master/Slave) expected at the end of the setup process. LinkManager also manages the \textit{TransmitWindow}, which is the next time the device can send a packet over the associated link. \textit{LinkController} is mainly responsible for monitoring and handling re-transmissions and state changes in the link. It checks whether an \textit{ACK} was received for the sent packet and also fires a list of callbacks to other NetDevice objects if the link changes.
Lastly, \textit{PHY} mainly takes responsibility for handling link bandwidth, bit-rates, transmission power, and bit-errors. We introduce a new module in \textit{SimBle}, the \textit{PrivacyManager}, which takes care of all the privacy-related aspects of a device. In the forthcoming section, we discuss how MAC address randomization is managed by the \textit{PrivacyManager}. \section{SimBle: Privacy provisions}\label{sec4} Hereafter, we describe in detail the implementation of the \textit{PrivacyManager} or, to be specific, the MAC address randomization of BLE. All the introduced algorithms follow the BLE standard guidelines\cite{bt52}. \begin{figure}[htbp] \hfill\includegraphics[scale=0.60]{./plots/ble_stack-Page-3.pdf}\hspace*{\fill} \caption{\textit{PrivacyManager} in \textit{SimBle}} \label{fig:14} \end{figure} An overview of the \textit{PrivacyManager} is illustrated in Figure \ref{fig:14}. \textit{Main} in the figure represents the base class of the \textit{PrivacyManager} from which the member functions are called. We can observe in the figure that the function UPDATE is called on device startup. UPDATE generates new resolvable private addresses for the calling node using the function GENERATE. It recursively calls itself after the expiration of the time associated with the current private address. On packet reception, or when checking for the existence of a link to a destination, CHECKVALIDATION is called. On every call, it queries RESOLVE with a particular private address. RESOLVE in turn returns the validity status and the Identity Address of the device that generated the private address. In the following, we describe the functions of the \textit{PrivacyManager} in detail.
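Before detailing each function, the generate/resolve pair can be sketched in a few lines. This is an illustrative stand-in, not the SimBle implementation: the standard prescribes an AES-128-based hash, whereas the sketch uses HMAC-SHA256 merely to stay dependency-free, and the byte layout is simplified so that \textit{prand} occupies the upper 24 bits of the address (placing its sub-type bits at positions [47:46]).

```python
import hmac, hashlib, secrets

def _ah(irk, prand):
    # 24-bit hash of prand under the IRK. The standard specifies an
    # AES-128-based function; HMAC-SHA256 is used here only as a stand-in
    # to keep the sketch free of external dependencies.
    return hmac.new(irk, prand, hashlib.sha256).digest()[:3]

def generate_rpa(irk):
    # GENERATE: 48-bit address = prand (24 bits, upper) || hash (24 bits).
    while True:
        prand = bytearray(secrets.token_bytes(3))
        prand[0] = (prand[0] & 0x3F) | 0x40            # sub-type bits 0b01 (Table 1)
        random_part = int.from_bytes(prand, "big") & ((1 << 22) - 1)
        if 0 < random_part < (1 << 22) - 1:            # needs at least one 0 and one 1
            return bytes(prand) + _ah(irk, bytes(prand))

def resolve(addr, known_irks):
    # RESOLVE: try every known IRK; whichever reproduces the hash identifies
    # the device (its Identity Address is returned), otherwise None.
    prand, hash_part = addr[:3], addr[3:]
    for irk, identity in known_irks.items():
        if _ah(irk, prand) == hash_part:
            return identity
    return None
```

A scanner holding the peer's IRK can thus link successive random addresses back to one Identity Address, while an observer without the IRK sees only unlinkable 48-bit values.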
\subsection{\textbf{KEY generation and distribution}} \textit{PrivacyManager} focuses on supporting resolvable private addresses~-- the center of all privacy provisions in the current BLE release\cite{bt52} (cf. Section~\ref{ss:blepprovision}). For a node to generate a resolvable private address, it must have either the Local Identity Resolving Key (IRK) or the Peer Identity Resolving Key (IRK)\label{pair}. This 128-bit key is proof of possession of a particular private address. In real devices, IRKs are exchanged through specific control messages. In \textit{SimBle}, we generate the IRK randomly at each Node when it is initialized in the simulation. The delay caused by the key exchange in real hardware is emulated by \textit{swapDelay}, which we describe in the next section. Simultaneously, the Node also generates an Identity Address, which is a unique identifier of the device. In this paper, the Node and the \textit{NetDevice} essentially mean the same thing in terms of BLE-associated parameters. This is because the remaining modules inside the node (i.e., the socket and the application modules) are not dependent on the BLE standard itself. Finally, before creating links in \textit{SimBle} and installing an application on top of the declared nodes, each node updates a list in its respective \textit{NetDevice}. This list contains the (IRK : Identity Address) pairs of each of the fellow BLE nodes instantiated in the simulator. \subsection{\textbf{Generation of Randomized MAC}} The format of a resolvable private address is shown in Figure \ref{fig:2}. The resolvable private address is generated from the IRK and a 24-bit number known as \textit{prand}. We see that it can be divided into two blocks of 24 bits each. The first block consists of the 24-bit hash introduced in [Alg. \ref{AddressGeneration} line \ref{gen:prune}].
\textit{SimBle} incorporates AES (Advanced Encryption Standard) support, as recommended by the standard\cite{bt52}, for encrypting the plain-text data into a ciphered block\cite{aes1}\cite{aes} in the process of randomized MAC address generation. \begin{figure}[htbp] \hfill\includegraphics[scale=0.6]{./plots/ble_stack-Page-2.pdf}\hspace*{\fill} \caption{Format of a Resolvable Private Address} \label{fig:2} \end{figure} The second block consists of \textit{prand}. In the case of a resolvable private address, \textit{prand} has its two most significant bits set to 0 and 1 (sub-type 0b01, cf. Table \ref{table:1}), as shown in Figure \ref{fig:2}. The random part of \textit{prand} must contain at least one bit set to 0 and one bit set to 1. We describe in detail the generation of the resolvable private address by the \textit{PrivacyManager} in [Alg. \ref{AddressGeneration}]. \begin{algorithm}[htbp] \caption{SimBle's Resolvable Private Address generation} \label{AddressGeneration} \begin{algorithmic}[1] \Procedure{Generate}{$IRK$} \Comment{Input variable} \label{gen}\newline \Comment{Prepare encryption inputs} \State $prand \gets genPrand()$ \label{gen:prand} \State $padding \gets genPaddingBits(104)$ \label{gen:pad} \State $plaintext \gets Concatenate(padding, prand)$ \label{gen:concat}\newline \Comment{AES encryption} \State $aesobj \gets AES(IRK)$ \label{gen:aesob} \State $ciphertext \gets aesobj.getEncrypt(plaintext)$ \newline \label{gen:cipher} \Comment{Getting MAC address} \State $prunedcipher \gets getLeastSigBits(ciphertext, 24)$ \label{gen:prune} \State $macstr \gets Concatenate(prunedcipher, prand)$ \label{gen:macstr} \State $macaddr \gets toMacHex(macstr)$\newline \label{gen:macaddr} \State \textbf{return} \Comment{Returns a Resolvable Private Address} \EndProcedure \newline \Procedure{Update}{$randInterval, swapDelay, IRK$} \quad \label{up} \Comment{Input variables} \State $roundIndex = getCurrentRoundIndex()$ \label{up:ri} \State $macDevice = \Call{Generate} {IRK}$ \label{up:setmac}\newline \Comment{Check
if this call is just after device initialization} \If{$roundIndex == 1$}\newline \Comment{Calculate time offset for recursive callback} \State $nextUpOffset \gets getURV(0, randInterval)\newline+ swapDelay$ \label{up:urv} \Else \State $nextUpOffset \gets randInterval + swapDelay$ \label{up:rintvl} \EndIf\label{SatControlIf} \newline \Comment{Schedule a callback after offset expires} \State $incRoundIndex()$ \label{up:incr} \State Schedule(\Call{Update} {}, nextUpOffset) \label{up:call1} \EndProcedure \end{algorithmic} \end{algorithm} Each node in \textit{SimBle} has an instance of \textit{PrivacyManager}, as illustrated earlier in Figure~\ref{fig:2}. [Alg. \ref{AddressGeneration}] performs two major functions. GENERATE [Alg. \ref{AddressGeneration} line \ref{gen}] takes the \textit{IRK} as input and generates a resolvable private address for the node, while UPDATE [Alg. \ref{AddressGeneration} line \ref{up}] takes care of the calls necessary to update a device's MAC address according to the user-specified BLE standard and device class that we are trying to emulate. Whenever GENERATE is called, we generate a 24-bit value whose two most significant bits are \textit{10}. The remaining bits are random, and we use this value as \textit{prand}, the trailing half of a resolvable private address [Alg. \ref{AddressGeneration} line \ref{gen:prand}]. The generated \textit{prand} is then prepended with 104 null padding bits [Alg. \ref{AddressGeneration} line \ref{gen:concat}]. We call the resulting value \textit{plaintext}, as it is given as input to the encryption. Then, we create an instance of the AES algorithm initialized with the IRK of the current node [Alg. \ref{AddressGeneration} line \ref{gen:aesob}]. The AES instance then encrypts the \textit{plaintext} to produce 128 bits of \textit{ciphertext} [Alg. \ref{AddressGeneration} line \ref{gen:cipher}].
We take the 24 least significant bits of the \textit{ciphertext} [Alg. \ref{AddressGeneration} line \ref{gen:prune}] and concatenate them with the previously generated \textit{prand} to obtain a 48-bit string [Alg. \ref{AddressGeneration} line \ref{gen:macstr}]. The resulting string is finally formatted as a 48-bit IEEE MAC address to produce a resolvable private address [Alg. \ref{AddressGeneration} line \ref{gen:macaddr}]. Once the randomized MAC address is generated, the next step is to change this address dynamically while respecting the standard. This is done by the UPDATE function of \textit{PrivacyManager}, which takes three arguments. One of them is the \textit{IRK}, the identity resolving key of the node, which we have already discussed. The other two arguments are device-dependent, and users are free to allocate specific values to them. They are as follows: \begin{itemize} \item \textbf{randInterval:} This is the time after which a specific device generates a new resolvable private address. In the BLE 4.1 standard\cite{bt41}, the most prevalent Bluetooth standard in current mobile devices, this interval is fixed at 15 minutes. In the most recent release, BLE 5.2\cite{bt52}, vendors are free to randomize the MAC address before the 15-minute mark. However, the standard recommends not updating the address too frequently, as it might affect the paired devices' performance due to the increased number of control messages that must be exchanged after generating a new address. \textit{SimBle} takes the BLE standard and device class as input from the user at the initialization of nodes to calculate the respective \textit{randInterval} value. \item \textbf{swapDelay:} It is introduced to emulate the behavior of the device in practice. We see from the experiments that devices take some time before they generate and advertise a new randomized address.
This delay is caused by the resources used in address generation and in updating the current MAC-level state. \textit{swapDelay} can be device-specific. We empirically choose the value to be 10 times the \textit{beacon transmission interval}, after measuring this delay in experiments on a large set of BLE devices broadcasting beacons. \end{itemize} On receiving the input arguments, UPDATE first checks the iteration index of this call and stores it as \textit{roundIndex} [Alg. \ref{AddressGeneration} line \ref{up:ri}]. For calls to UPDATE, \textit{roundIndex} has a value greater than or equal to 1. It distinguishes the two states in which a node can generate a new address. The first state (\textit{roundIndex}=1) is when a node obtains a new address just after spawning inside the simulation, while the second state (\textit{roundIndex}$>$1) is when the node requests an address after the expiration of the old one. GENERATE is called from UPDATE to assign the device a new resolvable private address [Alg. \ref{AddressGeneration} line \ref{up:setmac}]. After assigning the randomized address, UPDATE calculates the duration for which this address will be valid. If the device calls UPDATE for the first round, we calculate this duration by drawing a random value from a uniform distribution over [0, \textit{randInterval}] and adding \textit{swapDelay} to it [Alg. \ref{AddressGeneration} line \ref{up:urv}]. We do this to respect the standard's guidelines for setting the address expiration timers, as discussed in Section~\ref{ss:blepprovision}. If the device has already changed its MAC address since spawning, we instead set the offset to the sum of \textit{randInterval} and \textit{swapDelay} [Alg. \ref{AddressGeneration} line \ref{up:rintvl}].
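The generation and update logic described above can be sketched in Python as follows. This is an illustrative mirror of [Alg. \ref{AddressGeneration}], not \textit{SimBle}'s implementation: all names are ours, and a keyed hash stands in for the AES-128 primitive mandated by the standard so that the sketch stays dependency-free.

```python
import hashlib
import random
import secrets

def gen_prand() -> int:
    """24-bit prand: fixed '10' prefix; the 22 random bits must not be all 0s or all 1s."""
    while True:
        rnd = secrets.randbits(22)
        if rnd not in (0, (1 << 22) - 1):
            return (0b10 << 22) | rnd

def encrypt_128(irk: bytes, plaintext: bytes) -> bytes:
    """Stand-in for AES-128(IRK, plaintext). The BLE standard mandates AES;
    a keyed hash is used here only to avoid external dependencies."""
    return hashlib.sha256(irk + plaintext).digest()[:16]

def generate(irk: bytes, prand=None) -> str:
    """Mirror of GENERATE: hash = 24 LSBs of E(IRK, padding || prand),
    address = hash || prand (block order as in the text)."""
    if prand is None:
        prand = gen_prand()
    # 104 padding bits, then the 24-bit prand as the least significant bits.
    plaintext = prand.to_bytes(3, "big").rjust(16, b"\x00")
    hash24 = int.from_bytes(encrypt_128(irk, plaintext)[-3:], "big")
    mac48 = (hash24 << 24) | prand
    return ":".join(f"{(mac48 >> s) & 0xFF:02x}" for s in (40, 32, 24, 16, 8, 0))

def next_update_offset(round_index: int, rand_interval: float,
                       swap_delay: float) -> float:
    """Mirror of UPDATE's offset rule: uniform draw on the first round,
    fixed randInterval + swapDelay afterwards."""
    if round_index == 1:
        return random.uniform(0, rand_interval) + swap_delay
    return rand_interval + swap_delay
```

For a fixed IRK and \textit{prand}, `generate` is deterministic, which is exactly what lets a peer in possession of the IRK recompute and verify the hash during resolution.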
Finally, we increment the \textit{roundIndex} and schedule a recursive callback to UPDATE after the expiration of the offset calculated above [Alg. \ref{AddressGeneration} line \ref{up:call1}], so that new resolvable private addresses keep being generated throughout the simulation. \subsection{\textbf{Resolution of Randomized MAC}} Generating a MAC address alone is not sufficient for a BLE device. The receiving node must be able to ``resolve'' the private address, i.e., associate it with the sending device's identity. A resolvable private address can be resolved if the sending device's IRK is available to the receiver. If the address is resolved, the receiving device can associate it with the peer device. To support this privacy-preserving feature, we must answer two questions: how does a device resolve a private address? And where must the validity of the private address in a packet being handled inside \textit{SimBle} be checked? The first question is answered by RESOLVE [Alg. \ref{AddressResolution} line \ref{resl}], while CHECKVALIDATION [Alg. \ref{AddressResolution} line \ref{val}] answers the second. As briefly stated earlier, RESOLVE returns a tuple consisting of (\textit{resolved}, \textit{resIDAdd}). Here, \textit{resolved} states whether the resolution attempt of the \textit{privateAddress} was successful. If the private address is resolved, \textit{resIDAdd} contains the Identity Address of the node that created the private address; otherwise, it is an empty string. Whenever a node receives a resolvable private address, the corresponding \textit{PrivacyManager} calls RESOLVE with \textit{privateAddress} and \textit{irkIAddPairList} as input.
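The resolution logic just outlined can be sketched in Python as follows (an illustrative mirror of RESOLVE with names of our choosing; as in the generation sketch, a keyed hash stands in for the AES-128 primitive the standard mandates, and the hash-first block order follows the text):

```python
import hashlib

def encrypt_128(irk: bytes, plaintext: bytes) -> bytes:
    """Stand-in for AES-128(IRK, plaintext); the standard uses AES."""
    return hashlib.sha256(irk + plaintext).digest()[:16]

def resolve(private_address: int, irk_id_pairs):
    """Mirror of RESOLVE: recompute the 24-bit hash with every known IRK
    and compare it with the hash block of the received address.

    irk_id_pairs: list of (IRK bytes, Identity Address string) pairs.
    Returns (resolved, identity_address)."""
    sender_hash = (private_address >> 24) & 0xFFFFFF   # leading 24-bit block
    sender_prand = private_address & 0xFFFFFF          # trailing 24-bit block
    plaintext = sender_prand.to_bytes(3, "big").rjust(16, b"\x00")
    for irk, identity_address in irk_id_pairs:
        local_hash = int.from_bytes(encrypt_128(irk, plaintext)[-3:], "big")
        if local_hash == sender_hash:
            return True, identity_address
    return False, ""
```

A receiver that holds the sender's IRK recovers the Identity Address; any other receiver gets `(False, "")`, which is precisely the unlinkability the randomization scheme aims for.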
While \textit{privateAddress} is the sending device's randomized MAC address, \textit{irkIAddPairList} is the locally maintained list of (\textit{IRK}, \textit{Identity Address}) pairs at the resolving node, as described in Section~\ref{pair}. RESOLVE first extracts the \textit{hash} and \textit{prand} parts of the private address [Alg. \ref{AddressResolution} lines \ref{res:shash}-\ref{res:sprand}], following the format in Figure~\ref{fig:2}. We pad the extracted \textit{senderPrand} with 104 null bits to obtain \textit{plaintext}, the resulting byte array [Alg. \ref{AddressResolution} line \ref{res:plain}]. \begin{algorithm}[htbp] \caption{SimBle's Resolvable Private Address resolution} \label{AddressResolution} \begin{algorithmic}[1] \Procedure{Resolve}{$privateAddress, \newline irkIAddPairList$} \label{resl} \Comment{Input variables}\newline \Comment{Extract hash and random part of privateAddress} \State $senderHash \gets extractHash(privateAddress)$ \label{res:shash} \State $senderPrand \gets extractPrand(privateAddress)$ \label{res:sprand} \State $padding \gets genPaddingBits(104)$ \State $plaintext \gets Concatenate(padding, senderPrand)$ \label{res:plain} \State $resolved \gets FALSE$ \State $resIDAdd \gets NULLSTR$\newline \Comment{Check if Sender hash is valid} \For{\texttt{$IRK, IDAdd \quad in \quad irkIAddPairList$}} \State \texttt{$aesobj \gets AES(IRK)$} \label{res:aesob} \State \texttt{$ciphertext \gets aesobj.getEncrypt(plaintext)$} \label{res:cipher} \State \texttt{$localHash \gets getLeastSigBits(ciphertext, 24)$} \State \texttt{$resolved \gets isEqual(localHash, senderHash)$} \If{$resolved == TRUE$} \State $resIDAdd \gets IDAdd$ \EndIf \EndFor\newline \Comment{Return resolved status \& Identity Address} \State \textbf{return ($PAIR(resolved, resIDAdd)$)} \EndProcedure \newline \Procedure{CheckValidation}{} \label{val} \newline \Comment{Call RESOLVE to validate private address if any of the function
calls below is triggered in \textit{SimBle}} \If{$\textbf{BroadbandManager:} LinkExists(), GetLinkManager(), GetLink()$\label{val:brod} \textbf{ or } $\textbf{LinkController:} CheckReceivedAckPacket()$\label{val:cond}} \State $\Call{Resolve}{privateAddress, irkIAddPairList}$ \label{val:res} \EndIf \EndProcedure \end{algorithmic} \end{algorithm} Before considering a \textit{privateAddress} to be resolved, the handling node checks the validity of the address. A valid private address is one that can be resolved using one of the \textit{IRKs} in the list available at the resolving node. To perform this verification, we iterate over the (\textit{IRK} : \textit{Identity Address}) pairs in \textit{irkIAddPairList}. For each pair, we create an instance of the AES algorithm initialized with the IRK [Alg. \ref{AddressResolution} line \ref{res:aesob}]. The AES instance then encrypts the \textit{plaintext} to produce 128 bits of \textit{ciphertext} [Alg. \ref{AddressResolution} line \ref{res:cipher}]. We take the 24 least significant bits of the \textit{ciphertext} to obtain the \textit{localHash}. If the value of \textit{localHash} matches the earlier extracted \textit{senderHash} [Alg. \ref{AddressResolution} line \ref{res:shash}] for any of the iterations, RESOLVE successfully returns the (TRUE, \textit{Identity Address}) pair. Otherwise, the resolution is considered a failure and RESOLVE returns the (FALSE, \textit{" "}) pair. After resolving a private address, we look into the framework of \textit{SimBle} to identify the modules that need address resolution. We identify two modules that need to call \textit{PrivacyManager}'s RESOLVE procedure through CHECKVALIDATION: \textit{BroadbandManager} and \textit{LinkController} [Alg. \ref{AddressResolution} line \ref{val:brod}]. Whenever \textit{BroadbandManager} receives a packet from the \textit{NetDevice}, RESOLVE is called in two cases. The first is when it checks for or tries to fetch the link.
The second is when it requests the \textit{LinkManager} for the destination node. We do this to ensure that the Identity Address resolved from the destination address matches the Identity Address of the existing link. Finally, CHECKVALIDATION also needs to check whether the sender address of a correctly received packet at the \textit{LinkController} can be resolved using one of the stored \textit{IRKs} at the receiver~[Alg. \ref{AddressResolution} line \ref{val:cond}]. \section{Validation}\label{valid} For the validation of \textit{SimBle}, it is fundamental to evaluate the functionalities of the introduced \textit{PrivacyManager}. Therefore, resolvable private address generation and resolution must be validated. Specifically, we must show that the generated randomized addresses are very close to what real-world devices advertise. We also have to show that BLE data communication continues flawlessly between paired devices even when they change their advertised MAC addresses. In this case, we assume that the devices have exchanged each other's \textit{IRK} during initialization. All the MAC addresses shown in the paper are hashed using SHA-256 and truncated to the first 8 bytes for illustration purposes. \subsection{Validating private address generation} To assess whether \textit{SimBle} can emulate a real-world trace, we first collect real traces from real experiments. Then, we compare traces captured from actual devices emitting public packets with traces generated by initializing similarly behaving devices inside the simulator. This comparison aims to show that \textit{SimBle} can emulate the same behavior in terms of randomized MAC advertisements and the transmission of public packets. \subsubsection{Experimental setup} As a sniffer, we use the Bluetooth chipset of a Raspberry Pi 4B to capture Bluetooth public packets. The capture is done in a controlled environment inside a Faraday cage.
We choose two devices, an Apple iPad Pro 3 and an iPad Mini 2, emitting public packets in the cage for 40 minutes using BLE 4.1, which are captured by the Raspberry Pi. We are mainly interested in the captured timestamps and the LAP (lower address part) of the advertised beacons in the collected traces. The LAP refers to the least significant 24 bits of a BLE MAC address. Even though we do the trace collection in non-public environments, we still present hashed values to protect the devices' privacy. For the devices inside the simulator, we set the BLE standard at initialization to release 4.1, which fixes the MAC address regeneration interval at 15 minutes. Afterward, we install a broadcast application on top of the spawned nodes. We set the frequency of beacon transmissions in the application to the mean device broadcast interval observed in the real-world sniffer capture. We found this value to be 2 seconds. Moreover, we place a sniffer at the center of a 10$\times$10 m square area in which the initialized emitting devices are statically present. The sniffer captures on the three public BLE channels. The area is kept small to avoid transmission errors due to the distance between the devices and the sniffer, since no such errors are present in the Faraday-cage real-world experiment described earlier. The simulation parameters are illustrated in Table~\ref{table:2}.
\begin{table}[htbp] \centering \begin{tabular}{ |c|c| } \hline Parameter & Value\\ \hline Simulation area & 10$\times$10 m \\ Packet size & 20 bytes\\ Simulation duration & 2410 seconds \\ Packet sending duration & 2400 seconds\\ Path loss model & Nakagami \\ Number of nodes & N \\ Mobility model (nodes) & static \\ Number of sniffers & M \\ Mobility model (sniffer) & static \\ Beacon interval & 2 seconds \\ Connection interval & 6.25 ms \\ Swap delay & 10 $\times$ beacon interval \\ BLE standard & BLE 4.1 \\ \hline \end{tabular} \caption{Simulation parameters for \textit{SimBle} validation} \label{table:2} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}{0.5\textwidth} \hfill\includegraphics[scale=1]{./plots/mass_assoc.pdf}\hspace*{\fill} \caption{Real-World} \label{fig:7a} \end{subfigure} \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=1]{./plots/mass_assoc_ns.pdf}\hspace*{\fill} \caption{SimBle} \label{fig:7b} \end{subfigure} \caption{Observed public packet addresses in real-world vs \textit{SimBle} by two devices. Each color represents a device broadcasting anonymized addresses.} \label{fig:7} \end{figure} \subsubsection{Observations} The first observation is related to the changing of the MAC addresses. For the real experiments, we turn on the Bluetooth of the two iPads at the start of sniffing, since otherwise the first change in MAC address would occur at a random time and the trace would be hard to use for validation. As we can see in Figure~\ref{fig:7a}, the randomized MAC addresses change every 15 minutes throughout the capture duration. Like the real iPads, the iPads emulated inside the simulation change their MAC addresses every 15 minutes, as shown in Figure~\ref{fig:7b}.
\begin{figure}[htbp] \hfill\includegraphics[scale=1]{./plots/pubpackt.pdf}\hspace*{\fill} \caption{Real-world vs \textit{SimBle} inter public packet times} \label{fig:5} \end{figure} After validating the role of \textit{PrivacyManager} in private address generation, we validate whether the rest of the BLE stack can emulate the chosen real device. We do this by looking at the inter-packet times for public packets observed at the sniffer inside \textit{SimBle} and in the real world, keeping the same experimental setup and generated traces. We observe in Figure~\ref{fig:5} that, for both devices, the real-world and \textit{SimBle} inter-packet intervals at the sniffer have a mean value of 2 seconds. A deviation of up to 20 milliseconds is expected, as the sniffer captures at random on one of the three public BLE channels and may miss some public packets on the others; a Bluetooth public packet is broadcast on all three public channels within a 20-millisecond time frame. This validates the overall handling of public packets in \textit{SimBle}. \begin{figure}[htbp] \hfill\includegraphics[scale=1]{./plots/mass_assoc_data.pdf}\hspace*{\fill} \caption{Sent and received data packets by two paired BLE devices inside \textit{SimBle}} \label{fig:6} \end{figure} \subsection{Validating private address resolution} To validate the resolution of private addresses in \textit{SimBle}, we consider a simple scenario where a transmitter and a receiver node are paired inside it. This allows us to inspect the global trace obtained from the send and receive logs and deduce whether the data communication was continuous despite the sender and receiver changing their MAC addresses. As we can see in Figure \ref{fig:6}, the sender changes its private address at around the 13-minute mark. However, the receiver BLE application continues to process and receive packets, as it can resolve the new private address to the sender's Identity Address, being in possession of its \textit{IRK}.
Similarly, at around 32 minutes, we observe that the receiver changes its private address. The new address is communicated to the sender through beacons, and the sender in turn resolves and verifies the receiver's private address. Therefore, the sender is seen sending its data to the receiver seamlessly. This experiment thus shows that \textit{SimBle}'s [Alg. \ref{AddressResolution}] correctly handles BLE MAC randomization. \subsection{Validating optimized trace-collection} In Section~\ref{optim}, we discussed the need to optimize the trace-collection procedure to obtain traces in a reasonable time. We validate the improvement brought by \textit{SimBle} in terms of run-time by increasing the density of devices up to 1 device per square meter around a sniffer for a simulation duration of 30 seconds. The density is varied by increasing the number of devices up to 100 in 100 square meters around the sniffer. As we can observe in Figure~\ref{fig:9}, optimized sniffing improves the simulation run-time by up to a factor of 100. In conclusion, since we generally have to simulate a considerably long duration to test BLE privacy provisions (most MAC addresses change around every 15 minutes), \textit{SimBle} can optimize the sniffing to generate traces in a reasonable amount of time. \begin{figure}[htbp] \hfill\includegraphics[scale=1]{./plots/timp.png}\hspace*{\fill} \caption{Performance gain in run-time with optimized sniffing inside simulation} \label{fig:9} \end{figure} \section{Case Study}\label{cst} MAC address association refers to defeating the anonymization techniques used by devices and being able to track a particular device. Recently, many strategies have been suggested to associate the different private addresses publicly advertised by the same device \cite{ccnc21/JounasVAF21}\cite{becker2019tracking}\cite{celosia2019saving}\cite{martin2019handoff}.
For instance, \cite{becker2019tracking}\cite{celosia2019saving} show that manufacturers like Apple and Microsoft leak partial identifiers in the data field of public packets, which can be easily exploited. In \cite{martin2019handoff}, the authors reverse engineer the continuity protocol messages of Apple devices. They show that fingerprinting the device, as well as behaviorally profiling users, is possible using the contents of public BLE messages. They also demonstrate that predictable frame sequence numbers in these messages leave the possibility of tracking Apple devices across space and time. As we mention in Section~\ref{intro}, \cite{9152700} also discusses a de-anonymization strategy. The authors of \cite{9152700} state that the focus of their solution is Bluetooth Classic (BT), not BLE, because of the absence of MAC address randomization there. Besides, the proposed strategy requires specific sniffing devices and targets only private packets. We believe that this approach cannot be considered fully generic and scalable. Contrary to the above BLE strategies~\cite{becker2019tracking}\cite{martin2019handoff}\cite{celosia2019saving}, which target specific devices like Apple's, \cite{ccnc21/JounasVAF21} proposes a method that associates MAC addresses from a device based on its emitted public packets. This makes \cite{ccnc21/JounasVAF21} independent of the device vendor and generic for any BLE device, as it relies only on beacons, whatever the application used. They identify devices across time using an identifier that discriminates a subset of devices at a given time, that is, a \textit{weak identifier}, and achieve close to $100\%$ accuracy in controlled environments, as shown in Figure~\ref{fig:8}.
Therefore, \textit{we decided to implement and study the performance of~\cite{ccnc21/JounasVAF21} when using \textit{SimBle}, since to the best of our knowledge, it is the only generic BLE MAC address association strategy currently available in the literature.} We evaluate it using the traces and the \textit{ground truth} generated by \textit{SimBle}. \subsection{Algorithm Overview} \label{loicalgo} The association strategy proposed in~\cite{ccnc21/JounasVAF21} can be summarized in the following three steps: \begin{enumerate} \item \textit{Identifying the MAC conflicts across time: } Whenever we look at passively sniffed traces of public BLE packets across time, it is very probable that two or more devices change their randomized MAC addresses around the same time. These are identified as \textit{conflicts} by~\cite{ccnc21/JounasVAF21} and seen over the entire sniffing duration as \textit{conflict clusters}. The authors also define \textit{dswap} as the time that separates consecutive distinct private addresses from a particular device. For each address change seen in the trace, there is a set of appearing and disappearing MAC addresses in the interval \textit{dswap}. They are associated using Linear Assignment \cite{martello1987linear}, where the weights of possible associations are chosen as distances between \textit{weak identifiers}, which are described next. \item \textit{Finding a weak identifier: } A device constant can be a weak identifier if it is accessible to the sniffer and it splits the device population into a few groups that are distributed as uniformly as possible. \cite{ccnc21/JounasVAF21} choose the fixed part of the time between advertising packets in BLE as the weak identifier and call it the \textit{characteristic time}. \item \textit{Resolving MAC conflicts: } \textit{Union Find} \cite{harfst2000potential} is used to break the conflict clusters into groups of appearing and disappearing MACs.
Finally, all conflicts seen in the observed trace are resolved by using the absolute difference between the characteristic times as association weights for the Linear Assignment. \end{enumerate} \subsection{Study of the association strategy} \label{study} We identify three aspects to which the effectiveness of the association strategy \cite{ccnc21/JounasVAF21} is most sensitive: \begin{enumerate} \item \myitem{Conflict size and \textit{dswap} chosen: } As the number of devices in the sniffing zone increases, the number of devices that change their private addresses around the same time also increases. We saw in Section~\ref{loicalgo} that the weak identifier is used to resolve conflicts. We define the number of devices in a single conflict as the \textit{conflict size}. Increasing conflict sizes in the conflict cluster have two major consequences in \cite{ccnc21/JounasVAF21}. Firstly, weak identifiers become less effective in resolving conflicts during Linear Assignment, because a large number of devices causes more possible associations to have similar weights. Secondly, we identify the strategy~\cite{ccnc21/JounasVAF21} to be quadratic in run-time. Thus, using Linear Assignment for the resolution of a huge set of conflicting MAC addresses is practically not feasible for device-tracking purposes. We see \textit{dswap} as a critical parameter in \cite{ccnc21/JounasVAF21}. It cannot be chosen arbitrarily large, as this results in very large conflict clusters containing MAC addresses that do not belong to a single conflict. Conversely, a relatively small value leads to the exclusion of actual conflicts. For the evaluation of the association strategy, we set \textit{dswap} to 10 times the \textit{characteristic time}, as recommended as optimal by~\cite{ccnc21/JounasVAF21}. \item \myitem{Device diversity in the population: } The effectiveness of association also depends on the diversity of devices in the sniffed trace.
This is because the \textit{characteristic times} of devices vary more with diversity, making it easier for the Linear Assignment to separate candidate associations whose weights would otherwise be similar. \cite{ccnc21/JounasVAF21} also uses the vendor information in public packets as an identifier while resolving conflicts. Filtering out possible associations with different vendors in the advertised packet increases the chance of correct MAC address association. \item \myitem{Mobility observed in trace: } The \textit{characteristic time} used as a \textit{weak identifier} is calculated from the sequence of packet timestamps observed in the trace. If there is a high degree of mobility around the sniffer, devices keep entering and leaving the sniffing zone. This introduces errors in the weights chosen by \cite{ccnc21/JounasVAF21} for possible association pairs during conflict resolution. Hence, the accuracy of MAC address association naturally decreases. \end{enumerate} \subsection{Evaluation} \label{eval} In the following, we evaluate the accuracy of MAC address association and the growth of the conflict cluster size for various realistic scenarios. In \textbf{scenario 1}, we choose \textit{BLE 4.1}, since it is the most prevalent BLE release in devices today. We also choose a \textit{single device class}, namely smartphones. Smartphones largely fall into the device class of \textit{moderate emitters}, as stated earlier in Section~\ref{hetero}. The randomization interval in BLE 4.1 is set to 15 minutes. For \textbf{scenario 2}, we choose \textit{BLE 4.1} and \textit{multiple device classes}. We emulate an environment with different device classes, including co-existing smartphones, smartwatches, earbuds, etc. Finally, in \textbf{scenario 3}, we consider \textit{BLE 5.2} and \textit{multiple device classes}. Here, we emulate a diverse range of devices supporting the latest release, BLE 5.2.
We choose this BLE standard because, unlike BLE 4.1, it allows vendors to keep the private address generation interval below 15 minutes. However, the standard advises against randomization intervals smaller than 15 minutes, as they could affect performance due to increased connection times. We deliberately draw the randomization interval from a uniform distribution over the range (3, 15) minutes to observe how \cite{ccnc21/JounasVAF21} performs as more and more vendors shorten the private address generation interval. We evaluate all the scenarios for the following mobility-profiles: \begin{enumerate} \item \textit{Static-Confined: } Here the devices are static and are always present in the sniffing zone. \item \textit{Mobile-Free: } In this profile, devices are mobile and are free to leave and enter the sniffing zone. We try to mimic human mobility by using a random-walk mobility model with a speed of 1.5~m/s and a direction change every 2~s. \end{enumerate} We generate all the traces and the associated \textit{ground truth} by simulating several BLE devices and a sniffer for 40 minutes using \textit{SimBle}. We prefer one long run over multiple short runs, as it gives a detailed insight into how conflicts evolve with time. It is essential to note how accurately the strategy in Section~\ref{loicalgo} resolves the MAC addresses from a single device over the capture duration. For the \textit{Static-Confined} mobility-profile, we place a sniffer at the center of a 100-square-meter area and vary the number of BLE devices/nodes up to 100. We choose this area to make sure that nodes are always in range of the sniffer. As shown in Table~\ref{table:2}, we use the \textit{Nakagami} path loss model and consider the successful BLE transmission range to be around 20 meters. In the case of the \textit{Mobile-Free} mobility-profile, we deliberately take a 2500-square-meter area and place the sniffer at its center.
BLE nodes perform a random walk in that area and thus move in and out of the sniffing range. \begin{figure*}[htbp] \centering \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.95]{./plots/accuracypaper1.pdf} \caption{Scenario 1} \label{fig:10a} \end{subfigure}% \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=0.95]{./plots/conflictpaper1.pdf}\hspace*{\fill} \caption{Scenario 1} \label{fig:10b} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.95]{./plots/accuracypaper2.pdf} \caption{Scenario 2} \label{fig:10c} \end{subfigure}% \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=0.95]{./plots/conflictpaper2.pdf}\hspace*{\fill} \caption{Scenario 2} \label{fig:10d} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale=0.95]{./plots/accuracypaper3.pdf} \caption{Scenario 3} \label{fig:10e} \end{subfigure}% \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=0.95]{./plots/conflictpaper3.pdf}\hspace*{\fill} \caption{Scenario 3} \label{fig:10f} \end{subfigure} \caption{ Accuracy of MAC address associations and average conflict size observed by MAC association strategy\cite{ccnc21/JounasVAF21} on \textit{SimBle} generated traces for \textit{Static-Confined} and \textit{Mobile-Free} mobility-profiles, described in Section~\ref{eval}} \label{fig:11} \end{figure*} \subsection{Results and Analysis} \begin{enumerate} \item \textbf{Scenario 1: } First, we observe how well the algorithm\cite{ccnc21/JounasVAF21} can defeat MAC randomization and correctly associate private addresses for BLE 4.1 with \textit{moderate emitters}. MAC addresses change every 15 minutes in BLE 4.1. For average conflict sizes below 10, we expect the algorithm in Section~\ref{loicalgo} to perform well both in run-time and accuracy. We observe in Figure~\ref{fig:10a} that the accuracy of association is above $98\%$ for the \textit{Static-Confined} mobility-profile.
Even in the case of \textit{Mobile-Free} nodes, a minimum accuracy of around $91\%$ is seen for 100 devices. Average conflict sizes increase with the number of devices, as expected (Figure~\ref{fig:10b}), but remain well below the bound of 10. Hence, the accuracy of MAC address association is very high for both mobility-profiles. \item \textbf{Scenario 2: } We just saw how accurately MAC addresses from \textit{moderate emitters}, which are generally mobile phones, are associated. We now present a more realistic scenario, where we allow all device classes (Section~\ref{hetero}). This favors MAC association, as described in Section~\ref{study}. We again stick to the privacy behavior of BLE 4.1, as it is the most prevalent standard in current devices. As expected, we observe an increase in accuracy for both mobility-profiles in Figure~\ref{fig:10c}. While MAC addresses of \textit{Static-Confined} nodes are associated with an accuracy close to $100\%$, the minimum accuracy of association for \textit{Mobile-Free} devices also increases to $93\%$. The observed conflict sizes remain small for up to 100 devices, as seen in Figure~\ref{fig:10d}. \item \textbf{Scenario 3: } Finally, we consider multiple device classes with the privacy behavior of BLE 5.2, which allows vendors to change the device's private address before the 15-minute mark (Section~\ref{eval}). We expect the conflict sizes to rise and hence the accuracy to decrease for a large number of devices. As expected, we see a relative decrease in accuracy in Figure~\ref{fig:10e} compared to Figure~\ref{fig:10c}. For 100 devices, the accuracy of MAC address association decreases to around $89\%$ for both mobility-profiles. Conflict sizes increase to a maximum value of 13, as seen in Figure~\ref{fig:10f}, but this is still not large enough to degrade the efficiency of the association strategy~\cite{ccnc21/JounasVAF21}.
\end{enumerate} \textit{The results of the case study show that the MAC address randomization currently proposed by the BLE standard is not enough to safeguard user-privacy. The association strategy\cite{ccnc21/JounasVAF21} can successfully defeat the randomization procedure and correctly fingerprint close to $90\%$ of the devices even in highly dense and mobile scenarios. An adversary could set up multiple sniffers strategically and easily track a particular user device.} \label{obs} The high accuracy of MAC address association in the initial case study made us look into methods to avoid device-traceability. We reduced the \textit{randomization interval} of the device population to 3 minutes. Devices changing their private addresses quickly should lead to larger \textit{conflict sizes} and hence lower accuracy of association by \cite{ccnc21/JounasVAF21}. Using the \textit{Mobile-Free} mobility-profile, we varied the number of devices inside \textit{SimBle} up to 100 for this smaller \textit{randomization interval}. Devices belong to multiple device classes. We observe in Figure \ref{fig:12} that the accuracy indeed decreases to a minimum of around $78\%$, with the \textit{conflict size} growing to 97.
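The qualitative link between the \textit{randomization interval} and the observed \textit{conflict size} can be illustrated with a small back-of-the-envelope simulation. The sketch below is a toy model, not part of \textit{SimBle}: it assumes every device changes its private address periodically with a random phase, and counts how many address changes land within a fixed ambiguity window of one another. The window width and horizon are illustrative assumptions.

```python
import bisect
import random

def avg_conflict_size(n_devices, rand_interval_s, window_s=30, horizon_s=3600, seed=0):
    """Toy estimate of the average conflict size seen by a passive sniffer.

    Assumed model (illustrative, not SimBle's): every device changes its
    private address periodically with period `rand_interval_s` and a random
    phase; two changes conflict when they fall within `window_s` seconds of
    each other, since the sniffer cannot tell which old address maps to
    which new one.
    """
    rng = random.Random(seed)
    changes = []
    for _ in range(n_devices):
        t = rng.uniform(0, rand_interval_s)  # random phase per device
        while t < horizon_s:
            changes.append(t)
            t += rand_interval_s
    changes.sort()
    # For every change, count the changes (including itself) inside the window.
    sizes = [bisect.bisect_right(changes, t + window_s) -
             bisect.bisect_left(changes, t - window_s) for t in changes]
    return sum(sizes) / len(sizes)

# 100 devices: shrinking the interval from 15 to 3 minutes inflates conflicts.
print(avg_conflict_size(100, 15 * 60))  # roughly 6-7 candidates per change
print(avg_conflict_size(100, 3 * 60))   # roughly 30+ candidates per change
```

In this toy model the expected conflict size scales like $N \cdot 2w/T_r$ for $N$ devices, window $w$ and interval $T_r$, which is consistent with the trend above: dividing the randomization interval by five multiplies the ambiguity a sniffer faces by roughly the same factor.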
\begin{figure}[htbp] \centering \begin{subfigure}{0.5\textwidth} \hfill\includegraphics[scale=1]{./plots/accuracypaper4.pdf}\hspace*{\fill} \caption{Accuracy} \label{fig:12a} \end{subfigure} \begin{subfigure}{0.50\textwidth} \hfill\includegraphics[scale=1]{./plots/conflictpaper4.pdf}\hspace*{\fill} \caption{Average conflict size} \label{fig:12b} \end{subfigure} \caption{Accuracy of MAC address associations and average conflict size observed by the MAC association strategy\cite{ccnc21/JounasVAF21} on \textit{SimBle}-generated traces for the \textit{Mobile-Free} mobility-profile with a \textit{randomization interval} of 3 minutes} \label{fig:12} \end{figure} With a single device class, \cite{ccnc21/JounasVAF21} might achieve lower accuracy, but $78\%$ accurate associations are still a threat to user-privacy. Hence lowering the \textit{randomization interval} is not the only measure the BLE standard should adopt. Based on the case study, we summarize the following recommendations to lower the probability of successful MAC address association: \begin{enumerate} \item The recommended \textit{randomization interval} must be lowered. This might lead to increased connection times; optimizations in the IRK exchange and in resolving-list lookups at the receiver could allow BLE devices to change addresses frequently without compromising performance. \item The parameter exploited by \cite{ccnc21/JounasVAF21} in Section \ref{loicalgo} is the \textit{characteristic time}, which acts as a \textit{weak identifier}. This parameter is unique to a device and varies across the device population, which makes identifying a device easier. We suggest that the standard recommend vendors adopt similar \textit{characteristic times}. \end{enumerate} \section{Final remarks and future steps}\label{discussion} \label{conl} MAC address randomization is indispensable for protecting user-privacy in BLE, as we saw in Section \ref{back}.
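For context, the resolvable private addresses (RPAs) discussed above are derived from the IRK roughly as follows: the device combines a 24-bit random part \texttt{prand} (whose two most significant bits mark it as an RPA) with a 24-bit \texttt{hash = ah(IRK, prand)}, and a peer holding the IRK resolves the address by recomputing the hash. The sketch below mimics this flow; note that it uses SHA-256 truncated to 24 bits as a stand-in for the AES-128-based \texttt{ah} function of the Bluetooth specification, so it is illustrative only.

```python
import hashlib
import os

def ah(irk: bytes, prand: bytes) -> bytes:
    """Stand-in for the spec's AES-128-based address-hash function `ah`:
    here SHA-256 truncated to 24 bits, for illustration only."""
    return hashlib.sha256(irk + prand).digest()[:3]

def make_rpa(irk: bytes) -> bytes:
    """Build a resolvable private address: prand (24 bits, top two bits
    set to 0b01 to mark an RPA) followed by hash = ah(IRK, prand)."""
    prand = bytearray(os.urandom(3))
    prand[0] = (prand[0] & 0x3F) | 0x40   # tag the address as resolvable
    return bytes(prand) + ah(irk, bytes(prand))

def resolve_rpa(irk: bytes, rpa: bytes) -> bool:
    """A peer holding the IRK resolves the address by recomputing the hash."""
    prand, rhash = rpa[:3], rpa[3:]
    return ah(irk, prand) == rhash

irk_trusted, irk_other = os.urandom(16), os.urandom(16)
rpa = make_rpa(irk_trusted)
print(resolve_rpa(irk_trusted, rpa))  # True: a bonded peer recognizes the device
print(resolve_rpa(irk_other, rpa))    # False (overwhelmingly likely): a sniffer cannot
```

Optimizing exactly this resolve step (recomputing the hash against every IRK in the resolving list) is what the first recommendation above targets: a cheaper lookup makes shorter randomization intervals affordable.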
If devices keep advertising their true MAC address or their \textit{Identity Address}, they could easily be tracked by coordinated passive sniffing. Widespread usage of resolvable private addresses could protect the privacy of users to some extent. On the other side, vendor-dependent MAC address randomization has made the retrieval of realistic BLE traces more and more challenging. The lack of \textit{ground truth} in randomized traces and the impracticality of large-scale passive trace collection make the testing of solutions based on trajectory reconstruction or user identification \cite{8888137,michau2017bluetooth,xu2020route,bhaskar2014bluetooth,alghamdi2018bluemap,alhamoud2014presence,shao2018bledoorguard} almost impossible. \textit{All of the existing and future works based on device-identification using MAC addresses in BLE must be revisited with the introduction of BLE privacy-provisions like private addresses.} \textit{SimBle} is the answer to this issue, as researchers can now generate large-scale traces with devices of their interest and use them to validate their works. Sniffers can be deployed accordingly to emulate real-world passive trace-collection for BLE. Works that perform BLE MAC address association or device-fingerprinting \cite{ccnc21/JounasVAF21,becker2019tracking,celosia2019saving,martin2019handoff} are threats to the privacy provisions of BLE, as these strategies lead to the tracking of users. \textit{Only \textit{SimBle} can allow the community to compare the effectiveness of any two of these available solutions.} This is because identical conditions are required to compare the evaluations. Experiments and test-beds not only struggle to emulate identical conditions but are also not scalable. Moreover, as discussed earlier, finding \textit{ground truth} for experimentally obtained traces is practically impossible for large-scale testing.
\textit{SimBle} is the first BLE simulation stack capable of generating traces that preserve privacy. It introduces resolvable private addresses, which are the core of BLE device- and network-privacy provisions. We showed that it is capable of emulating the behavior of any real BLE device/hardware. Users have to choose the appropriate device class they want to test, based on the targeted device. It resolves the lack of \textit{ground truth} for scalable scenarios after the introduction of MAC address randomization: \textit{SimBle} provides the associated \textit{ground truth} with every trace that is generated. We presented a case study, using \textit{SimBle}, of the only generic MAC address association strategy for BLE available in the literature. Realistic device and mobility scenarios were used in the evaluation. \textit{The case study revealed the user-privacy trade-off that persists even with MAC address randomization, as close to $90\%$ of private addresses could be associated correctly in the worst case. This enforces the need to revise the recommendations currently proposed in the standard.} Regarding future work, the key distribution could be done using control messages rather than pre-installation at the node. The BLE stack could be enriched by the addition of different device-pairing modes. Also, as one of the aims of \textit{SimBle} is to emulate any real device, more vendor-specific information could be added to facilitate usability. Finally, we aim to evaluate and compare more BLE privacy-related works in the future using \textit{SimBle}. \newpage
\section{Introduction} \subsection{The ASEP and main results} In this paper, we study the upper-tail Large Deviation Principle (LDP) of the \textit{asymmetric simple exclusion process} (ASEP) with step initial data. The ASEP is a continuous-time Markov chain on particle configurations $\textbf{x} = (\textbf{x}_1 > \textbf{x}_2 > \cdots)$ in $\Z$. The process can be described as follows. Each site $i\in \Z$ can be occupied by at most one particle, and each particle carries an independent exponential clock with waiting time of mean $1$. When the clock rings, the particle jumps to the right with probability $q$ or to the left with probability $p=1-q$. However, the jump is only permissible when the target site is unoccupied. For our purposes, it suffices to consider configurations with a rightmost particle. At any time $t \in \R_{>0}$, the process has the configuration $x(t)=(x_1(t)>x_2(t)>\cdots)$ in $\Z$, where $x_j(t)$ denotes the location of the $j$-th rightmost particle at this time. {Appearing first in the biology work of Macdonald, Gibbs, and Pipkin \cite{pip} and introduced to the mathematics community two years later by \cite{spitzer},} the ASEP has since become the ``default stochastic model to study transport phenomena'', including mass transport, traffic flow, queueing behavior, driven lattices and turbulence. We refer to \cite{bcs,liggett,liggett2,spohn} for the mathematical study of and related to the ASEP. When $q=1$, we obtain the \textit{totally asymmetric simple exclusion process} (TASEP), which allows jumps only to the right. It connects to several other physical systems, such as exponential last-passage percolation, the zero-temperature directed polymer in a random environment, and the corner growth process, and is known to possess a complete determinantal structure (\textit{free-fermionicity}). We refer the readers to \cite{joh,liggett,liggett2,pra} and the references therein for more thorough treatises of the TASEP.
The dynamics of the ASEP are uniquely determined once we specify its initial state. In the present paper, we restrict our attention to the ASEP started from the \textit{step} initial configuration, i.e. $x_j(0)=-j$, $j=1,2,\ldots$. We set $\gamma = q-p$ and assume $q>\frac12$, i.e., the ASEP has a drift to the right. An observable of interest in the ASEP is $H_0(t)$, the integrated current through 0, which is defined as: \begin{align}\label{def:ht} H_0(t) := \mbox{ the number of particles to the right of zero at time }t. \end{align} $H_0(t)$ can also be interpreted as the one-dimensional height function of the interface growth of the ASEP and thus carries significance in the broader context of the Kardar-Parisi-Zhang (KPZ) universality class. We will elaborate on the connection to the KPZ universality class later in Section \ref{sec:pre}. As a well-known random growth model itself, the large-time behavior of the ASEP with step initial condition has been well-studied. Indeed, it is known \cite[Chapter VIII, Theorem 5.12]{liggett} that the current satisfies the following strong law of large numbers: \begin{align*} \tfrac1t{H_0\big(\tfrac{t}\gamma\big)} \rightarrow \tfrac{1}{4}, \mbox{ almost surely as } t\to \infty. \end{align*} The strong law has later been complemented by fluctuation results in the seminal works of Tracy and Widom. In a series of papers \cite{tw3,tw1,tw2}, Tracy and Widom exploited the integrability of the ASEP with step initial data and established via contour analysis that $H_0\big(\tfrac{t}\gamma\big)$, when centered by $\frac{t}{4}$, has typical deviations of order $t^{1/3}$ and the following asymptotic fluctuations: \begin{align}\label{eq:clt2} {\tfrac{1}{t^{1/3}}2^{4/3}\big(-H_0\big(\tfrac{t}\gamma\big) + \tfrac{t}{4}\big) \implies \xi_{\operatorname{GUE}},} \end{align} where $\xi_{\operatorname{GUE}}$ is the GUE Tracy-Widom distribution \cite{tw4}. When $q = 1$, \eqref{eq:clt2} recovers the corresponding result for TASEP, which was proved earlier in \cite{joh}.
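The law of large numbers above is easy to probe by direct simulation. The sketch below (our own illustration, not taken from the literature) runs a rejection-based continuous-time simulation of the ASEP started from step initial data and checks that $H_0(t/\gamma)/t$ is close to $1/4$; the truncation to finitely many particles and all parameter values are illustrative assumptions.

```python
import random

def simulate_asep_current(q, t, n_particles=300, seed=1):
    """Rejection-based continuous-time simulation of the ASEP with step
    initial data x_j(0) = -j.  Only the first `n_particles` particles are
    kept, which is harmless as long as the leftmost ones never move on the
    simulated horizon.  Returns H_0(t/gamma), the number of particles
    strictly to the right of the origin at time t/gamma, gamma = q - p.
    """
    rng = random.Random(seed)
    gamma = q - (1.0 - q)
    pos = [-(j + 1) for j in range(n_particles)]
    occupied = set(pos)
    clock, horizon = 0.0, t / gamma
    while True:
        # every particle rings at rate 1, so some clock rings at rate n_particles
        clock += rng.expovariate(n_particles)
        if clock > horizon:
            break
        i = rng.randrange(n_particles)
        target = pos[i] + (1 if rng.random() < q else -1)
        if target not in occupied:          # exclusion rule
            occupied.discard(pos[i])
            pos[i] = target
            occupied.add(target)
    return sum(1 for x in pos if x > 0)

t = 200.0
h = simulate_asep_current(0.8, t)
print(h / t)  # close to 1/4, up to O(t^{-2/3}) Tracy-Widom fluctuations
```

At $t = 200$ the finite-$t$ correction from the (negative-mean) GUE Tracy-Widom fluctuations is still visible, so the ratio sits slightly above $1/4$; it tightens as $t$ grows.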
Given the existing fluctuation results on the ASEP with step initial data, it is natural to inquire into its {Large Deviation Principle (LDP)}. Namely, we seek the probability that $-H_0(\frac{t}{\gamma})+\frac{t}{4}$ has deviations of order $t$. Intriguingly, one expects the lower- and upper-tail LDPs to have different speeds: the upper-tail deviation is expected to occur at speed $t$, whereas the lower tail has speed $t^2$: \begin{align*}\tag{Lower Tail} {\mathbb{P}\left(-H_0\big(\tfrac{t}\gamma\big)+\tfrac{t}{4} < -\tfrac{t}{4}y\right) \approx e^{-t^2\Phi_{-}(y)};} \end{align*} \begin{align*}\tag{Upper Tail} {\mathbb{P}\left(-H_0\big(\tfrac{t}\gamma\big)+\tfrac{t}{4} > +\tfrac{t}{4}y\right)\approx e^{-t\Phi_{+}(y)}.} \end{align*} Thus, the upper tail corresponds to the ASEP being ``too slow'', while the lower tail corresponds to the ASEP being ``too fast''. Heuristically, we can make sense of this speed differential. Because of the nature of the exclusion process, when a \textit{single} particle moves slower than usual, it forces \textit{all} the particles to its left to be slower as well. Hence the ASEP becomes slow if \textit{only one} particle moves slowly. This event has probability of order $\exp(-O(t))$. However, ensuring that there are many particles to the right of the origin (which corresponds to the ASEP being fast) requires a large number of particles to move fast \textit{simultaneously}. This event is much more unlikely and happens with probability $\exp(-O(t^2))$. In this article, we focus on the \textit{upper-tail} deviations of the ASEP with step initial data and present the first proof of the ASEP upper-tail LDP on the \textit{complete} real line. Consider the ASEP with $q\in (\frac12,1)$ and set $p=1-q$ and $\tau=p/q\in (0,1)$.
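Before stating the results, we record a quick numerical sanity check (our own illustration) of the rate function $\Phi_{+}(y)=\sqrt{y}-(1-y)\tanh^{-1}(\sqrt{y})$ identified in Theorem \ref{thm:ldp} below, and of its small-$y$ asymptotics $y^{-3/2}\Phi_{+}(y)\to\frac{2}{3}$:

```python
import math

def phi_plus(y: float) -> float:
    """Upper-tail rate function Phi_+(y) = sqrt(y) - (1-y)*atanh(sqrt(y))."""
    r = math.sqrt(y)
    return r - (1.0 - y) * math.atanh(r)

# Phi_+ vanishes as y -> 0+ and diverges as y -> 1- (atanh blows up),
# and the ratio y^{-3/2} Phi_+(y) approaches 2/3 from above:
for y in (1e-1, 1e-2, 1e-4, 1e-6):
    print(f"y = {y:.0e}   y^(-3/2) Phi_+(y) = {phi_plus(y) / y ** 1.5:.6f}")
```

This matches the Taylor expansion $\Phi_{+}(y)=\frac23 y^{3/2}+\frac{2}{15}y^{5/2}+\cdots$, and hence the $e^{-\frac23 y^{3/2}t}$ behavior expected from the GUE Tracy-Widom upper tail.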
Our first theorem computes the $s$-th \textit{Lyapunov exponent} of $\tau^{H_0(t)}$, which is the limit of the logarithm of $\Ex[\tau^{sH_0(t)}]$ scaled by time: \begin{theorem} \label{thm:frac_mom} For $s\in (0,\infty)$ we have \begin{align}\label{eq:exp} \lim_{t\to \infty} \frac1t\log \Ex [\tau^{sH_0(t)}]=-(q-p)\frac{1-\tau^{\frac{s}2}}{1+\tau^{\frac{s}2}}=:-h_q(s). \end{align} \end{theorem} It is well known (see Proposition 1.12 in \cite{gl20} for example) that the \textit{upper-tail} large deviation principle of the stochastic process $\log \tau^{H_0(t)}$ is the Legendre-Fenchel dual of the Lyapunov exponent in \eqref{eq:exp}. Since $\tau<1$, as a corollary, we obtain the following \textit{upper-tail} large deviation rate function for $-H_0(t)$. \begin{theorem}\label{thm:ldp} { For any $y\in (0,1)$ we have \begin{align}\label{eq:ldp} \lim_{t\to\infty}\frac1t\log\P\left(-H_0\big(\tfrac{t}\gamma\big)+\tfrac{t}{4} > \tfrac{t}{4}y\right)=-[\sqrt{y}-(1-y)\tanh^{-1}(\sqrt{y})]=:-\Phi_{+}(y), \end{align}} where $\gamma=2q-1$. Furthermore, we have the following asymptotics near zero: \begin{align}\label{eq:asy} \lim_{y\to 0^+} y^{-3/2}\Phi_{+}(y)=\tfrac23. \end{align} \end{theorem} \begin{figure}[h!] \begin{center} \includegraphics[width=7cm]{true_phi.PNG} \ \ \includegraphics[width=7cm]{dp_phi.PNG} \vspace{-2mm} \caption{The figure on the left is the plot of $\Phi_{+}(y)$. The right one is the plot of $\widetilde\Phi_{+}(y)$.} \label{tphi} \end{center} \end{figure} \begin{remark} Note that our large deviation result is restricted to $y\in (0,1)$, as $\P(-H_0\big(\tfrac{t}\gamma\big)+\tfrac{t}{4} > \tfrac{t}{4}y)=0$ for $y\ge 1$. Furthermore, although Theorem \ref{thm:ldp} makes sense when $q=1$, one cannot recover it from Theorem \ref{thm:frac_mom}, which only makes sense for $\tau=(1-q)/q\in (0,1)$. However, as mentioned before, \cite{joh} has already settled the $q=1$ TASEP case and obtained the upper-tail rate function in a variational form.
We will later show in Appendix \ref{app} that the variational formula of \cite{joh} for TASEP matches our rate function in \eqref{eq:ldp}. \end{remark} \begin{remark} {Recently, the work \cite{dp} has obtained a one-sided large deviation bound for the upper tail of the ASEP. In particular, they showed \begin{align} \mathbb{P}\left(-H_0\big(\tfrac{t}\gamma\big)+\tfrac{t}{4} > \tfrac{t}{4}y\right) \le \Con e^{-t\widetilde\Phi_{+}(y)},\quad y\in (0,1). \end{align} The function $\widetilde\Phi_{+}$ coincides with the correct rate function $\Phi_{+}$ defined in \eqref{eq:ldp} only for $y \le y_0:= \frac{1-2\sqrt{q(1-q)}}{1+2\sqrt{q(1-q)}}$, as captured by Figure \ref{tphi}. We will further compare and contrast our results and method with \cite{dp} later in Section \ref{sec:pre}.} \end{remark} \begin{remark} For $y$ small enough, following \eqref{eq:clt2} and the upper-tail decay of the GUE Tracy-Widom distribution \cite{dumaz}, one expects {$$\P\left(-H_0\big(\tfrac{t}\gamma\big)+\tfrac{t}{4} > \tfrac{t}{4}y\right) \approx \P(\xi_{\operatorname{GUE}} >2^{-2/3}yt^{2/3}) \approx e^{-\frac{2}3y^{3/2}t}.$$} Thus the asymptotics in \eqref{eq:asy} shows that $\Phi_{+}(y)$ indeed recovers the expected GUE Tracy-Widom tail as $y\to 0^+$. \end{remark} \subsection{Sketch of proof}\label{sec:ske} In this section we present a sketch of the proof of our main results. {As explained before, Theorem \ref{thm:ldp} can be obtained from Theorem \ref{thm:frac_mom} by a standard Legendre-Fenchel transform technique. {So here we only give a brief account of the proof idea of Theorem \ref{thm:frac_mom}. A more detailed overview of the proofs of our main results can be found in Section \ref{sec:fraccase}}.} The main component of our proof is the following $\tau$-Laplace transform formula for ${H_0(t)}$, which appears as Theorem 5.3 in \cite{bcs}: \begin{theorem}[Theorem 5.3 in \cite{bcs}] \label{thm:laplace} Fix any $\delta\in (0,1)$.
For $\zeta>0$ we have \begin{align}\label{eq:laplace} \Ex \left[F_q(\zeta\tau^{H_0(t)})\right]=\det(I+K_{\zeta,t}), \quad F_q(\zeta):=\prod_{n=0}^{\infty}\frac{1}{1+\zeta\tau^n}. \end{align} Here $\det(I + K_{\zeta, t})$ is the Fredholm determinant of $K_{\zeta,t}: L^2(\mathfrak{C}(\tau^{1-\frac{\delta}{2}})) \rightarrow L^2(\mathfrak{C}(\tau^{1-\frac{\delta}{2}})),$ and $\mathfrak{C}(\tau^{1-\frac{\delta}{2}})$ denotes a positively-oriented circular contour centered at 0 with radius $\tau^{1-\frac{\delta}{2}}.$ The operator $K_{\zeta, t}$ is defined through the integral kernel \begin{align} \label{def: ker} K_{\zeta,t}(w,w') &: =\frac1{2\pi \i}\int\limits_{\delta-\i\infty}^{\delta+\i\infty} \Gamma(-u)\Gamma(1+u)\zeta^u \frac{g_t({w})}{g_t({\tau^uw})}\frac{\d u}{w'-\tau^u w},\ \mbox{ for }g_t(z)=\exp\left(\frac{(q-p)t}{1+\frac{z}{\tau}}\right). \end{align} \end{theorem} \begin{remark} The original statement of the above theorem in \cite{bcs} appears in a much more general setup with general conditions on the contours. We will explain the choice of our contours stated above in Section \ref{sec:leading} and check that this choice satisfies the general criterion for contours as stated in Theorem 5.3 in \cite{bcs}. \end{remark} \noindent We next recall that the Fredholm determinant is defined as a series as follows. \begin{align} \det(I+K_{\zeta,t}) & :=1+\sum_{L=1}^{\infty} \tr(K_{\zeta,t}^{\wedge L}) \label{eq:fdhm} \\ & := 1+\sum_{L=1}^{\infty} \frac{1}{L!}\int_{\mathfrak{C}(\tau^{1-\frac{\delta}2})}\cdots \int_{\mathfrak{C}(\tau^{1-\frac{\delta}2})} \det(K_{\zeta,t}(w_i,w_j))_{i,j=1}^{L}\prod_{i=1}^L \d w_i. \label{eq:f-series} \end{align} The notation $K_{\zeta,t}^{\wedge L}$ comes from the exterior algebra definition, for which we refer to \cite{sim77} for more details. As a clarifying remark, we use this exterior algebra notation only for its notational simplicity and rely essentially on the definition in \eqref{eq:f-series} throughout the rest of the paper.
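Although the kernel $K_{\zeta,t}$ itself is analyzed asymptotically in later sections, the series definition \eqref{eq:f-series} can be made concrete numerically: discretizing with a quadrature rule (the Nystr\"{o}m method) turns $\det(I+K)$ into an ordinary matrix determinant. The toy sketch below is purely illustrative and unrelated to the specific kernel $K_{\zeta,t}$; it checks the method on a rank-one kernel on a real interval, for which only the $L=1$ term of the series survives.

```python
def fredholm_det(kernel, a, b, n=100):
    """Nystrom approximation of det(I + K) on [a, b]: discretize with the
    midpoint rule and take the matrix determinant of I + K(x_i, x_j) * h."""
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    m = [[(1.0 if i == j else 0.0) + kernel(xs[i], xs[j]) * h
          for j in range(n)] for i in range(n)]
    # determinant via Gaussian elimination with partial pivoting
    det = 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return det

# Rank-one kernel K(x, y) = x*y on [0, 1]: all L >= 2 terms of the Fredholm
# series vanish, so det(I + K) = 1 + \int_0^1 x^2 dx = 4/3.
print(fredholm_det(lambda x, y: x * y, 0.0, 1.0))
```

For the contour kernel in the theorem one would quadrature the circle $\mathfrak{C}(\tau^{1-\delta/2})$ instead, but the principle (determinant of a small matrix approximating $I+K$) is the same.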
To extract information on the fractional moments of $\tau^{H_0(t)}$, we combine the formula in \eqref{eq:laplace} with the following elementary identity, which is a generalized version of Lemma 1.4 in \cite{dt19}. \begin{lemma}\label{lm:frac_mom} Fix $n\in \Z_{>0}$ and $\alpha\in [0,1)$. Let $U$ be a nonnegative random variable with finite $n$-th moment. Let $F: [0,\infty)\to [0,1]$ be an $n$-times differentiable function such that $\int_0^{\infty} \zeta^{-\alpha}F^{(n)}(\zeta)\d \zeta$ is finite. Assume further that $\norm{F^{(k)}}_{\infty}<\infty$ for all $1\le k\le n$. Then the $(n-1+\alpha)$-th moment of $U$ is given by \begin{align*} \Ex [U^{n-1+\alpha}]=\dfrac{\int\limits_0^{\infty} \zeta^{-\alpha}\Ex[U^nF^{(n)}(\zeta U)]\d \zeta}{\int\limits_0^{\infty} \zeta^{-\alpha}F^{(n)}(\zeta)\d \zeta}=\dfrac{\int\limits_0^{\infty} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\Ex[F(\zeta U)]\d \zeta}{\int\limits_0^{\infty} \zeta^{-\alpha}F^{(n)}(\zeta)\d \zeta}. \end{align*} \end{lemma} The proof of this lemma follows by an interchange of the order of integration justified by Fubini's theorem and the dominated convergence theorem, as $\Ex[U^n]$ and $\norm{F^{(k)}}_{\infty} < \infty$ for all $1 \le k \le n.$ For $s>0$, we apply this lemma with $U = \tau^{H_0(t)}$, $n = \lfloor s \rfloor +1$ and $\alpha=s-\lfloor s \rfloor$. We take $F(x)= F_q(x)$ defined in \eqref{eq:laplace}, which is shown to satisfy the hypotheses of Lemma \ref{lm:frac_mom} (see Proposition \ref{p:etau}). As a result, we transform the computation of $\Ex[\tau^{sH_0(t)}]$ into that of \begin{align}\label{eq:int1} \int_0^{\infty} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\Ex[F_q(\zeta \tau^{H_0(t)})]\d \zeta.
\end{align} Utilizing the exact formula from \eqref{eq:laplace} and the definition of the Fredholm determinant from \eqref{eq:f-series}, we can write the above expression as a series where we identify the leading term (corresponding to the $L=1$ term of the series) and a higher-order term (corresponding to the $L\ge 2$ terms of the series). We eventually show that the asymptotics of the leading term matches the exact asymptotics in \eqref{eq:exp}, while the higher-order term decays much faster. This leads to the proof of Theorem \ref{thm:frac_mom}. {The above description of our method is in line with the Lyapunov moment approach adopted in the works of \cite{dt19}, \cite{gl20} and \cite{lin20} to obtain upper-tail large deviation results for other integrable models, such as the KPZ equation. Namely, we extract fractional moments from the ($\tau$-)Laplace transform such as \eqref{eq:laplace} according to Lemma \ref{lm:frac_mom}. In particular, our work draws from those of \cite{dt19} and \cite{lin20}, which studied the fractional moments of the Stochastic Heat Equation (SHE) and the half-line Stochastic Heat Equation, respectively. We will further contextualize the connections of our work to \cite{dt19}, \cite{gl20} and \cite{lin20} in Section \ref{sec:pre}. In the following text, however, we emphasize a few key differences and technical challenges unique to the ASEP that we have encountered and resolved in our proof.} {First, unlike the SHE or half-line SHE, the usual Laplace transform is not available in the case of the ASEP. Instead, we only have the $\tau$-Laplace transform for our observable of interest. As a result, we have formulated Lemma \ref{lm:frac_mom} in our paper, which is more general than its prototype in \cite[Lemma 1.4]{dt19}, to accommodate the $\tau$-Laplace transform.
Consequently, we have worked with $\tau$-exponential functions in our analysis.} {Another key difference is that the kernel $K_{\zeta, t}$ in \eqref{def: ker} in our model is much more intricate than its counterpart in the KPZ model and leads to a much more involved analysis of the leading term. Indeed, $K_{\zeta,t}$ is asymmetric, and as $u$ varies in $(\delta - \i \infty, \delta + \i \infty)$, the function $\frac{g_t(w)}{g_t(\tau^uw)}$ appearing in the kernel $K_{\zeta,t}$ exhibits a periodic behavior, whereas the kernel in the KPZ models involves Airy functions in its integrand, which have a unique maximum and are much easier to analyze. Furthermore, our model exhibits exponentially decaying moments of $\tau^{H_0(t)}$, as opposed to the exponentially increasing ones of the KPZ models in \cite{dt19} and \cite{lin20}, and this demands a more precise understanding of the trace term of our Fredholm determinant expansion. For instance, in Section \ref{sec:leading}, to obtain the precise asymptotics for our leading term, we perform steepest descent analysis on the kernel $K_{\zeta, t}$, where the periodic nature of $\frac{g_t(w)}{g_t(\tau^uw)}$ results in infinitely many critical points. A major technical challenge in our proof is to argue how the contribution from only one of the critical points dominates those from the rest, and this is accomplished in the proof of Proposition \ref{p:leading}.
Similarly, the asymmetry of the kernel in the ASEP model has led us to opt for an approach via Hadamard's inequality, as exemplified in Section 4 of \cite{lin20}, instead of the operator-theoretic argument in \cite{dt19}, to obtain a sufficient upper bound for the higher-order terms in Section \ref{sec:higher}.} \subsection{Comparison to Previous Works}\label{sec:pre} {In a broader context, our main result on the Lyapunov exponent for the ASEP with step initial data and its upper-tail large deviation belongs to the undertakings of studying the intermittency phenomenon and large deviation problems of integrable models in the KPZ universality class.} As we have previously alluded to, the KPZ universality class contains a collection of random growth models that are characterized by a scaling exponent of $1/3$ and certain universal non-Gaussian large-time fluctuations. We refer to \cite{acq,kpz,sasamoto} and the references therein for more details. The ASEP is one of the standard one-dimensional models of the KPZ universality class and bears connection to several other integrable models in this class, such as the stochastic six-vertex model \cite{bcg,agg,evgeni}, {the KPZ equation \cite{cldr,dot10,ss10,acq,kpz}}, and $q$-TASEP \cite{bcs}. On the other hand, the intermittency property is a universal phenomenon that captures high population concentrations on small spatial islands over large time. Mathematically, the intermittency of a random field is defined in terms of its Lyapunov exponents. In particular, the connection between integer Lyapunov moments and intermittency has been an active area of study in the SPDE community over the last few decades \cite{gar,car,ber,foon,hu,conus,chen,balan}. For the KPZ equation, \cite{kar} predicted the integer Lyapunov exponents for the SHE using replica Bethe ansatz techniques. {A rigorous proof of this result was first attempted in \cite{ber} and later correctly given in \cite{che}}.
Similar formulas were shown for the moments of the parabolic Anderson model, semi-discrete directed polymers, and the $q$-Whittaker process (see \cite{mcd} and \cite{bc}). For the ASEP, integer moment formulas for $\tau^{H_0(t)}$ were obtained in \cite{bcs} using a nested contour integral ansatz. {From the perspective of tail events, by studying the asymptotics of integer Lyapunov exponent formulas, one can extract one-sided bounds on the upper tails of integrable models. However, these integer Lyapunov exponents alone are not sufficient to provide the exact large deviation rate function.} Recently, a stream of effort has been devoted to studying large deviations for some KPZ class models by explicitly computing the fractional Lyapunov exponents. The work of \cite{dt19} set this series of effort in motion by solving the KPZ upper-tail large deviation principle through the fractional Lyapunov exponents of the SHE with delta initial data. \cite{gl20} soon extended the same result to the SHE with a large class of initial data, including any {random} bounded positive initial data and the stationary initial data. An exact way to compute every positive Lyapunov exponent of the half-line SHE was also uncovered in \cite{lin20}. In light of these developments, our main result for the ASEP with step initial data and its upper-tail large deviation fits into this broader endeavor of studying large deviation problems of integrable models via the Lyapunov exponent approach. Meanwhile, in the direction of the ASEP, as mentioned before, \cite{dp} has produced a one-sided large deviation bound for the upper-tail probability appearing in \eqref{eq:ldp}, which coincides with the correct rate function $\Phi_{+}$ defined in \eqref{eq:ldp} for $y \le y_0:= \frac{1-2\sqrt{q(1-q)}}{1+2\sqrt{q(1-q)}}$.
{This result was sufficient for their purpose of establishing a near-exponential fixation time for the coarsening model on $\Z^2$, and \cite{dp} obtained it via steepest descent analysis on the exact formula for the probability of $H_0(t/\gamma)$.} More specifically, they worked with the following result from \cite[Lemma 4]{tw2} as input: \begin{equation}\label{eq:tw} \P\left(-H_0\big(\tfrac{t}\gamma\big)+\tfrac{t}{4} > \tfrac{t}{4}y\right)= \frac{1}{2\pi \i}\int_{|\mu|= R}(\mu; \tau)_{\infty}\det(1 + \mu J_{m, t}^{(\mu)})\frac{\d \mu}{\mu}, \end{equation} where $m=\lfloor \frac{1}{4}t(1-y)\rfloor$, $R \in (\tau, \infty)\setminus\{1, \tau^{-1}, \tau^{-2}, \ldots\}$ is fixed, $(\mu;\tau)_{\infty}: = (1-\mu)(1 - \mu\tau)(1 - \mu\tau^2)\ldots$ is the infinite $\tau$-Pochhammer symbol and $J_{m,t}^{(\mu)}$ is the kernel defined in Equation (3.4) of \cite{dp}. Analyzing the exact pre-limit Fredholm determinant $\det(1 + \mu J_{m, t}^{(\mu)})$, \cite{dp} chose appropriate contours for the kernel $J_{m,t}^{(\mu)}$ that pass through its critical points and performed a steepest descent analysis. However, their choice of contours is unattainable beyond the threshold $y_0$: if we attempted to deform the same contours for $y > y_0$, we would inevitably cross poles, which renders the steepest descent analysis much trickier. By adopting the Lyapunov moment approach, we avoid this problem when looking for the precise large deviation rate function. In addition to the relevance of our upper-tail LDP result, it is also worth remarking on the difficulty of obtaining a lower-tail LDP for the ASEP with step initial data. As explained before, the lower tail $\mathbb{P}(-H_0\big(\tfrac{t}\gamma\big)+\tfrac{t}{4} < -\tfrac{t}{4}y)$ is expected to go to zero at the much faster rate $\exp(-t^2 \Phi_{-}(y))$. The existence of the lower-tail rate function has so far only been shown in the case of TASEP in \cite{joh}, through its connection to continuous log-gases.
{The functional LDPs for TASEP for both tails have been studied in \cite{jen}, \cite{var}, \cite{x3} (upper tail), and \cite{ot17} (lower tail). Large deviations for open systems with boundaries in contact with stochastic reservoirs have also been studied in the physics literature. We mention \cite{DL98}, \cite{DLS03}, \cite{BD06} and the references therein for works in these directions.} {More broadly, for integrable models in the KPZ universality class, the lower tail of the KPZ equation has been studied extensively in both the mathematics and physics communities. In the physics literature, \cite{le2016large} provided the first prediction of the large deviation tails of the KPZ equation for narrow wedge initial data. For the upper tail, their analysis also yields subdominant corrections (\cite[Supp.~Mat.]{supp}). Furthermore, the physics work of \cite{sasorov2017large} first predicted the lower-tail rate function of the KPZ equation for narrow wedge initial data in an analytical form, followed by the derivations in \cite{JointLetter} and \cite{ProlhacKrajenbrink} via different methods. The asymptotics of the deep lower tail of the KPZ equation were later obtained in \cite{KrajLedou2018} for a wide class of initial data. On the mathematics front, the work \cite{cg} provided detailed, rigorous tail bounds for the lower tail of the KPZ equation for narrow wedge initial data. The precise rate function of its lower-tail LDP was later proved in \cite{tsai18} and \cite{cr}, which confirmed the predictions of the existing physics literature. The four different routes of deriving the lower-tail LDP in \cite{sasorov2017large}, \cite{JointLetter}, \cite{ProlhacKrajenbrink} and \cite{tsai18} were later shown to be closely related in \cite{largedev}.
A new route has also recently been obtained in the physics work of \cite{doussal2019large} (see also \cite{ProlhacRiemannKPZ}).} {In the short-time regime, large deviations for the KPZ equation have been studied extensively in the physics literature (see \cite{x1}, \cite{x2}, \cite{krajenbrink} and the references therein for a review). Recently, \cite{lt21} rigorously derived the large deviation rate function of the KPZ equation in the short-time regime in a variational form and recovered the deep lower-tail asymptotics, confirming existing physics predictions}. For non-integrable models, large deviations of first-passage percolation were studied in \cite{chow} and more recently in \cite{basu}. For last-passage percolation with general weights, the geometry of polymers in the lower-tail large deviation regime has recently been studied in \cite{basu2}. \subsection*{Notation} Throughout the rest of the paper, we use $\Con = \Con(a,b,c,\ldots) > 0$ to denote a generic deterministic positive finite constant that depends on the designated variables $a, b, c, \ldots$; its value may change from line to line. We also use the notation $\mathfrak{C}(r)$ to denote a positively oriented circle with center at the origin and radius $r>0$. \subsection*{Outline} The rest of this article is organized as follows. In Section \ref{sec:fraccase}, we introduce the main ingredients for the proofs of Theorems \ref{thm:frac_mom} and \ref{thm:ldp}. In particular, we reduce the proof of our main results to Proposition \ref{p:leading} (asymptotics of the leading order) and Proposition \ref{p:ho} (estimates for the higher order), which are proved in Sections \ref{sec:leading} and \ref{sec:higher} respectively. Finally, in Appendix \ref{app} we compare our rate function $\Phi_{+}(y)$, defined in \eqref{eq:ldp}, to that of TASEP. \subsection*{Acknowledgements} We are grateful to Ivan Corwin for suggesting the problem and providing numerous stimulating discussions.
His encouragement and inputs on earlier drafts of the paper have been invaluable. We also thank Evgeni Dimitrov, {Li-Cheng Tsai}, {Yier Lin} and Mark Rychnovsky for helpful conversations and {Pierre Le Doussal and Alexandre Krajenbrink for providing many valuable references to the physics literature}. The authors were partially supported by Ivan Corwin's NSF grant DMS:1811143 as well as the Fernholz Foundation's ``Summer Minerva Fellows'' program. \section{Proof of Main Results} \label{sec:fraccase} In this section, we give a detailed outline of the proofs of Theorems \ref{thm:frac_mom} and \ref{thm:ldp}. In Section \ref{sec:prop} we collect some useful properties of the functions $h_q$ and $F_q$ defined in \eqref{eq:exp} and \eqref{eq:laplace} respectively. In Section \ref{sec:proof} we complete the proof of Theorems \ref{thm:frac_mom} and \ref{thm:ldp} assuming technical estimates on the leading order term (Proposition \ref{p:leading}) and the higher order term (Proposition \ref{p:ho}). Throughout this paper, we fix $s>0$ and set $n=\lfloor s\rfloor +1 \ge 1$ and $\alpha=s-\lfloor s\rfloor$ so that $s=n-1+\alpha$. We also fix $q\in (\frac12,1)$ and set $p=1-q$ and $\tau=p/q \in (0,1)$ for the rest of the article. \subsection{Properties of $h_q(x)$ and $F_q(x)$} \label{sec:prop} Recall the Lyapunov exponent $h_q(x)$ defined in \eqref{eq:exp} and the $F_q(x)$ function defined in \eqref{eq:laplace}. The following two propositions investigate various properties of these two functions that are necessary for our later proofs. \begin{proposition}[Properties of $h_q$] \label{p:htau} Consider the function $h_q: (0,\infty) \to \mathbb{R}$ defined by $h_q(x)=(q-p)\frac{1-\tau^{\frac{x}2}}{1+\tau^{\frac{x}2}}$.
Then, the following properties hold true: \begin{enumerate}[label=(\alph*), leftmargin=15pt] \item \label{a} $B_q(x):=\frac{h_q(x)}{x}$ is strictly positive and strictly decreasing with $$\lim_{x\to 0^+} B_q(x)=\tfrac14(p-q)\log \tau>0.$$ \item \label{b} $h_q$ is strictly subadditive in the sense that for any $x,y\in (0,\infty)$ we have $$h_q(x+y)<h_q(x)+h_q(y).$$ \item \label{c} $h_q$ is related to $\Phi_{+}$ defined in \eqref{eq:ldp} via the following Legendre-Fenchel type transformation: \begin{align*} \Phi_{+}(y)=\sup_{s\in \R_{>0}} \left\{s\frac{1-y}{4}\log\tau+\frac1{q-p}h_q(s)\right\}, \quad y\in (0,1). \end{align*} \end{enumerate} \end{proposition} \begin{proof} For \ref{a}, first, the positivity of $B_q(x)$ follows from the positivity of $h_q(x).$ To see that it is strictly decreasing, taking the derivative of $B_q(x)$ we obtain \begin{align}\label{eq:bq} B_q'(x)= \frac{(q-p)(-x\tau^{\frac{x}{2}}\log\tau -1 + \tau^x)}{(1 + \tau^{\frac{x}{2}})^2x^2}. \end{align} Note that the numerator on the r.h.s.~of (\ref{eq:bq}) vanishes when $x=0$ and its derivative in $x$ equals, up to the positive factor $q-p$, $\tau^{\frac{x}{2}}\log \tau(\tau^{\frac{x}{2}}-\frac{x}{2}\log\tau -1) < 0$ for $x > 0$. Thus $B_q'(x)$ is strictly negative when $x> 0$ and $B_q(x)$ is strictly decreasing for $x>0$. L'H\^opital's rule yields that $\lim_{x\to 0^+}B_q(x) =h_q'(0)= \frac{1}{4}(p-q)\log\tau.$ \medskip For \ref{b}, direct computation yields \begin{align}\label{eq:diff} h_q(x+y)-h_q(x)-h_q(y) =-(q-p)\frac{(1-\tau^{\frac{y}2})(1-\tau^{\frac{x}2})(1-\tau^{{\frac{x+y}2}})}{(1+\tau^{{\frac{x+y}2}})(1+\tau^{{\frac{x}2}})(1+\tau^{{\frac{y}2}})} < 0. \end{align} Lastly, for part \ref{c}, we fix $y\in (0,1)$ and define \begin{align*} g_{y}(s): = s\frac{1-y}{4}\log\tau+\frac1{q-p}h_q(s), \quad s>0. \end{align*} Direct computation yields $g_{y}'(s) = (\frac{1-y}{4} -\frac{\tau^{\frac{s}{2}}}{(1 + \tau^{\frac{s}{2}})^2})\log \tau$ and $g_y''(s)=\frac{\tau^{\frac{s}{2}}(\tau^{\frac{s}{2}}-1)\log^2\tau}{2(1+\tau^{\frac{s}{2}})^3} <0$.
Thus $g_{y}(s)$ is concave on $(0,\infty)$ and hence attains its unique maximum where $g_y'(s)=0$, or equivalently where $\frac{1-y}{4} =\frac{\tau^{\frac{s}{2}}}{(1 + \tau^{\frac{s}{2}})^2}.$ The last equation has $s = 2\log_{\tau} (\frac{1 - \sqrt{y}}{1 + \sqrt{y}})$ as its only positive solution, which is therefore the unique maximizer. Substituting this $s$ back into $g_{y}(s)$ yields precisely $\Phi_{+}(y).$ \end{proof} \begin{proposition}[Properties of $F_q(\zeta)$] \label{p:etau} Consider the function $F_q:[0,\infty) \to [0,1]$ defined by $F_q(\zeta):=\prod_{n=0}^{\infty}(1+\zeta\tau^n)^{-1}$. Then, the following properties hold true: \begin{enumerate}[label=(\alph*), leftmargin=15pt] \item \label{fa} $F_q$ is an infinitely differentiable function with $(-1)^nF_q^{(n)}(\zeta) \ge 0$ for all $\zeta>0$. Furthermore, $\norm{F_q^{(n)}}_{\infty}<\infty$ for each $n$. \item \label{fb} For each $n\in \Z_{>0}$, and $\alpha\in [0,1)$, $(-1)^n\int_0^{\infty}\zeta^{-\alpha}F_q^{(n)}(\zeta)\d \zeta$ is positive and finite. \item \label{fc} All the derivatives of $F_q$ have superpolynomial decay. In other words, for any $m, n \in \Z_{\ge 0}$ we have $$\sup_{\zeta>0} |\zeta^mF_q^{(n)}(\zeta)| < \infty.$$ \end{enumerate} \end{proposition} \begin{proof} (a) Note that $F_q(\zeta)=\prod_{n=0}^{\infty}(1+\zeta\tau^n)^{-1}=(-\zeta;\tau)_{\infty}^{-1}$ where we recall that $(-\zeta;\tau)_{\infty}$ is the $\tau$-Pochhammer symbol. As $(-\zeta;\tau)_{\infty}$ is analytic \cite[Corollary A.1.6.]{aar} and nonzero for $\zeta \in [0, \infty),$ its inverse $F_q(\zeta)$ is analytic. \smallskip We next rewrite $F_q(\zeta) = \prod_{n=0}^{\infty}f_n(\zeta),$ where $f_n(\zeta)=(1 + \zeta\tau^n)^{-1}$.
Denote $H(\zeta): = \log F_q(\zeta).$ Since each $f_n(\zeta)\in(0,1)$ is analytic for $\zeta \in [0, \infty)$ and the product $\prod_{n=0}^{\infty}f_n(\zeta) \in (0,1) $ converges locally uniformly, $H(\zeta)$ is well-defined and $H(\zeta) = \sum_{n=0}^{\infty}\log f_n(\zeta).$ Given that $|\sum_{n=0}^{\infty}\frac1{f_n(\zeta)}f_n'(\zeta)| = \sum_{n=0}^{\infty}\frac{\tau^n}{(1+\zeta\tau^n)}< \frac{1}{1-\tau},$ we have \begin{equation}\label{eq:deqn} H'(\zeta) = \frac{ F_q'(\zeta)}{F_q(\zeta)} = \sum_{n=0}^{\infty}\frac{f_n'(\zeta)}{f_n(\zeta)}=: G(\zeta). \end{equation} Note that $G(\zeta) = -\sum_{j = 0}^{\infty}\tau^j f_j(\zeta)$ and $|G(\zeta)| < \infty.$ For each $m \in \Z_{> 0}$, let us set $G^{(m)}(\zeta) := -\sum_{j =0}^{\infty}\tau^jf_j^{(m)}(\zeta).$ As $f_j^{(m)}(\zeta) = (-1)^{m} m!\frac{\tau^{mj}}{(1+\zeta\tau^j)^{m+1}},$ the series defining $G^{(m)}$ converges locally uniformly with $|G^{(m)}(\zeta)| \leq \frac{m!}{1- \tau^{m+1}}< \infty$. Induction on $m$ gives us that $G(\zeta)$ is infinitely differentiable and the $m$-th derivative of $G$ is $G^{(m)}$. It follows that $F_q(\zeta)$ is infinitely differentiable too. In particular, for any finite $n \in \Z_{\ge0}$, by Leibniz's rule on the relation \eqref{eq:deqn} we obtain \begin{align}\label{eq:leib} F_q^{(n+1)}(\zeta) = \sum_{k=0}^n\binom{n}{k}F_q^{(n-k)}(\zeta)G^{(k)}(\zeta). \end{align} Observe that $(-1)^{k+1}G^{(k)}$ is positive and finite. As $F_q$ is positive and finite, using \eqref{eq:leib}, induction gives us that $(-1)^{n}F_q^{(n)}$ is also positive and finite. As $\norm{G^{(m)}}_{\infty}$ and $\norm{F_q}_{\infty}$ are finite, using \eqref{eq:leib}, induction gives us that $\norm{F_q^{(n)}}_{\infty}$ is finite for any $n \in \Z_{\ge0}.$
Since $\zeta \geq 0$ and $\tau \in (0,1),$ \begin{equation*} \begin{split} 0 &<\int_{0}^{\infty} \zeta^{- \alpha} F_q(\zeta) \d\zeta = \int_{0}^{\infty}\zeta^{- \alpha}\prod_{m=0}^{\infty}\frac{1}{1 + \zeta\tau^m}\d\zeta < \int_{0}^{\infty}\zeta^{-\alpha}\frac{1}{1 + \zeta}\d\zeta\\&= \int_{0}^1\zeta^{- \alpha}\frac{1}{1 + \zeta} \d\zeta + \int_1^{\infty}\frac{\d\zeta}{\zeta^{\alpha}(1 + \zeta)}<\int_0^{1} \zeta^{-\alpha} \d\zeta + \int_1^{\infty}\frac{\d\zeta}{\zeta^{\alpha + 1}}< \infty. \end{split} \end{equation*} When $n > 0$, using \eqref{eq:leib} and the fact that $|G^{(m)}(\zeta)|<\frac{m!}{1-\tau^{m+1}}$, the finiteness of $(-1)^n\int_0^{\infty}\zeta^{-\alpha}F_q^{(n)}(\zeta)\d\zeta$ follows from induction. \medskip (c) Clearly for each $m$ we have $F_q(\zeta) \le \frac{1}{(1+\zeta \tau^{m})^{m+1}}$, forcing superpolynomial decay of $F_q$. The superpolynomial decay of the higher order derivatives now follows via induction using \eqref{eq:leib}. \end{proof} \subsection{Proof of Theorem \ref{thm:frac_mom} and Theorem \ref{thm:ldp}} \label{sec:proof} Recall $H_0(t)$ from \eqref{def:ht}. As explained in Section \ref{sec:ske}, the main idea is to use Lemma \ref{lm:frac_mom} with $U=\tau^{H_0(t)}$ and $F=F_q$ defined in \eqref{eq:laplace}. Observe that Proposition \ref{p:etau} guarantees that $F=F_q$ is a valid choice in Lemma \ref{lm:frac_mom}. In the following proposition, we show that the limiting behavior of $\Ex [\tau^{sH_0(t)}]$ is governed by the integral in \eqref{eq:int1} restricted to $[1,\infty)$. \begin{proposition}\label{p:red} For any $s>0$, we have \begin{align}\label{eq:red} \lim_{t\to\infty}\frac1t\log \Ex [\tau^{sH_0(t)}]=\lim_{t\to\infty}\frac1t\log \left[(-1)^n\int_1^{\infty} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\Ex[F_q(\zeta \tau^{H_0(t)})]\d \zeta\right], \end{align} where $n=\lfloor s\rfloor +1 \ge 1$ and $\alpha=s-\lfloor s\rfloor$ so that $s=n-1+\alpha$. \end{proposition} \begin{proof} Let $U=\tau^{H_0(t)}$.
In this proof, we find an upper and a lower bound on $\Ex[U^s]$ and show that, after taking the logarithm and dividing by $t$, the two bounds give matching results as $t \rightarrow \infty$. Note that since $\tau \in (0,1)$ and $H_0(t) \geq 0$, $U$ has a finite $n$-th moment for any $n\in \Z_{\geq 0}$ and $t > 0$. By Proposition \ref{p:etau}, $F_q$ is $n$-times differentiable and $|\int_0^{\infty}x^{-\alpha}F_q^{(n)}(x)\d x| < \infty.$ Denoting by $\d\P_U(u)$ the law of the random variable $U$, we have \begin{align} (-1)^n\int_1^{\infty} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\Ex[F_q(\zeta \tau^{H_0(t)})]\d \zeta & = (-1)^n\int_1^{\infty} \zeta^{-\alpha}\int_0^{\infty} u^nF_q^{(n)}(\zeta u)\d\P_U(u)\d \zeta. \label{eq:sim} \end{align} The $(-1)^n$ factor ensures that the above quantities are nonnegative via Proposition \ref{p:etau} \ref{fa}. By the finiteness of the $n$-th moment of $U$, $\norm{F_q^{(n)}}_{\infty}<\infty$ (by Proposition \ref{p:etau} \ref{fa}), and Fubini's theorem, we can interchange the integrals and obtain \begin{align} \mbox{r.h.s.~of \eqref{eq:sim}} & = (-1)^n\int_0^{\infty} u^{n-1+\alpha} \int_1^{\infty} (\zeta u)^{-\alpha} F_q^{(n)}(\zeta u)\d (u\zeta) \d\P_U(u) \nonumber \\ & = (-1)^n\int_0^{\infty} u^{n-1+\alpha} \int_{u}^{\infty} x^{-\alpha} F_q^{(n)}(x)\d x \ \d\P_U(u). \label{eq:sim2} \end{align} Since the random variable $U\in [0,1]$, we can lower bound the inner integral on the r.h.s.~of \eqref{eq:sim2} by restricting the $x$-integral to $[1,\infty)$. Recalling that $s=n-1+\alpha$ we have \begin{align}\label{eq:low} \mbox{r.h.s.~of \eqref{eq:sim}} \ge (-1)^n\left(\int_1^{\infty} x^{-\alpha}F_q^{(n)}(x)\d x\right)\Ex [\tau^{sH_0(t)}]. \end{align} As for the upper bound for $\mbox{r.h.s.~of \eqref{eq:sim}}$, we may extend the range of integration to $[0,\infty)$, which only increases the value since the integrand is nonnegative by Proposition \ref{p:etau} \ref{fa}.
Apply Lemma \ref{lm:frac_mom} with $F\mapsto F_q$ and $U\mapsto \tau^{H_0(t)}$ to get \begin{align}\label{eq:up} \mbox{r.h.s.~of \eqref{eq:sim}} \le (-1)^n\int_0^{\infty} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\Ex\left[ F_q(\zeta U)\right]\d \zeta = \left[(-1)^n\int_0^{\infty}\zeta^{-\alpha}F_q^{(n)}(\zeta)\d \zeta\right]\Ex [\tau^{sH_0(t)}]. \end{align} Note that both the prefactors in \eqref{eq:low} and \eqref{eq:up} are positive and free of $t$. Taking logarithms and dividing by $t$, we get the desired result. \end{proof} Next we truncate the integral on the r.h.s.~of \eqref{eq:red} further. Recall the function $B_q(x)$ defined in Proposition \ref{p:htau} \ref{a}. We separate the range of integration $[1,\infty)$ into $[1,e^{tB_q(s/2)}]$ and $(e^{tB_q(s/2)},\infty)$ and make use of the Fredholm determinant formula for $\Ex[F_q(\zeta\tau^{H_0(t)})]$ from Theorem \ref{thm:laplace} to write the integral on the r.h.s.~of \eqref{eq:red} as follows. \begin{align} \nonumber (-1)^n\int_1^{\infty} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\Ex[F_q(\zeta \tau^{H_0(t)})]\d \zeta & =(-1)^n\int_1^{e^{tB_q(\frac{s}{2})}} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\Ex[F_q(\zeta \tau^{H_0(t)})]\d \zeta+\mathcal{R}_s(t) \\ & = (-1)^n\int_{1}^{e^{tB_q(\frac{s}{2})}} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\det(I+K_{\zeta,t})\d \zeta +\mathcal{R}_s(t), \label{eq:diff_fred} \end{align} where \begin{align}\label{eq:calr} \mathcal{R}_s(t):=(-1)^n\int_{e^{tB_q(\frac{s}{2})}}^{\infty} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\Ex[F_q(\zeta \tau^{H_0(t)})]\d \zeta. \end{align} Recall the definition of Fredholm determinant from \eqref{eq:f-series}.
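Explicitly, for a trace-class operator $K$ acting on $L^2(\mathfrak{C}(\tau^{1-\frac\delta2}))$, and assuming here the normalization in which each contour integral carries the factor $\frac{\d w}{2\pi\i}$, the series reads \begin{align*} \det(I+K)=1+\sum_{L=1}^{\infty}\frac{1}{L!}\int_{\mathfrak{C}(\tau^{1-\frac\delta2})}\cdots\int_{\mathfrak{C}(\tau^{1-\frac\delta2})}\det\big(K(w_i,w_j)\big)_{i,j=1}^{L}\prod_{i=1}^{L}\frac{\d w_i}{2\pi\i}, \end{align*} so that the $L=1$ term of the series is precisely $\tr(K)$.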
Assuming for the moment that $\tr(K_{\zeta,t})$ is differentiable, we may split the first term in \eqref{eq:diff_fred} into two parts and write \begin{align}\label{eq:sep2} (-1)^n\int_{1}^{e^{tB_q(\frac{s}{2})}} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\det(I+K_{\zeta,t})\d \zeta = \mathcal{A}_s(t)+\mathcal{B}_s(t) \end{align} where \begin{align}\label{eq:cala} \mathcal{A}_{s}(t) & := (-1)^n\int_{1}^{e^{tB_q(\frac{s}{2})}} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}\tr( K_{\zeta,t})\,\d\zeta, \\ \mathcal{B}_{s}(t) & := (-1)^n\int_{1}^{e^{tB_q(\frac{s}{2})}} \zeta^{-\alpha}\frac{\d^n}{\d\zeta^n}[\det(I+K_{\zeta,t})-\tr( K_{\zeta,t})]\,\d\zeta. \label{eq:Calb} \end{align} The next two propositions verify that both $\mathcal{A}_s(t)$ and $\mathcal{B}_{s}(t)$ are well-defined; we defer their proofs to Sections \ref{sec:leading} and \ref{sec:higher}, respectively. The first one guarantees that $\tr(K_{\zeta,t})$ is indeed infinitely differentiable and provides the asymptotics for $\operatorname{Re} [\mathcal{A}_s(t)]$. \begin{proposition}\label{p:leading} For each $t>0$, the function $\zeta \mapsto\tr(K_{\zeta,t})$ is infinitely differentiable and thus $\mathcal{A}_s(t)$ in \eqref{eq:cala} is well defined. Furthermore, for any $s>0$, we have \begin{align}\label{eq:lead} \lim_{t\to\infty} \frac1t\log\left( \operatorname{Re}[\mathcal{A}_{s}(t)]\right) = -h_q(s). \end{align} \end{proposition} From \eqref{eq:diff_fred}, we know that the Fredholm determinant $\det(I + K_{\zeta, t})$ is infinitely differentiable. Thus, Proposition \ref{p:leading} renders $(\det(I +K_{\zeta,t})-\tr(K_{\zeta,t}))$ infinitely differentiable as well. Hence $\mathcal{B}_s(t)$ is well-defined. In fact, we have the following asymptotics for $\mathcal{B}_{s}(t)$. \begin{proposition} \label{p:ho} Fix any $s>0$ so that $s-\lfloor s\rfloor>0$. Recall $\mathcal{B}_s(t)$ from \eqref{eq:Calb}.
There exists a constant $\Con=\Con(q,s)>0$ such that for all $t>0$, we have \begin{align}\label{eq:ho} |\mathcal{B}_s(t)| \le \Con \exp(-th_q(s)-\tfrac1{\Con}t), \end{align} where $h_q(s)$ is defined in \eqref{eq:exp}. \end{proposition} Note that Proposition \ref{p:ho} in its current form does not cover integer $s$. We later explain in Section \ref{sec:higher} why $s-\lfloor s \rfloor >0$ is necessary for our proof. However, this does not affect our main results, as one can deduce Theorem \ref{thm:frac_mom} for integer $s$ as well via a simple continuity argument, which we present below. Assuming Propositions \ref{p:leading} and \ref{p:ho}, we now complete the proofs of Theorem \ref{thm:frac_mom} and Theorem \ref{thm:ldp}. \begin{proof}[Proof of Theorem \ref{thm:frac_mom}] Fix $s>0$ so that $s-\lfloor s \rfloor >0$. Appealing to Proposition \ref{p:red}, \eqref{eq:diff_fred} and \eqref{eq:sep2}, we see that $$\lim_{t\to\infty} \frac1t\log\Ex [\tau^{sH_0(t)}] =\lim_{t\to\infty} \frac1t\log\left[\mathcal{A}_s(t)+\mathcal{B}_s(t)+\mathcal{R}_s(t)\right],$$ where $\mathcal{A}_s(t)$, $\mathcal{B}_s(t)$, and $\mathcal{R}_s(t)$ are defined in \eqref{eq:cala}, \eqref{eq:Calb} and \eqref{eq:calr} respectively. For $\mathcal{R}_s(t)$, setting $V=\zeta \tau^{H_0(t)}$ and noting $s = n-1+ \alpha,$ we see that \begin{align*} |\mathcal{R}_s(t)| \le \int_{e^{tB_q(\frac{s}{2})}}^{\infty} \zeta^{-\alpha-n}\Ex\left[|V^nF_q^{(n)}\left(V\right)|\right]\d \zeta \le \left[\sup_{v>0} |v^{n}F_q^{(n)}(v)|\right] s^{-1}\exp(-tsB_q(\tfrac{s}2)). \end{align*} The fact that $\sup_{v>0} |v^{n}F_q^{(n)}(v)|$ is finite follows from Proposition \ref{p:etau} \ref{fc}. Note that $sB_q(\tfrac{s}2)$ is strictly bigger than $h_q(s)=sB_q(s) > 0$ via Proposition \ref{p:htau} \ref{a}. By Proposition \ref{p:leading}, when $t$ is large, we see that $\operatorname{Re}[\mathcal{A}_{s}(t)]$ grows like $\exp(-th_q(s)) > \exp(-tsB_q(\frac{s}{2}))$.
Similarly, Proposition \ref{p:ho} shows that $\operatorname{Re}[\mathcal{B}_s(t)]$ is bounded from above by $\Con\exp(-th_q(s) -\frac{1}{\Con}t)$ for some constant $\Con = \Con(q,s)$, which is strictly less than $\exp(-th_q(s))$ for large enough $t$. Indeed, for all large enough $t$, we have $$\frac12\operatorname{Re}[\mathcal{A}_{s}(t)]\le \operatorname{Re}[\mathcal{A}_{s}(t)+\mathcal{B}_s(t)+\mathcal{R}_s(t)] \le\frac32\operatorname{Re}[\mathcal{A}_{s}(t)].$$ Taking logarithms and dividing by $t$, and noting that $\mathcal{A}_{s}(t)+\mathcal{B}_s(t)+\mathcal{R}_s(t)$ is always real, we get \eqref{eq:exp} for any noninteger positive $s$. To prove \eqref{eq:exp} for positive integer $s$, we fix $s\in \Z_{>0}$. For any $K>2$, observe that as $H_0(t)$ is a non-negative random variable (recall the definition from \eqref{def:ht}) we have $$\tau^{(s-K^{-1})H_0(t)} \ge \tau^{sH_0(t)} \ge \tau^{(s+K^{-1})H_0(t)}.$$ Taking expectations, then logarithms, and dividing by $t$, in view of the noninteger version of \eqref{eq:exp} we have $$-h_q(s-K^{-1}) \ge \limsup_{t\to\infty} \frac1t\log\Ex[\tau^{sH_0(t)}] \ge \liminf_{t\to\infty} \frac1t\log\Ex[\tau^{sH_0(t)}] \ge -h_q(s+K^{-1}).$$ Taking $K\to \infty$ we get the desired result for integer $s$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:ldp}] For the large deviation result, applying Proposition 1.12 in \cite{gl20}, with $X(t)=H_0(t/\gamma)\cdot\log\tau$, and noting the Legendre-Fenchel type identity for $\Phi_{+}(y)$ from Proposition \ref{p:htau} \ref{c}, we arrive at \eqref{eq:ldp}. To prove \eqref{eq:asy}, applying L'H\^opital's rule a couple of times, and using that $\Phi_{+}'(y)=\tanh^{-1}(\sqrt{y})$ together with the substitution $x=\sqrt{y}$, we get $$\lim_{y\to 0^+}\frac{\Phi_{+}(y)}{y^{3/2}}= \lim_{y\to0^+} \frac{2}{3}\frac{\Phi_{+}'(y)}{\sqrt{y}}= \lim_{x\to0^+} \frac{2}{3}\frac{\tanh^{-1}(x)}{x}=\lim_{x\to 0^+}\frac23\cdot\frac{1}{1-x^2}=\frac{2}{3}.$$ This completes the proof of the theorem.
\end{proof} \section{Asymptotics of the Leading Term} \label{sec:leading} The goal of this section is to obtain the exact asymptotics of $\operatorname{Re}[\mathcal{A}_s(t)]$ defined in \eqref{eq:cala} as $t\to\infty$. Recall the definition of the kernel $K_{\zeta,t}$ from \eqref{def: ker}. We employ the standard idea that the asymptotic behavior of the kernel $K_{\zeta,t}$ and its `derivative' (see \eqref{def: kerdrv}), and subsequently that of $\operatorname{Re}[\mathcal{A}_s(t)]$, can be derived by the \textit{steepest descent method}. Towards this end, we first collect all the technical estimates related to the kernel $K_{\zeta,t}$ in Section \ref{sec:tech} and go on to complete the proof of Proposition \ref{p:leading} in Section \ref{sec:lead}. \subsection{Technical estimates of the Kernel} \label{sec:tech} In this section, we analyze the kernel $K_{\zeta,t}$. Much of our subsequent analysis boils down to understanding the function $g_t(z)$, defined in \eqref{def: ker}, that appears in the kernel $K_{\zeta,t}$. Towards this end, we consider \begin{align}\label{eq:contour_fn} f(u,z):=\frac{(q-p)}{1+\frac{z}{\tau}}-\frac{(q-p)}{1+\frac{\tau^uz}{\tau}}, \end{align} so that the ratio $\frac{g_t(z)}{g_t(\tau^uz)}$ that appears in the kernel $K_{\zeta,t}$ defined in \eqref{def: ker} equals $\exp\left(tf(u,z)\right)$. Below we collect some useful properties of this function $f(u,z)$. First note that $\partial_{z}f(u,z)=0$ has two solutions $z=\pm \tau^{1-\frac{u}2}$, and \begin{align}\label{eq;deri} \partial_{z}^2f(u,z)\big\vert_{z =-\tau^{1-\frac{u}2}}=-2(q-p)\frac{\tau^{\frac{3u}2-2}+\tau^{2u-2}}{(1-\tau^{\frac{u}2})^3}, \ \ \partial_{z}^2f(u,z)\big\vert_{z =\tau^{1-\frac{u}2}}=2(q-p)\frac{\tau^{\frac{3u}2-2}-\tau^{2u-2}}{(1+\tau^{\frac{u}2})^3}. \end{align} The following lemma tells us how the maximum of $\operatorname{Re} [f(u,z)]$ behaves. \begin{lemma}\label{lem:max} Fix $\rho>0$.
For any $u\in \mathbb{C}$ with $\operatorname{Re} [u]=\rho$ and $z \in \mathfrak{C}(\tau^{1-\frac{\rho}2})$, we have \begin{align} \label{eq:ineq} \operatorname{Re} [f(u,z)]\le f(\rho,\tau^{1-\frac{\rho}2})=-h_q(\rho) \end{align} where $h_q(\rho)$ is defined in \eqref{eq:exp} and $\mathfrak{C}(\tau^{1-\frac{\rho}2})$ is the circle with center at the origin and radius $\tau^{1-\frac{\rho}2}$. Equality in \eqref{eq:ineq} holds if and only if $\tau^{\i \operatorname{Im} u}=1$ and $z=\tau^{1-\frac{\rho}2}$ simultaneously. Furthermore, for the same range of $u$ and $z$, we have the following inequality: \begin{align}\label{max:ineq} f(\rho,\tau^{1-\frac{\rho}2})-\operatorname{Re} [f(u,z)] \ge \frac{(q-p)(1-\tau^{\frac\g2})\tau^{\frac{\rho}2}}{4(1+\tau^{\frac{\rho}2})^2}(2\tau^{\frac{\rho}2-1}|z-\tau^{1-\frac{\rho}2}|+|\tau^{\i \operatorname{Im} u}-1|). \end{align} \end{lemma} \begin{proof} Set $u=\rho+\i y$ and $z=\tau^{1-\frac{\rho}2}e^{\i\theta}$ with $y\in \R$ and $\theta\in [0,2\pi]$. Note that $f(\rho,\tau^{1-\frac{\rho}2})=-h_q(\rho)$, where $h_q(x)$ is defined in \eqref{eq:exp}. Direct computation yields \begin{align}\label{eq:rl} \operatorname{Re}[f(u,z)]= \frac{(q-p)(\tau^{\rho}-1)(|1+\tau^{\frac\g2}e^{-\i\theta}|^2+|1+\tau^{\frac\g2+\i y}e^{\i\theta}|^2)}{2|1+\tau^{\frac\g2}e^{-\i\theta}|^2|1+\tau^{\frac\g2+\i y}e^{\i\theta}|^2}. \end{align} Since $\tau<1$, applying the inequality $|1+\tau^{\frac\g2}e^{-\i\theta}|^2+|1+\tau^{\frac\g2+\i y}e^{\i\theta}|^2 \ge 2|1+\tau^{\frac\g2}e^{-\i\theta}||1+\tau^{\frac\g2+\i y}e^{\i\theta}|,$ and then noting that $|1+\tau^{\frac\g2}e^{-\i\theta}||1+\tau^{\frac\g2+\i y}e^{\i\theta}|\le (1+\tau^{\frac\g2})^2$, we see $(\mbox{r.h.s.~of \eqref{eq:rl}}) \le -(q-p)\frac{1-\tau^{\frac{\rho}2}}{1+\tau^{\frac{\rho}2}}$. Clearly equality holds if and only if $\theta=0$ and $\tau^{\i y}=1$ simultaneously.
Furthermore, following the above inequalities, we have $\operatorname{Re}[f(\rho+\i y,z)] \le -(q-p)\frac{1-\tau^{\frac\rho2}}{|1+\tau^{\frac\rho2}e^{\i\theta}|}$ and $\operatorname{Re}[f(\rho+\i y,z)] \le -(q-p)\frac{1-\tau^{\frac\rho2}}{|1+\tau^{\frac\rho2+\i y}e^{\i\theta}|}$. This yields \begin{equation}\label{eq:fr} \begin{aligned} f(\rho,\tau^{1-\frac{\rho}2})-\operatorname{Re} [f(\rho+\i y,z)] & \ge (q-p)\left[\frac{1-\tau^{\frac{\rho}2}}{|1+\tau^{\frac\rho2}e^{\i\theta}|}-\frac{1-\tau^{\frac{\rho}2}}{1+\tau^{\frac{\rho}2}}\right] \hspace{-0.07cm} \ge \hspace{-0.07cm} \frac{(q-p)(\tau^{\frac{\rho}2}-\tau^\rho)|e^{\i\theta}-1|}{(1+\tau^{\frac{\rho}2})^2} \end{aligned} \end{equation} and \begin{align*} f(\rho,\tau^{1-\frac\rho2})-\operatorname{Re} [f(\rho+\i y,z)] & \ge (q-p)\left[\frac{1-\tau^{\frac{\rho}2}}{|1+\tau^{\frac\rho2+\i y}e^{\i\theta}|}-\frac{1-\tau^{\frac{\rho}2}}{1+\tau^{\frac{\rho}2}}\right] \ge \frac{(q-p)(1-\tau^{\frac\g2})\tau^{\frac{\rho}2}|\tau^{\i y}e^{\i\theta}-1|}{(1+\tau^{\frac{\rho}2})^2}. \end{align*} Adding the above two inequalities we have $f(\rho,\tau^{1-\frac\rho2})-\operatorname{Re} [f(\rho+\i y,z)] \ge \frac{(q-p)(1-\tau^{\frac\g2})\tau^{\frac\rho2}|\tau^{\i y}-1|}{2(1+\tau^{\frac\rho2})^2}$. Combining this with \eqref{eq:fr} and the substitution $ \tau^{1-\frac{\rho}{2}}e^{\i \theta}=z$ we get \eqref{max:ineq}. This completes the proof. \end{proof} Using the above technical lemma we can now explain the proof of Theorem \ref{thm:laplace}. \begin{proof}[Proof of Theorem \ref{thm:laplace}] Due to Theorem 5.3 in \cite{bcs}, the only thing that we need to verify is \begin{align} \label{eq:cond} \inf_{\substack{w,w'\in \mathfrak{C}({\tau^{1-\frac{\delta}2}})\\ u\in \delta+\i\R}} |w'-\tau^uw|>0\quad \mbox{and} \quad \sup_{\substack{w,w'\in \mathfrak{C}({\tau^{1-\frac{\delta}2}})\\ u\in \delta+\i\R}} \left|\frac{g_t(w)}{g_t(\tau^uw)}\right|<\infty.
\end{align} Indeed, for every $u\in \delta+\i \R$ and $w,w'\in \mathfrak{C}(\tau^{1-\frac{\delta}2})$, we have $|w'-\tau^uw| \ge |w'|-|\tau^uw|=\tau^{1-\frac{\delta}2}-\tau^{1+\frac{\delta}2}>0$. Recall $f(u,z)$ from \eqref{eq:contour_fn}. Applying Lemma \ref{lem:max} with $\rho\mapsto \delta$ yields $$\left|\frac{g_t(w)}{g_t(\tau^uw)}\right|=|\exp(tf(u,w))| =\exp(t\operatorname{Re} [f(u,w)]) \le \exp(t f(\delta,\tau^{1-\frac\delta2}))=\exp(-th_q(\delta)),$$ where $h_q$ is defined in \eqref{eq:exp}. This verifies \eqref{eq:cond} and completes the proof. \end{proof} \begin{remark} We now explain our choice of the contour in the kernel $K_{\zeta,t}$ defined in \eqref{def: ker}, which comes from the method of steepest descent. Suppose $\operatorname{Re}[u]=\delta$. As noted before, the ratio $\frac{g_t(z)}{g_t(\tau^u z)}$ equals $\exp(tf(u,z))$, and directly taking the derivative of $f(u,z)$ with respect to $z$ suggests that the critical points are at $z = \pm\tau^{1 -\frac{u}{2}}$. We thus take our contour to be $\mathfrak{C}(\tau^{1-\frac{\delta}{2}}),$ so that it passes through the critical points. \end{remark} Next we turn to the differentiability of $\tr(K_{\zeta,t})$, where $K_{\zeta, t}$ is defined in (\ref{def: ker}). Using the function $f$ defined in \eqref{eq:contour_fn}, we rewrite the kernel as follows. $$K_{\zeta,t}(w, w')= \frac{1}{2\pi \i}\int_{\delta-\i \infty}^{\delta + \i \infty}\Gamma(-u)\Gamma(1+u)\zeta^u e^{tf(u,w)}\frac{\d u}{w'-\tau^u w}.$$ Differentiating the integrand inside the integral in $K_{\zeta, t}(w,
w')$ $n$ times defines a sequence of operators $\{K_{\zeta, t}^{(n)}\}_{n \ge 1}: L^2(\mathfrak{C}(\tau^{1-\frac{\delta}{2}})) \rightarrow L^2(\mathfrak{C}(\tau^{1-\frac{\delta}{2}}))$ given by the kernels: \begin{align}\label{def: kerdrv} K_{\zeta,t}^{(n)}(w,w') := \frac1{2\pi\i}\int_{\delta-\i\infty}^{\delta+\i\infty}\Gamma(-u)\Gamma(1+u)(u)_n\zeta^{u-n} e^{t f(u,w)}\frac{\d u}{w'-\tau^uw}, \end{align} where $(a)_n:=\prod_{i=0}^{n-1}(a-i)$ for $n \in \Z_{>0}$ and $(a)_0 = 1$ is the Pochhammer symbol and $\delta\in (0,1)$. We also set $K_{\zeta,t}^{(0)}:=K_{\zeta,t}$. \begin{remark} We remark that unlike Lemma 3.1 in \cite{dt19}, we do not aim to show that $K_{\zeta,t}$ is differentiable as an operator, or that its higher order derivatives are equal to the operators $K_{\zeta,t}^{(n)}$. Indeed, showing convergence in the trace class norm is more involved because of the lack of symmetry and positivity of the operator $K_{\zeta,t}$. However, since we are dealing with the Fredholm determinant series only, for our analysis it is enough to investigate how each term of the series is differentiable and how its derivatives are related to $K_{\zeta,t}^{(n)}$. \end{remark} \begin{remark}\label{r:pole} Note that when viewing $K_{\zeta,t}^{(n)}$ as a complex integral, we can deform its $u$-contour to $\rho+\i\R$ for any $\rho \in(0, n\vee 1)$. This is due to the analyticity of the integrand, as the factor $(u)_n$ removes the poles at $ 1, \ldots, n-1$ of $\Gamma(-u).$ \end{remark} The following lemma provides estimates of $K^{(n)}_{\zeta, t}$ that are useful for the subsequent analysis in Sections \ref{sec:leading} and \ref{sec:higher}. \begin{lemma} \label{l:kdbd} Fix $n \in \Z_{\ge 0}, t>0,\delta, \rho \in (0,n\vee 1),$ and consider any Borel set $A\subset \R$. Recall $h_q(x)$ and $B_q(x)$ from Proposition \ref{p:htau} and $K_{\zeta,t}^{(n)}$ from \eqref{def: kerdrv}.
For any $w\in \mathfrak{C}(\tau^{1-\frac{\delta}{2}})$ and $w'\in \mathbb{C}$ and $\zeta \in [1,e^{tB_q(\frac{s}2)}]$, there exists a constant $\Con = \Con(n, \delta, q)>0$ such that whenever $|w'|\neq \tau^{1+\frac\delta2}$ we have \begin{equation}\label{eq: idbd} \begin{aligned} \int_{A}\left|\frac{(\delta +\i y)_n \zeta^{\rho -n +\i y}}{\sin(-\pi(\delta + \i y))}e^{tf({\delta+\i y}, w)}\right|\frac{\d y }{|w'-\tau^{\delta+\i y}w|} & \leq \frac{\Con\zeta^{\rho - n }}{||w'|-\tau^{1+\frac\delta2}|}\exp(t\cdot \sup_{y\in A} { \operatorname{Re} [f({\delta+\i y}, w)]}) \\ & \leq \frac{\Con\zeta^{\rho - n }}{||w'|-\tau^{1+\frac\delta2}|}\exp(-th_q(\delta)). \end{aligned} \end{equation} In particular, when $w'\in \mathfrak{C}(\tau^{1-\frac{\delta}2})$ we have \begin{equation} \label{eq: kdbd} |K_{\zeta, t}^{(n)}(w, w')| \leq \Con \zeta^{\delta - n }\exp(-t h_q(\delta)). \end{equation} Consequently, $K_{\zeta,t}^{(n)}(w, w')$ is continuous in the $\zeta$-variable. \end{lemma} \begin{proof} Fix $n\in \Z_{\ge 0}, t>0,$ $\delta,\rho\in (0,n\vee 1)$ and $w \in \mathfrak{C}(\tau^{1- \frac{\delta}{2}})$ and $w'\in \mathbb{C}$ such that $|w'|\neq \tau^{1+\frac\delta2}$. Throughout the proof the constant $\Con>0$ depends on $n,\delta,$ and $q$ -- we will not mention it further. \smallskip Consider the integral on the l.h.s.~of \eqref{eq: idbd}. Observe that when $\delta\notin \Z$, $|(\delta + \i y)_n|\leq \Con(1+|y|)^n$ and $\frac{1}{|\sin(-\pi(\delta + \i y))|} \leq \Con e^{-|y|/\Con}$. For $n\ge 2$, and $\delta \in \Z_{>0}\cap (0,n)$, we observe that the product $(\delta +\i y)_n$ contains the factor $\i y$. Hence $|\frac{\i y}{\sin(-\pi(\delta +\i y))}| = |\frac{\i y}{\sin(-\pi(\i y))}| \le \Con e^{-|y|/\Con}$ for such an integer $\delta$, whereas $|\frac{(\delta+\i y)_n}{\i y}| \le \Con (1+|y|)^{n-1}$ for such an integer $\delta$. Finally, $|w'-\tau^{\delta+\i y}w| \geq ||w'| - |\tau^{\delta}w||=||w'|-\tau^{1+\frac\delta2}|$.
Combining the aforementioned estimates, we obtain that \begin{align*} \mbox{l.h.s. of }(\ref{eq: idbd})\leq \int_{A}\Con (1+|y|)^n e^{- |y|/\Con}\zeta^{\rho - n}|e^{tf({\delta+\i y}, w)}|\frac{\d y}{||w'| - \tau^{1+\frac\delta2}|}. \end{align*} Since $\int_{\R}(1+|y|)^ne^{-|y|/\Con}\d y$ converges, applying $|e^{tf({\delta+\i y}, w)}| \le e^{t\operatorname{Re} [f({\delta+\i y}, w)]}$ we arrive at the first inequality in \eqref{eq: idbd}. The second inequality follows by observing $\operatorname{Re} [f({\delta+\i y},w)] \le -h_q(\delta)$ by Lemma \ref{lem:max}. Recall $K_{\zeta,t}^{(n)}$ from \eqref{def: kerdrv}. Recall from Remark \ref{r:pole} that the $\delta$ appearing in \eqref{def: kerdrv} can be chosen in $(0, n\vee 1)$. Pushing the absolute value sign inside the explicit formula in \eqref{def: kerdrv} and applying Euler's reflection formula with the change of variables $u = \delta + \i y$ yield \begin{equation*} |K_{\zeta,t}^{(n)}(w,w')| \le \frac{1}{2\pi}\int_{\R}\left|\frac{(\delta +\i y)_n \zeta^{\delta -n +\i y}}{\sin(-\pi(\delta + \i y))}e^{tf({\delta+\i y}, w)}\right|\frac{\d y }{|w' - \tau^{{\delta}+ \i y}w|} . \end{equation*} Now \eqref{eq: kdbd} follows from \eqref{eq: idbd} by taking $\rho=\delta$. To see the continuity of $K_{\zeta,t}^{(n)}(w, w')$ in $\zeta,$ we fix $\zeta_1< \zeta_2 < \zeta_1 +1.$ By repeating the same set of arguments as above we arrive at \begin{align}\label{eq: kcz} |K_{\zeta_2,t}^{(n)}(w, w') - K_{\zeta_1,t}^{(n)}(w, w')|\le \Con|\zeta_2^{\delta -n} - \zeta_1^{\delta -n}|\exp(-th_q(\delta)) \end{align} with the same constant $\Con$ as in \eqref{eq: kdbd}. Clearly the l.h.s.~of \eqref{eq: kcz} converges to 0 when $\zeta_2 \rightarrow \zeta_1$, which confirms the kernel's $\zeta$-continuity. \end{proof} \subsection{Proof of Proposition \ref{p:leading}} \label{sec:lead} The goal of this section is to prove Proposition \ref{p:leading}. Before diving into the proof, we first settle the infinite differentiability separately in the next proposition.
\begin{proposition}\label{ppn:dkernel} For any $n\in \Z_{\ge0}$ and $t>0$, the operator $K_{\zeta,t}^{(n)}$ defined in \eqref{def: kerdrv} is a trace-class operator with \begin{align}\label{eq:inttr} \tr (K_{\zeta,t}^{(n)}) = \frac1{2\pi \i}\int_{\mathfrak{C}(\tau^{1-\frac\delta2})} K_{\zeta,t}^{(n)}(w,w)\d w. \end{align} Furthermore, $\tr (K_{\zeta,t}^{(n)})$ is differentiable in $\zeta$ at each $\zeta>0$ and we have $\partial_{\zeta} \tr (K_{\zeta,t}^{(n)}) = \tr (K_{\zeta,t}^{(n+1)})$. \end{proposition} \begin{proof} Fix $n\in \Z_{\ge 0}, t>0$, and $\zeta>0$. The kernel $K_{\zeta,t}^{(n)}(w,w')$ is jointly continuous in $(w,w')$ and $\partial_{w'}K_{\zeta,t}^{(n)}(w,w')$ is continuous in $w'$. By Lemma 3.2.7 in \cite{mcd} (also see \cite[page 345]{lax} or \cite{bor}) we see that $K_{\zeta,t}^{(n)}$ is indeed trace-class, and thus \eqref{eq:inttr} follows from Theorem 12 in \cite[Chapter 30]{lax}. To show the differentiability of $\tr (K_{\zeta,t}^{(n)})$ in the variable $\zeta$, we fix $\zeta_1,\zeta_2>0$. Without loss of generality we may assume $\zeta_1+1>\zeta_2>\zeta_1$.
Let us define \begin{equation*} \begin{split} D_{\zeta_1,\zeta_2} & := \frac{\tr (K_{\zeta_2,t}^{(n)}) - \tr (K_{\zeta_1,t}^{(n)})}{\zeta_2-\zeta_1}- \tr (K_{\zeta_1,t}^{(n+1)}) \\ & = \frac1{(2\pi\i)^2}\int_{\mathfrak{C}(\tau^{1-\frac{\delta}{2}})}\int_{\delta-\i \infty}^{\delta+\i \infty} \Gamma(-u)\Gamma(1+u)R_{\zeta_1,\zeta_2;n}(u)e^{tf(u,w)}\frac{\d u}{w-\tau^u w} \d w, \end{split} \end{equation*} where \begin{align}\label{eq:rdef} R_{\zeta_1,\zeta_2;n}(u):=(u)_n\left[\frac{\zeta_2^{u-n}-\zeta_1^{u-n}}{\zeta_2-\zeta_1} - (u-n)\zeta_1^{u-n-1}\right]= \int_{\zeta_1}^{\zeta_2} \frac{(\zeta_2-\sigma)}{{\zeta_2-\zeta_1}}(u)_{n+2}\sigma^{u-n-2}\d\sigma, \end{align} the second equality being Taylor's theorem with integral remainder applied to $\zeta\mapsto \zeta^{u-n}$. Taking absolute values and appealing to Euler's reflection formula, we obtain \begin{align}\label{eq: d} |D_{\zeta_1,\zeta_2}| & \le \left|\frac1{(2\pi\i)^2}\int_{\mathfrak{C}(\tau^{1-\frac{\delta}{2}})}\int_{\delta-\i \infty}^{\delta+\i \infty}\int_{\zeta_1}^{\zeta_2} \frac{(u)_{n+2}}{\sin(-\pi u)} \frac{(\zeta_2-\sigma)}{{\zeta_2-\zeta_1}}\sigma^{u-n-2} e^{tf(u,w)}\frac{\d\sigma\d u}{w-\tau^u w} \d w\right| \\ & \le \frac{\tau^{1-\frac{\delta}{2}}}{2\pi}\int_{\zeta_1}^{\zeta_2}\sigma^{\delta-n-2}\d\sigma \cdot \max_{w\in \mathfrak{C}(\tau^{1-\frac\delta2})}\int_{\R} \left|\frac{(\delta+\i y)_{n+2}}{\sin(-\pi(\delta+\i y))}\right| |e^{tf(\delta+\i y,w)}|\frac{\d y}{|w-\tau^{\delta+\i y} w|}. \nonumber \end{align} Note that by Lemma \ref{l:kdbd} (\eqref{eq: idbd} specifically) the above maximum is bounded by $\Con \exp(-th_q(\delta))$, where the constant $\Con$ is the same as in \eqref{eq: idbd}. Since $|\sigma^{u-n-2}|= \sigma^{\delta -n-2}\le \zeta_1^{\delta-n-2}$ over the interval $[\zeta_1, \zeta_2]$ for $\delta \in (0,n\vee 1)$, we obtain \begin{equation*} |D_{\zeta_1,\zeta_2}| \le \Con \exp(-th_q(\delta))\int_{\zeta_1}^{\zeta_2}\sigma^{\delta-n-2}\d \sigma \leq \Con \exp(-th_q(\delta)) (\zeta_2 -\zeta_1)\zeta_1^{\delta-n-2}.
\end{equation*} Thus, taking the limit as $ \zeta_2 - \zeta_1 \rightarrow 0$ yields $|D_{\zeta_1, \zeta_2}|\rightarrow 0$ and completes the proof. \end{proof} \begin{remark} We prove a higher order version of Proposition \ref{ppn:dkernel} later in Section \ref{sec:higher} as Proposition \ref{p:trL}, which includes the statement of the above proposition as the case $L =1$. However, we keep the above simple version for the reader's convenience, as it will serve as a guide in proving Proposition \ref{p:trL}. \end{remark} With the above results in place, we can now turn towards the main technical component of the proof of Proposition \ref{p:leading}. \begin{proof} [Proof of Proposition \ref{p:leading}] Before proceeding with the proof, we fix some notation. Fix $s>0$, and set $n=\lfloor s \rfloor+1 \ge 1$ and $\alpha=s-\lfloor s\rfloor\in [0,1)$ so that $s=n-1+\alpha$. Throughout the proof, we will use $\Con$ to denote a positive constant depending only on $s$ and $q$ -- we will not mention this further. We will also use the big $O$ notation. For two complex-valued functions $f_1(t)$ and $f_2(t)$ and $\beta\in \R$, the equations $f_1(t)=(1+O(t^{\beta}))f_2(t)$ and $f_1(t)=f_2(t)+O(t^{\beta})$ have the following meaning: there exists a constant $\Con>0$ such that for all large enough $t$, $$\left|\frac{f_1(t)}{f_2(t)}-1\right| \le \Con \cdot t^{\beta}, \mbox{ and } |f_1(t)-f_2(t)| \le \Con \cdot t^{\beta},$$ respectively. The value of the constant $\Con>0$ may change from line to line. For clarity we divide the proof into seven steps. In Steps 1 and 2, we provide the upper and lower bounds for $|\mathcal{A}_s(t)|$ and $\operatorname{Re}[\mathcal{A}_s(t)]$ respectively and complete the proof of \eqref{eq:lead}; in Steps 3--7, we verify the technical estimates assumed in the previous steps. \medskip \noindent\textbf{Step 1.} Recall $\mathcal{A}_s(t)$ from \eqref{eq:cala}.
The goal of this step is to provide a different expression for $\mathcal{A}_{s}(t)$, which will be much more amenable to our analysis, as well as an upper bound for $|\mathcal{A}_{s}(t)|$. By Proposition \ref{ppn:dkernel}, we have $\frac{\d^n}{\d\zeta^n}\tr(K_{\zeta,t})=\tr(K_{\zeta,t}^{(n)})$ and consequently, using the expression in \eqref{def: kerdrv}, we have \begin{align*} \mathcal{A}_{s}(t)= (-1)^n\int_{1}^{e^{tB_q(\frac{s}2)}} \frac{\zeta^{-\alpha}}{(2\pi\i)^2}\int_{\mathfrak{C}(\tau^{1-\frac{\delta}{2}})}\int_{\delta-\i\infty}^{\delta +\i\infty}\Gamma(-u)\Gamma(1+u)(u)_n\zeta^{u-n}\frac{e^{tf(u,w)}\d u}{w-\tau^u w} \d w\d\zeta, \end{align*} where $\delta \in (0,1)$ is chosen to be less than $s$. We now proceed to deform the $u$-contour and $w$-contour sequentially. As we explained in Remark \ref{r:pole}, the integrand has no poles when $u=1,2,\ldots, n-1$. Hence the $u$-contour can be deformed to $(s - \i \infty, s+\i \infty)$ as $s=n-1+\alpha\in (0,n).$ Next, for the $w$-contour, we wish to deform it from $\mathfrak{C}(\tau^{1-\frac\delta2})$ to $\mathfrak{C}(\tau^{1-\frac{s}2})$. In order to do so, we need to ensure that we do not cross any poles. We observe that the potential sources of poles lie in the exponent $f(u,w):=\frac{(q-p)}{1+w\tau^{-1}}-\frac{(q-p)}{1+\tau^{u-1}w}$ (recalled from \eqref{eq:contour_fn}) and in the denominator $w- \tau^u w.$ For any $w \in \mathfrak{C}(\tau^{1-\frac{\delta'}{2}})$, where $\delta'\in (\delta, s)$, and any $ u \in (s -\i \infty, s +\i \infty),$ we have $$|w - \tau^u w| \ge |w|- |\tau^uw|= \tau^{1-\frac{\delta'}{2}}(1-\tau^s) > 0, \quad |1+w\tau^{-1}|\ge |w\tau^{-1}|-1=\tau^{-\frac{\delta'}2}-1>0, $$ $$\mbox{ and } |1+\tau^{u-1}w|\ge 1- |\tau^{u-1}w| =1-\tau^{s-\frac{\delta'}2} > 0.$$ Thus, we can deform the $w$-contour to $\mathfrak{C}(\tau^{1-\frac{s}2})$ as well without crossing any poles.
With the change of variable $u=s+\i y$, $w=\tau^{1-\frac{s}2}e^{\i\theta}$, and Euler's reflection formula (note that $\zeta^{-\alpha}\zeta^{u-n}=\zeta^{-1+\i y}$ since $s=n-1+\alpha$) we have \begin{align}\label{eq:trip} \mathcal{A}_{s}(t)= (-1)^n\int_{1}^{e^{tB_q(\frac{s}2)}} \frac{\zeta^{-1}}{4\pi^2}\int_{-\pi}^{\pi}\int_{\R}\frac{(s+\i y)_n\zeta^{\i y}}{\sin(-\pi (s+\i y))}e^{tf(s+\i y, \tau^{1-\frac{s}{2}}e^{\i \theta})}\frac{\d y}{1-\tau^{s+\i y} } \d \theta\d\zeta. \end{align} With this expression in hand, the upper bound is immediate. By Lemma \ref{l:kdbd} (\eqref{eq: idbd} specifically with $\rho\mapsto n-1$, $\delta\mapsto s$), pushing the absolute value inside the integrals we see that \begin{align}\label{eq:up1} |\mathcal{A}_{s}(t)|\le \Con\exp(-th_q(s))\int_{1}^{e^{tB_q(\frac{s}2)}} \frac{1}{\zeta}\d\zeta = \Con \cdot tB_q(\tfrac{s}2)\exp(-th_q(s)) \end{align} for some constant $\Con=\Con(q,s)>0$. Hence taking logarithms and dividing by $t$, we get \begin{align}\label{eq:upper} \limsup\limits_{t\to\infty} \tfrac1t\log|\mathcal{A}_s(t)| \le -h_q(s) = -(q-p)\frac{1-\tau^{\frac{s}2}}{1+\tau^{\frac{s}2}}. \end{align} \medskip \noindent\textbf{Step 2.} In this step, we provide a lower bound for $\operatorname{Re}[\mathcal{A}_s(t)]$. Set $\varepsilon=t^{-2/5}>0$. For each $k\in \Z$, set $v_k=-\frac{2\pi}{\log\tau}k$ and consider the interval $V_k:=[v_k -\varepsilon^2,v_k+\varepsilon^2 ].$ Also set $A_{\varepsilon}:=\{\theta \in [-\pi,\pi] : |e^{\i\theta}-1|\le \varepsilon |\log\tau|\}$.
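The choice of the scale $\varepsilon=t^{-\frac25}$ can be motivated by the following back-of-the-envelope computation (a heuristic only; the precise estimates appear in Steps 3--5):

```latex
\begin{align*}
t\varepsilon^2 = t^{\frac15}\to\infty, \qquad t\varepsilon^3 = t^{-\frac15}\to 0.
\end{align*}
```

Thus the window $A_\varepsilon$ is wide enough to capture essentially all of the Gaussian mass of the Laplace-type integral (whose natural width is $t^{-\frac12}\ll\varepsilon$), while the cubic term in the Taylor expansion of the exponent, of size $t|\theta|^3\lesssim t\varepsilon^3$, remains negligible inside it. The windows $V_k$ play the analogous role in the $y$-direction.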
We divide the triple integral in \eqref{eq:trip} into the following parts \begin{equation} \label{eq:A} \mathcal{A}_{s}(t) = \sum_{k \in \Z}(\mathbf{I})_k + (\mathbf{II}) + (\mathbf{III}), \end{equation} where \begin{align}\label{eq:1k} (\mathbf{I})_k & : = \int_{1}^{e^{tB_q(\frac{s}2)}} \int_{A_{\varepsilon}}\int_{V_k} \frac{(-1)^n}{4\pi^2\zeta}\frac{(s+\i y)_n\zeta^{\i y}}{\sin(-\pi (s+\i y))}\frac{e^{tf(s+\i y, \tau^{1-\frac{s}{2}}e^{\i \theta})}\d y}{1-\tau^{s+\i y} } \d \theta\d\zeta, \\ \label{eq:2k} (\mathbf{II}) & := \int_{1}^{e^{tB_q(\frac{s}2)}} \int_{A_{\varepsilon}}\int_{\R\setminus \cup_k V_k} \frac{(-1)^n}{4\pi^2\zeta}\frac{(s+\i y)_n\zeta^{\i y}}{\sin(-\pi (s+\i y))}\frac{e^{tf(s+\i y, \tau^{1-\frac{s}{2}}e^{\i \theta})}\d y}{1-\tau^{s+\i y} } \d \theta\d\zeta,\\ \label{eq:3k} (\mathbf{III}) & := \int_{1}^{e^{tB_q(\frac{s}2)}} \int_{[-\pi,\pi]\cap A_{\varepsilon}^c}\int_{\R} \frac{(-1)^n}{4\pi^2\zeta}\frac{(s+\i y)_n\zeta^{\i y}}{\sin(-\pi (s+\i y))}\frac{e^{tf(s+\i y, \tau^{1-\frac{s}{2}}e^{\i \theta})}\d y}{1-\tau^{s+\i y} } \d \theta\d\zeta. \end{align} In the subsequent steps we establish the following estimates for each of these integrals. First, we claim that \begin{align}\label{eq:in1} (\mathbf{I})_0 = (1+O(t^{-\frac{1}{5}}))\frac{\Con_0}{\sqrt{t}}\exp(-th_q(s)), \end{align} where $h_q(s)$ is defined in \eqref{eq:exp} and \begin{align}\label{eq:co1} \Con_0 := \sqrt{\frac{(1+\tau^{\frac{s}2})^3}{4\pi(q-p)(\tau^{\frac{3s}2-2}-\tau^{2s-2})}}\frac{(-1)^n(s)_n}{\sin(-\pi s)(1-\tau^s)} > 0. \end{align} When $s$ is an integer the above constant is defined in a limiting sense. Note that $\Con_0$ is indeed positive as $n=\lfloor s \rfloor +1$. Furthermore, we claim that we have the following upper bounds for the other integrals: \begin{align}\label{eq:in2} \sum_{k\in \Z\setminus \{0\}} |(\mathbf{I})_k| \le \Con t^{-\frac{13}{10}}\exp(-th_q(s)).
\end{align} Moreover, \begin{align}\label{eq:out} |(\mathbf{II})|, |(\mathbf{III})|\le \Con t\exp\left(-th_q(s)\right)\exp(-\tfrac1\Con t^{\frac{1}5}). \end{align} Assuming the validity of \eqref{eq:in1}, \eqref{eq:in2} and \eqref{eq:out} we can complete the proof of the lower bound for \eqref{eq:lead}. Following the decomposition in \eqref{eq:A} we see that for all large enough $t$, \begin{align*} \operatorname{Re} [\mathcal{A}_s(t)] & \ge \operatorname{Re} [(\mathbf{I})_0]- \sum_{k\in \Z\setminus \{0\}} |(\mathbf{I})_k| - | (\mathbf{II})|-|(\mathbf{III})| \\ & \ge \tfrac{1}{\sqrt{t}}\exp(-th_q(s))\left[\tfrac1{2}\Con_0-\Con t^{-\frac{4}5}- \Con t^{\frac32}\exp(-\tfrac1{\Con}t^{\frac{1}5})\right] \ge \tfrac{\Con_0}{4\sqrt{t}}\exp(-th_q(s)). \end{align*} Taking logarithms and dividing by $t$ we get that $\liminf_{t\to \infty} \tfrac1t\log\operatorname{Re}[\mathcal{A}_s(t)] \ge -h_q(s)$. Combining with \eqref{eq:upper} we arrive at \eqref{eq:lead}. \medskip \noindent\textbf{Step 3.} In this step, we prove \eqref{eq:out}. Recall $(\mathbf{II})$ and $(\mathbf{III})$ defined in \eqref{eq:2k} and \eqref{eq:3k}. For each of them, we push the absolute value around each term of the integrand. We use \eqref{eq: idbd} from Lemma \ref{l:kdbd} to get \begin{align}\label{eq:ii} &|(\mathbf{II})| \le \Con \exp\bigg(t\sup_{\substack{y \in \R \setminus \cup_k V_k\\|e^{\i \theta }-1|\leq \varepsilon|\log\tau|}}\operatorname{Re}[f(s + \i y, \tau^{1-\frac{s}{2}}e^{\i \theta})]\bigg)\int_1^{e^{tB_q(\frac{s}2)}} \frac{\d\zeta}{\zeta},\\ &\label{eq:iii} |(\mathbf{III})|\le \Con \exp\bigg(t\sup_{\substack{y \in \R \\|e^{\i \theta }-1|> \varepsilon|\log\tau|}}\operatorname{Re}[f(s + \i y, \tau^{1-\frac{s}{2}}e^{\i \theta})]\bigg)\int_1^{e^{tB_q(\frac{s}2)}} \frac{\d\zeta}{\zeta}. \end{align} Note that in \eqref{eq:ii}, since $y\notin \cup_k V_k$, we have $|\tau^{\i y} -1|\ge |\tau^{\i t^{-\frac{4}{5}}} -1 | \ge \frac12|\log\tau| t^{-\frac{4}{5}}$ for all large enough $t$.
Meanwhile in \eqref{eq:iii}, $|\tau^{1- \frac{s}{2}}(e^{\i \theta} -1)|\ge \tau^{1- \frac{s}{2}}\varepsilon|\log \tau| = \tau^{1- \frac{s}{2}}|\log \tau|t^{-\frac{2}{5}}$. In either case, appealing to \eqref{max:ineq} in Lemma \ref{lem:max} with $\rho\mapsto s$ gives us that \begin{align*} f(s, \tau^{1- \frac{s}{2}}) - \operatorname{Re}[f(s+\i y, \tau^{1-\frac{s}{2}}e^{\i \theta})] \ge \tfrac1\Con \cdot t^{-\frac{4}{5}}. \end{align*} Substituting $f(s, \tau^{1-\frac{s}{2}})=-h_q(s)$ and evaluating the integrals in \eqref{eq:ii} and \eqref{eq:iii} (each $\zeta$-integral equals $tB_q(\frac{s}2)$) gives us \eqref{eq:out}. \medskip \noindent\textbf{Step 4.} In this step and the subsequent steps we prove \eqref{eq:in1} and \eqref{eq:in2}. Recall that $v_k=-\frac{2\pi}{\log \tau} k$ and $\varepsilon=t^{-\frac25}$. We focus on the $(\mathbf{I})_k$ integral defined in \eqref{eq:1k}. Our goal in this and the next step is to show \begin{align}\label{eq:ik1} (\mathbf{I})_k = (1+O(t^{-\frac15}))\frac{ \Con_0(k)}{2\pi\sqrt{t}}\int_{1}^{e^{tB_q(\frac{s}2)}} \frac{\zeta^{\i v_k}}{\zeta}\int_{-\varepsilon^2}^{\varepsilon^2}\zeta^{\i y}\exp(-th_q(s+\i y)) \d y\d\zeta, \end{align} where \begin{align}\label{eq:cok} \Con_0(k):=\sqrt{\frac{(1+\tau^{\frac{s}2})^3}{4\pi(q-p)(\tau^{\frac{3s}2-2}-\tau^{2s-2})}}\frac{(-1)^n(s+\i v_k)_n}{\sin(-\pi (s+\i v_k))(1-\tau^s)}. \end{align} Towards this end, note that in the argument for \eqref{eq:up1} we pushed the absolute value around each term of the integrand. Thus, the upper bound achieved in \eqref{eq:up1} guarantees that the triple integral in $(\mathbf{I})_k$ converges absolutely, and Fubini's theorem allows us to switch the order of integration inside $(\mathbf{I})_k$.
By the change of variables $y\mapsto y+v_k$ (under which $\tau^{\i v_k}=1$, so $f$ and $1-\tau^{s+\i y}$ are unchanged), we see that \begin{align*} (\mathbf{I})_k = (-1)^n\int_{1}^{e^{tB_q(\frac{s}2)}} \frac{\zeta^{\i v_k-1}}{4\pi^2}\int_{-\varepsilon^2}^{\varepsilon^2}\frac{(s+\i y+\i v_k)_n\zeta^{\i y}}{\sin(-\pi (s+\i y+\i v_k))}\int_{A_\varepsilon}\frac{e^{tf(s+\i y, \tau^{1-\frac{s}{2}}e^{\i \theta})}\d \theta}{1-\tau^{s+\i y} } \d y\d\zeta, \end{align*} where we recall $A_{\varepsilon}=\{\theta \in [-\pi,\pi] : |e^{\i\theta}-1|\le \varepsilon |\log\tau|\}$. Note that in this case the range of $y$ lies in the small window $[-t^{-\frac{4}5},t^{-\frac45}]$. As $s$ is fixed, one can replace $(s+\i y+\i v_k)_n$, $\sin (-\pi(s+\i y+\i v_k))$, and $1-\tau^{s+\i y}$ by $(s+\i v_k)_n$, $\sin (-\pi(s+\i v_k))$, and $1-\tau^{s}$ at the expense of an $O(t^{-\frac{4}5})$ term (which can be chosen independent of $k$). We thus obtain \begin{align}\label{eq:ik} (\mathbf{I})_k = \frac{(-1)^n(s+\i v_k)_n(1+O(t^{-\frac45}))}{\sin(-\pi (s+\i v_k))(1-\tau^s)}\int_{1}^{e^{tB_q(\frac{s}2)}} \frac{\zeta^{\i v_k}}{4\pi^2\zeta}\int_{-\varepsilon^2}^{\varepsilon^2}\zeta^{\i y}\int_{A_\varepsilon}e^{tf(s+\i y, \tau^{1-\frac{s}{2}}e^{\i \theta})}\d \theta \d y\d\zeta. \end{align} We now evaluate the $\theta$-integral in the above expression. We claim that \begin{align}\label{theta1} \int_{A_\varepsilon} e^{tf(s+\i y, \tau^{1-\frac{s}{2}}e^{\i \theta})}\d \theta & = (1+O(t^{-\frac1{5}}))\sqrt{\frac{\pi(1+\tau^{\frac{s}2})^3}{t(q-p)(\tau^{\frac{3s}2-2}-\tau^{2s-2})}}\exp(-th_q(s+\i y)). \end{align} Note that \eqref{eq:ik1} follows from \eqref{theta1}. Hence we focus on proving \eqref{theta1} in the next step. \medskip \noindent\textbf{Step 5.} In this step we prove \eqref{theta1}. For simplicity we let $u=s+\i y$ temporarily.
Taylor expanding the exponent appearing in the l.h.s.~of \eqref{theta1} around $\theta=-\frac{y}{2}\log\tau$ and using the fact that $\partial_{z}f(u,z)|_{z =\tau^{1-\frac{u}{2}}} = 0$, we get \begin{align} \mbox{l.h.s.~of \eqref{theta1}} & = \int_{A_\varepsilon}e^{tf(u, \tau^{1-\frac{u}{2}}e^{\i (\theta + \frac{y}{2}\log \tau)})}\d \theta \nonumber \\ & = \exp(tf(u, \tau^{1-\frac{u}2}))\int_{A_\varepsilon} \exp\left(-\frac t2\partial_z^2f(u,\tau^{1-\frac{u}2})(\theta+\tfrac{y}{2}\log\tau)^2+ O(t^{-\frac{1}{5}})\right)\d \theta. \label{theta2} \end{align} Note that we have replaced the higher order terms by $O(t^{-\frac15})$ in the exponent above: $\theta$ and $y$ are at most of order $O(t^{-\frac25})$, so the cubic term is of size $O(t\cdot t^{-\frac65})=O(t^{-\frac15})$. Furthermore, for all $t$ large enough, \begin{align*} A_\varepsilon & =\{\theta \in [-\pi,\pi] : |e^{\i\theta}-1|\le \varepsilon |\log\tau|\} \\ & = \{\theta \in [-\pi,\pi] : |\sin\tfrac{\theta}{2}|\le \tfrac12\varepsilon |\log\tau|\} \supset \{\theta \in [-\pi,\pi] : |\theta|\le \varepsilon |\log\tau|\}. \end{align*} As $y\in [-\varepsilon^2,\varepsilon^2]$, we see that $A_\varepsilon \supset \{\theta \in [-\pi,\pi] : |\theta+\frac{y}{2}\log\tau|\le \frac12\varepsilon|\log\tau|\}$ for all large enough $t$. Thus on $A_\varepsilon^c$ we have $|\theta+\frac{y}2\log\tau| \ge \frac12t^{-\frac25}|\log\tau|$. Furthermore, for small enough $y$, by \eqref{eq;deri}, we have $\operatorname{Re}[\partial_z^2 f(u,\tau^{1-\frac{u}2})]>0$. Hence the above integral can be approximated by a Gaussian integral. In particular, we have \begin{align} \mbox{r.h.s.~of \eqref{theta2}} & = (1 +O(t^{-\frac15}))\exp(tf(u, \tau^{1-\frac{u}2}))\sqrt{\frac{2\pi}{t\partial_z^2f(u,\tau^{1-\frac{u}2})}}. \label{e1} \end{align} Observe that, as $u=s+\i y$ and $y$ is at most $O(t^{-\frac45})$, $\partial_z^2f(u,\tau^{1-\frac{u}2})$ in the r.h.s.~of \eqref{e1} can be replaced by $\partial_z^2f(s,\tau^{1-\frac{s}2})$ by adjusting the order term.
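For completeness, the Gaussian approximation asserted in \eqref{e1} rests on the following standard tail estimate; we sketch it with $a$ standing in for $\partial_z^2f(u,\tau^{1-\frac{u}2})$ (so $\operatorname{Re}[a]>0$) and $\eta\asymp t^{-\frac25}$ denoting the half-width of the integration window:

```latex
\begin{align*}
\int_{|x|\le \eta} e^{-\frac{ta}{2}x^2}\,\d x
= \sqrt{\frac{2\pi}{ta}} - \int_{|x|>\eta} e^{-\frac{ta}{2}x^2}\,\d x,
\qquad
\left|\int_{|x|>\eta} e^{-\frac{ta}{2}x^2}\,\d x\right|
\le \frac{2}{t\operatorname{Re}[a]\,\eta}\,e^{-\frac{t\operatorname{Re}[a]}{2}\eta^2},
\end{align*}
```

where the tail bound follows from $\int_\eta^\infty e^{-cx^2}\d x\le \int_\eta^\infty \frac{x}{\eta}e^{-cx^2}\d x=\frac{1}{2c\eta}e^{-c\eta^2}$ with $c=\frac{t}{2}\operatorname{Re}[a]$. Since $t\eta^2\asymp t^{\frac15}$, the tail is $O(\exp(-\frac1\Con t^{\frac15}))$ and is absorbed into the $O(t^{-\frac15})$ error in \eqref{e1}.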
Recall the expression for $\partial_z^2f(s,\tau^{1-\frac{s}2})$ from \eqref{eq;deri} and observe that from the definitions of $f$ and $h_q$ in \eqref{eq:contour_fn} and \eqref{eq:exp} we have $f(u,\tau^{1-\frac{u}2})=-h_q(s+\i y)$. We thus arrive at \eqref{theta1}. \medskip \noindent\textbf{Step 6.} In this step we prove \eqref{eq:in1} and \eqref{eq:in2} starting from the expression for $(\mathbf{I})_k$ obtained in \eqref{eq:ik1}. As $y$ varies in the window $[-t^{-\frac45},t^{-\frac45}]$, by Taylor expansion we may replace $th_q(s+\i y)$ appearing in the r.h.s.~of \eqref{eq:ik1} by $t(h_q(s)+\i yh_q'(s))$ at the expense of an $O(t^{-\frac35})$ term. Upon making the change of variable $r=\log\zeta-th_q'(s)$ we thus have \begin{align} (\mathbf{I})_k & = (1+O(t^{-\frac15}))\frac{ \Con_0(k)}{2\pi\sqrt{t}}\exp(-th_q(s))\int_{-th_q'(s)}^{{tB_q(\frac{s}2)}-th_q'(s)} e^{\i v_k (r+th_q'(s))}\int_{-\varepsilon^2}^{\varepsilon^2}e^{\i y r} \d y\d r \nonumber \\ & \label{eq:ik11} = (1+O(t^{-\frac15}))\frac{ \Con_0(k)}{2\pi\sqrt{t}}\exp(-th_q(s))\int_{-th_q'(s)}^{{tB_q(\frac{s}2)}-th_q'(s)} e^{\i v_k (r+th_q'(s))}\frac{e^{\i \varepsilon^2 r}-e^{-\i \varepsilon^2 r}}{\i r}\d r. \end{align} We claim that for $k=0$ (which implies $v_k=0$) we have \begin{align}\label{eq:c1} \int_{-th_q'(s)}^{{tB_q(\frac{s}2)}-th_q'(s)} \frac{e^{\i \varepsilon^2 r}-e^{-\i \varepsilon^2 r}}{\i r}\d r = 2\pi(1+O(t^{-\frac15})), \end{align} while for $k\neq 0$ we have \begin{align}\label{eq:c2} \left|\int_{-th_q'(s)}^{{tB_q(\frac{s}2)}-th_q'(s)} e^{\i v_k (r+th_q'(s))}\frac{e^{\i \varepsilon^2 r}-e^{-\i \varepsilon^2 r}}{\i r}\d r\right| \le \Con t^{-\frac{4}5}, \end{align} where $\Con>0$ can be chosen free of $k$. Assuming \eqref{eq:c1} and \eqref{eq:c2} we may now complete the proof of \eqref{eq:in1} and \eqref{eq:in2}. Indeed, for $k=0$, upon observing that $\Con_0=\Con_0(0)$ (recall \eqref{eq:co1} and \eqref{eq:cok}), in view of \eqref{eq:ik11} and \eqref{eq:c1} we get \eqref{eq:in1}.
Whereas for $k\neq 0$, thanks to the estimate in \eqref{eq:c2}, in view of \eqref{eq:ik11}, we have \begin{align}\label{eq:ksum} \sum_{k\in \Z\setminus \{0\}} |(\mathbf{I})_k| \le \Con t^{-\frac{13}{10}} \exp(-th_q(s))\sum_{k\in \Z\setminus \{0\}} |\Con_0(k)|. \end{align} For $y\neq 0$, the bound $|\frac{(s+\i y)_n}{\sin(-\pi(s+\i y))}|\le \Con |y|^ne^{-|y|/\Con}$ (applied with $y=v_k$) forces the sum on the r.h.s.~of \eqref{eq:ksum} to be finite, proving \eqref{eq:in2}. \medskip \noindent\textbf{Step 7.} In this step we prove \eqref{eq:c1} and \eqref{eq:c2}. Recalling that $\varepsilon^2=t^{-\frac45}$, we see that \begin{align}\label{eq:k0} \int_{-th_q'(s)}^{{tB_q(\frac{s}2)}-th_q'(s)} \frac{e^{\i \varepsilon^2 r}-e^{-\i \varepsilon^2 r}}{\i r}\d r = \int_{-t^{1/5}h_q'(s)}^{{t^{1/5}B_q(\frac{s}2)}-t^{1/5}h_q'(s)} \frac{2\sin r}{r}\d r. \end{align} Following the definitions of $h_q$ and $B_q$ in Proposition \ref{p:htau} we observe that $-h_q'(s)=\frac{\tau^{\frac{s}2}\log\tau}{(1+\tau^{\frac{s}2})^2}<0$ and $$B_q(s)-h_q'(s)=\frac{1-\tau^s+\tau^{\frac{s}{2}}s\log \tau}{s(1+\tau^{\frac{s}2})}=-sB_q'(s)>0,$$ where $B_q'(s)<0$ follows from \eqref{eq:bq}. Thus, as $B_q$ is strictly decreasing (Proposition \ref{p:htau} \ref{a}), we have $B_q(\frac{s}{2}) > B_q(s) > h_q'(s)$. Hence both limits of integration on the r.h.s.~of \eqref{eq:k0} are of order $t^{\frac15}$ in absolute value and of opposite signs, so the integral can be approximated by $(1+O(t^{-1/5}))\int_{\R} \frac{2\sin r}{r}\d r= 2\pi(1+O(t^{-1/5}))$; indeed, integration by parts yields $|\int_M^\infty \frac{\sin r}{r}\d r| \le \frac{2}{M}$ for every $M>0$. This proves \eqref{eq:c1}. We now focus on proving \eqref{eq:c2}.
Towards this end, we divide the integral appearing in \eqref{eq:c2} into three regions as follows \begin{equation} \label{eq:ik15} \begin{aligned} \mbox{l.h.s.~of \eqref{eq:c2}} & \le \left|\int_{-th_q'(s)}^{-1} e^{\i v_k (r+th_q'(s))}\frac{e^{\i \varepsilon^2 r}-e^{-\i \varepsilon^2 r}}{\i r}\d r\right| +\left|\int_{-1}^{1} e^{\i v_k (r+th_q'(s))}\frac{e^{\i \varepsilon^2 r}-e^{-\i \varepsilon^2 r}}{\i r}\d r\right|\\ & \hspace{3cm}+\left|\int_{1}^{{tB_q(\frac{s}2)}-th_q'(s)} e^{\i v_k (r+th_q'(s))}\frac{e^{\i \varepsilon^2 r}-e^{-\i \varepsilon^2 r}}{\i r}\d r\right|. \end{aligned} \end{equation} Note that the second term appearing in the r.h.s.~of \eqref{eq:ik15} can be bounded by $4 t^{-\frac45}$ using $$\left|\int_{-1}^{1} e^{\i v_k (r+th_q'(s))}\frac{2\sin(\varepsilon^2r)}{r}\d r\right| \le \int_{-1}^{1} \left|\frac{2\sin(\varepsilon^2r)}{r}\right|\d r \le 4\varepsilon^2=4t^{-\frac{4}5}.$$ For the first term appearing in the r.h.s.~of \eqref{eq:ik15}, making the change of variable $r \mapsto r\frac{v_k-\varepsilon^2}{v_k+\varepsilon^2}$ yields the identity \begin{align*} \int_{-th_q'(s)}^{-1} e^{\i v_k (r+th_q'(s))}\frac{e^{\i \varepsilon^2 r}}{\i r}\d r = \int_{-th_q'(s)\frac{v_k+\varepsilon^2}{v_k-\varepsilon^2}}^{-\frac{v_k+\varepsilon^2}{v_k-\varepsilon^2}} e^{\i v_k (r+th_q'(s))}\frac{e^{-\i \varepsilon^2 r}}{\i r}\d r. \end{align*} This leads to \begin{align*} \int_{-th_q'(s)}^{-1} e^{\i v_k (r+th_q'(s))}\frac{e^{\i \varepsilon^2 r}-e^{-\i\varepsilon^2 r}}{\i r}\d r & = \int_{-th_q'(s)}^{-th_q'(s)\frac{v_k+\varepsilon^2}{v_k-\varepsilon^2}} e^{\i v_k (r+th_q'(s))}\frac{e^{-\i \varepsilon^2 r}}{\i r}\d r \\ & \hspace{2cm}+ \int_{-\frac{v_k+\varepsilon^2}{v_k-\varepsilon^2}}^{-1} e^{\i v_k (r+th_q'(s))}\frac{e^{-\i \varepsilon^2 r}}{\i r}\d r. \end{align*} In the first integral, the length of the interval of integration is $O(t^{1/5})$, while the integrand itself is $O(t^{-1})$.
For the second integral, the length of the interval is $O(t^{-4/5})$, and the integrand itself is $O(1)$. Note that this is only possible when $k\neq 0$ (forcing $v_k\neq 0$), and indeed all the $O$ terms can be taken to be free of $v_k$ (and hence of $k$). Combining these bounds, we get that the first term appearing in the r.h.s.~of \eqref{eq:ik15} can be bounded by $\Con t^{-\frac45}$. An exactly analogous argument provides the same bound for the third term in the r.h.s.~of \eqref{eq:ik15} as well. This proves \eqref{eq:c2}, completing the proof. \end{proof} \section{Bounds for the Higher order terms} \label{sec:higher} The goal of this section is to establish bounds for the higher-order term $\mathcal{B}_s(t)$ defined in (\ref{eq:Calb}). First, recall the Fredholm determinant formula from \eqref{eq:f-series}. Using the $\tr(K_{\zeta,t}^{\wedge L})$ notation from \eqref{eq:fdhm} we may rewrite $\mathcal{B}_s(t)$ as follows: \begin{align}\label{eq: bst} \mathcal{B}_s(t) = (-1)^n\int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha}\frac{\d^n}{\d \zeta^n}\bigg[1 + \sum_{L =2}^{\infty}\tr(K^{\wedge L}_{\zeta, t})\bigg]\d \zeta. \end{align} We claim that we can exchange the various integrals, derivatives, and sums appearing in the r.h.s.~of \eqref{eq: bst} and obtain $\mathcal{B}_s(t)$ through term-by-term differentiation, i.e., \begin{align}\label{eq:bfinal} \mathcal{B}_s(t) = (-1)^n \sum_{L=2}^{\infty}\int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha}\partial_{\zeta}^n(\tr(K_{\zeta,t}^{\wedge L}))\d \zeta. \end{align} We devote Section \ref{intrchnge} to the justification of this claim. Following the technical lemmas in Section \ref{intrchnge}, we proceed to prove Proposition \ref{p:ho} in Section \ref{pf: p:ho}.
\subsection{Interchanging sums, integrals and derivatives}\label{intrchnge} Recall from (\ref{def: kerdrv}) the definition of $K_{\zeta, t}^{(n)}.$ As a starting point of our analysis, we introduce the following notation before providing the bounds on $|\partial_{\zeta}^n \tr(K_{\zeta, t}^{\wedge L})|.$ For any $n,L\in \Z_{>0}$, define \begin{equation} \label{mln} \mathfrak{M}(L, n) := \{\vec{m} = (m_1, \ldots, m_L) \in (\Z_{\geq 0})^L: m_1 + \cdots + m_L = n\} \mbox{ and } \binom{n}{\vec{m}}: = \frac{n!}{m_1!\cdots m_L!}. \end{equation} Furthermore, for any $L \in \Z_{>0},$ $\zeta \in \R_{>0}$ and $\vec{m} \in \mathfrak{M}(L, n)$, let \begin{equation}\label{def: Im} I_{\zeta}(\vec{m}) := {\int\ldots\int}\det(K_{\zeta, t}^{(m_i)}(w_i, w_j))_{i, j =1}^L\prod_{i = 1}^L\d w_i, \end{equation} where each $w_i$-contour lies on $\mathfrak{C}(\tau^{1-\frac{\delta}2})$. We also set $|\vec{m}|_{>0}: = |\{i \mid i \in \Z \cap [1, L], m_i >0\}|,$ i.e.\ the number of positive entries $m_i$ of $\vec{m}.$ To begin with, the next two results investigate the term-by-term $n$-th derivatives of $\tr(K^{\wedge L}_{\zeta,t})$ that appear on the r.h.s.~of \eqref{eq:bfinal}. The following should be regarded as a higher order version of Proposition \ref{ppn:dkernel}. \begin{proposition}\label{p:trL} Fix $n,L \in \Z_{>0}$ and let $\mathfrak{M}(L, n)$ be defined as in $(\ref{mln}).$ Recall the function $B_q(x)$ from Proposition \ref{p:htau}. For any $t > 0$, the function $\zeta\mapsto \tr(K_{\zeta, t}^{\wedge L})$ is infinitely differentiable at each $\zeta \in [1, e^{tB_q(\frac{s}{2})}]$, with \begin{equation}\label{eq: exdi} \partial_\zeta^n \tr(K_{\zeta, t}^{\wedge L}) = \frac{1}{L!}\sum_{\vec{m}\in \mathfrak{M}(L, n)}\binom{n}{\vec{m}} I_{\zeta}(\vec{m}), \end{equation} where the r.h.s.~of \eqref{eq: exdi} converges absolutely uniformly.
Furthermore, there exists a constant $\Con = \Con(n, \delta, q)>0$ such that for all $\vec{m}\in \mathfrak{M}(L,n)$ we have \begin{equation}\label{eq:exdi2} |I_\zeta(\vec{m})| \le \Con^L L^{\frac{L}2}\zeta^{L\delta-n}\exp(-tLh_q(\delta)), \quad |\partial_\zeta^n \tr(K_{\zeta, t}^{\wedge L})| \leq \frac{1}{L!}\Con^L L^n L^{\frac{L}{2}}\zeta^{L\delta-n}\exp(-tLh_q(\delta)). \end{equation} \end{proposition} \begin{proof} The proof idea is the same as that of Proposition \ref{ppn:dkernel}, but it is notationally more cumbersome. For clarity we split the proof into four steps. In the first step, we introduce some necessary notation. In Steps 2--3, we prove \eqref{eq: exdi} and in the final step, we prove \eqref{eq:exdi2}. \medskip \noindent\textbf{Step 1.} In this step we summarize the notation we will require in the proof of \eqref{eq: exdi}. We fix $L\in \Z_{>0}, \delta\in (0,1),t>0$, and $\zeta_1, \zeta_2 > 0$ and recall $B_q(x)$ from Proposition \ref{p:htau}. We define $\vec\xi_k \in [1,e^{tB_q(\frac{s}{2})}]^L$ to be the vector whose first $k$ entries are $\zeta_2$ and the rest $L-k$ entries are $\zeta_1$: $$\vec\xi_k := (\xi_{k,1},\xi_{k,2},\ldots,\xi_{k,L}):=(\ \underbrace{ \zeta_2\ , \ \zeta_2\ ,\ \ldots\ ,\ \zeta_2}_{k \mbox{ times}}\ ,\ \underbrace{\zeta_1 \ ,\ \zeta_1\ ,\ \ldots \ ,\ \zeta_1 }_{L-k \mbox{ times}}\ ), \quad k=0,1,\ldots,L.$$ For any $\vec{m}=(m_1,m_2,\ldots,m_L)\in (\Z_{\ge 0})^L$ we define the following integral of mixed parameters \begin{equation}\label{def: mIm} I_{\zeta_1,\zeta_2}^{(k)}(\vec{m}) := {\int\ldots\int}\det(K_{\xi_{k,i}, t}^{(m_i)}(w_i, w_j))_{i, j =1}^L\prod_{i = 1}^L\d w_i, \end{equation} where each $w_i$-contour lies on $\mathfrak{C}(\tau^{1-\frac{\delta}2})$. As $k$ increases from $0$ to $L$, $I_{\zeta_1,\zeta_2}^{(k)}(\vec{m})$ serves as an interpolation between $I_{\zeta_1}(\vec{m})$ and $I_{\zeta_2}(\vec{m})$ defined in \eqref{def: Im}; the parameter $\zeta$ is now allowed to differ between rows of the determinant.
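By construction $\vec\xi_0=(\zeta_1,\ldots,\zeta_1)$ and $\vec\xi_L=(\zeta_2,\ldots,\zeta_2)$, so this interpolation telescopes:

```latex
\begin{align*}
I_{\zeta_2}(\vec{m})-I_{\zeta_1}(\vec{m})
= I_{\zeta_1,\zeta_2}^{(L)}(\vec{m})-I_{\zeta_1,\zeta_2}^{(0)}(\vec{m})
= \sum_{k=1}^{L}\left[I_{\zeta_1,\zeta_2}^{(k)}(\vec{m})-I_{\zeta_1,\zeta_2}^{(k-1)}(\vec{m})\right],
\end{align*}
```

where consecutive terms differ in a single row of the determinant; this telescoping is the basic mechanism behind the induction in Step 2 below.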
\medskip We next define $\vec{e}_k=(e_{k,1},e_{k,2},\ldots,e_{k,L})$ to be the unit vector with $1$ in the $k$-th position and $0$ elsewhere. With the above notations in place, for each $j,k \in \{1,2,\ldots,L\}$ and $\vec{m}\in (\Z_{\ge 0})^L$ we set \begin{align}\label{eq:d4} &\mathfrak{L}_{\zeta_1,\zeta_2}^{(1)}(\vec{m};k) :=\frac1{\zeta_2-\zeta_1}\left[I_{\zeta_1,\zeta_2}^{(k)}(\vec{m})-I_{\zeta_1,\zeta_2}^{(k-1)}(\vec{m})-(\zeta_2-\zeta_1)I_{\zeta_1,\zeta_2}^{(k-1)}(\vec{m}+\vec{e}_k)\right],\\ \label{eq:d42} &\mathfrak{L}_{\zeta_1,\zeta_2}^{(2)}(\vec{m};j,k) := I_{\zeta_1,\zeta_2}^{(j)}(\vec{m}+\vec{e}_k)-I_{\zeta_1,\zeta_2}^{(j-1)}(\vec{m}+\vec{e}_k). \end{align} Note that \eqref{eq:d4} is modelled after $D_{\zeta_1,\zeta_2}$ in the proof of Proposition \ref{ppn:dkernel}. Here, the only differences between the three determinants of the respective $I_{\zeta_1,\zeta_2}^{(\cdot)}(\vec{m})$'s lie in the $k$-th row, i.e.\ $K_{\zeta_2,t}^{(m_k)}$ vs.\ $K_{\zeta_1,t}^{(m_k)}$ vs.\ $K_{\zeta_1,t}^{(m_k+1)}.$ In \eqref{eq:d4} we have thus isolated the differences and reduced the question of differentiability to a row-by-row analysis. Meanwhile, \eqref{eq:d42} ``measures'' the distance between $I_{\zeta_1,\zeta_2}^{(j)}(\vec{m}+\vec{e}_k)$ and $I^{(j-1)}_{\zeta_1,\zeta_2}(\vec{m}+\vec{e}_k)$, which differ only in the $j$-th row of the determinant, where the kernel carries the parameter $\zeta_2$ versus $\zeta_1$. We finally remark that all the $w_i$-contours in the integrals appearing throughout the proof are on $\mathfrak{C}(\tau^{1-\frac\delta2})$ -- we will not mention this further. We will also drop $(w_i,w_j)$ from $K_{\bullet,t}^{(m_i)}(w_i,w_j)$ when it is clear from the context. \medskip \noindent\textbf{Step 2.} We show the infinite differentiability of $\tr(K_{\zeta, t}^{\wedge L})$ by proving \eqref{eq: exdi} in this step. The proof proceeds via induction on $n$.
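Before carrying out the induction, it may help to record the simplest nontrivial instance of \eqref{eq: exdi}, namely $L=2$ and $n=1$, in the notation of \eqref{def: Im}:

```latex
\begin{align*}
\partial_\zeta \tr(K_{\zeta,t}^{\wedge 2})
= \frac{1}{2!}\left[I_{\zeta}((1,0)) + I_{\zeta}((0,1))\right],
\qquad
I_{\zeta}((1,0)) = \iint
\det\begin{pmatrix}
K_{\zeta,t}^{(1)}(w_1,w_1) & K_{\zeta,t}^{(1)}(w_1,w_2)\\
K_{\zeta,t}^{(0)}(w_2,w_1) & K_{\zeta,t}^{(0)}(w_2,w_2)
\end{pmatrix}
\d w_1\,\d w_2,
\end{align*}
```

so that differentiating under the determinant simply distributes $\partial_\zeta$ over the rows, in analogy with the product rule.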
When $n=0$, observe that \eqref{eq: exdi} recovers the defining formula for $\tr(K_{\zeta, t}^{\wedge L});$ this constitutes the base case. To prove the induction step, suppose \eqref{eq: exdi} holds for $n = N$. Then for $n = N+1$, we fix $\zeta_1, \zeta_2 > 0$. Without loss of generality, we assume $\zeta_1+1 > \zeta_2 > \zeta_1$ and consider \begin{align}\label{eq:d0} D_{\zeta_1, \zeta_2} & := \frac{\partial_{\zeta}^N \tr(K_{\zeta_2, t}^{\wedge L})- \partial_{\zeta}^N \tr(K_{\zeta_1,t}^{\wedge L}) }{\zeta_2 - \zeta_1} - \frac{1}{L!}\sum_{\vec{m}\in \mathfrak{M}(L, N+1)}\binom{N+1}{\vec{m}}I_{\zeta_1}(\vec{m}). \end{align} To prove \eqref{eq: exdi}, it suffices to show $|D_{\zeta_1,\zeta_2}| \to 0$ as $\zeta_2 \to \zeta_1$. Towards this end, we first claim that for all $\vec{m}\in \mathfrak{M}(L,N)$ and for all $j,k \in \{1,2,\ldots,L\}$ we have \begin{align}\label{eq:last} \big|\mathfrak{L}_{\zeta_1,\zeta_2}^{(1)}(\vec{m};k)\big| \to 0, \mbox{ and } \big|\mathfrak{L}_{\zeta_1,\zeta_2}^{(2)}(\vec{m};j,k)\big| \to 0,\mbox{ as }\zeta_2\to \zeta_1, \end{align} where $\mathfrak{L}_{\zeta_1,\zeta_2}^{(1)}(\vec{m};k)$ and $\mathfrak{L}_{\zeta_1,\zeta_2}^{(2)}(\vec{m};j,k)$ are defined in \eqref{eq:d4} and \eqref{eq:d42} respectively. We postpone the proof of \eqref{eq:last} to the next step. Assuming its validity, we now proceed to complete the induction step. Towards this end, we first manipulate the expression appearing in the r.h.s.~of \eqref{eq:d0}. A simple combinatorial identity shows $$\sum_{\vec{m}\in \mathfrak{M}(L, N+1)}\binom{N+1}{\vec{m}} I_{\zeta_1}(\vec{m}) = \sum_{k=1}^L\sum_{\vec{m}\in \mathfrak{M}(L, N)}\binom{N}{\vec{m}}I_{\zeta_1}(\vec{m} + \vec{e}_k),$$ where $\vec{e}_k$ is defined in Step 1. Substituting this identity back into the r.h.s.~of \eqref{eq:d0} and using the induction hypothesis for $n=N$ allows us to rewrite $D_{\zeta_1, \zeta_2}$ as follows: \begin{align}\label{eq:d2} \mbox{r.h.s.
of }\eqref{eq:d0} = \frac{1}{L!}\sum_{\vec{m}\in \mathfrak{M}(L, N)}\binom{N}{\vec{m}}\left[\frac{I_{\zeta_2}(\vec{m}) - I_{\zeta_1}(\vec{m})}{\zeta_2 - \zeta_1} - \sum_{k=1}^L I_{\zeta_1}(\vec{m}+\vec{e}_k) \right]. \end{align} Recalling the definition of $I_{\zeta}(\vec{m})$ in \eqref{def: Im} and that of $I_{\zeta_1,\zeta_2}^{(k)}(\vec{m})$ in \eqref{def: mIm}, we see that $\sum_{k=1}^L [I_{\zeta_1,\zeta_2}^{(k)}(\vec{m})-I_{\zeta_1,\zeta_2}^{(k-1)}(\vec{m})]$ telescopes to $I_{\zeta_2}(\vec{m}) - I_{\zeta_1}(\vec{m})$. Furthermore, if we recall $\mathfrak{L}_{\zeta_1,\zeta_2}^{(1)}(\vec{m};k)$ and $\mathfrak{L}_{\zeta_1,\zeta_2}^{(2)}(\vec{m};j,k)$ from \eqref{eq:d4} and \eqref{eq:d42} respectively, we observe that $$I_{\zeta_1,\zeta_2}^{(k-1)}(\vec{m}+\vec{e}_k)-I_{\zeta_1}(\vec{m}+\vec{e}_k)=I_{\zeta_1,\zeta_2}^{(k-1)}(\vec{m}+\vec{e}_k)-I_{\zeta_1,\zeta_2}^{(0)}(\vec{m}+\vec{e}_k)=\sum_{j=1}^{k-1} \mathfrak{L}_{\zeta_1,\zeta_2}^{(2)}(\vec{m};j,k).$$ Combining these observations, we have \begin{align} \mbox{r.h.s.~of \eqref{eq:d2}} & =\frac{1}{L!}\sum_{\vec{m}\in \mathfrak{M}(L, N)}\binom{N}{\vec{m}}\sum_{k=1}^L \frac{\left[I_{\zeta_1,\zeta_2}^{(k)}(\vec{m})-I_{\zeta_1,\zeta_2}^{(k-1)}(\vec{m})-(\zeta_2-\zeta_1)I_{\zeta_1}(\vec{m}+\vec{e}_k)\right]}{\zeta_2-\zeta_1}\nonumber \\ & = \frac{1}{L!}\sum_{\vec{m}\in \mathfrak{M}(L, N)}\binom{N}{\vec{m}}\sum_{k=1}^L \left[\mathfrak{L}^{(1)}_{\zeta_1,\zeta_2}(\vec{m};k)+\sum_{j=1}^{k-1} \mathfrak{L}_{\zeta_1,\zeta_2}^{(2)}(\vec{m};j,k)\right]. \label{eq:d3} \end{align} Clearly the r.h.s.~of \eqref{eq:d3} goes to zero as $\zeta_2\to \zeta_1$ whenever \eqref{eq:last} holds. Thus by induction we have \eqref{eq: exdi}. \medskip \noindent\textbf{Step 3.} In this step we prove \eqref{eq:last}. Recall $\mathfrak{L}_{\zeta_1,\zeta_2}^{(1)}(\vec{m};k)$ from \eqref{eq:d4}.
Following the definition of $I_{\zeta_1,\zeta_2}^{(k)}(\vec{m})$ from \eqref{def: mIm} we have \begin{equation*} \begin{aligned} \big|\mathfrak{L}_{\zeta_1,\zeta_2}^{(1)}(\vec{m};k)\big| & \le \int\cdots\int \frac1{\zeta_2-\zeta_1}\left|\det(K_{\xi_{k,i}, t}^{(m_i)})_{i, j =1}^L-\det(K_{\xi_{k-1,i}, t}^{(m_i)})_{i, j =1}^L\right. \\ & \hspace{4cm}\left.-(\zeta_2-\zeta_1)\det(K_{\xi_{k-1,i}, t}^{(m_i+e_{k,i})})_{i, j =1}^L\right|\prod_{i=1}^L \d w_i. \end{aligned} \end{equation*} Recall that in the above expression, up to a constant, the three determinants differ only in the $k$-th row. Hence the above expression can be written as $\int\cdots\int |\det(A)|\prod_{i=1}^L \d w_i$, where the entries of $A$ are given as follows: \begin{align*} A_{i,j} & = K_{\zeta_2,t}^{(m_i)}(w_i,w_j), \quad i<k, \quad A_{i,j} = K_{\zeta_1,t}^{(m_i)}(w_i,w_j), \quad i>k, \\ A_{k,j} & =\frac1{\zeta_2-\zeta_1}[K_{\zeta_2,t}^{(m_k)}(w_k,w_j)-K_{\zeta_1,t}^{(m_k)}(w_k,w_j)-(\zeta_2-\zeta_1)K_{\zeta_1,t}^{(m_k+1)}(w_k,w_j)] \\ & =\frac1{2\pi\i}\int_{\delta-\i\infty}^{\delta+\i\infty}\Gamma(-u)\Gamma(1+u)R_{\zeta_1,\zeta_2;m_k}(u)e^{tf(u,w_k)}\frac{\d u}{w_j-\tau^u w_k} \end{align*} where $R_{\zeta_1,\zeta_2;m_k}(u)$ is the same as in \eqref{eq:rdef}. As the $m_i$'s are at most $n$, by Lemma \ref{l:kdbd} (\eqref{eq: kdbd} specifically), we obtain a constant $\Con>0$ depending only on $n,\delta,$ and $q$, so that $$|A_{i,j}| \le \Con(\zeta_1^{\delta-m_i}+\zeta_2^{\delta-m_i})\exp(-th_q(\delta))\le \Con(1+\zeta_2^{\delta})\exp(-th_q(\delta))$$ for all $i\neq k$.
For $A_{k,j}$, we follow the same argument as in Proposition \ref{ppn:dkernel} (along the lines of \eqref{eq: d}) to get \begin{align*} |A_{k,j}| & \le \frac{\tau^{1-\frac{\delta}{2}}}{2\pi}\int_{\zeta_1}^{\zeta_2}\left|\sigma^{\delta+\i y-m_k-2}\right|\d\sigma \cdot \max_{w_j,w_k\in \mathfrak{C}(\tau^{1-\frac\delta2})}\int_{\R} \left|\frac{(\delta+\i y)_{m_k+2}}{\sin(-\pi(\delta+\i y))} e^{tf(\delta+\i y,w_k)}\right|\frac{\d y}{|w_j-\tau^{\delta+\i y} w_k|}. \end{align*} Note that by Lemma \ref{l:kdbd} (\eqref{eq: idbd} specifically) we see that the above maximum is bounded by $\Con \exp(-th_q(\delta))$, where again, as the $m_i$'s are at most $n$, the constant $\Con$ can be chosen to depend only on $n$, $\delta$, and $q$. Since $|\sigma^{u-m_k-2}|= |\sigma^{\delta-m_k-2}|\le |\zeta_1^{\delta-m_k-2}| \le |\zeta_1^{\delta-2}|$ over the interval $[\zeta_1, \zeta_2]$ for $\delta \in (0,1)$, we obtain $$|A_{k,j}| \le \Con \exp(-th_q(\delta))\int_{\zeta_1}^{\zeta_2} |\sigma|^{\delta-m_k-2}\d\sigma \le \Con\exp(-th_q(\delta))\zeta_1^{\delta-2}(\zeta_2-\zeta_1).$$ As all the above estimates on $|A_{i,j}|$ are uniform in the $w_i$'s, using Hadamard's inequality we have \begin{align*} \int\cdots\int |\det(A)|\prod_{i=1}^L \d w_i & \le \Con^L L^{\frac{L}2}\exp(-tLh_q(\delta))(1+\zeta_2^{\delta})^{L-1}\zeta_1^{\delta-2}(\zeta_2-\zeta_1). \end{align*} Taking $\zeta_2\to \zeta_1$ above, we get the first part of \eqref{eq:last}. The proof of the second part of \eqref{eq:last} follows similarly by observing that the corresponding determinants also differ only in one row. One can then deduce the second part of \eqref{eq:last} using the uniform estimates of the kernel and difference of kernels given in \eqref{eq: kdbd} and \eqref{eq: kcz} respectively. As the proof follows exactly along the lines of the above arguments, we omit the technical details. \medskip \noindent\textbf{Step 4.} In this step we prove \eqref{eq:exdi2}. Recall the definition of $I_{\zeta}(\vec{m})$ from \eqref{def: Im}.
By Hadamard's inequality and Lemma \ref{l:kdbd} we have \begin{equation}\label{eq:hd} \begin{aligned} |\det(K_{\zeta, t}^{(m_i)})_{i, j = 1}^L |& \le L^{\frac{L}{2}} \prod_{i=1}^L\max_{w_i,w_j\in \mathfrak{C}(\tau^{1-\frac\delta2})} |K_{\zeta, t}^{(m_i)}(w_i, w_j)| \\ & \le L^{\frac{L}{2}}\prod_{i=1}^L\Con \zeta^{\delta-m_i}\exp(- t h_q(\delta))=\Con^L L^{\frac{L}{2}}\zeta^{L\delta-n}\exp(- tL h_q(\delta)), \end{aligned} \end{equation} where the last equality follows as $\sum_{i=1}^L m_i=n$. Note that here also $\Con>0$ can be chosen to depend only on $n$, $\delta$, and $q$, as the $m_i$'s are at most $n$. Recall that the $w_i$-contours in $I_{\zeta}(\vec{m})$ lie on $\mathfrak{C}(\tau^{1-\frac\delta2})$. Thus, in view of \eqref{eq:hd}, adjusting the constant $\Con$, we obtain the first inequality of \eqref{eq:exdi2}. For the second inequality, we observe the following recurrence relation: \begin{equation}\label{eq:rec} |\mathfrak{M}(L, n)| = |\{\vec{m} = (m_1, \ldots, m_L)\in \Z_{\geq 0}^L : \sum_{i = 1}^L m_i= n\}| \leq L\cdot|\mathfrak{M}(L, n-1)|. \end{equation} It follows immediately that $|\mathfrak{M}(L, n)| \leq L^n.$ Observe that for each $\vec{m}\in \mathfrak{M}(L,n),$ $\binom{n}{\vec{m}}$ is bounded from above by $n!$. Thus, combining these bounds with \eqref{eq: exdi}, we have \begin{align*} |\partial_{\zeta}^n\tr(K_{\zeta,t}^{\wedge L})| \le \frac{n!L^n}{L!}\max_{\vec{m}\in \mathfrak{M}(L,n)}|I_{\zeta}(\vec{m})|. \end{align*} Applying the first inequality of \eqref{eq:exdi2} above leads to the second inequality of \eqref{eq:exdi2}, completing the proof. \end{proof} \begin{lemma}\label{p: d-s} Fix $n\in \Z_{> 0}$, $\zeta \in [1, e^{tB_q(\frac{s}{2})}],$ and $t > 0$. Then \begin{equation*} \partial_{\zeta}^n \bigg(\sum_{L = 1}^{\infty}\tr(K_{\zeta,t}^{\wedge L}) \bigg) = \sum_{L =1}^{\infty}\partial_{\zeta}^n(\tr(K_{\zeta,t}^{\wedge L}) ).
\end{equation*} \end{lemma} \begin{proof} On account of \cite[Proposition 4.2]{dt19}, it suffices to verify the following conditions: \begin{enumerate} \item $\sum_{L = 1}^{\infty}\tr(K_{\zeta,t}^{\wedge L})$ converges absolutely pointwise for $\zeta \in [1, e^{tB_q(\frac{s}{2})}];$ \item the absolute derivative series $ \sum_{L =1}^{\infty}\partial_{\zeta}^n(\tr(K_{\zeta,t}^{\wedge L}) )$ converges uniformly for $\zeta \in [1, e^{tB_q(\frac{s}{2})}].$ \end{enumerate} By Proposition \ref{p:trL}, we can pass the derivative inside the trace in $(2).$ Both $(1)$ and $(2)$ follow from (\ref{eq:exdi2}) in Proposition \ref{p:trL}, as $\sum_{L=1}^{\infty} \frac{1}{L!}\Con^L L^n L^{\frac{L}{2}}\zeta^{L\delta-n}\exp(-tLh_q(\delta))<\infty$ for each $\zeta\in [1,e^{tB_q(\frac{s}2)}]$. \end{proof} Now, with the results from Proposition \ref{p:trL} and Lemma \ref{p: d-s}, we are poised to justify the interchanges of operations leading to \eqref{eq:bfinal}. \begin{proposition} \label{p: s-i} For fixed $n \in \Z_{\geq 0}$ and $t > 0$, \begin{equation} \label{eq: intrchge} \int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha} \partial_{\zeta}^n \bigg[1+\sum_{L = 2}^{\infty}\tr(K_{\zeta,t}^{\wedge L}) \bigg] \d\zeta=\sum_{L =2}^{\infty}\sum_{\vec{m}\in \mathfrak{M}(L, n)}\binom{n}{\vec{m}}\frac{1}{L!}\int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha} I_{\zeta}(\vec{m})\d\zeta. \end{equation} \end{proposition} \begin{proof} Thanks to Lemma \ref{p: d-s} we can switch the order of derivative and sum to get \begin{align*} \mbox{l.h.s.~of \eqref{eq: intrchge}} = \int_1^{e^{tB_q(\frac{s}2)}} \sum_{L=2}^{\infty} \zeta^{-\alpha} \partial_{\zeta}^n(\tr(K_{\zeta,t}^{\wedge L}))\d \zeta. \end{align*} We next justify the interchange of the integral and the sum in the above expression.
Note that via the estimate in \eqref{eq:exdi2} we have \begin{align*} \int_1^{e^{tB_q(\frac{s}2)}} \sum_{L=2}^{\infty} \zeta^{-\alpha} |\partial_{\zeta}^n(\tr(K_{\zeta,t}^{\wedge L}))|\d \zeta \le \sum_{L=2}^{\infty} \frac1{L!}\Con^LL^nL^{\frac{L}2}\exp(-tLh_q(\delta))\int_1^{e^{tB_q(\frac{s}2)}}\zeta^{L\delta-n-\alpha}\d\zeta <\infty. \end{align*} Hence Fubini's theorem justifies the exchange of summation and integration. Finally we arrive at the r.h.s.~of \eqref{eq: intrchge} by using the higher order derivative identity (see \eqref{eq: exdi}) from Proposition \ref{p:trL}. \end{proof} \subsection{Proof of Proposition \ref{p:ho}}\label{pf: p:ho} Finally, in this subsection we present the proof of Proposition \ref{p:ho} by obtaining an upper bound for $|\mathcal{B}_s(t)|$, defined in (\ref{eq:Calb}). \medskip Recall $I_{\zeta}(\vec{m})$ from \eqref{def: Im}. We first introduce the following technical lemma, which upper bounds the absolute value of the integral $\int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha}I_{\zeta}(\vec{m})\d \zeta$ and will be an important ingredient in the proof of Proposition \ref{p:ho}. \begin{lemma}\label{l:itmdb} Fix $s>0$ so that $\alpha:=s-\lfloor s \rfloor >0$. Set $n=\lfloor s\rfloor+1$. Fix $L\in \Z_{>0}$ with $L\ge 2$ and $\vec{m}\in \mathfrak{M}(L, n)$, where $\mathfrak{M}(L,n)$ is defined in \eqref{mln}. There exists a constant $\Con = \Con(q,s)>0$ such that \begin{equation} \label{eq:imcase} \int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha}|I_{\zeta}(\vec{m})|\d \zeta \le \Con^L L^{\frac{L}{2}} \exp(-th_q(s) -\tfrac{1}{\Con}t), \end{equation} where $I_{\zeta}(\vec{m})$ is defined in \eqref{def: Im} and the functions $B_q$ and $h_q$ are defined in Proposition \ref{p:htau}. \end{lemma} \begin{proof} We split the proof into two steps as follows. Fix $L_0 = 2(n+1)$. In Step 1, we prove the inequality when $2 \le L \le L_0$, and in Step 2, we consider the case when $L > L_0$.
In both steps, we deform the $w$-contours in $I_{\zeta}(\vec{m})$ appropriately to obtain the desired upper bound. \medskip \noindent\textbf{Step 1. $2 \le L \le L_0.$} Fix $\vec{m} = (m_1, \ldots, m_L) \in \mathfrak{M}(L,n),$ where $\mathfrak{M}(L,n)$ is defined in \eqref{mln}, and set \begin{equation}\label{eq: delta} \rho_i := \begin{cases} m_i + \frac{\alpha}{L} - \frac{1}{|\vec{m}|_{>0}} & \text{ if } m_i > 0\\ \frac{\alpha}{L} & \text{ if } m_i = 0, \end{cases} \end{equation} where we recall that $|\vec{m}|_{>0}=|\{i\in \Z : m_i>0\}|$. Recall the definition of $I_{\zeta}(\vec{m})$ in \eqref{def: Im}. Note that each $K_{\zeta,t}^{(m_i)}(w_i,w_j)$ (see \eqref{def: kerdrv}) is itself a complex integral over $\delta+\i\R$. As $\alpha>0$ and $L\le L_0=2(n+1)$, we may take the $\delta$ appearing in the kernel $K_{\zeta,t}^{(m_i)}$ to be less than all the $\rho_i$'s. Note that this is only possible when $\alpha>0$. This is why we assumed this in the hypothesis here as well as in the statement of Proposition \ref{p:ho}. In what follows, we show that the contours of $K_{\zeta,t}^{(m_i)}(w_i,w_j)$, followed by the $w_i$-contours, can be deformed appropriately without crossing any pole in $I_{\zeta}(\vec{m})$. Indeed, for each $K_{\zeta,t}^{(m_i)}$ in $I_{\zeta}(\vec{m})$ we can write $$K_{\zeta,t}^{(m_i)}(w_i, w_j) =\frac{1}{2\pi \i} \int_{\rho_i - \i \infty}^{\rho_i + \i \infty}\Gamma(-u_i)\Gamma(1+u_i)(u_i)_{m_i} \zeta^{u_i-m_i}e^{tf(u_i,w_i)}\frac{\d u_i}{w_j-\tau^{u_i} w_i}.$$ As each $\rho_i \in (0, m_i\vee1)$ (see \eqref{eq: delta}), by Remark \ref{r:pole}, the above equality holds, as we do not cross any poles in the integrand. Following this change, we claim that we can deform the $w_i$-contours to $\mathfrak{C}(\tau^{1-\frac{\rho_i}{2}})$ one by one without crossing any pole in $I_{\zeta}(\vec{m})$.
Similar to the argument given in the beginning of the proof of Proposition \ref{p:leading}, we note that, as we deform the $w_i$-contours, the potential sources of poles in $I_{\zeta}(\vec{m})$ lie in the exponent $f(u_i,w_i):=\frac{(q-p)}{1+w_i\tau^{-1}}-\frac{(q-p)}{1+\tau^{u_i-1}w_i}$ (recalled from \eqref{eq:contour_fn}) and in the denominator $w_j- \tau^{u_i} w_i.$ Take $w_i \in \mathfrak{C}(\tau^{1-\frac{\delta_i}{2}})$, $\delta_i \in [\delta, \rho_i]$, and $u_i\in \rho_i+\i \R$. Observe that $$|w_j-\tau^{u_i}w_i| \ge |w_j|-|\tau^{u_i}w_i| \ge \tau^{1-\frac{\delta_j}2}-\tau^{1+\rho_i-\frac{\delta_i}2}>0, $$ $$|1+w_i\tau^{-1}| \ge |w_i\tau^{-1}|-1 \ge \tau^{-\frac{\delta_i}2}-1, \quad |1+\tau^{u_i-1}w_i| \ge 1-|\tau^{u_i-1}w_i| \ge 1-\tau^{\rho_i-\frac{\delta_i}{2}}.$$ This ensures that each $w_i$-contour can be taken as $\mathfrak{C}(\tau^{1-\frac{\rho_i}2})$ without crossing any pole. Having justified these contour deformations, we wish to apply Lemma \ref{l:kdbd}, \eqref{eq: idbd} specifically. Indeed, we apply \eqref{eq: idbd} with $\rho,\delta\mapsto \rho_i$, $w\mapsto w'$, $w' \mapsto w_j$. Note that we indeed have $|w_j| \neq \tau^{1+\frac{\rho_i}2}$ here. We thus obtain \begin{equation}\label{eq:k} |K^{(m_i)}_{\zeta, t}(w_i,w_j)|\le \Con \zeta^{\rho_i - m_i}\exp(-th_q(\rho_i)). \end{equation} Here, $\Con$ a priori depends on $m_i$, $\rho_i,$ and $q$. Note that the $\rho_i$ are in turn dependent on $m_i$, $s$ and $L$. Since $L$ is at most $L_0=2(n+1)$, there are at most finitely many choices of the $m_i$'s, which in turn produce finitely many choices of the $\rho_i$'s. As $s$ is fixed, all of the $\rho_i$'s are uniformly bounded away from 0. Hence we can choose the constant $\Con$ to depend only on $s$ and $q$ (recall that $n$ is also dependent on $s$). Observe that as $\vec{m}\in \mathfrak{M}(L,n)$, defined in \eqref{mln}, we have $\sum m_i=n$ and consequently $\sum \rho_i=n-1+\alpha=s$.
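As a quick numerical sanity check (not needed for the argument), the bookkeeping behind \eqref{eq: delta} can be verified by brute-force enumeration of $\mathfrak{M}(L,n)$. The following Python sketch, with illustrative values of $s$ and helper names of our own choosing, confirms that $\sum_{i=1}^L \rho_i = n-1+\alpha = s$ and that $\rho_i \in (0, m_i \vee 1)$ for every $\vec{m}\in\mathfrak{M}(L,n)$ and every $2\le L\le L_0$:

```python
from itertools import product

def compositions(L, n):
    """All vectors m in Z_{>=0}^L with m_1 + ... + m_L = n, i.e. M(L, n)."""
    return [m for m in product(range(n + 1), repeat=L) if sum(m) == n]

def rhos(m, alpha):
    """The exponents rho_i attached to m in Step 1."""
    L = len(m)
    p = sum(1 for mi in m if mi > 0)              # p = |m|_{>0}
    return [mi + alpha / L - 1.0 / p if mi > 0 else alpha / L for mi in m]

for s in (0.7, 1.3, 2.5):                          # illustrative non-integer s
    n = int(s) + 1                                 # n = floor(s) + 1
    alpha = s - int(s)                             # alpha = s - floor(s) in (0, 1)
    for L in range(2, 2 * (n + 1) + 1):            # 2 <= L <= L_0 = 2(n + 1)
        for m in compositions(L, n):
            r = rhos(m, alpha)
            # the rho_i sum to n - 1 + alpha = s ...
            assert abs(sum(r) - s) < 1e-12
            # ... and each rho_i lies strictly inside (0, m_i v 1)
            assert all(0 < ri < max(mi, 1) for ri, mi in zip(r, m))
print("rho bookkeeping verified")
```

The strict bound $\rho_i < m_i \vee 1$ only uses $\alpha<1$ and $|\vec{m}|_{>0}\le L$, which is exactly what the pole-avoidance via Remark \ref{r:pole} requires.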
In view of the estimate in \eqref{eq:k} and the definition of $I_{\zeta}(\vec{m})$ from \eqref{def: Im}, by Hadamard's inequality, we obtain \begin{equation*} |I_{\zeta}(\vec{m})| \leq \Con^L L^{\frac{L}{2}}\zeta^{s-n}\exp\left(-t\sum_{i =1}^Lh_q(\rho_i)\right) = \Con^L L^{\frac{L}{2}}\zeta^{-1 + \alpha}\exp\left(-t\sum_{i =1}^Lh_q(\rho_i)\right). \end{equation*} Thus \begin{equation}\label{eq: hoitmd} \int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha}|I_{\zeta}(\vec{m})|\d\zeta \leq\Con^L L^{\frac{L}{2}}\exp\left(-t\sum_{i =1}^Lh_q(\rho_i)\right)\int_1^{e^{tB_q(\frac{s}{2})}} \zeta^{-1}\d \zeta. \end{equation} Observe that $\int_x^y\zeta^{-1} \d\zeta = \log\frac{y}{x}$. We appeal to the subadditivity $h_q(x) + h_q(y) > h_q(x+y)$ in Proposition \ref{p:htau} to get that $\sum_{i=1}^L h_q(\rho_i) \ge h_q(s-\rho_1)+h_q(\rho_1)$. Note that here we used the fact that $L\ge 2$. This leads to \begin{align}\label{eq:to} \mbox{r.h.s.~of \eqref{eq: hoitmd}} \le \Con^LL^{\frac{L}2}tB_q(\tfrac{s}2)\exp(-th_q(s))\exp(-t(h_q(s-\rho_1)+h_q(\rho_1)-h_q(s))). \end{align} Note that from \eqref{eq: delta}, $\rho_i \ge \frac{\alpha}{L} \ge \frac{\alpha}{L_0}$; this forces $\frac{\alpha}{L_0} \le s-\rho_1,\rho_1 \le s-\frac{\alpha}{L_0}$. Appealing to the strict subadditivity in \eqref{eq:diff} gives us that $h_q(s-\rho_1)+h_q(\rho_1)-h_q(s)$ can be lower bounded by a constant $\frac{1}{\Con}>0$ depending only on $s$ and $q$. Adjusting the constant $\Con$, we can absorb the factor $tB_q(\frac{s}{2})$ appearing in the r.h.s.~of \eqref{eq:to} to get \eqref{eq:imcase}, completing our work for this step. \medskip \noindent\textbf{ Step 2. $L > L_0$.} Fix $\vec{m}= (m_1, \ldots, m_L)\in \mathfrak{M}(L, n).$ Recall the definition of $I_{\zeta}(\vec{m})$ in \eqref{def: Im}. Note that each $K_{\zeta,t}^{(m_i)}(w_i,w_j)$ (see \eqref{def: kerdrv}) is itself a complex integral over $\delta+\i\R$. Here we set $ \delta= \min(\frac{1}{2}, \frac{s}{2})$.
Thanks to \eqref{eq:exdi2} we have \begin{equation*} |I_{\zeta}(\vec{m})| \leq \Con^L L^{\frac{L}{2}}\zeta^{L\delta -n}\exp(-tLh_q(\delta)), \end{equation*} where the constant $\Con$ depends only on $n,\delta,$ and $q$, and thus only on $s$ and $q$. This leads to \begin{equation}\label{eq: shoitmd} \int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha}|I_\zeta(\vec{m})|\d\zeta \le\Con^L L^{\frac{L}{2}}\exp(-tLh_q(\delta))\int_1^{e^{tB_q(\frac{s}{2})}} \zeta^{-\alpha-n + L\delta}\d \zeta. \end{equation} Recall that $s=n-1+\alpha$. As $L> 2(n+1)$ and $\delta=\min(\frac12,\frac{s}2)$, we have $L\delta-n-\alpha>0$ in this case. Thus, we can upper bound the integral in \eqref{eq: shoitmd} to get \begin{equation}\label{eq: shoitmd2} \mbox{r.h.s.~of \eqref{eq: shoitmd}} \le \Con^L L^{\frac{L}{2}}\exp(-tLh_q(\delta))\frac{\exp(tB_q(\frac{s}{2})(-s + L\delta))}{-s + L\delta}. \end{equation} We incorporate $\frac{1}{-s + L\delta}$ into the constant $\Con$. Recall the definition of $B_q(x)$ from Proposition \ref{p:htau}. We have $xB_q(x)=h_q(x)$. As $B_q(x)$ is strictly decreasing for $x>0$ (Proposition \ref{p:htau} \ref{a}, \ref{b}), we have \begin{equation*} \begin{split} \mbox{r.h.s.~of \eqref{eq: shoitmd2}} &\le \Con^L L^{\frac{L}{2}}\exp(-2th_q(\tfrac{s}{2}) -tL\delta(B_q(\delta)-B_q(\tfrac{s}{2})))\\& \leq \Con^L L^{\frac{L}{2}}\exp(- 2th_q(\tfrac{s}{2})) \le \Con^L L^{\frac{L}{2}}\exp(- th_q(s)-\tfrac{1}{\Con}t), \end{split} \end{equation*} where the last inequality follows from \eqref{eq:diff} by observing that, by strict subadditivity, we can find a constant $\Con=\Con(q,s)>0$ such that $2h_q(\frac{s}{2})-h_q(s)\ge \frac1{\Con}$. This completes the proof. \end{proof} With Lemma \ref{l:itmdb}, we are now ready to prove Proposition \ref{p:ho}. \begin{proof}[Proof of Proposition \ref{p:ho}] Recall the definition of $\mathcal{B}_s(t)$ as defined in (\ref{eq:Calb}).
Appealing to \eqref{eq: bst} and Proposition \ref{p: s-i}, we get that \begin{align}\label{bsl} |\mathcal{B}_{s}(t)| \le \sum_{L =2}^{\infty}\frac{1}{L!}\sum_{\vec{m}\in \mathfrak{M}(L, n)}\binom{n}{\vec{m}}\int_1^{e^{tB_q(\frac{s}{2})}}\zeta^{-\alpha}|I_{\zeta}(\vec{m})|\d\zeta. \end{align} Note that $\binom{n}{\vec{m}}$ is bounded from above by $n!$, and by \eqref{eq:rec} we have $|\mathfrak{M}(L, n)| \leq L^n$. Applying these inequalities along with the estimate in Lemma \ref{l:itmdb}, we have that \begin{equation*} \begin{split} \mbox{r.h.s.~of \eqref{bsl}} \le \exp(-th_q(s)-\tfrac{1}{\Con}t)\sum_{L=2}^{\infty} \frac{1}{L!}\Con^L L^{\frac{L}{2}}L^n \end{split} \end{equation*} for some constant $\Con = \Con(q, s)>0.$ By Stirling's formula, $\sum_{L=2}^{\infty}\frac{1}{L!}\Con^L L^{\frac{L}{2}}L^n$ converges, and hence, adjusting the constant $\Con$, we obtain \eqref{eq:ho}, completing the proof of the proposition. \end{proof}
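The elementary counting inputs used in this subsection can also be checked by brute force: the multinomial shift identity from Step 2 of the proof of \eqref{eq: exdi}, the count $|\mathfrak{M}(L,n)| = \binom{n+L-1}{L-1}\le L^n$ behind \eqref{eq:rec}, and the superexponential decay of the summands $\frac{1}{L!}\Con^L L^{\frac{L}{2}}L^n$ that the Stirling argument relies on. A short Python sketch (the values of $\Con$ and $n$ below are illustrative):

```python
import math
from itertools import product

def M(L, n):
    """Enumerate M(L, n) = {m in Z_{>=0}^L : m_1 + ... + m_L = n}."""
    return [m for m in product(range(n + 1), repeat=L) if sum(m) == n]

def multinom(n, m):
    """Multinomial coefficient binom(n; m_1, ..., m_L)."""
    out = math.factorial(n)
    for mi in m:
        out //= math.factorial(mi)
    return out

# Shift identity from Step 2, for an arbitrary test function g:
#   sum_{m in M(L, N+1)} binom(N+1; m) g(m)
#     = sum_k sum_{m in M(L, N)} binom(N; m) g(m + e_k).
L, N = 3, 2
g = lambda m: sum((i + 1) * mi ** 2 for i, mi in enumerate(m))   # arbitrary g
lhs = sum(multinom(N + 1, m) * g(m) for m in M(L, N + 1))
rhs = sum(multinom(N, m) * g(tuple(mi + (i == k) for i, mi in enumerate(m)))
          for k in range(L) for m in M(L, N))
assert lhs == rhs

# |M(L, n)| = binom(n + L - 1, L - 1) <= L^n, the bound behind (eq:rec):
for L in range(2, 7):
    for n in range(1, 5):
        assert len(M(L, n)) == math.comb(n + L - 1, L - 1) <= L ** n

# The summands a_L = C^L L^{L/2 + n} / L! have a negligible far tail
# (computed through lgamma to avoid overflow), so the series converges.
C, n = 2.0, 3
log_a = lambda L: L * math.log(C) + (L / 2 + n) * math.log(L) - math.lgamma(L + 1)
assert log_a(200) < log_a(100) < -40
print("counting facts and tail decay verified")
```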
\section{Introduction} \setcounter{equation}{0} \setcounter{definition}{0} Fix $T>0$. Let $(E,\mathcal{B}(E),\mu)$ be a $\sigma$-finite measure space and $E$ be a Lusin space. Let $(\Omega,\mathcal{F},\mathbb{F},\Bbb{P})$, where $\mathbb{F}:=\{\mathcal{F}_t\}_{t\in[0,T]}$, be a complete filtered probability space satisfying the usual conditions. We shall denote by $\mathcal{B}\mathcal{F}$ the $\sigma$-field of the progressively measurable sets on $[0,T]\times\Omega$, i.e., \begin{eqnarray*} \mathcal{B}\mathcal{F}=\{A\subset[0,T]\times\Omega:\ \forall t\in[0,T],\ A\cap([0,t]\times\Omega)\in\mathcal{B}([0,t])\otimes\mathcal{F}_t\}, \end{eqnarray*} where $\mathcal{B}([0,t])$ denotes the Borel $\sigma$-field on $[0,t]$. Let $(Z, \mathcal{Z})$ be a measurable space, and $\nu$ be a $\sigma$-finite positive measure on it. We assume that $N$ is a time-homogeneous Poisson random measure on $[0,T]\times Z$ with the intensity measure $\lambda_T\otimes\nu$ on $(\Omega,\mathcal{F},\mathbb{F},\Bbb{P})$, where $\lambda_T$ is the Lebesgue measure on $[0,T]$. We define the compensated Poisson random measure $\widetilde{N}$ by \begin{eqnarray*} \widetilde{N}((0,t]\times B)=N((0,t]\times B)-t\nu(B),\ \forall t\in[0,T],\ B\in\mathcal{Z}\text{ with }\nu(B)<\infty. \end{eqnarray*} The purpose of this paper is to establish the existence and uniqueness of strong solutions to the following stochastic generalized porous media equation driven by a L\'{e}vy process: \begin{equation} \label{eq:1} \left\{ \begin{aligned} &dX(t)=L\Psi(X(t))dt+\int_{Z}f(t,X(t-),z)\widetilde{N}(dt,dz),\ \text{in}\ [0,T]\times E,\\ &X(0)=x \text{~on~} E, \end{aligned} \right. \end{equation} where $L$ is the infinitesimal generator of a symmetric sub-Markovian strongly continuous contraction semigroup $(P_t)_{t\geq0}$ on $L^2(\mu):=L^2(E,\mathcal{B}(E),\mu)$. $\Psi(\cdot):\Bbb{R}\rightarrow\Bbb{R}$ is a monotonically nondecreasing Lipschitz continuous function.
$f:[0,T]\times \Omega\times F^*_{1,2}\times Z\rightarrow F^*_{1,2}$ is a $\mathcal{B}\mathcal{F}\otimes\mathcal{B}(F^*_{1,2})\otimes\mathcal{Z}$-measurable function. For the definition of the Hilbert space $F^*_{1,2}$ and the precise conditions on $\Psi$ and $f$, we refer to Section 2 and Section 3 respectively. The study of porous media equations has attracted much interest in recent years. Recall that the classical porous media equation reads $$dX(t)=\Delta X^m(t)dt$$ on a domain in $\Bbb{R}^d$; we refer to \cite{A} for its mathematical treatment and the physical background, and also to \cite[Section 4.3]{B} for the general theory of equations of such type. Stochastic porous media equations (SPMEs) have been intensively discussed since the foundational work in \cite{DR, DR04}. Meanwhile, there are plenty of results on the existence and uniqueness of solutions and the long-time behavior of SPMEs driven by Wiener processes on general measure spaces (\cite{BDPR04, DRRW06, RRW, RW, RW07, RWW, RWX, RWX1, RWZ, W, WZ}). However, to the best of our knowledge, there seem to be very few results about SPMEs driven by L\'{e}vy-type or Poisson-type perturbations on general measure spaces. Hou and Zhou investigated the existence and uniqueness of solutions to SPMEs driven by L\'{e}vy noise on a separable probability space in \cite{ZH} in a variational setting, following the approach of \cite{RRW}. Based on the methods used in \cite{ZH}, the ergodicity and the exponential stability of the same equation were obtained in \cite{ZH1} and \cite{GZ} respectively. In our framework, we consider \eref{eq:1} on $\sigma$-finite measure spaces. We would also like to emphasize that the state space we work in is $F^*_{1,2}$, which is larger than the dual space of the extended Dirichlet space considered in \cite{RRW, ZH, ZH1, GZ}; hence we can allow more general initial conditions.
In our case, we do not need the transience assumption of the Dirichlet form as in \cite{RRW} or the boundedness of $L^{-1}$ in $L^{r+1}(E,\mathcal{B}(E),\mu)$ for some $r\geq1$ as in \cite{ZH, ZH1, GZ}. In addition, in \cite{RRW, ZH, ZH1, GZ}, $\Psi$ is assumed to be continuous such that $r\Psi(r)\rightarrow\infty$ as $r\rightarrow\infty$. In this paper, we show that for Lipschitz continuous $\Psi$ this condition can be dropped for $L^2(\mu)$-initial data. The main contribution of this work is that we obtain the existence and uniqueness of strong solutions to \eref{eq:1} driven by L\'{e}vy noise, which generalizes many previous works \cite{BRR, RRW, RW, RWX, ZH}. This work is inspired by the recent paper \cite{RWX}, in which the first author, together with R\"{o}ckner and Xie, proved the existence and uniqueness of strong solutions to \eref{eq:1} driven by a Wiener process. In this paper, we will follow the approximating techniques in \cite{RWX} and the local monotonicity arguments (cf. \cite{BLZ}). However, since we consider \eref{eq:1} driven by L\'{e}vy noise, our proofs are more involved, with a substantial number of obstacles to overcome that do not occur in \cite{RWX}. Besides, we will keep the assumptions on $L$ and $\Psi$ as in \cite{RWX}. Hence, the examples given in \cite{RWX} also apply here; in particular, our $L$ covers all examples mentioned in \cite{RWX}, such as generalized Schr\"odinger operators, i.e., $L=\Delta+2\frac{\nabla \rho}{\rho}\cdot\nabla$, fractional powers of the Laplacian, i.e., $L=-(-\Delta)^\alpha$, $\alpha\in(0,1]$, and Laplacians on fractals. Finally, we would like to refer to \cite{LR, LR1, P, PR} for more background information and results on SPDEs, to \cite{A, BDR} on SPMEs, and to \cite{RRW, RW, RWW, RWX, RWX1} and the references therein for comprehensive theories of stochastic generalized porous media equations. The paper is organized as follows: in Section 2, we introduce some notations and recall some known results for preparation.
In Section 3, we present our assumptions and the main result. In Section 4, we construct approximated equations for $L$ and $\Psi$, and by using the local monotonicity arguments, we show the existence and uniqueness of solutions to the approximated equations. We also obtain some a priori estimates for those approximated solutions. Section 5 is devoted to proving that the limit of those approximated solutions solves \eref{eq:1}. \section{Notations and Preliminaries} \setcounter{equation}{0} \setcounter{definition}{0} Let us first recall some basic definitions and spaces which will be used throughout the paper (see \cite{RWX, WZ}). Let $(E,\mathcal{B}(E),\mu)$ be a $\sigma$-finite measure space, $(P_t)_{t\geq0}$ be a strongly continuous, symmetric sub-Markovian contraction semigroup on $L^2(\mu)$ with generator $(L, D(L))$. The $\Gamma$-transform of $(P_t)_{t\geq0}$ is defined by the following Bochner integral (\cite{HK}) \begin{eqnarray}\label{eqnarray1} V_ru:=\frac{1}{\Gamma(\frac{r}{2})}\int_0^\infty t^{\frac{r}{2}-1}e^{-t}P_tudt,~~u\in L^2(\mu),~~r>0. \end{eqnarray} In this paper, we consider the Hilbert space $(F_{1,2},\|\cdot\|_{F_{1,2}})$ defined by $$ F_{1,2}:=V_1(L^2(\mu)),~\text{with~norm}~\|f\|_{F_{1,2}}=|u|_2,~~\text{for}~~f=V_1u,~~ u\in L^2(\mu), $$ where the norm $|\cdot|_2$ is defined as $|u|_2=(\int_E |u|^2d\mu)^{\frac{1}{2}}$, and its inner product is denoted by $\langle \cdot, \cdot\rangle_2$. From \cite{FJS1,FJS2}, we know $$ V_1=(1-L)^{-\frac{1}{2}},~~\text{so~that}~~F_{1,2}=D\big((1-L)^{\frac{1}{2}}\big)~~\text{and}~~\|f\|_{F_{1,2}}=|(1-L)^{\frac{1}{2}}f|_2. $$ The dual space of $F_{1,2}$ is denoted by $F^*_{1,2}$; moreover, $F^*_{1,2}=D((1-L)^{-\frac{1}{2}})$, and it is equipped with the norms \begin{eqnarray}\label{eqnarray5} \|\eta\|_{F^*_{1,2,\varepsilon}}:=\langle \eta, (\varepsilon-L)^{-1}\eta\rangle_2^{\frac{1}{2}},~~\eta\in F^*_{1,2},~~0<\varepsilon<\infty.
\end{eqnarray} As a short remark, the norms $\|\cdot\|_{F^*_{1,2,\varepsilon}}$, $0<\varepsilon<\infty$, are in fact all equivalent and, for $0<\varepsilon\leq1$, satisfy \begin{eqnarray}\label{eqn22} \|\eta\|_{F^*_{1,2}}\leq\|\eta\|_{F^*_{1,2,\varepsilon}}\leq \frac{1}{\sqrt{\varepsilon}}\|\eta\|_{F^*_{1,2}},\ \forall \eta\in F^*_{1,2}. \end{eqnarray} The proof of \eref{eqn22} is not difficult and can be found in the forthcoming paper \cite{RWX1} by the first author, R\"{o}ckner and Xie. We will use this property later in the proof of Claim \ref{claim2}, when dealing with the convergence of $\{X^\varepsilon_\lambda\}_{0<\lambda<1}$ in $F^*_{1,2}$ as $\lambda\rightarrow0$ (cf. P9), as well as in the proof of Claim \ref{claim3} (cf. P12) and in the uniqueness proof of Proposition \ref{Th2} (cf. P15). Because of some technical obstacles, we will carry out these three proofs in $(F^*_{1,2},\|\cdot\|_{F^*_{1,2,\varepsilon}})$ instead of $(F^*_{1,2},\|\cdot\|_{F^*_{1,2}})$. Let $H$ be a separable Hilbert space with inner product $\langle\cdot,\cdot\rangle_H$ and $H^*$ its dual. Let $V$ be a reflexive Banach space such that $V\subset H$ continuously and densely. Then for its dual space $V^*$ it follows that $H^*\subset V^*$ continuously and densely. Identifying $H$ and $H^*$ via the Riesz isomorphism we have that $$V\subset H\subset V^*$$ continuously and densely. If $_{V^*}\langle\cdot,\cdot\rangle_V$ denotes the dualization between $V^*$ and $V$ (i.e. $_{V^*}\langle z,v\rangle_V:=z(v)$ for $z\in V^*, v\in V$), it follows that \begin{eqnarray}\label{eqnarray6} _{V^*}\langle z,v\rangle_V=\langle z,v\rangle_H,~~\text{for~all}~z\in H,~~v\in V. \end{eqnarray} $(V,H,V^*)$ is called a Gelfand triple. In \cite{RWX}, the authors constructed a Gelfand triple with $V=L^2(\mu)$ and $H=F^*_{1,2}$; the Riesz map identifying $F_{1,2}$ and $F^*_{1,2}$ is $(1-L)^{-1}: F^*_{1,2}\rightarrow F_{1,2}$. We need the following lemma, which was proved in \cite{RWX}.
\begin{lemma} The map $$1-L:F_{1,2}\rightarrow F_{1,2}^*$$ extends to a linear isometry $$1-L:L^2(\mu)\rightarrow(L^2(\mu))^*,$$ and for all $u,v\in L^2(\mu)$, \begin{eqnarray}\label{eqnarray7} _{(L^2(\mu))^*}\langle(1-L)u, v\rangle_{L^2(\mu)}=\int_Eu\cdot v~d\mu. \end{eqnarray} \end{lemma} Let $\mathcal{H}$ be a Banach space. Throughout the paper, let $L^2([0,T]\times\Omega;\mathcal{H})$ denote the space of all $\mathcal{H}$-valued square-integrable functions on $[0,T]\times\Omega$, $L^\infty([0,T],\mathcal{H})$ the space of all essentially bounded measurable $\mathcal{H}$-valued functions on $[0,T]$, $C([0,T];\mathcal{H})$ the space of all continuous $\mathcal{H}$-valued functions on $[0,T]$, and $D([0,T];\mathcal{H})$ the space of all c\`{a}dl\`{a}g $\mathcal{H}$-valued functions on $[0,T]$. For two Hilbert spaces $H_1$ and $H_2$, the space of Hilbert-Schmidt operators from $H_1$ to $H_2$ is denoted by $L_2(H_1,H_2)$. For simplicity, the positive constants $c$, $C$, $C_1$, $C_2$ used in this paper may change from line to line. \section{Hypothesis and main result} \setcounter{equation}{0} \setcounter{definition}{0} In this paper, we study \eref{eq:1} with the following hypotheses: \medskip \noindent \textbf{(H1)} $\Psi(\cdot):\Bbb{R}\rightarrow \Bbb{R}$ is a monotonically nondecreasing Lipschitz function with $\Psi(0)=0$.
\vspace{1mm} \medskip \noindent \textbf{(H2)} Suppose there exists a positive constant $C_1$ such that $$\int_{Z}\|f(t,u,z)\|_{F^*_{1,2}}^2\nu(dz)\leq C_1(1+\|u\|^2_{F^*_{1,2}}), \ \forall u\in F^*_{1,2}.$$ \vspace{1mm} \medskip \noindent \textbf{(H3)} Suppose there exists a positive constant $C_2$ such that $$\int_{Z}\|f(t,u_1,z)-f(t,u_2,z)\|_{F^*_{1,2}}^2\nu(dz)\leq C_2\|u_1-u_2\|^2_{F^*_{1,2}}, \ \forall u_1, u_2\in F^*_{1,2}.$$ \begin{definition} An $F^*_{1,2}$-valued c\`{a}dl\`{a}g $\mathcal{F}_t$-adapted process $\{X(t)\}_{t\in[0,T]}$ is called a strong solution to \eref{eq:1} if \begin{eqnarray}\label{eqn1} {X}\in L^2([0,T]\times \Omega; L^2(\mu))\cap L^2(\Omega;L^\infty([0,T];F^*_{1,2})); \end{eqnarray} \begin{eqnarray}\label{eqn2} \int_0^\cdot \Psi({X}(s))ds\in C([0,T];F_{1,2}),\ \Bbb{P}\text{-a.s.}; \end{eqnarray} \begin{eqnarray}\label{eqn3} X(t)=x+L\int_0^t\Psi({X}(s))ds+\int_0^t\int_{Z}f(s,{X}(s-),z)\widetilde{N}(ds,dz),\ \forall t\in[0,T],\ \Bbb{P}\text{-a.s.} \end{eqnarray} \end{definition} Now we can present the main result of this paper. \begin{theorem}\label{Th1} Suppose that \textbf{(H1)}-\textbf{(H3)} hold. Then, for each $x\in L^2(\mu)$, there is a unique strong solution $X$ to \eref{eq:1}, and there exist $C_1, C_2\in(0,\infty)$ such that \begin{eqnarray}\label{eqn4} \Bbb{E}\Big[\sup_{t\in[0,T]}|X(t)|_2^2\Big]\leq e^{C_1T}(2|x|_2^2+C_2). \end{eqnarray} Assume further that \begin{equation}\label{eq:2} \Psi(r)r\geq c r^2,\ \forall r\in \mathbb{R}, \end{equation} where $c\in(0, \infty)$. Then, for all $x\in F_{1,2}^*$, there is a unique strong solution $X$ to \eref{eq:1}. \end{theorem} \begin{remark} Suppose $W$ is a cylindrical Wiener process on $L^2(\mu)$ and $B: [0, T]\times F^*_{1,2}\times \Omega\rightarrow L_2(L^2(\mu), F^*_{1,2})$ is progressively measurable. If we add a stochastic term of the type $B(t,X(t))dW(t)$ to the right-hand side of \eref{eq:1}, and assume $B(\cdot,u,\cdot)$ satisfies Lipschitz and linear growth conditions w.r.t.
$u\in F^*_{1,2}$. Then, Theorem \ref{Th1} continues to hold. For simplicity, in this paper we concentrate on the jump part of the noise. \end{remark} \vspace{2mm} The proof of Theorem \ref{Th1} is given in Section 5. \section{Approximations} To prove Theorem \ref{Th1}, we first consider the following approximating equations for \eref{eq:1}: \begin{equation} \label{eq:3} \left\{ \begin{aligned} &dX^\varepsilon(t)+(\varepsilon-L)\Psi(X^\varepsilon(t))dt=\int_{Z}f(t,X^\varepsilon(t-),z)\widetilde{N}(dt,dz),\ \text{in}\ [0,T]\times E,\\ &X^\varepsilon(0)=x \text{~on~} E, \end{aligned} \right. \end{equation} where $\varepsilon\in(0,1)$. We have the following proposition for \eref{eq:3}. \begin{proposition}\label{Th2} Suppose that \textbf{(H1)}-\textbf{(H3)} hold. Then, for each $x\in L^2(\mu)$, there is a unique $\mathcal{F}_t$-adapted strong solution to \eref{eq:3}, denoted by $X^\varepsilon$, i.e., it has the following properties: \begin{eqnarray}\label{eqn5} X^\varepsilon\in L^2([0,T]\times \Omega; L^2(\mu))\cap L^2(\Omega;L^\infty([0,T];F^*_{1,2})); \end{eqnarray} \begin{eqnarray}\label{eqn5.1} X^\varepsilon\in D([0,T];F^*_{1,2}),\ \Bbb{P}\text{-a.s.}; \end{eqnarray} \begin{eqnarray}\label{eqn6} X^\varepsilon(t)+\!(\varepsilon-L)\!\int_0^t\!\Psi({X^\varepsilon}(s))ds\!=\!x\!+\!\int_0^t\!\int_{Z}\!f(s,{X^\varepsilon}(s-),z)\widetilde{N}(ds,dz),\ \forall t\in[0,T],\ \Bbb{P}\text{-a.s.} \end{eqnarray} Furthermore, there exist $C_1, C_2\in(0,\infty)$ such that for all $\varepsilon\in(0,1)$, \begin{eqnarray}\label{eqn7} \Bbb{E}\Big[\sup_{t\in[0,T]}|X^\varepsilon(t)|_2^2\Big]\leq e^{C_1T}(2|x|_2^2+C_2). \end{eqnarray} In addition, if \eref{eq:2} is satisfied, then, for all $x\in F^*_{1,2}$, there is a unique strong solution to \eref{eq:3} satisfying \eref{eqn5}, \eref{eqn5.1} and \eref{eqn6}. \end{proposition} \begin{proof} We proceed in two steps. First, we consider the case with initial data $x\in F^*_{1,2}$ and with \eref{eq:2} satisfied.
Then, we will prove the existence and uniqueness result with $x\in L^2(\mu)$ and without assumption \eref{eq:2}, by replacing $\Psi$ with $\Psi+\lambda I$, $\lambda\in(0,1)$, and letting $\lambda\rightarrow0$. \textbf{Step 1}: Assume $x\in F^*_{1,2}$ and that \eref{eq:2} is satisfied. Set $V:= L^2(\mu)$, $H:=F^*_{1,2}$, $Au:=(L-\varepsilon)\Psi(u)$ for $u\in V$. Under the Gelfand triple $L^2(\mu)\subset F^*_{1,2}\equiv F_{1,2}\subset (L^2(\mu))^*$, we can check the four conditions in \cite[Theorem 1.2]{BLZ} to get the existence and uniqueness of solutions to \eref{eq:3}. To be more explicit, the hemicontinuity follows directly from \cite[P213, Step 1, (i)]{RWX}, i.e., for all $u, v, w\in V=L^2(\mu)$ and $\iota\in\Bbb{R}$, $|\iota|\leq1$, \begin{eqnarray*} \lim_{\iota\rightarrow0}\big(\ _{V^*}\!\langle A(u+\iota v), w\rangle_V-\ _{V^*}\!\langle Au, w\rangle_V\big)=0. \end{eqnarray*} From \cite[Step1, (ii)]{RWX} and \textbf{(H3)}, we can see that the local monotonicity holds, i.e., for all $u_1, u_2\in V(:=L^2(\mu))$, \begin{eqnarray}\label{eqn8} &&2_{V^*}\langle Au_1-Au_2, u_1-u_2\rangle_V+\int_{Z}\|f(t,u_1,z)-f(t,u_2,z)\|^2_{F^*_{1,2}}\nu(dz)\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\Big(\frac{2(1-\varepsilon)^2}{\widetilde{\alpha}}+C_3\Big)\|u_1-u_2\|^2_{F^*_{1,2}}. \end{eqnarray} Here $\widetilde{\alpha}:=(k+1)^{-1}$, where $k:=Lip\Psi$ denotes the Lipschitz constant of $\Psi$. \vspace{2mm} As a short remark, the estimates \cite[(3.9)]{RWX} and \eref{eqn8} differ only in the second term on their left-hand sides. Since both terms satisfy a Lipschitz condition, the local monotonicity is immediate.
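In more detail, the drift part of \eref{eqn8} is estimated exactly as in \cite[(3.9)]{RWX}, while the jump part is controlled directly by \textbf{(H3)}; schematically, \begin{eqnarray*} 2_{V^*}\langle Au_1-Au_2, u_1-u_2\rangle_V\leq\frac{2(1-\varepsilon)^2}{\widetilde{\alpha}}\|u_1-u_2\|^2_{F^*_{1,2}},\qquad \int_{Z}\|f(t,u_1,z)-f(t,u_2,z)\|^2_{F^*_{1,2}}\nu(dz)\leq C_2\|u_1-u_2\|^2_{F^*_{1,2}}, \end{eqnarray*} and summing the two bounds yields \eref{eqn8}, with $C_3$ corresponding to the constant $C_2$ from \textbf{(H3)}.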
From \cite[Step1, (iii)]{RWX}, we can easily see that the coercivity holds, i.e., for all $u\in V(:=L^2(\mu))$, $$2_{V^*}\langle Au, u\rangle_V\leq \big[-2c+2\theta^2k^2(1-\varepsilon)\big]\cdot|u|_2^2+\Big[\frac{2(1-\varepsilon)}{\theta^2}+C_2\Big]\cdot\|u\|^2_{F^*_{1,2}}.$$ Here $\theta$ is a positive constant, chosen small enough that $-2c+2\theta^2k^2(1-\varepsilon)$ is negative. \cite[Step1, (iv)]{RWX} implies the growth condition, i.e., for all $u\in V(:=L^2(\mu))$, $$|Au|_{V^*}\leq 2k|u|_2.$$ By applying \cite[Theorem 1.2]{BLZ}, we know that there exists a unique strong solution to \eref{eq:3}, denoted by $X^\varepsilon$. \textbf{Step 2}: If $\Psi$ does not satisfy \eref{eq:2}, then the hemicontinuity, local monotonicity and growth conditions still hold, but the coercivity condition fails in general. In this case, we approximate $\Psi$ by $\Psi+\lambda I$, $\lambda\in(0,1)$. Consider the following approximating equations for \eref{eq:3}: \begin{equation} \label{eq:4} \left\{ \begin{aligned} &dX^{\varepsilon}_{\lambda}(t)+(\varepsilon-L)\big(\Psi(X^\varepsilon_\lambda(t))+\lambda X^\varepsilon_\lambda(t)\big)dt=\int_{Z}f(t,X^\varepsilon_\lambda(t-),z)\widetilde{N}(dt,dz),\ \text{in}\ [0,T]\times E,\\ &X^\varepsilon_\lambda(0)=x\in F^*_{1,2} \text{~on~} E. \end{aligned} \right. \end{equation} By a similar argument as in Step 1, it is easy to prove that there exists a unique strong solution to \eref{eq:4} which satisfies $X^\varepsilon_\lambda\in D([0,T];F^*_{1,2})$, $\Bbb{P}$-a.s., $X^\varepsilon_\lambda\in L^2([0,T]\times \Omega; L^2(\mu))\cap L^2(\Omega;L^\infty([0,T];F^*_{1,2}))$, and \begin{eqnarray*} X^\varepsilon_\lambda(t)+\!\!(\varepsilon-L)\int_0^t\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)ds=x+\!\!\int_0^t\int_{Z}f(s,X^\varepsilon_\lambda(s-),z)\widetilde{N}(ds,dz),\ \forall t\in[0,T],\ \Bbb{P}\text{-a.s}., \end{eqnarray*} and \begin{eqnarray}\label{eqn9} \Bbb{E}\Big[\sup_{t\in[0,T]}\|X^\varepsilon_\lambda(t)\|^2_{F^*_{1,2}}\Big]<\infty.
\end{eqnarray} In the following, we want to prove that the sequence $\{X^\varepsilon_\lambda\}_{\lambda\in(0,1)}$ converges to the solution of \eref{eq:3} as $\lambda\rightarrow0$. From now on, we assume that the initial datum $x\in L^2(\mu)$. This proof is divided into three parts, which are given as three claims. \begin{claim}\label{claim1} \begin{eqnarray*} \Bbb{E}\Big[\sup_{s\in[0,T]}|X^\varepsilon_\lambda(s)|_2^2\Big]+4\lambda\varepsilon\Bbb{E}\int_0^T\|X^\varepsilon_\lambda(s)\|^2_{F_{1,2}}ds\leq e^{C_1T}(2|x|_2^2+C_2),\ \forall \varepsilon, \lambda\in(0,1). \end{eqnarray*} \end{claim} \begin{proof} Rewrite \eref{eq:4} as follows: for all $t\in[0,T]$, \begin{eqnarray}\label{eqn10} X^\varepsilon_\lambda(t)=x+\!\!\int_0^t(L-\varepsilon)\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)ds+\!\!\int_0^t\int_{Z}f(s,X^\varepsilon_\lambda(s-),z)\widetilde{N}(ds,dz). \end{eqnarray} For $\alpha>\varepsilon$, applying the operator $(\alpha-L)^{-\frac{1}{2}}:F^*_{1,2}\rightarrow L^2(\mu)$ to both sides of \eref{eqn10}, we obtain \begin{eqnarray}\label{eqn11} &&\!\!\!\!\!\!\!\!(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(t)\nonumber\\ =&&\!\!\!\!\!\!\!\!(\alpha-L)^{-\frac{1}{2}}x+\int_0^t(L-\varepsilon)(\alpha-L)^{-\frac{1}{2}}\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)ds\nonumber\\ &&\!\!\!\!\!\!\!\!+\int_0^t\int_{Z}(\alpha-L)^{-\frac{1}{2}}f(s,X^\varepsilon_\lambda(s-),z)\widetilde{N}(ds,dz).
\end{eqnarray} Applying It\^{o}'s formula (cf.\ \cite[Remark A.2]{BHZ}) to $\big|(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(t)\big|_2^2$ with $H=L^2(\mu)$, $V=F_{1,2}$, we obtain that for $t\in[0,T]$, \begin{eqnarray}\label{eqn12} &&\!\!\!\!\!\!\!\!\big|(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(t)\big|_2^2\nonumber\\ =&&\!\!\!\!\!\!\!\!\big|(\alpha-L)^{-\frac{1}{2}}x\big|_2^2+2\int_0^t\ _{F^*_{1,2}}\big\langle(L-\varepsilon)(\alpha-L)^{-\frac{1}{2}}\Psi(X^\varepsilon_\lambda(s)),(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big\rangle_{F_{1,2}}ds\nonumber\\ &&\!\!\!\!\!\!\!\!+2\lambda\int_0^t\ _{F^*_{1,2}}\big\langle(L-\varepsilon)(\alpha-L)^{-\frac{1}{2}} X^\varepsilon_\lambda(s),(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big\rangle_{F_{1,2}}ds\nonumber\\ &&\!\!\!\!\!\!\!\!+2\int_0^t\int_{Z}\big\langle(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s-),(\alpha-L)^{-\frac{1}{2}}f(s,X^\varepsilon_\lambda(s-),z)\big\rangle_2\widetilde{N}(ds,dz)\nonumber\\ &&\!\!\!\!\!\!\!\!+\int_0^t\int_{Z}\big|f(s,X^\varepsilon_\lambda(s-),z)\big|_2^2N(ds,dz). \end{eqnarray} From \cite[Claim 3.1]{RWX}, we know that in the right-hand side of \eref{eqn12}, the second term is non-positive and the third term can be dominated by \begin{eqnarray*} &&\!\!\!\!\!\!\!\!2\lambda\int_0^t\ _{F^*_{1,2}}\big\langle(L-\varepsilon)(\alpha-L)^{-\frac{1}{2}} X^\varepsilon_\lambda(s),(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big\rangle_{F_{1,2}}ds\nonumber\\ \leq&&\!\!\!\!\!\!\!\!2\lambda\varepsilon\int_0^t\big\|(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big\|_{F_{1,2}}^2ds.
\end{eqnarray*} Multiplying both sides of \eref{eqn12} by $\alpha$ and using the above estimates, we obtain that for all $t\in[0,T]$, \begin{eqnarray}\label{eqn13} &&\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(t)\big|_2^2+2\lambda\varepsilon\int_0^t\big\|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big\|^2_{F_{1,2}}ds\nonumber\\ \le\!\!\!\!\!\!\!\!&&\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}x\big|_2^2\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\int_{Z}\big\langle\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s-),\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}f(s,X^\varepsilon_\lambda(s-),z)\big\rangle_2\widetilde{N}(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+\int_0^t\int_{Z}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}f(s,X^\varepsilon_\lambda(s-),z)\big|_2^2N(ds,dz)\nonumber\\ :=\!\!\!\!\!\!\!\!&&\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}x\big|_2^2+I_1(t)+I_2(t). \end{eqnarray} Since $\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}$ is a contraction operator from $F^*_{1,2}$ to $L^2(\mu)$, \textbf{(H2)} implies \begin{eqnarray}\label{eqn14} \!\!\!\!\!\!\!\!&&\Bbb{E}\big[I_2(t)\big]\nonumber\\ =\!\!\!\!\!\!\!\!&&\Bbb{E}\int_0^t\int_{Z}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}f(s,X^\varepsilon_\lambda(s),z)\big|_2^2\nu(dz)ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\Bbb{E}\int_0^t\int_{Z}\big\|f(s,X^\varepsilon_\lambda(s),z)\big\|^2_{F^*_{1,2}}\nu(dz)ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&C_1+C_1\Bbb{E}\int_0^t\big\|X^\varepsilon_\lambda(s)\big\|^2_{F^*_{1,2}}ds.
\end{eqnarray} Using the Burkholder-Davis-Gundy (BDG) inequality (with $p=1$) and \eref{eqn14}, we obtain that for all $t\in[0,T]$, \begin{eqnarray}\label{eqn15} \!\!\!\!\!\!\!\!&&\Bbb{E}\big[\sup_{s\in[0,t]}|I_1(s)|\big]\nonumber\\ \leq\!\!\!\!\!\!\!\!&&C\Bbb{E}\Bigg[\int_0^t\int_{Z}\Big|\big\langle\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s-),\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}f(s,X^\varepsilon_\lambda(s-),z)\big\rangle_2\Big|^2N(ds,dz)\Bigg]^{\frac{1}{2}}\nonumber\\ \leq\!\!\!\!\!\!\!\!&&C\Bbb{E}\Bigg[\sup_{s\in[0,t]}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big|_2^2\cdot\Big(\int_0^t\int_{Z}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}f(s,X^\varepsilon_\lambda(s-),z)\big|_2^2N(ds,dz)\Big)\Bigg]^{\frac{1}{2}}\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\frac{1}{2}\Bbb{E}\Big[\sup_{s\in[0,t]}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big|_2^2\Big]+C\Bbb{E}\int_0^t\int_{Z}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}f(s,X^\varepsilon_\lambda(s),z)\big|_2^2\nu(dz)ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\frac{1}{2}\Bbb{E}\Big[\sup_{s\in[0,t]}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big|_2^2\Big]+C_2\Bbb{E}\int_0^t\big\|X^\varepsilon_\lambda(s)\big\|^2_{F^*_{1,2}}ds+C_2. \end{eqnarray} Since $L^2(\mu)$ is continuously embedded into $F^*_{1,2}$, by \eref{eqn13}-\eref{eqn15}, we obtain that, for all $t\in[0,T]$, \begin{eqnarray}\label{eqn16} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[\sup_{s\in[0,t]}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big|_2^2\Big]+2\lambda\varepsilon\Bbb{E}\int_0^t\big\|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big\|_{F_{1,2}}^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}x\big|_2^2+C_1\Bbb{E}\int_0^t|X^\varepsilon_\lambda(s)|^2_2ds+C_2\nonumber\\ \!\!\!\!\!\!\!\!&&+\frac{1}{2}\Bbb{E}\Big[\sup_{s\in[0,t]}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big|_2^2\Big].
\end{eqnarray} Note that the first summand on the left-hand side of \eref{eqn16} is finite by \eref{eqn9}, since $|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}\cdot|_2$ is equivalent to $\|\cdot\|_{F^*_{1,2}}$. Hence \eref{eqn16} shows that \begin{eqnarray}\label{eqn17} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[\sup_{s\in[0,t]}\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big|_2^2\Big]+4\lambda\varepsilon\Bbb{E}\int_0^t\big\|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}X^\varepsilon_\lambda(s)\big\|_{F_{1,2}}^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&2\big|\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}x\big|_2^2+C_1\Bbb{E}\int_0^t|X^\varepsilon_\lambda(s)|^2_2ds+C_2. \end{eqnarray} Note that the left-hand side of \eref{eqn17} is an increasing function with respect to $\alpha$ and $\sqrt{\alpha}(\alpha-L)^{-\frac{1}{2}}$ is a contraction operator from $F^*_{1,2}$ to $L^2(\mu)$. Letting $\alpha\rightarrow\infty$, \cite[Proposition 1.3]{MR} and the monotone convergence theorem imply \begin{eqnarray*} \Bbb{E}\Big[\sup_{s\in[0,t]}|X^\varepsilon_\lambda(s)|_2^2\Big]+4\lambda\varepsilon\Bbb{E}\int_0^t\|X^\varepsilon_\lambda(s)\|_{F_{1,2}}^2ds\leq2|x|_2^2+C_1\Bbb{E}\int_0^t|X^\varepsilon_\lambda(s)|^2_2ds+C_2. \end{eqnarray*} The Gronwall inequality yields \begin{eqnarray*} \Bbb{E}\Big[\sup_{s\in[0,T]}|X^\varepsilon_\lambda(s)|_2^2\Big]+4\lambda\varepsilon\Bbb{E}\int_0^T\|X^\varepsilon_\lambda(s)\|_{F_{1,2}}^2ds\leq e^{C_1T}(2|x|_2^2+C_2).
\end{eqnarray*}\hspace{\fill}$\Box$ \end{proof} \begin{claim}\label{claim2} There exists an $F^*_{1,2}$-valued c\`{a}dl\`{a}g $\mathcal{F}_t$-adapted process $\{X^\varepsilon(t)\}_{t\in[0,T]}$ such that $X^\varepsilon\in L^2(\Omega;L^\infty([0,T];F^*_{1,2}))\cap L^2([0,T]\times\Omega;L^2(\mu))$, and $$\lim_{\lambda\rightarrow0}\Bbb{E}\Big[\sup_{s\in[0,T]}\|X^\varepsilon_\lambda(s)-X^\varepsilon(s)\|^2_{F^*_{1,2}}\Big]=0.$$ \end{claim} \begin{proof} By It\^{o}'s formula we get that, for $\lambda', \lambda\in(0,1)$ and $t\in[0,T]$, \begin{eqnarray}\label{eqn18} \!\!\!\!\!\!\!\!&&\|X^\varepsilon_\lambda(t)-X^\varepsilon_{\lambda'}(t)\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\big\langle\Psi(X^\varepsilon_\lambda(s))-\Psi(X^\varepsilon_{\lambda'}(s))+\lambda X^\varepsilon_\lambda(s)-\lambda'X^\varepsilon_{\lambda'}(s),X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\big\rangle_2ds\nonumber\\ =\!\!\!\!\!\!\!\!&&\int_0^t\int_{Z}\big\|f(s,X^\varepsilon_\lambda(s-),z)-f(s,X^\varepsilon_{\lambda'}(s-),z)\big\|^2_{F^*_{1,2,\varepsilon}}N(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\!\!\int_{Z}\!\!\big\langle X^\varepsilon_\lambda(s-)-X^\varepsilon_{\lambda'}(s-),f(s,X^\varepsilon_\lambda(s-),z)-f(s,X^\varepsilon_{\lambda'}(s-),z)\big\rangle_{F^*_{1,2,\varepsilon}}\!\!\!\!\widetilde{N}(ds,dz). \end{eqnarray} From \cite[(3.27)]{RWX}, we know that \begin{eqnarray}\label{eqn19} \!\!\!\!\!\!\!\!&&2\int_0^t\big\langle\Psi(X^\varepsilon_\lambda(s))-\Psi(X^\varepsilon_{\lambda'}(s))+\lambda X^\varepsilon_\lambda(s)-\lambda'X^\varepsilon_{\lambda'}(s),X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\big\rangle_2ds\nonumber\\ \geq\!\!\!\!\!\!\!\!&&2\tilde{\alpha}\int_0^t\big|\Psi(X^\varepsilon_\lambda(s))-\Psi(X^\varepsilon_{\lambda'}(s))\big|_2^2ds\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\big\langle\lambda X^\varepsilon_\lambda(s)-\lambda'X^\varepsilon_{\lambda'}(s),X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\big\rangle_2ds. 
\end{eqnarray} Substituting \eref{eqn19} into \eref{eqn18}, we obtain \begin{eqnarray}\label{eqn20} \!\!\!\!\!\!\!\!&&\|X^\varepsilon_\lambda(t)-X^\varepsilon_{\lambda'}(t)\|^2_{F^*_{1,2,\varepsilon}}+2\tilde{\alpha}\int_0^t\big|\Psi(X^\varepsilon_\lambda(s))-\Psi(X^\varepsilon_{\lambda'}(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&-2\int_0^t\big\langle\lambda X^\varepsilon_\lambda(s)-\lambda'X^\varepsilon_{\lambda'}(s),X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\big\rangle_2ds\nonumber\\ \!\!\!\!\!\!\!\!&&+\int_0^t\int_{Z}\big\|f(s,X^\varepsilon_\lambda(s-),z)-f(s,X^\varepsilon_{\lambda'}(s-),z)\big\|^2_{F^*_{1,2,\varepsilon}}N(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\!\!\int_{Z}\!\!\big\langle X^\varepsilon_\lambda(s-)-X^\varepsilon_{\lambda'}(s-),f(s,X^\varepsilon_\lambda(s-),z)-f(s,X^\varepsilon_{\lambda'}(s-),z)\big\rangle_{F^*_{1,2,\varepsilon}}\!\!\!\!\widetilde{N}(ds,dz)\nonumber\\ \leq\!\!\!\!\!\!\!\!&&4(\lambda+\lambda')\int_0^t\big(|X^\varepsilon_\lambda(s)|_2^2+|X^\varepsilon_{\lambda'}(s)|_2^2\big)ds\nonumber\\ \!\!\!\!\!\!\!\!&&+\int_0^t\int_{Z}\big\|f(s,X^\varepsilon_\lambda(s-),z)-f(s,X^\varepsilon_{\lambda'}(s-),z)\big\|^2_{F^*_{1,2,\varepsilon}}N(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\!\!\int_{Z}\!\!\big\langle X^\varepsilon_\lambda(s-)-X^\varepsilon_{\lambda'}(s-),f(s,X^\varepsilon_\lambda(s-),z)-f(s,X^\varepsilon_{\lambda'}(s-),z)\big\rangle_{F^*_{1,2,\varepsilon}}\!\!\!\!\widetilde{N}(ds,dz).
\end{eqnarray} Taking expectation on both sides of \eref{eqn20}, we obtain \begin{eqnarray}\label{eqn21} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[\sup_{s\in[0,t]}\|X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\|^2_{F^*_{1,2,\varepsilon}}\Big]+2\tilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X^\varepsilon_\lambda(s))-\Psi(X^\varepsilon_{\lambda'}(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&4(\lambda+\lambda')\Bbb{E}\int_0^t\big(|X^\varepsilon_\lambda(s)|_2^2+|X^\varepsilon_{\lambda'}(s)|_2^2\big)ds\nonumber\\ \!\!\!\!\!\!\!\!&&+\Bbb{E}\int_0^t\int_{Z}\big\|f(s,X^\varepsilon_\lambda(s),z)-f(s,X^\varepsilon_{\lambda'}(s),z)\big\|^2_{F^*_{1,2,\varepsilon}}\nu(dz)ds\nonumber\\ \!\!\!\!\!\!\!\!&&+2\Bbb{E}\Bigg[\sup_{s\in[0,t]}\Big|\int_0^s\!\!\int_{Z}\!\!\big\langle X^\varepsilon_\lambda(l-)-X^\varepsilon_{\lambda'}(l-),f(l,X^\varepsilon_\lambda(l-),z)-f(l,X^\varepsilon_{\lambda'}(l-),z)\big\rangle_{F^*_{1,2,\varepsilon}}\!\!\!\!\widetilde{N}(dl,dz)\Big|\Bigg]\nonumber\\ :=\!\!\!\!\!\!\!\!&&4(\lambda+\lambda')\Bbb{E}\int_0^t\big(|X^\varepsilon_\lambda(s)|_2^2+|X^\varepsilon_{\lambda'}(s)|_2^2\big)ds+J_1(t)+J_2(t). \end{eqnarray} By assumption \textbf{(H3)} and \eref{eqn22}, we get \begin{eqnarray}\label{eqn23} J_1(t)\!\!\!\!\!\!\!\!&&\leq\frac{1}{\sqrt{\varepsilon}}\Bbb{E}\int_0^t\int_{Z}\big\|f(s,X^\varepsilon_\lambda(s),z)-f(s,X^\varepsilon_{\lambda'}(s),z)\big\|^2_{F^*_{1,2}}\nu(dz)ds\nonumber\\ \!\!\!\!\!\!\!\!&&\leq \frac{C}{\sqrt{\varepsilon}}\Bbb{E}\int_0^t\big\|X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\big\|^2_{F^*_{1,2}}ds\nonumber\\ \!\!\!\!\!\!\!\!&&\leq \frac{C}{\sqrt{\varepsilon}}\Bbb{E}\int_0^t\big\|X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\big\|^2_{F^*_{1,2,\varepsilon}}ds.
\end{eqnarray} Using BDG's inequality and Young's inequality, for $t\in[0,T]$, we get \begin{eqnarray}\label{eqn24} J_2(t)\leq\!\!\!\!\!\!\!\!&& C\Bbb{E}\Bigg[\Big|\int_0^t\int_{Z}\big\langle X^\varepsilon_\lambda(l-)-X^\varepsilon_{\lambda'}(l-),f(l,X^\varepsilon_\lambda(l-),z)-f(l,X^\varepsilon_{\lambda'}(l-),z)\big\rangle^2_{F^*_{1,2,\varepsilon}}N(dl,dz)\Big|^{\frac{1}{2}}\Bigg]\nonumber\\ \leq \!\!\!\!\!\!\!\!&& \Bbb{E}\Bigg[\Big|\sup_{l\in[0,t]}\|X^\varepsilon_\lambda(l)-X^\varepsilon_{\lambda'}(l)\|^2_{F^*_{1,2,\varepsilon}}\cdot\int_0^t\int_{Z}\|f(l,X^\varepsilon_\lambda(l-),z)-f(l,X^\varepsilon_{\lambda'}(l-),z)\|^2_{F^*_{1,2,\varepsilon}}N(dl,dz)\Big|^{\frac{1}{2}}\Bigg]\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\frac{1}{2}\Bbb{E}\Big[\sup_{l\in[0,t]}\|X^\varepsilon_\lambda(l)-X^\varepsilon_{\lambda'}(l)\|^2_{F^*_{1,2,\varepsilon}}\Big]\nonumber\\ \!\!\!\!\!\!\!\!&&+C\Bbb{E}\int_0^t\int_{Z}\|f(l,X^\varepsilon_\lambda(l),z)-f(l,X^\varepsilon_{\lambda'}(l),z)\|^2_{F^*_{1,2,\varepsilon}}\nu(dz)dl\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\frac{1}{2}\Bbb{E}\Big[\sup_{l\in[0,t]}\|X^\varepsilon_\lambda(l)-X^\varepsilon_{\lambda'}(l)\|^2_{F^*_{1,2,\varepsilon}}\Big]+\frac{C}{\sqrt{\varepsilon}}\Bbb{E}\int_0^t\big\|X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\big\|^2_{F^*_{1,2,\varepsilon}}ds. 
\end{eqnarray} Substituting \eref{eqn23} and \eref{eqn24} into \eref{eqn21}, we get \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[\sup_{s\in[0,t]}\|X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\|^2_{F^*_{1,2,\varepsilon}}\Big]+2\tilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X^\varepsilon_\lambda(s))-\Psi(X^\varepsilon_{\lambda'}(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&4(\lambda+\lambda')\Bbb{E}\int_0^t\big(|X^\varepsilon_\lambda(s)|_2^2+|X^\varepsilon_{\lambda'}(s)|_2^2\big)ds+\frac{1}{2}\Bbb{E}\Big[\sup_{l\in[0,t]}\|X^\varepsilon_\lambda(l)-X^\varepsilon_{\lambda'}(l)\|^2_{F^*_{1,2,\varepsilon}}\Big]\nonumber\\ \!\!\!\!\!\!\!\!&&+\frac{C}{\sqrt{\varepsilon}}\Bbb{E}\int_0^t\big\|X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\big\|^2_{F^*_{1,2,\varepsilon}}ds. \end{eqnarray*} Since $x\in L^2(\mu)$, Gronwall's lemma and Claim \ref{claim1} imply that, for some constant $C\in(0,\infty)$ independent of $\lambda, \lambda'$ and $\varepsilon$, \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[\sup_{s\in[0,t]}\|X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\|^2_{F^*_{1,2,\varepsilon}}\Big]+4\tilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X^\varepsilon_\lambda(s))-\Psi(X^\varepsilon_{\lambda'}(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&e^{\frac{C}{\sqrt{\varepsilon}}}\cdot 16e^{C_1T}(|x|_2^2+C_2)(\lambda+\lambda'). \end{eqnarray*} By \eref{eqn22}, we know that \begin{eqnarray}\label{eqn25} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[\sup_{s\in[0,t]}\|X^\varepsilon_\lambda(s)-X^\varepsilon_{\lambda'}(s)\|^2_{F^*_{1,2}}\Big]+4\tilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X^\varepsilon_\lambda(s))-\Psi(X^\varepsilon_{\lambda'}(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&e^{\frac{C}{\sqrt{\varepsilon}}}\cdot 16e^{C_1T}(|x|_2^2+C_2)(\lambda+\lambda').
\end{eqnarray} \eref{eqn25} implies that there exists an $F^*_{1,2}$-valued c\`{a}dl\`{a}g $\mathcal{F}_t$-adapted process $\{X^\varepsilon(t)\}_{t\in[0,T]}$ such that $\lim_{\lambda\rightarrow0}\Bbb{E}\big[\sup_{s\in[0,T]}\|X^\varepsilon_\lambda(s)-X^\varepsilon(s)\|^2_{F^*_{1,2}}\big]=0$, $X^\varepsilon\in L^2(\Omega;L^\infty([0,T];F^*_{1,2}))$ and $X^\varepsilon\in D([0,T];F^*_{1,2})$, $\Bbb{P}$-a.s.. This together with Claim \ref{claim1} implies that $$X^\varepsilon\in L^2([0,T]\times \Omega;L^2(\mu)).$$\hspace{\fill}$\Box$ \end{proof} \begin{claim}\label{claim3} $X^\varepsilon$ satisfies \eref{eq:3} and $\int_0^\cdot\Psi(X^\varepsilon(s))ds\in C([0,T];F_{1,2})$, $\Bbb{P}$-a.s.. \end{claim} \begin{proof} First, we verify that $X^\varepsilon$ satisfies \eref{eq:3}. From Claim \ref{claim2}, we know that as $\lambda\rightarrow0$, \begin{eqnarray}\label{eqn26} X^\varepsilon_\lambda\rightarrow X^\varepsilon\ \text{in}\ L^2(\Omega;L^\infty([0,T];F^*_{1,2})). \end{eqnarray} By BDG's inequality, \textbf{(H3)} and \eref{eqn26}, we have \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\Bigg[\sup_{t\in[0,T]}\Big\|\int_0^t\int_{Z}\big(f(s,X^\varepsilon_\lambda(s-),z)-f(s,X^\varepsilon(s-),z)\big)\widetilde{N}(ds,dz)\Big\|^2_{F^*_{1,2}}\Bigg]\\ \!\!\!\!\!\!\!\!&&\leq C\Bbb{E}\Bigg[\int_0^T\int_{Z}\big\|f(s,X^\varepsilon_\lambda(s),z)-f(s,X^\varepsilon(s),z)\big\|^2_{F^*_{1,2}}\nu(dz)ds\Bigg]\\ \!\!\!\!\!\!\!\!&&\leq C\Bbb{E}\Big[\int_0^T\big\|X^\varepsilon_\lambda(s)-X^\varepsilon(s)\big\|^2_{F^*_{1,2}}ds\Big]\\ \!\!\!\!\!\!\!\!&&\leq CT\Bbb{E}\Big[\sup_{s\in[0,T]}\big\|X^\varepsilon_\lambda(s)-X^\varepsilon(s)\big\|^2_{F^*_{1,2}}\Big]\\ \!\!\!\!\!\!\!\!&&\longrightarrow0,\ \ \text{as}\ \ \lambda\longrightarrow0, \end{eqnarray*} which means that as $\lambda\rightarrow0$, \begin{eqnarray}\label{eqn27} \!\!\!\!\!\!\!\!&&\int_0^\cdot\int_{Z}f(s,X^\varepsilon_\lambda(s-),z)\widetilde{N}(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&\longrightarrow \int_0^\cdot\int_{Z}f(s,X^\varepsilon(s-),z)\widetilde{N}(ds,dz)\ \text{in}\ L^2(\Omega;L^\infty([0,T];F^*_{1,2})).
\end{eqnarray} Notice that \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\int_0^T\big|\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&2\Bbb{E}\int_0^T\big(\big|\Psi(X^\varepsilon_\lambda(s))\big|_2^2+\lambda^2\big|X^\varepsilon_\lambda(s)\big|_2^2\big)ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&2\big((Lip\Psi)^2+\lambda^2\big)\Bbb{E}\int_0^T\big|X^\varepsilon_\lambda(s)\big|_2^2ds, \end{eqnarray*} which indicates \begin{eqnarray}\label{eqn31} \Psi(X^\varepsilon_\lambda(\cdot))+\lambda X^\varepsilon_\lambda(\cdot)\ \text{converges\ weakly\ to\ some\ element}\ Y \ \text{in}\ L^2(\Omega;L^2([0,T];L^2(\mu))). \end{eqnarray} Recall that $\forall t\in[0,T]$, \begin{eqnarray}\label{eqn28} X^\varepsilon_\lambda(t)=x+\int_0^t(L-\varepsilon)(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s))ds+\int_0^t\int_{Z}f(s,X^\varepsilon_\lambda(s-),z)\widetilde{N}(ds,dz), \end{eqnarray} holds in $(L^2(\mu))^*$. Notice that from \eref{eqn26}-\eref{eqn28}, we know $\forall t\in[0,T]$, \begin{eqnarray}\label{Eq Zhai 1} X^\varepsilon(t)=x+\int_0^t(L-\varepsilon)Y(s)ds+\int_0^t\int_{Z}f(s,X^\varepsilon(s-),z)\widetilde{N}(ds,dz)\ \text{holds\ in}\ (L^2(\mu))^*. \end{eqnarray} So, in order to prove that $X^\varepsilon$ satisfies \eref{eq:3}, it remains to show $Y(\cdot)=\Psi(X^\varepsilon(\cdot))$, $dt\otimes\Bbb{P}$-a.s.. \medskip Now, applying It\^{o}'s formula to $\|X^\varepsilon(t)\|^2_{F^*_{1,2,\varepsilon}}$ in $F^*_{1,2}$, we get \begin{eqnarray}\label{eqn32} \|X^\varepsilon(t)\|^2_{F^*_{1,2,\varepsilon}}=\!\!\!\!\!\!\!\!&&\|x\|^2_{F^*_{1,2,\varepsilon}}-2\int_0^t\big\langle Y(s),X^\varepsilon(s)\big\rangle_2ds\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\int_{Z}\big\langle X^\varepsilon(s-),f(s,X^\varepsilon(s-),z)\big\rangle_{F^*_{1,2,\varepsilon}}\widetilde{N}(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+\int_0^t\int_{Z}\|f(s,X^\varepsilon(s-),z)\|^2_{F^*_{1,2,\varepsilon}}N(ds,dz).
\end{eqnarray} Applying It\^{o}'s formula to the process $X^\varepsilon_\lambda$ (see \cite[P304]{BLZ}), we have \begin{eqnarray*} \!\!\!\!\!\!\!\!&&e^{-Kt}\|X^\varepsilon_\lambda(t)\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ =\!\!\!\!\!\!\!\!&&\|x\|^2_{F^*_{1,2,\varepsilon}}+2\int_0^t\int_{Z}e^{-Ks}\big\langle X^\varepsilon_\lambda(s-),f(s,X^\varepsilon_\lambda(s-),z)\big\rangle_{F^*_{1,2,\varepsilon}}\widetilde{N}(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+\int_0^t\int_{Z}e^{-Ks}\|f(s,X^\varepsilon_\lambda(s-),z)\|^2_{F^*_{1,2,\varepsilon}}N(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+\int_0^te^{-Ks}\cdot\Big(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)),X^\varepsilon_\lambda(s)\big\rangle_{L^2(\mu)}-K\|X^\varepsilon_\lambda(s)\|^2_{F^*_{1,2,\varepsilon}}\Big)ds. \end{eqnarray*} Taking expectation on both sides of the above equality and using \textbf{(H1)}, we get, for $\phi\in L^\infty([0,T];L^2(\Omega;F^*_{1,2}))\cap L^2([0,T]\times\Omega;\mathcal{B}\mathcal{F},dt\otimes\Bbb{P};L^2(\mu))$, \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[e^{-Kt}\|X^\varepsilon_\lambda(t)\|^2_{F^*_{1,2,\varepsilon}}\Big]-\Bbb{E}\|x\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\Bbb{E}\!\Bigg[\!\int_0^t\!\!e^{-Ks}\Big(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)-(L-\varepsilon)\big(\Psi(\phi(s))+\lambda \phi(s)\big),X^\varepsilon_\lambda(s)-\phi(s)\big\rangle_{L^2(\mu)}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~-K\|X^\varepsilon_\lambda(s)-\phi(s)\|^2_{F^*_{1,2,\varepsilon}}+\int_{Z}\|f(s,X^\varepsilon_\lambda(s),z)-f(s,\phi(s),z)\|^2_{F^*_{1,2,\varepsilon}}\nu(dz)\Big)ds\Bigg]\nonumber\\ \!\!\!\!\!\!\!\!&&+\Bbb{E}\!\Bigg\{\!\int_0^t\!\!e^{-Ks}\Bigg(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)-(L-\varepsilon)\big(\Psi(\phi(s))+\lambda \phi(s)\big),\phi(s)\big\rangle_{L^2(\mu)}\nonumber\\
\!\!\!\!\!\!\!\!&&~~~~+2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)\big(\Psi(\phi(s))+\lambda \phi(s)\big),X^\varepsilon_\lambda(s)\big\rangle_{L^2(\mu)}-2K\big\langle X^\varepsilon_\lambda(s),\phi(s)\big\rangle_{F^*_{1,2,\varepsilon}}+K\|\phi(s)\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~+\int_{Z}\Big(2\big\langle f(s,X^\varepsilon_\lambda(s),z),f(s,\phi(s),z)\big\rangle_{F^*_{1,2,\varepsilon}}-\|f(s,\phi(s),z)\|^2_{F^*_{1,2,\varepsilon}}\Big)\nu(dz)\Bigg)ds\Bigg\}. \end{eqnarray*} Choosing $K$ to be $\frac{2(1-\varepsilon)^2}{(Lip\Psi+1)^{-1}}+C_3$ as in \eref{eqn8} and rearranging, we find \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[e^{-Kt}\|X^\varepsilon_\lambda(t)\|^2_{F^*_{1,2,\varepsilon}}\Big]-\Bbb{E}\|x\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\Bbb{E}\!\Bigg\{\!\int_0^t\!\!e^{-Ks}\Bigg(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)-(L-\varepsilon)\big(\Psi(\phi(s))+\lambda \phi(s)\big),\phi(s)\big\rangle_{L^2(\mu)}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~+2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)\big(\Psi(\phi(s))+\lambda \phi(s)\big),X^\varepsilon_\lambda(s)\big\rangle_{L^2(\mu)}-2K\big\langle X^\varepsilon_\lambda(s),\phi(s)\big\rangle_{F^*_{1,2,\varepsilon}}+K\|\phi(s)\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~+\int_{Z}\Big(2\big\langle f(s,X^\varepsilon_\lambda(s),z),f(s,\phi(s),z)\big\rangle_{F^*_{1,2,\varepsilon}}-\|f(s,\phi(s),z)\|^2_{F^*_{1,2,\varepsilon}}\Big)\nu(dz)\Bigg)ds\Bigg\}.
\end{eqnarray*} This together with \eref{eqn26}, \eref{eqn27}, \eref{eqn290} and \eref{eqn31} gives for any nonnegative function $\psi\in L^\infty([0,T];dt)$ that \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[\int_0^T\psi(t)\big(e^{-Kt}\|X^\varepsilon(t)\|^2_{F^*_{1,2,\varepsilon}}-\|x\|^2_{F^*_{1,2,\varepsilon}}\big)dt\Big]\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\liminf_{\lambda\rightarrow0}\Bbb{E}\Big[\int_0^T\psi(t)\big(e^{-Kt}\|X^\varepsilon_\lambda(t)\|^2_{F^*_{1,2,\varepsilon}}-\|x\|^2_{F^*_{1,2,\varepsilon}}\big)dt\Big]\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\liminf_{\lambda\rightarrow0}\Bbb{E}\Bigg\{\int_0^T\psi(t)\Bigg(\int_0^te^{-Ks}\Big(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-(L-\varepsilon)\big(\Psi(\phi(s))+\lambda \phi(s)\big),\phi(s)\big\rangle_{L^2(\mu)}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~+2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)\big(\Psi(\phi(s))+\lambda \phi(s)\big),X^\varepsilon_\lambda(s)\big\rangle_{L^2(\mu)}-2K\langle X^\varepsilon_\lambda(s),\phi(s)\rangle_{F^*_{1,2,\varepsilon}}+K\|\phi(s)\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~+\int_{Z}\big(2\langle f(s,X^\varepsilon_\lambda(s),z),f(s,\phi(s),z)\rangle_{F^*_{1,2,\varepsilon}}-\|f(s,\phi(s),z)\|^2_{F^*_{1,2,\varepsilon}}\big)\nu(dz)\Big)ds\Bigg)dt\Bigg\}.
\end{eqnarray*} Again by \eref{eqn26}, \eref{eqn27}, \eref{eqn290} and \eref{eqn31}, we infer \begin{eqnarray}\label{eqn33} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[\int_0^T\psi(t)\big(e^{-Kt}\|X^\varepsilon(t)\|^2_{F^*_{1,2,\varepsilon}}-\|x\|^2_{F^*_{1,2,\varepsilon}}\big)dt\Big]\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\Bbb{E}\Bigg\{\int_0^T\psi(t)\Bigg(\int_0^te^{-Ks}\Big(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)Y(s)-(L-\varepsilon)\Psi(\phi(s)),\phi(s)\big\rangle_{L^2(\mu)}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~+2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)\Psi(\phi(s)),X^\varepsilon(s)\big\rangle_{L^2(\mu)}-2K\langle X^\varepsilon(s),\phi(s)\rangle_{F^*_{1,2,\varepsilon}}+K\|\phi(s)\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~+\int_{Z}\!\!\big(2\langle f(s,X^\varepsilon(s),z),f(s,\phi(s),z)\rangle_{F^*_{1,2,\varepsilon}}\!-\!\|f(s,\phi(s),z)\|^2_{F^*_{1,2,\varepsilon}}\big)\nu(dz)\Big)ds\!\!\Bigg)dt\Bigg\}. \end{eqnarray} On the other hand, by \eref{eqn32} we infer that \begin{eqnarray}\label{eqn34} \!\!\!\!\!\!\!\!&&\Bbb{E}\Big[e^{-Kt}\|X^\varepsilon(t)\|^2_{F^*_{1,2,\varepsilon}}\Big]-\|x\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ =\!\!\!\!\!\!\!\!&&\Bbb{E}\Bigg[\int_0^te^{-Ks}\Big(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)Y(s),X^\varepsilon(s)\big\rangle_{L^2(\mu)}-K\|X^\varepsilon(s)\|^2_{F^*_{1,2,\varepsilon}}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~~~~~~~~~~~~~~~~~~~~~~~~+\int_{Z}\|f(s,X^\varepsilon(s),z)\|^2_{F^*_{1,2,\varepsilon}}\nu(dz)\Big)ds\Bigg].
\end{eqnarray} Combining \eref{eqn34} with \eref{eqn33}, we have \begin{eqnarray}\label{eqn35} \!\!\!\!\!\!\!\!&&\Bbb{E}\Bigg[\int_0^T\psi(t)\Big(\int_0^te^{-Ks}\big(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)Y(s)-(L-\varepsilon)(\Psi(\phi(s))),X^\varepsilon(s)-\phi(s)\big\rangle_{L^2(\mu)}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~~~-K\|X^\varepsilon(s)-\phi(s)\|^2_{F^*_{1,2,\varepsilon}}+\int_{Z}\|f(s,\phi(s),z)-f(s,X^\varepsilon(s),z)\|^2_{F^*_{1,2,\varepsilon}}\nu(dz)\big)ds\Big)dt\Bigg]\nonumber\\ \leq\!\!\!\!\!\!\!\!&&0. \end{eqnarray} Note that \eref{eqn35} also implies \begin{eqnarray}\label{eqn36} \!\!\!\!\!\!\!\!&&\Bbb{E}\Bigg[\int_0^T\psi(t)\Big(\int_0^te^{-Ks}\big(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)Y(s)-(L-\varepsilon)(\Psi(\phi(s))),X^\varepsilon(s)-\phi(s)\big\rangle_{L^2(\mu)}\nonumber\\ \!\!\!\!\!\!\!\!&&~~~~~~-K\|X^\varepsilon(s)-\phi(s)\|^2_{F^*_{1,2,\varepsilon}}\big)ds\Big)dt\Bigg]\leq0. \end{eqnarray} Put $\phi=X^\varepsilon-\delta\tilde{\phi}u$ in \eref{eqn36} for $\delta>0$, $\tilde{\phi}\in L^\infty([0,T]\times\Omega;dt\otimes\Bbb{P};\Bbb{R})$ and $u\in L^2(\mu)$, divide both sides by $\delta$ and let $\delta\rightarrow0$. Then we have \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\Bigg[\int_0^T\psi(t)\Big(\int_0^te^{-Ks}\big(2_{(L^2(\mu))^*}\big\langle(L-\varepsilon)Y(s)-(L-\varepsilon)(\Psi(X^\varepsilon(s))),u\big\rangle_{L^2(\mu)}\big)ds\Big)dt\Bigg]\leq0. \end{eqnarray*} Hence, we infer \begin{eqnarray}\label{eqn35.1} Y(\cdot)=\Psi(X^\varepsilon(\cdot)), dt\otimes\Bbb{P}\text{-}a.s.. \end{eqnarray} Next, let us prove that $\int_0^\cdot\Psi(X^\varepsilon(s))ds\in C([0,T];F_{1,2})$, $\Bbb{P}$-a.s..
On the one hand, from \eref{eqn28} and \cite[Remark 1.1]{BLZ}, we know that $$\int_0^\cdot(L-\varepsilon)(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s))ds\in C([0,T];(L^2(\mu))^*),\ \Bbb{P}\text{-a.s}.,$$ $$X^\varepsilon_\lambda\in D([0,T];(L^2(\mu))^*),\ \Bbb{P}\text{-a.s}.,$$ $$\int_0^\cdot\int_{Z}f(s,X^\varepsilon_\lambda(s-),z)\widetilde{N}(ds,dz)\in D([0,T];(L^2(\mu))^*),\ \Bbb{P}\text{-a.s}..$$ On the other hand, from Claim \ref{claim2}, we know that $X^\varepsilon_\lambda\in D([0,T];F^*_{1,2})$, $\Bbb{P}$-a.s., and $$\int_0^\cdot\int_{Z}f(s,X^\varepsilon_\lambda(s-),z)\widetilde{N}(ds,dz)\in D([0,T];F^*_{1,2}),\ \Bbb{P}\text{-a.s}..$$ Hence $$\int_0^\cdot(L-\varepsilon)(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s))ds\in C([0,T];F^*_{1,2}),\ \Bbb{P}\text{-a.s}.,$$ and consequently, \begin{eqnarray}\label{eqn29} \int_0^\cdot\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)ds\in C([0,T];F_{1,2}),\ \Bbb{P}\text{-a.s.}. \end{eqnarray} Taking \eref{eqn26}-\eref{eqn28}, \eref{eqn35.1} and \eref{eqn29} into account, we know that as $\lambda\rightarrow0$, \begin{eqnarray}\label{eqn290} \int_0^\cdot\big(\Psi(X^\varepsilon_\lambda(s))+\lambda X^\varepsilon_\lambda(s)\big)ds\rightarrow\int_0^\cdot\Psi(X^\varepsilon(s))ds\ \text{in}\ L^2(\Omega;C([0,T];F_{1,2})), \end{eqnarray} which indicates $\int_0^\cdot\Psi(X^\varepsilon(s))ds\in C([0,T];F_{1,2})$, $\Bbb{P}$-a.s.. The proof of Claim \ref{claim3} is complete. \hspace{\fill}$\Box$ \end{proof} \vspace{2mm} \textbf{Uniqueness} \vspace{2mm} If $X^\varepsilon_1$, $X^\varepsilon_2$ are two solutions to \eref{eq:3}, then we have, $\Bbb{P}$-a.s., \begin{eqnarray*} \!\!\!\!\!\!\!\!&&X^\varepsilon_1(t)-X^\varepsilon_2(t)+(\varepsilon-L)\int_0^t\big(\Psi(X^\varepsilon_1(s))-\Psi(X^\varepsilon_2(s))\big)ds\nonumber\\ =\!\!\!\!\!\!\!\!&&\int_0^t\int_{Z}\big(f(s,X^\varepsilon_1(s-),z)-f(s,X^\varepsilon_2(s-),z)\big)\widetilde{N}(ds,dz),\ \forall t\in[0,T].
\end{eqnarray*} Applying It\^{o}'s formula to $\|X^\varepsilon_1(t)-X^\varepsilon_2(t)\|^2_{F^*_{1,2,\varepsilon}}$ in $F^*_{1,2}$, we get \begin{eqnarray}\label{eqn37} \!\!\!\!\!\!\!\!&&\|X^\varepsilon_1(t)-X^\varepsilon_2(t)\|^2_{F^*_{1,2,\varepsilon}}+2\int_0^t\big\langle\Psi(X^\varepsilon_1(s))-\Psi(X^\varepsilon_2(s)),X^\varepsilon_1(s)-X^\varepsilon_2(s)\big\rangle_2ds\nonumber\\ =\!\!\!\!\!\!\!\!&&\int_0^t\int_{Z}\|f(s,X^\varepsilon_1(s-),z)-f(s,X^\varepsilon_2(s-),z)\|^2_{F^*_{1,2,\varepsilon}}N(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\!\int_{Z}\big\langle X^\varepsilon_1(s-)-X^\varepsilon_2(s-),f(s,X^\varepsilon_1(s-),z)-f(s,X^\varepsilon_2(s-),z)\big\rangle_{F^*_{1,2,\varepsilon}}\!\!\widetilde{N}(ds,dz). \end{eqnarray} Since $\Psi$ is Lipschitz, we have \begin{eqnarray}\label{eqn38} \big(\Psi(r)-\Psi(r')\big)(r-r')\geq (Lip\Psi+1)^{-1}|\Psi(r)-\Psi(r')|^2,\ \forall r, r'\in\Bbb{R}. \end{eqnarray} Taking expectation on both sides of \eref{eqn37}, and taking \eref{eqn38} and \textbf{(H3)} into account, we obtain \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\|X^\varepsilon_1(t)-X^\varepsilon_2(t)\|^2_{F^*_{1,2,\varepsilon}}+2(Lip\Psi+1)^{-1}\Bbb{E}\int_0^t|\Psi(X^\varepsilon_1(s))-\Psi(X^\varepsilon_2(s))|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&C_3\int_0^t\Bbb{E}\|X^\varepsilon_1(s)-X^\varepsilon_2(s)\|^2_{F^*_{1,2,\varepsilon}}ds. \end{eqnarray*} The second term on the left-hand side of the above inequality is nonnegative, thus we have \begin{eqnarray*} \Bbb{E}\|X^\varepsilon_1(t)-X^\varepsilon_2(t)\|^2_{F^*_{1,2,\varepsilon}}\leq C_3\int_0^t\Bbb{E}\|X^\varepsilon_1(s)-X^\varepsilon_2(s)\|^2_{F^*_{1,2,\varepsilon}}ds. \end{eqnarray*} By Gronwall's inequality, we get $X^\varepsilon_1(t)=X^\varepsilon_2(t)$, $\Bbb{P}$-a.s., $\forall t\in[0,T]$, which indicates the uniqueness.
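For completeness, the elementary inequality \eref{eqn38} can be verified directly. A minimal derivation, under the assumption (implicit in this framework) that $\Psi$ is monotone nondecreasing with Lipschitz constant $Lip\Psi$:

```latex
% For r \neq r', monotonicity gives (\Psi(r)-\Psi(r'))(r-r') = |\Psi(r)-\Psi(r')|\,|r-r'|,
% while the Lipschitz bound |\Psi(r)-\Psi(r')| \leq Lip\Psi\,|r-r'| gives
% |r-r'| \geq |\Psi(r)-\Psi(r')|/Lip\Psi. Hence
\big(\Psi(r)-\Psi(r')\big)(r-r')
  \geq \frac{|\Psi(r)-\Psi(r')|^2}{Lip\Psi}
  \geq \frac{|\Psi(r)-\Psi(r')|^2}{Lip\Psi+1}.
% For r = r' both sides vanish, so the bound holds for all r, r' \in \Bbb{R};
% the weaker constant (Lip\Psi+1)^{-1} is the one used in (eqn38).
```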
Hence the proof of Proposition \ref{Th2} is complete.\hspace{\fill}$\Box$ \end{proof} \vspace{2mm} \smallskip \section{Proof of Theorem \ref{Th1}} Based on Proposition \ref{Th2}, we are now ready to prove our main result, Theorem \ref{Th1}. The idea is to prove that the sequence $\{X^\varepsilon\}_{\varepsilon\in(0,1)}$ converges to the solution of \eref{eq:1} as $\varepsilon\rightarrow0$. \setcounter{equation}{0} \setcounter{definition}{0} \begin{proof} First, we rewrite \eref{eq:3} as follows: \begin{eqnarray*} \!\!\!\!\!\!\!\!&&X^\varepsilon(t)+(1-L)\int_0^t\Psi(X^\varepsilon(s))ds\nonumber\\ =\!\!\!\!\!\!\!\!&&x+(1-\varepsilon)\int_0^t\Psi(X^\varepsilon(s))ds+\int_0^t\int_{Z}f(s,X^\varepsilon(s-),z)\widetilde{N}(ds,dz). \end{eqnarray*} Applying It\^{o}'s formula to $\|X^\varepsilon(t)\|^2_{F^*_{1,2}}$ and taking expectation on both sides, we have \begin{eqnarray}\label{eqn39} \!\!\!\!\!\!\!\!&&\Bbb{E}\|X^\varepsilon(t)\|^2_{F^*_{1,2}}+2\Bbb{E}\int_0^t\langle\Psi(X^\varepsilon(s)),X^\varepsilon(s)\rangle_2ds\nonumber\\ =\!\!\!\!\!\!\!\!&&\Bbb{E}\|x\|^2_{F^*_{1,2}}+2(1-\varepsilon)\Bbb{E}\int_0^t\langle\Psi(X^\varepsilon(s)),X^\varepsilon(s)\rangle_{F^*_{1,2}}ds\nonumber\\ \!\!\!\!\!\!\!\!&&+\Bbb{E}\int_0^t\int_{Z}\|f(s,X^\varepsilon(s-),z)\|^2_{F^*_{1,2}}N(ds,dz). \end{eqnarray} Since $\Psi$ is Lipschitz, we have \begin{eqnarray}\label{eqn40} \Psi(r)r\geq\tilde{\alpha}|\Psi(r)|^2,\ \forall r\in\Bbb{R}. \end{eqnarray} By \eref{eqn39}, \eref{eqn40} and \textbf{(H2)}, we have \begin{eqnarray}\label{eqn41} \!\!\!\!\!\!\!\!&&\Bbb{E}\|X^\varepsilon(t)\|^2_{F^*_{1,2}}+2\tilde{\alpha}\Bbb{E}\int_0^t|\Psi(X^\varepsilon(s))|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\Bbb{E}\|x\|^2_{F^*_{1,2}}+2\Bbb{E}\int_0^t\|\Psi(X^\varepsilon(s))\|_{F^*_{1,2}}\cdot\|X^\varepsilon(s)\|_{F^*_{1,2}}ds\nonumber\\ \!\!\!\!\!\!\!\!&&+C_1\Bbb{E}\int_0^t\|X^\varepsilon(s)\|^2_{F^*_{1,2}}ds+C_1.
\end{eqnarray} Since $L^2(\mu)$ is continuously embedded into $F^*_{1,2}$, and by Young's inequality, we know that \begin{eqnarray}\label{eqn42} \!\!\!\!\!\!\!\!&&\Bbb{E}\int_0^t\|\Psi(X^\varepsilon(s))\|_{F^*_{1,2}}\cdot\|X^\varepsilon(s)\|_{F^*_{1,2}}ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\Bbb{E}\int_0^t|\Psi(X^\varepsilon(s))|_2\cdot\|X^\varepsilon(s)\|_{F^*_{1,2}}ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\tilde{\alpha}\Bbb{E}\int_0^t|\Psi(X^\varepsilon(s))|_2^2ds+\frac{1}{4\tilde{\alpha}}\Bbb{E}\int_0^t\|X^\varepsilon(s)\|^2_{F^*_{1,2}}ds. \end{eqnarray} Taking \eref{eqn42} into \eref{eqn41}, after some simple rearrangements, we get that \begin{eqnarray*} \!\!\!\!\!\!\!\!&&\Bbb{E}\|X^\varepsilon(t)\|^2_{F^*_{1,2}}+\tilde{\alpha}\Bbb{E}\int_0^t|\Psi(X^\varepsilon(s))|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&\Bbb{E}\|x\|^2_{F^*_{1,2}}+(\frac{1}{2\tilde{\alpha}}+C_1)\Bbb{E}\int_0^t\|X^\varepsilon(s)\|^2_{F^*_{1,2}}ds+C_1. \end{eqnarray*} By Gronwall's inequality, we know that \begin{eqnarray}\label{eqn43} \Bbb{E}\|X^\varepsilon(t)\|^2_{F^*_{1,2}}+\tilde{\alpha}\Bbb{E}\int_0^t|\Psi(X^\varepsilon(s))|_2^2ds\leq\big(\|x\|^2_{F^*_{1,2}}+C_1\big)\cdot e^{(\frac{1}{2\tilde{\alpha}}+C_1)T}. \end{eqnarray} In the following, we will prove the convergence of $\{X^\varepsilon\}_{\varepsilon\in(0,1)}$. 
Applying It\^{o}'s formula to $\|X^\varepsilon(t)-X^{\varepsilon'}(t)\|^2_{F^*_{1,2}}$, $\varepsilon, \varepsilon'\in(0,1)$, we get, for all $t\in[0,T]$, \begin{eqnarray}\label{eqn44} \!\!\!\!\!\!\!\!&&\|X^\varepsilon(t)-X^{\varepsilon'}(t)\|^2_{F^*_{1,2}}+2\int_0^t\big\langle\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s)),X^\varepsilon(s)-X^{\varepsilon'}(s)\big\rangle_2ds\nonumber\\ =\!\!\!\!\!\!\!\!&&2\int_0^t\big\langle\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s)),X^\varepsilon(s)-X^{\varepsilon'}(s)\big\rangle_{F^*_{1,2}}ds\nonumber\\ \!\!\!\!\!\!\!\!&&-2\int_0^t\big\langle\varepsilon\Psi(X^\varepsilon(s))-\varepsilon'\Psi(X^{\varepsilon'}(s)),X^\varepsilon(s)-X^{\varepsilon'}(s)\big\rangle_{F^*_{1,2}}ds\nonumber\\ \!\!\!\!\!\!\!\!&&+\int_0^t\int_{Z}\|f(s,X^\varepsilon(s-),z)-f(s,X^{\varepsilon'}(s-),z)\|^2_{F^*_{1,2}}N(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\int_{Z}\big\langle X^\varepsilon(s-)-X^{\varepsilon'}(s-),f(s,X^\varepsilon(s-),z)-f(s,X^{\varepsilon'}(s-),z)\big\rangle_{F^*_{1,2}}\widetilde{N}(ds,dz). \end{eqnarray} Since $L^2(\mu)$ is continuously embedded into $F^*_{1,2}$, the second term on the right-hand side of \eref{eqn44} can be dominated by \begin{eqnarray}\label{eqn45} \!\!\!\!\!\!\!\!&&-2\int_0^t\big\langle\varepsilon\Psi(X^\varepsilon(s))-\varepsilon'\Psi(X^{\varepsilon'}(s)),X^\varepsilon(s)-X^{\varepsilon'}(s)\big\rangle_{F^*_{1,2}}ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&2C\int_0^t\big(\varepsilon|\Psi(X^\varepsilon(s))|_2+\varepsilon'|\Psi(X^{\varepsilon'}(s))|_2\big)\cdot\|X^\varepsilon(s)-X^{\varepsilon'}(s)\|_{F^*_{1,2}}ds. \end{eqnarray} From \cite[(3.42)]{RWX}, we know that \begin{eqnarray}\label{eqn46} \!\!\!\!\!\!\!\!&&2\int_0^t\big\langle\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s)),X^\varepsilon(s)-X^{\varepsilon'}(s)\big\rangle_2ds\nonumber\\ \geq\!\!\!\!\!\!\!\!&&2\tilde{\alpha}\int_0^t|\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s))|_2^2ds.
\end{eqnarray} Substituting \eref{eqn45} and \eref{eqn46} into \eref{eqn44}, we get \begin{eqnarray}\label{eqn47} \!\!\!\!\!\!\!\!&&\|X^\varepsilon(t)-X^{\varepsilon'}(t)\|^2_{F^*_{1,2}}+2\tilde{\alpha}\int_0^t|\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s))|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&C_1\int_0^t|\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s))|_2\cdot\|X^\varepsilon(s)-X^{\varepsilon'}(s)\|_{F^*_{1,2}}ds\nonumber\\ \!\!\!\!\!\!\!\!&&+C_2\int_0^t\big(\varepsilon|\Psi(X^\varepsilon(s))|_2+\varepsilon'|\Psi(X^{\varepsilon'}(s))|_2\big)\cdot\|X^\varepsilon(s)-X^{\varepsilon'}(s)\|_{F^*_{1,2}}ds\nonumber\\ \!\!\!\!\!\!\!\!&&+\int_0^t\int_{Z}\|f(s,X^\varepsilon(s-),z)-f(s,X^{\varepsilon'}(s-),z)\|^2_{F^*_{1,2}}N(ds,dz)\nonumber\\ \!\!\!\!\!\!\!\!&&+2\int_0^t\int_{Z}\big\langle X^\varepsilon(s-)-X^{\varepsilon'}(s-),f(s,X^\varepsilon(s-),z)-f(s,X^{\varepsilon'}(s-),z)\big\rangle_{F^*_{1,2}}\widetilde{N}(ds,dz). \end{eqnarray} Taking expectation on both sides of \eref{eqn47}, by Young's inequality, the BDG inequality and \textbf{(H3)}, we obtain that, for all $t\in[0,T]$, \begin{eqnarray*} &&\Bbb{E}\Big[\sup_{s\in[0,t]}\big\|X^\varepsilon(s)-X^{\varepsilon'}(s)\big\|^2_{F^*_{1,2}}\Big] +2\tilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s))\big|_2^2ds\\ \leq\!\!\!\!\!\!\!\!&&\frac{1}{2}\Bbb{E}\Big[\sup_{s\in[0,t]}\big\|X^\varepsilon(s)-X^{\varepsilon'}(s)\big\|^2_{F^*_{1,2}}\Big]+ \tilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s))\big|_2^2ds\\ &&+C_1\Bbb{E}\int_0^t\big\|X^\varepsilon(s)-X^{\varepsilon'}(s)\big\|^2_{F^*_{1,2}}ds +C_2\Bbb{E}\int_0^t\big(\varepsilon|\Psi(X^\varepsilon(s))|^2_2+\varepsilon'|\Psi(X^{\varepsilon'}(s))|_2^2\big)ds.
\end{eqnarray*} This yields \begin{eqnarray}\label{eqn48} &&\Bbb{E}\Big[\sup_{s\in[0,t]}\big\|X^\varepsilon(s)-X^{\varepsilon'}(s)\big\|^2_{F^*_{1,2}}\Big] +2\tilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&C_1\Bbb{E}\int_0^t\big\|X^\varepsilon(s)-X^{\varepsilon'}(s)\big\|^2_{F^*_{1,2}}ds\nonumber\\ &&+C_2(\varepsilon+\varepsilon')\Bbb{E}\int_0^t\big(|\Psi(X^\varepsilon(s))|^2_2+|\Psi(X^{\varepsilon'}(s))|_2^2\big)ds. \end{eqnarray} Note that if the initial value $x\in F^*_{1,2}$ and \eref{eq:2} is satisfied, we have \eref{eqn43}. If $x\in L^2(\mu)$, we have \eref{eqn7}, then \textbf{(H1)} implies that there exists a positive constant C such that $$\sup_{\kappa\in(0,1)}\Bbb{E}\int_0^t|\Psi(X^\kappa(s))|^2_2ds\leq C.$$ Hence, by Gronwall's inequality and Young's inequality, we know that there exists a positive constant $C\in(0,\infty)$ which is independent of $\varepsilon, \varepsilon'$ such that \begin{eqnarray}\label{eqn49} &&\mathbb{E}\Big[\sup_{s\in[0,T]}\big\|X^\varepsilon(s)-X^{\varepsilon'}(s)\big\|^2_{F^*_{1,2}}\Big]+\mathbb{E}\int_0^T\big|\Psi(X^\varepsilon(s))-\Psi(X^{\varepsilon'}(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!&&C(\varepsilon+\varepsilon'). \end{eqnarray} Hence, there exists an $\mathcal{F}_t$-adapted process $X\in L^2(\Omega;L^\infty([0,T];F^*_{1,2}))$ such that $X\in D([0,T];F^*_{1,2})$, $\Bbb{P}$-a.s., and $X^\varepsilon\rightarrow X$ in $L^2(\Omega;L^\infty([0,T]; F^*_{1,2}))$ as $\varepsilon\rightarrow0$. Furthermore, from Claim \ref{claim1}, we know that $X\in L^2([0,T]\times\Omega;L^2(\mu))$. \vspace{2mm} Using the similar argument as in Claim \ref{claim3}, we know that $X$ satisfies \eref{eq:1} and $\int_0^\cdot\Psi(X(s))ds\in C([0,T];F_{1,2})$, $\Bbb{P}$-a.s.. This completes the existence proof for Theorem \ref{Th1}. 
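The Gronwall step between \eref{eqn48} and \eref{eqn49} can be spelled out. A minimal derivation, writing $g(t):=\Bbb{E}\big[\sup_{s\in[0,t]}\|X^\varepsilon(s)-X^{\varepsilon'}(s)\|^2_{F^*_{1,2}}\big]$ and using the uniform bound $C$ on $\Bbb{E}\int_0^T|\Psi(X^\kappa(s))|^2_2ds$ stated above:

```latex
% Drop the nonnegative \Psi-difference term on the left of (eqn48), bound
% \Bbb{E}\|X^\varepsilon(s)-X^{\varepsilon'}(s)\|^2_{F^*_{1,2}} by g(s), and use the uniform bound C:
g(t)\leq C_1\int_0^t g(s)\,ds+2CC_2(\varepsilon+\varepsilon'),\qquad t\in[0,T].
% Gronwall's inequality then yields
g(T)\leq 2CC_2(\varepsilon+\varepsilon')\,e^{C_1T}.
% Re-inserting this bound on the right of (eqn48) also controls the
% \Psi-difference term, which gives (eqn49) after relabeling the constant.
```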
\vspace{2mm} \textbf{Uniqueness} Suppose $X_1$ and $X_2$ are two solutions to \eref{eq:1}. Then we have, $\Bbb{P}$-a.s., \begin{eqnarray}\label{eqn50} &&\!\!\!\!\!\!\!\!X_1(t)-X_2(t)-L\int_0^t\Psi(X_1(s))-\Psi(X_2(s))ds\nonumber\\ =&&\!\!\!\!\!\!\!\!\int_0^t\int_{Z}\big(f(s,X_1(s-),z)-f(s,X_2(s-),z)\big)\widetilde{N}(ds,dz),\ \forall t\in [0,T]. \end{eqnarray} Rewrite \eref{eqn50} as follows: \begin{eqnarray}\label{eqn51} &&\!\!\!\!\!\!\!\!X_1(t)-X_2(t)+(1-L)\int_0^t\Psi(X_1(s))-\Psi(X_2(s))ds\nonumber\\ =&&\!\!\!\!\!\!\!\!\int_0^t\Psi(X_1(s))-\Psi(X_2(s))ds\nonumber\\ &&\!\!\!\!\!\!\!\!+\int_0^t\int_{Z}\big(f(s,X_1(s-),z)-f(s,X_2(s-),z)\big)\widetilde{N}(ds,dz),\ \forall t\in [0,T]. \end{eqnarray} Applying It\^{o}'s formula to $\|X_1(t)-X_2(t)\|^2_{F^*_{1,2}}$ in $F^*_{1,2}$, we have \begin{eqnarray}\label{eqn52} &&\big\|X_1(t)-X_2(t)\big\|^2_{F^*_{1,2}}+2\int_0^t\big\langle\Psi(X_1(s))-\Psi(X_2(s)),X_1(s)-X_2(s)\big\rangle_2ds\nonumber\\ =\!\!\!\!\!\!\!\!\!&&2\int_0^t\big\langle \Psi(X_1(s))-\Psi(X_2(s)),X_1(s)-X_2(s)\big\rangle_{F^*_{1,2}}ds\nonumber\\ &&+2\int_0^t\int_{Z}\big\langle X_1(s-)-X_2(s-),f(s,X_1(s-),z)-f(s,X_2(s-),z)\big\rangle_{F^*_{1,2}}\widetilde{N}(ds,dz)\nonumber\\ &&+\int_0^t\int_{Z}\big\|f(s,X_1(s-),z)-f(s,X_2(s-),z)\big\|^2_{F^*_{1,2}}N(ds,dz). \end{eqnarray} Taking expectation on both sides of \eref{eqn52}, then \eref{eqn46} and \textbf{(H3)} yield \begin{eqnarray*} &&\Bbb{E}\big\|X_1(t)-X_2(t)\big\|^2_{F^*_{1,2}}+2\widetilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X_1(s))-\Psi(X_2(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!\!&&2\Bbb{E}\int_0^t\big\|\Psi(X_1(s))-\Psi(X_2(s))\big\|_{F^*_{1,2}}\cdot \big\|X_1(s)-X_2(s)\big\|_{F^*_{1,2}}ds\nonumber\\ &&+C_2\Bbb{E}\int_0^t\big\|X_1(s)-X_2(s)\big\|^2_{F^*_{1,2}}ds.
\end{eqnarray*} Using Young's inequality to the above inequality, and since $L^2(\mu)\subset F^*_{1,2}$ continuously and densely, we obtain \begin{eqnarray*} &&\Bbb{E}\big\|X_1(t)-X_2(t)\big\|^2_{F^*_{1,2}}+2\widetilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X_1(s))-\Psi(X_2(s))\big|_2^2ds\nonumber\\ \leq\!\!\!\!\!\!\!\!\!&&2\widetilde{\alpha}\Bbb{E}\int_0^t\big|\Psi(X_1(s))-\Psi(X_2(s))\big|_2^2ds+ \frac{1}{2\widetilde{\alpha}}\Bbb{E}\int_0^t\big\|X_1(s)-X_2(s)\big\|^2_{F^*_{1,2}}ds\nonumber\\ &&+C_2\mathbb{E}\int_0^t\big\|X_1(s)-X_2(s)\big\|^2_{F^*_{1,2}}ds. \end{eqnarray*} Therefore, \begin{eqnarray*} &&\mathbb{E}\big\|X_1(t)-X_2(t)\big\|^2_{F^*_{1,2}}\leq (\frac{1}{2\widetilde{\alpha}}+C_2)\mathbb{E}\int_0^t\big\|X_1(s)-X_2(s)\big\|^2_{F^*_{1,2}}ds. \end{eqnarray*} By Gronwall's lemma, we get $X_1(t)=X_2(t)$, $\Bbb{P}\text{-a.s.}$, $\forall t\in[0,T]$. Consequently, Theorem \ref{Th1} is completely proved. \hspace{\fill}$\Box$ \end{proof}
\section{Introduction} Among the many models of inflation, Starobinsky inflation \cite{starobinsky1980new} and Higgs inflation \cite{bezrukov2008standard} play the role of the most promising candidates. The recent Planck results \cite{akrami2020planck} have confirmed the viability of these models while disfavoring others, including chaotic inflation and power-law inflation. Starobinsky inflation has been discussed extensively; it attacks the problem by using gravity alone as the source of inflation. Although this model is theoretically rather subtle \cite{kehagias2014remarks}, its inflationary predictions agree well with the data at many points \cite{akrami2020planck}. The model simply adds an $R^2$ term to the action, which is why it is sometimes referred to as the $R^2$ model; it is also the simplest class of $f(R)$ theory (see \cite{de2010f,sotiriou2010f}). Higgs inflation \cite{bezrukov2008standard}, on the other hand, is quite economical, since no particle beyond the standard model (SM) is introduced. It simply introduces a non-minimal coupling $\xi$ between the Higgs field and the Ricci scalar. Despite its advantages, Higgs inflation suffers from a unitarity problem \cite{barbon2009naturalness,burgess2010higgs,hertzberg2010inflation} due to the largeness of the non-minimal coupling\footnote{Even so, the large $\xi$ is naturally problematic \cite{gorbunov2019scalaron}} $\sim 10^5$. One can also see Refs. \cite{bezrukov2012distinguishing} and \cite{ketov2020equivalence} on distinguishing the two models and on their equivalence, respectively. To settle the problems of both $R^2$ and Higgs inflation, much research has been done on combining them into Higgs-$R^2$ inflation (see e.g. Refs.
\cite{rigopoulos2006large,byrnes2009non,garcia2020revisiting,battefeld2009non,gundhi2020scalaron,gorbunov2019scalaron,gorbunov2013r2,calmet2016higgs,ghilencea2018two,he2019violent,he2021occurrence,bezrukov2019some,bezrukov2020heatwave}). The addition of the $R^2$ operator may 'heal' the non-minimal coupling of Higgs inflation \cite{gorbunov2019scalaron} and 'tame' the spiky behavior of the oscillating inflaton field during preheating \cite{he2019violent}. In Higgs-$R^2$ inflation, it is found that the cut-off scale is as large as the Planck scale \cite{ema2017higgs}. Interestingly, once the Higgs and $R^2$ terms are combined into Higgs-$R^2$ inflation, pure Higgs inflation cannot be obtained as a limit, whereas pure $R^2$ inflation remains possible as a separate limit \cite{ema2017higgs, enckell2020higgs}. Higgs-$R^2$ inflation should have characteristics similar to multi-field inflation, as it is expected to produce a larger and detectable non-gaussianity \cite{rigopoulos2006large,byrnes2009non,garcia2020revisiting,battefeld2009non} compared to single-field inflation \cite{gangui1993three,maldacena2003non}. This distinction may lead to an observable signature (for example with the Planck satellite \cite{akrami2020planck}) that could tell us whether inflation is driven by a single field or by multiple fields. However, it is still arguable whether multi-field inflation can produce non-gaussianity comparable to the single-field case. For example, Ref. \cite{battefeld2009non} showed that the multi-field model has an insignificant effect on non-gaussianity as long as the slow-roll condition is imposed. A similar result is discussed in Ref. \cite{byrnes2009non}, where enhancement of non-gaussianity is only possible at the late stage of inflation, just before inflation ends, when the slow-roll condition is violated. On the other hand, a different approach has been studied in Refs.
\cite{enqvist2005non,kohri2010generation}, in which non-gaussianity can be greatly enhanced during preheating. The inflationary stage of the Higgs-$R^2$ model has been thoroughly discussed in \cite{rigopoulos2006large,byrnes2009non,garcia2020revisiting,battefeld2009non,gundhi2020scalaron,gorbunov2019scalaron,gorbunov2013r2,calmet2016higgs,ghilencea2018two}. We expect this model to follow the trajectory of an effective single-field inflation called the \textit{minimal two-field mode} \cite{he2019violent}. As effective single-field inflation, it will provide a small non-gaussianity. In this paper, we will constrain the non-minimal coupling $\xi$ of the Higgs operator using the smallness of the non-gaussianity in the minimal two-field mode, and we will see how $\xi$ affects the non-gaussianity. The preheating stage of the Higgs-$R^2$ model has been studied in many papers (see for example Refs. \cite{he2019violent,he2021occurrence,bezrukov2019some,bezrukov2020heatwave}). For comparison, in single-field inflation with a large non-minimal coupling, the preheating is so violent that it leads to a more efficient drain of the inflaton's potential energy \cite{ema2017violent}. In the Higgs-$R^2$ model, however, the preheating is much smoother and the inflaton's energy is drained less efficiently \cite{ema2017violent}. During preheating in the Higgs-$R^2$ model, tachyonic preheating also occurs \cite{he2021occurrence,bezrukov2020heatwave}, which is expected to drain the inflaton's energy much more quickly. However, in the minimal two-field mode this last effect is suppressed, so an inefficient drain of the inflaton's energy is unavoidable.
In inflation with a large non-minimal coupling\footnote{We say ``large'' to mean $\mathcal{O}(10^2)$ or more and ``small'' to mean $\mathcal{O}(10)$ or less}, at the beginning of the preheating stage, which corresponds to an energy in the Einstein frame $\lesssim 1 \hspace{1mm}M_p$, the potential can be approximated by a quadratic form. Many papers regard this quadratic regime as the dominant stage in which the inflaton's energy is drained efficiently; one can see \cite{bezrukov2009initial} for Higgs inflation, \cite{ema2017violent} for the general single-field case, and Refs. \cite{he2019violent,he2021occurrence} for the Higgs-$R^2$ model. For a small non-minimal coupling, on the other hand, the quadratic regime can be neglected. The preheating stage can then be treated in the Jordan frame, with the inflaton oscillating in a quartic potential (see for example \cite{ballesteros2017standard,hashimoto}). In both the quadratic and quartic regimes, the reheating temperature is expected to be set by the perturbative decay of the daughter fields produced by the inflaton's oscillation. The energy scale separating the quadratic regime from the quartic regime strongly depends on the non-minimal coupling in the single-field case. We will see in this paper that the value of the non-minimal coupling can affect the preheating stage and the reheating temperature. \section{The Higgs-$R^2$ Inflation: Features in The Inflationary Stage} In this section, we discuss the features of Higgs-$R^2$ inflation. As it belongs to the class of multi-field inflation, we first discuss the features of multi-field inflation and then adapt them to the Higgs-$R^2$ model. Next, the non-gaussianity of the Higgs-$R^2$ model is considered. Finally, we try to constrain the non-minimal coupling $\xi$ using the smallness of the non-gaussianity produced by the effective single-field theory.
\subsection{The multi-field inflation and slow-roll parameters}\label{subsectionmultifield} In this subsection, we discuss the case of multi-field inflation, which we will specialize to the Higgs-$R^2$ model in the next section. We start by writing the action with $d$ scalar fields $\psi^a$ ($a=1,2,3,...,d$) as \begin{equation}\label{actionmultifield} \begin{split} S=\int d^4x\sqrt{-g}\left[ \frac{M^2_p}{2}R-\frac{1}{2}\gamma_{ab}g^{\mu\nu}\partial_\mu \psi^a \partial_\nu \psi^b-U(\psi)\right],\\ \end{split} \end{equation} where $R$ is the Ricci scalar built from the metric tensor $g_{\mu \nu}$ (with determinant $g$). The field metric $\gamma_{ab}$ accounts for the mixing of the kinetic terms, and $U(\psi)$ is the potential, a function of the fields $\psi^a$. We use $M_p=1/\sqrt{8\pi G}$ for the reduced Planck mass. In order to obtain the equations of motion, we take the fields to depend on time only, $\psi^a(\textbf{x},t)=\psi_0^a(t)$; hence we can write three background equations: \begin{equation}\label{backgroundequation} \begin{split} &H^2=\frac{1}{3M_p^2}\left[ \frac{1}{2}\Dot{\psi}^2_0+U(\psi)\right],\\ &\Dot{H}=-\frac{\Dot{\psi}^2_0}{2 M_p^2},\\ &D_t \Dot{\psi}^a_0+3 H \Dot{\psi}^a_0+U^a=0.\\ \end{split} \end{equation} Here we have defined the derivative of the potential, $U_a=\partial U/\partial \psi^a$, and the Hubble parameter $H=\Dot{a}/a$. We also introduce the covariant derivatives $D\Phi^a=d\Phi^a + \Gamma^a_{bc}\Phi^b d\psi^c_0$ and $D_t=D/dt$. Note that the Christoffel symbols are those of the field metric $\gamma_{ab}$, namely $\Gamma^a_{bc}=(1/2)\gamma^{ad}(\partial_b \gamma_{dc}+\partial_c \gamma_{bd}-\partial_d \gamma_{bc})$. Throughout this paper, a dot denotes the derivative with respect to physical time. We next define the tangent direction $T^a$ and the normal direction $N^a$ in order to discuss the features of the trajectory.
Both are defined as \cite{achucarro2011features} \begin{equation}\label{tn} \begin{split} &T^a\equiv \frac{\Dot{\psi^a_0}}{\Dot{\psi_0}},\\ &N^a\equiv\text{sign}(t)\left(\gamma_{bc}D_t T^b D_t T^c\right)^{-1/2} D_t T^a, \end{split} \end{equation} where $\text{sign}(t)=\pm 1$ adjusts the orientation of $N^a$ with respect to $D_t T^a$. Note that $T^a$ and $N^a$ obey the relations $N^a N_a=T^a T_a=1$ and $N^aT_a=0$. In addition, we can write $D_t T^a$ with the help of Eqs. \eqref{backgroundequation} and \eqref{tn}; hence we obtain \begin{equation}\label{dt} D_t T^a= -\frac{\Ddot{\psi}_0}{\Dot{\psi}_0}T^a-\frac{1}{\Dot{\psi}_0}\left(3H\Dot{\psi}^a_0+U^a \right). \end{equation} Projecting the last equation onto $T^a$ and $N^a$ separately, we get the two equations \begin{equation}\label{dt2} \Ddot{\psi}_0+3H\Dot{\psi}_0+U_\psi=0 \end{equation} and \begin{equation}\label{dt3} D_tT^a=-\frac{U_N}{\Dot{\psi}_0}N^a, \end{equation} where in Eqs. \eqref{dt2} and \eqref{dt3} we have defined $U_\psi=T^aU_a$ and $U_N=N^aU_a$; they obey the relation $U_a=U_\psi T^a+U_N N^a$. With these ingredients, we can determine the slow-roll parameters. Straightforwardly, we can write them as \begin{equation}\label{epsiloneta} \epsilon =-\frac{\Dot{H}}{H^2}=\frac{\Dot{\psi}^2_0}{2M_p^2H^2}, \hspace{1cm} \eta^a=-\frac{1}{H \Dot{\psi}_0}D_t\Dot{\psi}^a_0. \end{equation} Additionally, $\eta^a$ can be decomposed as $\eta^a=\eta^\parallel T^a+\eta^\perp N^a$, which gives two further parameters: \begin{equation}\label{etaparalel} \eta^\parallel\equiv -\frac{\Ddot{\psi}_0}{H \Dot{\psi}_0}, \end{equation} which can be regarded as the usual second slow-roll parameter, and \begin{equation}\label{etaperp} \eta^\perp\equiv \frac{U_N}{H \Dot{\psi}_0}, \end{equation} the turn parameter, which bends the trajectory of the inflaton.
We can also define a second turn parameter $\xi^\perp$ as \begin{equation} \xi^\perp \equiv -\frac{\Dot{\eta}^\perp}{H\eta^\perp}, \end{equation} although this last parameter will not be needed for the rest of this paper. Note that $\epsilon$ and $\eta^\parallel$ must be small during inflation for the slow-roll conditions to hold, but $\eta^\perp$ is not constrained \cite{achucarro2011features}; it can be large, so a large bending away from the geodesic is also possible. The radius of curvature $\sigma$ can be defined by \begin{equation}\label{sigma} \frac{1}{\sigma}=\left(\gamma_{ab} \frac{DT^a}{d\psi_0}\frac{DT^b}{d\psi_0}\right)^{1/2}=\frac{1}{\Dot{\psi}_0}\left(\gamma_{ab} \frac{DT^a}{dt}\frac{DT^b}{dt}\right)^{1/2}=H\eta^\perp, \end{equation} where the last equality follows with the help of Eqs. \eqref{tn}, \eqref{dt3}, and \eqref{etaperp}. Also, using Eq. \eqref{epsiloneta} we can obtain \begin{equation}\label{eta1} |\eta^{\perp}|=\sqrt{2\epsilon}\frac{M_p}{\sigma \Dot{\psi}}. \end{equation} We will use this result later to calculate the non-gaussianity of multi-field inflation. Furthermore, even though $\eta^\perp$ is not constrained, it is suppressed by the smallness of $\sqrt{\epsilon}$; hence the size of $\eta^\perp$ strongly depends on $\sigma$ and $\Dot{\psi}$. \subsection{A short preview of Higgs-$R^2$ inflation} We start from the following action of Higgs-$R^2$ inflation in the Jordan frame: \begin{equation}\label{h-r2action} \begin{split} S_J=\int d^4x\sqrt{-g_J}&\Bigg[ \frac{1}{2}M_{p}^2\left(R_J+\frac{R^2_J}{6M^2} \right)\\ &+\frac{\xi h^2R_J}{2}-\frac{g^{\mu\nu}_J}{2}\partial_\mu h \partial_\nu h-\frac{1}{4}\lambda h^4\Bigg], \\ \end{split} \end{equation} where $g_J$ and $R_J$ are, respectively, the determinant of the metric $g^{\mu\nu}_J$ and the Ricci scalar in the Jordan frame.
Here $h$ is the Higgs field in the unitary gauge, $\xi$ is the non-minimal coupling between the Higgs field and gravity\footnote{One should not confuse $\xi^\perp$, the second turn parameter, with $\xi$, the non-minimal coupling.}, and $\lambda$ is the Higgs quartic coupling. In this paper we use $\lambda=0.01$. We can transform the action in the Jordan frame, $S_J$, to the Einstein frame, $S_E$, via the Weyl transformation \begin{equation}\label{conformal} \begin{split} & g^{\mu\nu}=\Omega g_J^{\mu\nu},\\ &\Omega\equiv 1+\frac{R_J}{3M^2}+\frac{\xi h^2}{M^2_p}\equiv e^{\sqrt{\frac{2}{3}}\frac{\phi}{M_p}}\equiv e^\chi,\\ \end{split} \end{equation} which defines the scalaron field $\phi$ \cite{gorbunov2019scalaron,ema2017higgs,bezrukov2019some}. Note that $M$ in Eqs. \eqref{h-r2action} and \eqref{conformal} corresponds to the effective mass of the scalaron at small field values. After the transformation, we obtain the action in the Einstein frame as \begin{equation}\label{se} S_E=\int d^4x\sqrt{-g}\left[ \frac{M^2_p}{2}R-\frac{1}{2}g^{\mu\nu}\partial_\mu \phi \partial_\nu \phi -\frac{1}{2}e^{-\chi} g^{\mu\nu}\partial_\mu h \partial_\nu h-U(\phi,h)\right], \end{equation} with potential \begin{equation}\label{potential} U(\phi,h)= \frac{1}{4}\lambda h^4 e^{-2\chi}+\frac{3}{4}M^2_p M^2\left[1-\left( 1+\frac{\xi h^2}{M_p^2} \right) e^{-\chi}\right]^2. \end{equation} Note that the action in Eq. \eqref{se} has the same form as the action \eqref{actionmultifield} for the Higgs-$R^2$ model, with \begin{equation} \gamma_{ab}=\left(\begin{array}{cc} 1 & 0 \\ 0 & e^{-\chi} \end{array}\right), \hspace{1cm} \psi^a=\left(\phi, h\right). \end{equation} In this paper, we assume that in Higgs-$R^2$ inflation the trajectory of the inflaton follows a single-field approximation called the \textit{minimal two-field mode}, in which the relation $h^2=\xi R_J/\lambda$ holds \cite{ema2017higgs,he2019violent}. Inserting this result into Eq.
\eqref{conformal}, we obtain \begin{equation}\label{h2} h^2=\frac{e^{\chi}-1}{\frac{\lambda}{\xi M_p^2}\left( \frac{\xi^2}{\lambda}+\frac{M_p^2}{3M^2} \right)}. \end{equation} In addition, the potential representing the minimal two-field mode is \cite{ema2017higgs} \begin{equation}\label{potentialminimum} U(\phi)=\frac{M_p^4}{4}\frac{ 1}{\frac{\xi^2}{\lambda}+\frac{M_p^2}{3M^2}}\left(1-e^{-\chi} \right)^2. \end{equation} From this we conclude that, in the minimal two-field mode, the potential dominantly follows the scalaron's trajectory, with some deviation encoded in the shifted parameters. Applying the CMB constraint \cite{akrami2020planck} to this potential, we obtain \begin{equation}\label{cmb} \frac{\xi^2}{\lambda}+\frac{M_p^2}{3M^2}\equiv \frac{1}{\mathcal{C}}\approx 2.1 \times 10^9. \end{equation} With this in mind, we can rewrite Eq. \eqref{h2} in the simpler form \begin{equation}\label{h2x} h^2=\mathcal{C}\frac{\xi M_p^2}{\lambda}(e^{\chi}-1). \end{equation} This allows us to dispense with $M$ and work only with $\xi$ and $\lambda$, which are the parameters of Higgs inflation. It is also important to note that $\xi^2/\lambda>M_p^2/3M^2$ corresponds to Higgs-like inflation and $\xi^2/\lambda<M_p^2/3M^2$ to $R^2$-like inflation \cite{ema2017higgs,he2019violent}. Relation \eqref{h2x} is model-independent: it can be used for both Higgs-like and $R^2$-like inflation as long as the minimal two-field mode is preserved. One can also solve Eq. \eqref{h2x} analytically during inflation: assuming $e^\chi\gg 1$, we obtain the relation \begin{equation}\label{h2xx} h^2\simeq\mathcal{C}\frac{\xi M_p^2}{\lambda}e^{\chi}. \end{equation} At the end of inflation (when $\epsilon=1$ and $\phi\approx 1 \hspace{1mm}M_p$), the Higgs field has the value \begin{equation}\label{higgsend} h_{end}^2\simeq \sqrt{\frac{2}{3}}\mathcal{C}\frac{\xi M_p}{\lambda}\phi_{end}.
\end{equation} Lastly, the critical field value is given by \begin{equation}\label{criticalhiggs} \phi_{crit}\approx h_{crit}\simeq \sqrt{\frac{2}{3}}\mathcal{C}\frac{\xi M_p}{\lambda}. \end{equation} If we consider a large $\xi$, corresponding to Higgs-like inflation, the critical field value is pushed higher. For example, for nearly-pure Higgs inflation with $\xi=4500$, the corresponding critical field value of the Higgs is $1.75\times 10^{-4}\hspace{1mm}M_p$, whereas for a small non-minimal coupling $\xi=10$ we get only $3.88 \times 10^{-7} \hspace{1mm} M_p$. This value is important in the late stage of the preheating regime, and we will show later that the non-minimal coupling can affect the reheating temperature. \subsection{The turn parameter in Higgs-$R^2$ inflation.} In this part, we explicitly describe the features of Higgs-$R^2$ inflation using the features of general multi-field inflation presented in \ref{subsectionmultifield}. Straightforwardly, we can obtain the tangent and normal directions as \cite{he2018inflation} \begin{equation}\label{T} T^a=\frac{\Dot{\psi}^a_0}{\Dot{\psi}_0}=\frac{1}{\sqrt{\Dot{\phi}^2+e^{-\chi}\Dot{h}^2}}\left(\Dot{\phi},\Dot{h}\right), \end{equation} \begin{equation}\label{N} N^a=\text{sign(t)}\left(\gamma_{bc} \frac{DT^b}{dt}\frac{DT^c}{dt}\right)^{-1/2}\frac{DT^a}{dt}=\frac{e^{\chi/2}}{\sqrt{\Dot{\phi}^2+e^{-\chi}\Dot{h}^2}}\left(-e^{-\chi}\Dot{h},\Dot{\phi} \right), \end{equation} where we can also define $\Dot{\theta}$ through \begin{equation}\label{thetadot} \Dot{\theta}^2\equiv \left(\gamma_{ab} \frac{DT^a}{dt}\frac{DT^b}{dt}\right)=e^{\chi}\frac{\left(\frac{\partial U}{\partial h}\Dot{\phi}-e^{-\chi}\frac{\partial U}{\partial \phi} \Dot{h}\right)^{2}}{\left(\Dot{\phi}^2+e^{-\chi}\Dot{h}^2\right)^{2}}. \end{equation} With these tools, we can approximate the turn parameter $\eta^\perp$ from Eqs. \eqref{sigma} and \eqref{thetadot}.
Finally, we obtain \begin{equation}\label{eta2} e^{\chi/2}\frac{\left(\frac{\partial U}{\partial h}\Dot{\phi}-e^{-\chi}\frac{\partial U}{\partial \phi} \Dot{h}\right)}{\left(\Dot{\phi}^2+e^{-\chi}\Dot{h}^2\right)}=H{\eta^\perp}. \end{equation} Solving this result in full may take a lot of effort. However, assuming the trajectory of the inflaton lies along the valley in the minimal two-field mode, we can neglect $\partial U/\partial h$. Hence, we obtain the simplified result \begin{equation} -e^{-\chi/2}\frac{\Dot{h}}{\Dot{\phi}^2}\frac{\partial U}{\partial \phi}\approx H {\eta^\perp}. \end{equation} Note also that if we use $3H\Dot{\phi}\approx -\frac{\partial U}{\partial \phi}$ during inflation together with Eq. \eqref{h2xx}, we finally get \begin{equation}\label{etamin} \eta^{\perp}\approx 3 e^{-\chi/2}\frac{ \Dot{h}}{\Dot{\phi}}= \sqrt{\frac{3}{2}\frac{\xi}{\lambda} \mathcal{C}}. \end{equation} Thus, we obtain the relation for the turn parameter $\eta^\perp$. We find that the turn parameter $\eta^\perp$ strongly depends on $\xi/\lambda$, which is a property of Higgs inflation. \subsection{The non-gaussianity in the Higgs-$R^2$ inflation} In this section, we use the formalism of non-gaussianity introduced in Ref. \cite{rigopoulos2006large} (see also \cite{rigopoulos2005non} and \cite{rigopoulos2006nonlinear}) for the analytical calculation and Ref. \cite{rigopoulos2007quantitative} for the numerical one. The parameter $f_{NL}$ provided by this formalism is \begin{equation}\label{fnl1} f_{NL}=f(\epsilon,\eta^\parallel, \eta^\perp,\xi^\perp, U(\phi,h), \Delta N). \end{equation} It is better to write Eq. \eqref{fnl1} more explicitly. Before we proceed, we should make an approximation, because the full expression in the original paper, Ref. \cite{rigopoulos2006large}, is quite complicated.
In this approximation, $f_{NL}$ can be written as \begin{equation}\label{fnl} f_{NL}=2(\epsilon +3\eta^\parallel-\xi^\perp/\eta^\perp)+\Psi\Delta N, \end{equation} where $\Psi\simeq 4(\eta^\perp)^2$ and $\Delta N$ corresponds to the number of e-folds. A small non-gaussianity $f_{NL}\ll 1$ may be expected since the first term is a function of the slow-roll parameters. Thus, from Eq. \eqref{fnl}, a large non-gaussianity can be guaranteed at the end of inflation when $\epsilon=1$. In that case, a sizable non-gaussianity during inflation, $f_{NL}\geq 1$, can be achieved if $4(\eta^\perp)^2 \Delta N\geq 1$. Since $\Delta N$ is the number of e-folds, which can be quite large, $\eta^\perp$ need not be large, as long as it is not too small ($\eta^\perp\sim 0.07$ would be sufficient for $\Delta N \sim 50$). Using these criteria, we can simplify \eqref{fnl} to \begin{equation} f_{NL}\approx 4(\eta^\perp)^2 \Delta N= 4\left(\sqrt{\frac{3}{2}\frac{\xi}{\lambda} \mathcal{C}}\right)^2 \Delta N, \end{equation} where in the last result we have substituted $\eta^{\perp}$ from Eq. \eqref{etamin}. If we take the non-minimal coupling of nearly-pure Higgs inflation, $\xi= \mathcal{O}(10^3)$ with $\lambda=0.01$, the contribution to the non-gaussianity is still $\ll 1$. With this result, we can say that the non-minimal coupling can be regarded as a free parameter, since we can take it to be almost any value. Thus, the spectrum of this model can be extended from nearly-pure $R^2$ inflation to nearly-pure Higgs inflation. However, in the next section, we will try to constrain the non-minimal coupling $\xi$ to a much smaller value. In addition, one should note that, interestingly, in the minimal two-field mode of the Higgs-$R^2$ model the parameters of the Higgs sector play the most important part. At this point, the parameter of $R^2$ inflation, $M$, stays hidden inside the constant $\mathcal{C}$.
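The numbers quoted above are easy to cross-check. The following is a quick numerical sketch (not part of the original analysis; it merely evaluates Eqs. \eqref{criticalhiggs} and \eqref{etamin} and the $f_{NL}$ estimate, assuming $\lambda=0.01$ and $1/\mathcal{C}\approx 2.1\times 10^9$):

```python
import math

# Sanity-check sketch (not from the paper's code): evaluate the quoted
# numbers using C from the CMB normalization and the assumed lambda = 0.01.
C = 1.0 / 2.1e9        # Eq. (cmb): xi^2/lambda + Mp^2/(3 M^2) = 1/C
lam = 0.01             # Higgs self-coupling assumed in the text

def h_crit(xi):
    """Critical Higgs field value, Eq. (criticalhiggs), in units of Mp."""
    return math.sqrt(2.0 / 3.0) * C * xi / lam

def eta_perp(xi):
    """Turn parameter, Eq. (etamin)."""
    return math.sqrt(1.5 * (xi / lam) * C)

def f_nl(xi, delta_N):
    """Dominant non-gaussianity term, f_NL ~ 4 (eta_perp)^2 Delta N."""
    return 4.0 * eta_perp(xi) ** 2 * delta_N

print(h_crit(4500))    # ~1.75e-4 Mp (nearly-pure Higgs inflation)
print(h_crit(10))      # ~3.9e-7 Mp (small non-minimal coupling)
print(f_nl(1000, 50))  # ~0.014, i.e. << 1 even for xi = O(10^3)
```

This reproduces the critical field values quoted earlier and confirms that $f_{NL}\ll 1$ for $\xi=\mathcal{O}(10^3)$.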
\section{Preheating in The Quadratic Regime} Before we proceed to calculate the energy released during preheating, we need to check the time required for the inflaton to oscillate. Based on the critical field value of Eq. \eqref{criticalhiggs}, the time to oscillate from the end of inflation until the end of the quadratic regime is \begin{equation}\label{tcrit} t_{crit}-t_{end}\simeq \frac{2}{\tilde{M}\mathcal{C}}\frac{\lambda}{\xi }. \end{equation} With the period of oscillation approximated by $T=\frac{2\pi}{\tilde{M}}$, the number of oscillations during the quadratic regime is approximately \begin{equation}\label{noscillation} n_{osc}=\frac{t_{crit}-t_{end}}{T}=\frac{1}{\pi \mathcal{C}}\frac{\lambda}{\xi} , \end{equation} which leads to a very long quadratic regime ($n_{osc}\simeq 6600$ for $\xi=1000$). Here we can deduce that the transfer of the inflaton's energy in this regime is not effective and particle production is truly slow. However, to make this clear, we will carry out the calculation of the gauge boson and fermion production. We will also see later that these productions are not effective. Please note that, from here onward, the term 'not effective' for the preheating stage means that the energy drained in a single crossing is very small compared to the whole energy, so that draining the inflaton's energy requires many oscillations. From this discussion, we find that the size of the non-minimal coupling greatly affects the duration of the quadratic regime: with a larger non-minimal coupling $\xi$, the duration of the quadratic regime is shorter. This is contrary to pure single-field inflation, such as Higgs inflation, where a large non-minimal coupling $\xi$ greatly prolongs the quadratic regime. \subsection{The self-production of the inflaton field} At the end of inflation, when the field value is $\phi<1\hspace{1mm}M_p$, the inflaton starts to oscillate.
As we assume the inflaton moves along the valley in the minimal two-field mode, the inflation can be described by considering only the scalaron trajectory. At this stage, the inflaton will decay non-perturbatively via parametric resonance. The equations of motion for the scalaron and the Higgs boson are \begin{equation}\label{phidotdot} \Ddot{\phi}+3H\Dot{\phi}+\frac{1}{\sqrt{6}M_p}e^{-\chi}\Dot{h}^2+\frac{\partial U}{\partial \phi}=0 \end{equation} \begin{equation}\label{hdotdot} \Ddot{h}+3H\Dot{h}-\sqrt{\frac{2}{3}}\frac{\Dot{\phi}\Dot{h}}{M_p}+e^{\chi}\frac{\partial U}{\partial h}=0 \end{equation} together with the Friedmann constraint \begin{equation}\label{hubble2} 3M^2_p H^2 = \frac{1}{2}\Dot{\phi}^2+\frac{1}{2}e^{-\chi}\Dot{h}^2+ U(\phi, h). \end{equation} The numerical calculation corresponding to the last three equations can be seen in \cite{he2019violent,he2021occurrence,bezrukov2020heatwave}. However, we want to simplify the result by assuming the inflaton's trajectory stays in the minimal two-field mode. This way, we adopt the potential \eqref{potentialminimum} and approximate the equation of motion as \begin{equation}\label{eominflaton} \Ddot{\phi}+3H\Dot{\phi}+\frac{\partial U}{\partial \phi}=0, \end{equation} with the potential written as a function of $\phi$ (see Eq. \eqref{potentialminimum}) \begin{equation}\label{tildem} U(\phi)=\frac{3}{4}M_p^2\tilde{M}^2\left(1-e^{-\chi} \right)^2, \hspace{0.5cm}\tilde{M}^2=\frac{M^2}{1+\frac{3\xi^2 M^2}{\lambda M_p^2}}=\frac{M_p^2}{3}\mathcal{C}. \end{equation} If we consider the minimal two-field mode, the mass $M$ is constrained by the Cosmic Microwave Background (CMB) (see Eq. \eqref{cmb}) and also depends on whether the model is Higgs-like inflation or $R^2$-like inflation. Usually, $M$ is taken to be $\mathcal{O}(10^{-5})M_p$ \cite{he2019violent,he2021occurrence}, which corresponds to the mass of the scalaron at small field values in the pure $R^2$ model.
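As a rough numerical illustration (a sketch under the assumptions above, not the authors' code), the scalaron mass $M$ implied by the CMB normalization \eqref{cmb} can be evaluated for a given $\xi$:

```python
import math

# Sketch: solve Eq. (cmb) for the scalaron mass M at fixed xi, with the
# assumed lambda = 0.01. Masses are in units of the Planck mass Mp.
C_inv = 2.1e9   # 1/C = xi^2/lambda + Mp^2/(3 M^2)
lam = 0.01

def M_of_xi(xi):
    """Scalaron mass M (in Mp) consistent with the CMB normalization."""
    return 1.0 / math.sqrt(3.0 * (C_inv - xi ** 2 / lam))

# The effective mass of Eq. (tildem) is xi-independent:
M_tilde = math.sqrt(1.0 / (3.0 * C_inv))

print(M_of_xi(10))    # ~1.26e-5 Mp: R^2-like regime, M ~ O(1e-5) Mp
print(M_of_xi(4000))  # larger xi (Higgs-like) pushes M up
print(M_tilde)        # ~1.26e-5 Mp
```

For small $\xi$ one recovers $M\approx\tilde M\sim 10^{-5}\,M_p$, consistent with the value quoted above, while $M$ grows as $\xi^2/\lambda\to 1/\mathcal{C}$, i.e., toward nearly-pure Higgs inflation.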
One can check the relation between $\xi$ and $M$ in the minimal two-field mode in Fig. \ref{xi-m}. \begin{figure}[t] \centering \includegraphics[width=8cm]{xi-m.pdf} \caption{Plot of $\xi$ versus $M$, with $M$ in units of the Planck mass $M_p$.} \label{xi-m} \end{figure} When $\phi< 1 \hspace{1mm} M_p$, the potential can be approximated by \begin{equation}\label{potentialquadratic} U(\phi)\simeq \frac{1}{2}\tilde{M}^2\phi^2-\frac{1}{2}\sqrt{\frac{2}{3}}\frac{\tilde{M}^2}{M_p}\phi^3, \end{equation} which automatically gives \begin{equation} \frac{\partial U}{\partial \phi}\simeq \tilde{M}^2\phi-\sqrt{\frac{3}{2}}\frac{\tilde{M}^2}{M_p}\phi^2. \end{equation} We name this regime the \textit{quadratic regime}, corresponding to the first term in Eq. \eqref{potentialquadratic}. Also, if we consider that the inflaton's trajectory follows Eq. \eqref{eominflaton}, the analytical solution is \begin{equation} \phi(t)\simeq \tilde{\phi} e^{-3Ht/2} \cos\left(\left[\tilde{M}^2-\frac{1}{4}(3H)^2\right]^{1/2}t\right), \end{equation} with $\tilde{\phi}$ referring to the amplitude of the inflaton at the beginning of the preheating stage. With $\tilde{M}\gg H$, we can claim that during this stage the inflaton's oscillation is only lightly damped. If we decompose $\phi$ in the Heisenberg representation, namely \begin{equation}\label{heisenberg} \phi(x,t)=\frac{1}{(2\pi)^{3/2}}\int d^3k\left(\hat{a}_k \phi_k(t)e^{-i\bar{k}\cdot \bar{x}} +\hat{a}^\dagger_k \phi^*_k(t)e^{i\bar{k}\cdot \bar{x}}\right), \end{equation} we can write the equation of motion for inflaton self-production as \begin{equation}\label{inflatonself} \Ddot{\phi}_k+3H\Dot{\phi}_k+\left(\frac{k^2}{a^2}+\tilde{M}^2-\sqrt{\frac{3}{2}}\frac{\tilde{M}^2}{M_p}\phi\right)\phi_k=0.
\end{equation} If we rescale and redefine \begin{equation} {\varphi_k}\equiv a^{3/2}\phi_k, \hspace{0.5cm} {\kappa^2_\phi}\equiv \frac{k^2}{a^2}+\tilde{M}^2, \hspace{0.5cm} \phi\equiv \tilde{\phi} \sin (\tilde{M} t), \hspace{1cm} \tilde{M}t=2z_\phi+\frac{\pi}{2}, \end{equation} we can obtain \begin{equation} \frac{d^2\varphi_k}{dz^2_\phi}+\left(A_\phi-2q_\phi\cos(2z_\phi) \right)\varphi_k=0, \end{equation} where \begin{equation} A_\phi\equiv \frac{4}{\tilde{M}^2}\left(\frac{k^2}{a^2}+\tilde{M}^2\right) \hspace{1cm}\text{and}\hspace{1cm} q_\phi\equiv 2\sqrt{\frac{3}{2}}\frac{\tilde{\phi}}{M_p}, \end{equation} which is the Mathieu equation. Since the inflaton self-production is assumed to be non-relativistic, it is clear that $A_\phi \gg q_\phi$ for the first oscillation after the end of inflation, and $q_\phi$ becomes much smaller just after that, so the resonance gets narrower. It is obvious that the self-production of the scalaron field belongs to the narrow resonance regime. Since the particle fluctuations during the first several oscillations usually behave as a rather broad resonance\footnote{See Refs. \cite{kofman1994reheating,shtanov1995universe}.}, the resonant self-production of the inflaton field and also its decay are simply neglected. With this, we can assume that the decay into other channels could be dominant compared with inflaton self-production. One can also realize that the minimal two-field mode causes $\partial U/\partial h$ to vanish in Eq. \eqref{hdotdot}; this results in the oscillation being stagnant, which means there is almost no Higgs production in this mode. We can approximate the potential energy at the end of inflation. Taking $\epsilon =1$ to correspond to the end of inflation, we obtain the potential energy \begin{equation}\label{energyend} U(\phi_{end}) < 3.7 \times 10^{-11} M_p^4, \end{equation} which is independent of $\xi$. This result is evaluated by using Eq. \eqref{tildem}.
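This bound is easy to reproduce numerically; the following sketch (assumed values, not the paper's code) evaluates the minimal two-field potential \eqref{potentialminimum}, written via $\mathcal{C}$ so that it is manifestly $\xi$-independent:

```python
import math

# Sketch: potential energy of Eq. (potentialminimum) in units of Mp^4,
# expressed through C so the xi-dependence cancels out.
C = 1.0 / 2.1e9

def U(phi):
    """U(phi) with chi = sqrt(2/3) * phi, phi in units of Mp."""
    chi = math.sqrt(2.0 / 3.0) * phi
    return 0.25 * C * (1.0 - math.exp(-chi)) ** 2

print(U(1.0))  # ~3.7e-11 Mp^4 at the end of inflation (phi ~ 1 Mp)
print(U(0.5))  # ~1e-11 Mp^4 at phi ~ 0.5 Mp
```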
Taking this potential as the starting value of the preheating stage is rather subtle. If the scalaron's field value at the end of inflation is $\phi_{end}\sim 1\hspace{1mm}M_p$, after the first crossing the scalaron's field value drops significantly. Numerically, as one can see in Fig.~\ref{scalaron}, the field value loses about $70\%$ after the first crossing but decays more smoothly after that. Hence, there should be a transition phase between the end of inflation and the start of the preheating stage, which happens somewhere before the first crossing. As a rough estimate, we can take the field value $\phi_{pre} \sim 0.5 M_p$ as the starting point of the preheating stage. This corresponds to the potential \begin{equation}\label{energypreheating} U(\phi_{pre}) \sim 1 \times 10^{-11} M_p^4. \end{equation} We expect the whole preheating process to drain most of this energy before perturbative reheating happens. \begin{figure}[t] \centering \includegraphics[width=10cm]{scalaronoscillation.pdf} \caption{Scalaron's trajectory after the end of inflation. The field $\phi$ is in units of GeV and $t$ is in seconds.} \label{scalaron} \end{figure} \subsection{The gauge boson and fermion production}\label{gaugequadratic} As we already discussed in the previous subsection, the scalaron and Higgs self-production are suppressed. In this case, the production of other fields is expected to be large. We assume that in this quadratic regime both $W$ and $Z$ bosons are identical and have the same coupling $g_W$, and we treat the gauge bosons as scalar fields \cite{bezrukov2009initial}. Hence we can write the $W$-boson's equation of motion as \begin{equation}\label{ww} \Ddot{W}_k+3H\Dot{W}_k+\left(\frac{k^2_W}{a^2}+{m}_W^2(t) \right)W_k=0, \end{equation} where $m_W^2$ can be defined by (see Eq.
\eqref{higgsend}) \begin{equation}\label{mw} {m}_W^2=\frac{g^2_W}{4}\sqrt{\frac{2}{3}}\mathcal{C}\frac{\xi M_p}{\lambda}\tilde{\phi}\sin(\tilde{M}t). \end{equation} Please note that the induced mass $m_W$ is much larger than $\tilde{M}$ at the beginning of the preheating era. Indeed, gauge boson production can only happen when \begin{equation} \delta t\lesssim \frac{4}{g^2_W}\sqrt{\frac{3}{2}}\frac{\lambda\tilde{M}}{\mathcal{C}\xi M_p \tilde{\phi}}\sim 1\times 10^{-7}/\tilde{\phi}. \end{equation} Hence, it is obvious that particle production can only take place near the zero-crossings. The mass of the gauge bosons depends strictly on the amplitude $\tilde{\phi}(t)$. With this in mind, we approximate $\sin (\tilde{M}t)\simeq \tilde{M}t$. In addition, we can redefine Eq. \eqref{ww} via \begin{equation}\label{wdefine} \mathcal{W}_k\equiv a^{3/2}W_k, \hspace{0.5cm} {\kappa^2_W}\equiv \frac{k^2_W}{K^2a^2}, \hspace{0.5cm} \tau_W=K_Wt, \hspace{0.5cm}K_W\equiv\left[\frac{g^2}{4}\sqrt{\frac{2}{3}}\mathcal{C}\frac{\xi M_p\tilde{M}}{\lambda}\tilde{\phi} \right]^{1/3}, \end{equation} thus obtaining \begin{equation} \frac{d^2\mathcal{W}_k}{d\tau^2_W}+\left( \kappa_W^2+\tau_W\right)\mathcal{W}_k=0, \end{equation} which is an Airy equation. This way we obtain the solutions \begin{equation} \begin{split} &Ai(\tau_W) =\frac{1}{3}i\sqrt{\tau_W}\left[J_{-1/3}(b)+J_{+1/3}(b) \right],\\ &Bi(\tau_W)=i\sqrt{\frac{\tau_W}{3}}\left[J_{-1/3}(b)-J_{+1/3}(b) \right], \end{split} \end{equation} with $b=\frac{2}{3}\tau^{3/2}_W$ and $J_{\pm 1/3}$ the Bessel functions of order $\pm 1/3$. Both functions run from the conformal time $\tau_{W \hspace{1mm} end}=0$, as we take the end of inflation to be $t=0$, to \begin{equation} \tau_{W \hspace{1mm}crit}=K_{crit}t_{crit}=\left[\frac{g^2}{4}\sqrt{\frac{2}{3}}\mathcal{C}\frac{\xi M_p\tilde{M}}{\lambda}\tilde{\phi}_{crit} \right]^{1/3} t_{crit}\simeq 4.2\times 10^5 \xi^{-1/3}.
\end{equation} This result is plotted in Fig. \ref{Airy}. In the figure, we only take $\tau_W$ up to $50$ since it is more convenient\footnote{Taking $\tau_W \sim 10^5$ may cause disruption in the figure of both $Ai$ and $Bi$.}. Also, we can predict that for $\tau_W$ larger than $20$ the amplitude only gets slightly smaller, and the oscillation is still not terminated within the quadratic regime. We can approximate the largest $W$-boson production for the first crossing: \begin{equation}\label{gaugedelltaw} \delta {\rho}_W= \int^\infty_0\frac{d k_W^3}{(2\pi)^3} \sqrt{k_W^2/a^2 +m_W^2}e^{-\pi\left(\frac{k_W}{K} \right)^2}\simeq m_W\frac{K^3_W}{8 \pi^3}. \end{equation} This way, we obtain the largest energy drain for the first crossing to be \begin{equation} \delta\rho_W\simeq 4.52 \times 10^{-20} \xi^{3/2} M_p^4. \end{equation} Here we assumed $k_W^2\ll m^2_W$ in our calculation, and $m_W$ is evaluated over the half oscillation, $\int^\pi_0\sin(\tilde{M}t)\,d(\tilde{M}t)=2$. Also we used $\tilde{\phi}=\tilde{\phi}_{pre}=0.5M_p$. \begin{figure}[t] \centering \includegraphics[width=10cm]{Airy.pdf} \caption{Plot of $Ai(\tau_W)$ (\textcolor{red}{red} color) and $Bi(\tau_W)$ (\textcolor{blue}{blue} color). The figure is shown only up to $\tau_W=50$ to keep the red and blue plots distinguishable; the process actually continues until $\sim 10^5$ oscillations (for $\xi\sim 10^2-10^3$), i.e., until the end of the quadratic regime. The amplitude is in units of $M_p$.} \label{Airy} \end{figure} This way we can guarantee that in the quadratic regime the gauge boson production is inefficient. It is important to note that the created gauge bosons are still oscillating, creating more daughter fields by secondary parametric resonance production. This way, the universe will constantly be filled with heavy matter due to these multiple resonances. It is also noted that fermion production is suppressed by the Pauli exclusion principle during this regime, similarly to Ref.
\cite{bezrukov2009initial}. This way, we can focus on the gauge bosons. In addition, the production of Higgs bosons due to the oscillating gauge bosons is suppressed by the $\lambda/g^2_W$ term. This means that in the quadratic regime the Higgs boson production can simply be neglected. It is also worth mentioning that tachyonic preheating can possibly happen; see, for instance, Refs. \cite{he2021occurrence,bezrukov2020heatwave}. But since we assumed the inflaton's trajectory follows the two-field mode as an effective single-field mode, the tachyonic mode is suppressed due to the suppression of the Higgs field ($\partial^2 U/\partial h^2\simeq 0$) in the quadratic regime. The inflaton's energy drained by the production of the two gauge bosons can be approximated by multiplying the series of amplitudes by the number of oscillations: \begin{equation}\label{gaugeenergy} \rho_{gauge}\sim 4 \times 10^{-14}\sqrt{\xi} M_p^4. \end{equation} Please note that we have multiplied the last result by $4$, since we consider $2$ gauge bosons and, in one single oscillation, there are $2$ zero-crossings. From Eq. \eqref{gaugeenergy}, if the non-minimal coupling were of $\mathcal{O}(10^4)$, the gauge bosons could drain the whole inflaton's energy. However, since the maximum value of $\xi$ in this model is $\xi\sim 4500$, we can say that the gauge bosons cannot drain the whole inflaton's energy. In this case, we propose a new field that has a larger coupling than the gauge bosons and is coupled to the Higgs. We will discuss this possibility in the next subsection. \subsection{The wearing-off of the gravitational effect, the end of the quadratic regime, and dark matter} During inflation, the universe is filled by nothing but the inflaton, leaving most of it nearly vacuum. If we consider the second Friedmann equation \begin{equation} \Ddot{a}=-\frac{1}{6 M_p^2}(\rho_\phi+3P_\phi)a, \end{equation} the pressure $P_\phi=-\rho_\phi$ causes acceleration.
This condition holds until the kinetic term of the scalaron, $\propto \Dot{\phi}^2$, becomes large and comparable with the inflaton's potential. The cosmological constant, which we omitted in the last equation, is only responsible for the expansion in the post-inflationary era. At this point, the kinetic term of the Higgs is still suppressed by $e^{-\chi}$ and tends to be negligible. Thus, the scalaron still remains dominant at this point. This condition happens simultaneously with the creation of massive fields which constantly fill the universe, leading the universe to enter a (heavy) matter-dominated era. At the end of the quadratic regime, when the field value of the scalaron reaches the critical point, the gravitational effect wears off. This is shown by the condition $\phi_{crit}\approx h_{crit}$. Actually, the gravitational effect still remains implicitly in the second term of the coupling $\lambda+\frac{3\xi M^2}{M_p^2}$ (see the next subsection, Eq. \eqref{polynomials}). However, this term tends to be negated by other terms\footnote{Even without cancellation, $\frac{3\xi M^2}{M_p^2}$ is still much smaller than $\lambda$.}\cite{bezrukov2019some}. Thus, the Higgs quartic coupling will solely depend on $\lambda$. In this subsection, we introduce a Lagrangian for the dark matter (DM), which we assume is only coupled to the Higgs and is invariant under a global $Z_2$ symmetry. The importance of this DM comes from the fact that the gauge bosons fail to drain the inflaton's energy. The $S$ field is considered the DM candidate in this model. Here we have $S=(0 \hspace{0.5cm} s)^\top/\sqrt{2}$ and $\braket{S}=0$. The additional potential from Eq. \eqref{h-r2action} with the $S$-field can be written as\footnote{Here we assume the terms $\frac{1}{2}m_s^2 s^2$ and $\frac{1}{4}\lambda_s s^4$ are much smaller at this time.} \begin{equation}\label{lagrangedm} \frac{1}{4}\lambda_{hs} h^2 s^2. \end{equation} During the quadratic regime, Eq.
\eqref{lagrangedm} will be transformed by the Weyl transformation to the Einstein frame into \begin{equation}\label{smass} \frac{1}{2\sqrt{{6}}}\mathcal{C}\frac{\xi M_p}{\lambda}\lambda_{hs} \phi \hspace{1mm} s^2. \end{equation} This means that during the quadratic regime the DM can be produced by the perturbative decay of the scalaron at tree level, which also plays the role of the dominant decay channel of the scalaron. However, this effect should never be important during the preheating stage, since the parametric resonance of the DM takes over. Before we proceed, we assume the value of $\lambda_{hs}$\footnote{For simplicity, we assume that the additional field (the DM) does not affect the Higgs' running coupling and ruin our setting of $\lambda=0.01$.} satisfies $\lambda< g^2\ll \lambda_{hs}\simeq 10$. This assumption is based on the expectation that the DM production is much greater than that of the gauge bosons and fermions. With the large coupling $\lambda_{hs}$, we expect that both the parametric resonance and the perturbative decay of the scalaron to DM are dominant compared to the gauge boson and fermion productions. The decay rate of the scalaron to 2 DM particles can be obtained as \begin{equation}\label{inflatondecaydarkmatter} \Gamma_{\phi\rightarrow ss}=\frac{1}{384\pi}\mathcal{C}^2\frac{\xi^2 M_p^2}{\lambda^2 \tilde{M}}\lambda_{hs}^2. \end{equation} One can write the Boltzmann equation for the DM density via \begin{equation}\label{boltzman} \frac{d}{dt}(n_{s} a^3)=2 n_\phi(a)\Gamma_{\phi\rightarrow ss} a^3. \end{equation} Before we continue, it is important to note that if we take the non-minimal coupling to be large, $\xi\gtrsim 587$ (evaluated at $\tilde{\phi}=1\hspace{1mm}M_p$), the decay rate $\Gamma_{\phi\rightarrow ss}$ will be greater than the Hubble parameter. This means the preheating stage almost never happens. Thus, a smaller non-minimal coupling is necessary for successful preheating.
As we continue, by using our previous constraint, the decay rate should be much lower than the Hubble parameter ($\Gamma_{\phi\rightarrow ss}\ll H$). The DM density only fills a minor part of the universe during the early preheating stage but grows significantly due to parametric resonance. In addition, it is truly remarkable that during preheating the particle production by perturbative decay should be much smaller than the resonance production, again for successful preheating. However, at the end of the quadratic regime, when the parametric resonance is extremely narrow, the resulting particle production due to parametric resonance is suppressed. The perturbative effect, enhanced by Bose-Einstein condensation (BEC), takes over the role of draining the inflaton's energy. This enhanced\footnote{Let us call it that way: the perturbative effect enhanced by BEC.} perturbative effect, even though it is small, constantly drains the inflaton field. This effect becomes important during the transition between the quadratic regime and the quartic regime. Nevertheless, we could not confirm this condition in the numerical study, but we assume that such a condition exists during the transition. The reason is that at the end of the quadratic regime the oscillation becomes extremely narrow, and some mechanism must exist to drain the inflaton's energy. The decay rate enhanced by BEC can be written as \cite{mukhanov2005physical} \begin{equation}\label{decays} \Gamma_{eff}\simeq\Gamma_{\phi\rightarrow ss} (1+2\bold{n}_k^{s}). \end{equation} We assume as the initial condition that the occupation number of the scalaron is much larger than that of the DM, $\bold{n}_k^s$. In addition, the equation of motion in momentum space of the DM coupled with the scalaron can be written as \begin{equation}\label{dmeom} \Ddot{s}_k+\left(\frac{k_s^2}{a^2}+\frac{1}{\sqrt{6}}\mathcal{C}\frac{\xi M_p}{\lambda}\lambda_{hs} \tilde{\phi}\sin(\tilde{M}t)\right)s_k=0.
\end{equation} The width of $k_s$, which corresponds to the width of the band of created particles, can be evaluated by assuming the energy of a single $S$ quantum to be $\tilde{M}/2$, and we obtain \begin{equation} \left(\frac{\tilde{M}}{2}\right)^2=\frac{k^2_s}{a^2}+\frac{1}{\sqrt{6}}\mathcal{C}\frac{\xi M_p}{\lambda}\lambda_{hs} {\phi}. \end{equation} Finally, $\Delta k_s$ can be evaluated as \begin{equation} \Delta k_s =\sqrt{\frac{2}{3}}\mathcal{C}\frac{\xi M_p}{\lambda \tilde{M}}\lambda_{hs}\tilde{\phi}, \end{equation} where we have neglected the expansion of the universe. We can calculate the occupation number $\bold{n}_k^s$ as \begin{equation}\label{nk} \bold{n}_k^s=\frac{n_s}{4\pi k_s^{*2}\Delta k_s/(2\pi)^3}=\sqrt{\frac{3}{8}}\frac{ \tilde{\phi}}{\pi \mathcal{C} \xi M_p }\frac{\lambda}{\lambda_{hs}}\frac{n_s}{n_\phi}, \end{equation} where we have evaluated Eq. \eqref{nk} by using $k_s^*=\tilde{M}/2$ and $n_\phi=\frac{1}{2}\tilde{M}\tilde{\phi}^2$. For further usage, it is important to write Eq. \eqref{dmeom} in the redefined form \begin{equation}\label{sdefine} \mathcal{S}_k\equiv a^{3/2}s_k, \hspace{0.5cm} {\kappa^2_s}\equiv \frac{k_s^2}{K_s^2a^2}, \hspace{0.5cm} \tau_s=K_st, \hspace{0.5cm}K_s\equiv\left[\frac{\lambda_{hs}}{\sqrt{6}}\mathcal{C}\frac{\xi M_p\tilde{M}}{\lambda}\tilde{\phi} \right]^{1/3}, \end{equation} thus we obtain \begin{equation}\label{lames} \frac{d^2\mathcal{S}_k}{d\tau^2_s}+\left( \kappa_s^2+\tau_s\right)\mathcal{S}_k=0, \end{equation} which is also an Airy equation. The parametric resonance from this equation can enhance the BEC in this perturbative decay. Thus, the particle number density in the conformal mode of the DM is calculated via \begin{equation} \bar{n}_s=\int^\infty_0 \frac{d^3\kappa_s}{(2\pi)^3}e^{-\pi \frac{\kappa^2_s}{K_s^2}}=\frac{1}{8\pi^3}K_s^{3/2}, \end{equation} which corresponds to the physical number density $n_s=\frac{1}{8\pi^3} K_s^3$. We can substitute the last result into the Boltzmann equation, Eq.
\eqref{boltzman}, and, neglecting the expansion of the universe and assuming that $\bold{n}^s_k\gg 1$, we obtain the solution for the DM density as \begin{equation}\label{dmdensity} n_s\propto \exp\left(\frac{1}{128\pi\sqrt{2}} \sqrt{\mathcal{C}}\xi \frac{\lambda_{hs}}{\lambda}\tilde{\phi}t\right). \end{equation} Given $\tilde{\phi}=\frac{M_p}{\sqrt{3\pi}\tilde{M}t}$, the total exponential growth of $n_s$ can be approximated as \begin{equation} n_s\propto\exp\left(\frac{1}{128\pi\sqrt{2\pi}}\xi \frac{\lambda_{hs}}{\lambda}\right). \end{equation} \begin{figure}[t] \centering \includegraphics[width=8cm]{perturbativegrowth.pdf} \caption{The growth of the number density as a function of the non-minimal coupling $\xi$.} \label{perturbativegrowth} \end{figure} Before we proceed, if the enhanced decay rate at the end of the quadratic regime, $\Gamma_{eff}$ (see Eq. \eqref{decays}), is taken into account, we can compare it to the Hubble parameter $H$ during the same period. By using the relation \begin{equation} H^2=\frac{\rho}{3M_p^2}, \end{equation} where $\rho$ is evaluated as the energy density near the end of the quadratic regime, we show that, in order for the preheating to be followed by the quartic regime, the non-minimal coupling should be constrained as \begin{equation} \xi\lesssim 10.45. \end{equation} If the non-minimal coupling is larger than this value, the quartic regime is no longer needed, since the whole inflaton's energy is already converted perturbatively into lighter species even before preheating starts. In addition, the last result in Eq. \eqref{dmdensity} shows the created particle number density due to this enhanced perturbative decay. As depicted in Fig. \ref{perturbativegrowth}, when $\xi$ reaches $\approx 10$, the growth of the number density escalates enormously. We expect the perturbative decay to be suppressed for successful preheating.
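The escalation around $\xi\approx 10$ can be made concrete with a short numerical sketch (assumed couplings $\lambda=0.01$ and $\lambda_{hs}=10$ as in the text; this is an illustration, not the paper's code):

```python
import math

# Sketch: exponent of the BEC-enhanced total growth of n_s,
# n_s ~ exp( xi * (lambda_hs / lambda) / (128 * pi * sqrt(2*pi)) ).
lam, lam_hs = 0.01, 10.0

def growth_exponent(xi):
    """Total exponential growth factor of the DM number density."""
    return xi * (lam_hs / lam) / (128.0 * math.pi * math.sqrt(2.0 * math.pi))

print(growth_exponent(1))   # ~1: the growth is still mild
print(growth_exponent(10))  # ~10: n_s already grows by a factor e^10 ~ 2e4
```

The exponent scales linearly with $\xi\lambda_{hs}/\lambda$, so the number density grows roughly by a factor $e^{10}$ already at $\xi\approx 10$, which is why the perturbative channel must be suppressed there.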
On the other hand, to calculate the DM particle production due to parametric resonance, we can use Eq. \eqref{lames}. The solution of this equation also belongs to the Airy function. Furthermore, the energy density for each crossing can be calculated, similarly to Eq. \eqref{gaugedelltaw}, as \begin{equation}\label{gaugedelltas} \delta {\rho}_s= \int^\infty_0\frac{d k_s^3}{(2\pi)^3} \sqrt{k_s^2/a^2 +m_s^2}e^{-\pi\left(\frac{k_s}{K_s} \right)^2}\simeq m_s\frac{K_s^3}{8 \pi^3}, \end{equation} where the induced mass $m_s$ is evaluated from Eq. \eqref{smass}. Finally, the total energy drained by the DM can be calculated, similarly to Eq. \eqref{gaugeenergy}, as \begin{equation} \rho_{s}\sim 3.43 \times 10^{-12}\sqrt{\xi}M_p^4. \end{equation} Interestingly, even if we use a small non-minimal coupling, $\xi\approx 10$, the DM production can still potentially drain the whole inflaton's energy. Furthermore, the mass of the DM can be constrained by using the DM abundance $\Omega h^2=0.12$ \cite{aghanim2020planck}. Straightforwardly we obtain \cite{tanedo2011defense} \begin{equation} \Omega h^2 = \frac{1.55\times 10^{-10}}{\braket{\sigma_{ss\rightarrow hh} \nu}}\text{GeV}^{-2}. \end{equation} With $\braket{\sigma_{{ss\rightarrow hh}} \nu}$ corresponding to the annihilation of the DM into 2 Higgs bosons, we can constrain the DM mass to be $m_s\simeq 1.4$ TeV. We expect that a DM of this mass can be detected in the near future, for instance at the Large Hadron Collider (LHC) or the International Linear Collider (ILC). \section{Preheating in the Quartic Regime}\label{quartic} When the inflaton's field value drops to the critical value, the preheating enters the quartic regime. At this point, the actions in the Jordan and Einstein frames are indistinguishable. During this time, the Higgs plays the main role as the oscillating inflaton field. Hence, the potential shows quartic behavior.
As the inflaton is now dominated by the Higgs field, the equation of motion for Higgs boson self-production can be approximated by \begin{equation}\label{higgself} \Ddot{h}+3H\Dot{h}+ \frac{\partial U}{\partial h}=0. \end{equation} The potential in Eq. \eqref{higgself} can be approximated as polynomials from the potential in Eq. \eqref{potential} by \cite{bezrukov2019some} \begin{equation}\label{polynomials} \begin{split} U(\phi,h)&=\frac{1}{4}\left(\lambda+\frac{3\xi M^2}{M_p^2}\right)h^4+\frac{1}{2}M^2\phi^2+\frac{3\xi M^2}{\sqrt{6}M_p} \phi h^2\\ &+\frac{7}{36}\frac{M^2}{M_p^2}\phi^4+\frac{1}{2}\frac{\xi M^2}{M^2_p}\phi^2h^2-\frac{1}{\sqrt{6}}\frac{M^2}{M_p}\phi^3+...\\ \end{split} \end{equation} up to the leading order\footnote{Please note that the scalaron $\phi$ has already vanished for $h<h_{crit}$.}. Even for large $\xi$, the term $\frac{3\xi M^2}{M_p^2}$ is very small compared with $\lambda$. Thus, the Higgs coupling will solely depend on $\lambda$. This way, we can approximate the remaining energy of the inflaton as \begin{equation} V(h)=\frac{1}{4}\lambda h_{crit}^4\simeq 5.7 \times 10^{-33} \xi^4 M_p^4, \end{equation} which is found to be $\xi$-dependent. In addition, we can approximate Eq. \eqref{higgself} using the Heisenberg representation, similar to Eq. \eqref{heisenberg}, and obtain \begin{equation}\label{hequation} \Ddot{h}_k+3H\Dot{h}_k+\left(\frac{k^2_h}{a^2}+3{\lambda}h^2 \right) h_k=0. \end{equation} Defining \begin{equation}\label{kappatau} \bold{h}_k=a h_k \hspace{0.5cm} \kappa^2_h\equiv\frac{k^2_h}{ {\lambda}\tilde{h}^2}, \hspace{0.5cm}a(\tau)=\frac{1}{2\sqrt{3}}\frac{\tilde{h}}{M_p}\tau,\hspace{0.5cm} \tau\equiv\left(6 {\lambda}M_p^2/\pi\right)^{1/4}\sqrt{t}, \end{equation} we can write Eq.
\eqref{hequation} in a more transparent form as \begin{equation}\label{eomquartich} {\bold{h}_k}''+\left(\kappa^2_h+3cn^2\left(\tau, \frac{1}{\sqrt{2}}\right) \right)\bold{h}_k=0, \end{equation} where $\tilde{h}$ corresponds to the amplitude of the Higgs field and primes denote derivatives with respect to the conformal time $\tau$. The solution $\bold{h}_k(\tau)=\bar{\bold{h}}_k f(\tau)$ is obtained in the same way as in Ref. \cite{greene1997structure}: \begin{equation} f(\tau)=cn\left(\tau, \frac{1}{\sqrt{2}} \right), \end{equation} which is an elliptic cosine function. In the same way, we want to investigate the number of gauge bosons and fermions produced during this period. To this end, we start from the equation of motion of the $W$ boson in the Heisenberg picture, similarly to Eq. \eqref{ww}, \begin{equation}\label{wwquartic} \Ddot{W}_k+3H\Dot{W}_k+\left(\frac{k^2_W}{a^2}+{m}_W^2(t) \right)W_k=0, \hspace{0.5cm} m^2_W=\frac{g^2}{4}h^2. \end{equation} The analytical treatment of the $W$ boson in this regime differs from that in the quadratic one. Following the redefinitions in Eq. \eqref{kappatau} and $\mathcal{W}_k=a W_k$, we arrive at \begin{equation}\label{eomquarticw} \mathcal{W}_k''+\left(\kappa^2_W+\frac{g^2}{4 {\lambda}}cn^2\left(\tau, \frac{1}{\sqrt{2}}\right) \right)\mathcal{W}_k=0. \end{equation} We can generalize Eqs. \eqref{eomquartich} and \eqref{eomquarticw} so that they apply to any species, \begin{equation}\label{eomquarticgeneral} {\varphi_k}''+\left(\kappa^2_\varphi+\Upsilon cn^2\left(\tau,\frac{1}{\sqrt{2}} \right) \right)\varphi_k=0, \end{equation} where (for instance) $\Upsilon=3$ corresponds to Higgs self-production, $\Upsilon=\frac{g^2}{4\lambda}$ to gauge bosons, $\Upsilon=\frac{y_\psi^2}{2\lambda}$ to fermions, and $\Upsilon=\frac{\lambda_{hs}}{4\lambda}$ to the DM.
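Equation \eqref{eomquarticgeneral} is a Lam\'e-type equation, and its resonance structure can be probed numerically. The sketch below (an illustrative numerical experiment, not the authors' code) integrates the $\kappa^2=0$ mode for two values of $\Upsilon$: one inside the first instability band ($1<\Upsilon<3$, cf. Ref. \cite{greene1997structure}) and one small, non-resonant value.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

def max_amplitude(upsilon, kappa2=0.0, tau_max=100.0):
    """Integrate phi'' + (kappa^2 + Upsilon*cn^2(tau, 1/sqrt(2)))*phi = 0
    with phi(0)=1, phi'(0)=0 and return the maximum |phi| reached."""
    def rhs(tau, y):
        # scipy's ellipj takes the parameter m = k^2; here k = 1/sqrt(2), so m = 1/2
        _, cn, _, _ = ellipj(tau, 0.5)
        return [y[1], -(kappa2 + upsilon * cn**2) * y[0]]
    sol = solve_ivp(rhs, (0.0, tau_max), [1.0, 0.0],
                    rtol=1e-9, atol=1e-12, dense_output=True)
    taus = np.linspace(0.0, tau_max, 4000)
    return np.max(np.abs(sol.sol(taus)[0]))

print("Upsilon = 2   (resonant):", max_amplitude(2.0))   # exponential growth
print("Upsilon = 0.1 (stable)  :", max_amplitude(0.1))   # stays of order one
```

The mode in the instability band grows exponentially over a few tens of background oscillations, while the weakly coupled mode remains bounded, mirroring the strong $\Upsilon$-dependence seen in the figure discussed next.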
\begin{figure}[t] \centering \includegraphics[width=16cm]{preheatingquartic.png} \caption{Particle production for several values of $\Upsilon$, corresponding to different species. The colors denote different particles. \textcolor{blue}{blue}: $W$ boson, \textcolor{orange}{orange}: $Z$ boson, \textcolor{red}{red}: Higgs boson, \textcolor{green}{green}: top quark, \textcolor{brown}{brown}: tau lepton, \textcolor{black}{black}: bottom quark, \textcolor{yellow}{yellow}: charm quark, and \textcolor{purple}{purple}: the DM ($S$-field). The values on the right of each panel are the values of $\Upsilon$.} \label{numericquartic} \end{figure} In the numerical solution of Eq. \eqref{eomquarticgeneral}, we vary $\Upsilon$ over several standard model (SM) particles and the DM ($S$-field) candidate, assuming $\kappa^2=0$. Previously, we assumed the $W$ and $Z$ bosons to be identical in order to simplify our result. However, in the numerical calculation shown in the upper-left panel of Fig. \ref{numericquartic}, we make a clear distinction between their couplings: even a slight difference in coupling leads to drastically different production of $W$ and $Z$ bosons. The numerical solutions of Eq. \eqref{eomquarticgeneral} depend strongly on the ratio $\Upsilon$. A further peculiarity occurs for $\Upsilon=1\sim 2$, where particle production is much larger than for the $Z$ boson; but since we do not introduce a particle with such a coupling, we will not discuss this case further. We also investigated particles with small couplings (small $\Upsilon$), whose production is negligible and can be safely ignored; see the upper-right panel of Fig. \ref{numericquartic}. Lastly, in Fig.
\ref{numericquartic}, bottom-right panel, we compare the DM production with the $Z$ boson production. Varying $\kappa$ can enhance or diminish the particle production; the details of these conditions, together with the instability chart, can be found in Ref. \cite{greene1997structure}, but in this paper we do not consider such behavior. Analytically, preheating in the quartic regime ceases after $\sim 10^4$ oscillations, which makes it inefficient. Given the small energy density remaining compared with the quadratic regime, we expect the calculation of the reheating temperature to depend only indirectly on the preheating process. \section{The Reheating Temperature} The reheating temperature is the main feature of the model. At the beginning of this section, we take the non-minimal coupling $\xi$ to be a free parameter, so that the model can vary from the nearly-pure $R^2$ model to the nearly-pure Higgs model. In our approximation, the reheating temperature is obtained once the inflaton's field value satisfies $h\lesssim h_{crit}$. Thus, using \begin{equation} \rho_{crit}=\frac{\pi^2}{30}g^*T^4_R, \end{equation} under the assumption that the remaining inflaton energy density is converted to radiation, the maximum reheating temperature can be approximated as \begin{equation}\label{tr-upper} T_R^{upper}\lesssim 1.17 \times 10^{10} \xi \hspace{1mm}\text{GeV}, \end{equation} which depends strongly on the non-minimal coupling $\xi$. However, this estimate fails for a small non-minimal coupling $\xi$: in that case the resonance is still active at this stage and equilibrium has not yet been reached. Therefore, Eq. \eqref{tr-upper} should be taken as the upper bound of the reheating temperature, valid only if the preheating stage happens in the quadratic regime alone.
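The scaling of Eq. \eqref{tr-upper} can be sketched numerically: inserting the remaining energy density $V(h_{crit})\simeq 5.7\times10^{-33}\,\xi^4 M_p^4$ into $\rho=\frac{\pi^2}{30}g^*T^4$ gives a temperature of order $10^{10}\,\xi$ GeV. The value of $g^*$ below is an assumption (the SM value); the exact prefactor depends on the choice of $g^*$.

```python
import math

M_p = 2.435e18   # reduced Planck mass in GeV
g_star = 106.75  # assumed number of relativistic degrees of freedom (SM value)

def T_reheat_upper(xi):
    """T from rho_crit = (pi^2/30) g* T^4 with rho_crit ~ 5.7e-33 xi^4 M_p^4."""
    rho_crit = 5.7e-33 * xi**4 * M_p**4
    return (30.0 * rho_crit / (math.pi**2 * g_star)) ** 0.25

# Linear in xi; the prefactor is of the same order as in Eq. (tr-upper)
print(f"T_R^upper / xi = {T_reheat_upper(1.0):.2e} GeV")
```

The result reproduces the linear $\xi$-dependence and a prefactor of the same order as the quoted $1.17\times10^{10}$ GeV.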
After all, for small $\xi$ the preheating does not stop in the quadratic regime, so the reheating temperature should be obtained at an energy level slightly lower than $h_{crit}$. During this time the inflaton can clearly be regarded as the Higgs boson, since the effect of gravity wears off at the end of the quadratic regime. Note that reheating completes when the inflaton decay rate satisfies $\Gamma \simeq H$. Thus, the reheating temperature can be written as \begin{equation} T_R= \left( \frac{90}{g^*\pi^2}\right)^{1/4}\sqrt{M_p \Gamma}. \end{equation} With $\Gamma$ evaluated from the Higgs boson decay into two bottom quarks as the dominant process, the reheating temperature can be approximated as \begin{equation}\label{tr-lower} T_R^{lower}\simeq 2 \times 10^7 \text{GeV}. \end{equation} In contrast to Eq. \eqref{tr-upper}, the reheating temperature in Eq. \eqref{tr-lower} must be taken as the lower bound. During the radiation-dominated era, the effect of Bose--Einstein condensation (BEC) is supposed to be small \cite{lozanov2019lectures}. This crucial observation is the key to our constraint on $\xi$. To proceed, we assume that BEC occurs in this regime and later point out the conditions under which its effect is small. Thus, to obtain a better approximation to the reheating temperature with BEC, we can write the effective decay rate as \begin{equation}\label{gammah} \Gamma_{eff}\simeq \Gamma_{h\rightarrow b\bar{b}}(1+2\bold{n}_k^b).
\end{equation} To calculate $\bold{n}_k^b$, we take the approximation that the total energy of a single $b$ quark is $\approx \tilde{M}/2$, and\footnote{We evaluate the total energy as that of the scalaron at the end of the quadratic regime, $\rho_\phi=\frac{1}{2}\tilde{M}^2\phi^2$, even though we are already in the quartic regime; this is somewhat too high but not a bad approximation.} we have the relation \begin{equation} \left( \frac{\tilde{M}}{2}\right)^2=\frac{k^2}{a^2}+\frac{y_b^2}{2}h^2. \end{equation} Evaluating $\Delta k=|k_{max}-k_{min}|$, one finally obtains (neglecting the expansion, $a\approx 1$) \begin{equation} \Delta k=\frac{2y_b^2\tilde{h}^2}{\tilde{M}}. \end{equation} The occupation number $\bold{n}_k^b$ is then related to the particle density $n_b$, with average momentum $k_*\simeq \frac{\tilde{M}}{2}$, by \begin{equation} \bold{n}_k^b=\frac{n_b}{4\pi k_*^2 \Delta k/(2\pi^3)}=\frac{4\pi^2 n_b}{\tilde{M}y_b^2 \tilde{h}^2}. \end{equation} The comoving number density of the $b$ quarks is calculated via \begin{equation}\label{nko} \tilde{n}_b=\int^\infty_0\frac{d^3\kappa}{(2\pi)^3}e^{-\pi \left(\frac{y_b^2}{2\lambda}\right)^{-1}\kappa^2}=\frac{1}{8}\left(\frac{y_b^2}{2\lambda} \right)^{3/2}, \end{equation} which corresponds to the physical number density $n_b=\frac{1}{8}\left(y_b^2/2 \right)^{3/2} \tilde{h}^3$. Inserting the last result into Eq. \eqref{gammah}, we can approximate the effective decay rate as \begin{equation} \Gamma_{eff}=\Gamma_{h\rightarrow \bar{b}b} \left(1+\frac{\pi^2}{4\sqrt{2 }}\frac{\tilde{h}}{y_b \tilde{M}} \right)\simeq \Gamma_{h\rightarrow \bar{b}b} \left(1+\frac{\pi^2}{4}\frac{\sqrt{\mathcal{C}}}{y_b}\frac{\xi}{ \lambda} \right) . \end{equation} With $\tilde{h}$ evaluated at $h_{crit}$, the second term can have a significant effect.
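Since $T_R\propto\sqrt{\Gamma_{eff}}$, the BEC correction translates into the enhancement factor $\sqrt{1+0.3\sqrt{\xi}}$ appearing in Eq. \eqref{tr}. A small numerical sketch (the coefficient $0.3$ is taken directly from Eq. \eqref{tr}; nothing else is assumed) shows how mild this enhancement is for small $\xi$:

```python
import math

def bec_factor(xi):
    """Enhancement of T_R from Bose-Einstein condensation: sqrt(1 + 0.3*sqrt(xi))."""
    return math.sqrt(1.0 + 0.3 * math.sqrt(xi))

for xi in (1, 10, 100, 1000):
    print(f"xi = {xi:5d}: T_R enhanced by a factor {bec_factor(xi):.2f}")
```

For $\xi=10$ the temperature rises by only about $40\%$, staying at the same order of magnitude, while for $\xi\gtrsim 10^3$ the correction grows to an $\mathcal{O}(3)$ factor, illustrating why a small $\xi$ keeps the BEC effect under control.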
Putting everything together, we obtain the reheating temperature in the Higgs-$R^2$ model as \begin{equation}\label{tr} T_R\simeq 2 \times 10^7\sqrt{1+0.3 \sqrt{\xi}}\hspace{1mm}\text{GeV}. \end{equation} Interestingly, the BEC effect depends on the non-minimal coupling $\xi$, which in turn affects the reheating temperature. If we take $\xi=10$ from the previous upper bound, the temperature is only slightly increased and remains of the same order. This also implies that during the radiation-dominated era the BEC becomes unimportant only if $\xi$ is small. Strictly speaking, a large $\xi$ would eventually conflict with the standard picture of the radiation-dominated era, in which the BEC is supposed to be nearly absent. Hence, we propose the non-minimal coupling to be $\mathcal{O}(10)$ to minimize its effect on the reheating temperature. This agrees with the preheating analysis, in which for $\xi\lesssim 10$ the preheating continues into the quartic regime. Finally, we conclude that a small $\xi$ leads to a smaller reheating temperature. \section{Conclusion} We have investigated Higgs-$R^2$ inflation, focusing on the inflationary and reheating features of this model. In this paper we used the effective single-field approximation called the minimal two-field mode on the inflaton's trajectory; with this, the inflaton follows the direction of such a mode. We investigated the model and showed that the turn parameter $\eta^{\perp}$ depends strongly on the Higgs parameters, mainly on the non-minimal coupling\footnote{Since we take $\lambda$ to be fixed in this paper.} $\xi$. The turn parameter could potentially enhance the non-Gaussianity $f_{NL}$. However, our analytical approximation shows that even with a large $\xi$ this contribution fails to produce a large non-Gaussianity, as is of course expected for an effective single-field mode.
This way, the non-minimal coupling $\xi$ can be regarded as a free parameter, since no definite constraint applies to it. Thus, the non-minimal coupling can run from small $\xi$, corresponding to the nearly-pure $R^2$ model, to large $\xi$ for nearly-pure Higgs inflation. We divided the preheating stage into two regimes: the quadratic regime and the quartic regime. In the quadratic regime, we found that the gauge bosons dominate the particle production but fail to drain the whole inflaton energy. Thus, we introduced a DM candidate with a large coupling, which could potentially drain the whole inflaton energy. In addition, we found that if the non-minimal coupling satisfies $\xi\gtrsim 587$, the preheating does not take place; this comes from the fact that $\Gamma_{\phi\rightarrow SS}>H$. Also, near the end of the quadratic regime, if we require the preheating to continue into the quartic regime, we must impose the constraint $\xi\lesssim 10$ on the non-minimal coupling. If this constraint is violated, the preheating ceases once the quadratic regime ends. In that case, we obtain the upper bound of the reheating temperature, \begin{equation} T_R\simeq 1.17 \times 10^{10}\xi \hspace{1mm}\text{GeV}. \end{equation} This reheating temperature applies for $\xi>10^3$, from which we expect a large temperature $\gtrsim 10^{13}$ GeV. This value is close to that of pure Higgs inflation \cite{bezrukov2009initial}, as it is supposed to be. If $\xi$ is found to be $\lesssim 10$, the preheating continues into the quartic regime. The energy level during this period is much lower than in the quadratic regime; hence the role of this regime is only to drain the energy remaining from the inflaton. During this time, the effect of gravity wears off, so the Higgs boson acts solely as the inflaton.
In this regime, the remaining inflaton energy is small, and the preheating drains only a small amount of it. Finally, the reheating temperature is expected to come from the perturbative decay of the Higgs boson into daughter fields, giving \begin{equation} T_R\sim 10^7 \hspace{1mm}\text{GeV}. \end{equation} We calculated the effect of BEC, but this effect should be negligible given the nature of the radiation-dominated era; hence, the BEC effect is suppressed only if $\xi$ is small. \begin{acknowledgments} It is a pleasure to thank Daijiro Suematsu, Ahsani Hafidzu Shali, and Idham Syah Alam for useful discussions. \end{acknowledgments}
\section{Introduction} For experiments at the running and planned high-luminosity machines, such as the LHC, B-factories (HERA-B, for example) and the Tevatron, there are good prospects for observing and studying new hadrons containing two heavy quarks, such as the doubly charmed baryons $(ccq)$ (here and throughout this paper, q denotes a light quark u or d). In view of such projects, it is important to have reliable theoretical predictions as a guide to the experimental searches for these baryons. In our previous papers we have already studied both the production mechanisms of doubly charmed baryons at different future facilities \cite{PL}, \cite{1} and the total inclusive lifetimes of such systems \cite{2}. In this work we address the question of the spectroscopy of these baryons in the framework of the potential approach with the Buchm\" uller-Tye model. The $(QQ^{\prime }q)$ spectroscopy was also considered in some other models \cite{3,4,5,6}. The baryons with two heavy quarks combine the features of both the dynamics of the $D$-meson, with a fast light quark surrounding a static $\bar 3$-color core, and the dynamics of heavy-heavy mesons ($J/\Psi$, $B_c$, $\Upsilon$), with two heavy quarks sensitive to the QCD potential at short distances. So, a rich spectrum is expected. First, there are excitations due to the relative motion of the two heavy quarks in the quark-diquark approximation, which we use here. Some comments on the validity of this approximation will be given later. Second, we also consider the excitations of the light quark and combined excitations of both degrees of freedom. As we will show in this paper, some new features, related to the identity of the two heavy quarks, arise in the dynamics of doubly charmed (or doubly beauty) baryons.
The hadronic or electromagnetic transition between the $2P\to 1S$ diquark levels is forbidden in the absence of an interaction of the diquark with the light quark, or without taking into account nonperturbative effects. The qualitative picture of this effect will be discussed below, and a rigorous quantitative solution of this problem will be given in our subsequent papers. This work is organized as follows. Section 2 is devoted to the determination of the masses and of the values of the radial wave functions (for $S$-levels) and their first derivatives (for $P$-levels) at the origin, for both the diquark and the light quark-diquark system. In Section 3 we evaluate the fine and hyperfine splittings of different levels for the above systems. Section 4 contains some comments on the radiative and hadronic decays of doubly charmed baryons. Finally, Section 5 draws our conclusions. \section{Mass spectrum of doubly charmed baryons.} Investigating baryon spectroscopy, one faces a three-body problem in the framework of nonrelativistic quantum mechanics. Its reduced Hamiltonian has the form: \begin{equation} H = \frac{p_x^2}{M}+\frac{p_y^2}{M}+v({\bf x},{\bf y}), \end{equation} where ${\bf x}$, ${\bf y}$ are Jacobi variables: \begin{eqnarray} {\bf x} &=& {\bf r}_2 - {\bf r}_1,\\ {\bf y} &=& (2{\bf r}_3 - {\bf r}_1 - {\bf r}_2 )\sqrt\frac{m}{2M+m}, \end{eqnarray} where $M$ is the heavy quark mass, and $m$ is the light quark mass. There are several methods for solving the three-body Schr\" odinger equation \cite{5}. The first is the variational method, where one expands the wave function in terms of the eigenstates of a symmetric harmonic oscillator, or Gaussians, at different ranges in the ${\bf x}$ and ${\bf y}$ coordinates, or uses the hyperspherical formalism. The other methods are the quark-diquark approximation and the Born-Oppenheimer approximation. The former is used in our evaluation of the doubly charmed baryon spectrum.
The reason is that the ground state of $(ccq)$ consists of a localized $(cc)$ cluster surrounded by the light quark q, with the average distance $\langle r_{cc}\rangle $ much less than $\langle r_{cq}\rangle $. However, when one considers the radial or orbital excitations of the diquark, the average separation between the heavy quarks increases, and the quark-diquark structure is destroyed. So, in this region, our results for the mass spectrum of these baryons are quite crude. Next, a dramatic simplification of the $(ccq)$ dynamics is obtained in the Born-Oppenheimer or adiabatic approximation. The two heavy quarks move much more slowly than the light quark. When they move, the light quark wave function readjusts itself almost immediately to the state of minimal energy. Therefore, the calculation can be done in two steps: for any given ${\bf x}$, one computes the binding energy $\epsilon({\bf x})$, which is then used as an effective potential governing the relative motion of the heavy quarks. This was done in \cite{3}: first, in the nonrelativistic potential model and, second, in a variant of the MIT bag model. From our point of view, this method is the most suitable one for baryon spectroscopy. However, in this work we will be satisfied with the accuracy given by the quark-diquark approximation. In the present work, we use the QCD-motivated potential \cite{7} given by Buchm\" uller and Tye \cite{8}, which was justified on the $J/\Psi$ and $\Upsilon$ spectra with the following values of the parameters: \begin{equation} m_c = 1.486~GeV,\quad m_q = 0.385~GeV, \end{equation} where the light quark mass is obtained by fitting the theoretical prediction of the Buchm\" uller-Tye potential for the $D$-meson mass to its experimental value. Of course, the motion of the light quark is relativistic inside the $D$-meson, as well as inside the baryons under consideration, and so cannot be treated in the framework of nonrelativistic quantum mechanics.
However, we believe that, by treating its motion in both cases in the same way, we can obtain a good approximation for the mass levels of our baryons. Estimating the mass spectrum of the diquark excitations, one has to take into account a factor of 1/2 in the potential, due to the fact that the heavy quarks inside the diquark are in a color antitriplet state. At distances $\sim 0.6-0.8$ fm we have an attraction of the quarks inside the diquark, so we assume that the shape of the potential in this region, related to their pairwise interaction, is the same as for the quark-antiquark interaction in heavy mesons. At larger distances we cannot say anything about it, so our estimates for the higher-lying levels are quite rough. But for the low-lying energy levels of this system the wave function already vanishes in the region of large distances, so that our predictions for them can be trusted. Then, solving the Schr\" odinger equations for the diquark and light quark excitations, one finds the diquark and $\Xi_{cc}^{++}$, $\Xi_{cc}^{+}$-baryon mass spectra and the characteristics of the radial wave functions for both the diquark and the light quark-diquark system, $R_d(0)$, $R_l(0)$, $R_d^{\prime }(0)$ and $R_l^{\prime }(0)$, shown in Tables 1, 2 and 3.
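The radial Schr\" odinger equations mentioned above are straightforward to solve numerically. The sketch below diagonalizes a finite-difference discretization of the $S$-wave radial equation; since the full Buchm\" uller-Tye potential is not reproduced in this paper, a Cornell-type potential with purely illustrative parameters stands in for it, halved for the colour-antitriplet $(cc)$ channel (only $m_c$ is taken from the text; natural units throughout).

```python
import numpy as np

def radial_levels(V, mu, rmax=20.0, N=2000, nlevels=3):
    """Lowest eigenvalues of -u''/(2*mu) + V(r)*u = E*u with u(0)=u(rmax)=0.

    Three-point finite differences on a uniform grid; energies and V in GeV,
    r in GeV^-1 (hbar = c = 1).
    """
    r = np.linspace(0.0, rmax, N + 2)[1:-1]        # interior grid points
    h = r[1] - r[0]
    kin = -1.0 / (2.0 * mu * h**2)                 # off-diagonal kinetic term
    H = (np.diag(-2.0 * kin + V(r))                # tridiagonal Hamiltonian
         + np.diag(np.full(N - 1, kin), 1)
         + np.diag(np.full(N - 1, kin), -1))
    return np.linalg.eigvalsh(H)[:nlevels]

# Illustrative Cornell-type stand-in for the Buchmueller-Tye potential,
# halved for the colour-antitriplet cc pair; 0.52 and 0.18 are assumptions.
m_c = 1.486                                        # GeV, from the text
V_cc = lambda r: 0.5 * (-0.52 / r + 0.18 * r)

print(radial_levels(V_cc, m_c / 2.0))              # diquark binding energies, GeV
```

The same solver reproduces textbook spectra (e.g. the Coulomb problem) to a few parts in $10^4$ on this grid, so it is adequate for the qualitative level pattern discussed here.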
\begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Diquark state & Mass (GeV) & $\langle r^2\rangle ^{1/2}$ (fm) & Diquark state & Mass (GeV) & $\langle r^2\rangle ^{1/2}$ (fm) \\ \hline 1S & 3.16 & 0.58 & 3P & 3.66 & 1.36\\ \hline 2S & 3.50 & 1.12 & 4P & 3.90 & 1.86\\ \hline 3S & 3.76 & 1.58 & 3D & 3.56 & 1.13\\ \hline 2P & 3.39 & 0.88 & 4D & 3.80 & 1.59\\ \hline \end{tabular} \end{center} \caption{The $(cc)$-diquark spectrum: masses and mean-square radii.} \end{table} \begin{table}[b] \begin{center} \begin{tabular}{|c|c|c|c|} \hline n (diquark) & $R_{d(nS)}(0)(R_{d(nP)}^{\prime } (0))$ & n (diquark) & $R_{d(nS)}(0)(R_{d(nP)}^{\prime }(0))$ \\ \hline 1S & 0.530 & 2S & -0.452 \\ \hline 2P & 0.128 & 3P & -0.158 \\ \hline \end{tabular} \end{center} \caption{The characteristics of diquark radial wave functions $R_{d(nS)}(0)$ (in $GeV^{3/2}$), $R_{d(nP)}^{\prime } (0)$ (in $GeV^{5/2}$).} \end{table} \begin{table}[t] \begin{center} \begin{tabular}{|p{30mm}|c|p{20mm}|p{30mm}|c|p{20mm}|} \hline $n_d (diquark)$~- $n_l (light~quark)$ & Mass (GeV) & $R_{l(nS)}(0)$ $(R_{l(nP)}^{\prime } (0))$ & $n_d (diquark)$~- $n_l (light~quark)$ & Mass (GeV) & $R_{l(nS)}(0)$ $(R_{l(nP)}^{\prime }(0))$ \\ \hline 1S 1S & 3.56 & 0.499 & 1S 2P & 4.03 & 0.118\\ \hline 2S 1S & 3.90 & 0.502 & 2S 2P & 4.36 & 0.119\\ \hline 3S 1S & 4.16 & 0.505 & 3S 2P & 4.62 & 0.121\\ \hline 2P 1S & 3.79 & 0.501 & 2P 2P & 4.25 & 0.119\\ \hline 3P 1S & 4.06 & 0.504 & 3P 2P & 4.52 & 0.119\\ \hline 3D 1S & 3.96 & 0.503 & 3D 2P & 4.42 & 0.117\\ \hline \end{tabular} \end{center} \caption{The mass spectra and characteristics of light quark radial wave functions in the doubly charmed baryons $\Xi_{cc}^{++}$ and $\Xi_{cc}^{+}$, with different excitations of the diquark, $n_d$, and of the light quark-diquark system, $n_l$.} \end{table} In the calculations we assume that the threshold mass value of doubly charmed baryons is determined by the hadronic decay into a $\Lambda_c$-baryon and a $D$-meson, and, hence, it
equals 4.26 GeV \cite{9}. The threshold for the stability of the diquark is estimated from the following result, stated for a heavy quark-antiquark pair \cite{10}: if a heavy quark and the corresponding antiquark are separated by a distance greater than 1.4-1.5 fm, then the most energetically favorable and probable configuration results in the production of a light quark-antiquark pair, which leads to the fragmentation into a pair of flavored heavy mesons. So, we suppose that the same critical distance scale can be used for the colored diquark system, which results in the fragmentation of the diquark into a heavy meson and a heavy-light diquark. \section{Spin-dependent splitting.} In accordance with the results of refs. \cite{11,12} and \cite{SS}, we introduce an additional term in the potential to take into account the spin-orbital and spin-spin interactions, which cause the splitting of the $nL$-levels in both the diquark and the light quark-diquark system ($n$ is the principal quantum number, $L$ is the orbital momentum).
So, it has the form \begin{eqnarray} V_{SD}^{(d)}({\bf r}) &=& \frac{1}{2}\left(\frac{\bf L\cdot S}{2m_c^2}\right) \left( -\frac{dV(r)}{rdr}+ \frac{8}{3}\alpha_s\frac{1}{r^3}\right)\nonumber \\ && +\frac{2}{3}\alpha_s\frac{1}{m_c^2}\frac{\bf L\cdot S}{r^3}+\frac{4}{3} \alpha_s\frac{1}{3m_c^2}{{\bf S}_{c1}\cdot {\bf S}_{c2}}[4\pi\delta({\bf r})]\\ && +\frac{1}{3}\alpha_s\frac{1}{m_c^2}(-\frac{1}{4{\bf L}^2 -3}\times [ 6({\bf L\cdot S})^2+3({\bf L\cdot S})-2{\bf L}^2{\bf S}^2])\frac{1}{r^3}, \nonumber \end{eqnarray} for the diquark splitting, and \begin{eqnarray} V_{SD}^{(l)}({\bf r}) &=& \frac{1}{2}\left(\frac{\bf L\cdot S_d}{2m_c^2} + \frac{2\bf L\cdot S_l}{2m_l^2}\right) \left( -\frac{dV(r)}{rdr}+ \frac{8}{3}\alpha_s\frac{1}{r^3}\right)\nonumber \\ && +\frac{2}{3}\alpha_s\frac{1}{m_c m_l}\frac{(\bf L\cdot S_d + 2L\cdot S_l)}{r^3}+ \frac{4}{3}\alpha_s\frac{1}{3m_c m_l}{({\bf S}_{d}+{\bf L}_d)\cdot {\bf S}_{l}} [4\pi\delta({\bf r})]\\ && +\frac{1}{3}\alpha_s\frac{1}{m_c m_l}(-\frac{1}{4{\bf L}^2 -3}\times [ 6({\bf L\cdot S})^2+3({\bf L\cdot S})-2{\bf L}^2{\bf S}^2\nonumber \\ && -6({\bf L\cdot S_d})^2-3({\bf L\cdot S_d})+2{\bf L}^2{\bf S_d}^2]) \frac{1}{r^3}, \nonumber \end{eqnarray} for the light quark-diquark system, where $V(r)$ is the phenomenological potential (Buchm\" uller-Tye (BT) potential in our case), $S_l$ and $S_d$ are the light quark and diquark spins, respectively. The first term in both expressions takes into account the relativistic corrections to the potential $V(r)$. The second, third and fourth terms are the relativistic corrections coming from the account for the one gluon exchange between the quarks. $\alpha_s$ is the effective constant of quark-gluon interactions inside the baryons under consideration. Expression (6) for the additional part of potential, causing the splitting of levels in the light quark-diquark system is obtained from the summing of the pair interactions, like (5), for the light quark with each of the heavy quarks. 
We also include the correction connected with the interaction of the internal diquark orbital momentum with the light quark spin. The value of the $\alpha_s$ parameter in (5), (6) can be determined in the following way. The splitting of the $S$-wave heavy quarkonium $(Q_1\bar Q_2)$ is determined by the expression \begin{equation} \Delta M(nS) = \frac{8}{9}\alpha_s\frac{1}{m_1m_2}|R_{nS}(0)|^2, \end{equation} where $R_{nS}(0)$ is the radial wave function of the quarkonium at the origin. Using the experimental value of the $1S$-state splitting in the $c\bar c$ system \cite{13} \begin{equation} \Delta M(1S,c\bar c) = 117\pm 2~MeV \end{equation} and the $R_{1S}(0)$ value calculated in the potential model with the BT potential for the $c\bar c$ system, one gets the value of the $\alpha_s(\Psi)$ coupling constant for the effective Coulomb interaction of the heavy quarks. In the present paper, we take into account the variation of the effective Coulomb coupling constant with the reduced mass of the system $(\mu )$. In the one-loop approximation at the momentum scale $p^2$, the `running' coupling constant in QCD is determined by the expression \begin{equation} \alpha_s (p^2) = \frac{4\pi}{b\cdot\ln (p^2/\Lambda_{QCD}^2)}, \end{equation} where $b = 11 -2n_f/3$, and $n_f = 3$, when one takes into account the contribution of the virtual light quarks, $p^2< m_c^2$.
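The one-loop formula above is easy to check numerically. The sketch below evaluates it at the scale $p^2 = 2\langle T\rangle\mu$ introduced in the text, with $m_c$ and $\Lambda_{QCD}\approx 113$~MeV taken from the text and $\langle T\rangle\approx 0.2$~GeV and $\mu = m_c/2$ assumed for the charmonium scale; it reproduces the quoted $\alpha_s(\Psi)\approx 0.44$.

```python
import math

def alpha_s(T, mu, Lam=0.113, nf=3):
    """One-loop running coupling at the scale p^2 = 2*<T>*mu (GeV units)."""
    b = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (b * math.log(2.0 * T * mu / Lam**2))

m_c = 1.486                         # GeV, Buchmueller-Tye fit value from the text
print(alpha_s(0.2, m_c / 2.0))      # ~0.44, matching alpha_s(Psi) quoted below
```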
We assume that the average kinetic energies of the $c$-quarks inside the diquark and inside the light quark-diquark system (the average kinetic energy depends only weakly on the reduced mass of the system) are equal to: \begin{equation} \langle T_{d}\rangle \approx 0.2~GeV, \end{equation} \begin{equation} \langle T_{l}\rangle \approx 0.4~GeV, \end{equation} so that, using the expression for the kinetic energy, \begin{equation} \langle T\rangle = \frac{\langle p^2\rangle }{2\mu }, \end{equation} where $\mu$ is the reduced mass of the system, one gets \begin{equation} \alpha_s(p^2) = \frac{4\pi}{b\cdot\ln (2\langle T\rangle \mu/\Lambda_{QCD}^2)}, \end{equation} so that $\alpha_s (\Psi) = 0.44$ at $\Lambda_{QCD}\approx 113~MeV$. As one can see from equations (5) and (6), in contrast to the $LS$-coupling in the diquark, there is $jj$-coupling in the light quark-diquark system, where the diquark and the light quark have different masses (here ${\bf LS}_l$ is diagonal at the given ${\bf J}_l$ momentum, $({\bf J}_l = {\bf L} + {\bf S}_l, {\bf J} = {\bf J}_l + {\bf \bar J})$, $\bf J$ is the total spin of the system, and $\bf\bar J$ is the total spin of the diquark (as we will see below, in the case of interest $\bf\bar J$ equals ${\bf S}_d$)). To calculate the level shifts arising from the spin-spin and spin-orbital interactions, one has to average expressions (5), (6) over the wave functions of the corresponding states. Then, because the leading contribution to the spin-orbital splitting of the light quark-diquark system is given by the term $\frac{1}{2}\frac{L\cdot S_l}{2m_l^2}(-\frac{dV(r)}{rdr}+\frac{8}{3}\alpha_s \frac{1}{r^3})$, we can use the state vectors with the given values of $\bf J$ and ${\bf J}_l$ as the first approximation to the eigenvectors of the potential.
For the potential terms which are not diagonal in these states, we can choose another basis of vectors, with the given values of $\bf J$ and ${\bf S} = {\bf S}_l + {\bf\bar J}$ \begin{equation} |J;J_l\rangle = \sum_{S} (-1)^{(\bar J+S_l+L+J)}\sqrt {(2S+1)(2J_l+1)} \left\{\begin{array}{ccc} \bar J & S_l & S \\ L & J & J_l \end{array}\right\}|J;S\rangle \end{equation} or ${\bf J}$ and ${\bf J}_d$ \begin{equation} |J;J_l\rangle = \sum_{Jd} (-1)^{(\bar J+S_l+L+J)}\sqrt {(2J_d+1)(2J_l+1)} \left\{\begin{array}{ccc} \bar J & L & J_d \\ S_l & J & J_l \end{array}\right\}|J;Jd\rangle \end{equation} so that the potential terms of order $1/m_cm_l$, $1/m_c^2$ lead, generally speaking, to a mixing of levels with different $J_l$ values at a given value of $J$. The identity of the heavy quarks results in $S_d=1$ at even $L_d$ and $S_d=0$ at odd $L_d$, where $L_d$ is the diquark orbital momentum. Table 3 shows that we have to take into account only the spin-orbital splitting of the $1S 2P$ and $3D 1S$ levels. In the first case (the splitting of the light quark-diquark system levels $\Delta^{(J)}$ for $1S2P$) one has: \begin{equation} \Delta^{(\frac{5}{2})} = 17.4~MeV. \end{equation} The levels with $J = \frac{3}{2}$ (or $\frac{1}{2}$) but different $J_l$ values mix.
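As a consistency check, the recoupling coefficients of the first basis change above form an orthogonal matrix. The sketch below verifies this numerically for the $1S2P$, $J=\frac{3}{2}$ case ($\bar J = S_d = 1$, $S_l = \frac{1}{2}$, $L = 1$), using sympy's Wigner $6j$ symbols.

```python
import numpy as np
from sympy import Rational
from sympy.physics.wigner import wigner_6j

# Recoupling matrix of the |J;J_l> -> |J;S> basis change for 1S2P, J = 3/2:
# Jbar = S_d = 1, S_l = 1/2, L = 1; both J_l and S run over {1/2, 3/2}.
half, threehalf = Rational(1, 2), Rational(3, 2)
Jbar, Sl, L, J = 1, half, 1, threehalf
phase = float((-1) ** int(Jbar + Sl + L + J))

M = np.array([[phase * float(((2 * S + 1) * (2 * Jl + 1)) ** half
                             * wigner_6j(Jbar, Sl, S, L, J, Jl))
               for S in (half, threehalf)] for Jl in (half, threehalf)])

print(np.round(M @ M.T, 10))   # identity matrix: the recoupling is unitary
```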
For $J =\frac{3}{2}$, the mixing matrix is equal to \begin{equation} \left(\begin{array}{cc} 4.3 & -1.7 \\ -1.7 & 7.8 \end{array}\right)~MeV \end{equation} with the eigenvectors \begin{eqnarray} |1S2P(\frac{3}{2}^{\prime })\rangle &=& 0.986|J_l=\frac{3}{2}\rangle +0.164|J_l=\frac{1}{2}\rangle ,\\ |1S2P(\frac{3}{2})\rangle &=& -0.164|J_l=\frac{3}{2}\rangle +0.986|J_l=\frac{1}{2}\rangle , \nonumber \end{eqnarray} and the eigenvalues \begin{eqnarray} \lambda_1^{\prime } &=& 3.6~MeV,\\ \lambda_1 &=& 8.5~MeV.\nonumber \end{eqnarray} For $J=\frac{1}{2}$, the mixing matrix equals \begin{equation} \left(\begin{array}{cc} -3.6 & -55.0 \\ -55.0 & -73.0 \end{array}\right)~MeV \end{equation} with the eigenvectors \begin{eqnarray} |1S2P(\frac{1}{2}^{\prime })\rangle &=& 0.957|J_l=\frac{3}{2}\rangle -0.291|J_l=\frac{1}{2}\rangle ,\\ |1S2P(\frac{1}{2})\rangle &=& 0.291|J_l=\frac{3}{2}\rangle +0.957|J_l=\frac{1}{2}\rangle , \nonumber \end{eqnarray} and the eigenvalues \begin{eqnarray} \lambda_2^{\prime } &=& 26.8~MeV,\\ \lambda_2 &=& -103.3~MeV.\nonumber \end{eqnarray} In the second case (the splitting of diquark levels $\Delta^{(J_d)}$ for $3D1S$) one gets: \begin{eqnarray} \Delta^{(3)} &=& -3.02~MeV,\nonumber\\ \Delta^{(2)} &=& 2.19~MeV,\\ \Delta^{(1)} &=& 3.39~MeV.\nonumber\\ \end{eqnarray} For the spin-spin interactions inside the diquark, we have only a shift of the energy levels because of the identity of heavy quarks. 
The hyperfine splitting for the light quark-diquark system can be computed using the following formula: \begin{equation} \Delta_{h.f.}^{(l)} = \frac{2}{9}(S(S+1)-\bar J(\bar J +1 ) - \frac{3}{4}) \alpha_s(2\mu T)\frac{1}{m_cm_l}|R_l(0)|^2, \end{equation} where $R_l(0)$ is the value of the radial wave function of the light quark-diquark system at the origin, and \begin{equation} \Delta_{h.f.}^{(d)} = \frac{1}{9} \alpha_s(2\mu T)\frac{1}{m_c^2}|R_d(0)|^2, \end{equation} for the diquark level shifts, where $R_d(0)$ is the radial wave function of the diquark at the origin. For the $1S$ and $2S$-wave states of the diquark we have the following values of the shifts (because of the identity of the quarks there are no splittings of these levels): \begin{eqnarray} \Delta (1S) &=& 6.3~MeV,\nonumber\\ \Delta (2S) &=& 4.6~MeV.\nonumber \end{eqnarray} The mass spectrum of the doubly charmed baryons ($\Xi_{cc}^{++}$ and $\Xi_{cc}^{+}$), with account of the calculated splittings, is shown in Fig.1 and Table 4.
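The numbers quoted in this section can be cross-checked in a few lines of numpy: the diquark hyperfine shifts follow from Eq.~(25) with $\alpha_s(2\mu\langle T_d\rangle)$ from Eq.~(13) and the $R_d(0)$ values of Table 2, and the quoted eigenvalues follow from the $J=\frac{3}{2}$ and $J=\frac{1}{2}$ mixing matrices of Eqs.~(17) and (20).

```python
import math
import numpy as np

m_c = 1.486                                   # GeV, from the text
b = 11.0 - 2.0 * 3.0 / 3.0                    # one-loop beta coefficient, n_f = 3
a_s = 4 * math.pi / (b * math.log(2 * 0.2 * (m_c / 2) / 0.113**2))

def hf_shift_diquark(Rd0):
    """Diquark hyperfine shift of Eq. (25); Rd0 in GeV^(3/2), result in MeV."""
    return 1000.0 / 9.0 * a_s * Rd0**2 / m_c**2

print(hf_shift_diquark(0.530))                # 1S shift, ~6.3 MeV
print(hf_shift_diquark(-0.452))               # 2S shift, ~4.6 MeV

# Mixing matrices for the 1S2P levels, Eqs. (17) and (20), in MeV.
for Mmix in ([[4.3, -1.7], [-1.7, 7.8]],
             [[-3.6, -55.0], [-55.0, -73.0]]):
    print(np.linalg.eigvalsh(Mmix))           # eigenvalues quoted in the text
```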
\begin{figure}[t] \setlength{\unitlength}{0.6mm}\thicklines \begin{center} \begin{picture}(240,200) \put(15,56.6){\line(1,0){15}} \put(15,59.6){$1S1S$} \put(32,61){\line(1,0){15}} \put(32,47.8){\line(1,0){15}} \put(50,58){$^{3/2^{+}}$} \put(50,44.8){$^{1/2^{+}}$} \put(15,126){\line(1,0){15}} \put(15,128){$1S2S$} \put(62,148){\line(1,0){15}} \put(62,151){$2P2S$} \put(62,79){\line(1,0){15}} \put(62,82){$2P1S$} \put(79,83.4){\line(1,0){15}} \put(79,70.2){\line(1,0){15}} \put(97,80.4){$^{3/2^{-}}$} \put(97,67.4){$^{1/2^{-}}$} \put(15,90){\line(1,0){15}} \put(15,93){$2S1S$} \put(32,94.4){\line(1,0){15}} \put(32,81.2){\line(1,0){15}} \put(50,91.4){$^{3/2^{+}}$} \put(50,78.2){$^{1/2^{+}}$} \put(159,96){\line(1,0){15}} \put(159,99){$3D1S$} \put(179,94.7){\line(1,0){15}} \put(179,96.2){\line(1,0){15}} \put(179,97.34){\line(1,0){15}} \put(200,108.9){\line(1,0){15}} \put(220,106.59){$^{7/2^{+}}$} \put(200,105){\line(1,0){15}} \put(220,102){$^{5/2^{\prime +}}$} \put(200,100.74){\line(1,0){15}} \put(220,97.074){$^{3/2^{\prime +}}$} \put(200,87.54){\line(1,0){15}} \put(220,84.54){$^{1/2^{+}}$} \put(200,83){\line(1,0){15}} \put(220,80){$^{3/2^{+}}$} \put(200,78.1){\line(1,0){15}} \put(220,75.1){$^{5/2^{+}}$} \put(107,103){\line(1,0){15}} \put(107,106){$1S2P$} \put(125,107.03){\line(1,0){15}} \put(143,108.03){$^{1/2^{\prime -}}$} \put(125,104.03){\line(1,0){15}} \put(143,103.43){$^{5/2^{-}}$} \put(125,103.15){\line(1,0){15}} \put(143,99.045){$^{3/2^{-}}$} \put(125,102.023){\line(1,0){15}} \put(143,94.53){$^{3/2^{\prime -}}$} \put(125,89.94){\line(1,0){15}} \put(143,86.94){$^{1/2^{-}}$} \put(15,116){\line(1,0){15}} \put(15,119){$3S1S$} \put(32,120.4){\line(1,0){15}} \put(32,107.2){\line(1,0){15}} \put(50,117.4){$^{3/2^{+}}$} \put(50,104.2){$^{1/2^{+}}$} \put(62,106){\line(1,0){15}} \put(62,109){$3P1S$} \put(79,110.4){\line(1,0){15}} \put(79,97.2){\line(1,0){15}} \put(97,107.4){$^{3/2^{-}}$} \put(97,94.2){$^{1/2^{-}}$} \put(10,115){\line(1,0){230}} 
\put(185,118){$\Lambda_c~D$~~threshold} \put(107,136.1){\line(1,0){15}} \put(107,139.1){$2S2P$} \put(107,162){\line(1,0){15}} \put(107,165){$3S2P$} \put(159,126.2){\line(1,0){15}} \put(159,129.2){$2P2P$} \put(199,142){\line(1,0){15}} \put(199,145){$3D2P$} \put(159,152){\line(1,0){15}} \put(159,155){$3P2P$} \put(10,0){\framebox(230,200)} \put(0,0){$3.0$} \put(10,50){\line(1,0){3}} \put(0,50){$3.5$} \put(10,100){\line(1,0){3}} \put(0,100){$4.0$} \put(10,150){\line(1,0){3}} \put(0,150){$4.5$} \put(10,200){\line(1,0){3}} \put(0,200){$5.0$} \end{picture} \end{center} \caption{The spectrum of doubly charmed baryons: $\Xi_{cc}^{++}$ and $\Xi_{cc}^{+}$.} \label{pic1} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{|p{40mm}|c|p{40mm}|c|} \hline $(n_d (diquark)$~- $n_l (light~quark))$, J$^{P}$ & Mass (GeV) & ($n_d (diquark)$~- $n_l (light~quark))$, J$^{P}$ & Mass (GeV) \\ \hline (1S 1S)$1/2^{+}$ & 3.478 & (3P 1S)$1/2^{-}$ & 3.972 \\ \hline (1S 1S)$3/2^{+}$ & 3.61 & (3D 1S)$3/2^{\prime +}$ & 4.007 \\ \hline (2P 1S)$1/2^{-}$ & 3.702 & (1S 2P)$3/2^{\prime -}$ & 4.034 \\ \hline (3D 1S)$5/2^{+}$ & 3.781 & (1S 2P)$3/2^{-}$ & 4.039 \\ \hline (2S 1S)$1/2^{+}$ & 3.812 & (1S 2P)$5/2^{-}$ & 4.047\\ \hline (3D 1S)$3/2^{+}$ & 3.83 & (3D 1S)$5/2^{\prime +}$ & 4.05 \\ \hline (2P 1S)$3/2^{-}$ & 3.834 & (1S 2P)$1/2^{\prime -}$ & 4.052 \\ \hline (3D 1S)$1/2^{+}$ & 3.875 & (3S 1S)$1/2^{+}$ & 4.072\\ \hline (1S 2P)$1/2^{-}$ & 3.927 & (3D 1S)$7/2^{+}$ & 4.089 \\ \hline (2S 1S)$3/2^{+}$ & 3.944 & (3P 1S)$3/2^{-}$ & 4.104 \\ \hline \end{tabular} \end{center} \caption{The mass spectrum of the doubly charmed baryons $\Xi_{cc}^{++}$ and $\Xi_{cc}^{+}$, including the spin-dependent splittings, for different excitations of the diquark, $n_d$, and of the light quark-diquark system, $n_l$.} \end{table} \section{Transition between diquark levels: $2P\to 1S$ -- a laboratory for long distance QCD?} Our plans for the future include a detailed investigation of the hadronic and radiative
transitions in the spectrum of doubly charmed baryons. However, here we would like to discuss some new features in the dynamics of baryons containing two identical heavy quarks. Because of the identity of the two heavy quarks, we have a metastable $2P$-wave diquark state. This state has the quantum numbers $L=1$, $S=0$, and, hence, a transition to the ground state $(L=0, S=1)$ would require a simultaneous change of the orbital and spin quantum numbers. It is worth stressing that the existence of such a state is possible only in baryons with two heavy quarks, because in ordinary heavy baryons the light diquark is never excited, owing to its extremely large size. We have two possible scenarios for the realization of such a transition: 1. A three-particle interaction via the three-gluon vertex. This interaction breaks down the quark-diquark picture and leads to new wave functions of these states of the form $C_1|L=1,S=0\rangle + C_2|L=0,S=1\rangle $, where $|C_2|\ll |C_1|$ for $2P$. So, as one can easily see, taking such interactions into account makes the radiative $M1$-transition to the ground state possible. 2. A nonperturbative transition given by an operator proportional to $\mu_B\; \vec r\cdot\vec\nabla \;\vec H(0)\cdot(\vec S_1 - \vec S_2)$, where $\mu_B$ is the Bohr magneton, $\vec H(\vec r)$ is the chromomagnetic field and $\vec r$ is the distance between the two heavy quarks (here we have the same situation as in the case of the transition between ortho- and para-hydrogen). This transition goes with the emission of a pion. We would like to stress the nonperturbative nature of this transition, because for its realization the necessary condition is that the chromomagnetic field at different points is correlated.
The absence of such a correlation prevents this scenario from being realized, because it would then require two consecutive gluon exchanges with the light quark or the quark-gluon sea, in which the orbital or spin quantum numbers of the state would change, which is impossible because of the identity of the two heavy quarks. A detailed quantitative investigation of the nature of this transition is the subject of our subsequent papers. It will allow us to gain more information on the nonperturbative dynamics of QCD (in the case of the second type of transition) and, particularly, on the behaviour of the chromomagnetic field at large distances. So, we close this section with the question posed in its title. \section{Conclusion} In this work we have used the Buchm\" uller-Tye potential model within the quark-diquark approximation to describe the mass spectrum of the doubly charmed baryons $\Xi_{cc}^{++}$ and $\Xi_{cc}^{+}$, including the fine and hyperfine splittings of the mass levels. We have discussed the uncertainties involved and some possible ways to reduce them: the use of the Born-Oppenheimer approximation and a corrected potential for the level splitting. In the previous section of the article we have considered a new phenomenon arising in the radiative or hadronic transitions between the $2P\to 1S$ diquark levels. We have commented on some possible scenarios for this transition and the lessons we could learn from studying the nature of such transitions. The authors express their gratitude to Prof. A.Wagner and the DESY Theory Group for their kind hospitality during a visit, when this paper was produced. This work is in part supported by the Russian Foundation for Basic Research, grants 96-02-18216 and 96-15-96575. A.I. Onishchenko acknowledges the support of the NORDITA grant.
\section{Conclusion} In the arms race between authorship attribution and obfuscation, it is crucial that obfuscation transfers when an adversary deploys a different attributor than the one assumed by the obfuscator. In this paper we showed that an ensemble that uses multiple base attribution classifiers, each exploiting random portions of the feature space, achieves better transferability, by factors of 1.7$\times$ and 2.1$\times$ over the two baselines. Moreover, we showed that this success holds even when the adversary's attributor operates on a different feature space. We also found that ensemble diversity in terms of disagreement is not crucial for transferability, as it only hinders the obfuscator by decreasing the ensemble's probability of detection. \subsection{Adversary's Attribution Classifiers} \label{sec:adv-clf} To assess the transferability of the obfuscated samples to other classifiers, we train a series of classifiers and measure the performance of the baselines and the ensemble. Specifically, we train multiple classifiers that use different types of techniques to measure the \textit{cross-technique transferability} \cite{transferabilityIM2016} of the method used to generate the samples. These classifiers are also trained on the Writeprints feature set and are as follows: \textit{k-nearest neighbors} (KNN), \textit{naive Bayes} (NB), \textit{multilayer perceptron} (MLP), and \textit{logistic regression} (LR), in addition to the already trained \textit{random forest classifier} (RFC), \textit{support vector machine} (SVM), and the \textit{ensemble} (Ens) itself. Additionally, we incorporate countermeasures for the findings of Gröndahl et al. \cite{parchoice2020} that using an internal classifier results in highly specific transformations which are \textit{non-transferable} not only to different classifiers but even to the same classifier if it is retrained.
We accomplish this by training multiple versions of the classifiers that exhibit randomness during training (RFC, Ensemble, and MLP) and then report the average transferability result in their respective columns. In addition to these Writeprints-based classifiers, we also measure the transferability of the samples to JGAAP \cite{juola2009jgaap}, a well-known authorship attribution system that provides a wide array of features and classifiers, and to another MLP model trained on the Basic-9 feature set \cite{mcdonald2012use}. This setting allows us to explore the performance of the ensemble against an adversary that uses a different feature set and classifier implementation. We borrow the configuration recommended by Juola et al. \cite{juola2010empirical} for JGAAP. The final configuration for JGAAP is listed in Table \ref{tbl:jgaap-config}. \input{tables/tbl-jgaap-config} \subsection{Data} The \textbf{Extended Brennan Greenstadt Corpus} \cite{adversarialStylometrySAfroz} comprises writing samples submitted by various authors through Amazon's Mechanical Turk (AMT) platform. The corpus is unique in that it was collected expressly for the purpose of adversarial stylometry in text and was vetted against a strict set of guidelines imposed by AMT and the authors themselves to ensure quality. The guidelines required that the submissions be professionally written, be free of anything other than the writing itself (i.e., citations, URLs, headings, etc.), and contain at least 6500 words. The imposition of these strict guidelines ensured that the submissions were of high quality, reflected each author's particular writing style, and provided sufficient data to train an attribution classifier. Out of the 100 submissions, the authors selected the 45 that most closely followed the guidelines and then split them into passages of nearly 500 words, averaging 15 documents per author, to create the final corpus.
In our experiments, we divide the corpus into groups of 5 authors based on document length and report results on these groupings by further splitting them into an 80\% training and a 20\% testing set. \subsection{Design of Transferability Experiments} We conduct several experiments, each corresponding to a particular internal classifier used by \mx{}. In each of these experiments, all \mx{} parameters are kept consistent and only its internal classifier is replaced. We configure the \mx{} parameters based on the findings from the original pilot experiments performed on the EBG 5 dataset. The values for the different \mx{} parameters are specified in Table \ref{tbl:mx-config}. For each experiment, we report the average METEOR score of the obfuscated documents, the transferability rate across our fixed set of adversaries, and the overall attack success rate of that technique. \input{tables/tbl-mx-config} In its default setting, \mx{} stops obfuscation once the mutated document has been misclassified by its attribution classifier. We alter this behavior to allow \mx{} to continue obfuscation until all $M$ iterations have been performed. This alteration is partly motivated by the idea that stopping early after a single successful misclassification is detrimental to the overall goal of transferability to a wider set of adversaries. \subsection{Evaluation Metrics} Originally, \mx{} was evaluated through two metrics that measured the safety and soundness of the obfuscation. While effective for measuring obfuscation against one adversary, they fail to quantify the transferability of the obfuscated samples to multiple adversaries. To alleviate this, we utilize a third metric, the Attack Success Rate (ASR), to capture this information and slightly modify the safety metric to accommodate it. The final evaluation metrics are as follows.
\begin{enumerate} \item \textbf{Evasion Effectiveness:} An obfuscated document generated from an internal attribution classifier effectively evades an adversary if it is misclassified by that particular adversary. We refer to this property of the internal classifier as its \textit{transferability} to an adversary and report it as the percentage of obfuscated documents produced by that classifier that were misclassified by the adversary. For an adversary $i$ that misclassifies $a_i$ out of $n$ obfuscated documents generated by the internal classifier, we measure transferability as: \begin{equation} T_i = \frac{a_i}{n}\times100 \end{equation} \item \textbf{Attack Success Rate:} The attack success rate \cite{regularizedEnsemblesAndTransf2018} measures the overall transferability of the obfuscated documents across the entire set of adversaries. It is an average of all the transferability scores reported for that specific internal classifier. Given a fixed set of adversaries $m$, the attack success rate of classifier $A$ can be reported as: \begin{equation} ASR_A = \frac{\sum_{i=1}^{m}T_i}{m} \end{equation} \item \textbf{Semantic Similarity:} An obfuscated document has to maintain semantic similarity to the original document. As originally used to evaluate \mx{}, we use the METEOR score \cite{denkowski-lavie-2014-meteor} to assess this similarity. The score lies in the range [0, 1] with 1 indicating perfect similarity and 0 indicating the opposite. The final score reported is the average METEOR score of the obfuscated documents, where a higher score implies that the final documents were similar to the original ones. \end{enumerate} \subsection{Obfuscator's Attribution Classifiers} \noindent \textbf{Baseline obfuscator:} We use \mx{} as the baseline obfuscator for our experiments as its generator only requires black-box access to the attribution classifier. 
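As a concrete illustration, the two transferability metrics defined above can be computed as in the following sketch (variable and function names are ours, not taken from the actual implementation):

```python
# Sketch of the evasion-effectiveness (T_i) and attack-success-rate (ASR)
# metrics. `predictions` maps each adversary to the list of author labels it
# assigned to the n obfuscated documents; `true_author` is the real author.

def transferability(adversary_preds, true_author):
    """Percentage of obfuscated documents the adversary misclassified (T_i)."""
    n = len(adversary_preds)
    misclassified = sum(1 for p in adversary_preds if p != true_author)
    return misclassified / n * 100

def attack_success_rate(predictions, true_author):
    """Average transferability across the fixed set of m adversaries (ASR)."""
    scores = [transferability(p, true_author) for p in predictions.values()]
    return sum(scores) / len(scores)
```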
This loose coupling between the two components \cite{DBLP:journals/popets/MahmoodASSZ19} allows the attribution classifier to be easily swapped for another. Additionally, we use the Writeprints \cite{abbasi2008writeprints} feature set throughout the experiments to train the internal attribution classifiers for \mx{}. This feature set incorporates lexical and syntactic features to capture the stylometric properties of the author's writing style. The lexical features include character-level and word-level features such as total words, average word length, and proportions of different character classes, among others. The syntactic features include POS tags, the use of function words, and various punctuation marks. The fitness function for \mx{} takes into account the detection probability of a given attribution classifier $C$ and the semantic similarity between the original and the obfuscated document. We train the two classifiers that were originally used for \mx{}, a \textit{random forest classifier} and a \textit{support vector machine}, on the Writeprints feature set to serve as baselines for comparing the performance of the ensemble. \medskip \noindent \textbf{Writeprints-Static + Ensemble:} We use the same feature set as our baselines to train the ensemble and construct it by training base classifiers on subspaces of the entire feature set. We use a linear SVM as the base classifier owing to its stability and demonstrated use as a base classifier for an ensemble in prior work \cite{ting2011feature}. We use the random subspace method \cite{randomsubspace1998kam} to construct the feature subspaces by randomly choosing distinct features to train each base classifier. The ensemble then reduces the results from these internal classifiers by polling their individual predictions through a majority vote, which gives a uniform weight to all the base classifiers.
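As a concrete illustration of this construction, the sketch below builds a random-subspace ensemble that aggregates its base classifiers through a majority vote. For brevity, a nearest-centroid learner stands in for the linear SVM base classifier used in our experiments; all names are illustrative rather than taken from the actual implementation:

```python
import numpy as np

class SubspaceEnsemble:
    """Random-subspace ensemble sketch: each base classifier sees only
    `subspace_len` randomly chosen features, and the final label is decided
    by a majority vote over the base classifiers."""

    def __init__(self, n_classifiers=10, subspace_len=30, seed=0):
        self.n_classifiers = n_classifiers
        self.subspace_len = subspace_len
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.members_ = []  # (feature indices, per-class centroids)
        n_features = X.shape[1]
        k = min(self.subspace_len, n_features)
        for _ in range(self.n_classifiers):
            # Each base learner gets its own random, distinct feature subset.
            idx = self.rng.choice(n_features, size=k, replace=False)
            centroids = np.stack(
                [X[y == c][:, idx].mean(axis=0) for c in self.classes_]
            )
            self.members_.append((idx, centroids))
        return self

    def predict(self, X):
        votes = np.zeros((X.shape[0], len(self.classes_)), dtype=int)
        for idx, centroids in self.members_:
            # Nearest-centroid prediction restricted to this subspace.
            d = np.linalg.norm(X[:, idx][:, None, :] - centroids[None, :, :], axis=2)
            votes[np.arange(X.shape[0]), d.argmin(axis=1)] += 1
        return self.classes_[votes.argmax(axis=1)]  # majority vote
```

The key properties mirrored here are that each base classifier only ever sees its own randomly chosen $L_s$ features and that every classifier's vote carries equal weight.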
We configure the remaining parameters for the ensemble architecture by conducting small-scale experiments in a variety of settings. Just as before, we use the training portion of the EBG 5 dataset to select appropriate values for the following hyper-parameters: \begin{itemize} \item number of internal classifiers: $I_c \in \{5, 10, 15\}$ \item length of subspaces: $L_s \in \{30, 50, 80\}$ \end{itemize} The results from these experiments show that while a lower value of $L_s$ yields inaccurate individual classifiers, the accuracy of the overall ensemble is much higher. This supports the notion that a robust and highly accurate model can be created from a grouping of weak learners. In light of this, we conservatively set $L_s = 30$ and $I_c = 10$, as we noticed that higher values yield similar results. These settings are retained throughout the entirety of our experiments. \section{Experimental Setup} In this section, we state the assumptions and describe the setup for our experiments. Specifically, we describe the dataset we use, the attribution classifiers used by the obfuscator and the adversaries, the layout of the experiments, and finally the evaluation metrics used to assess the results. \input{sections/experimental-setup/data} \input{sections/experimental-setup/obf-attrib-classifiers} \input{sections/experimental-setup/adversarial-classifiers} \input{sections/experimental-setup/design-transfer-exp} \input{sections/experimental-setup/evaluation-metrics} \section{Introduction} Authorship obfuscation is the process of concealing stylometric pointers in a text document that may reveal the identity of its author. The problem has become increasingly relevant today considering the erosion of privacy due to recent advances in the performance of state-of-the-art authorship attribution approaches.
Sophisticated machine learning models can determine the author of a given text document \cite{juola2010empirical,stolerman2014breaking} using hand-crafted stylometric features \cite{abbasi2008writeprints, brennan2012adversarial, mcdonald2012use, clark2007algorithm, afroz2014doppelganger} or automated features such as word embeddings \cite{ruder2016character, howard2018universal}. State-of-the-art authorship attribution approaches have achieved impressive results in a multitude of settings ranging from social media posts \cite{almishari2014stylometric, overdorf2016blogs, rajapaksha2017identifying} to large-scale settings with up to 100,000 possible authors \cite{narayanan2012feasibility}. The desire to maintain anonymity in this increasingly hostile environment motivates the need for effective authorship obfuscation methods. Obfuscation approaches can be broadly divided into two groups: those that do not rely on feedback from an authorship attribution classifier and those that do require such feedback. In the first group, there are a number of efforts, especially from the PAN digital text forensics initiative \cite{panobfuscation}. These authorship obfuscation approaches mostly use rule-based transformations (e.g., splitting or joining sentences) guided by some general criteria, such as moving the text towards some average point or moving it away from the author's writing patterns, text simplification, machine translation, etc. \cite{Keswani2016AuthorMT,castro2017AuthorMB, karadzhovPAN2016, Potthast2016AuthorOA}. These approaches generally struggle to achieve the appropriate trade-off between evasion effectiveness and preserving text semantics. In the second group, obfuscators that rely on access to an authorship attribution classifier are more relevant to our research. In a seminal work, McDonald et al. \cite{mcdonald2012use} proposed Anonymouth -- an obfuscator that relies on access to the attributor JStylo to guide manual text obfuscation.
A\textsuperscript{4}NT \cite{a4nt2016shetty} proposed a \textit{generative adversarial network} (GAN) based automated approach to obfuscation that also requires access to the attribution classifier. More recently, \mx{} \cite{DBLP:journals/popets/MahmoodASSZ19} used a genetic algorithm and ParChoice \cite{parchoice2020} used combinatorial paraphrasing for automated obfuscation; both require access to an attribution classifier. These methods have shown promise in effectively evading attribution classifiers while reasonably preserving text semantics. While prior authorship obfuscation methods can suitably trade off between evading attribution and preserving semantics, they do not work well when the adversary uses a different attribution classifier than the one used internally by the obfuscator \cite{DBLP:journals/popets/MahmoodASSZ19,parchoice2020,a4nt2016shetty}. However, it is important that authorship obfuscators can protect the author's identity even when the adversary uses a different attribution classifier. In other words, obfuscation should transfer to previously unseen attribution classifiers. The lack of transferability is essentially due to the mismatch between the obfuscator's internal classifier and the adversary's attribution classifier. To address the transferability issue, our key insight is that if an obfuscator can evade a meta-classifier, which is based on multiple base classifiers that target different feature subspaces, it is more likely to evade an unseen attribution classifier. Building on this insight, we propose an ensemble-based approach for transferable authorship obfuscation. We explore the design space of the ensemble using different feature subspaces, base classifiers, and aggregation techniques. The experimental evaluation shows that our ensemble-based authorship obfuscation approach yields state-of-the-art transferability results.
We find that obfuscation by our ensemble approach achieves 1.7$\times$ and 2.1$\times$ better transferability in terms of the attack success rate (ASR) than the baseline RFC and SVM attributors, respectively. The ensemble achieves an average METEOR score of 0.36, which is comparable with the RFC at 0.42 and the SVM at 0.40. We summarize our key contributions and findings as follows: \begin{enumerate} \item We explore the problem of transferability of authorship obfuscation against unseen attribution classifiers. \item We propose an ensemble approach that consists of multiple base classifiers, each capturing different feature subspaces, to guide automated text obfuscation. \item We evaluate the evasion effectiveness, semantic preservation, and transferability of the ensemble obfuscator and show that it achieves much better transferability against unseen attribution classifiers than prior approaches. \end{enumerate} \section{Preliminaries \& Methods} \begin{figure*}[t] \includegraphics[width=\textwidth,keepaspectratio]{Black-box.pdf} \caption{Overview of the threat model involving an obfuscator and multiple adversaries. The obfuscator consists of two components: a generator and an internal attribution classifier. The generator generates an obfuscation and queries the internal classifier for feedback on its probability of detection. This repeats for $M$ iterations and then, if the internal classifier is evaded, the final document is verified against the adversaries.} \label{fig:black-box} \end{figure*} \subsection{Authorship Attribution vs. Obfuscation} Stylometry is the analysis of an author's writing style that helps distinguish them from other authors. For example, \textit{writeprints} \cite{abbasi2008writeprints} is a well-known stylometric feature set that has been used to analyze writing style for the sake of authorship attribution.
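As a rough illustration of the kind of lexical features such a feature set contains, the following toy extractor computes a handful of Writeprints-style statistics (a simplified sketch, not the actual Writeprints implementation):

```python
import string
from collections import Counter

def lexical_features(text):
    """Toy extractor for a few Writeprints-style lexical features
    (illustrative only; the real feature set is far richer)."""
    words = text.split()
    chars = Counter(text)
    n_chars = max(len(text), 1)
    n_words = max(len(words), 1)
    word_counts = Counter(w.lower() for w in words)
    return {
        "total_words": len(words),
        "avg_word_length": sum(len(w) for w in words) / n_words,
        # Proportions of character classes over all characters.
        "digit_ratio": sum(chars[c] for c in string.digits) / n_chars,
        "upper_ratio": sum(chars[c] for c in string.ascii_uppercase) / n_chars,
        # Hapax ratio: fraction of words that occur exactly once.
        "hapax_ratio": sum(1 for c in word_counts.values() if c == 1) / n_words,
    }
```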
The primary goal of authorship obfuscation is to evade attribution by concealing such stylometric features in the document while retaining its original meaning. Early approaches such as Anonymouth \cite{mcdonald2012use} highlighted the distinctive stylometric properties of the text that could then be modified by the user to evade attribution. Follow-up work at PAN-CLEF aimed to automatically obfuscate documents using simple predefined rules. For example, Mansoorizadeh et al. \cite{mansoorizadehPAN2016} used WordNet to identify synonyms for the words most commonly used by the author and replaced them with similar words. Castro et al. \cite{castro2017AuthorMB} used sentence simplification techniques, such as replacing contractions with their expansions, to obfuscate a document. Keswani et al. \cite{Keswani2016AuthorMT} used round-trip translation ($English \rightarrow German \rightarrow French \rightarrow English$) to obfuscate a document. While these automated approaches managed to evade attribution, they severely compromised the obfuscated text's semantics. These approaches, rather unsuccessfully, navigate the trade-off between evading attribution and preserving semantics \cite{DBLP:journals/popets/MahmoodASSZ19}. Recent work such as A\textsuperscript{4}NT \cite{a4nt2016shetty}, \mx \cite{DBLP:journals/popets/MahmoodASSZ19}, and ParChoice \cite{parchoice2020} employs more sophisticated adversarial obfuscation to evade authorship attribution classifiers. Their threat model assumes that the obfuscator can query the adversary's attribution classifier to guide obfuscation. For example, A\textsuperscript{4}NT uses a \textit{generative adversarial network} (GAN) for obfuscation that requires white-box access to the adversary's attribution classifier. \mx{} uses a genetic algorithm for obfuscation that requires black-box access to the adversary's attribution classifier.
ParChoice uses combinatorial paraphrasing for obfuscation that requires black-box access to the adversary's attribution classifier. While these obfuscation approaches achieve a better trade-off between attribution evasion and preserving semantics, they all assume white/black-box access to the adversary's attribution classifier. This key assumption limits their effectiveness in the real world because the adversary's attribution classifier might be different or unknown. For example, the evasion effectiveness of \mx{} drops drastically when the adversary uses a different attribution classifier than assumed by \mx{} \cite{DBLP:journals/popets/MahmoodASSZ19}. Similarly, an adversarially retrained attribution classifier is resistant to obfuscation by ParChoice using the original classifier \cite{parchoice2020}. This lack of \textit{transferability} to unseen attribution classifiers has major ramifications in the real world, as the obfuscator's effectiveness is questionable when the adversary happens to use a different attribution classifier. Figure \ref{fig:black-box} provides an overview of this threat model involving the obfuscator and multiple unseen adversaries. \subsection{Problem Statement} The obfuscator seeks to obfuscate the stylometric properties of an input document $D$ of author $A$ by modifying its text to produce an obfuscated document $D'$ such that the attributor incorrectly classifies the obfuscated document to another author $A'\neq A$. The state-of-the-art authorship obfuscators mainly consist of two components: a generator and an internal authorship attribution classifier $C$. The generator modifies the input document based on some rules and queries the internal classifier to predict whether these modifications would degrade the likelihood of successful authorship attribution.
The two components work in tandem for $M$ iterations to progressively obfuscate the input document by generating new obfuscation samples and measuring the degradation in authorship attribution by $C$. It is noteworthy that the adversary might use a different authorship attribution classifier $C'\neq C$. There could, in fact, be multiple adversaries in this setting, with each using a different attribution classifier $C'$ than the obfuscator's internal classifier $C$. The primary goal of the obfuscator is to obfuscate an input document $D \rightarrow D'$ using its internal classifier $C$ such that it evades attribution by the adversary classifier $C'$. This problem is also referred to as \textit{transferability} in the field of adversarial machine learning. \subsection{Approach} \medskip \noindent \textbf{Intuition.} The obfuscator relies on feedback from its internal classifier as a proxy to identify suitable transformations that can help evade attribution by the adversary's classifier. These transformations essentially aim to move the document to the wrong side of the decision boundary, which partitions the different author classes, of the obfuscator's internal classifier. Since these transformations are specific to the decision boundary of the obfuscator's internal classifier, they may not achieve the same result on the adversary's attribution classifier. When they do not, the obfuscated document would evade the attribution classifier of the obfuscator but not that of the adversary. These differences in the decision boundaries of the two classifiers could be due to differences in their emphasis on different features; thus, transformations that targeted a certain feature emphasized by one classifier might be rendered useless for the other classifier.
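The generator/internal-classifier loop described in the problem statement can be sketched as follows. Here `mutate` and the confidence interface are hypothetical stand-ins for the generator and the internal classifier $C$; the real fitness function would also account for semantic similarity:

```python
# Minimal sketch of the iterative obfuscation loop: for M iterations, mutate
# the document and keep the variant that most lowers the internal
# classifier's confidence in the true author. All names are illustrative.

def obfuscate(document, clf, mutate, true_author, M=25):
    best = document
    best_conf = clf.predict_proba(best)[true_author]
    for _ in range(M):
        candidate = mutate(best)
        conf = clf.predict_proba(candidate)[true_author]
        if conf < best_conf:
            best, best_conf = candidate, conf
    return best
```

This sketch runs all $M$ iterations; a stopping criterion, such as halting on the first misclassification, can be layered on top.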
To address this issue, our key insight is that if an obfuscator can evade a meta-classifier, whose decision boundary is based on the decision boundaries of multiple base classifiers, it is more likely to evade those base classifiers. We hypothesize that a meta-classifier consisting of multiple base classifiers, each emphasizing different features, will better capture the relative importance of various features. Intuitively, when used as the internal attribution classifier, this \textit{ensemble} of base classifiers can provide a more nuanced view of the entire feature space and can classify the document in a manner that essentially averages the decision boundaries of the base classifiers. \begin{figure} \includegraphics[width=0.48\textwidth,keepaspectratio]{Ensemble.pdf} \caption{Ensemble architecture for the feature subspacing technique. The original feature space is split into subspaces that are then used to train the base classifiers. The outputs of the base classifiers are then aggregated for the final prediction.} \label{fig:ensemble} \end{figure} \medskip \noindent \textbf{Ensemble Approach.} An ensemble is a learning algorithm that takes a set of classifiers and uses their individual outputs to make the final classification for a given input. The classifiers in this set are referred to as the base classifiers for the ensemble. The number of base classifiers affects how the model fits the training set: too few and the model is likely to underfit, and too many will likely result in overfitting. An efficient number can be determined using cross-validation, though it is a time-consuming exercise to train multiple ensembles and validate their results \cite{kyaw2016determineweaklearners}. The base classifiers can be different classifiers trained on the same training set, or they could be the same and trained on different subsets of the training set (a technique known as \textit{bagging}) or even on subspaces of the feature set \cite{ting2011feature}.
The outputs of the base classifiers are then polled by the ensemble through either a majority vote or by training another classifier (a technique called \textit{stacking}) \cite{ensemble2020dietterich}. A majority vote gives uniform weight to the output of each base classifier whereas stacking causes the weights to vary, as it can learn to downplay classifiers that are inaccurate more often. While these base classifiers might not be very accurate on their own, the ensemble can collectively capitalize on their knowledge and make more accurate predictions. We construct our ensemble using the feature subspace method, which we describe as follows. A subspace is a subset of the entire universal set of features that are available to the classifier. We train the base classifiers of the ensemble on different subspaces of the feature set. The goal of using a subspace of features is to train a base learner that is specialized in that distinct and local set of features. This is motivated by our findings on feature importance and decision boundaries, which we discuss later in Section \ref{sec:discussion}. The subspaces can be selected randomly \cite{randomsubspace1998kam}, through sampling \cite{stratifiedsampling2013ye}, or through feature selection techniques \cite{greedyfeature2013dyer}. Figure \ref{fig:ensemble} illustrates the architecture of our proposed ensemble. The original feature space is divided into multiple subspaces which are then used to train the base classifiers. The outputs from these base classifiers are then aggregated to produce the final classification of the ensemble for the given input. \section{Related Work} We survey related research on the transferability of adversarial attacks designed to evade machine learning classifiers. There is a rich body of literature in the image classification context on the transferability of adversarial attacks in both white-box and black-box settings. Biggio et al. \cite{Biggio2013adversarialexamples} and Szegedy et al.
\cite{Szegedy2014adversarialexample} first showed that an adversary can launch attacks by creating minor perturbations in the input that cause machine learning models to misclassify it. Follow-up work has studied the practicality of these adversarial attacks in the real world by studying whether they can transfer even when the adversary might not have complete access to the machine learning classifier (e.g., \cite{DBLP:journals/corr/LiuCLS16,Papernot2017blackbox,suciu2018failtransferability,Demontis2019whytransfer}). For example, Papernot et al. \cite{Papernot2017blackbox} proposed a black-box attack against a variety of machine learning approaches including deep neural networks, logistic regression, SVM, decision tree, and nearest neighbors, outperforming existing attacks in terms of transferability. Suciu et al. \cite{suciu2018failtransferability} and Demontis et al. \cite{Demontis2019whytransfer} studied if and why adversarial attacks (do not) transfer in real-world settings. They showed that the target model's complexity and its alignment with the adversary's source model significantly impact the transferability of adversarial attacks. Adversarial attacks in the continuous vision/image domain are different from adversarial attacks in the discrete text domain. Much of prior work on adversarial attacks is focused on the vision domain and cannot be easily adapted to the text domain \cite{zhang20adversarialattacknlp}. Adversarial attacks on text classification models mostly work by simply misspelling certain words \cite{Gao2018adversarialtext,li2010textbugger}. While these attacks are effective, they are easy to counter by standard pre-processing steps such as fixing misspelled, out-of-vocabulary words. Jin et al. (TextFooler) \cite{Jin2020BERTrobust} and Garg et al. (BAE) \cite{garg2020bertadversarialexamples} proposed black-box adversarial attacks on text classification models by replacing certain words using word embeddings and language models, respectively.
The evaluation showed that these black-box adversarial attacks at best only moderately transfer to unseen models. Recent adversarial attacks on machine learning based authorship attribution models employing feedback from authorship classifiers are also quite similar. Mahmood et al. (\mx{}) proposed a black-box adversarial attack that replaced selected words using word embeddings based on a genetic algorithm \cite{DBLP:journals/popets/MahmoodASSZ19}. Grondahl et al. \cite{parchoice2020} also proposed a similar black-box adversarial attack (ParChoice) that used paraphrasing to replace selected texts. While these adversarial attack approaches are effective at authorship obfuscation, they do not transfer well against unseen authorship classifiers. Transferable authorship obfuscation in such settings remains an open challenge that we address in our work. Another relevant line of research has investigated using ensembles to improve the transferability of adversarial attacks. For example, Liu et al. \cite{DBLP:journals/corr/LiuCLS16} showed that if an adversarial attack succeeds in evading an ensemble of models, it will have better transferability because the source ensemble model and the target models are more likely to share decision boundaries. Most recently, Che et al. \cite{che2020ensembleadversarialattack} studied the effectiveness of different ensemble strategies in improving the transferability of adversarial attacks. They also conclude that an attack model that evades an ensemble of multiple source models is more likely to transfer to different target models. \subsection{Discussion} \label{sec:discussion} We now study the various decisions we have made concerning the ensemble and try to understand their impact on its inner workings. \medskip \noindent \textbf{Impact of ensemble diversity:} It is widely understood that combining a diverse set of individual classifiers leads to a more robust ensemble \cite{dietterich2000ensembles}.
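One common way to quantify such diversity is the non-pairwise entropy measure of Kuncheva and Whitaker, sketched below (an illustrative re-implementation):

```python
import math

def ensemble_entropy(correct_votes, L):
    """Non-pairwise entropy diversity measure. `correct_votes[j]` is the
    number of the L base classifiers that labeled sample j correctly.
    Returns a value in [0, 1]: 0 = full agreement among the classifiers,
    1 = maximal disagreement."""
    denom = L - math.ceil(L / 2)
    return sum(min(l, L - l) / denom for l in correct_votes) / len(correct_votes)
```

For example, with $L = 5$ classifiers, a sample on which the vote splits 3--2 contributes the maximal value of 1, whereas a unanimous sample contributes 0.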
While the ensemble is diverse in the sense that the base classifiers are trained on distinct subspaces, we instead focus on the predictions of the base classifiers to measure diversity. Intuitively, the diversity of an ensemble is the difference in the predictions of the individual members that constitute it \cite{Kuncheva2004MeasuresOD}. To understand how the diversity of the ensemble impacts obfuscation, we train multiple ensembles with varying degrees of diversity and compare their performance with each other. To measure the diversity of an ensemble, we use the non-pairwise metric of entropy $E$ \cite{Kuncheva2004MeasuresOD}. The entropy value of an ensemble lies in the range $[0, 1]$: a value closer to 0 means that the individual classifiers mostly agree, and a value closer to 1 means that they mostly disagree with one another. This measure assumes that a diverse set of classifiers will disagree with one another, as opposed to correlated classifiers which will agree more often. We train multiple ensembles and control for their diversity by selecting appropriate base classifiers. Specifically, we create 4 bins of entropy values $E \in \{0, 0.25, 0.5, 0.75\}$ and train 10 ensembles for each bin, each ensemble having approximately the same entropy as the bin it is assigned to. We then conduct 40 experimental runs of \mx{}, and in each run use one of these ensembles as the internal classifier to obfuscate documents and measure the attack success rate of these documents against our set of adversaries. \begin{figure} \centering \fbox{\includegraphics[width=0.4\textwidth,keepaspectratio]{Diversity-Boxplot.pdf}} \caption{The attack success rates of the ensembles belonging to each entropy bin. The y-axis represents the attack success rates while the x-axis ticks represent the entropy value of the bin.
Note the inverse relation between ASR and entropy: samples sourced from more diverse ensembles fail to transfer well to other adversaries.} \label{fig:div-box-plot} \end{figure} Figure \ref{fig:div-box-plot} shows the results from these experiments. The y-axis represents the attack success rates while the x-axis ticks represent the entropy value of the bin. Contrary to our intuition, we notice that ensembles with higher values of entropy (more diverse) had lower transferability while ensembles with lower entropy (less diverse) performed comparatively better. We explain our interpretation of these results as follows. Recalling that \mx{} uses the confidence score of the internal classifier to make decisions regarding obfuscation, we investigate the impact of diversity on the accuracy of the ensemble and the confidence of the classifier in its classifications. To increase the diversity of an ensemble, we need to promote disagreement between the individual classifiers that comprise it. Consequently, this makes the ensemble overall less confident in its final classification even when it is correct. Since \mx{} uses this confidence score as an indicator of attribution, a lower score leads to poorer decision making on \mx{}'s part and reduces the quality of obfuscation. We note that this problem is likely unique to how \mx{} operates and not necessarily an artefact of using ensembles. In future work, it will be interesting to explore how diversity impacts ensemble-based transferability in different settings that are not bound by this restriction. \begin{figure*}[ht!] \centering \includegraphics[width=\textwidth,keepaspectratio]{Base-Classifiers-DB.pdf} \caption{The decision boundaries of the base classifiers inside the ensemble.} \label{fig:base-classifier-db} \end{figure*} \medskip \noindent \textbf{Impact of feature subspaces:} Our main set of experiments considers an ensemble of base classifiers trained on randomly selected subspaces of the feature set.
We now consider more systematic approaches for constructing the subspaces and see how they affect transferability. At a higher level, the Writeprints feature set incorporates lexical and syntactic features that are qualitatively distinct. This distinction indicates the presence of a \textit{contextual subspace} within the feature set. More specifically, there are 9 distinct subspaces, as follows: \textit{frequency of special characters}, \textit{letters}, \textit{digits}, \textit{parts-of-speech tags} and \textit{punctuation}, \textit{most common letter bigrams and trigrams}, \textit{percentage of certain function words}, and the \textit{ratio of unique words (hapax ratio)} \cite{abbasi2008writeprints}. We train the base classifiers on this division of subspaces and measure the attack success rate of the resultant ensemble, noting that, in contrast to the random subspace method, this yields base classifiers that have different values of $L_s$. Additionally, we explore feature selection techniques to construct the subspaces. Using a one-way ANOVA test \cite{elssied2014anova}, we measure the dependency between the features and the author label, and use the highest-ranking features to train the first base classifier. We repeat this for the next base classifier by considering the remaining set of features, and so on for the rest of the classifiers. This yields base classifiers of varying performance, as the initial ones are highly accurate but the accuracy gradually drops as the remaining features are insufficient predictors. For this particular experiment, we set $I_c = 8$ and $L_s = 20$ for a consistent distribution of features. The results of the experiments are as follows: the contextual subspace ensemble yields an ASR of 37.1\% whereas the feature selection subspace ensemble yields an ASR of 34.7\%. Both results are comparable with those of the random subspace ensemble, which had an ASR of 38.0\%.
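The ANOVA-based construction above can be sketched as follows (an illustrative re-implementation of the F-statistic; a library routine such as scikit-learn's `f_classif` would normally be used):

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F-statistic of each feature against the author label:
    between-class variance divided by within-class variance."""
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ssb = sum((y == c).sum() * (X[y == c].mean(axis=0) - grand) ** 2
              for c in classes)
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
              for c in classes)
    return (ssb / (len(classes) - 1)) / (ssw / (len(y) - len(classes)))

def ranked_subspaces(X, y, n_subspaces=8, subspace_len=20):
    """Partition features into disjoint subspaces by descending F-score: the
    first base classifier gets the top-ranked block, the next classifier the
    following block, and so on."""
    order = np.argsort(anova_f_scores(X, y))[::-1]
    return [order[i * subspace_len:(i + 1) * subspace_len]
            for i in range(n_subspaces)]
```

Each base classifier then trains on one of the returned index blocks, with the first classifier receiving the strongest predictors.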
Considering the security aspect of the techniques, the contextual subspaces approach is relatively more risky. Since the features composing these subspaces are easy to identify, an adversary can undo the effects of obfuscation through adversarial training: building a classifier to recognize obfuscation by training it on the obfuscated documents generated by the internal classifier \cite{ehae2014}. In contrast, an ensemble built using random subspaces offers a good balance: it achieves a commensurable degree of transferability and provides a good defence against adversarial training, as its random nature is unpredictable for the adversary. \medskip \noindent \textbf{Feature importance and decision boundaries:} While the goal of all classifiers is to map the input document to an author, there are fundamental differences in the way they operate and actually classify the data. These differences highlight the notion of feature importance: some features are more important to a particular type of classifier than to another. We now interpret these models to identify the features they consider important and see how this affects transferability. RFC is a collection of decision trees that counts the votes of the individual outputs of the trees to make the final classification. The decision trees consist of several nodes that split the training set into subsets based on the values of certain features. Arguably, features that are used more often for splitting, and that split a sizable portion of the training set compared to others, bear greater significance for the model. This is known as the Gini importance of that feature \cite{breiman2017classification}: the number of times a particular feature was used for a split, weighted by the number of samples it splits. SVM classifies the data by learning a hyperplane between the data points that separates the different class boundaries.
In a linear SVM, this hyperplane represents the points at which the distance between the class boundaries is maximum. Since the coefficients of this hyperplane are associated with the features, their absolute values represent the significance of the corresponding feature relative to the other features. In a multi-class setting, the SVM has multiple hyperplanes separating each of the classes and each hyperplane has its own set of coefficients. \input{tables/tbl-feature-importance} We assess the differences between what features are important for the RFC and SVM. Table \ref{tbl:feat-importance} lists the top 5 features for the baseline RFC and one of the SVM hyperplanes. We note that these are different for the two classifiers; moreover, this trend holds even beyond the top 5 features. In a high-dimensional feature space such as Writeprints, this difference in feature emphasis by the classifier amounts to some features losing their relative importance, and thus the obfuscator does not consider their relevance. This highlights a fundamental flaw in the obfuscator: the obfuscation will always be tuned to the features preferred by its internal classifier and fail to transfer to a different classifier emphasizing different features. Our approach of using feature subspaces in an ensemble alleviates this flaw to an extent; base classifiers trained on smaller random sets of features emphasize the importance of those features. Each base classifier then \textit{specializes} in its localized subspace of features and, while it may not be accurate, it is representative of a certain aspect of the feature space that might be of significance to an adversary's classifier. Taking a look at the decision boundaries of the base classifiers in Figure \ref{fig:base-classifier-db} helps explain how they improve transferability. We use Principal Component Analysis (PCA) \cite{WOLD1987PCA} to reduce the higher-dimensional feature space to two dimensions for plotting the boundaries.
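A sketch of how such decision-region plots are produced: project the documents to two dimensions with PCA, then label a grid of points covering the projection. For simplicity this sketch refits the classifier on the 2-D projection, whereas the figures in the text show the (approximated) regions of classifiers trained in the full feature space; the plotting itself is omitted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def decision_regions_2d(X, y, grid_steps=100):
    """Project X to 2-D with PCA, fit a classifier on the projection,
    and label a grid covering the projected points; the returned grid
    of labels corresponds to the colored regions in the plots."""
    X2 = PCA(n_components=2).fit_transform(X)
    clf = LinearSVC().fit(X2, y)
    xs = np.linspace(X2[:, 0].min(), X2[:, 0].max(), grid_steps)
    ys = np.linspace(X2[:, 1].min(), X2[:, 1].max(), grid_steps)
    xx, yy = np.meshgrid(xs, ys)
    labels = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    return X2, labels.reshape(xx.shape)
```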
The data points are the documents from the test set projected into the PCA dimensions. The colored regions in the background represent the decision regions of the classifier for that particular label, i.e., points that fall in those regions are classified according to that label. We stress that this two-dimensional projection is merely an approximation of the actual high-dimensional feature space, so some misalignments are expected. Looking at the decision boundaries of the base classifiers, we see that they vary significantly and that some of the classifiers perform better at classifying a particular author than others. The decision boundaries also highlight the limited access the classifiers have to the entire feature space, observable in the disjoint patches belonging to the same decision region. While the projection takes into account the entire feature space, each decision region is based only on the subspace the particular classifier was concerned with. Figure \ref{fig:ensemble-db} shows the decision boundary of the ensemble formed from these base classifiers. We see that the decision region of the ensemble more closely encapsulates the data points than those of the base classifiers. Since the ensemble classifies according to the majority vote of the base classifiers, its decision boundary is approximately the average of all their decision boundaries. The voting mechanism also ensures that the base classifiers are weighted equally so as not to downplay the role of a certain subspace. Therefore, the ensemble capitalizes on the individual knowledge of the base classifiers and effectively serves as a \textit{middle-ground} for the obfuscator to compare against. \begin{figure} \centering \includegraphics[width=0.48\textwidth,keepaspectratio]{Ensemble-DB.pdf} \caption{The decision boundary of the ensemble.
It provides a middle-ground between the boundaries of the base classifiers and serves as an appropriate attribution classifier.} \label{fig:ensemble-db} \end{figure} \subsection{Evaluation} \input{tables/tbl-main-results} The main results of the experiments are presented in Table \ref{tbl:main-results}. The rows correspond to the internal classifier used by \mx{} and the subsequent columns correspond to the classifier used by the adversary and the feature set they are trained on. The cell values are the percentage of documents generated using that method that were misclassified by the adversary's classifier, i.e., the transferability of that method. The final two columns contain the attack success rate (mean of transferability to adversaries) and the mean METEOR score of the technique. Additionally, we reiterate the counter-measure discussed in Section \ref{sec:adv-clf} and train multiple versions of classifiers that exhibit randomness during training. As such, the columns for the Ensemble, MLP, and RFC report the average transferability to different versions of the classifiers. \medskip \noindent \textbf{Impact of attribution classifier:} The transferability achieved by SVM ranges from 1.6\% ($SVM \rightarrow RFC$) to 93.7\% ($SVM \rightarrow SVM$), whereas for RFC it ranges from 5.8\% ($RFC \rightarrow LR$) to 29.1\% ($RFC \rightarrow Ensemble$). In comparison, the ensemble achieves transferability ranging from 15.8\% ($Ensemble \rightarrow LR$) to 71.9\% ($Ensemble \rightarrow Ensemble$). We notice that the cases where the baselines perform better are where the internal classifier and adversary’s classifier are the same ($SVM \rightarrow SVM, RFC \rightarrow RFC$); cases which are fairly trivial and advantageous to \mx{}. 
In fact, the 93.7\% in the case of $SVM \rightarrow SVM$ is unrealistic because the adversary's classifier is exactly the same as \mx{}'s internal classifier, whereas the $RFC \rightarrow RFC$ and $Ensemble \rightarrow Ensemble$ scenarios have been normalized by training multiple instances of the adversary and reporting their average (see Section \ref{sec:adv-clf}). Regardless, the ensemble still manages to outperform the other baseline in these trivial cases. The effects of re-training the ensemble and RFC are also evident in these cases. The average transferability of $RFC \rightarrow RFC$ is quite low at 28.2\%, corroborating the findings of Gröndahl et al. \cite{parchoice2020}. However, this does not appear to hold for the ensemble, as it still reports a fairly high transferability at 71.9\%, indicating its robustness. In the non-trivial cases where the adversary's classifier is different from the internal classifier, the ensemble fares far better than the baselines. In the case of KNN, the ensemble achieves a transferability of 41.6\% compared to the RFC at 19.4\%. In the case of NB, it achieves a transferability of 52.9\%, which is 32.3\% higher than the SVM at 20.6\%. On average, the ensemble achieves 21\% higher transferability than the SVM and 13.8\% higher than the RFC across the set of adversaries where the internal classifier is different. A comparison of the overall performance (trivial and non-trivial) between the ensemble and the baselines shows that the ensemble outperforms both across the wide range of adversarial settings. The overall attack success rate of the ensemble, 38.0\%, is the highest of the three: 1.7$\times$ that of the RFC at 21.7\% and 2.1$\times$ that of the SVM at 18.3\%. The ensemble does not perform as well as the other methods when we compare the METEOR scores.
The samples generated by RFC and SVM retain better semantic similarity to the original documents, with average METEOR scores of 0.42 and 0.40, respectively. In contrast, the ensemble reports an average score of 0.36, indicating that the generated samples differed more from their original selves. We attribute this lower score to the problem of balance between protecting the author's identity and being true to the original content. We believe that the effort to ensure transferability requires more substantial changes to be made to the document, which leads to lower similarity between the source and the obfuscated text, and consequently a lower average METEOR score. The superior effectiveness of the ensemble as an internal classifier is undeniable when compared to the baselines. The high attack success rate and a comparable METEOR score make it a reliable alternative to other conventional classifiers for use alongside an obfuscator like \mx{}. \medskip \noindent \textbf{Impact of feature set:} \mx{} may have an inherent advantage when the obfuscator and adversary's classifiers are trained on the same feature set, as this likely provides the obfuscator unfair insight into how the adversary operates. We test this by observing results from experiments where the adversary is trained on a different feature set and classification technique than the internal classifier. Within the JGAAP setting, the SVM does not perform as well as it does in the other two settings. It surprisingly performs the worst in the $SVM \rightarrow SVM$ case, only achieving a transferability of 5\%. We attribute this to the difference between the kernel functions of the two SVMs; as opposed to the linear kernel used in the internal classifier, JGAAP's default setting uses an RBF kernel. In comparison, the ensemble and the RFC achieve higher degrees of transferability, yielding attack success rates of 34.3\% and 24\% respectively, with the ensemble outperforming the RFC by 10.3\%.
We see similar results in the Basic-9 setting, where the ensemble achieves a transferability that is 6\% higher than the RFC's and almost twice as high as the SVM's. This affirms the idea that the ensemble performs just as well against adversaries trained on a different feature set and outperforms other conventional classifiers. \section{Results} \input{sections/results/evaluation} \input{sections/results/discussion}
\section{Introduction} Standard matrix factorization is used in a wide range of applications including statistics, optimization, and machine learning. To factor a given matrix $M\in\mathbb R^{p\times q}$ of $\mathrm{rank}(M)=r$, we need to find size-$r$ vectors $a_1,...,a_p, b_1,...,b_q\in\mathbb R^r$ such that $M_{ij} = \langle a_i, b_j\rangle$. Oftentimes, however, the matrix at hand as well as the elements in the factorization are required to have certain positivity structure~\cite{FP, GPT13, GPT15}. In statistical mixture models, for instance, we need to find a {\em nonnegative} factorization of the matrix at hand~\cite{CR93,GG12,KRS15,Vavasis09}. In other words, the vectors $a_i$ and $b_j$ need to be nonnegative. In the present article we study a more general type of factorization called positive semidefinite factorization. The vectors $a_i$ and $b_j$ in the decomposition are now replaced by $k\times k$ symmetric positive semidefinite matrices $A_i, B_j\in\mathcal S^k_+$, and $k$ is the size of the positive semidefinite factorization of $M$. Here the space of symmetric $k\times k$ matrices is denoted by $\mathcal S^k$, the cone of $k\times k$ positive semidefinite matrices by $\mathcal S^k_+$, and the inner product on $\mathcal S^k$ is given by $$ \langle A,B \rangle =\text{trace}(AB). $$ \begin{definition}Given a matrix $M\in\mathbb R^{p\times q}_{\geq 0}$ with nonnegative entries, a {\em positive semidefinite (psd) factorization} of size $k$ is a collection of matrices $A_1,...,A_p, B_1,..., B_q\in\mathcal S^{k}_+$ such that $M_{ij} = \langle A_i,B_j\rangle$. The {\em positive semidefinite rank} {\em (psd rank)} of the matrix $M$ is the smallest $k \in \mathbb{N}$ for which such a factorization exists. It is denoted by $\mathrm{rank}_{psd}(M)$. \end{definition} The nonnegativity constraint on the entries of $M$ is natural here since for any two psd matrices $A, B\in\mathcal S^k_+$, it is always the case that $\langle A, B\rangle \geq 0$.
To see this, write $A = UU^T, B = VV^T$ for some $U,V\in\mathbb R^{k\times k}$. Then, $\text{trace}(AB) = \text{trace}((V^TU)(V^TU)^T) \geq 0$ since $(V^TU)(V^TU)^T$ is positive semidefinite. Thus, in order for $M$ to have finite psd rank, its entries need to be nonnegative. Given a polytope $P$, the smallest number $k$ such that the polytope can be written as a projection of a linear slice of $\mathcal S^k_+$ is called the semidefinite extension complexity of $P$. This quantity is also equal to the psd rank of a slack matrix for the polytope $P$. This connection between positive semidefinite rank and semidefinite extension complexity is analogous to the connection between nonnegative rank and linear extension complexity, established in the seminal paper of Yannakakis~\cite{Yannakakis91}. This was the first paper in the line of work providing super-polynomial lower bounds on the linear and semidefinite extension complexities of families of polytopes~\cite{FMPTW,Rothvoss14,LRST,FSP15,LRS15}. The geometric aspects as well as many of the properties of psd rank have been studied in a number of recent articles~\cite{FGPRT,GPT13,GPT15,GPRT,GRT13,GRT}. In this paper we study the space $\mathcal M_{r, k}^{p\times q}$ (or $\mathcal M_{r,k}$ for short) of $p \times q$ nonnegative matrices of rank at most $r$ and psd rank at most $k$. By the Tarski-Seidenberg theorem \cite[Theorem 2.76]{basu2005algorithms}, this set is semialgebraic, i.e. it is defined by finitely many polynomial equations and inequalities, or it is a finite union of such sets. It lies inside the variety $\mathcal V^{p\times q}_r$ (or $\mathcal V_r$ for short) of $p\times q$ matrices of rank at most~$r$. We study the geometry of $\mathcal M_{r, k}$, and in particular, we investigate the {\em boundary} $\mathcal \partial \mathcal M_{r,k}$ of $\mathcal M_{r, k}$ as a subset of $\mathcal V_r$.
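The definition is easy to exercise numerically: sampling psd factors $A_i = U_iU_i^T$ and $B_j = V_jV_j^T$ and assembling $M_{ij} = \langle A_i, B_j\rangle$ always produces an entrywise nonnegative matrix whose rank is at most $\binom{k+1}{2}$, the dimension of $\mathcal S^k$ (a small illustrative sketch, not the code accompanying the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(k):
    """A random k x k positive semidefinite matrix A = U U^T."""
    U = rng.standard_normal((k, k))
    return U @ U.T

def psd_factorization_matrix(As, Bs):
    """Assemble M with M_ij = <A_i, B_j> = trace(A_i B_j)."""
    return np.array([[np.trace(A @ B) for B in Bs] for A in As])

k, p, q = 2, 4, 5
M = psd_factorization_matrix([random_psd(k) for _ in range(p)],
                             [random_psd(k) for _ in range(q)])
# Entries are trace inner products of psd matrices, hence nonnegative,
# and rank(M) <= dim S^2 = 3 since the factors live in S^2.
```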
\begin{definition} The {\em topological boundary} of $\mathcal M_{r, k}$, denoted by $\partial \mathcal M_{r, k}$, is its boundary as a subset of $\mathcal V_{r}$. In other words, it consists of all matrices $M\in\mathcal V_{r}$ such that for every $\epsilon > 0$, the ball with radius $\epsilon$ and center $M$, denoted by $\mathcal B_{\epsilon}(M)$, satisfies the condition that $\mathcal B_{\epsilon}(M)\cap\mathcal V_r$ intersects $\mathcal M_{r, k}$ as well as its complement $\mathcal V_r\setminus \mathcal M_{r, k}$. The {\em algebraic boundary} of $\mathcal M_{r, k}$, denoted by $\overline{\partial\mathcal M_{r, k}}$ is the Zariski closure of $\partial \mathcal M_{r, k}$ over~$\mathbb R$. \end{definition} In Section \ref{sec:psdrank2}, we completely describe $\partial \mathcal M^{p\times q}_{3, 2}$, as well as $\overline{\partial \mathcal M^{p\times q}_{3, 2}}$. More precisely, Corollary \ref{cor:algebraic} shows that a matrix $M$ lies on the boundary $\partial\mathcal M^{p\times q}_{3, 2}$ if and only if in every psd factorization $M_{ij} = \langle A_i, B_j\rangle$, at least three of the matrices $A_1,\dots, A_p$ and at least three of the matrices $B_1,\dots, B_q$ have rank one. In Sections~\ref{sec:geometricInterpretation} and~\ref{sec:higherPsdRank}, we study the general case $\partial\mathcal M^{p\times q}_{r, k}$. Conjecture \ref{thm:k+1k} is an analogue of Corollary \ref{cor:algebraic}. It states that a matrix $M$ lies on the boundary $\partial \mathcal M^{p\times q}_{r, k}$ if and only if in every psd factorization $M_{ij} = \langle A_i, B_j\rangle$, at least $k+1$ of the matrices $A_1,\dots, A_p$ have rank one and at least $k+1$ of the matrices $B_1,\dots, B_q$ have rank one. In Section~\ref{sec:5.1}, we give theoretical evidence supporting this conjecture in the simplest situation where $p=q=r=k+1$. In Section \ref{section:higher_psd_rank}, we present computational examples. 
Our code is available at \[ \hbox{\tt https://github.com/kaiekubjas/psd-rank}\hspace{0.1cm} .\] Our results are based on a geometric interpretation of psd rank, which is explained in Section \ref{sec:preliminaries}. Given a nonnegative matrix $M$ of rank $r$ satisfying $M \mathbf 1=\mathbf 1$, we can associate to it nested polytopes $P\subseteq Q \subseteq \mathbb{R}^{r-1} $. Theorem \ref{thm:spectrahedron}, proved in \cite{GRT}, shows that $M$ has psd rank at most $k$ if and only if we can fit a projection of a slice of the cone of $k\times k$ positive semidefinite matrices $\mathcal{S}^k_+$ between $P$ and $Q$. When we restrict to the case when the rank of $M$ is three, this result states that $M$ has psd rank two if and only if we can nest an ellipse between the two nested polygons $P$ and $Q$ associated to $M$. In Theorem \ref{main_theorem} we show that $M$ lies on the boundary $\partial \mathcal M^{p\times q}_{3, 2}$ if and only if every ellipse that nests between the two polygons $P$ and $Q$, touches at least three vertices of $P$ and at least three edges of $Q$. The statement of Conjecture \ref{conjecture:geometric_description2} is analogous to the statement of Theorem \ref{main_theorem} for the general case~$\partial \mathcal M^{p \times q}_{r, k}$. \subsection*{Acknowledgments} Part of this work was done while the first and second authors were visiting the Simons Institute for the Theory of Computing, UC Berkeley. We thank Kristian Ranestad and Bernd Sturmfels for very helpful discussions, Rekha Thomas for reading the first draft of the article and Sophia Sage Elia for making Figure~\ref{fig:circulant_matrices_3D}. \section{Preliminaries}\label{sec:preliminaries} Many of the basic properties of psd rank have been studied in \cite{FGPRT}. We give a brief overview of the results used in the present article. 
\subsection{Bounds} The psd rank of a matrix is bounded below by the inequality $$\mathrm{rank}(M)\leq \binom{\mathrm{rank}_{\text{psd}}(M)+1}2$$ since one can vectorize the symmetric matrices in a given psd factorization and consider the trace inner product as a dot product. On the other hand, the psd rank is upper bounded by the nonnegative rank $$\mathrm{rank}_{psd}(M)\leq \mathrm{rank}_+(M)$$ since one can obtain a psd factorization from a nonnegative factorization by using diagonal matrices. The psd rank of $M$ can be any integer satisfying these inequalities. \subsection{Geometric description} \subsubsection*{From nested polytopes to nonnegative matrices} We now describe the geometric interpretation of psd rank. Let $P\subseteq \mathbb R^{r-1}$ be a polytope and $Q\subseteq\mathbb R^{r-1}$ be a polyhedron such that $P\subseteq Q$. Assume that $P = \mathrm{conv}\{v_1,...,v_p\}$ and $Q$ is given by the inequality representation $Q = \{x\in\mathbb R^{r-1} : h_j^Tx \leq z_j, j=1,...,q\}$, where $v_1,...,v_p, h_1,...,h_q\in\mathbb R^{r-1}$ and $z_1,\dots, z_q\in\mathbb R$. The {\em generalized slack matrix} of the pair $P, Q$, denoted by $S_{P,Q}$, is the $p\times q$ matrix whose $(i,j)$-th entry is $z_j - h_j^T v_i$. \begin{remark} The generalized slack matrix depends on the representations of $P$ and $Q$ as the convex hull of finitely many points and as the intersection of finitely many half-spaces, whereas the slack matrix depends only on $P$ and $Q$. We will abuse the notation and write $S_{P,Q}$ for the generalized slack matrix since, by the next result, $\mathrm{rank}_{psd}(S_{P, Q})$ is independent of the representations of $P$ and $Q$. \end{remark} \begin{theorem}[Proposition~3.6 in~\cite{GRT}]\label{thm:spectrahedron} Let $P\subset \mathbb R^{r-1}$ be a polytope and $Q \subseteq \mathbb R^{r-1}$ a polyhedron such that $P\subseteq Q$.
Then, $\mathrm{rank}_{psd}(S_{P, Q})$ is the smallest integer $k$ for which there exists an affine subspace $L$ of $\mathcal S^k$ and a linear map $\pi$ such that $P\subseteq \pi(L\cap\mathcal S^k_+) \subseteq Q$. \end{theorem} A {\em spectrahedron} of size $k$ is an affine slice of the cone $\mathcal S_+^k$ of $k\times k$ positive semidefinite matrices. A {\em spectrahedral shadow} of size $k$ is a projection of a spectrahedron of size $k$. Therefore, Theorem \ref{thm:spectrahedron} states that the matrix $S_{P, Q}$ has psd rank at most $k$ if and only if one can fit a spectrahedral shadow of size $k$ between $P$ and $Q$. \begin{remark} Given $M$, the polytopes $P$ and $Q$ are not unique, but the statement of Theorem~\ref{thm:spectrahedron} holds regardless of which pair $P, Q$ with $M = S_{P, Q}$ is chosen. \end{remark} \subsubsection*{From nonnegative matrices to nested polytopes} Given a $p \times q$ nonnegative matrix $M$, we can assume that it contains no zero rows, as removing zero rows does not change its psd rank. Secondly, we may assume that $\mathbf 1$ is contained in the column span of $M$, as scaling its rows by scalars also keeps the psd rank fixed. Consider a rank-size factorization $M=AB$ with $A$ having rows $A_i=(a_i^T,1)$. Let $$ P=\text{conv}(a_1,\ldots,a_p) \text{ and } Q=\{x \in \mathbb{R}^{r-1}:(x^T,1)B \geq 0\}. $$ Then $P \subseteq Q$ and $S_{P,Q}=M$. Without loss of generality, we may further assume that $M\mathbf 1 = \mathbf 1$ by scaling the rows of $M$ by its row sums. The following lemma shows that in this case we can choose $P$ and $Q$ to be bounded. \begin{lemma}[Lemma~4.1 in~\cite{FGPRT}]\label{lem:nest} Let $M\in\mathbb R^{p\times q}_{\geq 0}$ be a nonnegative matrix and assume that $M\mathbf 1 = \mathbf 1$. Let $\mathrm{rank}(M) = r$. Then, there exist polytopes $P, Q\subseteq \mathbb R^{r-1}$ such that $P\subseteq Q$ and $M$ is the slack matrix of the pair $P, Q$.
\end{lemma} \subsubsection*{The geometry of $\mathcal M_{r,k}^{p\times q}$} A point $M \in \mathcal M^{p\times q}_{r, k}$ is an {\em interior point} of $\mathcal M^{p\times q}_{r, k}$ if there is an open ball $B_{\epsilon}(M) \subset \mathbb R^{p \times q}$ that satisfies $B_{\epsilon}(M) \cap \mathcal V^{p\times q}_{r} = B_{\epsilon}(M) \cap \mathcal M^{p\times q}_{r, k}$. By the following lemma, we can check whether a matrix lies in the interior or boundary of $\mathcal M_{r,k}^{p\times q}$ by checking this for its rescaling that satisfies $M\mathbf 1=\mathbf1$. \begin{lemma}\label{lem:rescale} A matrix $M\in\mathbb R^{p\times q}_{\geq 0}$ without zero rows lies in the interior of $\mathcal M_{r, k}$ if and only if the matrix $N$, obtained from $M$ by rescaling such that $N\mathbf 1=\mathbf1$, lies in the interior of $\mathcal M_{r, k} \cap \{P \in \mathbb R^{p\times q}_{\geq 0} :P\mathbf 1 = \mathbf1\}$ with respect to $\mathcal V_r\cap \{P \in \mathbb R^{p\times q}_{\geq 0} :P\mathbf 1 = \mathbf1\}$. \end{lemma} \begin{proof} First assume that the rescaled matrix $N$ lies in the interior of $\mathcal M_{r, k} \cap R$, where $R=\{P \in \mathbb R^{p\times q}_{\geq 0} :P\mathbf 1 = \mathbf1\}$. Thus, there exists $\epsilon > 0$ such that $\mathcal B_{\epsilon}(N)\cap\mathcal V_r\cap R \subseteq \mathcal M_{r, k} \cap R$. Let $\alpha_1,\dots, \alpha_p$ be the row sums of $M$, i.e. $M\mathbf 1 = \alpha$. Without loss of generality, assume that $0<\alpha_1\leq \alpha_2\leq \cdots \leq \alpha_p$. Then, consider the ball $\mathcal B_{\epsilon\alpha_1}(M)$. If a matrix $M' =M + A\in \mathcal B_{\epsilon\alpha_1}(M)\cap \mathcal V_r$, then, after dividing the rows of $M'$ by $\alpha_1,\dots, \alpha_p$ respectively, we obtain the matrix $N + B$, where $B$ is the rescaled version of $A$. Since $\alpha_1\leq \cdots\leq \alpha_p$, we have $\Vert B\Vert \leq \frac1{\alpha_1}\Vert A\Vert$. Thus $N+B\in\mathcal B_{\epsilon}(N)\cap\mathcal V_r\cap R\subseteq \mathcal M_{r, k}\cap R$.
Since rescaling of the rows by positive numbers does not change the rank or psd rank, we have $M'\in\mathcal M_{r, k}$. Therefore, $\mathcal B_{\epsilon\alpha_1}(M)\cap\mathcal V_r\subseteq \mathcal M_{r, k}$, i.e. $M$ is in the interior of $\mathcal M_{r, k}$. Now, assume that $M$ lies in the interior of $\mathcal M_{r, k}$. Then, there exists $\epsilon > 0$ such that $\mathcal B_{\epsilon}(M)\cap \mathcal V_{r}\subseteq \mathcal M_{r, k}$. Let $M\mathbf 1 = \alpha$, and assume that $0 < \alpha_1\leq \alpha_2\leq \cdots \leq \alpha_p$. Consider the ball $\mathcal B_{\epsilon / \alpha_p}(N)$. If $N' = N + B \in \mathcal B_{\epsilon / \alpha_p}(N) \cap \mathcal{V}_r \cap R$, then after multiplying the rows of $N'$ by $\alpha_1,\dots, \alpha_p$ respectively we obtain the matrix $M' = M + A$, where $A$ is the rescaled version of $B$, and $||A||\leq \alpha_p ||B||$. Thus, $M'\in\mathcal B_{\epsilon}(M)\cap \mathcal V_r\subseteq \mathcal M_{r,k}$. Since rescaling of the rows by positive numbers does not change the rank or the psd rank, we have $N'\in\mathcal M_{r,k}$. Thus, $\mathcal B_{\epsilon/\alpha_p}(N) \cap \mathcal{V}_r \cap R \subseteq \mathcal M_{r, k} \cap R$, so $N$ lies in the interior of $\mathcal M_{r, k} \cap R$. \end{proof} Lemma \ref{lem:rescale} implies that if we want to study the topology of $\mathcal M_{r, k}$ as a subset of $\mathcal V_r$, we can restrict ourselves to the topology of the space $\mathcal M_{r, k}\cap\{P \in \mathbb R^{p\times q}_{\geq 0}: P\mathbf 1 = \mathbf 1\}$ as a subset of $\mathcal V_r\cap\{P \in \mathbb R^{p\times q}_{\geq 0}:P\mathbf 1=\mathbf 1\}$, and Lemma \ref{lem:nest} gives us a recipe for thinking of the elements of this space geometrically.
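This recipe is easy to carry out numerically. Starting from a row-stochastic nonnegative matrix $M$ of rank $r$, the following sketch builds the rank factorization $M=AB$ with the last column of $A$ equal to $\mathbf 1$, reads the vertices $a_i$ of $P$ from the rows of $A$ and the inequality data $(h_j, z_j)$ of $Q$ from the columns of $B$, and checks that the generalized slack matrix with entries $z_j - h_j^T a_i$ recovers $M$ (a generic rank-$3$ example; degenerate column choices are not handled):

```python
import numpy as np

def nested_polytope_data(M, r):
    """From a row-stochastic M of rank r, return vertices a_i of P and
    inequality data (H, z) of Q = {x : H x <= z}, so that the generalized
    slack matrix z_j - h_j^T a_i reproduces M."""
    cols = []
    for j in range(M.shape[1]):  # pick r-1 linearly independent columns
        trial = cols + [M[:, j]]
        if np.linalg.matrix_rank(np.column_stack(trial)) == len(trial):
            cols = trial
        if len(cols) == r - 1:
            break
    A = np.column_stack(cols + [np.ones(M.shape[0])])  # rows (a_i^T, 1)
    B = np.linalg.lstsq(A, M, rcond=None)[0]           # exact: A B = M
    verts = A[:, :-1]                                  # vertices of P
    H, z = -B[:-1, :].T, B[-1, :]                      # Q = {x : H x <= z}
    return verts, H, z

rng = np.random.default_rng(1)
M = rng.random((5, 3)) @ rng.random((3, 6))            # nonnegative, rank 3
M /= M.sum(axis=1, keepdims=True)                      # enforce M 1 = 1
verts, H, z = nested_polytope_data(M, r=3)
slack = z[None, :] - verts @ H.T                       # S_{P,Q}
```

Here \texttt{slack} agrees with $M$ up to floating-point error, and its nonnegativity is exactly the statement $P \subseteq Q$.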
\subsection{Comparison with nonnegative rank} Three different versions of nonnegative matrix factorizations appear in the literature: In~\cite{Vavasis09} Vavasis considered the exact nonnegative factorization, which asks whether a nonnegative matrix $M$ has a nonnegative factorization of size equal to its rank. The geometric version of this question asks whether one can nest a simplex between the polytopes $P$ and $Q$. In~\cite{GG12} Gillis and Glineur defined the restricted nonnegative rank as the minimum value $r$ such that there exist $A \in \mathbb{R}_{\geq 0}^{p \times r}$ and $B \in \mathbb{R}_{\geq 0}^{r \times q}$ with $M=AB$ and $\mathrm{rank}(A)=\mathrm{rank}(M)$. The geometric interpretation of the restricted nonnegative rank asks for the minimal $r$ such that there exist $r$ points whose convex hull can be nested between $P$ and $Q$. The geometric version of the nonnegative rank factorization asks for the minimal $r$ such that there exist $r$ points whose convex hull can be nested between an $(r-1)$-dimensional polytope and a $q$-simplex. These polytopes are not $P$ and $Q$ as defined in this paper. See~\cite[Theorem 3.1]{CR93} for details. In the psd rank case there is no distinction between the psd rank and the restricted psd rank, because taking an intersection with a subspace does not change the size of a spectrahedral shadow, while intersecting a polytope with a subspace can change the number of vertices. Conjecture~\ref{conjecture:spectrahedra_are_enough} also suggests that there is no distinction between the spectrahedron and the spectrahedral shadow case, which we can compare with simplices and polytopes in the nonnegative rank case, or equivalently the exact nonnegative matrix factorization and restricted nonnegative factorization case. \section{Matrices of rank three and psd rank two}\label{section:rank3psdrank2}\label{sec:psdrank2} In this section we study the set $\mathcal M_{3, 2}$ of matrices of rank at most three and psd rank at most two.
We completely characterize its topological and algebraic boundaries $\partial \mathcal M_{3, 2}$ and $\overline{\partial\mathcal M_{3, 2}}$. Consider a matrix $M\in\mathbb R^{p\times q}_{\geq 0}$ of rank three. We get a $2$-polytope $P$ and a $2$-polyhedron $Q$ such that $P \subseteq Q \subset \mathbb{R}^2$. Theorem \ref{thm:spectrahedron} now has the following simpler form. \begin{corollary}[Proposition~4.1 in \cite{GRT}]\label{cor:psd_rank_two} Let $M$ be a nonnegative rank three matrix. Let $P\subseteq Q\subseteq \mathbb R^2$ be a polytope and a polyhedron for which $M = S_{P, Q}$. Then $\mathrm{rank}_{psd}(M) = 2$ if and only if there exists a half-conic such that its convex hull $C$ satisfies $P \subseteq C \subseteq Q$. In particular if $Q$ is bounded, then $\mathrm{rank}_{psd}(M) = 2$ if and only if we can fit an ellipse between $P$ and $Q$. \end{corollary} Half-conics are ellipses, parabolas and connected components of hyperbolas in $\mathbb{R}^2$. If $M\mathbf 1 = \mathbf 1$, then $P$ and $Q$ are bounded and the half-conic in Corollary~\ref{cor:psd_rank_two} is an ellipse. Using this geometric interpretation of psd rank two, we give a condition on when a matrix $M$ lies in the interior of $\mathcal M_{3, 2}$. \begin{lemma}\label{continuity_of_factorizations} Let $M \in \mathbb{R}^{p \times q}$ be such that $M \mathbf 1 =\mathbf 1$ and $\mathrm{rank}(M)=r$. In a small neighborhood of $M$, there exists a continuous map $\mathcal{V}_r \cap \{M \in \mathbb{R}^{p \times q}:M \mathbf 1 = \mathbf 1\} \rightarrow \mathbb{R}^{p \times r} \times \mathbb{R}^{r \times q}, M \mapsto (A,B)$ such that $M=AB$ and the last column of $A$ consists of ones. \end{lemma} \begin{proof} Let $\mathrm{rank} (M)=r$. Consider the rank-size factorization $M=AB$ where $A$ consists of $r-1$ linearly independent columns of $M$ and the column $\mathbf{1}$ such that $\mathbf{1}$ is not in the column span of the $r-1$ columns. 
Then the entries of $B$ are solutions of the linear system of equations $AB=M$. In particular, we can choose $r$ linearly independent rows of $M$ and write down the square system corresponding to the rows. Then each entry of $B$ is of the form $\frac{\det(\cdot)}{\det(\cdot)}$, where the upper determinant is in the entries of $A,M$ and the lower determinant is in the entries of $A$. However, the entries of $A$ are also entries of $M$. Hence, we have constructed a map that is continuous in the neighborhood of $M$ where the set of linearly independent columns and rows used for constructing $A$ and $B$ remain linearly independent. \end{proof} \begin{lemma}\label{lem:interior} Let $M$ be a nonnegative matrix of rank three satisfying $M \mathbf 1 =\mathbf 1$ such that there exist nested polytopes $P$ and $Q$ for which $M = S_{P, Q}$. Then $M$ lies in the interior of $\mathcal M_{3, 2}$ if and only if there exists a region $E$ bounded by an ellipse such that $P\subset E\subset Q$ and the boundary of $E$ does not contain any vertices of $P$. \end{lemma} \begin{proof} By Lemma~\ref{lem:rescale}, we may assume throughout the proof that $M \mathbf 1=\mathbf 1$ and hence $P \subseteq Q$ are bounded. Abusing the terminology, we will call the region bounded by an ellipse an ellipse in this proof. Assume first that $M$ lies in the interior of $\mathcal M_{3, 2}$. By Lemma~\ref{lem:nest} and Corollary~\ref{cor:psd_rank_two} there exists an ellipse $E$ such that $P\subseteq E\subseteq Q$. If the boundary of $E$ does not contain any vertices of $P$, then we are done. Suppose that the boundary of $E$ contains some vertices of $P$. We are going to find another ellipse $E'$ such that $P\subset E'\subset Q$ and the boundary of $E'$ does not contain any vertices of $P$. Since $M$ is in the interior of $\mathcal M_{3, 2}$, none of the entries of $M$ are 0, so the boundary of the polygon $Q$ does not contain any vertices of $P$. 
Moreover, there exists $\epsilon > 0$ such that $\mathcal V_3\cap \mathcal B_{\epsilon}(M)\subset \mathcal M_{3, 2}$. Pick a point in the interior of the polygon $P$ and consider the polygon $tP$ obtained by a homothety centered at the selected point with some $t>1$. Then, $P\subset tP\subseteq Q$ for a small enough $t>1$, and $P$ is strictly contained in $tP$. Now consider the generalized slack matrix of $tP$ and $Q$ and call it $M_t$. We can choose $t$ close enough to 1 so that $M_t\in\mathcal B_{\epsilon}(M)\cap\mathcal V_3\subseteq \mathcal M_{3,2}$. Thus, $M_t$ has psd rank at most two and there exists an ellipse $E'$ such that $tP\subset E'\subset Q$. Therefore $P \subset tP \subset E'\subset Q$ and the boundary of the ellipse $E'$ does not contain any vertices of $P$. Now suppose that there exists an ellipse $E$ and polygons $P$ and $Q$ such that $P\subset E\subset Q$ and the boundary of the ellipse $E$ does not contain any vertices of $P$. It is possible to shrink the ellipse $E$ slightly so that it does not touch any edges of $Q$ either. We obtain an ellipse $E'$ that does not touch any vertices of $P$ and does not touch any edges of $Q$. By Lemma~\ref{continuity_of_factorizations}, for a sufficiently small $\epsilon > 0$ and any matrix $M'\in\mathcal B_{\epsilon}(M)\cap\mathcal V_3 \cap \{M \in \mathbb{R}^{p \times q}:M\mathbf 1 = \mathbf 1\}$ we obtain polyhedra that are small perturbations of $P$ and $Q$, and hence $E'$ is nested between them. Therefore, $M'\in\mathcal M_{3, 2}$ and so $\mathcal B_{\epsilon}(M)\cap\mathcal V_3 \cap \{M \in \mathbb{R}^{p \times q}:M\mathbf 1 = \mathbf 1\} \subseteq \mathcal M_{3,2}$. \end{proof} We can now show how $\mathcal M_{3, 2}$ relates to the variety $\mathcal V_3$. \begin{proposition} The Zariski closure of $\mathcal M_{3,2}^{p\times q}$ over the real numbers is $\mathcal{V}^{p \times q}_3$. \end{proposition} \begin{proof} Suppose that there exists a ball $\mathcal B\subseteq \mathbb R^{p\times q}$ such that $\mathcal B\cap \mathcal{V}_3 \subseteq \mathcal M_{3, 2}$.
This implies that the dimension of $\mathcal M_{3,2}^{p\times q}$ is equal to that of $\mathcal{V}_3$, and since $\mathcal M_{3,2} \subset \mathcal{V}_3$ and $\mathcal{V}_3$ is irreducible \cite[Theorem 2.10]{bruns1988determinantal}, the Zariski closure of $\mathcal M_{3, 2}$ over the real numbers equals $\mathcal{V}_3$. We show how to find such a ball $\mathcal B$. By Lemmas~\ref{continuity_of_factorizations} and~\ref{lem:interior}, it suffices to find nested polygons $P\subseteq Q\subseteq \mathbb R^2$ such that $P$ has $p$ vertices, $Q$ has $q$ edges, and there exists an ellipse nested between them that does not touch the vertices of $P$. Such a configuration certainly exists: for example, we can take a regular $p$-gon $P$ centered at the origin with distance $1$ from the origin to each of its vertices, and a regular $q$-gon $Q$ centered at the origin with distance $5$ from the origin to each of its edges. Then we can fit a circle of radius $2$ centered at the origin between $P$ and $Q$ so that it does not touch the vertices of $P$. \end{proof} \begin{remark} The set of $p \times q$ matrices of psd rank at most $k$ is connected as it is the image under the parametrization map of the connected set $(\mathcal{S}^k_+)^p \times (\mathcal{S}^k_+)^q$. If we also fix the rank, then it is not known if the corresponding set is connected. \end{remark} The following theorem is the main result of this section. \begin{theorem}\label{main_theorem} We describe the topological and algebraic boundaries of $\mathcal M_{3,2}^{p\times q}$. \begin{enumerate} \item[a.] A matrix $M\in \mathcal M_{3,2}^{p\times q}$ satisfying $M \mathbf 1=\mathbf 1$ lies on the topological boundary $\partial\mathcal M_{3,2}^{p\times q}$ if and only if $M_{ij}=0$ for some $i,j$, or each ellipse that fits between the polygons $P$ and $Q$ contains at least three vertices of the inner polygon $P$ and is tangent to at least three edges of the outer polygon $Q$. \item[b.] 
A matrix $M\in\overline{\mathcal M_{3, 2}^{p\times q}} = \mathcal V_3^{p\times q}$ satisfying $M \mathbf 1=\mathbf 1$ lies on the algebraic boundary $\overline{\partial\mathcal M_{3, 2}^{p\times q}}$ if and only if $M_{ij} = 0$ for some $i,j$ or there exists an ellipse that contains at least three vertices of $P$ and is tangent to at least three edges of $Q$. \item[c.] The algebraic boundary of $\mathcal M_{3, 2}^{p\times q}$ is the union of $\binom p3\binom q3+pq$ irreducible components. Besides the $pq$ components $M_{ij}=0$, there are $\binom p3\binom q3$ components each of which is defined by the $4\times 4$ minors of $M$ and one additional polynomial equation with $1035$ terms, homogeneous of degree $24$ in the entries of $M$ and homogeneous of degree $8$ in each row and each column of a $3\times 3$ submatrix of $M$. \end{enumerate} \end{theorem} \begin{proof} Let $\tilde{P}$ and $\tilde{Q}$ be the projective completions of $\text{cone}(P \times \{1\})$ and $\text{cone}(Q \times \{1\})$, i.e. the closures of the images of $\text{cone}(P \times \{1\})-\{0\}$ and $\text{cone}(Q \times \{1\})-\{0\}$ under the map $\mathbb{R}^3 \rightarrow \mathbb P^2, (x,y,z) \mapsto [x:y:z]$. In \cite{Gallier}, $\tilde{P}$ and $\tilde{Q}$ are called projective polyhedra. If $P$ and $Q$ are bounded, there is no need to take the closure. Hence, in this case there is a one-to-one correspondence between statements about incidence relations in the affine and projective settings. In Section~\ref{sec:preliminaries}, we required $A$ to have rows $A_i=(a_i^T,1)$ and defined $P=\text{conv}(a_1,\ldots,a_p)$. Similarly, the last row of $B$ gives the constant terms of the inequalities defining $Q$. Thus $\text{cone}(P \times \{1\})$ is the cone over the rows of $A$ and $\text{cone}(Q \times \{1\})=\{x \in \mathbb{R}^3:x^T B \geq 0\}$. This allows us to define $\tilde{P}$ and $\tilde{Q}$ for a general $M$ (even if $\mathbf 1$ is not in the column span of $M$). 
Since in the projective plane all non-degenerate conics are equivalent, we will use the word ``conic'' instead of ``ellipse''. Abusing the terminology, we will also call the region bounded by a nondegenerate conic a conic in this proof. The region bounded by a nondegenerate conic is determined by the region bounded by the corresponding double cone in $\mathbb{R}^3$. $(a)$ \underline{Only if:} We show the contrapositive of the statement: If all the entries of $M$ satisfying $M \mathbf 1=\mathbf 1$ are positive and there is a conic between $\tilde P$ and $\tilde Q$ whose boundary contains at most two vertices of $\tilde P$ or is tangent to at most two edges of $\tilde Q$, then $M$ lies in the interior of $\mathcal M_{3,2}^{p\times q}$. First, if there is a conic $E$ between $\tilde P$ and $\tilde Q$ whose boundary touches neither of the polytopes, then $M$ is in the interior of $\mathcal M_{3,2}$ by Lemma \ref{lem:interior}. If at most two edges of $\tilde Q$ are tangent to the boundary of the conic $E$, then $\tilde P \subset E \subset \tilde Q$ can be transformed by a projective transformation such that the two tangent edges are $x = 0$ and $y = 0$ and that the points of tangency are $[0:1:1]$ and $[1:0:1]$. We denote the image of $E$ by $\overline{E}$. The equation of the conic $\overline{E}$ has the form $ax^2 + bxy + cy^2 + dxz+eyz+fz^2 = 0$. We know that the only point that lies on the conic $\overline{E}$ with $x=0$ is the point $[0:1:1]$ since $\overline{E}$ touches the line $x=0$ at $[0:1:1]$. If we plug in $x=0$, we get $$cy^2 + eyz+fz^2 = 0.$$ Since this quadratic has the double root $[0:1:1]$, after scaling the equation of $\overline{E}$ we may assume that $cy^2 + eyz + fz^2 = (y-z)^2$. Therefore, $c=1, e=-2, f=1$. Similarly, since $\overline{E}$ touches the line $y=0$ at $[1:0:1]$, when we plug in $y=0$, we get that $ax^2 + dxz+fz^2 = (x-z)^2$, so $a=1, d=-2, f=1$. Thus, the conic $\overline{E}$ has the form $$\{[x:y:z]: x^2 + bxy + y^2 -2xz-2yz + z^2 = 0\},$$ for some $b$. The conic is degenerate if and only if $b=2$. 
Since $E$ is nondegenerate, also $\overline{E}$ is nondegenerate. The double cone corresponding to $\overline{E}$ in $\mathbb{R}^3$ is defined by $x^2 + bxy + y^2 -2xz-2yz + z^2 \leq 0$. Since the planes $x=0$ and $y=0$ are tangent to this double cone and touch it at the points $(0,1,1)$ and $(1,0,1)$, every point of the double cone satisfies $xy \geq 0$, with $xy > 0$ away from the two tangent lines. Rewriting the defining quadratic as $(x+y-z)^2 + (b-2)xy \leq 0$ and evaluating at an interior point of the cone, where the quadratic is negative and $xy>0$, forces $b<2$. For a slightly smaller value of $b$, we obtain a slightly larger double cone. The nondegenerate conic $\overline{E}' \subseteq \mathbb{P}^2$ corresponding to this double cone contains $\overline{E}$ and touches $\overline{E}$ only at the points $[1:0:1]$ and $[0:1:1]$. Let $E'$ be the preimage of $\overline{E}'$ under the projective transformation considered above. We have $\tilde P\subseteq E\subset E'\subseteq \tilde Q$ and the conic $E'$ does not touch $\tilde P$. Thus, by Lemma \ref{lem:interior}, $M$ lies in the interior of $\mathcal M_{3,2}$. The case when $E$ goes through at most two vertices of $\tilde P$ follows by duality. \underline{If:} By Lemma \ref{lem:interior}, if $M\in \mathcal M_{3,2}$ satisfies $M \mathbf 1=\mathbf 1$ and lies in the interior, then there is a conic between $\tilde P$ and $\tilde Q$ that does not touch $\tilde P$. Thus, if every conic nested between $\tilde P$ and $\tilde Q$ contains at least three vertices of $\tilde P$ and touches at least three edges of $\tilde Q$, then $M$ lies on the boundary $\partial\mathcal M_{3, 2}$. $(b),(c)$ If $M \in \mathbb{R}^{p \times q}$ without nonnegativity constraints satisfies $M \mathbf 1=\mathbf 1$, then one can define polytopes $P$ and $Q$ as explained before Lemma~\ref{lem:nest}. The difference is that $P \subseteq Q$ does not hold anymore, and we also might not have $\tilde P \subseteq \tilde Q$. Nevertheless, one can talk about vertices of $\tilde P$ and edges of $\tilde Q$. 
Hence, given three points $a,b,c$ in $\mathbb{P}^2$ and three lines $d,e,f$ in $\mathbb{P}^2$, each given by three homogeneous coordinates, we seek the condition that there exists a conic $X$ such that $a,b,c$ lie on $X$ and $d,e,f$ are tangent to $X$. Let $X=\begin{bmatrix} x_{11} & x_{12} & x_{13} \\ x_{12} & x_{22} & x_{23} \\ x_{13} & x_{23} & x_{33} \end{bmatrix}$ be the matrix of a conic. Then the corresponding conic goes through the points $a,b,c$ if and only if \begin{align}\label{equations_for_points} a^TXa=b^TXb=c^TXc=0. \end{align} Similarly, the lines $d,e,f$ are tangent to the conic if and only if \begin{align}\label{equations_for_lines} d^TYd=e^TYe=f^TYf=0, \end{align} where $XY=I_3$. We seek to eliminate the variables $X$ and $Y$. Let $[a,b,c]$ denote the matrix whose columns are $a,b,c$. First we assume that $[a,b,c]$ is the $3 \times 3$ identity matrix. Then we proceed in two steps: 1) The equations (\ref{equations_for_points}) imply that $x_{11},x_{22},x_{33}$ are zero. We make the corresponding replacements in equations~(\ref{equations_for_lines}). 2) We use~\cite[formula (4.5) on page 48]{Sturmfels02} to compute the resultant of three ternary quadrics, which gives a single polynomial in the entries of $d,e,f$. Now we use invariant theory to obtain the desired polynomial in the general case. Let $g \in \textrm{GL}_3(\mathbb{R})$. The conic $X$ goes through the points $a,b,c$ and touches the lines $d,e,f$ if and only if the conic $g^{-T}Xg^{-1}$ goes through the points $ga,gb,gc$ and touches the lines $g^{-T}d,g^{-T}e,g^{-T}f$. 
Thus our desired polynomial belongs to the ring of invariants $\mathbb{R}[V^3 \oplus V^{*3}]^{\textrm{GL}_3(\mathbb{R})}$ where $V=\mathbb{R}^3$ and the action of $\textrm{GL}_3(\mathbb{R})$ on $V^3 \oplus V^{*3}$ is given by $$g\cdot(a,b,c,d,e,f):=(ga,gb,gc,g^{-T}d,g^{-T}e,g^{-T}f).$$ The First Fundamental Theorem (FFT) states that $\mathbb{R}[V^3 \oplus V^{*3}]^{\textrm{GL}_3(\mathbb{R})}$ is generated by the bilinear functions $(i|j)$ on $V^3 \oplus V^{*3}$ defined by $$(i|j):(a,b,c,d,e,f) \mapsto ([a,b,c]^T [d,e,f])_{ij}.$$ For the FFT see for example~\cite[Chapter 2.1]{KP}. In the special case when $[a,b,c]$ is the $3 \times 3$ identity matrix, $(i|j)$ maps to the $(i,j)$-th entry of $[d,e,f]$. Hence to obtain the desired polynomial in the general case, we replace in the resultant obtained in the special case the entries of the matrix $[d,e,f]$ by the entries of the matrix $[a,b,c]^T [d,e,f]$. \texttt{Maple} code for doing the steps in the previous paragraphs can be found at our website. This program outputs one polynomial with $1035$ terms, homogeneous of degree $8$ in each of the rows and the columns of the matrix $\begin{bmatrix}-&a&-\\-&b&-\\-&c&-\end{bmatrix}\begin{bmatrix}| & | & |\\d & e & f \\| & | & |\end{bmatrix}$. By construction, if this homogeneous polynomial vanishes, the projective polyhedron $\tilde P$ with vertices $a,b,c$ lies inside the projective polyhedron $\tilde Q$ with edges $d,e,f$, and $a,b,c,d,e,f$ are real, then there exists a conic nested between $\tilde P$ and $\tilde Q$ touching $d,e,f$ and containing $a,b,c$. Therefore, the Zariski closure of the condition that the only possible conics that can fit between $\tilde P$ and $\tilde Q$ touch at least three edges of $\tilde Q$ and contain at least three vertices of $\tilde P$ is exactly the condition that there exists a conic that touches at least three edges of $\tilde Q$ and contains at least three vertices of $\tilde P$. This proves $(b)$. 
To prove $(c)$, let $M\in\mathcal V_3$ be such that $M = A B$ and $a, b, c$ are three of the rows of $A$ and $d, e, f$ are three of the columns of $B$. Then, the above-computed polynomial contains variables only from the entries of a $3\times 3$ submatrix of $M$ corresponding to these rows and columns. We can drop the assumption $M \mathbf 1 =\mathbf 1$ here: Scaling a row of $M$ by a constant corresponds to scaling the corresponding row of $A$ by the same constant, which does not influence equations~(\ref{equations_for_points}). For each three rows and three columns of $M$ we have one such polynomial, so the algebraic boundary is given by the union over each three rows and three columns of $M$ of the variety defined by the $4\times 4$ minors of $M$ and the corresponding degree $24$ polynomial with $1035$ terms. \end{proof} Here is an algebraic version of Theorem \ref{main_theorem}. \begin{corollary} \label{cor:algebraic} A matrix $M\in\mathbb R^{p\times q}_{\geq 0}$ satisfying $M \mathbf 1=\mathbf1$ lies on the boundary $\partial \mathcal M_{3, 2}$ if and only if for every size-2 psd factorization $M_{ij} = \langle A_i, B_j\rangle$, at least three of the matrices $A_1,\dots, A_p\in\mathcal S^2_+$ have rank one and at least three of the matrices $B_1,\dots, B_q\in\mathcal S^2_+$ have rank one. \end{corollary} \begin{proof} Suppose that $M\not\in\partial \mathcal M_{3,2}$. Let $P = \mathrm{cone}\{a_1,\dots, a_p\}$ and $Q = \{x \in \mathbb{R}^{r-1}: \langle x,b_j \rangle \geq 0 \text{ for } j=1,\ldots,q\}$ such that $M=S_{P,Q}$. By~\cite[Proposition 4.4]{GRT} and Theorem~\ref{main_theorem}, there exists an invertible linear map $\pi$ such that $P \subseteq \pi(\mathcal S_+^2) \subseteq Q$ and the boundary of $\pi(\mathcal S_+^2)$ contains at most two rays of $P$ or is tangent to at most two facets of $Q$. 
The invertibility of $\pi$ gives $$\pi^{-1}(P) \subseteq \mathcal S_+^2 \subseteq \pi^{-1}(Q),$$ where $\pi^{-1}(P) = \mathrm{cone}\{\pi^{-1}(a_1),\dots, \pi^{-1}(a_p)\}$ and $$\pi^{-1}(Q) = \{x\in L\cap \mathcal S^2 : \langle \pi(x),b_j \rangle \geq 0\}= \{x\in L\cap \mathcal S^2 : \langle x,\pi^T(b_j)\rangle \geq 0\}.$$ Thus $M = S_{\pi^{-1}(P), \pi^{-1}(Q)}$, since $$M_{ij} = \langle a_i,b_j\rangle = \langle \pi(\pi^{-1}(a_i)),b_j\rangle = \langle \pi^{-1}(a_i),\pi^T(b_j)\rangle.$$ The inclusion $\pi^{-1}(P) \subseteq \mathcal S_+^2$ implies that $\pi^{-1}(a_1),\ldots,\pi^{-1}(a_p)$ are psd. Taking the dual of the inclusion $\mathcal S_+^2 \subseteq \pi^{-1}(Q)$ gives that $\pi^T(b_1),\ldots,\pi^T(b_q)$ are psd. Since $\pi$ is invertible, we know that either the boundary of $\mathcal S^2_+$ contains at most two rays of $\pi^{-1}(P)$ or is tangent to at most two facets of $\pi^{-1}(Q)$. Hence $\pi^{-1}(a_1),\ldots,\pi^{-1}(a_p),\pi^T(b_1),\ldots,\pi^T(b_q)$ gives a psd factorization of $M$ in which at most two of $\pi^{-1}(a_1),\ldots,\pi^{-1}(a_p)$ have rank one or at most two of $\pi^T(b_1),\ldots,\pi^T(b_q)$ have rank one. Conversely, suppose that there exists a psd factorization of $M$, given by matrices $A_1,\dots, A_p, B_1,\dots, B_q\in\mathcal S_+^2$, such that at most two of the $A_i$ have rank one. Consider $P = \mathrm{cone}\{A_1,\dots, A_p\}$ and $Q = \{x\in \mathcal S^2 : \langle x,B_j\rangle \geq 0, \forall j=1,\dots, q\}$. Then $P \subseteq \mathcal S_+^2 \subseteq Q$ and the boundary of $\mathcal S_+^2$ contains at most two rays of $P$. Using the inner product preserving bijection between $\mathcal S^2$ and $\mathbb{R}^3$, we can consider all objects in $\mathbb{R}^3$. In particular, the images of $A_1,\dots, A_p, B_1,\dots, B_q$ in $\mathbb{R}^3$ give a rank factorization of $M$. By Theorem~\ref{main_theorem} (a), we have $M\not\in\partial \mathcal M_{3,2}$. \end{proof} We now investigate the topological boundary more thoroughly. 
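Corollary~\ref{cor:algebraic} can be checked numerically on a concrete boundary matrix. The sketch below (Python with \texttt{numpy}; our own illustration, not code accompanying the paper) takes the circulant matrix with diagonal entries $4/6$ and off-diagonal entries $1/6$, which the example below shows to lie on the boundary, builds a rank-two entrywise square root $R$, factors $R = UV^T$, and verifies that the rank-one psd factors $A_i = U_iU_i^T$, $B_j = V_jV_j^T$ realize $M$, so that all six factors of this size-two psd factorization have rank one, as the corollary requires:

```python
import numpy as np

# Circulant boundary matrix (illustrative choice; see the example below)
M = np.array([[4, 1, 1], [1, 4, 1], [1, 1, 4]]) / 6

# A rank-two entrywise (Hadamard) square root of M: all row sums are zero
R = np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]) / np.sqrt(6)
assert np.allclose(R * R, M)

# Factor R = U @ V.T with U, V of shape (3, 2) via a truncated SVD
u, s, vt = np.linalg.svd(R)
U = u[:, :2] * np.sqrt(s[:2])
V = vt[:2, :].T * np.sqrt(s[:2])
assert np.allclose(U @ V.T, R)

# Rank-one psd factors A_i = U_i U_i^T and B_j = V_j V_j^T realize M,
# since <A_i, B_j> = tr(A_i B_j) = (U_i . V_j)^2 = R_ij^2 = M_ij
F = np.array([[(np.outer(U[i], U[i]) * np.outer(V[j], V[j])).sum()
               for j in range(3)] for i in range(3)])
assert np.allclose(F, M)
```

The key identity is $\langle A_i, B_j\rangle = \mathrm{tr}(A_iB_j) = (U_i\cdot V_j)^2 = R_{ij}^2 = M_{ij}$, so a rank-two Hadamard square root of $M$ is exactly a certificate for a size-two psd factorization with all factors of rank one.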
\begin{proposition} Suppose $M \in \mathcal{M}^{p\times q}_{3,2}$ satisfying $M \mathbf 1 = \mathbf 1$ is strictly positive. Then $M$ lies on the topological boundary if and only if there exists a unique ellipse that fits between $P$ and $Q$. \end{proposition} \begin{proof} Abusing the terminology, we will call the region bounded by an ellipse an ellipse in this proof. A matrix in the relative interior of $\mathcal{M}_{3,2}$ will have multiple ellipses nested between $P$ and $Q$: By the only if direction of the proof of Theorem~\ref{main_theorem} part (a), there exists an ellipse that is contained in $Q$ and strictly contains $P$. We can just take slight scalings of this ellipse to get multiple ellipses. This proves the ``if'' direction. For the ``only if'' direction, suppose $M$ lies on the topological boundary and $E_0$ and $E_1$ are two ellipses nested between $P$ and $Q$. Let $E_{1/2}$ be the ellipse determined by averaging the quadratics defining $E_0$ and $E_1$, i.e. \[ E_{1/2} = \left\{ x : q_0(x) + q_1(x) \geq 0 \right\} \textup{ where } E_i = \left\{ x : q_i(x) \geq 0 \right\}. \] It is straightforward to see that $E_{1/2}$ is nested between $P$ and $Q$. Furthermore, if $v$ is a vertex of $P$, then $E_{1/2}$ passes through $v$ if and only if both $E_0$ and $E_1$ pass through $v$. Similarly, if $f$ is a facet of $Q$, then $E_{1/2}$ is tangent to $f$ if and only if $E_0$ and $E_1$ are tangent to $f$ at the same point. By Theorem~\ref{main_theorem}, the ellipse $E_{1/2}$ must pass through three vertices of $P$ and be tangent to three edges of $Q$. Hence, there must exist six distinct points that both $E_0$ and $E_1$ pass through. No three of the six points are collinear, since they all lie on the ellipse $E_0$ and a line meets an ellipse in at most two points. Since five distinct points in general position determine a unique conic, we must have that $E_0 = E_1$. 
\end{proof} \begin{example} \rm In the previous result, we examined the geometric configurations on the boundary of the semialgebraic set coming from strictly positive matrices. The simplest idea for such a matrix is to take two equilateral triangles and expand the inner one until we are on a boundary configuration as in Figure~\ref{fig:two_equilateral_on_the_boundary}. \begin{figure}[H] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.7\textwidth]{twoEquilateralOnTheBoundary} \caption{Boundary configuration\\ \vspace{0.6cm} } \label{fig:two_equilateral_on_the_boundary} \end{subfigure} \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=0.7\textwidth]{notOnTheBoundary} \caption{Interior configuration which also lies on the algebraic boundary $\overline{\partial\mathcal M_{3, 2}}$} \label{fig:not_on_the_boundary} \end{subfigure} \caption{Geometric configurations of matrices in $\mathcal M_{3,2}^{3\times 3}$}\label{fig:boundary_and_interior} \end{figure} This configuration has the slack matrix \begin{equation}\label{circulant_matrix_on_the_boundary} \frac{1}{6} \begin{bmatrix} 4 &1 & 1\\ 1 & 4 &1\\ 1 & 1 & 4 \end{bmatrix}. \end{equation} The $1035$ term boundary polynomial from Theorem~\ref{main_theorem} vanishes on this matrix, as we expect. This matrix lies in the set of $3 \times 3$ circulant matrices which have the form $$ \begin{bmatrix} a & b & c\\ c & a & b\\ b & c & a \end{bmatrix}. $$ It was shown in~\cite[Example 2.7]{FGPRT} that these matrices have psd rank at most two precisely when $a^2+b^2+c^2-2(ab+ac+bc) \leq 0$. As expected, whenever this polynomial vanishes, the $1035$ term boundary polynomial vanishes as well. The matrix~(\ref{circulant_matrix_on_the_boundary}) is a regular point of the hypersurface defined by the boundary polynomial. 
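As a quick numerical sanity check (Python with \texttt{numpy}; our own sketch, independent of the \texttt{Maple} code mentioned above), the criterion of~\cite[Example 2.7]{FGPRT} indeed vanishes on the matrix~(\ref{circulant_matrix_on_the_boundary}), and this matrix admits an entrywise (Hadamard) square root of rank two:

```python
import numpy as np

# Entries of the slack matrix (circulant_matrix_on_the_boundary)
a, b, c = 4/6, 1/6, 1/6

# Criterion from [FGPRT, Example 2.7]: psd rank <= 2 iff this is <= 0
crit = a**2 + b**2 + c**2 - 2*(a*b + a*c + b*c)
assert abs(crit) < 1e-12  # vanishes: the matrix sits on the boundary

# An entrywise (Hadamard) square root of the matrix with rank two
M = np.array([[4, 1, 1], [1, 4, 1], [1, 1, 4]]) / 6
R = np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]) / np.sqrt(6)
assert np.allclose(R * R, M)          # entrywise square recovers M
assert np.linalg.matrix_rank(R) == 2  # rank-two square root
```

The rank-two square root is consistent with Corollary~\ref{cor:algebraic}: it yields a size-two psd factorization of the matrix in which every factor has rank one.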
Figure~\ref{fig:not_on_the_boundary} shows an instance of parameters $a,b,c$ such that the matrix is on the algebraic boundary but not on the topological boundary -- the polynomial vanishes, but the matrix lies in the interior of $\mathcal M_{3, 2}$. We were interested in finding out if the $1035$ term boundary polynomial could be used in an inequality to classify circulant matrices of psd rank at most two. The family of circulant matrices which have $c=1$ and whose psd rank is at most two is depicted in Figure~\ref{fig:circulantBoundary2D}. The boundary polynomial, shown in Figure~\ref{fig:regionsBoundaryPolynomial2D}, takes both positive and negative values on the interior of the space. Figures~\ref{fig:circulantBoundary3D} and~\ref{fig:regionsBoundaryPolynomial3D} show the semialgebraic set and the boundary polynomial in the $3$-dimensional space. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=0.9\textwidth]{circulantBoundary2D} \caption{Circulant matrices of psd rank at most 2} \label{fig:circulantBoundary2D} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{circulantBoundaryPolynomial2D} \caption{The boundary polynomial} \label{fig:regionsBoundaryPolynomial2D} \end{subfigure} \caption{$3 \times 3$ circulant matrices in $\mathbb{R}^2$}\label{fig:circulant_matrices_2D} \end{figure} \begin{figure}[h!] 
\centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{cone1} \caption{Circulant matrices of psd rank at most 2} \label{fig:circulantBoundary3D} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{cone2} \caption{The boundary polynomial} \label{fig:regionsBoundaryPolynomial3D} \end{subfigure} \caption{$3 \times 3$ circulant matrices in $\mathbb{R}^3$}\label{fig:circulant_matrices_3D} \end{figure} \end{example} \section{Matrices of higher psd rank}\label{sec:geometricInterpretation} In Corollary~\ref{cor:algebraic}, we showed that a matrix lies on the boundary $\partial \mathcal M_{3,2}$ if and only if in every psd factorization $M_{ij} = \langle A_i, B_j\rangle$, at least three $A_i$'s and at least three $B_j$'s have rank one. In analogy with this result, we conjecture that a matrix lies on the boundary $\partial \mathcal M_{r,k}$ if and only if in every psd factorization $M_{ij} = \langle A_i, B_j\rangle$, at least $k+1$ matrices $A_i$ and at least $k+1$ matrices $B_j$ have rank one. \begin{conjecture}\label{thm:k+1k} A matrix $M\in\mathbb R^{p\times q}_{\geq 0}$ satisfying $M \mathbf 1=\mathbf1$ lies on the boundary $\partial \mathcal M_{r, k}$ if and only if for every size-$k$ psd factorization $M_{ij} = \langle A_i, B_j\rangle$, at least $k+1$ of the matrices $A_1,\dots, A_p\in\mathcal S^k_+$ have rank one and at least $k+1$ of the matrices $B_1,\dots, B_q\in\mathcal S^k_+$ have rank one. \end{conjecture} Let $M\in\mathbb R^{p \times q}_{\geq 0}$ be a full rank matrix, and let $P\subseteq Q\subseteq \mathbb R^{r-1}$ be nested polytopes such that $M = S_{P, Q}$. By Theorem~\ref{thm:spectrahedron}, the matrix $M$ has psd rank at most $k$ if and only if we can nest a spectrahedral shadow $C$ of size $k$ between $P$ and $Q$. By definition, the spectrahedral shadow $C$ is a linear projection of a spectrahedron $\tilde C = L\cap\mathcal S^k_+$ of size $k$. 
\begin{definition}We say that a vector $v\in C$ lies in the {\em rank $s$ locus} of $C$ if there exists a $k\times k$ psd matrix in $\tilde C$ of rank $s$ that projects onto $v$. \end{definition} The geometric version of Conjecture~\ref{thm:k+1k} is: \begin{conjecture}\label{conjecture:geometric_description2} A matrix $M$ is on the boundary $\partial \mathcal M_{r,k}$ if and only if all spectrahedral shadows $C$ of size $k$ such that $P \subseteq C \subseteq Q$ contain $k+1$ vertices of $P$ at rank one loci and touch $k+1$ facets of $Q$ at rank $k-1$ loci. \end{conjecture} For $r=\binom{k+1}{2}$, one can show similarly to the proof of Corollary~\ref{cor:algebraic} that Conjectures~\ref{thm:k+1k} and~\ref{conjecture:geometric_description2} are equivalent. This case differs from the other cases in that the linear map $\pi$ is invertible. The setting of rank four and psd rank three corresponds to the geometric configuration where a $3$-dimensional spectrahedral shadow of size three is nested between $3$-dimensional polytopes. A detailed study of generic spectrahedral shadows can be found in~\cite{SS14}. \begin{example} \rm We now give an example of a geometric configuration as in Conjecture~\ref{conjecture:geometric_description2}. We stipulate that the vertices of the interior polytope coincide with the nodes of the spectrahedron in Figure~\ref{figure:shadow1} and the facets of the outer polytope touch the boundary of this spectrahedron at rank two loci. In the dual picture, the vertices of the inner polytope lie on the rank one locus depicted in Figure~\ref{figure:shadow3} and the facets of the outer polytope contain the rank two locus of this spectrahedral shadow. 
\begin{figure}[h] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.85\textwidth]{shadow1} \caption{Spectrahedron} \label{figure:shadow1} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{shadow3} \caption{Rank-one locus of the dual shadow} \label{figure:shadow3} \end{subfigure} \caption{3-dimensional spectrahedral shadows} \end{figure} \end{example} We next restate Conjecture~\ref{thm:k+1k} in a special case using Hadamard square roots. \begin{definition} Given a nonnegative matrix $M$, let $\sqrt{M}$ denote a Hadamard square root of $M$ obtained by replacing each entry in $M$ by one of its two possible square roots. The square root rank of a nonnegative matrix $M$, denoted $\text{rank}_{\sqrt{}}(M)$, is the minimum rank of a Hadamard square root of $M$. \end{definition} \begin{lemma}[Lemma~2.4 in~\cite{GRT13}] The smallest $k$ for which a nonnegative real matrix $M$ admits a $\mathcal{S}^k_+$-factorization in which all factors are matrices of rank one is $k=\text{rank}_{\sqrt{}}(M)$. \end{lemma} Hence Conjecture~\ref{thm:k+1k} is equivalent to the statement that a matrix $M\in\mathcal M_{k+1,k}^{(k+1) \times (k+1)}$ lies on the boundary $\partial \mathcal M_{k+1,k}^{(k+1) \times (k+1)}$ if and only if its square root rank is at most $k$. We conclude this section with a conjecture which would lead to a semialgebraic description of $\mathcal M_{r, k}^{p \times q}$. \begin{conjecture}\label{semialgebraic_description} Every matrix $M \in \mathcal{M}^{p \times q}_{r,k}$ has a psd factorization $M_{ij}=\langle A_i,B_j \rangle$ with at least $k$ matrices $A_i$ and $k-1$ matrices $B_j$, or at least $k-1$ matrices $A_i$ and $k$ matrices $B_j$, being rank one. \end{conjecture} If this conjecture were true, there would be $\binom{p}{k}\binom{q}{k-1}+\binom{p}{k-1}\binom{q}{k}$ options for selecting the $2k-1$ rank-one matrices. 
For each such option we would be able to describe the semialgebraic set of all such matrices that have psd rank $k$. \section{Evidence towards Conjecture \ref{thm:k+1k}}\label{sec:higherPsdRank} In this section, we present partial evidence towards proving Conjecture~\ref{thm:k+1k} in the case $p=q=r=k+1$. Section \ref{sec:5.1} is theoretical in nature, while Section \ref{section:higher_psd_rank} exhibits computational results. \subsection{Nested spectrahedra} \label{sec:5.1} By Theorem \ref{thm:spectrahedron}, a matrix $M$ for which $M\mathbf1 = \mathbf1$ has psd rank at most $k$ if and only if we can nest a spectrahedral shadow of size $k$ between the polytopes $P$ and $Q$ corresponding to $M$. In the following lemma, we show that a $(k+1)\times (k+1)$ matrix $M$ has psd rank at most $k$ if and only if we can fit a spectrahedron of size $k$ between $P$ and $Q$. More precisely, we show that if there is a spectrahedral shadow $C$ nested between $P$ and $Q$, then we can find a spectrahedron $C'$ of the same size such that $P\subseteq C'\subseteq C\subseteq Q$. \begin{lemma}\label{lemma:enough_to_consider_spectrahedra} Let $M\in\mathbb R^{(k+1)\times (k+1)}_{\geq 0}$ be a full-rank matrix such that $M\mathbf 1 = \mathbf 1$. Then, $M$ has psd rank at most $k$ if and only if we can nest a spectrahedron of size $k$ between the two polytopes $P$ and $Q$ corresponding to $M$. \end{lemma} \begin{proof} If we can fit a spectrahedron of size $k$ between $P$ and $Q$, then $M$ has psd rank at most $k$. Conversely, suppose that $M$ has psd rank at most $k$. Then there exists a slice $L$ of $\mathcal S^k_+$ and a linear map $\pi$ such that $C = \pi(L\cap \mathcal S^k_+)$ lies between $P$ and $Q$: $$P\subseteq C\subseteq Q.$$ If $\pi$ is a $1:1$ linear map, then the image $C$ is just a linear transformation of a spectrahedron, and is therefore a spectrahedron of the same size. So, assume that $\pi$ is not $1:1$, i.e. it has nontrivial kernel. 
We can write $$L\cap \mathcal S^k_+ = \{(x_1,\dots, x_s) \in \mathbb{R}^s : \sum_{i=1}^s x_iA_i + (1-\sum_{i=1}^s x_i)A_{s+1}\succeq 0\}$$ for some $A_1,\dots, A_{s+1} \in \mathcal{S}^k$. Let $u_1,\dots, u_s$ be an orthonormal basis of $\mathbb R^s$ such that $\pi(u_i)=e_i$ for $i \in \{1,\ldots,k\}$ and $\text{ker}(\pi) = \text{span}(u_{k+1},\dots, u_s)$. Let $U$ be the orthogonal matrix with columns $u_1,\dots,u_s$. Consider new coordinates $y$ such that $x = Uy$. We can write $$L\cap\mathcal S^k_+ = \{Uy \in \mathbb{R}^s : \sum_{i=1}^s y_i B_i + (1 - \sum_{i=1}^s y_i) B_{s+1}\succeq 0\},$$ where $B_1,\dots,B_{s+1}$ are linear combinations of the $A_i$'s. Then $$C = \{(y_1,\dots,y_k) \in \mathbb{R}^k : \exists y_{k+1},\dots,y_s \in \mathbb{R} \text{ s.t.} \sum_{i=1}^s y_i B_i + (1 - \sum_{i=1}^s y_i)B_{s+1}\succeq 0\}.$$ Since $M$ is full rank, we can factor it as $M = AB$, where $A, B\in\mathbb R^{(k+1)\times (k+1)}$ and $$A = \begin{pmatrix} 1 & 0 & \cdots & 0 & 1\\ 0 & 1 & \cdots & 0 & 1\\ \vdots & &\ddots & & \vdots\\ 0 & 0& \cdots & 1 & 1\\ 0 & 0& \cdots & 0 & 1 \end{pmatrix}, \quad\quad\quad \quad\quad\quad B = A^{-1} M.$$ The inner polytope $P$ comes from an affine slice of the conic hull of the rows of $A$. Let the slice be given by the last coordinate equal to 1. Then $P$ is the standard simplex in $\mathbb R^k$, i.e. $$P = \text{conv}\{e_1,\dots,e_k, 0\}.$$ Since $e_i\in P\subseteq C$ for $i \in \{1,\ldots,k\}$, there exist $y_{k+1}^{(i)},\dots, y_s^{(i)}\in\mathbb R$ such that $$D_i = B_i + \sum_{j=k+1}^s [y_j^{(i)}(B_j - B_{s+1})]\succeq 0.$$ Since $0\in P\subseteq C$, there exist $y_{k+1}^{(0)},\dots, y_s^{(0)}\in\mathbb R$ such that $$D_{k+1} =B_{s+1} + \sum_{j=k+1}^s [y_j^{(0)}(B_j - B_{s+1})]\succeq 0.$$ Consider the spectrahedron $$C' = \{(y_1,\dots,y_k) : \sum_{i=1}^k y_i D_i + (1-\sum_{i=1}^k y_i)D_{k+1} \succeq 0\}.$$ We have $e_i \in C'$ for $i \in \{1,\dots,k\}$, since $D_i\succeq 0$. Also $0\in C'$, since $D_{k+1}\succeq 0$. 
Thus $P\subseteq C'$. Moreover, if $(y_1,\dots,y_k)\in C'$, then $$0 \preceq \sum_{i=1}^k y_i D_i + (1-\sum_{i=1}^k y_i)D_{k+1}= \sum_{i=1}^k y_i(B_i + \sum_{j=k+1}^s [y_j^{(i)}(B_j - B_{s+1})])$$ $$+ (1-\sum_{i=1}^k y_i)(B_{s+1} + \sum_{j=k+1}^s [y_j^{(0)}(B_j - B_{s+1})])$$ $$=\sum_{i=1}^k y_i B_i + \sum_{j=k+1}^s(\sum_{i=1}^ky_iy_j^{(i)} + (1-\sum_{i=1}^ky_i)y_j^{(0)}) B_j$$ $$+ (1 - \sum_{i=1}^ky_i - \sum_{j=k+1}^s(\sum_{i=1}^ky_iy_j^{(i)} +(1-\sum_{i=1}^ky_i)y_j^{(0)}))B_{s+1}.$$ Therefore $(y_1,\dots, y_k)\in C$ and $P\subseteq C'\subseteq C\subseteq Q$. \end{proof} We conjecture that the statement of Lemma~\ref{lemma:enough_to_consider_spectrahedra} holds for matrices of any size. \begin{conjecture}\label{conjecture:spectrahedra_are_enough} Let $M\in\mathbb R^{p\times q}_{\geq 0}$ have rank $k+1$ and assume that $M\mathbf 1 = \mathbf 1$. Then $M$ has psd rank at most $k$ if and only if we can nest a spectrahedron of size $k$ between the two polytopes $P$ and $Q$ corresponding to $M$. \end{conjecture} We now turn our attention to matrices which lie on the boundary of the set of matrices of fixed size, rank, and psd rank. Our goal is to present partial evidence towards Conjecture~\ref{conjecture:geometric_description2}. Suppose we have polytopes $P$ and $Q$ and a spectrahedron $C$ such that $P\subseteq C\subseteq Q$. Further, assume that $P$ has $k+1$ vertices. We show that if $k$ of the $k+1$ vertices of the polytope $P$ touch the spectrahedron $C$ at rank-one loci, then we can find a smaller spectrahedron $C'$ such that $P\subseteq C'\subseteq C\subseteq Q$. This means that the matrix $S_{P, Q}$ does not lie on the boundary $\partial \mathcal M_{k+1, k}^{(k+1)\times (k+1)}$. \begin{lemma}\label{lemma:two_spectrahedra} Let $P = \text{conv}(e_1,\dots, e_k, 0) \subseteq\mathbb R^k$. Let $C$ be a spectrahedron of size $k$ such that $P\subseteq C$ and the vertices $e_1,\dots, e_k$ correspond to rank one matrices in $C$. 
Then there exists another spectrahedron $C'$ of size $k$ such that $P\subseteq C'\subseteq C$ with all $k+1$ vertices of $P$ corresponding to rank one matrices in $C'$. \end{lemma} \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{twoSpectrahedra} \caption{The spectrahedra $C$ (in light yellow) and $C'$ (in blue) as in Lemma~\ref{lemma:two_spectrahedra}} \label{fig:two_spectrahedra} \end{figure} \begin{proof} The statement is trivial when $k=1$. We proceed by induction. By the conditions in the statement of the lemma, we can assume that $$C = \{(x_1,\dots, x_k) \in \mathbb{R}^k : x_1a_1a_1^T + x_2 a_2 a_2^T +\cdots + x_k a_k a_k^T + (1-\sum_{i=1}^k x_i) B \succeq 0\},$$ where $a_1,\dots, a_k \in\mathbb R^k$ are vectors. We have $B\succeq 0$ since $0\in C$. Suppose first that $\dim(\text{span}\{a_1,\dots, a_k\}) = \ell < k$. Let $U$ be a change of coordinates that transforms span$\{a_1,\dots, a_k\}$ into span$\{e_1,\dots, e_\ell\}$. Denoting $a_i' = Ua_i$, we have $$C = \{ (x_1,\dots, x_k) \in \mathbb{R}^k : x_1a_1'(a'_1)^T + x_2 a_2' (a'_2)^T +\cdots + x_k a_k' (a'_k)^T + (1-\sum_{i=1}^k x_i) UBU^T \succeq 0\},$$ where $B' = UBU^T$ is positive semidefinite. If $B'_{i,j} = 0$ for all $i,j \geq \ell+1$, then the statement reduces to the case of $\ell$, which is true by induction. So, after reordering the coordinates $\ell+1,\dots,k$ if necessary, we may suppose that $B'_{\ell+1, \ell+1} > 0$ (a nonzero psd block must have a positive diagonal entry). Choose a vector $d\in\mathbb R^k$ such that $d_{\ell+1}\neq 0$ and $d d^T \preceq B'$. Consider the spectrahedron $$C' = \{(x_1,\dots, x_k) \in \mathbb{R}^k : x_1a_1'(a'_1)^T + x_2 a_2' (a'_2)^T +\cdots + x_k a_k' (a'_k)^T + (1-\sum_{i=1}^k x_i) d d^T\succeq 0\}.$$ Clearly $e_1,\dots, e_k, 0\in C'$. We will show that $C'\subseteq C$. Indeed, let $(x_1,\dots, x_k)\in C'$. Since $(a_i')_{\ell+1} = 0$ for $i \in \{1, \ldots,k \}$, $d_{\ell+1} \neq 0$ and $$ x_1a_1'(a'_1)^T + x_2 a_2' (a'_2)^T +\cdots + x_k a_k' (a'_k)^T+ (1-\sum_{i=1}^k x_i) d d^T\succeq 0,$$ we have $(1-\sum_{i=1}^k x_i) \geq 0$. 
But then $$ 0 \preceq x_1a_1'(a'_1)^T + x_2 a_2' (a'_2)^T +\cdots + x_k a_k' (a'_k)^T + (1-\sum_{i=1}^k x_i) d d^T $$ $$\preceq x_1a_1'(a'_1)^T + x_2 a_2' (a'_2)^T +\cdots + x_k a_k' (a'_k)^T + (1-\sum_{i=1}^k x_i) B'$$ and therefore $C'\subseteq C$. Now assume that $\dim(\text{span}\{a_1,\dots, a_k\}) = k$. Let $U$ be an invertible transformation such that $Ua_i = e_i$. Then $$C =\{(x_1,\dots, x_k) \in \mathbb{R}^k : x_1e_1e_1^T + x_2 e_2 e_2^T +\cdots + x_k e_k e_k^T + (1-\sum_{i=1}^k x_i) UBU^T \succeq 0\},$$ where $B' = UBU^T$ is positive semidefinite. Let $d\in\mathbb R^k$ be such that $d_i = \sqrt{B'_{i,i}}$ and let $S\in\mathbb R^{k\times k}$ be such that $$S_{i,j} = \begin{cases} \frac{B'_{i,j}}{\sqrt{B'_{i,i}B'_{j,j}}} & \text{if } B'_{i,i}B'_{j,j} \neq 0,\\ 1 & \text{if } B'_{i,i}B'_{j,j} = 0 \text{ and } i = j,\\ 0 & \text{if } B'_{i,i}B'_{j,j} = 0 \text{ and } i\neq j. \end{cases}$$ Since $B'\succeq 0$, also $S \succeq 0$, since it is obtained from $B'$ by rescaling some rows and columns and by adding $1$ on the diagonal in places that are 0 in $B'$. Let $$C' =\{(x_1,\dots, x_k) \in \mathbb{R}^k: x_1e_1e_1^T + x_2 e_2 e_2^T +\cdots + x_k e_k e_k^T + (1-\sum_{i=1}^kx_i) d d^T \succeq 0\}.$$ Then, clearly $e_1,\dots, e_k, 0\in C'$. We will show that $C'\subseteq C$. Let $(x_1,\dots, x_k)\in C'$. Then \begin{align}\label{Cpoint} x_1e_1e_1^T + x_2 e_2 e_2^T +\cdots + x_k e_k e_k^T + (1-\sum_{i=1}^k x_i) d d^T \succeq 0. \end{align} By the Schur Product Theorem, we know that the Hadamard product of two positive semidefinite matrices is positive semidefinite. Therefore, when we take the Hadamard product of the matrix \eqref{Cpoint} with $S$ we get a positive semidefinite matrix. But that Hadamard product equals $$x_1 e_1e_1^T + x_2 e_2 e_2^T +\cdots + x_k e_k e_k^T + (1-\sum_{i=1}^k x_i) B' \succeq 0,$$ and therefore $C'\subseteq C$. \end{proof} Let $P$ and $C$ be as in the statement of Lemma~\ref{lemma:two_spectrahedra}. 
Let $Q \subset \mathbb{R}^k$ be any polytope such that $P \subseteq C \subseteq Q$ and consider the slack matrix $S_{P, Q}$. The statement of Lemma~\ref{lemma:two_spectrahedra} indicates that $S_{P, Q}$ does not lie on the boundary $\partial\mathcal M^{(k+1)\times(k+1)}_{k+1, k}$, because the new spectrahedron $C'$ does not touch $Q$. As we saw in Section~\ref{section:rank3psdrank2}, in order for a matrix to lie on the boundary, the configuration $P\subseteq C\subseteq Q$ has to be very tight, and Lemma~\ref{lemma:two_spectrahedra} shows that having $k$ of the vertices of $P$ lie in the rank one locus of $C$ is not tight enough. Similarly, having $k$ of the facets of $Q$ touch $C$ at rank $k-1$ loci will not be enough. This is why we believe that all $k+1$ vertices of $P$ have to be in the rank one locus of $C$, and all $k+1$ of the facets of $Q$ have to touch $C$ at its rank $k-1$ locus. \subsection{Computational evidence}\label{section:higher_psd_rank} In this section we provide computational evidence for Conjecture~\ref{thm:k+1k} when $k>2$. \begin{example} \rm We consider the 2-dimensional family of $4 \times 4$ circulant matrices \begin{equation}\label{circulantFamily} \begin{bmatrix} a & b & 1 & b\\ b & a & b & 1\\ 1 & b & a & b\\ b & 1 & b & a \end{bmatrix} \end{equation} which is parametrized by $a$ and $b$. In Figure~\ref{fig:circulantPsd3}, the $4126$ green dots correspond to randomly chosen matrices of the form (\ref{circulantFamily}) that have psd rank at most three. The psd rank is computed using the code provided by the authors of~\cite{VGGT15} adapted to the computation of psd rank~\cite[Section 5.6]{KLTT15}. The red curves correspond to matrices of the form (\ref{circulantFamily}) that have a psd factorization by $3 \times 3$ rank one matrices. These curves are obtained by an elimination procedure in {\tt Macaulay2}. 
\begin{figure} \centering \includegraphics[width=0.6\textwidth]{circulantScatterPlotTogetherWithBoundaries} \caption{A family of $4 \times 4$ circulant matrices of psd rank at most 3} \label{fig:circulantPsd3} \end{figure} \end{example} If the condition that $k+1$ matrices $A_i$ and $k+1$ matrices $B_j$ have rank one is equivalent to the matrix $M$ being on the algebraic boundary $\overline{\partial\mathcal M_{r, k}^{p\times q}}$, then the set of matrices that have a psd factorization by such matrices should have codimension one inside the variety $ \mathcal V_r^{p \times q}$ of $p \times q$ matrices of rank at most $r$. The dimension of $ \mathcal V_r^{p \times q}$ is $pr+qr-r^2$. In the following example, we test several different assignments of ranks to each of the matrices $A_i, B_j$, and we mark those whose image has dimension $pr+qr-r^2-1$. \begin{example}\label{example:dimension_computation} \rm Let $A_1, \ldots , A_{p}, B_1, \ldots , B_{q} \in \mathcal{S}^k_+$ be symbolic matrices of ranks $r_1, \ldots, r_p, r'_1,\ldots,r'_q$. We construct a matrix $M$ such that $M_{ij}=\langle A_i, B_j \rangle$. We vectorize the matrix $M$ and compute its Jacobian $J$ with respect to the entries of $A_1, \ldots, A_{p}, B_1, \ldots, B_{q}$. Finally, we substitute the entries of $A_1, \ldots, A_{p}, B_1, \ldots , B_{q}$ by random nonnegative integers and compute the rank of $J$ after this substitution. If $\mathrm{rank}(J)=pq-1$, then the matrices that have a psd factorization by matrices of ranks $\{r_1, \ldots , r_p\},\{r'_1, \ldots , r'_q\}$ give a candidate for a boundary component, assuming that the boundary components are only dependent on the ranks of the $A_i$'s and the $B_j$'s.
\begin{table}[H] \begin{tabular}{ | c | c | c | c | } \hline psd rank & p & q & ranks \\ \hline 3 & 4 & 4 & \{\{1,1,1,1\},\{1,1,1,1\}\} \\ 3 & 4 & 5 & \{\{1,1,1,1\},\{1,1,1,1,2/3\}\} \\ 3 & 4 & 6 & \{\{1,1,1,1\},\{1,1,1,1,2/3,2/3\}\},\{\{1,1,1,2\},\{1,1,1,1,1,1\}\} \\ 3 & 5 & 5 & \{\{1,1,1,1,2/3\},\{1,1,1,1,2/3\}\} \\ 3 & 5 & 6 & \{\{1,1,1,1,2/3\},\{1,1,1,1,2/3,2/3\}\},\{\{1,1,1,2,3\},\{1,1,1,1,1,1\}\} \\ 3 & 6 & 6 &\begin{tabular}{@{}c@{}}\{\{1,1,1,1,2/3,2/3\},\{1,1,1,1,2/3,2/3\}\},\{\{1,1,1,1,1,1\},\{1,1,1,2,3,3\}\},\\ \{\{1,1,1,1,1,1\},\{1,1,2,2,2,2\}\},\{\{1,1,1,1,1,2\},\{1,1,1,2,2,2\}\}\end{tabular} \\ \hline \end{tabular} \caption{Ranks of matrices in the psd factorization of a psd rank three matrix that can potentially give boundary components} \label{table1} \end{table} The possible candidates for $k=3$ are summarized in Table~\ref{table1}. For all $p,q$, the case where four matrices $A_i$ and four matrices $B_j$ have rank one and all other matrices have any rank greater than one is represented. These are the cases that appear in Conjecture~\ref{thm:k+1k}. If any of the other candidates in Table~\ref{table1} corresponded to a boundary component, then Conjecture~\ref{thm:k+1k} would be false. If $k=4$, $p=q=10$, exactly five $A_i$ and five $B_j$ matrices have rank one and the rest of the matrices have rank two, then the Jacobian has rank 94. If the rest of the matrices in the psd factorization have rank three or four, then the Jacobian has rank 99 as expected. Hence if Conjecture~\ref{thm:k+1k} is true, then in general not every matrix on the boundary has a psd factorization with $k+1$ matrices $A_i$ and $k+1$ matrices $B_j$ having rank one and the rest of the matrices having rank two. \end{example} \begin{example} \rm Using the same strategy as in Example~\ref{example:dimension_computation}, we have checked that the Jacobian has the expected rank for $p=q=r=k+1$ and $k<10$. \end{example}
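The Jacobian-rank test of Example~\ref{example:dimension_computation} can be sketched numerically as follows. This is a minimal illustration, not the original symbolic computation: the parametrization $A_i = L_iL_i^T$, $B_j = R_jR_j^T$ with factors of prescribed width, and all variable names, are ours. For $k=3$, $p=q=4$ and all ranks equal to one, Table~\ref{table1} lists this assignment as a boundary candidate, i.e. Jacobian rank $pq-1=15$.

```python
import numpy as np

rng = np.random.default_rng(0)
k, p, q = 3, 4, 4
ranks_A = [1, 1, 1, 1]   # all A_i rank one (the conjectured boundary case)
ranks_B = [1, 1, 1, 1]

# Parametrize A_i = L_i L_i^T and B_j = R_j R_j^T with L_i in R^{k x r_i},
# and substitute random nonnegative integers for the factor entries.
L = [rng.integers(1, 10, (k, r)).astype(float) for r in ranks_A]
R = [rng.integers(1, 10, (k, r)).astype(float) for r in ranks_B]

# Jacobian of vec(M), M_ij = <A_i, B_j> = tr(A_i B_j), w.r.t. the factor
# entries:  dM_ij/dL_i = 2 B_j L_i,   dM_ij/dR_j = 2 A_i R_j.
ncols = sum(k * r for r in ranks_A) + sum(k * r for r in ranks_B)
J = np.zeros((p * q, ncols))
off_L = np.cumsum([0] + [k * r for r in ranks_A])
off_R = off_L[-1] + np.cumsum([0] + [k * r for r in ranks_B])
for i in range(p):
    Ai = L[i] @ L[i].T
    for j in range(q):
        Bj = R[j] @ R[j].T
        row = i * q + j
        J[row, off_L[i]:off_L[i + 1]] = (2 * Bj @ L[i]).ravel()
        J[row, off_R[j]:off_R[j + 1]] = (2 * Ai @ R[j]).ravel()

print(np.linalg.matrix_rank(J))  # expected: pq - 1 = 15 (cf. Table 1)
```

The count is consistent with a parameter count: $8$ rank-one factors contribute $24$ parameters, and the $9$-dimensional $\mathrm{GL}_3$ symmetry $A_i \mapsto SA_iS^T$, $B_j \mapsto S^{-T}B_jS^{-1}$ acts on the fibers, leaving $24-9=15$.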
\section{Introduction} It is well known experimentally that quasistatically stressed disordered solids produce intermittent response statistics \cite{phys_rep}, particularly in terms of acoustic emissions, that show scale-free size distributions. These intriguing dynamics are seen universally across scales, from microscopic laboratory samples to the geological scale of earthquakes \cite{herrmann90,bkc96,rmp_2012,wiley_book,lucilla}. Empirically, the scale-free size distribution of breaking progression has been known in different communities for decades. For example, in geoscience it is known as the Gutenberg-Richter law, for magnetic domain walls as crackling noise, and so on. \\ The interest of statistical physicists in this context stems from the universal nature of the dynamics across length and energy scales. The scale-free variations of acoustic emissions, waiting time statistics, etc., are independent of the microscopic details of the underlying systems, which are very different from each other. Such behavior indicates critical dynamics, particularly self-organized critical dynamics, where the universality hypothesis is applicable without having to fine-tune a driving parameter \cite{bonamy}. Such a phenomenon is therefore open to analysis with the tools of critical phase transitions and universality, which is an important step towards the predictability of imminent failure \cite{alava_lasse,kun_record,sci_rep}. \\ As a consequence of the scale-free dynamics and the potential applicability of the universality hypothesis, many generic models have been proposed over the years that reproduce such scale-free behavior. Such models include the fiber bundle model, the random fuse model, the Burridge-Knopoff model, and so on \cite{wiley_book,fbm_rmp,fbm_book}. The common underlying feature of these models is that they are threshold-activated, driven, dynamical models.
In particular, when an external driving parameter crosses the pre-assigned threshold value of a single unit of these models, that unit is activated and influences the units in its `neighborhood’, which may in turn get activated, thereby initiating an ‘avalanche’. As can be guessed, this type of dynamics is often related to sandpile models of self-organized criticality \cite{soc}, and indeed such associations have been extensively explored in the past \cite{soc_fbm}. \\ The two major parameters that influence the nature of the response in such models are the range of interaction and the strength of the disorder. It was shown, particularly in the fiber bundle model, that for a moderate disorder, scale-free avalanche statistics are only recovered for a `sufficiently’ long range of interaction \cite{pre1,pre2,kun_ptrc}. In the random fuse model, where the interaction range is not a parameter to be tuned, it was shown that the avalanche statistics is not a power-law in the large system size limit \cite{sch}. This is in apparent contradiction with the fact that in reality, the interaction range in disordered elastic samples is not infinite, i.e., not mean-field-like. However, experiments routinely reveal scale-free statistics. \\ One important distinction between the analytical and numerical results of avalanche dynamics and those of the experiments is that in the former it is the number of elements failing in an avalanche that is the measurable quantity, while in the latter it is the energy released in the avalanche. Now, in the mean-field limit of the fiber bundle model, it is straightforward to show that the avalanche size and the energy avalanche size are proportional, hence the two distributions are identical in shape. But this relation is no longer valid for local load sharing variants. In those cases, therefore, it is crucial to explore the size distributions of the energy emissions and compare them with experiments.
In this work, we consider the simplest possible variant of the local load sharing fiber bundle model and analyze the energy avalanche statistics of that model. We then compare the results with experiments and also present a plausible argument for its form. \section{Description of Fiber Bundle Model} Since its introduction by Pierce in 1926 \cite{Pierce}, the fiber bundle model has proven to be an important, yet arguably the simplest, model to study failure processes in disordered solids. A conventional fiber bundle model consists of a set of linear elastic fibers or Hookean springs, attached between two parallel plates. The plates are pulled apart by a force $F$, creating a stress $\sigma=F/L$ on $L$ fibers. Once the stress crosses the breaking threshold of a particular fiber, chosen from a random distribution, that fiber breaks irreversibly. The stress of broken fibers is then redistributed either globally among all surviving fibers (global load sharing or GLS scheme) or among the surviving nearest neighbors only (local load sharing or LLS scheme). For the GLS scheme \cite{Pierce, Daniels}, no stress concentration occurs anywhere around the failed fibers, as the stress of the failed fibers is shared among all surviving fibers. In the LLS scheme \cite{Phoenix,Smith,Newman,Harlow2,Harlow3,Smith2}, on the other hand, stress concentration is observed near a broken patch (a set of neighboring broken fibers) and increases with the size of such patches. After such redistribution, the load per fiber increases, initiating the failure of more fibers and starting an avalanche. At the end of an avalanche, either all fibers are broken (global failure) or the bundle comes to a stable state with a few broken fibers, where an increment of the external stress is required to make the model evolve further. The last applied stress just before global failure is considered to be the nominal stress or strength $\sigma_c$ of the bundle.
The fraction of fibers that survive at $\sigma_c$ just before global failure is defined as the critical unbroken fraction of fibers ($U_c$). \begin{figure}[ht] \centering \includegraphics[width=8.7cm, keepaspectratio]{fig1a.jpg} \includegraphics[width=8.5cm, keepaspectratio]{fig1b.jpg} \caption{Upper Panel: The figure shows the spectrum of avalanche sizes ($s$) and corresponding energy values ($E$) for GLS FBM with an increasing number of stress increments $k$. We can see that there is a direct correspondence between the $s$ and $E$ values for a certain $k$: higher $s$ gives higher $E$. Since, in the case of GLS FBM, the fibers break in increasing order of their threshold values, we get $E(k+1)>E(k)$ if $s(k+1)=s(k)$. Lower Panel: The figure shows the spectrum of avalanche sizes ($s$) and corresponding energy values ($E$) for LLS FBM with an increasing number of stress increments $k$. We do not see a direct correspondence between the $s$ and $E$ values here, unlike in the GLS FBM. For example, the red ellipses mark the parts where only 1 fiber breaks at each $k$ value, but the corresponding $E$ values take many different values without any particular order, as the fibers themselves do not break in any particular order.} \label{fig1} \end{figure} \section{Numerical Results} We have studied the fiber bundle model numerically both in the mean-field limit and with the local load sharing scheme in one dimension, though the major part of the paper will deal with the latter only. Numerical simulations are carried out for system sizes ranging between $10^3$ and $5\times10^5$ and are averaged over $10^4$ configurations. Our aim is to understand the dynamics of avalanches and the corresponding energy bursts emitted during these avalanches as the model evolves with increasing externally applied stress. Unless otherwise stated, we will use a uniform distribution ranging from 0 to 1 in order to assign the threshold values to individual fibers beyond which they break.
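The quasistatic GLS dynamics just described can be sketched in a few lines of Python. This is a minimal illustration under the stated uniform-threshold assumption, not the code used for our figures; for uniform thresholds on $(0,1)$ the critical stress is $\sigma_c = 1/4$, which the sketch reproduces.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
t = np.sort(rng.random(N))          # uniform (0,1) breaking thresholds

avalanches, energies = [], []
i, F = 0, 0.0
while i < N:
    # Raise the external force just enough to break the weakest survivor:
    # with i fibers broken, each of the N - i survivors carries F/(N - i).
    F = t[i] * (N - i)
    s, e = 0, 0.0
    while i < N and t[i] * (N - i) <= F:   # i.e. t[i] <= F/(N - i)
        e += 0.5 * t[i] ** 2               # elastic energy of the broken fiber
        s += 1
        i += 1
    if i < N:                              # exclude the final catastrophic avalanche
        avalanches.append(s)
        energies.append(e)

print(F / N)   # nominal strength sigma_c, close to 1/4 for uniform thresholds
```

Each pass of the outer loop is one stress increment; the inner loop collects the avalanche it triggers, together with the emitted energy.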
\subsection{Relation between $s$ and $E$} Figure \ref{fig1} shows a comparison between different avalanches and the energy emitted during those avalanches for a bundle of size $10^5$. The results are produced for a single configuration. As usual, an avalanche is defined as the number of fibers broken between two consecutive stress increments; $k$ is the number of such stress increments in this case. While presenting the energy spectrum and the avalanches we have excluded the final avalanche leading to global failure. \begin{figure}[ht] \centering \includegraphics[width=8.5cm, keepaspectratio]{fig2.jpg} \caption{The figure shows the variation of energy $E$ with avalanche size (total number of broken fibers) $s$ for both the GLS and LLS fiber bundle model. We observe $E \sim s$ for GLS FBM. On the other hand, for the LLS scheme, $E \sim s^{\gamma}$, with $\gamma \approx 2.5$.} \label{fig2} \end{figure} The upper panel of figure \ref{fig1} shows the results for the GLS fiber bundle model while in the lower panel, we have shown the results with the local load sharing (LLS) scheme. Note that the range of $k$ for the LLS model is much smaller than the range of $k$ with the GLS scheme. This is understandable since with the LLS scheme, the model is more unstable due to stress concentration and a large number of fibers are broken during the final avalanche. The model evolves with a smaller number of stress increments in this case prior to global failure, and the average size of the avalanches is smaller compared to that in the GLS scheme. Now, for an avalanche of size $s$, if $n$ ($=s$) fibers with threshold values $\tau_1$, $\tau_2$, $\tau_3$, $\cdots$, $\tau_n$ break, then the amount of energy emitted during this avalanche will be: \begin{align}\label{eq1} E(s) = \displaystyle\frac{1}{2}\displaystyle\sum_{i=1}^{n} \tau_i^2.
\end{align} This follows from the assumption of linear elastic (stress $\propto$ strain) behavior of individual fibers up until their individual (brittle) failure points. With the above formalism, for each stress increment $k$, we obtain an avalanche $s(k)$ and a corresponding energy burst of magnitude $E(s(k))$. \\ The energy spectrum follows a particular trend in the case of the GLS scheme. Since with the GLS scheme the fibers break in increasing order of their threshold values, the energy emitted at the $(k+1)$-th load increment will be higher than the energy emitted at the $k$-th increment, even if the avalanche sizes happen to be the same at $k$ and $k+1$. Due to this, the variations of $s$ and $E$ with increasing $k$ look exactly the same; only the values are scaled by a constant when we transfer from $s$ to $E$. Such a correlation between $s$ and $E$ is not present in the case of the LLS fiber bundle model. In the case of the LLS scheme, the fibers break due to the interplay between the local stress profile and the threshold values of the fibers themselves. Due to such dynamics, the fibers do not break in increasing order of their thresholds. Then, there might be scenarios where $E(k+1)<E(k)$ when $s(k+1)=s(k)$ or even $s(k+1)>s(k)$. The red ellipses in the lower panel of figure \ref{fig1} show this absence of correlation between $s(k)$ and $E(s(k))$. For both ellipses, $s=1$ for that period. In spite of that, we see a fluctuation in the energy values without a particular trend. In the following, we will discuss this relation between $s$ and $E$ in detail. \begin{figure}[ht] \centering \includegraphics[width=8cm, keepaspectratio]{fig3a.jpg} \\ \includegraphics[width=8cm, keepaspectratio]{fig3b.jpg} \caption{Distribution of energies for a uniform distribution (0,1) and system sizes ranging between $10^3$ and $10^5$. The results are shown for GLS FBM. (a) We already know that in the mean-field limit $P(s) \sim s^{-\beta}$, with $\beta \approx 2.5$.
(b) We observe $Q(E) \sim E^{-\alpha}$ where $\alpha \approx 2.5$ as well, independent of the system size.} \label{fig3} \end{figure} Figure \ref{fig2} shows the energy $E$ emitted as a function of the avalanche size $s$ for a bundle of size $10^5$, averaged over $10^4$ configurations. Results for both the GLS and LLS schemes are shown in the figure. We observe the following behavior: \begin{equation} \label{eq2} E \sim \left\{\begin{array}{ll} s & \mbox{, \text{for GLS},}\\ s^{\gamma} & \mbox{, \text{with $\gamma=2.5$ for LLS}.}\\ \end{array} \right. \end{equation} This behavior can be used to understand the relation between the distribution $P(s)$ of avalanche sizes $s$ and the distribution $Q(E)$ of emitted energies $E$. For this we simply need to implement a change of variable \cite{theorem1} as follows: \begin{align}\label{eq3} Q(E) \sim P[s(E)].|s^{\prime}(E)| = P[s(E)].|\displaystyle\frac{ds(E)}{dE}| \end{align} \textbf{Change in variable: GLS scheme} In the case of the GLS scheme, we observe \begin{align}\label{eq4} &E(s) \sim s \nonumber \\ &s(E) \sim E \end{align} This makes \begin{align}\label{eq5} s^{\prime}(E) = \displaystyle\frac{ds(E)}{dE} \sim 1 \end{align} We also know that the avalanche size distribution in the case of the GLS scheme is a scale-free distribution with exponent 2.5 \cite{hh92}. \begin{align}\label{eq6} P(s) \sim s^{-\beta}, \ \ \ \text{with $\beta=2.5$} \end{align} Then, combining Eq.\ref{eq3}, \ref{eq4}, \ref{eq5} and \ref{eq6}, we get, \begin{align}\label{eq6a} Q(E) \sim P(E).1 \sim E^{-\beta} \end{align} \begin{figure}[ht] \centering \includegraphics[width=8cm, keepaspectratio]{fig4a.jpg} \\ \includegraphics[width=8cm, keepaspectratio]{fig4b.jpg} \caption{Distribution of energies for a uniform distribution [0:1] and system sizes ranging between $10^3$ and $10^5$. The results are shown for LLS FBM. (a) The avalanche size distribution for LLS FBM is an exponential function: $P(s) \sim e^{-s/s_0}$, where $s_0$ depends weakly on the system size.
(b) Scale-free distribution of the emitted energy: $Q(E) \sim E^{-\alpha}$, with $\alpha \approx 3.5$.} \label{fig4} \end{figure} \begin{figure*}[ht] \centering \includegraphics[width=15cm, keepaspectratio]{fig5.jpg} \caption{(a) The finite size effect of $\alpha(L)$ is shown, which follows a scaling: $[\alpha(\infty)-\alpha(L)] \sim L^{-\eta}$ with $\eta=0.15$, where $\alpha(\infty)$ is the value of the exponent in the thermodynamic limit. We get $\alpha(\infty)=3.47$. (b) \& (c) The least square fit error and the corresponding exponent $\eta$ for the scaling $[\alpha(\infty)-\alpha(L)] \sim L^{-\eta}$ are given for different values of $\alpha(\infty)$. We consider the values of $\alpha(\infty)$ and $\eta$ which produce the minimum error. The same procedure is followed later when exploring different threshold distributions.} \label{fig5} \end{figure*} \textbf{Change in variable: LLS scheme} In the case of the LLS scheme, we observe \begin{align}\label{eq7} &E(s) \sim s^{\gamma} \nonumber \\ &s(E) \sim E^{-\gamma} \end{align} This makes \begin{align}\label{eq8} s^{\prime}(E) = \displaystyle\frac{ds(E)}{dE} \sim (-\gamma)E^{-(\gamma+1)} \end{align} where $\gamma=2.5$. We also know that the avalanche size distribution in the case of the LLS scheme is an exponential distribution \cite{khh97}. \begin{align}\label{eq9} P(s) \sim e^{-s/s_0} \end{align} Then, combining Eq.\ref{eq3}, \ref{eq7}, \ref{eq8} and \ref{eq9}, we get, \begin{align}\label{eq10} Q(E) \sim P(E).\gamma E^{-(\gamma+1)} \sim \gamma e^{-\displaystyle\frac{E^{-\gamma}}{s_0}} E^{-(\gamma+1)} \end{align} In the limit of large $E$, Eq.\ref{eq10} can be simplified as follows \begin{align}\label{eq11} Q(E) \sim E^{-\alpha} \ \ \ \text{where $\alpha=\gamma+1=3.5$} \end{align} The above treatment shows that, in the case of the LLS scheme, in spite of an exponential distribution for avalanche sizes, the distribution of emitted energy is still observed to be scale-free.
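The change-of-variable relation Eq.~(\ref{eq3}) can be checked numerically for the GLS case, Eqs.~(\ref{eq4})--(\ref{eq6a}): a power-law sample transformed by the linear map $E \propto s$ keeps the exponent $\beta$. The synthetic sample, the prefactor $0.1$, and the maximum-likelihood (Hill) estimator below are ours, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 2.5                    # GLS avalanche exponent, P(s) ~ s^{-beta}
n = 200_000

# Sample s from a pure power law with s >= 1 (inverse-CDF method):
# P(S > x) = x^{-(beta-1)}.
s = (1.0 - rng.random(n)) ** (-1.0 / (beta - 1.0))

# GLS case, Eq.(4): E ~ s (take E = 0.1 s). Eq.(3) then predicts that
# Q(E) has the same exponent beta, since |s'(E)| is a constant.
E = 0.1 * s

def mle_exponent(x):
    """Maximum-likelihood (Hill) estimate of a power-law exponent."""
    return 1.0 + x.size / np.log(x / x.min()).sum()

print(mle_exponent(s), mle_exponent(E))   # both close to 2.5
```

Note that the linear map leaves the log-spacings of the sample unchanged, which is why the fitted exponent is exactly preserved.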
\subsection{Distribution of $s$ and $E$: Uniform distribution} Figure \ref{fig3}(a) shows the avalanche size distribution $P(s)$ for the GLS fiber bundle model with system sizes ranging from $10^3$ to $10^5$. This scale-free decrease of $P(s)$ with $s$ is already known in the literature. We also observe the same universal exponent 2.5 \cite{hh92}. Figure \ref{fig3}(b) shows the corresponding distribution of the emitted energy. We observe the same scale-free distribution for the energy as well, with the same exponent 2.5. This behavior is consistent with Eq.\ref{eq6} and Eq.\ref{eq6a} respectively. Figure \ref{fig4}(a), on the other hand, shows the avalanche size distribution with the LLS scheme. The distribution is exponential, as derived analytically by Kloster et al.\cite{khh97}. The inset shows the same results on a log scale in order to compare them with the earlier claim by Zhang and Ding \cite{zd94} that $P(s)$ shows a scale-free behavior with a very high exponent close to $-4.8$. This claim of scale-free behavior is not substantiated, and the exponential form for $P(s)$ is accepted in the literature. The distribution of energy in figure \ref{fig4}(b) shows a scale-free distribution, in spite of the fact that the avalanche size distribution is exponential. The exponent of the scale-free distribution is observed to be an increasing function of the size of the bundle \begin{align}\label{eq12} Q(E) \sim E^{-\alpha(L)} \end{align} The above behavior is similar to Eq.\ref{eq11}, but with an $L$-dependent exponent instead of a constant value. To compare this $L$-dependent exponent with the value in Eq.\ref{eq11}, we have to study the variation of $\alpha$ in Eq.\ref{eq12} in detail as the size of the bundle is increased. We have discussed this next. Figure \ref{fig5} shows the scaling of the exponent $\alpha$ in figure \ref{fig4}(b) as the model approaches the thermodynamic limit.
We observe the following scaling, \begin{align}\label{eq13} \alpha(\infty)-\alpha(L) \sim L^{-\eta} \end{align} where $\eta=0.15$ and $\alpha(\infty)$ (= 3.47) has a value close to $\gamma+1$ (see Eq.\ref{eq11}). The fit and the exponent $\eta$ are calculated from the minimization of the least square fit error. This is shown in figure \ref{fig5}(b) and (c). We choose a certain value of $\alpha(\infty)$ and fit our numerical results. This in turn produces a value of $\eta$ and a corresponding least square fit error. If we repeat this for a number of $\alpha(\infty)$ values, then we can express the error (see figure \ref{fig5}b) and the exponent $\eta$ (see figure \ref{fig5}c) as functions of $\alpha(\infty)$. The dotted line in figure \ref{fig5}(a) corresponds to the values of $\alpha(\infty)$ (= 3.47) and $\eta$ (= 0.15) for which the least square fit error is minimum. \subsection{Universality} So far, we have generated the numerical results using a uniform distribution from 0 to 1 to assign random thresholds to individual fibers. In this section, we will verify the universality of our results. For this purpose, we will mainly explore 4 other distributions: (i) linearly increasing from 0 to 1, (ii) linearly decreasing from 0 to 1, (iii) a Weibull distribution with scale parameter 1 and Weibull modulus 1, and (iv) a power-law distribution from 0 to 1 with exponent 2.0. \begin{figure*}[ht] \centering \includegraphics[width=8cm, keepaspectratio]{fig6a.jpg} \ \ \ \includegraphics[width=8cm, keepaspectratio]{fig6b.jpg} \\ \includegraphics[width=8cm, keepaspectratio]{fig6c.jpg} \ \ \ \includegraphics[width=8cm, keepaspectratio]{fig6d.jpg} \caption{Distribution of energies for 4 different threshold distributions: (a) Linearly increasing [0:1], (b) Linearly decreasing [0:1], (c) Scale-free distribution of exponent 2 between 0 and 1, (d) Weibull distribution with scale factor 1.0 and Weibull modulus 1.0.
The system sizes vary between $10^3$ and $5\times10^5$. The results are shown for LLS FBM. We observe a scale-free distribution for $E$: $Q(E) \sim E^{-\alpha}$ for all thresholds. The exponent value $\alpha(L)$ shows a finite size effect. The dotted line shows the slope for the highest system size in our simulation. The exponents are close to, but less than, 3.5. As before, this obeys the following scaling: $[\alpha(\infty)-\alpha(L)] \sim L^{-\eta}$. The values of $\eta$ for the above mentioned distributions are 0.14, 0.12, 0.13 and 0.11 respectively. We observe the exponent $\alpha(\infty)$ in the thermodynamic limit to be very close to 3.5 irrespective of the choice of the threshold distribution.} \label{fig6} \end{figure*} \begin{figure}[ht] \centering \includegraphics[width=8cm, keepaspectratio]{fig7.jpg} \caption{The figure shows the variation of energy $E$ with avalanche size (total number of broken fibers) $s$ for the LLS fiber bundle model. We have repeated the study for 5 different threshold distributions. We observe that for all distributions $E \sim s^{\gamma}$, where $\gamma$ has a value 2.5 independent of the nature of the distribution.} \label{fig7} \end{figure} In all these cases, the energy burst size distributions were found to be scale-free with an exponent value close to $-3.5$ (see figure \ref{fig6}), as predicted by Eq. (\ref{eq11}). The variation with system size is also universal across these different threshold distributions. These results suggest that the scale-free nature of the energy burst size distribution in the local load sharing fiber bundle model is a universal feature. We have further checked that the relation between the avalanche size and the energy burst size, i.e., $E\sim s^{2.5}$, is valid for all these threshold distributions, as can be seen from Fig. \ref{fig7}.
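The error-minimization fit used for Fig.~\ref{fig5} (and repeated for the distributions of Fig.~\ref{fig6}) can be sketched as follows, on synthetic $\alpha(L)$ values generated from the assumed scaling Eq.~(\ref{eq13}); the prefactor $c=1.2$, the noise level, and the candidate grid are ours, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic alpha(L) generated from the assumed scaling
# alpha(inf) - alpha(L) = c * L^{-eta}, with alpha(inf) = 3.47 and
# eta = 0.15 as in the fit of Fig. 5, plus small noise.
L = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 5e5])
alpha_L = 3.47 - 1.2 * L ** -0.15 + rng.normal(0.0, 1e-4, L.size)

def fit(alpha_inf):
    # Linear fit of log[alpha_inf - alpha(L)] vs log L; the slope is -eta
    # and the sum of squared residuals is the least-square fit error.
    x, y = np.log(L), np.log(alpha_inf - alpha_L)
    slope, intercept = np.polyfit(x, y, 1)
    err = np.sum((y - (slope * x + intercept)) ** 2)
    return err, -slope

# Scan candidate alpha(inf) values and keep the one minimizing the error.
candidates = np.arange(3.40, 3.60, 0.001)
best = min(candidates, key=lambda a: fit(a)[0])
eta = fit(best)[1]
print(best, eta)   # close to the generating values 3.47 and 0.15
```

With a mis-specified $\alpha(\infty)$ the log-log plot curves away from a straight line, so the residual error is minimized near the true asymptotic value.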
\section{Discussions and conclusions} The local load sharing fiber bundle model is known to fail in reproducing the scale-free avalanche statistics often seen in experiments on fracturing brittle solids. In all the interpolation schemes between the global (equal) and local load sharing versions of fiber bundles, the avalanche size distribution $P(s)$ only shows a cross-over between the mean-field ($P(s)\sim s^{-\beta}$) and local load sharing ($P(s)\sim e^{-s/s_0}$) limits. The mean-field limit, however, is a rather idealized condition for modeling real samples. One important distinction between the avalanche sizes ($s$) of the fiber bundle model and what is usually measured in experiments is that in the latter case it is the energy burst ($E$) emitted in an avalanche that is measured. That distinction is not at all significant in the mean-field, i.e., global load-sharing, limit of the model, because in that limit $E\sim s$. In the local load sharing version, however, we numerically find $E\sim s^{\gamma}$. Given an exponential distribution for the avalanche sizes in the local load sharing limit and this numerical observation, it is possible to show that the size distribution of the energy bursts is scale-free ($Q(E)\sim E^{-\alpha}$) with $\alpha=\gamma+1$ (see Eq. (\ref{eq11})). We have then numerically checked that $\gamma\approx 2.5$ for various different threshold distributions (see Fig. \ref{fig7}) and independently checked that the size distribution exponent for the energy bursts is close to $-3.5$ (see Figs. \ref{fig4}, \ref{fig6}). Indeed, there are indications in experiments with sandstones that the avalanche amplitude distribution is exponential while the energy burst distribution follows a power law (see e.g., \cite{arma,expt}). Our results reproduce the same for the local load sharing fiber bundle model.
In conclusion, the local load sharing fiber bundle model is shown to have a non-trivial relation between the avalanche size (number of fibers broken) and the energy burst size (elastic energy released by the broken fibers). Consequently, the energy burst size distribution is shown to have a scale-free nature, with an exponent value independent of the threshold distributions of the fibers. Given that experimentally one measures the energy released, these results indicate that local load sharing fiber bundles can have a significant role in modeling the fracture of brittle solids without having to resort to the equal load sharing mean-field limit. \bigskip The authors thank Prof. Bikas K. Chakrabarti for his valuable comments. This work was partly supported by the Research Council of Norway through its Centres of Excellence funding scheme, project number 262644.
\section{Introduction} The advent of sequencing technology has facilitated the collection of genome-wide data for different molecular processes (e.g., gene expression, DNA methylation, microRNA [miRNA] expression), enabling multi-omics data analysis from the same set of individuals or biospecimens. Exploring molecular mechanisms using multi-omics data is expected to improve our current knowledge of diseases, which may lead to further improvements in disease diagnosis, prognosis, and personalized treatment. While single-omics analysis can only capture a part of the biological complexity of a disease, integration of multi-omics data is required to provide a comprehensive overview of the underlying biological mechanisms. Various methods, such as unsupervised data integration models based on matrix factorization and correlation-based analysis, supervised data integration models based on network-based methods and multiple kernel learning, and Bayesian methods, have been proposed for multi-omics data integration \cite{huang2017more}. For example, multi-omics factor analysis (MOFA) \cite{argelaguet2018multi} is a Bayesian method for multi-omics integration that extracts the shared axes of variation between the different omics. Sparse generalized canonical correlation analysis (sGCCA) \cite{tenenhaus2014variable} is a generalization of regularized canonical correlation analysis with an L1-penalty that selects co-expressed variables from omics datasets. Recently, researchers have become interested in multi-omics biomarkers that can explain or characterize a known phenotype. DIABLO (Data Integration Analysis for Biomarker discovery using Latent cOmponents) \cite{singh2019diablo} extends sGCCA to a supervised framework for identifying shared molecular patterns that can explain phenotypes across multi-omics; however, most of these methods are based on linear representations and cannot capture complex biological processes.
Canonical correlation analysis (CCA) \cite{hotelling1992relations} is a well-known multivariate model for capturing the associations between any two sets of data. CCA and its variations have been applied in several studies \cite{tenenhaus2014variable, mandal2017faroc, jendoubi2019whitening, singh2019diablo} because of its advantages in biological interpretation. However, a drawback of CCA is that it can only model the linear relationship between two modalities when maximally correlating them. Generalized canonical correlation analysis (GCCA) \cite{kettenring1971canonical} extends CCA to the case of more than two modalities. To complement GCCA, deep generalized canonical correlation analysis (DGCCA) \cite{benton2017deep} learns nonlinear relationships among more than two modalities. Also, supervised deep CCA \cite{liu2017supervised} and task-optimal CCA \cite{couture2019deep} have been proposed for supervised learning while considering nonlinear maximal correlation, but they can only be applied to two modalities. In this study, we propose a supervised deep generalized canonical correlation analysis (SDGCCA), a nonlinear supervised learning model integrating multiple modalities for discriminating phenotypic groups. SDGCCA identifies the common and correlated information between multiple omics data, which is important for discriminating phenotypic groups. SDGCCA is also based on a deep neural network (DNN), allowing it to powerfully capture the nonlinear part of the biological complexity. After training SDGCCA, we utilized Shapley additive explanation (SHAP) \cite{lundberg2017unified} to identify correlated biomarkers contributing to classification. \section{Related Work} In this section, we briefly review relevant previous studies. Table \ref{tab:Table1} presents all the notations used throughout the study. \begin{table}[!ht] \begin{footnotesize} \vskip -0.15in \caption{The notations used in Eq.
\ref{EQ:1}-\ref{EQ:12}} \vskip -0.2in \begin{center} \centering \setlength\tabcolsep{2pt} {\renewcommand{\arraystretch}{0.85} \begin{tabular}{lcl} \toprule \textsc{Notation} & \textsc{Dimension} & \textsc{Description}\\ \midrule $n$ &- & Number of samples \\ $m$ &- & Number of modalities \\ $k$ &- & Dimensions of the shared representation \\ $c$ &- & Number of label categories \\ $d_i$ &- & Dimensions of the $i$-th modality. \\ $\overbar{d_i}$ &- & Output dimensions of the deep neural network of the $i$-th modality. \\ $f_i(\cdot)$ &- & Deep neural network of the $i$-th modality. \\ $\theta_i$ &- & Parameters of $f_i(\cdot)$. \\ \midrule $X_i$ &$d_i \times n$ & $i$-th modality \\ $V_i$ &$d_i \times k$ & Projection matrix for $X_i$ \\ $U_i$ &$\overbar{d_i} \times k$ & Projection matrix for $f_i(X_i)$ \\ $Y$ &$c \times n$ & Label \\ $U_y$ &$c \times k$ & Projection matrix for $Y$ \\ $U_y^{\dagger}$ &$k \times c$ & Pseudo-inverse of $U_y$ \\ $G$ &$k \times n$ & Shared representation \\\bottomrule \label{tab:Table1} \end{tabular}} \vskip -0.2in \end{center} \end{footnotesize} \end{table} \subsection{CCA} CCA is one of the representative methods for dimension reduction that can consider the correlation between two modalities. It is trained to maximize the correlation between the two mapped matrices obtained using the projection matrix of each modality.
The objective function of CCA is as follows: \begin{equation} (V_1^*, V_2^*) = \underset{V_1, V_2}{\text{argmax }} corr(V_1^\top X_1, V_2^\top X_2) = \underset{V_1, V_2}{\text{argmax }} \frac{V_1^\top \Sigma_{12} V_2}{\sqrt{V_1^\top \Sigma_{11} V_1 V_2^\top \Sigma_{22} V_2}},\\ \label{EQ:1} \end{equation} where $X_i$ denotes the $i$-th modality, $\Sigma_{11}$ and $\Sigma_{22}$ denote the covariance matrices of $X_1$ and $X_2$, respectively, $\Sigma_{12}$ denotes the cross-covariance matrix, $V_i$ denotes a projection matrix for the $i$-th modality, and $V_1^{*}$ and $V_2^{*}$ can be adopted to select the relevant features in both modalities. Since the objective function above is invariant to the scaling of $V_1$ and $V_2$, the final objective function is expressed as follows by adding unit-variance constraints. \begin{equation} \begin{gathered} (V_1^*, V_2^*) = \underset{V_1, V_2}{\text{argmax }} V_1^\top \Sigma_{12} V_2, \\ \text{s.t. } V_1^\top \Sigma_{11} V_1 = V_2^\top \Sigma_{22} V_2 = I \end{gathered} \label{EQ:2} \end{equation} However, CCA has two limitations: (1) it can only capture linear relationships, and (2) it can only leverage two modalities. \subsection{DCCA} Deep canonical correlation analysis (DCCA) \cite{andrew2013deep} addresses the first limitation of CCA, namely that it extracts only linear relationships. In DCCA, to consider a nonlinear relationship, a DNN is applied to each modality. DCCA is trained by maximizing the correlation between the DNN outputs of each modality. The objective function of DCCA is as follows: \begin{equation} ({\theta_1}^*, {\theta_2}^*, {U_1}^*, {U_2}^*) = \underset{\theta_1, \theta_2, U_1, U_2}{\text{argmax }} corr(U_1^\top f_1(X_1), U_2^\top f_2(X_2)), \label{EQ:3} \end{equation} where $f_i(\cdot)$ is a DNN for the $i$-th modality, $U_i$ indicates a projection matrix for $f_i(X_i)$, and $\theta_i$ denotes the parameters of $f_i(\cdot)$. $\theta_i$ is trained via back-propagation to maximize the objective function of DCCA.
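For reference, the linear CCA of Eq. \ref{EQ:2} admits a closed-form solution: whiten each view by $\Sigma_{ii}^{-1/2}$ and take the SVD of the whitened cross-covariance. A minimal NumPy sketch on synthetic data (the dimensions, seed, and noise level are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, k = 500, 6, 4, 2

# Two toy modalities (columns are samples) sharing a latent signal z.
z = rng.standard_normal((k, n))
X1 = rng.standard_normal((d1, k)) @ z + 0.5 * rng.standard_normal((d1, n))
X2 = rng.standard_normal((d2, k)) @ z + 0.5 * rng.standard_normal((d2, n))

def cca(X1, X2, k, eps=1e-8):
    """Closed-form linear CCA of Eq. 2: whiten each view, then take the
    SVD of the whitened cross-covariance."""
    X1 = X1 - X1.mean(axis=1, keepdims=True)
    X2 = X2 - X2.mean(axis=1, keepdims=True)
    n_s = X1.shape[1]
    S11 = X1 @ X1.T / (n_s - 1) + eps * np.eye(X1.shape[0])
    S22 = X2 @ X2.T / (n_s - 1) + eps * np.eye(X2.shape[0])
    S12 = X1 @ X2.T / (n_s - 1)

    def inv_sqrt(S):  # inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    W1, W2 = inv_sqrt(S11), inv_sqrt(S22)
    U, s, Vt = np.linalg.svd(W1 @ S12 @ W2)
    # The returned V1, V2 satisfy the unit-variance constraints of Eq. 2.
    return W1 @ U[:, :k], W2 @ Vt[:k].T, s[:k]

V1, V2, rho = cca(X1, X2, k)  # rho: canonical correlations, descending
```

The singular values in `rho` are the canonical correlations, and the constraints $V_i^\top \Sigma_{ii} V_i = I$ hold by construction of the whitening.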
However, because DCCA maximizes the correlation between the DNN outputs, unlike CCA, it cannot directly extract correlated features in both modalities. In addition, DCCA cannot be applied to more than two modalities. \subsection{GCCA} GCCA extends CCA to more than two modalities. GCCA learns projection matrices that map each modality to a shared representation. The objective function of GCCA is as follows: \begin{equation} \begin{gathered} \underset{V_1, \ldots, V_m, G}{\text{minimize }} \sum_{i=1}^{m} \|G-V_i^{\top}X_i\|_F^2 ,\\ \text{s.t. }GG^{\top} = I , \label{EQ:4} \end{gathered} \end{equation} where $G$ denotes the shared representation and $V_i$ indicates a projection matrix for $X_i$. Solving the objective function of GCCA requires an eigendecomposition of an $n \times n$ matrix, whose size grows quadratically with the sample size and leads to memory constraints. Also, unlike DCCA, nonlinear associations between modalities cannot be considered. \subsection{DGCCA} DGCCA is a model that addresses the two limitations of CCA by combining the advantages of GCCA and DCCA. DGCCA learns projection matrices that map each DNN output to a shared representation. The objective function of DGCCA is as follows: \begin{equation} \begin{gathered} \underset{U_1, \ldots, U_m, G}{\text{minimize }} \sum_{i=1}^{m} \|G-U_i^{\top} f_i(X_i)\|_F^2, \\ \text{s.t. }GG^{\top} = I. \label{EQ:5} \end{gathered} \end{equation} $U_i$ and $G$ are trained to reduce the reconstruction error of GCCA, and to update $\theta_i$, gradients are back-propagated through the neural network. The gradient propagated to $f_i(X_i)$ is $2U_i G - 2U_i U_i^\top f_i(X_i)$, and $\theta_i$ can be updated with back-propagation to minimize the objective function of DGCCA. As $\theta_i$ is updated, the value of $f_i(X_i)$ changes; therefore, solving the objective function of DGCCA alternates between updating $U_i$ and $G$ and updating $\theta_i$.
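Concretely, the GCCA problem of Eq. \ref{EQ:4} is solved by an eigendecomposition of the $n \times n$ matrix $M = \sum_i X_i^\top C_{ii}^{-1} X_i$ with $C_{ii} = X_i X_i^\top$, which is the source of the memory constraint noted above. A NumPy sketch on synthetic modalities (all sizes and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3
dims = [8, 6, 5]  # sizes of three toy modalities (illustrative)

z = rng.standard_normal((k, n))  # shared latent signal
Xs = [rng.standard_normal((d, k)) @ z + 0.3 * rng.standard_normal((d, n))
      for d in dims]

def gcca(Xs, k, eps=1e-8):
    """GCCA solution of Eq. 4: the rows of G are the top-k eigenvectors
    of M = sum_i X_i^T C_ii^{-1} X_i; then V_i = C_ii^{-1} X_i G^T."""
    n_s = Xs[0].shape[1]
    M = np.zeros((n_s, n_s))
    Cinvs = []
    for X in Xs:
        Cinv = np.linalg.inv(X @ X.T + eps * np.eye(X.shape[0]))
        Cinvs.append(Cinv)
        M += X.T @ Cinv @ X
    _, E = np.linalg.eigh(M)       # eigenvalues in ascending order
    G = E[:, -k:].T                # shared representation, k x n
    Vs = [Cinv @ X @ G.T for Cinv, X in zip(Cinvs, Xs)]
    return G, Vs

G, Vs = gcca(Xs, k)
# Per-modality reconstruction error of Eq. 4:
errs = [np.linalg.norm(G - V.T @ X) ** 2 for V, X in zip(Vs, Xs)]
```

Each reconstruction error equals $k - \text{Tr}(G P_i G^\top)$ and stays small when the modalities share a common subspace, as in this synthetic example.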
DGCCA has the advantage of being able to capture the nonlinear relationship of each modality. In addition, DGCCA can consider the correlation between more than two modalities. \subsection{DIABLO} DIABLO extends sGCCA, which is a GCCA with an L1-penalty. It differs from sGCCA in that (1) the correlation between linear combinations of multi-omics data is replaced by covariance; and (2) unlike sGCCA, which is an unsupervised method, it is a supervised framework capable of classification by maximizing the covariance between multiple omics datasets, including phenotype information. The objective function of DIABLO is as follows: \begin{equation} \begin{gathered} \underset{V_1, \ldots, V_m, U_y}{\text{maximize }} \sum_{i,j=1; i \neq j}^{m} D_{i,j} \text{ } cov(V_i^\top X_i, V_j^\top X_j) + \sum_{l=1}^m D_{l,y} \text{ } cov(V_l^\top X_l, U_y^\top Y), \\ \text{s.t. } \|V_i\|_2 = 1 \text{ and } \|V_i\|_1=\lambda_i , \|U_y\|_2 = 1 \text{ and } \|U_y\|_1=\lambda_y, \label{EQ:6} \end{gathered} \end{equation} where $D=\{D_{ij}\} \in \mathbb{R}^{(m+1) \times (m+1)}$ is a design matrix that determines whether datasets should be connected. However, DIABLO has a limitation: it only assumes a linear relationship between the selected features to explain the phenotype. \section{Methods} \begin{figure}[ht!] \includegraphics[scale=1]{Fig1.eps} \vskip -2.3in \caption{\textbf{A schematic of SDGCCA.} $X_1,\ldots,X_m$ are the $m$ modalities, and $Y$ is the label information. Deep neural networks $f_1,\ldots,f_m$ operate on $X_1,\ldots,X_m$. The outputs of each modality and $Y$ are multiplied by their respective projection matrices ($U_1,\ldots,U_m,U_y$). Two objective functions search for the optimal networks $f_1,\ldots,f_m$ and projection matrices, which provide both the highest correlation and the lowest prediction error.} \label{fig1} \end{figure} \subsection{The SDGCCA method} The SDGCCA proposed in this study integrates ideas from DGCCA and DIABLO.
SDGCCA incorporates the phenotypes of samples for supervised learning and selects significant features based on CCA. It uses a DNN to consider nonlinear interactions between multi-omics data including the phenotype (Fig.~\ref{fig1}). SDGCCA enables phenotype prediction by adding two elements to DGCCA. First, both the correlation between the modalities and the correlation with the labels are considered. Thus, the shared representation $G$ can be trained to capture label information. The correlation loss function is defined as follows: \begin{equation} \begin{gathered} L_{corr} = \|G-U_y^{\top}Y\|_F^2 + \sum_{i=1}^{m} \|G-U_i^{\top} f_i(X_i)\|_F^2 ,\\ \text{s.t. }GG^{\top} = I , \label{EQ:7} \end{gathered} \end{equation} where $U_y$ denotes a projection matrix for the label $Y$. Second, cross entropy \cite{de2005tutorial}, which is widely used in supervised models, is used to enable the propagation of label information directly to the DNN of each modality. The projection matrix $U_y$ obtained from Eq. \ref{EQ:7} can map the label to the shared representation. In addition, a projection matrix $U_i$ maps each modality to the shared representation. Using the pseudo-inverse of the projection matrix $U_y$, the label $Y$ can be approximated as follows: \begin{equation} \begin{gathered} G \approx U_i^{\top} f_i(X_i) \approx U_y^{\top}Y,\\ Y \approx {(U_y^{\top})^{\dagger}} U_i^{\top} f_i(X_i), \label{EQ:8} \end{gathered} \end{equation} where $(U_y^{\top})^{\dagger}$ denotes the pseudo-inverse of $U_y^{\top}$. Then, let $\hat{Y_i}$=${(U_y^{\top})^{\dagger}} U_i^{\top} f_i(X_i)$. By applying a softmax function to $\hat{Y_i}$, the model is trained using cross entropy. The classification loss is defined as follows: \begin{equation} \begin{gathered} L_{ce} = \sum_{i=1}^m CrossEntropy(Y, Softmax(\hat{Y_i})). \label{EQ:9} \end{gathered} \end{equation} The final label prediction of SDGCCA uses soft voting over the label predictions ($\hat{Y_i}$) of each modality.
The label prediction of SDGCCA is defined as follows: \begin{equation} \hat{Y} = Softmax((\sum_{i=1}^m \hat{Y_i})/m), \label{EQ:10} \end{equation} where $m$ denotes the number of modalities. The optimization of the proposed model consists of three main steps. First, $U_1, \ldots, U_m, U_y$ and $G$ are trained by the correlation loss function ($L_{corr}$). Here, $G$ is obtained by solving an eigenvalue problem. Let $C_{ii}=f_i(X_i) f_i(X_i)^\top, \text{s.t. }i=1, \ldots, m$, $C_{(m+1)(m+1)}=YY^\top$, $P_i=f_i(X_i)^\top C_{ii}^{-1} f_i(X_i) \in \mathbb{R}^{n \times n}$, $P_{m+1}=Y^\top C_{(m+1)(m+1)}^{-1} Y \in \mathbb{R}^{n \times n}$, and $M=\sum_{i=1}^{m+1} P_i$. Then, the rows of $G \in \mathbb{R}^{k \times n}$ are the orthonormal top $k$ eigenvectors of $M$. Once such a $G$ is obtained, the projection matrices follow in closed form as $U_i = C_{ii}^{-1} f_i(X_i) G^\top$ and $U_y = C_{(m+1)(m+1)}^{-1} Y G^\top$. Second, $\theta_i$ of $f_i(\cdot)$ is trained using $L_{corr}$. It can be updated by selecting only the part of $L_{corr}$ related to $\theta_i$ and finding the gradients to back-propagate to $f_i(X_i)$ as follows. \begin{equation} \begin{gathered} \sum_{i=1}^{m} \|G-U_i^{\top} f_i(X_i)\|_F^2,\\ =\sum_{i=1}^{m} \|G-Gf_i(X_i)^\top C_{ii}^{-1} f_i(X_i)\|_F^2,\\ =\sum_{i=1}^{m} \|G(I_n-P_i)\|_F^2,\\ =\sum_{i=1}^{m} \text{Tr}[G(I_n-P_i)G^\top],\\ =\sum_{i=1}^{m} \text{Tr}(I_k) - \text{Tr}(GMG^\top),\\ =mk - \text{Tr}(GMG^\top).\\ \end{gathered} \label{EQ:11} \end{equation} As above, $L_{corr}$ can be minimized by maximizing $\text{Tr}(GMG^\top)$, and the derivative of $\text{Tr}(GMG^\top)$ with respect to $f_i(X_i)$ is shown in DGCCA \cite{benton2017deep} to be $2U_i G - 2U_i U_i^\top f_i(X_i)$. Finally, after substituting the $U_i$ and $U_y$ obtained above into Eq. \ref{EQ:7}, $\theta_i$ is trained using $L_{ce}$. A detailed algorithm for training SDGCCA using Eq. \ref{EQ:7}-\ref{EQ:11} is summarized in Algorithm 1.
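As a concrete illustration of the closed-form Step 1 and the prediction rule of Eq. \ref{EQ:8}-\ref{EQ:10} (Algorithm 1 gives the full training loop), the following NumPy sketch replaces the trained DNN outputs $f_i(X_i)$ with random label-correlated matrices; all sizes, the seed, and the noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, c, m = 120, 2, 2, 3
h_dims = [10, 8, 6]  # output sizes of the three DNNs (illustrative)

# Random label-correlated matrices stand in for the DNN outputs f_i(X_i);
# a real run would use the trained network activations instead.
labels = rng.integers(0, c, size=n)
Y = np.eye(c)[labels].T                       # c x n one-hot labels
Hs = [rng.standard_normal((d, c)) @ Y + 0.4 * rng.standard_normal((d, n))
      for d in h_dims]

def softmax(A):
    A = A - A.max(axis=0, keepdims=True)
    e = np.exp(A)
    return e / e.sum(axis=0, keepdims=True)

# --- Step 1 (closed form): G's rows are the top-k eigenvectors of
# M = sum of P_i over the m modalities plus the label view.
eps = 1e-8
views = Hs + [Y]
Cinvs = [np.linalg.inv(V @ V.T + eps * np.eye(V.shape[0])) for V in views]
M = sum(V.T @ Cinv @ V for V, Cinv in zip(views, Cinvs))
_, E = np.linalg.eigh(M)
G = E[:, -k:].T                               # k x n shared representation
Us = [Cinv @ V @ G.T for V, Cinv in zip(views, Cinvs)]
U_mods, U_y = Us[:m], Us[-1]

# --- Eq. 8-10: per-modality label approximation and soft voting.
Y_hats = [np.linalg.pinv(U_y.T) @ U.T @ H for U, H in zip(U_mods, Hs)]
Y_pred = softmax(sum(Y_hats) / m)             # c x n class probabilities
acc = (Y_pred.argmax(axis=0) == labels).mean()
```

Because the stand-in outputs carry a strong label signal, the soft-voted prediction recovers the labels almost perfectly in this toy setting.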
{\renewcommand{\arraystretch}{1.5} \begin{tabularx}{\textwidth}{ @{} X @{} } \toprule \normalsize{\textbf{Algorithm 1:} Training the proposed model }\\ \midrule \hspace{0em}\textbf{Input:} Training dataset $X=[X_1, X_2, \ldots, X_m]$, regularization rate $\alpha$, learning rate $\beta$, and max iterations T\\ \hspace{0em}\textbf{Output:} Projection matrices $U_1, \ldots, U_m, U_y$, parameters $\theta_i$ of $f_i$ \\ \hspace{1em}t $ = 1$ \\ \hspace{1em}\textbf{while:} Validation loss does not converge or t $\leqq$ T\\ \hspace{3em}\textbf{Step 1. Calculate $U_1, \ldots, U_m, U_y, G$} \\ \hspace{4em}\begin{tabular}{ @{\hspace{\tabcolsep}} | l } \hspace{0.5em} $L_{corr} = \|G-U_y^{\top}Y\|_F^2 + \sum_{i=1}^{m} \|G-U_i^{\top} f_i(X_i)\|_F^2$ \\ \hspace{0.5em} $U_1, \ldots, U_m, U_y, G = \underset{U_1, \ldots, U_m, U_y, G}{\text{argmin }} L_{corr}$ \hspace{4em}\end{tabular} \\ \hspace{3em}\textbf{Step 2. Training $\theta_i$ using $L_{corr}$} \\ \hspace{4em}\begin{tabular}{ @{\hspace{\tabcolsep}} | l } \hspace{0.5em}$\nabla_{f_i(X_i)} L_{corr} \leftarrow{} U_iU_i^\top f_i(X_i)-U_iG$ \\ \hspace{0.5em}$\theta_i \leftarrow{} (1-\alpha)\theta_i - \beta\nabla_{\theta_i} \nabla_{f_i(X_i)} L_{corr}$ \hspace{4em}\end{tabular} \\ \hspace{3em}\textbf{Step 3. Training $\theta_i$ using $L_{ce}$} \\ \hspace{4em}\begin{tabular}{ @{\hspace{\tabcolsep}} | l } \hspace{0.5em}$\hat{Y}_i \leftarrow{} (U_y^{\top})^{\dagger} U_i^{\top} f_i(X_i)$\\ \hspace{0.5em}$L_{ce} = \sum_{i=1}^m CrossEntropy(Y, Softmax(\hat{Y}_i))$\\ \hspace{0.5em}$\theta_i \leftarrow{} (1-\alpha)\theta_i - \beta\nabla_{\theta_i} L_{ce}$ \hspace{4em}\end{tabular} \\ \hspace{3em} t $ \leftarrow{}$t $+ 1$ \\ \hspace{1em}\textbf{end while}\\ \bottomrule \end{tabularx}} \subsection{Identification of multi-omics biomarkers} SDGCCA is trained by maximizing the correlation of the DNN outputs, and the projection matrices can be used to select the most correlated output from the DNN of each modality.
Because SDGCCA, as a CCA-based model, uses eigendecomposition to obtain the projection matrices, the correlation value of the first component ($U_i[:,1]$) is the largest among the values mapped to the shared representation from each modality. Therefore, the most correlated DNN output corresponds to the maximum absolute coefficient of the first column of the projection matrix (${\text{argmax }\lvert U_i[:,1] \rvert}$). The most correlated output of each DNN is as follows: \begin{equation} f_i(\cdot)[\text{argmax }\lvert U_i[:,1]\rvert, :] \label{EQ:12} \end{equation} However, unlike CCA and GCCA, the model is difficult to interpret because of the DNN. Thus, we used SHAP to select input features related to the most correlated output of each DNN. SHAP \cite{lundberg2017unified} calculates feature importance using SHAP values, which satisfy desirable properties (local accuracy, missingness, and consistency) for each prediction. Specifically, we used Deep SHAP, which is tailored to DNNs and effectively combines SHAP values calculated for smaller components of a DNN into SHAP values for the whole DNN. \section{Results} \subsection{Datasets} We applied the proposed method to an Alzheimer's disease (AD) classification task using multi-omics data. Three types of omics data (i.e., mRNA, DNA methylation, and microRNA (miRNA)) and clinical data were obtained from the ROSMAP cohort in the AMP-AD Knowledge Portal (\url{https://adknowledgeportal.synapse.org/}). We downloaded mRNA data that were quantile-normalized to fragments per kilobase of transcript per million mapped reads (FPKM), with potential batch effects removed using ComBat \cite{johnson2007adjusting}. The $\beta$-values of the downloaded DNA methylation data were measured using the Infinium HumanMethylation450 BeadChip, and missing $\beta$-values were imputed using a k-nearest neighbor algorithm.
We downloaded miRNA data that were normalized using variance-stabilizing normalization, with potential batch effects removed using ComBat \cite{johnson2007adjusting}. AD patients (n = 207) and normal controls (n = 169) with gene expression (GE), DNA methylation (ME), and miRNA expression (MI) profiles were included. Normalized FPKM values of the GE profiles were log2-transformed. For ME data, CpG sites located in promoter regions (TSS200 or TSS1500) were mapped to the corresponding gene, and the $\beta$-values of all overlapping genes were averaged. Finally, 18,164 GE features, 19,353 ME features, and 309 MI features were obtained. To further measure the performance of the proposed method, we used kidney renal clear cell carcinoma (KIRC) data collected from The Cancer Genome Atlas (TCGA) for early- versus late-stage classification. The TCGA level-3 data on gene expression (Illumina mRNAseq), DNA methylation (Illumina HumanMethylation450 BeadArray), and miRNA expression (IlluminaHiSeq miRNAseq) were obtained. The methylation data used in this study were preprocessed according to \cite{ma2020diagnostic}. Finally, the KIRC data comprise 313 samples (184 early-stage and 129 late-stage) with 16,406 GE, 16,459 ME, and 342 MI features. \subsection{Existing methods for performance comparison} We compared the classification performance of SDGCCA with the following ten existing methods. Here, we selected widely utilized machine learning or deep learning models and CCA-based multi-omics integration methods to show how SDGCCA can contribute to the CCA framework. We selected (1) support vector machine (SVM), (2) extreme gradient boosting (XGB) \cite{chen2015xgboost}, (3) logistic regression (LR), and (4) random forest (RF) as machine learning methods, and (5) DNN as a deep learning method.
For CCA-based methods, (6) GCCA, (7) DGCCA, and (8) DIABLO \cite{singh2019diablo} were selected. Because GCCA and DGCCA are unsupervised learning models, SVM was used as an additional classification model. In addition, the performance was compared with (9) Multi-Omics Graph cOnvolutional NETworks (MOGONET) \cite{wang2021mogonet} and (10) SMSPL \cite{9146338}, which are recently released multi-omics integration algorithms, although they are not CCA-based models. The performance of all combinations of GE, ME, and MI of ROSMAP, namely GE+ME, GE+MI, ME+MI, and GE+ME+MI, was compared. The performance on GE+ME+MI of KIRC was also compared. We used accuracy (ACC), F1 score (F1), area under the receiver operating characteristic curve (AUC), and Matthews correlation coefficient (MCC) \cite{elith2006novel} as metrics for evaluating classification performance. For all metrics, the mean and standard deviation over five-fold cross-validation (CV) were calculated. Each CV fold used 60\% of the samples as a training set, 20\% as a validation set, and 20\% as a test set, and the hyperparameters of all models were selected based on the MCC of the validation set. For SDGCCA, hyperparameters, including “Learning rate” from the set \{$1\mathrm{e}{-4}$, $1\mathrm{e}{-5}$\}, “L2 regularization term on weights” from the set \{0, $1\mathrm{e}{-2}$, $1\mathrm{e}{-4}$\}, and “dimension of shared representation” from the set \{1, 2, $ \ldots$, 10\}, were selected using the validation set. Details about the hyperparameters of all other models and the five-fold cross-validation are described in the Supplementary Material. SDGCCA is trained using correlation and classification losses. To see how each loss affects classification and feature selection, we performed ablation studies by measuring the performance of two additional models. First, SDGCCA-$\text{G}_{corr}$ is a model excluding Step 2 of Algorithm 1 in the training process.
Second, SDGCCA-$\text{G}_{clf}$ is a model excluding Step 3 of Algorithm 1 in the training process. \subsection{Evaluation of classification performances} The results of the classification of AD patients and normal controls are summarized in Tables \ref{tab:Table2}, \ref{tab:Table3}, \ref{tab:Table4}, and \ref{tab:Table5}. SDGCCA showed the best performance in 10 out of 16 cases, the exceptions being AUC in GE+ME, F1 in GE+MI, ACC, F1, and MCC in ME+MI, and F1 in GE+ME+MI. In addition, for SDGCCA, the integration of all three omics data (GE+ME+MI) outperformed the integration of any two omics data. Interestingly, the integration of ME and MI showed different results from the other two-omics combinations. For ME+MI, LR performed better than the other machine learning models (SVM, XGB, and RF) and had the highest MCC value. In addition, SMSPL was the best-performing model for the ACC and F1 measurements. Considering that LR extracts only the linear relationship between multi-omics data and that SMSPL is an LR-based model, the importance of nonlinearity in ME+MI appears to be less than that in other combinations of omics data.
\begin{table*}[!hb] \begin{footnotesize} \caption{Performance comparison of AD classification using GE+ME in ROSMAP multi-omics data.} \begin{center} \begin{tabular}{lcccc} \toprule \textsc{Method} & \textsc{ACC} & \textsc{F1} & \textsc{AUC} & \textsc{MCC} \\ \midrule SVM & 0.676 ± 0.044 & 0.711 ± 0.036 & 0.751 ± 0.055 & 0.346 ± 0.095 \\ XGB & 0.643 ± 0.063 & 0.686 ± 0.059 & 0.697 ± 0.053 & 0.275 ± 0.131 \\ LR & 0.674 ± 0.067 & 0.674 ± 0.072 & 0.750 ± 0.071 & 0.363 ± 0.133 \\ RF & 0.602 ± 0.058 & 0.687 ± 0.047 & 0.678 ± 0.059 & 0.179 ± 0.134 \\ DNN & 0.697 ± 0.037 & 0.695 ± 0.035 & 0.785 ± 0.038 & 0.412 ± 0.079 \\ GCCA+SVM & 0.665 ± 0.054 & 0.699 ± 0.046 & 0.710 ± 0.067 & 0.323 ± 0.111 \\ DGCCA+SVM & 0.609 ± 0.035 & 0.700 ± 0.021 & 0.673 ± 0.072 & 0.194 ± 0.080 \\ DIABLO & 0.633 ± 0.059 & 0.637 ± 0.062 & 0.702 ± 0.050 & 0.277 ± 0.120 \\ MOGONET & 0.670 ± 0.022 & 0.698 ± 0.050 & 0.698 ± 0.042 & 0.332 ± 0.034 \\ SMSPL & 0.683 ± 0.071 & 0.723 ± 0.056 & 0.751 ± 0.084 & 0.356 ± 0.155 \\ \midrule SDGCCA-$\text{G}_{corr}$ & 0.721 ± 0.050 & 0.724 ± 0.055 & \fontseries{b}\selectfont 0.788 ± 0.043 & 0.453 ± 0.095 \\ SDGCCA-$\text{G}_{clf}$ & 0.691 ± 0.034 & 0.693 ± 0.034 & 0.765 ± 0.046 & 0.396 ± 0.07 \\ \midrule SDGCCA & \fontseries{b}\selectfont 0.729 ± 0.035 & \fontseries{b}\selectfont 0.728 ± 0.037 & 0.782 ± 0.019 & \fontseries{b}\selectfont 0.474 ± 0.069 \\\bottomrule \multicolumn{5}{l}{\scriptsize The best performances are marked in bold.} \end{tabular} \label{tab:Table2} \end{center} \end{footnotesize} \end{table*} \begin{table*}[!hb] \begin{footnotesize} \begin{center} \caption{Performance comparison of AD classification using GE+MI in ROSMAP multi-omics data.} \begin{tabular}{lcccc} \toprule \textsc{Method} & \textsc{ACC} & \textsc{F1} & \textsc{AUC} & \textsc{MCC} \\ \midrule SVM & 0.679 ± 0.042 & 0.714 ± 0.036 & 0.755 ± 0.054 & 0.351 ± 0.089 \\ XGB & 0.647 ± 0.062 & 0.689 ± 0.057 & 0.704 ± 0.057 & 0.283 ± 0.130 \\ LR & 0.680 ± 0.069 & 0.681 ± 0.070 & 0.758 ± 0.070 
& 0.375 ± 0.140 \\ RF & 0.602 ± 0.054 & 0.683 ± 0.046 & 0.678 ± 0.056 & 0.181 ± 0.126 \\ DNN & 0.689 ± 0.048 & 0.695 ± 0.049 & 0.765 ± 0.065 & 0.387 ± 0.095 \\ GCCA+SVM & 0.648 ± 0.044 & 0.700 ± 0.044 & 0.693 ± 0.060 & 0.288 ± 0.092 \\ DGCCA+SVM & 0.633 ± 0.071 & 0.714 ± 0.047 & 0.617 ± 0.095 & 0.244 ± 0.154 \\ DIABLO & 0.662 ± 0.060 & 0.672 ± 0.063 & 0.736 ± 0.066 & 0.330 ± 0.116 \\ MOGONET & 0.696 ± 0.055 & \fontseries{b}\selectfont 0.722 ± 0.055 & 0.759 ± 0.040 & 0.387 ± 0.112 \\ SMSPL & 0.691 ± 0.079 & 0.719 ± 0.056 & 0.760 ± 0.062 & 0.378 ± 0.169 \\ \midrule SDGCCA-$\text{G}_{corr}$ & 0.697 ± 0.047 & 0.702 ± 0.053 & 0.757 ± 0.051 & 0.404 ± 0.091 \\ SDGCCA-$\text{G}_{clf}$ & 0.667 ± 0.039 & 0.692 ± 0.030 & 0.739 ± 0.043 & 0.331 ± 0.082 \\ \midrule SDGCCA & \fontseries{b}\selectfont 0.699 ± 0.017 & 0.697 ± 0.015 & \fontseries{b}\selectfont 0.796 ± 0.033 & \fontseries{b}\selectfont 0.416 ± 0.035 \\\bottomrule \multicolumn{5}{l}{\scriptsize The best performances are marked in bold.} \end{tabular} \label{tab:Table3} \end{center} \end{footnotesize} \end{table*} \begin{table*}[!hb] \begin{footnotesize} \begin{center} \caption{Performance comparison of AD classification using ME+MI in ROSMAP multi-omics data.} \begin{tabular}{lcccc} \toprule \textsc{Method} & \textsc{ACC} & \textsc{F1} & \textsc{AUC} & \textsc{MCC} \\ \midrule SVM & 0.678 ± 0.040 & 0.713 ± 0.036 & 0.753 ± 0.052 & 0.349 ± 0.085 \\ XGB & 0.653 ± 0.059 & 0.697 ± 0.055 & 0.708 ± 0.054 & 0.296 ± 0.123 \\ LR & 0.683 ± 0.064 & 0.684 ± 0.065 & 0.758 ± 0.067 & \fontseries{b}\selectfont 0.380 ± 0.130 \\ RF & 0.597 ± 0.051 & 0.682 ± 0.043 & 0.670 ± 0.058 & 0.169 ± 0.120 \\ DNN & 0.644 ± 0.033 & 0.637 ± 0.045 & 0.741 ± 0.031 & 0.305 ± 0.061 \\ GCCA+SVM & 0.631 ± 0.044 & 0.699 ± 0.037 & 0.672 ± 0.065 & 0.249 ± 0.096 \\ DGCCA+SVM & 0.561 ± 0.031 & 0.681 ± 0.027 & 0.548 ± 0.071 & 0.073 ± 0.079 \\ DIABLO & 0.686 ± 0.048 & 0.701 ± 0.051 & 0.755 ± 0.072 & 0.374 ± 0.095 \\ MOGONET & 0.668 ± 0.030 & 0.708 ± 0.040 & 0.708 
± 0.028 & 0.329 ± 0.048 \\ SMSPL & \fontseries{b}\selectfont 0.686 ± 0.032 & \fontseries{b}\selectfont 0.724 ± 0.025 & 0.747 ± 0.054 & 0.365 ± 0.068 \\ \midrule SDGCCA-$\text{G}_{corr}$ & 0.678 ± 0.050 & 0.679 ± 0.066 & \fontseries{b}\selectfont 0.764 ± 0.052 & 0.369 ± 0.093 \\ SDGCCA-$\text{G}_{clf}$ & 0.662 ± 0.012 & 0.681 ± 0.021 & 0.733 ± 0.029 & 0.325 ± 0.027 \\ \midrule SDGCCA & 0.684 ± 0.046 & 0.693 ± 0.051 & \fontseries{b}\selectfont 0.764 ± 0.039 & 0.372 ± 0.089 \\\bottomrule \multicolumn{5}{l}{\scriptsize The best performances are marked in bold.} \end{tabular} \label{tab:Table4} \end{center} \end{footnotesize} \end{table*} \begin{table*}[!th] \begin{footnotesize} \begin{center} \caption{Performance comparison of AD classification using GE+ME+MI in ROSMAP multi-omics data.} \begin{tabular}{lcccc} \toprule \textsc{Method} & \textsc{ACC} & \textsc{F1} & \textsc{AUC} & \textsc{MCC} \\ \midrule SVM & 0.679 ± 0.040& 0.714 ± 0.035 & 0.756 ± 0.050 & 0.352 ± 0.084 \\ XGB & 0.655 ± 0.060 & 0.698 ± 0.055 & 0.711 ± 0.055 & 0.299 ± 0.124 \\ LR & 0.683 ± 0.061 & 0.683 ± 0.063 & 0.759 ± 0.064 & 0.380 ± 0.124 \\ RF & 0.603 ± 0.050 & 0.684 ± 0.041 & 0.672 ± 0.055 & 0.181 ± 0.116 \\ DNN & 0.707 ± 0.039 & 0.701 ± 0.037 & 0.779 ± 0.043 & 0.437 ± 0.079 \\ GCCA+SVM & 0.628 ± 0.042 & 0.702 ± 0.033 & 0.669 ± 0.065 & 0.240 ± 0.094 \\ DGCCA+SVM & 0.569 ± 0.018 & 0.680 ± 0.037 & 0.615 ± 0.055 & 0.104 ± 0.034 \\ DIABLO & 0.673 ± 0.060 & 0.679 ± 0.064 & 0.739 ± 0.044 & 0.354 ± 0.117 \\ MOGONET & 0.684 ± 0.040 & 0.736 ± 0.012 & 0.692 ± 0.059 & 0.359 ± 0.086 \\ SMSPL & 0.699 ± 0.047 & 0.726 ± 0.027 & 0.777 ± 0.068 & 0.397 ± 0.110 \\ \midrule SDGCCA-$\text{G}_{corr}$ & \fontseries{b}\selectfont 0.731 ± 0.035 & \fontseries{b}\selectfont 0.742 ± 0.031 & 0.797 ± 0.034 & 0.469 ± 0.075 \\ SDGCCA-$\text{G}_{clf}$ & 0.678 ± 0.047 & 0.682 ± 0.050 & 0.753 ± 0.061 & 0.367 ± 0.089 \\ \midrule SDGCCA & \fontseries{b}\selectfont 0.731 ± 0.050 & 0.729 ± 0.056 & \fontseries{b}\selectfont 0.805 ± 
0.043 & \fontseries{b}\selectfont 0.479 ± 0.094 \\ \bottomrule \multicolumn{5}{l}{\scriptsize The best performances are marked in bold.} \end{tabular} \label{tab:Table5} \end{center} \end{footnotesize} \end{table*} In all the experiments, SVM, which uses the original input data, performed better than GCCA+SVM and DGCCA+SVM. In addition, SDGCCA performed better than GCCA+SVM and DGCCA+SVM, except for F1 in GE+MI. This result indicates that there is a risk of losing information related to classification when dimension reduction considers only the correlation. In most cases, the performance of SDGCCA-$\text{G}_{corr}$ was better than that of SDGCCA-$\text{G}_{clf}$, and the performance improved when the correlation and classification losses were combined. The results of the classification of early-stage and late-stage KIRC are shown in Table \ref{tab:Table6}. SDGCCA showed the best performance in two out of four cases, the exceptions being F1 and AUC. F1 and AUC were highest for the LR-based SMSPL, and LR also performed better than the other machine learning models (SVM, XGB, and RF) and DNN on most metrics. In KIRC, all performances of DGCCA+SVM were higher than those of GCCA+SVM, and, consistent with the ROSMAP results, all performances of SDGCCA-$\text{G}_{corr}$ were higher than those of SDGCCA-$\text{G}_{clf}$.
\begin{table*}[!th] \begin{footnotesize} \begin{center} \caption{Performance comparison of early- and late-stage classification using GE+ME+MI in KIRC multi-omics data.} \begin{tabular}{lcccc} \toprule \textsc{Method} & \textsc{ACC} & \textsc{F1} & \textsc{AUC} & \textsc{MCC} \\ \midrule SVM & 0.713 ± 0.040 & 0.708 ± 0.039 & 0.790 ± 0.035 & 0.401 ± 0.082 \\ XGB & 0.693 ± 0.055 & 0.688 ± 0.057 & 0.778 ± 0.066 & 0.362 ± 0.125 \\ LR & 0.738 ± 0.053 & 0.738 ± 0.052 & 0.784 ± 0.039 & 0.480 ± 0.106 \\ RF & 0.687 ± 0.024 & 0.661 ± 0.032 & 0.770 ± 0.031 & 0.340 ± 0.054 \\ DNN & 0.687 ± 0.023 & 0.715 ± 0.025 & 0.763 ± 0.054 & 0.418 ± 0.041 \\ GCCA+SVM & 0.652 ± 0.057 & 0.615 ± 0.073 & 0.678 ± 0.086 & 0.247 ± 0.159 \\ DGCCA+SVM & 0.665 ± 0.067 & 0.642 ± 0.081 & 0.684 ± 0.106 & 0.287 ± 0.167 \\ DIABLO & 0.719 ± 0.052 & 0.760 ± 0.044 & 0.791 ± 0.030 & 0.425 ± 0.117 \\ MOGONET & 0.661 ± 0.095 & 0.728 ± 0.087 & 0.745 ± 0.061 & 0.327 ± 0.123 \\ SMSPL & 0.710 ± 0.069 & \fontseries{b}\selectfont 0.763 ± 0.052 & \bftab0.808 ± 0.067 & 0.394 ± 0.151 \\ \midrule SDGCCA-$\text{G}_{corr}$ & 0.741 ± 0.063 & 0.742 ± 0.062 & 0.800 ± 0.058 & 0.479 ± 0.129 \\ SDGCCA-$\text{G}_{clf}$ & 0.735 ± 0.060 & 0.734 ± 0.057 & 0.794 ± 0.061 & 0.472 ± 0.122\\ \midrule SDGCCA & \bftab0.745 ± 0.035 & 0.745 ± 0.034 & 0.793 ± 0.084 & \fontseries{b}\selectfont 0.484 ± 0.069 \\ \bottomrule \multicolumn{5}{l}{\scriptsize The best performances are marked in bold.} \end{tabular} \label{tab:Table6} \end{center} \end{footnotesize} \end{table*} To statistically assess the performance of our model against the other models, we performed a paired $t$-test using the five-fold cross-validation MCC values for GE+ME+MI of ROSMAP and KIRC (Table \ref{tab:Table7}). We found that SDGCCA statistically outperformed its competing methods in 15 of the 20 cases ($p$-value $<$ 0.05).
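The fold-paired comparison above can be sketched with the standard paired $t$ statistic; the per-fold MCC values below are hypothetical placeholders (not the paper's fold-level results), and only the test mechanics are illustrated:

```python
import math
import statistics

def paired_t(a, b):
    """Paired t statistic over matched CV folds:
    t = mean(d) / (stdev(d) / sqrt(f)), with f - 1 degrees of freedom."""
    d = [x - y for x, y in zip(a, b)]
    f = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(f)), f - 1

# Hypothetical per-fold MCC values for SDGCCA and one baseline
# (illustrative numbers, NOT the paper's actual fold-level results).
mcc_sdgcca = [0.52, 0.41, 0.49, 0.44, 0.54]
mcc_baseline = [0.43, 0.35, 0.40, 0.38, 0.44]

t, df = paired_t(mcc_sdgcca, mcc_baseline)
T_CRIT = 2.776  # two-sided 5% critical value of Student's t with df = 4
significant = abs(t) > T_CRIT
```

With five folds the test has only four degrees of freedom, so a fairly large and consistent per-fold improvement is needed to reach significance.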
\begin{table*}[!th] \begin{footnotesize} \begin{center} \caption{Statistical significances of performance improvements of SDGCCA against other methods.} \begin{tabular}{lcc} \toprule \textsc{Methods} & \textsc{ROSMAP} & \textsc{KIRC} \\ \midrule SVM & 6.57E-02 & \textbf{7.22E-04} \\ XGB & \textbf{2.82E-02} & \textbf{2.28E-02} \\ LR & \textbf{1.68E-03} & 4.40E-01 \\ RF & 5.65E-02 & \textbf{1.09E-02} \\ DNN & \textbf{8.39E-03} & \textbf{3.28E-02} \\ GCCA+SVM & 9.26E-02 & \textbf{4.17E-03} \\ DGCCA+SVM & \textbf{1.72E-02} & \textbf{1.35E-02} \\ DIABLO & \textbf{1.67E-03} & \textbf{2.36E-02} \\ MOGONET & \textbf{1.59E-02} & \textbf{3.51E-02} \\ SMSPL & \textbf{1.59E-02} & 5.24E-02\\ \bottomrule \multicolumn{3}{l}{\scriptsize Values with a $p$-value $<$ 0.05 are marked in bold.} \end{tabular} \label{tab:Table7} \end{center} \end{footnotesize} \end{table*} \begin{figure}[hp!] \includegraphics[width=\textwidth]{Fig3.eps} \caption{ \textbf{$t$-SNE plots for CCA-based methods including GCCA, DGCCA, DIABLO, and SDGCCA.} (A) ROSMAP data and (B) TCGA KIRC data. Each method was used to compute projections for the gene expression, DNA methylation, and miRNA data. The circle and cross symbols represent training and test samples, respectively. Samples are colored according to labels. } \label{fig2} \end{figure} We projected each omics dataset and the concatenated multi-omics data onto a low-dimensional space via dimension reduction with t-distributed stochastic neighbor embedding (t-SNE). Figure~\ref{fig2} visualizes the projections of the multi-omics data for each method. As expected, the supervised learning-based methods separated the classes more clearly. Among the supervised learning-based models, the nonlinear SDGCCA separated the classes more clearly than the linear DIABLO. To further demonstrate the effects of the hyperparameter $k$ (the dimension of the shared representation) on SDGCCA, we trained SDGCCA under a wide range of $k$ using the ROSMAP data.
Figure S2 shows the embedding performance, correlation sum, and classification performance of SDGCCA when $k$ varies from 1 to 10. We observed that the hyperparameter $k$ did not influence the embedding and classification performance of SDGCCA, as the performance merely fluctuated with the change of $k$. However, we observed that the correlation sum peaked at $k=7$ and decreased thereafter. This experiment is described in detail in the Supplementary material. \subsection{Classification performance of the identified biomarkers} We compared the feature selection performance of the CCA-based methods to demonstrate that the set of relevant features of the DNN outputs with high correlation between the modalities is effective in classification. For each CV fold, SHAP selected 300 out of 18,164 features for GE, 300 out of 19,353 features for ME, and 30 out of 309 features for MI using only the training data of the ROSMAP dataset; that is, the correlated features were selected from the training data alone. Performance was evaluated using LR, which achieved the best performance among the machine learning models in the GE+ME+MI experiments. To confirm whether the features selected by SDGCCA are important for classification, LRs with these features were compared with LRs using randomly selected features and features selected by the CCA-based models GCCA and DGCCA. The comparisons were repeated 100 times.
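The per-modality selection step, keeping the top-$N$ features ranked by mean absolute SHAP value on the training data, can be sketched as follows (the feature names and SHAP values below are hypothetical, chosen only to make the ranking visible):

```python
def top_features_by_shap(shap_values, feature_names, n):
    """Rank features by mean absolute SHAP value across samples; keep the top n."""
    n_samples = len(shap_values)
    scored = []
    for j, name in enumerate(feature_names):
        mean_abs = sum(abs(row[j]) for row in shap_values) / n_samples
        scored.append((mean_abs, name))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [name for _, name in scored[:n]]

# Hypothetical SHAP matrix: 3 samples x 4 gene-expression features
shap_matrix = [[0.10, -0.50, 0.02, 0.30],
               [0.20, -0.40, 0.01, 0.10],
               [-0.10, 0.60, 0.03, 0.20]]
genes = ["g1", "g2", "g3", "g4"]
selected = top_features_by_shap(shap_matrix, genes, 2)
```

With the placeholder matrix above, the selection keeps the two features whose attributions have the largest mean magnitude regardless of sign, mirroring the 300/300/30 per-modality cutoffs used in the text.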
\begin{table*}[!ht] \begin{footnotesize} \begin{center}
\caption{ROSMAP classification performance comparison of important features selected by CCA-based methods.} \label{table1}
\begin{tabular}{lcccc}
\toprule
\textsc{Feature Set} & \textsc{ACC} & \textsc{F1} & \textsc{AUC} & \textsc{MCC} \\
\midrule
All Features & 0.683 ± 0.061 & 0.683 ± 0.063 & \fontseries{b}\selectfont 0.759 ± 0.064 & 0.380 ± 0.124 \\
Random Features & 0.661 ± 0.043 & 0.674 ± 0.047 & 0.727 ± 0.045 & 0.328 ± 0.086 \\
GCCA & 0.630 ± 0.043 & 0.638 ± 0.036 & 0.693 ± 0.045 & 0.269 ± 0.089 \\
DGCCA & 0.669 ± 0.040 & 0.678 ± 0.048 & 0.739 ± 0.037 & 0.345 ± 0.076 \\
\midrule
SDGCCA-$\text{G}_{corr}$ & 0.646 ± 0.045 & 0.661 ± 0.048 & 0.716 ± 0.053 & 0.294 ± 0.092 \\
SDGCCA-$\text{G}_{clf}$ & 0.650 ± 0.021 & 0.650 ± 0.020 & 0.739 ± 0.040 & 0.315 ± 0.048 \\
\midrule
SDGCCA & \fontseries{b}\selectfont 0.689 ± 0.045 & \fontseries{b}\selectfont 0.698 ± 0.042 & 0.755 ± 0.043 & \fontseries{b}\selectfont 0.386 ± 0.095 \\
\bottomrule
\multicolumn{5}{l}{\scriptsize The best performances are marked in bold.}
\end{tabular}
\label{tab:Table8}
\end{center} \end{footnotesize} \end{table*}
Table \ref{tab:Table8} presents the classification performance of the important features selected by all the competing methods. All features, as well as the feature sets obtained from DGCCA and SDGCCA, performed better than the randomly selected features, while the other feature sets did not. The feature set from SDGCCA showed better performance than using all features, except for AUC. Thus, it can be observed that SDGCCA can identify important features for AD classification using multi-omics data. SDGCCA-$\text{G}_{corr}$ and SDGCCA-$\text{G}_{clf}$ performed worse than the randomly selected features. Regarding SDGCCA-$\text{G}_{corr}$, the gradient associated with the correlation is not propagated to the weights and biases of the DNN of each modality, indicating a weaker ability to select correlated features across the multi-omics data.
When we calculated the average of the correlations between the first components of the shared representation of each modality in the training set, SDGCCA-$\text{G}_{corr}$ had a correlation coefficient of 0.462, which is much lower than the correlation coefficient of 0.954 from SDGCCA-$\text{G}_{clf}$ and the correlation coefficient of 0.956 from SDGCCA. Regarding SDGCCA-$\text{G}_{clf}$, the correlation value is only slightly lower than that of SDGCCA, but it shows lower classification performance. Accordingly, $L_{corr}$ alone cannot propagate sufficient label information to the DNN of each modality, and it is important to use $L_{clf}$ together with it. \subsection{Pathway analysis using the SHAP values} To further illustrate the applicability of the proposed method, we performed a pathway analysis. For this analysis, all ROSMAP samples were used for training SDGCCA, with the hyperparameters having the highest average MCC values over the five folds of the cross-validation. We clustered features with similar patterns using all the samples and the features with variable SHAP values (Fig.~\ref{fig3}(A)). Pathway enrichment analysis was performed based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) database \cite{kanehisa2000kegg} with the GE and ME features of each cluster. Fig.~\ref{fig3}(A) illustrates the enriched KEGG pathways with adjusted $p$-values of less than 0.05. Cluster H was significantly enriched in the KEGG pathway related to olfactory transduction (adjusted $p$-value $=$ 2.E-37). Previous studies \cite{zou2016olfactory} have revealed that AD is closely related to olfactory dysfunction. We analyzed clusters J and S in detail using the ClueGO \cite{bindea2009cluego} Cytoscape plugin to show these relationships (Fig.~\ref{fig3}(B) and (C)). In cluster J, we identified that two genes related to cellular senescence, HIPK3 and TGFBR1, were clustered together with miR-885-5p, which targets them.
Cellular senescence is widely known to be associated with AD \cite{boccardi2015cellular,masaldan2019cellular,reddy2017micrornas}. In addition, the IL6, IL10 and RAF1 genes in the cluster J network are also well-known AD-related genes \cite{arosio2004interleukin, mei2006distribution}. Cluster S was significantly enriched in the KEGG pathways related to AD (adjusted $p$-value $=$ 1.E-03) and neurodegeneration (adjusted $p$-value $=$ 5.E-03). CASP8 and PLCB1 are known AD biomarkers \cite{rehker2017caspase, shimohama1995signal}. AXIN1 and PPP3CA are also known AD-related genes \cite{lloret2011amyloid, whelan2019multiplex}. \begin{figure}[ht!] \includegraphics[width=\textwidth]{Fig2.eps} \caption{ \textbf{The results of pathway analysis.} (A) Clustered heatmap of SHAP values, with red denoting an increase and green denoting a decrease. The Kyoto Encyclopedia of Genes and Genomes pathways enriched in each of the 20 clusters are represented with a heatmap. (B) The pathway network of cluster J. (C) The pathway network of cluster S. Yellow circles denote well-known Alzheimer's disease-related genes. } \label{fig3} \end{figure} \section{Conclusion} In this study, we proposed SDGCCA, a CCA-based integration method for multi-omics data aimed at classification and at the identification of significant multi-omics biomarkers. SDGCCA is trained to consider the nonlinear and complex interactions between multi-omics data using the DGCCA loss, which maximizes the correlation between the DNN outputs. In addition, because the label can be predicted using a projection matrix, the model can be trained to propagate label information to each DNN through the cross-entropy loss. SDGCCA performed better in the AD classification task using gene expression, DNA methylation, and miRNA expression than the other machine learning models, DNN, DIABLO, MOGONET, and SMSPL. We also showed that SDGCCA can select an important feature set related to a phenotype by comparing it with other feature selection models.
Using SHAP values, we clustered the features of the multi-omics data and showed, through pathway analysis, that the approach is applicable to AD-related biomarker discovery. In conclusion, SDGCCA is a multi-omics integration algorithm with high classification performance and the ability to select a set of mutually contributing features from different multi-omics datasets. \section*{Software} The source code of SDGCCA is available at https://github.com/DMCB-GIST/SDGCCA. \section*{Acknowledgments} This work was supported by the Bio \& Medical Technology Development Program of NRF funded by the Korean government (MSIT) (NRF-2018M3C7A1054935) and by an Institute of Information \& communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-01842, Artificial Intelligence Graduate School Program (GIST)). \bibliographystyle{unsrt}
\section{Introduction}\label{intro} The attempt to apply methods of physics to describe various features of societies has a long history, stretching as far back as Thomas Hobbes and William Petty \cite{b02,b04}. Although the potential for misapplications is far from negligible \cite{b04,m89}, over the past two decades tools, methods and ideas originally developed to understand the fabric of the physical universe have been increasingly applied by physicists to describe and understand the inner workings of societies \cite{s05,ds08,doyne05,ga04,ms00,mc04,ri07,r02}. What started simply as an exercise in statistical mechanics, where complex behavior arises from simple rules through the interaction of a large number of components, has grown steadily due to the increasing interest of physicists in interdisciplinary research, and the area known today as socio-economical physics, or sociophysics and econophysics for short, was born in the late 1990s \cite{b06,cks07,ma07,nat06,y08}. As a consequence, old problems in what until recently was believed to be the exclusive realm of economics are receiving fresh attention in econophysics, and possible new perspectives and solutions are emerging. Our goal here is to focus on one of those old problems, namely the work carried out over a century ago by the Italian economist and sociologist Vilfredo Pareto \cite{pareto}, who studied the personal income distribution for some countries and years. He found that the complementary cumulative personal income distributions followed a power law for those with high income \cite[p.\ 245]{b04}, \cite[p.\ 152]{mh04}, a result later considered a classic example of a fractal distribution \cite[p.\ 347]{mandelbrot}, \cite{n05}. Later results confirmed Pareto's findings, but the application of his personal income power law, also known simply as the \textit{Pareto law} \cite{k80,n05}, is limited to the very high income population (see below).
The overwhelming majority of the population does not follow Pareto's power law distribution and, therefore, the characterization and understanding of the personal income distribution of the economically less favored still remains an open problem. There have been several recent studies of individual income distribution for different countries and epochs: modern, medieval and even ancient. For old societies, these studies include ancient Egypt \cite{a02} and medieval Hungary around 1550 \cite{hns05}. A list of recent studies of modern societies carried out by both economists and econophysicists, which by no means should be considered exhaustive, includes Australia \cite{bym06,mah04}, Brazil \cite{cfl98}, China \cite{crt07}, France \cite{qd06}, Germany \cite{qd06}, India \cite{s06}, Italy \cite{cg05,qd06}, Japan \cite{anostt00,fsaka03,i05,s02,s01,sn05}, Poland \cite{dj02,lo04}, the United Kingdom \cite{dy01b,h81,qd06,wm04} and the USA \cite{bmm96,s05,cy05,dy01,dy01b,lo04,mc90,wm04,y08}. The results coming out of these studies are varied. Although most of them confirm the validity of the Pareto law for higher personal income data, the characterization of the lower individual income distribution remains disputed. Gaussian, log-normal, gamma, generalized beta of the second kind, Fisk and Beaman distribution functions have been used to fit the data, as well as Dagum, Singh-Maddala and Weibull models \cite{bj05,bmm96,crt07,h81,k80,l68,lo04,mr07,mc90,qd06}. Recently the exponential was found to produce a good description for about 98\% of the population in the lower personal income range \cite{bym06,s05,cy05,dy00,dy01,dy01b,dy02,lo04,wm04,y08}. Disparate interpretations of these distributions have also been advanced. Many interpretations are basically of a statistical nature, invoking stochastic processes \cite{bym06,dy01,k80,n05,r03,sr01}. Others attempt to draw analogies from physics.
This is the case of Dr\u{a}gulescu and Yakovenko \cite{dy01,dy01b,y03,y08}, who advanced an exponential-type distribution of personal income analogous to the Boltzmann-Gibbs distribution of energy in statistical physics, and of Chatterjee et al.\ \cite{ccm04}, who proposed an ideal-gas model of a closed economic system where total money and number of agents are fixed. The purpose of this paper is to study the personal income distribution of Brazil for approximately the last 30 years. Here we provide empirical evidence confirming that Brazil also follows the Pareto law for the tiny group constituting the high personal income population. The other motivation of this paper was to determine whether or not the exponential is as good a descriptor for the Brazilian data as it is for the USA. Our results show that the exponential and, by extension, any function based on it, turns out to be a very poor descriptor of the lower income distribution in Brazil. This result led us to search for another simple function capable of describing the individual income distribution of the majority of the Brazilian population. We propose here the \textit{Gompertz curve} \cite{g25,kot01,w32} as a good descriptor of the distribution of the lower income population. Although the Gompertz curve can be written with only two parameters, we shall show below that one of them can be linked to a boundary condition determined by the problem. This effectively leaves only one parameter to be fitted by the data. Therefore, here we provide empirical evidence that the personal income distribution in Brazil reasonably follows the Gompertz curve for the overwhelming majority of the population. Our results show that the individual income distribution data in Brazil from 1978 to 2005 are well described by both the Pareto law and the Gompertz curve. This time span constitutes virtually all data for the Brazilian individual income distribution available in digital form at the time of writing.
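To make the two candidate descriptors concrete, the sketch below evaluates an exponential complementary cumulative distribution alongside a Gompertz-type one written as $F(x)=\exp(\mathrm{e}^{A-Bx})$, for which imposing the boundary condition $F(0)=100\%$ fixes $A=\ln(\ln 100)$ and leaves a single free parameter $B$. This specific parametrization is our illustrative assumption here, and the parameter values are arbitrary:

```python
import math

A_FIXED = math.log(math.log(100.0))  # from the boundary condition F(0) = 100%

def gompertz_ccdf(x, B, A=A_FIXED):
    """Gompertz-type complementary CDF, F(x) = exp(e^(A - B*x)), in percent."""
    return math.exp(math.exp(A - B * x))

def exponential_ccdf(x, beta):
    """Exponential (Boltzmann-Gibbs-type) complementary CDF, in percent."""
    return 100.0 * math.exp(-x / beta)

# Both candidates satisfy F(0) = 100% and decrease monotonically in x,
# but they fall off at very different rates for large x
f0_g = gompertz_ccdf(0.0, B=1.0)
f0_e = exponential_ccdf(0.0, beta=1.0)
```

Either form can then be fitted to the empirical complementary cumulative probabilities by least squares, with only one free parameter each once the boundary condition is imposed.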
We have calculated the parameters of both curves, with their uncertainties, for all years in this period, with the exception of those in which there was no data collection: 1980, 1991, 1994, 2000 (see Section \ref{data} below). We also present the Lorenz curves, the Gini coefficients and the evolution of the Pareto index, that is, the exponent of the Pareto law, in this time span, as well as a comparison of the income share of the two groups, showing an approximate cycle with a period of roughly 4 years. As happens for other countries, we found evidence that the lower income population, represented here by a Gompertz curve, constitutes about 99\% of the Brazilian population, with the remaining 1\% richest being represented by a Pareto power law distribution. Similarly to other countries, such results characterize Brazil as a well-defined two-income-class system. The plan of the paper is as follows. Section \ref{data} presents the income data of Brazil and discusses how the data reduction necessary for our analysis was carried out. Some results obtained directly from the data, such as the Lorenz curves and Gini coefficients, are also shown. Section \ref{model} presents our analytical modeling by means of the Gompertz curve and Pareto power law complementary cumulative distribution functions. The results are presented in Section \ref{results}, where one can find various tables presenting the fitted parameters and plots showing the linearization of both the Gompertz and Pareto income regions with their fitted lines, as well as the evolution of the Paretian component's income share relative to the overall income. Section \ref{conclusion} summarizes and discusses the results. \section{The Data}\lb{data} Personal income data for the Brazilian population is available in yearly samples called PNAD.
This is a Brazilian Portuguese acronym meaning ``National Survey by Household Sampling.'' IBGE, the Brazilian government institution responsible for data collection, formatting and availability, carries out the survey every September, and the data is usually released about one year later. PNAD data has been systematically available in digital form since 1978, although in 1980, 1991, 1994 and 2000 there was no data collection and, therefore, there are no PNADs for these years. IBGE also has digital PNAD data for 1972, but the file seems incomplete and without clear labels for each entry. In addition, the 1972 data collection was apparently carried out by a very different methodology from the one adopted by IBGE from 1978 onward. For these reasons we considered the 1972 PNAD data unreliable and discarded it from our analysis. PNAD comprises surveys of about 10\% of the households in Brazil. The released data consists of files with entries for each surveyed household, providing the household's total income, the number of people living in it, a weight index representing its proportion of the complete set of households in Brazil, the occupation of those individuals and many other entries which are not relevant for the present analysis. PNAD is a sampling, not a census, and the surveyed households' locations in Brazilian territory are carefully selected by IBGE such that, once the weight index is used, the final set should be very close to the complete real set. The most appropriate procedure to find the personal income from our data set would be to adopt some sort of ``equivalence scale'', that is, a tool allowing us to reach conclusions about how the total income in a household is shared among all of its members. One way of doing this is to allocate points to each individual in a household, such that the first adult would have a higher weight than other persons whose ages are, say, 14 years or older. Children under the age of 14 would be allocated an even smaller weight.
For instance, the first adult would have a weight of 1 point, additional persons above 14 years would have 0.5 points and children would be allocated 0.3 points. The idea behind this procedure is to differentiate the household members who consume, but do not produce income (children, for instance), from those who do both, but at different levels, and also to take into account the fact that there are goods in a household which are consumed by several individuals at the same time, like, for instance, washing machines, kitchens, etc. A second adult would therefore not consume as much as the first and would contribute more to raising the household's well-being. Using this procedure the income of children under 14 years would be near zero, even though they share the household's total income. The equivalized household income would then be obtained by dividing the total household income by the sum of the points attributed to the household members \cite{deaton}. The major obstacle we faced in implementing such a differentiated equivalence scale with our data is the fact that the PNADs do not provide us with enough information to do so. What we have is a list of the total income in a household and the number of people living in it. Under these circumstances we adopted an equivalence scale such that each individual is allocated a weight of 1 point. So, for each PNAD entry we divided the total income by the number of occupants, meaning that the household income is equally divided among them. As mentioned above, each PNAD household entry has a supplied weight index corresponding to its relative importance, or representation, as regards the entire country. This means that although the survey comprises only a portion of Brazilian households, once we obtain the income of each individual in a particular home we multiply the resulting values by this weight in order to obtain the number of individuals with that particular income in the whole country.
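The reduction from household entries to individual incomes just described (equal split among occupants, then expansion by the PNAD weight index) can be sketched as follows, with hypothetical survey entries in place of the actual PNAD records:

```python
def individual_incomes(households):
    """households: (total_income, n_occupants, weight) per surveyed home.
    Returns (per_capita_income, n_individuals_represented) pairs, where the
    household income is split equally and the weight index scales the count."""
    return [(income / occupants, occupants * weight)
            for income, occupants, weight in households]

# Hypothetical PNAD-like entries: (household income, occupants, weight index)
survey = [(900.0, 3, 100), (2400.0, 4, 50), (300.0, 2, 200)]
table = individual_incomes(survey)
# Each pair relates a per-capita income to the number of people it represents
```

The output is exactly the kind of table described next in the text: a list relating numbers of individuals to their respective incomes.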
Thus, we end up with tables relating, on one side, a certain number of individuals and, on the other, their respective incomes. Brazil experienced runaway inflation and hyperinflation for most of the 1980s and early 1990s, resulting in a series of currency adjustments where many zeros were ``dropped'' from time to time and new currency names were adopted each time those adjustments became effective. Hyperinflation came to an abrupt end in 1994, when a new and stable currency, called the \textit{real} (R\$), was adopted. This fact required the adoption of a methodology such that the final data were homogenized, otherwise the comparison of data sets of different years would be problematic. Thus, our adopted procedure was to normalize the income values by the average income of September of each year. In other words, let $x_i'$ be the \textit{i}th income received in the month of September of a certain year, given in one of the Brazilian currency units legally adopted in the country when the survey was carried out. Then $\langle x' \rangle$ is the average income value during the month of September of that particular year. We may now define the \textit{normalized individual income} $x_i$ to be the ratio $x_i = {x_i'}/{\langle x' \rangle}$, so that $x_i$ becomes currency independent. In this way we were able to produce tables listing the number of people in terms of currency-free income values. This allowed us to generate distribution functions relative to the average personal income in a certain year. This individual average income does change from year to year, as can be seen in table \ref{tab1}, where the currency names, exchange rates and the average individual incomes in September of each year are presented. \begingroup \begin{table*}[!htbp] \caption{Currencies in Brazil from 1978 to 2005 and the average individual income $\langle x' \rangle$ calculated in September of a given year.
$\langle x' \rangle$ is converted by the exchange rate of September 15th of each year and presented in US dollars of that particular day (source: Brazil Central Bank). The hyperinflation period is clearly visible in the evolution of the exchange rate. \label{tab1}}
\begin{center} \begin{tabular}{crrr}
\hline\noalign{\smallskip}
\textbf{year} & \textbf{currency name - symbol} & \textbf{exchange rate: 1 US\$ =} & $\mathbf{\langle x' \rangle}$ \textbf{in US\$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
1978 & cruzeiro - Cr\$ & 19.05 Cr\$ & 132.503 \\
1979 & cruzeiro - Cr\$ & 28.793 Cr\$ & 122.531 \\
1981 & cruzeiro - Cr\$ & 105.284 Cr\$ & 88.201 \\
1982 & cruzeiro - Cr\$ & 202.089 Cr\$ & 94.505 \\
1983 & cruzeiro - Cr\$ & 701.388 Cr\$ & 56.786 \\
1984 & cruzeiro - Cr\$ & 2203.96 Cr\$ & 51.764 \\
1985 & cruzeiro - Cr\$ & 7461.575 Cr\$ & 58.039 \\
1986 & cruzado - Cz\$ & 13.84 Cz\$ & 91.306 \\
1987 & cruzado - Cz\$ & 49.866 Cz\$ & 75.687 \\
1988 & cruzado - Cz\$ & 326.233 Cz\$ & 87.276 \\
1989 & cruzado novo - NCz\$ & 3.267 NCz\$ & 137.183 \\
1990 & cruzeiro - Cr\$ & 75.54 Cr\$ & 161.516 \\
1992 & cruzeiro - Cr\$ & 5775 Cr\$ & 111.906 \\
1993 & cruzeiro real - CR\$ & 111.1 CR\$ & 126.547 \\
1995 & real - R\$ & 0.953 R\$ & 213.188 \\
1996 & real - R\$ & 1.019 R\$ & 228.299 \\
1997 & real - R\$ & 1.094 R\$ & 221.687 \\
1998 & real - R\$ & 1.181 R\$ & 213.902 \\
1999 & real - R\$ & 1.898 R\$ & 133.631 \\
2001 & real - R\$ & 2.672 R\$ & 110.857 \\
2002 & real - R\$ & 3.342 R\$ & 97.532 \\
2003 & real - R\$ & 2.923 R\$ & 122.815 \\
2004 & real - R\$ & 2.891 R\$ & 148.468 \\
2005 & real - R\$ & 2.294 R\$ & 221.787 \\
\noalign{\smallskip}\hline
\end{tabular} \end{center} \end{table*} \endgroup
Our next step was then to divide the data into bins, inasmuch as most of the data is clumped towards low income values. The data binning methodology adopted here is the standard one used for problems involving power law determination \cite{n05}, and was previously used by these authors to derive the Zipf law for Brazilian cities \cite{nm}.
The method consists of logarithmic binning such that the bins span increasingly large intervals, every step being 10\% larger than the previous one. This is accomplished according to the rule below, \begin{equation} x_j= 1.1^{(j-1)}x_{\mathrm{min}}.\lb{rule} \end{equation} By following this procedure we were able to create for each year a sample of $n$ observed values such that, $$ \{x_j\}: (j=1,\ldots,n), \, (x_1=x_{\mathrm{min}}), \, (x_{\mathrm{min}}\approx 0.01), \, (n \approx 100). $$ The purpose of this methodology is to achieve a sharp decrease in the statistical fluctuations in the tail, since bins with a far smaller number of observed values, prevalent in the tail of the distributions, are prone to large fluctuations. This effect has the potential of creating a serious bias in the determination of the parameters by least-squares fitting \cite{n05}. To counteract this problem, it is known that an appropriate logarithmic binning is very effective at severely reducing the uneven variation in the tail, which means that the possible bias in the parameter determination by least-squares fitting \cite{g04} is, therefore, strongly reduced. After the steps described above were taken, we were able to obtain cumulative probabilities by calculating the number of individuals whose income goes up to certain values and dividing this number by the total number of individuals. The final results are shown in figures \ref{fig1} and \ref{fig2}, where complementary cumulative probabilities are plotted against the normalized income for each year of the studied time span. It is clear from these graphs that there are enough points to form an almost continuous and smooth curve. Therefore, from now on we will change the discrete variable $x_j$ to the continuous independent variable $x$ representing the normalized individual income values.
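The normalization by the average income, the logarithmic binning of eq. (\ref{rule}) and the resulting complementary cumulative probabilities can be sketched as follows, with a small synthetic income list standing in for the actual PNAD data:

```python
def normalize(incomes):
    """Divide each income by the average, giving x_i = x_i'/<x'>."""
    avg = sum(incomes) / len(incomes)
    return [x / avg for x in incomes]

def log_bin_edges(x_min=0.01, ratio=1.1, n=100):
    """Bin edges x_j = ratio**(j-1) * x_min, each step 10% larger than the last."""
    return [x_min * ratio ** (j - 1) for j in range(1, n + 1)]

def complementary_cumulative(incomes, edges):
    """Percentage of individuals with income >= each bin edge."""
    total = len(incomes)
    return [100.0 * sum(1 for x in incomes if x >= e) / total for e in edges]

values = normalize([10.0, 20.0, 40.0, 10.0])  # synthetic incomes, <x'> = 20
edges = log_bin_edges()
ccdf = complementary_cumulative(values, edges)
```

By construction the resulting curve starts at 100\% and decreases monotonically, which is the shape plotted in figures \ref{fig1} and \ref{fig2}.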
\begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{pnad1.ps}} \end{center} \caption{Graph of the complementary cumulative probability of individual income $F(x)$ plotted against the normalized individual income $x$ for the month of September of each year in the time span of this study. Although Brazil experienced runaway inflation and hyperinflation from 1981 to 1993, the plots show a remarkable similarity during and after this period. The major differences stem from the plots of 1978 and 1979, just prior to Brazil's great inflationary period. Due to the absence of reliable digitized data before 1978, we were unable to ascertain whether or not these years were the last ones of a qualitatively different era regarding the income distribution in Brazil, one possibly terminated by the inflationary period.\label{fig1}} \end{figure*} \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{pnad2.ps}} \end{center} \caption{Continuation of figure \ref{fig1} showing the complementary cumulative individual income distribution $F(x)$ against the normalized individual income $x$ in Brazil from 1992 to 2005.\label{fig2}} \end{figure*} The data obtained with the procedures outlined above allowed us to calculate the so-called \textit{Lorenz curve} \cite{k80,l05}, which measures the degree of inequality in the income distribution, by setting the maximum income value to 100\% and then calculating the percentage of individuals who receive up to a certain percentage of the maximum income. Figures \ref{fig3} and \ref{fig4} show the Lorenz curves for Brazil from 1978 to 2005. \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{lorenz1.ps}} \end{center} \caption{The Lorenz curves for the individual income distribution of Brazil in the month of September of the respective year.
The x-axis plots the \% of individuals whereas the y-axis is the \% of total income.\label{fig3}} \end{figure*} \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{lorenz2.ps}} \end{center} \caption{Continuation of figure \ref{fig3} showing the Lorenz curves of the income distribution in Brazil from 1992 to 2005.\label{fig4}} \end{figure*} Once the points forming the Lorenz curves had been calculated, we were able to obtain the corresponding \textit{Gini coefficients} \cite{g12,g13,g21,k80}, which measure the inequality of the income distribution. This was done by numerically calculating the area below the Lorenz curves. Figure \ref{fig5} shows the results. \begin{figure*} \epsfysize=13cm \begin{center} \rotatebox{-90}{\epsffile{gini.ps}} \end{center} \caption{This figure shows the evolution of the Brazilian Gini coefficient for most of the last three decades. The values shown in this plot are presented in table \ref{tab4a}.\label{fig5}} \end{figure*} \section{Modeling the Individual Income Distribution}\lb{model} Anyone attempting to become familiar with the recent literature in econophysics will see that when physicists try to solve problems traditionally dealt with by economists, they do so from a different perspective. That, of course, is no different for the income distribution problem. We therefore believe it to be fruitful to state our viewpoints on how to approach the income distribution problem at the very beginning of our discussion. So, this section will start by outlining our modeling perspective and how it differs from the traditional approach followed by economists. The first aspect worth mentioning is that economists often fit their income distribution data by means of complex single functions with as many parameters as necessary \cite{bj05,bmm96,crt07,dj02,mc90,qd06}.
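As an aside, the numerical construction of the Lorenz curve and the Gini coefficient used in the previous section can be sketched as below; the area below the Lorenz curve is obtained here with the trapezoidal rule, and the Gini coefficient is taken as one minus twice that area:

```python
def lorenz_curve(incomes):
    """Cumulative population fraction vs. cumulative income fraction."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    pop, share, cum = [0.0], [0.0], 0.0
    for i, x in enumerate(xs, start=1):
        cum += x
        pop.append(i / n)
        share.append(cum / total)
    return pop, share

def gini(incomes):
    """Gini coefficient: 1 - 2 * (area below the Lorenz curve)."""
    pop, share = lorenz_curve(incomes)
    area = sum((share[i] + share[i - 1]) * (pop[i] - pop[i - 1]) / 2.0
               for i in range(1, len(pop)))
    return 1.0 - 2.0 * area
```

Perfect equality gives a Gini coefficient of 0, while complete concentration of income in a single individual drives it towards 1.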
Fitting the whole dataset with single functions with four or more parameters may produce a better data fit, but the drawback is that this kind of fitting does not give a better insight into the problem. The paramount objective of any physical modeling is to find the differential equations which describe the observed empirical pattern and, therefore, data fitting is only the very first step in that direction and must be made bearing in mind Occam's razor, which in this case means using simple functions with as few parameters as possible. Theoretical assumptions of an economic nature must be built into the differential equations and not into the empirical curves. Therefore, the relationships among the parameters should be a result of the dynamics of the model determined by the solutions of the differential equations. Using as many parameters as necessary in complicated functions which do not originate from some sort of dynamical analysis is not a promising approach to the income distribution problem, because it will make the task of finding the underlying differential equations even more difficult, if not impossible. Perhaps this is one of the reasons why the approach of conventional mainstream economics to the personal income distribution problem has made little progress since Pareto's time towards developing a dynamical theory connecting personal income generation and economic growth in the sense of Sraffa \cite{s60}, as pointed out by Gallegati et al.\ \cite{gklo06}. Thus, simple functions with as few parameters as possible which, at the same time, offer a reasonable agreement with the data are certainly much more preferable. The second point is that there is a tendency among a sizable number of economists to follow an axiomatic and mathematically guided approach to their problems, as opposed to the empirically guided paths usually taken by physicists.
The major trouble of approaching a problem guided almost exclusively by logic is that this often leads to paradoxical situations, where it is possible to deductively arrive at apparently sound conclusions, which at the same time are entirely unsound empirically -- here Aristotelian physics comes to mind as an example. The empirically sound path means starting and staying as close to the real data as possible when studying any problem of economic nature and avoiding as much as possible any kind of a priori assumption. This is especially true during the infancy of a new area of study. Examples of successful theories which did not follow this path are exceedingly rare, even within physics. That does not mean we dismiss the power of theoretical reasoning, but even 20th century theoretical physics is strongly anchored upon very solid empirical foundations. For this reason we believe that research in econophysics must always carefully consider the real data in order to avoid at all costs hypothetical, often anti-empirical, a priori assumptions. For econophysics to succeed it must not repeat the fatal traps of conventional neoclassical economics, which is based on too many anti-empirical assumptions, resulting in all too often compromised results \cite{jpb08,g06,gklo06,k01,keen03,ks06,mh04,mc00,mc04,mc05,mc07,o97}. As a third point, it was mentioned above that Dr\u{a}gulescu and Yakovenko \cite{dy01,dy01b,y03} proposed an exponential type distribution of personal income analogous to the Boltzmann-Gibbs distribution of energy in statistical physics under the motivation that ``in a closed economic system money is conserved'' \cite{dy00}. Similarly Chatterjee et al.\ \cite{ccm04} advanced an ideal-gas model of a closed economic system where total money and number of agents are fixed such that ``no production or migration occurs and the only economic activity is confined to trading'' \cite{ccm04}. 
Those results led to criticisms made by Gallegati et al.\ \cite{gklo06} who argued that industrialized economies are not a conservative system, meaning that ``income is not, like energy in physics, conserved by economic processes''. This occurs because although transactions, that is, exchanges are conservative, ``capitalist economies are (...) characterized by economic growth. And growth occurs because production produces a net physical surplus''. Ref.\ \cite{gklo06} concludes by stating that ``models which focus purely on exchange and not on production cannot by definition offer a realistic description of the generation of income in the capitalist, industrialized economies''. Gallegati et al.\ \cite{gklo06} may have a point regarding the development of a dynamical theory of production. However, the focus of the approach made by physicists on the personal income distribution characterization problem has not been on this dynamical theory, which is obviously necessary, but has not yet been developed. So far, econophysicists have been mainly focused on the more modest aim of finding good analytical descriptors of the individual income distribution, not only for the very rich where the Pareto law is valid, but for the whole society. On this point the proposal of an exponential distribution is without any doubt a step forward since it seems to produce good agreements with the data of some countries and is a simple function, with one parameter only. Therefore, if the exponential function does not produce a good fit for the income data of Brazil (see below) we are entitled to ask whether or not it is possible to find another function with one, or two parameters at most, which could produce a good data fit for the Brazilian data and, perhaps, could also be useful for fitting the income data of other countries. As a final conceptual point, we should mention that in recent econophysics literature the words ``income'' and ``wealth'' have been used indistinctively. 
We believe this to be inappropriate. In this article income is used as a generic term for anything gained by an individual in a specific period of time, usually monthly or annually. It can be a wage, a pension, a government grant, the revenue obtained from property or investments like rent or dividends, etc. However, we believe that income should not be confused with wealth, because although these two concepts are related, wealth is the result of saved, or accumulated, income, often inherited. In other words, income is a flux, an inflow of value that an individual receives, or earns, at a specific time interval which, if accumulated, may become wealth. In turn, the investment of wealth in property, shares, etc., generates income as rent, dividends, etc. The empirical findings that led to the Pareto law were mostly derived from personal income data, although it appears reasonable to suspect that the personal wealth distribution should also follow a power law for those individuals with high wealth. \subsection{Basic Equations} Let $\mathcal{F}(x)$ be the {\it cumulative distribution function of individual income}, or simply {\it cumulative income distribution}, which gives the probability that an individual receives an income less than or equal to $x$. It follows from this definition that the \textit{complementary cumulative income distribution} $F(x)$ will then give the probability that an individual receives an income equal to or greater than $x$. Clearly $\mathcal{F}(x)$ and $F(x)$ are related by the following expression, \begin{equation} \mathcal{F}(x)+F(x)=100, \lb{ff} \end{equation} where we have assumed the maximum probability as being equal to 100\%. If both $\mathcal{F}(x)$ and $F(x)$ are continuous and have continuous derivatives for all values of $x$, this means that, \begin{equation} d\mathcal{F}(x)/dx = f(x), \; \; \; \; dF(x)/dx=-f(x), \lb{c} \end{equation} and \begin{equation} \int_0^\infty f(x)\:dx=100.
\lb{norm} \end{equation} Here $f(x)$ is the {\it probability distribution function of individual income}, defined such that $f(x)\,dx$ is the fraction of individuals with income between $x$ and $x+dx$. This function is also known as {\it probability density}, but from now on we will call it simply as {\it probability income distribution}. The equations above lead to the following results, \begin{equation} \mathcal{F}(x) - \mathcal{F}(0) = \int_0^x f(w) \: dw, \lb{3} \end{equation} \begin{equation} F(x) - F(\infty) = \int_x^\infty f(w) \: dw. \lb{4} \end{equation} Although we found in our data a non-negligible number of individuals who earned nothing when the sampling was carried out, zero income values do not have a weight in the income distribution function and, therefore, it seems reasonable to assume those results to be of a transitional nature and dismiss them from our analysis by assigning zero probabilities. Similarly, very rich people are made of very few individuals such that their probabilities tend to zero. Note, however, that these two situations are limiting cases and should only be considered as true within the uncertainties of our measurements. Therefore, it follows from this reasoning that the boundary conditions below should approximately apply to our problem, \begin{equation} \left\{ \begin{array}{lclcl} \mathcal{F}(0) & = & {F}(\infty) & \cong & 0, \\ \mathcal{F}(\infty) & = & {F}(0) & \cong & 100. \end{array} \right. \lb{condi1} \end{equation} \subsection{Two Parts for the Income Distribution} As discussed above, our approach implies searching for simple functions to describe the income distribution. Therefore we shall divide this distribution in two distinct parts, one for the very rich and the other for the overwhelming majority of the population. 
To establish the notation, when divided that way the complementary cumulative distribution function of the individual income will be written as follows, \begin{equation} F(x)= \left\{ \begin{array}{ll} G(x), & \; \; ( \: 0 \le x < x_{\scriptstyle t}), \\ P(x), & \; \; (x_{\scriptstyle t} \le x \le \infty), \\ \end{array} \right. \lb{disto} \end{equation} where $x_{\scriptstyle t}$ is the \textit{transitional income value} marking the transition between the two components of the income distribution. Then the cumulative distribution will be given by, \begin{equation} \mathcal{F}(x)= \left\{ \begin{array}{ll} \mathcal{G}(x), & \; \; ( \: 0 \le x < x_{\scriptstyle t}), \\ \mathcal{P}(x), & \; \; (x_{\scriptstyle t} \le x \le \infty), \\ \end{array} \right. \lb{disto1} \end{equation} and the probability density yields, \begin{equation} f(x)= \left\{ \begin{array}{ll} g(x), & \; \; ( \: 0 \le x < x_{\scriptstyle t}), \\ p(x), & \; \; (x_{\scriptstyle t} \le x \le \infty). \\ \end{array} \right. \lb{distro2} \end{equation} \subsection{The Pareto Law} It is a well known empirical fact that the richest portion of many, perhaps most, populations follows a {\it Pareto power law} of the form, \begin{equation} P(x) = \beta \; x^{\displaystyle - \alpha}, \label{pareto} \end{equation} where $\alpha$ and $\beta$ are positive constants. The parameter $\alpha$ is known as the \textit{Pareto index} or just the \textit{fractal dimension} of the distribution, if we adopt the modern language of fractals \cite{mandelbrot,mh04}. This law is valid only for the region of \textit{high personal income}, starting at $x=x_{\scriptstyle t}$ and going up to the maximum value obtained in the observed dataset. As we shall show below, our data presents compelling evidence that the Pareto law is valid in Brazil. It is well known that if the complementary cumulative distribution is a power law, the probability distribution is also a power law.
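This relation between the power-law complementary cumulative distribution and its density can be checked numerically. The short sketch below (Python; the parameter values are purely illustrative assumptions, not the fitted Brazilian ones) verifies that minus the derivative of $P(x)$ in equation (\ref{pareto}) is indeed the power law $\alpha \beta x^{-(1+\alpha)}$.

```python
import math

# Purely illustrative parameter values (not the fitted Brazilian ones)
alpha, beta = 2.7, 300.0

def P(x):
    """Complementary cumulative Pareto distribution, eq. (pareto)."""
    return beta * x ** (-alpha)

def p(x):
    """Power-law density implied by P(x): p(x) = -dP/dx."""
    return alpha * beta * x ** (-(1 + alpha))

# Central-difference check that p(x) = -dP/dx at a few income values
for x in (8.0, 15.0, 40.0):
    h = 1e-6 * x
    numerical = -(P(x + h) - P(x - h)) / (2 * h)
    assert math.isclose(numerical, p(x), rel_tol=1e-6)
```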
Therefore, the Paretian part of the income distribution of the Brazilian population has a probability density given by the following expression, \begin{equation} p(x)= \alpha \; \beta \; x^{^{\scriptstyle -(1+\alpha )}}. \lb{pareto1} \end{equation} It clearly follows from this equation that $p(\infty)=0$. \subsection{The Lower Income Region} \subsubsection{The Exponential} The first obvious thing to do with our data in the lower income region was to follow the proposal of Ref.\ \cite{dy01} and try an exponential fit. Surprisingly, however, the results were not good. The semi-log plot clearly did not linearize our data, something that could only be achieved by removing the values due to very low income. Figures \ref{exp1} and \ref{exp2} show plots where we have attempted to fit the exponential to the Brazilian data and a simple visual inspection shows the inadequacy of this function to describe the observed data points. Since other functions like the Gaussian or the Boltzmann-Gibbs are also derived from the exponential, these graphs were enough to convince us to dismiss all functions based on a simple exponential as viable fits for the Brazilian data. We then started searching for other ways of representing the Brazilian income distribution, especially at very low income values. \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{gas1.ps}} \end{center} \caption{These graphs show the exponential fit for the lower region of the income distribution. 
Clearly the exponential is not a good representation for the Brazilian income data.\label{exp1}} \end{figure*} \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{gas2.ps}} \end{center} \caption{Continuation of figure \ref{exp1} showing what a poor representation the exponential is of the complementary cumulative income distribution data in Brazil.\label{exp2}} \end{figure*} \subsubsection{The Gompertz Curve} In the process of searching for a simple function capable of representing our dataset we realized that the plot itself suggested taking the second logarithm of the complementary cumulative distribution. When doing so the data tended to follow a straight line, a result which immediately suggested adopting the {\it Gompertz curve} \cite{w32} to model the complementary cumulative income distribution of Brazil. This curve may be written as follows, \begin{equation} G(x)= e^{ \displaystyle e^{(A-Bx)} }, \label{gomp} \end{equation} where $A$ and $B$ are positive constants. Section \ref{conclusion} below presents further discussions about this function. The definition of cumulative distribution and its complement allow us to find the Gompertzian probability density income distribution of the Brazilian population. It can be written as follows, \begin{equation} g(x)= B \; e^{(A-Bx)} \; e^{ \displaystyle e^{(A-Bx)}}. \lb{gomp1} \end{equation} Therefore, as mentioned above, in what follows it will become clear that our data presents compelling evidence that the complementary cumulative individual income distribution in Brazil has two distinct components represented by a Gompertz curve and the Pareto power law, a situation which, similarly to other countries, {\it characterizes Brazil as having a well defined two income class system} as far as individual income is concerned. Both equations (\ref{pareto}) and (\ref{gomp}) can be linearized and, therefore, the unknown parameters can be obtained by linear data fitting.
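The linearization of equation (\ref{gomp}) amounts to fitting a straight line to $\ln\left[\ln G(x)\right] = A - Bx$. The sketch below (Python; the synthetic data are built from hypothetical parameter values near the Brazilian fits, purely for illustration) shows how $A$ and $B$ are recovered by ordinary least squares on the double logarithm.

```python
import math

# Hypothetical Gompertz parameters, chosen near the Brazilian fits for illustration
A_true, B_true = 1.53, 0.39

# Synthetic complementary cumulative values G(x) on a grid of normalized incomes
xs = [0.5 * k for k in range(1, 15)]                       # x = 0.5, 1.0, ..., 7.0
G = [math.exp(math.exp(A_true - B_true * x)) for x in xs]  # eq. (gomp)

# Taking the double logarithm linearizes the curve: ln(ln G) = A - B x
ys = [math.log(math.log(g)) for g in G]

# Ordinary least squares for the straight line y = A - B x
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
B_fit = -sxy / sxx
A_fit = my + B_fit * mx
assert abs(A_fit - A_true) < 1e-9 and abs(B_fit - B_true) < 1e-9
```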
However, the boundary conditions (\ref{condi1}) allow us to find the theoretical value for $A$ and $g(0)$. These results may be written as follows, \begin{equation} e^{ \displaystyle e^{A}}= G(0), \; \; \; \Leftrightarrow \; \; \; A=\ln \left\{ \ln \left[ \, F(0) \right] \right\} = 1.53, \lb{A} \end{equation} \begin{equation} g(0)=461 \: B. \lb{g0} \end{equation} The equations above are just different ways of expressing the boundary condition due to the zero-income individuals in the data. The fitting should produce values for $A$ which will probably fluctuate around its theoretical result above. Finding the extent of these fluctuations is one of our goals, since they should indicate how much the approximations given by equations (\ref{condi1}) are valid. Nevertheless, it is an advantageous feature of our modeling to know beforehand one of the four parameters. As we shall see below, $\beta$ can be determined by either data fitting or normalization, a fact which effectively leaves only two parameters, $\alpha$ and $B$, to be determined entirely by data fitting. \subsection{Continuity Across the Gompertz-Pareto Regions} It is desirable to investigate whether or not the cumulative income distribution remains continuous across the transition between the Gompertz and Pareto regions. For this continuity to occur all parameters should obey the constraint equation $G(x_t)=P(x_t)$, that is, \begin{equation} e^{ \displaystyle e^{(A-Bx_{\scriptstyle t})} }= \beta \; x_{\scriptstyle t}^{\displaystyle - \alpha}. \lb{vinc} \end{equation} In addition, if the usual normalization of the probability distribution is to hold across the two regions, the following condition needs to be satisfied, \begin{eqnarray} \int_0^\infty f(x)\,dx = \int_0^{x_{\scriptstyle t}} B \; e^{(A-Bx)} \; e^{ \displaystyle e^{(A-Bx)} } dx + \nonumber \\ + \int_{x_{\scriptstyle t}}^\infty \alpha \; \beta \; x^{^{\scriptstyle -(1+\alpha )}} dx=100.
\lb{norm1} \end{eqnarray} It is straightforward to show that the normalization above together with the boundary conditions (\ref{A}) lead to the same constraint equation (\ref{vinc}). It is also simple to verify that the constraint equation above can be solved once $\alpha$, $\beta$ and $B$ are determined by fitting, albeit finding $x_t$ from equation (\ref{vinc}) requires the use of numerical methods. Nevertheless, our preference is to determine $x_t$ directly from the observed data, leaving the remaining parameters to be obtained by a mixture of data fitting and normalization. \subsection{Exponential Approximation of the Gompertz Curve} We can derive a convenient approximation for the Gompertz curve (\ref{gomp}) when it nears the Pareto region, i.e., for large values of $x$. In this case the term $Bx$ dominates over the parameter $A$ and equation (\ref{gomp}) reduces to $G(x) \approx e^{\displaystyle e^{-Bx}}$. If we now define a new variable $z=e^{-Bx}$, then large values of $x$ imply small values of $z$ and the following Taylor expansion holds: \begin{equation} e^z = 1+ z+z^2/2+z^3/6+\ldots \; (z<1). \lb{ex} \end{equation} In view of this we may write the following approximation, \begin{equation} G(x)= e^{ \displaystyle e^{(A-Bx)} } \approx 1+ e^{-Bx} \; \; \; (\mbox{for $Bx > A$ and $e^{-Bx} < 1$}). \lb{gomp-exp} \end{equation} This result means that the Gompertz curve reduces to the exponential function when the personal income $x$ is large enough. It also means that the Gompertz curve allows us to have one of its parameters as a boundary condition for the zero income situation at the same time as having an exponential feature for larger incomes. In addition, the probability income distribution as given by equation (\ref{gomp1}) can also be similarly approximated, yielding, \begin{eqnarray} g(x) & = & B \; e^{(A-Bx)} \; e^{ \displaystyle e^{(A-Bx)}} \nonumber \\ & \approx & B \; e^{-Bx} \; \; (\mbox{for $Bx > A$ and $e^{-Bx} < 1$}). 
\lb{gomp1-exp} \end{eqnarray} Note that the approximation above means leaving the very low income data out of our analysis, which in turn reduces our problem to the exponential fit, as proposed in Ref.\ \cite{dy01}. A simple visual inspection of figures \ref{exp1} and \ref{exp2} shows that the data seems to be fairly represented by an exponential if we remove the very low income dataset $(x \le 2)$. This feature may explain why the exponential is such a poor representation of our income data. Brazil is notoriously a very unequal country in terms of income distribution and, therefore, our data tend to clump towards low income values. Finally, the approximations (\ref{gomp-exp}) and (\ref{gomp1-exp}) also mean that the exponential and the Gompertz curve are not very dissimilar to one another in terms of being good representations of the non-Paretian part of the individual income distribution. So, \textit{the case for the Gompertz curve is made on the grounds of a better data fit}, especially considering the very low income values that are strongly represented in the Brazilian income dataset, and its possible interpretation as a growth curve in the context of attempting to connect personal income with industrial production and economic growth (see Section \ref{conclusion} below). \subsection{Average Income} The mean income of the whole population may be written as follows, \begin{eqnarray} \langle x \rangle & = &\frac{\int_0^\infty x \: f(x) \: dx}{\int_0^\infty f(x) \: dx} = \frac{1}{100} \left[ \int_0^{x_{\scriptstyle t}} x \: B \: e^{(A-Bx)} \; e^{ \displaystyle e^{(A-Bx)}} dx \; + \right. \nonumber \\ & + & \left. \lim_{x_{_{\mbox{\tiny max}}} \rightarrow \infty} \int_{x_{\scriptstyle t}}^{x_{_{\mbox{\tiny max}}}} x \: \alpha \: \beta \: x^{-(1+\alpha)} dx \right]. 
\lb{avg} \end{eqnarray} The solution of the last integral on the right-hand side yields, \begin{eqnarray} & \lim_{x_{_{\mbox{\tiny max}}} \rightarrow \infty} & \int_{x_{\scriptstyle t}}^{x_{_{\mbox{\tiny max}}}} x \: \alpha \: \beta \: x^{-(1+\alpha)} dx = \nonumber \\ = & \lim_{x_{_{\mbox{\tiny max}}} \rightarrow \infty} & \left\{ \frac{\alpha \: \beta}{(1-\alpha)} \left[ {x_{_{\mbox{\tiny max}}}}^{(1-\alpha)} - {x_{\scriptstyle t}}^{(1-\alpha)} \right] \right\}. \lb{limit} \end{eqnarray} Clearly this limit will only converge if the Pareto index is greater than one. Non-finite averages may occur with power laws, as discussed in Ref.\ \cite{n05}. Indeed, datasets of finite sizes will produce a finite average since we can take $x_{_{\mbox{\tiny max}}}$ as being the maximum dataset value and cut off this integral above some upper limit. Nevertheless, this is not the case for the income distribution because although there are extremely rich individuals, if we make more measurements and generate a larger dataset we will eventually reach a value of $x$ such that the chance of getting an even larger value will indeed become zero, since even super-rich individuals do not receive an infinite income and their numbers are finite. In other words, as we go to larger and larger individual income datasets our estimate of $\langle x \rangle$ will \textit{not} increase without bound. We can therefore conclude that the condition $\alpha > 1$ is an empirically necessary requirement for the Pareto law to hold, which is just another way of stating that the boundary condition $F(\infty) \cong 0$ is empirically sound.
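A minimal numerical illustration of this convergence, using made-up parameter values that satisfy $\alpha > 1$: the truncated tail integral of equation (\ref{limit}) approaches a finite limit as the upper cutoff grows.

```python
# Illustrative parameter values only; the essential requirement is alpha > 1
alpha, beta, x_t = 2.7, 300.0, 7.5

def tail_mean(x_max):
    """Closed form of the truncated integral in eq. (limit)."""
    return alpha * beta / (1 - alpha) * (x_max ** (1 - alpha) - x_t ** (1 - alpha))

# Finite limit reached as the cutoff x_max goes to infinity
limit = alpha * beta / (alpha - 1) * x_t ** (1 - alpha)
diffs = [abs(tail_mean(x) - limit) for x in (1e2, 1e4, 1e6)]
assert diffs[0] > diffs[1] > diffs[2]   # monotone convergence towards the limit
assert diffs[2] < 1e-6
```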
In such a case equation (\ref{avg}) reduces to an expression which may be written as below, \begin{equation} \langle x \rangle = \frac{1}{100} \left[ \mathcal{I}(x_t) + \frac{\alpha \: \beta}{(\alpha -1)} {x_{\scriptstyle t}}^{(1-\alpha)} \right], \; \; \; (\mbox{for $\alpha > 1$}), \lb{avg2} \end{equation} where $\mathcal{I}(x)$ is given by the following, numerically solvable, integral, \begin{equation} \mathcal{I}(x) \equiv \int_0^x w \, g(w) \, dw= \int_0^x w \: B \: e^{(A-Bw)} \; e^{ \displaystyle e^{(A-Bw)}} dw. \lb{I} \end{equation} \section{Results}\lb{results} \subsection{Parameters of the Gompertz Curve} To determine $A$ and $B$ we carried out a least squares fit since in this region the dataset does not exhibit large fluctuations which can cause large fitting bias, as discussed in Goldstein et al.\ (2004). However, to do so we first need to find $x_{\mathrm{gmax}}$, that is, the maximum value of $x$ that marks the end of the Gompertz region. The boundary conditions (\ref{condi1}) and (\ref{A}) imply $A=1.53$ and, therefore, we assumed that the end of the Gompertz region is reached when a value for $x$ is found such that the straight line fit of $\left\{ \ln \left[ \ln G(x) \right] \right\}$ produces $A=1.5\pm0.1$. By following this methodology we were able to determine the specific value of $x_{\mathrm{gmax}}$ for our dataset and fit the Gompertz curve. Plots are shown in figures \ref{gomp1f} and \ref{gomp2} and the results are summarized in table \ref{t-gomp} where one can verify that the result $A=1.54\pm0.03$ encompasses the whole period under study, that is, from 1978 to 2005. Hence, in the time period of our analysis $A$ varies no more than 2.6\% from its boundary value given in equation (\ref{A}). Regarding the other parameter, the results are also stable from 1981 to 2005. 
However, $B$ was found to be higher in 1978 and 1979, a result which is probably related to the fact that in these years the income distribution behaves differently (see the caption of figure \ref{fig1}). \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{gomp1.ps}} \end{center} \caption{Plots showing the fit of the Gompertz curve to Brazil's individual income distribution data. The y-axis is the double logarithm of the complementary cumulative distribution, that is, $\left\{ \ln \left[ \ln F \right] \right\}$, whereas the x-axis is the normalized individual income $x$ up to the value where $A \approx 1.5$. The dashed line is the fitted straight line. Clearly the fit is good up to very small values of $x$, a result which brings support in favor of the Gompertz curve as a good model for the income distribution of the economically less favored individuals in the Brazilian population. Values of the parameters resulting from the fit are presented in table \ref{t-gomp}.\label{gomp1f}} \end{figure*} \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{gomp2.ps}} \end{center} \caption{Continuation of figure \ref{gomp1f} showing the fit of the Gompertz curve for data from 1992 to 2005.\label{gomp2}} \end{figure*} \begingroup \begin{table*}[!htbp] \caption{Results of fitting the Gompertz curve to Brazil's income distribution data from 1978 to 2005. The parameters were obtained by least-squares fitting and the respective errors by means of one thousand bootstrap resamples with replacement, such that an average fit and a standard deviation can be obtained for each parameter in order to estimate the uncertainties. Note the very good values of the correlation coefficients of the fitting. Since the Gompertz parameters do not change very much, we were able to reach an estimate valid for the whole period of our analysis. That yields $A=(1.54\pm0.03)$, $B=(0.39\pm0.08)$. Similarly, the end of the Gompertz region is given by $x_{\mathrm{gmax}}=(7.4\pm0.8)$.
\label{t-gomp}} \begin{center} \begin{tabular}{cccccc} \hline\noalign{\smallskip} \textbf{year} & $\mathbf{A}$ & $\mathbf{B}$ & $\mathbf{x_{{gmax}}}$ & \textbf{correlation coeff.} & \textbf{\% of individuals in Gompertz region} \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1978 &$1.52\pm0.01$& $0.46\pm0.01$ &$6.606$&$0.997$&$98.9$\\ 1979 &$1.54\pm0.01$& $0.44\pm0.01$ &$6.920$&$0.997$&$98.9$\\ 1981 &$1.55\pm0.01$& $0.34\pm0.02$ &$7.533$&$0.992$&$98.9$\\ 1982 &$1.55\pm0.01$& $0.34\pm0.02$ &$7.473$&$0.993$&$98.9$\\ 1983 &$1.54\pm0.01$& $0.33\pm0.01$ &$6.910$&$0.996$&$98.7$\\ 1984 &$1.55\pm0.01$& $0.33\pm0.01$ &$7.388$&$0.994$&$98.9$\\ 1985 &$1.54\pm0.01$& $0.33\pm0.01$ &$7.490$&$0.996$&$98.9$\\ 1986 &$1.55\pm0.01$& $0.34\pm0.01$ &$7.112$&$0.995$&$98.8$\\ 1987 &$1.55\pm0.01$& $0.34\pm0.02$ &$7.626$&$0.992$&$98.9$\\ 1988 &$1.54\pm0.01$& $0.32\pm0.02$ &$8.140$&$0.992$&$98.9$\\ 1989 &$1.53\pm0.01$& $0.32\pm0.01$ &$7.856$&$0.995$&$98.8$\\ 1990 &$1.54\pm0.01$& $0.34\pm0.02$ &$8.074$&$0.991$&$98.9$\\ 1992 &$1.56\pm0.01$& $0.36\pm0.02$ &$7.635$&$0.989$&$99.0$\\ 1993 &$1.54\pm0.01$& $0.33\pm0.01$ &$7.674$&$0.997$&$98.8$\\ 1995 &$1.54\pm0.01$& $0.33\pm0.01$ &$7.887$&$0.995$&$98.9$\\ 1996 &$1.55\pm0.01$& $0.35\pm0.02$ &$8.163$&$0.989$&$99.0$\\ 1997 &$1.55\pm0.01$& $0.34\pm0.02$ &$7.935$&$0.992$&$99.0$\\ 1998 &$1.54\pm0.01$& $0.33\pm0.01$ &$7.628$&$0.997$&$98.8$\\ 1999 &$1.54\pm0.01$& $0.33\pm0.01$ &$7.811$&$0.994$&$98.9$\\ 2001 &$1.54\pm0.01$& $0.34\pm0.01$ &$7.774$&$0.996$&$98.9$\\ 2002 &$1.55\pm0.01$& $0.34\pm0.02$ &$7.878$&$0.993$&$99.0$\\ 2003 &$1.54\pm0.01$& $0.33\pm0.01$ &$7.374$&$0.997$&$98.8$\\ 2004 &$1.55\pm0.01$& $0.34\pm0.02$ &$7.653$&$0.993$&$98.9$\\ 2005 &$1.54\pm0.01$& $0.33\pm0.01$ &$7.403$&$0.997$&$98.8$\\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} \endgroup \subsection{Parameters for the Pareto Law} To fit the Pareto law we need to determine $x_{\mathrm{pmin}}$, that is, the minimum value of $x$ that marks the start of the Paretian 
part of the income distribution. In most years our data clearly indicated that $x_{\mathrm{pmin}}$ ought to be equal to $x_{\mathrm{gmax}}$. Nevertheless, due to the previously discussed anomaly of the income distribution in 1978 and 1979, the data for these years showed that $x_{\mathrm{pmin}} > x_{\mathrm{gmax}}$. Inasmuch as from their definitions it is obvious that $x_{\mathrm{gmax}} \le x_t \le x_{\mathrm{pmin}}$, for 1978 and 1979 the transition incomes between the Gompertzian and Paretian regions and their uncertainties are evaluated as follows, \begin{equation} x_t=\frac{1}{2} \left( x_{\mathrm{pmin}} + x_{\mathrm{gmax}} \right), \; \; \delta x_t=\frac{1}{2}\left( x_{\mathrm{pmin}} - x_{\mathrm{gmax}} \right). \lb{xt} \end{equation} Clearly if $x_{\mathrm{gmax}} = x_{\mathrm{pmin}}$, then $x_t=x_{\mathrm{pmin}}$ and $\delta x_t=0$. These quantities were then calculated in our dataset and the results are presented in table \ref{t-pareto}. The parameters $\alpha$ and $\beta$ were evaluated by two different methodologies, least squares fitting and maximum likelihood estimation. Details of both methods and comparison of the results are described in what follows. \subsubsection{Least Squares Fitting} This fitting method is not recommended when the data shows large fluctuations, unless some binning process is employed such that these fluctuations are severely reduced. As discussed in Section \ref{data}, our data was treated that way and, therefore, we believe that presenting the Pareto law parameters obtained by least squares fitting (LSF) is useful, especially in order to compare them with the other fitting method described below. Figures \ref{paretof1} and \ref{pareto2} show the tail of the complementary cumulative distribution where one can clearly identify the power law decay in the data plots of all years. These figures also show the straight line fitted by least squares. Table \ref{t-pareto} presents the values of the parameters found by LSF.
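As noted in the caption of table \ref{t-gomp}, the quoted fitting uncertainties were estimated from one thousand bootstrap resamples with replacement. A minimal sketch of this kind of bootstrap error estimate on synthetic linearized Gompertz data (Python; the noise level, grid and random seed are arbitrary illustrative choices, not the survey data):

```python
import math, random

random.seed(0)
A_true, B_true, sigma = 1.53, 0.39, 0.01   # hypothetical values for illustration

# Synthetic linearized data:  y = ln(ln G(x)) = A - Bx + noise
data = [(0.5 * k, A_true - B_true * 0.5 * k + random.gauss(0.0, sigma))
        for k in range(1, 15)]

def fit(points):
    """Least-squares intercept and slope for the line y = A - Bx."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    B = -sxy / sxx
    return my + B * mx, B

# Bootstrap: refit one thousand resamples drawn with replacement
fits = [fit([random.choice(data) for _ in data]) for _ in range(1000)]
A_mean = sum(a for a, _ in fits) / len(fits)
A_err = math.sqrt(sum((a - A_mean) ** 2 for a, _ in fits) / len(fits))
assert abs(A_mean - A_true) < 0.05 and 0.0 < A_err < 0.05
```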
One can clearly notice that both parameters of the Pareto law do not remain as stable as the parameters of the Gompertz curve during the time span of our analysis. We must note that Cowell et al.\ \cite{cfl98} have previously presented evidence for the Pareto law in the Brazilian individual income distribution. Nonetheless, their study was restricted to the 1981-1990 period, which is shorter than the one considered here and, even so, they only analyzed data for three years: 1981, 1985 and 1990. They assumed a log-normal distribution for the region of lower income, but found out later that a Gaussian distribution does not fit the data well. They also took the unusual step of dividing the Pareto tail into two income range regions, one for the rich and the other for the very rich, without presenting an adequate justification for such a procedure, but reaching conclusions about the ``increased inequality amongst the very rich.'' This seems particularly odd if we bear in mind that the income region of the very rich is exactly where we have the least data and the statistical fluctuations are at their highest. As stated above, here we present a study with a larger time span and which includes all available data in the specified period, totaling 24 yearly samples. We also advance the Gompertz curve as a good descriptor for the lower individual income population and found no evidence to support the claim made by Ref.\ \cite{cfl98} of such two Paretian components. On the contrary, our data showed very clearly a well defined and unique Pareto tail in all samples. \subsubsection{Maximum Likelihood Estimation} This method is considered a better way of finding the Pareto index because it deals well with the statistical fluctuations found in the tails of income distributions. Here we shall closely follow the approach proposed by Ref.\ \cite{n05} to derive the likelihood of our dataset. The constant $\beta$ is obtained as a result of the normalization requirement (\ref{norm1}).
As seen above, this normalization is equivalent to the constraint equation (\ref{vinc}). Hence, \begin{equation} \beta = {x_{\scriptstyle t}}^{\displaystyle \alpha} e^{ \displaystyle e^{(A-Bx_{\scriptstyle t})} }. \lb{norm2} \end{equation} This expression can be substituted into the probability density (\ref{pareto1}), yielding, \begin{equation} p(x)= \alpha \; {x_{\scriptstyle t}}^{\alpha} e^{ \displaystyle e^{(A-Bx_{\scriptstyle t})} } \; x^{^{\scriptstyle -(1+\alpha )}}. \lb{pareto3} \end{equation} The \textit{likelihood} of the data set is given by, \begin{equation} P(x|\alpha)=\prod_{j=1}^n p(x_j)=\prod_{j=1}^n \alpha \; {x_{\scriptstyle t}}^{\alpha} e^{ \displaystyle e^{(A-Bx_{\scriptstyle t})} } \; {x_j}^{^{\scriptstyle -(1+\alpha )}}. \lb{prod} \end{equation} We can calculate the most likely value of $\alpha$ by maximizing the likelihood with respect to $\alpha$, which is the same as maximizing the logarithm of the likelihood, denoted as $\mathcal{L}$. Such calculation leads us to the following results, \begin{eqnarray} \mathcal{L} & = & \ln P(x|\alpha) \nonumber \\ & = & \sum_{j=1}^n \left[ \ln \alpha+\alpha \ln x_t + e^{(A-Bx_{\scriptstyle t})} - \left( 1+ \alpha \right) \ln x_j \right] \nonumber \\ & = & n \ln \alpha+ n \alpha \ln x_t + n e^{(A-Bx_{\scriptstyle t})} - \left( 1+ \alpha \right) \sum_{j=1}^n \ln x_j. \lb{lik1} \end{eqnarray} Setting $\partial \mathcal{L} / \partial \alpha =0$, we find, \begin{equation} \alpha=n {\left[ \sum_{j=1}^n \ln \left( \frac{x_j}{x_t} \right) \right] }^{-1}. \lb{lik} \end{equation} Apart from a slight notation change, this result is equal to equation (B6) in Ref.\ \cite{n05}, despite the fact that this work adopts a different normalization, as can be seen when comparing equation (\ref{norm2}) above to equation (9) of Ref.\ \cite{n05}. Therefore, this change in normalization does not affect the estimation of the exponent of the Pareto law obtained by the maximum likelihood estimator (MLE). 
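To see the estimator of equation (\ref{lik}) at work, the sketch below draws synthetic Pareto samples by inverse-transform sampling and recovers the exponent (Python; all numerical values are illustrative assumptions, not the Brazilian estimates).

```python
import math, random

random.seed(1)
alpha_true, x_t = 2.7, 7.5   # hypothetical values for illustration
n = 20000

# Inverse-transform sampling: if u is uniform on (0,1], then x_t * u**(-1/alpha)
# follows the Pareto density p(x) proportional to x**(-(1+alpha)) for x >= x_t
xs = [x_t * (1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(n)]

# Maximum likelihood estimator of the Pareto index, eq. (lik)
alpha_hat = n / sum(math.log(x / x_t) for x in xs)
assert abs(alpha_hat - alpha_true) < 0.1
```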
To find the expected error in the estimation of $\alpha$, the width of the maximum of the likelihood as a function of $\alpha$ should provide us with an estimate of $\delta \alpha$. Taking the exponential of equation (\ref{lik1}) allows us to find the likelihood as follows, \begin{equation} P(x|\alpha)= a e^{-b(1+ \alpha )} \alpha^n {x_t}^{n \alpha}, \lb{lik2} \end{equation} where \begin{equation} a= e^{\displaystyle \: n e^{(A-Bx_{\scriptstyle t})}}, \lb{a} \end{equation} \begin{equation} b= \sum_{j=1}^n \ln x_j \; . \lb{b} \end{equation} Remembering that $\alpha>1$, the square root of the variance in $\alpha$ will give us $\delta \alpha$. Therefore, we have that, \begin{equation} \delta \alpha=\sqrt{\langle \alpha^2 \rangle - {\langle \alpha \rangle }^2 }, \lb{dalfa} \end{equation} where \begin{equation} \langle \alpha \rangle = \frac{\displaystyle \int_1^\infty e^{-b(1+ \alpha )} \alpha^{(1+n)} {x_t}^{n \alpha} d \alpha} {\displaystyle \int_1^\infty e^{-b(1+ \alpha )} \alpha^n {x_t}^{n \alpha} d \alpha}, \lb{med} \end{equation} and \begin{equation} \langle \alpha^2 \rangle = \frac{\displaystyle \int_1^\infty e^{-b(1+ \alpha )} \alpha^{(2+n)} {x_t}^{n \alpha} d \alpha} {\displaystyle \int_1^\infty e^{-b(1+ \alpha )} \alpha^n {x_t}^{n \alpha} d \alpha}. \lb{med2} \end{equation} Note that these two integrals can be solved numerically and that both $x_j$ and $n$ in equations (\ref{lik}), (\ref{b}), (\ref{med}) and (\ref{med2}) refer only to the observed normalized income values within the Pareto region, that is, $x_j \ge x_t$. After calculating $\delta \alpha$, finding $\delta \beta$ becomes just a matter of using standard error propagation techniques in equation (\ref{norm2}). Table \ref{t-pareto} and figures \ref{pareto_mv1} and \ref{pareto_mv2} present the results of the Pareto law parameters obtained with the MLE. 
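Equations (\ref{med}) and (\ref{med2}) can be evaluated numerically, since the multiplicative constants in the integrand of eq.\ (\ref{lik2}) cancel in the ratio of integrals. The sketch below does this on synthetic Pareto data (Python; all values are illustrative assumptions): working with the logarithm of the integrand keeps the exponentials finite for large $n$, and the resulting $\delta\alpha$ comes out close to the familiar $\alpha/\sqrt{n}$ error estimate.

```python
import math, random

random.seed(2)
alpha_true, x_t, n = 2.7, 7.5, 2000   # illustrative values only
xs = [x_t * (1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(n)]

S = sum(math.log(x / x_t) for x in xs)   # equals b - n ln(x_t), with b as in eq. (b)
alpha_hat = n / S                        # MLE, eq. (lik)

def logw(a):
    # Logarithm of the integrand of eqs. (med)-(med2); multiplicative
    # constants are dropped because they cancel in the moment ratios
    return n * math.log(a) - a * S

# Riemann-sum moments on a grid around the likelihood peak; the grid
# spacing also cancels in the ratios
sigma0 = alpha_hat / math.sqrt(n)
lo, hi, m = max(1.0, alpha_hat - 8 * sigma0), alpha_hat + 8 * sigma0, 4000
grid = [lo + (hi - lo) * i / m for i in range(m + 1)]
shift = max(logw(a) for a in grid)       # log-space shift avoids overflow
w = [math.exp(logw(a) - shift) for a in grid]
norm = sum(w)
mean1 = sum(a * wi for a, wi in zip(grid, w)) / norm
mean2 = sum(a * a * wi for a, wi in zip(grid, w)) / norm
delta_alpha = math.sqrt(mean2 - mean1 ** 2)   # eq. (dalfa)

# The likelihood width agrees with the usual alpha/sqrt(n) error estimate
assert abs(delta_alpha - sigma0) / sigma0 < 0.1
```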
\subsection{Percentage Populations and Percentage Share} Once the Gompertzian and Paretian regions were established, we were able to find the percentage of the population in each component. The results shown in tables \ref{t-gomp} and \ref{t-pareto} allowed us to determine that from 1978 to 2005 the income region described by a Gompertz curve includes $(98.85\pm0.15)$\% of the population of Brazil, whereas the Pareto region includes only $(0.85\pm0.45)$\% of the Brazilian population. These results are similar to the findings of Ref.\ \cite{dy01} for the USA, showing that Brazil also has a two-class income system where the overwhelming majority of the population belongs to the lower income class. \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{pareto_mq1.ps}} \end{center} \caption{Plots showing the least squares fitting (LSF) of the Pareto law to Brazil's income distribution data from 1978 to 1990. The full line is the fitted straight line. The Pareto power law is clearly visible in all plots and, differently from Ref.\ \cite{cfl98}, we found no evidence of two Pareto tails. On the contrary, our plots show unique line fits. The tail fluctuations are visible, but they are not severe. Numerical values of the fitted parameters are shown in table \ref{t-pareto}.\label{paretof1}} \end{figure*} \begin{figure*} \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{pareto_mq2.ps}} \end{center} \caption{Continuation of figure \ref{paretof1} showing the LSF of the Pareto power law for data from 1992 to 2005.\label{pareto2}} \end{figure*} \begingroup \begin{table*}[!htbp] \caption{Results of fitting the Pareto law to Brazil's income data. The transition values from the Gompertz to the Pareto region are also shown. Due to the oddity of the data in 1978 and 1979 (see above), there are sizable uncertainties in $x_t$ for these years. Results for the parameters are presented for both methods, least squares fitting (LSF) and the maximum likelihood estimator (MLE).
The correlation coefficient obtained with LSF is included, as well as the percentage of the population in the Paretian region. $\delta \alpha$ and $\delta \beta$ in the LSF column were calculated as in the Gompertzian region, that is, by bootstrap replacement. The MLE method results in more stable values of $\alpha$, whereas LSF results appear noisy.\label{t-pareto}} \begin{center} \begin{tabular}{cccccccccc} \hline\noalign{\smallskip} \textbf{year} & $\mathbf{x_{pmin}}$ & $\mathbf{x_t}$ &\boldmath $\mathbf{\alpha_{LSF}}$ &\boldmath $\mathbf{\beta_{LSF}}$ &\boldmath $\mathbf{\alpha_{MLE}}$ &\boldmath $\mathbf{\beta_{MLE}}$ & \textbf{c.c.\ LSF} & \textbf{\% in Pareto region} \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1978&40.0&$23.3\pm16.7$&$2.44\pm0.11$& $4767\pm4236$&$2.94\pm0.06$&$10639\pm22543$&$0.981$&$1.1-0.4$\\ 1979&40.0&$23.5\pm16.5$&$3.09\pm0.17$&$49543\pm76479$&$3.09\pm0.09$&$17464\pm38230$&$0.973$&$1.1-0.4$\\ 1981&7.533&$7.533$&$3.52\pm0.09$&$2071\pm781$&$2.84\pm0.11$&$444\pm100$&$0.993$&$1.1$\\ 1982&7.473&$7.473$&$2.53\pm0.04$&$150.5\pm20.8$&$2.68\pm0.06$&$316\pm40$&$0.997$&$1.1$\\ 1983&6.910&$6.910$&$3.03\pm0.08$&$655.6\pm186.6$&$2.64\pm0.05$&$263\pm26$&$0.992$&$1.3$\\ 1984&7.388&$7.388$&$3.50\pm0.07$&$1870\pm451$&$2.84\pm0.11$&$441\pm97$&$0.996$&$1.1$\\ 1985&7.490&$7.490$&$3.15\pm0.10$&$1112\pm382$&$2.66\pm0.05$&$312\pm34$&$0.990$&$1.1$\\ 1986&7.112&$7.112$&$2.11\pm0.07$&$57.84\pm12.52$&$2.57\pm0.03$&$234\pm17$&$0.987$&$1.2$\\ 1987&7.626&$7.626$&$2.43\pm0.06$&$153.6\pm26.7$&$2.72\pm0.07$&$360\pm55$&$0.996$&$1.1$\\ 1988&8.140&$8.140$&$3.06\pm0.13$&$983.7\pm446.1$&$2.87\pm0.12$&$585\pm153$&$0.991$&$1.1$\\ 1989&7.856&$7.856$&$2.22\pm0.04$&$112.5\pm11.4$&$2.78\pm0.09$&$445\pm80$&$0.997$&$1.2$\\ 1990&8.074&$8.074$&$2.27\pm0.05$&$91.35\pm16.73$&$2.64\pm0.05$&$332\pm36$&$0.994$&$1.1$\\ 1992&7.635&$7.635$&$2.12\pm0.06$&$54.52\pm10.36$&$2.64\pm0.05$&$288\pm31$&$0.990$&$1.0$\\ 
1993&7.674&$7.674$&$2.41\pm0.04$&$179.3\pm25.9$&$2.57\pm0.03$&$271\pm20$&$0.996$&$1.2$\\ 1995&7.887&$7.887$&$3.21\pm0.10$&$1376\pm488$&$2.78\pm0.09$&$437\pm79$&$0.991$&$1.1$\\ 1996&8.163&$8.163$&$3.20\pm0.12$&$1331\pm616$&$2.75\pm0.08$&$421\pm71$&$0.986$&$1.0$\\ 1997&7.935&$7.935$&$2.79\pm0.05$&$419.3\pm79.7$&$2.62\pm0.04$&$310\pm32$&$0.995$&$1.0$\\ 1998&7.628&$7.628$&$2.94\pm0.03$&$632.6\pm68.6$&$2.68\pm0.06$&$335\pm40$&$0.998$&$1.2$\\ 1999&7.811&$7.811$&$3.10\pm0.07$&$867.5\pm205.5$&$2.78\pm0.09$&$429\pm77$&$0.996$&$1.1$\\ 2001&7.774&$7.774$&$3.23\pm0.20$&$1588\pm1257$&$2.72\pm0.07$&$372\pm54$&$0.973$&$1.1$\\ 2002&7.878&$7.878$&$2.93\pm0.05$&$539.8\pm89.1$&$2.78\pm0.09$&$426\pm79$&$0.996$&$1.0$\\ 2003&7.374&$7.374$&$3.18\pm0.06$&$1017\pm227$&$2.78\pm0.09$&$387\pm68$&$0.996$&$1.2$\\ 2004&7.653&$7.653$&$3.89\pm0.31$&$5793\pm19036$&$3.10\pm0.23$&$785\pm364$&$0.962$&$1.1$\\ 2005&7.403&$7.403$&$2.59\pm0.09$&$172.7\pm50.8$&$2.84\pm0.11$&$441\pm97$&$0.984$&$1.2$\\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} \endgroup \begin{figure*}[tbh] \epsfysize=16cm \begin{center} \rotatebox{-90}{\epsffile{pareto_mv1.ps}} \end{center} \caption{These graphs present the tail of the complementary cumulative distribution fitted by a Pareto power law whose exponents were obtained with the maximum likelihood estimator (MLE). The dashed lines appearing in the plots of 1978 and 1979 represent the upper limits of $\delta \beta$ in these years, which, according to the results of table \ref{t-pareto}, are quite large.\label{pareto_mv1}} \end{figure*} \begin{figure*}[htb] \epsfysize=15.4cm \begin{center} \rotatebox{-90}{\epsffile{pareto_mv2.ps}} \end{center} \caption{Continuation of figure \ref{pareto_mv1} showing the Pareto power law fitted with the MLE from 1992 to 2005.\label{pareto_mv2}} \end{figure*} It is of interest to obtain the percentage share of each of the two income components analyzed in this paper relative to the total income. 
Table \ref{tab4a} presents these results together with the Gini coefficients shown in figure \ref{fig5}. For the same reasons discussed above, the data analysis for 1978 and 1979 is problematic because there is a large uncertainty in the transition income $x_t$ between the Gompertzian and Paretian regions (see table \ref{t-pareto}). Figure \ref{fig8} shows the percentage share of the Pareto region, and in this figure the uncertainties for 1978 and 1979 appear as large error bars for the first two points. If we dismiss these two points, a careful look at the irregular curve formed by the variations of the Paretian percentage share reveals an oscillatory pattern, albeit with changing amplitude, whose period is roughly 4 years. The maximum and minimum inflexion points seem to alternate approximately every 2 years. It is interesting to ask whether this approximate cycling pattern correlates with any other economic quantity. Figure \ref{gdp} presents a plot of the gross domestic product (GDP) growth of Brazil in the same time period as figure \ref{fig8} and, although we can also identify an approximate cycling pattern in this graph, its oscillation does not seem to correlate with the cycles in the percentage share of the Pareto region. As a final point, we should note that this approximate cycling pattern in the Paretian share could be consistent with a purely deterministic dynamical model based on the application of the Lotka-Volterra equation to economic growth and cycles, as advanced long ago by Goodwin \cite{g67}. Such a model predicts a very regular oscillation of the percentage share of the lower income class; this discrepancy with our less regular data could perhaps be remedied by the introduction of perturbation techniques. We shall not pursue this issue further here \cite{mjr07}.
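The Gini coefficients reported in table \ref{tab4a} below follow from numerically estimated areas under the Lorenz curves. Purely as an illustration of this step (the function and the sample values in the check are ours, not the actual survey data), a trapezoidal-rule sketch:

```python
def gini_from_lorenz(shares):
    """Gini coefficient from a Lorenz curve sampled at equally spaced
    cumulative population fractions p = 1/n, ..., 1, where shares[i]
    is the cumulative income share L(p).  The coefficient is one minus
    twice the trapezoidal-rule area under the Lorenz curve."""
    n = len(shares)
    area, prev = 0.0, 0.0
    for s in shares:
        area += (prev + s) / (2.0 * n)  # trapezoid of width 1/n
        prev = s
    return 1.0 - 2.0 * area
```

Perfect equality ($L(p)=p$) gives a coefficient of $0$, while concentrating all income in the last sampled group gives a value approaching $1$.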
\begingroup \begin{table*}[!htbp] \caption{This table presents the percentage share relative to the total of each of the two components that characterize the income distribution in Brazil. The Gini coefficients from 1978 to 2005 plotted in figure \ref{fig5} are also presented. By definition, these coefficients are obtained as the area in between the two curves in figures \ref{fig3} and \ref{fig4}. The area below each of the Lorenz curves was estimated numerically.\label{tab4a}} \begin{center} \begin{tabular}{cccc} \hline\noalign{\smallskip} \textbf{year} & \textbf{\% share of Gompertz region} & \textbf{\% share of Pareto region} & \textbf{Gini coefficient}\\ \noalign{\smallskip}\hline\noalign{\smallskip} 1978 & 57.1 & 31.9 & 0.739 \\ 1979 & 62.0 & 26.2 & 0.711 \\ 1981 & 87.7 & 12.3 & 0.574 \\ 1982 & 87.2 & 12.8 & 0.581 \\ 1983 & 85.5 & 14.5 & 0.584 \\ 1984 & 87.2 & 12.8 & 0.576 \\ 1985 & 85.8 & 14.2 & 0.589 \\ 1986 & 85.2 & 14.8 & 0.580 \\ 1987 & 85.9 & 14.1 & 0.592 \\ 1988 & 85.4 & 14.6 & 0.609 \\ 1989 & 82.5 & 17.5 & 0.628 \\ 1990 & 85.9 & 14.1 & 0.605 \\ 1992 & 87.0 & 13.0 & 0.578 \\ 1993 & 84.1 & 15.9 & 0.599 \\ 1995 & 85.9 & 14.1 & 0.596 \\ 1996 & 86.7 & 13.3 & 0.598 \\ 1997 & 86.1 & 13.9 & 0.598 \\ 1998 & 84.5 & 15.5 & 0.597 \\ 1999 & 86.0 & 14.0 & 0.590 \\ 2001 & 85.2 & 14.8 & 0.592 \\ 2002 & 86.4 & 13.6 & 0.586 \\ 2003 & 85.4 & 14.6 & 0.579 \\ 2004 & 87.3 & 12.7 & 0.577 \\ 2005 & 86.2 & 13.8 & 0.580 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} \endgroup \begin{figure*}[htp] \epsfysize=13cm \begin{center} \rotatebox{-90}{\epsffile{plot-percent-pareto.ps}} \end{center} \caption{Plot showing the percentage share of the Pareto region relative to the total income. The error bars in 1978 and 1979 are due to the large uncertainty of the transitional income value $x_t$ in these years, which led to large uncertainties in the income share of both the Gompertz and Pareto regions.
Even if we dismiss the portion due to these two years, an irregular oscillatory pattern with changing amplitude is apparent during the time span of our analysis. The maximum and minimum inflexion points seem to alternate roughly every 2 years, whereas the period of this oscillation is approximately 4 years. If this oscillatory pattern is in fact a real feature of the income distribution in Brazil, the next maximum of the Paretian income share should occur in 2005-2007, while the next minimum should happen in 2007-2009. However, we must point out that \textit{this oscillatory pattern does not mean equilibrium}. There was economic growth for most of the period shown here, although the growth rate was at times fairly modest.\label{fig8}} \end{figure*} \begin{figure*}[htp] \epsfysize=13cm \begin{center} \rotatebox{-90}{\epsffile{gdp.ps}} \end{center} \caption{GDP growth rate in Brazil from 1978 to 2005. Although this graph also shows an approximate cycling pattern, the oscillation shown here does not seem to correlate with the cycle in the percentage share of the Paretian region presented in figure \ref{fig8}.\label{gdp}} \end{figure*} \section{Conclusion}\lb{conclusion} In this paper we have carried out an analysis of the personal income distribution in Brazil from 1978 to 2005. We have made use of the extensive household data surveys collected and made digitally available by the Brazilian Institute for Geography and Statistics -- IBGE in order to obtain 24 yearly samples of the complementary cumulative distribution function $F(x)$ of individual income in Brazil in terms of the normalized personal income $x$. We have concluded that this distribution function is well described by two components. The first is a Gompertz curve of the form $G\,(x)=\exp\,[\,\exp \, (A-Bx)]$, valid from $x=0$ up to the transitional income $x_t$ and which includes $(98.85\pm0.15)$\% of the population.
The second component of the complementary cumulative income distribution is a Pareto power law $P\,(x)= \beta\,x^{-\alpha}$, valid from $x_t$ upward. This includes the remaining $(0.85\pm0.45)$\% of the population of Brazil. The positive parameters $A$, $B$, $\alpha$ and $\beta$ were all determined by a mixture of boundary conditions, normalization and data fitting in all 24 yearly samples. We also estimated uncertainties for these parameters. Lorenz curves and Gini coefficients were also obtained, as well as the evolution of the percentage share of both components relative to the total income. The Paretian and Gompertzian shares show an approximate cycling pattern with periods of about 4 years and maximum and minimum peaks alternating about every 2 years. These results show that the income distribution pattern emerging from the present study allows us to \textit{characterize Brazil as being formed by a well-defined two-class system}. The challenging questions posed by the results of this work concern the possible origins of the Gompertz curve. It seems quite reasonable to suspect that the underlying dynamics of income distribution should be intimately related to the dynamics of production and economic growth in industrialized capitalist economies. Since economic growth happens because production generates a net physical surplus, the search for the origins of the Gompertz curve in income distribution should perhaps focus on growth, because this curve has been successfully applied in models of population dynamics, particularly human mortality, from which it originated \cite{sh05}, population ecology \cite{kot01} and the growth of biomass \cite{ma06}. So, the Gompertz curve may provide an important clue connecting income distribution and economic growth as a result of net production surplus.
And although in these applications the power of the first exponential of the Gompertz curve is negative whereas here it has a positive sign, such a difference may not be relevant to the connection just mentioned. These remarks should also be true for the \textit{logistic function}, which shares with the Gompertz curve the main feature of being S-shaped \cite{kot01,w32} and also appears in economic models. From a physicist's standpoint, it is well known that the dynamics of complex systems gives rise to fractal power law patterns similar to the Pareto law. So, patterns in economic growth, viewed perhaps as a complex dynamical system, could be the root cause giving rise to the Gompertzian and Paretian income distribution functions. \begin{acknowledgement} We would like to express our gratitude to Humberto Lopes, Jos\'e Luiz Louzada, Vera Duarte Magalh\~aes and Cristiano de Almeida Martins for their help with IBGE data. We are also grateful to two referees for very useful comments which improved this paper. \end{acknowledgement}
\section{Introduction and main results} The study of glass and mean field or realistic spin glass models is a very rich and important part of theoretical physics \cite{Montanaribook, MPV, Parisibook}. For mathematicians, it is a challenging program \cite{T03,T11,P13}. Roughly speaking, the main goal is to study the global maxima or, more generally, the ``largest individuals" of a stochastic process with ``high-dimensional" correlation structure. The classic example of such a process is the mixed $p$-spin model. Its Hamiltonian (or energy) $H_N$ is defined on the spin configuration space $\Sigma_N=\{-1,1\}^N$ by \begin{align*} H_N(\sigma)&=X_N(\sigma)+h\sum_{i=1}^N\sigma_i. \end{align*} Here, $h\in\mathbb{R}$ denotes the strength of the external field and $X_N$ is a centered Gaussian process with covariance, \begin{equation*} \mathbb E X_{N}(\sigma^1)X_N(\sigma^2)=N\xi(R_{1,2}), \end{equation*} where \begin{equation*} \xi(s):=\sum_{p\geq 2}c_p^2s^p \end{equation*} for some real sequence $(c_p)_{p\geq 2}$ with $\sum_{p\geq 2}2^pc_p^2<\infty$ and $$R_{1,2}=R(\sigma^{1}, \sigma^{2}):= \frac{1}{N}\sum_{i=1}^N\sigma_i^1\sigma_i^2$$ is the normalized inner product between $\sigma^1$ and $\sigma^2$, known as the overlap. The covariance structure of $X_N$ is as rich as the structure of the metric space $(\Sigma_N, d)$, where $d$ is the Hamming distance on $\Sigma_{N}$, $$d(\sigma^{1},\sigma^{2}) = \frac{1-R(\sigma^{1},\sigma^{2})}{2}.$$ The problem of computing the maximum energy (or the ground state energy) of $H_N$ as $N$ diverges is a rather nontrivial task. 
Standard statistical mechanics deals with this problem by considering the Gibbs measure \begin{equation*} G_{N,\beta} (\sigma) = \frac{1}{Z_{N,\beta}} e^{\beta H_N(\sigma)} \end{equation*} and the free energy \begin{align*} F_{N,\beta}=\frac{1}{\beta N}\log Z_{N,\beta}, \end{align*} where $Z_{N,\beta}$ is the partition function of $H_N$ defined as \begin{align*} Z_{N,\beta} = \sum_{\sigma\in \Sigma_{N}} e^{ \beta H_N(\sigma)}. \end{align*} The parameter $\beta=1/(kT)>0$ is called the inverse temperature, where $k$ is the Boltzmann constant and $T$ is the absolute temperature. The main goal in this approach is to try to describe the large $N$ limit of the sequences of the free energies $F_{N,\beta}$ and the Gibbs measures $G_{N,\beta}$. When the temperature $T$ decreases, large values of $H_N$ become more important (to both the partition function $Z_{N,\beta}$ and to the Gibbs measure $G_{N,\beta}$) and they prevail over the more numerous smaller values. Since $H_{N}$ is a high-dimensional correlated field with a large number of points near its global maximum, this question becomes very challenging, especially for small values of $T$. When $\xi(s)=s^{2}/2$ and $h=0$, the model above is the famous Sherrington-Kirkpatrick (SK) model introduced in \cite{SK}, as a mean field modification of the Edwards-Anderson model \cite{EA}. Using a non-rigorous replica trick and a replica symmetric hypothesis, Sherrington and Kirkpatrick \cite{SK} proposed a solution to the limiting free energy of the SK model. Their solution however was incomplete; an alternative solution was proposed in 1979 in a series of ground-breaking articles by Giorgio Parisi \cite{Pa79,Parisi, Pa80,Pa83}, where it was foreseen that: \begin{enumerate} \item[$(i)$] The limiting free energy is given by a variational principle, known as the Parisi formula, \item[$(ii)$] The Gibbs measures are asymptotically ultrametric, \item[$(iii)$] At low temperature, the symmetry of replicas is broken infinitely many times. 
\end{enumerate} The first two predictions were confirmed in the past decade. Following the beautiful discovery of Guerra's broken replica symmetry scheme \cite{Guerra}, the Parisi formula was proved in the seminal work of Talagrand \cite{Talagrand} in 2006 under the convexity assumption of $\xi$. Later, in 2012, the ultrametricity conjecture was established by Panchenko \cite{Panch} assuming the validity of the extended Ghirlanda-Guerra identities \cite{GG}. These identities are known to be valid for the SK model under an asymptotically vanishing perturbation term to the Hamiltonian, and for generic models without any perturbation. As a consequence of ultrametricity, the Parisi formula was further extended to generic models by Panchenko \cite{P05} utilizing the Aizenman-Sims-Starr scheme \cite{ASS}. Our main result in this paper confirms the third prediction at zero temperature, $T=0$. More precisely, the Parisi formula is stated as follows. Denote by $\mathcal{M}$ the collection of all cumulative distribution functions $\alpha$ on $[0,1]$ and by $\alpha(d s)$ the probability induced by $\alpha$. For $\alpha\in\mathcal{M}$, define \begin{align}\label{pf} \mathcal{P}_{\beta}(\alpha)=\frac{\log 2}{\beta}+\Psi_{\alpha,\beta}(0,h)-\frac{1}{2}\int_0^1\beta\alpha(s)s\xi''(s)d s, \end{align} where $\Psi_{\alpha,\beta}(t,x)$ is the weak solution to the following nonlinear parabolic PDE, \begin{align*} \partial_t\Psi_{\alpha,\beta}(t,x)&=-\frac{\xi''(t)}{2}\bigl(\partial_{xx}\Psi_{\alpha,\beta}(t,x)+\beta\alpha(t)(\partial_x\Psi_{\alpha,\beta}(t,x))^2\bigr) \end{align*} for $(t,x)\in[0,1)\times\mathbb{R}$ with boundary condition $$ \Psi_{\alpha,\beta}(1,x)=\frac{\log \cosh \beta x}{\beta}. $$ For the existence and regularity of $\Psi_{\alpha,\beta}$, we refer the readers to \cite{ParisiMeasure,JT2}. 
The Parisi formula \cite{Talagrand} states that \begin{align}\label{eq0} F_\beta:=\lim_{N\rightarrow\infty}F_{N,\beta}&=\inf_{\alpha\in\mathcal{M}}\mathcal{P}_\beta(\alpha) \,\,\,\,a.s. \end{align} The infinite dimensional variational problem on the right side of \eqref{eq0} has a unique minimizer \cite{ParisiMeasure}, denoted by $\alpha_{P,\beta}$. The measure $\alpha_{P,\beta}(d t)$ induced by $\alpha_{P,\beta}$ is known as the Parisi measure \cite{MPV}\footnote{The Parisi measure is the inverse of the functional order parameter $q(x)$ in \cite{Parisi}, sometimes written as $x(q)$.}. Its physical relevance is described by the facts that it is the limiting distribution of the overlap $R(\sigma^1,\sigma^2)$ under the measure $\mathbb E G_N^{\otimes 2}$ and, more importantly, that it determines the ultrametric description of the asymptotic Gibbs measure. For instance, the number of points in the support of the Parisi measure corresponds to the number of levels in the tree structure induced by the ultrametricity of the asymptotic Gibbs measure. See \cite{MPV,P13} for detailed discussion. The importance of the Parisi measure leads to the following classification. If a Parisi measure $\alpha_{P,\beta}(d t)$ is a Dirac measure, we say that the model is replica symmetric (RS). For $k\geq 1$, we say that the model has $k$ levels of replica symmetry breaking ($k$-RSB) if the Parisi measure is atomic and has exactly $k+1$ jumps. If the Parisi measure is neither RS nor $k$-RSB for some $k\geq 1,$ then the model has full-step replica symmetry breaking (FRSB). We will also say that the model is at least $k$-RSB if the Parisi measure contains at least $k+1$ distinct values in its support. The FRSB prediction in $(iii)$ above plays an inevitable role in Parisi's original solution of the SK model; see \cite{Der} for a historic account. 
It can be written as: \begin{prediction}[Parisi]\label{con:ds} For any $\xi$ and $h$, there exists a critical inverse temperature $\beta_{c}>0$ such that for any $\beta > \beta_{c}$, the mixed $p$-spin model is FRSB. \end{prediction} In this paper, we establish this prediction at zero temperature. To prepare for the statement of our main result, we recall the Parisi formula for the ground state energy of $H_N$ as follows. First of all, the Parisi formula allows us to compute the ground state energy of the model by sending the temperature $T$ to zero, \begin{align} \label{eq1} GSE := \lim_{N\to \infty} \max_{\sigma \in \Sigma_N} \frac{H_N(\sigma)}N=\lim_{\beta \rightarrow\infty}F_{\beta}=\lim_{\beta \rightarrow\infty}\inf_{\alpha\in\mathcal{M}}\mathcal{P}_\beta(\alpha), \end{align} where the validity of the first equality can be found, for instance, in Panchenko's book \cite[Chapter 1]{P13}. Recently, the analysis of the $\beta$-limit of the second equality was carried out in Auffinger-Chen \cite{AC16} and it was discovered that the ground state energy can be written as a Parisi-type formula. Let $\mathcal{U}$ denote the collection of all cumulative distribution functions $\gamma$ on $[0,1)$ induced by any measures on $[0,1)$ and satisfying $\int_0^1\gamma(t) d t<\infty$. Denote by $\gamma(d t)$ the measure that induces $\gamma$ and endow $\mathcal{U}$ with the $L^1(d t)$-distance. For each $\gamma\in \mathcal{U}$, consider the weak solution to the Parisi PDE, \begin{align*} \partial_t \Psi_\gamma(t,x) = -\frac{\xi''(t)}2 \bigl(\partial_{xx} \Psi_\gamma(t,x)+\gamma(t) (\partial_x \Psi_\gamma(t,x))^2\bigr) \end{align*} for $(t,x)\in[0,1)\times \mathbb{R}$ with boundary condition \[ \Psi_\gamma(1,x) = |x|. \] One may find the existence and regularity properties of this PDE solution in \cite{CHL16}. The Parisi functional at zero temperature is given by \begin{equation} \label{eq:par_fcn} \mathcal{P}(\gamma) = \Psi_\gamma(0,h) -\frac12 \int_0^1 t\xi''(t)\gamma(t) d t. 
\end{equation} Auffinger and Chen \cite{AC16} proved that the maximum energy can be computed through \begin{equation}\label{eq:par_fml} GSE=\inf_{\gamma\in \mathcal{U}} \mathcal{P}(\gamma)\quad a.s. \end{equation} We call this variational representation the Parisi formula at zero temperature. It was proved in \cite{CHL16} that this formula has a unique minimizer, denoted by $\gamma_P$. We call $\gamma_P(d t)$ the Parisi measure at zero temperature. We say that the model is FRSB at zero temperature if $\gamma_P(d t)$ contains infinitely many points in its support. Our first main result is a proof of Parisi's FRSB prediction at zero temperature. \begin{theorem}\label{th:main} For any $\xi$ and $h,$ the mixed $p$-spin model at zero temperature is FRSB. \end{theorem} Similar to the role the Parisi measure plays at positive temperature in describing the behavior of the model, the Parisi measure at zero temperature also has its own relevance in understanding the energy landscape of the Hamiltonian around the maximum energy. Indeed, consider the mixed even $p$-spin model, i.e., $c_p=0$ for all odd $p\geq 3.$ It can be shown that for any $\varepsilon,\eta>0$ and any $u$ in the support of $\gamma_P(d t)$, there exists some constant $K>0$ independent of $N$ such that \begin{align}\label{eq4} \mathbb{P}\Bigl(\exists\sigma^1,\sigma^2\,\,\mbox{such that}\,\,R_{1,2}\in (u-\varepsilon,u+\varepsilon)\,\,\mbox{and}\,\,\frac{H_N(\sigma^1)}{N},\frac{H_N(\sigma^2)}{N}\geq GSE-\eta\Bigr)\geq 1- Ke^{-\frac{N}{K}} \end{align} for all $N\geq 1.$ This means that for any $u\in\mbox{supp } \gamma_P$, one can always find two spin configurations around the maximum energy whose overlap is near $u$ with overwhelming probability. The display \eqref{eq4} can be established by means of the Guerra-Talagrand replica symmetry breaking bound for the maximum coupled energy with overlap constraint (see \cite[Subsection 3.1]{CHL16} and \cite{AC17}).
Knowing now, by Theorem \ref{th:main}, that the model is FRSB indicates that the spin configurations around the maximum energy are not simply clustered into equidistant groups. This is in sharp contrast to the energy landscape of the spherical version of the mixed $p$-spin model, where, for the pure $p$-spin model, i.e., $\xi(t)=t^p$ with $p\geq 3,$ it was shown by Subag \cite{Subag} that around the maximum energy the spin configurations are essentially orthogonally structured. This structure was also presented for more general mixtures of the spherical model in the recent work of Auffinger and Chen \cite{AC17}. \begin{remark} \rm The problem of computing the maximum energy is also generally known as the Dean's problem and is frequently used to motivate the theory of mean field spin glasses; see \cite{P13,MPV}. More recently, the formula \eqref{eq1} has appeared in other optimization problems related to theoretical computer science, such as extremal cuts on sparse random graphs; see \cite{DMS,Sen} and the references therein. \end{remark} We now return to the positive temperature case. Recall the Parisi measure $\alpha_{P,\beta}$ introduced in \eqref{eq0}. Our second main result, as a consequence of Theorem \ref{th:main}, shows that for any mixture parameter $\xi$ and external field $h$, the number of levels of replica symmetry breaking must diverge as $\beta$ goes to infinity. \begin{theorem}\label{thm:finiteTemp} Let $k\geq 1.$ For any $\xi$ and $h$, there exists $\beta_{k}$ such that the mixed $p$-spin model is at least $k$-RSB for all $\beta > \beta_{k}.$\end{theorem} We finish this section with some historical remarks and a description of the main novelty of our approach. For the SK model without external field, the Parisi measure was shown to be RS in the high temperature regime $\beta < 1$ by Aizenman, Lebowitz, and Ruelle \cite{ALR}. Later, it was also understood by Toninelli \cite{Toni} that the Parisi measure is not RS in the low temperature region $\beta>1$.
The whole region $\beta >1$ is expected to be FRSB. Before Theorem \ref{thm:finiteTemp}, the state of the art towards Parisi's FRSB prediction was given in \cite[Theorem 3]{AChen13}, where the authors established that for sufficiently low temperature the mixed $p$-spin model with $h=0$ is at least 2-RSB. It is also believed that the functional order parameter $\alpha_{P,\beta}$ is not only FRSB at low temperature, but also has an absolutely continuous part \cite[Chapter III]{MPV}. Regularity properties of Parisi measures can be found in \cite{AChen13}. The main novelty of our approach to Theorem \ref{th:main} is to exploit the Parisi formula for the ground state energy \eqref{eq:par_fml} by considering a perturbation around the point $1$. In short, we show that it is always possible to lower the value of the Parisi functional of any atomic measure with finitely many atoms by adding a large enough jump near $1$. At finite temperature, since the Parisi measure is a probability measure, the idea of adding a large jump is not feasible. As the reader will see, some miraculous cancellations (see Lemma \ref{add:lem1} and Proposition \ref{add:prop2} among others) occur during the proof. These cancellations mostly come from exact computations that use the fact that the boundary condition of $\Psi_\gamma$ at $1$ is $|x|$. Theorem \ref{thm:finiteTemp} follows from Theorem \ref{th:main} after some weak convergence considerations. \section{Lowering the value of the Parisi functional} In this section, we show that for any atomic $\gamma(d s)$ with finitely many jumps, one can always lower the value of the Parisi functional by a perturbation of $\gamma$ around $1$. Let $\gamma\in \mathcal{U}$ be fixed.
Suppose that $\gamma(dt)$ is atomic and consists of finitely many jumps, that is, \begin{align*} \gamma(t) = \sum_{i=0}^{n-1} m_{i} \ensuremath{\boldsymbol 1}_{[q_{i}, q_{i+1})} (t)+ m_n \ensuremath{\boldsymbol 1}_{[q_n,1)}(t), \end{align*} where $(q_i)_{0\leq i\leq n}$ and $(m_i)_{0\leq i\leq n}$ satisfy \begin{align} \begin{split} \label{eq2} &0=q_0< q_1<q_2<\cdots<q_n<1,\\ &0\leq m_0<m_1<m_2<\cdots<m_n<\infty. \end{split} \end{align} Here and in what follows, $\ensuremath{\boldsymbol 1}_B(t)=\ensuremath{\boldsymbol 1}_{[t\in B]}$ is the indicator function of the set $B\subset \mathbb{R}$. Let $m_{n+1}$ be any number greater than $m_n.$ For any $q\in (q_n,1),$ consider a perturbation of $\gamma$ by \begin{align}\label{pert} \gamma_q(t) = \sum_{i=0}^{n-1} m_{i} \ensuremath{\boldsymbol 1}_{[q_{i},q_{i+1})}(t) + m_n\ensuremath{\boldsymbol 1}_{[q_{n}, q)} (t)+ m_{n+1} \ensuremath{\boldsymbol 1}_{[q, 1)} (t). \end{align} In other words, we add a jump to the top of $\gamma.$ Our main result is the following theorem. It says that if $m_{n+1}$ is large enough, then the Parisi functional evaluated at perturbed measure $\gamma_q(dt)$ has a smaller value than $\mathcal{P}(\gamma)$ locally for $q$ near $1$. \begin{theorem}\label{thm3} There exist $m_{n+1}>m_n$ and $\eta\in (q_n,1)$ such that \begin{align*} \mathcal{P}(\gamma_q)<\mathcal{P}(\gamma) \end{align*} for all $\eta\leq q<1$. \end{theorem} The following three subsections are devoted to the proof of Theorem \ref{thm3}. \subsection{Probabilistic representation of $\mathcal P$} We start by observing that the Parisi functional at $\gamma$ admits a probabilistic expression by an application of the Cole-Hopf transformation to the Parisi PDE. Indeed, let $z_0,\ldots,z_n$ be i.i.d. standard Gaussian random variables. Denote \begin{align*} J&=h+\sum_{i=0}^{n-1}z_i\sqrt{\xi'(q_{i+1})-\xi'(q_i)}+z_{n}\sqrt{\xi'(1)-\xi'(q_n)}. \end{align*} Set \begin{align*} X_{n+1}&=|J|. 
\end{align*} Define iteratively, for $0\leq i\leq n,$ \begin{align*} X_i=\frac{1}{m_i}\log \mathbb{E}_{z_i}\exp m_i X_{i+1}, \end{align*} where $\mathbb{E}_{z_i}$ stands for the expectation with respect to $z_i.$ Here $X_0$ is defined as $\mathbb{E}_{z_0} X_1$ if $m_0=0.$ Then $\Psi_{\gamma}(0,h)=X_0$ and thus, \begin{align*} \mathcal{P}(\gamma) &= X_0-\frac12\sum_{i=0}^{n-1} m_i \int_{q_i}^{q_{i+1}} t\xi''(t) d t-\frac{m_n}2\int_{q_n}^1 t\xi''(t) d t. \end{align*} Recall the perturbation $\gamma_q$ from \eqref{pert}. Clearly $\gamma_q=\gamma$ on $[0,q)$ for all $q_{n}< q < 1.$ For notational convenience, we denote \begin{align}\label{eq3} q_{n+1}=q,\,\,q_{n+2}=1. \end{align} In a similar manner, by applying the Cole-Hopf transformation, we can express $\Psi_{\gamma_q}(0,h)$ as follows. Let $z_{n+1}$ be a standard Gaussian random variable independent of $z_0,\ldots,z_n.$ Define \begin{align*} Y_{n+2}&=\Bigl|h+\sum_{j=0}^{n+1}z_j\sqrt{\xi'(q_{j+1})-\xi'(q_j)}\Bigr| \end{align*} and iteratively, for $0\leq i\leq n+1,$ \begin{align*} Y_{i}&=\frac{1}{m_{i}}\log \mathbb{E}_{z_i} \exp m_{i} Y_{i+1}. \end{align*} Here again we let $Y_0=\mathbb{E}_{z_0}Y_1$ whenever $m_0=0.$ Thus, $\Psi_{\gamma_q}(0,h)=Y_0$ for any $q\in (q_{n},1).$ As a result, \begin{align} \label{e:pxgaq} \mathcal{P}(\gamma_q) &= Y_0-\frac12\sum_{i=0}^{n-1} m_i \int_{q_i}^{q_{i+1}} t\xi''(t) d t-\frac{m_n}{2}\int_{q_n}^qt\xi''(t)d t-\frac{m_{n+1}}2\int_{q}^1 t\xi''(t) d t. \end{align} In particular, we have $\lim_{q\to1-}\Psi_{\gamma_q}(0,h)=\Psi_{\gamma}(0,h)$ and $\lim_{q\to1-}\mathcal{P}(\gamma_q)=\mathcal{P}(\gamma).$ \subsection{Some auxiliary lemmas}\label{sub2.1} We state two propositions that will be heavily used in our main proof in the next subsection. Let $0\leq a<t<b$ and $0< m<m'.$ Denote by $z$ a standard normal random variable.
Define \begin{align} \begin{split}\label{add:eq2} A (t,x)&=\frac{1}{m'}\log \mathbb{E}\exp m'\bigl|x+z\sqrt{b-t}\bigr|,\\ B(t,x)&=\frac{1}{m}\log \mathbb{E}\exp m A (t,x+z\sqrt{t-a}),\\ C(t,x)&=\mathbb{E} A _x ^2(t,x+z\sqrt{t-a})V(t,x,x+z\sqrt{t-a}), \end{split} \end{align} where \begin{align*} V(t,x,y)&=e^{m(A (t,y) -B(t,x))}. \end{align*} Here $A_x(t,x)$ is the partial derivative of $A(t,x)$ in $x$. In what follows, we will adopt the same notation for $A_{xx}(t,x),A _t(t,x),B_t(t,x),C_t(t,x)$, etc. for the partial derivatives with respect to the subscripts. We will also consider these functions applied to random variables. Using again $z$ to denote a standard Gaussian, we set $V=V(t,x,x +z \sqrt{t-a})$, $A_{x}=A_{x}(t,x+z\sqrt{t-a})$, etc. The main results of this subsection are the following two propositions. \begin{proposition} \label{add:prop1} For any $(t,x)\in[a,b)\times\mathbb{R}$, we have that \begin{align} \begin{split} \label{add:prop1:eq1} B_t(t,x)&=\frac{(m-m')}{2}C(t,x) \end{split} \end{align} and \begin{align} \begin{split} \label{add:prop1:eq2} C_t(t,x)&=\mathbb{E} \bigl(A _{xx}^2+2(m -m')A _{xx}A _x^2\bigr)V+\frac{(m -m')m }{2}\bigl(\mathbb{E} A _x^4V-\bigl(\mathbb{E} A _x^2V\bigr)^2\bigr). \end{split} \end{align} \end{proposition} \begin{remark} The functions \eqref{add:eq2} and the formula \eqref{add:prop1:eq1} also appeared in \cite[Section 14.7]{T11} in a similar manner, where in the exponent of $A$, the author used the random variable $\beta^{-1}\log \cosh(\beta (x+z\sqrt{b-t}))$ instead of $|x+z\sqrt{b-t}|.$ \end{remark} \begin{proposition} \label{add:prop2} We have that \begin{align} \begin{split} \label{add:prop2:eq1} \lim_{t\rightarrow b-}C(t,x)&=1 \end{split} \end{align} and \begin{align*} \liminf_{t\rightarrow b-}C_t(t,x)&\geq\frac{2(m+m')}{3}\Delta(x), \end{align*} where \begin{align}\label{add:prop2:eq3} \Delta(x)&=\frac{2}{\sqrt{2\pi(b-a)}}\frac{e^{-\frac{x^2}{2(b-a)}}}{\mathbb{E} e^{m |x+z \sqrt{b-a}|}}. 
\end{align} \end{proposition} Before we turn to the proof of Propositions \ref{add:prop1} and \ref{add:prop2}, we first gather some fundamental properties of the function $A .$ \begin{lemma} \label{add:lem0} $A $ is the classical solution to the following PDE with boundary condition $A (b,x)=|x|,$ \begin{align}\label{PDE} A _t(t,x)&=-\frac{1}{2}\bigl(A _{xx}(t,x)+m'A _x(t,x)^2\bigr) \end{align} for $(t,x)\in[a,b)\times\mathbb{R}.$ In addition, \begin{align} \begin{split}\label{add:lem0:eq1} &|A _x(t,x)|\leq 1,\,\,(t,x)\in [a,b)\times\mathbb{R}, \end{split}\\ \begin{split}\label{add:lem0:eq2} &\lim_{t\rightarrow b-}A _x(t,x)=\mbox{\rm sign}(x),\,\,\forall x\in \mathbb{R}\setminus \{0\}, \end{split}\\ \begin{split} \label{add:lem0:eq3} &\lim_{t\rightarrow b-}\mathbb{E} A _x^{2k}V=1,\,\,\forall \; k\geq 1, \forall \; 0<m<m', \end{split} \end{align} where $\mbox{sign}(x)=1$ if $x>0$ and $=-1$ if $x<0.$ \end{lemma} \begin{proof} Define $$ g(t,x)=e^{\frac{(b-t){m'}^2}{2}+m'x}\Phi \Bigl(m'\sqrt{b-t}+\frac{x}{\sqrt{b-t}}\Bigr), $$ where $\Phi$ is the cumulative distribution function of a standard normal random variable. Note that a direct computation gives \begin{align*} \mathbb{E} e^{m'|x+z\sqrt{b-t}|}&=g(t,x)+g(t,-x). \end{align*} Thus, \begin{align*} A (t,x)=\frac{1}{m'}\log\bigl(g(t,x)+g(t,-x)\bigr). \end{align*} From this expression, we can compute that \begin{align} \begin{split}\label{add:lem0:proof:eq1} A _x(t,x)&=\frac{g(t,x)-g(t,-x)}{g(t,x)+g(t,-x)},\\ A _{xx}(t,x) &=m'\Bigl(1-\Bigl(\frac{g(t,x)-g(t,-x)}{g(t,x)+g(t,-x)}\Bigr)^2\Bigr)+2\Gamma(t,x),\\ A _t(t,x)&=-\frac{m'}{2}-\Gamma(t,x), \end{split} \end{align} where \begin{align*} \Gamma(t,x):=\frac{1}{\sqrt{2\pi(b-t)}}\frac{e^{-\frac{x^2}{2(b-t)}}}{g(t,x)+g(t,-x)}. \end{align*} Therefore, these equations together validate \eqref{PDE}. From the first equation, we can also conclude \eqref{add:lem0:eq1} and \eqref{add:lem0:eq2}. 
Note that $\lim_{t\rightarrow b-}V(t,x,y)=V(b,x,y)$ and $\ln V(t,\cdot,\cdot)$ is at most of linear growth. From \eqref{add:lem0:eq1} and \eqref{add:lem0:eq2}, the dominated convergence theorem implies \eqref{add:lem0:eq3}. \end{proof} \begin{proof}[\bf Proof of Proposition \ref{add:prop1}] Recall that the Gaussian integration by parts states that for a standard normal random variable $z$, $\mathbb{E} zf(z)=\mathbb{E} f'(z)$ for all absolutely continuous functions $f$ satisfying that $\ln|f|$ is at most of linear growth at infinity. From this formula and the PDE \eqref{PDE}, the partial derivative of $B$ in $t$ is given by \begin{align*} B_t(t,x)&=\mathbb{E}\bigl(A _t+\frac{z}{2\sqrt{t-a}}A _x\bigr)V\\ &=\mathbb{E}\bigl(-\frac{1}{2}\bigl(A _{xx} + m' A _x ^2\bigr) +\frac{1}{2}\bigl(A _{xx} + m A _x ^2\bigr)\bigr)V\\ &=\frac{m -m'}{2}\mathbb{E} A _{x} ^2V, \end{align*} which gives \eqref{add:prop1:eq1}. To compute the partial derivative of $C$ in $t$, write $C_t=I+II,$ where \begin{align*} I&:=2\mathbb{E} \Bigl(A _{tx}+\frac{z }{2\sqrt{t-a}}A _{xx}\Bigr)A _xV\\ II&:=m \mathbb{E} A _x^2\Bigl(A _t+\frac{z }{2\sqrt{t-a}}A _x-B_t(t,x)\Bigr)V. \end{align*} Here, from \eqref{PDE}, since \begin{align*} A _{tx}&=-\frac{1}{2}\bigl(A _{xxx}+2m'A _{xx}A _x\bigr), \end{align*} using the Gaussian integration by parts again gives \begin{align*} I&=2\mathbb{E} \Bigl(A _{tx}A _x+\frac{1}{2}\bigl(A _{xxx}A _x+A _{xx}^2+m A _{xx}A _x^2\bigr)\Bigr)V\\ &=\mathbb{E} \Bigl(-A _x\bigl(A _{xxx}+2m'A _{xx}A _x\bigr)+\bigl(A _{xxx}A _x+A _{xx}^2+m A _{xx}A _x^2\bigr)\Bigr)V\\ &=\mathbb{E} \bigl(A _{xx}^2+(m -2m')A _{xx}A _x^2\bigr)V. \end{align*} In addition, from \eqref{PDE}, \begin{align*} II&=m \mathbb{E} \Bigl(-\frac{1}{2}\bigl(A _{xx}A _x^2+m'A _x^4\bigr)+\frac{1}{2}\bigl(3A _{xx}A _x^2+m A _x^4\bigr)-A _x^2B_t(t,x)\Bigr)V\\ &=m \mathbb{E} A _{xx}A _x^2V+\frac{(m -m')m }{2}\bigl(\mathbb{E} A _x^4V-\bigl(\mathbb{E} A _x^2V\bigr)^2\bigr). 
\end{align*} From these, \eqref{add:prop1:eq2} follows.\end{proof} To handle the limits in Proposition \ref{add:prop2}, we need two lemmas. \begin{lemma}\label{add:lem1} For any odd $k\geq 1,$ there exists a constant $K$ independent of $t$ such that \begin{align} \label{add:lem1:eq1} \mathbb{E} A _{x} ^{k-1}A _{xx} V\leq \frac{Ke^{m |x|}}{\sqrt{t-a}} \end{align} for all $t\in[a,b)$ and $x\in \mathbb{R}$. Moreover, \begin{align}\label{add:lem1:eq2} \lim_{t\rightarrow b-}\mathbb{E} A _{x} ^{k-1}A _{xx}V &=\frac{1}{k}\Delta(x), \end{align} where $\Delta(x)$ is defined in Proposition \ref{add:prop2}. \end{lemma} \begin{proof} Define \begin{align*} D(t,x)&=\mathbb{E} z A _x ^{k}(t,x+z\sqrt{t-a})V(t,x,x+z\sqrt{t-a}) . \end{align*} Note that $|A _x(t,x)|\leq 1$ and $B(t,x) \geq 0$. We have \begin{align*} V(t,x,y)&=e^{m (A (t,y)-B(t,x))}\leq e^{mA (t,0)+m|y|}. \end{align*} Using the Gaussian integration by parts, we can write \begin{align}\label{add:lem1:proof:eq1} D(t,x)&=\sqrt{t-a}\mathbb{E} \bigr(kA _{x} ^{k-1}A _{xx} +m A _{x}^{k+1}\bigl)V . \end{align} This and the previous inequality together imply \eqref{add:lem1:eq1} since \begin{align*} k\sqrt{t-a}\mathbb{E} A _{x} ^{k-1}A _{xx}V &\leq D(t,x)\leq e^{m A (a,0)+m |x|}\mathbb{E} |z |e^{m |z |\sqrt{b-a}}, \end{align*} where the first inequality used the fact that $k+1$ is even. Next, we verify \eqref{add:lem1:eq2}. Note that from \eqref{add:lem0:eq1} and \eqref{add:lem0:eq2}, the dominated convergence theorem implies \begin{align*} \lim_{t\rightarrow b-}D(t,x)&=\frac{\mathbb{E} z \mbox{sign}(x+z \sqrt{b-a})e^{m |x+z \sqrt{b-a}|}}{\mathbb{E} e^{m |x+z \sqrt{b-a}|}}=\sqrt{b-a}\bigl(\Delta(x)+m \bigr), \end{align*} where the second equation used the fact that \begin{align}\label{add:lem4} \mathbb{E} z \mbox{sign}(x+z \sqrt{b-a})e^{m |x+z \sqrt{b-a}|}&=\frac{2}{\sqrt{2\pi}}e^{-\frac{x^2}{2(b-a)}}+m \sqrt{b-a}\mathbb{E} e^{m |x+z \sqrt{b-a}|}. \end{align} See the verification of this equation in the appendix. 
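Formally, \eqref{add:lem4} can be seen by one more Gaussian integration by parts; the appendix contains a rigorous verification. Since $\frac{d}{dz}\mbox{sign}(x+z\sqrt{b-a})=2\sqrt{b-a}\,\delta(x+z\sqrt{b-a})$ in the distributional sense and $\mbox{sign}(x+z\sqrt{b-a})^2=1$, \begin{align*} \mathbb{E} z \mbox{sign}(x+z \sqrt{b-a})e^{m |x+z \sqrt{b-a}|}&=2\sqrt{b-a}\,\mathbb{E}\,\delta(x+z\sqrt{b-a})e^{m|x+z\sqrt{b-a}|}+m \sqrt{b-a}\,\mathbb{E} e^{m |x+z \sqrt{b-a}|}\\ &=\frac{2}{\sqrt{2\pi}}e^{-\frac{x^2}{2(b-a)}}+m \sqrt{b-a}\,\mathbb{E} e^{m |x+z \sqrt{b-a}|}, \end{align*} where the delta term was evaluated using $\mathbb{E}\,\delta(x+z\sqrt{b-a})=\frac{1}{\sqrt{2\pi(b-a)}}e^{-\frac{x^2}{2(b-a)}}$ and $e^{m|0|}=1$.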
In addition, since $k+1$ is even, \eqref{add:lem0:eq3} yields \begin{align*} \lim_{t\rightarrow b-}\mathbb{E} A _{x}^{k+1}V &=1. \end{align*} Thus, from \eqref{add:lem1:proof:eq1} and the last two limits, \begin{align*} \Delta(x)\sqrt{b-a}+m \sqrt{b-a}&=\lim_{t\rightarrow b-}D(t,x)=\sqrt{b-a}\bigl(k\lim_{t\rightarrow b-}\mathbb{E} A _{x} ^{k-1}A _{xx}V +m \bigr), \end{align*} from which \eqref{add:lem1:eq2} follows. \end{proof} \begin{lemma} \label{add:lem3} We have that \begin{align*} \liminf_{t\rightarrow b-}\mathbb{E} A _{xx}^2V&\geq \frac{4m'}{3}\Delta(x). \end{align*} \end{lemma} \begin{proof} Recall the middle equation of \eqref{add:lem0:proof:eq1}. We see that \begin{align} \label{add:lem2:eq1} A _{xx}&=m'(1-A _x^2)+2\Gamma, \end{align} on $[a,b)\times\mathbb{R}.$ Here $\Gamma=\Gamma(t, x+z\sqrt{t-a})$ as usual. Using \eqref{add:lem0:eq3}, \eqref{add:lem2:eq1}, and Lemma \ref{add:lem1} with $k=1$ gives \begin{align*} \lim_{t\rightarrow b-}\mathbb{E}\Gamma V&=\frac12\Delta(x). \end{align*} Also multiplying both sides of \eqref{add:lem2:eq1} by $A _x^2$ and applying \eqref{add:lem0:eq3} and Lemma \ref{add:lem1} with $k=3$ yield \begin{align*} \lim_{t\rightarrow b-}\mathbb{E} A_{x}^{2}\Gamma V= \frac{1}{6}\Delta(x). \end{align*} From \eqref{add:lem2:eq1}, since \begin{align*} A _{xx}^2&=\bigl(m'(1-A _x^2)+2\Gamma\bigr)^2\\ &\geq [m'(1-A _x^2)]^2+4m' (1-A _x^2)\Gamma, \end{align*} the announced result follows by the last two limits. \end{proof} \begin{proof}[\bf Proof of Proposition \ref{add:prop2}] The statement \eqref{add:prop2:eq1} follows from \eqref{add:eq2} and \eqref{add:lem0:eq3}.
From \eqref{add:prop1:eq2}, Lemma \ref{add:lem1}, and Lemma \ref{add:lem3}, \begin{align*} \liminf_{t\rightarrow b-}C_t(t,x)&\geq \liminf_{t\rightarrow b-}\mathbb{E} \bigl(A _{xx}^2+2(m-m')A _{xx}A _x^2\bigr)V\\ &\geq \liminf_{t\rightarrow b-}\mathbb{E} A _{xx}^2V+2(m-m')\lim_{t\rightarrow b-}\mathbb{E} A _{xx}A _x^2 V\\ &=\frac{4m'}{3}\Delta(x)+\frac{2(m-m')}{3}\Delta(x)\\ &=\frac{2(m+m')}{3}\Delta(x), \end{align*} where the first inequality used \eqref{add:lem0:eq3}. \end{proof} \subsection{Proof of Theorem \ref{thm3}} Recall the sequences $(q_i)_{0\leq i\leq n+2}$ and $(m_i)_{0\leq i\leq n+1}$ from \eqref{eq2} and \eqref{eq3}. Recall the quantities $m,m',a,b$ and the functions $A,B,C,V$ from \eqref{add:eq2}. From now on, we take \begin{align*} m&=m_n,\,\,m'=m_{n+1},\\ a&=\xi'(q_n),\,\,b=\xi'(1), \end{align*} and let \begin{align*} \hat{A}(q,x)&=A(\xi'(q),x),\\ \hat{B}(q,x)&=B(\xi'(q),x),\\ \hat{C}(q,x)&=C(\xi'(q),x),\\ \hat{V}(q,x,y)&=V(\xi'(q),x,y). \end{align*} For $0\leq i\leq n$, set \[ W_i=\exp m_i(Y_{i+1}-Y_i). \] Denote \begin{align*} Z&=h+\sum_{j=0}^{n-1}z_j\sqrt{\xi'(q_{j+1})-\xi'(q_j)}, \end{align*} and \begin{align*} \phi(q)&=\mathbb{E} W_0\cdots W_{n-1}\hat{C}(q,Z). 
\end{align*} \begin{lemma}\label{lem2} We have that \begin{align} \label{lem2:eq1} \partial_q\mathcal{P}(\gamma_q)=\frac{\xi''(q)}{2}(m_{n+1}-m_n)\bigl(q-\phi(q)\bigr) \end{align} and \begin{align} \begin{split}\label{lem2:eq2} \phi'(q)&=\frac{\xi''(q)(m_{n}-m_{n+1})}{2}\sum_{i=0}^{n-1}m_i\mathbb{E}\Bigl[ W_0\cdots W_i\bigl(\mathbb{E}_{i+1}\bigl[W_{i+1}\cdots W_{n-1}\hat{C}(q,Z)\bigr]\bigr)^2\Bigr]\\ &-\frac{\xi''(q)(m_{n}-m_{n+1})}{2}\sum_{i=0}^{n-1}m_i\mathbb{E} \Bigl[W_0\cdots W_{i}\mathbb{E}_{z_i}\bigl[W_i\bigl(\mathbb{E}_{i+1}\bigl[W_{i+1}\cdots W_{n-1}\hat{C}(q,Z)\bigr]\bigr)^2\bigr]\Bigr]\\ &+\mathbb{E} W_0\cdots W_{n-1}\hat{C}_q(q,Z), \end{split} \end{align} where $\mathbb{E}_i$ is the expectation with respect to $z_i,\ldots,z_{n-1}$ and $\hat{C}_q$ is the partial derivative with respect to $q.$ \end{lemma} \begin{proof} Observe that for $0\leq i\leq n-1$, \begin{align*} \partial_qY_i&=\mathbb{E}_{z_i} W_i \partial_q Y_{i+1}. \end{align*} An induction argument yields \begin{align*} \partial_qY_i&=\mathbb{E}_i W_i\cdots W_{n-1}\partial_qY_{n} \end{align*} for $0\leq i\leq n-1$. Since $Y_{n}=\hat{B}(q,Z),$ the equation \eqref{add:prop1:eq1} leads to \begin{align}\label{add:eq3} \partial_qY_i&=\frac{\xi''(q)(m_{n}-m_{n+1})}{2}\mathbb{E}_i W_i\cdots W_{n-1}\hat{C}(q,Z). \end{align} From \eqref{e:pxgaq}, since \begin{align*} \partial_q\mathcal{P}(\gamma_q)&=\partial_qY_0+\frac{q \xi''(q)}{2}(m_{n+1}-m_n), \end{align*} this and \eqref{add:eq3} with $i=0$ yield \eqref{lem2:eq1}. 
On the other hand, for $0\leq i\leq n-1,$ from \eqref{add:eq3}, \begin{align*} \partial_qW_i&=m_i\bigl(\partial_qY_{i+1}-\partial_qY_i\bigr)W_i\\ &=\frac{\xi''(q)}{2}(m_{n}-m_{n+1})m_iW_i\bigl(\mathbb{E}_{i+1}W_{i+1}\cdots W_{n-1}\hat{C}(q,Z)-\mathbb{E}_iW_i\cdots W_{n-1}\hat{C}(q,Z)\bigr) \end{align*} where $\mathbb{E}_{i+1}W_{i+1}\cdots W_{n-1}\hat{C}(q,Z)=\hat{C}(q,Z)$ if $i=n-1.$ Finally, since \begin{align*} \phi'(q)&=\sum_{i=0}^{n-1}\mathbb{E} W_0\cdots W_{i-1}(\partial_qW_i)W_{i+1}\cdots W_{n-1}\hat{C}(q,Z)+\mathbb{E} W_0\cdots W_{n-1}\hat{C}_q(q,Z), \end{align*} plugging the last equation into this derivative yields \eqref{lem2:eq2}. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{thm3}] Recall $\phi'(q)$ from \eqref{lem2:eq2}. Let $W_0',\ldots, W_{n-1}'$ be $W_0,\ldots, W_{n-1}$ evaluated at $q=1.$ Note that $\mathbb{E}_{z_i}W_i=1$ for all $0\leq i\leq n-1$ and $|\hat{C}|\leq 1$ by \eqref{add:lem0:eq1}. Applying Fatou's lemma and conditional expectation yields that the first two lines of \eqref{lem2:eq2} cancel each other and, as a result of Proposition \ref{add:prop2}, \begin{align} \begin{split}\label{thm3:proof:eq1} \liminf_{q\rightarrow 1-}\phi'(q)&=\liminf_{q\rightarrow 1-}\mathbb{E} W_0\cdots W_{n-1}\hat{C}_q(q,Z)\\ &\geq \mathbb{E} W_0'\cdots W_{n-1}'\liminf_{q\rightarrow 1-}\hat{C}_q(q,Z)\\ &\geq \frac{2\xi''(1)(m_n+m_{n+1})}{3}\mathbb{E} W_0'\cdots W_{n-1}'\Delta(Z), \end{split} \end{align} where $\Delta(Z)$ is defined through \eqref{add:prop2:eq3} with $a=\xi'(q_n),$ $b=\xi'(1)$, and $m=m_{n}.$ We emphasize that although we do not know whether $\hat{C}_q$ is nonnegative (see \eqref{add:prop1:eq2}), the use of Fatou's lemma remains justifiable.
Indeed, note that $|\hat{A}_x|\leq 1$, $\mathbb{E}_{z_n}\hat{V}=1$, and by \eqref{add:lem1:eq1}, \begin{align*} 0&\leq \mathbb{E}_{z_n}\hat{A}_{xx}(q,Z)\hat{A}_{x}^2(q,Z)\hat{V}(q,Z,Z+z_n\sqrt{\xi'(q)-\xi'(q_n)})\leq \frac{Ke^{m_{n} |Z|}}{\sqrt{\xi'(q)-\xi'(q_n)}}, \end{align*} where $K$ is a constant independent of $q.$ From \eqref{add:prop1:eq2}, $$ \hat{C}_q(q,Z)\geq -(m_{n+1}-m_{n})\xi''(q)\Bigl(\frac{2Ke^{m_{n} |Z|}}{\sqrt{\xi'(q)-\xi'(q_n)}}+\frac{m_n}{2}\Bigr). $$ In addition, it can be shown that $\ln\bigl( W_0W_1\cdots W_{n-1}\bigr)$ is at most of linear growth in $z_0,\ldots,z_{n-1}$, which follows from the fact that each $Y_i$ is uniformly Lipschitz in the variable $z_i$ for all $q\in [q_n,1].$ This and the last inequality together validate \eqref{thm3:proof:eq1}. Next, from \eqref{thm3:proof:eq1}, we can choose $m_{n+1}$ large enough at the beginning such that $$ \liminf_{q\rightarrow 1-}\phi'(q)>1. $$ Note that $\lim_{q\rightarrow 1-}\phi(q)=1$. From \eqref{lem2:eq1}, the above inequality implies that $\partial_q\mathcal{P}(\gamma_q)<0$ for our choice of $m_{n+1}$ as long as $q$ is sufficiently close to $1.$ This completes our proof. \end{proof} \begin{remark} The validity of \eqref{thm3:proof:eq1} and Theorem \ref{thm3} relies on the positive lower bound of $C_t$ coming from Proposition \ref{add:prop2}. When one looks at \eqref{add:prop1:eq2} together with the fact $\lim_{t\to b-}A_x^2=1$, it is tempting to think that $C_t$ is actually negative since $m'=m_{n+1}$ is taken to be large. As a result, Proposition \ref{add:prop2} may look counterintuitive. The remedy for this puzzle is the fact that $A_{xx}$ is singular in the limit $t\to b-$ and the dominated convergence theorem does not apply. These ``singular expectations'' are one of the major difficulties in proving Theorem~\ref{thm3}. They are handled by the exact computations coming from Lemmas \ref{add:lem1} and \ref{add:lem3}.
\end{remark} \section{Proof of Theorems \ref{th:main} and \ref{thm:finiteTemp}} \begin{proof}[\bf Proof of Theorem \ref{th:main}] We prove Theorem \ref{th:main} by contradiction. First, note that it is known by \cite[Theorem 6]{CHL16} that the Parisi measure $\gamma_P$ is not constantly zero. Suppose that the support of $\gamma_P$ consists of only $n\geq 1$ points. Then from Theorem \ref{thm3}, we can lower the value of the Parisi functional by a perturbation of $\gamma_P$ at $1$ defined in \eqref{pert}. This leads to a contradiction of the minimality of $\mathcal{P}(\gamma_P).$ Hence, the support of $\gamma_P$ must contain infinitely many points. \end{proof} \begin{remark} The statement of Theorem \ref{th:main} can be strengthened to the fact that the Parisi measure $\gamma_P$ cannot be ``flat'' near 1, i.e., $\gamma_P(t)<\gamma_P(1-)$ for any $0<t<1$. In fact, if this is not true, then $\gamma_P$ is a constant function on $[a,1)$ for some $a.$ One can then apply essentially the same argument as in Theorem~\ref{thm3} to lower the Parisi functional. The only difference is that since $\gamma_P$ is not necessarily a step function on $[0,a),$ the term $W_1\cdots W_{n-1}$ in Lemma \ref{lem2} has to be replaced by a continuous modification using the optimal stochastic control representation for $\Psi_\gamma$ in \cite{CHL16}. We omit the details of the argument. \end{remark} \begin{remark}\rm Our argument for Theorem \ref{th:main} does not rely on the uniqueness of the Parisi measure. All we need is the existence of \emph{a} Parisi measure, which was proved in \cite{AC16}. \end{remark} \begin{proof}[\bf Proof of Theorem \ref{thm:finiteTemp}] Recall the Parisi measure $\alpha_{P,\beta}$ for the free energy from \eqref{eq0}.
We first claim that $(\beta\alpha_{P,\beta})_{\beta>0}$ converges to $\gamma_P$ vaguely on $[0,1).$ Suppose there exists an infinite sequence $(\beta_l)_{l\geq 1}$ such that $(\beta_l\alpha_{P,\beta_l})_{l\geq 1}$ does not converge to $\gamma_P$ vaguely on $[0,1).$ By an argument identical to that of \cite[Equation (16)]{AC16}, we can further pass to a subsequence of $(\beta_l\alpha_{P,\beta_l})_{l\geq 1}$ such that it vaguely converges to some $\gamma$ on $[0,1).$ To ease our notation, we use $(\beta_l\alpha_{P,\beta_l})_{l\geq 1}$ to stand for this subsequence. It was established in \cite[Lemma 3]{AC16} that \begin{align*} \lim_{l\rightarrow\infty}F_{\beta_l}\geq \mathcal{P}(\gamma). \end{align*} From this, \begin{align*} \mathcal{P}(\gamma_P)=\lim_{\beta\rightarrow\infty}F_\beta\geq \mathcal{P}(\gamma). \end{align*} From the uniqueness of $\gamma_P$ established in \cite[Theorem 4]{CHL16}, it follows that $\gamma_P=\gamma,$ a contradiction. Thus, $(\beta\alpha_{P,\beta})_{\beta>0}$ converges to $\gamma_P$ vaguely on $[0,1).$ This completes the proof of our claim. Next, if Theorem \ref{thm:finiteTemp} does not hold, then from the above claim, there exists some $k\geq 1$ such that the support of $\alpha_{P,\beta}$ contains at most $k$ points for all sufficiently large $\beta$. This implies that the support of $\gamma_P$ contains at most $k$ points. This contradicts Theorem \ref{th:main}. \end{proof}
\section{\label{sec:level1}INTRODUCTION} The discovery of superconductivity up to 56 K in iron-based arsenides \cite{Hosono,Chen-Sm,WNL-Ce,Ren-Pr,Ren-Nd,WHH,Wang-Th} has aroused great interest in the condensed matter physics community. The undoped parent compounds adopt a tetragonal structure at room temperature, which consists of [Fe$_2$As$_2$]$^{2-}$ layers separated alternatively by [Ln$_{2}$O$_{2}$]$^{2+}$ \cite{Johnson&Jeitschko,Quebe} or $A^{2+}$ ($A$=Ca, Sr, Ba, Eu) layers.\cite{Pfisterer1980,Pfisterer1983,EuFeAs,CaFeAs ChenXH} At low temperatures, the parent compounds undergo a structural phase transition from tetragonal to orthorhombic, accompanied\cite{BaFe2As2} or followed\cite{DaiPC Neutron} by an SDW-like antiferromagnetic (AFM) phase transition. Doping with electrons or holes in the parent compounds suppresses the phase transitions and induces high-temperature superconductivity. This intimate connection between superconductivity and magnetism suggests unconventional superconductivity in the iron-based arsenides.\cite{Kotliar,Cao,Singh} Very recently, superconductivity has been observed in LaFe$_{1-x}$\emph{M}$_{x}$AsO \cite{Co-doping Sefat,Co-doping Cao,Ni-doping Cao} and BaFe$_{2-x}$\emph{M}$_{x}$As$_{2}$\cite{Co-doping Ba,Ni-doping Ba} ($M$=Co and Ni). These findings are quite remarkable, as they challenge the common wisdom that direct doping into the superconducting-active blocks generally destroys superconductivity. Indeed, in the high-\emph{T}$_{c}$ cuprates, Ni substitution for Cu in the CuO$_{2}$ planes drastically reduces \emph{T}$_{c}$. Hence these experimental results provide clues to the superconducting mechanism of the iron-based arsenide superconductors.
Currently, an itinerant scenario within the rigid-band model is favored to understand this unusual doping-induced superconductivity.\cite{Co-doping SrFe2As2} EuFe$_2$As$_2$ is a unique member in the ternary iron arsenide family due to the fact that Eu$^{2+}$ ions carry local moments, which order antiferromagnetically below 20 K.\cite{EuFeAs,EuFeAs Ren,EuFeAs jeevan} Apart from this AFM transition, the physical properties of EuFe$_{2}$As$_{2}$ were found to be quite similar to those of its isostructural compounds BaFe$_{2}$As$_{2}$ and SrFe$_{2}$As$_{2}$,\cite{EuFeAs Ren} both of which become superconducting upon appropriate doping\cite{BaK rotter,SrK wang,SrK zhu}. It was then expected that EuFe$_{2}$As$_{2}$ could be tuned superconducting through similar doping strategies. Indeed, superconductivity with $T_c$ over 30 K has been observed in (Eu,K)Fe$_{2}$As$_{2}$\cite{EuK} and (Eu,Na)Fe$_{2}$As$_{2}$\cite{EuNa}. Doping at the Fe site in EuFe$_{2}$As$_{2}$ has the advantage of possibly inducing superconductivity while leaving the magnetic Eu$^{2+}$ layers intact, which could provide insight into the interplay between superconductivity and magnetism. Here we report a systematic study of the physical properties of the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ (0$\leq$\emph{x}$\leq$0.2) system. It was found that both the SDW ordering of Fe moments and the AFM ordering of Eu$^{2+}$ moments were suppressed by substituting Fe with Ni. Ferromagnetic (FM) ordering of Eu$^{2+}$ moments emerges for \emph{x}$\geq$0.06. While the SDW transition is completely suppressed for \emph{x}$\geq$0.16, no superconducting transition was observed down to 2 K in EuFe$_{2-x}$Ni$_{x}$As$_{2}$, in contrast with the superconductivity in BaFe$_{2-x}$Ni$_{x}$As$_{2}$\cite{Ni-doping Ba}. Our results suggest a strong coupling between the magnetism of Eu$^{2+}$ ions and the conduction electrons of [Fe$_{2-x}$Ni$_{x}$As$_{2}$]$^{2-}$ layers.
\section{\label{sec:level1}EXPERIMENT} Polycrystalline samples of EuFe$_{2-x}$Ni$_{x}$As$_{2}$ ($x$= 0, 0.03, 0.06, 0.09, 0.12, 0.16 and 0.2) were synthesized by solid state reaction with EuAs, Fe$_{2}$As and Ni$_{2}$As. EuAs was presynthesized by reacting Eu grains and As powders in an evacuated silica tube at 873 K for 10 h then 1123 K for 36 h. Fe$_{2}$As was presynthesized by reacting Fe powders and As powders at 873 K for 10 h and 1173 K for 2.5 h. Ni$_{2}$As was presynthesized by reacting Ni powders and As powders at 873 K for 10 h then 1073 K for another 10 h. The powders of EuAs, Fe$_{2}$As and Ni$_{2}$As were weighed according to the stoichiometric ratio, thoroughly ground and pressed into pellets in an argon-filled glove-box. The pellets were sealed in evacuated quartz tubes, annealed at 1173 K for 24 h and furnace-cooled to room temperature. \begin{figure} \includegraphics[width=7.5cm]{Fig1a.eps} \includegraphics[width=7.5cm]{Fig1b.eps} \includegraphics[width=7.5cm]{Fig1c.eps} \caption{(Color online) (a) X-ray powder diffraction pattern at room temperature and the Rietveld refinement profile for the EuFe$_{1.97}$Ni$_{0.03}$As$_{2}$ sample. Eu$_{2}$O$_{3}$ ($\sim$1.4\%) and Fe$_{0.985}$Ni$_{0.015}$As ($\sim$6\%) are also included in the refinement. (b) and (c) represent the (008) and (220) diffraction peaks for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples, respectively. (d) Refined lattice parameters plotted as functions of Ni content \emph{x}.} \end{figure} Powder x-ray diffraction (XRD) was performed at room temperature using a D/Max-rA diffractometer with Cu-K$_{\alpha}$ radiation and a graphite monochromator. The data were collected in a step-scan mode. The structural refinements were performed using the programme RIETAN 2000.\cite{Izumi} The electrical resistivity was measured using a standard four-probe method. The measurements of dc magnetic properties were performed on a Quantum Design Magnetic Property Measurement System (MPMS-5).
Thermopower measurements were carried out in a cryogenic refrigerator down to 17 K by a steady-state technique with a temperature gradient $\sim$ 1 K/cm. \section{\label{sec:level1}RESULTS AND DISCUSSION} The crystal structures of all the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ ($x$= 0, 0.03, 0.06, 0.09, 0.12, 0.16, 0.2) samples at room temperature were refined with the tetragonal ThCr$_{2}$Si$_{2}$-type structure. An example of the refinement profile for EuFe$_{1.97}$Ni$_{0.03}$As$_{2}$ is shown in Fig. 1(a). The weighted pattern factor and goodness of fit are $R_{wp}$ $\sim$ 11.2\% and \emph{S}$\sim$1.6, indicating a fairly good refinement. Minor impurity phases of Eu$_{2}$O$_{3}$ and Fe$_{0.985}$Ni$_{0.015}$As are also identified. In addition, the refined occupancies are close to the nominal values. With increasing Ni content, the (008) diffraction peak shifts towards higher angles (Fig. 1(b)) while the (220) diffraction peak shifts towards lower angles (Fig. 1(c)). This observation is consistent with the result from the Rietveld refinements, which show that the \emph{a}-axis increases slightly while the \emph{c}-axis shrinks remarkably with increasing Ni content, as shown in Fig. 1(d). \begin{figure} \includegraphics[width=8cm]{Fig2.eps} \caption{(Color online) Temperature dependence of resistivity for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples. The inset shows derivative plots for \emph{x}=0.16 and 0.2 below 40 K. The anomalies are marked by arrows.} \end{figure} Figure 2 shows the temperature dependence of normalized resistivity ($\rho$) for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples. The $\rho$ value at 300 K decreases with increasing Ni content, which is probably due to the increase in carrier concentration induced by Ni doping. For the parent compound, $\rho$ drops rapidly below 195 K and shows a kink at $\sim$20 K.
The former is associated with an SDW transition of Fe moments while the latter is due to the AFM ordering of Eu$^{2+}$ moments.\cite{EuFeAs Ren} On Ni doping, the anomaly in $\rho$ associated with the SDW transition appears as an upturn followed by a hump. This behavior resembles that observed in BaFe$_{2-x}$Ni$_{x}$As$_{2}$ crystals.\cite{Ni-doping Ba} With increasing Ni content \emph{x}, $T_{\text{SDW}}$ shifts to lower temperatures. For \emph{x} $\geq$ 0.16 the SDW transition is completely suppressed; however, no superconducting transition was observed down to the lowest temperature in the present study. Instead, two kinks in $\rho$ at low temperatures are present, which can be seen more clearly in the derivative plots as shown in the inset of Fig. 2. It is probable that they share the same origin as that of undoped EuFe$_{2}$As$_{2}$ under magnetic fields, which is related to the different magnetic states of Eu$^{2+}$ moments\cite{EuFeAs meta}. \begin{figure} \includegraphics[width=8cm]{Fig3.eps} \caption{(Color online) Temperature dependence of thermopower for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples. The inset shows the thermopower value at 300 K plotted as a function of Ni content \emph{x}.} \end{figure} Figure 3 shows the temperature dependence of thermopower ($S$) for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples. The sign reversal behavior, which manifests a multi-band scenario, is observed for \emph{x}=0 and 0.03. The value of $S$ for the other samples is negative. With increasing Ni content, the room-temperature thermopower is pushed toward more negative values, as shown in the inset of Fig. 3. For a simple two-band model with electrons and holes, $S$ can be expressed as \begin{equation} S=\frac{n_{h}\mu_{h}|S_{h}|-n_{e}\mu_{e}|S_{e}|}{n_{h}\mu_{h}+n_{e}\mu_{e}}, \end{equation} where $n_{h(e)}$, $\mu_{h(e)}$ and $|S_{h(e)}|$ denote the concentration, mobility and thermopower contribution of the holes (electrons), respectively.
Therefore, the increase in $|S|$ suggests that Ni doping increases the electron concentration. Meanwhile, the anomaly due to the SDW transition is suppressed to lower temperatures and is no longer visible for \emph{x}=0.16, in agreement with the above resistivity measurements. Recently, it was found that there exists enhanced thermopower in the superconducting window of the SmFe$_{1-x}$Co$_x$AsO system.\cite{Co-doping Cao} In the present system, no such enhancement was observed, which may be related to the absence of superconductivity. \begin{figure} \includegraphics[width=7.5cm]{Fig4a.eps} \includegraphics[width=7.5cm]{Fig4b.eps} \caption{(Color online) (a) Temperature dependence of zero-field cooling (ZFC) (open symbols) and field-cooling (FC) (solid symbols) magnetic susceptibility for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples. (b) Field dependence of magnetization at 2 K for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples. The inset shows an expanded plot of the low field region for \emph{x}=0.03 and 0.16.} \end{figure} Figure 4(a) shows the temperature dependence of magnetic susceptibility ($\chi$) for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples below 50 K under an applied field of 20 Oe. The $\chi$ data for 25 K$\leq$\emph{T}$\leq$180 K with \emph{x}$\geq$0.03 basically fall onto the same curve, which can be well fitted by the modified Curie-Weiss law, \begin{equation} \chi=\chi_0+\frac{C}{T-\theta}, \end{equation} where $\chi_0$ denotes the temperature-independent term, $C$ the Curie-Weiss constant and $\theta$ the paramagnetic Curie temperature. The refined parameters are \emph{C}= 8.0(1) emu$\cdot$K/mol and $\theta$=19(1) K. The calculated effective moment $P_{eff}$ is $\sim$8 $\mu_{B}$ per formula unit, close to the theoretical value of 7.94 $\mu_{B}$ for a free Eu$^{2+}$ ion. It is evident that the valence state of Eu ions remains +2 and that ferromagnetic interaction between Eu$^{2+}$ moments dominates up to 10\% Ni doping.
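The quoted effective moment follows from the fitted Curie constant through the standard relation (in CGS units, with \emph{C} in emu$\cdot$K/mol), \begin{align*} P_{eff}=\sqrt{\frac{3k_{B}C}{N_{A}\mu_{B}^{2}}}\,\mu_{B}\approx\sqrt{8C}\,\mu_{B}=\sqrt{8\times 8.0}\,\mu_{B}\approx 8.0\,\mu_{B}, \end{align*} which is to be compared with the free-ion value $g\sqrt{J(J+1)}\mu_{B}$=7.94 $\mu_{B}$ for Eu$^{2+}$ ($g$=2, $J$=$S$=7/2).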
The anomaly in susceptibility due to the SDW transition is hardly observed even after subtracting the Curie-Weiss contribution of Eu$^{2+}$ moments. On further cooling, a sharp peak can be observed in both $\chi_{\text{ZFC}}$ and $\chi_{\text{FC}}$ for \emph{x}=0.03 at $\sim$ 19 K, similar to that observed in undoped EuFe$_{2}$As$_{2}$.\cite{EuFeAs Ren} We ascribe this peak to the AFM ordering of Eu$^{2+}$ moments. Upon increasing the Ni content to 0.06, the peak shifts to $\sim$16 K. Surprisingly, for the same sample, a small bifurcation between ZFC and FC curves develops below $\sim$13 K, suggesting the formation of ferromagnetic domains. For \emph{x}$\geq$0.09, an obvious divergence between $\chi_{\text{ZFC}}$ and $\chi_{\text{FC}}$ is seen, suggesting the emergence of a FM ordered state. It is also noted that there exists a broad peak below $T_{\text{Curie}}$ in the ZFC curves for \emph{x}$\geq$0.12. Interestingly, $T_{\text{Curie}}$ and $T_{\text{Peak}}$ coincide with the two aforementioned kinks in $\rho$ at low temperatures, respectively. In EuFe$_{2}$As$_{2}$ single crystals, we have observed a metamagnetic phase with applied field perpendicular to the $c$-axis\cite{EuFeAs meta}. Thus we speculate that $T_{\text{Peak}}$ may be related to a successive metamagnetic transition. Figure 4(b) shows the field dependence of magnetization for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ samples at 2 K. For \emph{x}=0.03, a slope change in the \emph{M-H} curve can be seen clearly at $\mu_{0}H$=0.55 T, which is ascribed to a field-induced metamagnetic transition.\cite{EuFeAs Ren,EuFeAs meta,EuFeAs Chen} Moreover, there is no hysteresis loop in the low field region, consistent with the AFM ground state of Eu$^{2+}$ moments. For the other samples, however, \emph{M} increases steeply with initially increasing \emph{H}. In addition, small hysteresis loops are observed.
These results are in agreement with the above susceptibility measurements, suggesting that the Eu$^{2+}$ moments are FM ordered for \emph{x}$\geq$0.06. It is noted that all the saturated magnetic moments are around 6.3 $\mu_{B}$ per formula unit, which is smaller than the theoretical value of 7 $\mu_{B}$ for a free Eu$^{2+}$ ion. This discrepancy is attributed to the presence of impurity phases, whose magnetic response is much weaker. \begin{figure} \includegraphics[width=8.5cm]{Fig5.eps} \caption{(Color online) Magnetic phase diagram for the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ system (0$\leq$\emph{x}$\leq$0.2).} \end{figure} Our experimental results on the physical properties of the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ system are summarized in the magnetic phase diagram in Fig. 5. The parent compound EuFe$_{2}$As$_{2}$ shows AFM ordering of the Eu$^{2+}$ moments at 20 K as well as SDW ordering of the Fe moments at 195 K. With Ni doping, both orderings are suppressed. On one hand, the SDW transition is gradually suppressed and eventually disappears at $x$=0.16. Nevertheless, no superconductivity was observed down to 2 K. On the other hand, the magnetic ordering of the Eu$^{2+}$ moments changes from AFM to FM at $x$$\approx$0.06. This observation is surprising in view of the AFM ordering of Eu$^{2+}$ moments in both end members, EuFe$_{2}$As$_{2}$ and EuNi$_{2}$As$_{2}$.\cite{EuFeAs Moss} By contrast, $T_{Neel}$ remains nearly unchanged upon 10\% Fe doping in EuNi$_{2}$As$_{2}$.\cite{Ni122} The AFM structure of the Eu$^{2+}$ moments in EuFe$_{2}$As$_{2}$ is proposed to be of \emph{A}-type, \emph{i.e.}, FM coupling for intralayer Eu$^{2+}$ moments and AFM coupling for interlayer Eu$^{2+}$ moments.\cite{EuFeAs Ren,EuFeAs meta,EuFeAs Chen} The distance between nearest Eu$^{2+}$ layers is $\sim$6 {\AA}; hence, direct overlap of the interlayer Eu 4\emph{f} orbitals can be neglected.
Therefore, the AFM exchange between interlayer Eu$^{2+}$ moments is probably ascribed to the carrier-mediated Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction.\cite{EuFeAs Moss} The RKKY exchange coupling $J_{RKKY} \propto -\frac{\alpha\cos\alpha-\sin\alpha}{\alpha^{4}}$, where $\alpha$=2$k_{F}R$, \emph{R} denotes the distance between two magnetic moments and $k_{F}$ the Fermi vector. One can see that $J_{RKKY}$ oscillates between AFM (negative) and FM (positive) as 2$k_{F}R$ varies. Considering the dimensionality of the Fermi surfaces, it is probable that the heavy three-dimensional hole pocket derived from Fe \emph{d}$_{z}$ states\cite{Singh} is responsible for mediating the RKKY interaction. Substitution of Fe with Ni introduces electrons, which results in a decrease of $k_{F}^{z}$. Meanwhile, $R$ is also shortened, as indicated by the reduction of the \emph{c}-axis. Thus the interlayer coupling may be tuned from AFM to FM. On the other hand, the FM interaction within the Eu$^{2+}$ layers persists up to 10\% Ni doping. As a consequence, a FM ordering of the Eu$^{2+}$ moments is established. In contrast, the dominant interaction between Eu$^{2+}$ moments in EuNi$_{2}$As$_{2}$ is antiferromagnetic, as indicated by the negative paramagnetic Curie temperature.\cite{EuFeAs Moss} This may account for the robust AFM ordering of the Eu$^{2+}$ moments upon Fe doping in EuNi$_{2}$As$_{2}$. The clarification of these issues relies on further ARPES as well as neutron diffraction studies. In iron-based arsenides, superconductivity generally emerges as the SDW order is suppressed by carrier doping. As a matter of fact, superconductivity with a maximum $T_{c}$ of $\sim$20 K has been observed in the BaFe$_{2-x}$Ni$_{x}$As$_{2}$ system.\cite{Ni-doping Ba} Thus, the absence of superconductivity in EuFe$_{2-x}$Ni$_{x}$As$_{2}$ may be relevant to the magnetism of the Eu$^{2+}$ ions. The RKKY interaction mentioned above may hinder the Cooper pairing for superconductivity.
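The sign oscillation of $J_{RKKY}$ can be made concrete with a short numerical sketch (illustrative only; the overall prefactor is omitted, and the sample values of $\alpha$ are arbitrary):

```python
import math

def j_rkky(alpha):
    """RKKY exchange ~ -(a*cos a - sin a)/a^4 with a = 2*k_F*R
    (overall positive scale factor omitted)."""
    return -(alpha * math.cos(alpha) - math.sin(alpha)) / alpha**4

# The sign alternates as alpha grows (roots near tan(a) = a), so tuning
# k_F (carrier doping) or R (c-axis contraction) can flip AFM <-> FM.
signs = [j_rkky(a) > 0 for a in (2.0, 5.0, 8.0, 11.0)]
print(signs)  # [True, False, True, False]
```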
Recently, re-entrant superconducting behavior has been observed in a high-pressure study of an EuFe$_{2}$As$_{2}$ crystal.\cite{EuFeAs reentrant} The results suggest that once $T_{c}$ becomes smaller than the magnetic ordering temperature of the Eu$^{2+}$ moments, superconductivity will be completely suppressed. If EuFe$_{2-x}$Ni$_{x}$As$_{2}$ were superconducting, its maximum $T_{c}$ would be $\sim$6 K smaller than that of BaFe$_{2-x}$Ni$_{x}$As$_{2}$ due to the existence of paramagnetic Eu$^{2+}$ ions.\cite{EuFeAs reentrant} The assumed $T_{c}$ is below the Curie temperatures. This could account for the absence of superconductivity in the EuFe$_{2-x}$Ni$_{x}$As$_{2}$ system. \section{\label{sec:level1}Conclusion} In summary, we have systematically studied the transport and magnetic properties of a series of EuFe$_{2-x}$Ni$_{x}$As$_{2}$ polycrystalline samples with 0$\leq$\emph{x}$\leq$0.2. It is found that both the SDW transition associated with the Fe moments and the AFM ordering of the Eu$^{2+}$ moments are suppressed upon Ni doping. Though the SDW transition is completely suppressed for \emph{x}$\geq$0.16, no superconducting transition is observed down to 2 K. Surprisingly, a FM ground state of the Eu$^{2+}$ moments emerges for \emph{x}$\geq$0.06. A detailed magnetic phase diagram is presented and discussed within the RKKY framework. Our results suggest that there exists a strong coupling between the magnetism of the Eu$^{2+}$ ions and the electronic state in the [Fe$_{2-x}$Ni$_{x}$As$_{2}$]$^{2-}$ layers. \begin{acknowledgments} We would like to thank J. H. Dai and Q. Si for helpful discussions. This work is supported by the National Basic Research Program of China (No.2006CB601003 and 2007CB925001) and the PCSIRT of the Ministry of Education of China (IRT0754). \end{acknowledgments}
\section{Introduction} \label{sec:introduction} Multi-Instance-Learning (MIL) is a popular modelling framework for addressing different weakly-supervised problems \cite{babenko2011robust,wu2014milcut,ruiz2014regularized}. In traditional Single-Instance-Learning (SIL), a fully supervised setting is assumed, with the goal of learning a model from a set of feature vectors (instances), each annotated with a target label $y$. By contrast, MIL assumes weak supervision: the training set is formed by bags (sets of instances), and only labels at the bag level are provided. Furthermore, MIL assumes that there exists an underlying relation between the bag label (e.g., of a video) and the labels of its constituent instances (e.g., image frames). In standard Multi-Instance-Classification (MIC) \cite{maron1998framework}, labels are considered binary variables $y \in \{-1,1\}$ and negative bags are assumed to contain only instances with an associated negative label. In contrast, positive bags must contain at least one positive instance. Another MIL assumption is related to the Multi-Instance-Regression (MIR) problem \cite{ray2001multiple}, where $y \in R$ is a real-valued variable and the maximum instance label within the bag is usually assumed to be equal to $y$. Note, however, that none of these assumptions accounts for structure in the bag labels. Yet, this can be important when the bag labels are ordinal, i.e., $y \in \{ 0 \prec ... \prec l \prec L \}$, as in the case of various rating or intensity estimation tasks. In this work, we focus on a novel modelling task to which we refer as Multi-Instance-Ordinal Regression (MIOR). Similar to MIR, in MIOR we assume that the maximum instance ordinal value within a bag is equal to its label. To demonstrate the benefits of the proposed approach to MIOR, we apply it to the task of automatic pain estimation \cite{lucey2011painful}.
Pain monitoring is particularly important in the clinical context, where it can provide an objective measure of the patient's pain level (and, thus, allow for proper treatment) \cite{Aung2015automatic}. The aim is to predict pain intensity levels from facial expressions (in each frame of a video sequence) of a patient experiencing pain. To obtain labelled training data, the pain level is usually manually coded on an ordinal scale from low to high intensity \cite{hjermstad2011studies}. To estimate the pain, several SIL methods have been proposed \cite{rudovic2013automatic,kaltwang2012continuous}. Yet, the main limitation of these approaches is that they require frame-based pain level annotations to train the models, which can be very expensive and time-consuming to obtain. To reduce this effort, MIL approaches have recently been proposed for automatic pain detection \cite{wu2015multi,sikka2013weakly,ruiz2014regularized}. Specifically, a weak label is provided for the whole image sequence (in terms of the maximum pain intensity felt by the patient). Then, a video is considered as a bag, and image frames as instances, where the pain labels are provided per bag. In contrast to per-frame annotations, the bag labels are much easier to obtain, for example, using patients' self-reports or external observers \cite{lucey2011painful}. Yet, existing MIL approaches for the task focus on the MIC setting, i.e., pain intensities are binarized and the model predicts only the presence or absence of pain. Consequently, these approaches are unable to deal with Ordinal Regression problems, and, thus, to estimate different intensity levels of pain -- which is critical for real-time pain monitoring. In this paper, we propose Multi-Instance Dynamic Ordinal Random Fields (MI-DORF) for MIL with ordinal bag labels. We build our approach using the notion of the Hidden Conditional Ordinal Random Fields (HCORF) framework \cite{kim2010hidden} for modeling linear chains of ordinal latent variables.
In contrast to HCORF, which follows the Single-Instance paradigm, the energy function employed in MI-DORF is designed to model the MIOR assumption relating instance and bag labels. In relation to static MIL methods, our MI-DORF also incorporates dynamics within the instances, encoded by transitions between ordinal latent states. This information is useful when instances (frames) in a bag are temporally correlated, as in pain videos. The main contributions of this work can be summarised as follows: \begin{itemize} \renewcommand\labelitemi{$\bullet$} \item To the best of our knowledge, the proposed MI-DORF is the first MIL approach that imposes ordinal structure on the bag labels. The proposed method also incorporates dynamic information that is important when modeling temporal structure in instances within the bags (i.e., image sequences). While modeling the temporal structure has been attempted in \cite{wu2015multi,liu2015video}, there are virtually no works that account for both ordinal and temporal data structures within the MIL framework. \item We introduce an efficient inference method for MI-DORF, which has a computational complexity similar to that of the forward-backward algorithm \cite{barber2012bayesian} used in standard first-order Latent-Dynamic Models (e.g., HCORF). This is despite the fact that we include high-order potentials encoding the Multi-Instance assumption. \item We show on the task of automated pain intensity estimation from the UNBC Shoulder-Pain Database \cite{lucey2011painful} that the proposed MI-DORF significantly outperforms existing related approaches applicable to this task. We show that, due to the modeling of the ordinal and temporal structure in the target data, we can infer instance-level pain intensities that largely correlate with manually obtained frame-based pain levels. Note that we do so by using only the bag labels for learning, which are easy to obtain. To our knowledge, this has not been attempted before.
\end{itemize} \section{Related Work} \label{sec:related} \textbf{Multi-Instance-Learning.} Existing MIC/MIR approaches usually follow the bag-based or instance-based paradigm \cite{amores2013multiple}. In bag-based methods, a feature-vector representation for each bag is first extracted. Then, these representations are used to train standard Single-Instance Classification or Regression methods, used to estimate the bag labels. Examples include Multi-Instance Kernel \cite{gartner2002multi}, MILES \cite{chen2006miles}, MI-Graph \cite{zhou2009multi} and MI-Cluster Regression \cite{wagstaff2008multiple}. The main limitation of these approaches is that the learned models can only make predictions at the bag level. Hence, these methods cannot work in the weakly-supervised setting, where the goal is to predict instance labels (e.g., frame-level pain intensity) from a bag (e.g., a video). In contrast, instance-based methods directly learn classifiers which operate at the instance level. For this, MIL assumptions are incorporated into the model by considering instance labels as latent variables. Examples include Multi-Instance Support Vector Machines \cite{andrews2002support} (MI-SVM), MILBoost \cite{zhang2005multiple}, and Multi-Instance Logistic Regression \cite{hsu2014augmented}. The proposed MI-DORF model follows the instance-based paradigm by treating instance labels as ordinal latent states in a Latent-Dynamic Model. In particular, it follows a similar idea to that in the Multi-Instance Discriminative Markov Networks \cite{hajimirsadeghi2013multiple}. In this approach, the energy function of a Markov Network is defined by using cardinality potentials modelling the relation between bag and instance labels. MI-DORF also makes use of cardinality potentials; however, in contrast to the works described above, it accounts for the ordinal structure at both the bag and instance level, while also accounting for the dynamics in the latter.
\textbf{Latent-Dynamic Models.} Popular methods for sequence classification are Latent-Dynamic Models such as Hidden Conditional Random Fields (HCRFs) \cite{quattoni2007hidden} or Hidden Markov Models (HMMs) \cite{rabiner1986introduction}. These methods are variants of Dynamic Bayesian Networks (DBNs), where a set of latent states is used to model the conditional distribution of observations given the sequence label. In these approaches, dynamic information is modelled by incorporating probabilistic dependence between time-consecutive latent states. MI-DORF builds upon the HCORF framework \cite{kim2010hidden}, which considers latent states as ordinal variables. However, HMMs and HCRF/HCORF follow the SIL paradigm, where the main goal is to predict sequence labels. In contrast, in MI-DORF, we define a novel energy function that encodes the MI relationship between the bag labels and their latent ordinal states. Note also that recent works (e.g., \cite{wu2015multi}, \cite{liu2015video}) extended HMMs and HCRFs, respectively, for MIC. The results reported in these works suggest that modeling dynamics in MIL can be beneficial when bag instances exhibit temporal structure. However, these methods limit their consideration to the case where bag labels are binary and, therefore, are unable to solve the MIOR problem. \textbf{MIL for weakly-supervised pain detection.} Several works attempted pain detection in the context of weakly-supervised MIL. As explained in Sec.~\ref{sec:introduction}, these approaches adopt the MIC framework, where pain intensities are binarized. For instance, \cite{sikka2013weakly} proposed to extract a Bag-of-Words representation from video segments and treat them as bag instances. Then, MILBoosting \cite{zhang2005multiple} was applied to predict sequence labels under the MIC assumption.
Following the bag-based paradigm, \cite{ruiz2014regularized} developed the Regularized Multi-Concept MIL method, capable of discovering different discriminative pain expressions within an image sequence. More recently, \cite{wu2015multi} proposed MI Hidden Markov Models, an adaptation of the standard HMM to the MIL problem. The limitation of these approaches is that they focus on the binary detection problem and, thus, are unable to deal with (ordinal) multi-class problems (i.e., pain intensity estimation). This is successfully attained by the proposed MI-DORF. \section{Multi-Instance Dynamic Ordinal Random Fields (MI-DORF)} \label{sec:model} \subsection{Multi Instance Ordinal Regression (MIOR)} \label{sec:problem_description} In the MIOR weakly-supervised setting, we are provided with a training set $\mathcal{T}=\{(\mathbf{X}_1,y_1),(\mathbf{X}_2,y_2),...,(\mathbf{X}_N,y_N)\}$ formed by pairs of structured inputs $\mathbf{X}\in\mathcal{X}$ and labels $y \in \{ 0 \prec ... \prec l \prec L \}$ belonging to a set of $L$ possible ordinal values. In this work, we focus on the case where $\mathbf{X} = \{ \mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{T} \}$ are temporal sequences of $T$ observations $\mathbf{x} \in R^d$ in a $d$-dimensional space \footnote{The total number of observations $T$ can vary across different sequences.}. Given the training set $\mathcal{T}$, the goal is to learn a model $\mathcal{F}: \mathcal{X} \rightarrow \mathcal{H}$ mapping sequences $\mathbf{X}$ to a structured output $\mathbf{h} \in \mathcal{H}$. Concretely, $\mathbf{h} = \{ h_{1},h_{2},...,h_{T} \}$ is a sequence of variables $h_t \in \{ 0 \prec ... \prec l \prec L \}$ assigning one ordinal value to each observation $\mathbf{x}_t$.
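The relation between a bag and its label under the MIOR assumption can be illustrated with a toy sketch (the per-frame labels here are hypothetical, for illustration only):

```python
def bag_label(instance_labels):
    """MIOR assumption: the bag label equals the maximum
    ordinal value among its instance labels."""
    return max(instance_labels)

# A sequence (bag) of per-frame ordinal pain levels h_1..h_T:
h = [0, 0, 1, 3, 2, 1, 0]
print(bag_label(h))  # 3: the bag carries the peak intensity
```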
In order to learn the model $\mathcal{F}$ from $\mathcal{T}$, MIOR assumes that the maximum ordinal value in $\mathbf{h}_n$ must be equal to the label $y_n$ for all sequences $\mathbf{X}_n$: \begin{equation} \label{eq:miorassumption} \mathcal{F}( \mathbf{X}_n ) = \mathbf{h}_n \hspace{3mm} s.t. \hspace{3mm} y_n = \max_h(\mathbf{h}_n) \hspace{6mm} \forall \hspace{1mm} (\mathbf{X}_n,y_n) \in \mathcal{T} \end{equation} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{MILDORF_INFERENCE} \caption{(a) Graphical representation of the proposed MI-DORF model. Node potentials $\Psi^{N}$ model the compatibility between a given observation $\mathbf{x}_t$ and a latent ordinal value $h_t$. Edge potentials $\Psi^E$ take into account the transition between consecutive latent ordinal states $h_t$ and $h_{t+1}$. Finally, the high-order cardinality potential $\Psi^{M}$ models the MIOR assumption relating all the latent ordinal states $\mathbf{h}$ with the bag label $y$. (b) Equivalent model defined using the auxiliary variables $\zeta_t$ for each latent ordinal state. The use of these auxiliary variables and the redefinition of the node and edge potentials allow us to perform efficient inference over the MI-DORF model (see Sec. \ref{sec:inference}).} \label{fig:mildorf} \vspace{-0.3cm} \end{figure} \subsection{MI-DORF: Model Overview} We model the structured output $\mathbf{h} \in \mathcal{H}$ as a set of ordinal latent variables. We then define the conditional distribution of $y$ given observations $\mathbf{X}$. Formally, $P(y|\mathbf{X};\theta)$ is assumed to follow a Gibbs distribution: \begin{equation} \label{eq:cond_probability} P(y|\mathbf{X};\theta) = \frac{\sum_\mathbf{h}{e^{-\Psi(\mathbf{X},\mathbf{h},y;\theta)}}}{\sum_{y^\prime}\sum_\mathbf{h}{e^{-\Psi(\mathbf{X},\mathbf{h},y^\prime;\theta)}}}, \end{equation} where $\theta$ is the set of the model parameters. As defined in Eq.
\ref{eq:energy_func}, the energy function $\Psi$ defining the Gibbs distribution is composed of the sum of three different types of potentials. An overview of the model is shown in Fig. \ref{fig:mildorf}(a). \begin{equation} \label{eq:energy_func} \Psi(\mathbf{X},\mathbf{h},y;\theta) = \sum_{t=1}^{T} \Psi^{N}(\mathbf{x}_t,h_t;\theta^N) + \sum_{t=1}^{T-1} \Psi^{E}(h_t,h_{t+1};\theta^E) + \Psi^{M}(\mathbf{h},y;\theta^{M}). \end{equation} \subsubsection{MI-DORF: Ordinal node potentials} The node potentials $\Psi^{N}(\mathbf{x},h;\theta^N)$ aim to capture the compatibility between a given observation $\mathbf{x}_t$ and the latent ordinal value $h_t$. Similar to HCORF, they are defined using the ordinal likelihood model \cite{winkelmann2006analysis}: \begin{equation} \label{eq:ordinal_regression} \Psi^{N}(\mathbf{x},h=l;\theta^N)= \log \Bigg( \Phi \bigg( \frac{b_l - \mathbf{\beta}^T \mathbf{x}}{\sigma}\bigg) - \Phi \bigg( \frac{b_{(l-1)} - \mathbf{\beta}^T \mathbf{x}}{\sigma}\bigg) \Bigg), \end{equation} where $\Phi(\cdot)$ is the normal cumulative distribution function (CDF), and $\theta^N=\{\beta,\mathbf{b},\sigma\}$ is the set of potential parameters. Specifically, the vector $\beta \in {R}^d$ projects observations $\mathbf{x}$ onto an ordinal line divided by a set of cut-off points ${b_0} = - \infty \le \cdots \le {b_L} = \infty $. Every pair of contiguous cut-off points divides the projection values into bins corresponding to the different ordinal states $l=1,...,L$. The difference between the two CDFs provides the probability of the latent state $l$ given the observation $\mathbf{x}$, where $\sigma$ is the standard deviation of a Gaussian noise contaminating the ideal model (see \cite{kim2010hidden} for details). In our case, we fix $\sigma=1$ to avoid model over-parametrization.
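A minimal numerical sketch of this ordinal likelihood (the parameter values $\beta$, $\mathbf{b}$ and the observation are illustrative, not taken from the paper; states are indexed $1,\dots,L$ with $b_0=-\infty$ and $b_L=+\infty$ as above):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def node_potential(x, l, beta, b, sigma=1.0):
    """Log ordinal likelihood of state l for observation x:
    log[ Phi((b_l - beta.x)/sigma) - Phi((b_{l-1} - beta.x)/sigma) ]."""
    proj = sum(bi * xi for bi, xi in zip(beta, x))
    p = normal_cdf((b[l] - proj) / sigma) - normal_cdf((b[l - 1] - proj) / sigma)
    return math.log(p)

beta = [1.0, -0.5]
b = [float("-inf"), -1.0, 1.0, float("inf")]  # cut-offs for L = 3 states
x = [0.2, 0.1]
# The probabilities over the L states telescope and sum to one:
total = sum(math.exp(node_potential(x, l, beta, b)) for l in (1, 2, 3))
print(round(total, 6))  # 1.0
```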
\subsubsection{MI-DORF: Edge potentials} The edge potential $\Psi^E(h_t,h_{t+1};\theta^E)$ models temporal information regarding compatibilities between consecutive latent ordinal states as: \begin{equation} \Psi^E(h_t=l,h_{t+1}=l^\prime;\theta^E) = \mathbf{W}_{l,l^\prime}, \end{equation} where $\theta^E={\mathbf{W}^{L \times L}}$ represents a real-valued transition matrix, as in standard HCORF. The main goal of this potential is to perform temporal smoothing of the instance intensity levels. \subsubsection{MI-DORF: Multi-Instance-Ordinal potential} In order to model the MIOR assumption (see Eq. \ref{eq:miorassumption}), we define a high-order potential $\Psi^{M}(\mathbf{h},y;\theta^M)$ involving the label $y$ and all the sequence latent variables $\mathbf{h}$: \begin{equation} \label{eq:mil_potential} \Psi^{M}(\mathbf{h},y;\theta^M) = \begin{cases} w \sum_{t=1}^{T} \mathbf{I} (h_t==y) \hspace{3mm} \text{iff} \hspace{3mm} \max (\mathbf{h}) = y \\ -\infty \hspace{3mm} \text{otherwise} \end{cases}, \end{equation} where $\mathbf{I}$ is the indicator function, and $\theta^M=w$. Note that when the maximum value within $\mathbf{h}$ is not equal to $y$, the energy function is equal to $-\infty$ and, thus, the probability $P(y|\mathbf{X};\theta)$ drops to 0. On the other hand, if the MI assumption is fulfilled, the summation $w \sum_{t=1}^{T} \mathbf{I} (h_t==y)$ increases the energy proportionally to $w$ and the number of latent states $h_t \in \mathbf{h}$ that are equal to $y$. This is convenient since, in sequences annotated with a particular label, it is more likely to find many latent ordinal states with that ordinal level. Therefore, the defined MI potential not only models the MIOR assumption but also provides a mechanism to learn how important the proportion of latent states equal to the label is. Eq.
\ref{eq:mil_potential} is a special case of the cardinality potentials \cite{gupta2007efficient} also employed in binary Multi-Instance Classification \cite{hajimirsadeghi2013multiple}. \subsection{MI-DORF: Learning} \label{sec:training} Given a training set $\mathcal{T}=\{(\mathbf{X}_1,y_1),(\mathbf{X}_2,y_2),...,(\mathbf{X}_N,y_N)\}$, we learn the model parameters $\theta$ by minimizing the regularized negative log-likelihood: \begin{equation} \label{eq:training} \min_\mathbf{\theta} \hspace{3mm} -\sum_{i=1}^N \log P(y_i|\mathbf{X}_i;\theta) + \mathcal{R}(\theta), \end{equation} where the regularization function $\mathcal{R}(\theta)$ over the model parameters is defined as: \begin{equation} \mathcal{R}(\theta) = \alpha (||\beta||_2^2 + ||\mathbf{W}||_F^2), \end{equation} and $\alpha$ is set via a validation procedure. The objective function in Eq.~\ref{eq:training} is differentiable, and standard gradient-descent methods can be applied for optimization. To this end, we use the L-BFGS Quasi-Newton method \cite{byrd1994representations}. The gradient evaluation involves the marginal probabilities $p(h_t|\mathbf{X})$ and $p(h_t,h_{t+1}|\mathbf{X})$, which can be efficiently computed using the algorithm proposed in Sec.~\ref{sec:inference}. \subsection{MI-DORF: Inference} \label{sec:inference} The evaluation of the conditional probability $P(y|\mathbf{X};\theta)$ in Eq.~\ref{eq:cond_probability} requires computing $\sum_\mathbf{h}{e^{-\Psi(\mathbf{X},\mathbf{h},y;\theta)}}$ for each label $y$. Given the exponential number of possible latent states $\mathbf{h} \in \mathcal{H}$, efficient inference algorithms need to be used. In the case of Latent-Dynamic Models such as HCRF/HCORF, the forward-backward algorithm \cite{barber2012bayesian} can be applied. This is because of the pair-wise linear-chain connectivity between the latent states $\mathbf{h}$.
However, in the case of MI-DORF, the inclusion of the cardinality potential $\Psi^{M}(\mathbf{h},y;\theta^{M})$ introduces a high-order dependence between the label $y$ and all the latent states in $\mathbf{h}$. Inference methods with cardinality potentials have previously been proposed in \cite{gupta2007efficient,tarlow2012fast}. However, these algorithms only consider the case where the latent variables are independent and, therefore, they cannot be applied in MI-DORF. For these reasons, we propose a specific inference method. The idea behind it is to apply the standard forward-backward algorithm by converting the energy function defined in Eq. \ref{eq:energy_func} into an equivalent one preserving the linear-chain connectivity between the latent states $\mathbf{h}$. To this end, we introduce a new set of auxiliary variables $\boldsymbol{\zeta} = \{\zeta_1,\zeta_2,...,\zeta_T\}$, where each $\zeta_t \in \{0,1\}$ takes a binary value denoting whether the sub-sequence $\mathbf{h}_{1:t}$ contains at least one ordinal state $h$ equal to $y$. Now we redefine the MI-DORF energy function in Eq.
\ref{eq:energy_func} as: \begin{equation} \label{eq:energy_func_inference} \Psi(\mathbf{X},\mathbf{h},\boldsymbol{\zeta},y;\theta) = \sum_{t=1}^{T} \Psi^{N}(\mathbf{x}_t,h_t,\zeta_t,y;\theta^N) + \sum_{t=1}^{T-1} \Psi^{E}(h_t,h_{t+1},\zeta_t,\zeta_{t+1},y;\theta^E), \end{equation} where the new node and edge potentials are given by: \begin{equation} \label{eq:inference_node_potential} \Psi^{N}(\mathbf{x}_t,h_t,\zeta_t,y;\theta^N) = \begin{cases} \Psi^{N}(\mathbf{x}_t,h_t;\theta^N) + w\mathbf{I} (h_t==y) \hspace{2mm} \text{iff} \hspace{2mm} h_t \leq y \\ -\infty \hspace{3mm} \text{otherwise} \end{cases}, \end{equation} \begin{equation} \Psi^{E}(h_t,h_{t+1},\zeta_t,\zeta_{t+1},y;\theta^E) = \begin{cases} \mathbf{W}_{h_t,h_{t+1}} \hspace{3mm} \text{iff} \hspace{3mm} \zeta_t=0 \land \zeta_{t+1}=0 \land h_{t+1} \neq y\\ \mathbf{W}_{h_t,h_{t+1}} \hspace{3mm} \text{iff} \hspace{3mm} \zeta_t=0 \land \zeta_{t+1}=1 \land h_{t+1} = y \\ \mathbf{W}_{h_t,h_{t+1}} \hspace{3mm} \text{iff} \hspace{3mm} \zeta_t=1 \land \zeta_{t+1}=1 \\ -\infty \hspace{3mm} \text{otherwise} \end{cases} \end{equation} Note that Eq. \ref{eq:energy_func_inference} does not include the MIO potential and, thus, the high-order dependence between the label $y$ and the latent ordinal states $\mathbf{h}$ is removed. The graphical representation of MI-DORF with the redefined energy function is illustrated in Fig.~\ref{fig:mildorf}(b). In order to show the equivalence between the energies in Eqs. \ref{eq:energy_func} and \ref{eq:energy_func_inference}, we explain how the original Multi-Instance-Ordinal potential $\Psi^M$ is incorporated into the new node and edge potentials. Firstly, note that $\Psi^{N}$ now also takes into account the proportion of ordinal variables $h_t$ that are equal to the sequence label. Moreover, it enforces $\mathbf{h}$ not to contain any $h_t$ greater than $y$, thus aligning the bag and (max) instance labels.
However, the original Multi-Instance-Ordinal potential also constrained $\mathbf{h}$ to contain at least one $h_t$ with the same ordinal value as $y$. This is achieved by using the set of auxiliary variables $\zeta_t$ and the redefined edge potential $\Psi^{E}$. In this case, transitions are modelled not only between latent ordinal states but also between the auxiliary variables $\zeta_t$. Specifically, when the ordinal state $h_{t+1}$ is equal to $y$, the sub-sequence $\mathbf{h}_{1:t+1}$ fulfills the MIOR assumption and, thus, $\zeta_{t+1}$ is forced to be $1$. By defining the special cases at the beginning and the end of the sequence ($t=1$ and $t=T$): \begin{equation} \label{eq:inference_node_potential0} \Psi^{N}(\mathbf{x}_1,h_1,\zeta_1,y) = \begin{cases} \Psi^{N}(\mathbf{x}_1,h_1) + w\mathbf{I} (h_1==y) \hspace{2mm} \text{iff} \hspace{2mm} \zeta_1 = 0 \land h_1 < y \\ \Psi^{N}(\mathbf{x}_1,h_1) + w\mathbf{I} (h_1==y) \hspace{2mm} \text{iff} \hspace{2mm} \zeta_1 = 1 \land h_1 = y \\ -\infty \hspace{3mm} \text{otherwise} \end{cases}, \end{equation} \begin{equation} \label{eq:inference_node_potentialT} \Psi^{N}(\mathbf{x}_T,h_T,\zeta_T,y) = \begin{cases} \Psi^{N}(\mathbf{x}_T,h_T) + w\mathbf{I} (h_T==y) \hspace{1.5mm} \text{iff} \hspace{1.5mm} \zeta_T = 1 \land h_T \leq y \\ -\infty \hspace{3mm} \text{otherwise} \end{cases}, \end{equation} we can see that the energy is $-\infty$ when the MIOR assumption is not fulfilled. Otherwise, it has the same value as the one defined in Eq.~\ref{eq:energy_func}, since no additional information is given.
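The equivalence can also be checked numerically. The sketch below is a simplified illustration with random potentials (all values arbitrary, and using $\exp(\Psi)$ so that forbidden configurations with $\Psi=-\infty$ receive zero weight): it runs a forward pass over the augmented states $(h_t,\zeta_t)$ and compares the result with brute-force enumeration of all sequences satisfying $\max(\mathbf{h})=y$:

```python
import itertools
import math
import random

random.seed(0)
L, T, w, y = 3, 5, 0.5, 2
# Random node scores node[t][h] and transition scores W[h][h'] (index 1..L).
node = [[random.uniform(-1, 1) for _ in range(L + 1)] for _ in range(T)]
W = [[random.uniform(-1, 1) for _ in range(L + 1)] for _ in range(L + 1)]

def score(h):
    """exp(Psi) for one latent sequence; zero if max(h) != y (MIOR)."""
    if max(h) != y:
        return 0.0
    s = sum(node[t][h[t]] for t in range(T)) + w * sum(ht == y for ht in h)
    s += sum(W[h[t]][h[t + 1]] for t in range(T - 1))
    return math.exp(s)

brute = sum(score(h) for h in itertools.product(range(1, L + 1), repeat=T))

def fwd():
    """Forward pass over augmented states (h, zeta); zeta flags whether
    y has already occurred, and states with h > y get zero weight."""
    a = {}
    for h in range(1, y + 1):
        z = 1 if h == y else 0          # zeta_1 = 1 iff h_1 == y
        a[(h, z)] = math.exp(node[0][h] + w * (h == y))
    for t in range(1, T):
        b = {}
        for h in range(1, y + 1):
            for z in (0, 1):
                if z == 0 and h == y:   # h_t == y forces zeta_t = 1
                    continue
                tot = 0.0
                for (hp, zp), v in a.items():
                    ok = (zp == 0 and z == 0 and h != y) or \
                         (zp == 0 and z == 1 and h == y) or \
                         (zp == 1 and z == 1)
                    if ok:
                        tot += v * math.exp(W[hp][h])
                if tot:
                    b[(h, z)] = tot * math.exp(node[t][h] + w * (h == y))
        a = b
    return sum(v for (h, z), v in a.items() if z == 1)  # require zeta_T = 1

print(abs(fwd() - brute) < 1e-9 * brute)  # True: both sides agree
```

The forward pass visits $T\cdot 2L$ augmented states, matching the $\mathcal{O}(T\cdot(2L)^2)$ complexity discussed next in the text, while the brute force enumerates all $L^T$ sequences.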
The advantage of using this equivalent energy function is that the standard forward-backward algorithm can be applied to efficiently compute the conditional probability: \begin{equation} \label{eq:cond_probability2} P(y|\mathbf{X};\theta) = \frac{\sum_\mathbf{h} \sum_{\boldsymbol{\zeta}} {e^{-\Psi(\mathbf{X},\mathbf{h},\boldsymbol{\zeta},y;\theta)}}}{\sum_{y^\prime}\sum_\mathbf{h} \sum_{\boldsymbol{\zeta}} {e^{-\Psi(\mathbf{X},\mathbf{h},\boldsymbol{\zeta},y^\prime;\theta)}}}. \end{equation} The proposed procedure has a computational complexity of $\mathcal{O}(T \cdot (2L)^2)$, compared with $\mathcal{O}(T \cdot L^2)$ for the standard forward-backward algorithm in traditional linear-chain latent dynamical models. Since typically $L \ll T$, this can be considered a similar complexity in practice. The presented algorithm can also be applied to compute the marginal probabilities $p(h_t|\mathbf{X})$ and $p(h_t,h_{t+1}|\mathbf{X})$. These probabilities are used during training for gradient evaluation and during testing to predict ordinal labels at the instance and bag level. \section{Experiments} \subsection{Baselines and evaluation metrics} \label{sec:baselines} The introduced MI-DORF approach is designed to address Multi-Instance-Ordinal Regression when bags are structured as temporal sequences of ordinal states. Given that this has not been attempted before, we compare MI-DORF with different approaches that either ignore the MIL assumption (Single-Instance) or do not model dynamic information (Static): \textbf{Single-Instance Ordinal Regression (SIL-OR):} MIL can be posed as a SIL problem with noisy labels. The main assumption is that the majority of instances have the same label as their bag. In order to test this assumption, we train standard Ordinal Regression \cite{winkelmann2006analysis} at the instance level by setting all instance labels to the same value as that of their corresponding bag. During testing, the bag label is set to the maximum value predicted over all its instances.
Note that this baseline can be considered a Static-SIL approach. \textbf{Static Multi-Instance Ordinal Regression (MI-OR):} Given that no MIOR methods have previously been proposed for this task, we implemented this static approach following the MIOR assumption. This method is inspired by MI-SVM \cite{andrews2002support}, where instance labels are considered latent variables and are iteratively optimized during training. To initialize the parameters of the ordinal regressor, we follow the same procedure as described above for SIL-OR. Then, ordinal values for each instance are predicted and modified so that the MIOR assumption is fulfilled for each bag. Note that if all the predictions within a bag are lower than its label, the instances with the maximum value are set to the bag label. On the other hand, all the predictions greater than the bag label are decreased to this value. With these modified labels, Ordinal Regression is applied again, and this procedure is repeated until convergence. \textbf{Multi-Instance-Regression (MIR):} Several methods have been proposed in the literature to solve the MIL problem when bag labels are real-valued variables. In order to evaluate the performance of this approach in MIOR, we have implemented a method similar to that used in \cite{hsu2014augmented}. Specifically, a linear regressor at the instance level is trained by optimizing a loss function over the bag labels. This loss models the MIR assumption by using a soft-max function which approximates the maximum instance label within a bag predicted by the linear regressor. Note that a similar approach is also applied in Multi-Instance Logistic Regression \cite{ray2005supervised}. In these works, a logistic loss is used because instance labels take values between 0 and 1. However, we use a squared-error loss to take into account the different ordinal levels. \textbf{Multi-Instance HCRF (MI-HCRF):} This approach is similar to the proposed MI-DORF.
However, MI-HCRF ignores the ordinal nature of labels and models them as nominal variables. For this purpose, we replace the MI-DORF node potentials by a multinomial logistic regression model\footnote{The potential with the Multinomial Logistic Regression model is defined as $\log ( \frac{ \exp(\beta^T_l x)}{ \sum_{ l^\prime \in L} \exp(\beta^T_{l^\prime} x) } )$, where each $\beta_l$ defines a linear projection for the corresponding ordinal value $l$ \cite{walecki2015variablestate}.}. Inference in MI-HCRF is performed by using the algorithm described in Sec. \ref{sec:inference}. \textbf{Single-Instance Latent-Dynamic Models (HCRF/HCORF):} We also evaluate the performance of HCRF and HCORF. For this purpose, the Multi-Instance-Ordinal potential in MI-DORF is replaced by the one employed in standard HCRF \cite{quattoni2007hidden}. This potential models the compatibility of the hidden state values $\mathbf{h}$ with the sequence label $y$, but ignores the Multi-Instance assumption. For HCRF, we also replace the node potential as in the case of MI-HCRF. Inference is performed using the standard forward-backward algorithm. \textbf{Evaluation metrics:} In order to evaluate the performance of MI-DORF and the compared methods, we report results in terms of instance- and bag-label predictions. Note that in the MIL literature, results are usually reported only at the bag level. However, in problems such as weakly-supervised pain detection, the main goal is to predict instance labels (frame-level pain intensities). Given the ordinal nature of the labels, the reported metrics are Pearson's Correlation (CORR), Intra-Class Correlation (ICC) and Mean Absolute Error (MAE). For bag-label predictions, we also report the Accuracy and average F1-score as discrete metrics. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{SynthQualitativeFinal.eps} \end{center} \caption{Description of the procedure used to generate synthetic sequences.
(a) A random matrix modelling transition probabilities between consecutive latent ordinal values. (b) Ordinal levels assigned to the random feature vectors according to the ordinal regressor. (c) Example of a sequence of ordinal values obtained using the generated transition matrix. The feature vector representing each observation is randomly chosen from the samples in (b) according to the probability for each ordinal level. (c-d) Examples of instance-level predictions in a sequence for MI-OR and MI-DORF. } \label{fig:unbc_results_synthetic} \vspace{-0.3cm} \end{figure} \input{baselineresults} \subsection{Synthetic Experiments} \label{sec:synth_experiments} \textbf{Synthetic Data:} Given that no standard benchmarks are available for MIOR, we have generated synthetic data. To create the synthetic sequences, we firstly generated a sequence of ordinal values using a random transition matrix, which represents the transition probabilities between temporally consecutive ordinal levels. The first value of the sequence is randomly chosen with equal probability among all possible ordinal levels. Secondly, we generated random parameters of an Ordinal Regressor as defined in Eq. \ref{eq:ordinal_regression}. This regressor is used to compute the probabilities for each ordinal level in a set of feature vectors randomly sampled from a Gaussian distribution. Thirdly, the sequence observation corresponding to each latent state is randomly chosen from the sampled feature vectors according to the obtained probability for each ordinal value. Finally, the sequence label is set to the maximum ordinal state within the sequence, following the MIOR assumption, and Gaussian noise ($\sigma=0.25$) is added to the feature vectors. Fig. \ref{fig:unbc_results_synthetic}(a-c) illustrates this procedure. Following this strategy, we have generated ten different data sets by varying the ordinal regressor parameters and transition matrix.
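The Markov-chain sampling of the latent ordinal states can be sketched as follows (a hedged sketch with our own names; sampling the feature vector attached to each state, and the added Gaussian noise, are omitted):

```python
import numpy as np

def sample_sequence(T, trans, rng):
    """Sample a length-T sequence of latent ordinal states from a
    row-stochastic transition matrix `trans`; the first state is
    drawn uniformly over the L ordinal levels. The bag (sequence)
    label is the maximum state, per the MIOR assumption."""
    L = trans.shape[0]
    states = [int(rng.integers(L))]
    for _ in range(T - 1):
        states.append(int(rng.choice(L, p=trans[states[-1]])))
    return states, max(states)
```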
Concretely, each dataset is composed of 100 sequences for training, 150 for testing and 50 for validation. The last set is used to optimize the regularization parameters of each method. The sequences have a variable length between 50 and 75 instances. The dimensionality of the feature vectors was set to 10 and the number of ordinal values to 6. \textbf{Results and Discussion:} Table \ref{tab:synth_results} shows the results computed as the average performance over the ten datasets. SIL methods (SIL-OR, HCRF and HCORF) obtain worse performance than their corresponding MI versions (MI-OR, MI-HCRF and MI-DORF) in most of the evaluated metrics. This is expected, since SIL approaches ignore the Multi-Instance assumption. Moreover, HCORF and MI-DORF obtain better performance than HCRF and MI-HCRF. This is because the latter model the latent states as nominal variables, thus ignoring their ordinal nature. Finally, note that MI-DORF outperforms the static methods MI-OR and MIR. Although these approaches use the Multi-Instance assumption and incorporate the label ordering, they do not take temporal information into account. In contrast, MI-DORF is able to model the dynamics of latent ordinal states and use this information to make better predictions when sequence observations are noisy. As Fig. \ref{fig:unbc_results_synthetic}(c-d) shows, MI-OR predictions tend to be less smooth because dynamic information is not taken into account. In contrast, MI-DORF better estimates the actual ordinal levels by modelling transition probabilities between consecutive ordinal levels. \subsection{Weakly-supervised pain intensity estimation} \label{sec:unbc_experiments} In this experiment, we test the performance of the proposed model for weakly-supervised pain intensity estimation. To this end, we use the UNBC Shoulder-Pain Database \cite{lucey2011painful}.
This dataset contains recordings of different subjects performing active and passive arm movements during rehabilitation sessions. Each video is annotated according to the maximum pain felt by the patient during the recording, on an ordinal scale between 0 (no pain) and 5 (strong pain). These annotations are used as the bag label in the MIOR task. Moreover, pain intensities are also annotated at frame-level in terms of the PSPI scale \cite{prkachin1992consistency}. This ordinal scale ranges from 0 to 15. Frame PSPI annotations are normalized between 0 and 5, in order to align the scale with the one provided at the sequence level. Furthermore, we used a total of 157 sequences from 25 subjects. The remaining 43 were removed because a high discrepancy between sequence- and frame-level annotations was observed. Concretely, we do not consider the cases where the sequence label is 0 but the frame annotations contain higher pain levels. Similarly, we also remove sequences with a high discrepancy in the opposite direction. Given the different scales used in frame and sequence annotations, we computed the agreement between them. For this purpose, we firstly obtained the maximum pain intensity at frame-level for each of the used sequences. Then, we computed the CORR and ICC between these values and their corresponding sequence labels. The results were 0.83 for CORR and 0.78 for ICC. This high agreement indicates that predictions in both scales are comparable. More importantly, this supports our hypothesis that sequence labels are highly correlated with frame labels; thus, the used bag labels provide sufficient information for learning the instance labels in our weakly-supervised setting. \textbf{Facial-features:} For each video frame, we compute a geometry-based facial descriptor as follows. Firstly, we obtain a set of 49 facial landmark points with the method described in \cite{XiongD13}. Then, the obtained point locations are aligned with a mean shape using Procrustes Analysis.
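The Procrustes alignment step can be sketched as follows; this is a minimal similarity alignment (translation, scale, rotation) with our own names, which does not handle reflections:

```python
import numpy as np

def procrustes_align(shape, mean_shape):
    """Align a 2-D landmark shape (n_points x 2) to a mean shape via
    an ordinary Procrustes similarity transform, returning the shape
    expressed in the mean-shape frame."""
    A = shape - shape.mean(axis=0)            # center both shapes
    B = mean_shape - mean_shape.mean(axis=0)
    A = A / np.linalg.norm(A)                 # remove scale
    Bn = B / np.linalg.norm(B)
    U, S, Vt = np.linalg.svd(A.T @ Bn)        # optimal rotation
    R = U @ Vt
    # rotate, rescale and translate into the mean-shape frame
    return (A @ R) * S.sum() * np.linalg.norm(B) + mean_shape.mean(axis=0)
```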
Finally, we generate the facial descriptor by concatenating the $x$ and $y$ coordinates of all the aligned points. According to the MIL terminology, these facial descriptors are considered the instances in the bag (video). \textbf{Experimental setup:} We perform Leave-One-Subject-Out Cross-Validation similar to \cite{sikka2013weakly}. In each fold, we use 20 subjects for training, 1 for testing and 4 for validation. This last subset is used to cross-validate the regularization parameters of each particular method. In order to reduce computational complexity and redundant information between temporally consecutive frames, we segmented the sequences using non-overlapping windows of 0.5 seconds, similar to \cite{sikka2013weakly}. The instance representing each segment is computed as the mean of its corresponding facial descriptors. Apart from the baselines described in Sec. \ref{sec:baselines}, we also evaluate the performance of a Multi-Instance Classification (MIC) approach that considers pain levels as binary variables. For this purpose, we implemented the MILBoosting \cite{zhang2005multiple} method used in \cite{sikka2013weakly} and considered videos with a pain label greater than 0 as positive. Given that MIC methods are only able to make binary predictions, we use the output probability as an indicator of intensity levels at the bag and instance level, i.e., the output probability is rescaled to the range between 0 and 5. \input{unbcresults} \textbf{Results and discussion:} Table \ref{tab:unbc_results} shows the results obtained by the evaluated methods following the experimental setup previously described. By looking into the results of the compared methods, we can derive the following conclusions. Firstly, SIL approaches (SIL-OR, HCORF and HCRF) obtain worse performance than MI-OR and MIR. This is because pain events are typically very sparse in these sequences and most frames have intensity level 0 (neutral).
Therefore, the use of the MIL assumption is of critical importance in this problem. Secondly, poor results are obtained by HCRF and MI-HCRF. This can be explained by the fact that these approaches consider pain levels as nominal variables and thus ignore the ordering information of the different pain intensities. Finally, MILBoost trained with binary labels also obtains low performance compared to MI-OR and MIR. This suggests that current approaches posing weakly-supervised pain detection as MIC are suboptimal, and thus unable to accurately predict the target pain intensities. By contrast, MI-DORF obtains the best performance across all the evaluated metrics at both the sequence and frame level. We attribute this to the fact that MI-DORF models the MIL assumption with ordinal variables. Moreover, the improvement of MI-DORF over the static approaches, such as MI-OR and MIR, suggests that modelling dynamic information is beneficial in this task. To get better insights into the performance of our weakly-supervised approach, we compare its results (in terms of ICC) to those obtained by the fully supervised (at the frame level) state-of-the-art approach to pain intensity estimation, Context-sensitive Dynamic Ordinal Regression \cite{rudovic2015context}. While this approach achieves an ICC of 0.67/0.59, using context/no-context features, respectively, our MI-DORF achieves an ICC of 0.40 without ever seeing the frame labels. This is a good trade-off between the need for the ``very-expensive-to-obtain'' frame-level annotation and the model's performance. Finally, in Fig. \ref{fig:unbc_results_qualitative}, we show more qualitative results comparing the predictions of MI-OR, MIR and MI-DORF. The shown example sequences depict image frames along with the per-frame annotations and those obtained by the compared models, using the adopted weakly-supervised setting (thus, only bag labels are provided).
First, we note that all methods succeed in capturing the segments of the sequences where the intensity changes occur, as given by the frame-level ground truth. However, note that MI-DORF achieves a more accurate localization of the pain activations and prediction of their actual intensity. This is also reflected in the depicted MAE values, clearly showing that the proposed method outperforms the competing methods on the target sequences. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{QualitativeUNBC.eps} \end{center} \caption{Visualization of the pain intensity predictions at frame-level for MI-OR, MIR and the proposed MI-DORF method. From top to bottom, three sequences with ground truth for which MI-DORF predicted the sequence labels 0, 3 and 5, respectively.} \label{fig:unbc_results_qualitative} \end{figure} \section{Conclusions} In this work, we introduced MI-DORF for the task of Multi-Instance Ordinal Regression. This is the first MI approach that imposes an ordinal structure on the bag labels and also attains dynamic modelling of temporal sequences of the corresponding ordinal instances. In order to perform inference in the proposed model, we have developed an efficient algorithm with a computational complexity similar to that of the standard forward-backward method, despite the high-order potentials modelling the MIOR assumption. We demonstrated on the task of weakly-supervised pain intensity estimation that the proposed model can successfully unravel the (ordinal) instance labels by using only the (ordinal) bag labels. We also showed that this approach largely outperforms related MI approaches -- all of which fail to account for the temporal structure, the ordinal structure, or both types of structure in the target data. \vspace{3mm} \noindent {\bf Acknowledgement}. This paper is part of a project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements no. 645012 (KRISTINA), no.
645094 (SEWA) and no. 688835 (DE-ENIGMA). Adria Ruiz would also like to thank the Spanish Government for support under grant FPU13/01740. \bibliographystyle{splncs} \input{output.bbl} \end{document}
\section{Introduction} Since the discovery of the first rocky exoplanet \cite[a term we use here to refer to planets with masses and radii consistent with MgSiO$_3$ and Fe compositions, following][]{Rogers2015}, CoRoT-7b \citep{leger09,queloz09}, effort has been made to find and study the formation, composition and evolution of these systems, since they resemble Earth in many ways. As most rocky planets are smaller than $1.6R_\Earth$, corresponding to masses below $\sim 6M_\Earth$ \citep{WM14, wolfgang15, Rogers2015}, the discovery of this type of exoplanet is difficult due to the small signals that these radii and masses imply. In fact, in addition to CoRoT-7b, only 9 planets with secure masses and radii (i.e., masses and radii with values more than $3\sigma$ away from zero) in this rocky regime exist to date: GJ1132 \citep{bt2015}, Kepler-36b \citep{carter2012}, K2-3d \citep{crossfield15,almenara15}, Kepler-93b \citep{dressing2015}, Kepler-10b \citep{dumusque2015,weiss2016}, Kepler-23b \citep{ford2012,HL2014}, Kepler-20b \citep{fressin2012}, Kepler-406b \citep{marcy2014}, and Kepler-78b \citep{so13,howard13,pepe2013,grunblatt15}. All of these planets have radii smaller than $\sim 1.6R_\Earth$, as has been empirically determined. Although the sample of rocky planets is small, some interesting relationships suggest that some of these rocky planets might have common properties \citep{WM14}. Perhaps one of the most interesting relations was recently introduced by \cite{dressing2015} who, considering the planets with masses and radii measured to better than $20\%$ precision, showed that the planets follow a common iso-composition curve on the mass-radius diagram, along with Earth and Venus. This relation was recently revised by \cite{zs2016} to a composition of 74\% rock and 26\% Fe. This suggests that these small, rocky analogs of Earth might have similar compositions with small intrinsic scatter.
Here we report what could be an interesting addition to the picture of rocky worlds described above: a $2.23R_\Earth$ exoplanet that falls just where a pure-rock (i.e., magnesium silicate) composition is expected in the mass-radius diagram using two-layer models. Although this does not mean the planet has exactly this composition, its position on the diagram does make it interesting due to the fact that this curve has been used in previous works to divide the ``non-rocky'' and ``possibly rocky'' planets \citep{Rogers2015}. The discovery is made in the context of a Chilean-based effort whose aim is to follow up planetary candidates selected using data from the two-wheeled \textit{Kepler} (K2) mission. K2 has proven to be very effective in the search for exoplanets, enabling a plethora of new discoveries of planets of different sizes, which are especially interesting due to the presence of several bright host stars in the sample that allow detailed follow-up characterisation \citep[see, e.g., ][]{armstrong15, becker15, crossfield15, petigura2015,so15, vanderburg15}. The paper is structured as follows. In \S2 we present the data, which include the K2 photometry, archival, new, adaptive optics (AO) and lucky imaging of the target star, along with high resolution spectra and radial velocities obtained with the HARPS spectrograph. \S3 presents a joint analysis of the data and the derived parameters of the planetary system. We discuss the results in \S4 and present our conclusions in \S5. \section{Data} \subsection{K2 Photometry} \begin{figure*} \plotone{CL001-04.eps} \caption{K2 photometry \cite[obtained from][upper panel]{VJ14} and long-term- and outlier-corrected version of the photometry (lower panel). The smooth, long-term variation observed in the original photometry was removed by a smoothed median filter, depicted in the upper panel by a red solid line, which was used for outlier removal (see text).
Two clear transit-like events can be seen in both versions of the photometry, close to $2457070$ and $2457110$ BJD (indicated with red arrows). Note that the precision obtained for this lightcurve is $\sim 55$ ppm (rms) per point. \label{k2lc}} \end{figure*} K2 photometry for our target was obtained by the \textit{Kepler} spacecraft during Campaign 4. This field was observed between February and April 2015, and the data were released in September of the same year. We obtained the decorrelated versions of all the lightcurves in the campaign, which were made publicly available for download by \cite{VJ14}, using the photometry with the optimal aperture, which in the case of our target star corresponded to a $\approx 3$ pixel radius around the target, or an aperture of $\approx 12\arcsec$ radius. We performed a transit search using a Box Least Squares \citep[BLS,][]{bls2002} algorithm. Once a periodic signal is detected along with the best-fit depth, the transit event is flagged as a potential planetary candidate if (1) the depth is at least $3\sigma$ larger than the average noise level of the lightcurve (denoted by $\sigma$) and (2) there are three or more transit events. Initially, because of the last requirement, the lightcurve of the target star was not flagged by our transit search pipeline. However, we also performed visual inspection of all the lightcurves, which revealed this interesting candidate. In order to double-check that this was indeed an astrophysical signal and not a spurious signal arising from the decorrelation method used to obtain the lightcurve, we also inspected the detrended lightcurves released by the \textit{Kepler} team using the PDC-MAP algorithm \citep{stumpe2012}, and the same signal was observed at the exact same times as in the \cite{VJ14} photometry. We were thus confident that the signal is of astrophysical origin and proceeded to analyse the light curve.
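The two flagging criteria of the transit-search pipeline can be sketched as a simple check (a hedged illustration with our own names; the BLS search itself, available e.g. in astropy's `BoxLeastSquares`, is not shown):

```python
def is_candidate(depth, sigma, n_transits):
    """Flag a BLS detection as a planetary candidate only if the
    best-fit depth is at least 3-sigma above the average noise level
    of the lightcurve AND three or more transit events are present.
    (Our candidate, with only two observed transits, fails the second
    criterion and was found by visual inspection instead.)"""
    return depth >= 3.0 * sigma and n_transits >= 3
```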
A median filter with a 41-point ($\sim 20.5$ hour) window was used in order to further filter long-term variations of this target. The resulting median filter was smoothed using a Gaussian filter with a 5-point standard deviation, and this smoothed lightcurve was used to normalize the lightcurve. Using this normalized lightcurve, an initial fit using our transit-fitting pipeline (see below) revealed a $P=41.7$ d period for this candidate and a lightcurve whose shape resembled that of a planetary transit, with a transit duration consistent with that of a planetary companion. Using the parameters obtained from this initial fit, we removed outliers from the out-of-transit data, discarding any points deviating more than 3-$\sigma$ from the median flux. The resulting normalized version of this lightcurve is shown in Figure~\ref{k2lc}. No other significant signals were found in the photometry. \subsection{Reconnaissance spectroscopy} A high resolution spectrum of this target was taken on October 21st with the CORALIE spectrograph mounted on the 1.2m Euler Telescope at La Silla Observatory in order to obtain rough spectral parameters of the stellar host, and to determine whether this was a giant or a dwarf star. Data were reduced and analyzed using the procedures described in \citet{jordan2014}. The analysis of the CORALIE spectra gave $T_\textnormal{eff}=5600$ K, $\log(g) = 4.4$ dex, $[\textnormal{Fe/H}]=0.0$ dex and $v\sin(i)=2.5$ km/s, which revealed that the star is a solar-type dwarf. In addition, no secondary peak was seen in the cross-correlation function, indicating no detectable spectroscopic binary. Because of this, the target was promoted to our list of planetary candidates despite the lack of the high resolution imaging needed to rule out potential blend events.
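The detrending procedure described above (41-point running median, Gaussian smoothing, normalization, 3-$\sigma$ clipping) can be sketched as follows; this is a hedged illustration with our own names, and it applies the clipping to all points rather than only to the out-of-transit data as in the paper:

```python
import numpy as np

def detrend(flux, window=41, sigma=5.0, clip=3.0):
    """Normalize a lightcurve by a Gaussian-smoothed running median,
    then flag points deviating more than `clip`-sigma from the median
    normalized flux."""
    half = window // 2
    padded = np.pad(flux, half, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(len(flux))])
    # build and apply a normalized Gaussian smoothing kernel
    x = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kern = np.exp(-0.5 * (x / sigma) ** 2)
    kern /= kern.sum()
    trend = np.convolve(np.pad(med, len(x) // 2, mode="edge"), kern, mode="valid")
    norm = flux / trend
    keep = np.abs(norm - np.median(norm)) <= clip * np.std(norm)
    return norm, keep
```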
\subsection{High precision radial velocities with HARPS} High-precision radial velocities (RVs) were obtained with the HARPS spectrograph mounted on the 3.6m telescope at La Silla between October and December of 2015 in order to measure the reflex motion of the star due to the hypothetical planet producing the transit signal. The observations covered our predicted negative and positive quadratures, along with epochs in between, in order to probe possible long-term trends in the RVs indicative of a possible massive companion. 23 spectra were taken in total in the simultaneous Thorium-Argon mode; the HARPS pipeline (DRS, version 3.8) was used to reduce these spectra and to obtain the (drift-corrected) radial velocities, which are calculated via cross-correlation with a G2V mask, appropriate for the stellar type of the host (see \S3.1). The typical precision was $\sim 3$ m/s for each individual RV measurement. For each spectrum, the bisector span, $S$-index, and the integrated fluxes of the H$_\alpha$ and \ion{He}{1} lines were obtained to monitor the activity of the host star and study its influence on the RVs \citep{santos2010,jenkins2011}. The measured RVs, along with these various calculated activity indicators, are given in Table~\ref{table:rv_list}. Although the times are given in UTC, they were converted to TDB (which is the time scale used by Kepler) for our joint analysis, which we describe in \S3.2.
\begin{deluxetable*}{ccccccccccc}[!ht] \tablecaption{Radial velocities obtained with the HARPS spectrograph along with various activity indicators.} \tablehead{ \colhead{BJD} & RV & $\sigma_{\rm RV}$ & BIS & $\sigma_{\rm BIS}$ & $S_{H,K}$ & $\sigma_{S_{H,K}}$ & ${\textnormal{H}_\alpha}$ & $\sigma_{H_\alpha}$ & \ion{He}{1} & $\sigma_{\rm He\,I}$ \\ \colhead{(UTC)} & \colhead{m sec$^{-1}$} & \colhead{m sec$^{-1}$} & \colhead{m sec$^{-1}$} & \colhead{m sec$^{-1}$} & \colhead{dex} & \colhead{dex} & \colhead{dex} & \colhead{dex} & \colhead{dex} & \colhead{dex} } \startdata $ 2457329.63450 $&$ -20333.9 $&$ 4.4 $&$ 35.0 $&$ 6.2 $&$ 0.1748 $&$ 0.0063 $&$ 0.10151 $&$ 0.00013 $&$ 0.50230 $&$ 0.00081 $\\ $ 2457329.67362 $&$ -20340.1 $&$ 3.6 $&$ 28.9 $&$ 5.0 $&$ 0.1535 $&$ 0.0050 $&$ 0.10337 $&$ 0.00013 $&$ 0.50279 $&$ 0.00081 $\\ $ 2457329.72375 $&$ -20337.9 $&$ 3.9 $&$ 34.2 $&$ 5.6 $&$ 0.1864 $&$ 0.0053 $&$ 0.10367 $&$ 0.00013 $&$ 0.50146 $&$ 0.00081 $\\ $ 2457330.80181 $&$ -20343.1 $&$ 2.6 $&$ 22.6 $&$ 3.7 $&$ 0.1483 $&$ 0.0035 $&$ 0.10167 $&$ 0.00013 $&$ 0.50787 $&$ 0.00082 $\\ $ 2457331.63418 $&$ -20342.5 $&$ 2.4 $&$ 23.2 $&$ 3.4 $&$ 0.1551 $&$ 0.0029 $&$ 0.10171 $&$ 0.00013 $&$ 0.51233 $&$ 0.00083 $\\ $ 2457331.68695 $&$ -20338.4 $&$ 2.0 $&$ 11.0 $&$ 2.8 $&$ 0.1573 $&$ 0.0026 $&$ 0.10209 $&$ 0.00013 $&$ 0.50301 $&$ 0.00081 $\\ $ 2457332.64705 $&$ -20335.7 $&$ 2.6 $&$ 20.2 $&$ 3.7 $&$ 0.1549 $&$ 0.0038 $&$ 0.10236 $&$ 0.00013 $&$ 0.50221 $&$ 0.00081 $\\ $ 2457332.72713 $&$ -20338.4 $&$ 2.0 $&$ 12.9 $&$ 2.9 $&$ 0.1459 $&$ 0.0025 $&$ 0.10178 $&$ 0.00013 $&$ 0.50273 $&$ 0.00081 $\\ $ 2457336.65528 $&$ -20345.3 $&$ 3.2 $&$ 20.8 $&$ 4.5 $&$ 0.1701 $&$ 0.0042 $&$ 0.10397 $&$ 0.00013 $&$ 0.49984 $&$ 0.00081 $\\ $ 2457336.73328 $&$ -20339.8 $&$ 3.6 $&$ 9.5 $&$ 5.1 $&$ 0.1547 $&$ 0.0047 $&$ 0.10187 $&$ 0.00013 $&$ 0.49884 $&$ 0.00081 $\\ $ 2457339.70924 $&$ -20343.1 $&$ 4.9 $&$ 14.2 $&$ 6.9 $&$ 0.1985 $&$ 0.0068 $&$ 0.09939 $&$ 0.00013 $&$ 0.49982 $&$ 0.00081 $\\ $ 2457339.72063
$&$ -20339.2 $&$ 4.1 $&$ 31.1 $&$ 5.8 $&$ 0.1738 $&$ 0.0058 $&$ 0.10496 $&$ 0.00013 $&$ 0.50575 $&$ 0.00082 $\\ $ 2457340.69354 $&$ -20340.0 $&$ 3.7 $&$ 5.9 $&$ 5.3 $&$ 0.1833 $&$ 0.0056 $&$ 0.10217 $&$ 0.00013 $&$ 0.51486 $&$ 0.00083 $\\ $ 2457340.70475 $&$ -20336.6 $&$ 3.1 $&$ 23.9 $&$ 4.3 $&$ 0.1687 $&$ 0.0044 $&$ 0.10083 $&$ 0.00013 $&$ 0.50038 $&$ 0.00081 $\\ $ 2457341.74523 $&$ -20336.6 $&$ 3.3 $&$ 18.7 $&$ 4.7 $&$ 0.1658 $&$ 0.0043 $&$ 0.10536 $&$ 0.00013 $&$ 0.50213 $&$ 0.00081 $\\ $ 2457341.75600 $&$ -20335.3 $&$ 3.2 $&$ 25.4 $&$ 4.5 $&$ 0.1491 $&$ 0.0043 $&$ 0.10274 $&$ 0.00013 $&$ 0.50658 $&$ 0.00082 $\\ $ 2457348.80101 $&$ -20330.3 $&$ 3.1 $&$ 21.6 $&$ 4.4 $&$ 0.1858 $&$ 0.0056 $&$ 0.10339 $&$ 0.00013 $&$ 0.50277 $&$ 0.00081 $\\ $ 2457360.62435 $&$ -20337.2 $&$ 2.2 $&$ 14.8 $&$ 3.2 $&$ 0.1581 $&$ 0.0029 $&$ 0.10354 $&$ 0.00013 $&$ 0.50080 $&$ 0.00081 $\\ $ 2457360.63915 $&$ -20336.8 $&$ 2.1 $&$ 11.0 $&$ 3.0 $&$ 0.1585 $&$ 0.0027 $&$ 0.10237 $&$ 0.00013 $&$ 0.50273 $&$ 0.00081 $\\ $ 2457361.66418 $&$ -20337.3 $&$ 3.1 $&$ 30.7 $&$ 4.4 $&$ 0.1719 $&$ 0.0042 $&$ 0.10607 $&$ 0.00013 $&$ 0.50020 $&$ 0.00081 $\\ $ 2457361.67814 $&$ -20337.0 $&$ 2.9 $&$ 32.8 $&$ 4.1 $&$ 0.1533 $&$ 0.0038 $&$ 0.10231 $&$ 0.00013 $&$ 0.49660 $&$ 0.00080 $\\ $ 2457362.66191 $&$ -20331.0 $&$ 2.2 $&$ 15.4 $&$ 3.1 $&$ 0.1517 $&$ 0.0031 $&$ 0.10313 $&$ 0.00013 $&$ 0.49866 $&$ 0.00081 $\\ $ 2457362.67602 $&$ -20335.6 $&$ 2.2 $&$ 13.5 $&$ 3.1 $&$ 0.1575 $&$ 0.0032 $&$ 0.10479 $&$ 0.00013 $&$ 0.50121 $&$ 0.00081 $ \enddata \label{table:rv_list} \end{deluxetable*} \subsection{Archival and New Imaging} \begin{figure*} \plottwo{HSTGSC2.eps}{POSS2IR.eps} \caption{Archival imaging for our target at the coordinates given in the EPIC catalog obtained with different versions of POSS: (a) POSSII-F survey, taken with a red filter and (b) POSSII-N survey, taken with an infrared filter. The black circle indicates the aperture used for our K2 data. 
The white solid circle has a radius of $5''$ and the dashed circle a radius of $2''$, for illustration purposes; these are centered on the measured centroid of the target star. The red circle to the left of the target star marks an object which is $\sim 8.2$ magnitudes fainter than the target in $R$ (see text). \label{archival-image}} \end{figure*} Archival imaging was obtained from the STScI Digitized Sky Survey\footnote{\url{http://stdatu.stsci.edu/cgi-bin/dss\_form}} at the EPIC coordinates of our target. Data are from the Palomar Observatory Sky Survey (POSS). In Figure~\ref{archival-image} we show the best images among the available archival images in terms of the measured FWHM. We show images taken at two epochs and with two filters: one obtained in 1995 using the RG610 filter (red\footnote{Transmission curve available at \url{http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/dss/TransmissionCurves/POSSII-F-IIIaF-RG610.txt}}, $590-715$ nm) by the POSSII-F survey, and one obtained in 1996 using the RG9 filter (near-infrared\footnote{Transmission curve available at \url{http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/dss/TransmissionCurves/POSSII-N-IVN-RG9.txt}}, $700-970$ nm) by the POSSII-N survey. For reference, we show the aperture used to obtain our K2 photometry (black circle, $12\arcsec$) along with circles of $5\arcsec$ (white solid line) and $2\arcsec$ (white dashed line) radii, centered on the centroid of our target star, which was obtained by fitting a 2D Gaussian to the intensity profile. \begin{figure} \plotone{median_combined.eps} \caption{Modern imaging of our target obtained with LCOGT from CTIO using the SBIG camera with the $R$ filter on 2015/12/27. Note that, although the circles have the same meanings as the ones in Figure~\ref{archival-image}, the scale here is different. \label{modern-image}} \end{figure} New imaging was obtained using the Las Cumbres Observatory Global Telescope Network (LCOGT).
Four images were taken using the SBIG camera with the Bessel $R$ filter on UT 2015/12/27 from the Cerro Tololo Interamerican Observatory (CTIO). Our target star reached close-to-saturation levels ($\sim47000$ counts) in order to have enough photons to detect the nearby stars present in the POSS images. Figure~\ref{modern-image} shows the resulting image obtained by median-combining our four images, along with the same circles as those drawn on Figure~\ref{archival-image}. Given that the largest potential source of false-positive detections in our case comes from blended eclipsing binary systems mimicking a planetary transit event, we note that, given that the depth of the observed transit is $\sim 0.05\%$, if a blended eclipsing binary system were responsible for the observed depth, then assuming a total eclipse of the primary (which is the worst case scenario; all other scenarios should be easier to detect), the eclipsed star would have to be $\sim 8.23$ magnitudes fainter than our target star in the {\em Kepler} bandpass. We can confidently rule out such a bright star down to a distance of $9\arcsec$ from the target star with the POSSII and LCOGT images. For reference, the closest star to the left of the target star (indicated with a red circle) in Figures \ref{archival-image} and \ref{modern-image} is $\sim 8.2$ magnitudes fainter than the target star in the $R$ band. As can be seen in the images, a star that bright would be evident in the archival POSS images and/or in our new LCOGT images at distances larger than $9\arcsec$. \subsection{Adaptive optics \& lucky imaging} \begin{figure*} \plottwo{logimage.eps}{TDRIZZLE_0010_007_CL001_SDSSi__000.eps} \caption{(\textit{Left}) Adaptive optics image (log-scale) obtained with MagAO+Clio2 on 2015-12-06.
The black dashed circle has a $2''$ radius for illustration and comparison with Figure \ref{archival-image}; the grey circle marks a faint source found in our image, which was above our contrast limit but which we identified as being of instrumental origin (see text). (\textit{Right}) AstraLux Sur $i'$-band observations of our candidate on 2015-12-24. The inner black dashed circle indicates $2''$, while the outer black solid circle indicates $5''$, for comparison with Figure \ref{archival-image}. \label{ao-lucky-image}} \end{figure*} Adaptive optics (AO) imaging was obtained with the MagAO+Clio2 instrument mounted on the Magellan Clay telescope at Las Campanas Observatory on December 6th, using the $K_s$ filter with the full Clio2 $1024\times512$ pixel frames of the narrow camera (f/37.7). The natural guide star system was used and, because our target is relatively bright, it was used as the guide star. 32 images with exposure times of 30 sec each were taken at five different positions of the camera (nodding), all of them at different rotator offset angles. Due to a motor failure of the instrument, the nodding and rotation patterns were not able to cover the full $16\arcsec\times8\arcsec$ field of view around the star. However, this gave us enough data to rule out stars within a $2\arcsec$ radius. We follow methods similar to those described in \cite{morzinski15} to reduce our images, which we briefly describe here; a \texttt{Python} implementation of these methods is available on Github\footnote{\url{https://github.com/nespinoza/ao-reduction}}. First, the images were corrected for dark current but not flat-fielded, because the flats show an uneven flux level as a result of optical distortions and not of intrinsic pixel sensitivities \citep[see section A.3 in ][for a detailed explanation of this effect]{morzinski15}. A bad pixel mask provided by \cite{morzinski15} was used in order to mask bad pixels.
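The dark-current correction and bad-pixel masking described above can be sketched as follows (a hedged illustration with our own names; the NaN convention for masked pixels is our choice, so that later median combinations can ignore them via `np.nanmedian`):

```python
import numpy as np

def calibrate(frame, dark, bad_pixel_mask):
    """Dark-subtract a raw frame and blank out known bad pixels.
    Flat-fielding is deliberately skipped (see text); masked pixels
    are set to NaN."""
    out = frame - dark
    out[bad_pixel_mask] = np.nan
    return out
```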
After these corrections are applied to each image, we obtain a median image from our 32 frames in order to get an estimate of the background flux, which we then subtract from each of the individual frames. To further correct for differences in the sky backgrounds of each image, we apply a 2D median filter with a 200 pixel ($\approx 3\arcsec$) window, which takes care of large-scale fluctuations in each image. The background-subtracted images are then merged by first rotating them to the true north \citep[using the astrometric calibration described in][]{morzinski15} and combining them using the centroid of our target star (obtained by fitting a 2D Gaussian to the profile) as a common reference point between the images. Our resulting AO image, obtained by combining our 32 images, is shown in Figure~\ref{ao-lucky-image}. A 2D Gaussian fit to the target star gives a FWHM of $0\farcs2$, which we set as our resolution limit. The limiting contrasts in our AO observations in the $K_s$ band were estimated as follows. First, a 2D Gaussian fit to the target star was made and used to remove it from the image. Although a 2D Gaussian does not provide a perfect fit at the center, it reproduces the wings of the PSF well, which is our aim. Then, at each radial distance $n\times$FWHM away from the target star, where $n=1,2,\ldots,15$, a fake source was injected at $15$ different angles. Sources with magnitude differences from 11 to 0 were injected in steps of $0.1$ magnitudes, and a detection was defined as 3 or more pixels being $5\sigma$ above the median flux level at that position. The results of our injection and recovery experiments are plotted in Figure~\ref{contrast-plot}. Only one source was detected at $\sim 2\arcsec$ from the target.
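The injection and recovery procedure described above can be sketched as follows. This is a minimal illustration only, not the actual reduction code: the function names, the Gaussian PSF model and the global noise estimate are our own simplifying assumptions.

```python
import numpy as np

def inject_source(image, total_flux, x0, y0, fwhm):
    """Add a fake 2D Gaussian point source of a given total flux."""
    sigma = fwhm / 2.355
    y, x = np.indices(image.shape)
    psf = np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))
    return image + total_flux * psf / (2.0 * np.pi * sigma**2)

def is_detected(image, x0, y0, box=2, nsigma=5.0, npix=3):
    """Detection criterion from the text: npix or more pixels more
    than nsigma above the median flux level."""
    cut = image[int(y0) - box:int(y0) + box + 1,
                int(x0) - box:int(x0) + box + 1]
    med, sig = np.median(image), np.std(image)
    return int((cut > med + nsigma * sig).sum()) >= npix
```

The $5\sigma$ contrast at each radial distance $n\times$FWHM is then the faintest injected magnitude recovered at all 15 azimuthal angles.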
The shape and position of this object are inconsistent with a speckle, but the source is very faint: we measure a magnitude difference of $\Delta K_s = 9.8$ with respect to the target, which places it just above our contrast level at that position (see Figure~\ref{ao-lucky-image}, where the source is indicated with a grey circle in the upper right). A careful assessment of its PSF shape, however, showed it to be inconsistent with the PSF of our star. Comparing its PSF with known ``ghosts'' on the image, on the other hand, revealed that this source is not of astrophysical but of instrumental origin. \begin{figure} \plotone{contrast_curve.eps} \caption{5-$\sigma$ contrast curves obtained from our MagAO+Clio2 $K_s$ band (black line) and AstraLux Sur $i'$-band (red line) observations of our candidate. \label{contrast-plot}} \end{figure} In order to search for companions at larger separations, lucky imaging was obtained with AstraLux Sur mounted on the New Technology Telescope (NTT) at La Silla Observatory \citep{hippler09} on 2015/12/24 using the $i'$ band. Figure~\ref{ao-lucky-image} shows our final image, obtained by combining the best 10\% of the images with a drizzle algorithm. Because the PSF shape obtained for our lucky imaging is complex, because we had already ruled out companions inside a $2\arcsec$ radius with MagAO+Clio2, and because our objective with lucky imaging was to rule out companions at larger angular distances, we did not apply PSF subtraction algorithms to obtain the $5\sigma$ contrasts at those distances. Instead, we used simple aperture photometry to estimate the $5\sigma$ contrasts outside the $2\arcsec$ radius by performing a procedure similar to that described in \cite{wollert15}.
In summary, for distances larger than $2\arcsec$ from the estimated centroid of the image (where the contribution of the target star's PSF to the background level is low), we estimated the noise level in a $5\times 5$ pixel box at each radial distance at $15$ different angles. The magnitude contrast was then calculated by comparing the flux of the target star, obtained with a 5-pixel radius aperture around it, with the flux in a 5-pixel radius aperture at the desired distance from the star, where $5\sigma$ counts are added to each pixel at that distance before performing the aperture photometry. The magnitude contrast at a given distance is finally obtained as the average of the values obtained at the different angles. The resulting $5\sigma$ contrasts are presented in Figure~\ref{contrast-plot}. We study the constraints that our archival, new, AO and lucky imaging observations put on the false-positive probabilities and transit dilutions in the next section. \section{Analysis} \subsection{Stellar properties} \begin{table*}[!ht] \begin{center} \caption{Stellar parameters of BD+20594.} \label{table:stellar-params} \begin{threeparttable} \centering \begin{tabular}{ lcr } \hline \hline Parameter & Value & Source \\ \hline Identifying Information\\ ~~~EPIC ID & 210848071 & EPIC\\ ~~~2MASS ID & 03343623+2035574 & 2MASS\\ ~~~R.A. (J2000, h:m:s) & 03$^h$34$^m$36.23$^s$ & EPIC\\ ~~~DEC (J2000, d:m:s) & 20$^{\circ}$35$'$57.23$''$ & EPIC\\ ~~~R.A. p.m. (mas/yr) & $36.7\pm0.7$ & UCAC4\\ ~~~DEC p.m.
(mas/yr) & $-51.8\pm1.3$ & UCAC4\\ Spectroscopic properties\\ ~~~$T_\textnormal{eff}$ (K) & $5766\pm 99$ & ZASPE\\ ~~~Spectral Type & G & ZASPE\\ ~~~[Fe/H] (dex) & $-0.15\pm 0.05$ & ZASPE\\ ~~~$\log g_*$ (cgs)& $4.5\pm 0.08$ & ZASPE\\ ~~~$v\sin(i)$ (km/s)& $3.3\pm 0.31$ & ZASPE\\ Photometric properties\\ ~~~$K_p$ (mag)& 11.04 & EPIC\\ ~~~$B$ (mag)& $11.728\pm 0.044$ & APASS\\ ~~~$V$ (mag)& $11.038\pm 0.047$ & APASS\\ ~~~$g'$ (mag)& $11.352\pm 0.039$ & APASS\\ ~~~$r'$ (mag)& $11.872\pm 0.050$ & APASS\\ ~~~$i'$ (mag)& $10.918\pm 0.540$ & APASS\\ ~~~$J$ (mag)& $9.770\pm 0.022$ & 2MASS\\ ~~~$H$ (mag)& $9.432\pm 0.022$ & 2MASS\\ ~~~$K_s$ (mag)& $9.368\pm 0.018$ & 2MASS\\ Derived properties\\ \vspace{0.1cm} ~~~$M_*$ ($M_\Sun$)& $0.961^{+0.032}_{-0.029}$ & \texttt{isochrones}+ZASPE\\ \vspace{0.1cm} ~~~$R_*$ ($R_\Sun$)& $0.928^{+0.055}_{-0.040}$ & \texttt{isochrones}+ZASPE\\ \vspace{0.1cm} ~~~$\rho_*$ (g/cm$^3$)& $1.70^{+0.20}_{-0.26}$ & \texttt{isochrones}+ZASPE\\ \vspace{0.1cm} ~~~$L_*$ ($L_\Sun$)& $0.88^{+0.15}_{-0.12}$ & \texttt{isochrones}+ZASPE\\ \vspace{0.1cm} ~~~Distance (pc)& $152.1^{+9.7}_{-7.4}$ & \texttt{isochrones}+ZASPE\\ \vspace{0.1cm} ~~~Age (Gyr)& $3.34^{+1.95}_{-1.49}$ & \texttt{isochrones}+ZASPE\\ \hline \end{tabular} \textit{Note}. Logarithms given in base 10. \end{threeparttable} \end{center} \end{table*} In order to obtain the properties of the host star, we made use of both photometric and spectroscopic observables of our target. For the former, we retrieved $B$, $V$, $g'$, $r'$ and $i'$ photometric magnitudes from the AAVSO Photometric All-Sky Survey \citep[APASS,][]{apass} and $J$, $H$ and $K_s$ photometric magnitudes from 2MASS for our analysis. For the spectroscopic observables, we used the Zonal Atmospherical Stellar Parameter Estimator \citep[\texttt{ZASPE},][]{brahm2016} algorithm using our HARPS spectra as input.
\texttt{ZASPE} estimates the atmospheric stellar parameters and $v \sin i$ from our high-resolution echelle spectra via a least-squares method against a grid of synthetic spectra, using the zones of the spectra that are most sensitive to changes in the atmospheric parameters. \texttt{ZASPE} obtains reliable errors in the parameters, as well as the correlations between them, by assuming that the principal source of error is the systematic mismatch between the data and the optimal synthetic spectra, which arises from the imperfect modelling of the stellar atmosphere or from poorly determined parameters of the atomic transitions. We used a synthetic grid provided by \cite{brahm2016}, and the spectral region considered for the analysis was from 5000 $\AA$ to 6000 $\AA$, which includes a large number of atomic transitions and the pressure-sensitive Mg Ib lines. The resulting atmospheric parameters obtained through this procedure were $T_{\textnormal{eff}} = 5766\pm 99$ K, $\log(g) = 4.5\pm 0.08$, $[\textnormal{Fe/H}] = -0.15\pm 0.05$ and $v\sin(i) = 3.3\pm 0.31$ km/s. With these spectroscopic parameters and the photometric properties at hand, we made use of the Dartmouth Stellar Evolution Database \citep{dotter2008} to obtain the radius, mass, age and distance to the host star using isochrone fitting with the \texttt{isochrones} package \citep{morton2015}. We take into account the uncertainties in the photometric and spectroscopic observables to estimate the stellar properties, using the \texttt{emcee} \citep{emcee2013} implementation of the affine-invariant Markov Chain Monte Carlo (MCMC) ensemble sampler proposed in \cite{GW2010} to explore the posterior parameter space. We obtain a radius of $R_* = 0.928^{+0.055}_{-0.040}R_\Sun$, mass $M_* = 0.961^{+0.032}_{-0.029}M_\Sun$, age of $3.3^{+1.9}_{-1.5}$ Gyr and a distance to the host star of $152.1^{+9.7}_{-7.4}$ pc.
The distance to the star was also estimated using the spectroscopic twin method described in \cite{jofre2015}, which is independent of any stellar models. The values obtained were $158.3 \pm 5.4$ pc when using 2MASS $J$-band photometry and $160.0 \pm 5.7$ pc if $H$-band photometry was used instead, where the stars HIP 1954, HIP 36512, HIP 49728 and HIP 58950 were used as reference for the parallax. These values are in very good agreement with the value obtained from isochrone fitting. The stellar parameters of the host star are summarized in Table~\ref{table:stellar-params}. \subsection{Joint analysis} We performed a joint analysis of the photometry and the radial velocities using the \textbf{EXO}planet tra\textbf{N}sits and r\textbf{A}d\textbf{I}al ve\textbf{L}ocity fitt\textbf{ER}, \texttt{exonailer}, which is made publicly available at Github\footnote{\url{http://www.github.com/nespinoza/exonailer}}. For the transit modeling, \texttt{exonailer} makes use of the \texttt{batman} code \citep{batman2015}, which allows the user to use different limb-darkening laws in an easy and efficient way. If chosen to be free parameters, the limb-darkening coefficients are sampled in an informative way using the triangular sampling technique described in \cite{kipping2013}. For the quadratic and square-root laws, we use the transformations described in \cite{kipping2013} in order to sample the physically plausible values of the limb-darkening coefficients. For the logarithmic law we use the transformations described in \cite{EJ2016}, which presents the sampling of the limb-darkening parameters for the more usual form of the logarithmic law to allow for easier comparison with theoretical tables \cite[if the geometry of the system is properly taken into account, see][]{EJa2015}.
The code also allows the user to fit the lightcurve assuming either a pure white-noise model or an underlying flicker ($1/f$) noise plus white-noise model, using the wavelet-based technique described in \cite{CW2009}. For the RV modelling, \texttt{exonailer} assumes Gaussian uncertainties and adds a jitter term in quadrature to them. The joint analysis is then performed using the \texttt{emcee} MCMC ensemble sampler \citep{emcee2013}. For the joint modelling of the dataset presented here, we tried both eccentric and circular fits. For the radial velocities, broad priors were set on the semi-amplitude, $K$, and the RV zero point, $\mu$. The former was centered on zero, while the latter was centered on the observed mean of the RV dataset. Note that our priors allow us to explore negative radial velocity amplitudes, which is intentional, as we want to explore the possibility of the RVs being consistent with a flat line (i.e., $K=0$). Initially a jitter term was included, but it was fully consistent with zero, so we fixed it to zero in our analysis. As for the non-circular solutions, flat priors were set on $e$ and on $\omega$ instead of fitting for the Laplace parameters $e\cos(\omega)$ and $e\sin(\omega)$, because the latter imply implicit priors on $e$ and $\omega$ that we want to avoid \citep{anglada-escude2013}. For the lightcurve modelling, we used the selective resampling technique described in \cite{kipping2010} in order to account for the 30 min cadence of the K2 photometry, which smears the observed transit shape. In order to minimize the biases in the retrieved transit parameters, we fit for the limb-darkening coefficients in our analysis \citep[see][]{EJa2015}. In order to decide which limb-darkening law to use, we apply the method described in \cite{EJ2016} which, through simulations and given the lightcurve's properties, aids in selecting the best limb-darkening law in terms of both precision and bias using a mean-squared-error (MSE) approach.
In this case, the law that provides the minimum MSE is the quadratic law, and we use this law to parametrize the limb-darkening effect. In addition, the K2 photometry is not good enough to constrain the ingress and egress times, because only two transits were observed in long-cadence mode, which provides poor phase coverage; this implies that the errors on $a/R_*$ are rather large. Because of this, we took advantage of the stellar parameters obtained with our HARPS spectra, and derived a value of $a/R_* = 54.83^{+2.19}_{-3.16}$ for this parameter from them \citep[see][]{sozzetti2007}. This value was used as a Gaussian prior in our joint analysis. We used the largest of the error bars as the standard deviation of the distribution, which is centered on the quoted median value of the parameter\footnote{Performing a joint analysis with a large uniform prior on $a/R_*$ spanning $a/R_* \in (25,70)$ gives a posterior estimate of $a/R_* = 55.92^{+5.64}_{-13.11}$ for this parameter, which is in excellent agreement with this spectroscopically derived value.}. We tried fitting both a flicker-noise model and a white-noise model, but the flicker-noise model parameters were consistent with no $1/f$ noise component, so the fit was finally obtained assuming white noise. $500$ \textit{walkers} were used to evolve the MCMC, and each one explored the parameter space in $2000$ links, $1500$ of which were used as burn-in samples. This gave a total of $500$ links sampled from the posterior per \textit{walker}, for a total of $250000$ samples from the posterior distribution. These samples were tested for convergence both visually and using the \cite{geweke92} convergence test.
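For reference, the spectroscopic prior on $a/R_*$ follows from Kepler's third law written in terms of the stellar density, $a/R_* = \left(G\rho_* P^2/3\pi\right)^{1/3}$. A quick sanity check of this number (a sketch in Python, using the median stellar density from Table~\ref{table:stellar-params}):

```python
import math

G = 6.674e-8             # gravitational constant [cgs]
P = 41.6855 * 86400.0    # orbital period [s]
rho_star = 1.70          # stellar density [g cm^-3]

a_over_Rs = (G * rho_star * P**2 / (3.0 * math.pi)) ** (1.0 / 3.0)
print(a_over_Rs)  # ~54, consistent with the adopted prior
```

The residual difference with the quoted $54.83^{+2.19}_{-3.16}$ (which was propagated from the full stellar-density posterior rather than from its median) is well within the stated uncertainty.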
\begin{figure*} \plotone{CL001-04_lc.eps} \caption{Phase-folded photometry (grey points; circles for the first transit, triangles for the second transit) and best-fit transit lightcurves for the circular (red, solid line) and eccentric (red, dashed line) fits for our planet obtained from our joint analysis. Note that the difference between the lightcurves for the two fits is very small. \label{k2lc-fit}} \end{figure*} \begin{figure} \plotone{CL001-04_rvs.eps} \caption{Phase-folded HARPS radial velocities (grey) and best-fit radial velocity models for both circular (red, solid line) and eccentric (red, dashed line) fits using our joint analysis. The light blue bands indicate regions that have been repeated for better visualisation of the RV curve. \label{k2rv-fit}} \end{figure} Figures~\ref{k2lc-fit} and \ref{k2rv-fit} show close-ups of the phased photometry and radial velocities, respectively, along with the best-fit models for both circular (red, solid line) and non-circular (red, dashed line) fits obtained from our joint analysis of the dataset. The lightcurve fits for both models are very similar, but in the RVs the differences are evident. In particular, the eccentric fit gives a slightly smaller semi-amplitude than, albeit consistent with, the one obtained with the circular fit. For the eccentric fit, we obtain $e = 0.096^{+0.089}_{-0.066}$, $\omega = 53^{+17}_{-23}$ degs and a semi-amplitude of $K = 2.9^{+1.1}_{-1.0}$ m sec$^{-1}$. For the circular orbit, we find a semi-amplitude of $K = 3.1^{+1.1}_{-1.1}$ m sec$^{-1}$. Since the differences in the lightcurves are very small, we analyze the likelihood function of the radial-velocity data in order to compare the models and decide which is preferred by the data. We find that both models are statistically indistinguishable, with $\Delta\textnormal{AIC}\approx\Delta\textnormal{BIC}\approx 2$.
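The comparison above uses the standard definitions $\textnormal{AIC} = 2k - 2\ln\mathcal{L}$ and $\textnormal{BIC} = k\ln n - 2\ln\mathcal{L}$, with $k$ the number of free parameters and $n$ the number of data points. A sketch with purely illustrative (hypothetical) log-likelihood values, in which the eccentric model adds two parameters ($e$ and $\omega$) but gains only about one unit of $\ln\mathcal{L}$:

```python
import math

def aic(k, lnL):
    """Akaike information criterion."""
    return 2.0 * k - 2.0 * lnL

def bic(k, lnL, n):
    """Bayesian information criterion."""
    return k * math.log(n) - 2.0 * lnL

# hypothetical numbers for illustration only
lnL_circ, k_circ = -21.0, 2   # circular model: K, mu
lnL_ecc,  k_ecc  = -20.0, 4   # eccentric model adds e and omega
n = 13                        # illustrative number of RV points

print(aic(k_ecc, lnL_ecc) - aic(k_circ, lnL_circ))  # 2.0
```

With so small a gain in $\ln\mathcal{L}$ for two extra parameters, both criteria penalize the eccentric model, leaving the two models statistically indistinguishable.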
We thus choose the simpler of the two models, which is the circular one, and report the final parameters using this as our final model. The resulting parameters of our fit are tabulated in Table~\ref{table:planet-params}. It is interesting to note that the radial velocity semi-amplitude is inconsistent with zero at almost the $3\sigma$ level. Moreover, we are confident that these variations do not arise from activity, as all the correlation coefficients we calculate between our RVs and the different activity indices given in Table \ref{table:rv_list} are consistent with $0$ at the $\approx 1\sigma$ level, and all variations of the activity indices at the period and time of transit center found for our target are consistent with flat lines. Interestingly, the radial-velocity semi-amplitude is large for a planetary radius of only $R_p = 2.23^{+0.14}_{-0.11}R_\Earth$; the $K=3.1^{+1.1}_{-1.1}$ m/s semi-amplitude implies a mass of $M_p = 16.3^{+6.0}_{-6.1} M_\Earth$, which at face value could be consistent with a rocky composition, a rare property for a Neptune-sized exoplanet such as BD+20594b. We caution, however, that this interpretation has to be taken with care, as we have poor phase coverage on the ``up'' quadrature. We put these values in the context of discovered exoplanets of similar size in \S4.
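As a consistency check, the quoted mass can be recovered from the circular-orbit radial-velocity relation $K = (2\pi G/P)^{1/3} M_p \sin i \, (M_* + M_p)^{-2/3}$. The sketch below (not the actual \texttt{exonailer} machinery) inverts it in the limit $M_p \ll M_*$ with $\sin i \approx 1$, appropriate here given $i \approx 89.6^{\circ}$:

```python
import math

G = 6.674e-11                      # [m^3 kg^-1 s^-2]
M_SUN, M_EARTH = 1.989e30, 5.972e24

def planet_mass_mearth(K, P_days, Mstar_msun):
    """Planet mass [M_Earth] from the RV semi-amplitude K [m/s],
    assuming a circular orbit, sin(i) ~ 1 and Mp << M*."""
    P = P_days * 86400.0
    Mstar = Mstar_msun * M_SUN
    return K * (P / (2.0 * math.pi * G))**(1.0 / 3.0) \
             * Mstar**(2.0 / 3.0) / M_EARTH

print(planet_mass_mearth(3.1, 41.6855, 0.961))  # ~16 M_Earth
```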
\begin{table}[!ht] \caption{Orbital and planetary parameters for BD+20594.} \label{table:planet-params} \begin{threeparttable} \centering \begin{tabular}{ lcl } \hline \hline Parameter & Prior & Posterior Value \\ \hline Lightcurve parameters\\ \vspace{0.1cm} ~~~$P$ (days)\dotfill & $\mathcal{N}(41.68,0.1)$ & 41.6855$^{+0.0030}_{-0.0031}$ \\ \vspace{0.1cm} ~~~$T_0-2450000$ (${\textnormal{BJD}_{\textnormal{TDB}}}$)\dotfill & $\mathcal{N}(7151.90,0.1)$ & 7151.9021$^{+0.0042}_{-0.0047}$ \\ \vspace{0.1cm} ~~~$a/R_{\star}$ \dotfill &$\mathcal{N}(54.83,3.16)$& $55.8^{+3.3}_{-3.3}$ \\ \vspace{0.1cm} ~~~$R_{p}/R_{\star}$\dotfill & $\mathcal{U}(0,0.1)$ & 0.02204$^{+0.00058}_{-0.00057}$ \\ \vspace{0.1cm} ~~~$i$ (deg)\dotfill & $\mathcal{U}(80,90)$ &89.55$^{+0.17}_{-0.14}$\\ \vspace{0.1cm} ~~~$q_1$ \dotfill & $\mathcal{U}(0,1)$&$0.38^{+0.29}_{-0.16}$\\ \vspace{0.1cm} ~~~$q_2$ \dotfill & $\mathcal{U}(0,1)$& $0.52^{+0.32}_{-0.30}$\\ \vspace{0.1cm} ~~~$\sigma_w$ (ppm) \dotfill & $\mathcal{J}(50,80)$ & 55.00$^{+0.73}_{-0.72}$\\ \vspace{0.1cm} RV parameters\\ \vspace{0.1cm} ~~~$K$ (m s$^{-1}$)\dotfill & $\mathcal{N}(0,100)$ & $3.1^{+1.1}_{-1.1}$\\ \vspace{0.1cm} ~~~$\mu$ (km s$^{-1}$)\dotfill & $\mathcal{N}(-20.337,0.1)$ & $-20.33638^{+0.00073}_{-0.00073}$ \\ \vspace{0.1cm} ~~~$e$ \dotfill & --- & $0$ (fixed) \\ \vspace{0.1cm} Derived Parameters\\ \vspace{0.1cm} ~~~$M_p$ ($M_\Earth$) \dotfill &---& $16.3^{+6.0}_{-6.1}$ \\ \vspace{0.1cm} ~~~$R_p$ ($R_\Earth$) \dotfill &---& $2.23^{+0.14}_{-0.11}$ \\ \vspace{0.1cm} ~~~$\rho_p$ (g/cm$^3$) \dotfill &---& $7.89^{+3.4}_{-3.1}$ \\ \vspace{0.1cm} ~~~$\log g_p$ (cgs) \dotfill &---& $3.50^{+0.14}_{-0.21}$ \\ \vspace{0.1cm} ~~~$a$ (AU) \dotfill &---& $0.241^{+0.019}_{-0.017}$ \\ \vspace{0.1cm} ~~~$V_\textnormal{esc}$ (km/s) \dotfill &---& $30.2^{+5.3}_{-6.2}$ \\ \vspace{0.1cm} ~~~$T_\textnormal{eq}$ (K) \dotfill &&\\ \vspace{0.1cm} ~~~\ Bond albedo of $0.0$ &---& $546^{+19}_{-18}$ \\ \vspace{0.1cm} ~~~\ Bond albedo of $0.75$ &---& 
$386^{+13}_{-12}$ \\ \hline \end{tabular} \textit{Note}. Logarithms given in base 10. $\mathcal{N}(\mu,\sigma)$ stands for a normal prior with mean $\mu$ and standard-deviation $\sigma$, $\mathcal{U}(a,b)$ stands for a uniform prior with limits $a$ and $b$ and $\mathcal{J}(a,b)$ stands for a Jeffrey's prior with the same limits. \end{threeparttable} \end{table} \subsection{Planet scenario validation} In order to validate the planet scenario implied in the previous subsection, we make use of the formalism described in \cite{morton2012} as implemented in the publicly available \texttt{vespa}\footnote{\url{https://github.com/timothydmorton/VESPA}} package. In short, \texttt{vespa} considers all the false-positive scenarios that might give rise to the observed periodic dips in the light curve and, using photometric and spectroscopic information on the target star, calculates the false-positive probability (FPP), which is the complement of the probability of there being a planet given the observed signal. Because our archival and modern imaging presented in \S2.4 rule out any relevant companion at distances larger than $9\arcsec$, we consider this radius in our search for possible false-positive scenarios using \texttt{vespa}, as it defines the area around the target star in which one might suspect false positives could arise. The algorithm calculates the desired probability as \begin{eqnarray*} \textnormal{FPP} = \frac{1}{1+f_p P }, \end{eqnarray*} \noindent where $f_p$ is the occurrence rate of the observed planet (at the specific observed radius) and $P = L_\textnormal{TP}/L_\textnormal{FP}$, where TP indicates the transiting-planet scenario and FP the false-positive scenario, and each term is defined as $L_i = \pi_i \mathcal{L}_i$, where $\pi_i$ is the prior probability and $\mathcal{L}_i$ is the likelihood of the $i$-th scenario.
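Plugging in the numbers derived for our target below ($P = 4288.79$, $f_p = 0.078$), this formula gives an FPP of a few times $10^{-3}$; as a one-line check:

```python
def fpp(f_p, like_ratio):
    """False-positive probability, FPP = 1 / (1 + f_p * P),
    with P = L_TP / L_FP the scenario likelihood ratio."""
    return 1.0 / (1.0 + f_p * like_ratio)

print(fpp(0.078, 4288.79))  # ~3e-3
```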
For our target, considering all the information gathered and the fact that no secondary eclipse larger than $\approx 165$ ppm (i.e., at $3\sigma$) is detected, we obtain a value of $P = 4288.79$. As for the occurrence rate of planets like the one observed, we consider the rates found by \cite{petigura2013} for planets between $2-2.83R_\Earth$ with periods between 5 and 50 days orbiting solar-type stars, which is $7.8\%$, i.e., $f_p = 0.078$. This gives us a false-positive probability of $\textnormal{FPP} = 3\times 10^{-3}$. Given that this probability is smaller than the usual $1\%$ threshold \cite[e.g.,][]{montet2015}, we consider our planet validated. We note that this FPP is an upper limit on the real FPP given our AO and lucky-imaging observations. Both observations rule out an important part of the parameter space for blending scenarios between $0\farcs2$ and $5\arcsec$ from the star, which are the main source of false positives for our observations. \subsection{Transit dilutions} As will be discussed in the next section, both the radius and the mass of the planet put BD+20594b in a very interesting part of the mass-radius diagram. Therefore, it is important to discuss the constraints that our spectroscopy and our archival, new, AO and lucky imaging observations pose on possible background stars that might dilute the transit depth and thus cause us to underestimate the transit radius. Given that the factor by which the planetary radius is changed by a collection of stars inside the aperture used to obtain the photometry of the target star is given by $\sqrt{1/F_\%}$, where $F_\%$ is the fraction of the total flux in the aperture added by the star being transited, we estimate that only stars with magnitude differences $\lesssim 2$ are able to change the transit radius by amounts similar to the quoted uncertainties in Table \ref{table:planet-params}.
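The $\Delta m \lesssim 2$ threshold quoted above follows directly from the $\sqrt{1/F_\%}$ dilution factor: a blended star $\Delta m$ magnitudes fainter than the host contributes a fraction $10^{-0.4\Delta m}$ of the host's flux. A sketch:

```python
import math

def radius_inflation(dmag):
    """Factor by which the true planet radius exceeds the diluted
    (measured) one if a star dmag magnitudes fainter than the host
    falls inside the photometric aperture."""
    f_contam = 10.0**(-0.4 * dmag)
    F = 1.0 / (1.0 + f_contam)   # host's fraction of aperture flux
    return math.sqrt(1.0 / F)

print(radius_inflation(2.0))  # ~1.08, i.e., an ~8% radius bias
```

An $\sim 8\%$ bias is comparable to the $\sim 6\%$ relative uncertainty on $R_p$ in Table~\ref{table:planet-params}, while for $\Delta m = 5$ the bias drops to $\sim 0.5\%$.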
We note that such magnitude differences in the Kepler bandpass are ruled out from $0\farcs2$ to the aperture radius used to obtain the photometry for our target star: our AO and lucky imaging observations rule out companions of such magnitudes from $0\farcs2$ to $5\arcsec$ (see Figure \ref{contrast-plot}). On the other hand, stars with magnitude differences of that order should be evident in our retrieved archival and new images presented in \S2.4, at least at distances of $5\arcsec$ from our target star, and up to and beyond the $12\arcsec$ aperture used to obtain the K2 photometry. Given that the remaining unexplored area on the sky is very small (only $0\farcs2$ around our target star), and that a star of such magnitude should produce an evident peak in the cross-correlation function of our high-resolution spectra, which is not seen, we are confident that our derived transit radius is unaffected by dilution from background field stars. \section{Discussion} \begin{figure*} \plotone{mass_radius_diagram.eps} \caption{Mass-radius relationship for planets with secure masses and radii (at the $3-\sigma$ level, grey points) having masses less than $32M_\Earth$ and radii less than $4R_\Earth$. BD+20594b is plotted in red, while Solar System planets are plotted as coloured circles (in the lower left, Earth in blue and Venus in orange; in the upper right, Neptune in cyan and Uranus in dark blue). Theoretical 2-layer mass-radius models from \cite{zs2016} are plotted with different colors; a $100\%$ water composition is depicted in blue, a $100\%$ rock (MgSiO$_3$) composition in brown and a $100\%$ Fe composition in grey. The light blue dashed line indicates the best-fit composition of small rocky exoplanets obtained by \cite{zs2016} for reference ($74\%$ MgSiO$_3$, $26\%$ Fe); the best-fit composition of BD+20594b is that of $100\%$ MgSiO$_3$.
\label{mr-diagram}} \end{figure*} As mentioned in the previous section, the large mass ($M_p = 16.3^{+6.0}_{-6.1} M_\Earth$) for the calculated radius ($R_p = 2.23^{+0.14}_{-0.11}R_\Earth$) found for BD+20594b is very interesting. Figure~\ref{mr-diagram} compares BD+20594b with other discovered exoplanets with radii less than $4R_\Earth$ ($\sim$ Neptune) and masses smaller than $32M_\Earth$ (limits of the theoretical models) as retrieved from exoplanets.eu\footnote{Data retrieved on 23/12/2015}, except for the Kepler-10 planets, for which we use the masses obtained by \cite{weiss2016}, along with 2-layer models obtained from \cite{zs2016}. As can be seen, BD+20594b lies in a regime in radius in which most exoplanets have low densities and are composed of large amounts of volatiles \citep{Rogers2015}. In particular, taking the mass-radius estimates for BD+20594b at face value, the best-fit composition assuming a 2-layer model for the planet is $100\%$ MgSiO$_3$, i.e., a pure rock composition, positioning the planet on the boundary between ``possibly rocky'' and ``non-rocky'' planets. More realistic three-layer alternatives, however, can explain the observed radius and mass of the planet if a rock/Fe core has an added volatile envelope, composed of either water or H/He \citep[see, e.g., the modelling for Kepler-10c in ][]{weiss2016}. If, for example, we assume an Earth-like interior composition for the planet (i.e., $74\%$ MgSiO$_3$ and $26\%$ Fe) and again take the mass and radius estimates at face value, three-layer models obtained from \cite{zs2016} allow a $0.2R_\Earth$ water envelope for the planet (corresponding to $8\%$ in mass). This in turn sets a maximum radius for a possible H/He envelope, which would in any case amount to a layer of much less than a percent in mass, significantly smaller than the one modelled for Kepler-10c.
Given that the errors on the mass of BD+20594b are large enough to be consistent with several compositions, a careful assessment must be made in order to explore its possibly rocky nature. To this end, we follow the approach introduced by \cite{Rogers2015} and compute $p_\textnormal{rocky}$, the posterior probability that a planet is sufficiently dense to be rocky, which is defined as the fraction of the joint mass-radius posterior distribution that falls within compositions consistent with being rocky. A probably rocky planet, then, would have $p_\textnormal{rocky}\sim 1$, while a planet with a density that is too low to be rocky would result in $p_\textnormal{rocky}\sim 0$. The definition of ``rocky planet'' used in \cite{Rogers2015}, which we adopt in this work, comprises those planets spanning compositions between $100\%$ rock and $100\%$ Fe. Although this definition is based on simple 2-layer models for the planetary composition, and in theory for a given point in the mass-radius diagram planets could have denser compositions with a gaseous envelope on top, we use this metric anyway in order to compare our newly discovered exoplanet with the population of already discovered small planets. This is an important point to make, as $p_\textnormal{rocky}$ is actually an upper limit on the probability that a planet is indeed rocky. To compute this value and compare it to the population of exoplanets with secure masses and radii discovered so far, we use the models from \cite{zs2016}.
To sample from the posterior distributions given the estimates published in the literature for the different exoplanets, we use the methods described in Appendix~A of \cite{EJa2015} and assume these radii and masses are drawn from skew-normal distributions in order to make use of the asymmetric error bars published for those parameters, while we use the posterior samples of our MCMC fits described in \S3.2 to sample from the joint posterior distribution of the mass and radius of BD+20594b. Our results are depicted in Figure~\ref{procky}, where we also indicate the threshold radius found by \cite{Rogers2015} at which there is a significant transition between rocky and non-rocky exoplanets, with smaller exoplanets having in general rocky compositions and larger exoplanets having less dense compositions. \begin{figure} \plotone{procky.eps} \caption{The posterior probability that a planet is sufficiently dense to be rocky, $p_\textnormal{rocky}$, as a function of radius for all exoplanets with secure masses and radii (grey points), along with the estimated values for BD+20594b (red point). The black dashed line shows the transition between rocky (to the left) and non-rocky (to the right of the diagram) planets, along with the 95\% confidence band on this threshold (blue band). \label{procky}} \end{figure} As is evident in Figure~\ref{procky}, BD+20594b occupies an interesting position in this diagram. The closest exoplanet to BD+20594b in this diagram is Kepler-20b, which has a radius of $1.91^{+0.12}_{-0.21}R_\Earth$ and is only $2\sigma$ away from the ``rocky'' boundary. BD+20594b, on the other hand, is more than $5\sigma$ away from it. With a value of $p_\textnormal{rocky}\sim 0.43$, BD+20594b is the first Neptune-sized exoplanet to date with a large (compared to the typical Neptune-sized planet) posterior probability of being dense enough to be rocky.
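The $p_\textnormal{rocky}$ computation can be illustrated with a simple Monte Carlo over the mass-radius posterior. The sketch below is not the procedure used in this work (which draws skew-normal samples and uses the full \cite{zs2016} model grid); it assumes independent Gaussian posteriors and the approximate pure-rock relation $R/R_\Earth \approx 1.07\,(M/M_\Earth)^{1/3.7}$ from \cite{zs2016}:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Gaussian stand-ins for the joint mass-radius posterior of BD+20594b
mass = rng.normal(16.3, 6.0, N)     # [M_Earth]
radius = rng.normal(2.23, 0.13, N)  # [R_Earth]
ok = mass > 0.0                     # discard unphysical draws

# approximate pure-rock (100% MgSiO3) mass-radius relation
r_rock = 1.07 * mass[ok]**(1.0 / 3.7)

p_rocky = float(np.mean(radius[ok] <= r_rock))
print(p_rocky)  # ~0.5 with these toy posteriors (the skew-normal
                # treatment in the text gives ~0.43)
```

The exact value depends on the shape of the posteriors and on the composition model, which is why the full treatment is needed for the quoted number.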
The large mass obtained for BD+20594b implies that if the planet ever had the chance to acquire an atmosphere, it should retain it. However, if the planet is indeed primarily composed of rock, given its small radius, a significant H/He envelope is unlikely in the usual settings of planet formation. Calculations using core accretion theory by \cite{IH2012} predict that if the mass of rock in the protoplanet is on the order of $\sim 10 M_\Earth$, accretion of a $\sim 1M_\Earth$ H/He envelope should occur even for disk dissipation time-scales on the order of $\sim 10$ kyr. Even in the case of a large opacity of the protoplanetary disk, a mass of rock similar to the one possible for BD+20594b should imply at least this level of H/He accretion. Given the bulk composition of BD+20594b and its distance to its parent star, mass loss due to X-ray and extreme-UV radiation is unlikely. If this is indeed the primary composition of the planet, it might be possible that it formed at late stages in the protoplanetary disk, under conditions similar to those in transition disks \citep{lee2016}, or that some external effect removed the accreted envelope from the planet. Recent studies on giant impacts, which predict efficient devolatilization mechanisms for Super-Earths, might prove useful in explaining the lack of an extended atmosphere for BD+20594b if the planet ever accreted a significant H/He atmosphere in the first place \citep{SF2015}. In terms of mass and radius, BD+20594b is similar to both Kepler-131b \citep{marcy2014} and Kepler-10c \citep{weiss2016}. Although both of them are probably non-rocky due to their low $p_\textnormal{rocky}$ ($\sim 0.1$ and $\sim 0.002$, respectively), which is the main difference with BD+20594b, they are also ``warm'' Neptune-sized planets just like BD+20594b, with periods of $16$ and $45.29$ days, respectively.
The similarity in mass, radius and period between Kepler-10c and BD+20594b, in fact, makes both of these planets excellent laboratories for comparison in order to put planet formation theories to the test. Finally, it is interesting to mention that the sub-solar metallicity of the host star adds more weight to the growing evidence that low-mass planets tend to be found orbiting stars with a lower metallicity content \citep{mayor2009,adibekyan2012}, or at least that they show a lack of preference towards metal-rich stars \citep{Jenkins2013,BL2015}. \section{Conclusions} Using K2 photometry from Campaign 4 and a follow-up effort including radial velocities from the HARPS spectrograph, we have presented BD+20594b, a planet with a radius of $R_p = 2.23^{+0.14}_{-0.11}R_\Earth$ and a mass of $M_p = 16.3^{+6.0}_{-6.1} M_\Earth$ orbiting a solar-type star. BD+20594b lies in an interesting position in the mass-radius diagram, on the boundary between ``possibly rocky'' and ``non-rocky'' planets. Given the brightness of the host star ($V=11.04$), BD+20594b is amenable to future follow-up studies, which will enable a more precise determination of its mass, and hence its composition, and may confirm whether BD+20594b is in the ``possibly rocky'' or ``non-rocky'' regime of the mass-radius diagram. \section{Acknowledgments} We thank the referee for insightful comments that greatly improved this work. N.E., J.S.J. and A.J. would like to thank E. Pall\'e for his willingness to share time on northern hemisphere facilities for follow-up efforts. N.E. and R.B. are supported by CONICYT-PCHA/Doctorado Nacional. A.J. acknowledges support from FONDECYT project 1130857 and from BASAL CATA PFB-06. N.E., R.B., A.J. and J.C. acknowledge support from the Ministry for the Economy, Development, and Tourism Programa Iniciativa Cient\'ifica Milenio through grant IC 120009, awarded to the Millennium Institute of Astrophysics (MAS). J.S.J. acknowledges support from BASAL CATA PFB-06.
This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission directorate. It also made use of the SIMBAD database (operated at CDS, Strasbourg, France), NASA's Astrophysics Data System Bibliographic Services, and data products from the Two Micron All Sky Survey (2MASS) and the APASS database and the Digitized Sky Survey. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 096.C-0499(A), 096.C-0417(A) and 096.D-0402(A).
\section{Introduction} \label{sec:intro} Global tropospheric chemistry transport models (CTM) are used to address important issues ranging from air quality to climate change. In order to continuously improve their performance, it is of crucial importance to understand and quantify the diverse sources of uncertainties and errors present in them. We group these into three categories: (\textit{i}) errors and uncertainties coming from observations and data used in our models (such as emission inventories, wind fields, reaction rates); (\textit{ii}) errors coming from our choice of governing equations (or mathematical model), parametrizations, and the level of complexity of the physical modules included in our formulation; and (\textit{iii}) numerical errors coming from the choice of algorithms we use to solve the governing equations using computers \citep{bib:Ent02, bib:Zha11}. \\ In this study, we focus our attention on estimating the magnitude of numerical errors (\textit{iii}), in particular, those arising from the choice of operator splitting technique utilized to integrate in time the transport and chemistry operators in real-life global CTMs. In order to achieve this, we numerically extend the results introduced for the linear diffusion-reaction case in \citet{bib:Spo00} to a non-linear 1-D chemistry-transport numerical model. The latter numerical results provide us with a framework to estimate upper bounds for operator splitting errors in the fully non-linear 3-D state-of-the-art global CTM GEOS-Chem \citep{bib:Bey01}. To the best of our knowledge, our contribution is the first to estimate operator splitting errors in the context of real-life global atmospheric chemistry simulations.
\\ CTMs simulate the dynamics of chemical species in the atmosphere by numerically integrating a set of coupled nonlinear partial differential equations of the type: \begin{equation} \dfrac{\partial C_i}{\partial t}+\nabla\cdot \left({\boldsymbol u} \:C_i \right)=\nabla\cdot \left( \rho K \nabla \dfrac{C_i}{\rho} \right)+ P_i(C_j)- C_i L_i(C_j)+Q_i - S_i \label{eq:advec-reac} \end{equation} for $i=1,...,N$; where $C_i({\boldsymbol x},t)$ represents the spatio-temporal evolution of the concentration of species $i$ (typically over a hundred species are considered), ${\boldsymbol u}({\boldsymbol x},t)$ is the wind velocity, $\rho$ is the air density, $K$ the eddy diffusivity matrix, $P_i$ are the nonlinear production terms, $L_i$ are the destruction terms, $Q_i$ are the volume emission sources, and $S_i$ are the sinks (e.g. precipitation or in-cloud removal). See \cite{bib:Spo07} for a detailed description of these equations. \\ Due to the dimensions of grid boxes in global CTMs, like GEOS-Chem (with hundreds of kilometers in the horizontal versus tens to hundreds of meters in the vertical), inertial vertical transport processes in these global models are simulated (a) using vertical mass flux schemes that ensure that the horizontal air flow is divergence-free ($\nabla_{hor}\cdot \boldsymbol u=0$), (b) using convection parametrizations, and (c) using a boundary layer mixing algorithm \citep{bib:Lin96,bib:All96, bib:Wil06, bib:Pra08}. In addition, horizontal diffusion due to numerical errors in transport schemes is typically larger than its eddy diffusivity counterpart, as measured by aircraft missions \citep{bib:Pis09, bib:Wil06, bib:Ras07, bib:San13}.
As a consequence, the first term of the right-hand side of equation (\ref{eq:advec-reac}), which models the dynamics of inertial vertical transport as an eddy diffusion process, is not explicitly integrated in global CTMs; and the governing equations (\ref{eq:advec-reac}) are sometimes written \citep{bib:Ras07, bib:San10, bib:San13} in a simplified way as \begin{equation} \dfrac{\partial C_i}{\partial t}+{\boldsymbol u}\cdot \nabla C_i=P_i(C_j)- C_i L_i(C_j)+Q_i - S_i. \label{eq:advec-reac2} \end{equation} The chemistry operator on the right-hand-side of equations (\ref{eq:advec-reac2}) models the chemical interaction of atmospheric species whose lifetimes range from milliseconds to many years. The chemistry operator is very stiff as a consequence of this large range of time-scales and thus, implicit-in-time methods are an appropriate choice to integrate equations (\ref{eq:advec-reac}). Traditional methods, such as the method of lines, aimed at achieving this task in realistic 3D simulations, involve solving for an enormous number of degrees of freedom at each time step in a coupled fashion ($\sim 10^8$: 100 chemical species in $\sim 10^6$ grid cells, for a $1^{\circ}\times 1^{\circ}$ spatial resolution). This is due to the inter-species coupling in the chemistry operator and the spatial coupling in the transport operator. In practical situations, however, efficient computational algorithms to integrate equations (\ref{eq:advec-reac}) use operator splitting strategies that allow the explicit time--integration of the transport and the implicit time--integration of the chemistry operators separately and sequentially, thus significantly reducing the degrees of freedom solved in a coupled fashion at a given time step. This is done at the expense of a loss of accuracy in the approximate solution \citep{bib:HunVer03}.
\\ Estimating the magnitude of the numerical errors introduced by the time--integration of equations (\ref{eq:advec-reac}) in realistic 3-D computer simulations is a difficult task, since no relevant analytic solution can be used as a reference to estimate them. In theory, estimates of these errors depend directly on the regularity properties of the analytic solution of equations (\ref{eq:advec-reac}), the set of initial and boundary conditions, and the chosen numerical scheme \citep{bib:Guo86, bib:Ise96, bib:Ern04, bib:Bre08}. In this study, we assume that the analytic solution of equations (\ref{eq:advec-reac}) is unique and regular enough so that numerical error estimates can be expressed as inequalities of the form (\ref{eq:error_est}). Operator splitting errors, as well as numerical errors arising from the time--integration of the chemistry operator, depend explicitly on the magnitude of the chosen time steps, while numerical errors coming from the time--integration of the transport operator depend both on the time step and on the grid size. This fact, in combination with an explicit expression for the analytic solution, is exploited to obtain the exact magnitude of operator splitting errors in our one-dimensional prototype transport-chemistry numerical model.
\\ Our one-dimensional numerical experiments show three main results: (a) operator splitting sequences where the stiff non--linear chemistry operator is evaluated at the end of the time step are more accurate than those where the transport is evaluated last, independently of the operator splitting time-step, as in the linear case introduced in \citep{bib:Spo00}; (b) the results of numerical simulations that use different operator splitting strategies differ by at most 10\%; and (c) numerical errors coming from the integration of the transport operator are much larger than those coming from the operator splitting technique for spatial and temporal scales comparable to those used in global CTMs. We use this fact, and evidence from papers such as \citep{bib:Wil06, bib:Ras07, bib:Pra08, bib:San13}, to suggest that in realistic 3D simulations, errors due to operator splitting are much smaller than those introduced by transport schemes. \section{Numerical error estimation} Upper bounds on the numerical errors introduced by solving partial differential equations with regular boundary and initial conditions, using a given numerical scheme, can be expressed by inequalities of the form \begin{equation} ||C(x,t)-C_h(x,t)||_{_{V_1}}\leq M_1\:\Delta t\:^\alpha + M_2\: \Delta x\:^\beta \label{eq:error_est} \end{equation} where $C(x,t)$ is the true solution of the partial differential equation, $C_h(x,t)$ the numerical approximation, $\Delta t$ and $\Delta x$ are the time step and grid size respectively, $\alpha$ and $\beta$ are exponents (typically larger than one) that determine the order of convergence of the method in time and space respectively, $M_1$ and $M_2$ are constants that depend on the regularity of the true solution $C(x,t)$ and parameters in the equation, and $||\cdot||_{V_1}$ is the norm in the appropriate Banach space $V_1$.
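In practice, the exponents $\alpha$ and $\beta$ (and the constants $M_1$, $M_2$) can be estimated empirically by measuring the error at several resolutions and fitting a power law. A minimal sketch, using hypothetical error values consistent with a first-order-in-time method:

```python
import numpy as np

# Hypothetical global errors measured at several splitting time steps
# (fixed grid), consistent with a first-order method: error ~ M1 * dt.
dts = np.array([3600.0, 1800.0, 900.0, 450.0])        # seconds
errors = np.array([2.0e-2, 1.0e-2, 5.1e-3, 2.6e-3])

# log(error) = alpha * log(dt) + log(M1): the fitted slope estimates alpha.
alpha, logM1 = np.polyfit(np.log(dts), np.log(errors), 1)
M1 = np.exp(logM1)
print(f"estimated temporal order alpha = {alpha:.2f}")
```

The same fit over a sequence of grid sizes, at fixed time step, estimates $\beta$.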
For a convergent method, as $\Delta t\rightarrow 0$ and $\Delta x\rightarrow 0$, the numerical error vanishes ({\it i.e.} $||C-C_h||_{_{V_1}}\rightarrow 0$) and the numerical approximation $C_h$ converges to the true solution $C$ in the normed space $V_1$. More details about error bounds of the form (\ref{eq:error_est}) for discretizations of partial differential equations can be found in \cite{bib:Guo86, bib:Ise96, bib:Ern04, bib:Bre08}.\\ For the specific set of partial differential equations (\ref{eq:advec-reac}), operator splitting errors and errors coming from the numerical integration of the chemistry operator (where no coupling in space exists) contribute to the first term on the right-hand-side of inequality (\ref{eq:error_est}), whereas numerical errors from the integration of the transport operator contribute to both the first and second terms of the right-hand-side of inequality (\ref{eq:error_est}). Quantifying the independent contribution of each process to each term of inequality (\ref{eq:error_est}) is not simple in practical applications. In the following section, we show how to estimate the magnitude of operator splitting errors in the absence of other numerical errors coming from the time--integration of the transport and chemistry operators. \\ \subsection{Operator splitting techniques and error estimation} \label{sec:op_split} Classical approaches to estimating the numerical errors introduced by operator splitting are based on asymptotic expansions of exponential operators (linear case) and the Lie operator formalism (nonlinear case). For completeness, we briefly describe important results of the linear analysis of operator splitting methods in this section. We refer the reader to \citet{bib:LanVer99, bib:Spo00, bib:HunVer03} and the references therein for more details.
In this section, it is assumed that the time--integration of each operator separately can be performed exactly, giving rise to no numerical error, {\it i.e.} the numerical errors discussed below come only from the choice of the operator splitting technique. \\ We use as an example the linear evolution equation, \begin{equation} \dfrac{d v}{d t}=Av+Bv, \qquad v(0)=v_0, \qquad v\in \mathbb{R}^n \label{eq:op_split1} \end{equation} where $A$ and $B$ are linear operators. One of these operators could represent the linear spatial differential operator $d/dx$ (transport) in equations (\ref{eq:advec-reac}). The analytic solution of this problem is given by: \begin{equation}v(t)=\exp((A+B)t)\,v_0 \label{eq:exact} \end{equation} The simplest operator splitting method, called Godunov and denoted by $(A-B)$, can be obtained for $t\in[0,\Delta t]$ by solving the two evolution equations in sequence as: \begin{equation} \left\{ \begin{aligned} \dfrac{d {v^*}}{d t} &=Av^*, \qquad &v^*(0) = & v_0 \qquad & \text{in} \; [0, \Delta t]\\ \dfrac{d {v^{**}}}{d t}& =Bv^{**}, \qquad & v^{**}(0) = &v^*(\Delta t) \qquad & \text{in} \; [0, \Delta t]. \end{aligned} \right. \end{equation} The value of $v$ at $t=\Delta t$ is given by $v_{AB}(\Delta t)=v^{**}(\Delta t)$. The solution obtained with this operator splitting method at $t=\Delta t$ is given by \begin{equation}v_{AB}(\Delta t)=\exp(B \Delta t)\exp(A\Delta t)\,v_0 \label{eq:godunov} \end{equation} The exact solution (\ref{eq:exact}) and the solution $v_{AB}$ in the previous equation will be the same if \[\exp((A+B)\Delta t )=\exp(B \Delta t)\exp(A\Delta t). \] This will happen if the operators $A$ and $B$ commute (think of matrices), \textit{i.e.} if $AB=BA$.
When $AB\neq BA$, the (point-wise) local-in-time numerical error associated with solving problem (\ref{eq:op_split1}) using Godunov's operator splitting technique can be shown to be \begin{equation} le_{AB}=\dfrac{(AB-BA)}{2} \Delta t^2 v_0 \label{eq:ab_error} \end{equation} which leads to a global error $\mathcal{O}(\Delta t)$, \textit{i.e.} $||v-v_{AB}||\leq M_{AB}\;\Delta t$ (for a constant $M_{AB}$ that depends only on the regularity of the analytic solution $v$). Since the numerical error vanishes as $\Delta t \rightarrow 0$, Godunov's method is a convergent first order method in time, in the linear case. Another simple Godunov operator splitting can be obtained by reversing the order of evaluation of the operators $A$ and $B$ to obtain the $(B-A)$ method ($v_{BA}$). A more accurate and symmetric operator splitting method, often referred to as the Strang method (Strang, 1968), can be obtained by averaging the output of the two previous methods, i.e. $v_{S}(\Delta t)=\frac{1}{2}(v_{AB}+v_{BA})$. It can be shown that the Strang method is globally second order accurate, \textit{i.e.} $||v-v_{S}||\leq M_{S}\;\Delta t^2$ for a constant $M_S$ \citep{bib:Spo00, bib:HunVer03}.\\ The linear analysis presented above may fail and lead to different convergence results if one of the operators is stiff, {\it i.e.} if the dynamics of one operator takes place on much faster time scales than the dynamics of the other operator \citep{bib:Spo00}. This can be seen by introducing a small parameter $\epsilon$ (representing the ratio between the fast time scales of the stiff operator and the slow time scales of the other operator) and re-writing the linear evolution equation (\ref{eq:op_split1}) as a singular perturbation equation by re-defining \begin{equation} A=\dfrac{\chi(\epsilon)}{\epsilon} \qquad \qquad \text{and} \qquad \qquad B=T.
\label{eq:defn_stiff} \end{equation} For our purposes, one can identify the chemistry operator with the stiff operator $\chi/\epsilon$ (the nonlinear chemistry can be approximated, locally in time and space, by a linear and stiff mechanism, at least for some subset of fast species), and identify the transport operator with the slow operator $T$, for which the dynamics takes place in a more confined range of time scales (as represented by our global models). It is shown in \citet{bib:Spo00} that the local error for the $\left(\frac{\chi}{\epsilon}-T\right)$ Godunov method becomes (compare to equation (\ref{eq:ab_error})): \begin{equation} le_{\epsilon}\sim\dfrac{(\chi\: T - T\:\chi)}{\epsilon} \Delta t^2 v_0 \label{eq:stiff_error} \end{equation} leading to a global error $\mathcal{O}\left(\frac{\Delta t}{\epsilon}\right)$, implying that $||v-v_{\epsilon}||\leq M_{\epsilon}\left(\frac{\Delta t}{\epsilon}\right)$. Note that convergence of the operator splitting method, in this case, can only be guaranteed provided the operator splitting time step, $\Delta t$, is small enough to satisfy $\Delta t \ll \epsilon$, so that the higher order terms, $\mathcal{O}(\frac{\Delta t}{\epsilon})^k$, indeed vanish as $k\rightarrow \infty$ in the Taylor expansion of the error. \\ In atmospheric chemistry simulations, we use operator splitting methods to integrate in time two operators in equations (\ref{eq:advec-reac}): transport and chemistry. Transport and chemistry are known to commute when the velocity field is divergence-free and the chemistry is independent of the spatial location. In real atmospheric situations, these conditions are typically not met. Indeed, the non-linear chemistry operator depends dynamically on the geographic location (due to photolysis), and atmospheric wind fields are in general not divergence-free.
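The linear analysis above is straightforward to reproduce numerically with small matrices. The sketch below uses toy operators of our own choosing (not an atmospheric system): for a non-commuting pair, the norm of the local Godunov error matches the leading term of (\ref{eq:ab_error}), the averaged (Strang) step is markedly more accurate, and for a commuting pair the splitting is exact.

```python
import numpy as np
from scipy.linalg import expm

def godunov(A, B, v0, dt):
    """One (A-B) Godunov step: A is integrated first, then B (eq. godunov)."""
    return expm(B * dt) @ expm(A * dt) @ v0

dt = 0.1
v0 = np.array([1.0, 0.0])

# A non-commuting toy pair (AB != BA):
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
exact = expm((A + B) * dt) @ v0

v_ab = godunov(A, B, v0, dt)
v_ba = godunov(B, A, v0, dt)
v_strang = 0.5 * (v_ab + v_ba)             # averaged (Strang) method

err_ab = np.linalg.norm(v_ab - exact)      # O(dt^2) local error
err_s = np.linalg.norm(v_strang - exact)   # O(dt^3) local error
lead = np.linalg.norm((A @ B - B @ A) / 2.0 * dt**2 @ v0)

# If B commutes with A (here B2 = 2*A), the splitting is exact:
err_comm = np.linalg.norm(godunov(A, 2.0 * A, v0, dt) - expm(3.0 * A * dt) @ v0)
```

Halving $\Delta t$ in this sketch reduces the Godunov error by roughly a factor of four (local second order) and the Strang error by roughly a factor of eight, consistent with the global first- and second-order estimates above.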
The linear analysis above suggests that operator splitting approaches will converge only if the operator splitting time step is much smaller than the lifetime of the fastest species in the chemistry mechanism ($\Delta t \ll \epsilon$). This is also the criterion established to ensure stability and convergence of explicit-in-time chemistry solvers, and it suggests the use of prohibitively small operator splitting time-steps in order to guarantee convergence of the method. In practice, however, the use of implicit schemes to integrate the chemistry operator in global chemistry models leads to the choice of large operator splitting time-steps compared to the intrinsic stiffness of the chemistry system ($\Delta t\gg \epsilon$). As a consequence, and according to expression (\ref{eq:stiff_error}), we may expect to observe large operator splitting errors when solving equations (\ref{eq:advec-reac}) with stiff and potentially non--linear chemistry operators.\\ It is argued in \citet{bib:Spo00} that operator splitting errors, even in the presence of large operator splitting time steps (such that $\Delta t\gg \epsilon$), may not be as big as suggested by expression (\ref{eq:stiff_error}). \citet{bib:Spo00} argues that the stiffness of the system can be balanced by the existence of an underlying reduced model (low-dimensional manifold) describing the dynamics of the system and thus, by choosing the appropriate order of operator evaluation within a time-step, the splitting error may remain bounded even as the stiffness increases. Moreover, he shows for the linear case that sequences where the stiff operator is evaluated at the end of the time step lead to convergent and accurate methods in a one-dimensional diffusion-chemistry toy example, even for large operator splitting time steps. In solving equations (\ref{eq:advec-reac}), examples of these sequences include: Transport--Chemistry and Chemistry--Transport--Chemistry.
\\ Intuitively speaking, evaluating the transport operator at the end of the time step leaves the state of the system far from the underlying low-dimensional manifold driving the chemical system, and provides an initial condition $v_0$ for the next step that enhances error propagation. This is avoided by evaluating the stiff chemistry operator at the end of the time step. The existence of these reduced models driving the dynamics in regional and global atmospheric chemistry models has been demonstrated in \citet{bib:LowTom00, bib:San10, bib:Ras07}, suggesting that the operator splitting order should be selected carefully. To the best of our knowledge, a careful investigation of these errors in the realistic non--linear case does not yet exist, and thus we aim to provide one here. \\ Isolating operator splitting errors in practical global atmospheric chemistry models is not straightforward: first, because we lack expressions for the analytic solution of the system in realistic circumstances, and second, because the solutions of the chemistry and transport operators are themselves obtained using numerical schemes and thus are not exact, as was assumed in the previous analysis. In order to obtain upper-bound estimates of operator splitting errors, we proceeded as follows. We first found sharp estimates of numerical errors in a 1D non--linear chemistry-transport prototype problem with a known analytic solution. We designed this 1D problem to resemble the interaction of numerical errors in the time--integration of the transport and (stiff) non--linear chemistry, when using operator splitting methods, at spatial and time scales used in 3D global simulations. Our 1D findings guide our methodology to understand the differences observed between the outputs of 3D global simulations using different operator splitting strategies.
We performed multiple 3D global simulations in order to further understand additional numerical errors, due to the time integration of relevant processes (emissions, convective transport, and deposition) inherently solved with operator splitting approaches. \section{One-dimensional advection-reaction system} \label{sec:one_dim_numerics} We considered a one-dimensional advection-reaction system that can be solved analytically, so that exact values of numerical errors can be obtained. The system is characterized by a constant wind field throughout the domain, and a three-species ($NO$, $NO_2$, $O_3$) stiff non--linear chemistry mechanism modeling the $NO_x$ ($NO+NO_2$) cycle through oxidation by ozone ($O_3$). This cycle is key in determining the balance of ozone in the atmosphere. The chemical reactions are given by: \begin{equation} NO + O_3 \xrightarrow{k_1} NO_2, \quad NO_2 \xrightarrow{k_2} NO + O_3 \label{eq:reactions} \end{equation} where the parameters $k_1$ and $k_2$ represent the constant reaction rates throughout the domain. The resulting advection-reaction system of equations can be written as \begin{eqnarray} \label{eq:toy_syst} \dfrac{\partial \;NO}{\partial t}+u\;\dfrac{\partial \;NO}{\partial x}=-k_1(NO) \;O_3+k_2 \;NO_2 \\ \dfrac{\partial \; NO_2}{\partial t}+u\;\dfrac{\partial \;NO_2}{\partial x}=k_1(NO)\; O_3-k_2 \;NO_2 \label{eq:toy_syst2}\\ \dfrac{\partial \; O_3}{\partial t}+u\;\dfrac{\partial \; O_3}{\partial x}=-k_1(NO) \;O_3+k_2 \;NO_2 \label{eq:toy_syst3} \end{eqnarray} where $NO$, $NO_2$, and $O_3$ represent the concentration of each chemical in space and time, and $u$ the constant velocity of the flow (compare with equations (\ref{eq:advec-reac})).
\\ The advection and reaction operators commute in this problem (since the advection operator is divergence-free, $\partial u/\partial x=0$, and the chemistry is independent of the location in space); thus, the use of operator splitting approaches should not introduce any error when the exact solutions of the chemistry and advection operators are known. However, when solving the advection operator numerically, with an Eulerian advection scheme, undesired numerical diffusion will cause the numerical advection operator to not commute with the chemistry operator (since nonlinear chemical operators do not commute with diffusion, as shown in \citet{bib:HunVer03}), thus signalling the emergence of operator splitting errors in the numerical solution of equations (\ref{eq:toy_syst})-(\ref{eq:toy_syst3}). \\ This one-dimensional problem is relevant to realistic global 3D simulations since the transport operator is solved utilizing Eulerian numerical schemes, which give rise to undesired numerical diffusion that does not commute with the time-integration of the chemistry operator. Moreover, in regions of the atmosphere where the flow is nearly (2D) divergence-free (due to a well-stratified atmosphere), and during the night (or day), when chemistry is independent of space, the chemistry and transport operators may commute locally in space and time, as in the 1D prototype. \\ In more complicated circumstances, for example in regions of space close to the terminator line (the boundary between day and night), and in Equatorial regions where convection drives the flow far from divergence-free conditions, operator splitting errors can be expected to be larger since the advection and chemistry operators will not commute.\\ \subsection{Analytic steady-state solution} When the chemistry is fast with respect to transport processes, an exact expression can be found for the steady-state solution of system (\ref{eq:toy_syst})-(\ref{eq:toy_syst3}).
For example, by choosing $k_1=1000$ and $k_2=2000$, as in \citep{bib:Spo00}, and introducing the non-stiff combined-chemistry operator $\chi = (NO)\; O_3 - 2\; NO_2$, we can represent a stiff (fast) chemistry operator as the quotient $\chi/\epsilon$ for a small parameter $\epsilon$. Equations (\ref{eq:toy_syst}-\ref{eq:toy_syst3}) can be re-written, as suggested in equation (\ref{eq:defn_stiff}), as: \begin{eqnarray} \dfrac{\partial \;NO}{\partial t} +u\;\dfrac{\partial \;NO}{\partial x}=-\frac{\chi}{\epsilon}, \label{eq:toy_stiff1} \\ \dfrac{\partial \;NO_2}{\partial t}+u\;\dfrac{\partial \;NO_2}{\partial x}=\frac{\chi}{\epsilon}, \\ \dfrac{\partial \;O_3}{\partial t}+u\;\dfrac{\partial \;O_3}{\partial x}=-\frac{\chi}{\epsilon}. \label{eq:toy_stiff2} \end{eqnarray} Here $\epsilon$ represents the stiffness of the system and is given by the ratio between the slow advection scales and the fast chemistry time scales. For example, if $u\sim\mathcal{O}(1)$ and $k_i\sim 10^3$, then $\epsilon\sim10^{-3}$.\\ The steady-state solution of the system is found by introducing the lumped species $NO_x=NO+NO_2$ and $O_x=O_3+NO_2$ \citep{bib:Spo00} in order to re-write equations (\ref{eq:toy_stiff1})-(\ref{eq:toy_stiff2}) as: \begin{eqnarray} \label{eq:toy_stiff_lumped1} \dfrac{\partial \;NO_x}{\partial t}+u\;\dfrac{\partial \;NO_x}{\partial x}=0, \\ \dfrac{\partial \;O_x}{\partial t}+u\;\dfrac{\partial \;O_x}{\partial x}=0, \\ \dfrac{\partial \;O_3}{\partial t}+u\;\dfrac{\partial \;O_3}{\partial x}=-\frac{\chi}{\epsilon}.
\label{eq:toy_stiff_lumped2} \end{eqnarray} In this new form, and denoting $D/Dt=\partial/\partial t+u \;\partial/\partial x$, it can be seen that the lumped species $NO_x$ and $O_x$ are conserved in time, since \[\dfrac{D\;NO_x}{Dt}=0 \quad \text{and} \quad \dfrac{D\;O_x}{Dt}=0.\] As a consequence, for regions where the three species are initially present, the exact asymptotic value of the concentration of all species, $NO^{\dagger}$, $NO_2^{\dagger}$, and $O_3^{\dagger}$, can be found explicitly as a function of the initial concentration of the lumped species. This is achieved in two steps. First, the values of the steady state concentrations, $NO^{\dagger}$ and $NO_2^{\dagger}$, are expressed as a function of the conserved lumped species as: \begin{equation} NO_x(0)=NO^{\dagger}+NO_2^{\dagger}\quad \text{and}\quad O_x(0)=O_3^{\dagger}+NO_2^{\dagger}, \label{eq:lumped_steady} \end{equation} and substituted in equation (\ref{eq:toy_stiff_lumped2}). The system reaches a chemical steady state when $\chi=(NO)\; O_3 - 2\; NO_2=0$, or equivalently when \begin{equation} [NO_x(0)- [O_x(0)- O_3^{\dagger}]] O_3^{\dagger} - 2 [O_x(0)- O_3^{\dagger}]=0, \label{eq:steady_1D} \end{equation} which is a second order equation for the steady state value $O_3^{\dagger}$, with solutions given by \begin{eqnarray} \label{eq:steady_1D_2} O_3^{\dagger}=&-&\frac{1}{2} \;(2+NO_x(0)-O_x(0))\\ &\pm& \frac{1}{2} \sqrt{(2+NO_x(0)-O_x(0))^2+8 O_x(0)} \nonumber \end{eqnarray} Second, the values of $NO^{\dagger}$ and $NO_2^{\dagger}$ are found by substituting the (physically relevant) positive solution of (\ref{eq:steady_1D_2}) in equations (\ref{eq:lumped_steady}).
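The two steps above can be checked numerically. A minimal sketch, assuming unit initial concentrations of all three species (so that $NO_x(0)=O_x(0)=2$):

```python
import numpy as np

def steady_state(NOx0, Ox0):
    """Positive root of eq. (steady_1D_2) plus the NO and NO2 values
    implied by the conserved lumped species (eq. lumped_steady)."""
    b = 2.0 + NOx0 - Ox0
    O3 = -0.5 * b + 0.5 * np.sqrt(b * b + 8.0 * Ox0)
    NO2 = Ox0 - O3
    NO = NOx0 - NO2
    return NO, NO2, O3

# Unit initial concentrations of all three species: NOx(0) = Ox(0) = 2.
NO, NO2, O3 = steady_state(2.0, 2.0)
print(NO, NO2, O3)            # -> 1.236..., 0.763..., 1.236...
chi = NO * O3 - 2.0 * NO2     # combined chemistry operator, zero at equilibrium
```

The positive root gives $O_3^{\dagger}=NO^{\dagger}\approx 1.236$ and $NO_2^{\dagger}\approx 0.764$, and the combined chemistry operator $\chi$ indeed vanishes there.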
For time scales $\tau$ such that $\tau \gg 1/k$ (for $k=min(k_1, k_2)$), the system will have reached chemical steady-state and from then on, equations (\ref{eq:toy_stiff_lumped1})-(\ref{eq:toy_stiff_lumped2}) (and thus the original system (\ref{eq:toy_syst})-(\ref{eq:toy_syst3})) will behave as a transport-only process propagating the steady-state concentrations with a constant velocity $u$. \subsection{Numerical experiments} We chose to solve equations (\ref{eq:toy_syst})-(\ref{eq:toy_syst3}) to simulate the fate of an instantaneous release containing the three chemicals over a $360$ km one-dimensional region. The constant flow velocity was chosen to resemble realistic atmospheric values of $u=10$ m/s. We prescribed a computational spatial domain, $x\in [0,L]$ for $L=3000$ km, so that the plume would stay within the domain for the whole simulation time, $t\in [0,T]$ for $T=10$ hours, and in order to not introduce any errors due to boundary conditions in the numerical advection operator. The values of $k_1=1000$ and $k_2=2000$ were chosen for the stiff chemistry operator. The effective stiffness of the chemistry with respect to the transport is $\mathcal{O}(10^{-2})$ since $u\sim\mathcal{O}(10)$. The initial conditions are given by $NO(x,0)=NO_2(x,0)=O_3(x,0)=p(x)$, where \begin{displaymath} p(x) = \left\{ \begin{array}{ll} 1 & \text{if}\quad x\in[720, 1080]\\ 0 & \text{elsewhere}. \end{array} \right. \end{displaymath} In a 10-hour simulation time period, the initial release is advected exactly $360$ km to the right, and the concentrations of all species have reached chemical equilibrium. According to expression (\ref{eq:steady_1D_2}), $O_3^{\dagger}=NO^{\dagger}=1.236$, and $NO_2^{\dagger}=0.764$. The exact solution at time $t=T=10$ hours is explicitly given by $O_3(x,T)=NO(x,T)=1.236\times p(x-360)$ and $NO_2(x,T)=0.764\times p(x-360)$. 
This is our reference solution.\\ \begin{figure} \centering \includegraphics[width=.49\textwidth]{O_3_different_resolutions} \includegraphics[width=.49\textwidth]{Sportisse_norm_diff_delta_x} \includegraphics[width=.49\textwidth]{Continuous_RMS_dx_180km} \caption[Behavior of numerical error ] {Behavior of the numerical error in the one--dimensional transport--chemistry system. The top panel shows the analytical ``true'' and numerical solutions of the system, at different grid sizes, after a 10-hour simulation time. The middle panel shows the errors relative to the true solution for different grid sizes and operator splitting approaches. The bottom panel shows the behavior of the relative errors (RRMS) of the two operator splitting approaches, for fixed $\Delta x=180$ km and different time steps, when compared to the analytic solution.} \label{fig:ozone_1d} \end{figure} For the numerical simulations, we implemented an explicit, second order accurate (in space), one-dimensional advection scheme based on the Lax-Wendroff method with superbee slope limiters (see \cite{bib:Lev02}, p.~112, for details), and used, for the chemistry, the built-in implicit stiff-ODE integrator ode23 from Matlab. In order to minimize the contributions of the advection scheme and the chemistry integrator to the first term of inequality (\ref{eq:error_est}), we utilized a very small internal advection time step, $\Delta t_{\tau}=90$ seconds, and set the convergence relative-tolerance parameter to $10^{-3}$ in the routine ode23 (which adaptively chooses a small internal time step in order to meet the prescribed $0.1\%$ error convergence criterion).
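The numerical setup just described can be condensed into a short script. The sketch below is a simplification under stated assumptions: a first-order upwind scheme with periodic boundaries stands in for the Lax-Wendroff/superbee advection scheme, SciPy's implicit \texttt{Radau} integrator stands in for Matlab's \texttt{ode23}, and the domain, resolution and run length are much coarser than in our experiments, purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, u = 1000.0, 2000.0, 10.0             # reaction rates and wind speed (m/s)

def chem_rhs(t, c):
    NO, NO2, O3 = c
    r = k1 * NO * O3 - k2 * NO2
    return [-r, r, -r]

def chemistry_step(C, dt):
    """Implicitly integrate the stiff chemistry in every grid box over dt."""
    out = np.empty_like(C)
    for j in range(C.shape[1]):
        sol = solve_ivp(chem_rhs, (0.0, dt), C[:, j], method="Radau",
                        rtol=1e-6, atol=1e-9)
        out[:, j] = sol.y[:, -1]
    return out

def advection_step(C, dt, dx):
    """First-order upwind step for u > 0, with periodic boundaries
    (simpler than the Lax-Wendroff/superbee scheme used in the text)."""
    nu = u * dt / dx                          # CFL number, must satisfy nu <= 1
    return C - nu * (C - np.roll(C, 1, axis=1))

# Coarse illustrative setup, much smaller than the 3000 km domain in the text.
dx, dt = 10.0e3, 500.0                        # 10 km boxes, 500 s splitting step
x = np.arange(0.0, 600.0e3, dx)
C = np.where((x >= 100.0e3) & (x <= 200.0e3), 1.0, 0.0) * np.ones((3, 1))

for _ in range(20):                           # Godunov T-chi sequence:
    C = advection_step(C, dt, dx)             # transport first,
    C = chemistry_step(C, dt)                 # stiff chemistry last
```

With this $T-\chi$ sequence the lumped species $NO_x$ is conserved by both operators, and the interior of the advected pulse relaxes to the equilibrium values $NO=O_3\approx 1.236$ and $NO_2\approx 0.764$ quoted above.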
\\ We solved equations (\ref{eq:toy_syst})-(\ref{eq:toy_syst3}) using multiple first order Godunov operator splitting approaches (where transport and chemistry were evaluated in different orders) for multiple operator-splitting time-steps, $\Delta t=180, \;360,\; 1800$ and $3600$ seconds, and for multiple grid sizes $\Delta x=22.5,\;45,\;90, \;180$ and $360$ km (the three largest grid sizes were chosen to resemble spatial resolutions of $4^{\circ}\times 5^{\circ}$, $2^{\circ}\times 2.5^{\circ}$, and $1^{\circ}\times 1.25^{\circ}$, in current 3D global CTMs). The results of these numerical simulations and the exact solution are plotted in the top plot of Figure \ref{fig:ozone_1d}. The numerical solutions corresponding to the multiple operator splitting approaches, for a given value of $\Delta x$, appear as a single curve since their differences were smaller than the line-width chosen for the plot. \\ The quantification of numerical errors was performed using the modified relative root mean square (RRMS), commonly used in 3D atmospheric chemistry simulations, given by \begin{equation} d_{_{AB}}(C_i)=\sqrt{\frac{1}{M}\displaystyle \sum_{\Omega} \left \vert \dfrac{C_i^A-C_i^B}{C_i^A} \right \vert ^2 } \label{eq:RMS} \end{equation} where $C_i^A$ and $C_i^B$ are the concentrations of species $i$ calculated in simulations $A$ and $B$, respectively, $\Omega$ is the set of grid-boxes where $C_i^A$ exceeds a threshold $a$, and $M$ is the number of such grid-boxes. We used $a=10^{-4}$, thus neglecting concentrations smaller than $\sim 0.01\%$ with respect to the original concentration. In our one-dimensional experiments, simulation $A$ is the exact solution, and simulation $B$ is one of the multiple Godunov operator splitting approaches. The second plot of Figure \ref{fig:ozone_1d} shows the quantity $d_{_{AB}}=\frac{1}{n}\sum_{i=1}^{n} d_{_{AB}}(C_i)$ for $n=3$ species, for the multiple values of $\Delta t$ and $\Delta x$.
In this plot, the red triangles represent simulations where transport was evaluated last ($\chi-T$), and the green dots where chemistry was evaluated last ($T-\chi$). This plot confirms what is observed in the top plot, {\it i.e.}, the fact that the differences across the multiple operator splitting approaches, for a given $\Delta x$, are very small ($\leq 1\%$).\\ In the bottom plot of Figure \ref{fig:ozone_1d}, we further show the values of the numerical error for the two sequences, $\chi-T$ and $T-\chi$, for $\Delta x=180$ km, for the multiple values of the operator splitting time-steps. We found this plot to be representative of the behaviour of the numerical error for other values of $\Delta x$. Note that while the differences across the multiple approaches are very small, the interesting mathematical behaviour of the numerical error, discussed in section \ref{sec:op_split}, can be observed. Indeed, the sequences $T-\chi$, where the chemistry (the stiff process) is evaluated last, produce better numerical results than their counterparts $\chi-T$. Moreover, $T-\chi$ sequences appear to be almost insensitive to the magnitude of the operator splitting time-step (the error even seems to grow as $\Delta t\rightarrow 0$, as reported in \citet{bib:Spo00}), making them a preferred choice, since larger operator splitting time steps allow faster computations when exploiting the intrinsically parallelizable nature of the chemistry operator. The quality of results produced by sequences where transport is evaluated last follows the traditional behaviour of linear analysis, where the numerical error decreases as the operator splitting time step decreases.
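The modified RRMS of equation (\ref{eq:RMS}), and its average over species, can be sketched in a few lines (the function names here are ours):

```python
import numpy as np

def rrms(c_a, c_b, a):
    """Modified relative RMS: compare c_b to the reference c_a over the
    grid-boxes where the reference exceeds the threshold a."""
    c_a, c_b = np.asarray(c_a, float), np.asarray(c_b, float)
    mask = c_a > a                               # the set Omega, M = mask.sum()
    rel = (c_a[mask] - c_b[mask]) / c_a[mask]
    return float(np.sqrt(np.mean(rel ** 2)))

def rrms_multi(species_a, species_b, a):
    """Average over species: d_AB = (1/n) * sum_i d_AB(C_i)."""
    return float(np.mean([rrms(x, y, a) for x, y in zip(species_a, species_b)]))

# grid-boxes below the threshold are excluded from the comparison
d = rrms([1.0, 2.0, 1e-5], [1.1, 2.2, 5.0], a=1e-4)   # -> 0.1
```

The third grid-box is dropped by the threshold, so the $10\%$ relative differences in the first two boxes give $d_{AB}=0.1$.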
Since the magnitude of these first order operator splitting errors was so small, we chose not to implement higher order operator splitting approaches.\\ While the bottom plot of Figure \ref{fig:ozone_1d} shows a clear picture of the magnitude of operator splitting errors ($\leq 1\%$), we performed transport-only simulations in order to verify the magnitude of the numerical errors coming from the numerical advection scheme itself. The results of these simulations are shown in the top plot of Figure \ref{fig:ozone_1d_transp}. Note that while the magnitude of the concentration of $O_3$ in these simulations is exactly one (since no chemistry is present), the numerically simulated profiles, for the different values of $\Delta x$, look very similar to those in the top plot of Figure \ref{fig:ozone_1d}. Indeed, when computing the modified RRMS error associated with these simulations, as shown in the bottom plot of Figure \ref{fig:ozone_1d_transp}, the behaviour of the relative errors resembles the one observed in the middle plot of Figure \ref{fig:ozone_1d}. In short, the numerical errors coming from the choice of operator splitting are eclipsed by the largest component of the numerical error, coming from the spatial discretization (second term in inequality (\ref{eq:error_est})) in the numerical advection scheme.\\ Choosing an initial condition in the shape of a step function caused our second order numerical advection scheme to behave as a first order scheme. Indeed, the numerical error decreases close to linearly in our numerical experiments when using the $L^2$-norm instead of the modified RRMS (plot not shown). Estimates of the numerical errors, in the form of an effective numerical diffusion, $D_h$, for 1D first order numerical advection schemes place their value at $D_h\sim u\Delta x$, where $u$ is the mean flow velocity and $\Delta x$ the grid spacing. In our 1D experiments, this numerical diffusion is of the order $D_h\sim10^6$ m$^2$/s.
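The estimate $D_h\sim u\Delta x$ is straightforward to tabulate for the grid spacings used here (a sketch, in SI units):

```python
# Effective numerical diffusion D_h ~ u * dx of a first order advection
# scheme, tabulated for the grid spacings used in the 1D experiments.
u = 10.0                                             # mean flow velocity, m/s
D_h = {dx_km: u * dx_km * 1.0e3 for dx_km in (22.5, 45.0, 90.0, 180.0, 360.0)}
for dx_km, d in sorted(D_h.items()):
    print(f"dx = {dx_km:6.1f} km  ->  D_h ~ {d:.2e} m^2/s")
```

The values span $2.25\times10^5$ to $3.6\times10^6$ m$^2$/s, which is why the 1D experiments sit in the same range as the 3D estimates quoted next.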
Numerical diffusion in 3D global models \citep{bib:Lin96, bib:San13, bib:Ras07, bib:Wil06, bib:Pis09} is estimated to be around $10^5-10^6$ m$^2$/s. These 3D estimates place our one-dimensional experiments within a relevant range. \begin{figure} \centering \includegraphics[width=.49\textwidth]{O_3_different_resolutions_transpot_only} \includegraphics[width=.49\textwidth]{Sportisse_norm_diff_delta_x_transp_only} \caption[Behavior of numerical error, transport only.]{Behavior of numerical error in the one--dimensional transport--only system. The top panel shows the analytical ``true'' and numerical solutions at different grid sizes of the system after a 10-hour simulation time. The bottom panel shows the errors relative to the true solution with different grid sizes and operator splitting approaches.} \label{fig:ozone_1d_transp} \end{figure} \section{Numerical experiments using GEOS-Chem} \label{sec:numerical} Determining the exact magnitude of numerical errors in 3D global CTM simulations in the same way we did for our 1D prototype is not possible. This is due to the lack of an analytic expression for the solution to equations (\ref{eq:advec-reac}) in realistic circumstances (time-dependent winds, time-dependent chemistry rates changing throughout the geographic domain due to photolysis, time-dependent emissions). In order to estimate operator splitting errors in 3D CTMs, we can only compare the output of simulations where everything is kept the same except for the operator splitting sequence and the operator splitting time step. This is the strategy we present in this section, which, in combination with the results from our one-dimensional simulations, allowed us to determine upper bounds of operator splitting errors in GEOS-Chem.
In order to further understand additional numerical errors due to the time integration of relevant processes that are inherently solved with operator splitting approaches and are not present in our 1D toy example, we performed multiple additional 3D global simulations. In these simulations, we gradually included inhomogeneous boundary conditions (emission processes) in the time integration \citep{bib:Spo00, bib:HunVer03}, and vertical processes (convection and dry deposition).\\ GEOS-Chem is a state-of-the-art 3D global Eulerian model of tropospheric chemistry driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Global Modeling and Assimilation Office (GMAO). The model simulates global tropospheric ozone-$NO_x$-VOC-aerosol chemistry. The full chemical mechanism for the troposphere involves over a hundred species and over three hundred reactions. The ozone-$NO_x$-$HO_x$-VOC-aerosol chemical mechanism of GEOS-Chem has been described by \cite{bib:Bey01,bib:Par04} and recently updated by \citet{bib:Mao10}. Details of the chemical reactions and rate constants are reported in the chemical mechanism document (http://acmg.seas.harvard.edu/geos/wiki\_docs/chemistry/chemistry\_updates\_v6.pdf). In Figures 4--6 the chemical species are arranged in the order of their chemical lifetimes in the atmosphere, from OH ($<$ 1 second) and $NO_x$ ($\sim$1 hour), to CO and $C_2H_6$ (2--3 months). The chemical mass balance equations are integrated using a Gear-type solver \citep{bib:Jac95}. Stratospheric chemistry is not explicitly simulated; instead the model uses the “Synoz” cross-tropopause ozone flux boundary condition of \citet{bib:McLin00}. The model uses the flux-form semi-Lagrangian advection scheme of \citet{bib:Lin96}. We used the GEOS-Chem model (v8-02-03) driven by the GEOS-5 data at the $4^{\circ} \times 5^{\circ}$ horizontal resolution and 47 levels in the vertical. Detailed descriptions of the model are given by \citet{bib:Bey01} and \citet{bib:Zha11b}.
In this study, we initiate the model simulations on January 1, 2005 with model fields from a 6-month spin-up run, and focus on the weekly averaged model results for January 1--7, 2005. \subsection{Transport and chemistry} Our strategy consisted of comparing the instantaneous concentration of several chemical species, after multiple one-week long, $4^\circ \times 5^\circ$ horizontal resolution, GEOS-Chem simulations (version v8-02-02), using two versions of the (default) second order Strang operator splitting method given by the sequences: \[T(\Delta t/2) \chi(\Delta t) T (\Delta t/2) \quad \text{and}\quad \chi (\Delta t/2) T(\Delta t) \chi (\Delta t/2)\] for different values of the operator-splitting time step $\Delta t$. These sequences are denoted as $T \chi T$ and $\chi T \chi$ respectively in the subsequent paragraphs. We used $\Delta t= 60, 30, 10, 2$ mins. In all these simulations, transport and chemistry were the only active mechanisms; all other mechanisms were turned off. The inactive mechanisms include: emissions, convective transport, deposition, and planetary boundary layer mixing. Emissions correspond to inhomogeneous boundary conditions that are treated numerically as production rates distributed in the boundary layer and solved together in the chemistry operator.\\ We used the modified RRMS (\ref{eq:RMS}) with a threshold $a=10^6$ molecules cm$^{-3}$ to quantify the numerical differences in our global simulations. Figure \ref{fig:rms_opsplit_all} shows the relative differences between the reference simulation $\chi T \chi$ with $\Delta t=2$ mins, and the other operator splitting approaches for multiple $\Delta t$'s. Note that the maximum differences across simulations (and species) are of the order of $\sim 10\%$.
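The two Strang sequences can be produced by a single generic driver given one-step propagators for each operator. The sketch below (illustrative names, not GEOS-Chem code) checks the structure on commuting scalar operators, for which both orderings reproduce the exact solution; for the non-commuting, stiff operators of a real CTM the orderings differ, which is precisely what the comparison in this section probes.

```python
import math

def strang(step_a, step_b, y, dt, nsteps):
    """Second order Strang sequence A(dt/2) B(dt) A(dt/2), repeated nsteps
    times, given one-step propagators step_a(y, h) and step_b(y, h)."""
    for _ in range(nsteps):
        y = step_a(y, 0.5 * dt)
        y = step_b(y, dt)
        y = step_a(y, 0.5 * dt)
    return y

# sanity check with commuting scalar "operators" y' = a*y and y' = b*y,
# for which both orderings reproduce the exact solution exp((a+b)t)
a, b = -1.0, -2.0
prop_a = lambda y, h: y * math.exp(a * h)
prop_b = lambda y, h: y * math.exp(b * h)

tct = strang(prop_a, prop_b, 1.0, 0.1, 10)   # "transport" half-steps outside
ctc = strang(prop_b, prop_a, 1.0, 0.1, 10)   # "chemistry" half-steps outside
exact = math.exp((a + b) * 1.0)
```

Swapping the roles of `step_a` and `step_b` switches between the $T\chi T$ and $\chi T\chi$ sequences without touching the driver.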
\\ Using our one-dimensional prototype and fixing $\Delta x=180$ km, we compared the results of two operator splitting strategies ($T-\chi$ and $\chi-T$) for multiple values of $\Delta t$, with the sequence $T-\chi$ and $\Delta t=3600$ sec set as a reference. The results are displayed in Figure \ref{fig:1D_opsplit_tch_as_ref}. Note that while the bottom plot of Figure \ref{fig:ozone_1d} shows that operator splitting (relative) errors are less than $1\%$ (when comparing to the analytic solution), the relative differences between simulations using alternative operator splitting methods may be as large as $10\%$. This is roughly the same magnitude as the differences observed between the 3D (transport-chemistry) simulations in the top panel of Figure \ref{fig:rms_opsplit_all}.\\ Note that we chose the sequence $\chi T \chi$ with $\Delta t=2$ mins as the reference simulation for our 3D experiments, instead of the sequence $\chi T \chi$ with $\Delta t=60$ mins that would have been suggested by our 1D experiments (as in Figure \ref{fig:1D_opsplit_tch_as_ref}). The reason for this is shown in the top panel of Figure \ref{fig:rms_op_split_dt}, where we can see that the differences between (transport-chemistry) simulations with different operator splitting sequences but with the same time step get smaller as $\Delta t$ gets smaller. This behaviour would be expected from a converging operator splitting method where none of the operators is stiff and where the order of evaluation of the operators is not relevant. An alternative explanation could be that the operator splitting errors are very small and what we are observing is the convergence of the time--integration of each operator, separately, as $\Delta t$ gets small.
This would suggest that the numerical errors of the time--integration of the transport and the chemistry contribute significantly to the first term (involving $\Delta t$) on the right-hand side of inequality (\ref{eq:error_est}), and should be comparable, in magnitude, to those observed between different operator splitting sequences.\\ In order to investigate this, we plotted the differences between simulations where the only active mechanism was either chemistry or transport, for multiple $\Delta t$'s, while keeping all other parameters exactly the same as in the previous simulations. The results are plotted in Figure \ref{fig:rms_chem_only}. These two plots show that indeed the numerical errors arising from the time--integration of each of the operators separately lead to differences of the same magnitude as those observed in the operator splitting simulations. We also observe that the differences get smaller as $\Delta t$ decreases, suggesting numerical convergence. These comparable differences make it hard to obtain a sharp estimate of the operator splitting error in 3D. \\ Note also that in our one-dimensional prototype a cleaner analysis was achieved, since we chose a smaller internal time step ($\Delta t_{\tau}=90$ seconds) to integrate the (explicit-in-time) transport operator than the operator splitting time step (180 seconds $\leq \Delta t \leq 60$ mins). This choice reduced the contribution to the numerical errors involving $\Delta t$ in inequality (\ref{eq:error_est}) from the transport integration. In order to save computational time in GEOS-Chem (and in most CTMs), however, the time step of the (explicit-in-time) transport scheme is chosen to be equal to the operator splitting time step, leading to larger numerical errors. \\ In our one-dimensional prototype, the chemistry operator was solved using an adaptive time--integration routine with very tight convergence constraints, thus reducing numerical errors.
The time--integration of the chemistry operator in GEOS-Chem uses an adaptive time stepping strategy \citep{bib:Jac95} in order to meet convergence requirements (absolute and relative numerical error tolerances) at every user-defined time step. These parameters have been internally set to keep simulation times reasonable while maintaining acceptable numerical accuracy. For our numerical experiments, we kept these settings as they are typically used in global simulations. Figure \ref{fig:rms_chem_only} shows the differences between chemistry--only simulations for different user-defined chemistry time-steps. Presumably, these errors could be decreased by fine-tuning the error tolerances in the time integration routine appropriately, but this approach may increase processing times considerably.\\ Despite all of these numerical issues, we highlight the fact that we can establish an upper limit of about $10\%$ for the magnitude of operator splitting errors based on the results of our multiple simulations in 3D. Moreover, we show that differences for isoprene, the single chemical species with the largest discrepancies across simulations, are not significant in Figures \ref{fig:isoprene_chem_only}, \ref{fig:isoprene_transp_only}, and \ref{fig:isoprene_op_split_only}, for chemistry--only simulations, transport--only simulations, and different sequences of operator splitting methods, respectively. From these plots and the results of our one-dimensional prototype, we hypothesize that the operator splitting errors may be much smaller than $10\%$.
\\ We also highlight the fact that we did not pursue further efforts to show that the sequences evaluating the chemistry at the end of the time step in 3D compare better with observations, since our one-dimensional prototype, as well as multiple studies in global CTMs \citep{bib:Ras07, bib:Pra08, bib:San13}, suggest that the numerical errors associated with the transport integration, at current spatial resolutions, are significantly larger than those observed in operator splitting methods. In addition, uncertainties in emission fields and deposition mechanisms may pose further difficulties in addressing this question. In our one-dimensional prototype, successive refinements of the spatial resolution lead to significant improvements in the accuracy of the numerical solution globally (for any operator splitting sequence), whereas a better choice of operator splitting (where chemistry is evaluated last) leads to a very modest improvement at a given spatial resolution $\Delta x$. \subsection{Boundary conditions and vertical processes} Other important processes in 3D simulations are integrated in time using operator splitting strategies. As noted in \cite{bib:Spo00} and \cite{bib:HunVer03}, the time integration of inhomogeneous boundary conditions, such as emission processes in global simulations, using operator splitting strategies may lead to considerable numerical errors. Additionally, the time integration of vertical processes such as convection and deposition using operator splitting may also lead to important numerical errors. In order to investigate the magnitude of numerical errors due to these processes, we performed additional 3D simulations that gradually included inhomogeneous boundary conditions (emissions) and vertical processes.
In other words, aside from the 3D ``transport-chemistry'' simulations discussed in the previous sections, we performed simulations with (i) ``transport, chemistry, and emissions'' and simulations with (ii) ``transport, chemistry, emissions, convective transport and deposition''. When emissions are included, they are integrated within the chemistry solver, using the chemistry time step. Convective transport and deposition are solved using the standard setting of GEOS-Chem, which integrates these two processes (sequentially) during the chemistry time step. The differences between these two sets of simulations, using the same methodology explained in the previous section, are plotted in the two lower panels of Figures \ref{fig:rms_opsplit_all} and \ref{fig:rms_op_split_dt}. As these figures show, the additional numerical errors coming from the inclusion of inhomogeneous boundary conditions (emissions) are significant. Indeed, the differences between the simulations that include ``transport, chemistry, and emissions'' are roughly double the magnitude of the differences between the simulations that include only ``transport-chemistry'' for different operator splitting strategies. The incorporation of convective transport and deposition to the simulations does increase the differences between simulations, mainly when changes in time steps are large, as shown in the bottom panel of Figure \ref{fig:rms_opsplit_all}. When time-steps are fixed and operator splitting approaches are different, these vertical processes do not seem to lead to larger differences in the different simulations. \begin{figure} \centering \includegraphics[width=.49\textwidth]{Discrete_RMS_dx_180km_tch_as_ref} \caption{Behavior of the relative errors (RRMS) of simulations performed with two different operator splitting approaches ($T-\chi$ and $\chi-T$), fixing $\Delta x=180$ km, for multiple time steps.
The reference solution is obtained with the sequence $T-\chi$ for $\Delta t=3600$ seconds.} \label{fig:1D_opsplit_tch_as_ref} \end{figure} \begin{figure} \centering \includegraphics[width=.49\textwidth]{Figure4} \caption{Behavior of numerical error in the GEOS-Chem 3-D model simulations. Here TCT denotes Transport-Chemistry-Transport, CTC denotes Chemistry-Transport-Chemistry, and the numbers denote operator splitting time steps in minutes. Relative RMS errors with respect to the CTC2 model simulation are shown for different chemical species with lifetimes ranging from seconds ($OH$) to months ($CO$, $C_2H_6$). Active processes in these simulations are as follows: Transport and chemistry (top panel); Transport, chemistry and emissions (middle panel); Transport, chemistry, emissions, convective transport and deposition (bottom panel).} \label{fig:rms_opsplit_all} \end{figure} \begin{figure} \centering \includegraphics[width=.49\textwidth]{Figure5} \caption{Behavior of numerical error in the GEOS-Chem 3-D model simulations. Here TCT denotes Transport-Chemistry-Transport, CTC denotes Chemistry-Transport-Chemistry, and the numbers denote operator splitting time steps in minutes. Relative RMS for different operator splitting approaches for fixed time steps: $\Delta t=2, 30, 60$ mins. Active processes in these simulations are as follows: Transport and chemistry (top panel); Transport, chemistry and emissions (middle panel); Transport, chemistry, emissions, convective transport and deposition (bottom panel).} \label{fig:rms_op_split_dt} \end{figure} \section{Conclusions and Future work} We have presented a way to characterize operator splitting errors in the context of atmospheric chemistry modeling. Our approach numerically extends one--dimensional linear results to non-linear 1D and 3D cases. These numerical findings are relevant to global atmospheric chemistry modeling. Our findings suggest that stiff operators should be evaluated last in operator splitting methodologies.
This result is consistent with the linear results presented in \citep{bib:Spo00} and with previous studies in numerical weather prediction \citep{bib:Dub05}. Differences of approximately $10\%$ across species are found when comparing the outputs of global simulations using different operator splitting approaches, using multiple splitting time steps. This, in combination with our one-dimensional results, suggests that operator splitting errors do not exceed $10\%$ relative errors in global simulations. We also show that, at current spatial resolutions, the numerical diffusion errors introduced in global atmospheric chemistry models eclipse errors emerging from operator splitting techniques. \subsection{Future work} Future studies should identify whether operator splitting strategies that evaluate fast-dynamics operators last in global simulations lead to a better match between simulations and observations. Further exploration is also required regarding the effect of different operator splitting strategies on the time integration of the governing equations of aerosol dynamics and different choices of boundary layer mixing schemes. Additional ``toy-tests'' that should be explored in order to further understand numerical errors introduced by different operator splitting strategies include those discussed in \citep{bib:Lau14, bib:Pud06}. Finally, nuances between operator splitting approaches in Eulerian and Semi-Lagrangian transport schemes should be more deeply investigated \citep{bib:Pud97}. \begin{figure} \centering \includegraphics[width=.49\textwidth]{Figure6} \caption{Behavior of numerical error in the GEOS-Chem 3-D model simulations.
Relative RMS for transport--only (top panel) and chemistry--only (bottom panel) simulations using different time steps: $\Delta t=2, 30, 60$ mins.} \label{fig:rms_chem_only} \end{figure} \begin{figure} \centering \includegraphics[width=.49\textwidth]{diff_isop_transp_lev0} \caption{Comparison of isoprene concentrations using different time steps for GEOS-Chem transport--only simulations. Isoprene concentrations at the surface level from the model simulation with time step of 60 minutes (top-left panel) are compared to the model simulation with time step of 2 minutes (top-right panel). Absolute (bottom-left) and relative differences (bottom-right) are also shown.} \label{fig:isoprene_transp_only} \end{figure} \begin{figure} \centering \includegraphics[width=.49\textwidth]{diff_isop_chem_lev0} \caption{Comparison of isoprene concentrations using different time steps for GEOS-Chem chemistry--only simulations. Isoprene concentrations at the surface level from the model simulation with time step of 60 minutes (top-left panel) are compared to the model simulation with time step of 2 minutes (top-right panel). Absolute (bottom-left) and relative differences (bottom-right) are also shown.} \label{fig:isoprene_chem_only} \end{figure} \begin{figure} \centering \includegraphics[width=.49\textwidth]{diff_isop_op_split_lev0} \caption{Comparison of isoprene concentrations using Transport-Chemistry-Transport (time step of 60 minutes) versus Chemistry-Transport-Chemistry (time step of 2 minutes).} \label{fig:isoprene_op_split_only} \end{figure} \section*{Acknowledgements} MS and LZ would like to thank the technical assistance provided by Claire Carouge. MS would like to thank Jonathan Pines for his involvement in the exploratory phases of this project. This work was partially funded by the National Natural Science Foundation of China (41205103).
\section{Introduction} \label{sec: intro} Energy transport occurs in many contexts: from circuits and molecular junctions to processes like photosynthesis~\cite{Kassal2013DoesPhotosynthesis, Ishizaki2009UnifiedApproach, Engel2007EvidenceSystems, Brixner2017ExcitonSystems} and the electron transport chain in biology~\cite{Kundu2017NanoscaleHarvesting}. This fundamental process has very different features depending on the scale on which it acts and the specifics of the system coupling to the environment~\cite{Amarnath2016MultiscalePlants, Bennett2013ADescription}. For over a decade, a lot of work has exposed the mechanisms of Environmental Noise-Assisted Quantum Transport (ENAQT)~\cite{Plenio2008Dephasing-assistedBiomolecules, Mohseni2008Environment-AssistedTransfer, Chin2010Noise-assistedComplexes, Zerah-Harush2018UniversalNetworks, Dwiputra2021Environment-assistedEdges}, a phenomenon describing how incoherent processes from interactions with the environment around a system can improve energy transport in quantum systems. This work was heavily motivated by the possible connection between ENAQT and the efficiency of photosynthesis~\cite{Kassal2013DoesPhotosynthesis,Engel2007EvidenceSystems, Plenio2008Dephasing-assistedBiomolecules,Mohseni2008Environment-AssistedTransfer, Chin2010Noise-assistedComplexes,Lambert2012QuantumBiology, Huelga2013VibrationsBiology, Stones2016VibronicTransfer}, though recent work suggests the relationship between the two may be more nuanced~\cite{Harush2021DoNot, Higgins2021PhotosynthesisTransfer, Duan2017NatureTransfer.}. There are a number of different ways in which ENAQT can arise, as shown in figure \ref{fig: ENAQT schematic}. 
These include line broadening, which can help to overcome energetic barriers; the breaking up of an `invariant subspace' of the system Hamiltonian that is inaccessible to extraction operators on a quantum system~\cite{Chin2010Noise-assistedComplexes}; and momentum rejuvenation, which counteracts the tendency of a fraction of the excitation to get stuck in only sluggishly propagating states~\cite{Li2015MomentumFlow}. Recent studies of steady state populations have also shown that the occupation of system sites becomes more uniform when transport efficiency is near-optimal~\cite{Zerah-Harush2018UniversalNetworks, Dwiputra2021Environment-assistedEdges,Zerah-Harush2020EffectsTransport}; this population uniformisation phenomenon is discussed in section~\ref{sec: uniformisation}. \begin{figure}[H] \centering \includegraphics[width = .95\linewidth]{figure1.pdf} \caption{Illustrations of the ENAQT mechanisms that are relevant in this paper: dephasing-induced line broadening (a), the invariant subspace (b) and momentum rejuvenation (c). Dephasing (and other forms of decoherence) acts to broaden the linewidth of system states, making otherwise forbidden transitions energetically possible, which enables faster energy transport in disordered systems. The invariant subspace describes the eigenstates of a coupled system that have zero overlap with the particular energy extraction site $\ket{i}$; disorder and localisation increase the extent of this subspace, and environmental noise is needed to access it from the extraction site. Momentum rejuvenation is a finite-size effect: it describes how high group velocity components of a population leave a system first, producing a skewed velocity distribution. Incoherent noise resets the distribution, effectively pumping population from low to high group velocities.} \label{fig: ENAQT schematic} \end{figure} In this paper, we perform a systematic study of how localising the eigenstates of 1D chains modifies their transport efficiency.
Figure \ref{fig: chain schematic} illustrates the model we will consider here, which allows us to study three mechanisms that limit the delocalisation of chain eigenstates: limiting the total length of the chain, introducing static disorder, and applying a uniform energy gradient. Varying static disorder induces Anderson localisation~\cite{Anderson1958AbsenceLattices}, while a linear energy gradient produces Wannier-Stark localisation~\cite{Wannier1960WaveField}. \begin{figure}[H] \centering \includegraphics[width = .95\linewidth]{img/figure2.pdf} \caption{Schematic view of our system setup, showing a chain with ten sites of different energies; the energy of each site is altered by some random disorder $\zeta_i$ and a uniform and linear energy gradient $\eta$. Coupling to an environment induces dephasing $\Gamma$ on each site. The measure of transport efficiency that we use is the steady state current $I_{ss}$ extracted from the last site. After extraction the chain population is reinjected back onto all sites equally. Our goal is to find $\Gamma_{optimal}$ where $I_{ss}$ is maximised for the given combination of $\eta$ and $\zeta$.} \label{fig: chain schematic} \end{figure} Previous studies on the effects of disorder on ENAQT have focused on how disorder affects the extent of the invariant subspace~\cite{Chin2010Noise-assistedComplexes, Caruso2009HighlyTransport} as well as the distribution of steady-state populations~\cite{Zerah-Harush2020EffectsTransport}. These studies have consistently found that as static disorder increases, more dynamic disorder is needed to improve transport efficiency~\cite{Chin2010Noise-assistedComplexes, Zerah-Harush2020EffectsTransport}. More static disorder means more pure dephasing is needed to enable otherwise forbidden transitions; therefore the optimal pure dephasing rate is generally positively correlated with static disorder. Momentum rejuvenation, unlike other ENAQT mechanisms, is a finite-size effect~\cite{Li2015MomentumFlow}.
High group velocity components of a propagating wave-packet explore and quickly exit the finite sized system, leaving behind a skewed velocity distribution which can be reset by environmental noise, repopulating the depleted higher velocity states. A consequence of this mechanism is that larger systems need longer before faster exciton components can escape, therefore they need to be `reset' less often, meaning the optimal noise rate is reduced. In this paper we aim to produce a deeper understanding of the relationship between ENAQT and localisation, and we will also show that momentum rejuvenation continues to apply in non-degenerate systems and in the steady state. This allows us to compare the effect disorder has on size-dependent and size-independent ENAQT mechanisms. The focus of this work is on chains with short-range nearest-neighbour coupling, as this model is widely studied and can be fully localised. Long-range coupling has been observed in relevant experimental systems such as molecular aggregates~\cite{Spano1991CooperativeAggregates, Gulli2019MacroscopicNanotubes, Strumpfer2012HowLight-harvesting} or ion traps~\cite{Jurcevic2014QuasiparticleSystem}. However, in general the long-range interactions in 1D systems prevent full Anderson localisation~\cite{Levitov1989AbsenceInteraction,Evers2008AndersonTransitions}, and recent work has shown that homogeneous long-range coupling~\cite{Celardo2016ShieldingHopping} or coupling to cavities~\cite{Chavez2020Disorder-EnhancedCavities} can significantly alter 1D responses to disorder in ways beyond the scope of this paper. 
Recent years have also seen broad interest in the transient effects of dephasing on quantum diffusion, such as stochastic resonance and many-body localisation, especially focused on the quasiperiodic Aubry-André model~\cite{Gholami2017NoiseModels, Lorenzo2018RemnantsNoise, Zhu2021ProbingComputer, Malla2018SpinfulCoupling, Bonca2018DynamicsBaths,Dwiputra2021Environment-assistedEdges, Prelovsek2018TransientBosons}, as well as quantum chaotic systems~\cite{Deutsch2018EigenstateHypothesis, DAlessio2016FromThermodynamics, Sa2020ComplexChaos, Rubio-Garcia2021FromLiouvillians}. We find no non-trivial transient effects in our model (see \ref{sec: transient effects}), so it remains an open question how the findings presented here would apply to more complicated scenarios. \section{Theoretical Model} \label{sec: Theory} \subsection{System Model} \label{sec: model} In this paper we model chains within the single-excitation approximation, defining the Hamiltonian as \begin{equation} H = \sum_i \epsilon_i \ket{i}\bra{i} + J \sum_{i=1}^{N-1} \ket{i}\bra{i+1} + \text{H.c.}, \label{eqn: Ham} \end{equation} where $\ket{i}$ represents a state with a single excitation on site $i$, $\epsilon_i$ is the on-site energy of site $i$, H.c.\ denotes the Hermitian conjugate, and $J$ is the strength of the coupling between neighbouring sites. Throughout this work $\hbar = 1$ and all quantities are given in terms of the coupling strength $J$, so we can focus on capturing the influence of disorder and gradients in a very general sense. We consider chains of $N$ sites with site energies $\epsilon_i$ determined by a combination of energy disorder and a gradient in average site energies. As a convention we set $\epsilon_0$ and $\epsilon_N$ to the highest average energy and zero, respectively; from this we define the effective gradient applied to our system as $\eta = \frac{\epsilon_0 - \epsilon_N}{N \cdot J}$, scaled by system length and given in terms of the coupling strength $J$.
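For concreteness, this construction — together with the Gaussian site disorder and the average inverse participation ratio introduced in the following paragraphs — can be sketched in a few lines of numpy. The helper names and the exact linear discretisation of the gradient are our illustrative choices, not taken from the text:

```python
import numpy as np

def chain_hamiltonian(N, eta, sigma=0.0, J=1.0, rng=None):
    """Single-excitation chain Hamiltonian: site energies fall linearly from
    eta*N*J at the first site to 0 at the last, so that
    (eps_first - eps_last)/(N*J) = eta, each perturbed by Gaussian disorder
    of standard deviation sigma."""
    rng = np.random.default_rng(rng)
    eps = eta * N * J * np.linspace(1.0, 0.0, N) + rng.normal(0.0, sigma, N)
    H = np.diag(eps)
    idx = np.arange(N - 1)
    H[idx, idx + 1] = H[idx + 1, idx] = J  # nearest-neighbour coupling + H.c.
    return H

def average_ipr(H):
    """Average inverse participation ratio over all eigenstates of H."""
    _, vecs = np.linalg.eigh(H)  # columns are the eigenstates |E_alpha>
    return float(np.mean(1.0 / np.sum(np.abs(vecs) ** 4, axis=0)))
```

For an ordered chain the eigenstates are delocalised standing waves and the average IPR is a sizeable fraction of $N$; adding disorder drives it down.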
To each site energy we add a perturbation $\zeta(\sigma)_i$ drawn from a Gaussian distribution centred on zero with standard deviation $\sigma$; here, $\sigma$ denotes the disorder strength of the system. The three parameters of system size $N$, disorder strength $\sigma$ and gradient $\eta$ together determine the eigenstates and their localisation. The disorder introduces energy gaps, constraining eigenstates through localisation~\cite{Anderson1958AbsenceLattices}. The gradient could be the result of applying a field to the system, and produces Wannier-Stark localisation~\cite{Wannier1960WaveField, Kleinman1990CommentLocalization, Emin1987Phonon-assistedField, Wilkinson1996ObservationPotential}. To measure localisation we use the inverse participation ratio (IPR), a measure of the number of sites over which each eigenstate $E_\alpha$ is delocalised. The average IPR over all eigenstates is defined as \begin{equation} \text{IPR} = \frac{1}{N}\sum_{\alpha} \frac{1}{\sum_{i} |\braket{i| E_{\alpha}}|^4}. \label{eqn: IPR} \end{equation} This single value represents how localised the system is, with greater localisation implying not only a larger invariant subspace, but also a decreased efficiency in coherent transport. The IPR captures the system-wide impact of different gradients and disorder strengths, making it a natural measure for comparing systems. The effects of gradients and random disorder are illustrated in figure \ref{fig: ipr bias vs Anderson}; the coloured areas in the left panel show one standard deviation around the mean value at each point. \begin{figure}[H] \centering \includegraphics[width = .95\linewidth]{img/figure3.pdf} \caption{(Left) Average IPR against various disorder strengths $\sigma$ for $N = 40$, considered against four energy gradients $\eta$. Coloured areas show $\pm$ one standard deviation; each point is averaged over 100 configurations of disorder.
An ordered ($\square$) and a disordered ($\bigcirc$) point are highlighted, and their eigenspectra are shown in the centre and right panels, respectively. (Centre) The eigenspectrum for a chain with $\eta = 0.1, \sigma = 0J$, showing the slight localisation of eigenstates under a uniform field. The size of each diamond is proportional to the probability of observing each eigenstate on that site. (Right) The eigenspectrum for $\eta = 0.1, \sigma = 0.3695J$, showing a mixture of field effects and disorder, producing inconsistent eigenenergy spacing as well as slightly more localised eigenstates.} \label{fig: ipr bias vs Anderson} \end{figure} \subsection{Dynamics, Lindblad and Redfield Master Equations} \label{sec: dynamics and rates} We model each chain with a Lindblad master equation implemented with the QuTiP package~\cite{Johansson2013QuTiPSystems}, \begin{equation} \Dot{\rho} = -i [H,\rho] + \Gamma \sum_{i = 1}^N \mathcal{L}\left[A_{deph, i}\right]\rho + \gamma_{inj}\sum_{i=1}^N\mathcal{L}\left[A_{inj, i}\right]\rho + \gamma_{trap}\mathcal{L}\left[A_{ext}\right]\rho, \label{eqn: lindblad ME} \end{equation} where \(\mathcal{L}\left[A\right]\rho \) is the Lindbladian dissipator \begin{equation} \mathcal{L}\left[A\right]\rho = \left(A \rho A^\dagger - \frac{1}{2} \{ A^\dagger A, \rho \} \right). \label{eqn: lindblad dephasing} \end{equation} \(\Gamma\) sets the rate of (dephasing) noise in the system, for simplicity assumed to be the same on each site; $\{\cdot,\cdot\}$ is the anticommutator, and $A_{deph,i}$ are Lindblad operators describing the environmental influence on each site $i$. For on-site dephasing in the single-excitation approximation, the operators for on-site energy noise take the form $A_{deph,i} = 2\ket{i}\bra{i} - \mathbb{I}$~\cite{Jeske2015Bloch-RedfieldComplexes, Huo2012InfluenceSystems}.
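The action of these site-local operators is easy to verify numerically: summed over all sites, the pure-dephasing dissipator damps every coherence at a uniform rate while leaving populations untouched. A small numpy sketch of this check (a toy illustration, not part of the transport calculation):

```python
import numpy as np

def dissipator(A, rho):
    """Lindblad dissipator  L[A]rho = A rho A^dag - (1/2){A^dag A, rho}."""
    AdA = A.conj().T @ A
    return A @ rho @ A.conj().T - 0.5 * (AdA @ rho + rho @ AdA)

N = 4
# Site-local dephasing operators A_deph,i = 2|i><i| - I
A_deph = [2 * np.diag(np.eye(N)[i]) - np.eye(N) for i in range(N)]

rho = np.full((N, N), 1.0 / N)  # fully coherent single-excitation state
drho = sum(dissipator(A, rho) for A in A_deph)
# Populations are untouched (the diagonal of drho vanishes) while every
# coherence is damped at the same rate: drho = 4*(diag(rho) - rho).
```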
The extraction operator projects population from the $N^{\text{th}}$ site (the lowest-energy end of the chain) to an external shelf state where it is trapped, $A_{ext} = \sigma_{trap}^+\sigma_{N}^-$. Similarly, population is re-injected from the trap back onto each site with the injection operators $A_{inj, i} = \sigma_{i}^+\sigma_{trap}^-$. To treat these systems at finite temperatures we use the Bloch-Redfield master equation. As we study disordered systems with very mixed energy splittings, we retain all non-secular terms to ensure it remains accurate~\cite{Eastham2016Bath-inducedApproximation}. The master equation reads: \begin{equation} \begin{split} \Dot{\rho_s} &= -i [H,\rho_s] + \gamma_{inj} \sum_{i=1}^N\mathcal{L}\left[A_{inj, i}\right]\rho_s + \gamma_{trap} \mathcal{L}\left[A_{ext}\right]\rho_s \\ &+ \Gamma \sum_{\omega} \sum_{m, n} S_{m,n}(\omega) \left( A_n(\omega)\rho_s A^\dagger_m(\omega) - \frac{1}{2} \{ A^\dagger_m(\omega)A_n(\omega), \rho_s \} \right), \label{eqn: redfield ME} \end{split} \end{equation} where the injection and extraction operators are the same as in equation \ref{eqn: lindblad ME}, $\rho_s$ is the system density matrix and the frequencies $\omega$ are the eigenenergy splittings~\cite{Breuer2002TheSystems}. The $A_{m}(\omega)$ are system-environment interaction operators, derived by transforming the relevant site-basis operators $A_{deph,i} = 2\ket{i}\bra{i} - \mathbb{I}$ into the Hamiltonian eigenbasis~\cite{Breuer2002TheSystems}, and $S_{m n}(\omega)$ is the noise-power spectrum associated with the system-environment interaction.
The noise-power spectrum function is \begin{equation} S_{m n}(\omega) =\left(\mathcal{N}_{BE}(\omega, \beta) + \Theta(\omega)\right)\mathcal{J}(\omega), \end{equation} where $\mathcal{N}_{BE}(\omega, \beta)$ is the Bose-Einstein occupation at phonon inverse temperature $\beta$, $\Theta(\omega)$ is the Heaviside function, allowing phonon-assisted transitions from higher to lower eigenenergies ($\omega > 0$) but not the reverse, and $\mathcal{J}(\omega)$ is the spectral density. We use a flat spectral density, $\mathcal{J}(\omega) = \mathcal{J}$, as assumed in equation \ref{eqn: lindblad dephasing}; this allows a direct comparison with the pure dephasing case. A Drude-Lorentz spectral density is considered and presented in \ref{sec: drude-lorentz-spectra}. \subsection{Steady state setup and observables} \label{sec: initials and observables} As indicated by figure \ref{fig: chain schematic}, we re-inject any extracted population back onto all chain sites equally. By linearity, each injection site represents an initially populated site in the dynamical approach, so this injection scheme is equivalent to a mixed initial state. This choice ensures we capture the general system response, minimising the influence of inversion-symmetry effects, while also allowing us to compare transport properties generically across systems of different sizes without extra concerns about differing distances between injection and extraction. For completeness, we show in \ref{sec: single-site injection} that injection on a single site produces qualitatively similar results. We match the total injection to the extraction rate, so that $\gamma_{inj} = \frac{\gamma_{trap}}{N}$. Our focus on steady-state properties is motivated by prior ENAQT studies, which have shown that the steady-state approach is more natural for energy transport in photosynthetic systems~\cite{Brumer2018SheddingProcesses, Axelrod2018AnLight, Kassal2013DoesPhotosynthesis}.
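To make the full pure-dephasing setup concrete, the following self-contained numpy sketch assembles the Liouvillian for a small chain with site dephasing, extraction from the last site into a shelf state, and uniform reinjection, then reads off the extracted current from the steady state. QuTiP's steady-state solver performs the equivalent computation in our actual calculations; the helper names here are ours, the default $\gamma_{trap} = 3J$ matches the value used in this work, and the dephasing operators are extended trivially over the shelf state for simplicity:

```python
import numpy as np

def liouvillian(H, c_ops):
    """Matrix of rho_dot = -i[H,rho] + sum_k L[C_k]rho, acting on the
    column-stacked vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for C in c_ops:
        CdC = C.conj().T @ C
        L += np.kron(C.conj(), C) - 0.5 * (np.kron(I, CdC) + np.kron(CdC.T, I))
    return L

def steady_state_current(N, Gamma, eta=0.0, sigma=0.0, J=1.0,
                         gamma_trap=3.0, rng=0):
    """Steady-state current I_ss = gamma_trap * rho_NN for an N-site chain
    plus one trap/shelf state (index N)."""
    r = np.random.default_rng(rng)
    d = N + 1
    eps = eta * N * J * np.linspace(1.0, 0.0, N) + r.normal(0.0, sigma, N)
    H = np.zeros((d, d))
    H[:N, :N] += np.diag(eps)
    idx = np.arange(N - 1)
    H[idx, idx + 1] = H[idx + 1, idx] = J
    ket = np.eye(d)
    # site dephasing A_deph,i = 2|i><i| - I (extended over the trap state)
    c_ops = [np.sqrt(Gamma) * (2 * np.outer(ket[i], ket[i]) - np.eye(d))
             for i in range(N)]
    c_ops.append(np.sqrt(gamma_trap) * np.outer(ket[N], ket[N - 1]))  # extraction
    c_ops += [np.sqrt(gamma_trap / N) * np.outer(ket[i], ket[N])      # reinjection
              for i in range(N)]
    # steady state = null vector of the Liouvillian, normalised to unit trace
    vals, vecs = np.linalg.eig(liouvillian(H, c_ops))
    rho = vecs[:, np.argmin(np.abs(vals))].reshape(d, d, order="F")
    rho = rho / np.trace(rho)
    return float((gamma_trap * rho[N - 1, N - 1]).real)
```

Scanning `Gamma` with this function reproduces the characteristic ENAQT shape: the current is suppressed both at weak and at very strong (Zeno-regime) dephasing.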
The steady state $\rho_{ss}$ is found by calculating the zero eigenstate of the system Liouvillian. In our work $\gamma_{trap} = 3J$; changing this value generally changes quantitative values but not the qualitative behaviour~\cite{Zerah-Harush2020EffectsTransport}, unless the rate is so high that it begins to enter the Zeno regime~\cite{Chaudhry2016AEffects}. The key observable of transport efficiency is the steady state current $I_{ss}$, which we aim to maximise. This is simply the product of the extraction rate and the excited steady state population on the extraction site $N$, \begin{equation} I_{ss} = \rho_{N,N} \gamma_{trap}. \label{eqn: steady state current} \end{equation} \section{Results} \label{sec: results} In this section we show how random disorder, energy gradients and system size affect ENAQT in the pure dephasing limit (section \ref{sec: dephasing}), demonstrating the strikingly consistent relationship between the IPR and $\Gamma_{optimal}$. We also present a power law that fits the unbiased chain data, letting us separate the influence of size-independent and size-dependent effects. We then go beyond pure dephasing with the Bloch-Redfield master equation and show these effects are still qualitatively robust at high to intermediate temperatures, but break down in the low-temperature limit (section \ref{sec: bloch-redfield}). \subsection{Pure Dephasing} \label{sec: dephasing} Figure \ref{fig: N40 gradient comparison} shows how energy gradients and random disorder affect the optimal dephasing for chains of 40 sites. In all cases we see that as the IPR decreases, $\Gamma_{optimal}$ increases rapidly, once again confirming the positive correlation between ENAQT peak position and static disorder~\cite{Chin2010Noise-assistedComplexes, Caruso2009HighlyTransport, Zerah-Harush2020EffectsTransport}.
The main finding is that the results for each gradient largely overlap once the chain is disordered enough, indicating that once the system eigenstates are sufficiently localised, the source of localisation does not matter. \begin{figure}[H] \centering \includegraphics[width = .8\linewidth]{img/figure4.pdf} \caption{$\Gamma_{optimal}$ vs IPR for a variety of disordered $N = 40$ chains, colour coded for each of the four different gradients $\eta$ considered. The inset shows how the curve points are generated: by varying the dephasing rate $\Gamma$ until a peak current is found. We see that in the majority of cases the trends overlap for each gradient, suggesting the IPR matters more than the specific energy landscape or gradient. Calculations are repeated 100 times for each combination of gradient and disorder strength. The inset curve is calculated for $\eta = 0.1, \sigma = 0.3695J$, as in figure \ref{fig: ipr bias vs Anderson}.} \label{fig: N40 gradient comparison} \end{figure} Figure \ref{fig: N40 gradient comparison} shows that for sufficiently large IPRs (IPR $\geq 12$), the optimal dephasing for no gradient ($\eta = 0$) is lower than that for a weak gradient ($\eta = 0.1$). With momentum rejuvenation we expect that the larger the system, the lower its $\Gamma_{optimal}$. As such, we infer that the presence of nonzero gradients limits the maximum length over which momentum rejuvenation can act. So for $N = 40$ the gradient $\eta = 0.1$ is enough to slightly reduce the impact of momentum rejuvenation compared to $\eta = 0$. The result is a higher $\Gamma_{optimal}$ for the weak gradient. As discussed in Sec. \ref{sec: Theory}, linear energy gradients localise eigenstates~\cite{Wannier1960WaveField,Emin1987Phonon-assistedField} and alter charge transport~\cite{Chen2020ComputationalEnvironments} differently from random disorder.
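The $\Gamma_{optimal}$ search illustrated in the inset of figure \ref{fig: N40 gradient comparison} amounts to a one-dimensional maximisation of $I_{ss}$ over $\Gamma$. A minimal numpy sketch of such a search, demonstrated here on a stand-in ENAQT-like curve rather than the actual transport model:

```python
import numpy as np

def gamma_optimal(current, lo=1e-3, hi=50.0, n=60, refine=3):
    """Locate the dephasing rate maximising current(Gamma): a coarse
    log-spaced scan of [lo, hi], then successive linear refinements
    around the best grid point."""
    grid = np.geomspace(lo, hi, n)
    for _ in range(refine):
        vals = np.array([current(g) for g in grid])
        k = int(np.argmax(vals))
        grid = np.linspace(grid[max(k - 1, 0)], grid[min(k + 1, n - 1)], n)
    return float(grid[np.argmax([current(g) for g in grid])])

# Stand-in curve with an ENAQT-style maximum at Gamma = 1J (illustration only):
toy_current = lambda g: g / (1.0 + g ** 2)
```

Replacing `toy_current` with a function returning the steady-state current of a given disorder realisation yields one point on the $\Gamma_{optimal}$–IPR curves.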
Yet once the chains are localised enough, momentum rejuvenation's influence is negligible and the optimal dephasing rate is determined only by the IPR, as can be seen for $\eta = 1, 10$. Therefore gradient-induced localisation and disorder-induced localisation only have different effects on ENAQT when the gradients are strong enough to shorten the length scale over which momentum rejuvenation acts, but weak enough to ensure it is still present. The relative impact of momentum rejuvenation is then further affected by the size of the system itself. As discussed above, gradients reduce the maximum length over which momentum rejuvenation acts. This leads to differences in $\Gamma_{optimal}$ if that length is less than the system length. By extension we should then expect the differences between $\eta = 0$ and $\eta = 0.1$ to scale with the number of chain sites $N$. We can observe this directly in figure \ref{fig: N10-40 gradient enaqt curves} which demonstrates how the localising effects of energy gradients and random disorder affect ENAQT for chains of different lengths. \begin{figure}[H] \centering \includegraphics[width = .95\linewidth]{img/figure5.pdf} \caption{$\Gamma_{optimal}$ vs IPR for 4 different chain lengths, calculated with 100 realisations of disorder for each combination of gradients and disorder strength. As $N$ increases the same $\Gamma_{optimal}$-IPR response generally occurs, just stretched over a larger range of IPRs. The final panel shows how the range of $\Gamma_{optimal}$ for each gradient varies with length. For this final panel $N = 50$ was also considered, using 25 different realisations of disorder for each combination of disorder and energy gradient. The upper limit of each bar shows the optimal dephasing for the most localised chains, while the lower limit shows this for ordered chains where $\sigma = 0$. 
The momentum rejuvenation model predicts that as a system gets larger, the optimal dephasing rate decreases; we observe this for the lower edge of the $\eta = 0$ bars, and partially for $\eta = 0.1$ before its effectiveness is reduced at larger $N$, as discussed in section \ref{sec: dephasing}. For the stronger gradients the lower range of $\Gamma_{optimal}$ does not significantly change with $N$, confirming that finite size effects are effectively suppressed for these chains, leaving only the effects of site to site detunings.} \label{fig: N10-40 gradient enaqt curves} \end{figure} The upper limit of the $\Gamma_{optimal}$ bars in the final panel of figure \ref{fig: N10-40 gradient enaqt curves} shows consistent scaling behaviour with length that is independent of gradients. This can be partially explained by regression to the mean, as disorders are sampled from a Gaussian distribution. Maximally localised systems have large detunings between all sites, so the longer the system, the more detunings there are to maximise. Therefore, the larger the system, the harder it is to localise. Close inspection confirms this: the minimum IPR increases with chain length, and by extension the highest $\Gamma_{optimal}$ decreases with chain length. We now focus our attention on the lower end of these $\Gamma_{optimal}$ ranges. First we note that the high-gradient behaviour ($\eta = 1, 10$) has consistent lower limits for all chain lengths considered, meaning ENAQT is determined only by average site to site detunings, with little if any sensitivity to size. Momentum rejuvenation suggests that the larger the system, the lower its optimal noise rate, and we observe that the zero-gradient data extends to lower and lower dephasing rates as $N$ increases, exactly as predicted~\cite{Li2015MomentumFlow}.
We note that the difference in $\Gamma_{optimal}$ between $\eta = 0$ and $\eta = 0.1$ increases with $N$, indicating that the range over which momentum rejuvenation acts has a finite length at $\eta = 0.1$, so that it becomes less effective as system size increases. This can also be seen in how $\Gamma_{optimal}$ changes with $N$: the lower range of $\Gamma_{optimal}$ for $\eta = 0.1$ initially decreases, as expected with momentum rejuvenation, and the trend then reverses as increasing $N$ reduces the impact of momentum rejuvenation. So far we have described under what conditions the size-dependent effects of momentum rejuvenation can be observed, given the presence of energy gradients and random disorder. By focusing on the $\eta = 0$ limit we can directly capture how static disorder alters the influence of finite size effects on ENAQT. We consider the $N = 10-40$ chains and fit the $\eta = 0$ data with a power law of the form \begin{equation} \Gamma_{optimal}(\textnormal{IPR}) \propto \text{IPR}^{\lambda + \kappa \cdot \textnormal{IPR}}. \label{eqn: curve-power-law} \end{equation} The exponent $\lambda$ captures the response across all IPR values, corresponding to the influence of the invariant subspace and the need for line broadening. Meanwhile, the exponent $\kappa$ captures a varying influence, negligible for very localised systems and most influential for systems with large IPR, capturing the influence of finite size effects such as momentum rejuvenation. We note that equation \ref{eqn: curve-power-law} is simply a phenomenological fit that best captures the data produced by our results; the data is not well fit by a single exponential, and alternative functional forms likely require additional fitting parameters. As we show in table \ref{tab:fitting parameters}, both $\lambda$ and $\kappa$ scale monotonically with chain length, as expected. Further details and plots are presented in \ref{sec: curve fitting}.
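Because equation \ref{eqn: curve-power-law} becomes linear in its parameters after taking logarithms, the fit reduces to ordinary least squares. A sketch of this procedure, checked on noiseless synthetic data with parameters of the order reported in table \ref{tab:fitting parameters} (the fitting routine itself is an illustrative choice):

```python
import numpy as np

def fit_power_law(ipr, gamma_opt):
    """Fit Gamma_opt = A * IPR**(lam + kap*IPR) by least squares in log space:
    log Gamma = log A + lam*log(IPR) + kap*(IPR*log(IPR))."""
    ipr = np.asarray(ipr, float)
    y = np.log(np.asarray(gamma_opt, float))
    X = np.column_stack([np.ones_like(ipr), np.log(ipr), ipr * np.log(ipr)])
    (logA, lam, kap), *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.exp(logA)), float(lam), float(kap)

# Noiseless synthetic data: the fit should recover the parameters exactly.
ipr = np.linspace(2.0, 30.0, 40)
gamma = 1.7 * ipr ** (-2.5 + 0.02 * ipr)
A, lam, kap = fit_power_law(ipr, gamma)
```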
\begin{table}[H] \centering \begin{tabular}{|c||c|c|c|c| } \hline N & $A$ $(J)$ & $\lambda$ & $\kappa$ & SD $(\times 10^{-3})$ \\ \hline
10 & 1.59 & -3.14 & 0.07 & 1.80\\%\hline
20 & 1.70 & -2.69 & 0.03 & 0.55\\%\hline
30 & 1.74 & -2.51 & 0.02 & 0.33\\%\hline
40 & 1.73 & -2.36 & 0.01 & 0.22\\\hline \end{tabular} \caption{Table of best-fit values and standard deviation for each chain length with $\eta = 0$. $A$ captures the proportionality, $\lambda$ the size-independent response, and $\kappa$ the size-dependent response. As chains get longer, the fits get more accurate, and the parameters change monotonically as the same behaviours stretch over a new range of IPRs.} \label{tab:fitting parameters} \end{table} \subsection{Finite temperature Bloch Redfield model} \label{sec: bloch-redfield} As described in section \ref{sec: dynamics and rates}, we can go from the phenomenological pure dephasing model---effectively an infinite-temperature limit---to a microscopically founded finite-temperature approach with the use of the full, non-secular Bloch-Redfield master equation, equation \ref{eqn: redfield ME}. These calculations use a flat spectral density for direct comparison with the pure dephasing results above. We define the inverse temperature $\beta = \frac{1}{k_B T}$, and consider three temperatures, $J \cdot \beta = 10, 1, 0.1$ (low, medium and high, respectively), giving the results in figure \ref{fig: 3 temps N10}\footnotemark. \begin{figure}[H] \centering \includegraphics[width = .95\linewidth]{img/figure6.pdf} \caption{$\Gamma_{optimal}$ vs IPR for $N = 10$, considered at three inverse temperatures, once again for 100 realisations of disorder at each combination of gradient and disorder strengths. High and intermediate temperatures have broadly the same monotonic form; we note that the cooler the system, the greater the $\Gamma_{optimal}$. These peaks are found using a bounded peak-finding function with the range $0 < \Gamma < 50J$.
In the finite temperature limit we find some data points clustered at the edges of this range, suggesting either monotonic $I_{ss}$ vs $\Gamma$ curves, or $\Gamma_{optimal} \geq 50J$. We cut off all results within $10^{-3}J$ of either limit and collect them in the offset sections above and below the central axes. Each offset series is separated into points corresponding to different temperatures and annotated with a percentage illustrating what fraction of the data points for that temperature lie there.} \label{fig: 3 temps N10} \end{figure} \footnotetext{In $<$ 0.1\% of cases we found the steady state solver would fail; the optimisation procedure handled this by moving to the next trial point and continuing.} Under these conditions we still recover the characteristic monotonic relationship between $\Gamma_{optimal}$ and the IPR, and we report similar results for sufficiently wide non-flat spectra in figure \ref{fig: finite-temp-lorentzians}. We note that as temperatures lower, $\Gamma_{optimal}$ for a given IPR increases. As temperatures decrease, the specific energy landscape of each chain becomes more important~\cite{Davidson2021PrinciplesAnalysis}, as it becomes harder to avoid trapping population in energy minima. As a result, the range of $\Gamma_{optimal}$ associated with any IPR broadens continuously as temperatures get lower. We therefore conclude that the general ENAQT response to disorder depends not just on localisation, but also on avoiding the trapping of population in energetic minima. When transfer rates up and down in energy become significantly different, the chain population cannot explore all the system sites, trapping population in energetic minima. In this limit the universal response observed for pure dephasing breaks down, producing a regime that is very sensitive to the specifics of the energy landscape.
By the reverse argument, if energy can move reasonably freely around a system then the monotonic relationship between optimal environmental coupling and IPR is well defined. \section{Conclusion} \label{sec: conclusion} We have systematically shown how localisation and optimal ENAQT are related for 1D chains, producing a universal trend strongly determined by the IPR. The IPR in turn is determined by an interplay of energy gradients, random disorder and system length. Comparing the range of $\Gamma_{optimal}$ for various lengths of chain provided further insight into how strong gradients can suppress the influence of finite size effects. Additionally, we have found that the steady state current in unbiased, disordered systems can be described by a power law containing size-dependent and size-independent contributions, illustrating that finite size effects such as momentum rejuvenation still affect how ENAQT acts on disordered systems. Extending the model to include finite temperatures shows that the same response holds at high to intermediate temperatures. By contrast, at lower temperatures population can become trapped in local energetic minima. This decouples transport efficiency from eigenstate localisation, and transport becomes more sensitive to a chain's specific energy landscape. Throughout this paper we have shown that the localisation of a system's eigenstates is directly connected to its optimal conditions for ENAQT. By considering this for a large range of possible conditions we have developed new and broadly applicable insights into how localisation and finite size effects alter ENAQT in 1D. More work is required to confirm whether this response is altered for higher-dimensional systems, where eigenstates may be further delocalised. For example, simple tight-binding honeycomb lattices, as found in graphene nanoribbons, can display quantum chaotic properties under weak static fields~\cite{Kolovsky2013Wannier-StarkLattice}, opening up a new class of systems.
The effects of localisation could be further investigated, whether by taking a more fine-grained look at the unusual Wannier-Stark behaviour in \ref{sec: gradient vs ipr}, or by going to much larger system sizes in order to limit the influence of finite size effects. Lastly, quasiperiodic systems such as the Aubry-André model could be considered, where transient effects such as stochastic resonance with anti-localised eigenstates~\cite{Gholami2017NoiseModels} may provide new insights into ENAQT beyond the steady state. \section*{Acknowledgements} We thank Scott Davidson, Dominic Rouse and Gerard Valentí-Rojas for helpful discussions. This work was supported by EPSRC Grant No. EP/L015110/1. Computations were carried out using QuTiP~\cite{Johansson2013QuTiPSystems}; figures were made in matplotlib~\cite{Hunter:2007}. \bibliographystyle{unsrt}
\section{Introduction} \vspace{-0.5em} Image editing has been widely studied in the field of computer vision due to its usefulness in photo-realistic editing applications, social media image sharing, and image-based advertisement. An image can be used to transfer its style into the target image~\cite{gatys2016image, gatys2016neural}. Also, modifying specific parts in the human face image, such as hairstyle or color, is useful in image editing applications~\cite{xia2021tedigan,Patashnik_2021_ICCV}. The purpose of semantic image manipulation is to generate a novel image that contains both source image identification and semantic information of user intention. In this paper, we tackle the semantic image manipulation task, which is the task of modifying an image with user-provided semantic cues. To apply the user intention into the image, a mixture of sketches and text is used to perform image manipulation and synthesis~\cite{park2019semantic,xia2021tedigan}. User intention can be applied by drawing a paint~\cite{park2019semantic} or writing text with semantic meanings~\cite{xia2021tedigan,gatys2016neural}. \iffalse TediGAN~\cite{xia2021tedigan} also gives extra-inputs to text-based image manipulation by embedding other modalities, such as mask and sketch, in the same space as text. More specifically, TediGAN projects the multi-modal embedding of the mask onto $\mathcal{W}$ Space of StyleGAN. Multi-modal image manipulation is performed by style mixing latent code obtained from content and text-driven latent code. However, extra inputs mainly support refining shape or boundary of objects in the image, and such objects are often predefined. Also, non of these works have use sound directly to edit the image. \fi Text-based image manipulation methods are proposed to edit the image conditionally~\cite{el2019tell, jiang2021language, li2020manigan, nam2018tagan,xia2021tedigan}. These works modify target contents in the image based on the text information. 
Among the text-based image manipulation methods, StyleCLIP~\cite{Patashnik_2021_ICCV} considered leveraging the representational power of Contrastive Language-Image Pre-training (CLIP)~\cite{radford2learning} models to produce text-relevant manipulations with given text input. StyleCLIP maintains high quality image generation ability using StyleGAN~\cite{jeong2021tr} while allowing insertion of semantic text into the image. However, text-based image manipulation has an inherent limitation when applying sound semantics into the image, due to the lack of handling vivid sound, which has infinite variation. Since the text is the form of discrete character, expressing the spectrum that has a continuous and dynamic context of sound is extremely difficult in our world. For example, every ``thunder'' generates different loudness and characteristic of ``\textit{sound of thunder}''. The discreetness of a text message prevent expressing the detailed difference of the sound around us. Therefore, the text-based image manipulation model has limitations in transferring specific, vivid sound semantics into the source image for the image modification. Sound provides polyphonic information of the scene and contains multiple sound events~\cite{9524590}. That is why watching a movie with sound is more realistic than reading a book. Our daily environment is filled with diverse sound sources and a complex blend of audio signals~\cite{9524590}. Therefore, sound, which we focus on, is a necessary modality for image manipulation. Several studies~\cite{chen2017deep, hao2018cmcgan, oh2019speech2face, qiu2018image, wan2019towards, zhu2021deep} have attempted to visualize the meaning of sound, but it is still challenging to reflect sound events in high-resolution images due to two reasons. The first reason is the lack of a suitable high-resolution audio-visual dataset. 
Audio-visual benchmark video datasets~\cite{caba2015activitynet, kay2017kinetics, soomro2012ucf101} for GAN training has generally lower resolution than high-resolution image datasets including Flickr-Faces-HQ (FFHQ)~\cite{karras2019style} and The Large-scale Scene Understanding Challenge (LSUN)~\cite{yu2015lsun}. There is no dataset with as many audio-visual pairs as the number of image-text pairs used for CLIP training. CLIP uses 400 million image-text pair data to learn the relationship between very large and diverse image and text modalities, whereas audio-visual pair data is still insufficient. Secondly, it is difficult to discover potential correlations between auditory and visual modalities~\cite{zhu2021deep}. Extracting appropriate temporal context, tone, and theme from the sound is difficult. To overcome these challenges of manipulating images with sound semantics, we introduce a novel image manipulation method driven by sound semantics~(see Fig.~\ref{fig:contrastivelearning}). As shown in Fig.~\ref{fig:fig1}, an image of an old car is manipulated into an old car with a fire truck-like exterior appearance when adding a siren sound. Our model consists of two main stages: (i) the CLIP-based Multi-modal Representation Learning, where an audio encoder is trained to produce a latent representation aligned with textual and visual semantics by leveraging the representation power of pre-trained CLIP models. (ii) the Sound-Guided Image Manipulation, where we use the direct latent code optimization to produce a semantically meaningful image in response to a user-provided sound. Our experimental results show that the proposed method supports a variety of sound sources with a better reflection of given audio information when transferring image styles. The sound-based approach supports more diverse and detailed information related to scenes compared to text-based image manipulation methods. 
Our main contributions are listed as follows: \begin{itemize} \vspace{-0.7em} \item We propose multi-modal contrastive losses to expand the CLIP-based embedding space. Moreover, we introduce contrastive learning on augmented audio data, which helps to learn a more robust representation. Here, we achieve state-of-the-art performance for a zero-shot audio classification task.\vspace{-0.7em} \item We propose semantic-level image manipulation solely based on the given audio features, including temporal context, tone, and volume.\vspace{-0.7em \item We propose the sound-guided code optimization steps with adaptive layer masking for putting sound meaning into images, enhancing the realism of the output.\vspace{-0.7em} \end{itemize} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{main_figure1.pdf} \end{center} \vspace{-1.3em} \caption{Our model consists of two main steps: (a) the {\em CLIP-based Contrastive Latent Representation Learning} step and (b) the {\em Sound-Guided Image Manipulation} step. In (a), we train a set of encoders with three different modalities~(audio, text, and image) to produce the matched latent representations. The latent representations for a positive triplet pair~(e.g., audio input: ``Explosion'', text: ``explosion'', and corresponding image) are mapped close together, while that of negative pair samples further away in the~(CLIP-based) embedding space~(left). In (b), we use a direct code optimization approach where a source latent code is modified in response to user-provided audio, producing a sound-guided image manipulation result~(right).} \label{fig:contrastivelearning} \vspace{-1.3em} \end{figure*} \section{Related Work} \myparagraph{Text-guided Image Manipulation.} Text-guided image manipulation is the most widely studied among guidance based tasks. 
Several studies~\cite{dong2017semantic, li2020manigan, nam2018tagan} employed a GAN-based encoder-decoder structure to preserve the features of the image while performing image manipulations corresponding to the text description. StyleCLIP~\cite{Patashnik_2021_ICCV} and TediGAN~\cite{xia2021tedigan} utilize the latent space of the pre-trained StyleGAN and the prior knowledge from CLIP~\cite{radford2learning}. StyleCLIP performs image manipulation using a user-provided text prompt. TediGAN enables image generation and manipulation using a GAN inversion technique with multi-modal mapping. Beyond text and images, a sound can express a complex context appearing in a scene, and there is a correspondence between a sound and an event occurring in the scene. \myparagraph{Sound-guided Image Manipulation.} Sound contains temporally dynamic information about a scene, which can be used as an imagery source for image manipulation. Some approaches have been introduced for the sound-guided image manipulation task. However, the previous works mainly focus on music (instead of using sound semantics), including music-to-visual style transfer with a cross-modal learning strategy~\cite{lee2020crossing} and a neural music visualizer that maps music embeddings to visual embeddings from StyleGAN~\cite{jeong2021tr}. To manipulate the image according to the sound, \textit{Tr$\ddot{a}$umerAI}~\cite{jeong2021tr} visually expresses music by a latent transfer mapping of music to StyleGAN's style embedding. However, when navigating the latent space of StyleGAN, the above studies only react to the sound rather than reflecting its semantics. \textit{Crossing you in style}~\cite{lee2020crossing} uses the time period to define the semantic relationship between the sound and visual domains, but it is still limited to transferring the image style.
Our proposed method can isolate the modification area in the source image, such as modifying the emotion of the face while preserving the color of the hair. \myparagraph{Interpreting Latent Space in StyleGAN.} The intermediate latent space in pre-trained StyleGAN~\cite{karras2019style} mitigates the disentanglement issue and allows the generated images to be manipulated meaningfully according to changes in the latent space. The extended latent space $\mathcal{W}+$ allows image manipulation with interpretable controls from a pre-trained GAN generator~\cite{abdal2019image2stylegan, karras2019style, karras2020analyzing}. For latent space analysis on audio sequences, \textit{Audio-reactive StyleGAN}~\cite{brouwer2020audio} generates an image at every time step by calculating the magnitude of the audio signal and moving accordingly in the latent space of StyleGAN. However, the method cannot control the meaning of sound in the latent space: the motion in StyleGAN's latent space is mapped only to the magnitude of the sound. In contrast, our novelty is to manipulate images with the semantic properties of sound. \myparagraph{Audio-visual Representation Learning.} Cross-modal representation learning obtains relationships between different modalities in audio-visual tasks such as video retrieval, and in text-image cross-modal tasks such as image captioning and visual question answering. Audio-visual representation learning studies~\cite{DBLP:journals/corr/AytarVT17,nagrani2018learnable, suris2018cross} aim to map both modalities to the same embedding space. The correlation between modalities is learned by contrastive learning on composite audio-visual pairs~\cite{chen2021distilling, mazumder2021avgzslnet, sun2020learning}. However, audio-visual representation learning is still challenging because there is no dataset as adequate as the one used for CLIP~\cite{radford2learning} for learning the correlation between different modalities.
CLIP learned the relationship between image and text embeddings by multi-modal self-supervised learning on 400 million image-text pairs and showed zero-shot inference performance comparable to supervised learning on most image-text benchmark datasets. In this paper, the audio encoder not only exploits the representation ability of CLIP but also learns supervisory signals from the audio data itself in a self-supervised manner. As a result, our method obtains an audio-specific representation for sound-guided image manipulation. \section{Method} We follow the existing text-guided image manipulation model, StyleCLIP~\cite{Patashnik_2021_ICCV}. Both our model and StyleCLIP manipulate the latent code of StyleGAN using a joint embedding space between modalities. However, our model extends the CLIP~\cite{radford2learning} embedding space to the audio modality, which was not embedded before. We also introduce novel contrastive losses and adaptive masking for sound-guided image manipulation. Our model consists of two main steps: (i)~CLIP-based Multi-modal Latent Representation Learning and (ii)~Sound-guided Image Manipulation. First, we train audio, text, and image encoders to generate new latent representations. To do so, we train the audio encoder using the InfoNCE loss~\cite{oord2018representation, alayrac2020self, zhang2020contrastive} to produce a latent representation that is aligned with the representations from the pre-trained CLIP's text and image encoders. Such aligned representations can be used for image manipulation with the provided audio input. After the pre-training step, we use the encoders to manipulate images according to a target sound input~(e.g., images with different facial expressions can be manipulated with different sound inputs).
\subsection{Multi-modal Latent Representation Learning} As shown in Fig.~\ref{fig:contrastivelearning}~(a), we train a set of encoders with three different modalities \{audio, text, and image\} to produce the matched representations in the embedding space. Specifically, given audio, text, and image inputs, i.e. $x_a$, $x_t$, and $x_v$, we use three different encoders to obtain a set of $d$-dimensional latent representations, i.e. $\bf{a}$, $\bf{t}$, and ${\bf{v}}\in\mathcal{R}^{d}$, respectively. These latent representations are learned via a typical contrastive learning approach following the work by Radford~\etal~\cite{radford2learning} -- the latent representations for a positive triplet pair are mapped close together in the embedding space, while those of negative pairs are mapped further away. Learning such a joint representation from scratch is, however, generally challenging due to the lack of multi-modal datasets, which can provide positive and negative pairs. Thus, we instead leverage the pre-trained CLIP model, which optimized a visual-textual joint representation by contrastive learning. Then, we train an audio encoder to produce an aligned representation by using contrastive learning. Details are explained in the next section. Note that we obtain a latent representation $\hat{{\bf{a}}}\in\mathcal{R}^{d}$ from an augmented audio input $\hat{x}_a$, which is shown to be useful for improving the quality of the latent representation, as is common practice in self-supervised representation learning. \myparagraph{Matching Multi-modal Representations via Contrastive Loss.} We use the InfoNCE loss~\cite{alayrac2020self} to map positive audio-text pairs close together in the CLIP-based joint embedding space, while negative pairs are pushed further away.
Formally, given a minibatch of $N$ audio-text representation pairs $\{{\bf{a}}_i, {\bf{t}}_i\}$ for $i\in\{1, 2, \dots, N\}$, we first compute the following audio-to-text loss function for the $i$-th pair: \begin{equation} l_{i}^{(a\rightarrow t)}=-\text{log}\cfrac{\exp(\langle{\bf{a}}_i, {\bf{t}}_i\rangle/\tau) }{\sum_{j=1}^N\exp(\langle{\bf{a}}_i, {\bf{t}}_j\rangle/\tau)} \label{loss:mini_con} \end{equation} where $\langle{\bf{a}}_i, {\bf{t}}_j\rangle$ represents the cosine similarity, i.e. $\langle{\bf{a}}_i, {\bf{t}}_j\rangle = {\bf{a}}_i^\intercal{\bf{t}}_j/\|{\bf{a}}_i\|\|{\bf{t}}_j\|$, and $\tau$ is a temperature parameter. This loss function is the log loss of an $N$-way classifier that predicts $\{{\bf{a}}_i, {\bf{t}}_i\}$ as the true representation pair. As the loss function is asymmetric, we define the analogous text-to-audio contrastive loss: \begin{equation} l_{i}^{(t\rightarrow a)}=-\text{log}\cfrac{\exp(\langle{\bf{t}}_i, {\bf{a}}_i\rangle/\tau) }{\sum_{j=1}^N\exp(\langle{\bf{t}}_i, {\bf{a}}_j\rangle/\tau)} \label{loss:mini_con2} \end{equation} Concretely, we minimize the following loss function $\mathcal{L}_{\textnormal{nce}}$ as the sum of the two losses $l_{i}^{(a\rightarrow t)}$ and $l_{i}^{(t\rightarrow a)}$ over all positive audio-text representation pairs in each minibatch of size $N$: \begin{equation} \begin{aligned} \mathcal{L}_{\textnormal{nce}}^{(a \leftrightarrow t)}=\cfrac{1}{N}\sum_{i=1}^N (l_{i}^{(a\rightarrow t)} + l_{i}^{(t\rightarrow a)}) \end{aligned} \label{loss:nce} \end{equation} \myparagraph{Applying Self-supervised Representation Learning for Audio Inputs.} Self-supervised learning approaches rely on a contrastive loss that encourages representations of different views of the same instance to be close in the embedding space, while views of different instances are pushed away from each other.
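To make the objective concrete, the following is a minimal plain-Python sketch of this symmetric InfoNCE loss; it is an illustrative toy with two-dimensional embeddings, not the actual 512-dimensional CLIP features or training code.

```python
import math

def cosine(u, v):
    # <u, v> = u^T v / (||u|| ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def info_nce(A, T, tau=0.07):
    """Symmetric InfoNCE over a minibatch of N paired embeddings.

    A[i] (audio) and T[i] (text) form the positive pair; every
    cross-index combination acts as a negative. Returns the average
    of the audio-to-text and text-to-audio N-way log losses.
    """
    n = len(A)
    total = 0.0
    for i in range(n):
        at = [math.exp(cosine(A[i], T[j]) / tau) for j in range(n)]
        ta = [math.exp(cosine(T[i], A[j]) / tau) for j in range(n)]
        total += -math.log(at[i] / sum(at))   # l_i^(a->t)
        total += -math.log(ta[i] / sum(ta))   # l_i^(t->a)
    return total / n

# Matched pairs yield a near-zero loss; mismatched pairs are penalized.
aligned = info_nce([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
shuffled = info_nce([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
```

The same function applies unchanged to the audio self-supervised pairs $(a, \hat{a})$ discussed next, since only the inputs differ.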
We apply this technique to improve the quality of audio representations by minimizing the following $\mathcal{L}_{\textnormal{self}}^{(a \leftrightarrow \hat{a})}$: \begin{equation} \mathcal{L}_{\textnormal{self}}^{(a \leftrightarrow \hat{a})}=\cfrac{1}{N}\sum_{i=1}^N (l_{i}^{(a\rightarrow \hat{a})} + l_{i}^{(\hat{a}\rightarrow a)}) \label{loss:self} \end{equation} where $l_{i}^{(a\rightarrow \hat{a})}$ and $l_{i}^{(\hat{a}\rightarrow a)}$ are defined in a similar way as in Eqs.~\ref{loss:mini_con} and \ref{loss:mini_con2}. This loss function is useful for learning subtle differences among sound inputs, as it maximizes the mutual information between two different views of the same input while minimizing the mutual information between views of different inputs. For example, as shown in Fig.~\ref{fig:new_figure}, an audio sample $a_i$ forms a negative pair with $\hat{a}_j$ for $i\neq j$, which induces a diffusive effect in the embedding space. \myparagraph{Data Augmentation.} We further apply a data augmentation strategy to improve the quality of representations and to overcome the lack of large-scale audio-text multimodal datasets. For audio inputs, we apply SpecAugment~\cite{park19e_interspeech}, which visually augments Mel-spectrogram acoustic features by warping the features and masking blocks of frequency channels. For text inputs, we augment text data by (i) replacing words with synonyms, (ii) applying a random permutation of words, and (iii) inserting random words. Note that, for (i) we find synonyms of the given word from WordNet~\cite{fellbaum2010wordnet} and insert the synonym anywhere randomly in the given text input. For example, we augment the original text {\it ``rowboat, canoe, kayak rowing''} to produce the new text {\it ``row canoe, kayak quarrel rowboat.''} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cvpr_figure12.pdf} \vspace{-1.3em} \caption{Multi-modal contrastive learning with audio self-supervised loss.
} \vspace{-1.3em} \label{fig:new_figure} \end{figure} \myparagraph{Loss Function.} To summarize, we minimize the following loss function $\mathcal{L}_\text{total}$: \begin{equation} \begin{aligned} \mathcal{L}_{\textnormal{total}} = \mathcal{L}_{\textnormal{nce}}^{( a \leftrightarrow v)} + \mathcal{L}_{\textnormal{nce}}^{( a \leftrightarrow t)} + \mathcal{L}_{\textnormal{self}}^{( a \leftrightarrow \hat{a})} \end{aligned} \label{loss:con} \end{equation} where $\mathcal{L}_{\textnormal{nce}}^{(a \leftrightarrow v)}$ is the audio-image counterpart of Eq.~\ref{loss:nce}. \subsection{Sound-guided Image Manipulation} After learning the multi-modal joint embedding space by minimizing Eq.~\ref{loss:con}, we use a direct latent code optimization method to manipulate the given image, similarly to StyleCLIP~\cite{Patashnik_2021_ICCV}. As shown in Fig.~\ref{fig:contrastivelearning} (b), our model minimizes the distance between a given source latent code and an audio-driven latent code in the learned joint embedding space to produce sound-guided manipulated images. Moreover, we propose an {\it Adaptive Layer Masking} technique, which adaptively manipulates the latent code. \myparagraph{Direct Latent Code Optimization.} We employ the direct latent code optimization for sound-guided image manipulation by solving the following optimization problem: \begin{equation} \begin{aligned} \mathcal{L}_{man} =\ & \underset{w_a \in \mathcal{W}+}\argmin\;{d_{\textnormal{cosine}}(G(w_a),a)} + \lambda_{\textnormal{ID}}{\mathcal{L}_{ID}}(w_a) \\ & \hspace{3.57 cm} + \lambda_{sim}||{g} \cdot {(w_a - w_s)||_2}\\ \end{aligned} \label{loss:man} \end{equation} where $w_s\in\mathcal{W}$~(the intermediate latent space in StyleGAN) is a given source latent code and $w_a\in\mathcal{W}+$ is the audio-driven latent code. $\lambda_{sim}$ and $\lambda_{ID}$ are hyperparameters. $g$ is a trainable vector to mask the specific style layers adaptively. $\mathcal{L}_{\textnormal{ID}}$ and $G$ are the identity loss and the StyleGAN-based generator, respectively.
The source latent code $w_s$ is either a latent code randomly generated from $G$ or one obtained from an existing input image through GAN inversion~\cite{richardson2021encoding, 10.1145/3450626.3459838}. With such an optimization scheme, we minimize the cosine distance $d_{\textnormal{cosine}}(G(w_a),a)$ between the embedding vectors of the manipulated image $G(w_a)$ and the audio input $a$. \myparagraph{Identity Loss.} The similarity to the input image is also controlled by the identity loss function $\mathcal{L}_{\textnormal{ID}}$, which is defined as: \begin{equation} \mathcal{L}_{\text{ID}}(w_a) = 1 - \langle R(G(w_s)), R(G(w_a)) \rangle \end{equation} where $R$ is the pre-trained ArcFace~\cite{deng2019arcface} model for face recognition; thus minimizing this loss maximizes the cosine similarity $\langle R(G(w_s)), R(G(w_a)) \rangle$ between its arguments in the latent space of the ArcFace network. This allows manipulating facial expressions without changing the personal identity. Note that we disable the identity loss by setting $\lambda_{\textnormal{ID}}=0$ for all other image manipulations. \myparagraph{Adaptive Layer Masking.} We control style changes with adaptive layer masking. $L_2$ regularization is effective in keeping the image generated from the moved latent code close to the original~\cite{Patashnik_2021_ICCV}. However, StyleGAN's latent code has different properties in each layer, so different weights should be applied to each layer if the user-provided attribute changes. We use layerwise masking to keep compact content information within the style latent code. In StyleGAN2~\cite{karras2020analyzing}, the latent code is represented as $ w \in \mathbb{R}^{L \times D}$, where $L$ is the number of the network layers, and $D$ is the latent code's dimension size. We declare a parameter vector $g$ of dimension $L$. In the latent optimization step, $g$ and $w$ are multiplied per layer.
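As an illustration, the gated similarity term $\lambda_{sim}\|g \cdot (w_a - w_s)\|_2$ with one gate per style layer can be sketched as follows; the tiny 3-layer, 2-dimensional codes are placeholders for StyleGAN2's actual $18 \times 512$ latents, and this is not the paper's implementation.

```python
import math

def gated_l2(w_a, w_s, g):
    """|| g * (w_a - w_s) ||_2 with one gate g[l] per style layer.

    w_a, w_s: L x D latent codes (lists of rows); g: length-L gates.
    A layer with a small gate contributes little to the penalty, so
    the optimizer is free to move that layer's code.
    """
    sq = 0.0
    for gate, row_a, row_s in zip(g, w_a, w_s):
        for a, s in zip(row_a, row_s):
            sq += (gate * (a - s)) ** 2
    return math.sqrt(sq)

w_s = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
w_a = [[3.0, 4.0], [3.0, 4.0], [3.0, 4.0]]
open_gates = gated_l2(w_a, w_s, [1.0, 1.0, 1.0])   # plain L2 distance
layer0_only = gated_l2(w_a, w_s, [1.0, 0.0, 0.0])  # only layer 0 penalized
```

During the optimization, the gates would be updated jointly with the latent code, which is the adaptive behavior described above.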
$g$ is iteratively updated, which adaptively manipulates the latent code. \myparagraph{Sound and Text Multi-modal Style Mixing.} Multi-modal manipulation with audio and text is based on the style mixing of StyleGAN. Different layers of the $w$ latent code in StyleGAN represent different properties. Because audio and text share the same new multi-modal embedding space, selecting specific layers of the latent codes guided by audio and text can manipulate the image using properties of both audio and text. \section{Experiments} \myparagraph{Implementation Details.} Following CLIP~\cite{radford2learning}, we use the Vision Transformer (ViT)~\cite{dosovitskiy2021an} for our image encoder and the Transformer~\cite{radford2019language} for our text encoder. Note that we use a pre-trained model from \cite{radford2learning}. For our audio encoder, we use ResNet50 following~\cite{hershey2017cnn}, where we employ the same output dimension, 512, as the image and text encoders. First, we convert audio inputs to Mel-spectrogram acoustic features. Then, our audio encoder takes these features as an input to produce a 512-dimensional latent representation. Details of the training dataset are given in the supplemental material. For the manipulation step, we leverage StyleGAN2~\cite{karras2020analyzing}'s pre-trained generator. We set the size of the latent code based on the resolution of the learned image. Here, we set $18 \times 512$ for images of size $1024 \times 1024$ and $14 \times 512$ for $256 \times 256$. We train our model for 50 epochs using Stochastic Gradient Descent (SGD) with the cosine cyclic learning rate scheduler~\cite{smith2017cyclical}. We set the learning rate to $10^{-3}$ with the momentum $0.9$ and weight decay $10^{-4}$. The batch size is set to 384. For audio augmentation, we use SpecAugment~\cite{park19e_interspeech} with the frequency mask ratio of $0.15$ and time masking ratio of $0.3$.
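A simplified sketch of this SpecAugment-style masking follows; it applies one frequency mask and one time mask at the quoted ratios (0.15 and 0.3) to a mel-spectrogram stored as a list of rows, and leaves out SpecAugment's time warping, so it is only an approximation of the actual augmentation pipeline.

```python
import random

def mask_spectrogram(mel, freq_ratio=0.15, time_ratio=0.3, seed=0):
    """Zero out one random frequency band and one random time band.

    mel: 2-D list indexed as [mel_bin][frame]. Returns a masked copy;
    the input spectrogram is left untouched.
    """
    rng = random.Random(seed)
    n_mels, n_frames = len(mel), len(mel[0])
    f = max(1, int(n_mels * freq_ratio))
    t = max(1, int(n_frames * time_ratio))
    f0 = rng.randrange(n_mels - f + 1)
    t0 = rng.randrange(n_frames - t + 1)
    out = [row[:] for row in mel]
    for i in range(f0, f0 + f):        # frequency mask
        for j in range(n_frames):
            out[i][j] = 0.0
    for i in range(n_mels):            # time mask
        for j in range(t0, t0 + t):
            out[i][j] = 0.0
    return out

# 8 mel bins x 10 frames of toy data: one bin and three frames get masked.
mel = [[1.0] * 10 for _ in range(8)]
augmented = mask_spectrogram(mel)
```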
For direct latent code optimization, $\lambda_{sim}$ and $\lambda_{ID}$ in Eq. (\ref{loss:man}) are set to $0.008$ and $0.004$ for the FFHQ dataset; and $0.002$ and $0$ for the LSUN dataset. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{cvpr_figure5.pdf} \vspace{-1.5em} \caption{Comparison of sound-guided manipulation results. Given fire crackling (top) and raining (bottom) audio inputs, we manipulate the input image with Tr\"{a}umerAI~\cite{jeong2021tr}, Crossing you in style~\cite{lee2020crossing}, and our method.} \vspace{-1.5em} \label{fig:cvprfig5} \end{figure} \subsection{Qualitative Analysis} \myparagraph{Sound-guided Image Manipulation.} We first compare our sound-guided image manipulation model with existing sound-based style-transfer models including Tr\"{a}umerAI~\cite{jeong2021tr} and Crossing you in Style~\cite{lee2020crossing}. Fig.~\ref{fig:cvprfig5} showcases image manipulation results in response to given audio inputs, including fire crackling and raining. We observe that our model produces a better quality of manipulated images, whereas existing models often fail to capture semantic information of the given audio input~(see 2\textsuperscript{nd} and 3\textsuperscript{rd} columns). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cvpr_figure2.pdf} \vspace{-1.5em} \caption{Given the (a) input image, we compare the image manipulation results between (b-c) text-driven image manipulation approaches (i.e. TediGAN~\cite{xia2021tedigan} and StyleCLIP~\cite{Patashnik_2021_ICCV}) and (d) ours. Attributes for driving such manipulations include baby crying, people coughing, people giggling, and people screaming.} \label{fig:cvprfig2} \vspace{-1.5em} \end{figure} \myparagraph{Comparison of Text-guided Image Manipulation.} We use the latest text-guided image manipulation models as baselines, including TediGAN and the latent optimization technique of StyleCLIP.
As shown in Fig.~\ref{fig:cvprfig2}, the proposed sound-guided image manipulation shows more dramatic results than text-guided manipulation~(TediGAN~\cite{xia2021tedigan} and StyleCLIP~\cite{Patashnik_2021_ICCV}). Unlike text-guided methods, the audio-guided approach achieves natural image style transfer while reflecting multiple labels. For example, TediGAN emphasizes crying, whereas StyleCLIP focuses on the baby when the ``baby crying'' context is given. On the contrary, our proposed method is capable of handling ``baby'' and ``crying'' simultaneously. We demonstrate that each audio sample has its own context, which makes the guidance richer than text~(Fig.~\ref{fig:difference}). If the magnitude of \textit{Thunder} is altered or a specific attribute like \textit{Rain} is added to the audio, the manipulation context becomes more diverse than in text-guided image manipulation. We visualize the direction vector with t-SNE~\cite{van2008visualizing} in a supplemental document. By subtracting the source latent code from the latent code guided by each modality, we show the distribution of manipulation directions. We select the attributes in VGG-Sound~\cite{chen2020vggsound} and randomly sample the audio and text prompts. Although we randomly sample the audio and text from the same labels, the sound-guided latent code shows a more significant transition than the text-guided latent code. We use various text synonyms for a fair comparison, but the text-guided latent code still changes less. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{cvpr_figure11.pdf} \vspace{-1.3em} \caption{Comparison of manipulation results between ours (top) and the existing text-driven manipulation approach, StyleCLIP~\cite{Patashnik_2021_ICCV} (bottom). Unlike the text-driven approach, ours can produce more diverse manipulation results in response to different intensities of raining, i.e.
raining, raining with weak thunder, and raining with strong thunder.} \label{fig:difference} \vspace{-0.8em} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figure9_submission.pdf} \caption{An example of image style mixing jointly with the audio (\textit{people giggling}) and text input (\textit{black woman}).} \label{fig:stylemixing} \vspace{-2em} \end{figure} \myparagraph{Multi-modal Image Manipulation.} Our method ensures that audio, text, and image share the same embedding space. To demonstrate that the multi-modal embeddings lie in the same latent space, we interpolate between text- and sound-guided latent codes~(see the supplementary document). Constructing a multi-modal shareable latent space enables joint modification of the target image with user-provided text and audio inputs from the same embedding space. We further perform multi-modal style mixing experiments by selecting specific layers of the latent codes and mixing styles with audio and text. We find that the sound source can effectively manipulate facial emotion aspects such as ``giggling'' on the face, while the text information controls the background color of the target image (Fig.~\ref{fig:stylemixing}). For the style-mixing details, we follow TediGAN's StyleGAN layerwise analysis~\cite{xia2021tedigan}. In the 18 $\times$ 512 latent code, the style-mixing technique selects the 1\textsuperscript{st} to 9\textsuperscript{th} layers of the sound-guided latent code and the 10\textsuperscript{th} to 18\textsuperscript{th} layers of the text-guided latent code to mix the dynamic characteristics of sound and the human properties of text. \myparagraph{Effect of Adaptive Layer Masking.} In StyleGAN~\cite{karras2019style}, it is necessary to adaptively regularize each style layer since each layer of the latent code encodes different style attributes. For each layer of the latent code, we multiply in a trainable parameter that controls the diversity during regularization.
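Returning to the multi-modal style mixing described above: since different layers encode different properties, mixing amounts to a per-layer selection between the sound-guided and text-guided codes. A minimal sketch, assuming the 18-layer code and the 9-layer split quoted above, with 2-dimensional rows standing in for 512-dimensional layers:

```python
def style_mix(w_audio, w_text, split=9):
    """Layers 1..split come from the sound-guided code (dynamic
    characteristics), the remaining layers from the text-guided
    code (appearance attributes such as color)."""
    assert len(w_audio) == len(w_text)
    return [row[:] for row in w_audio[:split]] + \
           [row[:] for row in w_text[split:]]

w_audio = [[1.0, float(l)] for l in range(18)]  # rows tagged "audio" (1.0)
w_text = [[2.0, float(l)] for l in range(18)]   # rows tagged "text" (2.0)
mixed = style_mix(w_audio, w_text)
```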
The ablation study provides a qualitative comparison of applying adaptive layer masking to the style layers, as illustrated in Fig.~\ref{fig:cvprfig6}. The adaptive masking rectifies the manipulation direction by changing the latent code based on the semantic cue. With the gate function applied, the sound-guided image manipulation is semantically reasonable. For example, a thunderstorm is a blend of thunder and rain sounds: although thunder and lightning are not seen in the second row, lightning and rain appear in the last row. Manipulation results for different $\lambda_{sim}$ and $\lambda_{ID}$ hyperparameters are provided in the supplemental material. \subsection{Quantitative Evaluation} \myparagraph{Zero-shot Transfer.} We compare our model to supervised methods and an existing zero-shot audio classification method. First, we compare with audio embeddings trained by supervised methods: logistic regression on a randomly initialized ResNet50~\cite{hershey2017cnn} as a baseline model, and AudioCLIP~\cite{guzhov2021audioclip}. We consider AudioCLIP a supervised learning method since it fine-tunes on the evaluation dataset using the audio head in the paper. Even though logistic regression is used without additional fine-tuning of the ResNet50 backbone, it is comparable to AudioCLIP, which uses ESResNeXt~\cite{guzhov2021esresne} as the backbone. Secondly, we compare the zero-shot audio classification accuracy with Wav2clip~\cite{wu2021wav2clip}. Table~\ref{zero} shows that our model outperforms previous studies in each task. Our proposed loss learns three modalities in the CLIP embedding space and obtains a richer audio representation through the contrastive loss between audio samples, whereas Wav2clip only learns the relationship between the audio and visual modalities. \myparagraph{Semantic Accuracy of Manipulation.} We quantitatively analyze the effectiveness of our proposed audio-driven image manipulation approach.
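As a sketch of the zero-shot transfer protocol above, classification reduces to a nearest-neighbor search between an audio embedding and the text embeddings of the class names in the joint space; the class names and two-dimensional embeddings below are hypothetical placeholders, not outputs of the actual encoders.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def zero_shot_classify(audio_emb, label_embs):
    """Return the class whose text embedding is closest (by cosine
    similarity) to the audio embedding; no labeled audio is needed."""
    return max(label_embs,
               key=lambda name: cosine(audio_emb, label_embs[name]))

labels = {"thunderstorm": [1.0, 0.1], "giggling": [0.1, 1.0]}
pred = zero_shot_classify([0.9, 0.2], labels)
```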
First, we measure performance on a semantic-level classification task. Given the audio embeddings from our pre-trained audio encoder, we train a linear classifier to recognize eight semantic labels: giggling, sobbing, nose-blowing, fire crackling, wind noise, underwater bubbling, explosion, and thunderstorm. We use StyleGAN2~\cite{karras2020analyzing} weights pre-trained on the FFHQ~\cite{karras2019style} dataset when guiding with the giggling, sobbing, and nose-blowing attributes to compare the semantic-level classification accuracy between text and audio. When guiding with the fire crackling, wind noise, underwater bubbling, explosion, and thunderstorm attributes, StyleGAN2 weights pre-trained on the LSUN (church)~\cite{yu2015lsun} dataset are used. As shown in Fig.~\ref{fig:userstudy}~(a), we generally outperform the existing text-driven manipulation approaches with a semantically richer latent representation. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cvpr_figure6.pdf} \vspace{-1.3em} \caption{Ablation study of adaptive layer masking. The first row is the input image, the second row is the manipulation result when the gate function is not applied, and the third row is the sound-guided image manipulation result after the gate function is applied. } \label{fig:cvprfig6} \vspace{-1.3em} \end{figure} \begin{table}[t!] \caption{Comparison of the quality of audio representations between ours and alternatives. We report classification accuracy (top-1 in \%) of a linear classifier on the ESC-50~\cite{piczak2015esc} and the Urban sound 8k~\cite{Salamon:UrbanSound:ACMMM:14} datasets as well as their zero-shot inference results.
{\it{Abbr.}} $S$: supervised setting.} \label{zero} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{@{}lcccc@{}} \toprule \multirow{2}{*}{Model} & \multirow{2}{*}{$S$} & \multirow{2}{*}{Zero-shot} & \multicolumn{2}{c}{Dataset} \\ \cmidrule{4-5} & & & ESC-50 & Urban sound 8k \\ \midrule ResNet50~\cite{hershey2017cnn} & \checkmark & - & 66.8 \% & \textbf{71.3 \%} \\ AudioCLIP~\cite{guzhov2021audioclip} & \checkmark & - & 69.4 \% & 68.8 \% \\ \midrule Ours w/o $\mathcal{L}_{\textnormal{self}}^{(a \leftrightarrow \hat{a})}$ & - & - & 58.7 \% & 63.3 \% \\ Ours & - & - & \textbf{72.2 \%} & 66.8 \% \\\midrule Wav2clip~\cite{wu2021wav2clip} & - & \checkmark & 41.4 \% & 40.4 \% \\ Ours w/o $\mathcal{L}_{\textnormal{self}}^{(a \leftrightarrow \hat{a})}$ & - & \checkmark & 49.4 \% & 45.6 \% \\ Ours & - & \checkmark & \textbf{57.8 \%} & \textbf{45.7\%} \\ \bottomrule \end{tabular}} \vspace{-0.5em} \end{table} \myparagraph{Distribution of Manipulation Direction.} We can see how much the latent code has changed from the cosine similarity between the source latent code and the manipulated latent code. We compare the cosine similarity between text-guided and sound-guided latent representations. We evaluate the mean and variance of the cosine similarity between $w_s$, a source latent code, $w_a$, an audio-driven latent code, and $w_t$, a text-driven latent code. The latent representations generally exhibit a high-level characteristic of the content~(see the supplementary material). In the latent space of StyleGAN2, the sound-guided latent code moves further from the source latent code than the text-guided latent code, and the images generated from the sound-guided latent code are more diverse and dramatic than with the text-guided method. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cvpr_figure13.pdf} \caption{Quantitative evaluation and user study results.
(a)~Downstream task evaluation to compare the quality of representations between ours and text-driven manipulation approaches on the FFHQ~\cite{karras2019style} dataset. A linear classifier is trained to predict 8 semantic labels, such as giggling, sobbing, etc. Participants answered a questionnaire including (b)~perceptual realism (``Which of the images is the best?'') and (c)~naturalness (``Do you think the provided image looks naturally manipulated?''). For naturalness, we use a 5-point Likert scale.} \label{fig:cvprfig13} \label{fig:userstudy} \vspace{-1.3em} \end{figure} \subsection{User Study} We recruit 100 participants from Amazon Mechanical Turk (AMT) to evaluate our proposed method.
We show participants three manipulated images that are generated by TediGAN~\cite{xia2021tedigan}, StyleCLIP~\cite{Patashnik_2021_ICCV}, and our model. Participants answer the following questionnaire: (i) Perceptual Realism-~\textit{Which of the images is the best?} and (ii) Naturalness-~\textit{Do you think the provided image looks naturally manipulated?} For naturalness, we employ a Likert scale ranging from 1~(low naturalness) to 5~(high naturalness). Fig.~\ref{fig:userstudy}~(b) and Fig.~\ref{fig:userstudy}~(c) show that our method significantly outperforms other state-of-the-art approaches~(TediGAN and StyleCLIP) in terms of \textit{Perceptual Realism} and \textit{Naturalness}. A large portion of participants~($59.4\%$) chose the image generated by our model as the best. Moreover, the result also shows that our method generates more natural images than the other text-driven manipulation approaches. Details are given in the supplementary document. \section{Applications} \myparagraph{Sound-Guided Artistic Paintings Manipulation.} We propose a novel sound-guided image manipulation approach for artistic paintings. We employ a StyleGAN2~\cite{karras2020analyzing} generator pre-trained on the fine-art paintings dataset WikiArt~\cite{saleh2016large}. As shown in Fig.~\ref{fig:cvprfig8}, our model can produce various manipulations of art paintings guided by given audio inputs. We observe that an audio input can successfully provide a semantic cue to manipulate artistic paintings: given a fire crackling sound, a painting is manipulated to contain crackling fire. We also measured the manipulation quality for artistic paintings on the WikiArt dataset via AMT; the responses showed that audio~(73.3\%) is better than text~(26.7\%) in terms of manipulation. \myparagraph{Music Style Transfer.} Our method has the potential to reflect the mood of music in the image style.
Fig.~\ref{fig:cvprfig8} illustrates the results of image style transfer with various music genres. The source latent code is moved close to the keywords of each piece of music, so the mood of the music appears in the image. For instance, \textit{Funny} music manipulates the image towards a fairy-tale style, whereas \textit{Latin} music manipulates the image towards a red-color theme that reflects its \textit{passionate} character. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cvpr_figure8.pdf} \caption{Examples of sound-guided artistic paintings manipulation and music style transfer using our method.} \label{fig:cvprfig8} \vspace{-1.3em} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{cvpr_figure7.pdf} \vspace{-1.3em} \caption{Failure cases of manipulation with our method.} \label{fig:cvprfig7} \vspace{-1.3em} \end{figure} \section{Discussion and Conclusion} We propose a method to manipulate images based on a semantic-level understanding of the given audio input. We map the user-provided audio input into the latent space of StyleGAN2~\cite{karras2020analyzing} and the CLIP~\cite{radford2learning} embedding space. Then, the latent code is aligned with the audio to enable meaningful image manipulation while reflecting the context of the audio. Our model produces responsive manipulations for various audio inputs such as wind, fire, explosion, thunderstorm, rain, giggling, and nose blowing. We observe that an audio input can successfully provide a semantic cue to manipulate images accordingly. However, it remains challenging to preserve the identity in all cases due to drastic changes in image style~(see Fig.~\ref{fig:cvprfig7}). Our method of traversing a multi-modal embedding space can be used in many applications with multi-modal contexts. \newpage { \small \bibliographystyle{ieee}
\section{Introduction} $J/\Psi$ suppression \cite{HMHK86,MS86} was theoretically proposed as an important signal of the quark-gluon plasma (QGP). The basic idea of $J/\Psi$ suppression is that $J/\Psi$ disappears above the QCD critical temperature $T_c$ due to the vanishing of the confinement potential and the appearance of the Debye screening effect. In contrast, some recent lattice QCD calculations indicate the opposite result, namely that $J/\Psi$ and $\eta_c$ survive even above $T_c$ \cite{UKMM01,UNM02,AH04,DKPW04,IDIST06,AAOPS07}. Spectral functions of charmonia are extracted from temporal correlators at high temperature using the maximum entropy method (MEM) in Refs.~4)-8). Although there are some quantitative differences, the peaks corresponding to $J/\Psi$ and $\eta_c$ seem to survive even above $T_c$ ($T_c < T < 2T_c$) in the $c\bar c$ spectral function. However, one may ask a question about the ``survival of $J/\Psi$ and $\eta_c$'' observed in lattice QCD. Are the $c\bar c$ states above $T_c$ observed in lattice QCD really compact (quasi-)bound states? Since colored states are allowed in the QGP phase, there is a possibility that the observed $c\bar c$ state in lattice QCD is just a $c\bar c$ scattering state,\cite{MP08} which is spatially spread. In particular, in lattice QCD even scattering states have a discretized spectrum, due to the finite-volume effect. Therefore, for a fair judgment of the ``survival of charmonia above $T_c$'', it is necessary to clarify whether the $c\bar c$ systems are spatially compact (quasi-)bound states or scattering states. To this end, we study the spatial {\it boundary-condition dependence} of the energy and the spectral function of the $c\bar c$ systems ($J/\Psi$ and $\eta_c$) above $T_c$ in lattice QCD\cite{IDIST06}. For $c\bar c$ scattering states, there occurs a significant energy difference between periodic and anti-periodic boundary conditions on the finite-volume lattice.
In contrast, for spatially compact charmonia, there is almost no energy difference between these boundary conditions.\cite{IDIST06} Using this fact, we investigate the $c\bar c$ system above $T_c$ in terms of its spatial extent. \section{Boundary-Condition Dependence and Compactness of States} We briefly describe the method used to distinguish spatially-localized states from scattering states on the finite-volume lattice.\cite{IDIST06,IDIOOS05} For a compact $c\bar c$ (quasi-)bound state, the wave-function of the $c\bar c$ state is spatially localized, and therefore its energy is insensitive to the spatial boundary condition. In contrast, for a $c\bar c$ scattering state, the wave-function of the $c\bar c$ system is spatially spread, so that there occurs a significant boundary-condition dependence of the energy for the low-lying $c\bar c$ scattering state. Let us estimate the boundary-condition dependence. \begin{itemize} \item Under the {\it periodic boundary condition (PBC)}, the momentum of a quark or an anti-quark is discretized as $p_k=2n_k\pi/L \ (k=1,2,3, \ n_k\in {\bf Z})$ on the finite lattice with the spatial volume $L^3$, and the minimum momentum is $\vec p_{\rm min}=\vec 0$. \item Under the {\it anti-periodic boundary condition (APBC)}, the momentum is discretized as $p_k=(2n_k+1)\pi/L \ (k=1,2,3, \ n_k\in {\bf Z})$. In this case, the minimum momentum is $|\vec p_{\rm min}|= \sqrt{3}\pi/L$. \end{itemize} The energy difference of the low-lying $c\bar c$ scattering state is estimated as $ \Delta E_{\rm scatt}\equiv E_{\rm APBC}^{\rm scatt}-E_{\rm PBC}^{\rm scatt} \simeq 2\sqrt{m_c^2+3\pi^2/L^2} -2m_c \simeq 350{\rm MeV} $ for $L \simeq 1.55{\rm fm}$ in Ref.~7), in the non-interacting case with the charm-quark mass $m_c$.
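This estimate is simple enough to reproduce numerically. The following sketch evaluates the non-interacting relation above; note that the charm-quark mass value $m_c \approx 1.3\,{\rm GeV}$ used here is our assumption and is not quoted explicitly in the text.

```python
import math

HBARC = 0.1973  # GeV*fm, conversion constant for natural units

def delta_e_scatt(m_c_gev, l_fm):
    """Energy difference E_APBC - E_PBC for a non-interacting c-cbar pair.

    PBC:  minimum quark momentum |p_min| = 0
    APBC: minimum quark momentum |p_min| = sqrt(3)*pi/L
    Both the quark and the anti-quark carry |p_min| under APBC.
    """
    p_min = math.sqrt(3.0) * math.pi * HBARC / l_fm       # GeV
    e_apbc = 2.0 * math.sqrt(m_c_gev**2 + p_min**2)       # GeV
    e_pbc = 2.0 * m_c_gev                                  # GeV
    return e_apbc - e_pbc

# With m_c ~ 1.3 GeV (assumed) and L ~ 1.55 fm as in the text:
print(delta_e_scatt(1.3, 1.55))  # ~0.35 GeV, i.e. ~350 MeV
```

This is an order-of-magnitude cross-check only; the observed boundary-condition differences reported below are roughly ten times smaller than this scattering-state estimate.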
In Ref.~7), we consider possible correction of the energy difference from a short-range potential of Yukawa type between a quark and an anti-quark, and find that the correction is negligible compared to the energy difference $\Delta E_{\rm scatt}\simeq 350{\rm MeV}$ estimated in the non-interacting case. \section{Pole-Mass Measurements above $T_c$ in Lattice QCD} First, we perform the standard pole-mass measurement of low-lying $c\bar c$ systems at finite temperature in anisotropic quenched lattice QCD\cite{IDIST06} with the standard plaquette action at $\beta\equiv 2N_c/g^2=6.10$ and the renormalized anisotropy $a_s/a_t=4.0$, {\it i.e.}, $a_t=a_s/4 \simeq (8.12{\rm GeV})^{-1} \simeq 0.024{\rm fm}$ for the spatial and temporal lattice spacing. The adopted lattice sizes are $16^3\times (14-26)$, which correspond to the spatial volume as $L^3 \simeq (1.55{\rm fm})^3$ and the temperature as (1.11$-$2.07)$T_c$. We use 999 gauge configurations, picked up every 500 sweeps after the thermalization of 20,000 sweeps. For quarks, we use $O(a)$-improved Wilson (clover) action on the anisotropic lattice \cite{IDIST06}. We adopt the hopping parameter $\kappa=0.112$, which reproduces the masses of charmonia at zero temperature. To enhance the ground-state overlap, we use a Gaussian spatially-extended operator with the extension radius $\rho=0.2{\rm fm}$ in the Coulomb gauge \cite{IDIST06}, which is found to maximize the ground-state overlap. The energy of the low-lying $c\bar c$ state is extracted from the temporal correlator of the spatially-extended operators, where the total momentum of the system is projected to be zero. \begin{table}[t] \begin{center} \caption{The energy of the $c\bar c$ system in $J/\Psi$ ($J^P=1^{-}$) and $\eta_c$ ($J^P=0^-$) channels on PBC and APBC at $\beta=6.10$ at each temperature. The superscripts $J/\Psi$ and $\eta_c$ denote quantities of $J/\Psi$ and $\eta_c$, respectively. All the statistical errors are smaller than 0.01GeV. 
The energy difference $E_{\rm APBC}-E_{\rm PBC}$ observed in lattice QCD is also added. The observed energy difference is very small, compared to the estimated energy difference $\Delta E_{\rm scatt}\simeq 350{\rm MeV}$ in the case of the low-lying $c\bar c$ scattering states.} \begin{tabular}{cccccccc} \hline \hline Temperature & $E^{J/\Psi}_{\rm PBC}$ & $E^{J/\Psi}_{\rm APBC}$ & $E^{J/\Psi}_{\rm APBC}-E^{J/\Psi}_{\rm PBC}$ \ \ & \ \ $E^{\eta_c}_{\rm PBC}$ & $E^{\eta_c}_{\rm APBC}$ & $E^{\eta_c}_{\rm APBC}-E^{\eta_c}_{\rm PBC}$ \\ \hline $1.11T_c$ &3.05GeV & 3.09GeV &0.04GeV \ \ & \ \ 3.03GeV & 3.02GeV &$-$0.01GeV \\ $1.32T_c$ &2.95GeV & 2.98GeV &0.03GeV \ \ & \ \ 2.99GeV & 2.98GeV &$-$0.01GeV\\ $1.61T_c$ &2.94GeV & 2.98GeV &0.04GeV \ \ & \ \ 3.00GeV & 2.97GeV &$-$0.03GeV\\ $2.07T_c$ &2.91GeV & 2.93GeV &0.02GeV \ \ & \ \ 3.01GeV & 3.00GeV &$-$0.01GeV\\ \hline \end{tabular} \label{tab1} \end{center} \end{table} Table \ref{tab1} shows the boundary-condition dependence of the low-lying $c\bar c$ state energy in $J/\Psi$ ($J^P=1^{-}$) and $\eta_c$ ($J^P=0^{-}$) channels at finite temperatures. Both in $J/\Psi$ and $\eta_c$ channels, the energy difference between PBC and APBC is less than 40MeV, which is much smaller than the energy difference $\Delta E_{\rm scatt}$ in the case of the low-lying $c\bar c$ scattering states, {\it i.e.}, $|E_{\rm APBC}-E_{\rm PBC}| \ll \Delta E_{\rm scatt}\simeq 350{\rm MeV}$, at all measured temperatures. These results indicate that the observed $c\bar c$ states are {\it spatially-localized (quasi-)bound states} as charmonia of $J/\Psi$ and $\eta_c$ for $1.11T_c < T < 2.07T_c$. Here, we comment on a ``constant contribution" to the meson correlator above $T_c$, provided by ``wrap-around quark propagation", which is pointed out in Ref.~11). The $\eta_c$ channel is {\it free from} this extra contribution, but the $J/\Psi$ channel suffers from it. 
Actually, in contrast to almost no temperature dependence of the $\eta_c$ mass, there is a temperature dependence of the $J/\Psi$ mass, which may be an artifact due to the constant contribution to the meson correlator \cite{U07}. However, the difference of the $J/\Psi$ mass between $1.11T_c$ and $2.07T_c$ is only about 140MeV, and this value is rather small compared to the estimated energy difference $\Delta E_{\rm scatt}\simeq 350{\rm MeV}$ for $c\bar c$ scattering states. Therefore, even including this extra effect, our main conclusion is unchanged also for $J/\Psi$ above $T_c$. In fact, $J/\Psi$ and $\eta_c$ survive as spatially-localized (quasi-)bound states for $1.11T_c < T < 2.07T_c$. \section{MEM Analyses for Spectral Functions above $T_c$ in Lattice QCD} Next, we investigate the boundary-condition dependence of the spectral function $A(\omega)$ of the $c \bar c$ system above $T_c$ using the maximum entropy method (MEM) in lattice QCD.\cite{IDIST06} Using MEM, we extract the spectral function $A(\omega)$ from the temporal correlator $G(t)$ of the point source and the point sink, where the total momentum is projected to be zero. In this calculation, we use the Wilson quark action on a fine lattice at $\beta=7.0$, {\it i.e.}, $a_t=a_s/4 \simeq (20.2{\rm GeV})^{-1} \simeq 9.75\times 10^{-3}{\rm fm}$. The adopted lattice size is $20^3\times 46$, which corresponds to the spatial volume $L^3\simeq (0.78{\rm fm})^3$ and the temperature $T \simeq 1.62T_c$. Figure~\ref{fig2} shows the spectral functions in $J/\Psi$ and $\eta_c$ channels on PBC (dotted line) and APBC (solid line). There appear low-lying peaks around 3GeV, which correspond to the charmonia of $J/\Psi$ and $\eta_c$.
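The quoted temperatures follow from the standard finite-temperature lattice relation $T = 1/(N_t a_t)$. The following sketch is a quick numerical cross-check; the quenched critical temperature $T_c \approx 0.28\,{\rm GeV}$ used here is our assumption, inferred from the quoted ratios rather than stated in the text.

```python
# Temperature of a finite-temperature lattice: T = 1/(N_t * a_t),
# with a_t quoted via its inverse, 1/a_t, in GeV.
T_C = 0.28  # GeV, assumed quenched critical temperature (not stated in the text)

def temperature(inv_a_t_gev, n_t):
    """Temperature in GeV for temporal extent n_t and spacing a_t = 1/inv_a_t."""
    return inv_a_t_gev / n_t

# Pole-mass lattices: a_t ~ (8.12 GeV)^-1, N_t = 14...26
print(temperature(8.12, 26) / T_C)  # ~1.11 (lowest quoted temperature)
print(temperature(8.12, 14) / T_C)  # ~2.07 (highest quoted temperature)

# MEM lattice: a_t ~ (20.2 GeV)^-1, N_t = 46
print(temperature(20.2, 46) / T_C)  # ~1.57, near the quoted 1.62 T_c
```

For the finer lattice the ratio comes out slightly below the quoted $1.62T_c$, which is consistent with rounding of $a_t$ and of the assumed $T_c$.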
We find no spatial boundary-condition dependence for the low-lying peaks, which indicates that the $c\bar c$ states corresponding to $J/\Psi$ and $\eta_c$ appear as spatially-localized (quasi-)bound states even above $T_c$.\cite{IDIST06} \section{Summary and Conclusion} We have studied $J/\Psi$ and $\eta_c$ above $T_c$ in anisotropic lattice QCD to clarify whether these states are spatially-localized (quasi-)bound states or $c\bar c$ scattering states. As a result, both in $J/\Psi$ and $\eta_c$ channels, we have found almost no spatial boundary-condition dependence of the energy of the low-lying $c\bar c$ system even on a finite-volume lattice for $(1.11-2.07) T_c$. Also in the MEM analysis, we have found no spatial boundary-condition dependence of the low-lying peaks corresponding to $J/\Psi$ and $\eta_c$ in the spectral function at $T \simeq 1.62T_c$. These facts indicate that {\it $J/\Psi$ and $\eta_c$ survive in QGP as spatially-localized (quasi-)bound states for $T_c < T < 2T_c$.} \begin{figure}[t] \centerline{ \includegraphics[width=5.4cm] {fig2a.eps} \hspace{0.5cm} \includegraphics[width=5.4cm] {fig2b.eps} } \caption{The spectral function $A(\omega)$ of the $c\bar c$ system in (a) $J/\Psi$ channel and (b) $\eta_c$ channel at $1.62T_c$ on PBC (dotted line) and APBC (solid line), normalized by the default function $m(\omega)$.\cite{IDIST06} Almost no boundary-condition dependence is found for the low-lying peaks around 3GeV, which correspond to the charmonia of $J/\Psi$ and $\eta_c$. In the $\eta_c$ channel, the dotted and the solid lines coincide. For the figures of the spectral functions with errorbars, see Figs.~9 and 10 in Ref.~7). } \label{fig2} \end{figure} \section*{Acknowledgement} H.~I. and H.~S. thank the Yukawa Institute for Theoretical Physics for fruitful discussions at ``New Frontiers in QCD 2008".
\section{} Stars do not simply pop up on the main sequence. Before the stars arrive on the zero-age main sequence, they form in the collapse of molecular clouds, gain matter through accretion processes, and compress their cores until hydrogen can burn in full equilibrium. Although this evolutionary phase lasts a relatively short time, the imprint of these important physical processes is often ignored through simplified assumptions. While asteroseismology offers a great tool to investigate these physical processes, studying pre-MS oscillations in turn has the potential to further advance the field. Asteroseismology of pre-main sequence stars faces observational and theoretical challenges. The remnants of their birth environment, which often still surround the young stars, cause variability that can interfere with the signal of the pulsations. In addition, the lack of long time-base satellite observations limits the applications of the method. Theoretical models of pre-main sequence stars include several assumptions and simplifications that influence the calculation of pulsation frequencies and the excitation properties of pulsation modes. Keeping all this in mind, the prospects for pre-main sequence asteroseismology are manifold. An improved understanding of the structure of young stellar objects has the potential to answer some of the open questions of stellar evolution, including angular momentum transport and the formation of magnetic fields. While gyrochronology, for example, struggles to determine the ages of the youngest clusters, pulsations in pre-main sequence stars can function as an independent age indicator yielding higher precision for single stars. The increasing interest of stellar astrophysics in general in the formation and early evolution of stars and planets illustrates the growing importance of pre-main sequence asteroseismology.
In this work, we discuss its potential to advance our understanding of stellar structure and evolution. \tiny \fontsize{8}{11}\helveticabold { \section{Keywords:} early stellar evolution, pre-main sequence, p- and g-mode pulsations, stellar structure, accretion physics, angular momentum transport, asteroseismology} \end{abstract} \section{Introduction} The study of pre-main sequence stars was initiated in the 1950s when \textit{what appear to be recently formed groups of stars} \citep{Henyey1955} drew interest from the astronomical community. \citet{Henyey1955} provided the first calculations of stars before their main sequence phase. Their models described the gravitational contraction of radiative stars; the corresponding evolution of the spectroscopic parameters is still referred to as the `Henyey track' today. Once it was evident that convection plays a major part in the evolution of stars, \citet{Hayashi1961} delivered improved theoretical models for the pre-main sequence phase, achieving good agreement with the observational data of NGC 2264 \citep{Walker1956}. \citet{Hayashi1961} discussed the forbidden zone in the Hertzsprung-Russell diagram -- an area in which no star can be in hydrostatic equilibrium, as the required temperature gradient would immediately be brought down by rapid convection -- and provided calculations after which, because of this forbidden zone, stars first follow a fully convective `Hayashi track' before joining the `Henyey track' on their contraction towards the ZAMS.
\citet{Iben1965} refined the picture of pre-main sequence evolution (classical pre-main sequence model from here on) by following the ${\rm C}^{12}$-depletion in more detail. Compared to the real star formation process, however, this classical view of the pre-main sequence evolution suffers from a crude approximation: the initial model. While the latter is taken as a huge ($\sim 55\,R_\odot$ for a $2\, M_\odot$ star) fully convective star at ZAMS mass, real stellar seeds are produced in the collapse of molecular clouds. Such an optically thin cloud collapses under its own gravity. The increase in density and temperature leads to the formation of a first hydrostatic core which will further heat up until molecular hydrogen dissociates at $\sim2000$\,K. This is a strongly endothermic process and leads to a second collapse, ending in the formation of the second hydrostatic core \citep[see e.g.][]{Larson1969}. Such a stellar seed, with $1-5\,R_\odot$ and $10^{-3}-10^{-2}\,M_\odot$ \citep{Larson1969, Bhandare2018}, constitutes the first stage of the pre-main sequence evolution and continues to accrete material from its surrounding cloud or disk. The evolution of such accreting protostars was followed by multiple authors including \citet{Palla1990}, who coined the term `birthline' for the position in the Hertzsprung-Russell diagram at which the radius of the accreting protostar first coincides with the radius of the classical pre-main sequence models. This created a misconception: a view in which stars evolve along the classical pre-main sequence tracks while still hidden underneath their dust clouds, becoming visible only when they cross the birthline. This picture is unphysical, as stars evolve along the birthline (or rather along their very own track) during their accretion phase. The concept of such a birthline is hence outdated, with state-of-the-art models of the pre-main-sequence providing a very different picture.
This picture has been established by many authors \citep[e.g.][]{Hartmann1996, Hartmann1997, Wuchterl2001, Baraffe2009, Baraffe2010, Hosokawa2011, Baraffe2012, Kunitomo2017, Jensen2018, Elbakyan2019}, but has only recently arrived in the field of asteroseismology \citep{Steindl2021b, Steindl2022a, Steindl2022b}. Introducing accretion effects into the numerical simulations of the pre-main sequence evolution provides an insight into the complicated structure of such young stars. The Kippenhahn diagram in Figure \ref{fig:kippenhahn} shows the striking differences between the simplified classical model (\ref{fig:kippenhahn}A) and the more realistic simulation including disk-mediated accretion rates (\ref{fig:kippenhahn}B). Most notable is the difference in chemical mixing. While the view of fully convective pre-main sequence stars is deeply rooted, state-of-the-art pre-main sequence models show that this is not the case, although a large part of the stellar interior can still be affected by convection at different stages of the pre-main sequence evolution. Naturally, such drastic changes in internal structure are also mirrored by the spectroscopic parameters of the star. Figure \ref{fig:kiel} provides the corresponding evolutionary tracks in the Kiel diagram (log($g$)-log($T_{\rm eff}$)-diagram). It is important to state that the track for the disk-mediated model is unique, that is, a different accretion history will lead to a significantly different evolutionary track. In order to provide a realistic picture of pre-main sequence evolution, we have to move on from simplified views (Hayashi-track $\rightarrow$ Henyey-track $\rightarrow$ main sequence) and accept more complicated evolutionary paths: the spectroscopic parameters and internal structure in the early phases of a star's lifetime are directly related to the properties of the mass accretion.
Only after the disk has dissolved and the star continues to evolve without gaining new material will the structure of real stars gradually converge towards the structure that we are used to from the classical models. Even if the spectroscopic parameters are rather similar, the internal structure remains different. Most notable is the existence of a temperature inversion towards the centre of the star \citep[see e.g. Fig. 8 of][]{Steindl2021b}. An imprint of star formation on the internal structure remains throughout the pre-main sequence phase and at least until the ZAMS; this should provide the opportunity to probe such disk-mediated evolution models with asteroseismology \citep{Steindl2022a}. Astrophysicists have long desired a tool to probe the stellar interior. While direct photometric and spectroscopic methods pierce only the stellar atmosphere, information about the entire star is needed to improve our theory of stellar structure and evolution. Today, such a tool is available. Asteroseismology -- the theory of stellar oscillations -- provides the opportunity to measure (often tiny) changes in the stellar structure through stellar pulsations by means of photometric or spectroscopic methods \citep[overviews about asteroseismology can be found in, e.g.,][]{JCD1982,Gough1987,Unno1989,Aerts2010}. As the pulsations travel throughout the star, their frequencies hold information about the entire structure, hence providing a view deep into the stellar interior. Since its discovery, asteroseismology has allowed many improvements of our understanding of stellar structure throughout the entire Hertzsprung-Russell diagram and all evolutionary stages \citep[e.g.,][]{Aerts2021}. Especially promising is the research field ``Pre-main sequence asteroseismology". Its origin lies in the first discovery of pulsations in young stars by \citet{Breger1972}.
In his article, he reported that the two members of the young open cluster NGC\,2264 -- V\,588\,Mon and V\,589\,Mon -- show $\delta$ Scuti-type pulsations. But it took 20 years until the next observational detections were available \citep[e.g.,][]{Praderie1991,Kurtz1995}. These observations triggered the search for additional members of this new group of pulsating stars, the pre-main sequence $\delta$ Scuti stars, as well as the first theoretical work on pulsational instability in stars before the onset of hydrogen core burning \citep{Marconi1998}. Subsequently, more pre-main sequence stars were found to show radial and non-radial oscillations. It soon became obvious that not only $\delta$ Scuti type pulsations can be excited in young stars, but also $\gamma$ Doradus \citep[e.g.,][]{Bouabid2011,Zwintz2013} and Slowly Pulsating B type variability \citep[e.g.,][]{Gruber2012}. A complete overview of the history of pre-main sequence asteroseismology can be found in \citet{Zwintz2019}. A very important milestone in the field of pre-main sequence asteroseismology was the discovery of the presence of non-radial pulsations in pre-main sequence $\delta$ Scuti stars and the corresponding theoretical description \citep{Ruoppo2007,Zwintz2007}. Soon after, the observed pulsation frequencies of a $\delta$ Scuti star were used to confirm its pre-main sequence evolutionary stage in combination with theoretical models \citep{Guenther2007}. Pre-main sequence $\delta$ Scuti stars have since then proven to be a treasure trove for observational discoveries, with \citet{Zwintz2014} showing a connection between the pulsational properties of pre-main sequence $\delta$ Scuti stars and their relative evolutionary stage: the closer the stars are to the onset of hydrogen core burning, the faster they oscillate. Such a direct connection between stellar pulsation frequencies and the relative evolutionary stages has not yet been found for more evolved $\delta$ Scuti stars. 
More recent milestones include the discovery of a first candidate for solar-like oscillations in pre-main sequence stars by \citet{Muellner2021}, after predictions of their existence had already been made early on by \citet{Samadi2005}. Furthermore, the case of RS Cha, a pre-main sequence eclipsing binary consisting of two $\delta$ Scuti stars, provides the best evidence to date for the discovery of tidally perturbed pulsations in young stars \citep{Steindl2021a}. Pre-main sequence asteroseismology provides the opportunity for many more exciting discoveries. To get there, however, many challenges have to be overcome in order to uncover the mysteries of this complicated evolutionary stage. The aim of this work is to present these challenges and provide the reader with an outlook on the great prospects this field offers. We review pulsations in young stars in Section \ref{sec:puls_young_stars} before giving an in-depth description of the observational and theoretical challenges pre-main sequence asteroseismology is faced with in Section \ref{sec:challenges}. An idea for a space mission dedicated to young stars and star forming regions is presented in Section \ref{STRETTO}. Section \ref{sec:prospects} concludes this work with a discussion of possible future milestones and how we might be able to achieve them sooner rather than later. \section{Pulsations in young stars} \label{sec:puls_young_stars} As of March 2022, seven types of pulsations have been discovered theoretically and observationally in pre-main sequence stars. Sorted from most massive to least massive, these are: Slowly Pulsating B (SPB), $\delta$ Scuti, tidally perturbed, $\gamma$ Doradus, $\delta$ Scuti -- $\gamma$ Doradus hybrid, solar-like, and M type pulsations. Table \ref{tab:preMS_puls:types} provides an overview of their properties and gives approximate current numbers of known objects, and Figure \ref{fig:instab_strips} illustrates the corresponding instability regions.
Overall, the pre-main sequence pulsators have the same pulsation properties as their counterparts in the main sequence and post-main sequence stages. The difference between the evolutionary stages lies in the pattern of excited oscillation frequencies \citep[e.g.,][]{Suran2001,Bouabid2011,Gruber2012}, which is another beautiful illustration of the power of asteroseismology. Below we briefly describe the properties of the known types of pre-main sequence pulsators, sorted from most massive to least massive. \textbf{SPB type.} The pulsations in SPB type stars are excited by the heat-engine ($\kappa$) mechanism acting in the ionisation zone of metals \citep{Dziembowski1993}. The pulsation periods lie between about 0.5 and 3 days \citep{Aerts2010}. With masses between $\sim$3 and 7\,$M_{\odot}$, the pre-main sequence evolution of SPB type stars proceeds relatively fast, making them statistically less frequent. As a consequence, SPB pulsators before the onset of hydrogen core burning are observationally harder to find. The expected temperature range for pre-main sequence SPB stars is 11100 to 18700\,K \citep{Steindl2021b}. \textbf{$\delta$ Scuti type.} The pulsation periods of these intermediate-mass pre-main sequence stars with effective temperatures from 6300 to 10300\,K \citep{Steindl2021b} lie between $\sim$18 minutes and 7 hours \citep{Zwintz2019}. Pre-main sequence $\delta$ Scuti stars show $p$-modes driven by the heat-engine ($\kappa$) mechanism in the ionisation zones of hydrogen and helium \citep{Aerts2010}. This is the group of pre-main sequence pulsators that was discovered first. Because of their pulsation periods, pre-main sequence $\delta$ Scuti stars could easily be detected with ground-based observations obtained within only a few nights. \textbf{Tidally perturbed type.} Intermediate-mass $\delta$ Scuti type stars can often be found in binary systems.
In some cases, the two components of the binary systems interact, leading to strong effects on their structure and evolution \citep[e.g., ][]{DeMarco2017}. If the two components are in a close and eccentric orbit, tidal effects cause self-excited pulsation modes to be perturbed \citep[e.g., ][]{Reyniers2003a,Reyniers2003b}. As of March 2022, only one pre-main sequence star, RS Cha, is known to show tidally perturbed oscillations \citep{Steindl2021a}. \textbf{$\gamma$ Doradus type.} Pre-main sequence $\gamma$ Doradus stars have early F spectral types. Their expected range in effective temperature lies between 5200 and 7650\,K \citep{Steindl2021b}. First theoretical predictions for this type of pulsation in pre-main sequence stars were made by \citet{Bouabid2011}, at that time without observational evidence. The first observational detections followed a few years later \citep{Zwintz2013}. The $g$-mode pulsations of pre-main sequence $\gamma$ Doradus stars are excited by the convective flux blocking mechanism \citep{Guzik2000}. The pulsation periods are in the range from 0.3 to 3 days \citep{Aerts2010} and, hence, are quite similar to those in SPB stars. A reliable value for the effective temperature is therefore required to identify the type of pulsator, as the light curves alone are not sufficient. \textbf{$\delta$ Scuti -- $\gamma$ Doradus hybrid type.} Some pre-main sequence pulsators in the A to F range of spectral types can show both $p$- and $g$-modes, hence, $\delta$ Scuti and $\gamma$ Doradus type pulsations. Consequently, this class of objects combines the properties of both classes described above. \textbf{Stochastic solar type.} Stochastic solar-like $p$-mode oscillations are predicted to be excited in stars before their arrival on the ZAMS \citep[e.g.,][]{Samadi2005}. Pre-main sequence stars in the mass range of our Sun are mostly very active objects with magnetic fields, spots on their surfaces, and partly still accreting material from circumstellar disks.
The light curves obtained for such objects often show regular and irregular variability that is not connected to pulsations. Searching for stochastic solar-like oscillations in pre-main sequence stars therefore requires a suitable tool that deals with the high activity, which introduces a high background signal \citep{Muellner2021}. Only one candidate is known at the moment \citep{Muellner2021}, but the search continues. \textbf{K and M type.} This is a recently discovered type of pulsation in pre-main sequence stars that has no known counterpart in the main sequence and post-main sequence phases. \citet{Steindl2021b} found a region of instability for K- and M-type stars which was expected from previous works \citep[e.g.,][]{Baran2011} and presented a first candidate pulsator of this class. The driving mechanism for M-dwarfs is expected to be the $\epsilon$-mechanism \citep[e.g.,][and references therein]{Baran2011}, but detailed investigations of the instability regions in \citet{Steindl2021b} have not yet been performed and are the subject of future work. \section{Challenges} \label{sec:challenges} The field of pre-main sequence asteroseismology has been met by many challenges throughout its relatively brief history. The initial challenge was tackled by \citet{Breger1972}, who presented the first evidence for pulsational variability in pre-main sequence stars located in NGC 2264. Since then, due to the advent of space telescopes, the number of known pre-main sequence pulsators has risen above 100. In the last decades, many challenges regarding pulsations in such young stars have been identified. Many of these have been partly or fully solved, while others remain open until today. The more we start to understand stellar structure and evolution in detail, the more challenges are continuously being created. This section aims at discussing the currently most important challenges faced by pre-main sequence asteroseismology.
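One of the practical identification challenges noted above is that light curves alone cannot separate classes with overlapping period ranges. The approximate temperature and period ranges quoted in the previous section can be combined into a rough first-guess screening; the following is an illustrative sketch only (boundaries are approximate, the class list is incomplete, and it is no substitute for proper mode identification):

```python
# Approximate ranges quoted in the text:
#   SPB:           T_eff 11100-18700 K, periods 0.5-3 d
#   delta Scuti:   T_eff 6300-10300 K,  periods ~18 min - 7 h
#   gamma Doradus: T_eff 5200-7650 K,   periods 0.3-3 d
CLASS_RANGES = {
    "SPB": ((11100, 18700), (0.5, 3.0)),
    "delta Scuti": ((6300, 10300), (18.0 / (60 * 24), 7.0 / 24)),
    "gamma Doradus": ((5200, 7650), (0.3, 3.0)),
}

def candidate_classes(teff_k, period_days):
    """Return all pulsator classes consistent with T_eff (K) and period (days)."""
    return [name for name, ((t0, t1), (p0, p1)) in CLASS_RANGES.items()
            if t0 <= teff_k <= t1 and p0 <= period_days <= p1]

# SPB and gamma Doradus stars have similar periods; T_eff separates them:
print(candidate_classes(15000, 1.0))  # ['SPB']
print(candidate_classes(6800, 1.0))   # ['gamma Doradus']
print(candidate_classes(8000, 0.1))   # ['delta Scuti']
```

This makes the SPB versus $\gamma$ Doradus ambiguity concrete: a one-day period alone matches both classes, and only a reliable effective temperature breaks the degeneracy.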
\subsection{Observational challenges} When observations of young stellar objects are to be conducted, several challenges have to be tackled. These are mainly related to the early evolutionary state of the stars. \textbf{Activity.} Young protostars are formed in molecular clouds. During their first evolutionary stages, they gain mass by accreting matter from their birth environment. Consequently, young stars can be partially or completely embedded in dense gas and dust, magnetic fields influence how the matter is accreted onto the early star, and the angular momentum gained from the birth process lets the young stellar object spin fast in most cases. All these phenomena can be summarized with the description that young stars show different levels of activity which manifest themselves in our observations. The dense circumstellar material can prevent us completely from viewing the young stars in the optical or generates irregular light variations of up to several magnitudes \citep[e.g.,][]{Cody2014}. Slightly less dense material can still be responsible for semi-regular variability \citep[e.g.,][]{Alencar2010}. Searching for millimagnitude pulsations in photometric time series of young stars therefore becomes tricky \citep[e.g.,][]{Zwintz2009}. The irregular or semi-regular variability originating from the disks poses a second challenge to the search for and characterization of pulsations: if the pulsational variability has long periods (i.e., longer than about half a day), the distinction between variability originating from the disk and from the pulsations will be impossible in many cases. The reason is that the irregular variability produces artifacts in the frequency analysis in the low frequency domain where we would also search for the pulsations. Only if the pulsation periods are shorter (i.e., on the order of a few hours and shorter) can they be well distinguished from variability caused by the disk and the artifacts generated during the frequency analysis.
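The frequency-domain confusion described above is easy to reproduce with synthetic data. The following is a minimal, purely illustrative numpy sketch (all signal parameters are invented, not fitted to any real star): a slow, irregular ``disk'' trend floods the low-frequency end of the amplitude spectrum, while a few-hour pulsation stays cleanly separated at higher frequency.

```python
# Purely illustrative numpy sketch (all signal parameters invented): a slow,
# irregular "disk" trend floods the low-frequency end of the amplitude
# spectrum, while a few-hour pulsation stays cleanly separated.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 30.0, 2.0 / 1440.0)         # 30 d at 2-min cadence [days]

# "Disk": smoothed random walk -> irregular variability at low frequencies.
disk = np.cumsum(rng.normal(0.0, 1.0, t.size))
disk = np.convolve(disk, np.ones(720) / 720.0, mode="same")
disk *= 5.0 / np.ptp(disk)                     # ~5 mmag peak-to-peak

# "Pulsation": a clean 3.6-hour (0.15 d) signal with 1 mmag amplitude.
puls = np.sin(2.0 * np.pi * t / 0.15)

flux = disk + puls
freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])  # cycles per day
amp = 2.0 * np.abs(np.fft.rfft(flux - flux.mean())) / t.size

amp_puls = amp[np.argmin(np.abs(freq - 1.0 / 0.15))]
low_amp = amp[(freq > 0.0) & (freq < 2.0)].sum()
print(f"recovered pulsation amplitude: {amp_puls:.2f} mmag at 6.67 c/d")
print(f"summed low-frequency (< 2 c/d) amplitude: {low_amp:.1f} mmag")
```

Real analyses use far more careful tools (e.g., iterative prewhitening), but the qualitative separation by frequency is the point here.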
The determination of colors for pre-main sequence stars is also affected by the dense dust that surrounds them: young stars appear much redder than they actually are. Observed colors include the star-disk system and not the star alone. As no general relations for dereddening can be applied to individual young stars with disks (i.e., Herbig Ae/Be stars), the real stellar colors cannot be obtained for embedded objects. Spectroscopically, the circumstellar matter is visible as very characteristic emission features, for example in the hydrogen lines. Although finding emission lines in the spectra is a good indicator of potentially young stars, in many cases it prevents a reliable calculation of the effective temperature and gravitational acceleration, which are needed to place the stars into a Kiel diagram. \textbf{Evolutionary stage.} Taking the atmospheric properties of given stars (i.e., effective temperature, luminosity, and mass) and placing them into a Hertzsprung-Russell diagram does not provide a unique identification of their evolutionary stage, as illustrated in Figure \ref{fig:crossing}. Some observational features related to activity have to be used to collect indications for the young evolutionary stage, and the more of these indicators are present, the better. If stars can be attributed to a star forming region or an open cluster as young as -- say -- ten million years, then this can be considered as excellent evidence for stellar youth. Observational properties such as irregular variability in the photometric time series, infrared and/or ultraviolet excesses, or emission lines in their spectra can point to an early evolutionary stage, but are not unique identifiers because they might also be attributed to quite evolved evolutionary stages.
Infrared excesses, for example, can also be found for post-asymptotic giant branch (post-AGB) stars \citep[e.g.,][]{Kamath2014}, and circumstellar material is present in the form of Keplerian disks also around classical Be stars \citep[e.g., ][]{Rivinius2013}. \textbf{Availability of time-series photometry from space.} Current and former missions have either not targeted young stellar objects or have been quite limited in their observations of the early evolutionary phases of stars and planets. The currently operational and hugely successful NASA mission TESS \citep{Ricker2015} can reach down to the galactic plane, but the resulting light curves often suffer from high contamination. The reason is that the CCD pixels are relatively large (i.e., 21 arcseconds per pixel). Consequently, TESS observations avoid observing deep into the galactic plane. The NASA mission Kepler \citep{Borucki2010} observed a single field high above the galactic plane on purpose to avoid star forming regions and any resulting contamination. The Kepler K2 \citep{Gilliland2010} mission provided some data for young stars and star forming regions in four of 19 campaigns (i.e., campaign numbers 2, 9, 13, and 15), illustrating the potential of space observations for this research field. The BRITE-Constellation nano-satellite mission \citep{Weiss2014} targets only the brightest stars on the sky, limiting the observations of young stars and planets (which are typically fainter by several orders of magnitude) dramatically. The earlier satellite missions CoRoT \citep{Auvergne2009} and MOST \citep{Walker2003} allowed for observations of the youngest objects in the galaxy through dedicated short (i.e., between 10 days and 5--6 weeks) observing runs, for example on the young cluster NGC 2264 (MOST \& CoRoT) or on individual young stellar objects such as HD 142666, HD 37806, or TW Hya.
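The plate-scale figures quoted in this section follow from simple small-angle arithmetic, scale [arcsec/pixel] $= 206265 \times$ pixel pitch $/$ focal length. A short illustrative sketch; note that the 15 micron pixel pitch below is an assumed value for illustration, not a number taken from this article:

```python
# Small-angle plate-scale relation: scale ["/px] = 206265 * pitch / focal_length.
# Here we back out the focal length a camera would need to produce the quoted
# 21 arcsec/pixel, ASSUMING a 15 micron pixel pitch (illustrative assumption,
# not a number from this article).
ARCSEC_PER_RAD = 206265.0

def plate_scale(pixel_pitch_m, focal_length_m):
    """Plate scale in arcseconds per pixel."""
    return ARCSEC_PER_RAD * pixel_pitch_m / focal_length_m

pitch = 15e-6                                   # assumed pixel pitch [m]
implied_f = ARCSEC_PER_RAD * pitch / 21.0       # focal length giving 21 "/px
print(f"implied focal length ~ {implied_f * 1000:.0f} mm")
```

The same relation, applied in reverse, converts any pixel pitch and focal length into the sky area covered by one pixel.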
ESA's future mission PLATO \citep[planned launch in 2026; ][]{Rauer2014} is scheduled to observe two selected fields for two years each: both fields will not reach down to the galactic plane and will hence not be able to target the youngest regions in the Milky Way. Additionally, PLATO's pixel size of 18\,$\mu$m $\times$ 18\,$\mu$m yields a plate scale of about 26.5 arcseconds per pixel, which is even coarser than TESS's plate scale. Therefore, observations of star forming regions and young clusters with high object densities will be problematic for PLATO due to high percentages of contamination. The current maximum time bases for continuous photometric observations of pre-main sequence pulsating stars are $\sim$80 days from Kepler K2 and slightly more than 100 days from TESS \citep{Steindl2021b}. Therefore, pre-main sequence asteroseismology faces the challenge of working with far more limited observational material than most of the other fields in asteroseismology. \subsection{Theoretical challenges} Many ingredients are needed to properly describe the earliest phases of stellar evolution since many physical processes are active during that time span. In terms of complexity of the pre-main sequence evolution, the discussion in the introduction only scratches the surface of the challenges we face in creating theoretical models of such stars. Stellar rotation, magnetic fields, and star-disk interaction are just a few examples of the physical ingredients, in addition to mass accretion, that need to be kept in mind. All of the above will generally be different for every object. Hence, there might not be a single other phase of stellar evolution in which the spectroscopic parameters and internal structure vary as much on a case-by-case basis as during the pre-main sequence evolution. \textbf{Stellar rotation.} When stars are born in the collapse of a molecular cloud, they obtain angular momentum.
Throughout the accretion phase, in which material from the surrounding disk is deposited onto the stellar surface, the system is expected to be disk-locked \citep{Bouvier1997}. That is, the disk and the star co-rotate until the former is dissolved or its influence on the star becomes minor. The mechanism of disk-locking, however, raises many open questions for the implementation of pre-main sequence models: How long does the disk-locking phase last? What is the distribution of rotation periods and how is it produced? Does the disk lock only the stellar atmosphere or is the whole star co-rotating? If the former, how is the angular momentum distributed in the stellar atmosphere and what is the mechanism of the angular momentum transport? If the latter, what mechanism fixes the rotation rate throughout the star? Some of these questions linger into even later phases of the pre-main sequence stage. After the disk has dissolved, angular momentum throughout the star will evolve according to a not yet fully explained mechanism. Including angular momentum in the current description of stellar evolution models remains an open question with a large impact on the pulsational characteristics of stars: For gravity mode pulsators, the period spacings are tilted according to the angular momentum of the core, while the frequencies of pressure modes are split with respect to the angular velocity and the azimuthal order \citep{Aerts2010}. The Coriolis force in rotating stars gives rise to a new family of pulsation modes, the Rossby modes. The latter have so far not been detected in any pre-main sequence object. \textbf{Magnetic fields.} As is common in the theory of stellar structure and evolution, many of the theoretical challenges of pre-main sequence asteroseismology are intertwined. Magnetic fields, for example, are expected to play a major role in the rotational evolution of stars.
As such, they are expected to dominate the angular momentum transport in radiative zones, albeit not efficiently enough to explain observations \citep[e.g.,][]{Fuller2014}. Magnetic braking seems to be the dominant mechanism for angular momentum loss in more evolved stars \citep[e.g.,][]{Matt2015}. For pre-main sequence stars, magnetic fields are expected to be an important ingredient for disk-locking \citep{Barnes2001}. As the latter already implies, magnetic fields also have implications for the mass accretion mechanism and hence for the accretion rates themselves \citep[e.g.,][]{Bouvier2007}. In addition, magnetic fields directly affect the internal structure of stars, including the mode cavities, and leave a measurable imprint on the pulsation frequencies \citep{Prat2019}. The interaction between magnetic fields and pulsation can lead to a suppression of the latter, resulting in a change in mode amplitudes \citep[e.g.,][and references therein]{Lecoanet2022}. Magnetic fields with strengths of multiple kG have been found in pre-main sequence stars \citep{Lavail2017}, while their consequences for pre-main sequence asteroseismology have not yet been explored. \textbf{Mass accretion rates.} The atmospheric parameters of pre-main sequence stars \citep{Steindl2021b, Steindl2022b} as well as their internal structure \citep[see][and the discussion in the introduction to this article]{Steindl2022a} are dependent on the characteristics of the accretion process. While time-dependent mass accretion rates are, although limited in amount, readily available from 2-dimensional simulations of the disk \citep[e.g.][]{Vorobyov2015, Jensen2018,Elbakyan2019}, many other free parameters need to be set in the calculation of stellar structure models. Most noteworthy, we lack an intrinsic description of the energy flow of the accreted material. How much energy is added to the star? How much is radiated away? Where is the energy deposited?
At the current stage, we have to manually set many parameters corresponding to different assumptions. For further progress in this field, it is essential to investigate the physics of the accretion processes in more detail. Additional effects complicate the calculation of the equilibrium stellar structure. The properties of the material transferred from the accretion disk to the star are expected to be dependent on the accretion rate itself. For example, the metallicity should follow the relation $Z_{\rm acc} = \frac{\dot{M}_d}{\dot{M}_{\rm acc}}$ \citep{Kunitomo2021}, where $\dot{M}_d$ is the mass flux of the dust and $\dot{M}_{\rm acc}$ is the total mass accretion rate. Many of the to-date calculated mass accretion histories cannot deliver the needed information on $\dot{M}_d$. However, recent studies provide this information \citep[see e.g.][]{Elbakyan2020} such that the inclusion of effects from condensing material will be possible in the near future. Most probably, however, the inclusion of these effects will further push the software instrument Modules for Experiments in Stellar Astrophysics (\textit{MESA}) \citep{paxton2011, paxton2013, paxton2015, paxton2018, paxton2019} to its limits. \textit{MESA} was never designed to perform such calculations and, albeit providing us with an indispensable and vital tool, repeatedly runs into convergence issues with strong time-dependent mass accretion rates during the early phases of the pre-main sequence evolution. Strong efforts will need to go into \textit{MESA}-related problems, which is time-consuming work. However, we are not concerned that progress in this regard will have to wait long, since the core \textit{MESA} team is very helpful in all matters regarding their community-focused tool.
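For concreteness, the metallicity relation quoted above is a simple ratio of rates; a toy numerical illustration (all numbers below are hypothetical, not taken from any simulation):

```python
# Toy illustration (hypothetical numbers) of the relation
# Z_acc = Mdot_d / Mdot_acc quoted in the text: the metallicity of the
# accreted material is the ratio of the component flux Mdot_d to the total
# mass accretion rate Mdot_acc.
def accreted_metallicity(mdot_d, mdot_acc):
    """Metallicity Z_acc of the accreted material."""
    return mdot_d / mdot_acc

# Hypothetical burst: Mdot_acc = 1e-5 Msun/yr carrying Mdot_d = 1.4e-7 Msun/yr.
z_acc = accreted_metallicity(1.4e-7, 1.0e-5)
print(f"Z_acc = {z_acc:.3f}")
```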
\textbf{The issue of controlled grid studies.} The simple fact that each and every pre-main sequence star has its very own time-dependent accretion rate history and, hence, a very different (and oftentimes chaotic) evolution in the Hertzsprung-Russell diagram \citep{Steindl2022a} complicates the calculation of controlled grids. With the inclusion of disk-mediated accretion rates, the days of (almost) parallel evolutionary tracks are gone, which complicates almost every theoretical study. Incorporating assumptions similar to those in the work of \citet{Steindl2021b}, namely that each star follows the same accretion track (with constant mass accretion rate), simplifies such studies, but at the cost of completely disregarding the different evolutionary paths. Quasi-random grids, similar to those of \citet{Steindl2022b}, are in general to be preferred, but the exact values of parameters at a given location in the Hertzsprung-Russell diagram are then not uniquely defined by one evolutionary track. Although disk-mediated mass accretion rates are available in limited numbers, the calculation of thousands of models (as we would wish for in such studies) remains challenging due to the required computational time. \textbf{Pre-main sequence asteroseismology beyond intermediate mass stars.} Among known pre-main sequence pulsators, $\delta$ Scuti stars significantly outnumber both $\gamma$ Doradus and SPB stars \citep{Steindl2021b}. While it is reasonably simple to verify the pre-main sequence status for $\delta$ Scuti and $\gamma$ Doradus stars, such a verification is much more complicated for the more massive SPB stars. Owing to the fast evolution towards the main sequence, it remains a matter of debate at which mass range it will still be possible to observe stars in their pre-main sequence stage. This, of course, will again be dependent on their evolutionary path from the protostellar stage to the ZAMS.
This calls for dedicated calculations with disk-mediated accretion rates that end in higher-mass stars \citep{Steindl2022b}. This will not only be helpful with regard to SPB stars, but should also provide many insights into the asteroseismology of the even more massive $\beta$ Cephei stars. In the low mass regime, theoretical models suggest an instability region for K- and M-type stars \citep[e.g.,][]{rodriguez2019, Steindl2021b}, and a first candidate for such pulsations has been presented by \citet{Steindl2021b}. According to the theoretical models, many radial orders of g-modes seem to be excited \citep{Steindl2021b}. This instability region needs to be further explored with improved theoretical models, for which an important step is to further decrease the mass of the initial stellar seeds, which is usually taken to be $\sim$10\,$M_{\rm Jup}$ \citep{Steindl2021b, Steindl2022a, Steindl2022b}. \section{STRETTO} \label{STRETTO} STRETTO (Early STaRs and planEt evoluTion in Two cOlors) is an innovative project idea that aims to provide a micro-satellite for astronomy from space with the main goal of studying early stellar and planetary evolution. \textbf{Science goals.} STRETTO aims to investigate young stars and planets in star forming regions as well as the youngest open clusters with the goal of addressing their early evolution. The STRETTO space telescope will be able to study the strength and properties of stellar activity and the amount of rotation present in early stars and their influence on planet formation and evolution. STRETTO will search for signs of the formation of planets and the presence of planets around member stars of young open clusters and star forming regions. The photometric time series obtained by STRETTO will enable studies of the effects of accretion on young stellar objects, of eclipsing binary and multiple systems in their early evolutionary stages, and of the interior structures of young stars using asteroseismology.
The expected precision of STRETTO will let us investigate the properties of ring systems around exoplanets, other circumplanetary material, and the existence of smaller bodies (e.g., exomoons or exocomets) around young stellar objects. Also, the properties of young open clusters and star forming regions as larger-scale objects in our universe can be investigated with such a mission. Together with complementary ground-based observations, STRETTO science will allow us to improve the input physics for the early phases of stellar and exoplanetary evolution, provide a time-dependent map of rotation and chemical composition from stellar birth to the onset of hydrogen-core burning, determine a complete picture of the angular momentum transport of young stars from the interior to the atmosphere, provide more reliable ages for the youngest stellar and exoplanetary objects, investigate the connection between magnetic fields and variability of stars in their early evolutionary stages, and understand the interaction of the young circumstellar environment with the star, including exoplanets, exomoons, and exocomets. \textbf{Instrumental design.} STRETTO will carry two 8-cm telescopes, each with a 1.5 $\times$ 1.5 square degree field of view and a spatial resolution of 3--5 arcseconds per pixel. Each telescope will have a dedicated filter: one in the optical, the other at infrared wavelengths. From a low-Earth orbit, STRETTO will be able to monitor young stars and planets for about half a year continuously, providing the necessary long time bases for the analysis of the objects' different types of variability. STRETTO will be able to take photometric time series measurements of stars in the magnitude range from about 6 to 16 mag ($V$) in two colors using alternating exposure times in the range from 1 to 60 seconds.
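As a back-of-the-envelope check of the instrumental numbers above, the quoted field of view and plate scale together imply the detector format (simple arithmetic on the quoted values only, no further mission specifications assumed):

```python
# Back-of-the-envelope check (arithmetic on the quoted values only): the
# detector format implied by a 1.5 x 1.5 degree field of view sampled at
# 3-5 arcsec per pixel.
FOV_ARCSEC = 1.5 * 3600.0                      # field side length in arcsec

for scale in (3.0, 5.0):                       # quoted plate-scale range
    npix = FOV_ARCSEC / scale                  # pixels along one side
    print(f"{scale:.0f} arcsec/px -> ~{npix:.0f} x {npix:.0f} pixels")
```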
The goal is to utilize a commercially available microsatellite platform (mass: 50--70\,kg, power: 60--80\,W) which shall host two digital camera systems as payloads, one for each passband. A low-Earth polar orbit at an altitude of 600--900\,km will be suitable to conduct the scientific observations. The baseline for communication will be one ground station in Europe. \textbf{Potential of STRETTO.} The scientific potential of monitoring young stars and planets photometrically from space with STRETTO lies in yielding a first clear picture of how stars and planets pass through their earliest evolutionary phases. \textbf{Status of the project.} Presently, a small consortium consisting of researchers and engineers from Austria, Canada, France, the Netherlands, and Poland is trying to acquire funding for a concept study. If you are interested in learning more about STRETTO and the people involved, please contact the first author of this article. \section{Prospects and importance} \label{sec:prospects} One might think that our theoretical understanding of stellar evolution is well established with only a minor need for further research. But this is a misconception, as there are many physical processes that are either not well understood (e.g., the impact of accretion on the complete evolution of stars) or not taken into account properly in our theoretical models (e.g., convection, rotation, or magnetic fields). The physical effects occurring in and defining the earliest evolutionary phases of stars must have an impact on their complete further evolution. It would be physically unlikely that the stars' formation histories do not play a role in their later stages. One of the biggest questions in this respect is how large the impact of the processes acting in the youngest stellar objects is and how long the pre-main sequence history of stars persists up to later stages. This is one of the questions pre-main sequence asteroseismology can and does address.
Adding processes such as accretion to theoretical models of pre-main sequence stars and coupling those to models of pulsational instability lets us investigate the resulting changes in the interior structures of stars. Pre-main sequence asteroseismology should be able to test the very early evolutionary phases as well. By studying objects with ages of a few million years, we can measure the imprint of the star formation process, thereby shedding light on the many free parameters connected to the accretion physics. The amazing prospect of gathering observational information about the internal structure of stars (rotation rate, chemical mixing profiles, etc.) opens the door to exciting constraints for theoretical models. As of today, the earliest evolutionary phases of stars are often treated as a black box using crude approximations as in the classical model. The resulting stellar structure and atmospheric parameters, however, are used in many different fields to motivate, for example, the existence of magnetic fields or the evaporation of exoplanet atmospheres. Only dedicated asteroseismic studies of the youngest objects we can possibly find can provide us with the important ingredients to study these processes with the accuracy they deserve. Chemical composition plays an important role in stellar structure and evolution due to the sensitivity of opacities to the atomic spectra and absorption features of the elements making up the star. Most stars pulsate exactly because of the behaviour of the opacities in relation to perturbations (e.g., the heat engine mechanism). Also, the location of the computed evolutionary tracks for stars at all ages depends on the metallicity, $Z$ \citep[e.g.,][]{Montalban2004}. Presently, we do not understand how the chemical evolution proceeds between stellar birth and the onset of hydrogen core burning upon arrival on the ZAMS.
For example, chemical peculiarities are found in $\sim$10\% of main sequence stars of spectral types B to F \citep[e.g.,][]{Preston1974}, but it is unclear when these anomalies are formed. The first few detailed analyses of the atmospheric chemical abundances of pulsating pre-main sequence stars have revealed basically solar or near-solar chemical composition with two exceptions: (i) stars with masses smaller than $\sim$1.5\,$M_{\odot}$ have not burnt the primordial lithium completely and, hence, show an overabundance compared to the Sun \citep{Zwintz2013}; (ii) in the high-resolution spectra of intermediate-mass pre-main sequence pulsators, barium shows a significant overabundance which cannot be fully explained yet \citep[e.g.,][]{Zwintz2013}. In the future, high-resolution spectroscopic observations of a statistically large enough sample of pre-main sequence stars should be used to generate a time-dependent map of the chemical evolution in the early stages of the lives of stars. Asteroseismology has successfully revealed the interior chemical structure of stars: it allows measuring the percentage of hydrogen in the cores of main sequence stars \citep[e.g.,][]{Moravveji2015} or detecting chemical gradients in g-mode period spacings \citep[e.g.,][]{Miglio2008,Bouabid2013}. Consequently, pre-main sequence asteroseismology has the potential to probe the interior chemical evolution of stars in the earliest phases of their evolution. Possible topics in this context would be to investigate the influence of the accretion history on the chemical evolution of stars and how long it persists, whether observed chemical inhomogeneities on the stellar surfaces extend into the interiors or not, and whether stellar pulsations let us deduce, for example, the amount of deuterium in the earliest stars.
But such investigations require an improvement in our theoretical models and dedicated instruments providing high-accuracy data (both photometric and spectroscopic) for pre-main sequence stars. As the excited pulsation frequencies in pre-main sequence stars are different to those in the post-main sequence stages due to the different inner structures \citep[e.g.,][]{Suran2001,Bouabid2011}, it is obvious that stellar pulsations can be used to distinguish the evolutionary stages of stars \citep{Guenther2007}. Even within the pre-main sequence stages, the pulsational properties of stars change following a relation that is not present for the same type of pulsators in later phases: the youngest objects pulsate more slowly than stars close to the onset of hydrogen core burning \citep[i.e., the ZAMS, ][]{Zwintz2014}. Therefore, the next logical step is to use the pulsation properties of pre-main sequence stars as an age indicator for stellar astrophysics. In the earliest evolutionary phases, it is difficult to determine precise ages based on our currently available methods. The ages of the youngest open clusters, for example, are typically given with errors of 50 to 100\,\%. One of the important prospects of pre-main sequence asteroseismology is therefore to provide accurate (relative) ages for young stellar objects -- similar to the percentage of hydrogen in the core, $X_c$, that is determined for the main sequence stages from asteroseismology. This is especially important, as age indicators that are very useful for the study of older clusters often fail to improve the accuracy of the age determination of young open clusters. Gyrochronology, for example, can provide excellent constraints on the ages of open clusters, but its use for pre-main sequence stars is very limited.
Measurements of the surface rotation periods of young stars are available, but the only way to reproduce them theoretically is to force a specific distribution of initial rotation periods. One of the major issues in this regard is the effect of the protostellar disk during the accretion phase. By coupling stellar evolution codes with a dedicated disk evolution model, we are hopeful that the models can be improved in this regard. Once we have an accurate picture of the rotation of pre-main sequence stars, we can explore the effects of the stars' rotation on their pulsational properties in much more detail. Asteroseismology of pre-main sequence stars is needed to address the question of why intermediate-mass stars on the main sequence tend to show rigid rotation independent of their core rotation rates \citep{Aerts2017}. Strong coupling between the stellar core and the envelope seems to occur for stars on the main sequence and in later evolutionary phases. With pre-main sequence asteroseismology we will be able to investigate at what earlier point in stellar evolution this strong coupling starts. By measuring nearly-equidistant period spacings we can deduce near-core rotation rates for pre-main sequence g-mode pulsators -- as is already successfully done for stars in later evolutionary stages. First steps in these investigations have been undertaken, but for a complete picture of the angular momentum transport in young stars, longer photometric time series of the highest precision obtained from space are required. The observational material currently available for pre-main sequence g-mode pulsators is insufficient to conduct more detailed studies. As a consequence, the idea of the micro-satellite STRETTO (see Section \ref{STRETTO}), dedicated to young star- and planet-forming regions, emerged a couple of years ago and will hopefully be realized in the near future.
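The rotation rates discussed in this section are extracted from rotationally induced frequency shifts; for reference, the standard first-order relation for a slowly and uniformly rotating star \citep[e.g.,][]{Aerts2010} -- a textbook result, not specific to this article -- reads

```latex
\begin{equation}
  \nu_{n \ell m} \simeq \nu_{n \ell 0} + m \left( 1 - C_{n \ell} \right) \frac{\Omega}{2 \pi} ,
\end{equation}
```

where $m$ is the azimuthal order, $\Omega$ the angular rotation velocity, and $C_{n\ell}$ the Ledoux constant of the mode; for high-order $g$ modes, $C_{n\ell}$ approaches $1/[\ell(\ell+1)]$.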
\section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} K. Zwintz and T. Steindl shared the work on this article and contributed in equal amounts to it. \section*{Funding} K. Zwintz and T. Steindl are funded by the University of Innsbruck and are grateful for the support they are receiving. \section*{Acknowledgments} We thank Matthew Kenworthy from the University of Leiden (NL), Gregg Wade from the Royal Military College (CAN) and Rainer Kuschnig from the University of Graz (AT) for their collaboration on the satellite project STRETTO. We are grateful to Eduard Vorobyov from the University of Vienna (AT) for the contribution of the time-dependent accretion rates. \bibliographystyle{frontiersinSCNS_ENG_HUMS}
\section{Introduction \label{intro}} The \emph{weak product} of $G$ and $H$, denoted by $G\times H$, is defined as follows: The vertex set of $G\times H$ is the Cartesian product of the vertex sets of $G$ and $H$. Two vertices $(g_1,h_1)$ and $(g_2,h_2)$ are adjacent in $G\times H$ if $g_1g_2$ is an edge of $G$ and $h_1h_2$ is an edge of $H$. In this paper we consider the product of complete graphs on $r>2$ vertices, $$G=K_r^n=\times_{j=1}^n K_r.$$ We identify the vertices of $G$ with the elements of $\mathbb{Z}_r^n$. By the definition of the product, two vertices are adjacent in $G$ iff the corresponding vectors differ in every coordinate. Let $0\leq i \leq r-1$ and $1\leq j\leq n$ be two fixed integers. It is obvious that the set of all vertices of $G$ which have $i$ in the $j$th coordinate forms an independent set. In fact, for $r>2$, these sets are the only maximum independent sets of $G$~\cite{lovazs}. A generalization of this result has been shown in \cite{ADFS} through the following theorem: \begin{lem} {\bf \cite{ADFS}} \label{t2} For every $r\geq 3$, there exists a constant $M=M(r)$ such that for any $\epsilon>0$ the following is true. Let $G=K^n_r$ and $J$ be an independent set such that $\frac{|J|}{|G|}=\frac{1}{r}(1-\epsilon)$. Then there exists an independent set $I$ with $\frac{|I|}{|G|}=\frac{1}{r}$ such that $\frac{|J\triangle I|}{|G|}<\frac{M\epsilon}{r}$. \end{lem} In Theorem \ref{t2}, ``$\triangle$'' denotes the symmetric difference. Theorem \ref{t2} asserts that any independent set which is close to being of maximum size is close to being determined by one coordinate. The function $M(r)$ that is obtained in~\cite{ADFS} depends on $r$. When $r$ is a constant, for every constant $\delta>0$ one can choose $\epsilon$ to be a sufficiently small constant so that $\frac{|J\triangle I|}{|G|}<\frac{\delta}{r}$.
But when $r$ tends to infinity, to obtain any nontrivial result from Theorem~\ref{t2}, $\epsilon$ must be less than $\frac{1}{M(r)}$, which is not a constant. The main result of this paper is to show that in Theorem~\ref{t2}, $M$ does not need to be a function of $r$. Note that this major improvement makes Theorem~\ref{t2} as powerful for large values of $r$ as for constant $r$. We formalize this in the following theorem. \begin{theorem} \label{th1} Let $G=K^n_r$, $r\geq 20$ and $\epsilon<10^{-9}$. Suppose that $J$ is an independent set of $G$ such that $\frac{|J|}{|G|}=\frac{1}{r}(1-\epsilon)$. Then there exists an independent set $I$ with $\frac{|I|}{|G|}=\frac{1}{r}$ such that $\frac{|J\triangle I|}{|G|}<\frac{40\epsilon}{r}$. \end{theorem} \begin{remark} \label{remark1} Note that for $\epsilon\ge 10^{-9}$, we have the trivial bound $\frac{|J\triangle I|}{|G|}\le \frac{2 \times 10^9\epsilon}{r}$, where $I$ is an arbitrary independent set. We also assumed that $r \ge 20$ for technical reasons. However, one can use Theorem~\ref{t2} when $r<20$, as $M(r)$ is a constant for those values of $r$. \end{remark} Let $I$ be a maximum independent set of $G=K^n_r$, and $J$ be an independent set of $G$ such that $J \not\subseteq I$. Then obviously, $\frac{|I\setminus J|}{|G|} \geq \frac{(r-1)^{n-1}}{r^n}$. So we obtain the following as a corollary of Theorem~\ref{th1}. \begin{corollary} \label{cor1} Let $G=K^n_r$, $r\geq 20$ and $\epsilon<c$ where $c=\min(10^{-9},(1-\frac{1}{r})^{n-1})/40$. Let $J$ be an independent set such that $\frac{|J|}{|G|}=\frac{1}{r}(1-\epsilon)$. Then there exists an independent set $I$ with $\frac{|I|}{|G|}=\frac{1}{r}$ such that $J \subseteq I$. \end{corollary} Note that if in Corollary~\ref{cor1}, $r>c'n$ for some constant $c'$, then one can take $c$ to be a constant that does not depend on $n$. The proof of Theorem \ref{th1}, as well as that of Theorem \ref{t2}, is based on Fourier analysis on the group $\mathbb{Z}_r^n$.
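The coordinate-fixing (``dictator'') independent sets described in the introduction are easy to verify computationally. A small illustrative sanity check (the values $r=4$, $n=3$, $j=0$, $i=2$ are arbitrary):

```python
# Check the claim from the introduction: in K_r^n, the vertices having a fixed
# value i in a fixed coordinate j form an independent set, because adjacency
# requires the vectors to differ in *every* coordinate. Parameters arbitrary.
import itertools

r, n, j, i = 4, 3, 0, 2
cells = [v for v in itertools.product(range(r), repeat=n) if v[j] == i]

def adjacent(u, v):
    """Adjacency in the weak product K_r^n: differ in every coordinate."""
    return all(a != b for a, b in zip(u, v))

assert not any(adjacent(u, v) for u, v in itertools.combinations(cells, 2))
assert len(cells) == r ** (n - 1)      # |I|/|G| = r^(n-1)/r^n = 1/r
print("dictator set is independent; |I|/|G| = 1/r")
```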
Fourier analysis has been shown to be very useful in the study of Boolean functions. One can refer to \cite{ADFS, AKRS, ALM, BS, BKS, B1, F, FKN, KKL, KS, LMN, Mesh, T} to see some examples. In order to prove Theorem~\ref{th1} we show that a Boolean function which has most of its 2-norm weight concentrated on the first two levels\footnote{Defined formally below} of its Fourier expansion is close to being determined by one coordinate. Thus Lemma~\ref{l1}, which formulates this, might be of independent interest as a result in the direction of extending results of~\cite{FKN,B1,KS} from $\mathbb{Z}_2^n$ to $\mathbb{Z}_r^n$. Section 2 is devoted to a very brief introduction to Fourier analysis of $\mathbb{Z}_r^n$ and to introducing notation and some of the necessary tools. In Section 3 we give the proof of Theorem~\ref{th1}. Section 4 contains some possible directions for future work. \section{Background} We refer the reader to~\cite{ADFS} for a nice and brief introduction to Fourier analysis of $\mathbb{Z}_r^n$. In the following we recall some basic facts and introduce some notation. Let $r>2$ and $G=\{0,1,\ldots, r-1\}^n=\mathbb{Z}_r^n$. For any $S \in G$, let $S_i$ denote the $i$th coordinate of $S$. We also think of $G$ as a probability space endowed with the uniform (product) measure $\mu$. For any $S\in G$ let $u_S:G\rightarrow \mathbb{C}$ be defined by $$u_S(T)=\exp\left(\frac{2\pi i\sum_{i=1}^{n} S_iT_i}{r}\right).$$ It is well-known that the set of all functions $u_S$ ($S\in G$) forms an orthonormal basis for the space of all functions $f:G\rightarrow \mathbb{C}$. 
Therefore any such $f$ has a unique expansion of the form $f=\sum \widehat{f}(S)u_S$, where $$\widehat{f}(S)=\langle f,u_S\rangle= \int f(T)\cdot\overline{u_S(T)} \mu(dT).$$ For any function $f:G\rightarrow \mathbb{C}$, define the $p$-norm of $f$ as $$\|f\|_p={\left(\int |f(T)|^p \mu(dT)\right)}^{\frac{1}{p}}.$$ From orthogonality it can be easily seen that $$\|f\|_2^2=\sum_{S\in G} |\widehat{f}(S)|^2$$ and $$\langle f,g\rangle=\sum\widehat{f}(S)\overline{\widehat{g}(S)}.$$ We use the following notations throughout the paper: For every complex number $z$, let $d(z,\{0,1\})=\min(|z|,|z-1|)$ denote its distance from the nearest element in $\{0,1\}$. For any $S\in G$ let $|S|=|\{i:S_i\neq 0\}|$. $\overline{0}=(0,0,\ldots,0)$, and for each $1\leq i\leq n$ let $e_i=(0,\ldots,1,\ldots,0)$ be the unit vector with 1 in the $i$th coordinate. Define $F_S$ as $F_S= \widehat{f}(S)u_{S}$. Let $f^{>k}=\sum_{|S|>k}F_S$ (similarly $f^{<k}=\sum_{|S|<k}F_S$) and $f^{=k}=\sum_{|S|=k}F_S$. We occasionally refer to $f^{=k}$ as the $k$-th \emph{level} of the Fourier expansion of $f$. Note that for any function $f$, $\widehat{f}(\overline{0})$ is the expectation of $f$, and $\|f^{\geq 1}\|_2^2$ is the variance of $f$. The following version of Bennett's Inequality, which can be easily obtained from the one stated in~\cite{Bennett}, will be used in the proof of Lemma~\ref{l1} below. \begin{theorem}[Bennett's Inequality] \label{Bennett} Let $X_1,\ldots,X_n$ be independent real-valued random variables with zero mean, and assume that $X_i \le c$ with probability one. Let $$\sigma^2=\sum_{i=1}^{n}{\rm Var}[X_i].$$ Then for any $t>0$, $$\Pr[\sum X_i \ge t] \le e^{-\frac{\sigma^2}{c^2} h(\frac{tc}{\sigma^2})},$$ where $h(u)=(1+u)\ln(1+u)-u$ for $u\ge 0$. 
\end{theorem} \section{Main results} In~\cite{B1,FKN,KS} results of the following type have been proven: if $f$ is a Boolean function on $\mathbb{Z}_2^n$ and $f^{>k}$ is sufficiently small for some constant $k$, then $f$ is close to being determined by a small number of coordinates. The following lemma, which is the key step in the proof of Theorem~\ref{th1}, is a result of this type for $\mathbb{Z}_r^n$. \begin{lemma} \label{l1} Let $f:\mathbb{Z}_r^n \rightarrow \mathbb{C}$ be a Boolean function such that $\|f^{=1}\|_2^2 \le \frac{1}{r}$ and $\|f^{>1}\|_2^2\le \epsilon$, where $\epsilon<\frac{1}{10^8r}$ and $r\ge 20$. Then denoting by $1 \le i_0 \le n$ the index such that $\sum_{j=1}^{r-1} |\widehat{f}(je_{i_0})|^2$ is maximized, we have $$\left\|f-\left(\widehat{f}(\overline{0})+\sum_{j=1}^{r-1} F_{je_{i_0}}\right)\right\|_2^2 < 5\epsilon.$$ \end{lemma} \begin{remark} Lemma~\ref{l1} shows that $f$ is close to a function which depends only on the $i_0$-th coordinate. We do not know whether the condition $\|f^{=1}\|_2^2 \le \frac{1}{r}$ is a weakness of our proof or whether it is essential. The condition $\epsilon<\frac{1}{10^8r}$ is not a major weakness, since for $\epsilon\ge \frac{1}{10^{8}r}$, we have the trivial bound of $(10^8+1)\epsilon$. \end{remark} We postpone the proof of Lemma~\ref{l1} until Section~\ref{lemmaproof}. We now give the proof of Theorem \ref{th1}, assuming Lemma~\ref{l1}. \noindent \begin{proof}[{\bf Proof of Theorem \ref{th1}}] Let $J$ be an independent set of $G$ such that $\frac{|J|}{|G|}=\frac{1}{r}(1-\epsilon)$. Let $f$ be the characteristic function of $J$. Then according to the proof of Theorem~\ref{t2} (Theorem 1.2 in \cite{ADFS}), we have $$\|f^{> 1}\|_2^2 = \sum_{|S|>1} {|\widehat{f}(S)|^2} \leq \frac{2\epsilon}{r}.$$ Since $$\|f^{=1}\|_2^2 \le \|f\|_2^2=\mu(J)\leq \frac{1}{r},$$ by Lemma~\ref{l1}, there exists a function $g:\mathbb{Z}_r^n \rightarrow \mathbb{C}$ which depends on one coordinate and $\|f-g\|_2^2\leq \frac{10\epsilon}{r}$. 
By rounding $g$ to the nearest of 0 or 1, we get a Boolean function $g_1$ which depends on one coordinate, and since $f$ is Boolean $$\|f-g_1\|_2^2\leq 4 \|f-g\|_2^2 \leq \frac{40\epsilon}{r}.$$ \end{proof} \subsection{Proof of Lemma~\ref{l1} \label{lemmaproof}} The proof of Lemma~\ref{l1} shares similar ideas with the proof of Theorem 8 in~\cite{KS}. However, dealing with (complex) Fourier expansions on $\mathbb{Z}_r^n$ instead of (real) generalized Walsh expansions on $\mathbb{Z}_2^n$ required new ideas. Recall from Section 2 that $F_S$ denotes $\widehat{f}(S)u_S$. For $1 \le i \le n$, let $g_i=\sum_{j=1}^{r-1}F_{je_i}$, and define $g_0=\widehat{f}(\overline{0})$. For $0 \le i \le n$ let $a_i=\|g_i\|_2$. Without loss of generality assume that $a_1 \ge a_2 \ge \ldots \ge a_n$. To obtain $$\|f-(g_0+g_1)\|_2^2=\sum_{i=2}^{n} a_i^2 + \|f^{>1}\|_2^2 \leq 5\epsilon,$$ we will first show that $a_2$ is small (Claim~\ref{claim1}). This would allow us to apply a concentration theorem and conclude that $\sum_{i=2}^{n} a_i^2$ is very small (Claim~\ref{claim2}). First note that $$\|f^{=1}\|_2^2=\sum_{i=1}^n a_i^2 \le \frac{1}{r},$$ which, since $a_1\ge a_2$, implies that $a_2^2 \le \frac{1}{2r}$. Now since $\|g_2\|_2^2 \le \frac{1}{2r}$, the Cauchy--Schwarz inequality gives, for every $0\le x_2 \le r-1$, \begin{equation} \label{g_2} |g_2(x_2)|\le \sqrt{1/2}. \end{equation} \begin{claim} \label{claim1}$a_2^2 < 2000\epsilon$. \end{claim} \begin{proof} Consider an arbitrary assignment $\delta_1,\delta_3,\ldots,\delta_n$ to $x_1,x_3,\ldots,x_n$, and let $$l=\widehat{f}({\overline{0}})+g_1(\delta_1)+\sum_{i=3}^n g_i(\delta_i).$$ Since for every $0 \le x_2 \le r-1$, $$d(l,\{0,1\}) \le |g_2(x_2)|+d(l+g_2(x_2),\{0,1\}),$$ we have $$\|d(l,\{0,1\})\|_2^2 \le 2(\|g_2\|_2^2+\|d(l+g_2,\{0,1\})\|_2^2),$$ or equivalently \begin{equation} \label{bound1} d(l,\{0,1\})^2 \le 2(a_2^2+\|d(l+g_2,\{0,1\})\|_2^2). 
\end{equation} Note that $$\|d(f^{\le 1},\{0,1\})\|_2^2 \le 2(\|d(f,\{0,1\})\|_2^2+\|f^{>1}\|_2^2) \le 2\epsilon.$$ Therefore we can find an assignment $\delta_1,\delta_3,\ldots,\delta_n$ such that \begin{equation} \label{boundlg} \|d(l+g_2,\{0,1\})\|_2^2\leq 2\epsilon. \end{equation} By~(\ref{bound1}), for any such assignment we have $d(l,\{0,1\})^2\le\frac{1}{r}+4\epsilon\le 1/16$, which implies either $|l|\le \frac{1}{4}$ or $|l-1|\le \frac{1}{4}$. Define $\lambda=\frac{1-\sqrt{\frac{1}{2}}-\frac{1}{4}}{\sqrt{\frac{1}{2}}+\frac{1}{4}}$. Now~(\ref{g_2}) implies that for any $0\leq x_2\leq r-1$, \begin{itemize} \item[{\bf Case 1:}] If $|l|<\frac{1}{4}$, then $|(l+g_2(x_2))-1|\ge \lambda |l+g_2(x_2)|$. \item[{\bf Case 2:}] If $|l-1|<\frac{1}{4}$, then $|l+g_2(x_2)|\ge \lambda |(l+g_2(x_2))-1|$. \end{itemize} Let $A=\{x_2\in \mathbb{Z}_r:|l+g_2(x_2)|\leq|l+g_2(x_2)-1|\}$ and denote its complement by $\overline{A}$. Representing $\|d(l+g_2,\{0,1\})\|_2^2$ as a sum of two integrals over $A$ and $\overline{A}$, and using~(\ref{g_2}), in Cases~1 and~2 one can show that $$\|d(l+g_2,\{0,1\})\|_2^2 \ge \lambda^2\|g_2\|_2^2> \frac{a_2^2}{1000}.$$ Note that the assumption $a_2^2 \ge 2000\epsilon$ would imply $\|d(l+g_2,\{0,1\})\|_2^2>2\epsilon$, which contradicts~(\ref{boundlg}). Thus $a_2^2 < 2000\epsilon$. \end{proof} \begin{claim} \label{claim2} $\sum_{i=2}^n a_i^2 \le 4 \epsilon$. \end{claim} \begin{proof} Let $2 \le m \le n$ be the minimum index which satisfies \begin{equation} \label{define_m} \sum_{i=m}^n a_i^2 \le 10^4 \epsilon. \end{equation} Denote $I=\{m, \ldots, n\}$, and for every $y \in \mathbb{Z}_r^{m-1}$ let $f^*_{I[y]}$ be a function of $\mathbb{Z}_r^{n-m+1}$ (with uniform measure $\mu'$) defined as $$f^*_{I[y]}(x)=f^{\le 1}(y \cup x).$$ Obviously $$\int \|d(f^*_{I[y]},\{0,1\})\|_2^2 \, dy = \|d(f^{\le 1},\{0,1\})\|_2^2 \le 2\epsilon,$$ where the integral is taken with respect to the uniform measure on $y\in\mathbb{Z}_r^{m-1}$. Hence for some $y$, $\|d(f^*_{I[y]},\{0,1\})\|_2^2 \le 2\epsilon$. 
Let $b=\widehat{f}(\overline 0)+\sum_{i=1}^{m-1}g_i(y_i)$. Then $$f^*_{I[y]}(x)=b+\sum_{i=m}^n g_i(x_i).$$ Applying Lemma~\ref{concent} below to $f^*_{I[y]}$ for $\epsilon'=2\epsilon$ shows that $\sum_{i=m}^n a_i^2 \le 4 \epsilon$. This will imply that $m=2$, as $a_2^2<2000 \epsilon$ and $m$ was the minimum index satisfying~(\ref{define_m}), which completes the proof. \end{proof} \begin{lemma} \label{concent} Let $f:\mathbb{Z}_r^n \rightarrow \mathbb{C}$ be a function satisfying $f^{>1} \equiv 0$. Let $ \| d(f,\{0,1\}) \|_2^2 \le \epsilon'$, and suppose that $\| f^{=1}\|_2^2 < 10^4\epsilon'$ and $\epsilon'<\frac{2}{10^8r}$. Then we have $$\|f^{=1}\|_2^2<2\epsilon'.$$ \end{lemma} \begin{proof} Suppose that $f=b+\sum_{i=1}^{n} g_i$, where $b=\widehat{f}(\overline{0})$ and $g_i=\sum_{j=1}^{r-1} F_{j e_i}$. We have $$\|d(b,\{0,1\}) \|_2^2 \le 2(\|d(f,\{0,1\})\|_2^2 + \|f-b\|_2^2) \le 20002\epsilon'.$$ Without loss of generality assume that $d(b,1)\le\sqrt{20002\epsilon'}$, which implies that \begin{equation} \label{boundB} {\rm Re}(b)>2/3. \end{equation} We have $$\|f-1\|_2^2-\|d(f,\{0,1\})\|_2^2 = \int (|f-1|^2-|f|^2)\zeta dx,$$ where $$\zeta(x)=\left\{ \begin{array}{cr} 1 & {\rm Re}(f(x))<\frac{1}{2} \\ 0 & {\rm otherwise} \end{array}\right.$$ So \begin{equation} \label{difference} \|f-1\|_2^2-\|d(f,\{0,1\})\|_2^2 = \int (1-2{\rm Re}(f))\zeta dx. \end{equation} The next step is to show that (\ref{difference}) is less than $\epsilon'$. Note that ${\rm Re}(f)={\rm Re}(b)+\sum_{i=1}^n {\rm Re}(g_i)$, and $\int {\rm Re}(g_i)=0$. Moreover $$\int {\rm Re}(g_i)^2 = \|{\rm Re}(g_i)\|_2^2 \le \|g_i\|_2^2.$$ So for every $i$, $$\|{\rm Re}(g_i)\|_2^2 \le \sum_i \|g_i\|_2^2 \le 10^4 \epsilon',$$ which, by the Cauchy--Schwarz inequality, implies that for every $x$, $$|{\rm Re}(g_i(x))| \le \sqrt{10^4 r \epsilon'} \le \sqrt{2}\times10^{-2} \doteq c.$$ Applying Theorem~\ref{Bennett} with $X_i=-{\rm Re}(g_i)$, we get \begin{equation} \Pr[\sum {\rm Re}(g_i) \le -t] \le e^{\frac{-10^4 \epsilon'}{c^2} h(10^{-4}tc/\epsilon')}. 
\end{equation} Note that $h(u) \ge u \ln(\frac{u}{e})$, for $u \ge e$; which implies that for $t \ge \frac{1}{6} \ge 10^4 e \epsilon'/c$, \begin{equation} \label{concenteration} \Pr[\sum {\rm Re}(g_i) \le -t] \le e^{-\frac{t}{c} \ln(10^{-4}tc/ e \epsilon')}. \end{equation} Now $$(\ref{difference}) = \int_{t=0}^{\infty} \Pr[1-2{\rm Re}(f)>t] =\int_{t=0}^{\infty} \Pr\left[{\rm Re}(b)+\sum {\rm Re}(g_i)<\frac{1-t}{2}\right].$$ Substituting~(\ref{boundB}) we get $$(\ref{difference}) \le \int_{t=0}^{\infty} \Pr\left[\sum {\rm Re}(g_i)<\frac{1-t}{2}-\frac{2}{3}\right] \le 2\int_{t=1/6}^{\infty} \Pr\left[\sum {\rm Re}(g_i)<-t\right].$$ Now by (\ref{concenteration}) \begin{eqnarray} \label{finaldiff} (\ref{difference}) &\le& 2\int_{t=1/6}^{\infty} e^{\frac{t}{c} \ln(10^4 e \epsilon' /t c)}\le 2\int_{t=1/6}^{\infty} \left(\frac{1-\ln(10^4e\epsilon'/t c)}{c}\right) e^{\frac{t}{c} \ln(10^4 e \epsilon'/t c)} = \nonumber \\ &= & 2e^{\frac{1}{6c} \ln(6\times10^4 e\epsilon'/c)} <\epsilon', \end{eqnarray} because $\epsilon'\le 10^{-8}$. Finally by~(\ref{finaldiff}) $$\|f^{=1}\|_2^2 \le \|f-1\|_2^2 \le \|d(f,\{0,1\})\|_2^2 + \epsilon' \le 2\epsilon'.$$ \end{proof} \section{Future Directions} Lemma~\ref{l1} asserts that when most of the 2-norm weight of the Fourier expansion of a Boolean function on $\mathbb{Z}_r^n$ is concentrated on the first two levels, then the function can be approximated by a Boolean function that depends on only one coordinate. One possible generalization of this lemma would be to show that a Boolean function on $\mathbb{Z}_r^n$ whose Fourier expansion is concentrated on the first $l$ levels for some constant $l$ can be approximated by a Boolean function that depends on $k(l)$ coordinates, for some function $k(l)$. Analogues of this for $\mathbb{Z}_2^{n}$ have been proven in~\cite{B1} and~\cite{KS}. 
Consider a graph $G$ whose vertices are the elements of the symmetric group $S_n$ and two vertices $\pi$ and $\pi'$ are adjacent if $\pi(i)\neq \pi'(i)$ for every $1 \le i \le n$. For every $1 \le i,j \le n$ the set $S_{ij}$ of the vertices $\pi$ satisfying $\pi(i)=j$ forms an independent set of size $(n-1)!$. Recently Cameron and Ku~\cite{Cameron} have proved that these sets are the only maximum independent sets of this graph. Similar results have been proven for generalizations of this graph in~\cite{LM}. Cameron and Ku made the following conjecture: \begin{conj}\label{Cameron} {\bf \cite{Cameron}} There is a constant $c$ such that every independent set of size at least $c(n-1)!$ is a subset of an independent set of size $(n-1)!$. \end{conj} One might notice the similarity of Conjecture~\ref{Cameron} and Corollary~\ref{cor1} for $r=n$. Despite this similarity we are not aware of any possible way to apply the techniques used in this paper to the problem. Since $S_n$ is not Abelian, the methods of the present paper (and all the papers mentioned in Section~\ref{intro}) fail to apply directly to this problem. So an answer to Conjecture~\ref{Cameron} or its analogues for the graphs studied in~\cite{LM} (which do not even have a group structure) might lead to new techniques. \bibliographystyle{siam}
\section{Introduction} The observations reported here are part of our ongoing project for the identification of radio, optical and infrared counterparts of hard X-ray sources detected by the GRANAT satellite in the Galactic Center region (Goldwurm et al. 1994). The original motivation of this search is based on the fact that previous identifications of GRANAT sources have led to the discovery of the so-called galactic microquasars, i.e., systems whose physics is regarded as a scaled-down version of the same processes (outbursts, jets, disks, etc.) occurring in extragalactic quasars and active galaxies (Falcke \& Biermann 1996). Their best representative examples known so far include 1E 1740.7$-$2942 (Mirabel et al. 1992), GRS 1758$-$258 (Rodr\'{\i}guez et al. 1992) and GRS 1915+105 (Mirabel \& Rodr\'{\i}guez 1994). A complete account of our recent radio observations will be reported in a future paper (Mart\'{\i} et al. 1998), while here we intend to discuss the particular case of \grs. The original target source \grs\ was first detected by Sunyaev (1990) as a previously unknown hard X-ray emitter in the Galactic Center direction. The coded mask imaging spectrometer ART-P on board GRANAT was used in this discovery. The 4-20 keV spectrum was well described by a hard power law, with a photon index of about $-2$ and a total hydrogen column density of $\sim6\times10^{22}$ cm$^{-2}$ (Pavlinsky et al. 1994). From the same authors, the corresponding luminosity for an 8.5 kpc distance was estimated to be $8\times 10^{36}$ erg s$^{-1}$, and the best ART-P position was known with a 90\% confidence radius of $95^{\prime\prime}$. Two years later, in 1992, \grs\ experienced a hard X-ray outburst from $<20$ mCrab to 36 mCrab in the 40-400 keV band. This was detected by the SIGMA telescope, also on board GRANAT, and both the rise and the later decline took place in a matter of a few days (Churazov et al. 1992). 
No additional outbursts have been reported since then. From observations in 1995 by Barret \& Grindlay (1996), a ROSAT-HRI source was proposed as the soft X-ray counterpart of \grs\ with a positional accuracy of a few arcsec. Unexpectedly, we found that a much better position for the \grs\ candidate could be obtained from the public databases of the NRAO VLA Sky Survey (NVSS), at the wavelength of 20 cm (Condon et al. 1993). Indeed, the inspection of the NVSS field of \grs, soon after its release, revealed, to our surprise, the presence of a strong compact radio source well within both the ART-P and the ROSAT error boxes. The corresponding radio source designation is \nvss, with a flux density at the 48 mJy level. The a priori probability of finding such a strong radio source within the ROSAT error box is as small as $\sim2\times10^{-5}$. This clearly implies that the X-ray and the radio source are almost certainly related or identical, and the same is also very likely to be true for the GRANAT source. The position of the NVSS object was available with sub-arcsec accuracy, and an exhaustive multi-wavelength campaign (optical, infrared and radio) was soon started based on the accurate radio position. The present paper will deal with the observational evidence, collected during this campaign, that suggests a Seyfert 1 interpretation for the NVSS, ROSAT and GRANAT sources. \section{Radio continuum observations and results} \begin{figure*} \plotfiddle{grs1734c.ps}{4.0cm}{0}{45}{45}{-280}{-180} \plotfiddle{grs1734x.ps}{4.0cm}{0}{45}{45}{-20}{-53} \caption{\label{maps} Uniform weight maps of \nvss\ at both 6 cm (left) and 3.5 cm (right) showing the jet-like structure of this radio source. The ROSAT 90\% confidence error circle is also indicated in both images. Contours at 6 cm are $-3$, 3, 4, 5, 6, 8, 10, 15, 20, 25, 30, 40, 60, 80, 100, 120, 140, 160 and 180 times 0.063 mJy beam$^{-1}$, the rms noise. 
Contours at 3.5 cm are $-3$, 3, 4, 5, 6, 8, 10, 12, 15, 20, 30, 40, 60, 80 and 100 times 0.053 mJy beam$^{-1}$, the rms noise. The corresponding synthesized beams are 2\pri 58$\times$0\pri 98 with position angle $-$1\grp4 at 6 cm, and 1\pri 57$\times$0\pri 58 with position angle $-$5\grp5 at 3.5 cm, respectively. } \end{figure*} In a first step, the promising candidate \nvss\ was extensively observed with the Very Large Array (VLA) interferometer of NRAO\footnote{The NRAO is operated by Associated Universities, Inc., under cooperative agreement with the USA National Science Foundation.} in a search for short term radio variability that could confirm its microquasar nature. The array was most of the time in B configuration. The data were processed following standard procedures within the AIPS software package of NRAO, with 3C 286 and 1751$-$253 being the amplitude and phase calibrator, respectively. The results of our radio monitoring of \nvss\ are summarized in Table \ref{moni}, where the flux density at several wavelengths is listed for the different dates of observation. Some older radio measurements of \nvss\ also quoted in Table \ref{moni} could be retrieved from the literature and the VLA archive database. \begin{table} \caption[]{Radio observations of \nvss} \label{moni} \begin{tabular}{cccc} \hline Date & Julian Day & $\lambda$ & Flux Density \\ & (JD$-$2400000) & (cm) & (mJy) \\ \hline 05 May 1980$^{(1)}$ & 44364.8 & 6 & $18.8\pm0.3$ \\ 1989$^{(2)}$ & $-$ & 20 & 56 \\ 23 Oct 1993$^{(3)}$ & 49184 & 20 & $48\pm2$ \\ 28 Mar 1997~~~ & 50536.0 & 20 & $63\pm1$ \\ & & 6 & $22.8\pm0.2$ \\ 10 Apr 1997~~~ & 50549.0 & 21 & $57\pm1$ \\ & & 6 & $23.7\pm0.2$ \\ & & 3.5 & $15.1\pm0.2$ \\ & & 2.0 & $8.7\pm0.6$ \\ 19 Apr 1997~~~ & 50557.9 & 6 & $23.9\pm0.2$ \\ & & 3.5 & $15.7\pm0.2$ \\ 04 May 1997~~~ & 50572.9 & 6 & $24.0\pm0.3$ \\ & & 3.5 & $14.7\pm0.4$ \\ \hline \end{tabular} ~\\ (1) VLA Archive Database; observer: Sramek R. \\ (2) Helfand et al. 
1992, ApJSS, 80, 211 \\ (3) NVSS maps; Condon et al. 1996 (in preparation) \\ \end{table} In all VLA observations (especially at 6, 3.5 and 2.0 cm) the source appeared resolved, with a clear bipolar jet-like structure. From the 3.5 cm map, the J2000 position of the central core is found to be $\alpha=17^{h}37^{m}$28\rl 35 and $\delta=-29^{\circ}08^{\prime}$02\pri 5 ($l^{II}=$358\grp 9, $b^{II}=$+1\grp 4), with an uncertainty of about 0\pri 1 in each coordinate. Contrary to our first expectations, neither significant proper motion in the jet condensations nor day-to-day variability in the source flux density was reliably detected. In view of that, we decided to concatenate the $(u,v)$ data of the highest quality sessions in order to obtain good maps of the \nvss\ radio jets. The resulting images are presented in Fig. \ref{maps}. The strongest jet-like feature emanates in the NE direction and extends over $\sim3^{\prime\prime}$, while a weaker $\sim2^{\prime\prime}$ counterjet is also evident. Since our VLA monitoring lasted for about five weeks and there was no positional change in the jet condensations larger than $\sim$0\pri 2, the corresponding proper motion upper limit is about 5 mas d$^{-1}$. We soon considered all these facts as a first indication that the object we were studying did not behave as expected for a microquasar. \begin{figure} \plotfiddle{lcxu.ps}{6.0cm}{0}{50}{50}{-145}{-30} \caption{\label{lcxu} The non-thermal radio spectrum of \nvss\ from decimetric to centimetric wavelengths as observed with the VLA on 1997 April 10. The dotted line corresponds to a power law fit with the parameters described in the text.} \end{figure} On 1997 April 10, the frequency coverage of the observations was wide enough to measure the spectral properties of the source radio emission. A typical non-thermal radio spectrum was observed and we present it in Fig. \ref{lcxu}. 
A power law fit indicates that the spectrum is well described by $S_{\nu}=(76\pm4$~mJy~$)(\nu/$GHz$)^{-0.75\pm0.03}$. \section{Infrared and optical observations and results} The radio position of \nvss\ was observed at both infrared and optical wavelengths using different ESO telescopes\footnote{Based on observations collected at the European Southern Observatory, La Silla, Chile.}. All frames were reduced using standard procedures based on the IRAF image processing system. \subsection{Imaging} Imaging infrared observations in the J and K bands were carried out with the IRAC2b camera mounted at the F/35 photometer adapter of the 2.2 m telescope. We observed on four nights from 1997 March 23 to April 3. The infrared counterpart of \nvss\ was preliminarily identified by measuring offsets from a nearby bright star. Wide field CCD images in the V, R and I bands were later obtained with the Danish 1.54 m telescope on 1997 April 10, using the DFOSC camera whose scale is 0\pri 40 pixel$^{-1}$. An accurate astrometrical analysis of these optical images was carried out using nine reference stars from the Guide Star Catalogue (Taff et al. 1990), thus confirming our previous infrared identification. The total offset between the radio and optical position was found to be about 0\pri 6. This is well within the astrometrical errors of the fit (rms$\sim$0\pri 4) and also well inside the ROSAT error circle. We therefore conclude that our identification is correct and that the optical/infrared counterpart found is the same object as \grs, \nvss\ and the ROSAT source. Finding charts at K and R bands to assist in future observations are shown in Figs. \ref{finderk} and \ref{finderr}. The two infrared sources IRAS 17342$-$2908 and 358.83+1.39 proposed by Cherepashchuk et al. (1994) as counterpart candidates are not consistent with our identification. 
\begin{figure} \plotfiddle{finderK.ps}{9.0cm}{0}{45}{45}{-140}{-50} \caption{\label{finderk} Finding chart of the \nvss\ infrared counterpart in the K band. The little cross represents the accurate VLA radio position and the circle is the ROSAT error box at the 90\% confidence level. The proposed counterpart is the only object consistent with both the radio and X-ray positions.} \end{figure} \begin{figure} \plotfiddle{finderR.ps}{9.0cm}{0}{45}{45}{-140}{-50} \caption{\label{finderr} Finding chart of the \nvss\ optical counterpart in the R band with the ROSAT and VLA positions also indicated. The field is exactly the same as in Fig. \ref{finderk}.} \end{figure} On the other hand, our identified source did not display clear evidence of photometric variability during either the infrared or the optical imaging observations. Although we only had one night at the 1.54 m telescope, the lack of variability in the optical can be established from the similar $R$ band magnitude derived during the spectroscopic session described below. The photometry of the source is summarized in Table \ref{photo}. \begin{table} \caption[]{Magnitudes of the \grs\ counterpart} \label{photo} \begin{tabular}{cccc} \hline Filter & Observation Date & Telescope & Magnitude \\ \hline V & 1997 April 10 & 1.54 m + DFOSC & $21.0\pm0.3$ \\ R & 1997 April 10 & 1.54 m + DFOSC & $18.3\pm0.1$ \\ I & 1997 April 10 & 1.54 m + DFOSC & $16.8\pm0.1$ \\ J & Average all dates & 2.2 m + IRAC2b & $13.7\pm0.1$ \\ K & Average all dates & 2.2 m + IRAC2b & $11.1\pm0.1$ \\ \hline \end{tabular} \end{table} \subsection{Spectroscopy} Broad band spectroscopic observations of the \grs\ optical counterpart were also carried out with EFOSC1 on the 3.6 m ESO telescope, using the B300 grism whose dispersion is 2.0 \AA\ pixel$^{-1}$. As shown in Fig. \ref{halpha}, they revealed that the optical spectrum is completely dominated by strong and very broad emission from the blended H$\alpha$ and [NII] lines. 
Other emission lines from [OI], [OIII] and [SII] are also identifiable. A consistent redshift measurement is obtained from all of them, with our best estimate being $z=0.0214\pm0.0005$. \begin{figure} \plotfiddle{halpha.ps}{7.5cm}{0}{55}{60}{-165}{-35} \caption{\label{halpha} Optical broad band spectrum of \grs\ obtained on 1997 March 12 with the EFOSC1 instrument at the 3.6 m ESO telescope.} \end{figure} The full width at zero intensity of the blended H$\alpha$ and [NII] complex is extremely broad, nearly 450 \AA\ or $\sim20000$ km s$^{-1}$. Their total flux in emission is $3.8\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$. We have attempted to deblend these three lines using Gaussian components at the expected wavelengths for $z=0.0214$. The results obtained are given in Table \ref{abso}. This table also includes information on the other spectral lines most evident in the \grs\ spectrum. We wish to point out that the deblending procedure may not be completely reliable here because the H$\alpha$ and [N II] lines are very difficult to separate. In particular, the full width at half maximum (FWHM) derived for the forbidden [N II] components (often narrower than $\sim$1000 km s$^{-1}$ in Seyferts) seems to be unusually high, and perhaps we are underestimating the H$\alpha$ emission. It is also possible that H$\alpha$ has an even broader component that our deblending fit is not accounting for. 
\begin{table*} \caption[]{Main lines in the \grs\ optical spectrum} \label{abso} \begin{tabular}{lcrrrrl} \hline Line & Observed wavelength & Observed flux & FWHM & FWHM & EW & Notes \\ & (\AA) & (erg s$^{-1}$ cm$^{-2}$) & (\AA)& (km s$^{-1}$) & (\AA) & \\ \hline $[$S II$]$ & 6853.3 & $4.8\times 10^{-16}$ & 15 & 660 & $-6$ & \\ & 6871.7 & $2.5\times 10^{-16}$ & 12 & 530 & $-3$ & \\ H$\alpha$ & 6701.5 & $8.3\times 10^{-15}$ & 30 & 1340 & $-120$ & Peak wavelength \\ $[$N II$]$ & 6688.1 & $1.5\times 10^{-14}$ & 123 & 5510 & $-216$ & Assumed wavelength for deblending \\ & 6723.9 & $1.5\times 10^{-14}$ & 104 & 4640 & $-200$ & Assumed wavelength for deblending \\ $[$O I$]$ & 6432.4 & $4.2\times 10^{-16}$ & 32 & 1490 & $-10$ & \\ & 6503.5 & $4.2\times 10^{-16}$ & 35 & 1610 & $-9$ & \\ Na I D & 5893.5 & $-1.8\times 10^{-16}$ & 17 & 870 & $9$ & Interstellar line \\ $[$O III$]$ & 5113.5 & $6.8\times 10^{-16}$ & 17 & 1000 & $-400$ & Continuum very weak \\ & 5066.5 & $2.1\times 10^{-16}$ & 18 & 1070 & $-172$ & Continuum very weak \\ H$\beta$ & 4966.0 & $2.8\times 10^{-16}$ & 31 & 1870 & $-555$ & Continuum very weak \\ \hline \end{tabular} \end{table*} \section{The column density towards GRS 1734$-$292} In this section, we undertake a comparison study of the total absorption column density $N($H$)$ towards \grs\ derived by three independent methods. A mean value is finally adopted in order to deredden the photometric magnitudes of the previous section. This will help us later to estimate the unabsorbed optical luminosity of \grs\ when discussing its physical nature. \subsection{An upper limit to $N($H$)$ from the sodium interstellar absorption line} The spectrum in Fig. \ref{halpha} displays an absorption feature at 5893.5 \AA\ (see Table \ref{abso}). This can be interpreted as an unresolved detection of the two Na D interstellar absorption lines. The Na D lines are expected to be at 5890 and 5896 \AA, respectively. 
The absorption feature mentioned is well located at the middle point between these two wavelengths. The identification is thus convincing, although our resolution is not good enough to distinguish the two components separately. The intensity of the Na D absorption feature can be used to estimate both the extinction and the distance for objects within the Galaxy. The corresponding relationship is given by: $$ A_V = 3.8 EW,$$ where $EW$ is the mean equivalent width in \AA\ of the two Na D-lines, and a 1.9 mag kpc$^{-1}$ absorption of optical light near the Galactic Plane has been assumed (Allen 1973). For extragalactic sources, the Na lines only provide an indication of the length of the line of sight within our own Galaxy. This is, of course, making the reasonable assumption that all the Na absorption is produced inside the Milky Way. In our case, we are not able to resolve the Na lines and only an upper limit to the mean equivalent width is available from observation ($EW \leq 9$ \AA). This implies from the previous equation that the extinction towards \grs\ should be $A_V \leq 34.2$ mag, and consequently the line of sight towards \grs\ intercepts less than 18 kpc of gas and dust within our Galaxy. From the relationship by Predehl \& Schmitt (1995): $$A_V = 0.56 \left[\frac{N(H)}{10^{21}~cm^{-2}}\right] +0.23,$$ this corresponds to a total hydrogen column density of about $N($H$) \leq 6.1\times 10^{22}$ cm$^{-2}$. This upper limit is consistent in order of magnitude with the rough estimate $\sim6\times10^{22}$ cm$^{-2}$, or $A_V\sim 34$ mag, derived by Pavlinsky et al. (1994) from X-ray model fitting with their ART-P data. However, the fact that an optical counterpart has been found for \grs\ is difficult to reconcile with an optical extinction of more than thirty magnitudes. Therefore, the ART-P column density is likely to be strongly overestimated and we will show below that this is certainly the case. 
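For concreteness, the arithmetic of this subsection can be sketched in a few lines (a hypothetical helper script, not code from the paper; the inputs are the measured upper limit $EW\leq 9$ \AA, the adopted 1.9 mag kpc$^{-1}$ absorption rate near the Galactic Plane, and the Predehl \& Schmitt (1995) relation):

```python
# Sketch of the above estimates; function names are ours (hypothetical).

def av_from_na_d(ew_mean):
    """Visual extinction (mag) from the mean Na D equivalent width (Angstrom)."""
    return 3.8 * ew_mean

def nh_from_av(av):
    """Total hydrogen column (cm^-2) from A_V, Predehl & Schmitt (1995)."""
    return (av - 0.23) / 0.56 * 1e21

av_max = av_from_na_d(9.0)       # upper limit: 34.2 mag
path_max_kpc = av_max / 1.9      # <= 18 kpc of gas and dust within the Galaxy
nh_max = nh_from_av(av_max)      # <= ~6.1e22 cm^-2, as quoted in the text
```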
\subsection{$N($H$)$ estimated from neutral hydrogen absorption} An H I absorption experiment at 21 cm was carried out with the VLA on 1997 April 10 and it was also reduced using standard AIPS procedures. The resulting spectrum (not shown here) was not of high signal-to-noise ratio. The only absorption feature detected was localized at $3.6\pm0.1$ km s$^{-1}$ LSR velocity, with an estimated peak opacity value of $\tau_0=1.10\pm0.02$. This implies that most of the absorption is produced in the Sagittarius arm. The FWHM was $21.1\pm0.3$ km s$^{-1}$. The corresponding column density of H I along the line of sight to \nvss\ can be expressed as $N($H I$)=(4.2\pm0.2)\times10^{21} (T_s/100$~K$)$ cm$^{-2}$, where $T_s$ is the hydrogen spin temperature. Using the canonical value $T_s=125$ K, we estimate that $N($H I$)=(5.3\pm0.3)\times10^{21}$ cm$^{-2}$. Since \grs\ is close to the Galactic Center direction, $N($H$)$ should include another important contribution from the metals associated with the abundant molecular hydrogen component $N($H$_2)$ in addition to the atomic species. In order to derive $N($H$_2)$, we have used the Columbia $^{12}$CO (J=1-0) survey by Dame et al. (1987) together with the empirical relation of $N($H$_2)$ with the integrated CO line intensity. This relation can be expressed as $N($H$_2)=3.6\times 10^{20}\int T($CO$)dv$ (Scoville et al. 1987). By interpolating the Columbia survey at the \grs\ position, we find a single emission component at an LSR velocity of $-5\pm1$ km s$^{-1}$. A Gaussian fit to this line yields a peak temperature of $0.29\pm0.02$ K with a FWHM of $22\pm3$ km s$^{-1}$. The corresponding value of $N($H$_2)$ is thus $(2.4\pm0.4) \times 10^{21}$ cm$^{-2}$. By combining the H I and CO information, the total absorbing column density in the \grs\ direction can now be found from $N($H$)=N($H I$)+2 N($H$_2)$. 
The final result is $N($H$)= (1.0\pm0.1) \times 10^{22}$ cm$^{-2}$, equivalent to a visual extinction of $A_V=5.8\pm0.5$ magnitudes using again the Predehl \& Schmitt (1995) relation. It is important to mention here that recent studies by Dahmen et al. (1996) suggest that the conversion factor between the CO emission and the H$_2$ column density may be overestimated by an order of magnitude. If this is the case, the $A_V$ value derived above should be revised accordingly. Nevertheless, a reliable lower limit of $A_V>3.2$ magnitudes can be established from the H I contribution alone. \subsection{$N($H$)$ estimated from the Balmer decrement} The extinction and the hydrogen column density towards \grs\ can also be independently estimated from the measured line ratio H$\alpha$/H$\beta$ in the optical spectrum, using the values in Table \ref{abso}. Following Miller \& Mathews (1972), the H$\alpha$/H$\beta$ relationship with galactic extinction can be expressed as: $$A_B=8.5 \log{((H\alpha/H\beta)/3.0)},$$ while absorption at other bands can be easily computed using the following parameterized reddening curve: $$A_{\lambda}=0.74 \lambda^{-1} -0.34,$$ where $\lambda$ is the central band wavelength expressed in microns. The H$\alpha$ flux in \grs\ is, of course, less than the total blended emission of the H$\alpha$ and [N II] lines (H$\alpha < 3.8\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$). This implies that H$\alpha$/H$\beta < 136$, and we confidently estimate that $A_V<10.5$ mag. Furthermore, if the deblending procedure in Table \ref{abso} was appropriate, we would find more precisely that H$\alpha$/H$\beta\simeq30$. Such a line ratio then yields $A_V=6.3$ mag, with a formal likely uncertainty of $\pm0.5$ mag. This final absorption estimate translates into $N($H$)=(1.1\pm0.1)\times 10^{22}$ cm$^{-2}$. \vspace{0.5cm} Summarizing this section, all three independent methods used provide consistent results.
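The Balmer-decrement numbers can be checked with the two relations just given; the effective wavelengths of 0.44 and 0.55 micron adopted below for the B and V bands are assumptions of this sketch:

```python
import math

def a_band(lam_micron):
    """Parameterized reddening curve (relative absorption) quoted in the text."""
    return 0.74 / lam_micron - 0.34

def av_from_balmer(ratio):
    """A_V from an observed Halpha/Hbeta ratio (Miller & Mathews 1972)."""
    a_b = 8.5 * math.log10(ratio / 3.0)
    # scale the B-band absorption to V; 0.44 and 0.55 micron are assumed
    return a_b * a_band(0.55) / a_band(0.44)

av_upper = av_from_balmer(136.0)            # from the blended-flux upper limit, ~10.5
av_best = av_from_balmer(30.0)              # from the deblended ratio, ~6.3-6.4
nh_best = (av_best - 0.23) / 0.56 * 1e21    # Predehl & Schmitt (1995), ~1.1e22 cm^-2
print(av_upper, av_best, nh_best)
```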
This agreement may be further tested in the future by carrying out additional spectroscopic optical and radio observations with higher resolution and sensitivity. In the following, we will adopt $A_V=6\pm1$ mag, or equivalently $N($H$)=(1.0\pm0.2)\times10^{22}$ cm$^{-2}$, as a compromise mean value for discussion purposes. Such an amount of visual extinction is indeed a reasonable result at 1\grp 4 of galactic latitude. The mean values of $A_V$ as a function of $b^{II}$, and close to the Galactic Center direction, have been studied for instance by Catchpole et al. (1990). The statistical analysis of these authors using colour-magnitude diagrams provides $A_V\sim 5$ mag for galactic latitudes in the 1\grp0-1\grp5 range, i.e., where \grs\ is located. \section{Discussion} \begin{table*} \caption[]{The main physical parameters of \grs} \label{param} \begin{tabular}{lll} \hline Parameter & Value & Notes \\ \hline Redshift & $z=0.0214\pm0.0005$ & \\ Distance & $D=87$ Mpc & $H_0=75$ km s$^{-1}$ Mpc$^{-1}$ \\ Jet size & $l_{jet}\simeq2.1$ kpc & \\ Visual absorption & $A_V = 6\pm1$ mag & \\ H column density & $N($H$)=(1.0\pm0.2) \times 10^{22}$ cm$^{-2}$ & \\ Radio luminosity & $L_{rad}\simeq 7\times 10^{39}$ erg s$^{-1}$ & 0.1-100 GHz band \\ Optical luminosity & $L_{opt}\simeq 2\times 10^{43}$ erg s$^{-1}$ & 4900-9000 \AA\ band \\ X-ray luminosity & $L_{X}\simeq 1\times 10^{44}$ erg s$^{-1}$ & 0.5-4.5 keV band \\ Line luminosity & $L_{H\alpha} \simeq 5\times 10^{41}$ erg s$^{-1}$ & Deblended value \\ & $L_{H\beta} \simeq 1 \times 10^{41}$ erg s$^{-1}$ & \\ & $L_{[S~II]} \simeq 4 \times 10^{40}$ erg s$^{-1}$ & \\ & $L_{[O~I]} \simeq 6 \times 10^{40}$ erg s$^{-1}$ & \\ & $L_{[O~III]} \simeq 4 \times 10^{41}$ erg s$^{-1}$ & \\ \hline \end{tabular} \end{table*} The observed redshift of \grs\ corresponds to a recession velocity of 6500 km s$^{-1}$. Such a high value rules out any interpretation based on the systemic velocity of a binary star in the Galaxy.
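The distance bookkeeping behind the table above is elementary; in the sketch below the quoted rounded values (6500 km s$^{-1}$, 87 Mpc) agree with $cz$ well within the $\pm0.0005$ redshift uncertainty:

```python
c = 299792.458            # speed of light, km/s
z = 0.0214
v = c * z                 # non-relativistic recession velocity, ~6.4e3 km/s
# The text quotes ~6500 km/s; the difference lies well inside the
# +/-0.0005 redshift uncertainty (about +/-150 km/s).
d75 = v / 75.0            # distance for H0 = 75 km/s/Mpc  -> ~86 Mpc
d_h = v / 100.0           # in units of h^-1 Mpc           -> ~64 h^-1 Mpc
print(v, d75, d_h)
```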
For this galactic interpretation, one should expect a redshift (or blueshift) of at most \ltsima1000 km s$^{-1}$, i.e., the typical kick velocity acquired by the binary system after the supernova explosion that forms the compact companion. On the contrary, the simplest way to account for the observed redshift is to assume that \grs\ lies at a cosmological distance and is, therefore, an extragalactic source. Using Hubble's law, the corresponding distance can be estimated as $D = 65 h^{-1}$ Mpc (where the Hubble constant is expressed here as $H_0=100h$ km s$^{-1}$ Mpc$^{-1}$ and a Universe with $\Omega=1$ is assumed). The spectrum in Fig. \ref{halpha} is highly reminiscent of a Seyfert 1 Galaxy given the large width of permitted lines. For a Seyfert galaxy at a $z=0.0214$ redshift, it should normally be possible to see some extended nebulosity on arcsec scales if located at high galactic latitude. The deep R band CCD image in Fig. \ref{finderr} shows that this is not the case. The discovered optical counterpart appears as an unresolved source and only the galactic nucleus is evident. This is possibly due to the optical extinction in the galactic plane ($b^{II}=$1\grp 4). Although not extremely large ($A_V\simeq6$ mag), it is apparently sufficient to prevent any faint nebulosity from being seen in the optical. In the K band no nebulosity is seen either, and this compactness could mean that \grs\ is a nearby quasar instead of a Seyfert 1. However, we believe that the observed optical spectrum provides very strong evidence in favor of a Seyfert 1 interpretation at present. The catalogue of GRANAT sources does not include many examples of extragalactic objects and our discovery adds a new member to this scarce group. Only three extragalactic sources have been extensively detected by the SIGMA telescope on board GRANAT. They are the quasar 3C 273, the Seyfert 1.5 galaxy NGC 4151, and the radio galaxy Cen A (Bassani et al. 1993; Jourdain et al. 1993).
All of them are characterized by displaying clear hard X-ray variability and spectral evolution in time scales of both years and, in some cases, few days. In particular, Cen A was observed to decrease its 40-120 keV flux by a factor of 1.5 within only four days. This behavior compares well with that of \grs. Our target source is currently accepted as a confirmed variable (Barret \& Grindlay 1996), and it exhibited a few day time scale variability during its 1992 hard X-ray outburst (Churazov et al. 1992). On the other hand, the \grs\ spectrum became extremely hard during this flaring event, and this is remarkably similar to the spectrum hardening observed by SIGMA in the NGC 4151 Seyfert during epochs of high photon flux (Bassani et al. 1993). There are in addition other observational clues in agreement with a Seyfert 1 galaxy scenario. For instance, the total radio power derived from the spectrum in Fig. \ref{lcxu} is $L_{rad} \simeq 4 \times 10^{39} h^{-2}$ erg s$^{-1}$ and, in particular, $P_{21 cm} \simeq 3 \times 10^{29} h^{-2}$ erg s$^{-1}$ Hz$^{-1}$. This correlates well when plotted in monochromatic 21 cm radio power versus 0.5-4.5 keV X-ray luminosity diagrams for Seyfert 1 galaxies by Wilson (1991). The unabsorbed value of the X-ray output is taken to be here $L_X \simeq6\times 10^{43} h^{-2}$ erg s$^{-1}$ by extrapolating the power law fit from Pavlinsky et al. (1994). The observed radio jet morphology and spectral index are quite common among Seyfert galaxies. The case of \grs\ should be classified as belonging to the L or {\it linear} class in the Wilson (1991) scheme. The radio jet size ($l_{jet} \simeq 1.6 h^{-1}$ kpc) also correlates acceptably well with the radio power from a Seyfert 1 object (Ulvestad \& Wilson 1984). The optical luminosity estimated from our broad band spectrum and VRI photometry using $A_V=6$ mag is $L_{opt}\sim 1 \times 10^{43} h^{-2}$ erg s$^{-1}$ in the 4900-9000 \AA\ band. 
This is again in good order of magnitude agreement with expectations based on the radio/optical power correlation studied by Edelson (1987) for Seyfert galaxies in the CfA sample. Other correlations that test acceptably well are those involving the [O III] luminosity versus H$\beta$ luminosity, radio power and FWHM of [O III] (Lawrence 1987; Whittle 1985). We close this discussion by giving in Table \ref{param} the main physical parameters of \grs, expressed for the particular case of $H_0=75$ km s$^{-1}$ Mpc$^{-1}$, and mentioning that this Seyfert also fits reasonably well the Falcke \& Biermann (1996) scheme for AGNs when plotted in their diagram of monochromatic radio power versus core disk luminosity. \section{Conclusions} We have presented observations that provide a very accurate positional identification of the radio, infrared and optical counterpart of the GRANAT source \grs. The discovered counterpart displays clear evidence of being a Seyfert 1 galaxy. The most remarkable properties of the system are perhaps its clear linear jet-like structure and its broad H$\alpha$ emission. A redshift measurement yields the value $z=0.0214\pm0.0005$, thus providing a distance to \grs\ of 87 Mpc ($H_0=75$ km s$^{-1}$ Mpc$^{-1}$). The column density towards \grs\ is also estimated using three different techniques and an average value of $A_V=6\pm1$ mag is proposed. This is equivalent to a hydrogen column density of $N($H$)=(1.0\pm0.2)\times 10^{22}$ cm$^{-2}$. The Seyfert 1 nature of \grs\ is additionally confirmed by a satisfactory agreement with different well-established correlations for Seyfert galaxies. We also point out that the hard X-ray behavior of \grs\ is consistent with that of the extragalactic sources studied by GRANAT. \acknowledgements{J.M. acknowledges financial support from a postdoctoral fellowship of the Spanish Ministerio de Educaci\'on y Ciencia. LFR acknowledges support from DGAPA, UNAM and CONACyT, Mexico. We thank C.
Lidman who arranged the ESO observations in service mode as well as F. Comer\'on who kindly obtained some of the images. A.S. Wilson, J. Paul, J. Lequeux, C. Gouiffes and P.-A. Duc are also acknowledged for useful comments, help and discussion. Carlos De Breuck is specially thanked for obtaining the optical spectrum. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. } \newpage
\section{\label{sec:Intro} Introduction} In this article we study the problem of describing the propagation of traveling waves obeying the nonlinear Klein-Gordon and the Sine-Gordon equations. Under certain conditions the effect of the nonlinearity is to preserve the oscillatory behavior of the solutions and, at the same time, modify the dispersion relation for the traveling waves which turns out to depend on the amplitude of oscillation. As a matter of fact we are considering conservative systems, for which the dynamics can be mapped to the nonlinear oscillation of a point mass in a one--dimensional potential. The main goal of this article is to explore the effects of the nonlinearity on the solutions, providing simple and efficient approximations. Although for weak nonlinearities, this task can be accomplished by applying perturbative methods (corresponding to performing an expansion in a small parameter which governs the strength of the nonlinearity itself), the situation is more complicated in presence of strong nonlinearities. In such a regime perturbation theory cannot be applied, since the perturbative series do not converge. Such a problem was studied in Ref.~\cite{Lim}, where nonperturbative formulas for the dispersion relations of the traveling wave in the Klein-Gordon and the Sine-Gordon equations were derived. The formulas obtained by Lim \emph{et al.} provide an accurate approximation to the exact results even when the nonlinearity is very strong. In this article we consider the same problems of Ref.~\cite{Lim} and apply to them an approach which has been developed recently.~\cite{AA03,AL04,AA03b,AS,AASF} Our approach is fully nonperturbative in the sense that it does not correspond to a polynomial in the nonlinear driving parameter and, when applied to a given order, allows us to obtain analytical expressions for the dispersion relations, which never involve special functions, to any desired level of accuracy. 
It is worth mentioning that in the case of weak nonlinearities, an expansion of the nonperturbative results in powers of the nonlinear parameter is sufficient to recover the perturbative results. Let us briefly describe the problem that we are interested in. We consider the nonlinear Klein-Gordon equation \begin{equation} u_{tt}-u_{xx}+V'(u)=0\;,\label{eq:nlKGe} \end{equation} where $V'(u)$ is a function of $u$, which we will assume to be odd, and the prime is the derivative with respect to $u$. To determine the periodic traveling wave, we set \begin{equation} u=u(\theta)\,,\qquad \theta=kx-\omega t\;. \end{equation} After substituting into Eq.(\ref{eq:nlKGe}) we find \begin{equation} \Omega^2 \ddot{u}+V'(u)=0\;,\label{eq:map} \end{equation} where $\Omega^2=(\omega^2-k^2)$ and $\dot u \equiv du/d\theta$. $u(\theta)$ is periodic with period $2\pi$ and fulfills the boundary conditions \begin{equation} u(0)=A\;,\qquad \dot u(0)=0\;,\label{eq:bc} \end{equation} with $A$ being the amplitude of the traveling wave. The solution of Eq.(\ref{eq:map}) with the previous boundary conditions oscillates between $-A$ and $A$. By integrating Eq.(\ref{eq:map}) and taking into account Eq.(\ref{eq:bc}) we obtain \begin{equation} \frac{1}{2} \Omega^2 \dot{u}^2+V(u)=V(A)\;. \end{equation} Considering $\Omega^2>0$ we observe that \begin{equation} \Omega = \frac{\pi}{\sqrt{2}\int_0^A[V(A)-V(u)]^{-1/2}du}\label{eq:exact} \end{equation} gives the exact expression for the dispersion relation of the nonlinear Klein-Gordon equation. We neglect the case $\Omega^2\le 0$ since there is no traveling wave for this configuration. This article is organized as follows: in section \ref{sec_2} we describe the variational nonperturbative approach and apply it to derive approximate analytical formulas for the nonlinear Klein--Gordon equation; in section \ref{sec_3} we apply our method to two further nonlinear equations; finally in section \ref{sec_4} we draw our conclusions. 
\section{Variational Method} \label{sec_2} An exact solution of Eq.~(\ref{eq:nlKGe}) can be accomplished in a limited number of cases, depending on the form of the potential $V(u)$. However, when the nonlinearities due to the potential $V(u)$ are small, it is still possible to find useful approximations using perturbation theory. The focus of this section will be on the opposite situation, when the nonlinearities are not small and a perturbative expansion is not useful. In such a case one needs to resort to nonperturbative methods, capable of providing the solution even in the presence of strong nonlinearities. One such method, which we will use in the present article, is the linear delta expansion (LDE)~\cite{K81,F00,AFC90,lde}. The LDE is a powerful technique that has been applied to difficult problems arising in different branches of physics like field theory, classical, quantum and statistical mechanics. The idea behind the LDE is to interpolate a given problem ${\cal P}_g$ with a solvable one ${\cal P}_s$, which depends on one or more arbitrary parameters $\lambda$. In symbolic form ${\cal P}={\cal P}_s(\lambda) + \delta ({\cal P}_g-{\cal P}_s(\lambda))$. $\delta$ is just a bookkeeping parameter such that for $\delta=1$ we recover the original problem, and for $\delta\to 0$ we can perform a perturbative expansion of the solutions of ${\cal P}$ in $\delta$. The perturbative solution obtained in this way to a finite order shows an artificial dependence upon the arbitrary parameter $\lambda$, which would cancel if the calculation were carried out to all orders. We must therefore regard such dependence as spurious; in order to minimize its effects we require that any observable ${\cal O}$, calculated to a finite order, be locally independent of $\lambda$, i.e.
that \begin{equation} \frac{\partial {\cal O}}{\partial \lambda}=0\;.\label{eq:PMS} \end{equation} This condition is known as the ``Principle of Minimal Sensitivity'' (PMS)~\cite{S81}. We call $\lambda_{PMS}$ the solution to this equation. (In the case where the PMS equation has multiple solutions, the solution with the smallest second derivative is chosen.) We emphasize that the results that we obtain by applying this method do not correspond to a polynomial in the parameters of the model, as in the case of perturbative methods. The procedure that we have illustrated is quite general and it can be implemented in different ways depending on the problem that is being considered. In Refs.~\cite{AA03,AL04,AA03b} the LDE was used in conjunction with the Lindstedt--Poincar\'e technique to solve the corresponding equations of motion. Our approach here is to apply the LDE directly to the integral of eq.~(\ref{eq:exact}) as in Refs.~\cite{AS} and~\cite{AASF}. We will consider the potential \begin{equation} V(u)=\frac{u^2}{2}+\frac{\mu u^4}{4}\label{eq:pot} \ . \end{equation} The dispersion relation in this case can be obtained using Eq. (\ref{eq:exact}) as \begin{equation} \Omega=\frac{\pi\sqrt{1+\mu A^2}}{2\int_0^{\pi/2} (1-m\sin^2{\phi})^{-1/2}d\phi}\;, \end{equation} with $m=\frac{\mu A^2}{2(1+\mu A^2)}$. We consider the following approach to obtain the dispersion relation of a periodic traveling wave. This comes from the equation for the period of oscillations, \begin{equation} T = \int_{-A}^{+A} \frac{\sqrt{2}}{\sqrt{E-V(u)}}du\;,\label{eq:period} \end{equation} where the total energy $E$ is conserved and $\pm A$ are the classical turning points. In the spirit of the LDE we interpolate the nonlinear potential $V(u)$ with a solvable potential $V_0(u)$ and define the interpolated potential $V_\delta(u)=V_0(u)+\delta(V(u)-V_0(u))$. Notice that for $\delta=1$, $V_\delta(u)=V(u)$ is just the original potential, whereas for $\delta=0$ it reduces to $V_0(u)$.
Hence we can write Eq. (\ref{eq:period}) as~\cite{AS,AASF} \begin{equation} T_\delta = \int_{-A}^{+A} \frac{\sqrt{2}}{\sqrt{E_0-V_0(u)}} \frac{du} {\sqrt{1+\delta \Delta(u)}} \end{equation} where \begin{equation} \Delta(u)=\frac{E-E_0-V(u)+V_0(u)}{E_0-V_0(u)}\;. \end{equation} Obviously $E=V(A)$ and $E_0=V_0(A)$. We treat the term proportional to $\delta$ as a perturbation and expand in powers of $\delta$. This allows us to write \begin{equation} T_\delta = \sum_{n=0}^\infty \frac{(2n-1)!!}{n! 2^n}(-1)^n \delta^n \int_{-A}^{+A} \frac{\sqrt{2} (\Delta(u))^n}{\sqrt{E_0-V_0(u)}}du\;. \label{eq:periodLDE} \end{equation} Observe that the integrals in each order of Eq.~(\ref{eq:periodLDE}) have integrable singularities at the turning points because $\Delta(\pm A)$ is finite. Assume that $|\Delta(u)|\leq \Delta_0 < 1$ for $u\in [-A,A]$, which happens if $\lambda$, the arbitrary variational parameter, is chosen appropriately. Then, the series (\ref{eq:periodLDE}) converges uniformly for $|\delta | < 1/\Delta_0$, which includes the case $\delta=1$. For the potential given in Eq. (\ref{eq:pot}) we can choose $V_0(u)=\frac{1+\lambda^2}{2}u^2$ as the interpolating potential and hence we have \begin{equation} \Delta(u)=\frac{2}{1+\lambda^2}\left[ \frac{\mu}{4}(A^2+u^2) -\frac{\lambda^2}{2}\right]\;. \end{equation} The parameter $\lambda$ should be chosen to satisfy $\lambda>\sqrt{\frac{\mu A^2}{2}}\sqrt{1+\frac{1}{\mu A^2}}$, which guarantees the uniform convergence of Eq.(\ref{eq:periodLDE}). It is straightforward to check that at first order, \begin{equation} T_\delta^{(0)}+\delta T_\delta^{(1)}= \frac{2\pi}{\sqrt{1+\lambda^2}} \left\{1-\frac{\delta}{1+\lambda^2}\left[ \frac{3}{8} \mu A^2 -\frac{\lambda^2}{2}\right] \right\}\;. \end{equation} The PMS (\ref{eq:PMS}) with ${\cal O}=T$ yields \begin{equation} \lambda_{PMS}=\frac{\sqrt{3\mu}A}{2}\;.\label{eq:lpms_t} \end{equation} The period is found to be \begin{equation} T_{PMS}=\frac{4\pi}{\sqrt{4+3\mu A^2}}\;.
\end{equation} Correspondingly, \begin{equation} \Omega_{LDE\,(1)}=\sqrt{1+\frac{3}{4} \mu A^2}\;.\label{eq:omega_period} \end{equation} In Ref.~\cite{AASF} it was found that with this value of $\lambda_{PMS}$ all the remaining terms of odd order in Eq.~(\ref{eq:periodLDE}) vanish. Hence, retaining only nonvanishing contributions, the expression for the period at order $N$ is \begin{equation} T_\delta^{(N)}= \frac{4\pi}{\sqrt{4+3 \mu A^2}} \sum_{n=0}^N (-1)^n \left( \begin{array}{c} -1/2\\n\end{array}\right) \left( \begin{array}{c} -1/2\\2n\end{array}\right) \left(\frac{\mu A^2} {4+3\mu A^2} \right)^{2n}\;, \label{eq:T_ordn} \end{equation} and, correspondingly, \begin{equation} \Omega_{LDE\,(N)}=\frac{2\pi}{T_\delta^{(N)}}\;. \label{eq:W_ordn} \end{equation} At second order we have \begin{equation} \Omega_{LDE\,(2)}=\frac{\sqrt{4+3 A^2\mu}}{2 \displaystyle{\left(1+\frac{3 A^4 \mu^2(1024 + A^2\mu(1536+611 A^2\mu))}{1024 (4+ 3 A^2\mu)^4} \right)}}\;. \label{eq:omegaper_2} \end{equation} At third order, the dispersion relation is given by \begin{equation} \Omega_{LDE\,(3)}= \frac{{\sqrt{4 + 3\,A^2\,\mu }}} {2\,\displaystyle{\left( 1 + \frac{3\,A^4\,{\mu }^2\, \left( 385\,A^8\,{\mu }^4 + 560\,A^4\,{\mu }^2\, {\left( 4 + 3\,A^2\,\mu \right) }^2 + 1024\,{\left( 4 + 3\,A^2\,\mu \right) }^4 \right) }{ 16384\,{\left( 4 + 3\,A^2\,\mu \right) }^6} \right)} }\;. \label{eq:omegaper_3} \end{equation} We will compare the results obtained using our method, Eqs. (\ref{eq:omega_period}), (\ref{eq:omegaper_2}), and (\ref{eq:omegaper_3}) with the results obtained in Ref. \cite{Lim}, where the same problem has been solved using the harmonic balance technique in combination with the linearization of the nonlinear Klein-Gordon equation. The findings of Ref. 
\cite{Lim} are the following: at first order, their expression for the dispersion relation coincides with our Eq.~(\ref{eq:omega_period}), whereas at second order they find \begin{equation} \Omega_{Lim\,(2)} = \sqrt{\frac{40+31\mu A^2 + \sqrt{1024+1472\mu A^2 + 421 \mu^2 A^4}}{72}}\label{eq:Lim2}. \end{equation} In the left-hand panel of Fig.~\ref{fig1} we compare the ratios of the dispersion relations obtained from Eqs.~(\ref{eq:omega_period}) and (\ref{eq:omegaper_2})-(\ref{eq:Lim2}) to the exact dispersion relation for moderate values of $\mu A^2$, and in the right-hand panel of Fig.~\ref{fig1} we display the relative error \begin{equation} \Delta=\log_{10}\left|\frac{ \Omega-\Omega_{exact} }{\Omega_{exact}}\right|\label{relerr} \end{equation} for large $\mu A^2$. We can appreciate that our variational method at second order provides a smaller error than the method of Ref. \cite{Lim} applied at the same order. The error is further reduced by using the LDE to third order and can then be systematically reduced using the general formula (\ref{eq:T_ordn}).
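The convergence of the partial sums of Eq.~(\ref{eq:T_ordn}) toward the exact result of Eq.~(\ref{eq:exact}) is easy to verify numerically. The sketch below evaluates the exact frequency by quadrature (the substitution $u=A\sin\phi$ removes the turning-point singularity) and uses the identity $\binom{-1/2}{n}=(-1/4)^n\binom{2n}{n}$ for the generalized binomials:

```python
import math

def simpson(f, a, b, n=2000):              # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def omega_exact(mu, A):
    """Exact dispersion relation for V(u) = u^2/2 + mu*u^4/4, via u = A sin(phi)."""
    integral = simpson(
        lambda p: 2 / math.sqrt(2 + mu * A * A * (1 + math.sin(p) ** 2)),
        0.0, math.pi / 2)
    return math.pi / (math.sqrt(2) * integral)

def binom_half(n):
    """Generalized binomial C(-1/2, n) = (-1/4)^n C(2n, n)."""
    return (-0.25) ** n * math.comb(2 * n, n)

def omega_lde(mu, A, N):
    """2*pi/T from the partial sums of the LDE period series."""
    x = mu * A * A / (4 + 3 * mu * A * A)
    s = sum((-1) ** n * binom_half(n) * binom_half(2 * n) * x ** (2 * n)
            for n in range(N + 1))
    return math.sqrt(4 + 3 * mu * A * A) / (2 * s)

# N = 0 reproduces the closed first-order result sqrt(1 + 3*mu*A^2/4),
# and the error shrinks rapidly with N even for mu*A^2 = O(1).
```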
\begin{figure} \begin{center} \includegraphics[width=14cm]{Fig_1.eps} \end{center} \caption{(Left) Ratio of the dispersion relations from Eqs.~(\ref{eq:omega_period}) and (\ref{eq:omegaper_2})-(\ref{eq:Lim2}) to the exact one for moderate values of $\mu A^2$ and (right) relative error $\Delta$ [see eq.~(\ref{relerr})] of the dispersion relations for large $\mu A^2$.} \label{fig1} \end{figure} \section{Further Examples} \label{sec_3} \subsection{Sine-Gordon model} We now consider the Sine-Gordon model, which is governed by the potential \begin{equation} V(u)=-\cos{u}\label{eq:sGpot} \end{equation} and which allows us to write the nonlinear Klein-Gordon equation, known in this case as the Sine-Gordon equation, as \begin{equation} \Omega^2 \ddot u+ \sin{u}=0\;.\label{eq:sGeq} \end{equation} The exact dispersion relation in this case can be obtained from \begin{equation} \Omega = \frac{\pi}{2\int_0^{\pi/2}(1-m^2 \sin^2{t})^{-1/2}dt} \label{eq:oexact_sG} \end{equation} with $m=\sin(A/2)$. Observe that in this case \begin{equation} T=4 \int_0^{\pi/2}(1-m^2 \sin^2{t})^{-1/2}dt \equiv 4K(m^2)\;, \end{equation} with $K(m)$ being the elliptic integral of the first kind. We take advantage of this fact and make use of the nonperturbative series for the elliptic integral which was derived using the LDE technique~\cite{chavos}. At order $N$, setting $\lambda=-m/2$ and $\delta=1$, it is given by the expression: \begin{equation} K_N(m,\lambda)= \frac{\pi}{2} \ \sum_{k=0}^N \sum_{j=0}^k \frac{\Gamma(j+1/2)}{j!^2 \,(k-j)!\, \Gamma(1/2-k)} \ \frac{(-m)^k } {2^{k-j} \ (1-\frac{m}{2})^{k+1/2}} \;. \end{equation} This expression provides a nonperturbative series for the elliptic integral of the first kind since it does not correspond to a simple polynomial in $m$.
To further improve this series we can use the Landen transformation \cite{AbrSte} \begin{equation} K(m)=\frac{1}{1+\sqrt{m}} K\left( \frac{4\sqrt{m}}{(1+\sqrt{m})^2}\right) \label{eq:landel} \end{equation} and the inverse relation \begin{equation} K(m)=\frac{2(1-\sqrt{1-m})}{m} K \left( \frac{(-2+2\sqrt{1-m}+m)^2} {m^2}\right)\;.\label{eq:landeninv} \end{equation} Notice that $f(m)= \frac{4\sqrt{m}}{(1+\sqrt{m})^2}$ maps a value $0<m<1$ into a new value $m'=f(m)>m$. The inverse transformation $f^{-1}(m)= \frac{(-2+2\sqrt{1-m}+m)^2}{m^2}$ maps a value $m$ into a smaller one. Using these transformations we obtain more accurate approximations for the elliptic integrals. For example, at order 1 we find \begin{equation} K_{LDE\,(1)}(m) = \frac{\pi}{\sqrt{1-\frac{m}{2}+3\sqrt{1-m}}} \end{equation} and, correspondingly, \begin{eqnarray} \Omega_{LDE\,(1)} &=& \frac{1}{4} \sqrt{\cos (A)+12 \left|\cos\left(\frac{A}{2}\right)\right|+3} \;.\label{eq:oLDE} \end{eqnarray} At second order we find \begin{eqnarray} \Omega_{LDE\,(2)} &=& \frac{16 \cos^2{\left(\frac{A}{4} \right)} \left(3+12\cos{\left(\frac{A}{2}\right)} + \cos(A)\right)^2 \sqrt{2+2\cos{\left(\frac{A}{2} \right)}\sec^4{\left(\frac{A}{4} \right)} } }{2713+2520 \cos{\left(\frac{A}{2}\right)} + 2580 \cos{(A)} + 360 \cos{\left( \frac{3A}{2} \right)} + 19 \cos(2A) } \;.\label{eq:sgomega_3} \end{eqnarray} It is noticeable that $\Omega_{LDE\,(3)}=\Omega_{LDE\,(2)}$. In fact, the same holds for the following consecutive orders, i.e., $\Omega_{LDE\,(5)}=\Omega_{LDE\,(4)}$, $\Omega_{LDE\,(7)}=\Omega_{LDE\,(6)}$, and so on. The same pattern of equal values of the observables for consecutive orders of approximation was found in Ref.~\cite{AL04} for the Duffing potential at large $n$. For comparison, Lim \emph{et al.}~\cite{Lim} have found the dispersion relation to be given at first order as \begin{equation} \Omega_{Lim (1)} = \sqrt{\frac{2J_1(A)}{A}}\;,\label{sGLim1} \end{equation} and at second order as \begin{equation} \Omega_{Lim (2)}=\sqrt{g(A)+\sqrt{g^2(A)-h(A)}}\;,\label{sGLim2} \end{equation} where \begin{eqnarray} g(A)&=& \frac{(b_0-b_2-b_4+b_6)A+18a_1+2a_3}{36 A}\nonumber\\ h(A)&=& \frac{a_1(b_0-b_2-b_4+b_6)}{18A} \end{eqnarray} and \begin{equation} a_1=2J_1(A),\quad a_3=-2J_3(A),\quad b_{2i}=2(-1)^i J_{2i}(A),\quad i=0,1,2,3 \end{equation} with $J_n(A)$ being the Bessel function of the first kind. \begin{figure} \begin{center} \includegraphics[width=14cm]{Fig_2.eps} \end{center} \caption{(Left) Ratio of the dispersion relations from eqs.~(\ref{eq:oLDE})-(\ref{sGLim2}) to the exact one and (right) relative error $\Delta$ [see eq.~(\ref{relerr})] of the dispersion relations.} \label{fig2} \end{figure} In the left-hand panel of Fig.~\ref{fig2} we display the ratio of the dispersion relations from Eqs.~(\ref{eq:oLDE})-(\ref{sGLim2}) to the exact one and in the right-hand panel the corresponding relative errors. From the graphs we see that the LDE curves calculated to second order display much smaller errors than the curves obtained with the method of Lim \emph{et al.}, even close to $A= \pi$. A second observation is that our formulas can be systematically improved simply by going to a higher order and that they do not involve any special functions, unlike Eq.~(\ref{sGLim1}). \subsection{Pure quartic potential} Our final example is the Klein-Gordon equation in a pure quartic potential \begin{equation} V(u)=\frac{u^4}{4}\;, \end{equation} which leads to the equation of motion \begin{equation} \ddot u + u^3=0\;.\label{eq:pure_cub} \end{equation} This is a particular case of the first example where the contribution of the quadratic term in the potential~(\ref{eq:pot}) is neglected.
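Eq.~(\ref{eq:oLDE}) can be compared with the exact relation (\ref{eq:oexact_sG}) numerically; in this sketch the complete elliptic integral is evaluated by direct quadrature rather than through the LDE series:

```python
import math

def ellip_k(m, n=2000):
    """Complete elliptic integral K(m), parameter m = k^2, by Simpson's rule."""
    h = (math.pi / 2) / n
    f = lambda t: 1.0 / math.sqrt(1.0 - m * math.sin(t) ** 2)
    s = f(0.0) + f(math.pi / 2)
    s += 4 * sum(f(i * h) for i in range(1, n, 2))
    s += 2 * sum(f(i * h) for i in range(2, n, 2))
    return s * h / 3

def omega_exact(A):
    """Exact Sine-Gordon dispersion relation, m = sin(A/2)."""
    return math.pi / (2 * ellip_k(math.sin(A / 2) ** 2))

def omega_lde1(A):
    """First-order LDE formula quoted in the text."""
    return 0.25 * math.sqrt(math.cos(A) + 12 * abs(math.cos(A / 2)) + 3)

# Both reduce to 1 in the linear limit A -> 0, and the first-order LDE
# formula already tracks the exact curve closely at moderate amplitudes.
```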
As such, the corresponding dispersion relation can be derived from the expression for the period of oscillations, Eq.~(\ref{eq:T_ordn}): the quadratic term only contributes the $4$ appearing under the square root in front of the sum and in its argument, and dropping it the period is simply given by \begin{equation} T_{LDE\,(N)} = \frac{4\pi}{\sqrt{3 \mu A^2}}\sum_{n=0}^N (-1)^n \left( \begin{array}{c} -1/2 \\ n \end{array}\right) \left( \begin{array}{c} -1/2 \\ 2n \end{array}\right) \frac{1}{3^{2n}}\; \end{equation} and correspondingly $\Omega_{LDE\,(N)}$ can be obtained as in Eq.~(\ref{eq:W_ordn}). Results for the first three orders are the following: \begin{eqnarray} \Omega_{LDE\,(1)}&=& \frac{24 \sqrt{3}A}{49}\;,\label{eq:cubOrd1}\\ \Omega_{LDE\,(2)}&=& \frac{13824 \sqrt{3}A}{28259}\;,\label{eq:cubOrd2}\\ \Omega_{LDE\,(3)}&=& \frac{1990656 \sqrt{3}A}{4069681}\;. \label{eq:cubOrd3} \end{eqnarray} Lim \emph{et al.} found, at first and second order of approximation, respectively, \begin{eqnarray} \Omega_{Lim (1)}&=& \frac{\sqrt{3}}{2}A\;,\nonumber\\ \Omega_{Lim (2)}&=& \frac{1}{12}\sqrt{62+2\sqrt{421}}A\;.\label{eq:LIM} \end{eqnarray} \begin{figure} \begin{center} \includegraphics[width=14cm]{Fig_3.eps} \end{center} \caption{(Left) Ratio of the dispersion relations from Eqs.~(\ref{eq:cubOrd1})-(\ref{eq:cubOrd3}), and both expressions in Eq.~(\ref{eq:LIM}), to the exact one and (right) relative error $\Delta$ (see eq.~(\ref{relerr})) of the dispersion relations.} \label{fig3} \end{figure} In the left-hand panel of Fig.~\ref{fig3} we display the ratio of the approximate to the exact dispersion relation and in the right-hand panel the relative error from our findings at first, second and third order and those of Ref.~\cite{Lim} given previously. At first order, our result performs as well as the second order of Lim \emph{et al.}~\cite{Lim}, and at second and third order, the performance of the variational results is excellent.
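The quartic-potential results can be checked against the exact frequency following from Eq.~(\ref{eq:exact}) with $V=u^4/4$; with $s=\sin\phi$ the integrand becomes regular (a numerical sketch):

```python
import math

def simpson(f, a, b, n=2000):              # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# Exact: Omega/A = pi / (2*sqrt(2)*I), I = int_0^1 (1-s^4)^(-1/2) ds,
# and with s = sin(phi) the integrand becomes 1/sqrt(1 + sin(phi)^2).
I = simpson(lambda p: 1.0 / math.sqrt(1.0 + math.sin(p) ** 2), 0.0, math.pi / 2)
omega_exact = math.pi / (2 * math.sqrt(2) * I)   # ~0.847213 (per unit amplitude A)

lde = [24 * math.sqrt(3) / 49,                   # first order
       13824 * math.sqrt(3) / 28259,             # second order
       1990656 * math.sqrt(3) / 4069681]         # third order
errors = [abs(w - omega_exact) for w in lde]     # strictly decreasing with order
```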
\section{Conclusions} \label{sec_4} We have derived analytical expressions for the dispersion relations of the nonlinear Klein-Gordon equation for different potentials by means of the Linear Delta Expansion. This technique is implemented by computing the period of oscillations in the given potential. In the particular example of the Sine-Gordon potential, where the dispersion relation is given in terms of elliptic integrals, we have implemented the LDE to compute such an integral and, by means of the Landen transformation, we have obtained an improved series for the elliptic integral. We have observed that the expression obtained by using the first few terms in this series performs remarkably well even close to $A = \pi$. We believe that our results are appealing in two respects: first in that they provide a systematic way to approximate the exact result with the desired accuracy, and second in that the expressions that we obtain never involve special functions, unlike those of Ref.~\cite{Lim}. An aspect that needs to be underlined is that the method described in section \ref{sec_2} provides a {\sl convergent} series representation for the dispersion relation, provided that the arbitrary parameter fulfills a simple condition. \begin{acknowledgments} One of the authors (P.A.) acknowledges the support of Conacyt grant no. C01-40633/A-1. The authors also acknowledge support of the Fondo Ram\'on Alvarez Buylla of Colima University. \end{acknowledgments}
\section{Introduction} Dissipative structures, \ie\ patterns in spatially extended systems away from equilibrium have been intensively studied for many decades. A very comprehensive review can be found in \textcite{Cross-Hohenberg-1993}; results obtained since then would probably require an even more extensive review. A very popular class of mathematical models is the reaction-diffusion systems with diagonal diffusion matrices. There have been numerous indications that non-diagonal elements in diffusion matrices, \ie\ cross-diffusion, can lead to new nontrivial effects not observed in classical reaction-diffusion systems, \eg\ \emph{quasi-solitons} in systems with excitable reaction part \cite{QS1,QS6}. The defining features of the quasi-solitons was their ability to penetrate each other, which makes them akin to the true solitons in the conservative systems. However the question remained whether this similarity is a reflection of common mechanisms, or is entirely superficial and incidental. Here we report an observation which makes the similarity even more striking. Namely, the previously reported quasi-solitons propagated while retaining fixed shape profile, \ie\ were constant solutions in a co-moving frame of reference; the exceptions were the ``ageing'' quasi-solitons reported in \cite{QS6} which retained their front and tail structures but changed their overall length. Here we report ``envelope quasi-solitons'' (EQS), which share some phenomenology with envelope solitons in the Nonlinear Schroedinger Equation~\cite{Cross-Hohenberg-1993,NLS}. Namely, they have the form of spatiotemporal oscillations (``wavelets'') with a smooth envelope, and the velocity of the individual wavelets (the phase velocity) is different from the velocity of the envelope (the group velocity). This may be a serious evidence for some deep relationship between these phenomena from dissipative and conservative realms. 
Our observations are made in two two-component models, supplemented with cross-diffusion, rather than self-diffusion terms; such terms may appear, say, in mechanical~\cite{Cartwright-etal-1997}, chemical~\cite{Vanag-Epstein-2009}, or ecological~\cite[p.~11]{Murray-2003} applications: \begin{equation} \df{\u}{\t} = \f(\u,\v) + \ddf{\v}{\x}, \quad \df{\v}{\t} = \g(\u,\v) - \ddf{\u}{\x}. \label{RXD} \end{equation} We consider the FitzHugh-Nagumo (FHN) kinetics taken in the form \begin{equation} \f = \u(\u-\a)(1-\u) - \ki\v, \quad \g = \eps \u, \label{FHN} \end{equation} as an archetypal excitable model, with an arbitrarily fixed value of parameter $\ki=10$, and varied values of parameters $\a$ and $\eps$. As a specific example of a real-life system, we also consider the Lengyel-Epstein~\cite{Lengyel-Epstein-1991} (LE) model of the chlorite-iodide-malonic acid-starch autocatalytic reaction system, \begin{equation} \f = \ale-\u - \frac{4\u\v}{1+\u^2}, \quad \g = \ble \left(\u - \frac{\u\v}{1+\u^2}\right), \label{LE} \end{equation} for fixed $\ale=6.3$ and $\ble=0.055$. We simulated \eq{RXD} on an interval $\x\in[0,\L]$, $\L\le\infty$ with Neumann boundary conditions for both $u$ and $v$~\cite{epaps}. \sglfigure{fig1}{ (color online) Quasi-soliton profiles at the indicated moments of time, in a co-moving frame of reference, upper $\x$-axes, with the reconstructed original spatial coordinates shown on lower $\x$-axes. Parameters $\a=0.12$, $\eps=0.01$, $\L=\infty$, solution propagates rightwards. (a-c) Development of a quasi-soliton. (d,e) Propagation of a quasi-soliton, with unchanged envelope and shifting phase of high-frequency wavelets within the envelope. Note that wavelets in (d) and (e) are in antiphase: the $\v$ profile at $\x=\xc$ is near a local minimum in (d) and a local maximum in (e). }{prof} \Fig{prof} illustrates the development and subsequent propagation of an envelope quasi-soliton (EQS) solution in \eqtwo{RXD}{FHN}. 
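Such simulations can be reproduced with a straightforward method-of-lines scheme. The sketch below is ours, not the authors' code (which is described in~\cite{epaps}); the grid, time step, and the above-threshold initial kick are illustrative choices. Classical RK4 is used because the cross-diffusion operator has an essentially imaginary spectrum, which a plain explicit Euler step cannot handle stably.

```python
import numpy as np

# Illustrative explicit RK4 integration of the cross-diffusion FHN system
# u_t = f(u,v) + v_xx, v_t = g(u,v) - u_xx with Neumann (no-flux) ends.
a, ki, eps = 0.12, 10.0, 0.01     # parameters used in Fig. 1
L, N = 100.0, 512                 # illustrative domain and grid
dx = L / (N - 1)
dt = 0.5 * dx**2                  # keeps |lambda*dt| inside RK4's stability region

def lap(w):
    """Second difference with edge padding (first-order Neumann BC)."""
    p = np.pad(w, 1, mode="edge")
    return (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2

def rhs(u, v):
    f = u * (u - a) * (1.0 - u) - ki * v
    g = eps * u
    return f + lap(v), g - lap(u)   # cross-diffusion: u gets v_xx, v gets -u_xx

def step(u, v):
    """One classical RK4 step."""
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
    k3u, k3v = rhs(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
    k4u, k4v = rhs(u + dt * k3u, v + dt * k3v)
    return (u + dt / 6.0 * (k1u + 2 * k2u + 2 * k3u + k4u),
            v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

x = np.linspace(0.0, L, N)
u = 0.9 * np.exp(-((x - 10.0) / 3.0) ** 2)   # above-threshold initial kick
v = np.zeros(N)
for _ in range(1000):
    u, v = step(u, v)
```

With parameters in the EQS region, a perturbation of this kind develops into a propagating wavelet train with a smooth envelope, as in \fig{prof}.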
The profiles are presented in a co-moving frame of reference, with $x$-coordinate measured with respect to the ``centre of mass'' $\xc$ of the quasi-soliton~\cite{epaps}. Simulations with different initial conditions show that as long as the initial perturbation is above a threshold, the amplitude and overall shape of the quasi-soliton do not depend on its details. An important feature, evident from comparing panels (d) and (e), is that whereas the overall shape (the envelope) of the quasi-soliton and the wavelength of the high-frequency oscillations (the wavelets) within that envelope remain unchanged, the phase of the wavelets relative to the envelope position changes, so the phase velocity (of the wavelets) is different from the group velocity (of the envelope). \dblfigure{fig2}{ Density plots of the quasi-solitons reflecting from a boundary~\cite{epaps}. (a-c) FHN kinetics, $u_{-}=-0.3$, $u_{+}=1$, $\eps=0.01$, and $\a$ is varied as shown under the panels. (a) A double EQS becomes a single EQS upon the reflection. Some time after that, it will grow its twin and become a double quasi-soliton again. (b) A single EQS: this is the same case as shown in~\fig{prof}. (c) A ``classical'' quasi-soliton which retains its shape as it propagates. (d) An EQS in the LE kinetics, $\L=300$, $u_{-}=0.8$, $u_{+}=3.3$. In all panels, the origin of the $\t$-axis is shifted to an arbitrarily chosen moment shortly before the impact event. }{density} This feature can also be seen in \fig{density}(b). The thin stripes in the density plot represent individual wavelets, and the broader band, consisting of these stripes, represents the envelope. The slope of the stripes is the inverse of the phase velocity, and the slope of the band is the inverse of the group velocity. The stripes are not parallel to the band, because the group and the phase velocities differ. This figure also illustrates another important phenomenon: the reflection of the quasi-soliton from the boundary. 
Panels (a) and (c) in \fig{density} illustrate two different sorts of solutions, observed at lower and higher values of parameter $\a$ respectively, which also reflect from the boundary. In panel (a), we still see individual wavelet stripes that are not parallel to the envelope bands, but there are two bands in the incident wave. Note that the reflected wave only has one band; however, if that reflected band is allowed to propagate for a sufficiently long time, it will spawn its twin band behind it. This is a ``multiplying'' EQS. We do not go further into properties of these solutions, reserving that for a future study. In panel (c) there is only one dominant stripe and many weaker stripes, all of which are parallel to the band. This solution has phenomenological features similar to the quasi-solitons described previously, \eg\ in~\cite{QS1}, namely, the wave retains constant shape as it propagates, and reflects from a boundary. Panel (d) shows a quasi-soliton reflecting from the boundary, in the other model \eqtwo{RXD}{LE}. \sglfigure{fig3}{ (color online) (a) The number of wavelets in an EQS as a function of $\a$ at fixed $\eps=0.01$. (b) Areas of different sorts of solutions in the parametric plane $(\a,\eps)$. The black dashed line corresponds to the parametric cross-section shown in (a). }{areas} \Fig{areas} gives an overview of the parametric area of the EQS solutions in \eqtwo{RXD}{FHN} and its neighbours. In panel (b), the parameter sets at which EQS solutions like the one shown in \fig{density}(b) have been observed are designated by red solid circles. 
This area is surrounded: \begin{itemize} \item at higher and lower values of $\eps$, by solutions which have similar shape to those shown in~\fig{prof} and \fig{density}(b), but do not reflect from boundaries (`annihilating', blue crosses); \item at lower values of $\a$, by multiplying EQSs, one of which is illustrated in~\fig{density}(a) (`multiplying', green stars); \item at higher values of $\a$, by constant-shape quasi-solitons, such as the one shown in~\fig{density}(c) (`constant', magenta triangles). \end{itemize} The area of existence of all these solutions in the $(\a,\eps)$ parametric plane is bounded from above and from the right, and beyond it our initial conditions did not produce any stably propagating solutions (`decaying', black open circles). Panel (a) in this figure illustrates the variability of the shape of EQSs within their parametric domain. The most important conclusion from~\fig{areas} is that the EQSs are not a unique feature of a special set of parameters but are observed in a rather broad parametric area. The oscillatory character of the fronts of cross-diffusion waves, described in numerical simulations~\cite{QS1,QS6} and analysed theoretically in~\cite{QS1,QS6,Zemskov-Loskutov-2008}, was for waves of stationary shape. Although the proper theory of the EQSs is beyond the scope of this Letter, the analysis of their non-stationary front structure is easily achieved via linearization of \eq{RXD}. The resting states in both FHN~\eq{FHN} and LE~\eq{LE} kinetics are stable foci, which already indicates a propensity to oscillations. 
Considering the FHN kinetics in more detail, for small $\u,\v$, the solution has the form \begin{equation} \Mx{\u\\\v} \approx \Re{ \C \evec \e^{-\decr(\x-\cg\t)} \e^{\i(\waven\x-\freq\t)} }, \label{fit} \end{equation} where \begin{eqnarray} & \A(\evalt,\evals)\evec=\mx{0}, \quad \evec\ne\mx{0}, \quad \det\A=0, \label{disp} \\ &\A=\Mx{ -\a-\evalt & -\ki+\evals^2 \\ \eps-\evals^2 & -\evalt }, \;\evalt = \decr\cg-\i\freq,\;\evals = -\decr+\i\waven. \nonumber \end{eqnarray} \sglfigure{fig4}{ (color online) Profiles of an EQS wavefront and its fitting by~\eq{fit} at selected moments of time. Parameters are $\eps=0.01$, $\a=0.12$, $\L=\infty$. The origin of the $\x$-axis is chosen arbitrarily. }{front} Fitting of the $v$-component of a solution shown in \fig{prof} to~\eq{fit} gives $\cg\approx 4.07698$, $\waven\approx 1.71532$, $\decr\approx 0.182305$ and $\freq\approx 6.15190$, which satisfies~\eq{disp} to 3 s.f.~\cite{epaps} The quality of the fitting is illustrated in~\fig{front}. Note that the phase velocity of the wavelets here is $\cp=\freq/\waven\approx3.59$, smaller than the group velocity, $\cg\approx4.08$, which agrees with the fact that the slope of the individual stripes in \fig{density}(b) (which is the inverse of the phase velocity $\cp$) is steeper than the slope of the band (which is the inverse of the group velocity $\cg$). The shape of the profiles in~\fig{prof} is reminiscent of localized states in the generalized Swift-Hohenberg equation with ``snakes and ladders'' bifurcation diagrams~\cite{Burke-Knobloch-2006}. The essential difference of our solutions is that they move and do not preserve their shape, so they cannot be immediately studied by ODE bifurcation techniques. The defining features of the EQSs described above are similar to envelope solitons of the Nonlinear Schr\"odinger Equation. 
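The quoted fit can be cross-checked numerically. The snippet below (ours) substitutes the fitted values into $\det\A$ from the linearized dispersion relation, with the FHN parameters $\a=0.12$, $\eps=0.01$, $\ki=10$; the relative residual is of order $10^{-3}$, consistent with the quoted 3-s.f. accuracy.

```python
# Cross-check: the fitted front parameters should nearly annihilate det A.
a, ki, eps = 0.12, 10.0, 0.01
cg, kappa, alpha, omega = 4.07698, 1.71532, 0.182305, 6.15190

lam = alpha * cg - 1j * omega       # Lambda = alpha*c_g - i*omega
mu = -alpha + 1j * kappa            # mu = -alpha + i*kappa

a11, a12 = -a - lam, -ki + mu**2
a21, a22 = eps - mu**2, -lam
det = a11 * a22 - a12 * a21
rel = abs(det) / max(abs(a11 * a22), abs(a12 * a21))
print(f"relative residual = {rel:.1e}")   # ~2e-3, consistent with 3 s.f.
```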
The version of this equation known as `NLS+'~\cite{NLS} can be written in the form \[ \i \df{\w}{\t} + \ddf{\w}{\x} + \w|\w|^2 = 0 \] for a complex field $\w$, which presents a reaction-cross-diffusion system for two real fields $\u$ and $\v$ via $w=u-\i v$ of the form~\eq{RXD} with \begin{equation} \f = \v(\u^2+\v^2), \quad \g = -\u(\u^2+\v^2). \label{NLSsys} \end{equation} System~\eqtwo{RXD}{NLSsys} has soliton solutions in the form of (fast) harmonic waves with a unimodal ($\sech$-shaped) envelope, and the propagation velocity of the envelope (the group velocity) is different from the propagation velocity of the wavelets (the phase velocity). Hence one might think of a possible interpretation of the EQSs in \eqtwo{RXD}{FHN} or \eqtwo{RXD}{LE} as a result of a non-conservative perturbation of the envelope solitons in \eq{NLSsys}, which would select particular values of the otherwise arbitrary amplitude and speed of the soliton and modify its shape. This interpretation, however, does not seem to work, and our attempts to connect the solutions in \eqtwo{RXD}{NLSsys} and \eqtwo{RXD}{FHN} via a one-parametric family of systems have been unsuccessful, as the EQS solutions disappeared during parameter continuation. The apparent reason is that the sense of rotation of solutions of \eq{NLSsys} in the $(\u,\v)$ plane is clockwise whereas in \eq{FHN} and \eq{LE} it is counterclockwise, and the variant of \eq{NLSsys} with counterclockwise rotation, the `NLS-' equation, does not have envelope soliton solutions. Another comparison can be made with ``wave packets'' reported by Vanag and Epstein in the microemulsion BZ reaction, and corresponding mathematical models, associated with finite-wavelength instability of an equilibrium in a reaction-diffusion system with unequal self-diffusion coefficients~\cite{Vanag-Epstein-2004}. They considered two distinct types of solutions: small- and large-amplitude wave packets (SAWP and LAWP), both capable of reflection from boundaries. 
SAWP are observed in the nearly-linear regime; they have the phase speed (of the wavelets) different from the group speed (of the envelope, or the packet). However, being near a linear instability and having no stabilizing effect of the dispersion as in NLS+, the packets slowly grow both in amplitude and in width, \ie\ they are not quasi-solitons. The LAWP, on the contrary, have fixed amplitude and width, but their phase and group velocities coincide, so they retain constant shape. They are therefore phenomenologically similar to the quasi-solitons reported in excitable systems with cross-diffusion~\cite{QS1}. Note that adiabatic elimination of a fast component in a reaction-diffusion system with very different self-diffusion coefficients is one of the ways in which cross-diffusion terms may appear~\cite{Kuznetsov-etal-1994,Vanag-Epstein-2009}, so this analogy deserves further investigation. To conclude, the solutions we have reported resemble NLS+ envelope solitons by their morphology and by their ability to reflect from boundaries; however, they are different in that the amplitudes and speeds of NLS solitons depend on initial conditions, while in \eq{RXD} they are fixed by parameters of the models. The reported solutions are similar to the quasi-solitons reported earlier in that they share the fixed amplitude and reflection properties, but differ in that they do not preserve constant shape, as their phase velocities are different from their group velocities. Hence we believe this is a new nonlinear phenomenon not seen before. The mechanisms behind the key properties of this new type of solutions require further investigation; however, it is already clear that this is not simply a non-conservative perturbation of NLS. \textbf{Acknowledgments} The study was supported in part by a grant from the Research Centre for Mathematics and Modelling, University of Liverpool (UK).
\section{Introduction} The reconfiguration version of an optimization problem asks whether it is possible to transform a source feasible solution $S$ into a target feasible solution $T$ by a sequence of {\em reconfiguration steps} such that every intermediate solution is also feasible; other variants return a (possibly minimum-length) {\em reconfiguration sequence} of solutions. Reconfiguration problems model real-life dynamic situations in which we seek to transform a solution into a more desirable one, maintaining feasibility during the process. The study of reconfiguration yields insights into the structure of the solution space of the underlying optimization problem, crucial for the design of efficient algorithms. Motivated by real world situations as well as by trying to understand the structure of all feasible solutions, there has been a lot of recent interest in studying the complexity of reconfiguration problems. Problems for which reconfiguration has been studied include {\sc Vertex Colouring}~\cite{BB13,BC09,CVJ08,CVJ09,CVJ11}, {\sc List Edge-Colouring}~\cite{IKD12}, {\sc Independent Set}~\cite{HD05,IDHPSUU11}, {\sc Clique}, {\sc Set Cover}, {\sc Matching}, {\sc Matroid Bases}~\cite{IDHPSUU11}, {\sc Satisfiability}~\cite{GKMP09}, {\sc Shortest Path}~\cite{B12,KMM11}, and {\sc Dominating Set}~\cite{HS12,SMN13}. Most work has been limited to the problem of determining the existence of a reconfiguration sequence between two given solutions; for most NP-complete problems, this problem has been shown to be PSPACE-complete. As there are typically exponentially many feasible solutions, the length of the reconfiguration sequence can be exponential in the size of the input instance. It is thus natural to ask whether reconfiguration problems become tractable if we allow the running time to depend on the length of the sequence; this approach suggests the use of the paradigm of parameterized complexity. 
In this work, we explore reconfiguration in the framework of parameterized complexity~\cite{DF97} under two natural parameterizations: $k$, a bound on the size of feasible solutions, and $\ell$, the length of the reconfiguration sequence. One of our key results is that for most problems, the reconfiguration versions remain intractable in the parameterized framework when we parameterize by $\ell$. It is important to note that when $k$ is not bounded, the reconfiguration problems we study become easy. We present fixed-parameter algorithms for problems parameterized by $k$ by modifying known parameterized algorithms for the problems. The paradigms of bounded search tree and kernelization typically work by exploring minimal solutions. However, a reconfiguration sequence may necessarily include non-minimal solutions. Any kernel that removes solutions (non-minimal or otherwise) may render finding a reconfiguration sequence impossible, as the missing solutions might appear in every reconfiguration sequence; we must thus ensure that the kernelization rules applied retain enough information to allow us to determine whether a reconfiguration sequence exists. To handle these difficulties, we introduce a general approach for parameterized reconfiguration problems. We use a {\em reconfiguration kernel}, showing how to adapt Bodlaender's cubic kernel~\cite{B07} for {\sc Feedback Vertex Set}, and a special kernel by Damaschke and Molokov~\cite{D09} for {\sc Bounded Hitting Set} (where the cardinality of each input set is bounded) to obtain polynomial reconfiguration kernels with respect to $k$. These results can be considered interesting applications of kernelization, and a general approach to other similar reconfiguration problems. As a counterpart to our result for {\sc Bounded Hitting Set}, we show that reconfiguring {\sc Unbounded Hitting Set} or {\sc Dominating Set} is $W[2]$-hard parameterized by $k + \ell$ (Section~\ref{sec-relate}). 
Finally, we show a general result on reconfiguration problems of hereditary properties and their `parametric duals', implying the $W[1]$-hardness of reconfiguring {\sc Independent Set}, {\sc Induced Forest}, and {\sc Bipartite Subgraph} parameterized by $k + \ell$ and {\sc Vertex Cover}, {\sc Feedback Vertex Set}, and {\sc Odd Cycle Transversal} parameterized by $\ell$. \section{Preliminaries}\label{sec-prelims} Unless otherwise stated, we assume that each input graph $G$ is a simple, undirected graph on $n$ vertices with vertex set $V(G)$ and edge set $E(G)$. To avoid confusion, we refer to {\em nodes} in reconfiguration graphs (defined below), as distinguished from {\em vertices} in the input graph. We use the modified big-Oh notation $O^*$ that suppresses all polynomially bounded factors. Our definitions are based on optimization problems, each consisting of a polynomial-time recognizable set of valid instances, a set of feasible solutions for each instance, and an objective function assigning a nonnegative rational value to each feasible solution. \begin{definition} The {\em reconfiguration graph} $R_Q(I,\textnormal{adj},k)$, consists of a node for each feasible solution to instance $I$ of optimization problem $Q$, where the size of each solution is at least $k$ for $Q$ a maximization problem (of size at most $k$ for $Q$ a minimization problem, respectively), for positive integer $k$, and an edge between each pair of nodes corresponding to solutions in the binary adjacency relation $\textnormal{adj}$ on feasible solutions. 
\end{definition} We define the following {\em reconfiguration problems}, where $S$ and $T$ are feasible solutions for $I$: {\sc $Q$ Reconfiguration} determines if there is a path from $S$ to $T$ in $R_Q(I,\textnormal{adj},k)$; the {\em search variant} returns a {\em reconfiguration sequence}, the sequence of feasible solutions associated with such a path; and the {\em shortest path variant} returns the reconfiguration sequence associated with a path of minimum length. For convenience, solutions paired by $\textnormal{adj}$ are said to be {\em adjacent}. Using the framework developed by Downey and Fellows~\cite{DF97}, a {\em parameterized reconfiguration problem} includes in the input a positive integer $\ell$ (an upper bound on the length of the reconfiguration sequence) and a parameter $p$ (typically $k$ or $\ell$). For a parameterized problem $Q$ with inputs of the form $(x, p)$, $|x| = n$ and $p$ a positive integer, $Q$ is {\em fixed-parameter tractable} (or in {\em FPT}) if it can be decided in $f(p) n^c$ time, where $f$ is an arbitrary function and $c$ is a constant independent of both $n$ and $p$. $Q$ is in the class {\em XP} if it can be decided in $n^{f(p)}$ time. $Q$ has a {\em kernel} of size $f(p)$ if there is an algorithm $A$ that transforms the input $(x, p)$ to $(x', p')$ such that $A$ runs in polynomial time (with respect to $|x|$ and $p$) and $(x, p)$ is a yes-instance if and only if $(x', p')$ is a yes-instance, $p' \leq g(p)$, and $|x'| \leq f(p)$. Each problem in {\em FPT} has a kernel, possibly of exponential (or worse) size. We introduce the related notion of a {\em reconfiguration kernel}; it follows from the definition that a reconfiguration problem that has such a kernel is in {\em FPT}. 
\begin{definition} A {\em reconfiguration kernel} of an instance of a parameterized reconfiguration problem $(x,p) = (P,\textnormal{adj},S,T,k,\ell,p)$ is a set of $h(p)$ instances, for an arbitrary function $h$, such that for $1 \le i \le h(p)$: \begin{itemize} \item{} for each instance in the set, $(x_i,p_i) = (P,\textnormal{adj},S_i,T_i,k_i,\ell_i,p_i)$, the values of $S_i$, $T_i$, $k_i$, $\ell_i$, and $p_i$ can all be computed in polynomial time, \item{} the size of each $x_i$ is bounded by $j(p)$, for an arbitrary function $j$, and \item{} $(x,p)$ is a yes-instance if and only if at least one $(x_i,p_i)$ is a yes-instance. \end{itemize} \end{definition} The main hierarchy of parameterized complexity classes is $FPT \subseteq W[1] \subseteq W[2] \subseteq \ldots \subseteq XP$, where $W$-hardness, shown using {\em FPT reductions}, is the analogue of NP-hardness in classical complexity. A parameterized problem $Q$ {\em FPT reduces} to a parameterized problem $Q'$ if there is an algorithm $A$ that transforms an instance $(I,p)$ of $Q$ to an instance $(I', p')$ of $Q'$ such that $A$ runs in time $f(p) poly(|I|)$ where $f$ is a function of $p$, $poly$ is a polynomial function, and $p' = g(p)$ for some function $g$. In addition, the transformation has the property that $(I,p)$ is a yes-instance of $Q$ if and only if $(I', p')$ is a yes-instance of $Q'$. It is known that standard parameterized versions (are there $p$ vertices that form the solution?) of {\sc Clique} and {\sc Independent Set} are complete for the class $W[1]$, and {\sc Dominating Set} is $W[2]$-complete. The reader is referred to~\cite{FG06,N06} for more on parameterized complexity. Most problems we consider can be defined using graph properties, where a {\em graph property} $\pi$ is a collection of graphs, and is {\em non-trivial} if it is non-empty and does not contain all graphs. 
A graph property is {\em polynomially decidable} if for any graph $G$, it can be decided in polynomial time whether $G$ is in $\pi$. For a subset $V' \subseteq V$, $G[V']$ is the {\em subgraph of $G$ induced on $V'$}, with vertex set $V'$ and edge set $\{\{u,v\} \in E \mid u,v \in V'\}$. The property $\pi$ is {\em hereditary} if for any $G \in \pi$, any induced subgraph of $G$ is also in $\pi$. Examples of hereditary properties include graphs having no edges and graphs having no cycles. It is well-known~\cite{LY80} that every hereditary property $\pi$ has a forbidden set ${\mc{F}}_\pi$, in that a graph has property $\pi$ if and only if it does not contain any graph in ${\mc{F}}_\pi$ as an induced subgraph. For a graph property $\pi$, we define two reconfiguration graphs, where solutions are sets of vertices and two solutions are adjacent if they differ by the addition or deletion of a vertex. The {\em subset reconfiguration graph of $G$ with respect to $\pi$}, $R^{\pi}_{\textsc{sub}}(G,k)$, has a node for each $S \subseteq V(G)$ such that $|S| \ge k$ and $G[S]$ has property $\pi$, and the {\em deletion reconfiguration graph of $G$ with respect to $\pi$}, $R^{\pi}_{\textsc{del}}(G,k)$, has a node for each $S \subseteq V(G)$ such that $|S| \le k$ and $G[V(G) \setminus S]$ has property $\pi$. We can obtain $R^{\pi}_{\textsc{del}}(G,|V(G)|-k)$ by replacing the set corresponding to each node in $R^{\pi}_{\textsc{sub}}(G,k)$ by its (setwise) complement. The following is a consequence of the fact that two nodes can differ by the deletion or addition of a single vertex. \begin{fact}\label{fact-degree-bound} The degree of each node in $R^{\pi}_{\textsc{sub}}(G,k)$ and each node in $R^{\pi}_{\textsc{del}}(G,k)$ is at most $|V(G)|$. 
\end{fact} \begin{definition} For any graph property $\pi$, graph $G$, positive integer $k$, $S \subseteq V(G)$, and $T \subseteq V(G)$, we define the following decision problems: \noindent \textsc{$\pi$-deletion$(G, k)$}: Is there $V' \subseteq V(G)$ such that $|V'| \leq k$ and $G[V(G) \setminus V'] \in \pi$?\\ \smallskip \noindent \textsc{$\pi$-subset$(G, k)$}: Is there $V' \subseteq V(G)$ such that $|V'| \geq k$ and $G[V'] \in \pi$? \\ \smallskip \noindent \textsc{$\pi$-del-reconf$(G, S, T, k, \ell)$}: For $S,T \in V(R^{\pi}_{\textsc{del}}(G,k))$, is there a path of length at most $\ell$ between the nodes for $S$ and $T$ in $R^{\pi}_{\textsc{del}}(G,k)$?\\ \smallskip \noindent \textsc{$\pi$-sub-reconf$(G, S, T, k, \ell)$}: For $S, T \in V(R^{\pi}_{\textsc{sub}}(G,k))$, is there a path of length at most $\ell$ between the nodes for $S$ and $T$ in $R^{\pi}_{\textsc{sub}}(G,k)$? \end{definition} We say that \textsc{$\pi$-deletion$(G, k)$} and \textsc{$\pi$-subset$(G, k)$} are {\em parametric duals} of each other. Note that in \textsc{$\pi$-subset$(G, k)$}, we seek a set of vertices of size {\it at least} $k$ inducing a subgraph in $\pi$, whereas in \textsc{$\pi$-deletion$(G, k)$}, we seek a set of vertices of size {\it at most} $k$ whose {\it complement set} induces a subgraph in $\pi$. We refer to \textsc{$\pi$-del-reconf$(G, S, T, k, \ell)$} and \textsc{$\pi$-sub-reconf$(G, S, T, k, \ell)$} as {\em reconfiguration problems for $\pi$}; for example, for $\pi$ the set of graphs with no edges, the former is \textsc{Vertex Cover Reconfiguration} and the latter is \textsc{Independent Set Reconfiguration}. 
\section{Fixed-Parameter Tractability Results}\label{sec-kernel} We first observe that for any polynomially decidable graph property, the \textsc{$\pi$-deletion} and \textsc{$\pi$-subset} reconfiguration versions are in $XP$ when parameterized by $\ell$; we conduct breadth-first search on the reconfiguration graph starting at $S$, stopping either upon discovery of $T$ or upon completing the exploration of $\ell$ levels. Fact~\ref{fact-degree-bound} implies a bound of at most $n^{\ell}$ nodes to explore in total. \begin{fact} \label{xpclaim} For any polynomially decidable graph property $\pi$, \textsc{$\pi$-del-reconf$(G, S, T, k, \ell)$} $\in XP$ and \textsc{$\pi$-sub-reconf$(G, S, T, k, \ell)$} $\in XP$ when parameterized by $\ell$. \end{fact} For an instance $(G,S,T,k,\ell)$, we partition $V(G)$ into the sets $C = S \cap T$ (vertices common to $S$ and $T$), $S_D = S \setminus C$ (vertices to be deleted from $S$ in the course of reconfiguration), $T_A = T \setminus C$ (vertices to be added to form $T$), and $O = V(G) \setminus (S \cup T) = V(G) \setminus (C \cup S_D \cup T_A)$ (all other vertices). Furthermore, we can partition $C$ into two sets $C_F$ and $C_M = C \setminus C_F$, where a vertex is in $C_F$ if and only if it is in every feasible solution of size bounded by $k$. The following fact is a consequence of the definitions above, the fact that $\pi$ is hereditary, and the observations that $G[S_D]$ and $G[O]$ are both subgraphs of $G[V(G) \setminus T]$, and $G[T_A]$ and $G[O]$ are both subgraphs of $G[V(G) \setminus S]$. \begin{fact}\label{fact-pieces} For an instance \textsc{$\pi$-del-reconf$(G, S, T, k, \ell)$} of a reconfiguration problem for hereditary property $\pi$, $G[O]$, $G[S_D]$, and $G[T_A]$ all have property $\pi$. \end{fact} In any reconfiguration sequence, each vertex in $S_D$ must be deleted and each vertex in $T_A$ must be added. 
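The depth-bounded breadth-first search behind the $XP$ claim above can be transcribed directly. The sketch below is ours, not the authors' code: it instantiates $\pi$ as the edgeless graphs, so it decides \textsc{Independent Set Reconfiguration}; the example graph and vertex sets are illustrative.

```python
def is_independent(edges, s):
    """Does s induce a subgraph with property pi (here: no edges)?"""
    return all(not (u in s and v in s) for (u, v) in edges)

def reconfigurable(nodes, edges, S, T, k, ell):
    """Is there a path of length <= ell from S to T in R_sub(G, k)?
    BFS explores at most ell levels; degrees are <= |V(G)|, so at most
    n^ell nodes are touched in total."""
    S, T = frozenset(S), frozenset(T)
    if S == T:
        return True
    seen, frontier = {S}, [S]
    for _ in range(ell):
        nxt = []
        for cur in frontier:
            for v in nodes:            # neighbours differ by one vertex
                nb = cur ^ {v}
                if len(nb) >= k and nb not in seen and is_independent(edges, nb):
                    if nb == T:
                        return True
                    seen.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return False

nodes = {1, 2, 3, 4}                   # the 4-cycle
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(reconfigurable(nodes, edges, {1, 3}, {2, 4}, k=0, ell=4))  # True
print(reconfigurable(nodes, edges, {1, 3}, {2, 4}, k=1, ell=9))  # False
```

The second call illustrates the role of $k$: with the empty set forbidden, the two maximum independent sets of the 4-cycle lie in different components of the reconfiguration graph.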
We say that a reconfiguration sequence {\em touches} a vertex $v$ if $v$ is either added or deleted in at least one reconfiguration step. Any vertex that is not touched is {\em untouched}. In fact, since $\ell$ implies a bound on the total number of vertices that can be touched in a reconfiguration sequence, setting $\ell = |S_D| + |T_A|$ drastically simplifies the problem. \begin{obs}\label{obs-touch-once} For any polynomially decidable hereditary graph property $\pi$, if $|S_D| + |T_A| = \ell$, then \textsc{$\pi$-del-reconf$(G, S, T, k, \ell)$} and \textsc{$\pi$-sub-reconf$(G, S, T, k, \ell)$} can be solved in $O^*(2^\ell)$ time, and hence are in FPT when parameterized by $\ell$. \end{obs} \begin{proof} Since each vertex in $T_A$ must be added and each vertex in $S_D$ deleted, in $\ell$ steps we can touch each vertex in $S_D \cup T_A$ exactly once; all vertices in $V(G) \setminus (S_D \cup T_A)$ remain untouched. Any node in the path between $S$ and $T$ in $R^{\pi}_{\textsc{sub}}(G,k)$ represents a set $C \cup B$ where $B$ is a subset of $S_D \cup T_A$. As $|S_D| + |T_A| = \ell$, there are only $2^\ell$ choices for $B$. Our problem then reduces to finding the shortest path between $S$ and $T$ in the subgraph of $R^{\pi}_{\textsc{sub}}(G,k)$ induced on the $2^\ell$ relevant nodes; the bound follows from the fact that the number of edges is at most $2^{\ell}|V(G)|$, a consequence of Fact~\ref{fact-degree-bound}. The same argument holds for $R^{\pi}_{\textsc{del}}(G,k)$.\qed \end{proof} In contrast, we show in the next section that for most hereditary properties, reconfiguration problems are hard when parameterized by $\ell$. 
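The shortest-path computation over the sets $C \cup B$, $B \subseteq S_D \cup T_A$, can be sketched as follows (ours, again instantiated for $\pi$ the edgeless graphs; the example is illustrative). Only touched vertices are ever toggled, so the search visits at most $2^\ell$ feasible sets.

```python
from collections import deque

def touched_shortest_path(edges, S, T, k):
    """Shortest reconfiguration path visiting only sets C ∪ B with
    B ⊆ S_D ∪ T_A; returns its length, or None if T is unreachable."""
    S, T = frozenset(S), frozenset(T)
    D = S ^ T                          # S_D ∪ T_A: the vertices to touch

    def feasible(X):                   # pi = edgeless, size at least k
        return len(X) >= k and all(not (u in X and v in X) for u, v in edges)

    dist, q = {S: 0}, deque([S])
    while q:
        cur = q.popleft()
        if cur == T:
            return dist[cur]
        for v in D:                    # toggle one touched vertex
            nb = cur ^ {v}
            if feasible(nb) and nb not in dist:
                dist[nb] = dist[cur] + 1
                q.append(nb)
    return None

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]   # the 4-cycle again
print(touched_shortest_path(edges, {1, 3}, {2, 4}, 0))  # 4
print(touched_shortest_path(edges, {1, 3}, {2, 4}, 1))  # None
```

Note that when $|S_D| + |T_A| = \ell$ every vertex can be touched at most once, which is exactly the regime of the observation; for larger $\ell$ this restricted search is no longer exhaustive.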
\subsection{Bounded Hitting Set} Here, we prove the parameterized tractability of reconfiguration for certain superset-closed $k$-subset problems when parameterized by $k$, where a {\em $k$-subset problem} is a parameterized problem $\sc Q$ whose solutions for an instance $(I,k)$ are all subsets of size at most $k$ of a domain set, and is {\em superset-closed} if any superset of size at most $k$ of a solution of $\sc Q$ is also a solution of $\sc Q$. For example, parameterized \textsc{Vertex Cover} is a superset-closed problem. \begin{theorem} \label{fullkernelresult} If a $k$-subset problem $\sc Q$ is superset-closed and has an FPT algorithm to enumerate all its minimal solutions, the number of which is bounded by a function of $k$, then {\sc $Q$ Reconfiguration} parameterized by $k$ is in FPT, as well as the search and shortest path variants. \end{theorem} \begin{proof} By enumerating all minimal solutions of $\sc Q$, we compute the set $M$ of all elements $v$ of the domain set such that $v$ is in a minimal solution to $\sc Q$. For $(I,S,T,k,\ell)$ an instance of $\sc Q$ \textsc{Reconfiguration}, we show that there exists a reconfiguration sequence from $S$ to $T$ if and only if there exists a reconfiguration sequence from $S\cap M$ to $T\cap M$ that uses only subsets of $M$. Each set $U$ in the reconfiguration sequence from $S$ to $T$ is a solution, hence contains at least one minimal solution in $U\cap M$; $U \cap M$ is a superset of the minimal solution and hence also a solution. Moreover, since any two consecutive solutions $U$ and $U'$ in the sequence differ by a single element, $U \cap M$ and $U' \cap M$ differ by at most a single element. By replacing each subsequence of identical sets by a single set, we obtain a reconfiguration sequence from $S \cap M$ to $T \cap M$ that uses only subsets of $M$. 
The reconfiguration sequence from $S\cap M$ to $T\cap M$ using only subsets of $M$ can be extended to a reconfiguration sequence from $S$ to $T$ by transforming $S$ to $S\cap M$ in $|S \setminus M|$ steps and transforming $T\cap M$ to $T$ in $|T \setminus M|$ steps. In this sequence, each vertex in $C \setminus M$ is removed from $S$ to form $S \setminus M$ and then readded to form $T$ from $T \setminus M$. For each vertex $v \in C \setminus M$, we can choose instead to add $v$ to each solution in the sequence, thereby decreasing $\ell$ by two (the steps needed to remove and then readd $v$) at the cost of increasing by one the capacity used in the sequence from $S \cap M$ to $T \cap M$. This choice can be made independently for each of these ${\cal E} = |C\setminus M|$ vertices. Consequently, $(I,S,T,k,\ell)$ is a yes-instance for $\sc Q$ \textsc{Reconfiguration} if and only if one of the ${\cal E} +1$ reduced instances $(I, S\cap M, T\cap M, k-e, \ell-2({\cal E} - e))$, for $0 \le e \le {\cal E}$ and ${\cal E} = |C \backslash M|$, is a yes-instance for $\sc Q'$ \textsc{Reconfiguration}: we define $\sc Q'$ as a $k$-subset problem whose solutions for an instance $(I,k)$ are solutions of instance $(I,k)$ of $\sc Q$ that are contained in $M$. To show that $\sc Q'$ \textsc{Reconfiguration} is in {\em FPT}, we observe that the number of nodes in the reconfiguration graph for $\sc Q'$ is bounded by a function of $k$: each solution of $\sc Q'$ is a subset of $M$, yielding at most $2^{|M|}$ nodes, and $|M|$ is bounded by a function of $k$. 
\qed \end{proof} As a consequence, {\sc Bounded Hitting Set Reconfiguration}, {\sc Feedback Arc Set in Tournaments Reconfiguration}, and {\sc Minimum Weight SAT in Bounded CNF Formulas Reconfiguration} (where each solution is the set of variables that are set to true in a satisfying assignment, and the problem looks for a solution of cardinality at most $k$) are proved to be in FPT when parameterized by $k$: \begin{corollary} {\sc Bounded Hitting Set Reconfiguration}, {\sc Feedback Arc Set in Tournaments Reconfiguration}, and {\sc Minimum Weight SAT in Bounded CNF Formulas Reconfiguration} parameterized by $k$ are in FPT. \end{corollary} \begin{proof} All these problems are superset-closed. Furthermore, standard techniques give FPT algorithms to enumerate their minimal solutions, and the number of minimal solutions is bounded by a function of $k$ in all cases, as required by Theorem~\ref{fullkernelresult}. We include the proofs for completeness. We can devise a search tree algorithm that gradually constructs minimal hitting sets of instances of {\sc Bounded Hitting Set}, producing, among its leaves, all minimal hitting sets of size at most $k$. Consider an instance of {\sc Bounded Hitting Set}, where the cardinality of each set is bounded by a constant $c$. At each non-leaf node, the algorithm chooses a set that is not hit, and branches on all possible ways of hitting this set, including one of the (at most $c$) elements in the set in each branch. Since we are not interested in hitting sets of cardinality more than $k$, we do not need to search beyond depth $k$ in the tree, proving an upper bound of $c^k$ on the number of leaves, and an upper bound of $O^*(c^k)$ on the enumeration time.
For the problem {\sc Feedback Arc Set in Tournaments}, a tournament is acyclic if and only if it has no directed cycle of length three~\cite{Bang2008}, and a set of arcs is a minimal feedback arc set in a tournament if and only if reversing its arcs in the tournament results in an acyclic tournament~\cite{Raman2006}. Therefore, at each non-leaf node in a search tree for this problem, there is always a cycle $C$ of length three and every feedback arc set shares at least one arc with $C$. The algorithm can thus branch on the three arcs in $C$, reversing one in each branch, and solve the problem recursively. As in the previous algorithm, since we are not interested in feedback arc sets of cardinality more than $k$, the search can be terminated at depth $k$, proving an upper bound of $3^k$ on the number of minimal $k$-feedback arc sets in tournaments, and an upper bound of $O^*(3^k)$ on the running time of this enumeration algorithm. Finally, Misra et al.~\cite{Misra2013} give a search tree algorithm for bounded CNF formula instances of {\sc Minimum Weight SAT}, where every clause has at most $c$ literals for some constant $c$. At each node, the algorithm chooses a clause whose literals are all positive, and branches on all possible ways of satisfying the clause, setting one variable to true in each branch. If there is no such clause, the formula is satisfied with no increase in the number of true variables, by setting every non-assigned variable to false. As before, the algorithm stops the search when it reaches a depth of $k$, proving an upper bound of $c^k$ on the number of minimal satisfying assignments of weight at most $k$, and an upper bound of $O^*(c^k)$ on the enumeration time. \qed \end{proof} For {\sc Bounded Hitting Set}, the proof of Theorem~\ref{fullkernelresult} can be strengthened to develop a polynomial reconfiguration kernel.
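To make the algorithm behind Theorem~\ref{fullkernelresult} concrete for {\sc Bounded Hitting Set}, the following Python sketch (an illustration of ours, not part of the formal development, and with no attempt at the stated running-time bounds; all function names are ours) enumerates the minimal hitting sets of size at most $k$ by the branching algorithm above, collects their union $M$, and decides reconfiguration by breadth-first search over the hitting sets contained in $M$, trying each of the ${\cal E}+1$ reduced budgets.

```python
from collections import deque


def is_hitting(candidate, sets):
    """True if candidate intersects every set of the family."""
    return all(s & candidate for s in sets)


def minimal_hitting_sets(sets, k):
    """Enumerate all minimal hitting sets of size at most k by branching,
    at each node, on the elements of a set that is not yet hit."""
    found = set()

    def search(partial):
        unhit = next((s for s in sets if not (s & partial)), None)
        if unhit is None:
            # partial hits every set; keep it only if it is minimal.
            if all(not is_hitting(partial - {v}, sets) for v in partial):
                found.add(partial)
            return
        if len(partial) == k:
            return  # depth bound: no hitting set of size <= k below here
        for v in unhit:
            search(partial | {v})

    search(frozenset())
    return found


def _bfs(sets, s, t, cap, budget, M):
    """Breadth-first search over hitting sets contained in M."""
    if len(s) > cap or len(t) > cap:
        return False
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        if dist[u] == budget:
            continue
        for v in M:  # toggle one element of M per step
            w = u - {v} if v in u else u | {v}
            if len(w) <= cap and w not in dist and is_hitting(w, sets):
                dist[w] = dist[u] + 1
                queue.append(w)
    return False


def reconfigurable(sets, S, T, k, ell):
    """Decide whether the hitting set S can be transformed into T by
    single-element steps, every intermediate set a hitting set of size
    at most k, in at most ell steps."""
    sets = [frozenset(s) for s in sets]
    S, T = frozenset(S), frozenset(T)
    minimals = minimal_hitting_sets(sets, k)
    if not minimals:
        return False
    M = frozenset().union(*minimals)  # elements of minimal solutions
    E = len((S & T) - M)              # vertices we may remove and readd
    for e in range(E + 1):
        budget = ell - 2 * (E - e)
        if budget >= 0 and _bfs(sets, S & M, T & M, k - e, budget, M):
            return True
    return False
```

On the three edge-sets of a triangle, for instance, reconfiguring the hitting set $\{1,2\}$ into $\{2,3\}$ succeeds with capacity $3$ but fails with capacity $2$, matching the capacity bookkeeping in the proof.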
In fact, we use the ideas in Theorem~\ref{fullkernelresult} to adapt a special kernel that retains all minimal $k$-hitting sets in the reduced instances~\cite{D09}. \begin{theorem} \label{hittingsettheorem} {\sc Bounded Hitting Set Reconfiguration} parameterized by $k$ has a polynomial reconfiguration kernel. \end{theorem} \begin{proof} We let $(G,S,T,k,\ell)$ be an instance of {\sc Bounded Hitting Set Reconfiguration}: $G$ is a family of sets of vertices, each set of size at most $r$, and each of $S$ and $T$ is a hitting set of size at most $k$, that is, a set of vertices intersecting each set in $G$. We form a reconfiguration kernel using the reduction algorithm $\cal A$ of Damaschke and Molokov~\cite{D09}: $G' = {\cal A}(G)$ contains all minimal hitting sets of size at most $k$, and is of size at most $(r-1)k^r + k$. {\sc Bounded Hitting Set} is a $k$-subset problem that is superset-closed. Moreover, $V(G')$ includes all minimal $k$-hitting sets, and the $k$-hitting sets for $G'$ are exactly those $k$-hitting sets for $G$ that are completely included in $V(G')$. Therefore, as in the proof of Theorem~\ref{fullkernelresult}, $(G,S,T,k,\ell)$ is a yes-instance for {\sc Bounded Hitting Set Reconfiguration} if and only if one of the ${\cal E} +1$ reduced instances $(G', S\cap V(G'), T\cap V(G'), k-e, \ell-2({\cal E} - e))$, for $0 \le e \le {\cal E}$, is a yes-instance for {\sc Bounded Hitting Set Reconfiguration}. Notice that unlike in the proof of Theorem~\ref{fullkernelresult}, here we have access to an $f(k)$-bounded instance $G'$ based on which we can solve {\sc $Q'$ Reconfiguration}. Another difference is that here the set containing all minimal solutions can be computed in polynomial time, whereas Theorem~\ref{fullkernelresult} guarantees only a fixed-parameter tractable procedure.
\qed \end{proof} {\sc Bounded Hitting Set} generalizes {\sc Vertex Cover}, {\sc Feedback Vertex Set in Tournaments}, {\sc Cluster Deletion}, and in general any deletion problem for a hereditary property with a finite forbidden set: \begin{corollary}\label{finiteforbidencor} If $\pi$ is a hereditary graph property with a finite forbidden set, then \textsc{$\pi$-del-reconf$(G, S, T, k, \ell)$} parameterized by $k$ has a polynomial reconfiguration kernel. \end{corollary} \subsection{Undirected Feedback Vertex Set} Corollary~\ref{finiteforbidencor} does not apply to {\sc Feedback Vertex Set}, for which the associated hereditary graph property is the collection of all forests; the forbidden set is the set of all cycles and hence is not finite. Indeed, Theorem~\ref{fullkernelresult} does not apply to {\sc Feedback Vertex Set} either, since the number of minimal solutions exceeds $f(k)$ if the input graph includes a cycle of length $f(k)+1$, for any function $f$. While it may be possible to adapt the compact enumeration of minimal feedback vertex sets~\cite{Guoetal2006} for reconfiguration, we develop a reconfiguration kernel for {\sc Feedback Vertex Set} by modifying a specific kernel for the problem. We are given an undirected graph and two feedback vertex sets $S$ and $T$ of size at most $k$. We make use of Bodlaender's cubic kernel for {\sc Feedback Vertex Set}~\cite{B07}, modifying reduction rules (shown in italics) to allow the reconfiguration sequence to use non-minimal solutions, and to take into account the roles of $C$, $S_D$, $T_A$, and $O$. In some cases we remove vertices from $O$ only, as others may be needed in a reconfiguration sequence. The reduction may introduce multiple edges, forming a multigraph. Bodlaender specifies that a double edge between vertices $u$ and $v$ consists of two edges with $u$ and $v$ as endpoints.
Since we preserve certain degree-two vertices, we extend the notion by saying that there is a {\em double edge} between $u$ and $v$ if either there are two edges with $u$ and $v$ as endpoints, one edge between $u$ and $v$ and one path from $u$ to $v$ in which each internal vertex is of degree two, or two paths (necessarily sharing only $u$ and $v$) from $u$ to $v$ in which each internal vertex is of degree two. Following Bodlaender, we define two sets of vertices, a feedback vertex set $A$ of size at most $2k$ and the set $B$ containing each vertex with a double edge to at least one vertex in $A$. A {\em piece} is a connected component of $G[V \setminus (A \cup B)]$, the {\em border} of a piece with vertex set $X$ is the set of vertices in $A \cup B$ adjacent to any vertex in $X$, and a vertex $v$ in the border {\em governs} a piece if there is a double edge between $v$ and each other vertex in the border. We introduce ${\cal E}$ to denote how much capacity we can ``free up'' for use in the reduced instance by removing vertices and then readding them. Bodlaender's algorithm makes use of a repeated initialization phase in which an approximate solution $A$ is found and $B$ is initialized; for our purposes, we set $A = C \cup S_D \cup T_A$ in the first round and thereafter remove vertices as dictated by the application of reduction rules. Although not strictly necessary, we preserve this idea in order to be able to apply Bodlaender's counting arguments. In the following rules, $v$, $w$, and $x$ are vertices. \begin{description} \item[Rule 1] If $v$ has degree 0, remove $v$ from $G$. {\em If $v$ is in $S_D \cup T_A$, subtract 1 from $\ell$. If $v$ is in $C$, increment ${\cal E}$ by 1.} \item[Rule 2] If $v$ has degree 1, remove $v$ and its incident edge from $G$. {\em If $v$ is in $S_D \cup T_A$, subtract 1 from $\ell$. If $v$ is in $C$, increment ${\cal E}$ by 1.} \item[Rule 3] If there are three or more edges $\{v,w\}$, remove all but two. 
\item[Rule 4] If $v$ has degree 2 {\em and $v$ is in $O$}, remove $v$ and its incident edges from $G$ and add an edge between its neighbours $w$ and $x$; add $w$ (respectively, $x$) to $B$ if a double edge is formed, $w$ (respectively, $x$) is not in $A \cup B$, and $x$ (respectively, $w$) is in $A$. \item[Rule 5] If $v$ has a self-loop, remove $v$ and all incident edges and decrease $k$ by 1, then restart the initialization phase. \item[Rule 6] If there are at least $k+2$ vertex-disjoint paths between $v \in A$ and any $w$ and there is no double edge between $v$ and $w$, add two edges between $v$ and $w$, and if $w \notin A \cup B$, add $w$ to $B$. \item[Rule 7] If for $v \in A$ there exist at least $k+1$ cycles such that each pair of cycles has exactly $\{v\}$ as the intersection, remove $v$ and all incident edges and decrease $k$ by 1, then restart the initialization phase. \item[Rule 8] If $v$ has at least $k+1$ neighbours with double edges, remove $v$ and all incident edges and decrease $k$ by 1, then restart the initialization phase. \item[Rule 9] If $v \in A \cup B$ governs a piece with vertex set $X$ and has exactly one neighbour $w$ in $X$, then remove the edge $\{v,w\}$. \item[Rule 10] If $v \in A \cup B$ governs a piece with vertex set $X$ and has at least two neighbours in $X$, then remove $v$ and all incident edges and decrease $k$ by 1, then restart the initialization phase. {\em Replaced by the following rule: If a piece with vertex set $X$ has a border set $Y$ such that there is a double edge between each pair of vertices in $Y$, remove $X$.} \end{description} \begin{lemma}\label{lemma-fvs-reconf} The instance $(G,S,T,k,\ell)$ is a yes-instance if and only if one of the ${\cal E} +1$ reduced instances $(G', S', T', k-e, \ell-2({\cal E} - e))$, for $0 \le e \le {\cal E}$, is a yes-instance. \end{lemma} \begin{proof} We show that no modification of a reduction rule removes possible reconfiguration sequences. This is trivially true for Rules 3 and 6. 
The vertices removed by Rules 1, 2, and 4 play different roles in converting a reconfiguration sequence for a reduced instance to a reconfiguration sequence for the original instance. As there is no cycle that can be destroyed only by a vertex removed from $O$ by Rule 1, 2, or 4, none of these vertices are needed. To account for the required removal (addition) of each such vertex in $S_D$ ($T_A$), we remove all $d$ such vertices and decrease $\ell$ by $d$. We can choose to leave a $v \in C_M$ in each solution in the sequence (with no impact on $\ell$) or to remove and then readd $v$ to free up extra capacity, at a cost of incrementing $\ell$ by two; in the reduced instance we thus remove $v$ and either decrement $k$ or subtract two from $\ell$. Since this choice can be made for each of these vertices, ${\cal E}$ in total, we try to solve any of ${\cal E} +1$ versions $(G', S', T', k-e, \ell-2({\cal E} - e))$ for $0 \le e \le {\cal E}$. For each of Rules 5, 7, and 8, we show that the removed vertex $v$ is in $C_F$; since the cycles formed by $v$ must be handled by each solution in the sequence, the instance can be reduced by removing $v$ and decrementing $k$. For Rule 5, $v \in C_F$ since every feedback vertex set must contain $v$. For Rules 7 and 8, $v \in C_F$, since any feedback vertex set not containing $v$ would have to contain at least $k+1$ vertices, one for each cycle. For Rule 9, Bodlaender's Lemma 8 shows that the removed edge has no impact on feedback vertex sets. For Rule 10, we first assume that Rule 9 has been exhaustively applied, and thus each vertex in the border has two edges to $X$. By Fact~\ref{fact-pieces} for $\pi$ the set of acyclic graphs, there cannot be a cycle in $G[O \cup \{v\}]$ for any $v \in S_D \cup T_A \cup O$, and hence each member of the border is in $C$.
Lemma 9 in Bodlaender's paper shows that there is a minimum size feedback vertex set containing $v$: even if all the neighbours of $v$ in the border are included in a feedback vertex set, at least one more vertex is required to break the cycle formed by $v$ and $X$. There is no gain in capacity possible by replacing $v$ in the reconfiguration sequence, and hence this particular piece is of no value in finding a solution.\qed \end{proof} We first present the key points and lemmas in Bodlaender's counting argument and then show that, with minor modifications, the same argument goes through for our modified reduction rules and altered definition of {\em double edge}. In Bodlaender's proof, the size of the reduced instance is bounded by bounding the sizes of $A$ and $B$ (Lemma~\ref{lemma-bod-AB}), bounding the number of pieces (Lemma~\ref{lemma-bod-piece-count}), and bounding the size of each piece. Crucial to the proof of Lemma~\ref{lemma-bod-piece-count} is Lemma~\ref{lemma-bod-not-double}, as the counting associates each piece with a pair of vertices in its border that are not connected by a double edge and then counts the number of pieces associated with each different type of pair. We use Lemma~\ref{lemma-bod-9} in the discussion below. \begin{lemma}\cite{B07}\label{lemma-bod-9} Suppose $v \in A \cup B$ governs a piece with vertex set $X$. Suppose there are at least two edges with one endpoint $v$ and the other endpoint in $X$. Then there is a minimum size feedback vertex set in $G$ that contains $v$. \end{lemma} \begin{lemma}\cite{B07}\label{lemma-bod-AB} In a reduced instance, there are at most $2k$ vertices in $A$ and at most $2k^2$ vertices in $B$. \end{lemma} \begin{lemma}\cite{B07}\label{lemma-bod-not-double} Suppose none of the Rules 1--10 can be applied to $G$. Suppose $Y \subseteq V$ is the border of a piece in $G$. Then there are two distinct vertices $v,w \in Y$ such that $\{v,w\}$ is not a double edge.
\end{lemma} \begin{lemma}\cite{B07}\label{lemma-bod-piece-count} Suppose we have a reduced instance. There are at most $8k^3 + 9k^2 + k$ pieces. \end{lemma} \begin{lemma}\label{lemma-fvs-bod} Each reduced instance has $O(k^3)$ vertices and $O(k^3)$ edges, and can be obtained in polynomial time. \end{lemma} \begin{proof} Our modifications to Rules 1--3 and 5--9 do not have an impact on the size of the kernel. Although our Rule 4 preserves some degree-two vertices in $A$, the set $A$ is initialized to $C \cup S_D \cup T_A$ and hence has size at most $2k$, and the bound on $B$, and with it Lemma~\ref{lemma-bod-AB}, follows from Rule 8. In essence, our extended definition of double edges handles the degree-two vertices that in Bodlaender's constructions would have been replaced by an edge. To claim the result of Lemma~\ref{lemma-bod-piece-count}, it suffices to show that Lemma~\ref{lemma-bod-not-double} holds for our modified rules. Bodlaender shows that if there is a piece such that each pair of vertices in the border set is connected by a double edge, Rule 10 along with Rule 9 can be applied repeatedly to remove vertices from the border of the piece and thereafter Rules 2 and 1 to remove the piece entirely. To justify Rule 10, Bodlaender shows in Lemma~\ref{lemma-bod-9} that if $v \in A \cup B$ governs a piece with vertex set $X$ and there are at least two edges between $v$ and $X$, then there is a minimum size feedback vertex set in $G$ that contains $v$. For our purposes, however, since there may be non-minimum size feedback vertex sets used in the reconfiguration sequence, we wish to retain $v$ rather than removing it. Our modification to Rule 10 allows us to retain $v$, handling all the removals from the piece without changing the border, and thus establishing Lemma~\ref{lemma-bod-not-double}, as needed to prove Lemma~\ref{lemma-bod-piece-count}. In counting the sizes of pieces, our modifications result in extra degree-two vertices.
Rule 4 removes all degree-two vertices in $O$, and hence the number of extra vertices is at most $2k$, having no effect on the asymptotic count. \qed \end{proof} \begin{theorem}\label{theorem-fpt-fvs} \textsc{Feedback Vertex Set Reconfiguration} and the search variant parameterized by $k$ are in {\em FPT}. \end{theorem} \begin{proof} Since the number of reduced instances is ${\cal E} + 1 \le |C| + 1 \le k + 1$, as a consequence of Lemmas~\ref{lemma-fvs-reconf} and \ref{lemma-fvs-bod}, we have a reconfiguration kernel, proving the first result. For the search version, we observe that we can generate the reconfiguration graph of the reduced yes-instance and use it to extract a reconfiguration sequence. We demonstrate that we can form a reconfiguration sequence for $(G,S,T,k,\ell)$ from the reconfiguration sequence $\sigma$ for the reduced yes-instance $(G',S',T',k-e,\ell-2({\cal E}-e))$. We choose an arbitrary partition of the vertices removed from $G$ by Rules 1 and 2 into two sets, $K$ (the ones to keep) of size $e$ and $M$ (the ones to modify) of size ${\cal E}-e$. We can modify $\sigma$ into a sequence $\sigma'$ in which all vertices in $K$ are added to each set; clearly no set will have size greater than $k$. Our reconfiguration sequence then consists of ${\cal E}-e$ steps each deleting an element of $M$, the sequence $\sigma'$, and ${\cal E}-e$ steps each adding an element of $M$, for a length of at most $({\cal E} -e) + (\ell -2({\cal E}-e)) + ({\cal E} - e) \le \ell$, as needed. \qed \end{proof} \section{Hardness Results}\label{sec-relate} The reductions presented in this section make use of the forbidden set characterization of hereditary properties. A {\em $\pi$-critical graph} $H$ is a (minimal) graph in the forbidden set ${\mc{F}}_\pi$ that has at least two vertices; we use the fact that $H \notin \pi$, but the deletion of any vertex from $H$ results in a graph in $\pi$.
For convenience, we will refer to two of the vertices in a $\pi$-critical graph as {\em terminals} and the rest as {\em internal vertices}. We construct graphs from multiple copies of $H$. For a positive integer $c$, we let $H_c^*$ be the (``star'') graph obtained from each of $c$ copies $H_i$ of $H$ by identifying an arbitrary terminal $v_i$, $1 \le i \le c$, from each $H_i$; in $H_c^*$ vertices $v_1$ through $v_c$ are replaced with a vertex $w$, the {\em gluing vertex of $v_1$ to $v_c$}, to form a graph with vertex set $\cup_{1 \le i \le c} (V(H_i) \setminus \{v_i\}) \cup \{w\}$ and edge set $\cup_{1 \le i \le c}\{\{u,v\} \in E(H_i) \mid v_i \notin \{u,v\}\} \cup \cup_{1 \le i \le c}\{\{u,w\} \mid \{u,v_i\} \in E(H_i)\}$. A terminal is {\em non-identified} if it is not used in forming a gluing vertex. In Figure~\ref{fig-star}, $H$ is a $K_3$ with terminals marked black and gray; $H_4^*$ is formed by identifying all the gray terminals to form $w$. \begin{figure} \begin{centering} \centerline{\includegraphics[scale=0.45]{stargraph.pdf}} \end{centering} \caption{An example $H_c^*$} \label{fig-star} \end{figure} \begin{theorem}\label{theorem-reconf-minmax} Let $\pi$ be any hereditary property satisfying the following: \begin{itemize} \item For any two graphs $G_1$ and $G_2$ in $\pi$, the graph obtained by their disjoint union is in $\pi$. \item There exists an $H \in {\mc{F}}_\pi$ such that if $H_c^*$ is the graph obtained from identifying a terminal from each of $c$ copies of $H$, then the graph $R= H_c^*[V(H_c^*) \setminus \{ u_1, u_2, \ldots u_c \}]$ is in $\pi$, where $u_1, u_2, \ldots u_c$ are the non-identified terminals in the $c$ copies of $H$. \end{itemize} Then each of the following is at least as hard as \textsc{$\pi$-subset$(G, k)$}: \begin{enumerate} \item \textsc{$\pi$-del-reconf$(G, S, T, k, \ell)$} parameterized by $\ell$, and \item \textsc{$\pi$-sub-reconf$(G, S, T, k, \ell)$} parameterized by $k + \ell$. 
\end{enumerate} \end{theorem} \begin{proof} Given an instance of \textsc{$\pi$-subset$(G, k)$} and a $\pi$-critical graph $H$ satisfying the hypothesis of the theorem, we form an instance of \textsc{$\pi$-del-reconf$(G', S, T, |V(G)| + k, 4k)$}, with $G'$, $S$, and $T$ defined below. The graph $G'$ is the disjoint union of $G$ and a graph $W$ formed from $k^2$ copies of $H$, where $H_{i,j}$ has terminals $\ell_{i,j}$ and $r_{i,j}$. We let $a_i$, $1 \le i \le k$, be the gluing vertex of $\ell_{i, 1}$ through $\ell_{i, k}$, and let $b_j$, $1 \le j \le k$, be the gluing vertex of $r_{1, j}$ through $r_{k, j}$, so that there is a copy of $H$ joining each $a_i$ and $b_j$. An example $W$ is shown in Figure~\ref{fig-thm17}, where copies of $H$ are shown schematically as gray ovals. We let $A = \{a_i \mid 1 \le i \le k\}$, $B = \{b_j \mid 1 \le j \le k \}$, $S = V(G) \cup A$, and $T = V(G) \cup B$. Clearly $|V(G')| = |V(G)| + 2k + k^2 (|V(H)| -2)$ and $|S|=|T|= |V(G)| + k$. Moreover, each of $V(G') \setminus S$ and $V(G') \setminus T$ induces a graph in $\pi$, as each consists of $k$ disjoint copies of $H_k^*$ with one of the terminals removed from each $H$ in $H_k^*$. \begin{figure} \begin{centering} \centerline{\includegraphics[scale=0.35]{thm17.pdf}} \end{centering} \caption{An example $W$} \label{fig-thm17} \end{figure} Suppose the instance of \textsc{$\pi$-del-reconf$(G', S, T, |V(G)|+k, 4k)$} is a yes-instance. As there is a copy of $H$ joining each vertex of $A$ to each vertex of $B$, before deleting $a \in A$ from $S$ the reconfiguration sequence must add all of $B$ to ensure that the complement of each intermediate set induces a graph in $\pi$. Otherwise, the complement will contain at least one copy of $H$ as an induced subgraph and is therefore not in $\pi$.
The capacity bound of $|V(G)| + k$ implies that, by the step at which all of $B$ has been added, the reconfiguration sequence must have deleted from $S$ a subset $S' \subseteq V(G)$ of size at least $k$ such that the complement $V(G') \setminus ((S \setminus S') \cup B)$, which contains $S'$, induces a subgraph in $\pi$. Thus, $G[S'] \in \pi$, and hence \textsc{$\pi$-subset$(G, k)$} is a yes-instance. Conversely, if the instance of \textsc{$\pi$-subset$(G, k)$} is a yes-instance, then there exists $V' \subseteq V(G)$ such that $|V'| = k$ and $G[V'] \in \pi$. We form a reconfiguration sequence between $S$ and $T$ by first deleting all vertices in $V'$ from $S$ to yield a set of size $|V(G)|$. $G'[V(G') \setminus (S \setminus V')]$ consists of the union of $G'[V(G') \setminus S]$ and $G'[V'] = G[V']$, both of which are in $\pi$. Next we add one by one all vertices of $B$, then delete one by one all vertices of $A$, and then add back one by one each vertex in the set $V'$, resulting in a reconfiguration sequence of length $k + k + k + k = 4k$. It is clear that in every step, the complement of the set induces a graph in $\pi$. Thus we have shown that \textsc{$\pi$-subset$(G, k)$} is a yes-instance if and only if there is a path of length at most $4k$ between $S$ and $T$ in $R^{\pi}_{\textsc{del}}(G', |V(G)| + k)$. Since $|V(G')| - (|V(G)| + k) = k + k^2 (|V(H)| -2)$, this implies that \textsc{$\pi$-subset$(G, k)$} is a yes-instance if and only if there is a path of length at most $4k$ between ${V(G') \setminus S}$ and ${V(G') \setminus T}$ in $R^{\pi}_{\textsc{sub}}(G', k + k^2 (|V(H)| -2))$.
Therefore, \textsc{$\pi$-sub-reconf$(G, S, T, k, \ell)$} parameterized by $k + \ell$ is at least as hard as \textsc{$\pi$-subset$(G, k)$}, proving the second part; since in the constructed instance of \textsc{$\pi$-del-reconf} the parameter $\ell = 4k$ is bounded by a function of $k$, the first part follows as well.\qed \end{proof} \begin{corollary}\label{corollary:hardness} \textsc{Vertex Cover Reconfiguration}, \textsc{Feedback Vertex Set Reconfiguration}, and \textsc{Odd Cycle Transversal Reconfiguration} parameterized by $\ell$ are all $W[1]$-hard and \textsc{Independent Set Reconfiguration}, \textsc{Forest Reconfiguration}, and \textsc{Bipartite Subgraph Reconfiguration} parameterized by $k + \ell$ are all $W[1]$-hard. \end{corollary} \begin{proof} It is known that for any hereditary property $\pi$ that includes all edgeless graphs but not all cliques~\cite{KR02}, \textsc{$\pi$-subset$(G,k)$} is $W[1]$-hard. It is clear that the collections of all edgeless graphs, of all bipartite graphs, and of all forests satisfy this condition for hardness, as well as the hypothesis of Theorem~\ref{theorem-reconf-minmax}. For the collection of independent sets, the only $H \in {\mc{F}}_\pi$ is an edge both of whose endpoints are terminals. Here identifying multiple copies of $H$ at a terminal forms a star, and deleting the non-identified terminal from each of the edges results in a single vertex, which is in $\pi$. For the collections of forests and of bipartite graphs, we let $H \in {\mc{F}}_\pi$ be a triangle. When we identify multiple triangles at a vertex, and remove another vertex of each of the triangles, we obtain a tree, which is in $\pi$. \qed \end{proof} We obtain further results for properties not covered by Theorem~\ref{theorem-reconf-minmax}. Lemma~\ref{lemma-clique-cluster-hard} handles the collection of all cliques, which does not satisfy the first condition of the theorem, and the collection of all {\em cluster graphs} (disjoint unions of cliques), which satisfies the first condition but not the second.
Moreover, as \textsc{$\pi$-subset$(G, k)$} is in {\em FPT} for $\pi$ the collection of all cluster graphs~\cite{KR02}, Theorem~\ref{theorem-reconf-minmax} provides no lower bounds. \begin{lemma}\label{lemma-clique-cluster-hard} \textsc{Clique Reconfiguration} and \textsc{Cluster Subgraph Reconfiguration} parameterized by $k + \ell$ are $W[1]$-hard. \end{lemma} \begin{proof} We first give an {\em FPT} reduction from \textsc{$t$-Clique}, known to be $W[1]$-hard, to \textsc{Cluster Subgraph Reconfiguration}. For $(G, t)$ an instance of \textsc{$t$-Clique}, $V(G) = \{v_1, \ldots, v_n\}$, we form a graph consisting of four $K_t$'s (with vertex sets $A$, $B$, $C$, and $D$) and a subgraph mimicking $G$ (with vertex set $X$), where there is an edge from each vertex in $X$ to each vertex in each $K_t$, and each of the following vertex sets induces a $K_{2t}$: $A \cup B$, $A \cup C$, $B \cup D$, $C \cup D$. More formally, $G' = (X \cup A \cup B \cup C \cup D, E_X \cup E_T \cup E_C)$, where $X = \{x_1, \ldots, x_n\}$, $|A| = |B| = |C| = |D| = t$, $E_X = \{\{x_i,x_j\} \mid \{v_i,v_j\} \in E(G)\}$ corresponds to the edges in $G$, $E_T = \{\{a,a'\} \mid a,a' \in A, a \ne a'\} \cup \{\{b,b'\} \mid b,b' \in B, b \ne b'\} \cup \{\{c,c'\} \mid c,c' \in C, c \ne c'\} \cup \{\{d,d'\} \mid d,d' \in D, d \ne d'\} $ forms the $K_t$ cliques, and $E_C = \{\{x,a\}, \{x,b\}, \{x,c\}, \{x,d\}, \{a,b\}, \{a,c\}, \{b,d\}, \{c,d\} \mid a \in A, b \in B, c \in C, d \in D, x \in X\}$ forms the connections among the vertex sets. We let $(G', S, T, 2t, 6t)$ be an instance of \textsc{Cluster Subgraph Reconfiguration}, where $S = A \cup B$ and $T = C \cup D$. Clearly $|S| = |T| = 2t$ and both $S$ and $T$ induce cluster graphs (in fact cliques). We claim that $G$ has a clique of size $t$ if and only if there is a path of length $6t$ from $S$ to $T$. If $G$ has a clique of size $t$, then there exists a subset $Y \subseteq X$ forming a clique of size $t$.
We form a reconfiguration sequence of length $6t$ as follows: add the vertices in $Y$, remove the vertices in $A$, add the vertices in $D$, remove the vertices in $B$, add the vertices in $C$, and remove the vertices in $Y$, one by one. It is not hard to see that at every step in this sequence we maintain an induced clique in $G'$ of size greater than or equal to $2t$ (and hence a cluster subgraph). If there exists a path of length $6t$ from $S$ to $T$, we make use of the fact that no cluster graph contains an induced path on three vertices to show that $G$ has a clique of size $t$. Observe that before adding any vertex of $C$, we first need to remove (at least) all of $B$ since otherwise we obtain an induced path on three vertices, with one vertex in each of $C$, $A$, and $B$. Similarly, we cannot add any vertex of $D$ until we have removed all of $A$. Therefore, before adding any vertex from $T$, we first need to delete at least $t$ vertices from $S$. To do so without violating our minimum capacity of $2t$, at least $t$ vertices must be added from $X$. Since every vertex in $X$ is connected to all vertices in $S$ and $T$, if any pair of those $t$ vertices does not share an edge, we obtain an induced path on three vertices. Thus $X$, and hence $G$, must have a clique of size $t$. Since in our reduction $S$ and $T$ are cliques and every reconfiguration step maintains an induced clique in $G'$ of size greater than or equal to $2t$, the same applies to the \textsc{Clique Reconfiguration} problem. Consequently, both \textsc{Clique Reconfiguration} and \textsc{Cluster Subgraph Reconfiguration} parameterized by $k + \ell$ are $W[1]$-hard.
\qed \end{proof} As neither \textsc{Dominating Set} nor its parametric dual is a hereditary graph property, Theorem~\ref{theorem-reconf-minmax} is inapplicable; we instead use a construction specific to this problem in Lemma~\ref{lemma-dom-hard}, which in turn leads to Corollary~\ref{corollary-hitting-set}, since \textsc{Dominating Set} can be phrased as a hitting set problem on the family of closed neighborhoods of the vertices of the graph. \begin{lemma}\label{lemma-dom-hard} \textsc{Dominating Set Reconfiguration} parameterized by $k + \ell$ is $W[2]$-hard. \end{lemma} \begin{proof} We give a reduction from \textsc{$t$-Dominating Set}; for $(G, t)$ an instance of \textsc{$t$-Dominating Set}, we form $G'$ as the disjoint union of two graphs $G'_1$ and $G'_2$. We form $G'_1$ from $t+2$ $(t+1)$-cliques $C_0$ (the {\em outer clique}) and $C_1$, \ldots, $C_{t+1}$ (the {\em inner cliques}); $V(C_0) = \{o_1, \ldots, o_{t+1}\}$ and $V(C_i) = \{w_{(i,0)},w_{(i,1)},\ldots, w_{(i,t)}\}$ for $1 \le i \le t+1$. The edge set of $G'_1$ contains not only the edges of the cliques but also $\{\{o_{j+1},w_{(i,j)}\} \mid 1 \le i \le t+1, 0 \le j \le t\}$; the graph to the left in Figure~\ref{fig-dom} illustrates $G'_1$ for $t= 2$. Any dominating set that does not contain all vertices in the outer clique must contain a vertex from each inner clique. \begin{figure} \begin{centering} \centerline{\includegraphics[scale=0.45]{bothdomsets.pdf}} \end{centering} \caption{Graphs used for the dominating set reduction} \label{fig-dom} \end{figure} To create $G'_2$, we first define $G^+$ to be the graph formed by adding a universal vertex to $G$, where we assume without loss of generality that $V(G) = \{v_1,\ldots,v_{|V(G)|}\}$. We let $V(G'_2) = \cup_{0 \le i \le t}V(H_i)$, where $H_0, \ldots, H_{t}$ are $t+1$ copies of $G^+$; we use $u_i$ to denote the universal vertex in $H_i$ and $v_{(i,j)}$ to denote the copy of $v_j$ in $H_i$, $1 \le j \le |V(G)|$, $0 \le i \le t$.
The edge set contains, in addition to the edges within each copy $H_i$, edges between each non-universal vertex $v_{(0,j)}$ in $H_0$ and, in each $H_i$ with $i \ge 1$, the universal vertex, its image, and the images of its neighbours in $G$; more formally, $E(G'_2) = \cup_{0 \le i \le t} E(H_i) \cup \{\{v_{(0,j)},u_i\} \mid 1 \le j \le |V(G)|, 1 \le i \le t\} \cup \{\{v_{(0,j)},v_{(i,j)}\} \mid 1 \le j \le |V(G)|, 1 \le i \le t\} \cup \{\{v_{(0,j)},v_{(i,k)}\} \mid 1 \le j \le |V(G)|, 1 \le i \le t, \{v_j,v_k\} \in E(G)\}$. The graph to the right in Figure~\ref{fig-dom} illustrates part of $G'_2$, where universal vertices are shown in white and, for the sake of readability, the only edges outside of $G^+$ shown are those adjacent to a single vertex in $H_0$. We form an instance $(G', S, T, 3t + 2, 6t + 4)$ of \textsc{Dominating Set Reconfiguration}, where $S = \{u_i \mid 0 \le i \le t\} \cup V(C_0)$ and $T = \{u_i \mid 0 \le i \le t\} \cup \{w_{(i,i-1)} \mid 1 \le i \le t+1\}$. Both $S$ and $T$ are dominating sets, as each universal vertex $u_i$ dominates $H_i$ as well as $H_0$, and $V(G'_1)$ is dominated by the outer clique in $S$ and by one vertex from each inner clique in $T$. Clearly $|S| = |T| = 2t+ 2$. We claim that $G$ has a dominating set of size $t$ if and only if there is a path of length $6t + 4$ from $S$ to $T$. In $G'_1$, to remove any vertex from the outer clique, we must first add a vertex from each inner clique, for a total of $t + 1$ additions; since $k = 3t + 2$ and $|S| = 2t + 2$, this can only take place after $G'_2$ has been dominated using at most $t$ vertices. In $G'_2$, a universal vertex $u_i$ cannot be deleted until $H_i$ has been dominated. If $G$ can be dominated with $t$ vertices, then it is possible to add the dominating set in $H_0$ and remove all the universal vertices, thus making the required capacity available. If not, then none of the universal vertices, say $u_i$, can be removed without first adding at least $t+1$ vertices to dominate $H_i$, for which there is not enough capacity.
Therefore, there exists a reconfiguration sequence from $S$ to some $S'$ such that $S' \cap V(G'_2)$ has $t$ vertices if and only if $G$ has a dominating set of size $t$. Moreover, the existence of a dominating set $D$ of size $t$ in $G$ implies a path of length $6t + 4$ from $S$ to $T$; we add $D$ in $H_0$, remove all universal vertices, reconfigure $G'_1$, add all universal vertices, and then remove $D$. Consequently, there exists a reconfiguration sequence from $S$ to $T$ in $6t + 4$ steps if and only if $G$ has a dominating set of size $t$. \qed \end{proof} The following corollary is a consequence of the existence of a polynomial-time parameter-preserving reduction from \textsc{Dominating Set}: \begin{corollary}\label{corollary-hitting-set} \textsc{Unbounded Hitting Set Reconfiguration} parameterized by $k + \ell$ is $W[2]$-hard. \end{corollary} \vspace{-0.2cm} \section{Conclusions and Directions for Further Work} Our results constitute the first study of the parameterized complexity of reconfiguration problems. We give a general paradigm, the reconfiguration kernel, for proving fixed-parameter tractability, and provide hardness reductions that apply to problems associated with hereditary graph properties. Our result on cluster graphs (Lemma~\ref{lemma-clique-cluster-hard}) demonstrates the existence of a problem that is fixed-parameter tractable~\cite{KR02}, but whose reconfiguration version is $W$-hard when parameterized by $k$; this implies that fixed-parameter tractability of the underlying problem does not guarantee fixed-parameter tractability of its reconfiguration version when parameterized by $k$. Since there is unlikely to be a polynomial-sized kernel for the problem of determining whether a given graph has a cluster of size at least $k$~\cite{KPRR12}, it is possible (though in our opinion, unlikely) that an underlying problem having a polynomial-sized kernel is sufficient for the reconfiguration problem to be fixed-parameter tractable when parameterized by $k$.
It remains open whether there exists an NP-hard problem for which the reconfiguration version is in {\em FPT} when parameterized by $\ell$. Our {\em FPT} algorithms for reconfiguration of \textsc{Bounded Hitting Set} and \textsc{Feedback Vertex Set} have running times of $O^*(2^{O(k \lg k)})$. Further work is needed to determine whether the running times can be improved to $O^*(2^{O(k)})$, or whether these bounds are tight under the {\em Exponential Time Hypothesis}. We observe connections to another well-studied paradigm, local search~\cite{FRFLSV90}, where the aim is to find an {\em improved solution} at distance $\ell$ from a given solution $S$. Not surprisingly, as in local search, the problems we study turn out to be hard even in the parameterized setting when parameterized by $\ell$. Other natural directions to pursue (as in the study of local search) are the parameterized complexity of reconfiguration problems in special classes of graphs and of non-graph reconfiguration problems, as well as other parameterizations. \vspace{-0.1cm} \subsection*{Acknowledgements} The second author wishes to thank Marcin Kami\'{n}ski for suggesting the examination of reconfiguration in the parameterized setting. \bibliographystyle{acm}
\section*{} \vspace{-1cm} \footnotetext{\textit{$^{a}$~Faculty of Physics and Center for NanoScience, Ludwig-Maximilians-Universit\"at M\"unchen, Geschwister-Scholl-Platz 1, D-80539 Munich, Germany. Fax: +49 89 2180 3182; Tel: +49 89 2180 2438; E-mail: raedler@lmu.de}} \footnotetext{\textit{$^{b}$~Department of Pharmacy - Center for Drug Research, Pharmaceutical Biology, Ludwig-Maximilians-Universit\"at M\"unchen, Butenandtstr. 5-13, D-81377 Munich, Germany }} \section*{Introduction} Micropatterning techniques have become an established tool for researchers interested in single-cell functions and dynamics \cite{Mahmud2009,Thery2005,Thery2006,Parker2002,Jiang2005,Yoon2012, Ferizi2015} and the collective behavior of small cell assemblies and tissues \cite{Rolli2012,Marel2014,Segerer2015}. Their significance for today's cell science arises from the fact that they provide direct control over the shape and functionality of the cell's environment on a microscopic scale. How a cell adapts to the structure and composition of its microenvironment can give considerable insight into its intrinsic mechanical and functional properties \cite{Bischofs2008,Tee2015,Thery2006-2}. In addition, micropatterns can be exploited to actively manipulate cell behavior. For instance, it has been found that the size and geometry of the accessible area can alter and direct the axis of cell polarization and division \cite{Thery2005,Thery2006,Jiang2005}, the positions and orientation of pseudopodia \cite{Parker2002}, or the locations of junctions between adjacent cells \cite{Tseng2012}. Cell adhesion and migration depend in large part on the protein composition of a cell's surroundings \cite{Junker1987,Tomaselli1987,Eichinger2012}, but the relative density of adhesion sites \cite{Maheshwari1999,Maheshwari2000,Rajagopalan2004}, as well as density gradients of surface-bound proteins, can influence and bias its spreading and motion \cite{Smith2004,Liu2007}.
Micropatterning techniques should therefore provide precise control over the composition of the cell's microenvironment, i.e. the distribution and concentration of surface-bound proteins. In addition, they should be compatible with a large variety of proteins and capable of producing patterns composed of multiple protein species. Most current micropatterning protocols are based on one of the following three approaches. Soft lithography in the form of microcontact printing ($\mathrm{\mu CP}$) involves protein transfer from a polymeric stamp, while the remaining surface is often passivated by PEGylation \cite{Thery2009,Ruiz2007,Wilbur1996}. In photolithography, the properties of a surface or a precoated matrix are locally altered by photocleavage with a laser device or by exposing it to light through a photomask \cite{Azioune2009,Kim2010,Belisle2009,Nakanishi2004}. In plasma lithography, a surface that is partially protected by a shadow mask or stamp is modified/activated by exposure to a plasma (e.g. oxygen) \cite{Cheng2010,Junkin2011,Langowski2005,Tourovskaia2003,Picone2014,Kim2011}. Protein deposition in the latter protocols is mainly achieved by surface adsorption from an aqueous solution. Each of these approaches has its own specific advantages. $\mathrm{\mu CP}$ provides flexibility with respect to the molecules that are transferred to the surface, and does not require advanced or expensive equipment. Photolithography-based protocols produce very homogeneous patterns and have also been extended to enable formation of gradients in the surface-bound protein density \cite{Belisle2009}.
Finally, plasma-based approaches profit from the strong activation of the surface by the plasma exposure, which can be exploited (i) directly, to cause increased cell attachment on otherwise cell-repellent substrates \cite{Junkin2011,Kim2011}, (ii) as a basis to spatially control polymer or protein deposition or conformation \cite{Langowski2005,Cheng2010}, and (iii) to selectively remove a layer of protein or polymer \cite{Picone2014}. The plasma treatment itself is a robust and effective procedure that provides fast working protocols and is applicable to a large variety of substrates. However, the diversity of substrates, coatings and patterns used in the field of cell science is so great that no single existing technique is capable of fabricating designs suitable for all experimental conditions. In particular, gradients in protein density or the accurate deposition of different proteins within a multicomponent pattern are often difficult to accomplish. Additionally, since micropatterning should ideally be accessible to a broad range of labs, patterning methods should preferably also be easy to handle and cost-efficient. Therefore, simple working protocols that are adaptable to different experimental conditions such as proteins and substrates and can be combined with other patterning approaches to create more complex microenvironments can be expected to stimulate further progress in this field. In this paper, we present an alternative, simple, plasma-based means of creating micropatterns on a broad range of substrates. The technique is based on plasma-induced patterning in combination with PEGylation and protein coating, and is therefore referred to here as microscale plasma-initiated protein patterning ($\mu$PIPP). It provides control over the final concentration of protein on the surface and produces homogeneous and stable patterns on various substrates such as glass, tissue culture polystyrene (tc-PS), cyclic olefin copolymers (COCs), and parylene C.
We show that gradients in the surface-bound protein density can be generated via protein incubation within a chemotaxis chamber. Finally, we combine the $\mu$PIPP protocol with $\mathrm{\mu CP}$ to create complex patterns consisting of up to three different components, positioned accurately and directly adjacent to one another. The method presented in this paper should prove useful as a facile and versatile approach to the fabrication of a wide variety of micropatterns. \begin{figure*}[ht!] \centering \includegraphics[width=1\linewidth]{Fig1} \caption{Patterning protocols. (a) Patterning procedure for conventional $\mu$PIPP: 1. The surface is partially covered by a PDMS stamp of the desired pattern and exposed to $\mathrm{O_2}$ plasma. 2. A PLL-PEG solution is applied to the margins of the stamp and is drawn over the exposed surface by capillary action. 3. The stamp is removed and the surface is incubated with the desired protein. (b) $\mu$PIPP combined with $\mu$CP: 1. Following UV-ozone activation (see Methods), the PDMS stamp is incubated with Protein 1. 2. The protein-coated stamp is inverted and Protein 1 is printed on the surface, which is simultaneously exposed to $\mathrm{O_2}$ plasma. 3. PLL-PEG solution is applied to the stamp edge and drawn between stamp and surface by capillary action. 4. The stamp is removed and the surface is incubated with Protein 2.} \label{fig:1} \end{figure*} \section*{Experimental} \subsection*{Preparation of masters} Masters for stamp preparation can be created by following established protocols (such as those provided by photoresist producers like MicroChem) or the protocol provided in Section S1 of the Supplementary Material. Note that labs that do not have the means to create stamp masters can order them online (from HTS Resources, for example). Once prepared, each master can be used to make multiple stamps.
\subsection*{PDMS stamp preparation} PDMS was prepared by mixing 10 parts silicone elastomer with 1 part crosslinker (Sylgard Elastomer Kit, Dow Corning), poured as a 1-3\;mm thick layer onto the master, and degassed in a desiccator. The coated master was then cured overnight at a temperature of $\mathrm{50^{\circ}C}$. \subsection*{Patterning} The following sections describe the patterning protocols. The proteins used for patterning in this work were: fibrinogen labeled with Alexa-488, Alexa-594 or Alexa-647 (Life Technologies), respectively, fibronectin (YO Proteins) and laminin-1 (bio-techne). Note, however, that the technique may be used with a broad range of different proteins. \subsubsection*{Conventional $\mu$PIPP:} A PDMS stamp of the desired pattern was placed on the surface to be patterned (see Fig.~\ref{fig:1}a). The assembly was then exposed to $\mathrm{O_2}$ plasma at a pressure of 2\;mbar in a plasma cleaner (Diener Femto) at 40\;W for 3\;min, thus activating the exposed parts of the surface. A droplet of a 2\;mg/ml PLL(20kDa)-g[3.5]-PEG(2kDa) (PLL-PEG) solution in 10\;mM HEPES (pH 7.4) and 150\;mM NaCl was then placed at the edge of the stamp, and was drawn into the spaces between surface and stamp by capillary action. After 30\;min at room temperature, the stamp was removed, and the substrate was rinsed with PBS. Finally, a (50\;$\mu$g/ml) solution of the desired protein (e.g. fibronectin, fibrinogen) dissolved in phosphate-buffered saline (PBS) was added for 30-60\;min (if not noted otherwise) and the substrate was rinsed three times with PBS. \subsubsection*{Gradient patterning:} To set up a protein density gradient in the final pattern, the $\mu$PIPP protocol was applied to a chemotaxis slide. To this end, the surface of the chemotaxis slide (ibidi, sticky-Slide Chemotaxis3D) was first patterned with PLL-PEG according to the standard $\mu$PIPP protocol (steps 1 and 2 in Fig.~\ref{fig:1}a).
Afterwards, the ``sticky chamber'' was attached and filled with PBS. The PBS on one side of each chamber was then replaced by 45\;$\mu$l of a 100\;$\mu$g/ml solution of protein in PBS according to the manufacturer's instructions \cite{ibidi:chemotaxis} to create a gradient in the concentration of the protein solution. The patterned surface was incubated in this gradient for 40\;min. Finally, the surface was rinsed by flooding the chamber three times with PBS. \subsubsection*{Multicomponent patterning:} In order to obtain a pattern consisting of three different components, the basic $\mu$PIPP protocol was extended as shown in Fig.~\ref{fig:1}b. Note that this method works for all stamp geometries that provide enclosed cavities. The PDMS stamp was initially activated for 5\;min in a UV-ozone cleaner (novascan) and incubated for about 1\;h with a 50\;$\mu$g/ml solution of the first protein. Incubated stamps were rinsed once with ultrapure water and dried for about 6\;min. A COC substrate (ibidi) was then activated for 3\;min in the UV-ozone cleaner before the stamp was set in place.\footnote[3]{This UV activation step was found to be critical, since the surface has to be sufficiently hydrophilic to allow the printed protein to be properly transferred from the stamp yet hydrophobic enough to ensure that the protein is adsorbed from the incubation solution.} The subsequent procedure follows the standard $\mu$PIPP protocol. Note that if a third protein instead of PLL-PEG is used, no plasma treatment is necessary. \subsection*{Cell culture} \subsubsection*{MDCK:} The Madin-Darby Canine Kidney (MDCK) cell line was cultured in Minimum Essential Medium (c-c-pro) containing 2\;mM L-glutamine and 10\;\% fetal calf serum (FCS) at 37$^{\circ}$C in a 5\;\% $\mathrm{CO_2}$ atmosphere. Prior to experiments, cells were grown to 70-80\;\% confluence, trypsinized and centrifuged at 1000\;rcf for 3\;min.
Cell pellets were resuspended in Leibovitz's L15 medium with GlutaMAX (Gibco) and 10\;\% FCS. \subsubsection*{HUVEC:} Human umbilical vein endothelial cells (HUVEC) were cultivated in endothelial cell growth medium 2 (ECGM) (Promocell) supplemented with 1\;\% penicillin/streptomycin/amphotericin B (PAN-Biotech) and 10\;\% FCS (PAA). Cells were incubated at 37$^{\circ}$C in a 5\;\% $\mathrm{CO_2}$ atmosphere. Prior to experiments, cells were overlaid with 1x trypsin/ethylenediaminetetraacetic acid (PAN-Biotech) to detach the cell layer. Subsequently, trypsin was inactivated by adding Dulbecco's Modified Eagle's Medium supplemented with 10\;\% FCS. After inactivation, cells were centrifuged and the cell pellet was diluted in ECGM to the desired concentration. For all experiments, cells were used in their 3rd passage. \subsection*{Fluorescence staining} After a 24\;h incubation on micropatterned plates, cells were washed twice with PBS and fixed with 4\;\% paraformaldehyde for 10\;min. After a second washing step with PBS, cells were permeabilized for 10\;min using 0.2\;\% Triton in PBS. Before staining, samples were exposed to 1\;\% bovine serum albumin (BSA) in PBS for 30\;min to saturate non-specific protein-binding sites, and then stained with a 1:400 dilution of rhodamine-phalloidin (Invitrogen/Thermo Scientific) and 0.5\;$\mu$g/ml Hoechst 33342 (Sigma) diluted in 1\;\% BSA in PBS. After 30\;min, fixed cells were washed three times for 5\;min each with PBS containing 0.2\;\% BSA and sealed with FluorSave Reagent (Merck Millipore) and a coverslip. \subsection*{Microscopy} Phase-contrast and fluorescence images were taken on a Nikon TI Eclipse inverted microscope. Confocal microscopy was performed using a Leica SP8 microscope. \section*{Results and discussion} In the first set of experiments we created grids of fibronectin-coated squares using microscale plasma-initiated protein patterning ($\mu$PIPP) as described in the Methods section.
Note that in this set of experiments we used squares of 60~$\mathrm{\mu m}$ width, but in principle the pattern resolution is limited only by the accuracy of the stamp master ($\sim$2~$\mathrm{\mu m}$ in our experiments). To test whether $\mu$PIPP is compatible with commonly used cell-culture substrates, we applied the procedure to standard tc-PS, glass, COC, parylene C, and PDMS. As depicted in Fig.~\ref{fig:2}, the method produces homogeneous protein patterns on all these surfaces. The method could be applied to these substrates without the need for any surface pretreatment or adjustment of the protocol. Cell adhesion and confinement to the patterned surfaces was achieved on tc-PS, as well as on glass and COC. Pattern quality was also high on parylene C, which is often used as a biocompatible coating for electronic devices. Although patterns were successfully produced on PDMS, confinement of cells within these patterns was not stable over time. (To vary the stiffness of the PDMS surface, monomer to crosslinker ratios of 1:5, 1:10, and 1:20, respectively, were used.) \begin{figure}[ht!] \centering \includegraphics[width=1\linewidth]{Fig2} \caption{$\mu$PIPP on different substrates. (Left column) Fluorescence images of fibrinogen Alexa-488 patterns on different substrates: COC, glass, parylene C, tc-PS and PDMS. Black areas are passivated with PLL-PEG. (Right column) MDCK cells on fibronectin patterns 24\;h after cell seeding (insets 5x magnified).} \label{fig:2} \end{figure} Notably, patterns produced by treatment with plasma can be visualized by phase-contrast as well as differential interference contrast microscopy (see Fig.~S2 of the Supplementary Material), owing to the fact that exposure to plasma slightly alters the altitude and degree of roughness of the surface (on the order of a few hundred nm), and hence changes its optical properties \cite{Beaulieu2009,Alam2014}.
This feature simplifies working with the final pattern, as patterned regions can be located and identified without any need for fluorescence microscopy. In order to test whether the patterns produced are suitable for cell confinement over long timescales, we carried out time-lapse measurements of cells on patterned surfaces over extended time periods. As shown in Fig.~S3 of the Supplementary Material (and seen in earlier studies \cite{Roettgermann2014,Ferizi2015,Segerer2015}), the patterns are stable and capable of confining cells over periods of up to several days. Protein coating by incubation (rather than stamping) has the advantage that the concentration of protein attached to the surface can easily be varied, either by adjusting the concentration of the protein solution $c_{sol}$ or the time of incubation $\tau$. The results of a series of experiments in which both parameters were systematically varied are shown in Fig.~\ref{fig:3}a. Here, for three different incubation times $\tau =\;$5, 15, 25$\;\mathrm{min}$, the concentration of fibrinogen Alexa-488 in the solution was also varied as $c_{sol} =\;$ 7.5, 15, 30, 75$\;\mathrm{\mu g/ml}$. For each combination of $\tau$ and $c_{sol}$, we evaluated the fluorescence intensities of over 1500 patterns from four experiments. The low standard deviation of the measured distribution indicates that very homogeneous and reproducible patterns were generated. The mean values show that an increase in the incubation time $\tau$ or the protein concentration in solution $c_{sol}$ leads to an increase in fluorescence intensity and thus, assuming a linear relationship between fluorescence intensity and protein density, in the surface density of protein. \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{Fig3} \caption{Controlling the surface concentration of the protein. (a) Protein density within patterns can be adjusted by varying both the concentration in the incubation solution $c_{sol}$ and the incubation time $\tau$.
We analyzed the fluorescence intensity of Alexa-488-labeled fibrinogen in micropatterns after incubation with $c_{sol}=\;$7.5, 15, 30 or 75\;$\mathrm{\mu g/ml}$ for 5, 15, and 25\;min. The data is well fitted by a Langmuir isotherm (Eq.~\ref{eq:langmuir}) with equilibrium constants $\alpha_{\tau}$ of $\alpha_{5}=\;$0.18\;$\pm$0.06, $\alpha_{15}=\;$0.31$\;\pm$0.16, and $\alpha_{25}=\;$0.39$\;\pm$0.18 for the different incubation times $\tau$, respectively ($\pm$ errors indicate 95\;\% confidence bounds of the fits). The inset shows the corresponding linear scaling in inverse presentation. Error bars indicate the standard deviation. (b) A gradient in the surface-bound protein density can be generated within the patterns by incubation in a protein concentration gradient formed in a chemotaxis chamber. Top down: 1. Formation of gradients on the chemotaxis slide. 2. Fluorescence image of micropatterned stripes obtained by incubation in a gradient of fibrinogen Alexa-488. 3. Measured intensity along the line shown in the middle panel.} \label{fig:3} \end{figure*} The adsorption behavior as a function of $c_{sol}$ is well fitted by the Langmuir expression for the adsorption isotherm \cite{Langmuir1918}: \begin{equation}\label{eq:langmuir} c_{surf}=c_{max}\times\frac{\alpha \times c_{sol}}{1+\alpha \times c_{sol}} \end{equation} Here, $c_{surf}$ denotes the surface concentration of the protein, $c_{max}$ the saturated surface concentration, and $\alpha$ the equilibrium constant in the case of Langmuir adsorption. Note, however, that in our experiments adsorption to, and desorption of protein from, the surface depend on the time of incubation and hence are clearly not in equilibrium (at least not for $\tau =\;$ 5 and $15\;\mathrm{min}$, as $c_{surf}$ increases further for longer incubation times $\tau$). The data suggests, though, that an equilibrium regime may be asymptotically reached for longer incubation times $\tau$.
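As a quick numerical illustration of the Langmuir expression above (a sketch, not part of the original analysis), the following Python snippet evaluates the isotherm with $c_{max}$ normalized to 1, using the fitted equilibrium constants $\alpha_{\tau}$ quoted in the caption; the resulting values are relative coverages, not measured intensities.

```python
def langmuir(c_sol, alpha, c_max=1.0):
    """Langmuir isotherm: surface concentration c_surf as a function of the
    incubation concentration c_sol and the equilibrium constant alpha."""
    return c_max * alpha * c_sol / (1.0 + alpha * c_sol)

# fitted equilibrium constants alpha_tau for tau = 5, 15, 25 min (from the text)
alphas = {5: 0.18, 15: 0.31, 25: 0.39}

# relative coverage c_surf / c_max at the incubation concentrations
# used in the experiments (in ug/ml)
coverage = {tau: [round(langmuir(c, a), 3) for c in (7.5, 15, 30, 75)]
            for tau, a in alphas.items()}
```

The model reproduces the qualitative behavior reported: coverage increases monotonically with $c_{sol}$ and with the (time-dependent) constant $\alpha_{\tau}$, and saturates at $c_{max}$ for large $c_{sol}$.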
Still, in the context of this paper, we use the Langmuir expression solely as an estimate for the adsorption behavior for different incubation concentrations. The dependence of $c_{surf}$ on $c_{sol}$ can be exploited to generate gradients in the density of the surface-bound protein within the micropatterns. To this end, we used a chemotaxis chamber to create a gradient in the concentration of the protein solution $c_{sol}$ in the protein incubation step of $\mathrm{\mu PIPP}$. Using this method, we succeeded in creating gradients in the density of the surface-bound proteins within the micropatterns, as can be seen in Fig.~\ref{fig:3}b. Such gradients in the concentration of proteins like fibronectin or vascular endothelial growth factor are known to have a guiding effect on cell migration and could therefore be exploited to orchestrate cell motion within the micropatterns \cite{Smith2004,Liu2007}. After the pattern and a surface gradient have been prepared, the chemotaxis chamber can be used to set up an additional gradient of protein in the solution bathing the attached cells. In such a setup, the guidance cues of a gradient in the surface-bound protein density, a soluble protein gradient, and the cues provided by the micropattern are combined. Since \textit{in vivo} cells are often confronted with such multi-cue situations, their implementation \textit{in vitro} is a useful tool for cell sciences \cite{Rodriguez2013}. In addition to single protein patterns, the technique is also capable of forming patterns consisting of three components. Such multicomponent patterning can be achieved by combining $\mathrm{\mu PIPP}$ with $\mathrm{\mu CP}$. A simple and versatile implementation of this combination is available for all geometries that are designed in such a way that the PDMS stamp used provides enclosed cavities (Fig.~\ref{fig:1}b). 
Using such geometries, the embossed parts of the stamp are directly used for $\mathrm{\mu CP}$, whereas the enclosed cavities, which shield the surface from plasma without direct stamp contact, allow for protein coating according to the standard $\mu$PIPP protocol. In this way, complex multicomponent patterns, such as those depicted in Fig.~\ref{fig:4}a, can be created. Passivation with PLL-PEG is not essential in this procedure, and patterns consisting of three different types of proteins are possible as well (Fig.~\ref{fig:4}c). In contrast to iterative methods of creating patterns of multiple components, creating all functionalizations with the aid of the same stamp (and in a single working iteration) has the advantage that the individual components can be placed directly adjacent to each other, so that their relative positioning can be accurately controlled. Precise interfaces and pattern geometries consisting of multiple coatings can therefore be guaranteed without the potentially problematic step of aligning each new pattern precisely with the previous one. Note that $\mu$PIPP can also be combined with $\mathrm{\mu CP}$ in a successive manner (see Section~S4 of the Supplementary Material). This alternative way of combining both techniques complements the protocol described above and is able to produce multicomponent patterns such as ``dashed'' stripe patterns similar to the ones created via stamp-off protocols \cite{Desai2011,Rodriguez2014} (see Fig.~S4 of the Supplementary Material). \begin{figure*}[ht!] \centering \includegraphics[width=1\linewidth]{Fig4} \caption{Multicomponent patterning. By combining $\mu$PIPP with $\mu$CP, patterns consisting of three different functionalizations can be formed. (a) A complex pattern consisting of PLL-PEG (black) and fibrinogen labeled with Alexa-488 (green) and Alexa-647 (red), respectively. (b) Fluorescence image of patterns composed of fluorescently labeled fibronectin (green) and laminin (red) (top row).
A representative confocal fluorescence image of the actin cytoskeleton of HUVECs arranged in such patterns (middle row), and a heat map of the actin cytoskeleton distribution of cells on over 20 evaluated patterns (bottom row). (Note that all three patterns were created with the multicomponent pattern protocol; hence the inner portions of the framed squares are the only regions available for passive protein attachment by incubation.) (c) Framed circle pattern consisting of fibrinogen labeled with Alexa-488 (green), Alexa-647 (red) and Alexa-594 (yellow), respectively.} \label{fig:4} \end{figure*} By seeding cells on such multicomponent patterns, cellular responses to different surface coatings can be directly compared. As a proof of principle, we studied cell adhesion on ``framed'' patterns consisting of a square area coated with one kind of protein surrounded by a rim area coated with a second type of protein. We chose a standard passivation with PLL-PEG and the extracellular matrix proteins fibronectin and laminin-1, both of which are known to play a role in cell adhesion \cite{Carlsson1981,Clyman1990}. We found the adhesion of HUVECs to be strongly affected by the different protein coatings within the patterns. The cells were well confined within the framed squares. Within the patterns, however, they avoided the parts coated with laminin-1, while adhering to the fibronectin-coated areas (Fig.~\ref{fig:4}b). Such decreased or increased adhesion on laminin compared to other proteins has been reported before \cite{Eichinger2012,Carlsson1981}. In our setup, the preference for fibronectin over laminin depended neither on the pattern geometry nor on which of the proteins was printed and which was applied by incubation, as is evident from a comparison of columns 2 and 3 in Fig.~\ref{fig:4}b. This suggests that, in this method, the effects of the proteins on cell adhesion do not depend on the way they are affixed to the surface.
\section*{Conclusion} In this work, we have described $\mu$PIPP, a novel and simple technique for the fabrication of micropatterned protein-coated surfaces for cell studies. As shown, $\mu$PIPP is compatible with various substrates and proteins typically used in cell research. The concentration of protein adsorbed to the surface can be readily controlled, and gradients in the density of surface-bound proteins can be formed. Both parameters are known to influence cell spreading and migration \cite{Maheshwari1999,Maheshwari2000,Rajagopalan2004,Smith2004,Liu2007}. Since an additional gradient can be set up in the liquid medium bathing the cells with the aid of the chemotaxis chamber, multi-cue situations can be produced, in which the synergy or competition between surface and solution gradients can be studied within the defined geometries provided by a micropattern. Furthermore, in combination with $\mu$CP, patterns consisting of three different components can be generated. The fact that the deposition areas of all three components emerge from the design of a single stamp brings the advantage of high accuracy in the relative positioning of all components while still maintaining a relatively simple protocol. This set of patterning techniques thus permits complex microenvironments to be created and allows for direct comparisons of the impact of different surface functionalizations on cell adhesion and migration. Due to the simplicity and versatility of the protocol, it should find wide application as a micropatterning tool in cell science labs. \section*{Acknowledgements} Financial support from the Deutsche Forschungsgemeinschaft (DFG) via Projects B01 and B08 in Sonderforschungsbereich (SFB) 1032, and from the European Union's Seventh Framework Programme (FP7) for Research (Project NanoMILE) is gratefully acknowledged.
\section{Introduction} \onehalfspacing In modern data analysis tasks, a model with good prediction accuracy is typically not sufficient; in high-stakes data-driven decision making it is often necessary to obtain a model that is also interpretable. Recently, many influential articles have advocated the need for statistical procedures that retain a certain degree of interpretability despite their high prediction accuracy; see e.g. the relevant discussions in \cite{rudin2019stop}, \cite{murdoch2019definitions} and \cite{rudin2022interpretable}. The issue of interpretability is particularly important for the analysis of high-dimensional data, where the number of predictors $ p $ is much greater than the number of samples $ n $ ($ p \gg n $), and parsimonious models in which only a small subset $ t \ll p $ of the predictors are included are preferred. For example, in the analysis of deoxyribonucleic acid (DNA) microarrays, the expression levels for thousands of genes are collected. A valuable model would be one with high prediction accuracy, but also one in which only a small subset of genes are identified as relevant to predict the outcome of interest. To address the problem of prediction accuracy and interpretability in the presence of high-dimensional data, sparse regularization methods have been developed over the last three decades. In essence, sparse regularization methods optimize the goodness-of-fit of a single model while penalizing its complexity, resulting in an interpretable model with good prediction accuracy. Many regularization methods have been developed for a large class of statistical models; see e.g. \cite{hastie2019statistical} for an extensive and modern treatment. While prediction has always been a subject of importance for sparse regularization methods, there is a much stronger emphasis on inference and on uncovering the true mechanism by which the output is generated as a function of the predictor variables.
A deep theoretical treatment of sparse regularization methods, in terms of estimation and variable selection, may be found in \cite{buhlmann2011statistics}. While sparse regularization methods have well-established statistical theory and result in interpretable models, they are often outperformed in terms of prediction accuracy by ensemble methods. Ensemble methods, where multiple diverse models are generated and aggregated, are some of the most popular ``black-box'' algorithms for the analysis of high-dimensional data. They have led to a plethora of successful applications in genetics (see e.g. \cite{dorani2018ensemble, genes11070819}), computer vision (see e.g. \cite{rodriguez2012assessment, yu2015image}), speech recognition (see e.g. \cite{krajewski2010comparing, rieger2014speech}), fraud detection (see e.g. \cite{kim2012stock, louzada2012bagging}), and many other fields. Diverse members are essential for the good predictive performance of ensembles \citep{brown2005managing}. Current state-of-the-art methods rely on randomization or the sequential refitting of residuals to achieve diversity, which results in a large number of uninterpretable models with poor individual prediction accuracy. In this article, we introduce a unifying framework that combines the interpretability of sparse regularization methods with the high prediction accuracy of ensemble methods. In particular, we generalize sparse regression methods to multi-model regression ensembles. The proposed methodology results in ensembles comprised of a small number of sparse and diverse models, learned jointly from the data, that each have a high prediction accuracy. Thus each of the models in an ensemble provides a possible explanation for the relationship between the predictors and the response. Further, the ensembles achieve high prediction accuracy compared to state-of-the-art ensemble methods. To convey our ideas, we focus on regression ensembles. 
The remainder of this article is organized as follows. In Section \ref{sec:literature} we provide a literature review of sparse and ensemble methods. In Section \ref{sec:BSpS} we introduce the unifying framework between sparse and ensemble methods. In Section \ref{sec:stepwise} we generalize stepwise regression to multi-model ensembles, which will constitute the initialization procedure for the algorithm of Section \ref{sec:PSGD}. In Section \ref{sec:PSGD} we introduce a projected subsets gradient descent algorithm, adapting $ \ell_0 $ optimization approaches to multi-model regression ensembles. In Section \ref{sec:simulation} we perform an extensive simulation study to benchmark the proposed methodology against state-of-the-art methods. In Section \ref{sec:eye} we benchmark the proposed methodology on a gene expression data application. Section \ref{sec:discussion} closes the article with a discussion. \section{Literature Review: Sparse and Ensemble Methods} \label{sec:literature} We study the linear model with a dataset consisting of a response vector $\mathbf{y}=(y_{1},\dots ,y_{n})^T \in \mathbb{R}^n $ and a design matrix $ \bX \in \mathbb{R}^{n \times p} $ comprised of $ n $ observations $ \bx_i \in \mathbb{R}^p $ for $ p $ predictors, \begin{equation*} y_{i} = \mathbf{x}_{i}^{T} \boldsymbol{\beta}_{0} + \sigma \epsilon_{i}, \quad 1\leq i \leq n, \end{equation*} where $ \bbet_{0} \in \mathbb{R}^p$ and the elements of the noise vector $\boldsymbol{\epsilon} = (\epsilon_1, \dots, \epsilon_n)^T \in \mathbb{R}^n$ have variance 1. We are interested in the high-dimensional scenario where $ p \gg n $ and the underlying model is sparse, i.e. only a small fraction of the available predictors are relevant for explaining the response. For simplicity, we omit the intercept term from the regression model. 
We assume that the response $ \by $ and the entries of the design matrix, $x_{ij}$, $ 1 \leq i \leq n$ and $1 \leq j \leq p$, are standardized so that \begin{align*} \frac{1}{n}\sum\limits_{i=1}^{n}x_{ij}=0,\quad \frac{1}{n}\sum\limits_{i=1}^{n}x_{ij}^{2}=1,\quad 1\leq j \leq p, \quad \frac{1}{n}\sum\limits_{i=1}^{n}y_{i}=0, \quad \frac{1}{n}\sum\limits_{i=1}^{n}y_{i}^{2}=1. \end{align*} \subsection{Sparse Methods} Sparse regularization methods penalize model complexity. The purpose of such methods is to find the best sparse model that achieves good prediction accuracy. The most natural approach for sparse modeling is Best Subset Selection (BSS), first mentioned in the literature by \cite{garside1965best}, which solves the nonconvex problem \begin{align} \label{eq:BSS} \min_{\bbet \in \mathbb{R}^p} \lVert \by -\bX \bbet \rVert_2^2 \quad \text{subject to} \quad \lVert \bbet \rVert_0 \leq t. \end{align} The maximum number of nonzero coefficients $ t \leq \min(n-1, p) $ in the regression coefficients vector $ \bbet = (\beta_1, \beta_2, \dots, \beta_p)^T $ is typically determined in a data-driven way, e.g. by cross-validation (CV). While BSS has been shown to have desirable variable selection and estimation properties (see e.g. \cite{bunea2007aggregation, shen2013constrained}), it is an NP-hard problem \citep{welch1982algorithmic}. There are \begin{align} \mathcal{K}(p, t) = \sum_{j=0}^{t} \binom{p}{j} \end{align} possible subsets that must be evaluated to determine the exact solution. For example, $\mathcal{K}(15, 10) = $ 30,827, which is already a large number of subsets, even in this setting with a small number of predictor variables. While many proposals have been made to determine the optimal subset based on the training data \citep[see e.g.][]{mallows1973some,akaike1974new,schwarz1978estimating}, CV is often recommended \citep{hastie2009model}, which makes the procedure even more computationally intensive. 
The branch-and-bound algorithm developed by \cite{furnival1974regressions} was initially the procedure of choice for BSS, but it did not scale well beyond $ p>30 $. While an improved branch-and-bound algorithm was developed by \cite{gatu2006branch}, the method still does not scale well beyond $ p> 100 $. To address the computational infeasibility of BSS, stepwise algorithms were developed (see e.g. \cite{jennrich1968application, pope1972use, bendel1977comparison}). At each step a variable is added (forward selection) and/or removed (backward elimination) from a subset of model predictors based on model goodness-of-fit, until no further step can improve the model to a statistically significant extent. Stepwise algorithms have been heavily criticized and are regarded as a form of data dredging with poor model selection properties (see e.g. \cite{rencher1980inflation, wilkinson1981tests, copas1983regression, hurvich1990impact, roecker1991prediction}). To address the shortcomings of stepwise procedures, sparse regularization methods were subsequently popularized, first by basis pursuit denoising \citep{chen1994basis} and then by the closely related Lasso \citep{tibshirani1996regression}, a convex relaxation of BSS which solves problems of the form \begin{align} \min_{\bbet \in \mathbb{R}^p} \lVert \by -\bX \bbet \rVert_2^2 \quad \text{subject to} \quad \lVert \bbet \rVert_1 \leq t, \end{align} or in its Lagrangian form \begin{align} \min_{\bbet \in \mathbb{R}^p} \lVert \by -\bX \bbet \rVert_2^2 + \lambda \lVert \bbet \rVert_1. \end{align} Many different sparse regularization methods have been proposed (see e.g. \cite{zou2005regularization, candes2007dantzig, zhang2010nearly}). Efficient convex solvers have been developed for the Lasso (see e.g. \cite{efron2004least, friedman2007pathwise}), however restrictive conditions on the covariance of the predictors must hold for the Lasso to have good variable selection properties (see e.g. 
\cite{zhao2006model}) and good relative prediction error compared to BSS (see e.g. \cite{zhang2014lower}). Recent developments by \cite{bertsimas2016best} to study the BSS nonconvex problem \eqref{eq:BSS} with a modern optimization lens have led to new research avenues in $ \ell_0 $-penalized statistical procedures (see e.g. \cite{bertsimas2020sparse, takano2020best, kenney2021mip, thompson2022robust}). \cite{bertsimas2016best} first generate good local solutions with a projected gradient descent algorithm, and then use them as warm-starts for a mixed integer optimization (MIO) solver. \cite{thompson2022robust} adapted their approach to develop a robust version of BSS. Their approach scaled to problems of dimension $ p > $ 1,000, but even with the warm-starts the MIO solver may still take upwards of 30 minutes to compute. \cite{hazimeh2020fast} proposed a method that generates new candidate solutions from locally optimal ones in order to reduce the need for a MIO solver. Once a local (incumbent) solution has been obtained from a projected gradient descent algorithm, they apply small perturbations to the local solution to yield new starting points and apply the projected gradient descent algorithm to each one of them. If the best solution obtained from the new candidates improves on the incumbent solution, it is set as the new solution. This process is repeated until no significant difference occurs for the objective function. They showed empirical evidence that their proposal often recovers either the optimal or a near-optimal solution to BSS for $ p> $ 1,000 in a matter of seconds. To explain the intuition for the potential reduction in prediction error of sparse regularization methods, consider an estimator $\hat{f}(\bx)=\hbbet^T\bx$ of the regression function $ f(\bx)= \bbet_0^T \bx $. 
The mean squared prediction error (MSPE) of $\hat{f}$ may be decomposed into its bias, variance and irreducible error, \begin{align} \label{eq:MSPE} \text{MSPE}\left[\hat{f}\right] = \mathbb{E}_{\bx}\left[(f(\bx)-\hat{f}(\bx))^2\right] + \sigma^2 = \text{Bias}\left[\hat{f}\right]^2 + \text{Var}\left[\hat{f}\right] + \sigma^2. \end{align} Since least squares regression is the best linear unbiased estimator (BLUE), the rationale for regularized estimation is to exploit the bias-variance trade-off favorably, i.e. to incur a small increase in bias in exchange for a larger decrease in variance. \subsection{Ensemble Methods} To understand the competitive advantage of ensemble methods in terms of prediction accuracy, we first decompose their MSPE. For an ensemble comprised of $ G $ regression functions, $\bar{f} = \sum_{g=1}^G \hat{f}_g/G$, the MSPE can be decomposed as \begin{align} \text{MSPE}\left[\bar{f}\right] = \text{Bias}\left[\bar{f}\right]^2 + \text{Var}\left[\bar{f}\right] + \sigma^2, \label{eq:MSPE_ensemble} \end{align} where \begin{align} \text{Bias}\left[\bar{f}\right] = \overline{\text{Bias}} \quad \text{and} \quad \text{Var}\left[\bar{f}\right] = \frac{1}{G} \, \overline{\text{Var}} + \frac{G-1}{G} \, \overline{\text{Cov}}, \label{eq:MSPE_variance} \end{align} and $ \overline{\text{Bias}} $, $ \overline{\text{Var}} $ and $ \overline{\text{Cov}} $ are the average biases, variances and pairwise covariances of the $ G $ regression functions in the ensemble \citep{ueda1996generalization}. From~\eqref{eq:MSPE_variance} it is clear that an ensemble can successfully reduce its variance if the models in the ensemble are sufficiently diverse (uncorrelated), especially if the number of models is large. 
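The variance decomposition in~\eqref{eq:MSPE_variance} is easy to verify by simulation. The following Python snippet (an illustration only, not part of our software) draws $G$ unbiased predictions with unit variance and pairwise correlation $\rho$, so that $\overline{\text{Var}} = 1$ and $\overline{\text{Cov}} = \rho$, and compares the Monte Carlo variance of the ensemble average with the formula $\frac{1}{G}\overline{\text{Var}} + \frac{G-1}{G}\overline{\text{Cov}}$:

```python
import random
import statistics

def ensemble_variance_mc(G, rho, n_rep=50_000, seed=1):
    """Monte Carlo estimate of Var[f_bar] for G unbiased predictions with
    unit variance and pairwise correlation rho (shared-noise construction)."""
    rng = random.Random(seed)
    a = rho ** 0.5          # weight of the shared noise component
    b = (1 - rho) ** 0.5    # weight of each model's own noise
    bar_preds = []
    for _ in range(n_rep):
        z = rng.gauss(0, 1)  # shared noise induces correlation rho
        preds = [a * z + b * rng.gauss(0, 1) for _ in range(G)]
        bar_preds.append(sum(preds) / G)
    return statistics.variance(bar_preds)

G, rho = 10, 0.2
theory = 1.0 / G + (G - 1) / G * rho   # the decomposition in the text
print(round(ensemble_variance_mc(G, rho), 3), round(theory, 3))
```

For $G = 10$ and $\rho = 0.2$ the theoretical value is $0.28$: fully decorrelated members ($\rho = 0$) would drive the ensemble variance all the way down to $\overline{\text{Var}}/G$.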
The statistics and machine learning communities have seen an increase in algorithmic approaches to generate ensembles over the last twenty years, with most proposals relying on randomization \citep[see e.g.][]{RF, random_glm_paper} or boosting \citep[see e.g.][]{GBM, buhlmann2003boosting, boosting, yu2020submito}. Interpretation of such ensembles is typically infeasible. However, several ad hoc methods have been developed to assess predictor importance \citep[see e.g.][]{hastie2009boosting}. In an attempt to bridge the gap between interpretability and ensemble methods, \cite{buhlmann2006sparse} introduced sparse boosting by minimizing a penalized $\ell_2$-loss function for better variable selection. The purpose of these ensemble methods is to generate a collection of diverse models comprised of different subsets of predictors. For example, in Random Forests random sampling of the data (bagging) \citep{breiman1996bagging} and the random predictor subspace method \citep{amit1997shape, ho1998random, dietterich2000experimental} are combined to generate uncorrelated trees for the purpose of achieving a lower generalization error \citep{RF}. In gradient boosting, diverse members (typically decision trees) are generated by sequentially fitting the residuals of the previous fit. \subsection{A Unifying Methodology} The multiplicity of good models is a phenomenon that has long been acknowledged, see e.g. relevant discussions in \cite{mccullagh1989monographs} and \cite{mountain1989combined}. Different, yet equally good models can provide distinct explanations for the underlying relationship between predictors and response. However, based on the philosophy of \cite{rudin2019stop}, current state-of-the-art ensemble methods lack interpretability as they typically consist of either a large number of models generated using random-based approaches or a (smaller) number of models indirectly generated by sequentially fitting residuals instead of the data. 
The models in such ensembles do not have high prediction accuracy on their own; they only work well when they are pooled together in the final ensemble fit, so no individual model is insightful or reliable by itself. Hence, there is a gap between interpretable single-model methods such as sparse regularization and algorithmic ensemble methods. We aim to fill this gap by developing a systematic approach to construct ensembles consisting of a relatively small number of interpretable sparse models with high individual prediction accuracy. Each of these models is learned directly from the data and provides a reliable relationship between the predictors and the response. Diversity between the models is imposed by restricting the sharing of predictors between different models. \section{Best Split Selection} \label{sec:BSpS} We now formally introduce the unifying framework between sparse and ensemble methods. Suppose we wish to find a collection of $ G \geq 2$ sparse and diverse models in an ensemble. Denote the matrix of model coefficients \begin{align} \bbet_{1:G} = \begin{pmatrix} \beta_{1}^1 & \beta_{1}^2 & \dots & \beta_{1}^G \\ \beta_{2}^1 & \beta_{2}^2 & \dots & \beta_{2}^G \\ \vdots & \vdots & \ddots & \vdots \\ \beta_{p}^1 & \beta_{p}^2 & \dots & \beta_{p}^G \end{pmatrix}, \end{align} where $ \bbet_{1:G} \in \mathbb{R}^{p \times G} $ and $ \beta_j^g $ is the coefficient for predictor $ j $ of model $ g $, $ 1 \leq g \leq G $. For notational convenience let $ \bbet^g = (\beta_1^g, \beta_2^g, \dots, \beta_p^g)^T \in \mathbb{R}^p $ be the coefficients of model $ g $ and $ \bbet_{j\cdot} = (\beta_j^1, \beta_j^2, \dots, \beta_j^G)^T \in \mathbb{R}^G$ the coefficients of predictor $ j $ across the $ G $ models. 
Then Best Split Selection (BSpS) solves, for a fixed number of sparse models $ G $, the nonconvex problem \begin{align} \label{eq:BSpS} \min_{\bbet^1, \dots, \, \bbet^G \in \mathbb{R}^p} \sum_{g=1}^{G} \left\lVert\by - \bX \bbet^g\right\rVert_2^2 \quad \text{subject to} \quad \begin{cases} \lVert\bbet^g\rVert_0 \leq t, \, &1 \leq g \leq G, \\ \lVert\bbet_{j\cdot}\rVert_0 \leq u, \, & 1 \leq j \leq p. \end{cases} \end{align} The parameter $ t \leq \min(n-1, p) $ restricts the $ \ell_0 $-norm of the columns of $ \bbet_{1:G} $ and thus the number of nonzero coefficients in each model. The parameter $ u \leq G $ restricts the $ \ell_0 $-norm of the rows of $ \bbet_{1:G} $ and thus the number of models that share any given predictor. Note that if $ u=G $, then \eqref{eq:BSpS} is equivalent to BSS in \eqref{eq:BSS} for the same value of $ t $ and there is no diversity among the models. Hence BSpS may be seen as a generalization of BSS to multiple groups. The tuning parameters may be chosen in a data-driven manner by using CV for instance. BSpS thus aims to find $G$ sparse models in such a way that each model explains the response well and the different models do not have much overlap. In this way, the models complement each other well in an ensemble. While there are many proposals in the literature to obtain an optimal ensembling function (see e.g. \cite{breiman1996stacked}), for simplicity in this article the ensemble fit corresponding to the $G$ models selected by~\eqref{eq:BSpS} is given by \begin{align} \label{eq:ensemble_fit} \bbbet = \frac{1}{G} \sum_{g=1}^G \hbbet^g. \end{align} Hence, in contrast to algorithmic ensemble methods, but similarly to regularization methods, the ensemble model is an interpretable, sparse linear model. The ensemble model combines the information of the $G$ individual models, which individually provide an explanation for the relationship between a subset of the predictors and the response. 
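In matrix terms, the constraints in~\eqref{eq:BSpS} act on the columns and rows of $\bbet_{1:G}$, and the ensemble fit~\eqref{eq:ensemble_fit} averages its columns. A minimal Python sketch of these two ingredients (for illustration only; the function names are ours):

```python
def bsps_feasible(B, t, u):
    """Check the two BSpS support constraints on a p x G coefficient matrix B
    (list of rows): every column (model) has at most t nonzero coefficients,
    and every row (predictor) is nonzero in at most u models."""
    p, G = len(B), len(B[0])
    col_l0 = [sum(B[j][g] != 0 for j in range(p)) for g in range(G)]
    row_l0 = [sum(B[j][g] != 0 for g in range(G)) for j in range(p)]
    return max(col_l0) <= t and max(row_l0) <= u

def ensemble_coefficients(B):
    """Equal-weight ensemble fit: average each predictor's coefficient
    over the G models (columns of B)."""
    G = len(B[0])
    return [sum(row) / G for row in B]

# Two disjoint two-predictor models over p = 4 predictors (t = 2, u = 1).
B = [[1.0, 0.0],
     [0.5, 0.0],
     [0.0, 2.0],
     [0.0, 1.0]]
print(bsps_feasible(B, t=2, u=1))
print(ensemble_coefficients(B))
```

With $u = 1$ the supports of the two columns must be disjoint, which is the fully diverse case studied in Section \ref{sec:stepwise}.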
\subsection{Split Combinatorics} \label{sec:split_combinatorics} The total number of possible splits of $ p $ variables into $ G $ groups, for $ p \geq G $, was derived by \cite{christidis2020split}. We extend their combinatorics result to the BSpS optimization problem \eqref{eq:BSpS} for the case without overlap between the models ($ u=1 $). Note that the computational problem for BSpS is even larger if predictors are allowed to be shared between groups ($ u>1 $). Let $ p_g \geq 1 $ be the number of variables in group $ g $, $ 1 \leq g \leq G $, and let $q = \sum_{g=1}^G p_g$. Also let $h_i(p_{1},\dots,p_{G})$ be the number of elements in the sequence $p_{1},\dots,p_{G}$ that are equal to $ i $, $ 1 \leq i \leq t$. The number of possible splits of $p$ features into $G$ non-empty groups comprised of at most $ t $ variables each is given by \begin{align} \label{eq:total_splits} \mathcal{T}(p, G, t) =\sum_{1 \leq p_{1}\leq \cdots\leq p_{G}\leq t} \binom{p}{q}\left[\frac{q!}{p_{1}! \dots p_{G}!} \prod_{i=1}^{t}\frac{1}{h_i(p_{1},\dots,p_{G})!}\right]. \end{align} For example, $\mathcal{T}(15,3,10) =$ 171,761,941. Thus, even for a relatively small number of predictor variables, the issue of computational infeasibility of BSpS becomes apparent, and it will be magnified further if $ t $ and $ u $ in \eqref{eq:BSpS} are chosen by CV. The BSS optimization problem in \eqref{eq:BSS} can always make use of MIO to generate global solutions using locally optimal solutions as initial candidates (see e.g. \cite{bertsimas2016best, thompson2022robust}). The computational infeasibility of BSpS, as seen from the combinatorics in \eqref{eq:total_splits}, eliminates this approach for \eqref{eq:BSpS}. \subsection{Related Work} \label{sec:multi_convex} In related work, \cite{christidis2020split} recently introduced the Split Regularized Regression (SplitReg) method, which can be seen as a computationally more attractive, multi-convex relaxation of BSpS. 
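Returning to the combinatorics of Section \ref{sec:split_combinatorics}, both $\mathcal{K}(p,t)$ and $\mathcal{T}(p,G,t)$ can be evaluated directly with a short standard-library Python sketch (an illustration only):

```python
from itertools import combinations_with_replacement
from math import comb, factorial
from collections import Counter

def n_subsets(p, t):
    """K(p, t): number of candidate subsets for best subset selection."""
    return sum(comb(p, j) for j in range(t + 1))

def n_splits(p, G, t):
    """T(p, G, t): number of splits of p predictors into G disjoint
    non-empty models of size at most t each, following the formula in
    the text (sum over non-decreasing size sequences p_1 <= ... <= p_G)."""
    total = 0
    for sizes in combinations_with_replacement(range(1, t + 1), G):
        q = sum(sizes)
        if q > p:
            continue
        multinom = factorial(q)          # q! / (p_1! ... p_G!)
        for s in sizes:
            multinom //= factorial(s)
        dup = 1                          # prod_i h_i! corrects for equal sizes
        for c in Counter(sizes).values():
            dup *= factorial(c)
        total += comb(p, q) * multinom // dup
    return total

print(n_splits(15, 3, 10))
```

Even though only 220 size sequences contribute for $p=15$, $G=3$, $t=10$, the resulting count is already in the hundreds of millions, which makes exhaustive enumeration hopeless.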
While hard thresholds are used in BSpS in~\eqref{eq:BSpS}, soft thresholds are used in SplitReg, which can be incorporated in the objective function more easily. In detail, SplitReg is a minimizer $ \bbet_{1:G} = (\bbet^1, \dots, \bbet^G) \in \mathbb{R}^{p \times G} $ of an objective function of the form \begin{align} \label{eq:splitreg} \mathcal{J}\left(\by, \bX, \bbet^1, \dots, \bbet^G \right) = \sum_{g=1}^G \left\{\frac{1}{2n} \lVert \by - \bX \bbet^g \rVert_2^2 + \lambda_s P_s\left( \bbet^g \right) + \frac{\lambda_d}{2} \sum_{\substack{h=1 \\ h\neq g}}^G P_d\left(\bbet^g, \bbet^h\right) \right\} \end{align} where $ P_s $ and $ P_d $ are sparsity and diversity penalty functions, and the constants $\lambda_s, \lambda_d >0$, which may be chosen e.g. by CV, control the magnitude of the effect of the sparsity and diversity penalties. \cite{christidis2020split} propose to use as sparsity and diversity penalties \begin{align} P_s(\bbet) = \lVert \bbet \rVert_1 \quad \text{ and } \quad P_d(\bbet^g, \bbet^h) = \sum_{j=1}^p \lvert \beta_j^g \rvert \lvert \beta_j^h \rvert. \label{eq:splitreg_penalties} \end{align} Hence, $P_s(\bbet)$ is the Lasso penalty while the diversity penalty $P_d(\bbet^g, \bbet^h)$ is an $\ell_1$-norm relaxation of the hard threshold in~\eqref{eq:BSpS}. Note that for $\lambda_d=0$, SplitReg in \eqref{eq:splitreg} is equivalent to sparse estimation with the penalty $ P_s $, irrespective of the number of groups $G$. The ensemble fit corresponding to the solution of~\eqref{eq:splitreg} is again given by \eqref{eq:ensemble_fit}. \citet{christidis2020split} showed that with the penalties in~\eqref{eq:splitreg_penalties} the ensemble estimator yields consistent predictions and has a fast rate of convergence. 
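Since, for fixed values of the other models, the total penalty seen by coordinate $j$ of model $g$ in~\eqref{eq:splitreg} is $\lambda_s + \lambda_d \sum_{h \neq g} \lvert \beta_j^h \rvert$, each block update is a weighted Lasso problem that can be solved by coordinate-wise soft-thresholding. A minimal Python sketch of such a block coordinate descent (a simplified illustration of the idea, not the implementation of \cite{christidis2020split}):

```python
import numpy as np

def soft_threshold(z, gam):
    # sign(z) * max(|z| - gam, 0)
    return np.sign(z) * max(abs(z) - gam, 0.0)

def splitreg_cd(X, y, G=2, lam_s=0.1, lam_d=1.0, n_iter=100):
    """Block coordinate descent for the SplitReg objective: coordinate j of
    model g is soft-thresholded at lam_s + lam_d * sum_{h != g} |beta_j^h|."""
    n, p = X.shape
    B = np.zeros((p, G))
    for _ in range(n_iter):
        for g in range(G):
            for j in range(p):
                r = y - X @ B[:, g] + X[:, j] * B[j, g]   # partial residual
                rho = float(X[:, j] @ r) / n
                pen = lam_s + lam_d * (np.abs(B[j, :]).sum() - abs(B[j, g]))
                B[j, g] = soft_threshold(rho, pen) / (float(X[:, j] @ X[:, j]) / n)
    return B

rng = np.random.default_rng(0)
n, p = 400, 4
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.standard_normal(n)
B0 = splitreg_cd(X, y, lam_d=0.0)              # no diversity penalty
B5 = splitreg_cd(X, y, lam_s=0.5, lam_d=5.0)   # strong diversity penalty
```

With $\lambda_d = 0$ the two models coincide with a single Lasso fit, whereas a large $\lambda_d$ makes the second model avoid the predictors already used by the first, illustrating the soft version of the row constraint in~\eqref{eq:BSpS}.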
Moreover, the general framework \eqref{eq:splitreg} allows the diversity penalty to be combined with alternative sparsity penalties such as the group Lasso \citep{yuan2006model} for categorical variables or the fused Lasso \citep{tibshirani2005sparsity} for data containing spatial or temporal structures. The SplitReg objective function is multi-convex and can be solved efficiently via a block coordinate descent algorithm. Unlike the parameters $ t $ and $ u $ in BSpS, the parameters $ \lambda_s $ and $ \lambda_d $ do not directly control the number of predictors in each model and the number of models that can share any given predictor. There is theoretical and empirical evidence that such sparse regularization methods can negatively affect variable selection (see e.g. \cite{van2009conditions, hazimeh2020fast}). Further, the penalties \eqref{eq:splitreg_penalties} in SplitReg induce shrinkage of the coefficients, and empirical studies have suggested that shrinkage may have a negative effect on prediction in high signal-to-noise scenarios (see e.g. \cite{hastie2020best}). We develop computational tools to directly optimize BSpS in \eqref{eq:BSpS} and alleviate the issues associated with shrinkage-inducing methods. In Section \ref{sec:stepwise} we generalize forward stepwise regression to optimize BSpS in \eqref{eq:BSpS} for the special case $ u=1 $, which constitutes an initial starting point for our main algorithm. In Section \ref{sec:PSGD} we develop a computing algorithm capable of handling any $ u \in \{1, \dots, G\} $. \section{Stepwise Split (Regularized) Regression} \label{sec:stepwise} In this section, we generalize forward stepwise regression to multi-model regression ensembles. Our aim is to develop a fast algorithm that generates solutions for BSpS in \eqref{eq:BSpS} in the particular case that $ u=1 $ (i.e. when models are fully disjoint), which will constitute the initial starting point for our main algorithm in Section \ref{sec:PSGD}. 
For notational convenience, for any subset $ S \subseteq \{1, \dots, p\}$ we denote by $ |S| $ the cardinality of the set $ S $, by $ \bbet_S \in \mathbb{R}^{|S|}$ the subvector of $ \bbet \in \mathbb{R}^p$ with element indices $ S $, and by $ X_S \in \mathbb{R}^{n \times |S|}$ the submatrix of $ X $ with column indices $ S $. We denote by $ I_n \in \mathbb{R}^{n \times n}$ the identity matrix of order $ n $ and by $ F_{(d_1, d_2)}(t) $ the cumulative distribution function of the $ F $-distribution with $ d_1 $ and $ d_2 $ degrees of freedom. The stepwise split (regularized) regression algorithm is described in detail in Algorithm \ref{alg:stepwise_algo}. Initially, each model is comprised of no predictor variables. At each iteration of step 1, the candidate predictor that provides the largest improvement in goodness-of-fit to each unsaturated model is identified. The unsaturated model with the most statistically significant possible goodness-of-fit improvement based on an $ F $-test (at some level $ \gamma \in [0,1] $) is updated by adding its optimal candidate predictor to its set of model predictors. Once a predictor is included in a set of model predictors, it is removed from the set of candidate predictors and can no longer be used in another model. A model is declared saturated if there are no remaining candidate predictors providing any statistically significant improvement to the goodness-of-fit, or if it contains $ n-1 $ predictors. This process is repeated until all $ G $ models are saturated. A least squares or Lasso fit is then applied to each model in step 2. If the Lasso is the fitting method of choice, each model is shrunk by a custom parameter $\lambda^{(g)}$ chosen by CV. For completeness we include the stepwise procedure in Algorithm \ref{alg:stepwise_algo}, which we refer to as Step-SplitReg, in the simulation study and real data applications of Sections \ref{sec:simulation} and \ref{sec:eye} respectively. 
Based on our numerical experiments, a Lasso fit to each model in step 2 produces an ensemble with better prediction accuracy and better variable selection compared to using a least squares fit, and so Step-SplitReg results are reported for the case where a Lasso fit is used for each model. \begin{algorithm}[h!] \caption{Stepwise Split (Regularized) Regression \label{alg:stepwise_algo}} \begin{algorithmic}[1] \Require{Design matrix $\bX \in \mathbb{R}^{n \times p}$, response vector $\by \in \mathbb{R}^n$, number of models $ G \geq 2 $, and significance threshold $ \gamma \in [0,1] $.} \vspace{0.2cm} \Ensure{The set of candidate predictors $ J = \{1, \dots, p\} $. For each model ($ 1 \leq g \leq G $) the set of predictors $ J^{(g)} = \emptyset $, the model saturation indicator $ T^{(g)} = \textsc{false} $ and the hat matrix $ H^{(g)} = \mathbf{0} \in \mathbb{R}^{n\times n}$. } \Statex \State Repeat the following steps until $ T^{(g)}= \textsc{true} $ for all $ 1 \leq g \leq G $: \begin{enumerate}[label*=\footnotesize 1.\arabic*:] \item For each model $ g $ satisfying $ T^{(g)}=\textsc{false} $: \begin{enumerate}[label*=\footnotesize \arabic*:] \item Identify the candidate predictor maximizing the decrease in residual sum of squares, \begin{align*} j^{(g)} = \argmax_{j \in J} \tau_j^{(g)}, \quad \tau_j^{(g)} = \frac{\left(\by^T\left(I_n-H^{(g)}\right) \bx_j\right)^2}{\bx_j^T\left(I_n-H^{(g)}\right) \bx_j}. \end{align*} \item Calculate the $ p $-value $ \gamma^{(g)} $ of predictor $ j^{(g)} $ in the enlarged model, \begin{align*} \gamma^{(g)} = 1 - F_{\left(1,\, n-|J^{(g)}|-1\right)}\left(\frac{\left(n-|J^{(g)}|-1\right)\tau_{j^{(g)}}^{(g)}}{\by^T\left(I_n - H^{(g)}\right)\by - \tau_{j^{(g)}}^{(g)}}\right). \end{align*} \item If $ \gamma^{(g)} \geq \gamma$ set $T^{(g)}=\textsc{true} $. \end{enumerate} \item Identify the unsaturated model $ g^* $ with the smallest $ p $-value $ \gamma^{(g^*)} $. 
\item If $ \gamma^{(g^*)} < \gamma $: \begin{enumerate}[label*=\footnotesize \arabic*:] \item Update the candidates $ J = J \setminus \{j^{(g^*)}\} $ and the set of model predictors $ J^{(g^*)} = J^{(g^*)} \cup \{j^{(g^*)}\} $. \item If $ |J^{(g^*)}| = n-1 $, set $T^{(g^*)}=\textsc{true} $. Otherwise, update the model hat matrix \begin{align*} H^{(g^*)} = X_{J^{(g^*)}} \left(X_{J^{(g^*)}}^T X_{J^{(g^*)}}\right)^{-1} X_{J^{(g^*)}}^T. \end{align*} \end{enumerate} \end{enumerate} \State For each model $ g $ $ (1 \leq g \leq G) $, set $ \hbbet^g = \mathbf{0}_p \in \mathbb{R}^p $ and if (custom) regularization is applied, \begin{align*} \hbbet_{J^{(g)}}^g = \argmin_{\bbet \in \mathbb{R}^{|J^{(g)}|}} \lVert \by -\bX_{J^{(g)}} \bbet \rVert_2^2 + \lambda^{(g)} \lVert \bbet \rVert_1, \end{align*} where $\lambda^{(g)}$ is chosen by CV. Otherwise, \begin{align*} \hbbet_{J^{(g)}}^g = \left(X_{J^{(g)}}^T X_{J^{(g)}}\right)^{-1} X_{J^{(g)}}^T \by. \end{align*} \State Return the sets of model predictors $ J^{(g)} $ and their coefficients $ \hbbet^{g} $, $ 1 \leq g \leq G $. \end{algorithmic} \end{algorithm} An \texttt{R}/\texttt{C++} library implementing Algorithm \ref{alg:stepwise_algo} is available on CRAN \citep{CRAN} under the name \texttt{stepSplitReg} \citep{stepSplitReg}. Many more features than those described in Algorithm \ref{alg:stepwise_algo} are available, including different model saturation criteria and different regularization procedures for the final sets of model predictors. While the stacking procedure of \cite{breiman1996stacked} is available in the package, in this article we focus on the simple case where each model is weighted equally using \eqref{eq:ensemble_fit}. A reference manual with the complete details of the package is available at \url{https://CRAN.R-project.org/package=stepSplitReg}. 
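To make the structure of step 1 of Algorithm \ref{alg:stepwise_algo} concrete, the following Python sketch implements a simplified version of the greedy growth of $G$ disjoint models (a fixed per-model size cap $t$ replaces the $F$-test stopping rule, and there is no step 2 fit; this is an illustration, not the \texttt{stepSplitReg} implementation):

```python
import numpy as np

def stepwise_split(X, y, G=2, t=2):
    """Greedily grow G disjoint models: at each step, add the candidate
    predictor with the largest exact RSS decrease to the best unsaturated
    model, then remove it from the candidate pool."""
    n, p = X.shape
    candidates = set(range(p))
    supports = [[] for _ in range(G)]
    while candidates and any(len(s) < t for s in supports):
        best = None  # (rss_decrease, model, predictor)
        for g in range(G):
            if len(supports[g]) >= t:
                continue
            if supports[g]:
                Xg = X[:, supports[g]]
            else:
                Xg = None
            for j in candidates:
                xj = X[:, j]
                if Xg is not None:
                    # residualize x_j against the model's current predictors
                    coef, *_ = np.linalg.lstsq(Xg, xj, rcond=None)
                    xj_r = xj - Xg @ coef
                else:
                    xj_r = xj
                denom = float(xj_r @ xj_r)
                if denom < 1e-10:
                    continue
                dec = float(xj_r @ y) ** 2 / denom  # RSS decrease (tau in the text)
                if best is None or dec > best[0]:
                    best = (dec, g, j)
        if best is None:
            break
        _, g, j = best
        supports[g].append(j)
        candidates.discard(j)
    return supports

rng = np.random.default_rng(1)
n, p = 200, 6
X = rng.standard_normal((n, p))
y = X[:, 0] + X[:, 4] + 0.05 * rng.standard_normal(n)
print(stepwise_split(X, y, G=2, t=1))
```

In this toy example the two active predictors end up in different models, which is exactly the disjointness ($u = 1$) that the full algorithm enforces.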
\section{Projected Subsets Gradient Descent} \label{sec:PSGD} We adapt ideas from $\ell_0$-penalized optimization to the BSpS problem in \eqref{eq:BSpS}, and develop an algorithm that is well suited for the cross-validation of the diversity tuning parameter $ u $ in \eqref{eq:BSpS} and minimizes the need for a time-consuming local combinatorial search. For notational convenience, denote the loss function \begin{align} \label{eq:loss_function} \mathcal{L}\left(\bbet | \by, \bX \right) = \frac{1}{2}\sum_{g=1}^{G} \left\lVert\by - \bX \bbet^g\right\rVert_2^2, \end{align} whose gradient with respect to model $ g $ is given by \begin{align} \label{eq:gradient} \nabla_{\bbet^g} \mathcal{L}\left(\bbet | \by, \bX \right) = \bX^T (\bX \bbet^g - \by). \end{align} We note that \eqref{eq:gradient} is Lipschitz continuous with Lipschitz constant $ L_{\bbet} = \lVert \bX^T \bX \rVert_2$, the spectral norm of $ \bX^T \bX $, i.e. \begin{align*} \big\lVert \nabla_{\bbet^g} \mathcal{L}\left(\bbet | \by, \bX \right) - \nabla_{\bbet^g} \mathcal{L}\left(\tbbet | \by, \bX \right) \big \rVert_2 \leq L_{\bbet} \big\lVert \bbet^g - \tbbet^g \big\rVert_2 \quad \forall \bbet^g, \tbbet^g \in \mathbb{R}^p. \end{align*} As a function of $ \bbet^g $, with the other models held fixed, the loss function \eqref{eq:loss_function} is bounded from above by its quadratic approximation with Lipschitz constant $ L_{\bbet} $, \begin{align*} \mathcal{L}\left(\tbbet | \by, \bX \right) \leq \mathcal{L}_Q\left(\tbbet | \by, \bX \right) = \mathcal{L}\left(\bbet | \by, \bX \right) + \nabla_{\bbet^g} \mathcal{L}\left(\bbet | \by, \bX \right)^T \left(\tbbet^g - \bbet^g\right)+ \frac{L_{\bbet}}{2} \big\lVert \tbbet^g - \bbet^g \big\rVert_2^2. \end{align*} We define $ S^{(g)} \subseteq J $ as the subset of predictors that are used in at most $ u-1 $ models excluding model $ g $, i.e. \begin{align} S^{(g)} = \left\{j \in J: \sum_{\substack{h=1 \\ h \neq g}}^G \mathbb{I}\left(j \in J^{(h)}\right) \leq u-1\right\}, \end{align} where $ J^{(g)}=\{j \in J: \hbbet_j^g \neq 0 \} $, $ 1 \leq g \leq G $, as defined in Section \ref{sec:stepwise}. 
Central to our algorithm is the projected subset operator, which we define for any vector $ v \in \mathbb{R}^p$ and some subset $ S \subseteq J$ as \begin{align} \label{eq:projected_subset} \mathcal{P}(v; \, S, t) \in \argmin_{\substack{w \in \mathbb{R}^p \\ \lVert w \rVert_0 \leq t \\ \{j \in J: w_j \neq 0 \} \subseteq S}} \lVert w - v \rVert_2^2. \end{align} The operator $ \mathcal{P}(v; \, S, t) $ retains the $ t $ largest elements in absolute value of the vector $ v $ that belong to the set $ S $. It is a set-valued map since ties in absolute value may make the $ t $ largest elements of $ v $ non-unique. The projected subsets gradient descent (PSGD) algorithm is given in Algorithm \ref{alg:projected_algo}. The algorithm assumes initial solutions $ \tbbet^1, \dots, \tbbet^G $ are provided. In step 1 of the computing algorithm, the projected subset gradient descent update is applied to each model in turn, iterating until convergence before moving on to the next model. In particular, the update for model $ g $ using subset $ S^{(g)} $ and model size $ t $ can be written as \begin{align*} \hbbet^g &\in \argmin_{\substack{\bbet^g \in \mathbb{R}^p \\ \lVert \bbet^g \rVert_0 \leq t \\ \{j \in J: \bbet_j^g \neq 0 \} \subseteq S^{(g)}}} \mathcal{L}_Q\left(\bbet | \by, \bX \right) \\ &= \argmin_{\substack{\bbet^g \in \mathbb{R}^p \\ \lVert \bbet^g \rVert_0 \leq t \\ \{j \in J: \bbet_j^g \neq 0 \} \subseteq S^{(g)}}} \Bigg\lVert \bbet^g - \left(\tbbet^g - \frac{1}{L_{\bbet}}\nabla_{\bbet^g} \mathcal{L}\left(\tbbet | \by, \bX \right)\right) \Bigg\rVert_2^2\\ &= \mathcal{P}\left(\tbbet^g - \frac{1}{L_{\bbet} } {\nabla}_{\bbet^g} \mathcal{L}\left(\tbbet|\by, \bX\right); \, S^{(g)}, t \right). \end{align*} We note that for each model the iterative updates produce a convergent sequence of solutions \citep[Proposition~6]{bertsimas2016best}. 
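The operator $\mathcal{P}(v;\,S,t)$ and the resulting update are straightforward to implement. A small Python sketch (illustrative only; ties are broken arbitrarily by the sort, reflecting the set-valued nature of the operator):

```python
import numpy as np

def project_subset(v, S, t):
    """P(v; S, t): keep the t largest entries of v in absolute value among
    the allowed indices S, set all other entries to zero."""
    w = np.zeros_like(v)
    keep = sorted(S, key=lambda j: abs(v[j]), reverse=True)[:t]
    w[keep] = v[keep]
    return w

def psgd_step(X, y, beta, S, t, L):
    """One projected gradient step for a single model with loss
    (1/2)||y - X beta||^2, gradient X^T (X beta - y) and step size 1/L."""
    grad = X.T @ (X @ beta - y)
    return project_subset(beta - grad / L, S, t)

v = np.array([3.0, -5.0, 2.0, 0.5])
print(project_subset(v, S=[0, 1, 2, 3], t=2))  # unrestricted: top two entries
print(project_subset(v, S=[0, 2, 3], t=2))     # index 1 excluded from S
```

With step size $1/L_{\bbet}$ the majorization argument above guarantees that each step does not increase the loss, provided the previous iterate is feasible.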
After convergence is reached for model $ g $, its set of predictor variables $ J^{(g)} = \{j \in J: \hbbet_j^g \neq 0 \} $ is updated and the final model coefficients are computed. In the optional step 2 of Algorithm \ref{alg:projected_algo}, local combinatorial searches using random permutations of the models are performed to improve on the solution from step 1. Based on our numerical experiments, a poor initial solution in step 1 increases the need for step 2, which comes at a high computational cost, to yield better solutions. However, a carefully designed algorithm that generates good initial solutions for Algorithm \ref{alg:projected_algo} alleviates the need for the cyclic updates over random permutations of the models in step 2. \begin{algorithm}[h!] \caption{Projected Subsets Gradient Descent (PSGD) \label{alg:projected_algo}} \begin{algorithmic}[1] \Require{Design matrix $\bX \in \mathbb{R}^{n \times p}$, response vector $\by \in \mathbb{R}^n$, current solutions $ \tbbet^1, \dots, \tbbet^G $, Lipschitz constant $ L_{\bbet} $, sparsity and diversity tuning parameters $ t $ and $ u $, tolerance parameter $\epsilon>0$, and (optional) number of cycling iterations $ C $.} \vspace{0.2cm} \Ensure{The current model predictors $ J^{(g)} = \{j \in J: \tbbet_j^g \neq 0 \} $, $ 1 \leq g \leq G $. } \Statex \State Repeat the following steps for each model $ g $, $ 1 \leq g \leq G $: \begin{enumerate}[label*=\footnotesize 1.\arabic*:] \item Update the allowed predictors $ S^{(g)} = \left\{j \in J: \sum_{\substack{h=1 \\ h \neq g}}^G \mathbb{I}(j \in J^{(h)}) \leq u-1\right\} $. \item Update \begin{align*} \hbbet^g \in \mathcal{P}\left(\tbbet^g - \frac{1}{L_{\bbet} } {\nabla}_{\bbet} \mathcal{L}\left(\tbbet^g|\by, \bX\right); \, S^{(g)}, t \right), \end{align*} setting $ \tbbet^g \leftarrow \hbbet^g $ after each update, until $\mathcal{L}(\tbbet^g|\by, \bX) - \mathcal{L}(\hbbet^g|\by, \bX) \leq \epsilon$. \item Update the model predictors $ J^{(g)} = \{j \in J: \hbbet_j^g \neq 0 \} $.
\item Compute the final model coefficients \begin{align*} \hbbet^g = \argmin_{\substack{\bbet^g \in \mathbb{R}^p \\ \bbet_j^g = 0, \, j \notin J^{(g)}}} \mathcal{L}(\bbet^g | \by, \bX) \end{align*} \end{enumerate} \State (Optional) Repeat the following steps $ C $ times: \begin{enumerate}[label*=\footnotesize 2.\arabic*:] \item Draw a random permutation $ (\omega(1), \dots, \omega(G)) $ of $ (1, \dots, G) $. \item Repeat step 1 using the new order $ (\omega(1), \dots, \omega(G)) $ and the current solutions as initial values. \item If $\sum_{g=1}^G \mathcal{L}(\hbbet^g|\by, \bX) < \sum_{g=1}^G \mathcal{L}(\tbbet^g|\by, \bX)$, retain the new solutions. Otherwise keep the old solutions $ J^{(g)} = \{j \in J: \tbbet_j^g \neq 0 \} $ and $ \hbbet^g = \tbbet^g $, $ 1 \leq g \leq G $. \end{enumerate} \Statex \State Return the sets of model predictors $ J^{(g)} $ and their coefficients $ \hbbet^g $, $ 1 \leq g \leq G $. \end{algorithmic} \end{algorithm} To generate good initial solutions for Algorithm \ref{alg:projected_algo}, in Algorithm \ref{alg:incrementing_projected_algo} we progressively decrease the diversity of the models by increasing $ u $ from $ 1 $ to $ G $ in \eqref{eq:BSpS}. Initial sets of model predictors and their coefficients are generated using Algorithm \ref{alg:stepwise_algo}. Then Algorithm \ref{alg:projected_algo} is applied for $ u=1, 2, \dots, G $. Based on our numerical experiments, Algorithm \ref{alg:incrementing_projected_algo} produces competitive solutions in terms of minimizing the objective function of BSpS in \eqref{eq:BSpS} compared to the local combinatorial searches of Algorithm \ref{alg:projected_algo}. For the purpose of running large experiments on simulated and real data, in the remainder of this article we use Algorithm \ref{alg:incrementing_projected_algo} without the local combinatorial search in Algorithm \ref{alg:projected_algo}.
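To make the mechanics of step 1 of Algorithm \ref{alg:projected_algo} concrete, the following sketch (illustrative Python, with a dense design and the per-model loss $\frac{1}{2}\lVert \by - \bX\bbet^g \rVert_2^2$) cycles through the models, recomputes the allowed sets $S^{(g)}$, and runs projected gradient updates until the loss decrease falls below the tolerance. The refit of the coefficients on the selected supports (step 1.4) and the optional local search of step 2 are omitted, and the function name is ours, not the \texttt{PSGD} package API:

```python
import numpy as np

def psgd_cycle(X, y, betas, t, u, eps=1e-8, max_iter=500):
    """One pass of PSGD step 1: for each model g, restrict updates to the
    predictors used by at most u-1 of the other models and run projected
    gradient descent with step size 1/L, where L = ||X^T X||_2."""
    G, p = betas.shape
    L = np.linalg.norm(X.T @ X, 2)            # Lipschitz constant of the gradient
    for g in range(G):
        # allowed predictors S^(g): used in at most u-1 models other than g
        counts = (betas != 0).sum(axis=0) - (betas[g] != 0)
        S = np.flatnonzero(counts <= u - 1)
        beta = betas[g].copy()
        loss = 0.5 * np.sum((y - X @ beta) ** 2)
        for _ in range(max_iter):
            v = beta - X.T @ (X @ beta - y) / L       # gradient step
            new = np.zeros(p)
            keep = S[np.argsort(-np.abs(v[S]))][:t]   # projected subset operator
            new[keep] = v[keep]
            new_loss = 0.5 * np.sum((y - X @ new) ** 2)
            improved = loss - new_loss > eps
            beta, loss = new, new_loss
            if not improved:
                break
        betas[g] = beta
    return betas
```

With an orthogonal design and $u = 1$, one pass already produces two models with disjoint supports, the first claiming the strongest predictors and the second the strongest remaining ones.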
The sparsity and diversity tuning parameters in \eqref{eq:BSpS}, $ t $ and $ u $ respectively, need to be determined from the training data. We use 5-fold CV over grids of candidate values for $ t $ and $ u $, selecting the pair that minimizes the CV MSPE. For a fixed sparsity level $ t $, Algorithm \ref{alg:incrementing_projected_algo} is perfectly suited to generate solutions for a grid of candidates for $ u $. This process is repeated for every candidate $ t $. \begin{algorithm}[h!] \caption{Decrementing Diversity PSGD \label{alg:incrementing_projected_algo}} \begin{algorithmic}[1] \Require{Design matrix $\bX \in \mathbb{R}^{n \times p}$, response vector $\by \in \mathbb{R}^n$, Lipschitz constant $ L_{\bbet} $, maximum model size $ t $, and tolerance parameter $\epsilon>0$.} \vspace{0.2cm} \Ensure{Using Algorithm \ref{alg:stepwise_algo} initialize the sets of model predictors and their coefficients for $ u=1 $, $ J^{(g)}(1) $ and $ \hbbet^g(1) $, $ 1 \leq g \leq G $. } \Statex \State Using Algorithm \ref{alg:projected_algo} update $ J^{(g)}(1) $ and $ \hbbet^g(1) $ with the current solutions as initial solutions. \Statex \State For $ u=2,\dots, G $ repeat the following step: \begin{enumerate}[label*=\footnotesize 2.\arabic*:] \item Compute $ J^{(g)}(u) $ and $ \hbbet^g(u) $ using Algorithm \ref{alg:projected_algo} with initial solutions $ J^{(g)}(u-1) $ and $ \hbbet^g(u-1) $, $ 1 \leq g \leq G $. \end{enumerate} \Statex \State Return the sets of model predictors $ J^{(g)}(u) $ and their coefficients $ \hbbet^g(u) $, $ 1 \leq u,g \leq G $. \end{algorithmic} \end{algorithm} An \texttt{R}/\texttt{C++} library implementing Algorithms \ref{alg:projected_algo} and \ref{alg:incrementing_projected_algo} and the CV procedure is available on CRAN under the name \texttt{PSGD} \citep{PSGD}. Multithreading is available in the library with OpenMP \citep{chandra2001parallel} to reduce the computational cost of the method.
The computational cost of the method (using a single thread) as a function of the number of models $ G $ is explored in Section \ref{sec:number_groups}. A reference manual with the complete details of the package is available at \url{https://CRAN.R-project.org/package=PSGD}. \section{Simulation Study} \label{sec:simulation} For each Monte Carlo replication, we generate data from the linear model \begin{equation*} y_{i} = \mathbf{x}_{i}^{\prime} \boldsymbol{\beta}_{0} + \sigma \epsilon_{i}, \quad 1\leq i \leq n, \end{equation*} where the $\mathbf{x}_{i}\in\mathbb{R}^{p}$ are multivariate normal with zero mean and correlation matrix $\boldsymbol{\Sigma} \in \mathbb{R}^{p \times p}$ and the $\epsilon_{i}$ are standard normal. We consider two combinations of $p$ and $n$, namely $(p,n)=(500, 50)$ and $(p,n)=(150, 50)$. For each $p$, we consider the proportions of active variables $\zeta = 0.1, 0.2$ and $0.4$. The $p_0=[p \zeta]$ nonzero elements of the $p$-dimensional vectors $\boldsymbol{\beta}_0$ are randomly generated as described in \cite{SIS}, i.e. nonzero coefficients are set to $ (-1)^u (a + |z|) $, where $a = 5 \log n/\sqrt{n}$, $u$ is drawn from a Bernoulli distribution with parameter 0.2 and $ z $ is drawn from the standard Gaussian distribution. For $(p,n)=(500, 50)$, the $ \ell_2 $-norms $ \Vert \boldsymbol{\beta}_{0} \Vert $ range from 21.01 to 43.22 for all sparsity levels $1 - \zeta$ considered. For $(p,n)=(150, 50)$, the $ \ell_2 $-norms $ \Vert \boldsymbol{\beta}_{0} \Vert $ range from 11.30 to 22.80. We consider two different scenarios for $\boldsymbol{\Sigma}$.\\ \noindent \textbf{Scenario 1:} \begin{equation*} \Sigma_{i,j}= \begin{cases} 1 & \text{if } i=j \\ \rho & \text{if } i\neq j. \end{cases} \end{equation*} \noindent \textbf{Scenario 2:} \begin{equation*} \Sigma_{i,j}= \begin{cases} 1 & \text{if } i=j \\ \rho & \text{if } 1\leq i,j \leq p_0, i\neq j \\ 0 & \text{otherwise}.
\end{cases} \end{equation*} In Scenario 1, all the predictors are correlated among each other. In Scenario 2, only the active variables are correlated with each other. For both scenarios we consider the values $\rho \in \{0.2, 0.5, 0.8\}$. Then $\sigma$ is chosen to give a desired signal-to-noise ratio (SNR), defined as $\text{SNR} = {\bbet_{0}^{\prime} \boldsymbol{\Sigma} \bbet_{0}}/{\sigma^2}.$ We consider SNRs of 1, 3 and 5. We report results for all scenarios across all considered sparsity levels, correlations, SNRs and the two combinations of $p$ and $n$. \subsection{Methods} We ran a simulation study comparing the prediction accuracy of eleven methods. In particular we consider four sparse regression methods, their analogous split regression methods, and three ``blackbox'' regression ensemble methods. All computations were carried out in \texttt{R} with their default settings. \begin{enumerate} \item[1.] \textbf{Stepwise} forward regression, computed using the \texttt{lars} package \citep{lars}. \item[2.] \textbf{Fast-BSS}, computed using the \texttt{L0Learn} package \citep{L0Learn} with the $ \ell_0$-$\ell_2 $ penalty option. \item[3.] \textbf{Lasso}, computed using the \texttt{glmnet} package \citep{glmnet}. \item[4.] Elastic Net (\textbf{EN}) with $\alpha=3/4$ for the $ \ell_1$-$\ell_2 $ mixing parameter, computed using the \texttt{glmnet} package \citep{glmnet}. \item[5.] \textbf{Step-SplitReg}, computed using the \texttt{stepSplitReg} package with a custom Lasso fit for each model. \item[6.] \textbf{Fast-BSpS}, computed using the \texttt{PSGD} package. \item[7.] \textbf{SplitReg-Lasso}, computed using the \texttt{SplitReg} package \citep{SplitReg}. \item[8.] \textbf{SplitReg-EN} with $\alpha=3/4$ for the $ \ell_1$-$\ell_2 $ mixing parameter, computed using the \texttt{SplitReg} package. \item[9.] {Random GLM} (\textbf{RGLM}) \citep{random_glm_paper}, computed using the \texttt{RGLM} package \citep{randomGLM}. \item[10.]
{Random Forest} (\textbf{RF}), computed using the \texttt{randomForest} package \citep{randomForest}. \item[11.] {Extreme Gradient Boosting} (\textbf{XGBoost}) \citep{chen2016xgboost}, computed using the \texttt{xgboost} package \citep{xgboost}. \end{enumerate} For a fast computation of BSS, we use the state-of-the-art method of \cite{hazimeh2020fast}. In their implementation, \cite{L0Learn} recommend combining $\ell_0$ regularization with shrinkage-inducing penalties to avoid overfitting and improve predictive performance, and thus we use the $\ell_0$-$\ell_2$ combination of penalties. For the four split regression methods, we use $ G = 5 $ models, a potentially suboptimal number of models. For RGLM and RF we use both $ G = 5 $ and their default numbers of models, $ G=100 $ and $ G=500 $ respectively. To reduce the computational burden of the PSGD algorithm in our large simulation study, we use the grids $ u \in \{1, 2, 3, 4, 5\} $ and $ t \in \{0.3n, 0.4n, 0.5n\} = \{15, 20, 25\} $ in the CV procedure of BSpS. The finer grid $ t \in \{1, \dots, n-1\} $ may be used for optimal predictive performance, however at a higher computational cost. \subsection{Performance Measures} For each configuration, we randomly generate $ N = 50 $ training and test sets and for each of the methods measure average performance on the test sets. In each replication of a particular configuration, a training set is generated to fit the procedures, and a large independent test set of size $ m= $ 2,000 is used to compute the MSPE. The MSPEs reported are relative to the irreducible error $\sigma^2$, hence the best possible result is $1$.
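The data-generating process described at the beginning of this section can be sketched as follows (illustrative Python; the function name and seed handling are ours):

```python
import numpy as np

def simulate_design(n, p, zeta, rho, snr, scenario=2, seed=None):
    """Generate one data set following the simulation design of this section.
    Sketch only; not part of the replication scripts of the paper."""
    rng = np.random.default_rng(seed)
    p0 = int(round(p * zeta))                # number of active variables
    # nonzero coefficients as in the SIS paper: (-1)^u (a + |z|)
    a = 5 * np.log(n) / np.sqrt(n)
    u = rng.binomial(1, 0.2, p0)
    z = rng.standard_normal(p0)
    beta = np.zeros(p)
    beta[:p0] = (-1.0) ** u * (a + np.abs(z))
    # correlation matrix: Scenario 1 correlates all predictors,
    # Scenario 2 correlates only the active ones
    Sigma = np.eye(p)
    if scenario == 1:
        Sigma[:, :] = rho
    else:
        Sigma[:p0, :p0] = rho
    np.fill_diagonal(Sigma, 1.0)
    # noise scale chosen to match the target signal-to-noise ratio
    sigma = np.sqrt(beta @ Sigma @ beta / snr)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta, sigma
```

Each replication of a configuration calls such a generator once for the training set and once more (with $m =$ 2,000 rows) for the independent test set.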
We also report the recall (RC) and precision (PR), defined for each parametric method as \begin{align*} \text{RC} = \frac{\sum_{j=1}^p\mathbb{I}(\beta_j\neq 0, \hat{\beta}_j\neq0)}{\sum_{j=1}^p\mathbb{I}(\beta_j\neq 0)}, \quad \text{PR} = \frac{\sum_{j=1}^p\mathbb{I}(\beta_j\neq 0, \hat{\beta}_j\neq0)}{\sum_{j=1}^p\mathbb{I}(\hat{\beta}_j\neq0)}, \end{align*} where $ \bbet $ and $ \hbbet $ are the true and estimated regression coefficients, respectively. For the split regression methods and RGLM, the average of the model coefficients \eqref{eq:ensemble_fit} is the vector of coefficients used to compute the recall and the precision. For the tree-based ensemble methods RF and XGBoost, the RC and PR are computed by identifying the predictors used in the trees of the ensembles. We do not report the RC and PR of RGLM and RF when their default numbers of models are used since their recall is always 1 and their precision equals the proportion of active variables $\zeta$. Note that large values of RC and PR are desirable. \subsection{Results} In Table \ref{tab:ranks_sim} we report the average rank for each performance measure across all simulation settings. The best two ranks for each performance measure are in bold. The detailed results of the simulation study are available in the supplementary material. Fast-BSpS had the best average rank in terms of MSPE for both cases $ p=500 $ and $ p=150 $, whereas SplitReg-EN had the second best performance in both cases. RGLM-100 had the best average performance out of the ``blackbox'' methods, however its performance deteriorated to the worst average rank when $ G=5 $, the same number of models as Fast-BSpS. In Section \ref{sec:number_groups}, we investigate this phenomenon in greater detail by studying the effect of the number of models on Fast-BSpS and RGLM. Step-SplitReg was not competitive in terms of MSPE compared to Fast-BSpS or the SplitReg methods, however it did outperform its single-model stepwise method consistently.
In terms of RC, Fast-BSpS had the second best rank overall, only beaten slightly by RGLM-5. However, RGLM-5 had the third worst overall rank in PR, whereas Fast-BSpS had the best PR rank for $ p=500 $ and the best overall PR rank out of all ensemble methods. An investigation of the full simulation results in the supplementary material reveals that Fast-BSpS had its best performances relative to the competitors across all performance measures in Scenario 2, which is more realistic than Scenario 1 where it is hard to distinguish active from inactive predictors due to the correlation induced between them. \begin{table}[h!] \centering \caption{Average rank of the methods over the scenarios, correlations, SNRs and sparsity levels for $(p,n)=(500,50)$ and $(p,n)=(150,50)$. The last column contains the overall rank over both combinations of $(p,n)$. \label{tab:ranks_sim}} \begin{tabu}{rrrrrrrrrr} \toprule & \multicolumn{3}{c}{$\mathbf{\boldsymbol{p}=500}$} & \multicolumn{3}{c}{$\mathbf{\boldsymbol{p}=150}$} & \multicolumn{3}{c}{\textbf{Overall Rank}} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \textbf{Method} & \textbf{MSPE} & \textbf{RC} & \textbf{PR} & \textbf{MSPE} & \textbf{RC} & \textbf{PR} & \textbf{MSPE} & \textbf{RC} & \textbf{PR} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \addlinespace[0.25cm] Stepwise & 12.06 & 11.00 & \textbf{3.87} & 11.17 & 11.00 & \textbf{3.07} & 11.62 & 11.00 & \textbf{3.47} \\ Fast-BSS & 4.81 & 6.89 & 6.02 & 5.50 & 6.77 & 4.91 & 5.15 & 6.83 & 5.46 \\ Lasso & 7.20 & 9.81 & 4.22 & 6.50 & 9.78 & \textbf{3.65} & 6.85 & 9.80 & \textbf{3.93} \\ EN & 6.15 & 8.81 & 4.19 & 5.89 & 8.72 & 4.30 & 6.02 & 8.77 & 4.25 \\ \addlinespace[0.25cm] Step-SplitReg & 9.07 & \textbf{1.85} & 10.26 & 6.96 & 5.21 & 8.98 & 8.02 & 3.53 & 9.62 \\ Fast-BSpS & \textbf{2.67} & 3.70 & \textbf{3.02} & \textbf{2.26} & \textbf{2.24} & 5.81 & \textbf{2.46} & \textbf{2.97} & 4.42 \\ SplitReg-Lasso & 3.56 & 5.00 & 6.15 & 3.31 & 
5.58 & 5.00 & 3.44 & 5.29 & 5.58 \\ SplitReg-EN & \textbf{2.85} & 3.85 & 5.59 & \textbf{2.65} & 4.60 & 5.41 & \textbf{2.75} & 4.22 & 5.50 \\ \addlinespace[0.25cm] RGLM-5 & 12.24 & \textbf{3.24} & 8.46 & 12.69 & \textbf{1.46} & 9.50 & 12.46 & \textbf{2.35} & 8.98 \\ RGLM-100 & 3.57 & $ - $ & $ - $ & 6.50 & $ - $ & $ - $ & 5.04 & $ - $ & $ - $ \\ RF-5 & 10.02 & 7.63 & 10.15 & 10.30 & 6.13 & 10.67 & 10.16 & 6.88 & 10.41 \\ RF-500 & 5.67 & $ - $ & $ - $ & 5.83 & $ - $ & $ - $ & 5.75 & $ - $ & $ - $ \\ XGBoost & 11.13 & 4.20 & 4.07 & 11.44 & 4.50 & 4.70 & 11.29 & 4.35 & 4.38 \\ \addlinespace[0.25cm] \bottomrule \end{tabu} \end{table} \section{The Number of Models} \label{sec:number_groups} For an ensemble comprised of a relatively small number of models, a balance of individual model prediction accuracy and diversity between the models is necessary for overall ensemble prediction accuracy. To achieve diversity, individual model accuracy must typically take a hit \citep{brown2005managing}. The BSpS framework searches for $ G $ diverse models that have a small loss in the objective function \eqref{eq:BSpS}. Each model in BSpS is thus learned directly from the data, is sparse, and achieves a high prediction accuracy. In this sense, Fast-BSpS controls the accuracy-diversity tradeoff directly. In RGLM, bagging and the random subspace method are used to create $ G $ bags comprised of different samples and predictors. Then, a subset of predictors in each bag is retained based on a measure of correlation with the response, and forward selection is applied to this subset. RGLM thus resorts to randomization to generate a collection of diverse models, resulting in models that are individually weak and not built to achieve the optimal accuracy-diversity balance given the number of models $ G $ used.
By the bias-variance-covariance decomposition of regression ensembles in \eqref{eq:MSPE_ensemble} and \eqref{eq:MSPE_variance}, if individual models are weak, a large number of them would be required for the covariance term to dominate the variance term of the ensemble and thus for the ensemble to achieve a good prediction accuracy. \subsection{Accuracy-Diversity Empirical Study} We conduct a simulation to study the effect of the number of models on Fast-BSpS and RGLM with $ G=2,3,4,5 $, as well as $ G=100 $ (the default) for RGLM. We use $ N=50 $ replications of Scenario 2 in the simulation study of Section \ref{sec:simulation} with $ p=500 $, $ \rho=0.5 $, and an SNR of 3. For each replication, Fast-BSpS and RGLM are applied to a training set of size $ n=50 $, and then a test set of size $ m= $ 2,000 is used to compute the ensemble MSPE, the average mean squared prediction error $ \overline{\text{MSPE}} $ of the individual models, as well as the average pairwise correlation $ \overline{\text{Cor}} $ of their predictions. The mean squared prediction errors reported are relative to the irreducible error $\sigma^2$, hence the best possible result is $1$. The computation is repeated for various values of the proportion of active variables, namely $ \zeta \in \{0.1, 0.2, 0.4\}$. The results are reported in Table \ref{tab:MSPE_Groups}. \begin{table}[h!] \centering \caption{MSPE, ${\overline{\text{MSPE}}}$ and $\overline{\text{Cor}}$ of Fast-BSpS and RGLM as a function of the number of models under Scenario 2 with $ \rho=0.5 $ and SNR$ =3 $.
\label{tab:MSPE_Groups}} \resizebox{\textwidth}{!}{ \begin{tabular}{rrrrrrrrrr} \toprule & \multicolumn{3}{c}{$\mathbf{\boldsymbol{\zeta}=0.1}$} & \multicolumn{3}{c}{$\mathbf{\boldsymbol{\zeta}=0.2}$} & \multicolumn{3}{c}{$\mathbf{\boldsymbol{\zeta}=0.4}$}\\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \textbf{Method} & \textbf{MSPE} & $\mathbf{\overline{MSPE}}$ & $\mathbf{\overline{Cor}}$ & \textbf{MSPE} & $\mathbf{\overline{MSPE}}$ & $\mathbf{\overline{Cor}}$ & \textbf{MSPE} & $\mathbf{\overline{MSPE}}$ & $\mathbf{\overline{Cor}}$ \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \addlinespace[0.25cm] Fast-BSpS-2 & 1.29 & 1.57 & 0.85 & 1.31 & 1.55 & 0.87 & 1.27 & 1.53 & 0.85 \\ Fast-BSpS-3 & 1.21 & 1.62 & 0.83 & 1.22 & 1.62 & 0.85 & 1.21 & 1.56 & 0.85 \\ Fast-BSpS-4 & 1.20 & 1.75 & 0.80 & 1.21 & 1.73 & 0.83 & 1.18 & 1.61 & 0.84 \\ Fast-BSpS-5 & 1.19 & 1.76 & 0.80 & 1.16 & 1.75 & 0.82 & 1.16 & 1.63 & 0.83 \\ \addlinespace[0.25cm] RGLM-2 & 4.34 & 7.37 & 0.25 & 4.38 & 7.51 & 0.27 & 3.50 & 5.95 & 0.34 \\ RGLM-3 & 3.38 & 7.69 & 0.22 & 3.17 & 7.15 & 0.29 & 2.75 & 6.00 & 0.34 \\ RGLM-4 & 2.86 & 7.74 & 0.22 & 2.63 & 6.95 & 0.30 & 2.37 & 6.07 & 0.33 \\ RGLM-5 & 2.47 & 7.50 & 0.23 & 2.30 & 6.69 & 0.31 & 2.12 & 6.13 & 0.34 \\ RGLM-100 & 1.36 & 7.70 & 0.23 & 1.25 & 7.03 & 0.29 & 1.17 & 6.64 & 0.33 \\ \addlinespace[0.25cm] \bottomrule \end{tabular}} \end{table} For Fast-BSpS, it can be seen that for all sparsity levels $\overline{\text{MSPE}}$ increases with the number of models, while MSPE and $\overline{\text{Cor}}$ decrease. As the number of models increases, the average accuracy of the individual models has less impact on the ensemble MSPE compared to $\overline{\text{Cor}}$ and Fast-BSpS achieves the proper balance for this tradeoff, resulting in high accuracy for the ensemble. 
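The three quantities reported in this study can be computed from the test-set predictions of the individual models as follows (illustrative sketch; the function name is ours):

```python
import numpy as np

def ensemble_diagnostics(preds, y_test, sigma2=1.0):
    """Given a G x m array of individual-model test predictions, return the
    ensemble MSPE, the average individual-model MSPE, and the average
    pairwise correlation of the model predictions; MSPEs are reported
    relative to the irreducible error sigma2."""
    G = preds.shape[0]
    ens = preds.mean(axis=0)                     # equal-weight ensemble fit
    mspe = np.mean((y_test - ens) ** 2) / sigma2
    avg_mspe = np.mean((y_test - preds) ** 2) / sigma2
    C = np.corrcoef(preds)                       # G x G correlation matrix
    avg_cor = C[np.triu_indices(G, k=1)].mean()  # average over distinct pairs
    return mspe, avg_mspe, avg_cor
```

A simple sanity check: two models whose errors exactly cancel yield an ensemble MSPE of zero even though each individual model has a strictly positive MSPE, which is the variance-reduction mechanism exploited by diverse ensembles.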
For RGLM, the individual models are extremely weak, with the $\overline{\text{MSPE}}$ of the individual models being between seven and eight times the variance of the noise. The individual strength of the models is not controlled or learned for the number of models in the ensemble; the models are equally weak regardless of the number of models. However, $\overline{\text{Cor}}$ is much lower for RGLM than for Fast-BSpS for all numbers of models. When the number of models is increased, the average pairwise correlation between the models becomes more important, and only when $ G=100 $ does RGLM achieve an adequate ensemble MSPE, although still higher than that of Fast-BSpS with $ G=5 $ for all sparsity levels. Thus RGLM relies on a large number of weak decorrelated models to achieve a low ensemble MSPE. In applied research, investigation of the individual models of Fast-BSpS could therefore reveal insightful information on the relationship between the predictors and the response, while RGLM (or other random/indirect methods) does not enjoy this property. Indeed, beyond their high prediction accuracy, the individual models of Fast-BSpS are learned directly from the data and enjoy good variable selection properties, as do sparse methods. In Table \ref{tab:RCPR_Groups} we report the RC and PR of Fast-BSpS and RGLM as a function of the number of models. It can be seen that Fast-BSpS enjoys near-perfect PR and a high RC relative to RGLM. RGLM has inherently poor PR due to the random nature of its methodology, and only achieves a high RC when a large number of models is used. For $ G=100 $, RGLM naturally achieves an RC of 1 and a PR of $\zeta$. \begin{table}[h!] \centering \caption{RC and PR of Fast-BSpS and RGLM as a function of the number of models under Scenario 2 with $ \rho=0.5 $ and SNR$ =3 $.
\label{tab:RCPR_Groups}} \begin{tabular}{rrrrrrr} \toprule & \multicolumn{2}{c}{$\mathbf{\boldsymbol{\zeta}=0.1}$} & \multicolumn{2}{c}{$\mathbf{\boldsymbol{\zeta}=0.2}$} & \multicolumn{2}{c}{$\mathbf{\boldsymbol{\zeta}=0.4}$}\\ \cmidrule(lr){1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \textbf{Method} & \textbf{RC} & \textbf{PR} & \textbf{RC} & \textbf{PR} & \textbf{RC} & \textbf{PR} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \addlinespace[0.25cm] Fast-BSpS-2 & 0.56 & 1.00 & 0.29 & 1.00 & 0.16 & 1.00 \\ Fast-BSpS-3 & 0.79 & 0.99 & 0.44 & 1.00 & 0.20 & 1.00 \\ Fast-BSpS-4 & 0.81 & 0.89 & 0.55 & 1.00 & 0.29 & 1.00 \\ Fast-BSpS-5 & 0.82 & 0.87 & 0.69 & 1.00 & 0.33 & 1.00 \\ \addlinespace[0.25cm] RGLM-2 & 0.26 & 0.22 & 0.25 & 0.43 & 0.22 & 0.77 \\ RGLM-3 & 0.33 & 0.20 & 0.33 & 0.40 & 0.30 & 0.74 \\ RGLM-4 & 0.40 & 0.19 & 0.42 & 0.40 & 0.39 & 0.76 \\ RGLM-5 & 0.47 & 0.19 & 0.49 & 0.39 & 0.46 & 0.77 \\ RGLM-100 & 1.00 & 0.10 & 1.00 & 0.20 & 1.00 & 0.40 \\ \addlinespace[0.25cm] \bottomrule \end{tabular} \end{table} \subsection{Computational Cost} While Fast-BSpS shows great promise in terms of prediction accuracy, it comes at a high cost. The computational cost (in seconds) of the CV procedure of Fast-BSpS in Scenario 2 of Section \ref{sec:simulation}, across all sparsity levels $\zeta \in \{0.1, 0.2, 0.4\}$, is provided in Table \ref{tab:cpu} as a function of the number of models. We include for comparison the computational cost of the multi-convex relaxation of BSpS as described in Section \ref{sec:multi_convex} using the \texttt{SplitReg} package \citep{SplitReg} as well as step-SplitReg as described in Section \ref{sec:stepwise} using the \texttt{stepSplitReg} package. The step-SplitReg method has by far the smallest computational cost. The CPU seconds for the \texttt{R} function calls of Fast-BSpS are significantly higher and more sensitive to the number of models compared to the multi-convex relaxation. 
We note that no local combinatorial search is performed in the execution of Algorithm \ref{alg:projected_algo}, which may increase the cost further. The computational cost can also increase substantially if a fine grid for the sparsity parameter $ t $ is used. However, given the difficulty of optimizing $ \ell_0 $-penalized problems, it is expected that the cost of Fast-BSpS will be higher. \begin{table}[h!] \centering \caption{\label{tab:cpu}Computation time of \texttt{R} function calls for the \texttt{SplitReg}, \texttt{stepSplitReg} and \texttt{PSGD} packages in CPU seconds for varying number of models. CPU seconds are on a 2.7 GHz Intel Xeon processor in a machine running Linux 7.8 with 125 GB of RAM.} \extrarowsep=2pt \begin{tabu}{lrrrrrrr} \toprule & \multicolumn{4}{c}{Number of Models} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-5} Package & 2 & 3 & 4 & 5 \\ \cmidrule(lr){1-1} \cmidrule(lr){2-5} $ \texttt{SplitReg} $ & 2.23 & 6.56 & 10.41 & 14.91 \\ $ \texttt{stepSplitReg} $ & 0.25 & 0.69 & 1.05 & 1.17 \\ $ \texttt{PSGD} $ & 4.67 & 19.38 & 31.43 & 55.92 \\ \bottomrule \end{tabu} \end{table} \section{Bardet-Biedl Syndrome Gene Expression Data} \label{sec:eye} We benchmark Fast-BSpS and the competitor methods of Section \ref{sec:simulation} on the Bardet-Biedl syndrome (BBS) gene expression dataset in \cite{flare}. In \cite{scheetz2006regulation}, mutation and functional studies identified TRIM32 (tripartite motif-containing protein 32) as a gene highly correlated with BBS. The purpose of this study is to predict the gene expression level of TRIM32 using the expression levels of $ p = 200 $ genes from mammalian-eye tissue samples identified as relevant in \cite{scheetz2006regulation}. The dataset contains $ 120 $ mammalian-eye tissue samples. To mimic a high-dimensional scenario, in each of the $ N=50 $ replications we randomly split the data into a training set of size $ n=30 $ and a test set with the remaining $ m=90 $ samples.
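The evaluation protocol for this dataset can be sketched as follows (illustrative Python; \texttt{fit} is a placeholder for any of the benchmarked methods, not a function from the packages above):

```python
import numpy as np

def repeated_split_mspe(X, y, fit, n_train, n_rep=50, seed=0):
    """Repeatedly split the samples into a training set of size n_train and a
    test set with the remainder, fit the method on the training set, and
    average the test-set MSPE over the replications."""
    rng = np.random.default_rng(seed)
    n = len(y)
    mspes = []
    for _ in range(n_rep):
        idx = rng.permutation(n)
        tr, te = idx[:n_train], idx[n_train:]
        beta = fit(X[tr], y[tr])                 # returns a coefficient vector
        mspes.append(np.mean((y[te] - X[te] @ beta) ** 2))
    return float(np.mean(mspes))
```

For the BBS data, $n_{\text{train}} = 30$ of the 120 samples are used for fitting and the remaining 90 for computing the MSPE in each of the 50 replications.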
For Fast-BSpS we use $ u\in \{1,2,3,4,5\} $ and $ t\in \{0.3n, 0.4n, 0.5n\}=\{9, 12, 15\} $. The other methods are computed using their default settings as in Section \ref{sec:simulation}. We report the MSPE for all the methods and $\overline{\text{MSPE}}$ for the ensemble methods. The results are reported in Table \ref{tab:trim32_results}, where in each column the two best performances are in bold. For the ensemble MSPE, RGLM-100 and Fast-BSpS had the best two performances, with RGLM-100 edging out Fast-BSpS slightly. While RGLM-100 had a slightly lower MSPE than Fast-BSpS, the individual $\overline{\text{MSPE}}$ of RGLM-5 and RGLM-100 was the worst out of all the methods, being nearly three times that of Fast-BSpS or either of the SplitReg methods. On the other hand, Fast-BSpS achieved the second best individual $\overline{\text{MSPE}}$, slightly behind SplitReg-EN, matching the MSPE of its base sparse estimator BSS and of the Lasso. Fast-BSpS thus not only produced a competitive ensemble prediction accuracy with only $ G=5 $ models, but each of its models is on average as reliable and accurate as standard sparse estimators. \begin{table}[h!]
\centering \caption{\label{tab:trim32_results}MSPE and $\overline{\text{MSPE}}$ over the $ N=50 $ random splits into training and testing sets for the BBS gene expression dataset.} \begin{tabular}{rrr} \toprule \textbf{Method} & \textbf{MSPE} & $\mathbf{\overline{MSPE}}$ \\ \cmidrule(lr){1-1} \cmidrule(lr){2-3} \addlinespace[0.25cm] Stepwise & 0.82 & $ - $ \\ Fast-BSS & 0.65 & $ - $ \\ Lasso & 0.65 & $ - $ \\ EN & 0.63 & $ - $ \\ \addlinespace[0.25cm] Step-SplitReg & 0.57 & 0.93 \\ Fast-BSpS & \textbf{0.49} & \textbf{0.65} \\ SplitReg-Lasso & 0.65 & 0.67 \\ SplitReg-EN & 0.62 & \textbf{0.63} \\ \addlinespace[0.25cm] RGLM-5 & 0.69 & 1.71 \\ RGLM-100 & \textbf{0.45} & 1.67 \\ RF-5 & 0.73 & 1.02 \\ RF-500 & 0.67 & 1.03 \\ XGBoost & 0.84 & 1.04 \\ \addlinespace[0.25cm] \bottomrule \end{tabular} \end{table} As seen in Section \ref{sec:number_groups}, the individual models of Fast-BSpS not only have good prediction accuracy, but they also tend to use the relevant predictors with high precision, whereas RGLM relies on randomness to create diverse models, which results in models with poor variable selection. Since the models in Fast-BSpS are learned directly from the data, each predictor included in a model was included for its relevance in minimizing the error term of the models in \eqref{eq:BSpS}. Taking this one step further, we can use the models in Fast-BSpS to rank the gene sets in order of importance. Defining the sets \begin{align} \label{eq:exclusive_sets} \mathcal{A}_k = \left\{j: \sum_{g=1}^{G} \mathbb{I}\left(j \in J^{(g)}\right) \geq k \right\}, \quad 1 \leq k \leq G, \end{align} where $\mathcal{A}_G \subseteq \mathcal{A}_{G-1} \subseteq \cdots \subseteq \mathcal{A}_1$, we can study the distribution of the genes across the different models.
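The nested sets $\mathcal{A}_k$ are straightforward to compute from the supports of the fitted models (illustrative sketch; the function name is ours):

```python
import numpy as np

def importance_sets(supports, p):
    """Given the supports J^(g) of the G models, return the nested sets
    A_k of predictors that appear in at least k of the models, k = 1, ..., G."""
    counts = np.zeros(p, dtype=int)
    for J in supports:
        counts[list(set(J))] += 1      # each model contributes at most once per predictor
    G = len(supports)
    return {k: set(np.flatnonzero(counts >= k)) for k in range(1, G + 1)}
```

By construction the sets are nested, $\mathcal{A}_G \subseteq \cdots \subseteq \mathcal{A}_1$, so predictors appearing in many models rank higher in importance.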
These sets identify genes related to TRIM32 in order of importance, since a gene appears in more than one model only if no surrogate genes can be used in its place to reduce the loss function of BSpS in~(\ref{eq:BSpS}). Applying Fast-BSpS to the BBS dataset, $|\mathcal{A}_3|=18$, $ |\mathcal{A}_2|=28 $ and $ |\mathcal{A}_1|=29 $. Genes shared by more than a single model had the same coefficient sign across all the models, reinforcing the understanding of their relationship with the gene expression level of TRIM32. To illustrate that Fast-BSpS can identify important genes that may be missed by sparse regression methods, let us consider $\mathcal{A}_3$. This set contains 18 genes that appear in at least three of the five Fast-BSpS individual models and thus yield an important contribution to the ensemble. Interestingly, none of these genes appears in the Fast-BSS fit and thus they would be considered irrelevant for the prediction of the gene expression level of TRIM32. \section{Discussion and Future Directions} \label{sec:discussion} We introduced a new data-driven ensemble framework that generates a collection of sparse and diverse models learned directly from the data. In particular we introduced BSpS, a generalization of BSS to multiple groups. In BSpS the objective is to find the best possible split of predictors into a collection of models such that the sum of their individual losses is minimized. The split of predictors must satisfy the sparsity constraint, i.e. the maximum size of each model, and the diversity constraint, i.e. the maximum number of models that can share any given predictor. Each model has a prediction accuracy that is competitive with its base sparse estimator BSS, and thus presents a possible explanation for the relationship between the predictors and the response. An investigation of the intractable combinatorial problem posed by BSpS reveals the need for computational tools to generate approximate solutions.
Related work in \cite{christidis2020split} that minimizes an objective function with a sparsity and a diversity penalty may be viewed as a multi-convex relaxation of BSpS, which does not allow direct control of the maximal model size and the number of models that may share predictors, and may have poorer prediction and variable selection performance compared to the direct optimization of BSpS. In this article we generalized forward stepwise regression to multiple groups to generate an initial solution to BSpS in the fully diverse ($ u=1 $) case, and proposed a projected subsets gradient descent algorithm to generate approximate solutions to BSpS for any $ u $ in \eqref{eq:BSpS}, which is perfectly suited for the CV procedure of the tuning parameters. Our empirical investigations via our simulation study and real data application reveal that our approach of optimizing BSpS directly yields an ensemble with competitive prediction accuracy and variable selection properties compared to the multi-convex relaxation of BSpS and ``blackbox'' regression ensemble methods. We showed that the proposed methodology is efficient in exploiting the accuracy-diversity tradeoff of regression ensembles, with the proposed algorithm achieving the optimal balance of individual model accuracy and diversity. Contrary to other ``blackbox'' regression ensemble methods such as RGLM, our methodology results in ensembles that do not rely on a large number of weak models to achieve a high prediction accuracy. Rather, it relies on the search for strong individual models that have a certain degree of diversity to reduce the variance of the ensemble. This makes the sets $\mathcal{A}_k$, $ 1 \leq k \leq G $, in \eqref{eq:exclusive_sets} useful for ranking the predictors in order of importance for accurate prediction of the outcome. This is particularly important in gene expression data where interpretability and the identification of the relevant genes are just as important as prediction accuracy.
For applied scientists, problem-specific knowledge can be incorporated into the BSpS framework very easily. For example, if certain predictors (e.g.\ genes) are known to be particularly important or relevant for the prediction of the outcome, this knowledge may be incorporated by generalizing BSpS in \eqref{eq:BSpS} to \begin{align} \label{eq:BSpS_knowledge} \min_{\bbet^1, \dots, \, \bbet^G \in \mathbb{R}^p} \sum_{g=1}^{G} \lVert\by - \bX \bbet^g\rVert_2^2 \quad \text{subject to} \quad \begin{cases} \lVert\bbet^g\rVert_0 \leq t, \, &1 \leq g \leq G, \\ \lVert\bbet_{j\cdot}\rVert_0 \leq u_j, \, & 1 \leq j \leq p, \end{cases} \end{align} where $ u_j $ is the maximum number of models that may share predictor $ j $, $ 1 \leq j \leq p $. The PSGD algorithm we proposed is an efficient way to generate solutions for the nonconvex BSpS problem in \eqref{eq:BSpS}. However, compared to Step-SplitReg and SplitReg, it suffers from a slower computation time, particularly as the number of groups or the dimension of the data increases. A future area of research is to develop alternative computing procedures for BSpS. One possible improvement on the current algorithm is to apply the general idea of the accelerated proximal gradient descent of \cite{beck2009fast} to projected gradients and incorporate it in our algorithm. Perhaps a better initialization procedure than Step-SplitReg may be developed, or an efficient way to apply local combinatorics to search for better solutions to BSpS. We hope our new data-driven ensemble framework will motivate new and exciting research on this new paradigm for ensemble modeling. For example, BSpS in \eqref{eq:BSpS} may be easily generalized to GLMs or other models with some general loss, i.e.
\begin{align} \label{eq:BSpS_general} \min_{\bbet^1, \dots, \, \bbet^G \in \mathbb{R}^p} \sum_{g=1}^{G} \mathcal{L}\left(\bbet^g | \by, \bX \right) \quad \text{subject to} \quad \begin{cases} \lVert\bbet^g\rVert_0 \leq t, \, &1 \leq g \leq G, \\ \lVert\bbet_{j\cdot}\rVert_0 \leq u_j, \, & 1 \leq j \leq p. \end{cases} \end{align} In the analysis of high-dimensional data, sparse modeling has been the main focus of the literature for many years, with the overwhelming majority of proposals being alternative approaches to the NP-hard BSS problem. Our proposal is a generalization of the sparse modeling framework, generating not one single interpretable model with good prediction accuracy but multiple such models. We hope our introduction of split modeling will become a central focus for new developments in the analysis of high-dimensional data. \section*{Acknowledgments} Part of this work was conducted while Anthony-Alexander Christidis was a UBC Doctoral Researcher at KU Leuven's Department of Mathematics under a Mitacs Globalink Research Award. \section*{Conflict of Interests} The authors declare no potential conflict of interests. \section*{Data Availability Statement} The \texttt{R} packages \texttt{stepSplitReg} and \texttt{PSGD} created for this article are publicly available on \texttt{CRAN} together with their respective reference manuals. The data and scripts to replicate the numerical experiments are available at \url{https://doi.org/10.5281/zenodo.6450556}. \section*{Supplementary Material} The supplementary material contains the full results of our simulations and real data experiments. \bibliographystyle{Chicago}
\section{Introduction} Amal Kumar Raychaudhuri (AKR, 1923-2005) was working at the Indian Association for Cultivation of Science (IACS), in Kolkata, when he wrote one of the finest scientific papers\cite{raychaudhuri1955} to have ever come out of India. The paper derived an equation which was the seed of many profound developments in the theory of general relativity. Indeed it may be said that it started a new phase in the development of the subject--mathematical relativity. The purpose of this article is to explain, in a pedagogical manner, the import of this equation and outline some of the developments it led to. This is a non-technical exposition, intended for a reader who is familiar with Newtonian physics, and special relativity but has not yet been exposed to the general theory of relativity (GTR). For a more complete picture see \cite{sayan2007,sayanres}. We will first give a simple derivation of Raychaudhuri's equation in the context of Newtonian gravity. This captures some of the essential physics of the equation. We will then indicate how Einstein's gravity differs from Newtonian gravity and write down the actual Raychaudhuri equation, the relativistic version. Further generalisation and development of this idea by others led to the Singularity theorems, which we will touch upon towards the end. This material is more advanced and placed in a Box with a statutory warning. \section{The Newtonian Raychaudhuri Equation} Let us consider the motion of a fluid in Newtonian physics, assuming that the flow is subject only to the force of gravity. (No pressure, for instance.) 
If the velocity of the fluid at a position ${\bf x}$ is ${\bf v}({\bf x},t)$, the rate of change of any quantity $f({\bf x},t)$ is described by the convective derivative \begin{equation} \frac{df}{dt}=\frac{\partial f}{\partial t}+{\bf{v}}.{\bf{\nabla}} f \label{convective} \end{equation} We will use repeated index notation, writing $\bf{v}.\bf{\nabla}$ as $v^i\partial_i$, where it is understood that the index $i$ is summed over its three values $i=1,2,3$. The divergence $\theta={\bf{\nabla}}.{\bf{v}}= \partial_i v^i$ of the velocity field represents the expansion rate of the fluid. $\theta$ is the rate at which the volume of a fixed ball of fluid is increasing. The evolution of $\theta$ along the flow was the principal objective of AKR's studies. Let us compute the rate of change of $\theta$ along the flow. Applying the convective derivative to $\theta$, we find \begin{equation} \frac{d\theta}{dt}=\frac{\partial }{\partial t}(\partial_i v^i)+v^j\partial_j(\partial_i v^i). \label{derivation1} \end{equation} Since partial derivatives commute, we have \begin{equation} \frac{d\theta}{dt}= \partial_i \partial_t v^i+v^j\partial_i \partial_j v^i \label{derivation2} \end{equation} which can be rearranged to read \begin{equation} \frac{d\theta}{dt}=\partial_i(\partial_t v^i+v^j\partial_j v^i)-(\partial_iv^j) (\partial_j v^i)\\ =\partial_i{\Bigg[}\frac{dv^i}{dt}{\Bigg]}-(\partial_iv^j) (\partial_j v^i) \label{derivation3} \end{equation} The term in square brackets is simply the acceleration of the fluid, which by Newton's Law is given by $-\partial^i\phi$, where $\phi$ is the Newtonian potential. The term $\partial_i v_j$ can be decomposed into its symmetric part $\gamma_{ij}=(\partial_i v_j+\partial_j v_i)/2$ and antisymmetric part $\omega_{ij}=(\partial_i v_j-\partial_j v_i)/2$. $\omega_{ij}$ is evidently the angular velocity of rotation of the fluid. $\gamma_{ij}$ is the strain rate. Strain is a quantity familiar from the theory of elasticity.
$\gamma_{ij}$ tells us the rate of change of shape of an imaginary sphere of fluid, which we colour red as an aid to imagination. $\gamma_{ij}$ can be further decomposed into a trace-free part $\sigma_{ij}=\gamma_{ij}-\theta \delta_{ij}/3$, called the shear rate, which changes the shape but not the volume, and the expansion rate $\theta= \gamma_i^i$, which changes only the volume. (We use indices which are superscripts and subscripts to anticipate similar conventions in relativity. In this part there is no difference as the indices are raised and lowered by $\delta_{ij}$.) The first term in (\ref{derivation3}) becomes $-\partial_i\partial^i\phi$ and can be written using Poisson's equation for the gravitational potential as $-4\pi G\rho$, where $G$ is Newton's gravitational constant and $\rho$ is the density of matter. We also write $\sigma_{ij}\sigma^{ij}$ as $\sigma^2$ and $\omega_{ij}\omega^{ij}$ as $\omega^2$. Putting it all together, we arrive at the Newtonian version of Raychaudhuri's equation \cite{ellis1998}, which reads \begin{equation} \frac{d\theta}{dt}=-4\pi G \rho-\theta^2/3-\sigma^2+\omega^2 \label{newtonianraychaudhuri} \end{equation} Since the density of matter is positive, all the terms on the right (except the last one) are negative. This tells us that in a nonrotating fluid, the expansion rate is always decreasing. An initially expanding ball of fluid will expand at a slower rate, and a contracting ball will contract even faster. This is an expression of the attractive nature of gravity. Note that rotation is the only term that counteracts the attractive effect of gravity. This is intuitively clear: rotation gives a centrifugal force that resists the attraction of gravity. This is what causes the equatorial bulge of the Earth and keeps the planets from falling into the Sun. Another force that counteracts gravity (which we have not included in the simple picture above) is outward pressure. This is what keeps the Sun from collapsing under its own weight.
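Before Poisson's equation is invoked, (\ref{newtonianraychaudhuri}) rests on the purely kinematic identity $d\theta/dt=\partial_i[dv^i/dt]-\theta^2/3-\sigma^2+\omega^2$, which holds for any smooth velocity field. A quick symbolic check (the polynomial field below is an arbitrary choice):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (x, y, z)
# Any smooth velocity field will do; the identity is purely kinematic.
v = [x**2*y + t*z, x*z + y**2*t, x*y*z + t**2*x]

def ddt(f):  # convective derivative, eq. (convective)
    return sp.diff(f, t) + sum(v[j]*sp.diff(f, X[j]) for j in range(3))

theta = sum(sp.diff(v[i], X[i]) for i in range(3))      # expansion rate
a = [ddt(v[i]) for i in range(3)]                       # acceleration dv^i/dt
M = sp.Matrix(3, 3, lambda i, j: sp.diff(v[j], X[i]))   # M_ij = d_i v_j
omega = (M - M.T)/2                                     # rotation
sigma = (M + M.T)/2 - theta*sp.eye(3)/3                 # shear (trace-free)
s2 = sum(sigma[i, j]**2 for i in range(3) for j in range(3))
w2 = sum(omega[i, j]**2 for i in range(3) for j in range(3))
div_a = sum(sp.diff(a[i], X[i]) for i in range(3))      # = -4*pi*G*rho by Poisson

print(sp.expand(ddt(theta) - (div_a - theta**2/3 - s2 + w2)))  # 0
```

The cross terms between $\sigma_{ij}$ and $\omega_{ij}$ cancel because the trace of a symmetric times an antisymmetric matrix vanishes, which is why only the three quadratic scalars survive.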
\section{Einstein's Relativistic Theory of Gravitation} In 1905, Einstein proposed the special theory of relativity, which revolutionised our notion of time. An essential feature of this theory is the melding of space and time into a single entity called spacetime. It resolved some earlier apparent conflicts between electromagnetism and Newtonian ideas of space and time. It gave a special role to the fundamental constant $c$, the speed of light. It predicted that mass and energy could be transmuted into each other and were therefore essentially the same thing. Every prediction made by the theory has since been verified and the topic is now undergraduate physics material. Around 1907, Hermann Minkowski gave the special theory of relativity a new formulation in terms of four dimensional geometry. This reformulation was initially rejected by Einstein as mere window dressing. However, by 1912, he came to appreciate the elegance and power of Minkowski's ideas and used it to incorporate the effects of gravitation within the theory of relativity. By 1915 he had formulated the general theory of relativity. In this theory, every small region of spacetime can be regarded by a freely falling observer as being flat and devoid of gravity, being described adequately by the special theory of relativity. However the flatness is only apparent and local. The effects of gravity are only manifest on larger scales, in gradients of gravitational fields or second derivatives of the Newtonian gravitational potential. It is these second derivatives that contain information about the curvature of spacetime. It is useful to make an analogy with the curvature of the Earth. In an approximate sense, the Earth can be regarded as flat, since most of us only explore a tiny fraction of its surface in a day's work. This fact along with the echo chamber provided by the instant connectivity of the internet has led to a thriving community of `Flat Earthers' who actually believe that the Earth is flat. 
There are now `Flat Earthers' distributed all over the globe! (Read that again, slowly!) Each of these Flat Earthers approximates the surface of the Earth by a tangent plane at his location. However, the tangent plane at Sydney is different from the tangent plane in Cairo: they do not mesh together to form a single plane. The curvature of the Earth is captured in the variation of the tangent planes and revealed by simple experiments that explore a large enough fraction of the Earth's surface--like international travel or watching ships sail over the horizon. In Einstein's general theory of relativity, freely falling observers (FFOs) can (to a good approximation) pretend that their spacetime is flat and described by special relativity. As in the example of the Earth, these separate flat spacetimes of different FFOs do not mesh together to form a single flat spacetime. Spacetime is curved by gravity. The theory is simple and elegant in concept, though it does involve some higher mathematics like tensor algebra and differential geometry. The Newtonian potential is replaced by ten functions $g_{\mu\nu}$, the metric describing the geometry of spacetime. The source of gravitation is not just matter, but also matter currents, and stresses. The theory reduces to Newton's theory for weak fields and slow motions. In Minkowski spacetime (which is flat and devoid of gravity), particles follow straight lines. Massive particles have speeds less than that of light (following timelike curves) and massless ones have speed $c$, (following null curves). In Einsteinian spacetime particles follow geodesics, which are curves which appear locally straight to FFOs. Just as great circles on a spherical Earth will appear locally straight to Flat Earthers. A more precise definition of (timelike) geodesics is that they are curves which extremise the length between their endpoints. Null geodesics also obey an extremum principle: Fermat's principle. 
\section{The Relativistic Raychaudhuri Equation} Just after Einstein died in April 1955, AKR's paper appeared in the Physical Review, containing an early form of the Raychaudhuri equation and the first ever singularity theorem. This started a new phase in general relativity. This equation was the basis of all the singularity theorems that were to follow in the coming years. Einstein's general theory gives us a framework for the study of cosmology, the dynamics of the Universe. In the early 1950s AKR was working on cosmology as a researcher at the IACS, Kolkata. The Friedmann-Lemaitre-Robertson-Walker models described the Universe as homogeneous and isotropic on the cosmological scale. The `cosmic fluid' has galaxies as its point particles. Einstein's theory admitted an expanding Universe, consistent with Hubble's observations of the recession of distant galaxies. The Universe had a beginning a finite time in the past. The main problem with these models was that the curvature grew without bound as one approached the beginning of the Universe, a feature that mathematicians describe as a singularity. This was disturbing to many physicists including Einstein, since it meant that our physical laws would no longer apply. There were some who believed that the singularity was an artefact of the high degree of symmetry (homogeneity and isotropy) of the solution. It was felt that a slight perturbation of these models would remove the singularity and the associated physical problem. It was in this context that AKR derived his equation. The equation reads \begin{equation} \frac{d\theta}{d\tau}=-8\pi G (T_{\mu \nu}-\frac{1}{2} T g_{\mu\nu}) t^\mu t^\nu -\theta^2/3-\sigma^2+\omega^2, \label{relativisticraychaudhuri} \end{equation} where $T_{\mu\nu}$ is the stress-energy tensor, $T$ its trace, $t^\mu$ is the four-vector tangent to the world-line of the galaxy and $g_{\mu\nu}$ is the metric tensor, which describes the geometry of spacetime.
A comparison of this equation with (\ref{newtonianraychaudhuri}) shows some similarities and some differences. The similarity is due to the fact that they capture the same physical effect: the tendency of gravity to bring matter together. The differences arise because this is a relativistic equation. First, the Newtonian time $t$ is replaced by $\tau$, the proper time of an observer freely falling with the cosmic fluid. Second, the source of gravity is not just a scalar $\rho$, the matter density, but a tensor $T_{\mu\nu}$ which has components corresponding to matter density as well as momentum density and pressure (which is a stress). The beauty of this equation (\ref{relativisticraychaudhuri}) is that it is independent of any symmetry considerations. It is also general in that it allows all forms of matter. The consequences of this equation follow from requiring only that the matter satisfies a positivity condition \begin{equation} (T_{\mu\nu}-\frac{1}{2} Tg_{\mu\nu})t^\mu t^\nu\ge0 \label{strongenergycondition} \end{equation} for all timelike $t^\mu$. This condition is met by all known forms of classical matter, and is called the strong energy condition. In the absence of rotation ($\omega=0$), it leads to the inequality \begin{equation} \label{generalinequality} \frac{d\theta}{d\tau}\le -\theta^2/3 \end{equation} which can be rearranged to read \begin{equation} \label{rearranged} \frac{d(\theta^{-1})}{d\tau}\ge 1/3. \end{equation} An expanding Universe can be regarded in time reverse as contracting. Thus, the problems of gravitational collapse to black holes and the big bang origin of the expanding Universe are two sides of the same coin; they are related by time reversal. The only difference is that the singularity is in the future for black holes and in the past for the big bang. These singularities are places where spacetime begins or ends \cite{hawking1975large}.
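To see what inequality (\ref{rearranged}) implies, saturate it with zero shear, rotation and matter: then $\theta^{-1}(\tau)=\theta_0^{-1}+\tau/3$ exactly, so an initially contracting congruence ($\theta_0<0$) reaches $\theta=-\infty$ at the finite proper time $\tau^*=3/|\theta_0|$. A crude numerical illustration (the numbers are arbitrary):

```python
theta0 = -2.0
tau_star = 3.0/abs(theta0)        # exact blow-up time from 1/theta = 1/theta0 + tau/3

tau, theta, dtau = 0.0, theta0, 1e-4
while theta > -1000.0:            # forward Euler until theta has all but diverged
    theta -= theta**2/3.0*dtau    # d(theta)/d(tau) = -theta^2/3
    tau += dtau

print(round(tau, 2), tau_star)    # the numerical blow-up time approaches tau* = 1.5
```

Because the genuine inequality only makes $d\theta/d\tau$ more negative, matter and shear can only hasten this focusing, never prevent it.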
An initially contracting Universe has $\theta^{-1}$ tending to zero from negative values and $\theta$ diverges to negative infinity in a finite amount of proper time. This results in a divergence of the density of matter at the big bang, a physical singularity. Giving up the high degree of symmetry of cosmological models does not save us from the singularity. This was the main conclusion of AKR's landmark paper. \section{The Sachs equation} A similar equation describing the motion of light in a gravitational field was derived by R. Sachs\cite{sachs}. We can think of light as consisting of photons and apply the logic of the last section to a fluid whose particles are photons rather than galaxies. In relativity, light travels along null geodesics rather than the timelike geodesics of massive particles. This results in some changes. Sachs' equation for the expansion of the fluid reads \begin{equation} \frac{d\theta}{d\lambda}=-8\pi G T_{\mu \nu} l^\mu l^\nu -\theta^2/2-\sigma^2+\omega^2. \label{sachs} \end{equation} The differences are \begin{enumerate} \item The tangent vector to the fluid flow is now written as $l$ since it is null. \item The negative matter term on the right has a simpler form. \item Since the proper time along a null geodesic vanishes, $\tau$ is replaced by $\lambda$, which is called an affine parameter. \item There are 3 dimensions transverse to the tangent vector $l$. But since $l$ is transverse to itself, there are effectively only two transverse dimensions, resulting in $\theta^2/2$ on the right hand side. This is similar to the idea that a massive spin one particle has three degrees of freedom, while the massless spin one photon has only two. 
\end{enumerate} \begin{figure}[h!t] \includegraphics[scale=1.2]{Figure1.pdf} \caption{Rotation, shear and expansion: from left to right, an {\it object} placed in the path of a light beam casts a shadow which may be {\it rotated}, {\it sheared} or {\it expanded}.} \end{figure} A simple thought experiment serves to bring out the physical interpretation of rotation, shear and expansion. Imagine placing an object such as the red one shown in Fig. 1 in the path of a light beam, transverse to the direction of propagation. This object will cast a shadow on a transverse screen placed further along the ray. The shadow may be rotated, sheared or expanded. These ideas led to a description of spacetimes in terms of optical scalars in the Newman-Penrose formalism. Optical scalars are a set of complex numbers which describe the distortion of light beams in a gravitational field. They are now widely used in numerical relativity and the study of gravitational waves. \color{blue} \section{Box 1: Singularity Theorems} {\it \color{red} Statutory warning: more advanced material, presented impressionistically; refer to \cite{Wald:106274,hawking1975large} for better explanations.} A serious problem with discussing singularities in general relativity is the very definition of one. One can always excise singular points from a spacetime and present it as non-singular. (For instance, a cone is singular at its tip, but one can simply snip off the tip and claim it is nonsingular.) This will however result in incomplete geodesics: a particle (or photon) following a geodesic may disappear in a finite time (or affine parameter). If the spacetime cannot be extended to remove this unphysical behaviour, we describe the spacetime as singular. The singularity theorems seek to prove the existence of incomplete geodesics, {\it i.e.}\ that spacetime has an edge.
The work of Penrose, Hawking and Geroch on singularity theorems in the 1960s and 1970s uses global differential geometry and topology to prove precisely stated theorems, which are far beyond the scope of this article. We will give a flavour of an argument due to Penrose which deals with gravitational collapse. Cosmological Singularities in the past can be dealt with by time reversal, as discussed earlier. In 1965, Penrose wrote a paper introducing the notion of a trapped surface. Penrose's contribution\cite{penrose1965} was recognised by the Physics Nobel Prize \cite{bagla2020,samuel2020} for 2020. If we emit a flash of light from a closed two dimensional surface ${\cal S}$, there is an expanding out-going flash and a contracting in-going flash (or wavefront). In strong gravitational fields (like inside the event horizon of a Schwarzschild black hole) it can happen that both these flashes are contracting; the expansion $\theta$ is negative. Intuitively it is clear that observers in the spacetime region between the two flashes are trapped between two wavefronts whose area is decreasing with time. There is no escape for them from this trapped region and they will meet an untimely end. To prove this mathematically, one has to argue more formally \cite{penrose1965,Wald:106274}. Let ${\cal F}$ be the set of all points of spacetime, which can be reached from ${\cal S}$ by timelike or null curves pointing into the future. One can show that the boundary of ${\cal F}$ is ruled by null geodesics emerging orthogonally from ${\cal S}$ which have not focussed. This is because null geodesics travel at the speed of light and nothing travels faster than light. However, if two neighbouring geodesics focus, they leave the boundary of ${\cal F}$ and enter into the interior of ${\cal F}$. The null curve described by traversing one geodesic to the focus and jumping to the other geodesic, is not a geodesic but a broken geodesic. (Just as a broken straight line is not a straight line.) 
A broken null geodesic represents light which changes direction (say it bounces off a mirror) and whose endpoints can be connected by a timelike curve. We know from Raychaudhuri's equation that all the null geodesics emanating from ${\cal S}$ will thus leave the boundary of ${\cal F}$ in a finite amount of affine parameter. This shows that the boundary of ${\cal F}$ is compact. Let us assume that the spacetime admits a spatial slice $\Sigma$ that goes out to infinity. We also assume \cite{Wald:106274} that there is a globally nonvanishing timelike past pointing vector field $v$ on spacetime. By looking at points on the boundary of ${\cal F}$, and tracing them back along $v$ to $\Sigma$, Penrose was able to establish a continuous correspondence between a compact space and a non-compact one. This contradiction proves the result. The result is robust against perturbations (since a small perturbation of a trapped surface is still trapped) and shows that singularities are generic. \color{black} \section{Conclusion} Let us note a few points which we glossed over in the main text. \begin{enumerate} \item While all classical known forms of matter (galaxies, neutron stars) satisfy the strong energy condition, a positive cosmological constant does not. This leads to a kind of repulsive gravity which results in an accelerated expansion of the Universe. Astronomers have inferred such accelerations by measuring the red shifts of dim and distant supernovae. Viewed as a form of matter, the cosmological constant is referred to as Dark Energy. \item We mentioned the focussing of geodesics as the main consequence of Raychaudhuri's equation. Such focussing can happen even in Minkowski spacetime, resulting in caustics. (You may have seen caustics in a cup of tea on a bright day.) Caustics are not singularities of the spacetime unless there is matter following the geodesics (as in \cite{raychaudhuri1955}). 
\item Although $\theta$ has the dimension of a rate $1/T$, it is commonly referred to as the expansion although it is really an expansion {\it rate}. \item Apart from the equation for the propagation of the expansion, there are similar equations for the propagation of shear and rotation along light rays. Some authors (e.g.\ \cite{sayanres,hawking1975large}) talk of Raychaudhuri equation{\it s} (plural) to include these as well as Sachs' equation. \item For much of its history, the general theory of relativity (GTR) lay in a somewhat dormant state. There were some predictions of the theory which were verified by tests (the ``classical tests'') in the solar system. These were very small corrections to Newton's theory, and soon the dramatic successes of the quantum theory in laboratory physics overshadowed the minuscule effects predicted by GTR. The worldwide renaissance of GTR started in the late 1950s, driven by both experiment and theory. The discovery of the cosmic microwave background made cosmology an experimental science. Radio astronomy revealed a violent Universe and the need for a better understanding of relativistic astrophysics, black holes and gravitational radiation. The early neglected work of S. Chandrasekhar (1930s) was recognised for its true worth. Major events were the relativity conferences held in Chapel Hill (1957) and Texas (1963). Key players in this renaissance were (apart from the names already mentioned) Hermann Bondi, Dennis Sciama and John Wheeler, who guided, inspired and led the way to bring GTR to grips with the real Universe. \item While AKR, working in near complete isolation, derived his equation in 1953, there were numerous delays in publication (see below) which resulted in his paper being published only in 1955. Considering that Penrose's singularity theorem appeared only in 1965, this is remarkably prescient. \end{enumerate} This article has focussed on the Science behind the Raychaudhuri equation.
A personal account of AKR and his work can be found in \cite{parongama2008,parongamabook}. (The latter is partly in Bengali and partly in English.) An inspiring account of the history and background of AKR's work is given in Ref. \cite{majumdar2020}. Apart from this path-breaking work, AKR published in several areas of physics, wrote text books and nurtured generations of students in Kolkata. He taught students to question all authority, including his own. As a teacher he was revered by his students and was accorded the high honour of being initialised as AKR \cite{parongama2008}. In closing, we spend a few words on the circumstances around the publication of the paper \cite{raychaudhuri1955}. The manuscript was rejected by editors, mislaid or misunderstood by referees and inordinately delayed before publication, to the great frustration of the author. As a researcher at IACS, Kolkata, AKR was constantly harassed for working on his `abstruse' ideas and pressured to work on areas other than relativity, the subject closest to his heart. The discouragement was severe, and a part of it came from luminaries of Indian science, some of whom did have the ability, though not the time or inclination, to understand AKR's work. In due course AKR left IACS and took up teaching at Presidency College, to the great benefit of the students there. Despite the discouragement and lack of appreciation, AKR pressed on with his researches. In a few years, the work was recognised for its insight by researchers all over the world. Indian recognition of AKR's work was slow in coming. Only after his name was well known in the West did the Indian scientific community wake up to the fact that we had a deep and original thinker in our midst. Perhaps a sorry reflection of our colonial past! The main theoretical impact of the singularity theorems is that they predict the demise of general relativity as a fundamental theory.
In fundamental physics, quantum mechanics gives a unitary description of all known interactions (electromagnetic, weak and strong); the description conserves information and is reversible. Understanding the irreversible nature of gravitational collapse and the attendant loss of information is a problem at the frontier\cite{spentarajesh} of theoretical physics. For an account of current research on the information paradox, see the article by Raghu Mahajan in this issue. \newpage \begin{figure}[htp] \includegraphics[scale=2.5]{Figure2.png} \vspace{-.6cm} \caption{Figure shows Penrose's picture of gravitational collapse. Shown in black is an initial spatial slice $\Sigma$. Matter (blue) is collapsing under its gravity. A trapped surface ${\cal S}$ forms and its future ${\cal F}$ has a boundary consisting of initially ingoing (${\cal B}^-$) and outgoing (${\cal B}^+$) null geodesics which have not focussed. Image credit \copyright Roshni Rebecca Samuel.} \end{figure} \newpage \section{Acknowledgements} It is a pleasure to thank Sayan Kar, H. S. Mani, Rajaram Nityananda, Vishwambhar Pati, N. Sathyamurthy, Parongama Sen, Sukanya Sinha, Supurna Sinha and Spenta Wadia for reading through the manuscript and making constructive suggestions; and Roshni Rebecca Samuel for her artistic rendering of Figure 2.
\section{Introduction} Ebola virus disease (EVD) is a severe disease in humans which has infected nearly 25 thousand individuals and claimed more than ten thousand lives during the recent outbreak in West Africa, according to the report of the World Health Organization dated 18 March 2015 \cite{WHO,CDC}. The most affected countries are Guinea, Liberia and Sierra Leone. This study aims to provide some real-time estimations of the outbreak in these three countries using the reported cumulative case data. Specifically, we will estimate the following quantities: \begin{enumerate} \item basic reproduction number $R_0$, which is defined as the average number of new cases caused by a single infective individual during one infectious period; \item inflection point $t_c$, which marks the time when the increment speed of cumulative case numbers starts to slow down; \item final outbreak size $K$, which indicates the total number of infectious cases throughout the outbreak wave; \item peak time $t_p$, which is defined as the critical time when the daily infectious number reaches its maximum. \end{enumerate} All of these indicators provide quantitative information about the severity of a disease outbreak. \section{Methods} Following \cite{WWY12}, we study the epidemic model: \begin{equation}\label{model} \begin{aligned} S'(t)&=-{\beta S(t)I(t)\over S(t)+I(t)};\\ I'(t)&={\beta S(t)I(t)\over S(t)+I(t)}-\gamma I(t), \end{aligned} \end{equation} where $S(t)$ and $I(t)$ are the numbers of susceptible and infective individuals at time $t$, respectively. The constant $\beta$ denotes the transmission rate of the disease, and the constant $\gamma$ corresponds to the removal rate of infective individuals. The basic reproduction number \cite{DHR90,vW02} is given by \begin{equation}\label{R0} R_0={\beta\over\gamma}. \end{equation} It is noted that a disease outbreak occurs if and only if $R_0>1$.
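The threshold role of $R_0$ can be seen directly by integrating \eqref{model} numerically; a minimal forward-Euler sketch with illustrative parameter values (not those fitted below):

```python
# Forward-Euler integration of the S-I model; all parameter values are illustrative.
def simulate(beta, gamma, S0=1e6, I0=10.0, days=60, dt=0.01):
    S, I = S0, I0
    for _ in range(int(days/dt)):
        new = beta*S*I/(S + I)        # incidence beta*S*I/(S+I)
        S -= new*dt
        I += (new - gamma*I)*dt
    return I

gamma = 1/7.5                          # removal rate (7.5-day infectious period)
print(simulate(beta=1.2*gamma, gamma=gamma))   # R0 = 1.2: infectives grow
print(simulate(beta=0.8*gamma, gamma=gamma))   # R0 = 0.8: infection dies out
```

While the susceptible pool is still large, $I'(t)\approx(\beta-\gamma)I(t)$, so the epidemic grows precisely when $\beta>\gamma$, i.e. $R_0>1$.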
The differential system \eqref{model} can be solved explicitly and its solution is given by \begin{equation}\label{SI} \begin{aligned} S(t)=&K[1+{\textstyle{R_0-1\over R_0}}e^{\gamma(R_0-1)(t-t_c)}]^{-R_0/(R_0-1)};\\ I(t)=&K[1+{\textstyle{R_0-1\over R_0}}e^{\gamma(R_0-1)(t-t_c)}]^{-1/(R_0-1)} \\&-K[1+{\textstyle{R_0-1\over R_0}}e^{\gamma(R_0-1)(t-t_c)}]^{-R_0/(R_0-1)}, \end{aligned} \end{equation} where $K$ and $t_c$ are two constants of integration. (The prefactor $(R_0-1)/R_0$ in front of the exponential fixes the time origin so that $t_c$ is the inflection point below.) Now, we define the cumulative infective case number at time $t$ as \begin{equation} C(t)=\int_{-\infty}^t{\beta S(s)I(s)\over S(s)+I(s)}\,ds. \end{equation} From \eqref{model} and \eqref{SI}, we have \begin{equation}\label{C} C(t)=K-K[1+{\textstyle{R_0-1\over R_0}}e^{\gamma(R_0-1)(t-t_c)}]^{-R_0/(R_0-1)}. \end{equation} Here, the constant $K=C(\infty)$ has the biological meaning of final outbreak size. It can be verified that $C''(t_c)=0$; hence, $t_c$ is the inflection point of $C(t)$. We remark that the inflection point $t_c$ is related to but different from another commonly used quantity, the peak time, denoted by $t_p$. The peak time is defined as the time when the infective case number achieves its maximum, namely, $I'(t_p)=0$. It follows from \eqref{SI} that \begin{equation} t_p=t_c+{\ln R_0\over\gamma(R_0-1)}. \end{equation} In the case when $R_0$ is close to $1$, namely, $\ln R_0\approx R_0-1$, we can approximate the difference $t_p-t_c$ by $1/\gamma$. Thus, the peak time occurs about one infectious period after the inflection point \cite{WWY12}. Richards' empirical model \cite{Ri59} was suggested to provide real-time estimation of a disease outbreak; see \cite{HC06} for example. However, some of the parameters in Richards' model do not have clear biological meanings \cite{WWY12}. The advantage of formula \eqref{C} is that all of the parameters in this formula have significant biological interpretations.
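The interpretations of $t_c$ and $t_p$ can be confirmed numerically from the closed-form solution. In the sketch below the parameter values are illustrative, and the prefactor $(R_0-1)/R_0$ multiplying the exponential encodes the time-origin convention under which $t_c$ is exactly the inflection point:

```python
import numpy as np

gamma, R0, K, tc = 1/7.5, 1.2, 10000.0, 100.0   # illustrative values
a, m = gamma*(R0 - 1.0), R0/(R0 - 1.0)

u = lambda t: 1.0 + np.exp(a*(t - tc))/m        # bracketed factor in the solution
C = lambda t: K - K*u(t)**(-m)                  # cumulative cases
I = lambda t: K*u(t)**(-1.0/(R0 - 1.0)) - K*u(t)**(-m)  # infectives

t = np.linspace(0.0, 400.0, 400001)
t_inflect = t[np.argmax(np.gradient(C(t), t))]  # C'' = 0 where the daily incidence C' peaks
t_peak = t[np.argmax(I(t))]                     # I'(t_p) = 0

print(t_inflect, t_peak - tc, np.log(R0)/a)     # ~ t_c, and t_p - t_c = ln(R0)/(gamma*(R0-1))
```

With $R_0=1.2$ the lag $t_p-t_c=\ln R_0/(\gamma(R_0-1))\approx 6.8$ days, close to the one-infectious-period approximation $1/\gamma=7.5$ days.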
We will use the explicit formula of $C(t)$ in \eqref{C} to fit the reported cumulative case numbers of the 2014 Ebola outbreak in West Africa and provide real-time estimation of the basic reproduction number $R_0$, inflection point $t_c$, final outbreak size $K$ and peak time $t_p$. As pointed out in \cite{WWY12}, one should fix the value of $\gamma$, the removal rate of infective individuals, to resolve possible overfitting problems. Note that $1/\gamma$ can be regarded as the infectious period which characterizes the average duration of an individual being infective. In most cases, an individual is removed from the infective group either by recovery or death. For the fatal cases of Ebola virus disease, death usually occurs between 6 and 16 days (with mean 7.5 days) after onset of symptoms; and for the non-fatal cases, patients' symptoms may improve around day 6, but they need more time to recover \cite{CDC2}. Convalescent patients may still be infective because the Ebola virus RNA may remain in the body fluid for a couple of weeks even though the risk of transmission from them is low \cite{BTD07}. It is thus reasonable to assume an infectious period of 7.5 days with some possible perturbations in the interval between 6 and 16 days. In our simulation, we first fix $1/\gamma=7.5$ days to estimate the basic reproduction number $R_0$, inflection point $t_c$, peak time $t_p$ and final outbreak size $K$ using reported cumulative case data of Ebola virus in Guinea, Liberia and Sierra Leone, respectively \cite{WHO,CDC}. Also, we provide the 95\% confidence intervals of each estimated parameter value using the bootstrap method. Next, we vary the value of the parameter $1/\gamma$ from 6 to 16 days and investigate the sensitivity of the fitted parameter values. \section{Results} The basic reproduction number $R_0$ is estimated as 1.116 (95\% CI: 1.115-1.116) for Guinea, 1.226 (95\% CI: 1.225-1.228) for Liberia, and 1.181 (95\% CI: 1.181-1.182) for Sierra Leone. 
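The fitting procedure just described can be sketched as follows. The snippet generates synthetic cumulative case counts from formula \eqref{C} and recovers the parameters by nonlinear least squares with $\gamma$ held fixed; the parameter values and the use of SciPy's \texttt{curve\_fit} are illustrative assumptions, whereas the actual study fits reported WHO case counts and obtains confidence intervals by bootstrap.

```python
import numpy as np
from scipy.optimize import curve_fit

gamma = 1 / 7.5  # fixed removal rate: infectious period of 7.5 days

def C(t, K, R0, tc):
    # cumulative case count, formula (C), with gamma held fixed
    g = gamma * (R0 - 1)
    return K - K * (1 + np.exp(g * (t - tc))) ** (-R0 / (R0 - 1))

# Synthetic "reported" data generated from a known (hypothetical) ground truth
K_true, R0_true, tc_true = 3300.0, 1.12, 260.0
t = np.arange(0.0, 400.0, 5.0)
data = C(t, K_true, R0_true, tc_true)

# Fit K, R0, tc; the bound R0 > 1 keeps the exponent well defined
popt, _ = curve_fit(C, t, data, p0=(3000.0, 1.2, 250.0),
                    bounds=([0.0, 1.001, 0.0], [1e5, 3.0, 500.0]))
K_fit, R0_fit, tc_fit = popt
```

On noiseless synthetic data the three parameters are recovered essentially exactly; on reported data, resampling the residuals and refitting yields the bootstrap confidence intervals.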
The inflection point $t_c$ is estimated as 21 November 2014 for Guinea, 24 October 2014 for Liberia, and 28 November 2014 for Sierra Leone. As shown in Table \ref{table-fit}, the lengths of 95\% confidence intervals for the estimated inflection points are no more than one day. \begin{table*}[htp] \centering \begin{tabular}{ccccc} \hline & final outbreak size $K$ & basic reproduction number $R_0$ & inflection point $t_c$ & peak time $t_p$\\ \hline Guinea & 3268 [3257, 3274] & 1.116 [1.115, 1.116] & 266 [265, 266] & 273 [272, 273]\\ Liberia & 8630 [8605, 8660] & 1.226 [1.225, 1.228] & 238 [237, 238] & 244 [244, 245]\\ Sierra Leone & 11227 [11198, 11253] & 1.181 [1.181, 1.182] & 273 [272, 273] & 279 [279, 280]\\ \hline \end{tabular} \caption{Estimated parameter values with 95\% confidence intervals. Here, day 1 corresponds to 1 March 2014. So, days 266, 238 and 273 correspond to 21 November 2014, 24 October 2014, and 28 November 2014, respectively.} \label{table-fit} \end{table*} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{Guinea} \caption[Guinea]{Fitted graph for the reported cumulative cases in Guinea. The dots are real data and the curve is plotted using fitted results. }\label{fig-Guinea} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{Liberia} \caption[Liberia]{Fitted graph for the reported cumulative cases in Liberia. The dots are real data and the curve is plotted using fitted results. }\label{fig-Liberia} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{SierraLeone} \caption[Sierra Leone]{Fitted graph for the reported cumulative cases in Sierra Leone. The dots are real data and the curve is plotted using fitted results. }\label{fig-SierraLeone} \end{figure} The fitted curves together with reported cumulative case data are illustrated in Figure \ref{fig-Guinea} (Guinea), Figure \ref{fig-Liberia} (Liberia) and Figure \ref{fig-SierraLeone} (Sierra Leone). 
It is noted that in each of these three figures, there is a jump in the reported cumulative case numbers in late October 2014. This is due to a more comprehensive assessment of patient databases in the World Health Organization report dated 29 October 2014 \cite{WHO2}. Among these three countries, Liberia has the most significant gap, which may account for the result that the inflection point for Liberia is about one month earlier than in the other two countries. We also estimate the final outbreak size as 3268 for Guinea, 8630 for Liberia, and 11227 for Sierra Leone. All of these estimated values are smaller than the cumulative case numbers reported on 18 March 2015. This indicates that another potential outbreak wave may be approaching \cite{HC06}. Now, we gradually increase the value of the infectious period $1/\gamma$ from 6 to 16 days, and conduct numerical simulations. The fitted value of the basic reproduction number $R_0$ then increases from 1.092 to 1.252 for Guinea, from 1.179 to 1.503 for Liberia, and from 1.144 to 1.400 for Sierra Leone; see Figure \ref{fig-R0}. The estimated final outbreak size stays in a range of [3261, 3307] for Guinea, [8620, 8677] for Liberia, and [11209, 11319] for Sierra Leone; see Figure \ref{fig-K}. On the other hand, the inflection point $t_c$ and peak time $t_p$ do not vary much; see Figures \ref{fig-tc} and \ref{fig-tp}. For Guinea, $t_c$ decreases from 266 (21 November 2014) to 264 (19 November 2014). For Liberia, $t_c$ decreases from 238 (24 October 2014) to 235 (21 October 2014). For Sierra Leone, $t_c$ decreases from 273 (28 November 2014) to 270 (25 November 2014). The peak time $t_p$ increases from 272 (27 November 2014) to 279 (4 December 2014) for Guinea, from 244 (30 October 2014) to 248 (3 November 2014) for Liberia, and from 279 (4 December 2014) to 284 (9 December 2014) for Sierra Leone. 
We observe that the fitted inflection point $t_c$ and peak time $t_p$ are stable under perturbations on the infectious period $1/\gamma$. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{R0} \caption[Basic reproduction number]{Estimated values of the basic reproduction number when the infectious period increases from 6 to 16 days.}\label{fig-R0} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{tc} \caption[Inflection point]{Estimated values of the inflection point when the infectious period increases from 6 to 16 days. Here, day 1 corresponds to 1 March 2014. So, days 266, 238 and 273 correspond to 21 November 2014, 24 October 2014, and 28 November 2014, respectively.}\label{fig-tc} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{tp} \caption[Peak time]{Estimated values of the peak time when the infectious period increases from 6 to 16 days. Here, day 1 corresponds to 1 March 2014.}\label{fig-tp} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{K} \caption[Final outbreak size]{Estimated values of the final outbreak size when the infectious period increases from 6 to 16 days.}\label{fig-K} \end{figure} \section{Discussion} This study provides real-time estimation of the basic reproduction number, final outbreak size, inflection point and peak time for the ongoing Ebola outbreak in West Africa using reported cumulative case data. The fitted basic reproduction numbers are smaller than those estimated in \cite{Al14}, where only the data until 20 August 2014 was used. This indicates that the disease control policy became more effective during the late stage of the outbreak. We also observe that the growth of the cumulative case number began to slow down after 21 November 2014 in Guinea, 24 October 2014 in Liberia, and 28 November 2014 in Sierra Leone. 
The estimated inflection points for Guinea and Sierra Leone are close to each other, but the one for Liberia is about one month earlier. This is due to a significant increase in the reported cumulative case number on 29 October 2014; see Figure \ref{fig-Liberia} and \cite{WHO2}. From Table \ref{table-fit}, we note that the estimated peak time lags about one week behind the estimated inflection point, while the infectious period is fixed at 7.5 days. This supports the conclusion in \cite{WWY12} that the peak time occurs about one infectious period after the inflection point. If we vary the infectious period from 6 to 16 days, the estimated basic reproduction number stays in a range of [1.092, 1.252] for Guinea, [1.179, 1.503] for Liberia, and [1.144, 1.400] for Sierra Leone. The estimated final outbreak size ranges from 3261 to 3307 for Guinea, from 8620 to 8677 for Liberia, and from 11209 to 11319 for Sierra Leone. The estimated inflection point and peak time are much more stable and only vary within small intervals. This demonstrates that our method is notably accurate in capturing the inflection point and peak time. The fitted final outbreak sizes in the three countries all fall below the reported cumulative case numbers, which can be considered a warning signal of a second outbreak wave.
\section{Introduction} Time-harmonic wave propagation is a mechanism at the center of a large number of physical and industrial applications. We may cite, among many, radar imaging \cite{dorf_2006a}, or seismic prospection \cite{tarantola_1984a}. In practice, numerical methods are required to approximately simulate the propagation of waves, and although several methods are available, it is still very challenging to compute accurate approximations in high-frequency regime. Here, we consider the scalar Helmholtz equation, which is probably the simplest model for this kind of problems. Specifically, given a compactly supported right-hand side $f: \mathbb R^d \to \mathbb C$, our model problem is to find $u: \mathbb R^d \to \mathbb C$ such that \begin{subequations} \label{eq_helmholtz_intro} \begin{equation} -k^2 \mu u-\div \left(\boldsymbol A \boldsymbol \nabla u\right) = f \text{ in } \mathbb R^d, \end{equation} where $\mu$ and $\boldsymbol A$ are (given) smooth coefficients that are respectively equal to $1$ and $\boldsymbol I$ outside a ball of radius $R > 0$, and $k > 0$ is the (given) wavenumber. This equation is supplemented with the Sommerfeld radiation condition at infinity. Namely, we require that \begin{equation} \label{eq_sommerfeld_intro} \frac{\partial u}{\partial |\boldsymbol x|}(\boldsymbol x) - ik u(\boldsymbol x) = o\left( |\boldsymbol x|^{(-d +1)/2}\right) \text{ as } |\boldsymbol x| \to +\infty. \end{equation} \end{subequations} A particularly important scenario covered by \eqref{eq_helmholtz_intro} is the scattering of a plane-wave, where the right-hand side takes the form \begin{equation} \label{eq_rhs_scattering} f := \left (k^2 \mu + \div \left (\boldsymbol A \boldsymbol \nabla \cdot\right )\right ) e^{ik\boldsymbol d\cdot \boldsymbol x}, \end{equation} where $\boldsymbol d \in \mathbb R^d$, $|\boldsymbol d| = 1$ is the incidence direction. 
As we propose a ``volumic'' method, we will actually replace the Sommerfeld radiation condition \eqref{eq_sommerfeld_intro} by a Perfectly Matched Layer (PML). This approach is entirely standard \cite{berenger_1994,collino_monk_1998a,galkowski2021perfectly}, and amounts to slightly modifying the coefficients $\mu$ and $\boldsymbol A$ in \eqref{eq_helmholtz_intro}. This process is detailed in Section \ref{section_model_problem}. In this work, we investigate the use of discretization spaces based on Gaussian coherent states (GCS), that is, functions of the form \begin{equation*} \Phi_{k,\boldsymbol x_0,\boldsymbol{\xi}_0}(\boldsymbol x) := \left (\frac{k}{\pi}\right )^{d/4} e^{-\frac{k}{2}|\boldsymbol x-\boldsymbol x_0|^2} e^{-ik\boldsymbol{\xi}_0 \cdot (\boldsymbol x-\boldsymbol x_0)}, \end{equation*} where $\boldsymbol x_0,\boldsymbol{\xi}_0 \in \mathbb R^d$ are user-selected parameters. The idea of decomposing a function as a discrete sum of Gaussian coherent states goes back to \cite{Gabor}. Here, following \cite{chaumontfrelet_ingremeau_2022a,daubechies_grossman_meyer_1986a}, we focus on a lattice of phase-space points $[\boldsymbol x^{k,\boldsymbol m},\bxi^{k,\boldsymbol n}] := \sqrt{(kR)^{-1}\pi} [\boldsymbol m,\boldsymbol n]$, $[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}$, that are spaced by $\sim (kR)^{-1/2}$, and we write \begin{equation*} \gs{k}{\boldsymbol m}{\boldsymbol n} := \Phi_{k,\boldsymbol x^{k,\boldsymbol m},\bxi^{k,\boldsymbol n}}. \end{equation*} Then, our discretization space is of the form \begin{equation*} W_\Lambda := \mathrm{Vect} \left\{ \gs{k}{\boldsymbol m}{\boldsymbol n}; \;\; [\boldsymbol m,\boldsymbol n] \in \Lambda \right\}, \end{equation*} where $\Lambda \subset \mathbb Z^{2d}$ is a carefully chosen set of indices. For the sake of simplicity, we will assume in the introduction that the domain is non-trapping. 
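Each Gaussian coherent state is $L^2$-normalized by construction. In one dimension this is easy to verify numerically; the value of $k$ and the lattice point below are arbitrary illustrations.

```python
import numpy as np

def gcs(x, k, x0, xi0):
    # 1D Gaussian coherent state; for d = 1 the constant is (k/pi)**(1/4)
    return (k / np.pi) ** 0.25 * np.exp(-0.5 * k * (x - x0) ** 2
                                        - 1j * k * xi0 * (x - x0))

k = 200.0
h = np.sqrt(np.pi / k)            # lattice spacing ~ k**(-1/2)
x = np.linspace(-3.0, 3.0, 40001)
phi = gcs(x, k, 4 * h, 7 * h)     # state at lattice point (m, n) = (4, 7)

# Riemann sum of |phi|^2; the Gaussian has fully decayed at the endpoints
norm2 = np.sum(np.abs(phi) ** 2) * (x[1] - x[0])
```

The computed squared norm equals $1$ up to quadrature error, matching the prefactor $(k/\pi)^{d/4}$.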
Our first result is that if $\Lambda$ is chosen as \begin{equation*} \Lambda := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}; \;\; |\boldsymbol x^{k,\boldsymbol m}|^2 + |\bxi^{k,\boldsymbol n}|^2 \leq \rho \right \}, \end{equation*} for $\rho > 0$, we have \begin{equation} \label{eq_approximation_fixed} \dim W_\Lambda \simeq \rho^d (kR)^d \;\; \text{ and } \;\; \min_{w \in W_\Lambda} \|u-w\|_{\widehat{H}^1(\mathbb R^d)} \leq C \rho^{-1} \|f\|_{L^2(\mathbb R^d)}, \end{equation} for a general $f \in L^2(\mathbb R^d)$ with $\operatorname{supp} f \subset B(0,R)$, where $\|{\cdot}\|_{\widehat{H}^1(\mathbb R^d)}$ is an $H^1(\mathbb R^d)$-norm including a weight at infinity (see \eqref{eq_weighted_norm_k} below). As we describe in more length afterwards, this result is not very impressive on its own. Specifically, it is a standard approximation result similar to polynomial approximations: we need a fixed number of points per wavelength to achieve a constant accuracy. Our second result, which is key, deals with the case where $f$ takes the particular form \eqref{eq_rhs_scattering}. In this case, we select \begin{equation*} \Lambda := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}; \;\; |p(\boldsymbol x^{k,\boldsymbol m},\bxi^{k,\boldsymbol n})| \leq k^{-1/2+\varepsilon} \right \} \end{equation*} where $\varepsilon > 0$ can be selected arbitrarily small and \begin{equation*} p(\boldsymbol x,\boldsymbol{\xi}) := \boldsymbol A(\boldsymbol x) \boldsymbol{\xi} \cdot \boldsymbol{\xi} -\mu(\boldsymbol x) \qquad \forall \boldsymbol x,\boldsymbol{\xi} \in \mathbb R^d, \end{equation*} is the principal symbol associated with the differential operator in \eqref{eq_helmholtz_intro}, and we have \begin{equation} \label{eq_approximation_asymptotic} \dim W_\Lambda \simeq (kR)^{d-1/2+\varepsilon} \;\; \text{ and } \;\; \min_{w \in W_\Lambda} \|u-w\|_{\widehat{H}^1(\mathbb R^d)} \leq C_{\varepsilon,m} (kR)^{-m} \qquad \forall m \in \mathbb N. 
\end{equation} This means that for right-hand sides corresponding to scattering problems (and actually, a wider family of right-hand sides), Gaussian coherent states provide an accurate solution with $\mathcal O((kR)^{d-1/2+\varepsilon})$ DOFs. In fact, the convergence is even super-algebraic as the frequency increases. To put \eqref{eq_approximation_fixed} and \eqref{eq_approximation_asymptotic} into perspective, we compare them with other standard methods. Actually, there are several options to numerically solve \eqref{eq_helmholtz_intro} (either with the Sommerfeld condition \eqref{eq_sommerfeld_intro} or with a PML approximation), that we review below. The most versatile approach is probably the finite element method (FEM). The method hinges on a triangulation of the domain into elements of size $h$, and piecewise polynomial basis functions of degree $p$. It can be shown that if $p$ grows logarithmically with $k$, then the condition that $kh/p$ is constant provides (at least) a constant accuracy as $k$ increases \cite{lafontaine_spence_wunsch_2022a,melenk_sauter_2010a,melenk_sauter_2011a}. As a result, high-order FEM essentially requires $\mathcal O((kR)^d)$ degrees of freedom (DOFs) to achieve a constant accuracy. The resulting matrix is sparse. Trefftz-like methods are similar to FEM in that they also rely on a mesh of the domain, but the polynomial shape functions are replaced by local solutions to the Helmholtz problem, such as plane-waves \cite{hiptmair_moiola_perugia_2016a}. There are many ways to ``glue'' these local solutions together, including partition of unity methods \cite{melenk1996partition}, least squares methods \cite{monk1999least}, the ultra weak variational method \cite{cessenat2003using}, the discontinuous enrichment method \cite{farhat2001discontinuous} or the variational theory of complex rays \cite{riou2008multiscale}. 
While these methods typically induce a large reduction of the number of DOFs as compared to FEM, they usually still need at least $\mathcal O((kR)^d)$ DOFs, see, e.g., \cite{chaumontfrelet_valentin_2020a,gittelson_hiptmair_perugia_2009a,hiptmair_moiola_perugia_2011a}. Similar to FEM, the resulting matrix is sparse. The next family of methods we want to mention are boundary element methods (BEM) \cite{sauter_schwab_2010a}. These methods rely on boundary integral equations which, strictly speaking, are not available for smoothly varying coefficients, since the expression of the Green's function must be available. It is nevertheless interesting to include them in the comparison. These methods typically provide a constant accuracy with only $\mathcal O((kR)^{d-1})$ DOFs \cite{galkowski_spence_2022a}. However, the resulting matrix is dense and its entries are costly to compute. These issues can be mitigated using compression techniques, such as the fast multipole method \cite{Greengard:1987:FMM} or hierarchical matrices \cite{Hackbusch:2015:HMM}. Finally, asymptotic methods rely on the fact that, when the frequency is very large, it is sometimes possible to reduce the search for the solution of the Helmholtz equation, and the study of the properties of the solution itself, to computations involving only the underlying classical dynamics \cite{engquist_runborg_2003a}. This is done using tools of semi-classical analysis, such as the WKB method, and can lead to discrete problems with a number of DOFs independent of $k$. The main drawback of these approaches is that they are only asymptotically valid: they do not converge for fixed values of $k$. Besides, it is not always clear for which range of $k$ they are relevant. As compared to FEM, the proposed GCS method thus gains ``half a dimension'' at high frequencies, but it is still half a dimension higher than BEM. As compared to BEM however, our methodology has the advantage of applying in a generic framework where the Green's function is not available. 
Another important comment is that in the (very) high-frequency regime, our method is more expensive than asymptotic methods. However, asymptotic methods cannot converge at fixed $k$, which our method does. This is summarized in Table \ref{table_costs}. \begin{table} \centering \begin{tabular}{|c|cccc|} \hline & FEM & GCS & BEM & asymptotic \\ \hline Cost & $(kR)^d$ & $(kR)^{d-1/2}$ & $(kR)^{d-1}$ & $1$ \\ \hline Heterogeneous media & yes & yes & no & yes \\ \hline Convergence at fixed $k$ & yes & yes & yes & no \\ \hline \end{tabular} \caption{Comparison of commonly used discretization techniques} \label{table_costs} \end{table} In addition to the approximability results \eqref{eq_approximation_fixed} and \eqref{eq_approximation_asymptotic}, we also present a least-squares method based on Gaussian coherent states for Problem \eqref{eq_helmholtz_intro}. As we show, the convergence of the method is easily established. Besides, although the matrix is dense, we show that its entries decay super-algebraically away from the diagonal. As a result, the matrix is essentially banded, and we believe that efficient linear solvers can be devised. This will be analysed in more depth in future work. We finally present a set of one-dimensional numerical experiments using the proposed least-squares method. Although the setting is rather simple, the results perfectly fit the theoretical analysis and readily show that the proposed approach allows for a drastic reduction of the number of DOFs in the high-frequency regime. To the best of our knowledge, our micro-locally adapted spaces of Gaussian coherent states appear to be entirely original, but we would like to mention that similar basis functions have already been employed to discretize PDE problems. In particular, generalised coherent states like Hagedorn wavepackets were used to describe the solution of the time-dependent Schr\"odinger equation in \cite{Faou:2009:CSQ,Gradinaru:2014:CSW,Gradinaru:2021:HWS,lasser2020computing}. 
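The DOF counts compared above can be illustrated in one dimension by counting lattice points. The sketch below is a simplifying assumption: it uses the free-space symbol $p(x,\xi)=\xi^2-1$ with $R=1$ and restricts the symbol-adapted set to $|x|\le R$, comparing it to an $O(1)$ phase-space ball containing $\sim kR$ points.

```python
import numpy as np

def counts(k, eps=0.25, rho=2.0, R=1.0):
    # phase-space lattice with spacing sqrt(pi/k), d = 1
    h = np.sqrt(np.pi / k)
    grid = h * np.arange(-int(4 * np.sqrt(k)), int(4 * np.sqrt(k)) + 1)
    # fixed-accuracy set: an O(1) phase-space ball, ~ rho * k points
    ball = np.count_nonzero(grid[:, None] ** 2 + grid[None, :] ** 2 <= rho)
    # symbol-adapted set near the characteristic set {xi**2 = 1}, |x| <= R;
    # the two conditions factorize, so the count is a product
    nx = np.count_nonzero(np.abs(grid) <= R)
    nxi = np.count_nonzero(np.abs(grid ** 2 - 1.0) <= k ** (-0.5 + eps))
    return ball, nx * nxi
```

For increasing $k$, the adapted set stays far smaller than the ball and grows at a strictly slower rate, in line with the $(kR)^{d-1/2+\varepsilon}$ versus $(kR)^d$ scalings.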
The remainder of our work is organised as follows. In Section \ref{sec:setting}, we make the setting precise and state our key approximation result \eqref{eq_approximation_asymptotic} in its more general form. Section \ref{sec:proofs} contains the proof of our findings. In Section \ref{sec:helmholtz}, we apply the general theory of Sections \ref{sec:setting} and \ref{sec:proofs} to our scattering model problem. Numerical examples are reported in Section \ref{sec:numerics}. Finally, Appendix \ref{sec:GS} collects technical results concerning Gaussian coherent states. \section{Setting and main results} \label{sec:setting} \subsection{Notations} Throughout this work $\hbar \in \mathscr H \subset (0,1]$ will denote a small parameter. When applying our general results to the Helmholtz equation, we will have $\hbar \sim (kR)^{-1}$, so that considering the set $(0,1]$ amounts to ignoring low frequencies, and focusing on high frequencies when $\hbar \to 0$. For the sake of generality, we restrict our analysis to a subset $\mathscr H \subset (0,1]$ for reasons that will become apparent in Section \ref{sec:helmholtz}. Notice that the case $\mathscr H = (0,1]$ is not excluded. \subsubsection{Basic notation} The canonical basis of $\mathbb R^d$ or of $\mathbb C^d$ will be denoted by $(\boldsymbol{e}_1,..., \boldsymbol{e}_d)$. If $\boldsymbol x,\boldsymbol y \in \mathbb C^d$, we write \begin{equation*} \boldsymbol x \cdot \boldsymbol y := \sum_{j=1}^d x_j y_j \end{equation*} without complex conjugation on the second argument, and $|\boldsymbol x|^2 = \boldsymbol x \cdot \overline{\boldsymbol x}$ is the usual Euclidean norm. For a multi-index $\boldsymbol{\alpha} \in \mathbb N^d$, $[\boldsymbol{\alpha}] := \alpha_1+\dots+\alpha_d$ denotes its usual $\ell_1$ norm. 
If $v: \mathbb R^d \to \mathbb C$, the notation \begin{equation*} \partial^{\boldsymbol{\alpha}} v := \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}} \cdots \frac{\partial^{\alpha_d}}{\partial x_d^{\alpha_d}} v \end{equation*} is employed for the partial derivatives in the sense of distributions, whereas $\boldsymbol x^{\boldsymbol{\alpha}} := x_1^{\alpha_1} \cdot \ldots \cdot x_d^{\alpha_d}$. Finally, if $\boldsymbol{\beta} \in \mathbb N^d$ is another multi-index, we will sometimes need the notation \begin{equation*} \left ( \begin{array}{c} \boldsymbol{\alpha} \\ \boldsymbol{\beta} \end{array} \right ) = \prod_{j=1}^d \left ( \begin{array}{c} \alpha_j \\ \beta_j \end{array} \right ), \end{equation*} and the notation $\boldsymbol{\alpha} \leq \boldsymbol{\beta}$ means that $\alpha_j \leq \beta_j$ for all $j \in \{1,\dots,d\}$. If $\boldsymbol n \in \mathbb Z^d$, we employ the notation $|\boldsymbol n|^2 := n_1^2 + \dots + n_d^2$ for its $\ell_2$ norm. Finally, if $\Lambda \subset \mathbb Z^{2d}$, $\ell^2(\Lambda)$ has its usual definition, and we denote by $\|\cdot\|_{\ell^2(\Lambda)}$ its usual norm. \subsubsection{Key functional spaces} In what follows, $L^2(\mathbb R^d)$ is the usual Lebesgue space of complex-valued square integrable functions over $\mathbb R^d$. The usual norm and inner products of $L^2(\mathbb R^d)$ are respectively denoted by $\|\cdot\|_{L^2(\mathbb R^d)}$ and $(\cdot,\cdot)$. 
Following \cite{chaumontfrelet_ingremeau_2022a}, since we are dealing with the (unbounded) $\mathbb R^d$ space, our analysis will require the weighted Sobolev spaces \begin{equation*} \widehat{H}^p(\mathbb R^d) := \left \{ v \in L^2(\mathbb R^d) \; | \; \boldsymbol x^{\boldsymbol{\alpha}} \partial^{\boldsymbol{\beta}} v \in L^2(\mathbb R^d) \quad \forall \boldsymbol{\alpha},\boldsymbol{\beta} \in \mathbb N^d; \; [\boldsymbol{\alpha}],[\boldsymbol{\beta}] \leq p \right \}, \end{equation*} that we equip with the family of equivalent $\hbar$-weighted norms given by \begin{equation*} \|v\|_{\widehat{H}^p_\hbar(\mathbb R^d)}^2 := \sum_{[\boldsymbol{\alpha}] \leq p} \sum_{q \leq p- [\boldsymbol{\alpha}]} \hbar^{2[\boldsymbol{\alpha}]} \||\boldsymbol x|^q \partial^{\boldsymbol{\alpha}} v\|_{L^2(\mathbb R^d)}^2 \end{equation*} for all $p \in \mathbb N$. $C^0(\mathbb R^d)$ is the set of complex-valued continuous functions defined over $\mathbb R^d$, and $C^\ell(\mathbb R^d)$ is the set of functions $v: \mathbb R^d \to \mathbb C$ such that $\partial^{\boldsymbol{\alpha}} v \in C^0(\mathbb R^d)$ for all $\boldsymbol{\alpha} \in \mathbb N^d$ with $[\boldsymbol{\alpha}] \leq \ell$. We introduce the notation \begin{equation*} \|v\|_{C^\ell(\mathbb R^d)} := \max_{\substack{\boldsymbol{\alpha} \in \mathbb N^d \\ [\boldsymbol{\alpha}] \leq \ell}} \max_{\boldsymbol x \in \mathbb R^d} |(\partial^{\boldsymbol{\alpha}} v)(\boldsymbol x)|, \qquad \forall v \in C^\ell(\mathbb R^d) \end{equation*} and $C^\ell_{\rm b}(\mathbb R^d)$ is the subset of functions $v \in C^\ell(\mathbb R^d)$ such that $\|v\|_{C^\ell(\mathbb R^d)} < +\infty$. We also set \begin{equation*} C^\infty_{\rm b}(\mathbb R^d) := \bigcap_{\ell \in \mathbb N} C^\ell_{\rm b}(\mathbb R^d). \end{equation*} Finally, if $\Omega \subset \mathbb R^d$ is an open set, we denote by $C_c^\infty(\Omega)$ the set of smooth functions whose support is a compact subset of $\Omega$. 
\subsection{The frame of Gaussian coherent states} The goal of this work is to efficiently approximate the solution $u_\hbar$ to the equation $P_\hbar u_\hbar = f$ with a finite span of Gaussian coherent states. For $[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}$, we thus consider the Gaussian state \begin{equation*} \gs{\hbar}{\boldsymbol m}{\boldsymbol n}(\boldsymbol x) := (\pi\hbar)^{-d/4} e^{-\frac{1}{2\hbar}|\boldsymbol x-\bxx^{\hbar,\boldsymbol m}|^2} e^{\frac{i}{\hbar}\bxi^{\hbar,\boldsymbol n} \cdot (\boldsymbol x-\bxx^{\hbar,\boldsymbol m})}, \end{equation*} where $\bxx^{\hbar,\boldsymbol m} := \sqrt{\pi\hbar} \boldsymbol m$ and $\bxi^{\hbar,\boldsymbol n} := \sqrt{\pi\hbar} \boldsymbol n$. The family of Gaussian coherent states $(\gs{\hbar}{\boldsymbol m}{\boldsymbol n})_{[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}}$ actually forms a \emph{frame} over $L^2(\mathbb R^d)$, meaning that there exist two constants $0 < \alpha < \beta < +\infty$ solely depending on $d$ such that \begin{equation*} \alpha \|v\|_{L^2(\mathbb R^d)}^2 \leq \sum_{[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}} |(v,\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 \leq \beta \|v\|_{L^2(\mathbb R^d)}^2. \end{equation*} This result was first proved in \cite{daubechies_grossman_meyer_1986a}, but the idea of decomposing a function as a discrete sum of Gaussian states goes back to \cite{Gabor}, where it was proved that the span of $(\gs{\hbar}{\boldsymbol m}{\boldsymbol n})_{[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}}$ is dense in $L^2(\mathbb R^d)$. 
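The frame inequality can be probed numerically in one dimension: for a test function $v$, the ratio $\sum_{[\boldsymbol m,\boldsymbol n]} |(v,\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 / \|v\|_{L^2}^2$ must lie between the frame bounds. The sketch below evaluates a truncated version of this sum by quadrature, for an illustrative value of $\hbar$ and with a coherent state itself as test function.

```python
import numpy as np

hbar = 0.05
s = np.sqrt(np.pi * hbar)            # lattice spacing sqrt(pi*hbar)
x = np.linspace(-2.5, 2.5, 20001)
dx = x[1] - x[0]

def gcs(m, n):
    # d = 1 Gaussian coherent state, normalization (pi*hbar)**(-1/4)
    return ((np.pi * hbar) ** -0.25
            * np.exp(-(x - s * m) ** 2 / (2 * hbar)
                     + 1j * s * n * (x - s * m) / hbar))

v = gcs(0, 0)                        # test function: a coherent state itself
# truncated frame sum; terms beyond |m|, |n| = 6 are negligible here
frame_sum = sum(abs(np.sum(v * np.conj(gcs(m, n))) * dx) ** 2
                for m in range(-6, 7) for n in range(-6, 7))
norm2 = np.sum(np.abs(v) ** 2) * dx
ratio = frame_sum / norm2            # lies between the frame bounds
```

For this lattice the pairwise overlaps satisfy $|(\gs{\hbar}{\boldsymbol 0}{\boldsymbol 0},\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 = e^{-\pi(m^2+n^2)/2}$, so the ratio is close to $\bigl(\sum_{m\in\mathbb Z} e^{-\pi m^2/2}\bigr)^2 \approx 2.02$, independently of $\hbar$.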
Actually, the frame property implies that there exists another family of functions $(\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star)_{[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}}$ called the dual frame such that \begin{equation} \label{eq_dual_frame} v = \sum_{[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}} (v,\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star) \gs{\hbar}{\boldsymbol m}{\boldsymbol n} \end{equation} for all $v \in L^2(\mathbb R^d)$. It is thus clear that any $v \in L^2(\mathbb R^d)$ may be well-approximated by (a large number of) Gaussian states. As we are going to develop hereafter, when considering the solution to a high-frequency PDE problem, a good approximation may be obtained with few Gaussian states, by carefully selecting the indices $[\boldsymbol m,\boldsymbol n]$ in \eqref{eq_dual_frame}. \begin{remark}[General expansions in the Gaussian frame] \label{rem:BoundedUtile} The family $(\gs{\hbar}{\boldsymbol m}{\boldsymbol n})$ is not a Riesz basis, so that the expansion (\ref{eq_dual_frame}) of $v$ as a sum of $\gs{\hbar}{\boldsymbol m}{\boldsymbol n}$ is not unique. However, a crucial property of (\ref{eq_dual_frame}) is that this expansion is stable, in the sense that \begin{equation*} \sum_{[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}} |(v,\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star)|^2 \leq \gamma \|v\|_{L^2(\mathbb R^d)}^2, \end{equation*} where $\gamma$ only depends on $d$, the dual frame being itself a frame. This is especially important at the numerical level in the presence of round-off errors \cite{Adcock:2019:FNA}. 
\end{remark} \subsection{Settings and key assumptions} \label{ssec:general} Throughout this work, we consider a second order differential operator on $\mathbb R^d$ depending on $\hbar$, and taking the form \begin{equation} \label{eq:FormeGeneraleOperateur} (P_{\hbar} v)(\boldsymbol x) = \hbar^2 \sum_{j,\ell=1}^d a_{j\ell}^{\hbar}(\boldsymbol x) \frac{\partial^2 v}{\partial x_j \partial x_\ell}(\boldsymbol x) + i\hbar\sum_{j=1}^d b_{j}^\hbar(\boldsymbol x) \frac{\partial v}{\partial x_j }(\boldsymbol x) + c^\hbar(\boldsymbol x) v(\boldsymbol x), \end{equation} where $a_{j\ell}^\hbar,b_j^\hbar,c^\hbar \in C^\infty_{\rm b}(\mathbb R^d)$ for $1 \leq j,\ell \leq d$. For the sake of simplicity, we introduce \begin{equation*} C_{{\rm coef},p} := \sup_{\hbar \in \mathscr H} \left ( \sum_{j,\ell=1}^d \|a_{j,\ell}^\hbar\|_{C^p(\mathbb R^d)} + \sum_{j=1}^d \|b_j^\hbar\|_{C^p(\mathbb R^d)} + \|c^\hbar\|_{C^p(\mathbb R^d)} \right ) \qquad \forall p \in \mathbb N, \end{equation*} and assume that $C_{{\rm coef},p} < +\infty$ for all $p \in \mathbb N$. The principal symbol of $P_\hbar$ is the function $p_\hbar \in C^\infty(\mathbb R^{2d})$ defined by \begin{equation} \label{eq_symbol} p_\hbar(\boldsymbol x,\boldsymbol{\xi}) := \sum_{j,\ell=1}^d a_{j,\ell}^\hbar(\boldsymbol x) \xi_j \xi_\ell + \sum_{j=1}^d b_j^\hbar(\boldsymbol x) \xi_j + c^\hbar(\boldsymbol x). \end{equation} For the sake of shortness, we will often write $\mathrm{p}_\hbar (\boldsymbol m,\boldsymbol n) := p_\hbar(\bxx^{\hbar,\boldsymbol m},\bxi^{\hbar,\boldsymbol n})$ for $[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}$. Along with the smoothness of the coefficients, we make two key assumptions. First, we assume that $P_\hbar: \widehat{H}^p(\mathbb R^d) \to \widehat{H}^p(\mathbb R^d)$ is invertible with the norm of $P_\hbar^{-1}$ being polynomially bounded in $\hbar$. Specifically, we assume that for all $f \in L^2(\mathbb R^d)$ there exists a unique $u_\hbar \in L^2(\mathbb R^d)$ such that $P_\hbar u_\hbar = f$. 
In addition, there exists $N \in \mathbb N$ such that for all $p \in \mathbb N$ \begin{equation} \label{eq_polynomial_resolvant} \|u_\hbar\|_{\widehat{H}_\hbar^p(\mathbb R^d)} \leq C_{{\rm sol},p} \hbar^{-N} \|f\|_{\widehat{H}_\hbar^p(\mathbb R^d)} \quad \forall \hbar \in \mathscr H \end{equation} for some constant $C_{{\rm sol},p}$ independent of $\hbar$. Our second assumption is that there exists a value $\delta_0 > 0$ such that \begin{equation} \label{eq_assumption_symbol_bounded} \sup_{\hbar \in \mathscr H} \operatorname{diam} \{(\boldsymbol x,\boldsymbol{\xi})\in \mathbb R^{2d} ; \; |p_\hbar(\boldsymbol x,\boldsymbol{\xi})| < \delta_0\} \leq D_0 < +\infty \end{equation} for some $D_0 \in \mathbb R$. In the remainder of this work, we allow generic constants $C$ to depend on $\{C_{{\rm coef},p}\}_{p \in \mathbb N}$, $\{C_{{\rm sol},p}\}_{p \in \mathbb N}$, $N$ and $D_0$. We also employ the notation $C_{\alpha,\beta,\dots}$ if the constant $C$ is additionally allowed to depend on other previously introduced quantities $\alpha,\beta,\dots$ \subsection{Statement of the approximability result} \label{ssec:statement} Our main result is that, if $f$ is micro-localised near the set $\{ (\boldsymbol x, \boldsymbol{\xi}) \in \mathbb R^{2d}; p_\hbar(\boldsymbol x, \boldsymbol{\xi})=0\}$, then so is the solution $u_\hbar$ to $P_\hbar u_\hbar = f$. This is a standard result when micro-localisation is understood in terms of pseudo-differential operators (see for instance \cite[Theorem 6.4]{zworski2012semiclassical}), but here, by micro-localisation properties, we mean that $f$ can be approached by a linear combination of $\gs{\hbar}{\boldsymbol m}{\boldsymbol n}$ with $[\boldsymbol m,\boldsymbol n]$ in a certain region of $\mathbb Z^{2d}$. Hence, our results may not be easily recovered from \cite[Theorem 6.4]{zworski2012semiclassical}. 
\begin{theorem}[Approximability for Gaussian state right-hand sides] \label{theorem_approximability} Let $\varepsilon > 0$ and $0 \leq \alpha < \delta_0/2$. For $\hbar \in \mathscr H$, consider the right-hand side \begin{equation*} f_\hbar := \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm rhs}}} F_{\boldsymbol m,\boldsymbol n}^\hbar \gs{\hbar}{\boldsymbol m}{\boldsymbol n}, \end{equation*} where $F^\hbar \in \ell^2(\Lambda_{\hbar,{\rm rhs}})$ with \begin{equation}\label{eq:FormeRHS} \Lambda_{\hbar,{\rm rhs}} := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d} \; | \; |p_\hbar(\bxx^{\hbar,\boldsymbol m},\bxi^{\hbar,\boldsymbol n})| \leq \alpha + 2\hbar^{1/2-\varepsilon} \right \}. \end{equation} Then, if $u_\hbar$ is the solution to $P_\hbar u_\hbar = f_\hbar$, we have \begin{equation*} \left \| u_\hbar - \sum_{[\boldsymbol m, \boldsymbol n] \in \Lambda_{\hbar,{\rm sol}}} (u_\hbar,\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star) \gs{\hbar}{\boldsymbol m}{\boldsymbol n} \right \|_{\widehat{H}_\hbar^p(\mathbb R^d)} \leq C_{\varepsilon,p,m} \hbar^{m} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})} \quad \forall m \in \mathbb N \end{equation*} for all $p \in \mathbb N$, with \begin{equation*} \Lambda_{\hbar,{\rm sol}} := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d} \; | \; |p_\hbar(\bxx^{\hbar,\boldsymbol m},\bxi^{\hbar,\boldsymbol n})| \leq \alpha + 4\hbar^{1/2-\varepsilon} \right \}. \end{equation*} \end{theorem} In practice, the right-hand side of the problem is not a finite linear combination of Gaussian coherent states. However, many right-hand sides of interest are well approximated by such combinations in the high-frequency regime. This is in particular the case when considering scattering by a plane-wave (see Lemma \ref{Lem:ApproxPW} below). \begin{corollary}[Approximability for micro-localised right-hand sides] \label{corollary_approximability} Let $p \geq 0$.
Consider a set of right-hand sides $(f_\hbar)_{\hbar \in \mathscr H} \subset \widehat{H}^p(\mathbb R^d)$ and assume that there exists a set of sequences $(F^{\hbar})_{\hbar \in \mathscr H} \subset \ell^2(\mathbb Z^{2d})$ such that \begin{align*} \|F^{\hbar}\|_{\ell^2(\mathbb Z^{2d})} &\leq C \\ \|f_\hbar - \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm rhs}}} F^{\hbar}_{\boldsymbol m,\boldsymbol n}\gs{\hbar}{\boldsymbol m}{\boldsymbol n}\|_{\widehat{H}_\hbar^p(\mathbb R^d)} &\leq C_m\hbar^{m} \quad \forall m \in \mathbb N \end{align*} for all $\hbar \in \mathscr H$. Then, if $u_\hbar$ denotes the solution to $P_\hbar u_\hbar = f_\hbar$, we have \begin{equation*} \left \| u_\hbar - \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm sol}}} (u_{\hbar},\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star)\gs{\hbar}{\boldsymbol m}{\boldsymbol n} \right \|_{\widehat{H}_\hbar^{p+2}(\mathbb R^d)} \leq C_{\varepsilon,p,m}\hbar^{m} \quad \forall m \in \mathbb N. \end{equation*} \end{corollary} \section{Proof of Theorem \ref{theorem_approximability}} \label{sec:proofs} This section is devoted to the detailed proof of Theorem \ref{theorem_approximability} and Corollary \ref{corollary_approximability}. \subsection{Preliminary results on Gaussian states} \label{sec:properties} We start by stating key preliminary results on the Gaussian and dual frames. We first point out that the converse to \eqref{eq_dual_frame} is true, namely, that \begin{equation} \label{eq_gabor_expansion} v = \sum_{[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}} (v,\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star) \gs{\hbar}{\boldsymbol m}{\boldsymbol n} \quad \forall v \in L^2(\mathbb R^d), \end{equation} see, e.g., \cite{Adcock:2019:FNA,chaumontfrelet_ingremeau_2022a,daubechies_grossman_meyer_1986a}.
We also record that the bound \begin{equation} \label{eq_norm_gskmn} \|\gs{\hbar}{\boldsymbol m}{\boldsymbol n}\|_{\widehat{H}_\hbar^s(\mathbb R^d)} \leq C_s (1+ |[\boldsymbol m,\boldsymbol n]|^s) \end{equation} holds true for all $s \in \mathbb N$, see \cite[Lemma C.1]{chaumontfrelet_ingremeau_2022a}. Finally, we will need the following expansion result. \begin{proposition}[Tight expansion] \label{proposition_tight_expansion} For all $[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}$, there exists a sequence of coefficients $U^{\boldsymbol m,\boldsymbol n} \in \ell^2(\mathbb Z^{2d})$ such that \begin{equation*} \|U^{\boldsymbol m,\boldsymbol n}\|_{\ell^p(\mathbb Z^{2d})} \leq C_p \quad \forall p \in [1,+\infty], \end{equation*} and for all $\varepsilon > 0$ and $s \in \mathbb N$, we have \begin{equation} \label{eq_tight_expansion} \left \| \gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star- \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| \leq \hbar^{-\varepsilon}}} U^{\boldsymbol m,\boldsymbol n}_{\boldsymbol m',\boldsymbol n'} \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'} \right \|_{\widehat{H}_\hbar^s(\mathbb R^d)} \leq C_{\varepsilon,s,m} \hbar^m \quad \forall m \in \mathbb N. \end{equation} \end{proposition} \begin{proof} We start by applying \eqref{eq_gabor_expansion} to $v = \gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star$, leading to \begin{equation*} \gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star = \sum_{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d}} (\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star,\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}^\star) \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}.
\end{equation*} Next, we recall from \cite[Proposition 4.2]{chaumontfrelet_ingremeau_2022a} that \begin{equation*} |(\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star,\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}^\star)| \leq C e^{-|[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']|^{1/2}}. \end{equation*} As a result, we define $U^{\boldsymbol m,\boldsymbol n}_{\boldsymbol m',\boldsymbol n'} := (\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star,\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}^\star)$, so that $U^{\boldsymbol m,\boldsymbol n}$ indeed belongs to $\ell^p(\mathbb Z^{2d})$ for $1 \leq p \leq +\infty$, and \begin{equation*} \mathcal E := \gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star- \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| \leq \hbar^{-\varepsilon}}} U^{\boldsymbol m,\boldsymbol n}_{\boldsymbol m',\boldsymbol n'} \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'} = \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| > \hbar^{-\varepsilon}}} U^{\boldsymbol m,\boldsymbol n}_{\boldsymbol m',\boldsymbol n'} \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}. 
\end{equation*} We then observe that \begin{align*} \|\mathcal E\|_{\widehat{H}_\hbar^s(\mathbb R^d)} &\leq \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| > \hbar^{-\varepsilon}}} |U^{\boldsymbol m,\boldsymbol n}_{\boldsymbol m',\boldsymbol n'}| \cdot \|\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}\|_{\widehat{H}_\hbar^s(\mathbb R^d)} \\ &\leq C \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| > \hbar^{-\varepsilon}}} (1+|[\boldsymbol m,\boldsymbol n]|)^s e^{-|[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']|^{1/2}}, \end{align*} due to \eqref{eq_norm_gskmn}, and \eqref{eq_tight_expansion} follows since \begin{equation*} \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| > \hbar^{-\varepsilon}}} (1+|[\boldsymbol m,\boldsymbol n]|)^s e^{-|[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']|^{1/2}} \leq C_{\varepsilon,s,m} \hbar^{m}. \end{equation*} \end{proof} We close this section with two technical results. As we believe they are of independent interest, and because their proofs require tedious computations, they are postponed to Appendix \ref{sec:GS}. \begin{proposition}[Quasi orthogonality] \label{proposition_quasi_orthogonality} Consider two sets of indices $\Lambda,\Lambda' \subset \mathbb Z^{2d}$ with \begin{equation*} \mu := \operatorname{diam}(\Lambda) < +\infty \qquad \rho := \operatorname{dist}(\Lambda,\Lambda') > 0.
\end{equation*} Consider $L \in \mathbb N$, smooth coefficients $(\textup A_{\boldsymbol{\alpha}})_{\boldsymbol{\alpha} \in \mathbb N^d} \subset C^\infty_{\rm b}(\mathbb R^d)$ and the differential operator \begin{equation*} \mathcal P_{\hbar,L,\textup A} := \sum_{\substack{\boldsymbol{\alpha} \in \mathbb N^d \\ [\boldsymbol{\alpha}] \leq L}} \hbar^{[\boldsymbol{\alpha}]} \textup A_{\boldsymbol{\alpha}} \partial^{\boldsymbol{\alpha}}. \end{equation*} Then, for all $q > 0$ and $\boldsymbol{\alpha} \in \mathbb N^d$, we have \begin{equation} \label{eq_phase_space_localisation} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda} \sum_{[\boldsymbol m',\boldsymbol n'] \in \Lambda'} |(\boldsymbol x^{\boldsymbol{\alpha}}\mathcal P_{\hbar,L,\textup A} \gs{\hbar}{\boldsymbol m}{\boldsymbol n},\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'})|^q \leq C_{L,\textup A,q,m} (1 + (\hbar\mu)^{[\boldsymbol{\alpha}]+L})^q(1+\rho)^{-m} \end{equation} for all $m \in \mathbb N$. \end{proposition} \begin{proposition}[Control of $(P_\hbar-p)^L$] For all $[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}$, we have \begin{subequations} \label{eq_iterated_operators} \begin{equation} \label{eq_iterated_operator} \left \| \left (P_\hbar - \mathrm{p}_\hbar(\boldsymbol m,\boldsymbol n) \right )^L \gs{\hbar}{\boldsymbol m}{\boldsymbol n} \right \|_{L^2(\mathbb R^d)} \leq C_L (1+ \hbar|\boldsymbol n|^2)^L \hbar^{L/2}, \end{equation} and \begin{equation} \label{eq_iterated_adjoint_operator} \left \| \left(P_\hbar^\star - \overline{\mathrm{p}}_\hbar(\boldsymbol m,\boldsymbol n) \right)^L \gs{\hbar}{\boldsymbol m}{\boldsymbol n} \right \|_{L^2(\mathbb R^d)} \leq C_L (1+ \hbar|\boldsymbol n|^2)^L \hbar^{L/2}. \end{equation} \end{subequations} \end{proposition} \subsection{Main proof} We focus here on the proof of Theorem \ref{theorem_approximability}. Hence, we fix an $\varepsilon > 0$, and consider a right-hand side micro-localised near $\{p_\hbar=0\}$.
Specifically, we will assume that \begin{equation*} f_\hbar := \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm rhs}}} F^{\hbar}_{\boldsymbol m,\boldsymbol n} \gs{\hbar}{\boldsymbol m}{\boldsymbol n}, \end{equation*} where $F^\hbar \in \ell^2(\Lambda_{\hbar,{\rm rhs}})$ and, for $0 \leq a \leq b \leq +\infty$, we employ the shorthand notation \begin{equation*} \Lambda(a,b) := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d} \; | \; a \leq |\mathrm{p}_\hbar(\boldsymbol m,\boldsymbol n)| \leq b \right \}, \end{equation*} so that $\Lambda_{\hbar,{\rm rhs}} = \Lambda(0,\alpha+2\hbar^{1/2-\varepsilon})$. Our goal is to show that the associated solution $u_\hbar$ is essentially micro-localised near $\{p_\hbar=0\}$ as well. Specifically, setting $\Lambda_{\hbar,{\rm near}} := \Lambda(0,\alpha+4\hbar^{1/2-\varepsilon})$, we will show that \begin{equation*} u^{\rm near}_\hbar := \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm near}}} (u_\hbar,\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star) \gs{\hbar}{\boldsymbol m}{\boldsymbol n} \end{equation*} is ``close'' to $u_\hbar$. The key idea is to separate the set of indices of $u_\hbar$ into $\Lambda_{\hbar,{\rm near}}$, $\Lambda_{\hbar,{\rm mid}} := \Lambda(\alpha+4\hbar^{1/2-\varepsilon},\alpha+6\hbar^{1/2-\varepsilon})$ and $\Lambda_{\hbar,{\rm far}} := \Lambda(\alpha+6\hbar^{1/2-\varepsilon},+\infty)$. We shall also need the ``enlarged'' sets \begin{equation*} \Lambda_{\hbar,{\rm near}}^\star := \Lambda(0,\alpha+5\hbar^{1/2-\varepsilon}), \quad \Lambda_{\hbar,{\rm mid}}^\star := \Lambda(\alpha+3\hbar^{1/2-\varepsilon},\alpha+7\hbar^{1/2-\varepsilon}), \quad \Lambda_{\hbar,{\rm far}}^\star := \Lambda(\alpha+5\hbar^{1/2-\varepsilon},+\infty), \end{equation*} for the test functions. We first state some elementary properties of these sets of indices. We do not report the (straightforward) proofs for the sake of shortness.
\begin{lemma}[Index sets] Assume that $\alpha+7\hbar^{1/2-\varepsilon} \leq \delta_0$. Then we have \begin{equation} \label{eq_dist_LBA} \operatorname{dist}(\Lambda_{\hbar,{\rm far}},\Lambda_{\hbar,{\rm near}}^\star) \geq C \hbar^{-\varepsilon} \quad \operatorname{dist}(\Lambda_{\hbar,{\rm rhs}},\Lambda_{\hbar,{\rm mid}}^\star) \geq C \hbar^{-\varepsilon} \quad \operatorname{dist}(\Lambda_{\hbar,{\rm near}},\Lambda_{\hbar,{\rm far}}^\star) \geq C \hbar^{-\varepsilon} \end{equation} and \begin{equation} \label{eq_diam_LBA} \operatorname{diam}(\Lambda_{\hbar,{\rm rhs}}) \leq C\hbar^{-1/2} \quad \operatorname{diam}(\Lambda_{\hbar,{\rm near}}^\star) \leq C\hbar^{-1/2} \quad \operatorname{diam}(\Lambda_{\hbar,{\rm mid}}^\star) \leq C\hbar^{-1/2}. \end{equation} In addition, if $\hbar$ is small enough, we have \begin{equation} \label{eq_LBA_mid_star_inclusion} \left \{ [\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \; | \; \exists [\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}; \; |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| \leq \hbar^{-\varepsilon/2} \right \} \subset \Lambda_{\hbar,{\rm mid}}^\star. \end{equation} \end{lemma} \begin{lemma}[Quasi orthogonality away from RHS micro-support] \label{lemma_LBA_mid_star} For $F^\hbar \in \ell^2(\Lambda_{\hbar,{\rm rhs}})$, consider the right-hand side \begin{equation*} f_\hbar = \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm rhs}}} F^\hbar_{\boldsymbol m,\boldsymbol n} \gs{\hbar}{\boldsymbol m}{\boldsymbol n} \end{equation*} and the associated solution $u_\hbar$. Then, if $[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}^\star$, we have \begin{equation*} |(u_\hbar,\gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \leq C_{\varepsilon,m} \hbar^m \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})} \end{equation*} for all $m \in \mathbb N$. \end{lemma} \begin{proof} Throughout the proof, we fix a pair of indices $[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}^\star$.
By definition of $\Lambda_{\hbar,{\rm mid}}^\star$, the assumption that $\alpha+7\hbar^{1/2-\varepsilon} \leq \delta_0$ and \eqref{eq_assumption_symbol_bounded}, we have \begin{equation} \label{tmp_bound_pmn_LBA_mid} c\hbar^{1/2-\varepsilon} \leq |{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)| \leq C. \end{equation} In particular, we can write \begin{equation*} u_\hbar = \frac{1}{{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)} f_\hbar - \frac{1}{{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)} (P_\hbar-{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)) u_\hbar. \end{equation*} Since $f_\hbar$ is smooth, so is $u_\hbar$, and we can iterate this relation, leading to \begin{equation*} u_\hbar = \sum_{\ell=1}^r \frac{(-1)^{\ell-1}}{{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)^\ell} (P_\hbar-{\rm p}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} f_\hbar + \frac{(-1)^r}{{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)^r} (P_\hbar-{\rm p}_\hbar(\boldsymbol m,\boldsymbol n))^r u_\hbar, \end{equation*} and \begin{align} \nonumber (u_\hbar,\gs{\hbar}{\boldsymbol m}{\boldsymbol n}) &= \sum_{\ell=1}^r \frac{(-1)^{\ell-1}}{{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)^\ell} (f_\hbar,(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} \gs{\hbar}{\boldsymbol m}{\boldsymbol n}) \\ \label{tmp_iterated_relation} &+ \frac{(-1)^r}{{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)^r} (u_\hbar,(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^r \gs{\hbar}{\boldsymbol m}{\boldsymbol n}) \end{align} for all $r \in \mathbb N$.
Then, if $[\boldsymbol m',\boldsymbol n'] \in \Lambda_{\hbar,{\rm rhs}}$, using the upper-bound in \eqref{tmp_bound_pmn_LBA_mid}, we have \begin{align*} |(\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'},(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} \gs{\hbar}{\boldsymbol m}{\boldsymbol n})| &= \left | \sum_{j=0}^{\ell-1} \left ( \begin{array}{c} \ell-1 \\ j \end{array} \right ) (-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1-j} (\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'},(P_\hbar^\star)^j \gs{\hbar}{\boldsymbol m}{\boldsymbol n}) \right | \\ &\leq C_\ell \sum_{j=0}^{\ell-1} |(\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'},(P_\hbar^\star)^j \gs{\hbar}{\boldsymbol m}{\boldsymbol n})|. \end{align*} Recalling \eqref{eq_diam_LBA}, we have $\hbar^{\ell-1}\operatorname{diam}(\Lambda_{\hbar,{\rm mid}}^\star) \leq C\hbar^{\ell-3/2} \leq C$ for all $\ell \geq 2$. As a result, applying \eqref{eq_phase_space_localisation} gives \begin{equation*} |(\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'},(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} \gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \leq C_{\ell,n}(1+|[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']|)^{-n} \leq C_{\ell,n}\hbar^{\varepsilon n}, \end{equation*} for all $n \in \mathbb N$, since $|[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| \geq \hbar^{-\varepsilon}$ due to \eqref{eq_dist_LBA}. The case $\ell=1$ follows directly from Proposition \ref{proposition_quasi_orthogonality}.
We then write that \begin{align*} |(f_\hbar,(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} \gs{\hbar}{\boldsymbol m}{\boldsymbol n})| &\leq \sum_{[\boldsymbol m',\boldsymbol n'] \in \Lambda_{\hbar,{\rm rhs}}} |F^\hbar_{\boldsymbol m',\boldsymbol n'}||(\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'},(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} \gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \\ &\leq C_{\ell,n} \hbar^{\varepsilon n} \sum_{[\boldsymbol m',\boldsymbol n'] \in \Lambda_{\hbar,{\rm rhs}}} |F^\hbar_{\boldsymbol m',\boldsymbol n'}| \\ &\leq C_{\ell,n} \hbar^{\varepsilon n-d} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}, \end{align*} where we used the Cauchy--Schwarz inequality together with $|\Lambda_{\hbar,{\rm rhs}}| \leq C\operatorname{diam}(\Lambda_{\hbar,{\rm rhs}})^{2d} \leq C \hbar^{-d}$, which follows from \eqref{eq_diam_LBA}. As a result, using the lower-bound in \eqref{tmp_bound_pmn_LBA_mid}, we have \begin{equation*} \frac{1}{|{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)|^\ell} |(f_\hbar,(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} \gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \leq C_{\ell,n} \hbar^{\varepsilon n-d-\ell/2} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}, \end{equation*} and \begin{equation*} \sum_{\ell=1}^r \frac{1}{|{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)|^\ell} |(f_\hbar,(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} \gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \leq C_{r,n} \hbar^{\varepsilon n-d-r/2} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}, \end{equation*} for all $n \in \mathbb N$. Thus, for any $m \in \mathbb N$, we can select $n = n(m,d,r,\varepsilon)$ such that $\varepsilon n - d - r/2 \geq m$, leading to \begin{equation*} \sum_{\ell=1}^r \frac{1}{|{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)|^\ell} |(f_\hbar,(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^{\ell-1} \gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \leq C_{\varepsilon,m,r} \hbar^{m} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}.
\end{equation*} On the other hand, using again \eqref{eq_diam_LBA} and applying \eqref{eq_iterated_adjoint_operator}, we have \begin{equation*} \|(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^r \gs{\hbar}{\boldsymbol m}{\boldsymbol n}\|_{L^2(\mathbb R^d)} \leq C_r \hbar^{r/2}, \end{equation*} and the lower bound in \eqref{tmp_bound_pmn_LBA_mid} shows that \begin{equation*} \frac{1}{|{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)|^r}\|(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^r \gs{\hbar}{\boldsymbol m}{\boldsymbol n}\|_{L^2(\mathbb R^d)} \leq C_r \hbar^{\varepsilon r}. \end{equation*} We then write that \begin{multline*} \frac{1}{|{\rm p}_\hbar(\boldsymbol m,\boldsymbol n)|^r}|(u_\hbar,(P_\hbar^\star-\overline{{\rm p}}_\hbar(\boldsymbol m,\boldsymbol n))^r \gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \leq C_r \hbar^{\varepsilon r} \|u_\hbar\|_{L^2(\mathbb R^d)} \leq \\ C_r \hbar^{\varepsilon r-N} \|f_\hbar\|_{L^2(\mathbb R^d)} \leq C_r \hbar^{\varepsilon r-N} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})} \leq C_{\varepsilon,m} \hbar^m \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}, \end{multline*} up to picking $r$ such that $\varepsilon r - N \geq m$.
\end{proof} \begin{proof}[Proof of Theorem \ref{theorem_approximability}] We expand $u_\hbar$ in the frame $(\gs{\hbar}{\boldsymbol m}{\boldsymbol n})_{[\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d}}$ as \begin{equation*} u_\hbar = u_\hbar^{\rm near} + u_\hbar^{\rm mid} + u_\hbar^{\rm far} \end{equation*} where \begin{align*} u_\hbar^{\rm near} &:= \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm near}}} (u_{\hbar},\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star)\gs{\hbar}{\boldsymbol m}{\boldsymbol n} \\ u_\hbar^{\rm mid} &:= \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}} (u_{\hbar},\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star)\gs{\hbar}{\boldsymbol m}{\boldsymbol n} \\ u_\hbar^{\rm far} &:= \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm far}}} (u_{\hbar},\gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star)\gs{\hbar}{\boldsymbol m}{\boldsymbol n}. \end{align*} The proof then consists in showing that $u_\hbar^{\rm mid}$ and $u_\hbar^{\rm far}$ are small. {\bf Step 1.} We first treat the $u_\hbar^{\rm mid}$ term. To do so, we start by introducing the approximation \begin{equation} \label{tmp_def_tilde_umid} \widetilde u_\hbar^{\rm mid} := \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}} \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| \leq \hbar^{-\varepsilon/2}}} U^{\boldsymbol m,\boldsymbol n}_{\boldsymbol m',\boldsymbol n'} (u_{\hbar},\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}) \gs{\hbar}{\boldsymbol m}{\boldsymbol n}.
\end{equation} Recalling \eqref{eq_LBA_mid_star_inclusion}, all the $[\boldsymbol m',\boldsymbol n']$ indices in the sum belong to the enlarged set $\Lambda_{\hbar,{\rm mid}}^\star$, so that \begin{equation} \label{tmp_u_test_mid_star} |(u_{\hbar},\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'})| \leq C_{\varepsilon,m} \hbar^m \|F^{\hbar}\|_{\ell^2(\mathbb Z^{2d})} \end{equation} by Lemma \ref{lemma_LBA_mid_star}. Recalling \eqref{eq_diam_LBA}, $|[\boldsymbol m,\boldsymbol n]| \leq C\hbar^{-1/2}$ for all $[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}$, and we have from \eqref{eq_norm_gskmn} \begin{equation} \label{tmp_gauss_hp_norm} \|\gs{\hbar}{\boldsymbol m}{\boldsymbol n}\|_{\widehat{H}_\hbar^p(\mathbb R^d)} \leq C_p. \end{equation} Thus, plugging \eqref{tmp_u_test_mid_star} and \eqref{tmp_gauss_hp_norm} into \eqref{tmp_def_tilde_umid}, we have \begin{align} \nonumber \|\widetilde u_\hbar^{\rm mid}\|_{\widehat{H}_\hbar^p(\mathbb R^d)} &\leq C_{p,\varepsilon,m} \hbar^{m} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}} \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| \leq \hbar^{-\varepsilon/2}}} 1 \\ \label{tmp_norm_tilde_umin} &\leq C_{p,\varepsilon,m} \hbar^{m-d-\varepsilon d} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}.
\end{align} We now estimate the difference between $u_\hbar^{\rm mid}$ and $\widetilde u_\hbar^{\rm mid}$. We have \begin{align*} u_\hbar^{\rm mid} - \widetilde u_\hbar^{\rm mid} = \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}} \left (u_\hbar, \gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star- \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| \leq \hbar^{-\varepsilon/2}}} U^{\boldsymbol m,\boldsymbol n}_{\boldsymbol m',\boldsymbol n'} \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'} \right ) \gs{\hbar}{\boldsymbol m}{\boldsymbol n}, \end{align*} so that \begin{align*} \|u_\hbar^{\rm mid} - \widetilde u_\hbar^{\rm mid}\|_{\widehat{H}_\hbar^p(\mathbb R^d)} \leq C_p \|u_\hbar\|_{L^2(\mathbb R^d)} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm mid}}} \left \| \gs{\hbar}{\boldsymbol m}{\boldsymbol n}^\star - \sum_{\substack{[\boldsymbol m',\boldsymbol n'] \in \mathbb Z^{2d} \\ |[\boldsymbol m,\boldsymbol n]-[\boldsymbol m',\boldsymbol n']| \leq \hbar^{-\varepsilon/2}}} U^{\boldsymbol m,\boldsymbol n}_{\boldsymbol m',\boldsymbol n'} \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'} \right \|_{L^2(\mathbb R^d)} \end{align*} and it follows from Proposition \ref{proposition_tight_expansion} that \begin{equation} \label{tmp_diff_tilde_norm} \|u_\hbar^{\rm mid} - \widetilde u_\hbar^{\rm mid}\|_{\widehat{H}_\hbar^p(\mathbb R^d)} \leq C_{p,\varepsilon,m} \hbar^{m-d} \|u_\hbar\|_{\widehat{H}_\hbar^p(\mathbb R^d)} \leq C_{p,\varepsilon,m} \hbar^{m-d-N} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}. \end{equation} Then, it follows from \eqref{tmp_norm_tilde_umin} and \eqref{tmp_diff_tilde_norm} that \begin{equation*} \|u_\hbar^{\rm mid}\|_{\widehat{H}_\hbar^p(\mathbb R^d)} \leq C_{p,\varepsilon,m} \hbar^m \|F^{\hbar}\|_{\ell^2(\mathbb Z^{2d})} \qquad \forall m \in \mathbb N, \end{equation*} up to redefining $m$. {\bf Step 2.} We then turn our attention to $u_\hbar^{\rm far}$.
On the one hand, we can apply Proposition \ref{proposition_quasi_orthogonality} with $\mathcal P_{\hbar,L,\textup A} := \hbar^{[\boldsymbol{\beta}]}\partial^{\boldsymbol{\beta}} \circ P_\hbar$. This gives \begin{align*} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm near}}^\star} |(\boldsymbol x^{\boldsymbol{\alpha}} \partial^{\boldsymbol{\beta}} (P_\hbar u_\hbar^{\rm far}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})| &\leq \hbar^{-[\boldsymbol{\beta}]} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm near}}^\star} \sum_{[\boldsymbol m',\boldsymbol n'] \in \Lambda_{\hbar,{\rm far}}} |(u_{\hbar},\gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}^\star)| |(\boldsymbol x^{\boldsymbol{\alpha}}\mathcal P_{\hbar,L,\textup A} \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'},\gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \\ &\leq C_{\boldsymbol{\alpha},\boldsymbol{\beta},r} \hbar^{r-[\boldsymbol{\beta}]} \|u_{\hbar}\|_{L^2(\mathbb R^d)} \qquad \forall r \in \mathbb N. \end{align*} As a result, since $|\Lambda_{\hbar,{\rm near}}^\star| \leq C(\operatorname{diam}(\Lambda_{\hbar,{\rm near}}^\star))^{2d} \leq C\hbar^{-d}$ due to \eqref{eq_diam_LBA}, we have \begin{align*} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm near}}^\star} |(\boldsymbol x^{\boldsymbol{\alpha}} \partial^{\boldsymbol{\beta}} (P_\hbar u_\hbar^{\rm far}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 &\leq C \hbar^{-d} \left ( \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm near}}^\star} |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}} (P_\hbar u_\hbar^{\rm far}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})| \right )^2 \\ &\leq C_{\boldsymbol{\alpha},\boldsymbol{\beta},r} \hbar^{2r-2[\boldsymbol{\beta}]-d} \|u_\hbar\|_{L^2(\mathbb R^d)}^2 \end{align*} which we rewrite as \begin{equation*} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm near}}^\star} |(\boldsymbol x^{\boldsymbol{\alpha}} \partial^{\boldsymbol{\beta}} (P_\hbar u_\hbar^{\rm
far}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 \leq C_{\boldsymbol{\alpha},\boldsymbol{\beta},m} \hbar^{2m} \|u_\hbar\|_{L^2(\mathbb R^d)}^2 \qquad \forall m \in \mathbb N \end{equation*} up to redefining $m$. On the other hand, for $[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm far}}^\star$, we write that \begin{equation*} P_\hbar u_\hbar^{\rm far} = f_\hbar - P_\hbar u_\hbar^{\rm near} - P_\hbar u_\hbar^{\rm mid}, \end{equation*} so that \begin{equation*} |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}}(P_\hbar u_\hbar^{\rm far}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 \leq C \left ( |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}} f_\hbar,\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 + |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}}(P_\hbar u_\hbar^{\rm near}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 + |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}}(P_\hbar u_\hbar^{\rm mid}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 \right ). \end{equation*} We then have \begin{equation*} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm far}}^\star} |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}}(P_\hbar u_\hbar^{\rm mid}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 \leq C \|\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}}(P_\hbar u_\hbar^{\rm mid})\|_{L^2(\mathbb R^d)}^2 \leq C \|u_\hbar^{\rm mid}\|_{\widehat{H}_\hbar^{[\boldsymbol{\beta}]+2}(\mathbb R^d)}^2 \leq C_{\boldsymbol{\beta},m} \hbar^{2m} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}^2 \end{equation*} due to {\bf Step 1}.
Next, by the Cauchy--Schwarz inequality and Proposition \ref{proposition_quasi_orthogonality}, \begin{align*} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm far}}^\star} |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}} f_\hbar,\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 &\leq \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}^2 \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm far}}^\star} \sum_{[\boldsymbol m',\boldsymbol n'] \in \Lambda_{\hbar,{\rm rhs}}} |(\boldsymbol x^{\boldsymbol{\alpha}} \partial^{\boldsymbol{\beta}} \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'},\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 \\ &\leq C_{\boldsymbol{\alpha},\boldsymbol{\beta},m}^2 \hbar^{2m} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}^2. \end{align*} Finally, still by the Cauchy--Schwarz inequality and Proposition \ref{proposition_quasi_orthogonality}, it holds that \begin{align*} \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm far}}^\star} |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}} (P_\hbar u_\hbar^{\rm near}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 &\leq C \|u_\hbar\|_{L^2(\mathbb R^d)}^2 \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{\hbar,{\rm far}}^\star} \sum_{[\boldsymbol m',\boldsymbol n'] \in \Lambda_{\hbar,{\rm near}}} |(\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}} (P_\hbar \gs{\hbar}{\boldsymbol m'}{\boldsymbol n'}),\gs{\hbar}{\boldsymbol m}{\boldsymbol n})|^2 \\ &\leq C_{\boldsymbol{\alpha},\boldsymbol{\beta},r} \hbar^{2r} \|u_\hbar\|_{L^2(\mathbb R^d)}^2 \leq C_{\boldsymbol{\alpha},\boldsymbol{\beta},m} \hbar^{2m} \|F^\hbar\|_{\ell^2(\mathbb Z^{2d})}^2.
\end{align*} Since $\Lambda_{\hbar,{\rm near}}^\star$ and $\Lambda_{\hbar,{\rm far}}^\star$ cover $\mathbb Z^{2d}$, we have thus shown that \begin{equation} \label{tmp_regularity_ffar} \|\boldsymbol x^{\boldsymbol{\alpha}}\partial^{\boldsymbol{\beta}}(P_\hbar u_\hbar^{\rm far})\|_{L^2(\mathbb R^d)} \leq C_{\boldsymbol{\alpha},\boldsymbol{\beta},m} \hbar^m \|F^{\hbar}\|_{\ell^2(\mathbb Z^{2d})}. \end{equation} Letting $f_\hbar^{\rm far} := P_\hbar u_\hbar^{\rm far}$, we see from \eqref{tmp_regularity_ffar} that $u_\hbar^{\rm far}$ solves $P_\hbar u_\hbar^{\rm far} = f_\hbar^{\rm far}$ with a right-hand side $f_\hbar^{\rm far} \in \widehat{H}_\hbar^p(\mathbb R^d)$ such that \begin{equation*} \|f_\hbar^{\rm far}\|_{\widehat{H}_\hbar^p(\mathbb R^d)} \leq C_{p,m} \hbar^m \|F^{\hbar}\|_{\ell^2(\mathbb Z^{2d})} \quad \forall m \in \mathbb N. \end{equation*} Then, we conclude the proof with \eqref{eq_polynomial_resolvant}. \end{proof} \section{Application to the Helmholtz equation} \label{sec:helmholtz} We now turn our attention to the model problem of the Helmholtz equation \eqref{eq_helmholtz_intro}, with a particular focus on the case of plane-wave scattering. \subsection{Notation} In this section, $k$ denotes the wavenumber in the Helmholtz problem. For the sake of simplicity, we assume that $kR \geq 1$. We will apply the results of Section \ref{sec:setting} with $\hbar \sim (kR)^{-1}$. As a result, the norms \begin{equation} \label{eq_weighted_norm_k} \|v\|_{\widehat{H}_k^p(\mathbb R^d)}^2 := \sum_{\substack{\boldsymbol{\alpha} \in \mathbb N^d \\ [\boldsymbol{\alpha}] \leq p}} \sum_{q \leq p - [\boldsymbol{\alpha}]} k^{-2[\boldsymbol{\alpha}]} \left\|\left|\frac{\boldsymbol x}{R}\right|^q \partial^{\boldsymbol{\alpha}} v\right\|_{L^2(\mathbb R^d)}^2 \end{equation} will be convenient.
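As a side remark (an elementary computation, sketched here and not used in what follows), these weighted norms are comparable to unweighted semiclassical norms on functions supported in a fixed ball:

```latex
% Side comparison (sketch). Write \|v\|_{H_k^p}^2 := \sum_{[\alpha] \le p}
% k^{-2[\alpha]} \|\partial^\alpha v\|_{L^2}^2 for the unweighted norm.
% Keeping only the q = 0 terms in \eqref{eq_weighted_norm_k} gives
\begin{equation*}
  \|v\|_{H_k^p(\mathbb R^d)} \leq \|v\|_{\widehat{H}_k^p(\mathbb R^d)},
\end{equation*}
% while |x/R| \leq 2 on B(0,2R) and the at most p+1 admissible values of q give
\begin{equation*}
  \|v\|_{\widehat{H}_k^p(\mathbb R^d)} \leq 2^p \sqrt{p+1}\, \|v\|_{H_k^p(\mathbb R^d)}
  \qquad \text{whenever } \operatorname{supp} v \subset \overline{B(0,2R)}.
\end{equation*}
```

Hence, on compactly supported functions, the weights only affect constants; their role is to control the behaviour at infinity.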
Notice that if \begin{equation*} \mathcal F(v)(\boldsymbol{\xi}) := (2\pi)^{-d/2} \int_{\mathbb R^d} v(\boldsymbol x) e^{-i \boldsymbol x \cdot \boldsymbol{\xi}} \mathrm{d}\boldsymbol x, \quad \mbox{a.e. } \boldsymbol{\xi} \in \mathbb R^d, \end{equation*} is the Fourier transform of $v \in L^2(\mathbb R^d)$, and if we define the ``reverse'' norm by \begin{equation} \|v\|_{\check{H}_k^p(\mathbb R^d)}^2 := \sum_{\substack{\boldsymbol{\alpha} \in \mathbb N^d \\ [\boldsymbol{\alpha}] \leq p}} \sum_{q \leq p - [\boldsymbol{\alpha}]} k^{-2q} \left\|\left|\frac{\boldsymbol x}{R}\right|^q \partial^{\boldsymbol{\alpha}} v\right\|_{L^2(\mathbb R^d)}^2, \end{equation} then we have \begin{equation} \label{eq:TFNormePoids} c(R) \|\mathcal F v\|_{\check{H}^p_k(\mathbb R^d)} \leq \|v\|_{\widehat{H}^p_k(\mathbb R^d)} \leq C(R) \|\mathcal F v\|_{\check{H}^p_k(\mathbb R^d)}. \end{equation} We will also use the following ``standard'' norm \begin{equation*} \|v\|_{H_k^p(\mathbb R^d)}^2 := \sum_{\substack{\boldsymbol{\alpha} \in \mathbb N^d \\ [\boldsymbol{\alpha}] \leq p}} k^{-2[\boldsymbol{\alpha}]}\|\partial^{\boldsymbol{\alpha}} v\|_{L^2(\mathbb R^d)}^2. \end{equation*} \subsection{Model problem} \label{section_model_problem} \begin{subequations} \label{eq_helmholtz} We consider smooth coefficients $\mu,\boldsymbol A \in C_{\rm b}^\infty(\mathbb R^d)$ that are respectively equal to $1$ and $\boldsymbol I$ outside $B(0,R)$. Given $f: \mathbb R^d \to \mathbb C$, our model problem is to find $u: \mathbb R^d \to \mathbb C$ such that \begin{equation} \label{eq_helmholtz_volume} -k^2 \mu u - \div \left (\boldsymbol A\boldsymbol \nabla u\right ) = f \text{ in } \mathbb R^d \end{equation} and \begin{equation} \label{eq_sommerfeld} \frac{\partial u}{\partial |\boldsymbol x|}(\boldsymbol x) - ik u(\boldsymbol x) = o\left( |\boldsymbol x|^{-(d-1)/2}\right) \text{ as } |\boldsymbol x| \to +\infty.
\end{equation} \end{subequations} Problem \eqref{eq_helmholtz} is well-posed in the sense that for all $f \in L^2_{\rm comp}(\mathbb R^d)$, there exists a unique $R_k f := u \in L^2_{\rm loc}(\mathbb R^d)$ such that \eqref{eq_helmholtz_volume} and \eqref{eq_sommerfeld} hold true. Here, we will further assume that the problem has a polynomially bounded resolvent, meaning that there exist $C,N > 0$ such that \begin{equation} \label{eq_resolvant_helmholtz} \|\chi R_k \chi\|_{L^2(\mathbb R^d) \to L^2(\mathbb R^d)} \leq C (kR)^N \end{equation} where $\chi$ is a smooth cut-off function that takes the value $1$ on $B(0,R)$ and $0$ outside $B(0,2R)$. \begin{remark}[When does the polynomial bound actually hold?] The bound \eqref{eq_resolvant_helmholtz} is known to hold in several situations: \begin{itemize} \item When the dynamics induced by the Hamiltonian $p$ has no trapped trajectory, i.e., when every trajectory leaves any compact set in finite time, the assumption holds with $\mathcal{K}= [1, +\infty)$. See for instance \cite{galkowski2019optimal}. This situation is often referred to as ``non-trapping''. \item When the dynamics induced by $p$ has a trapped set, and the dynamics is hyperbolic close to this trapped set, it has been conjectured in \cite{zworski2017mathematical} that \eqref{eq_polynomial_resolvant} always holds with $\mathcal{K}= [1, +\infty)$. Actually, this is already known when the trapped set is ``filamentary enough'', see \cite{nonnenmacher2011spectral,nonnenmacher2009quantum}. \item Without any assumption on the dynamics, \eqref{eq_polynomial_resolvant} holds when $\mathcal{K}$ is taken to be $[1, +\infty)$ from which we exclude a set of frequencies $k$ whose intersection with $\{k \geq k_0\}$ has a length going to zero as $k_0 \to +\infty$. We refer the reader to \cite{lafontaine_spence_wunsch_2019a} for more details.
\end{itemize} \end{remark} \subsection{Perfectly matched layers}\label{sec:PML} As advertised in the introduction, the formulation \eqref{eq_helmholtz} is not suited for immediate discretization by ``volume'' methods, as the radiation condition is hard to take into account. We will thus rely on an equivalent formulation that uses perfectly matched layers (PML). Specifically, given $f: \mathbb R^d \to \mathbb C$, we consider the problem to find $u: \mathbb R^d \to \mathbb C$ such that $P_k u = f$ where \begin{equation} \label{eq:PML} P_k u := -\frac{1}{k^2} \left((\boldsymbol I+ i \boldsymbol M)^{-1} \boldsymbol \nabla \right) \cdot \left((\boldsymbol I+ i \boldsymbol M)^{-1} \boldsymbol A \boldsymbol \nabla u \right) -\mu u. \end{equation} In \eqref{eq:PML}, the (SPD) matrix function $\boldsymbol M$ is given by \begin{equation*} \boldsymbol M(\boldsymbol x) := \frac{g(|\boldsymbol x|)}{|\boldsymbol x|^3} \left ( |\boldsymbol x|^2 \boldsymbol I - \boldsymbol x\otimes \boldsymbol x \right ) + \frac{g'(|\boldsymbol x|)}{|\boldsymbol x|^2} \boldsymbol x \otimes \boldsymbol x, \end{equation*} where $g : \mathbb R \longrightarrow \mathbb R$ is a user-defined function such that $g(r) = 0$ if $r \leq R$, $g(r) = r$ if $r\geq R_0 > R$, and $g'(r) \geq 0$. In what follows, we will assume that $g$ is a smooth function, so as to satisfy the assumptions of Section \ref{sec:setting}, but many results about PML still hold with less regular $g$ (see, e.g., \cite{galkowski2021perfectly}). Notice that, if $|\boldsymbol x| \leq R$, $\boldsymbol M = \boldsymbol 0$, so that the original operator is not modified on the support of $\mu-1$ and $\boldsymbol A-\boldsymbol I$. On the other hand, $\boldsymbol M = \boldsymbol I$ if $|\boldsymbol x| \geq R_0$, so that dissipation is introduced away from the origin, where the operator simply reads \begin{equation*} P_kv = -\frac{1}{2ik^2} \Delta v - \mu v \qquad \text{ whenever } \qquad \operatorname{supp} v \cap B(0,R_0) = \emptyset.
\end{equation*} This transformation can be naturally interpreted as a complex deformation of coordinates. It is also often called the complex scaling technique. Crucially, the PML is designed in such a way that \begin{equation} \label{eq_pml_exact} \left ((P_k)^{-1} f\right )|_{B(0,R)} = \left (R_k f\right )|_{B(0,R)} \end{equation} whenever $\operatorname{supp} f \subset B(0,R)$. We refer the reader to \cite[\S 4.5]{dyatlov2019mathematical} or \cite{galkowski2021perfectly} for more information. \subsection{Abstract setting} We now verify that the Helmholtz problem formulated with PML indeed fits the abstract setting of Section \ref{ssec:general}. The only non-trivial facts to establish are the polynomial resolvent estimates in $\widehat{H}^q(\mathbb R^d)$ in \eqref{eq_polynomial_resolvant} and the boundedness of the energy layer \eqref{eq_assumption_symbol_bounded}. \begin{lemma}[Resolvent estimates] Let $q \in \mathbb N$. For all $f \in \widehat{H}^q(\mathbb R^d)$, there exists a unique $u \in L^2(\mathbb R^d)$ such that $P_k u = f$. In addition, we have $u \in \widehat{H}^q(\mathbb R^d)$ with \begin{equation*} \|u\|_{\widehat{H}^q_k(\mathbb R^d)} \leq C (kR)^{N} \|f\|_{\widehat{H}^q_k(\mathbb R^d)}. \end{equation*} Furthermore, if $\operatorname{supp} f \subset B(0,R)$, then $u \in \widehat{H}^{q+2}(\mathbb R^d)$ with \begin{equation}\label{eq:ResolvantePoly} \|u\|_{\widehat{H}^{q+2}_k(\mathbb R^d)} \leq C (kR)^{N} \|f\|_{\widehat{H}^q_k(\mathbb R^d)}. \end{equation} \end{lemma} \begin{proof} We first invoke Theorem 1.6 of \cite{galkowski2021perfectly}, which states that \begin{equation*} \|P_k^{-1}\|_{L^2(\mathbb R^d) \to H^2_k(\mathbb R^d)} \leq C \|\chi R_k \chi\|_{L^2(\mathbb R^d) \to L^2(\mathbb R^d)}.
\end{equation*} Then, using a standard bootstrap argument, we easily show that \begin{equation}\label{eq:Bootstrap} \|P_k^{-1}\|_{H^q_k(\mathbb R^d) \to H^{q+2}_k(\mathbb R^d)} \leq C \|\chi R_k \chi\|_{L^2(\mathbb R^d) \to L^2(\mathbb R^d)} \end{equation} for all $q \in \mathbb N$. We then need to take care of the weights in the $\widehat{H}^q(\mathbb R^d)$ norms. To do so, we observe that if $P_k u = f$, then we may write \begin{equation*} 2ik^2 u + \Delta u = g, \end{equation*} with \begin{equation*} g := -2ik^2 P_k u + (2ik^2 P_k + \Delta + 2ik^2) u = - 2ik^2(f + Q_k u), \end{equation*} where $Q_k$ is a differential operator of order $2$ with smooth coefficients supported in $B(0,R_0)$. Let $\boldsymbol{\alpha},\boldsymbol{\beta} \in \mathbb N^d$ with $[\boldsymbol{\alpha}],[\boldsymbol{\beta}] \leq 2$. Since $Q_k$ is compactly supported, we have \begin{equation}\label{eq:Controleg} \left\|\left(\frac{\boldsymbol x}{R}\right)^{\boldsymbol{\alpha}} g\right\|_{\widehat{H}^q_k(\mathbb R^d)} \leq C_{\boldsymbol{\alpha}} k^2 \left(\|u\|_{H_k^{q+2}(\mathbb R^d)}+ \|f\|_{\widehat{H}^{q+2}_k(\mathbb R^d)}\right).
\end{equation} Now, we note that \begin{equation}\label{eq:TFu} \left(\frac{\boldsymbol x}{R}\right)^{\boldsymbol{\alpha}} \partial^{\boldsymbol{\beta}} u = i^{[\boldsymbol{\alpha}] + [\boldsymbol{\beta}]} \mathcal F^{-1} \left( \left(\frac{1}{R}\partial\right)^{\boldsymbol{\alpha}} \left( \frac{\boldsymbol{\xi}^{\boldsymbol{\beta}} }{-|\boldsymbol{\xi}|^2 + 2ik^2} \mathcal F(g) \right)\right). \end{equation} Let us write \begin{equation*} \frac{\boldsymbol{\xi}^{\boldsymbol{\beta}}}{-|\boldsymbol{\xi}|^2 + 2ik^2} = k^{[\boldsymbol{\beta}]-2} \frac{(\boldsymbol{\xi}/k)^{\boldsymbol{\beta}}}{-|\boldsymbol{\xi}/k|^2 + 2i}, \end{equation*} so that the map \begin{equation*} \boldsymbol{\xi} \mapsto \frac{\boldsymbol{\xi}^{\boldsymbol{\beta}}}{-|\boldsymbol{\xi}|^2 + 2ik^2} \end{equation*} has $C^\ell$ norm bounded by $C_\ell k^{[\boldsymbol{\beta}] - 2- \ell} \leq C_\ell k^{[\boldsymbol{\beta}] - 2}$, since $k \geq 1$. We deduce from \eqref{eq:TFNormePoids} and \eqref{eq:Controleg} that \begin{equation*} \left \| \left(\frac{1}{R}\partial\right)^{\boldsymbol{\alpha}} \left( \frac{\boldsymbol{\xi}^{\boldsymbol{\beta}}}{-|\boldsymbol{\xi}|^2 + 2ik^2} \mathcal F( g) \right) \right \|_{\check{H}^q_k(\mathbb R^d)} \leq Ck^{[\boldsymbol{\beta}]} \left(\|u\|_{H_k^{q+2}(\mathbb R^d)}+ \|f\|_{\widehat{H}^{q+2}_k(\mathbb R^d)}\right) \end{equation*} and hence, thanks to \eqref{eq:TFu}, \begin{multline*} \|u\|_{\widehat{H}^{q+2}_k(\mathbb R^d)} \leq C \sum_{[\boldsymbol{\alpha}] + [\boldsymbol{\beta}] \leq 2} k^{-[\boldsymbol{\beta}]} \left\|\left(\frac{\boldsymbol x}{R}\right)^{\boldsymbol{\alpha}} \partial^{\boldsymbol{\beta}} u\right\|_{\widehat{H}_k^q(\mathbb R^d)} \leq C \sum_{[\boldsymbol{\alpha}] + [\boldsymbol{\beta}] \leq 2} k^{-[\boldsymbol{\beta}]} \left\|\mathcal{F} \left ( \left(\frac{\boldsymbol x}{R}\right)^{\boldsymbol{\alpha}} \partial^{\boldsymbol{\beta}} u \right )\right\|_{\check{H}_k^q(\mathbb R^d)} \\ \leq C \sum_{[\boldsymbol{\alpha}] + [\boldsymbol{\beta}] \leq 2} \left(\|u\|_{H_k^{q+2}(\mathbb R^d)}+ \|f\|_{\widehat{H}^{q+2}_k(\mathbb R^d)}\right) \leq C \|\chi R_k \chi\|_{L^2(\mathbb R^d) \to L^2(\mathbb R^d)} \|f\|_{\widehat{H}^{q+2}_k(\mathbb R^d)}, \end{multline*} thanks to \eqref{eq:Bootstrap}. The first part of the result follows from \eqref{eq_resolvant_helmholtz}. Now, if we further assume that $f$ is supported in $B(0,R)$, then $g$ is compactly supported and equation \eqref{eq:Controleg} may be replaced by \begin{equation*} \left\|\left(\frac{\boldsymbol x}{R}\right)^{\boldsymbol{\alpha}} g\right\|_{\widehat{H}^q_k(\mathbb R^d)} \leq C_{\boldsymbol{\alpha}} k^2 \left (\|f\|_{H^q_k(\mathbb R^d)} + \|u\|_{H_k^{q+2}(\mathbb R^d)} \right ), \end{equation*} and the same reasoning as above leads to \begin{equation*} \|u\|_{\widehat{H}^{q+2}_k(\mathbb R^d)} \leq C \left(\|u\|_{H_k^{q+2}(\mathbb R^d)}+ \|f\|_{H^{q}_k(\mathbb R^d)}\right) \leq C \|\chi R_k \chi\|_{L^2(\mathbb R^d) \to L^2(\mathbb R^d)} \|f\|_{\widehat{H}^{q}_k(\mathbb R^d)}, \end{equation*} as announced. \end{proof} \begin{lemma}[Boundedness of the energy layer] For all $\delta \in (0,1)$, the set \begin{equation*} \left \{ [\boldsymbol x,\boldsymbol{\xi}] \in \mathbb R^{2d} \; | \; |p(\boldsymbol x,\boldsymbol{\xi})| \leq \delta \right \} \end{equation*} is bounded. \end{lemma} \begin{proof} Fix $0 < \delta < 1$, and let $U := \{ [\boldsymbol x,\boldsymbol{\xi}] \in \mathbb R^{2d} \; | \; |p(\boldsymbol x,\boldsymbol{\xi})| \leq \delta \}$. Let $\boldsymbol x \in \mathbb R^d$. We first assume that $|\boldsymbol x| \leq R_0$. Since we know that \begin{equation*} C|\boldsymbol{\xi}|^2-C' \leq |p(\boldsymbol x,\boldsymbol{\xi})| \leq \delta, \end{equation*} we see that $\{\boldsymbol{\xi}\in \mathbb R^d \; | \; |p(\boldsymbol x,\boldsymbol{\xi})| \leq \delta \}$ is bounded, and thus $U \cap (B(0,R_0)\times \mathbb R^d)$ is bounded.
On the other hand, if $|\boldsymbol x| \geq R_0$, we have \begin{equation*} p(\boldsymbol x,\boldsymbol{\xi}) = (1+i)^{-2} |\boldsymbol{\xi}|^2-1 = -\frac{i}{2} |\boldsymbol{\xi}|^2-1 \end{equation*} so that \begin{equation*} |p(\boldsymbol x,\boldsymbol{\xi})|^2 = \frac{1}{4}|\boldsymbol{\xi}|^4 + 1 \geq 1 > \delta^2, \end{equation*} which implies that $U \setminus (B(0,R_0)\times \mathbb R^d) = \emptyset$. \end{proof} We finally show that the right-hand sides associated with plane-wave scattering are indeed well-approximated by Gaussian coherent states, in order to apply Corollary \ref{corollary_approximability} later on. \begin{lemma}[Approximability of plane-wave right-hand sides] \label{Lem:ApproxPW} For $k > 0$, consider the right-hand side $f_k := \widetilde \chi e^{ik\boldsymbol d\cdot\boldsymbol x}$ where $\widetilde \chi \in C^\infty_{\rm c}(B_R)$ and $\boldsymbol d \in \mathbb R^d$ with $|\boldsymbol d| = 1$. Then, there exists $F^k \in \ell^2(\mathbb Z^{2d})$ such that \begin{equation} \label{eq_estimate_planewave} \left \| f_k - \sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda_{k,{\rm rhs}}} F^k_{\boldsymbol m,\boldsymbol n} \gs{k}{\boldsymbol m}{\boldsymbol n} \right \|_{\widehat{H}^p_k(\mathbb R^d)} \leq C_{\varepsilon,p,m} (kR)^{-m}, \end{equation} where \begin{equation*} \Lambda_{k,{\rm rhs}} := \left \{ [\boldsymbol m, \boldsymbol n]\in \mathbb Z^{2d} \; | \; \mathrm{dist}\left(\boldsymbol x^{k,\boldsymbol m}, \mathrm{supp}(\widetilde\chi)\right) \leq (kR)^{-1/2+\varepsilon} \text{ and } \left||\boldsymbol{\xi}^{k,\boldsymbol n}|-1\right| \leq (kR)^{-1/2+\varepsilon} \right \}. \end{equation*} In particular, if $p$ is the symbol of the operator $P_k$ introduced in Section \ref{sec:PML} and if $\widetilde\chi$ is supported in the region where $p(\boldsymbol x,\boldsymbol{\xi}) = |\boldsymbol{\xi}|^2-1$, then $\Lambda_{k,{\rm rhs}}$ is of the form \eqref{eq:FormeRHS}, with $\alpha=0$. \end{lemma} \begin{proof} From now on, we fix $p$, $m$ and $\varepsilon$.
We will first show that, if $[\boldsymbol m,\boldsymbol n] \notin \Lambda_{k,{\rm rhs}}$, then we have \begin{equation} \label{eq:DecayCoefPW} \left| ( f_k, \Psi_{k,\boldsymbol m,\boldsymbol n})\right| \leq C_{m} (kR)^{-m} \quad \forall m \in \mathbb N. \end{equation} The quantity $( f_k, \Psi_{k,\boldsymbol m,\boldsymbol n})$ is of the form $\int_{\mathbb R^d} \widetilde\chi(\boldsymbol x) e^{ik \varphi_{\boldsymbol m,\boldsymbol n}(\boldsymbol x)}\mathrm{d}\boldsymbol x$, with \begin{equation*} \varphi_{\boldsymbol m,\boldsymbol n}(\boldsymbol x) = \boldsymbol x \cdot \left(\boldsymbol d - \boldsymbol{\xi}^{k,\boldsymbol n}\right) + \frac{i}{2}|\boldsymbol x-\boldsymbol x^{k,\boldsymbol m}|^2, \end{equation*} so that \begin{equation*} \nabla \varphi_{\boldsymbol m,\boldsymbol n}(\boldsymbol x) = \boldsymbol d - \boldsymbol{\xi}^{k,\boldsymbol n}+ i (\boldsymbol x- \boldsymbol x^{k,\boldsymbol m}). \end{equation*} In particular, for $\boldsymbol x \in \mathrm{supp}(\widetilde\chi)$, we have \begin{equation*} |\nabla \varphi_{\boldsymbol m,\boldsymbol n}(\boldsymbol x)| \geq \max\left(\left||\boldsymbol{\xi}^{k,\boldsymbol n}|-1\right|, \mathrm{dist}\left (\boldsymbol x^{k,\boldsymbol m}, \mathrm{supp}(\widetilde\chi)\right )\right) \geq (kR)^{-\frac{1}{2}+ \varepsilon}, \end{equation*} so that we may use the method of non-stationary phase (i.e., integrate by parts several times, as in \cite[Lemma 3.14]{zworski2012semiclassical}) to deduce \eqref{eq:DecayCoefPW}. Now, combining \eqref{eq:DecayCoefPW} with Proposition \ref{proposition_tight_expansion}, we see that, for any $[\boldsymbol m,\boldsymbol n]\notin \Lambda_{k,{\rm rhs}}$, we have \begin{equation} \label{eq:DecayPWStar} \left | (f_k, \gs{k}{\boldsymbol m}{\boldsymbol n}^\star) \right | \leq C_{m} (kR)^{-m} \quad \forall m \in \mathbb N.
\end{equation} On the other hand, it follows from \cite[Theorem 3.1]{chaumontfrelet_ingremeau_2022a} that we have \begin{equation}\label{eq:Rappel:TheoetMaxSontTropCools} \left \| f_k - \sum_{\substack{[\boldsymbol m, \boldsymbol n]\in \mathbb Z^{2d}\\ |[\boldsymbol m,\boldsymbol n]| \leq k}} (f_k,\gs{k}{\boldsymbol m}{\boldsymbol n}^\star) \gs{k}{\boldsymbol m}{\boldsymbol n} \right\|_{\widehat{H}_k^p(\mathbb R^d)} \leq C_{p,m} (kR)^{-m}. \end{equation} Writing \begin{equation*} f_k = \sum_{[\boldsymbol m, \boldsymbol n]\in \Lambda_{k, \rm rhs}} (f_k,\gs{k}{\boldsymbol m}{\boldsymbol n}^\star) \gs{k}{\boldsymbol m}{\boldsymbol n} + \sum_{\substack{[\boldsymbol m, \boldsymbol n]\in \mathbb Z^{2d}\setminus \Lambda_{k, \rm rhs}\\ |[\boldsymbol m,\boldsymbol n]| \leq k}} (f_k,\gs{k}{\boldsymbol m}{\boldsymbol n}^\star) \gs{k}{\boldsymbol m}{\boldsymbol n} + \left ( f_k - \sum_{\substack{[\boldsymbol m, \boldsymbol n]\in \mathbb Z^{2d}\\ |[\boldsymbol m,\boldsymbol n]| \leq k}} (f_k,\gs{k}{\boldsymbol m}{\boldsymbol n}^\star) \gs{k}{\boldsymbol m}{\boldsymbol n} \right ), \end{equation*} we deduce from \eqref{eq:DecayPWStar} and \eqref{eq:Rappel:TheoetMaxSontTropCools} that the last two terms have a $\widehat{H}_k^p$ norm bounded by $C_{\varepsilon,p,m} (kR)^{-m}$, and the result follows. \end{proof} \subsection{Approximability estimates} We are now ready to present our approximability estimates for the Helmholtz problem. We start with an approximation result for general right-hand sides that does not hinge on Section \ref{sec:setting}, but rather on the results established in \cite{chaumontfrelet_ingremeau_2022a}. Recall that $N$ was introduced in \eqref{eq_resolvant_helmholtz}. \begin{theorem}[Approximability at a fixed frequency] \label{Th:ApproxKFixed} Assume $f \in \widehat{H}^p(\mathbb R^d)$ with $\operatorname{supp} f \subset B(0,R)$.
If \begin{equation*} \Lambda := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d} \; | \; |[\boldsymbol m,\boldsymbol n]| \leq \sqrt{\rho (kR)^N} \right \}, \end{equation*} then, we have \begin{equation*} \left \| u-\sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda} (u,\gs{k}{\boldsymbol m}{\boldsymbol n}^\star)\gs{k}{\boldsymbol m}{\boldsymbol n} \right \|_{\widehat{H}^q_k(\mathbb R^d)} \leq C_p \rho^{-(2+p-q)/2} \|f\|_{\widehat{H}_k^p(\mathbb R^d)} \end{equation*} for all $p,q \in \mathbb N$ with $q \leq p$. \end{theorem} \begin{proof} Let us set $D := (kR/\pi)^{-1/2} (\rho (kR)^N)^{1/2}$. We start with \cite[Theorem 3.1]{chaumontfrelet_ingremeau_2022a}, showing that \begin{equation*} \left \| u-\sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda} (u,\gs{k}{\boldsymbol m}{\boldsymbol n}^\star)\gs{k}{\boldsymbol m}{\boldsymbol n} \right \|_{\widehat{H}^q_k(\mathbb R^d)} \leq C_p D^{q-p-2} \|u\|_{\widehat{H}^{p+2}_k(\mathbb R^d)}. \end{equation*} The result then follows using \eqref{eq:ResolvantePoly}, since \begin{multline*} D^{q-p-2} \|u\|_{\widehat{H}^{p+2}_k(\mathbb R^d)} \leq C_p (\rho (kR)^{N-1})^{(q-p-2)/2} \|u\|_{\widehat{H}^{p+2}_k(\mathbb R^d)} \\ \leq C_p (\rho (kR)^{N-1})^{(q-p-2)/2} (kR)^{N} \|f\|_{\widehat{H}^{p}_k(\mathbb R^d)} = C_p \rho^{(q-p-2)/2} \|f\|_{\widehat{H}^{p}_k(\mathbb R^d)}. \end{multline*} \end{proof} Our second approximability estimate applies specifically to high-frequency scattering problems. It is a direct consequence of Theorem \ref{theorem_approximability} and Lemma \ref{Lem:ApproxPW}. \begin{theorem}[Approximability in the high-frequency regime] \label{Theo:ApproxHF} Fix $\varepsilon > 0$ and consider the index set \begin{equation*} \Lambda := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d} \; | \; |p(\boldsymbol x^{k,\boldsymbol m},\boldsymbol{\xi}^{k,\boldsymbol n})| \leq (kR)^{-1/2+\varepsilon} \right \}.
\end{equation*} Then, if the right-hand side is of the form $f_k := \widetilde \chi e^{ik\boldsymbol d\cdot\boldsymbol x}$ where $\boldsymbol d \in \mathbb R^d$ with $|\boldsymbol d| = 1$ and $\widetilde \chi \in C^\infty_{\rm c}(B_R)$ is supported in the region where $\mu = 1$ and $\boldsymbol A = \boldsymbol I$, for all $q \in \mathbb N$, we have \begin{equation*} \left \| u_k-\sum_{[\boldsymbol m,\boldsymbol n] \in \Lambda} (u_k,\gs{k}{\boldsymbol m}{\boldsymbol n}^\star)\gs{k}{\boldsymbol m}{\boldsymbol n} \right \|_{\widehat{H}^q_k(\mathbb R^d)} \leq C_{\varepsilon,q,m} (kR)^{-m} \quad \forall m \in \mathbb N. \end{equation*} \end{theorem} \subsection{A least-squares method} \label{section_least_squares} In this section, we introduce a least-squares method based on Gaussian coherent states. For a finite set $\Lambda\subset \mathbb Z^{2d}$, we consider the discretization space \begin{equation*} W_\Lambda := \operatorname{span} \left\{ \gs{k}{\boldsymbol m}{\boldsymbol n}; \; [\boldsymbol m,\boldsymbol n] \in \Lambda \right\}. \end{equation*} Then, the least-squares method consists in finding $u_\Lambda \in W_\Lambda$ such that \begin{equation} \label{eq_least_squares} \left( P_k u_{\Lambda},P_k w_\Lambda \right) = \left( f, P_k w_{\Lambda} \right) \end{equation} for all $w_\Lambda \in W_\Lambda$. Classically, there exists a unique solution $u_\Lambda$, and we have \begin{equation} \|P_k(u-u_\Lambda)\|_{L^2(\mathbb R^d)} = \min_{w_\Lambda \in W_\Lambda} \|P_k(u-w_\Lambda)\|_{L^2(\mathbb R^d)}.
\end{equation} Using Theorem \ref{Th:ApproxKFixed}, we can easily show that the method converges at any fixed frequency, provided that sufficiently many degrees of freedom per wavelength are employed, as stated in the following corollary: \begin{corollary}[Convergence at a fixed frequency] Consider the set \begin{equation*} \Lambda_{\rho} := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d} \; | \; |\boldsymbol m|^2 + |\boldsymbol n|^2 \leq \rho (kR)^N \right \}. \end{equation*} Then the following error estimate holds true: \begin{equation*} \|u-u_\Lambda\|_{H^2(\mathbb R^d)} \leq C_p\rho^{-p/2} \|f\|_{\widehat{H}^p(\mathbb R^d)} \quad \forall p \in \mathbb N. \end{equation*} \end{corollary} Relying on Theorem \ref{Theo:ApproxHF} instead, we can show a refined error estimate in the high-frequency regime. \begin{corollary}[Convergence in the high-frequency regime] Fix $\varepsilon > 0$, let \begin{equation*} \Lambda := \left \{ [\boldsymbol m,\boldsymbol n] \in \mathbb Z^{2d} \; | \; |p(\boldsymbol x^{k,\boldsymbol m},\bxi^{k,\boldsymbol n})| \leq (kR)^{-1/2+\varepsilon} \right \}, \end{equation*} and let $f_k := \widetilde \chi e^{ik\boldsymbol d\cdot\boldsymbol x}$ with $\widetilde{\chi} \in C^\infty_{\rm c}(B_R)$ supported in the region where $\mu = 1$ and $\boldsymbol A = \boldsymbol I$. Then, if $k$ is sufficiently large, we have \begin{equation*} \|u-u_\Lambda\|_{\widehat{H}^2(\mathbb R^d)} \leq C_{\varepsilon,m} (kR)^{-m} \qquad \forall m \in \mathbb N. \end{equation*} \end{corollary} \section{Numerical results} \label{sec:numerics} In this section, we provide numerical illustrations of the above theory in the one-dimensional case. The purpose of these examples is simply to illustrate and support our theoretical findings. Extensions to higher dimensions, as well as a discussion of efficient assembly and solution of the resulting linear systems, will be reported elsewhere.
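In a computational implementation, once a quadrature rule is fixed, the variational problem \eqref{eq_least_squares} reduces to a standard weighted linear least-squares problem for the expansion coefficients. The following sketch is our own illustration (not the implementation used for the experiments below); the array `Pg`, holding $P_k$ applied to each retained Gaussian coherent state sampled at the quadrature nodes, is assumed to be precomputed:

```python
import numpy as np

def least_squares_coeffs(Pg, f, w):
    """Discrete counterpart of (P_k u_Lam, P_k w_Lam) = (f, P_k w_Lam).

    Pg: (n_nodes, n_states) array; column j holds P_k applied to the
        j-th Gaussian coherent state, sampled at the quadrature nodes.
    f:  (n_nodes,) samples of the right-hand side.
    w:  (n_nodes,) positive quadrature weights.
    Returns the expansion coefficients of u_Lam in the chosen basis."""
    s = np.sqrt(w)
    # weighting by sqrt(w) turns the quadrature inner product into the
    # Euclidean one, so an ordinary least-squares solve applies
    c, *_ = np.linalg.lstsq(Pg * s[:, None], f * s, rcond=None)
    return c
```

In exact arithmetic this is equivalent to solving the normal equations associated with \eqref{eq_least_squares}; the QR/SVD-based `lstsq` route is simply more robust when the basis is nearly redundant.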
\subsection{Setting} We consider the one-dimensional case, where the differential operator in \eqref{eq:PML} simplifies to \begin{equation}\label{eq:helm} P_k v = -\mu\nu v - k^{-2} (\alpha \nu^{-1} v')' \end{equation} where $\mu,\alpha > 0$ are smooth ``physical'' coefficients, and $\nu$ is the PML scaling. For the sake of simplicity we take $R := 1$, meaning that $\mu = \alpha = 1$ outside of $(-1,1)$ and that our right-hand sides $f$ will be supported in $(-1,1)$. Here, we select $\nu := 1 + i\sigma$ where $\sigma$ is a stretching function defined as \begin{equation*} \sigma(x) := a\left (\mathbf{1}_{(1,+\infty)}(x) (x-1)^r + \mathbf{1}_{(-\infty,-1)}(x)(-1-x)^r \right ) \end{equation*} with $a := 1/10$ and $r := 4$. Notice that this choice slightly departs from our theoretical framework, as the coefficients are only $C^3$ and not $C^\infty$. In all our experiments, the grid in phase space is chosen as \begin{equation*} x^{k,m} = \sqrt{k^{-1}\pi}m, \quad \xi^{k,n} = \sqrt{k^{-1}\pi}n, \qquad n,m \in \mathbb Z \end{equation*} and for given values of $k$ and $\delta$, the set of indices is taken to be \begin{equation*} \Lambda := \left \{ (m,n) \in \mathbb Z^2 \; | \; |p(x^{k,m},\xi^{k,n})| < \delta \right \}. \end{equation*} \subsection{Homogeneous medium with analytical solution} Our first example concerns the case where $\alpha = \mu = 1$. The right-hand side is given by \begin{equation*} \label{eq_numerics_fk} f_k := P_k(\phi e^{ikx}) = -k^{-2}(\phi'' + 2ik\phi')e^{ikx}, \end{equation*} where $\phi$ is the only even function in $C^3(\mathbb R)$ such that $\phi = 0$ on $(-\infty,-3/4)$, $\phi = 1$ on $(-1/2,0)$, and $\phi$ is a polynomial of degree $7$ in $(-3/4,-1/2)$, and the associated solution is $u_k(x) := \phi(x) e^{ikx}$. Results are reported in Table \ref{table_homogeneous}. Figure \ref{figure_spaces_homogeneous} represents the points $(x^{k,m},\xi^{k,n})$ included in the set $\Lambda$ for different values of $\delta$ in the case $k=400$.
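The selection of the index set $\Lambda$ above is straightforward to implement. The following sketch is our own illustration for the homogeneous case, where the symbol reduces to $p(x,\xi) = \xi^2 - 1$ in the physical region; the restriction to a box $|x^{k,m}| \leq L$ is an assumption made for illustration only:

```python
import numpy as np

def select_lambda(k, delta, symbol, L=2.0):
    """Indices (m, n) with |p(x^{k,m}, xi^{k,n})| < delta, restricted to
    the box |x^{k,m}| <= L (the restriction is for illustration only)."""
    h = np.sqrt(np.pi / k)           # phase-space grid spacing sqrt(pi/k)
    mmax = int(L / h)
    nmax = int(np.ceil(2.0 / h))     # |xi| <= 2 suffices for delta < 3
    out = []
    for m in range(-mmax, mmax + 1):
        for n in range(-nmax, nmax + 1):
            if abs(symbol(h * m, h * n)) < delta:
                out.append((m, n))
    return out

# homogeneous one-dimensional symbol in the physical region
p_hom = lambda x, xi: xi**2 - 1.0
```

As $\delta$ decreases, the retained points concentrate on the two lines $\xi = \pm 1$, which is exactly the behaviour visible in Figure \ref{figure_spaces_homogeneous}.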
\input{tables/table_homogeneous} \begin{figure} \begin{minipage}{.48\linewidth} \input{figures/spaces/hom_sample0.1_k400.tex} \end{minipage} \begin{minipage}{.48\linewidth} \input{figures/spaces/hom_sample0.4_k400.tex} \end{minipage} \begin{minipage}{.48\linewidth} \input{figures/spaces/hom_sample0.6_k400.tex} \end{minipage} \begin{minipage}{.48\linewidth} \caption{Representation of $\Lambda$ for $k=400$ in the homogeneous example. Top-left: $\delta = 0.1$. Top-right: $\delta = 0.4$. Bottom-left: $\delta = 0.6$. The horizontal and vertical distances between dots are $\sqrt{\pi/k}$.} \label{figure_spaces_homogeneous} \end{minipage} \end{figure} In Figure \ref{figure_convergence_homogeneous}, we present the convergence history of the method as $\delta$ is increased for different values of $k$. These curves illustrate the fact that the method converges for any fixed $k$ when increasing the number of Gaussian coherent states, as predicted by our theory. Besides, in Figure \ref{figure_cost_homogeneous} we represent the values of $\delta$ and the corresponding number of degrees of freedom $N_{\rm dofs}$ required to achieve an accuracy of about $2 \times 10^{-5}$ (second column of Table \ref{table_homogeneous}) for different frequencies. Interestingly, the expected rates, namely $\delta \sim k^{-1/2}$ and $N_{\rm dofs} \sim k^{1/2}$, are observed. \input{figures/scattering_homogeneous} \subsection{Scattering in a heterogeneous medium} We now focus on the case where $\alpha = 1$ and $\mu$ is the only even $C^3$ function that equals $2$ on $[0,0.7]$, equals $1$ on $[0.8,+\infty)$ and is a polynomial of degree $7$ on $[0.7,0.8]$. We select the right-hand side $f_k(x) := k^2(\mu-1)e^{ikx}$. Here, the analytical solution is not available, and instead, we rely on a reference solution computed by a Lagrange finite element method of order $4$ on a grid with $h = 0.02 \cdot k^{-\frac{9}{8}}$ in order to avoid the pollution effect (see, e.g., \cite{ihlenburg_babuska_1997a}).
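Since a degree-$7$ polynomial matching prescribed values and vanishing first, second and third derivatives at both junction points is unique, the coefficient $\mu$ above (and, similarly, the window $\phi$ of the previous subsection) can be written down explicitly through the classical $C^3$ ``smoothstep'' polynomial; the sketch below is our own realization of the construction:

```python
def smoothstep3(s):
    """Unique degree-7 polynomial with S(0)=0, S(1)=1 and vanishing first,
    second and third derivatives at s=0 and s=1 (a C^3 junction);
    indeed S'(s) = 140 s^3 (1-s)^3."""
    return s**4 * (35.0 - 84.0 * s + 70.0 * s**2 - 20.0 * s**3)

def mu(x):
    """Even C^3 coefficient: 2 on [0, 0.7], 1 on [0.8, +inf),
    degree-7 polynomial transition on [0.7, 0.8]."""
    t = abs(x)
    if t <= 0.7:
        return 2.0
    if t >= 0.8:
        return 1.0
    return 2.0 - smoothstep3((t - 0.7) / 0.1)
```

The window $\phi$ of the homogeneous experiment is obtained in the same way, with the transition placed on $(-3/4,-1/2)$ and mirrored by evenness.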
The results are listed in Table \ref{table_heterogeneous}, and Figure \ref{figure_spaces_heterogeneous} represents the phase-space points included in the set $\Lambda$ for the case $k=200$. \input{tables/table_heterogeneous} We provide the same figures as in the previous experiment, and arrive at similar observations. First, Figure \ref{figure_convergence_heterogeneous} illustrates the convergence of the method for fixed values of $k$ as the number of degrees of freedom is increased. Second, we compare the values of $\delta$ and $N_{\rm dofs}$ required to achieve an accuracy of about $10^{-4}$ (last column of Table \ref{table_heterogeneous}) for different frequencies. In this example too, the expected rates are numerically observed. \input{figures/scattering_heterogeneous} \begin{figure} \begin{minipage}{.48\linewidth} \input{figures/spaces/heter_sample0.5_k200.tex} \end{minipage} \begin{minipage}{.48\linewidth} \input{figures/spaces/heter_sample1_k200.tex} \end{minipage} \begin{minipage}{.48\linewidth} \input{figures/spaces/heter_sample2_k200.tex} \end{minipage} \begin{minipage}{.48\linewidth} \caption{Representation of $\Lambda$ for $k=200$ in the heterogeneous example. Top-left: $\delta = 0.5$. Top-right: $\delta = 1$. Bottom-left: $\delta = 2$. The horizontal and vertical distances between dots are $\sqrt{\pi/k}$.} \label{figure_spaces_heterogeneous} \end{minipage} \end{figure} \section{Conclusion} We propose a new family of finite-dimensional spaces to approximate the solutions to high-frequency Helmholtz problems. These discretization spaces are spanned by Gaussian coherent states, which have the key property of being micro-localised in phase space. This unique feature allows us to carefully select which Gaussian coherent states are included in the discretization, leading to a frequency-aware discretization space specifically tailored to approximate the solutions to scattering problems efficiently. Our key findings correspond to two types of approximability results.
First, assuming for simplicity that the problem is non-trapping, for general $L^2$ right-hand sides, we show that the Gaussian state approximation converges and provides a uniform error for all frequencies if the number of degrees of freedom grows as $(kR)^d$. This result is similar to approximation results available for finite element discretizations. Our second result applies when the right-hand side corresponds to a plane-wave scattering problem. In this case, we show that it is sufficient for the number of degrees of freedom to grow only as $(kR)^{d-1/2}$ to achieve a constant accuracy for increasing frequencies. To the best of the authors' knowledge, this last estimate suggests that the proposed discretization space requires substantially fewer degrees of freedom than any available method in the literature for high-frequency scattering problems in general smooth heterogeneous media. We also present a set of numerical examples where our Gaussian state spaces are coupled with a least-squares variational formulation. Although the setting is elementary, these examples successfully illustrate the key features of our abstract analysis. While we believe that the proposed results are very encouraging for the further development of the proposed method, there still remain several challenges that we would like to address in future works. First, we have chosen to focus on a least-squares method, because it is simpler to analyse than a Galerkin formulation. However, least-squares methods are typically poorly conditioned compared to their Galerkin counterparts. While there is no reason to believe that the proposed discrete space would not work with a Galerkin variational formulation, the analysis appears to be substantially more complex. Second, especially for three-dimensional problems, computing the entries of the matrices associated with Gaussian coherent states is not a trivial task, and specific algorithms should be employed.
Third, although the entries of the discrete matrices decrease super-algebraically away from the diagonal, these matrices are still dense in principle. While we are convinced that efficient truncation or compression methods can be employed, this remains to be analysed. Finally, the resulting linear systems (after possible truncation and/or compression) will likely have different properties from those of usual volume methods. As a result, the design of specific preconditioners will probably be required.
\section{Introduction} Autonomous elevator operation is a promising solution for mobile robot navigation in office buildings. The system consists of three parts: button recognition, motion planning, and robot control. Among them, button recognition is the most basic but most challenging part. Its performance directly determines the success rate and robustness of the entire autonomous elevator operating system. Traditional button recognition algorithms tend to place markers on the elevator button panel in advance, so that the position of each button relative to the markers can be acquired by calculating the geometric relationship between them. Unfortunately, these hand-engineered algorithms are inconvenient and fail if the elevator button panel cannot be marked beforehand. To overcome this limitation, in the past few years, researchers have proposed a number of deep learning-based button recognition algorithms \cite{zhu2021ocr,dong2017autonomous,liu2017recognizing}, which can output button recognition results directly from raw elevator button images. However, the recognition accuracy of these deep learning-based algorithms is not satisfactory due to various image conditions and distortions. There exist many kinds of button shapes, button sizes, elevator panel designs, and lighting conditions. Meanwhile, various perspective distortions and unexpected blurs make it even more challenging to recognize buttons accurately. In this article, we propose a novel deep learning-based approach to autonomously correct perspective distortions of elevator button images based on button corner detection results.
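The perspective correction described above ultimately amounts to estimating a planar homography between detected button corners and their reference positions, and warping the image with its inverse. The sketch below is our own illustration in plain NumPy (the function names are ours; a production pipeline would typically rely on OpenCV's \texttt{findHomography} and \texttt{warpPerspective}), estimating such a homography with the standard direct linear transform (DLT):

```python
import numpy as np

def homography_dlt(src, dst):
    """3x3 homography H with dst ~ H @ (x, y, 1), estimated by DLT
    from at least four point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence yields two linear equations in the
        # nine entries of H (flattened row-major)
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 3)  # null vector = H up to scale

def apply_homography(H, pt):
    """Map a 2-D point through H using homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Given the four corners of a distorted button and their canonical (fronto-parallel) positions, warping with the inverse of the recovered $H$ yields the corrected image.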
\begin{figure} \flushleft \subfigure[original image]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[height=3.7cm,width=3.7cm]{./imgs/figure5d.png} \end{minipage} } \subfigure[corrected image]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[height=3.7cm,width=3.7cm]{./imgs/figure5i.png} \end{minipage} } \centering \caption{A comparison between the original image (left) and the corrected image (right). The red crosses represent the corners of every elevator button.} \label{fig1} \end{figure} The proposed approach consists of two parts. The first part is a button corner detection algorithm. We train an image segmentation model to perform feature extraction of raw elevator button images and obtain button segmentation results. Then four lines of every button are identified by using the Hough Transform method \cite{duda1972use}. Pixel coordinates and the order of all button corners can be obtained as they are the intersections of identified lines. The second part is a pose estimation algorithm. It takes the hypothetical button corners with standard pixel coordinates as reference points and calculates the camera motions to align corners of raw elevator button images and the reference points. Then by using an inverse transformation, new elevator button images without perspective distortions can be generated. The contributions of this work are summarized as follows: \begin{itemize} \item We derive detection results of button corners by utilizing an image segmentation model and the Hough Transform method. \item We propose a novel algorithm that can autonomously remove perspective distortions of elevator button images based on the detection results of button corners. \end{itemize} The remainder of this article is organized as follows. The previous work on elevator button recognition and existing distortion removal methods are reviewed in Sec. II. Sec. III and Sec. 
IV outline the proposed autonomous perspective distortion removal approach, while the experimental results are presented and discussed in Sec. V. Finally, we draw some conclusions and discuss future work in Sec. VI. \section{Related Work} \subsection{Elevator button recognition} Before deep learning techniques were widely used in the research area of object recognition for robotics, researchers tended to develop hand-engineered approaches based on traditional image processing techniques to recognize elevator buttons. For instance, Klingbeil \textit{et al.} \cite{klingbeil2010autonomous} designed a pipeline to realize the function of button detection and character recognition, using a grid fitting method to regress button locations based on a sliding window-based object detector. This method achieved an accuracy of 86.2\% on a test set containing 50 images. However, images in the test set were assumed to be in good lighting conditions without any perspective distortion. As a result, the pipeline designed by \cite{klingbeil2010autonomous} cannot be used in natural scenes. Zakaria \textit{et al.} \cite{zakaria2014elevator} developed a framework for vision-based external elevator button recognition and localization based on the Sobel edge detection operator and the Wiener filter. In \cite{kim2011robust}, template matching combined with a homography-based transform was used for vision-based button recognition for a robot arm manipulating the elevator. However, the approaches in \cite{zakaria2014elevator,kim2011robust} were not robust to noise or environmental variability due to the limited capacity of traditional image processing methods. Various deep learning-based methods have recently been applied to elevator button recognition with the revolution of computational technologies. The accuracy of elevator button recognition can be significantly improved with the discrimination capabilities of deep neural networks. 
For instance, in \cite{islam2017elevator}, the recognition task was formalized as a classification problem, and a hybrid button classification system was proposed, which combined histogram of oriented gradients (HOG), bag-of-words (BoW), and artificial neural networks (ANN). Experimental results in \cite{islam2017elevator} showed that the ANN substantially improved button classification performance. Dong \textit{et al.} \cite{dong2017autonomous} proposed a button recognition system based on convolutional neural networks (CNN), which can achieve high recognition accuracy for known elevator button panels. In \cite{liu2017recognizing}, elevator button recognition was regarded as a multi-object detection problem, and a single-shot multi-box detector (SSD) was used as the detection network. Zhu \textit{et al.} \cite{zhu2021ocr} proposed a novel algorithm for elevator button recognition, called OCR-RCNN, which integrated a character recognition branch into Faster-RCNN and turned the multi-object detection problem into a binary button detection task and a character recognition task. Inspired by \cite{liu2021large}, in this article, we design a semantic segmentation model based on the Deeplabv3+ model \cite{chen2018encoder} and combine it with the Hough Transform method to obtain button segmentation and button corner detection results. \subsection{Removal of perspective distortions} In contrast to the vast literature on elevator button recognition algorithms based on traditional image processing methods or deep learning models, only a handful of publications have studied the removal of perspective distortions for elevator button images. Researchers have proposed perspective distortion removal algorithms for document images \cite{takezawa2016camera,liu2015restoring,shafii2015skew}, electroluminescent images \cite{mantel2018correcting}, lithographic watermarked authentication images \cite{xie2014geometric}, and so on. 
Zhu \textit{et al.} \cite{zhu2019autonomous} proposed a novel perspective distortion removal algorithm that leveraged the Gaussian Mixture Model (GMM) and the EM framework. The algorithm in \cite{zhu2019autonomous} took as input the outcomes of a button center recognizer and finally generated the corrected images. However, the algorithm in \cite{zhu2019autonomous} can only handle internal panel images that contain a number of buttons and may easily fail for external elevator button images with few button samples. In this article, we further use button corners as feature points to realize autonomous perspective distortion removal for elevator button images. The experimental results demonstrate that our proposed approach can handle external elevator panels well. \section{Button Corner Detection} \subsection{Button Segmentation} \begin{figure}[htpb] \flushleft \includegraphics[height=5.3286cm,width=8.3536cm]{./imgs/fig2.png} \caption{The proposed image segmentation model. The input is a raw elevator button image while the output is the button segmentation result.} \label{fig2} \end{figure} In this article, we design an image segmentation model based on the Deeplabv3+ model to obtain segmentation results of pixels belonging to elevator buttons. The Deeplabv3+ model is a state-of-the-art semantic segmentation model that combines an encoder-decoder structure with a spatial pyramid pooling module. The encoder-decoder structure helps extract sharp object boundaries, and the spatial pyramid pooling module helps capture rich contextual information. The architecture of the proposed image segmentation model is shown in Fig.~\ref{fig2}. The input is a raw elevator button image, and the output is a gray-scale image with button segmentation results. 
In the encoding stage, unlike the original Deeplabv3+ model, we utilize MobileNetv2 \cite{sandler2018mobilenetv2}, a backbone built on depthwise separable convolutions, to extract low-level and high-level features. Then several atrous convolutions \cite{chen2017rethinking} with different rates are applied to capture rich semantic information from the high-level features. In the decoding stage, the low-level features are first concatenated with the output of the encoder. Then, a $3\times 3$ convolution module is used to further fuse the extracted features. Finally, bilinear interpolation is used to obtain segmentation predictions of the same size as the input image. The value of every pixel represents the category it belongs to. For instance, when this button segmentation model is applied to the distorted image in Fig.~\ref{fig1}(a), there are four categories: `up', `down', `keyhole', and `non-button'. \subsection{Corner coordinates detection} After obtaining the button segmentation results of raw elevator button images, we first use dilation and erosion operations to reduce image noise and smooth the edges of buttons, which improves the performance of line detection. The process of dilation followed by erosion is called a closing operation, which connects neighboring objects and smooths their boundaries without significantly changing their area. Then the Hough Transform method is applied to detect the four boundary lines of each button. The Hough Transform is one of the primary methods for detecting geometric shapes in computer vision, image analysis, and digital image processing. It maps each line or curve in image space to a point in a parameter space, where the pixels lying on a common line accumulate into a peak. Finally, after obtaining the detected boundary lines of a button, we can derive the pixel coordinates of the button corners, as they are the intersections of the detected lines. 
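The geometric part of the corner-detection step can be sketched in NumPy. This is an illustrative sketch only: the line detection itself would come from a Hough Transform implementation such as OpenCV's \texttt{HoughLines}, which returns lines as $(\rho, \theta)$ pairs; \texttt{intersect} then solves for the crossing point of two such lines, and \texttt{order\_corners} imposes a fixed corner order (top-left first is an assumed convention here; the paper defines its own order in Fig.~\ref{fig3}).

```python
import numpy as np

def intersect(l1, l2):
    """Intersection of two Hough lines given as (rho, theta), where each
    line satisfies x*cos(theta) + y*sin(theta) = rho.
    Returns None for (near-)parallel lines."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-9:   # near-parallel: no stable intersection
        return None
    x, y = np.linalg.solve(A, np.array([r1, r2]))
    return (float(x), float(y))

def order_corners(pts):
    """Order four corner points as top-left, top-right, bottom-right,
    bottom-left (image coordinates, y axis pointing down)."""
    pts = sorted(pts, key=lambda p: (p[1], p[0]))   # two topmost points first
    top = sorted(pts[:2], key=lambda p: p[0])
    bot = sorted(pts[2:], key=lambda p: p[0])
    return [top[0], top[1], bot[1], bot[0]]
```

With the four boundary lines of a button detected, the six pairwise crossings reduce to four valid corners after discarding the (near-)parallel pairs of opposite edges.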
And the order of corners of every button is defined in advance to facilitate the perspective distortion removal algorithm. The order is shown in Fig. \ref{fig3}. \begin{figure}[htpb] \flushleft \includegraphics[height=3.25cm,width=9cm]{./imgs/fig3.png} \caption{The order of corners on a button.} \label{fig3} \end{figure} \section{Perspective Distortion Removal} To begin with, we first define the notations which will be frequently used in this paper. Throughout this work, matrices are written as boldface uppercase letters, and vectors are written as boldface lower letters. The following notations are used: \begin{itemize} \item ${\textbf{C = [}}{{\textbf{c}}_1}{\rm{, \cdot \cdot \cdot , }}{{\textbf{c}}_N}{\rm{] }} \in {R^{2 \times N}}$ - the detected button corners in the image plane; \item $\mathop {{\textbf{C }}}\limits^ \wedge {\rm{ = [}}{\mathop {\textbf{c}}\limits^ \wedge _1}{\rm{, \cdot \cdot \cdot , }}{\mathop {\textbf{c}}\limits^ \wedge _N}{\rm{] }} \in {R^{3 \times N}}$ - the detected button corners in the normalized image plane; \item ${\textbf{U = [}}{{\textbf{u}}_1}{\rm{, \cdot \cdot \cdot , }}{{\textbf{u}}_N}{\rm{] }} \in {R^{2 \times N}}$ - the presupposed standard button corners without distortion in the image plane; \item $\mathop {\textbf{U}}\limits^ \wedge {\rm{ = [}}{\mathop {\textbf{u}}\limits^ \wedge _1}{\rm{, \cdot \cdot \cdot , }}{\mathop {\textbf{u}}\limits^ \wedge _N}{\rm{] }} \in {R^{3 \times N}}$ - the presupposed standard button corners without distortion in the normalized image plane; \item ${\textbf{G = [}}{{\textbf{g}}_1}{\rm{, \cdot \cdot \cdot , }}{{\textbf{g}}_N}{\rm{] }} \in {R^{2 \times N}}$ - the rectified button corners in the image plane; \item $\mathop {{\textbf{G }}}\limits^ \wedge {\rm{ = [}}{\mathop {\textbf{g}}\limits^ \wedge _1}{\rm{, \cdot \cdot \cdot , }}{\mathop {\textbf{g}}\limits^ \wedge _N}{\rm{] }} \in {R^{3 \times N}}$ - the rectified button corners in the normalized image plane; \item ${{\rm{M}}_{{\mathop{\rm 
int}} }}\, = \,\left[ \begin{array}{l} {\rm{F/}}{{\rm{s}}_x}\quad {\rm{0}}\quad \,{{\rm{o}}_x}\\ \;{\rm{0}}\quad \,\,{\rm{F/}}{{\rm{s}}_y}\;\,{{\rm{o}}_y}\\ \;{\rm{0}}\quad \,\;\;\;{\rm{0}}\quad \,{\rm{1}} \end{array} \right]$ - the intrinsic parameter of the camera; \item ${\rm{F}}$ - focal length in the meter for fixed focal length, non-zoomed camera; \item ${{\rm{o}}_x}{\rm{,}}\,{{\rm{o}}_y}$ - image center in pixel; \item ${{\rm{s}}_x}{\rm{,}}\,\,{{\rm{s}}_y}$ - pixel width and height in meter; \item ${\textbf{D = [}}{{\textbf{d}}_1}{\rm{, \cdot \cdot \cdot , }}{{\textbf{d}}_N}{\rm{] }} \in {R^{3 \times N}}$ - the spatial coordinates of detected button corners; \item ${\textbf{E = [}}{{\textbf{e}}_1}{\rm{, \cdot \cdot \cdot , }}{{\textbf{e}}_N}{\rm{] }} \in {R^{3 \times N}}$ - the spatial coordinates of standard button corners; \item ${\textbf{M = [}}{{\textbf{m}}_1}{\rm{, \cdot \cdot \cdot , }}{{\textbf{m}}_N}{\rm{] }} \in {R^{3 \times N}}$ - new spatial coordinates of detected button corners after rotation operation; \item ${\textbf{P = [}}{{\textbf{p}}_1}{\rm{, \cdot \cdot \cdot , }}{{\textbf{p}}_N}{\rm{] }} \in {R^{3 \times N}}$ - new spatial coordinates of detected button corners with depth equal to 1 after rotation and translation operation; \item ${\textbf{R}}\,({\rm{\theta }})$ - the matrix representation of angle-axis parameterized rotation ${\rm{\theta}}$; \item ${\textbf{T}}$ - the matrix representation of translation, between detected button corners and standard button corners; \item ${\rm{b}}$ - the number of buttons on image; \item ${{\textbf{K}}_H}{\rm{ = [}}{{\textbf{k}}_{h1}}{\rm{, \cdot \cdot \cdot , }}{{\textbf{k}}_{hN}}{\rm{] }} \in {R^{1 \times N}}$ - slopes of the horizontal line of every button in space coordinate; \item ${{\textbf{K}}_V}{\rm{ = [}}{{\textbf{k}}_{v1}}{\rm{, \cdot \cdot \cdot , }}{{\textbf{k}}_{vN}}{\rm{] }} \in {R^{1 \times N}}$ - slopes of the vertical line of every button in space coordinate; \item ${\textbf{Cos = 
[Co}}{{\textbf{s}}_1}{\rm{, \cdot \cdot \cdot , }}{{\textbf{Cos}}_N}{\rm{] }} \in {R^{1 \times N}}$ - cosine values of the angles between horizontal and vertical lines of every button in space coordinate. \end{itemize} The detail of the proposed perspective distortion removal algorithm is shown in Alg. \ref{pdr}. \begin{algorithm}[t] \DontPrintSemicolon \SetKwRepeat{Do}{do}{while} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$\textbf{C}, \textbf{U}, Distorted \; Image$} \Output{$Rectified \; Image$} $\mathop {{\textbf{C}}}\limits^ \wedge = Norm(\textbf{C}),$ $\mathop {{\textbf{U}}}\limits^ \wedge = Norm(\textbf{U});$ \\ ${\textbf{D}} = \,M_{{\mathop{\rm int}} }^{ - 1}\,{\rm{*}}\,\mathop {\textbf{C}}\limits^ \wedge,$ ${\textbf{E}} = \,M_{{\mathop{\rm int}} }^{ - 1}\,{\rm{*}}\,\mathop {\textbf{U}}\limits^ \wedge;$\\ $Best \; CR = +\infty,$ ${\theta_x}^{'}={\theta_y}^{'}={\theta_z}^{'}=None;$\\ \For{${\theta_x} \; in \; range(\alpha,\beta,\gamma)$}{ \For{${\theta_y} \; in \; range(\alpha,\beta,\gamma)$}{ \For{${\theta_z} \; in \; range(\alpha,\beta,\gamma)$}{ ${\rm{\textbf{R}(\theta }}) = \;{{\textbf{R}}_x}\,{\rm{*}}\,{{\textbf{R}}_y}\,{\rm{*}}\;{{\textbf{R}}_z};$\\ ${\textbf{M}} = \;{\textbf{R}}({\rm{\theta }})\;*\;{\textbf{D}};$\\ ${\textbf{P}}\; = \;{\textbf{M}}\; + \;{\textbf{T}},$ ${\textbf{P}}\; = \;{\textbf{P}}\,{\textbf{/P}}\,{\rm{[3]}};$\\ \If{$Final \; CR < Best \; CR$}{ $Best \; CR = Final \; CR$\\ ${\theta_x}^{'}={\theta_x},{\theta_y}^{'}={\theta_y},{\theta_z}^{'}={\theta_z};$\\ ${{\textbf{R}}_x}^{'} \leftarrow {\theta_x}^{'}, {{\textbf{R}}_y}^{'} \leftarrow {\theta_y}^{'}, {{\textbf{R}}_z}^{'} \leftarrow {\theta_z}^{'};$\\ ${\rm{\textbf{R}(\theta }})^{'} = \;{{\textbf{R}}_x}^{'}\,{\rm{*}}\,{{\textbf{R}}_y}^{'}\,{\rm{*}}\;{{\textbf{R}}_z}^{'};$\ } } } } ${\textbf{M}}^{'} = \;{\textbf{R}}({\rm{\theta }})^{'}\;*\;{\textbf{D}};$\\ ${\textbf{P}}^{'}\; = \;{\textbf{M}}^{'}\; + \;{\textbf{T}}^{'},$ ${\textbf{P}}^{'}\; = 
\;{\textbf{P}}^{'}\,{\textbf{/P}}^{'}\,{\rm{[3]}};$\\ $\mathop {\textbf{G}}\limits^ \wedge = {{\rm{M}}_{{\mathop{\rm int}} }}*{\textbf{P}}^{'};$ $\text{Return} \;\mathop {\textbf{G}}\limits^ \wedge;$\\ \caption{Perspective Distortion Removal} \label{pdr} \end{algorithm} The first step is to establish a presupposed elevator button image, in which the pixel coordinates of the button corners \textbf{U} are standard and free of perspective distortion. Two types of the presupposed elevator button images are shown in Fig. 4. \begin{figure}[h] \centering \includegraphics[height=4.62cm,width=6.42cm]{./imgs/fig4.png} \caption{Two types of the presupposed elevator button images without perspective distortion.} \end{figure} The second step is back projection. $\mathop {\textbf{C}}\limits^ \wedge $ and $\mathop {\textbf{U}}\limits^ \wedge \in {R^{{\rm{3}} \times N}}$ are obtained by adding a third row $\left[ {1 \cdot \cdot \cdot \;1} \right]$ to \textbf{C} and $\textbf{U} \in {R^{2 \times N}}$. Then the inverse of the intrinsic camera matrix is used to obtain the spatial coordinates of the button corners: \begin{equation} {\textbf{D}} = \,M_{{\mathop{\rm int}} }^{ - 1}\,{\rm{*}}\,\mathop {\textbf{C}}\limits^ \wedge. \label{eq} \end{equation} In this algorithm, we assume that for the standard button corners without perspective distortions, the slopes of the horizontal lines are zero, the slopes of the vertical lines are infinite, and the cosine values of the angles between the horizontal and vertical lines are zero. Thus for \textbf{E}, we have: \begin{equation} {{\textbf{K}}_{\rm{H}}}\, = \,1/\,{{\textbf{K}}_V}{\textbf{ = Cos }} = {\rm{ [}}0{\rm{, \cdot \cdot \cdot , }}0{\rm{] }} \in {R^{1 \times N}}. \label{eq} \end{equation} The third step is to compute the rotation and translation matrices that form new spatial coordinates of the detected button corners. 
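The back-projection step above is a single matrix operation. A minimal NumPy sketch, with the intrinsic matrix values borrowed from the experiments section:

```python
import numpy as np

def back_project(C, M_int):
    """Append a homogeneous row of ones to the 2 x N pixel corners C and
    back-project with the inverse intrinsic matrix: D = M_int^{-1} @ C_hat."""
    C_hat = np.vstack([C, np.ones((1, C.shape[1]))])
    return np.linalg.inv(M_int) @ C_hat        # 3 x N spatial coordinates

M_int = np.array([[320.0,   0.0, 320.0],
                  [  0.0, 320.0, 240.0],
                  [  0.0,   0.0,   1.0]])

# The image-center pixel (320, 240) back-projects onto the optical axis.
D = back_project(np.array([[320.0], [240.0]]), M_int)
```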
Three-dimensional rotation matrices are utilized to rotate spatial coordinates of the corners, which include rotating ${{\rm{\theta }}_x}$ against the x-axis, ${{\rm{\theta }}_y}$ against the y-axis and ${{\rm{\theta }}_z}$ against the z-axis, shown as follows respectively: \begin{equation} {{\textbf{R}}_x}\, = \,\left[ \begin{array}{l} \,\,1\quad \,\;\;\;\;0\quad \,\quad \quad \;\,0\\ \;0\quad \,\,\cos ({\theta _x})\quad \;\sin ({\theta _x})\\ \;0\quad - \sin ({\theta _x})\;\;\;\;\,\cos ({\theta _x}) \end{array} \right], \label{eq} \end{equation} \begin{equation} {{\textbf{R}}_y}\, = \,\left[ \begin{array}{l} \cos ({\theta _y})\quad \,0\quad \, - \sin ({\theta _y})\\ \;\;\,\,0\quad \,\,\;\quad \;1\quad \;\quad \;0\\ \sin ({\theta _y})\quad \,\,0\;\quad \;\cos ({\theta _y}) \end{array} \right], \label{eq} \end{equation} \begin{equation} {{\textbf{R}}_z}\, = \,\left[ \begin{array}{l} \;\;\,\cos ({\theta _z})\quad \,\sin ({\theta _z})\quad \,0\\ - \sin ({\theta _z})\quad \,\cos ({\theta _z})\quad \;0\\ \;\;\quad 0\quad \,\,\quad \quad \,0\;\quad \;\quad \;1 \end{array} \right], \label{eq} \end{equation} where ${{\rm{\theta }}_x}$, ${{\rm{\theta }}_y}$, ${{\rm{\theta }}_z}$ are radian values and the relation between angle value and radian value is: \begin{equation} radian\;value\; = \;\frac{{angle\;value\;*\;\pi }}{{180}}. 
\label{eq} \end{equation} The rotation matrix is formed as ${\rm{\textbf{R}(\theta }}) = \;{{\textbf{R}}_x}\,{\rm{*}}\,{{\textbf{R}}_y}\,{\rm{*}}\;{{\textbf{R}}_z}.$ Then, the new spatial coordinates of the detected button corners are computed through the following equations: \begin{equation} {\textbf{M}} = \;{\textbf{R}}({\rm{\theta }})\;*\;{\textbf{D}}, {\textbf{P}}\; = \;{\textbf{M}}\; + \;{\textbf{T}}, {\textbf{P}}\; = \;{\textbf{P}}\,{\textbf{/P}}\,{\rm{[3]}}, \label{eq7} \end{equation} where \textbf{P[3]} represents the third row of the new spatial coordinates, and the translation matrix \textbf{T} is defined as the difference between the spatial coordinates of the first corner in the presupposed elevator button image and those in the distorted elevator button image. The fourth step is to estimate camera motions. The goal is to find the optimal rotation matrix and translation matrix so that the lines formed by the new spatial coordinates of the distorted button corners are parallel to the lines formed by the spatial coordinates of the presupposed standard button corners. In this algorithm, we set $\alpha=-40,\beta=40,\gamma=0.5$, which means that the rotation angle about each axis is sampled every 0.5 degree within the range from $-40$ to $40$ degrees to form the rotation matrix ${\textbf{R}}({\rm{\theta }})$. Three criteria are chosen to evaluate which rotation matrix is optimal. The first criterion is ${{\textbf{K}}_H}$, representing the slopes of the horizontal lines of every button in the spatial coordinate system. ${{\textbf{k}}_{hi}}$ of each button is defined as: \begin{equation} {{\textbf{k}}_{hi}} = \frac{{{y_2} - {y_1}}}{{{x_2} - {x_1}}},\label{eq} \end{equation} where ${x_i}$ and ${y_i}$ denote the $x$- and $y$-coordinates of the $i$-th corner, respectively. 
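The rotation matrices above, together with the degree-to-radian conversion, can be sketched directly in NumPy (matching the paper's sign conventions for $\mathbf{R}_x$, $\mathbf{R}_y$, $\mathbf{R}_z$):

```python
import numpy as np

def rotation_matrix(tx_deg, ty_deg, tz_deg):
    """Compose R(theta) = Rx * Ry * Rz from angles given in degrees,
    using the same matrix conventions as in the text."""
    tx, ty, tz = (a * np.pi / 180.0 for a in (tx_deg, ty_deg, tz_deg))
    Rx = np.array([[1, 0, 0],
                   [0,  np.cos(tx), np.sin(tx)],
                   [0, -np.sin(tx), np.cos(tx)]])
    Ry = np.array([[np.cos(ty), 0, -np.sin(ty)],
                   [0, 1, 0],
                   [np.sin(ty), 0,  np.cos(ty)]])
    Rz = np.array([[ np.cos(tz), np.sin(tz), 0],
                   [-np.sin(tz), np.cos(tz), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz
```

Each factor is orthonormal, so the composite $\mathbf{R}(\theta)$ is a proper rotation (determinant $+1$) for any sampled angle triple.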
Then we can obtain the two-norm result of ${{\textbf{K}}_H}$, \begin{equation} {\left\| {{{\textbf{K}}_H}} \right\|_2} = \sqrt {\sum\limits_{i = 1}^{\rm{b}} {{\textbf{k}}_{hi}^2} }.\label{eq} \end{equation} The second criterion is ${{\textbf{K}}_V}$, representing the slopes of vertical lines of every button in space coordinate. ${{\textbf{k}}_{vi}}$ of each button is defined as: \begin{equation} {{\textbf{k}}_{vi}} = \frac{{{y_4} - {y_1}}}{{{x_4} - {x_1}}},\label{eq} \end{equation} Then we can obtain the two-norm result of ${{\rm{K}}_V}$ and its reciprocal ${{\textbf{K}}_{rV}}$, \begin{equation} \begin{array}{l} {\left\| {{{\textbf{K}}_V}} \right\|_2} = \sqrt {\sum\limits_{i = 1}^{\rm{b}} {{\textbf{k}}_{vi}^2} },\\ {{\textbf{K}}_{rV}} = 1\,{\rm{/}}\,{\left\| {{{\textbf{K}}_v}} \right\|_2}. \end{array} \label{eq} \end{equation} The third criterion is \textbf{Cos}, representing the cosine values of the angles between horizontal and vertical lines of every button in space coordinate. The horizontal line vector, vertical line vector, and ${\textbf{co}}{{\textbf{s}}_i}$ of each button are shown as follows: \begin{equation} \begin{array}{l} h = ({x_2} - {x_1},{y_2} - {y_1},{z_2} - {z_1}),\\ v = ({x_4} - {x_1},{y_4} - {y_1},{z_4} - {z_1}),\\ {\textbf{co}}{{\textbf{s}}_i}\; = \frac{{h \bullet v}}{{{{\left\| h \right\|}_2} \bullet {{\left\| v \right\|}_2}}}, \end{array}\label{eq} \end{equation} \begin{figure*} \centering \subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5a.png} \end{minipage}} \subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5b.png} \end{minipage}} \subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5c.png} \end{minipage}} \subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5d.png} \end{minipage}} 
\subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5e.png} \end{minipage}} \\\subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5f.png} \end{minipage}} \subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5g.png} \end{minipage}} \subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5h.png} \end{minipage}} \subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5i.png} \end{minipage}} \subfigure[]{ \begin{minipage}[t]{0.18\linewidth} \centering \includegraphics[height=3.4cm,width=3cm]{./imgs/figure5j.png} \end{minipage}} \centering \caption{Demonstrations of perspective distortion removal results. (a)(b)(c)(d)(e) represent the original elevator button images, (f)(g)(h)(i)(j) represent the rectified elevator button images, respectively.} \label{fig:important} \end{figure*} where $({x_1},{y_1},{z_1})$ denotes the spatial coordinate of the first corner, $({x_2},{y_2},{z_2})$ the spatial coordinate of the second corner, and $({x_4},{y_4},{z_4})$ the spatial coordinate of the fourth corner. Then we can obtain the two-norm result of \textbf{Cos}, \begin{equation} {\left\| {{\textbf{Cos}}} \right\|_2} = \sqrt {\sum\limits_{i = 1}^{\rm{b}} {{\textbf{cos}}_i^{\rm{2}}} }. \label{eq13} \end{equation} \\When ${\left\| {{\textbf{Cos}}} \right\|_2}$ is smaller, we can obtain a better perpendicular result of horizontal and vertical lines of buttons. 
To combine the three criteria, they are normalized as follows: \begin{equation} {\mathop {\textbf{K}}\limits^ \wedge _H} = \frac{{{{\left\| {{{\textbf{K}}_H}} \right\|}_2} - {{\left( {{{\left\| {{{\textbf{K}}_H}} \right\|}_2}} \right)}_{\min }}}}{{{{\left( {{{\left\| {{{\textbf{K}}_H}} \right\|}_2}} \right)}_{\max }} - {{\left( {{{\left\| {{{\textbf{K}}_H}} \right\|}_2}} \right)}_{\min }}}},\label{eq} \end{equation} \begin{equation} {\mathop {\textbf{K}}\limits^ \wedge _{rV}} = \frac{{{{\left\| {{{\textbf{K}}_{rV}}} \right\|}_2} - {{\left( {{{\left\| {{{\textbf{K}}_{rV}}} \right\|}_2}} \right)}_{\min }}}}{{{{\left( {{{\left\| {{{\textbf{K}}_{rV}}} \right\|}_2}} \right)}_{\max }} - {{\left( {{{\left\| {{{\textbf{K}}_{rV}}} \right\|}_2}} \right)}_{\min }}}},\label{eq} \end{equation} \begin{equation} \mathop {{\textbf{Cos}}}\limits^ \wedge = \frac{{{{\left\| {{\textbf{Cos}}} \right\|}_2} - {{\left( {{{\left\| {{\textbf{Cos}}} \right\|}_2}} \right)}_{\min }}}}{{{{\left( {{{\left\| {{\textbf{Cos}}} \right\|}_2}} \right)}_{\max }} - {{\left( {{{\left\| {{\textbf{Cos}}} \right\|}_2}} \right)}_{\min }}}}.\label{eq} \end{equation} Then the final criterion is shown as follows: \begin{equation} Final\;CR = {\mathop {\textbf{K}}\limits^ \wedge _H} + {\mathop {\textbf{K}}\limits^ \wedge _{rV}} + \mathop {{\textbf{Cos}}}\limits^ \wedge. \label{eq} \end{equation} When $Final CR$ is smallest, we can obtain the optimal rotation matrix and translation matrix. The fifth step is to form new rectified images. After obtaining the optimal pose (${\textbf{R}}({\rm{\theta }})^{'},{\textbf{T}}^{'}$), each pixel of the distorted elevator button image can be transformed to have new spatial coordinates through Eq. (\ref{eq7}). 
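The evaluation of a candidate rotation can be sketched as follows. This illustrative version sums the three raw criteria directly ($\|\mathbf{K}_H\|_2$, the reciprocal vertical slopes, and $\|\mathbf{Cos}\|_2$) rather than min-max normalizing each term over the search grid as the paper does, and it parameterizes vertical lines by their reciprocal slope to avoid infinities. The corner layout follows the paper's convention that corners 1, 2, and 4 of each button span its horizontal and vertical edges.

```python
import numpy as np

def final_criterion(P):
    """Raw (un-normalized) sketch of Final CR for corner coordinates
    P (3 x 4b), with 4 consecutive columns per button in the paper's
    corner order. Zero for perfectly axis-aligned, right-angled buttons."""
    b = P.shape[1] // 4
    kh, rv, cs = [], [], []
    for i in range(b):
        p1, p2, p4 = P[:, 4 * i], P[:, 4 * i + 1], P[:, 4 * i + 3]
        kh.append((p2[1] - p1[1]) / (p2[0] - p1[0]))   # horizontal slope
        rv.append((p4[0] - p1[0]) / (p4[1] - p1[1]))   # reciprocal vertical slope
        h, v = p2 - p1, p4 - p1                        # edge vectors
        cs.append(h @ v / (np.linalg.norm(h) * np.linalg.norm(v)))
    return np.linalg.norm(kh) + np.linalg.norm(rv) + np.linalg.norm(cs)
```

Inside the triple loop of Alg.~\ref{pdr}, this value plays the role of $Final\;CR$: the sampled rotation minimizing it is kept as the optimal pose.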
Then the intrinsic camera matrix is used to project the points and obtain new pixel coordinates in the normalized plane, \begin{equation} \mathop {\textbf{G}}\limits^ \wedge = {{\rm{M}}_{{\mathop{\rm int}} }}*{\textbf{P}}.\label{eq} \end{equation} By taking the first and second rows, we can obtain the pixel coordinates of the rectified button corners in the image plane. Finally, a new rectified elevator button image is generated by applying an inverse image warping operation. \section{Experiments} To verify the effectiveness of the proposed approach, we collect a dataset with 15 images from 3 different elevators, captured from different viewing angles and containing severe perspective distortions. The intrinsic camera matrix is: \begin{equation} {{\rm{M}}_{{\mathop{\rm int}} }}^{'}\, = \,\left[ \begin{array}{ccc} 320 & 0 & 320\\ 0 & 320 & 240\\ 0 & 0 & 1 \end{array} \right] \end{equation} The value of Eq. (\ref{eq13}) is used to measure the accuracy of the proposed perspective distortion removal algorithm, which represents the two-norm of the cosine values of the angles between the horizontal and vertical lines of all buttons in the spatial coordinate system. The smaller the value of Eq. (\ref{eq13}), the better the rectification performance. The experimental results for the 15 elevator button images are shown in Table \ref{table}. Some demonstrations of the corresponding original and rectified images are presented in Fig. \ref{fig:important}. From Fig. \ref{fig:important} and Table \ref{table}, we can see that our proposed approach is capable of removing perspective distortions of elevator button images autonomously with high accuracy. \begin{table}[htbp] \centering \caption{Accuracy of distortion removal} \begin{tabular}{ccccccc} \toprule No. & I-10 & I-20 & I-30 & I-40 & I-50 & Average \\ 1 & 0.036 & 0.042 & 0.050 & 0.007 & 0.024 & 0.032 \\ \midrule No. 
& I-160 & I-170 & I-180 & I-190 & I-200 & Average \\ 2 & 0.003 & 0.026 & 0.003 & 0.004 & 0.016 & 0.010 \\ \midrule No. & I-850 & I-860 & I-870 & I-880 & I-890 & Average \\ 3 & 0.048 & 0.070 & 0.097 & 0.100 & 0.074 & 0.078 \\ \bottomrule \end{tabular}% \label{table}% \end{table}% \section{Conclusion} This article proposes a novel deep learning-based approach that can autonomously remove perspective distortions of elevator button images. We utilize an image segmentation model and the Hough Transform method to obtain the detection results of button corners. A novel algorithm is designed to correct the perspective distortions of original elevator button images. Currently, the presented algorithm can only handle elevator button images that contain rectangular buttons. For elevator button images that contain circular buttons, the presented algorithm will fail, since a circle has no straight edges whose slopes can be evaluated. In the next step, we will use the Hough Transform method to calculate the center coordinates and radii of the circular buttons and develop a novel algorithm that can autonomously remove perspective distortions of images containing circular elevator buttons. \bibliographystyle{IEEEtran}
\section{Introduction} Current observations reveal the existence of galaxies out to redshifts as high as $z\sim 6.7$ (Chen et al.\ 1999; Weymann et al.\ 1998; Dey et al.\ 1998; Spinrad et al.\ 1998; Hu, Cowie, \& McMahon 1998) and bright quasars out to $z\sim 5$ (Fan et al.\ 1999). Based on sources for which high resolution spectra are available, the intergalactic medium appears to be predominantly ionized at this epoch, implying the existence of ionizing sources at even higher redshifts (Madau 1999; Madau, Haardt, \& Rees 1999; Haiman \& Loeb 1998a,b; Gnedin \& Ostriker 1997). Hierarchical Cold dark matter (CDM) models for structure formation predict that the first baryonic objects appeared near the Jeans mass ($\sim 10^5~{\rm M_\odot}$) at redshifts as high as $z\sim30$ (Haiman \& Loeb 1998b, and references therein). The Next Generation Space Telescope ({\it NGST}\,), planned for launch in 2008, is expected to reach an imaging sensitivity better than 1 nJy in the infrared, which will allow it to detect galaxies or mini-quasars at $z\ga 10$. In this paper we explore the ability of {\it NGST}\, to extend gravitational lensing studies well beyond their current limits. Due to the increased path length along the line-of-sight to the most distant sources, their probability for being lensed is expected to be the highest among all possible sources. Sources at $z>10$ will often be lensed by $z>2$ galaxies, whose masses can then be determined with lens modeling. Similarly, the shape distortions (or weak lensing) caused by foreground clusters of galaxies will be used to determine the mass distributions of less massive and higher redshift clusters than currently feasible. In addition to studying the lensing objects, observers will exploit the magnification of the sources to resolve and study more distant galaxies than would otherwise be possible. 
The probability for strong gravitational lensing depends on the abundance of lenses, their mass profiles, and the angular diameter distances among the source, the lens and the observer. The statistics of existing lens surveys have been used at low redshifts to constrain the cosmological constant (for the most detailed work see Kochanek 1996a, and references therein), although substantial uncertainties remain regarding the luminosity function of early-type galaxies and their dark matter content (Cheng \& Krauss 1999; Chiba \& Yoshii 1999). The properties of dark matter halos will be better probed in the future by individual as well as statistical studies of the large samples of lenses expected from quasar surveys such as the 2-Degree Field (Croom et al.\ 1998) and the Sloan Digital Sky Survey (SDSS Collaboration 1996). Given the early stage of observations of the redshift evolution of galaxies and their dark halos, we adopt a theoretical approach in our analysis and use the abundance of dark matter halos as predicted by the Press-Schechter (1974, hereafter PS) model. A similar approach has been used previously for calculating lensing statistics at low redshifts, with an emphasis on lenses with image separations above $5\arcsec$ (Narayan \& White 1988; Kochanek 1995; Nakamura \& Suto 1997) or on lensing rates of supernovae (Porciani \& Madau 1999). Even when multiple images are not produced, the shape distortions caused by weak lensing can be used to determine the lensing mass distribution. Large numbers of sources are required in order to average away the noise due to the intrinsic ellipticities of sources, and so the mass distribution can only be determined for the extended halos of rich clusters of galaxies (e.g., Hoekstra et al.\ 1998; Luppino \& Kaiser 1997; Seitz et al.\ 1996) or statistically for galaxies (e.g., Brainerd et al.\ 1996; Hudson et al.\ 1998). 
Schneider \& Kneib (1998) have noted that the ability of {\it NGST}\, to take deeper exposures than is possible with current instruments will increase the observed density of sources on the sky, particularly of those at high redshifts. The large increase might allow such applications as a detailed weak lensing mapping of substructure in clusters. Obviously, the source galaxies must be well resolved to allow an accurate shape measurement. Unfortunately, the characteristic galaxy size is expected to decrease with redshift for two reasons: (i) the mean density of collapsed objects scales as the density of the Universe at the collapse redshift, namely as $(1+z)^3$. Hence, objects of a given mass are expected to be more compact at high redshifts, and (ii) the characteristic mass of collapsed objects decreases with increasing redshift in the bottom-up CDM models of structure formation. In the following, we attempt to calculate the size distribution of high redshift sources. Aside from the obvious implications for weak lensing studies, the finite size of sources also has important implications for their detectability with {\it NGST}\, above the background noise of the sky brightness. The outline of the paper is as follows. In \S 2 we employ the PS halo abundance in several hierarchical models of structure formation to estimate the lensing rate of the high redshift objects that will be observed with {\it NGST}.\, This lensing rate has been calculated by Marri \& Ferrara (1998) assuming point mass lenses. We use the simple but more realistic model of a singular isothermal sphere (SIS) profile for dark matter halos and obtain a substantially lower lensing rate. The formation of galactic disks and the distributions of their various properties have been studied by Dalcanton, Spergel, \& Summers (1997) and Mo, Mao, \& White (1998) in the framework of hierarchical models of structure formation. 
In \S 3 we apply their models to high redshift sources, and find the angular size distribution of galactic disks as a function of redshift. We use this distribution to predict whether observations with {\it NGST}\, will be significantly limited by confusion noise. We also calculate the redshift evolution of the mean surface brightness of disks. Finally, \S 4 summarizes the implications of our results. \section{Lensing Rate of High-Redshift Sources} \subsection{Calculation Method} \label{Vc} We calculate the abundance of lenses based on the PS halo mass function. Relevant expressions for various CDM cosmologies are given, e.g., in Navarro, Frenk, \& White (1997, hereafter NFW). The PS abundance agrees with N-body simulations on the mass scale of galaxy clusters, but may over-predict the abundance of galaxy halos at present by a factor of 1.5--2 (e.g., Gross et al.\ 1998). At higher redshifts, the characteristic mass scale of collapsed objects drops and the PS abundance becomes more accurate for the galaxy-size halos which dominate the lensing rate. The probability for producing multiple images of a source at a redshift $z_S$ due to gravitational lensing by SIS lenses is obtained by integrating over lens redshift $z_L$ the differential optical depth (Turner, Ostriker, \& Gott 1984; Fukugita et al.\ 1992; Peebles 1993) \begin{equation} d\tau=16 \pi^3 n \left(\frac{\sigma}{c}\right)^4 (1+z_L)^3 \left(\frac{D_{OL} D_{LS}}{D_{OS}}\right)^2 \frac{c dt}{dz_L} d z_L\ , \label{dtau} \end{equation} in terms of the comoving density of lenses $n$, velocity dispersion $\sigma$, look-back time $t$, and angular diameter distances $D$ among the observer, lens and source. More generally we replace $n \sigma^4$ by \begin{equation} \label{nsig} \langle n \sigma^4\rangle = \int \frac{dn(M,z_L)}{dM} \sigma^4(M,z_L)\, dM\ , \end{equation} where $dn/dM$ is the PS halo mass function. 
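As a check on the behavior of eqs.\ (\ref{dtau}) and (\ref{nsig}), the following sketch integrates eq.\ (\ref{dtau}) numerically under deliberately simplified assumptions: an Einstein-de Sitter universe (for which the angular diameter distances are analytic), a constant comoving lens density $n$, and a single velocity dispersion $\sigma$, rather than the full Press-Schechter average of eq.\ (\ref{nsig}). The adopted values of $n$ and $\sigma$ are illustrative only.

```python
import numpy as np

# Numerical sketch of the SIS lensing optical depth, eq. (1).  For
# illustration only: an Einstein-de Sitter universe (analytic distances),
# a constant comoving lens density n, and a single velocity dispersion
# sigma, instead of the Press-Schechter average of eq. (2).

c = 2.998e5           # speed of light [km/s]
H0 = 50.0             # Hubble constant [km/s/Mpc] (h = 0.5, as in SCDM)
DH = c / H0           # Hubble distance [Mpc]

def comoving_distance(z):
    # EdS: D_C = 2 (c/H0) [1 - (1+z)^(-1/2)]
    return 2.0 * DH * (1.0 - (1.0 + z) ** -0.5)

def D_ang(z1, z2):
    # Angular diameter distance between z1 < z2 in a flat universe
    return (comoving_distance(z2) - comoving_distance(z1)) / (1.0 + z2)

def tau_SIS(z_S, n=1e-2, sigma=200.0, nz=4000):
    """Optical depth of eq. (1); n in Mpc^-3, sigma in km/s (illustrative)."""
    zL = np.linspace(1e-4, z_S - 1e-4, nz)
    D_OL = D_ang(0.0, zL)
    D_LS = D_ang(zL, z_S)
    D_OS = D_ang(0.0, z_S)
    c_dt_dz = DH * (1.0 + zL) ** -2.5      # EdS look-back time element
    integrand = (16.0 * np.pi ** 3 * n * (sigma / c) ** 4
                 * (1.0 + zL) ** 3 * (D_OL * D_LS / D_OS) ** 2 * c_dt_dz)
    # trapezoidal rule (version-safe; avoids the np.trapz/np.trapezoid rename)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zL)))
```

As expected, the optical depth rises steeply with source redshift, the behavior shown by the curves of Figure \ref{figtau}.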
We assume that $\sigma(M,z)=V_c(M,z)/\sqrt{2}$\,, and we calculate the circular velocity $V_c(M,z)$ corresponding to a halo of a given mass as in NFW, except that we vary the virialization overdensity using the fitting formula of Bryan \& Norman (1998). The lensing rate depends on a combination of redshift factors, as well as the evolution of halo abundance. At higher redshifts, halos of a given mass are more concentrated and have a higher $\sigma$, but lower-mass halos contain most of the mass in the Universe. When calculating the angular diameter distances we assume the standard distance formulas in a homogeneous universe. Inhomogeneities, however, cause a dispersion around the mean distance. The non-Gaussian, skewed distribution of distances in hierarchical models is best studied with numerical simulations (e.g., Wambsganss et al.\ 1998), and can in principle be included self-consistently in more elaborate calculations of the lensing statistics. We consider cosmological models with various values of the cosmological density parameters of matter and vacuum (cosmological constant), $\Omega_0$ and $\Omega_{\Lambda}$. In particular, we show results for $\Lambda$CDM (with $\Omega_0=0.3$ and $\Omega_{\Lambda}=0.7$), OCDM (with $\Omega_0=0.3$ and $\Omega_{\Lambda}=0$), and SCDM (with $\Omega_0=1$ and $\Omega_{\Lambda}=0$). The models assume a Hubble constant $h=0.5$ if $\Omega_0=1$ and $h=0.7$ otherwise (where $H_0=100\, h\mbox{ km s}^{-1}\mbox{Mpc}^{-1}$). They also assume a primordial scale invariant ($n=1$) power spectrum, normalized to the present cluster abundance, $\sigma_8=0.5\ \Omega_0^{-0.5}$ (e.g., Pen 1998 and references therein), where $\sigma_8$ is the root-mean-square amplitude of mass fluctuations in spheres of radius $8\ h^{-1}$ Mpc. \subsection{Numerical Results} In Figure \ref{figtau} we show the variation of the lensing optical depth with source redshift. This plot does not include the magnification bias which we discuss below. 
In order to show the relative variation, we normalize each curve to unity at $z_S=2$. The dashed curves show results for non-evolving lenses with $\langle n \sigma^4 \rangle =const$ in $\Lambda$CDM, OCDM, and SCDM, in order from top to bottom. The higher values obtained in low-$\Omega$ models are due to the increased spatial volume in these cosmologies. The solid curves show the results for the PS halo distribution in OCDM, $\Lambda$CDM, and SCDM, in order from top to bottom. High redshifts are characterized by a decrease in $dn/dM$ at high masses and an increase at low masses, so that the typical mass of collapsing objects decreases. In the OCDM model, the evolution of $dn/dM$ toward lower masses is slow enough that $\langle n \sigma^4\rangle$ increases with $z$ up to $z\sim3.5$, which increases the lensing optical depth above the value expected for non-evolving lenses. For a given source redshift, the distribution of lens redshifts is proportional to $d\tau/dz_L$, which is given by eqs.\ (\ref{dtau}) and (\ref{nsig}). In Figure \ref{figzLens} we show the probability density $p(z_L)$, defined so that the fraction of lenses between $z_L$ and $z_L+dz_L$ is $p(z_L)dz_L$. We assume PS halos in $\Lambda$CDM (solid curves), OCDM (dashed curves), or SCDM (dotted curves). In each cosmological model, we consider a source at $z_S=5$ or at $z_S=10$, where the higher curve at $z_L<1$ corresponds to $z_S=5$. The curves peak around $z_L=1$ in the low-$\Omega$ models and around $z_L=0.7$ in SCDM. In each case a significant fraction of the lenses are above redshift 2: $20\%$ for $z_S=5$ and $36\%$ for $z_S=10$ in $\Lambda$CDM. The $z_L>2$ fractions are higher in OCDM ($26\%$ for $z_S=5$ and $48\%$ for $z_S=10$) and lower in SCDM ($13\%$ for $z_S=5$ and $26\%$ for $z_S=10$). The fraction of lensed sources in an actual survey is enhanced, relative to the above lensing probability, by the so-called magnification bias. 
At a given observed flux level, unlensed sources compete with lensed sources that are intrinsically fainter. Since fainter galaxies are more numerous, the fraction of lenses in an observed sample is larger than the optical depth discussed above. The magnification bias is calculated in detail below, but for the purpose of the discussion here we adopt a uniform enhancement factor of 5 when computing the lensing fraction. Our results for the different cosmological models are summarized in Table 1. At $z_S=2$ we compare the results from the hierarchical PS models to a no-evolution model of the lens population based on the local luminosity function of galaxies. The last column of Table 1 shows the results (with a magnification bias factor of 5), for example, for the parameters of the no-evolution model of Kochanek (1996a), who adopted a number density $n_e=6.1\ h^3\times 10^{-3}\ $Mpc$^{-3}$ of E/S0 galaxies, a Schechter function slope $\alpha=-1$, a Faber-Jackson exponent $\gamma=4$, and a characteristic dark matter velocity dispersion $\sigma_{\star} =225~{\rm km~s^{-1}}$. The PS models yield a higher lensing fraction, although the difference is small for the $\Lambda$CDM model. In all the PS models, the fraction of multiply imaged systems at $z_S=10$ is around $5\%$ if the magnification bias is 5. In the SIS model, the two images of a multiply-imaged source have a fixed angular separation, independent of source position, of $\Delta \theta=8\pi (\sigma/c)^2 (D_{LS}/D_{OS})$. The overall distribution of angular separations is shown in Figure \ref{figang} for $\Lambda$CDM (solid curves), OCDM (dashed curves), and SCDM (dotted curves). The results are illustrated for $z_S=2$, 5, and 10 in each model. Image separations are typically reduced by a factor of 2--3 between $z_S=2$ and $z_S=10$, almost entirely due to the evolution of the lenses. 
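For reference, the fixed SIS image separation $\Delta \theta=8\pi (\sigma/c)^2 (D_{LS}/D_{OS})$ is easily evaluated. The sketch below adopts the illustrative values $\sigma=225~{\rm km~s^{-1}}$ (the characteristic dispersion of the no-evolution model above) and $D_{LS}/D_{OS}=0.5$, which give a separation of about $1\farcs5$.

```python
import math

ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi   # ~206265

def sis_separation_arcsec(sigma_kms, dls_over_dos):
    """SIS image separation: Delta theta = 8 pi (sigma/c)^2 (D_LS/D_OS)."""
    c = 2.998e5  # speed of light [km/s]
    return (8.0 * math.pi * (sigma_kms / c) ** 2
            * dls_over_dos * ARCSEC_PER_RAD)

# Illustrative values: sigma = 225 km/s, D_LS/D_OS = 0.5
dtheta = sis_separation_arcsec(225.0, 0.5)
```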
With the {\it NGST}\, resolution of $\sim 0\farcs06$, a large majority ($\sim 85\%$) of lenses with $\Delta \theta < 5 \arcsec$ can be resolved even for $z_S=10$. Note, however, that a ground-based survey with $\sim 1 \arcsec$ seeing is likely to miss $\sim 60\%$ of these lenses. There is also a tail of lenses with separations $\Delta \theta > 5 \arcsec$. These large separation lenses, and the observational difficulties in identifying them, have been previously explored both analytically (Narayan \& White 1988; Kochanek 1995) and with numerical simulations (Cen et al.\ 1994; Wambsganss et al.\ 1995). The magnification bias is determined by the distribution of image magnifications and by the source luminosity function. Denoting the probability distribution of magnifications by $q(A)$ (independent of $z_L$ and $z_S$ for the SIS), and the number counts of sources per unit flux at a flux $F$ by $dN/dF$, the fraction of lensed sources at the observed flux $F$ is increased by a bias factor \begin{equation} B=\int \left.\frac{dN}{dF}\right|_{F/A} \left[\left. \frac{dN}{dF}\right|_F\right]^{-1}\,q(A)\frac{dA}{A}\ .\end{equation} As noted above, {\it NGST}\, will resolve almost all double images and so we count them as two apparent sources. Thus we compute the bias factors separately for the two images, using $q(A)=2/(A-1)^3$ and $A>2$ for the brighter image, and $q(A)=2/(A+1)^3$ and $A>0$ for the fainter image. We then find the sum, which is dominated by the brighter image of the two. This sum includes the contributions to sources observed at a flux $F$ from all lensed images (each of which is either the bright image or the faint image of a lensed pair). The product of the resulting bias factor and the lensing optical depth yields the fraction of all apparent sources which are part of a lensed system. 
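The bias factor takes a simple form for power-law number counts, $dN/dF\propto F^{-\beta}$, since the ratio of counts in the bias integral above then reduces to $A^{\beta}$. The sketch below evaluates the two SIS bias integrals numerically for the illustrative value $\beta=2$.

```python
import numpy as np

# Magnification bias for power-law number counts dN/dF ~ F^(-beta), for
# which (dN/dF)|_{F/A} / (dN/dF)|_F = A^beta.  SIS magnification
# distributions: q(A) = 2/(A-1)^3 with A > 2 for the brighter image, and
# q(A) = 2/(A+1)^3 with A > 0 for the fainter image.  beta = 2 below is
# an illustrative choice within the quoted range beta ~ 2-2.5.

def _integrate(f, a, b, n=400_001):
    """Trapezoidal rule on a log-spaced grid (power-law integrands)."""
    x = np.logspace(np.log10(a), np.log10(b), n)
    y = f(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def bias_bright(beta, A_max=1e6):
    # B = int_2^inf A^(beta-1) q(A) dA for the brighter image
    return _integrate(lambda A: A ** (beta - 1.0) * 2.0 / (A - 1.0) ** 3,
                      2.0, A_max)

def bias_faint(beta, A_max=1e6):
    # B = int_0^inf A^(beta-1) q(A) dA for the fainter image
    return _integrate(lambda A: A ** (beta - 1.0) * 2.0 / (A + 1.0) ** 3,
                      1e-8, A_max)
```

For $\beta=2$ these integrals evaluate analytically to 3 (brighter image) and 1 (fainter image), i.e.\ a total bias of 4, within the 3--6 range found below; as $\beta \to 3$ the brighter-image integral diverges, reproducing the critical value noted in the text.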
We note that any attempt to estimate the magnification bias of high-redshift sources is highly uncertain at present, owing to several tentative assumptions about their characteristic mass-to-light ratio, star formation history, initial stellar mass function, dust extinction amplitude, and quasar formation history. Figure \ref{figbias} illustrates the magnification bias for the {\it NGST}\, number count model of Haiman \& Loeb (1998b; 1997), who assumed cosmological parameters nearly equivalent to our $\Lambda$CDM model. Solid lines are for mini-quasars, dashed lines are for galaxies undergoing starbursts which convert $20\%$ of the gas of each halo into stars, and dotted lines are for starbursts which use only $2\%$ of the gas of each halo. For each type of source, we show separate curves corresponding to all sources at redshifts $z_S>5$ or to all sources at redshifts $z_S>10$. Although the $z_S>10$ number counts are smaller, they are steeper than the $z_S>5$ counts and produce a larger magnification bias. Similarly, for low $(2\%)$ star-formation efficiency, galaxies are detected only if they lie in relatively massive halos, which have a steeper mass function and thus a larger magnification bias than for a higher star-formation efficiency. These results indicate a magnification bias around 3--6, but this factor could be much higher if the actual number counts are only somewhat steeper than predicted by these models. Indeed, the number counts fall off roughly as power laws $dN/dF_{\nu} \propto F_{\nu}^{-\beta}$ with $\beta \sim$ 2--2.5, while for the SIS, the magnification bias diverges at the critical value $\beta=3$.
Using Figure \ref{figbias}, Table 1, and the number counts of Haiman \& Loeb (1998b), the estimated number of sources (lensed sources) above 1 nJy per $4 \arcmin \times 4 \arcmin$ field of view is 90 (5) for $z>10$ quasars, 300 (12) for $z>5$ quasars, 400 (17) for $z>10$ galaxies with $20\%$ star-formation efficiency, $10^4$ (200) for $z>5$ galaxies with $20\%$ efficiency, 20 (1) for $z>10$ galaxies with $2\%$ efficiency, and $2\times 10^3$ (30) for $z>5$ galaxies with $2\%$ efficiency. Note, however, that the number counts for galaxies are reduced when we include the fact that most galaxies are resolved by {\it NGST}\, and cannot be treated as point sources (see \S 3). We have assumed that each lensing halo can be approximated as a SIS, although the mass distributions in actual halos might be more complicated. Numerical simulations of pure dark matter indicate a roughly universal profile (NFW) with a $1/r$ density profile in the core. This result is supported by very high resolution simulations of a small number of halos (Moore et al.\ 1999), although simulations of large numbers of halos typically find a shallower inner density profile in agreement with observed rotation curves of dark-matter-dominated galaxies (Kravtsov et al.\ 1998). In addition, galaxy halos undergo adiabatic compression when the baryons cool and contract (e.g., Flores et al.\ 1993). Halos with the NFW profile have a smaller lensing cross-section than the SIS, but this is partly compensated for by the higher mean magnification and thus the higher magnification bias produced by NFW lenses (Keeton 1999, in preparation). In the above discussion, we have also assumed spherical symmetry. If the SIS is made ellipsoidal, with an ellipticity of 0.3, then the total lensing cross-section is changed only slightly, but lenses above a total magnification of $\sim 8$ are then mostly four-image systems (see, e.g., Kochanek 1996b). 
We have also assumed that each halo acts as an isolated lens, while in reality galaxies are clustered and many galaxies lie in small groups. The large dark matter halo associated with the group may combine with the halos of the individual galaxies and enhance their lensing cross-section. External shear due to group halos will also tend to increase the fraction of four-image systems. On the other hand, dust extinction may reduce the number of lensed systems below our estimates, especially since high redshift galaxies are observed at rest-frame UV wavelengths. Significant extinction may arise from dust in the source galaxy itself as well as dust in the lens galaxy, if the image path passes sufficiently close to the center of the lens galaxy. \section{Size Distribution of High-Redshift Disk Galaxies} \subsection{Semi-Analytic Model} \label{limits} The formation of disk galaxies within hierarchical models of structure formation was explored by Fall \& Efstathiou (1980). More recently, the distribution of disk sizes was derived and compared to observations by Dalcanton, Spergel, \& Summers (1997; hereafter DSS) and Mo, Mao, \& White (1998; hereafter MMW). In order to estimate the ability of {\it NGST}\, to resolve high redshift disks, we adopt the simple model of an exponential disk in a SIS halo. We consider a halo of mass $M$, virial radius $r_{\rm vir}$, total energy $E$, and angular momentum $J$, for which the spin parameter is defined as \begin{equation} \lambda \equiv J |E|^{1/2} G^{-1} M^{-5/2}\ . \end{equation} If the disk mass is a fraction $m_d$ of the halo mass and its angular momentum is a fraction $j_d$ of that of the halo, then the exponential scale radius of the disk is given by (MMW) \begin{equation} R_d=\frac{1}{\sqrt{2}}\left(\frac{j_d}{m_d}\right) \lambda\,r_{\rm vir}\ . 
\end{equation} The observed distribution of disk sizes suggests that the specific angular momentum of the disk is similar to that of the halo (see DSS and MMW), and so we assume $j_d/m_d=1$. The distribution of disk sizes is then determined\footnote{For a halo of a given mass and redshift, we determine $r_{\rm vir}$ using NFW and eq.\ (6) of Bryan \& Norman (1998); see also \S \ref{Vc}\,.} by the PS halo abundance and by the distribution of spin parameters. The latter approximately follows a lognormal distribution, \begin{equation} p(\lambda) d\lambda= \frac{1} {\sigma_{\lambda} \sqrt{2 \pi}} \exp \left [-\frac{ \ln^2(\lambda/ \bar{\lambda})}{2\sigma_{\lambda}^2} \right] \frac{d\lambda}{\lambda}\ , \end{equation} with $\bar{\lambda}=0.05$ and $\sigma_{\lambda}=0.5$ following MMW, who determined these values based on the N-body simulations of Warren et al.\ (1992). Unlike MMW, we do not include a lower cutoff on $\lambda$ due to disk instability. If a dense bulge exists, it can prevent bar instabilities, or if a bar forms it may be weakened or destroyed when a bulge subsequently forms (Sellwood \& Moore 1999). The distribution of disks is truncated at the low-mass end due to the fact that gas pressure inhibits baryon collapse and disk formation in shallow potential wells, i.e.\ in halos with a low circular velocity $V_c$. In particular, photo-ionization heating by the cosmic UV background heats the intergalactic gas to a characteristic temperature of $\sim 10^{4-5}~{\rm K}$ and prevents it from settling into systems with a lower virial temperature. Using a spherical collapse code, Thoul \& Weinberg (1996) found a reduction of $\sim50\%$ in the collapsed gas mass due to heating, for a halo of $V_c=50~{\rm km~s}^{-1}$ at $z=2$, and a complete suppression of infall below $V_c=30~{\rm km~s}^{-1}$. 
Three-dimensional numerical simulations (Quinn, Katz, \& Efstathiou 1996; Weinberg, Hernquist, \& Katz 1997; Navarro \& Steinmetz 1997) found a suppression of gas infall into even larger halos with $V_c \sim 75~{\rm km~s}^{-1}$. We adopt a typical cutoff value $V_{\rm cut}=50~{\rm km~s}^{-1}$ in the PS halo function, requiring $V_c > V_{\rm cut}$ for the formation of disks. We note, however, that the appropriate $V_{\rm cut}$ could be lower at both very low and very high redshifts when the cosmic UV background was weak. In particular, the decline of the UV background at $z\sim 1$ allowed gas to condense in halos down to $V_c \sim 25~{\rm km~s}^{-1}$ (Kepner, Babul, \& Spergel 1997). Similarly, gaseous halos that had formed prior to reionization, when the cosmic UV background had been negligible, could have survived photo-ionization heating at later times as long as they satisfied $V_c \ga 13~{\rm km~s}^{-1}$ (Barkana \& Loeb 1999). Aside from its relevance to lensing studies, the distribution of disk sizes is useful for assessing the level of overlap of sources on the sky, namely the confusion noise. We first compute the geometric optical depth of galactic disks, i.e., the fraction of the sky covered by galactic disks. This corresponds to the probability of encountering a galactic disk (within one exponential scale length) inside an infinitesimal aperture. Averaging over all random orientations, a circular disk of radius $R_d$ at redshift $z_S$ occupies an angular area of $2 (R_d/D_{OS})^2$. The total optical depth then depends on $V_{\rm cut}$. For $\Lambda$CDM with $V_{\rm cut}=50~{\rm km~s}^{-1}$, we find the geometric optical depth to be $2.0 \times 10^{-4}$ when integrated over all $z>10$ sources, $5.5 \times 10^{-3}$ for $z>5$ sources, $1.7\%$ for $z>3$ sources, $4.6\%$ for $z>1$ sources, and $6.8\%$ for sources at all redshifts. 
If we lower $V_{\rm cut}$ to $30~{\rm km~s}^{-1}$, the optical depth becomes $8.8 \times 10^{-4}$ for $z>10$ sources, $3.5\%$ for $z>3$ sources, and $11.3\%$ for all source redshifts. A more realistic estimate of confusion noise must include the finite resolution of the instrument as well as its detection limit for faint sources. We characterize the instrument's resolution by a minimum circular aperture of angular diameter $\theta_a$. We include as sources only those galactic disks which are brighter than some threshold. This threshold is dictated by $F_{\nu}^{\rm ps}$, the minimum spectral flux\footnote{Note that $F_{\nu}^{\rm ps}$ is the total spectral flux of the source, not just the portion contained within the aperture.} required to detect a point-like source (i.e., a source which is much smaller than $\theta_a$). For an extended source of diameter $\theta_s \gg \theta_a$, we assume that the signal-to-noise ratio can be improved by using a larger aperture, with diameter $\theta_s$. The noise amplitude scales as the square root of the number of noise (sky) photons, or the square root of the corresponding sky area. Thus, the total flux needed for detection of an extended source at a given signal-to-noise threshold is larger than $F_{\nu}^{\rm ps}$ by a factor of $\theta_s/ \theta_a$. We adopt a simple interpolation formula between the regimes of point-like and extended sources, and assume that a source is detectable if its flux is at least $\sqrt{1+ (\theta_s/ \theta_a)^2}\, F_{\nu}^{\rm ps}$. We can now compute the ``\,intersection probability'', namely the probability of encountering a galactic disk (within one exponential scale length) anywhere inside the aperture of diameter $\theta_a$. A face-on circular disk of diameter $\theta_s=2 R_d/D_{OS}$ will overlap the aperture if its center lies within a radius of $(\theta_a + \theta_s)/2$ about the center of the aperture. 
Assuming a random orientation of the disk, the average cross-section is then $\pi \theta_a^2/4 +1.323\ \theta_a \theta_s + \theta_s^2/2$. We integrate this cross-section over the spin parameter distribution and over the abundance of halos at all masses and redshifts. The resulting intersection probability is closely related to the confusion noise. If this probability is small then individual sources are resolved from each other, since the aperture typically contains at most a single detectable source. We can also obtain a limit on the confusion noise from sources below the flux detection threshold, by computing the same intersection probability but including sources at all fluxes. The flux $F_{\nu}$ of a given disk depends on its mass-to-light ratio, which in turn depends on its star formation history and stellar mass function. We adopt a semi-analytic starburst model similar to that of Haiman \& Loeb (1998b), but different in detail. We assume that each halo of mass $M$ hosts a disk of mass $m_d M$, of which a fraction $f_d$ participates in star formation. Adopting a cosmological baryon density of $\Omega_b h^2=0.02$, we define the star formation efficiency $\eta$ so that $f_d m_d=\eta (\Omega_b/\Omega_0)$. We assume a fixed universal value of $\eta$, and illustrate our results for a high efficiency of $\eta=20\%$ (assumed unless indicated otherwise) and for a low efficiency $\eta=2\%$. These values cover the range of efficiencies suggested by observations of the metallicity of the Ly$\alpha$ forest at $z=3$ (Haiman \& Loeb 1998b) and the cumulative mass density of stars in the Universe at present (Fukugita, Hogan, \& Peebles 1998). Note that $\eta=20\%$ and a particular value of $F_{\nu}^{\rm ps}$ are equivalent to $\eta=2\%$ and a tenfold decrease in $F_{\nu}^{\rm ps}$. 
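The detection criterion and overlap cross-section defined above can be collected into a short sketch; the aperture and source diameters used in the checks are illustrative.

```python
import math

# Sketch of the detection threshold and overlap cross-section entering the
# confusion-noise estimate.  theta_a is the aperture diameter, theta_s the
# face-on source diameter, F_ps the point-source detection threshold.

def flux_threshold(theta_s, theta_a, F_ps):
    """Total flux needed to detect a source of diameter theta_s: the
    interpolation sqrt(1 + (theta_s/theta_a)^2) * F_ps between the
    point-like (F_ps) and extended ((theta_s/theta_a) F_ps) regimes."""
    return math.sqrt(1.0 + (theta_s / theta_a) ** 2) * F_ps

def overlap_cross_section(theta_a, theta_s):
    """Orientation-averaged cross-section for a randomly inclined disk of
    face-on diameter theta_s to intersect an aperture of diameter theta_a:
    pi theta_a^2/4 + 1.323 theta_a theta_s + theta_s^2/2."""
    return (math.pi * theta_a ** 2 / 4.0
            + 1.323 * theta_a * theta_s
            + theta_s ** 2 / 2.0)
```

In the point-source limit ($\theta_s \to 0$) the threshold reduces to $F_{\nu}^{\rm ps}$ and the cross-section to the aperture area $\pi\theta_a^2/4$, as expected.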
In order to determine the mass-to-light ratio of a halo of mass $M$ at a redshift $z$, we assume that the mass $\eta (\Omega_b/\Omega_0) M$ is distributed in stars with a Salpeter mass function ($dN \propto m^{-\alpha} dm$ with $\alpha$=2.35) from 1 $M_{\sun}$ up to 100 $M_{\sun}$. If the mass function were extended to masses below 1 $M_{\sun}$, the additional stars would contribute significant mass but little luminosity, so this would essentially be equivalent to a reduction in $\eta$. We use the stellar population code of Sternberg (1998) with Z=0.001 stellar tracks and Z=0.006 stellar spectra. We assume that the age of the stellar population equals that of the dark matter halo, whose age is determined from its merger history. The formation redshift $z_{\rm \,form}>z$ is defined as the time at which half the mass of the halo was first contained in progenitors more massive than a fraction $f$ of $M$. We set $f=0.5$ and estimate the formation redshift (and thus the age) using the extended Press-Schechter formalism (see, e.g., Lacey \& Cole 1993). At high redshifts, the young age of the Universe and high halo merger rate imply young stellar populations which are especially bright at rest-frame UV wavelengths. At each redshift $z$ we calculate the halo spectral flux by averaging the composite stellar spectrum over the wavelengths corresponding to the observed {\it NGST}\, spectral range of 0.6--3.5$\mu$m. We also include a Ly$\alpha$ cutoff in the spectrum due to absorption by the dense Ly$\alpha$ forest at all redshifts up to that of the source. We do not, however, include dust extinction. Despite the generally low metallicity at high redshifts, extinction could be significant since observations correspond to rest-frame UV wavelengths (Loeb \& Haiman 1997). Our starburst model is expected to describe galaxies at high redshifts, but it may fail at redshifts $z\la 2$. 
The model relies on two key assumptions, namely that stars form in disks, and that the stars in each galaxy have formed since the last major merger of its halo. At high redshifts, the fraction of gas that has collapsed into halos is small, and the fraction that has turned into stars is even smaller. Thus, a high-redshift galaxy is expected to be gas-rich whether it forms in a merger or accretes most of its gas from the intergalactic medium. Such a galaxy is likely to form most of its stars in a disk after the gas cools and settles onto a plane. At low redshifts, on the other hand, disk galaxies may have converted most of their gas into stars by the time they merge. In this case, the merger may form a massive elliptical galaxy rather than a disk-dominated galaxy. Indeed, elaborate semi-analytic models indicate that the stars in elliptical galaxies are typically much older than their halo merger age (e.g., Thomas \& Kauffmann 1999), in agreement with the red colors of ellipticals which suggest old stellar populations. Although the increased presence of elliptical galaxies invalidates our model for the mass-to-light ratios of galaxies at low redshifts, our results for the size distribution of galaxies may remain approximately valid. Theoretical considerations based on the virial theorem, as well as numerical simulations, suggest that the characteristic size of a galactic merger remnant is smaller by a factor of $\la 1.5$ than the size expected for a disk galaxy of the same mass and velocity dispersion (Hausman \& Ostriker 1978; Hernquist et al. 1993). \subsection{Numerical Results} Figure \ref{figconf} shows the total intersection probability as a function of limiting flux (right panel), for all sources with $z>0$, $z>2$, $z>5$, and $z>10$, from top to bottom. The total probability is dominated by the contribution of sources at low redshifts, which is relatively insensitive to the limiting flux (or to $\eta$). 
All curves assume the $\Lambda$CDM model with a circular-velocity cutoff for the host halo of $V_{\rm cut}=50~{\rm km~s}^{-1}$. The aperture diameter is chosen to be $\theta_a=0\farcs 06$, close to the expected {\it NGST}\, resolution at $2\mu$m. With $F_{\nu}^{\rm ps}=1$ nJy, the total intersection probability for all redshifts is $8.9\%$ (or $5.6\%$ if $\eta=2\%$) in $\Lambda$CDM (and it is $10\%$ or less also in the SCDM and OCDM models). The probability increases to $15\%$ ($6.2\%$ if $\eta=2\%$) if $V_{\rm cut}=30~{\rm km~s}^{-1}$ instead of $50~{\rm km~s}^{-1}$. The contribution from sources at $z>5$ is $1.0\%$ ($9.0\times 10^{-4}$ if $\eta=2\%$). Thus the chance for overlapping sources will be small for {\it NGST}.\, If the resolution were $\theta_a = 0\farcs 12$, the probability would be $12\%$ ($6.6\%$ if $\eta=2\%$) for all redshifts and $1.7\%$ ($0.14\%$ if $\eta=2\%$) for sources at $z>5$. If we include all sources regardless of flux then the probability becomes independent of $\eta$, and (with $\theta_a = 0\farcs 06$) it equals $9.1\%$ if $V_{\rm cut}=50~{\rm km~s}^{-1}$ and $18.8\%$ if $V_{\rm cut}=30~{\rm km~s}^{-1}$. The contribution from sources below the detection threshold is small due to the $V_c$ cutoff, i.e.\ the fact that the photo-ionizing background prevents the formation of galaxies in small dark matter halos. This fact should eventually result in a turnover, where the number counts do not increase with decreasing flux. However, the turnover occurs somewhat below 1 nJy, a flux that is much smaller than the detection threshold of current observations such as the Hubble Deep Field. In summary, we have shown that confusion noise for {\it NGST}\, will be low, assuming that there is one galaxy per halo and that the luminous stars form primarily in disks.
Note that we have not included the possible confusion noise from multiple galaxies per halo, from clustered or interacting galaxies, or from galaxies being observed as separate fragments rather than smooth disks. We also have not included the confusion noise from stars and other sources in our own galaxy. Also note that with no flux limit on sources, the intersection probability approaches unity only if the aperture is increased to $0\farcs 9$. Our model predicts the size distribution of galaxies at various redshifts. Figure \ref{figsize} shows the fraction of the total number counts contributed by sources with diameters greater than $\theta$, as a function of $\theta$. The size distributions are shown for a high efficiency ($\eta=20\%$, solid curves) and for a low efficiency ($\eta=2\%$, dotted curves) of star formation. Each curve is marked by the lower limit of the corresponding redshift range, with `0' indicating sources with $0<z<2$, and similarly for $2<z<5$, $5<z<10$, and $z>10$. All curves include a cutoff of $V_{\rm cut}=50~{\rm km~s}^{-1}$ and a limiting point source flux of 1 nJy, and all are for the $\Lambda$CDM model. The vertical dashed line in Figure \ref{figsize} indicates the {\it NGST}\, resolution of $0\farcs 06$. Note that increasing $\eta$ leads to a decrease in the typical angular size of galaxies, since the set of observable galaxies then includes galaxies which are less massive, and thus generally smaller. However, a tenfold increase in $\eta$ decreases the observed angular sizes of $z>10$ galaxies by only a factor of two. The typical observed size of faint disks (i.e., of all disks down to 1 nJy) is $0\farcs4$ for sources at $0<z<2$, $0\farcs2$ for sources at $2<z<5$, $0\farcs10$ (or $0\farcs15$ if $\eta=2\%$) for sources at $5<z<10$, and $0\farcs065$ (or $0\farcs11$ if $\eta=2\%$) for sources at $z>10$. 
Roughly $60\%$ of all $z>10$ sources (or $90\%$ if $\eta=2\%$) can be resolved by {\it NGST},\, and the fraction is at least $85\%$ among lower redshift sources. Thus, the high resolution of {\it NGST}\, should make most of the detected sources useful for weak lensing. If reliable shape measurements require a diameter equal to twice the resolution scale (probably overly pessimistic), then the useful ($\theta > 0\farcs 12$) fractions are $13\%$ for $z>10$, $40\%$ for $5<z<10$, and $80\%$ for $2<z<5$ sources. If $\eta=2\%$, the corresponding fractions are $40\%$ for $z>10$, $65\%$ for $5<z<10$, and $80\%$ for $2<z<5$. These results are all in the $\Lambda$CDM model, but disk sizes in the SCDM and OCDM models differ by only about $10\%$. As noted by Schneider \& Kneib (1998), ground-based telescopes that are not equipped with adaptive optics or interferometry would be unable to resolve most of the high-redshift sources, even if they could reach the same flux sensitivity as {\it NGST}.\, For example, a ground-based survey down to 1 nJy with, e.g., $0\farcs 75$ seeing at $2\mu$m, could resolve only $0.003\%$ of the $z>10$ sources, with corresponding fractions of $0.1\%$ for $5<z<10$, and $2\%$ for $2<z<5$. If $\eta=2\%$ then the resolved fractions are $0.03\%$ for $z>10$, $0.8\%$ for $5<z<10$, and $4\%$ for $2<z<5$. Thus, the high resolution of {\it NGST}\, is crucial for resolving faint galaxies at the redshifts of interest. Current observations of galaxy sizes at $z>2$ are inadequate for a detailed comparison with our models. Gardner \& Satyapal (1999, in preparation) have determined the sizes of galaxies in the Hubble Deep Field South, finding typical half-light radii of $0\farcs 1$ with a very large scatter. This sample likely includes a wide range of redshifts, and it is expected to be strongly biased toward small galaxy sizes. 
Given the steep luminosity function of the detected galaxies, most of them are detected very close to the detection limit, especially those at high redshift. Of course, galaxies near the flux threshold can be detected only if they are nearly point sources, while large galaxies are excluded from the sample because of their low surface brightness. Since most galaxies will be resolved by {\it NGST},\, predictions for the total number counts are affected by the higher flux needed for the detection of extended objects relative to point sources. For a point source flux limit of 1 nJy and $\eta=20\%$, the total number counts are reduced (relative to a size-independent flux limit of 1 nJy) by a factor of 2 for $z>10$ and by only $10\%$ for $5<z<10$. The reduction for $z<10$ sources is small if $\eta=20\%$, since in this case the total flux of most $z<10$ sources is greater than 1 nJy, and these galaxies can still be detected even as extended objects. However, the reduction in number counts is more significant if $\eta=2\%$, with a factor of 8 for $z>10$, 4 for $5<z<10$, and 2.5 for $2<z<5$. We show in Figure \ref{figzSource} the resulting prediction for the redshift distribution of the galaxy population observed with {\it NGST}.\, We assume the $\Lambda$CDM model and plot $dN/dz$, where $N$ is the number of galaxies per {\it NGST}\, field of view. The solid curve assumes a high efficiency ($\eta=20\%$) of star formation and the dashed curve assumes a low efficiency ($\eta=2\%$). All curves assume a limiting point source flux of 1 nJy. The total number per field of view of galaxies at all redshifts is $N=59,000$ for $\eta=20\%$ and $N=15,000$ for $\eta=2\%$. The fraction of galaxies above redshift 5 is sensitive to the value of $\eta$ -- it equals $40\%$ for $\eta=20\%$ and $7.4\%$ for $\eta=2\%$ -- but the number of $z>5$ galaxies is large ($\sim 1000$) even for the low efficiency. 
The number of $z>5$ galaxies predicted in SCDM is close to that in $\Lambda$CDM, but in OCDM there are twice as many $z>5$ galaxies. \subsection{The Surface Brightness of Lensed Sources} In our estimates of the lensing rate in \S 2 we implicitly made two important assumptions: (i) the source is smaller than the image separation, so that the two images of the source are not blended; (ii) the surface brightness of the background source is comparable to or higher than that of the foreground lens, otherwise the background source could not be detected when it is superimposed on the lens galaxy. These assumptions are trivially justified for the point-like images of quasars. In the context of galactic sources, we can apply our estimates of disk sizes to test these two assumptions quantitatively. A lensed galaxy is generally much smaller than the separation of its two images. The combination of Figures \ref{figang} and \ref{figsize} shows that, regardless of the source redshift, the typical image separation is at least four times as large as the typical diameter of a source galaxy detected by {\it NGST}.\, Thus, the majority of all lensed sources will not be blended. Note, however, that if ellipticity or shear are included then some of the resulting four-image systems may include arcs produced by several blended images. In order to compare the surface brightness of source galaxies to that of lens galaxies, we calculate the redshift evolution of the mean surface brightness of galaxies. At high redshifts, we may apply our disk starburst model to find the surface brightness of a galaxy from the predicted size, mass, and mass-to-light ratio of its disk. We compute the average surface brightness (as observed in the {\it NGST}\, spectral range) out to one exponential scale length. 
Figure \ref{figSB} shows this surface brightness $\mu$ (expressed in nJy per square arcsecond) averaged over all galaxies at each redshift, in the $\Lambda$CDM model only (as the OCDM and SCDM models yield very similar results). Solid lines show the mean at $z>2$, where galaxies are weighted by their number density and their mass-to-light ratios are derived from the starburst model. As discussed at the end of \S \ref{limits}, although our model for the size distribution of galaxies should remain approximately valid at low redshifts, the starburst model may fail to predict the correct mass-to-light ratio of the stellar population at $z\la 2$, particularly for the lens galaxies. These lenses tend to be massive elliptical galaxies, with stellar populations that may be much older than the merger timescale assumed in our starburst model. In order to estimate the surface brightness of lens galaxies, we adopt a simple alternative model in which all their stars are uniformly old. The dashed lines in Figure \ref{figSB} show (for $z<2$) the mean surface brightness of lensing galaxies (i.e., where galaxies are weighted by the product of their number density and their lensing cross-section), assuming that their stars formed at $z=5$. In each case (i.e., for source galaxies or for lens galaxies), the upper curve assumes a high efficiency ($\eta=20\%$) and the lower curve assumes a low efficiency ($\eta=2\%$) of incorporating baryons into stars in the associated halos. All curves include a cutoff velocity of $V_{\rm cut}=50~{\rm km~s}^{-1}$ and a limiting point source flux of 1 nJy. As is apparent from Figure \ref{figSB}, the mean surface brightness of galaxies varies, for a fixed $\eta$, by a factor of $\la 2$ over all redshifts above 2, despite the large range in luminosity distances from the observer. Several different factors combine to keep the surface brightness nearly constant.
Except for redshift factors, the surface brightness is proportional to the luminosity over the square of the disk radius, and the luminosity is in turn equal to the disk mass divided by its mass-to-light ratio. Although the typical mass of halos decreases at high redshifts, two other effects tend to increase the surface brightness. First, high redshift disks are compact due to the increased mean density of the Universe. The second effect results from the low mass-to-light ratio of the young stellar populations of high redshift disks, which makes these galaxies highly luminous despite their small masses. For example, the mean ratio of halo mass to disk luminosity for $z=2$ galaxies (with $\eta=20\%$ and $F_{\nu}^{\rm ps}=1$ nJy) is 14 in solar units, and this decreases to 3.8 at $z=5$ and 1.2 at $z=10$. This evolution in the mass-to-light ratio includes the so-called K-correction, i.e.\ the fact that for higher-redshift sources the {\it NGST}\, filter corresponds to shorter rest-frame wavelengths. Acting alone, the factors discussed above would result in a sharp increase with redshift in the surface brightness of galaxies. Additional redshift effects, however, counter-balance these other factors. According to the Tolman surface brightness law, the expansion of the universe yields a factor of $(1+z)^{-4}$ regardless of the values of the cosmological parameters. This redshift factor dominates and produces an overall decrease in $\mu$ among lens galaxies at low redshifts (up to $z \sim 1.5$). At these low redshifts, all galaxies are detected regardless of $\eta$, so the overall $\mu$ is exactly proportional to $\eta$. At higher redshifts, the 1 nJy flux limit preferentially removes low surface brightness galaxies from the detected sample. The resulting bias toward high surface brightness is larger if $\eta=2\%$, and this decreases the difference in $\mu$ between the cases of $\eta=2\%$ and $\eta=20\%$. 
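The near-cancellation of these factors can be checked with a toy estimate at fixed halo mass. The sketch below (illustrative only, and an assumption of ours rather than the paper's calculation) takes $R \propto (1+z)^{-1}$ from the increased mean density, applies the Tolman $(1+z)^{-4}$ dimming, and uses the halo mass-to-light ratios quoted above (14, 3.8, and 1.2 in solar units at $z=2$, 5, and 10); it ignores the evolving halo mass function and the flux-limit selection bias discussed in the text.

```python
# Toy scaling: mu ~ (M / Upsilon) / (R^2 (1+z)^4) at fixed halo mass M,
# with disk radius R ~ (1+z)^-1, so mu ~ (1+z)^-2 / Upsilon(z).
upsilon = {2: 14.0, 5: 3.8, 10: 1.2}  # halo mass / disk luminosity (solar units)

def relative_mu(z):
    """Relative observed surface brightness (arbitrary normalization)."""
    tolman = (1 + z) ** -4      # cosmological surface-brightness dimming
    size = (1 + z) ** 2         # R^-2 factor, with R ~ (1+z)^-1 at fixed mass
    return size * tolman / upsilon[z]

mu = {z: relative_mu(z) for z in upsilon}
```

Even this crude estimate varies by only $\sim 15\%$ between $z=2$ and $z=10$, consistent with the factor of $\la 2$ variation quoted above.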
The mass-to-light ratio begins to decrease rapidly at $z \ga 1.5$, and at $z>2$ the various factors combine to produce a slow variation in $\mu$. Although there is only a modest redshift evolution in the surface brightness of galaxies, there is an additional difficulty in detecting lensed sources. Lensing galaxies are biased toward larger circular velocities, i.e.\ toward larger masses at each redshift. Since a galaxy that is more massive is usually also more luminous, its surface brightness tends to be larger. As shown in Figure \ref{figSB}, this tendency makes the mean surface brightness of lenses somewhat higher than that of sources, despite our assumption of an old stellar population in lens galaxies. Consider, for example, a source at redshift 5 which is multiply imaged. The mean lens redshift for $z_S=5$ is $z_L=1.4$. If we select the source from the general galaxy population and the lens from the population of lenses, then the typical source-to-lens surface brightness ratio is 1:3 if $\eta=20\%$ (or close to 1:1 if $\eta=2\%$). Even though lens galaxies might have a somewhat higher mean surface brightness than the sources which they lens, it should be possible to detect lensed sources since (i) the image center will typically be some distance from the lens center, of order half the image separation, and (ii) the younger stellar population and higher redshift of the source will make its colors different from those of the lens galaxy, permitting an easy separation of the two in multi-color observations. These two helpful features, along with the source being much smaller than the lens and the image separation, are evident in the currently known systems which feature galaxy-galaxy lensing. These include two four-image `Einstein cross' gravitational lenses discovered by Ratnatunga et al.\ (1995) in the Groth-Westphal strip, and a lensed three-image arc detected in the Hubble Deep Field South and studied in detail by Barkana et al.\ (1999). 
In these cases of moderate redshifts and optical/UV observations, the sources appear bluer than the lens galaxies. In the infrared range of {\it NGST},\, high-redshift sources are expected to generally be redder than their low redshift lenses, since the overall redshift has a dominant effect on the spectrum. Suppose, e.g., that $z_S=5$ and $z_L=1.4$. We divide the {\it NGST}\, spectral range into four logarithmically-spaced parts (in order of increasing frequency). For a given spectrum, we find the fraction of the total luminosity which is emitted in each frequency quadrant. The mean fractions for $z_S=5$ galaxies are 0.37, 0.21, 0.26, and 0.16, respectively, while the fractions for $z_L=1.4$ lenses (assuming, as above, that their stars formed at redshift 5) are 0.16, 0.29, 0.39, and 0.16. Thus, if we use the lowest frequency quadrant, the source will be brighter than the lens by an additional factor of 2.3 relative to the source-to-lens luminosity ratio when we use the full {\it NGST}\, bandwidth. Note that we have not included here extinction, which could further redden the colors of lensed sources. \section{Conclusions} We have calculated the lensing probability of high-redshift galaxies or quasars by foreground dark matter halos. We found that the lensing optical depth for multiple imaging of sources increases by a factor of 4--6 from $z_S=2$ to $z_S=10$. With a magnification bias of $\sim 5$ expected for $z_S>5$ sources, the fraction of apparent sources which form one of the images of a lensed source reaches $\sim 5\%$ for sources at $z_S=10$ (see Table 1). Among lenses with image separations below $5 \arcsec$, the typical image separation (in $\Lambda$CDM) drops from $1\farcs1$ at $z_S=2$ to $0.5 \arcsec$ at $z_S=10$. With its expected $\sim 0\farcs06$ resolution, {\it NGST}\, can resolve $\sim 85\%$ of the lenses with $z_S=10$.
Assuming the number counts predicted by Haiman \& Loeb (1998b), the estimated number of lensed sources above 1 nJy per field of view of {\it NGST}\, is roughly 5 for $z>10$ quasars, 10 for $z>5$ quasars, 1--15 for $z>10$ galaxies, and 30--200 for $z>5$ galaxies. Note that these values are in a $\Lambda$CDM cosmology; the number of $z>10$ galaxies is smaller by a factor of $\sim 3$ in SCDM but larger by a factor of $\sim 10$ in OCDM. Although only a small fraction of the sources are multiply imaged, all sources are mildly altered by gravitational lensing due to foreground objects. For a source that is not multiply imaged, the cross-section for an amplification of at least $A$ varies as $1/(A-1)^2$ for a SIS lens. Thus, for $z_S=10$ the optical depth is unity for an amplification of $A=1.1$ or greater. This implies that extended sources at high redshifts are significantly distorted due to lensing. A typical $z=10$ source is magnified or de-magnified by $\sim 10\%$ and also has an ellipticity of at least $10\%$ due to lensing. We have also predicted the size distribution of galactic disks at high redshifts (see Figure \ref{figsize}) and found that the angular resolution of {\it NGST}\, will be sufficiently high to avoid confusion noise due to overlapping sources. Indeed, with a 1 nJy flux limit the probability of encountering a galactic disk inside an aperture of $0\farcs06$ diameter is $8.9\%$ for $\Lambda$CDM, of which $4\%$ comes from $z>2$ sources, $1\%$ comes from $z>5$ sources, and only $0.02\%$ is contributed by $z>10$ sources (see Figure \ref{figconf}). These values are for a high star formation efficiency of $\eta=20\%$, and they are reduced if $\eta=2\%$. In our estimates of the lensing rate in \S 2, we assumed that a lensed source can be detected even when its images overlap the lensing galaxy. We showed in \S 3 that the mean surface brightness of galaxies evolves modestly above redshift 2 (see Figure \ref{figSB}). 
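As a quantitative aside, the amplification statistics quoted above follow from a two-line scaling. For a SIS lens the cross-section for amplification $\geq A$ varies as $1/(A-1)^2$; normalizing at the multiple-imaging boundary ($A=2$) to a raw multiple-imaging optical depth of $\tau \approx 0.01$ at $z_S=10$ (a value we infer here from the quoted $\sim 5\%$ lensed fraction and magnification bias of $\sim 5$, not one stated explicitly in the text) reproduces unit optical depth at $A=1.1$:

```python
def tau_amplification(A, tau_mult=0.01):
    """Optical depth for magnification >= A (valid for 1 < A <= 2).

    Scales as 1/(A-1)^2 and equals tau_mult, the raw multiple-imaging
    optical depth, at the multiple-imaging boundary A = 2."""
    return tau_mult / (A - 1.0) ** 2
```

With these assumptions, `tau_amplification(1.1)` is unity, matching the statement that a typical $z=10$ source is magnified or de-magnified by $\sim 10\%$.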
Although the surface brightness of a background source will typically be somewhat lower than that of the foreground lens, the lensed images should be detectable since they are offset from the lens center and their colors are expected to differ from those of the lens galaxy. Although the typical size of sources decreases with increasing redshift, at least $60\%$ of the $z>10$ galaxies above 1 nJy can still be resolved by {\it NGST}.\, This implies that the shapes of these high redshift galaxies can be studied with {\it NGST}.\, We have also found that the high resolution of {\it NGST}\, is crucial in making the majority of sources on the sky useful for weak lensing studies. When we assumed a 1 nJy flux limit for detecting point sources, we included the fact that resolved sources require a higher flux in order to be detected with the same signal-to-noise ratio. Therefore, estimates of number counts that assume a constant flux limit of 1 nJy for all sources overestimate the number counts by a factor of 2 for $z>10$ sources and a star formation efficiency of $\eta=20\%$, or by as much as a factor of 8 if $\eta=2\%$. Even with this limitation, though, {\it NGST}\, should detect a total (over all redshifts) of roughly one galaxy per square arcsecond for $\eta=20\%$ (or one per 4 square arcseconds if $\eta=2\%$). In conclusion, the field of gravitational lensing is likely to benefit greatly over the next decade from the combination of unprecedented sensitivity and high angular resolution of {\it NGST}.\, \acknowledgments We thank Zoltan Haiman for providing number count data from earlier work. We are also grateful to Tal Alexander and Amiel Sternberg for numerical results of their stellar population model, and to David Hogg for useful discussions. RB acknowledges support from Institute Funds. This work was supported in part by NASA grants NAG 5-7039 and NAG 5-7768 for AL.
\section{Introduction}\label{Intro} \input{intro.tex} \section{Theoretical Results and Illustrations}\label{Theory} \input{theory.tex} \section{An Algorithm Template for~\eqref{KeyTask}}\label{Template} \input{algtemplate.tex} \section{A Partial Realization of Algorithm~\ref{ELAT}}\label{Experiments} \input{exper.tex} \section{Concluding Remarks}\label{Conclusions} \input{conclusion.tex} \def$'${$'$}
\section{Introduction} The Contextual Evaluation Model (CEM) is an expansion of Knowledge Representation (KR) models. KR has a long history going back to the beginning of artificial intelligence in the late 1950's and early 1960's with IPL and LISP and continuing into the present with projects such as Cyc and OWL \cite{gps,lisp,wiki:Cyc,cycSyntax,owl,owl2}. In all these models `knowledge' is represented by `facts' with the goal of using the facts in non-trivial ways. A classic example is taking two facts `All men are mortal' and `Socrates is a man' and deducing that `Socrates is mortal'. \par It is our view that KR is the foundation upon which \textit{intelligence} is constructed. For the purposes of this paper, \textit{intelligence} is broadly defined as an ongoing process by which an \textbf{intity}\label{def-intity}\footnote{The word `intity' is a contraction of `\underline{int}elligent' and `ent\underline{ity}'. It will be used in this paper to refer to a natural or artificial intelligent entity.} transitions from one moment to the next in such a manner that supports that intity's ability to thrive. These moment-to-moment decisions of what-to-do-next are determined by both the intity's knowledge and the changes to the intity's physical environment as detected by the intity's senses. Intelligence, as so defined, can be encapsulated by an intelligence function~(\symI\label{def-I}) as shown in equation (\ref{equIfunction}). It maps the intity's state (\vp{q_m})\label{def-m} and sensory input (\vp{s_m}) at moment\footnote{The symbol $t$, while usually used to denote time, will later be used to represent a \textit{thought}. To avoid confusion, the symbol $m$ will be used to denote a moment in time.} (\vp{m}) to a new state (\vp{q_{m+1}}) at moment \vp{m+1}. Function \symI \ is then reevaluated at moment \vp{m+1} with new sensory input (\vp{s_{m+1}}) to get state \vp{q_{m+2}},~\ensuremath{\dots}. 
\begin{equation}\label{equIfunction} \symI(q_{m},s_{m}) \rightarrow q_{m+1} \Longrightarrow \symI(q_{m+1},s_{m+1}) \rightarrow q_{m+2} \Longrightarrow \dots \end{equation} In (\ref{equIfunction}) the state encompasses all the know-how, expertise, physical capabilities, actions, etc. required by the intity to exist and thrive. The sensory input reflects a snapshot of the intity's current sensory experience of the physical world. KR, in this framework, is the combination of the state and sensory input ($q$ and $s$). Given this, the paper investigates two questions: \begin{enumerate} \setlength\itemsep{-0.4em} \item How should the $q$ and $s$ parameters of \symI \ be modeled? \item What is function \symI \ and how would intelligence arise from its repetitive evaluation? \end{enumerate} The first half of this paper is devoted to explaining the Contextual Evaluation Model. The model consists of five components: the point, key set, binding, context and the contextual evaluation operation. By the end of the first half, the reader will be familiar with the components of the model and how the CEM relates to the intelligence function (\symI). Section~\ref{secCEM} of this paper introduces the components of the CEM. Section~\ref{secV5LanEng} describes the V5 engine, a virtual machine that implements the CEM. Various examples are presented illustrating important features of the model. Section~\ref{secSeq} describes sequences: the ordering of points and actions. Sequences that sing a song, learn a maze and implement a Turing machine are presented. In Section~\ref{secPatterns} an algorithm for learning patterns is given and tested against real-world data in an example that recognizes voices. Section~\ref{secMotivate} discusses the modeling of pleasure and pain within the CEM/V5 framework. \par Singing a song and running a maze, while interesting examples, do not make a convincing argument that the CEM shows intelligence for anything but the simplest forms of intelligence. 
Demonstrating that the CEM is a fitting model for all forms of intelligence is a daunting task, especially considering that there are no universally accepted tests for intelligence\cite{wiki:Turing_test}. Instead, the remainder of the paper focuses on one form of intelligence that is unique to humans: thought and how it manifests as language. The second half of this paper investigates these questions: \begin{enumerate} \setlength\itemsep{-0.4em} \item What is a \textit{thought} and how is it represented with the CEM? \item What is the relationship between thought and language? \item How can an intity \textit{learn} a language? \end{enumerate} Section~\ref{secThoughtsLanMean} defines the \textit{thought} and how thoughts relate to language and meaning. Section~\ref{secThought2Lan} describes how thoughts are converted to language. Section~\ref{secLanLearn} deconstructs the language acquisition process into four steps: recognizing words, grounding words, syntax acquisition and thought-to-thought meaning. Each of these four steps is shown to be a variation of pattern learning as previously described in Section~\ref{secPatterns}. \par Appendix~\ref{secGlossary} is a glossary and index of terms used in this paper. Appendix~\ref{secV5Stuff} contains tables detailing V5 commands, registers and instruction set. Appendices \ref{datapatrecex} and \ref{secParseopRASM} contain supplemental material referenced within the paper. \section{The Contextual Evaluation Model}\label{secCEM} The Contextual Evaluation Model (CEM\label{def-CEM}) is a novel method for storing, retrieving and manipulating four classes of data: \begin{clist} \item Factual data describing an attribute or component of something such as the name of a person, the temperature of an oven, the reason for an action. \item Pattern data that collectively can be used to recognize something. 
\item Sequence data that describe an ordered collection such as the letters in a word, the notes in a song, the steps required to achieve a goal. \item Contextual data that influence the interpretation of the above three classes of data. \end{clist} The basics of the model and the contextual representation of facts are presented in this section. Sequences and patterns are covered in sections~\ref{secSeq} and \ref{secPatterns} following the introduction of requisite preliminary material. \subsection{Points, Key Sets, Bindings} \begin{definition}{} The \textbf{point}\label{def-point} is the atomic unit of the CEM. A point represents something with no restriction as to what that something can be. \end{definition} A point can represent an object, an idea, a number, a quality. It can represent something specific such as a particular person or an abstract concept such as love. A point can represent a class or category of things (dogs) or a specific instance of that class (your dog). A point can represent sensory input: a sound, visual input, an odor. A point may also trigger an action or movement, the generation of a sound, the transmission of a signal. \begin{definition}{} Points created by external senses are called \textbf{sensory}\label{def-sensory point} points while points that trigger external actions are \textbf{control points}\label{def-control point}. All other points are \textbf{internal}\label{def-internal point} points. \end{definition} A point, other than representing something, in and of itself, conveys no additional information about that something (e.g. a point representing your dog does not convey any additional information about your dog or even that it is a dog). \begin{definition}{} A \textbf{key set}\label{def-keyset} is a set of zero or more points. A key set is denoted as a list of points enclosed in brackets, [\vp{point_1} \vp{point_2} \ensuremath{\dots} \vp{point_n}]. 
The \textbf{null key set}\label{def-null keyset} (\nullKS) is an empty key set. \end{definition} \begin{definition}{} A \textbf{binding}\label{def-binding} is a key-value or attribute-value relationship between a non-null key set and a value set (of points). \end{definition} A binding, $\mathbf{b}$, is a 3-tuple: \begin{equation} \mathbf{b} = (\mathbf{b}_k, \mathbf{b}_v, \mathbf{b}_w) = (\left\{ k^i \right\}_{i=1}^n, \left\{ v^i \right\}_{i=1}^{n'}, w) \end{equation} where $\mathbf{b}_k$ is a key set, $\mathbf{b}_v$ is a set of value points and $\mathbf{b}_w$ is the binding weight. \begin{definition}{} The binding \textbf{weight}\label{def-w} is proportional to the number of points in the key set ($|\mathbf{b}_k|$), i.e. the greater the number of points in the key set the greater the binding weight. \end{definition} When a value set consists of a single point then $\mathbf{b}_v$ will represent $\mathbf{b}_v^1$. A binding is notated as $keyset = value$: [\vp{point_1} \vp{point_2} \ensuremath{\dots} \vp{point_n}] = \vp{point_{value}}. The binding weight is normally automatically assigned using implementation-dependent parameters. \subsection{Evaluation and Contextual Evaluation} \begin{definition}{} A binding set (\symB\label{def-B}) is a set of bindings. 
\end{definition} Initially, only binding sets will be considered having the restriction that no two binding key sets are identical within the set\footnote{This restriction will be relaxed in subsequent sections.}: \begin{equation} \forall \mathbf{b}^i, \mathbf{b}^j \in \mathbf{B} \ \textrm{if} \ \mathbf{b}^i_k = \mathbf{b}^j_k \ \textrm{then} \ i = j \end{equation} A simple evaluation is a function that, given a binding set (\symB) and a key set (\symK), searches the binding set for a binding with the identical key set and returns that binding's value: \begin{equation} E_s(\mathbf{B}, \mathbf{K}) = \mathbf{b}_v \ \textrm{where} \ \mathbf{b} \in \mathbf{B} \ \textrm{and} \ \mathbf{b}_k = \mathbf{K} \end{equation} \begin{definition}{} A \textbf{context} (\symC\label{def-C}\label{def-context}) is a separate dynamic set of points. \end{definition} How points are inserted into the context and how they are removed from the context is implementation dependent. A more precise definition of the context is provided in sections~\ref{moreoncontext} and \ref{secEngine}. Equation (\ref{equ-CE}) defines a contextual evaluation. \begin{equation}\label{equ-CE} E_c(\mathbf{B}, \symC, \mathbf{K}) = \mathbf{b}_v \ \textrm{where} \ \mathbf{b} = \arg\max_{\mathbf{b} \in \mathbf{B}} \left\{ \mathbf{b}_w \ | \ \mathbf{b}_k \subseteq ( \mathbf{K} \cup \symC ) \ \textrm{and} \ \mathbf{K} \subseteq \mathbf{b}_k \right\} \end{equation} and this simplifies to (\ref{equEvalNullKS}) when \symK \ is the null key set: \begin{equation}\label{equEvalNullKS} E_c(\mathbf{B}, \symC, \mathbf{K}) = \mathbf{b}_v \ \textrm{where} \ \mathbf{b} = \arg\max_{\mathbf{b} \in \mathbf{B}} \left\{ \mathbf{b}_w \ | \ \mathbf{b}_k \subseteq \symC \right\} \end{equation} For an analogy, consider a binding as locking a value with its key set points. The value cannot be obtained without all the keys. 
A simple evaluation is a search through \symB \ attempting to unlock a binding using all the keys in $\mathbf{K}$. A contextual evaluation searches for a binding with the greatest binding weight (largest number of locks) that is unlocked using all the keys in \symK \ and as many additional key points from the context \symC \ as necessary. For both $E_s$ and $E_c$, if no binding is found the operation fails and the result is undefined. \begin{definition}{} Going forward, \textbf{evaluating a key set} ($\mathbf{K}$) means evaluating $E_c(\mathbf{B}, \symC, \mathbf{K})$ where $\mathbf{B}$ and \symC \ are assumed. \end{definition} \subsection{Examples} The statement `the boiling point of water is $100\degree$ Celsius' could be represented with the binding [\vp{p_1} \vp{p_2} \vp{p_3}]=\vp{p_v} where \vp{p_1} represents the attribute of the boiling point of something, \vp{p_2} represents water, \vp{p_3} represents the Celsius scale and \vp{p_v} represents $100\degree$. Greater clarity and understanding are obtained by replacing symbolic points (\vp{p_x}) with descriptive labels. The same example becomes [\vp{boilingPoint} \vp{water} \vp{celsius}]=100. Descriptive labels may optionally be enclosed in double quotes. Below are some binding examples: \begin{clist} \item[] `Chocolate tastes good' - [\vp{chocolate} \vp{tastes}]=\vp{good} \item[] `John thinks chocolate tastes bad' - [\vp{johnThinks} \vp{chocolate} \vp{tastes}]=\vp{bad} \item[] `The meaning of life is 42 according to the Hitchhiker's Guide to the Galaxy' - [\vp{meaningLife} \vp{HHGuide}]=42 \item[] `the factorial of 5 is 120' - [\vp{factorial} 5]=120 \item[] `representation of the number 22 is ``22''' - [\vp{representation} 22] = `22' \item[] `representation of the number 22 in base 2' - [\vp{representation} 22 \vp{base2}] = `10110' \end{clist} Given the above bindings and an empty context (\symC = $\varnothing$) then the contextual evaluation of [\vp{chocolate} \vp{tastes}] results in \vp{good}. 
If the context contains the point \vp{johnThinks} then the evaluation of the same key set, [\vp{chocolate} \vp{tastes}], results in \vp{bad}. In this second evaluation, both bindings ([\vp{chocolate} \vp{tastes}]=\vp{good} and [\vp{johnThinks} \vp{chocolate} \vp{tastes}]=\vp{bad}) satisfy the constraints but since the second binding has a higher binding weight (more points in the binding key set) it is selected. \subsection{Complete vs Incomplete Contextual Evaluations} The greater the number of points in a binding key set, the greater the binding weight and the greater the specificity of the binding. Your pet dog Fido might have multiple descriptions. In general, if asked what Fido is, you would reply `a dog'. If you met your biology professor in the park and he or she asked, you would reply `a beagle'. But if your biology professor asked you in biology class, you would reply `Canis lupus familiaris'. Bindings describing these possible answers might be: \begin{clist} \item[] [\vp{whatIs} \vp{Fido}]=\vp{dog} \item[] [\vp{whatIs} \vp{Fido} \vp{profKnowItAll}]=\vp{beagle} \item[] [\vp{whatIs} \vp{Fido} \vp{profKnowItAll} \vp{biologyLecture}]=\vp{CanisLupusFamiliaris} \end{clist} In each instance, the same question, `What is Fido?', is answered by evaluating the same key set [\vp{whatIs} \vp{Fido}]. However, the context varies in each instance. If the context contains none of the points in this example the result is \vp{dog}. But if you met your professor outside of school (\vp{profKnowItAll} $\in$ \symC) then the answer would be \vp{beagle} and if you met in biology class (\vp{profKnowItAll}, \vp{biologyLecture} $\in$ \symC) then it would be \vp{CanisLupusFamiliaris}. \par A \textbf{complete}\label{def-complete} evaluation is when the points in the key set are an identical match for the points in the binding key set (i.e. no points in the context were used). 
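The chocolate and Fido evaluations above can be sketched in a few lines of Python. This is a minimal sketch, not the V5 engine: it assumes the binding weight is simply the key-set size, ignores variants and time points, and uses hypothetical names throughout.

```python
def contextual_eval(B, C, K):
    """E_c: value of the highest-weight binding whose key set contains
    all of K and draws any additional keys from the context C."""
    K, C = frozenset(K), frozenset(C)
    # K subset-of b_k subset-of (K union C)
    matches = [(keys, value) for keys, value in B if K <= keys <= K | C]
    if not matches:
        raise LookupError("evaluation fails: no matching binding")
    # binding weight taken as the number of points in the key set
    return max(matches, key=lambda b: len(b[0]))[1]

B = [
    (frozenset({"chocolate", "tastes"}), "good"),
    (frozenset({"johnThinks", "chocolate", "tastes"}), "bad"),
    (frozenset({"whatIs", "Fido"}), "dog"),
    (frozenset({"whatIs", "Fido", "profKnowItAll"}), "beagle"),
    (frozenset({"whatIs", "Fido", "profKnowItAll", "biologyLecture"}),
     "CanisLupusFamiliaris"),
]
```

With an empty context, evaluating [\vp{chocolate} \vp{tastes}] yields \vp{good}; adding \vp{johnThinks} to the context flips the result to \vp{bad}, and the Fido bindings resolve to \vp{dog}, \vp{beagle} or \vp{CanisLupusFamiliaris} depending on which context points are present.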
An \textbf{incomplete}\label{def-incomplete} evaluation is one in which one or more context points are required to find a matching binding. The ability to perform incomplete evaluations allows for tremendous flexibility. Examples throughout this paper will demonstrate this flexibility. \subsection{Point Variants and Twines}\label{def-variant} \begin{definition}{} Every internal point has two \textbf{variants} notated as \vp{point.i} and \vp{point.v}. \end{definition} The first variant is for representing class membership or is-a membership, i.e. `x is a y' as in `an apple is a fruit'. The second variant is for representing a quality of or attribute of something, i.e. `the x of y is z' as in `the color of the apple is red'. \begin{definition}{}\label{def-twine} A \textbf{twine} is a binding with a variant point in its key set. Only one point of a binding's key set may be a variant. \end{definition} \par Twines have an optional abbreviated notation. A simple is-a relationship is shown in~(\ref{equEqvIsaTwine}a). The syntax in~(\ref{equEqvIsaTwine}b) is used when other conditional/contextual points (\vp{c_i}) are included: \begin{subequations}\label{equEqvIsaTwine} \begin{align} \vp{p'}<\vp{p} \ &\textrm{is equivalent to} \ [\vp{p.i}] = \vp{p'} \\ \vp{p'}<\vp{p}|\vp{c_1},\vp{c_2},\ensuremath{\dots} \ &\textrm{is equivalent to} \ [\vp{p.i} \ \vp{c_1} \ \vp{c_2} \ensuremath{\dots}] = \vp{p'} \end{align} \end{subequations} A similar notation is used for value-of twines. (\ref{equEqvValTwine}a) is the basic notation, (\ref{equEqvValTwine}b) is with additional points. 
\begin{subequations}\label{equEqvValTwine} \begin{align} \vp{p}>\vp{p'} \ &\textrm{is equivalent to} \ [\vp{p.v}] = \vp{p'} \\ \vp{p}>\vp{p'}|\vp{c_1},\vp{c_2},\ensuremath{\dots} \ &\textrm{is equivalent to} \ [\vp{p.v} \ \vp{c_1} \ \vp{c_2} \ensuremath{\dots}] = \vp{p'} \end{align} \end{subequations} If two points are doubly twined (\vp{p}>\vp{p'} and \vp{p}<\vp{p'}) then the notation in (\ref{equEqvBothTwine}a/b) can be used: \begin{subequations}\label{equEqvBothTwine} \begin{align} \vp{p}:\vp{p'} \ &\textrm{is equivalent to} \ [\vp{p.v}] = \vp{p'} \ \textrm{and} \ \ [\vp{p'.i}] = \vp{p} \\ \vp{p}:\vp{p'}|\vp{c_1},\vp{c_2},\ensuremath{\dots} \ &\textrm{is equivalent to} \ [\vp{p.v} \ \vp{c_1} \ \vp{c_2} \ensuremath{\dots}] = \vp{p'} \ \textrm{and} \ \ [\vp{p'.i} \ \vp{c_1} \ \vp{c_2} \ensuremath{\dots}] = \vp{p} \end{align} \end{subequations} As a general rule, the is-a twine is used to represent relationships and the value-of twine is evaluated to `retrieve' a relationship value. For example, given \vp{color}<\vp{red} and \vp{red}<\vp{ball} (red is a color and the ball is red) then evaluating [\vp{color.v} \vp{ball}] results in \vp{red} (the color of the ball is red). \subsection{Time} Time is an integral component of knowledge representation and should be included in an AI knowledge representation data model. With a few minor changes, time and the flow of time can easily and elegantly be represented in the CEM. First, a time point is created with an associated magnitude that increments at a constant rate (e.g. every 100 milliseconds); the time point is always in the context. Second, the contextual evaluator ($E_c$) is modified so that bindings containing a time point match if the time point in the context has a magnitude equal to or greater than that of the time point in the binding. Third, the binding weight of a binding containing a time point is based, in part, on the magnitude of the time point. 
Given two bindings each containing a time point and having identical other points then the binding with the greater time point would have the greater binding weight. \par In the example below time is notated as m(\vp{n}) for the moment in time with magnitude~\vp{n}. \begin{lstlisting}[numbers=none] maritalStatus>single|John,m(100) maritalStatus>married|John,m(200) maritalStatus>divorced|John,m(300) maritalStatus>remarried|John,m(400) \end{lstlisting} With a current time of \vp{m(1000)} in the context the evaluation of [\vp{maritalStatus.v} \vp{John}] would return \vp{remarried}. All four bindings would be potential matches but the last one with the largest time magnitude has the greatest binding weight. With time \vp{m(350)} in the context evaluating [\vp{maritalStatus.v} \vp{John}] returns \vp{divorced}. \subsection{More on Context and Evaluation of is-a Twines}\label{moreoncontext} The context consists of explicit and implicit points. The explicit points are those directly inserted into the context. The implicit points are obtained by evaluating the is-a twine for each point in the context. The results of successful evaluations are additionally added to the context. These is-a context points are linked to the base (explicit) point such that if the base point is removed from the context then any/all is-a points are also removed. \par While most contextual evaluations have a single result (the binding with the highest weight), the is-a evaluations return all valid binding results not just the is-a binding with the highest weight. The exception to this is when multiple potential values differ only in the time point (\vp{m}). In these cases the single binding with the highest time point is selected. \par Figure~\ref{figexpcon} shows two explicit points in the context: \{\vp{Harry},$m(1000)$\}. Point \vp{Harry} has the following is-a twines represented as a tree\footnote{In the tree, nodes are points and branches are is-a twines. 
See section~\ref{secThoughts} for more on tree representations.}: \vp{man}<\vp{Harry}, \vp{human}<\vp{man}, \vp{married}<\vp{Harry}|$m(100)$, \vp{single}<\vp{Harry}|$m(50)$ and \vp{father}<\vp{Harry}. The full context (explicit + implicit) is as shown in figure~\ref{figimpcon}. \begin{figure}[H] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \node[] { \{ }; \begin{scope}[xshift=1cm,yshift=-1mm] \Tree [.\vp{Harry} [.\vp{man} \vp{human} ] \vp{married} \vp{father} ] \end{scope} \begin{scope}[xshift=3.25cm] \node[] {, \enspace $m(1000)$ \enspace \}}; \end{scope} \end{tikzpicture} \caption{Explicit context}\label{figexpcon} \end{subfigure} \begin{subfigure}[b]{0.6\textwidth} \centering \{ \vp{Harry} , \vp{man} , \vp{human} , \vp{married} , $m(1000)$ , \vp{father} \} \caption{Explicit \& implicit context}\label{figimpcon} \end{subfigure} \caption{Explicit and implicit context points}\label{figimpexpcon} \end{figure} \subsection{Relating the Intelligence Function (\symI) to the CEM} The introduction posed two questions: what is the intelligence function and what are its arguments? Within the framework of the CEM, the intelligence function is the contextual evaluation function $E_c$ (equation~\ref{equ-CE}). The sensory inputs ($s$) are the sense points injected into the context. The state ($q$) is a combination of the binding set and the non-sensory points in the context. \par Figure~\ref{figce-i-relation} illustrates how an intity would utilize the CEM. Knowledge would be stored as bindings within the binding set (\symB). The context (\symC) is both referenced and updated by the contextual evaluation process. The context would also be updated periodically by the intity's sensory input. Context control points dictate any external actions taken by the intity. Each evaluation of the null key set is the transition from one moment to the next. 
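This loop can be sketched as a toy Python model (the binding weights, the `do:` prefix marking control points, and all point names are illustrative assumptions, not V5 syntax):

```python
# Minimal sketch of the CEM loop: sensory points enter the context,
# the null key set is evaluated against the binding set, and the
# winning value updates state or drives control output.

def cem_step(bindings, context, sense_points):
    """One moment: inject senses, evaluate the null key set, emit control."""
    context = context | sense_points          # sensory input (s) enters C
    matches = [(k, v) for k, v in bindings if k <= context]
    if not matches:
        return context, None
    # The heaviest matching binding (most key points) wins.
    _, value = max(matches, key=lambda kv: len(kv[0]))
    context = context | {value}               # result becomes part of state (q)
    control = value if value.startswith("do:") else None
    return context, control

# Illustrative knowledge base.
bindings = [
    (frozenset({"raining", "outside"}), "do:open-umbrella"),
    (frozenset({"raining"}), "wet"),
]
ctx, act = cem_step(bindings, set(), {"raining", "outside"})
print(act)  # do:open-umbrella
```

With only \{raining\} in the context the lighter one-point binding wins instead, adding the state point "wet" without producing any control output.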
\begin{figure}[H] \centering \begin{tikzpicture}[auto, node distance=3cm,>=latex'] \node [block](bs){$\symB = q$}; \node [block, right of=bs](ce){$\symI = E_c()$ }; \node[cloud, cloud puffs=12, cloud ignores aspect, align=center, draw, right of=ce, minimum width=2cm, minimum height=2cm] (context) {$\symC = s + q$}; \node [above of=context, node distance=1.75cm](sense) {sensory input ($s$)}; \node [below of=context, node distance=1.75cm](control){control output}; \node [below of=ce, node distance=1.5cm] (eval) {eval \nullKS}; \circledarrow{}{eval}{.75cm}; \draw [<->] (bs.east) -- (ce.west); \draw [<->](ce.east) -- (context.west); \draw [line width=.1cm,->,color=black!40](sense.south) -- (context.north); \draw [line width=.1cm,->,color=black!40](context.south) -- (control.north); \end{tikzpicture} \caption{The CEM - \symI \enspace Relationship}\label{figce-i-relation} \end{figure} Remaining to be demonstrated is how the CEM can effectively compute intelligent behavior. This will be accomplished with the help of the V5 language/engine and working examples. \section{The V5 Language and Engine}\label{secV5LanEng} V5 is an experimental language and contextual evaluation engine. The low level language is similar to a machine language in that it consists of primitive opcodes that operate within a virtual machine. The machine is accessed through a command interpreter. The components of V5 are its point sets, the opcodes for the virtual machine and a suite of interpreter commands for defining points, loading point sets, running the interpreter and of course debugging. It is written in C and comprises approximately 6,000 lines of code. \par The CEM consists solely of points. The sole data type of V5 is the point. There are no integers, floating point numbers or character strings and correspondingly no instructions that operate on those data types. There are only instructions to manipulate points, create bindings/twines and evaluate bindings. 
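For intuition, even a lookup such as the earlier `color of the ball' twine example reduces to nothing but points and is-a links (a Python sketch; strings stand in for opaque point references, and the chain walk is an illustrative simplification of contextual evaluation, not the V5 implementation):

```python
# Toy model: `isa` records is-a twines; color<red ("red is a color")
# becomes isa["red"] = "color". Illustrative only.

def find_instance(target, point, isa):
    """Walk point's is-a chain and return the point that is an
    instance of target, or None if the chain never reaches target."""
    while point in isa:
        if isa[point] == target:
            return point
        point = isa[point]
    return None

# color<red and red<ball: the ball is red, and red is a color.
isa = {"ball": "red", "red": "color"}
# Evaluating [color.v ball]: walking ball's is-a chain finds that
# red is the instance of color, so the color of the ball is red.
print(find_instance("color", "ball", isa))  # red
```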
The minimalist design is intentional to explore the potential of the CEM. \par Points in V5 can be dynamically created using instructions within V5 or points can be declared through the command interpreter using the `def' command. This command creates a new point and associates a text label to the point. Every point has an internal reference number. When a point is output to the console, V5 checks to see if that point has a label. If so then the label is output. If the point has no label then the point is output as $\#nnn$ where \vp{nnn} is the hexadecimal value of the point's reference number. Variants have `.v' or `.i' suffix. Surrogate points (see section~\ref{def-surrogate}) have a `?' suffix. \par Many code examples will be presented. In these examples, comments are enclosed in /* \ensuremath{\dots} */. V5 output is displayed in boldface. Every line within an example is numbered starting with line 1. Examples may be interspersed with descriptive text. In this case the line numbers continue. \begin{figure}[H] \begin{lstlisting}[xleftmargin=2cm] /* this is a comment */ ps a,b,c,d \end{lstlisting} \begin{exDesc} \begin{adjustwidth}{2cm}{}This is descriptive text in the middle of the example\end{adjustwidth} \end{exDesc} \begin{lstlisting}[xleftmargin=2cm,firstnumber=3] /* The example continues at this line (3). Note that the output from the print queue (PQ) (line (*\ref{line:expqout}*)) includes the current time moment (`20' in this example) */ run (*\small \textbf{PQ(20): an example of V5 output}\label{line:expqout}*) \end{lstlisting} \caption{V5 example conventions} \end{figure} \subsection{The Engine}\label{secEngine} Figure~\ref{bdofv5} shows a block diagram of the V5 engine and its relationship to the intelligence function (\symI) and contextual evaluation function. The left-hand side consists of several register sets. These hold sets of points for various V5 instructions. 
\begin{definition}{} The point set (\textbf{PS})\label{def-PS} serves a dual function as a push down stack and as the context for the engine. \end{definition} The context consists of all the points in the PS plus all is-a points off of these points, plus all is-a points off of the is-a points$\dots$ as explained in section~\ref{moreoncontext}. The is-a points for a point \vp{p} are determined by evaluating the is-a variant of that point ([\vp{p}.i]) and then repeating for each resulting point of the evaluation. This continues until the evaluation fails (e.g. there are no more is-a points). Each evaluation is contextual and uses the points in the context at that instance. This is intentionally chaotic. Any given initial set of PS points may result in one of any number of different contexts due to the ordering and timing of the evaluations of all the possible is-a points. The aggregation set (\textbf{AS}\label{def-AS}) holds points used by the aggregate set reduce (opRAS and opRASM) instructions. Section~\ref{secASReduce} further describes the AS. The print queue (\textbf{PQ}\label{def-PQ}) holds a set of points for output to the console. With the exception of debugging features, the PQ and related instructions are the only output mechanism in V5. The twine context (\textbf{TC}\label{def-TC}) is a set of points that are automatically inserted into a twine binding when the twine is created. The miscellaneous register set (\textbf{MR}\label{def-MR}) is a collection of special registers updated with the running of the engine. A list of these can be found in appendix~\ref{secregisters}. \par The point-processing-unit (\textbf{PPU}\label{def-PPU}) executes V5 instructions. The contextual evaluator (\textbf{E\textsubscript{C}}\label{def-EC}) performs all contextual evaluations as requested from the PPU. The \textbf{BS} is the binding set for the engine. All bindings are maintained within the BS. Bindings are created and inserted into BS by the PPU. 
The command interpreter \textbf{CI}\label{def-CI} handles the interface via the console between the V5 engine and the user. \begin{figure}[H] \begin{center} \begin{tikzpicture}[auto, node distance=3cm,>=latex'] \node [block, name=as] {AS}; \node [block, name=ss, below of=as, node distance=1cm] {MR}; \node [block, name=ps, below of=ss, node distance=1cm]{PS/\symC}; \node [block, name=pq, below of=ps, node distance=1cm] {PQ}; \node [block, name=tc, below of=pq, node distance=1cm] {TC}; \node [block, name=ppu, right of=ps]{PPU}; \node [block, name=ce, right of=ppu]{E\textsubscript{C}/\symI}; \node [block, name=bs, above of=ce, node distance=2cm]{BS/\symB}; \node [block, name=ci, below of=ce, node distance=2cm]{CI}; \draw [-] (1.5,-4)--(1.5,0); \draw [-] (1.5,0)--(as.east); \draw [-] (1.5,-1)--(ss.east); \draw [-] (ps.east)--(ppu.west); \draw [-] (1.5,-3)--(pq.east); \draw [-] (1.5,-4)--(tc.east); \draw [-] (ci.west) -| (ppu.south); \draw [-] (ppu.east)--(ce.west); \draw [-] (ci.north)--(ce.south); \draw [-] (bs.west) -| (ppu.north); \draw [-] (bs.south)--(ce.north); \end{tikzpicture} \caption{Block Diagram of V5 Engine}\label{bdofv5} \end{center} \end{figure} \subsubsection{Operation} After the engine has been initialized with the necessary points and bindings the `run' command is given to the command interpreter. This initiates the following: \begin{enumerate} \item Scan the points in the PS from top to bottom. \item If the point is an opcode then remove it from the PS and execute it. \item If the point is a register then replace it with the register's current value. \item If the point is a value variant then evaluate the key set ([\vp{point}.v]). If the evaluation succeeds then remove the variant point and push the resulting value(s) onto the PS and continue with step \#1.\footnote{Except if the result is a single point null. 
In this case nothing is pushed onto the PS.} If any point pushed onto the PS is already in the PS then that instance is removed so that only the instance at the top of the PS remains. If the evaluation fails then leave the value variant unchanged in the PS. \item Leave all other points in the PS unchanged. \item Stop if there are no remaining instructions or value variants in the PS otherwise continue with step \#1. \end{enumerate} \subsection{Commands and the Instruction Set} The engine interface is command driven. All user input begins with a command followed by command arguments. A list of V5 commands is found in appendix~\ref{secV5Commands}. \par V5 is an experimental system and many instructions have been implemented for various experiments. Appendix~\ref{secopcodes} lists only the instruction opcodes used in the examples in this document. All instruction opcodes begin with lower case `op' and end with an uppercase mnemonic. If an opcode ends with a lower case `t' then it is identical to the opcode without the suffix. The suffix indicates that trace debugging output is to be performed when the opcode is executed. For example, the instruction opEVAL constructs and evaluates a key set while the instruction opEVALt performs the same action plus outputs trace information to the console. The naming convention for the processing registers (MR) is lower case `r' followed with an upper-case mnemonic. For example, the opNEW instruction creates a new point and updates register rNEW with that new point. A list of the registers is found in appendix~\ref{secregisters}. \subsubsection{Aggregate Set (AS) Reduction}\label{secASReduce} The purpose of the AS is to perform evaluations on multiple contiguous pairs of points. It is used primarily for parsing natural language to thoughts and generating natural language from thoughts. The aggregate set (AS) may be loaded with a set of points using the opLASM instruction. 
The opcode opRASM performs the reduction operation as follows: \begin{enumerate} \item Starting at the left side of the AS, add two contiguous points (plus any is-a points) to the PS and evaluate the null key set. If the evaluation succeeds then temporarily hold the result along with the binding weight of the matched binding. \item Move right one point and repeat the null key set evaluation, again holding any success off to the side. \item When all contiguous pairs have been evaluated then select the held binding with the highest binding weight. In the case of multiple bindings choose the leftmost one. \item If any of the points in the binding are prefaced with a minus sign then remove the corresponding point from the AS. \item Execute the value of the selected binding. This means taking all value points except the last, inserting them into the PS and executing them. Take the last value point and insert it into the AS at the point of the matched points (unless it is the \vp{null} point). \end{enumerate} \par There are extended binding forms that can be used while performing AS reduction. Take the binding found in the example below: \begin{lstlisting}[numbers=none]
bind +2 [-np/-noun] eoa,noun,noun.v,opEVAL,np.v,rEVAL,opTWISA,np.v
\end{lstlisting} The weight of a binding is normally determined by the number of points in the binding key set. The `+2' in the above example adds 2 to the weight of the binding, giving it higher precedence over other two-point bindings. The use of the minus sign before a point (-\vp{np}, -\vp{noun}) indicates that the point is to be removed from the AS if this binding is matched. The slash means that the ordering of the points matters. In this example an \vp{np} point followed by a \vp{noun} point would match. The reverse would not. \par Normally a binding (other than a value twine) is only allowed to have one value point. During AS reduction, multiple value points are permitted; they are pushed onto the PS one point at a time.
If a point is an opcode then it is immediately executed. Note that pushing the value points from left to right onto a stack effectively reverses the ordering of the arguments. \par The evaluation of [\vp{p.v}] is a little different. The AS and all of its is-a points are first searched. If \vp{p} is located then [\vp{p.v}] is taken as the root point. \par After all value points have been handled, the last point (top of the PS) is pulled from the PS. If it is not the \vp{null} point then it is inserted into the AS at the point of the match. \subsection{Evaluation of [\vp{p.v}] with Respect to the PS} The evaluation of a value variant ([\vp{p.v}]) within V5 is not treated strictly as a CEM evaluation. V5 first examines the is-a points in the PS for an instance of \vp{p}. If found then the value of [\vp{p.v}] is taken as the root value of that is-a tree in the PS. Otherwise the evaluation is performed as a normal CEM evaluation. Note that this only applies to the evaluation of [\vp{p.v}] where \vp{p.v} is the only point of the key set. For example, if the PS contains \{\vp{a},\vp{b},\vp{c}\} and \vp{p}<\vp{c} then the evaluation of [\vp{p.v}] results in \vp{c} (\vp{c} is an instance of \vp{p} within the current context). \subsection{Simple Examples} This example is about horses and four specific horses. Three of the horses (Pegasus, Mr. Ed and Seabiscuit\footnote{Pegasus is a mythological winged horse\cite{pegasus}, Mr. Ed is a talking horse featured on a TV series of the same name\cite{wiki:Mister_Ed}, Seabiscuit is a famous racehorse\cite{seabiscuit}.}) are defined as horses on line~\ref{line:exHorses}. The example examines whether or not horses can talk and fly. In general, they cannot, as defined on line~\ref{line:exhorse1}. But Mr. Ed is able to talk (\ref{line:exhorse2}) and Pegasus can fly (\ref{line:exhorse3}). Seabiscuit, while being very fast, can neither talk nor fly.
\begin{lstlisting}
def horse, Pegasus, MrEd, Seabiscuit
def canTalk, canFly
def yes, no
twine horse<Pegasus,MrEd,Seabiscuit(*\label{line:exHorses}*)
twine canTalk>no|horse ; canFly>no|horse(*\label{line:exhorse1}*)
twine canTalk>yes|MrEd,horse(*\label{line:exhorse2}*)
twine canFly>yes|Pegasus,horse(*\label{line:exhorse3}*)
\end{lstlisting}
\begin{exDesc} The evaluations below test the horses for special qualities. \end{exDesc}
\begin{lstlisting}[firstnumber=10]
eval [canFly.v Seabiscuit] (*\small \textbf{ result = no}*)
eval [canTalk.v MrEd] (*\small \textbf{ result = yes}*)
eval [canFly.v Pegasus] (*\small \textbf{ result = yes}*)
eval [canTalk.v Pegasus] (*\small \textbf{ result = no}*)
\end{lstlisting}
\begin{exDesc} A new point is defined for Secretariat\footnote{A more recent racehorse without any special abilities other than to win races.\cite{wiki:Secretariat_(horse)}.}. The evaluation on line~\ref{line:exSecFail} fails. Why? Because the only knowledge about whether or not something can talk applies to horses; only if Secretariat is declared as a horse (line~\ref{line:exSecHorse}) will the evaluation succeed (line~\ref{line:exSecOK}). \end{exDesc}
\begin{lstlisting}[firstnumber=18]
def Secretariat
eval [canTalk.v Secretariat](*\label{line:exSecFail}*) (*\small \textbf{ ? No result found}*)
twine horse<Secretariat(*\label{line:exSecHorse}*)
eval [canTalk.v Secretariat](*\label{line:exSecOK}*) (*\small \textbf{ result = no}*)
\end{lstlisting}
\section{Sequences}\label{secSeq} \begin{definition}{} A \textbf{sequence}\label{def-sequence} is a collection of bindings that define an ordering of points or a set of steps to accomplish a goal. \end{definition} A sequence may be \textit{tightly coupled} where the \textit{next} point of the sequence directly follows the prior. Examples of tightly coupled sequences are: the letters of the alphabet; the letters in a word; the words in a sentence; the digits of $\pi$; the instructions in a computer program.
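A tightly coupled sequence can be sketched as a chain in which evaluating each point's value yields the next point, as in the spelling examples that follow (a toy Python model; the dictionaries stand in for is-a and value twines, and all names are illustrative):

```python
# Toy model of a tightly coupled sequence spelling a word.
letters = {"c1": "c", "c2": "a", "c3": "t"}        # is-a twines: letter<point
next_point = {"c1": "c2", "c2": "c3", "c3": None}  # value twines: point>next

def run_sequence(start):
    out, p = [], start
    while p is not None:
        out.append(letters[p])   # the letter associated with this point
        p = next_point[p]        # evaluate [p.v] to step to the next point
    return out

print("".join(run_sequence("c1")))  # cat
```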
Sequences that are \textit{loosely coupled} are a collection of bindings defining steps/actions where the \textit{next} step does not necessarily directly follow the current but arises from some other triggering context. Loosely coupled sequences are also known as \textit{goals}. An example would be the goal of driving from home to the grocery store. \par Sequences, being solely a collection of bindings, do nothing in and of themselves. There is no necessary correlation between the ordering of bindings in a sequence and the resulting ordering of the sequence points. Performing a sequence requires an executing engine. Multiple sequences can be running concurrently. The following examples demonstrate how two concurrent sequences interact to print the spelling of a word. A step within a sequence may itself be the start of another sequence. And everything is contextual. There is tremendous flexibility in the definition and execution of sequences in the ever-changing context of the real world. \subsection{Implementing and Running Sequences} There are many ways to represent sequences within the CEM. Suppose we want a sequence for the spelling of `book'. The first example below shows a sequence of four points (\vp{sb1}, \vp{sb2}, \vp{sb3}, \vp{sb4}). The twined value of \vp{sb1} consists of the two points "b" and \vp{sb2.v} (line~\ref{line:sb1line}). Evaluating [\vp{sb1.v}] results in both "b" and \vp{sb2.v} being added to the PS. The V5 engine sees \vp{sb2.v}, evaluates it and replaces it with "o" and \vp{sb3.v}. This continues through \vp{sb4}. Its evaluation, [\vp{sb4.v}], is only "k", so after adding "k" to the PS the run is finished. The result is that the PS contains the points "k", "o" and "b". Recall that a given point may only occur once in the PS. That is why only one instance of "o" (line~\ref{line:exBookSeq1}) is shown. This sequence iterates through the spelling of `book', but not in a particularly useful way.
\begin{lstlisting}
def sb1, sb2, sb3, sb4
twine sb1>"b",sb2.v(*\label{line:sb1line}*)
twine sb2>"o",sb3.v
twine sb3>"o",sb4.v
twine sb4>"k"
ps sb1.v
run
show ps (*\small \textbf{ps: "k", "o", "b"}\label{line:exBookSeq1}*)
\end{lstlisting}
The second example outputs the spelling of `book'. The evaluation starts with \vp{sb1.v} in the PS. It is evaluated and the letter `b' is added to the print queue (line~\ref{line:exsb1}). It continues until [\vp{sb4.v}] is evaluated and `k' is added to the print queue and the contents of PQ are output to the console. The problem with this sequence is that it is only useful for spelling `book' and that the spelling is only output to the console. The spelling cannot be referenced in any other way.
\begin{lstlisting}
def sb1, sb2, sb3, sb4
twine sb1>"b",opADDPQ,sb2.v(*\label{line:exsb1}*)
twine sb2>"o",opADDPQ,sb3.v
twine sb3>"o",opADDPQ,sb4.v
twine sb4>"k",opADDPQ,opOUTPQ
ps sb1.v
run
 (* \small \textbf{PQ(2): "b" "o" "o" "k"} *)
\end{lstlisting}
The third example in this series demonstrates a more generalized and useful approach by having two sequences. One sequence is defined for the spelling of `book'. Then another sequence is defined that takes the first sequence (or any similar sequence) and outputs the spelling of the word to the console. The sequence points for the spelling are \vp{b1} through \vp{b4}. They are linked through value twines (line~\ref{line:spellword}). The sequence works/runs as follows: [\vp{spell} `book'] is bound to the first point (\vp{b1}) of the sequence (line~\ref{line:exBookSeq2a}). The evaluation of [\vp{b1.v}] gives the next point (\vp{b2}) of the sequence. Each of the points \vp{b1} through \vp{b4} has an is-a twine of the corresponding letter. (V5 automatically defines the letters of the alphabet and twines each letter to the \vp{letter} point: \vp{letter}<`a', \vp{letter}<`b', \ensuremath{\dots}).
Therefore the evaluation of [\vp{letter.v}] with \vp{b1} in the context results in `b', with \vp{b2} in the context `o', etc.
\begin{lstlisting}
def word,"book",spell,pEnd
twine word<"book"
def b1,b2,b3,b4
bind [spell "book"] b1(*\label{line:exBookSeq2a}*)
twine b1>b2 ; b2>b3 ; b3>b4 ; b4>pEnd(*\label{line:spellword}*)
twine "b"<b1 ; "o"<b2 ; "o"<b3 ; "k"<b4
twine spell<b1,b2,b3,b4
\end{lstlisting}
\begin{exDesc} The second sequence is defined by points \vp{s} through \vp{s4}. The start of the sequence evaluates [\vp{spell} \textit{word}] to get the starting sequence point for the spelling of the word. Line~\ref{line:spellisas} uses the opcode opPSISAS to re-evaluate all of the is-a points in the PS so that the evaluation of \vp{letter.v} results in the letter for the current step in the word sequence. That letter is added to the print queue. Line~\ref{line:spellnext} attempts to get the next sequence point of the word [\vp{spell.v}]. If the evaluation results in the next sequence point of the word the opVAL instruction converts the point to its value variant which is then evaluated by the V5 engine. If that result is another \vp{b_n} point then \vp{s4.v} evaluates to \vp{s2.v}. If it is \vp{pEnd} then the twine on line~\ref{line:spelldone} is run, the spelling is output and the sequence terminates. \end{exDesc}
\begin{lstlisting}[firstnumber=8]
def s,s2,s3,s4
twine s>word.v,spell,eoa,opEVAL,rEVAL,s2.v
twine s2>opPSISAS,letter.v,opADDPQ,s3.v(*\label{line:spellisas}*)
twine s3>spell.v,opVAL,s4.v(*\label{line:spellnext}*)
twine s4>s2.v
twine s4>opOUTPQ | pEnd(*\label{line:spelldone}*)
ps "book",opPSISAS,s.v
run
 (*\small \textbf{PQ(2): "b" "o" "o" "k"}*)
\end{lstlisting}
\begin{exDesc} The next section shows how context can be used to add a plural ending to a word. The spelling for `spy' is defined just as `book' was above using points \vp{t1}, \vp{t2} and \vp{t3}.
At line~\ref{line:spellplural} an alternative value for \vp{t2.v} is given in the context of \vp{plural}. Instead of linking to \vp{t3} (`y') it links to \vp{pEnd2}. The value of \vp{s4} is also redefined in the context of \vp{plural} and \vp{pEnd2} to add `ies' to the PQ and then output the PQ to the console. Line~\ref{line:spellies} shows the output for spelling the word `spy' with \vp{plural} added to the context. \end{exDesc}
\begin{lstlisting}[firstnumber=17]
twine word<"spy"
def t1,t2,t3
bind [spell "spy"] t1
twine t1>t2 ; t2>t3 ; t3>pEnd
twine "s"<t1 ; "p"<t2 ; "y"<t3
twine spell<t1 ; spell<t2 ; spell<t3
def pEnd2,"ies"
twine t2>pEnd2 | plural(*\label{line:spellplural}*)
twine s4>"ies",opADDPQ,opOUTPQ | pEnd2,plural
ps "spy",plural,opPSISAS,s.v
run
 (*\small \textbf{PQ(2): "s" "p" "ies"}\label{line:spellies} *)
\end{lstlisting}
\subsection{Singing a Song} Singing the first few notes of the song `Mary had a little lamb' is the purpose of this example. Currently, V5 output is limited to console text so the singing is emulated by outputting pitch and lyrics with the proper timing. The example uses future time points to control the timing of the console output. If the PS contains a value variant and the twine defining the value contains a future moment (time) point (\vp{p}>\vp{value}|\vp{m}(\textit{future-time})) then it will remain (unevaluated) in the PS until the current time is equal to or greater than the \textit{future-time} in the twine binding. \par The first section below defines the basic points of the song and song sequence. The points \vp{n1} through \vp{n13} are for the first 13 notes of `Mary had a little lamb'. Line~\ref{line:marynotes} twines all the notes to the note point. Line~\ref{line:marynoteseq} links each note to the subsequent note ($note_n.v$ $\rightarrow$ $note_{n+1}$). Line~\ref{line:marynoteword} associates the words/lyrics with each of the notes.
\begin{lstlisting}[firstnumber=1]
def song,lyrics,note,songEnd
twine songEnd>null
def MarySong,Mar,ee,had,a,lit,tle,lamb
twine song<MarySong
twine lyrics<Mar,ee,had,a,lit,tle,lamb
def n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,n13
twine note<n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,n13(*\label{line:marynotes}*)
twine n1>n2 ; n2>n3 ; n3>n4 ; n4>n5 ; n5>n6 ; n6>n7 ; n7>n8 ;(*\label{line:marynoteseq}*)
  n8>n9 ; n9>n10 ; n10>n11 ; n11>n12 ; n12>n13 ; n13>songEnd
twine Mar<n1 ; ee<n2 ; had<n3 ; a<n4 ; lit<n5 ; tle<n6 ; lamb<n7 ;(*\label{line:marynoteword}*)
  lit<n8 ; tle<n9 ; lamb<n10 ; lit<n11 ; tle<n12 ; lamb<n13
twine MarySong>n1 /* n1 is first point in sequence */
\end{lstlisting}
\begin{exDesc} The timings of the notes are specified below. The timing point is twined to different numbers of opINCT instructions based on the tempo and note type (\vp{half} or \vp{qtr}/quarter notes). Line~\ref{line:marytime} twines \vp{half} or \vp{qtr} to each of the notes. The opINCT instruction increases the time point within the twine context set (TC). \end{exDesc}
\begin{lstlisting}[firstnumber=13]
def timing,tempofast,temposlow,qtr,half
twine timing>opINCT,opINCT,opINCT|qtr,tempofast
twine timing>opINCT,opINCT,opINCT,opINCT,opINCT,
  opINCT,opINCT,opINCT,opINCT,opINCT|half,tempofast
twine timing>opINCT,opINCT,opINCT,opINCT,opINCT,
  opINCT|qtr,temposlow
twine timing>opINCT,opINCT,opINCT,opINCT,opINCT,
  opINCT,opINCT,opINCT,opINCT,opINCT,opINCT,opINCT,
  opINCT,opINCT,opINCT,opINCT,opINCT,opINCT,opINCT,
  opINCT|half,temposlow
twine qtr<n1 ; qtr<n2 ; qtr<n3 ; qtr<n4 ; qtr<n5 ; qtr<n6 ; half<n7 ;(*\label{line:marytime}*)
  qtr<n8 ; qtr<n9 ; half<n10 ; qtr<n11 ; qtr<n12 ; half<n13
\end{lstlisting}
\begin{exDesc} Similarly, pitches are associated with each note. Each note now has two is-a twines: one for timing and one for pitch. The value (\textit{note}.v) of each note links to the next note. Pitch is represented with a letter-number pair.
Each letter (A-G) represents a note on the scale and the number corresponds to the octave above middle C. For example, the note B1 represents the B above middle C (C0). \end{exDesc}
\begin{lstlisting}[firstnumber=25]
def pitch,a0,b0,c0,d0,e0,f0,g0,a1,b1,c1,d1,e1,f1,g1
twine pitch<a0,b0,c0,d0,e0,f0,g0,a1,b1,c1,d1,e1,f1,g1
twine b1<n1 ; a1<n2 ; g0<n3 ; a1<n4 ; b1<n5 ; b1<n6 ; b1<n7 ;
  a1<n8 ; a1<n9 ; a1<n10 ; b1<n11 ; d1<n12 ; d1<n13
\end{lstlisting}
\begin{exDesc} The section below sets up another sequence to iterate through all the notes of a song (points \vp{swt} through \vp{swt5}). Line~\ref{line:maryfirstnote} gets the first note of the sequence. This line also creates a new point to be used to distinguish one execution of this sequence from another. Line~\ref{line:maryfirstlp} gets the lyric and pitch associated with the current note and places each into the print queue. Then it outputs the queue and clears it. On line~\ref{line:marynextnote}, \vp{note.v} results in the note and opVAL converts the note to its value variant. The evaluation of the value variant gives the next note. The next/new note is saved in working memory register 1 (opLWM1). If the next note is \vp{songEnd} then the next step is at line~\ref{line:maryend}, otherwise at line~\ref{line:marycontinue} where \vp{swt5} is twined to \vp{swt2.v} to continue with the next note. This twine contains a future time point. Evaluations fail until the future time is reached (twine \vp{swt5}>\vp{swt2.v}|\vp{instancepoint,m(future),note}).
\end{exDesc}
\begin{lstlisting}[firstnumber=29]
def swt,swt2,swt3,swt4,swt5
twine swt>song.v,opVAL,opNEW,rNEW,swt2.v(*\label{line:maryfirstnote}*)
twine swt2>opPSISAS,lyrics.v,opADDPQ,pitch.v,opADDPQ,opOUTPQ,opCLRPQ,swt3.v(*\label{line:maryfirstlp}*)
twine swt3>opRCTX,timing.v,note.v,opVAL,opLWM1,rWM1,swt4.v(*\label{line:marynextnote}*)
twine swt4>opPSISAS,rWM1,opACTX,rNEW,opACTX,swt5,@swt2.v,opTWVAL,swt5.v(*\label{line:marycontinue}*)
twine swt4>opSTATE,songEnd,opVAL|songEnd(*\label{line:maryend}*)
\end{lstlisting}
\begin{exDesc} All the components of the sequences are now ready to run. The song is started by placing the song point, \vp{MarySong}, into the PS along with the tempo point \vp{temposlow} and the starting point for the sequence (\vp{swt.v}). The output below shows the current moment (time) point enclosed in parentheses. The time delay between lyric-note output can be determined by taking the difference between two moment points. The delay between outputs is 6 (quarter notes), except after each `lamb' (a half note), where the delay is 20. At the end of the sequence the opSTATE instruction (line~\ref{line:maryend}) results in a debugging output line (\ref{line:marystateout}). \end{exDesc}
\begin{lstlisting}[firstnumber=35]
ps MarySong,opPSISAS,temposlow,swt.v
run
 (*\small \textbf{PQ(2): Mar b1}*)
 (*\small \textbf{PQ(8): ee a1}*)
 (*\small \textbf{PQ(14): had g0}*)
 (*\small \textbf{PQ(20): a a1}*)
 (*\small \textbf{PQ(26): lit b1}*)
 (*\small \textbf{PQ(32): tle b1}*)
 (*\small \textbf{PQ(38): lamb b1}*)
 (*\small \textbf{PQ(58): lit a1}*)
 (*\small \textbf{PQ(64): tle a1}*)
 (*\small \textbf{PQ(70): lamb a1}*)
 (*\small \textbf{PQ(90): lit b1}*)
 (*\small \textbf{PQ(96): tle d1}*)
 (*\small \textbf{PQ(102): lamb d1}*)
 (*\small \textbf{state: ps: songEnd, opVAL, \#338, temposlow}\label{line:marystateout}*)
\end{lstlisting}
\subsection{Running a Maze} A V5 sequence for running a simple T-maze is presented in this next example.
The maze, shown in figure~\ref{figmaze}, is represented in V5 as a tree. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.4, every node/.style={scale=0.75}] \draw[very thick] (3,0)--(3,2)--(1,2)--(1,1)--(0,1)--(0,4)--(1,4)--(1,3)--(6,3)--(6,5)--(5,5)--(5,6)--(9,6)--(9,7); \draw[very thick] (4,0)--(4,2)--(6,2)--(6,2)--(7,2)--(7,5)--(9,5)--(9,4)--(10,4)--(10,7); \draw[below] (3.5,0) node(s){start}; \draw[above] (9.5,7) node(f){finish}; \draw[above] (3.5,2) node(n1) {n1}; \draw[above] (0.5,2) node(n2) {n2}; \draw [above] (6.5,2) node(n3) {n3}; \draw [above] (6.5,5) node(n4) {n4}; \draw [above] (9.5,5) node(n5) {n5}; \end{tikzpicture} \caption{Maze modeled in example}\label{figmaze} \end{figure} The section below defines the layout of the maze. In particular, the bindings on lines~\ref{line:mazedef1}-\ref{line:mazedef2} define the linked right-hand and left-hand nodes at any given node. If a node has no left or right binding then going that direction at that node leads to a dead end. \begin{lstlisting} def node,success,isSuccess, n1,n2,n3,n4,n5, direction,left,right,nextDirection twine node<n1,n2,n3,n4,n5 bind [success isSuccess] success bind [nextDirection left] right bind [n1 left] n2(*\label{line:mazedef1}*) bind [n1 right] n3 bind [n3 left] n4 bind [n4 right] n5 bind [n5 left] success(*\label{line:mazedef2}*) \end{lstlisting} \begin{exDesc} The maze running strategy is to begin with the left-hand side of a node (turn left). If that fails then try the right-hand side. If that fails then backtrack to the prior node and continue. If the maze is considered a tree then running the maze begins with the left most leaf and moves clockwise from leaf to leaf. Eventually it will hit the successful leaf and exit the maze. \par The value of \vp{pos} (\vp{pos.v}) is used to track the position within the maze. The time point (\vp{m}) is included in the twine as the value will be changing. The twine on line~\ref{line:mazepos1} sets the initial position at node 1. 
Bindings will be created to keep track of the direction taken at a node [turn \textit{node}] and to keep track of the prior node [prior \textit{node}]. The latter is for use in backtracking. \end{exDesc} \begin{lstlisting}[firstnumber=10]
def pos, turn, prior
twine pos>n1|rCTP(*\label{line:mazepos1}*)
\end{lstlisting} \begin{exDesc} The logic for running the maze is below. Points \vp{m1} through \vp{m7} are for the main sequence loop, \vp{mf1} through \vp{mf3} are for when a direction fails at a node, \vp{mb1} and \vp{mb2} are for backtracking to a prior node and \vp{ms1} through \vp{ms5} are for outputting the correct path on success. The first step is to determine the direction to turn at the current node (\vp{pos.v}). The binding [\textit{node} \vp{turn}] is evaluated. If it succeeds then continue with line~\ref{line:m2ok}, otherwise continue at line~\ref{line:m2fail}, perform the binding [\textit{node} \vp{turn}]=\vp{left} and start from the top. At line~\ref{line:m2ok} the register rEVAL has the direction to turn (left or right). Evaluate [\textit{node} \textit{direction}] to get the next node. If that fails continue at \vp{mf1}. If it succeeds and the maze is complete then continue with \vp{ms1}, otherwise set \vp{pos.v} to the new node (line~\ref{line:m6}) and continue back at \vp{m1}. \end{exDesc} \begin{lstlisting}[firstnumber=12]
def m1,m2,m2a,m3,m4,m5,m6,m7, mf1,mf2,mf3, mb1,mb2, ms1,ms2,ms3,ms4,ms5
twine m1>opINCCTP,pos.v,turn,eoa,opEVAL,m2.v
twine m2>evalFail,opVAL,m2a.v|evalFail(*\label{line:m2fail}*)
twine m2a>pos.v,turn,eoa,left,opBIND,m1.v
twine m2>pos.v,rEVAL,eoa,opEVAL,m4.v(*\label{line:m2ok}*)
twine m4>evalFail,opVAL,mf1.v|evalFail
twine m4>rEVAL,opLWM1,isSuccess,rEVAL,eoa,opEVAL,m5.v
twine m5>ms1.v
twine m5>evalFail,opVAL,m6.v|evalFail
twine m6>rWM1,prior,eoa,pos.v,opBIND,m7.v(*\label{line:m6}*)
twine m7>pos,rWM1,opTWVAL,m1.v
\end{lstlisting} \begin{exDesc} The section below is executed when the evaluation of [\textit{node} \textit{direction}] fails.
It attempts to get the direction currently associated with the node (line~\ref{line:mf1}) and evaluates [nextDirection \textit{direction}]. If that succeeds (i.e. [nextDirection left]=right), continue with \vp{mf3}, bind [turn \textit{node}] to the new direction and continue with \vp{m1}. If it fails then both directions have failed at the node and backtracking needs to be done (\vp{mb1}). \end{exDesc} \begin{lstlisting}[firstnumber=23]
twine mf1>pos.v,turn,eoa,opEVAL,mf2.v(*\label{line:mf1}*)
twine mf2>rEVAL,nextDirection,eoa,opEVAL,mf3.v
twine mf3>pos.v,turn,rCTP,eoa,rEVAL,opBIND,m1.v
twine mf3>evalFail,opVAL,mb1.v|evalFail
\end{lstlisting} \begin{exDesc} Backtracking is simple. Evaluation of [\textit{node} \vp{prior}] gives the prior node (\vp{mb1}). Set the current position to this node (\vp{mb2}) and continue back at \vp{mf1}. \end{exDesc} \begin{lstlisting}[firstnumber=27]
twine mb1>pos.v,prior,eoa,opEVAL,mb2.v
twine mb2>pos,rEVAL,opTWVAL,mf1.v
\end{lstlisting} \begin{exDesc} The next portion of the sequence outputs the correct path through the maze. Set the current node (\vp{pos.v}) back to the first node (\vp{n1}). Evaluate the correct turn for the node and output the node and turn (\vp{ms3}). Then continue with the next node. If there is no next node then the traversal through the maze is complete and the sequence is finished. \end{exDesc} \begin{lstlisting}[firstnumber=29]
twine ms1>pos,n1,opTWVAL,ms2.v
twine ms2>pos.v,turn,eoa,opEVAL,ms3.v
twine ms3>opCLRPQ,pos.v,opADDPQ,rEVAL,opADDPQ,opOUTPQ,ms4.v
twine ms3>evalFail,opVAL|evalFail
twine ms4>pos.v,rEVAL,eoa,opEVAL,ms5.v
twine ms5>opINCCTP,pos,rEVAL,opTWVAL,ms2.v
\end{lstlisting} \begin{exDesc} Running the sequence is done by initializing the PS with the current position (\vp{pos.v}) set to \vp{n1} and then running the V5 engine. The solution to the maze is then output.
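For comparison, the left-first, backtracking strategy encoded by the twines above can be sketched outside of V5 in ordinary Python. This is an illustrative sketch only (the dictionaries and the function name are hypothetical); it mirrors the maze bindings and the \vp{turn}/\vp{prior} bookkeeping, not the V5 engine itself.

```python
# Illustrative Python sketch (not V5) of the left-first, backtracking
# maze strategy.  MAZE mirrors the bind statements ([n1 left]=n2, ...),
# turn mirrors the [turn node] bindings, prior the [prior node] bindings.
MAZE = {
    "n1": {"left": "n2", "right": "n3"},
    "n2": {},                        # dead end
    "n3": {"left": "n4"},
    "n4": {"right": "n5"},
    "n5": {"left": "success"},
}
NEXT_DIRECTION = {"left": "right"}   # mirrors bind [nextDirection left] right

def run_maze(start="n1"):
    turn, prior = {}, {}
    pos = start
    while True:
        direction = turn.setdefault(pos, "left")   # default: left-hand side
        nxt = MAZE[pos].get(direction)             # eval [node direction]
        if nxt is None:
            # Direction failed: advance it, backtracking while a node has
            # exhausted both directions (mf1..mf3, mb1/mb2 above).
            while True:
                alt = NEXT_DIRECTION.get(turn[pos])
                if alt is not None:
                    turn[pos] = alt
                    break
                pos = prior[pos]                   # backtrack to prior node
            continue
        if nxt == "success":
            # Success: replay the recorded turns from the start (ms1..ms5).
            path, node = [], start
            while node != "success":
                path.append((node, turn[node]))
                node = MAZE[node][turn[node]]
            return path
        prior[nxt] = pos                           # remember the prior node
        pos = nxt                                  # advance

print(run_maze())
```

Running the sketch prints the same solution the V5 sequence outputs: n1 right, n3 left, n4 right, n5 left.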
\end{exDesc} \begin{lstlisting}[firstnumber=35]
ps pos,n1,opTWVAL,m1.v
run
(*\small \textbf{PQ(15): n1 right}*)
(*\small \textbf{PQ(16): n3 left}*)
(*\small \textbf{PQ(17): n4 right}*)
(*\small \textbf{PQ(18): n5 left}*)
\end{lstlisting} \subsection{A Turing Machine}\label{secTM} This example constructs a Turing machine\cite{turingmachine} to demonstrate that V5 is Turing complete and that, in theory, any computable function can be implemented with V5.\footnote{This is not to suggest that V5 is suitable for general-purpose computing\cite{wiki:Turing_tarpit}.} An overview of the example is below: \begin{clist} \item The infinite tape is implemented as a series of linked points as described in figure~\ref{figTMTape}. The tape, at any given time, is finite in size. When the tape is positioned past the last tape cell on the right, a new tape cell is automatically allocated and appended to the tape. \item The states of this (simple) machine are declared points: \vp{sInit} and \vp{sHalt}. \item Actions are defined with bindings using \textit{sdc} points as shown in figure~\ref{figTMStates}. \item The machine operates much like any other Turing machine. It repeatedly reads the contents of the current cell and uses that value with the current state to determine the new tape cell contents, new tape position and new state. The machine halts when its current state is \vp{sHalt}. \end{clist} \par The V5 commands to define points will no longer be included in the examples. The `set autodef' command below instructs the V5 command interpreter to automatically define any new point it encounters.
\begin{lstlisting} set autodef on \end{lstlisting} \begin{exDesc} The machine runs as follows: \begin{clist} \item[] \vp{tm1}- get the contents of the current tape position in rEVAL \item[] \vp{tm2} - eval [\textit{tape content} \textit{current state}] to get the next sdc (new state, tape direction, new tape contents) point \item[] \vp{tm3} - twine \vp{curSDC}>\textit{new \vp{sdc}} (in rEVAL) \item[] \vp{tm4} - eval [\vp{sdc} \vp{newTapeContent}] for new tape contents \item[] \vp{tm5} - set the new tape contents \item[] \vp{tm6} - evaluate [\vp{sdc} \vp{tapeMove}] to see which direction to move \item[] \vp{tm7} - move the tape to \vp{tm8} on line~\ref{line:tm8ok} if the tape cell exists, or line~\ref{line:tm8fail} if the tape cell does not exist and needs to be created \end{clist} \end{exDesc} \begin{lstlisting}[firstnumber=2] twine tm1>curPos.v,contents,eoa,opEVAL,tm2.v twine tm2>rEVAL,curState.v,sdc,eoa,opEVAL,tm3.v twine tm3>curSDC,rEVAL,opTWVAL,tm4.v twine tm4>curSDC.v,newTapeContent,eoa,opEVAL,tm5.v twine tm5>contents,curPos.v,rCTP,eoa,rEVAL,opBIND,tm6.v twine tm6>curSDC.v,tapeMove,eoa,opEVAL,tm7.v twine tm7>curPos.v,rEVAL,eoa,opEVAL,tm8.v \end{lstlisting} \begin{exDesc} \begin{clist} \vspace{-15pt} \item[] \vp{tm8} - twine \vp{curPos}>\textit{new tape position} \item[] \vp{tm9} - eval [\textit{sdc} \vp{newState}] for the new \vp{sdc} \item[] \vp{tm10} - twine \vp{curState}>\textit{new state} \item[] \vp{tm11} - force an increment of the time point \item[] \vp{tm12} - check to see if the new state is \vp{sHalt}, goto line~\ref{line:tm13halt} if yes, line~\ref{line:tm13cont} if not \item[] \vp{tm13} - either resume loop at \vp{tm1} or leave with \vp{sHalt} in the PS if done \end{clist} \end{exDesc} \begin{lstlisting}[firstnumber=9] twine tm8>curPos,rEVAL,opTWVAL,tm9.v(*\label{line:tm8ok}*) twine tm9>curSDC.v,newState,eoa,opEVAL,tm10.v twine tm10>curState,rEVAL,opTWVAL,tm11.v twine tm11>opINCCTP,tm12.v twine tm12>curState.v,checkHalt,eoa,opEVAL,tm13.v twine 
tm13>evalFail,opVAL,tm1.v|evalFail(*\label{line:tm13cont}*)
twine tm13>sHalt(*\label{line:tm13halt}*)
\end{lstlisting} \begin{exDesc} When a tape move to the right fails, the section below is run. \begin{clist} \item[] \vp{tm8} - clear out the \vp{evalFail} point and create a new tape cell point \item[] \vp{tm8a} - link the current last tape cell to the new point \item[] \vp{tm8b} - left-link the new cell to the prior last cell \item[] \vp{tm8c} - define \vp{moveNone} to remain on the current (new) cell \item[] \vp{tm8d} - set the contents of the new cell to \vp{tBlank} \item[] \vp{tm8e} - set the current tape position to the new cell \item[] \vp{tm8f} - resume at \vp{tm9} above \end{clist} \end{exDesc} \begin{lstlisting}[firstnumber=16]
twine tm8>evalFail,opVAL,opNEW,tm8a.v|evalFail(*\label{line:tm8fail}*)
twine tm8a>curPos.v,moveRight,eoa,rNEW,opBIND,tm8b.v
twine tm8b>rNEW,moveLeft,eoa,curPos.v,opBIND,tm8c.v
twine tm8c>rNEW,moveNone,eoa,rNEW,opBIND,tm8d.v
twine tm8d>rNEW,contents,eoa,tBlank,opBIND,tm8e.v
twine tm8e>curPos,rNEW,opTWVAL,tm8f.v
twine tm8f>tm9.v(*\label{line:tm8failb}*)
\end{lstlisting} \begin{exDesc} The points \vp{sdc1} and \vp{sdc2} are used to define the new state, tape move direction and new tape contents based on the current state and tape contents. Line~\ref{line:xxx1} defines the sdc (\vp{sdc1}) used when the state is \vp{sInit} and the tape contents is \vp{tBlank}.
\begin{figure}[H] \begin{center} \begin{tabular}{ c c | c c c | c }
\multicolumn{2}{c|}{Current} & \multicolumn{3}{c|}{Operation} & \\
State & Tape & State & Tape & Move & SDC \\
\hline \hline
sInit & tBlank & sInit & tX & none & sdc1 \\
sInit & tX & sHalt & tX & right & sdc2 \\
\end{tabular} \end{center} \caption{State Tables for Turing Machine}\label{figTMStates} \end{figure} \end{exDesc} \begin{lstlisting}[firstnumber=23]
bind [sInit tBlank sdc] sdc1(*\label{line:xxx1}*)
bind [sdc1 newTapeContent] tX
bind [sdc1 tapeMove] moveNone
bind [sdc1 newState] sInit
bind [sInit tX sdc] sdc2
bind [sdc2 newTapeContent] tX
bind [sdc2 tapeMove] moveRight
bind [sdc2 newState] sHalt
\end{lstlisting} \begin{exDesc} The following lines define the initial tape. It consists of one cell containing \vp{tBlank}. Each cell is left and right linked to its previous/next cell. If there is no right link then that cell is the last cell to the right (and a new cell will be automatically appended if necessary). \begin{figure}[H] \begin{center} \begin{tikzpicture}[scale=0.5, auto, node distance=3cm,>=latex']
\node [block, name=tapePrior] {$tape_{n-1}$};
\node [block, name=tapeCur, right of=tapePrior] {$tape_n$};
\node [block, name=tapeNext, right of=tapeCur] {$tape_{n+1}$};
\draw [<-] (tapePrior.10pt)-- node[above, fill=white]{1} ([yshift=5pt] tapeCur.west);
\draw [->] (tapePrior.-10pt)-- node[below, fill=white]{2} ([yshift=-5pt] tapeCur.west);
\draw [<-] (tapeCur.10pt)-- node[above, fill=white]{3} ([yshift=5pt] tapeNext.west);
\draw [->] (tapeCur.-10pt)-- node[below, fill=white]{4} ([yshift=-5pt] tapeNext.west);
\path (tapeCur) edge [loop above] node {5} (tapeCur);
\node [below of=tapePrior, node distance = 2cm, xshift=3cm] { \begin{tabular}{ r l }
1 & [moveLeft $tape_n$] = $tape_{n-1}$ \\
2 & [moveRight $tape_{n-1}$] = $tape_n$ \\
3 & [moveRight $tape_n$] = $tape_{n+1}$ \\
4 & [moveLeft $tape_{n+1}$] = $tape_n$ \\
5 & [moveNone $tape_{n}$] = $tape_n$ \\
\end{tabular} };
\end{tikzpicture}
\end{center} \caption{Layout of Tape in Turing Machine}\label{figTMTape} \end{figure} \end{exDesc} \begin{lstlisting}[firstnumber=32]
bind [tPos1 contents rCTP] tBlank
bind [tPos1 moveLeft] tPos1
bind [tPos1 moveNone] tPos1
\end{lstlisting} \begin{exDesc} The sequence is now ready to execute. First define the current state as \vp{sInit} (line~\ref{line:tmcurstate}), then twine the current tape position to \vp{tPos1} (line~\ref{line:tmcurpos}), add the starting sequence point (\vp{tm1}) to the PS and run. \end{exDesc} \begin{lstlisting}[firstnumber=35]
ps curState,sInit,opTWVAL(*\label{line:tmcurstate}*)
run
ps curPos,tPos1,opTWVAL(*\label{line:tmcurpos}*)
run
ps tm1.v /* Start running with tm1 */
run
show ps
(*\small \textbf{ps: sHalt}*)
\end{lstlisting} \subsection{Recapping Tightly Coupled Sequences} All of the sequences described thus far have been tightly coupled sequences, i.e. the next point in a sequence is determined by the prior point. Defining tightly coupled sequences with contextual twines has several benefits: \begin{indent1} \begin{enumerate} \item Multiple sequences can be \textit{running} simultaneously, as demonstrated with the prior examples. \item The order in which sequence twines are created is independent of the order in which they are \textit{executed}. This may become useful with the incremental learning of sequences. \item All evaluations are contextual, which offers tremendous flexibility when running sequences in a variety of contexts. \item The V5 engine is always doing whatever is most appropriate \textit{next}. It is not explicitly running a sequence as a traditional computer would run a subroutine or function. \end{enumerate} \end{indent1} \subsection{Loosely Coupled Sequences (aka Goals)} A loosely coupled sequence is a collection of bindings that collectively work to achieve a (long-term) goal. Sequences that operate (apply) over long time periods and usually invoke other (sub)sequences are loosely coupled.
An example would be the sequence/goal of grocery shopping. The steps to achieve this goal are: \vspace{10pt} \begin{indent1} \begin{enumerate} \setlength\itemsep{-0.4em} \item Get the shopping list \item Go to the grocery store \item Purchase the items on the list \item Go home \item Put the groceries away \end{enumerate} \end{indent1} \vspace{10pt} The twines and bindings to achieve this goal will differ from the twines used in the tightly coupled sequences. Patterns will be used instead of linked twine points. The V5 code below gives a possible example of how this might be done. The goal point used is \vp{doShopping}. Line~\ref{line:groceryStart} is the binding to start the shopping. It invokes the sequence \vp{getShoppingList}. The actions for this sequence would be defined separately and would depend on many factors, such as whether or not the list already exists. The next step in achieving the goal would be triggered by recognizing the pattern (see section~\ref{secPatterns}) [\vp{haveShoppingList} \vp{doShopping}]. When this is recognized the sequence \vp{gotoGroceryStore} is triggered. These pattern $\Rightarrow$ sub-action steps continue until the goal is satisfied. \begin{lstlisting}
bind [start doShopping] getShoppingList.v(*\label{line:groceryStart}*)
bind [haveShoppingList doShopping] gotoGroceryStore.v
bind [atGroceryStore doShopping] purchaseItems.v
bind [haveGroceryItems doShopping] goHome.v
bind [atHome withGroceries doShopping] putAwayGroceries.v
bind [groceriesPutAway doShopping] shoppingGoalFinished.v
\end{lstlisting} \section{Patterns}\label{secPatterns} A pattern is a binding that is used to recognize something. In the CEM framework a pattern would consist primarily of sensory points: auditory to recognize a sound, visual to recognize an object, tactile to recognize a touch, etc.
A pattern is represented as a binding: [\vp{r_1} \vp{r_2} \ensuremath{\dots}] = \vp{r_{thing}} where the \vp{r_n} are the recognition descriptor points and \vp{r_{thing}} is the point that is recognized. \par In the CEM, everything is a point and all points are treated identically. Therefore it is possible to mix sensory modes. For example, visual and smell points could be used in combination. Sensory and internal points may also be combined. \subsection{Learning Patterns}\label{subseclearnpat} A sample (\symS\label{def-S}) is a set of sets of points, typically sense points. A pattern (\vp{a}) within \symS \ is a set of points satisfying the following two conditions: the cardinality of \vp{a} is greater than or equal to a minimum number of points \ensuremath{min_{points}}\label{def-minpoints} (equation~\ref{equpat1}); and \vp{a} is a subset of at least \ensuremath{min_{occurs}}\label{def-minoccurs} sets within \symS (equation~\ref{equpat2}).
\begin{equation}\label{equpat1}
|a| \geq \ensuremath{min_{points}}
\end{equation}
\begin{equation}\label{equpat2}
\sum_{i=1}^{\left | \symS \right |} (\textrm{if} \ a \subset \symS_i \ \textrm{then} \ 1 \ \textrm{else} \ 0) \geq \ensuremath{min_{occurs}}
\end{equation}
Patterns are found within \symS \ with the following algorithm: Intersect each set in \symS \ with all the other sets within \symS, saving all resulting sets with a cardinality greater than or equal to \ensuremath{min_{points}} \ as a new set of sets, $\mathbb{A}$. Then count the number of times any given set occurs within $\mathbb{A}$. Any set within $\mathbb{A}$ that occurs at least \ensuremath{min_{occurs}} \ times is considered a pattern and a binding is created with the key set consisting of the points in the set and the value being a new internal point. \par It may be necessary to perform the pattern learning process multiple times.
If none of the saved intersected sets occurs at least \ensuremath{min_{occurs}} \ times then repeating the learning process on the intersected sets will result in one or more patterns, if any patterns are to be found. \par As an example, appendix~\ref{datapatrecex} shows a set of sets. Each set contains approximately 20 points, with each point represented as an integer between 0 and 999. There are 40 rows of points. The challenge is to determine whether or not a pattern exists within these 40 rows where, for this example, a pattern is defined as a set of at least 5 points (\ensuremath{min_{points}}) found in at least 5 (\ensuremath{min_{occurs}}) rows. \par The first step is to intersect each row with all of the others. With 40 rows this requires $_{n}C_{r} = {}_{40}C_{2} = \frac{40!}{(40 - 2)! \times 2!} = 780$ intersections. Figure~\ref{patlearn2} shows the results of the intersections where the cardinality of the result is at least 3 points. Figure~\ref{patlearn3} shows a consolidated result by set. The first two columns are the lines within the data, the third column shows the points resulting from the intersection of the two lines. There are 7 occurrences of \{101 211 307 401 503 601\}. While this count exceeds \ensuremath{min_{occurs}} (5), it is not the count of occurrences of the pattern in the original data. If a pattern occurs $n$ times then the number of intersections after the first pass would be $\frac{n\times(n-1)}{2}$, giving 10 for $n = 5$.\footnote{Or, given the number of sets after the intersections ($x$), then $\frac{1 + \sqrt{1+8x}}{2}$ is the maximum value of \ensuremath{min_{occurs}} possible for a pattern.} An additional pass through the aggregated sets, adding in the count of any set that is a super-set, gives a final count of 10 (the pattern occurs 5 times in the original data set). Thus a new binding would be created, [101 211 307 401 503 601]=\vp{p_{new}}, and those points are now recognized as point \vp{p_{new}}.
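The intersect-and-count procedure can be sketched in ordinary Python. This is an illustrative sketch only, not part of V5: points are represented as plain integers (as in the appendix data) and the function name and structure are hypothetical.

```python
from itertools import combinations
from collections import Counter

def find_patterns(sample, min_points=5, min_occurs=5):
    """Sketch of the pattern-learning pass: intersect every pair of sets,
    keep intersections with at least min_points points, count them, and
    fold the count of any super-set into its subsets."""
    counts = Counter()
    for a, b in combinations(sample, 2):
        common = frozenset(a & b)
        if len(common) >= min_points:
            counts[common] += 1
    # A pattern occurring n times in the data yields n*(n-1)/2 pairwise
    # intersections, so that is the threshold for min_occurs = n.
    needed = min_occurs * (min_occurs - 1) // 2
    patterns = []
    for s in counts:
        total = sum(c for t, c in counts.items() if s <= t)  # add super-sets
        if total >= needed:
            patterns.append(set(s))
    return patterns
```

With the appendix data this yields the single pattern \{101 211 307 401 503 601\}: 7 direct counts plus 3 from its super-sets gives 10, matching $\frac{5\times4}{2}$.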
\begin{figure}[H] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \begingroup \fontsize{8}{8}\selectfont \begin{tabular}{r r l} First&Second&Intersected Points\\ \toprule 4&30&121 246 647\\ 7&28&146 211 285\\ 14&19&307 326 457\\ 16&32&218 275 495\\ 19&20&101 211 307 401 503 601\\ 19&25&101 211 307 401 503 528 563 601\\ 19&29&101 211 307 401 503 601\\ 19&33&101 211 307 401 503 591 601\\ 20&25&101 211 307 401 503 548 601\\ 20&29&101 211 307 401 503 601\\ 20&33&101 211 307 401 503 601\\ 25&29&101 211 307 401 503 601\\ 25&33&101 211 307 401 503 601\\ 29&33&101 211 307 401 503 601\\ \bottomrule \end{tabular} \endgroup \caption{Results of first pass through learning}\label{patlearn2} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \begingroup \fontsize{8}{8}\selectfont \begin{tabular}{c l} Times & Points\\ \toprule 7 & 101 211 307 401 503 601\\ 1 & 101 211 307 401 503 528 563 601\\ 1 & 101 211 307 401 503 591 601\\ 1 & 101 211 307 401 503 548 601\\ \bottomrule \end{tabular} \endgroup \caption{Sets with minimum points}\label{patlearn3} \end{subfigure} \caption{After passing through intersections} \end{figure} \subsection{Patterns in Sequences} The patterns mentioned so far have not depended on any type of point ordering. But in many cases, the ordering of points does matter. A pattern to recognize the word `tea' cannot be represented with the binding [\vp{t} \vp{e} \vp{a}] = \vp{tea} because the ordering of points in a binding is irrelevant. This binding for `ate' would be [\vp{a} \vp{t} \vp{e}] = \vp{ate}. The key sets [\vp{t} \vp{e} \vp{a}] and [\vp{a} \vp{t} \vp{e}] are equivalent. \par One way to get around this problem and be able to recognize a specific sequence is to have incremental bindings for each subsequent point in a sequence. For example `tea' would be represented with the points $\vdash \vp{t} \ \vp{e} \ \vp{a} \dashv$ ($\vdash$ and $\dashv$ indicate begin and end of word). 
The following bindings would be used: [$\vdash$ \vp{t}]=\vp{r_1}, [\vp{r_1} \vp{e}]=\vp{r_2}, [\vp{r_2} \vp{a}]=\vp{r_3} and finally [\vp{r_3} $\dashv$]=\vp{tea}. \par Another approach is to encode the points with additional positional information. For instance, `tea' would be encoded with [\vp{t} \vp{pos_1}]=\vp{t_1}, [\vp{e} \vp{pos_2}]=\vp{e_2}, [\vp{a} \vp{pos_3}]=\vp{a_3} and [\vp{t_1} \vp{e_2} \vp{a_3}]=\vp{tea}. The recognition of `eat' would be [\vp{e_1} \vp{a_2} \vp{t_3}]=\vp{eat}. \par Some sequences may be recognized by a combination of points of different modalities. A melody consists of a sequence of notes, with each note having a pitch and duration. The points representing the beginning of a melody would be the set of points \{\vp{p_1}, \vp{d_1}, \vp{p_2}, \vp{d_2}, \ensuremath{\dots}\} where \vp{p_x} is the pitch for the \vp{x^{th}} note and \vp{d_x} is the duration. A recognition binding for the beginning of the melody would be [\vp{p_1} \vp{d_1} \vp{p_2} \vp{d_2} \ensuremath{\dots}] = \vp{melody} where \vp{melody} is the point recognized as the beginning of the melody. The start of the recognition pattern need not necessarily be only the first few notes; it could be anywhere throughout the melody. Pitch and duration are just two qualities that are used to recognize a melody. Orchestration and harmonic structure might also be included in the recognition binding.
\par An example of this is the recognition of a visual object as a red ball (i.e. a set of visual points is recognized as a discrete object point \vp{obj}). Within the points recognizing \vp{obj} are the points recognizing the properties of \vp{red} and \vp{ball}, therefore \vp{red}:\vp{obj} and \vp{ball}:\vp{obj}. \subsection{The Recognition of a Pattern}\label{secRecPat} When a pattern is recognized from a set of points the resulting pattern point is inserted into the PS with its value variant. For example if the pattern [\vp{p_1} \vp{p_2} \ensuremath{\dots} \vp{p_n}] = \vp{a_r} is recognized then \vp{a_{r}.v} is added to the PS, not \vp{a_r}. \subsection{An Experiment with Voice Recognition} An experiment was performed to determine if the pattern recognition process could be used in a real world example. A voice sample recorded as a standard WAV file\cite{wavFormat} was converted to a set of points and put through the pattern learning process. Bindings were created for all learned patterns. Then other test WAV files were similarly converted to sets of points. These test points were inserted into the context and the null key set was evaluated. A successful evaluation indicated that the test points were matched by a previously learned pattern. A failure indicated no pattern. \par The WAV files used in this experiment were first pre-processed using the FFT algorithm\footnote{FFT or Fast Fourier Transform is an efficient algorithm for converting a time domain signal into the frequency domain\cite[Chapters~12-13]{press}} into encoded \textit{fft} files, a text file consisting of one line per sample of the WAV file. In this experiment the sample duration is $\frac{1}{8}$ of a second. The preprocessing requires two passes through the file. In the first pass, each sample of the WAV file was processed with the FFT. The resulting frequency/magnitude buckets were converted to four digit numbers as shown in (\ref{eqnWav2FFT}). 
\begin{equation}\label{eqnWav2FFT} \left ( \log\left ( f \right )-5 \times 10+0.5 \right ) \times 100 + (\log(mag\times 1000/ \sum mag) + 0.05) \end{equation} where \vp{f} is the frequency (0-99), \vp{mag} is the magnitude (0-99). Per FFT convention the magnitude is calculated from the real ($mag_r$) and imaginary ($mag_i$) parts: $\sqrt{(mag_r)^2 + (mag_i)^2)}$ This four digit number is then saved in a bin along with the count of the number of times that number was seen. \par The second pass is similar to the first except that an output file was created for all samples that occurred more than once. Below is a sample of an fft file. Each line represents an encoding of the spectral analysis of a $\frac{1}{8}$ second sample. Each four digit number represents a frequency and magnitude ($f \times 100 + m$). Figure~\ref{figFFTprep} shows the first few lines of a preprocessed WAV file. \begin{figure}[H] \begin{lstlisting} !\fft\samples\see_ya.wav 1906,2507,2608,2609,2709,2808,2907,2906,3008,3007,3106,3107,3207,3306,3307,3407,3406,3507,3607,3608, 3609,3709,3708,3808,3907,3908,4008,4007,4108,4107,4209,4210,4310,4309,4308,4408,4407,4406,4507, 4506,4606,4607,4707,4706,4806,4906,5006,5206 0006,2306,2406,2407,2508,2608,2610,2708,2706,2807,2806,2906,3006,3106,3206,3306,3307,3406,3506,3507, 3607,3609, 3610,3608,3708,3707,3706,3806,3906,4006,4107,4106,4209,4210,4207,4208,4308,4306,4307, 4407,4406,4506,4607,4608,4606,4707,4706,4906,4907,5206,5606,6206,6306,6307 0008,1906,2208,2307,2409,2509,2610,2609,2709,2708,2808,2906,2908,2907,3007,3008,3106,3108,3207,3307, 3408,3409,3509,3510,3610,3609,3709,3708,3707,3808,3807,3806,3908,3907,4007,4008,4006,4108,4109, 4209,4208,4207,4307,4306,4406,4506,4607,4606,4706,4806,4906 1906,2308,2409,2509,2608,2706,2806,2807,2908,2906,3008,3009,3108,3207,3307,3408,3409,3508,3507,3506, 3606,3708,3706,3808,3807,3806,3906,3907,4006,4007,4009,4010,4108,4208,4209,4207,4210,4309,4310, 
4308,4307,4306,4407,4406,4408,4409,4509,4508,4506,4507,4608,4607,4609,4707,4706,4708,4808,4807, 4806,4906,4908,4907,5006,5007,5008,5107,5106,5206,5506,5508,5607,5606,5608,5706,5707,5708,5806, 5907 \end{lstlisting} \caption{Sample output from WAV preprocessing}\label{figFFTprep} \end{figure} \par The second part of this experiment was finding any patterns in a test \textit{fft} file by first converting each four digit frequency-magnitude into a point using a simple dictionary mapping four-digit numbers to points. Then the pattern learning process as described in section~\ref{subseclearnpat} was performed. The resulting sets of points were bound to a single recognition point. Below is a sample output of the recognition process. Lines 1-\ref{line:ffttrainend} show the learning phase with the training file (FV1MessageMenu.fft). \par After any patterns were found, tests were performed using WAV files from the same speaker and different speakers. Lines \ref{line:ffttest1a}-\ref{line:ffttest1b} show the result of the first test: 96\% of the lines resulted in a successful evaluation indicating a strong resemblance between the file and the training file. Compare this with lines \ref{line:ffttest2a}-\ref{line:ffttest2b} showing very few successful evaluations. 
\begin{lstlisting} w: 0.0 c: 0.0 - Starting pattern search phase w: 68.6 c: 68.6 - Created 2744 bindings(*\label{line:ffttrainend}*) w: 68.6 c: 68.6 - Starting scan of test file (\vic\fft\samples\FV1MessagesUndeleted.fft)(*\label{line:ffttest1a}*) w: 68.7 c: 68.6 - 26 matches (9 w: 68.7 c: 68.6 - Starting scan of test file (\vic\fft\samples\see_ya.fft) w: 68.7 c: 68.6 - 0 matches ( w: 68.7 c: 68.6 - Starting scan of test file (\vic\fft\samples\FV1recordAfterTone.fft) w: 68.7 c: 68.6 - 33 matches (9 w: 68.7 c: 68.6 - Starting scan of test file (\vic\fft\samples\VN_UrgentPager_3.fft)(*\label{line:ffttest2a}*) w: 68.7 c: 68.6 - 3 matches ( \end{lstlisting} \subsection{Focus} In the `real world' all senses are simultaneously and continuously updating the context. It would be desirable to have the ability to limit recognition to only one sensory mode (e.g. visual versus auditory) or a subset of a mode (e.g. words on a page rather than the page itself). This is trivially achieved with the addition of discriminating or focus points. For example, assume a visual object (\vp{o_v}) is recognized by the visual points \vp{v_1}, \vp{v_2}, \ensuremath{\dots} ([\vp{v_1} \vp{v_2} \ensuremath{\dots}] = \vp{o_v}) and a sound (\vp{o_s}) is recognized with the binding [\vp{s_1} \vp{s_2} \ensuremath{\dots}] = \vp{o_s}. If the context includes both sets of points \{\vp{v_1},\vp{v_2},\ensuremath{\dots},\vp{s_1},\vp{s_2},\ensuremath{\dots}\} then either/both \vp{o_v} or \vp{o_s} could be recognized. However, if we included focus points, \textbf{\vp{f_x}}\label{def-f} in each of these bindings giving [\vp{f_v} \vp{v_1} \vp{v_2} \ensuremath{\dots}] = \vp{o_v} and [\vp{f_s} \vp{s_1} \vp{s_2} \ensuremath{\dots}] = \vp{o_s} then it would be possible to control or focus on visuals or sounds with the inclusion of either \vp{f_v} or \vp{f_s} into the context. 
\par A practical example of sensory focus would be focusing attention on the words spoken by a specific individual in a room with others speaking simultaneously. If the patterns that recognized phonemes/words included a focus point (\vp{f_w}) then words would only be recognized when that focus point was in the context. Assume that the particular individual's voice is recognized (as in the prior example) as \vp{p_{voice}} and that point is twined with point \vp{f_w} (\vp{p_{voice}} > \vp{f_w}); then word recognition would only occur when the individual's voice was detected/recognized. \section{Motivation: The Good and the Bad}\label{secMotivate} Some points may have one of two additional attributes: `+' and `-'. This optional attribute is used to ascribe good/pleasure or bad/pain to a point. Good points are tagged with the positive \vp{p^+} and bad points are tagged with the negative \vp{p^-}. Most points are neither good nor bad and thus have neither attribute. The V5 engine continuously tracks the good/bad points in the PS and sums them by assigning +1 to good points and -1 to bad points. This sum is referred to as \textbf{\ensuremath{\Sigma}}\label{def-Sigma}; \ensuremath{\overline{\Sigma}}\label{def-SigmaBar} represents a smoothed value of \ensuremath{\Sigma}; \ensuremath{\Delta}\label{def-Delta} is the difference between the two. These relationships are summarized in equations (\ref{equSigDelta}a/b/c) for moment (integer time) $m$, where $s$ is a smoothing factor between 0.0 and 1.0.
\begin{subequations}\label{equSigDelta}
\begin{align}
\ensuremath{\Sigma}_m &= \sum_{i=1}^{|\{PS\}|} \{PS\}_i \\
\ensuremath{\overline{\Sigma}}_{m} &= s \cdot \ensuremath{\Sigma}_m + (1-s) \cdot \ensuremath{\overline{\Sigma}}_{m-1} \\
\ensuremath{\Delta}_m &= \ensuremath{\Sigma}_m - \ensuremath{\overline{\Sigma}}_m
\end{align}
\end{subequations}
\par A long-term goal of the V5 engine is to maximize \ensuremath{\Sigma} \ by maximizing \vp{p^+} points and minimizing \vp{p^-} points.
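The bookkeeping in equations (\ref{equSigDelta}a/b/c) can be sketched in ordinary Python. This is an illustrative sketch only: the class, the explicit good/bad point sets and the parameter names are hypothetical (the V5 engine tracks the +/- attributes internally).

```python
def sigma(ps, good, bad):
    """Equation (a): +1 for each good point, -1 for each bad point in the PS."""
    return sum(1 if p in good else -1 if p in bad else 0 for p in ps)

class GoodBadTracker:
    """Tracks Sigma, the smoothed Sigma-bar and Delta per equations (a)-(c)."""
    def __init__(self, s=0.1):
        self.s = s            # smoothing factor, 0.0 .. 1.0
        self.smoothed = 0.0   # Sigma-bar at the prior moment

    def update(self, ps, good, bad):
        sig = sigma(ps, good, bad)                                    # (a)
        self.smoothed = self.s * sig + (1 - self.s) * self.smoothed   # (b)
        delta = sig - self.smoothed                                   # (c)
        return sig, delta
```

A positive delta returned here corresponds to ``getting better'' and a negative delta to ``getting worse''.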
\ensuremath{\Sigma} \ indicates the current (instantaneous) good/bad state, while \ensuremath{\Delta} \ reflects whether \ensuremath{\Sigma} \ is increasing or decreasing. A positive \ensuremath{\Delta} \ means the current value of \ensuremath{\Sigma} \ is greater than the recent average (getting better). Conversely, a negative \ensuremath{\Delta} \ indicates that \ensuremath{\Sigma} \ is smaller than the recent average (getting worse). Given the V5 engine's goal of maximizing \ensuremath{\Sigma} \ over time, what steps can be taken to achieve this? \begin{enumerate} \item Make the V5 processing clock speed inversely related to \ensuremath{\Sigma}. As \ensuremath{\Sigma} \ increases, slow the processing clock to maintain the status quo. As \ensuremath{\Sigma} \ decreases, increase the clock speed to facilitate change to the status quo. \item If \ensuremath{\Delta} \ goes positive then \textit{conditions} are improving; continue with current actions. \item If \ensuremath{\Delta} \ goes negative then \textit{conditions} are worsening; perform different actions. \item If both \ensuremath{\Sigma} \ and \ensuremath{\Delta} \ are negative then bind neutral points to a minus point. \end{enumerate} Many situational factors dictate what `continue with current actions' and `perform different actions' mean. Two different situations are presented in the following simulation examples. The first simulation shows the use of \vp{p^+} points locating a target. The second simulation demonstrates the use of \vp{p^-} points in learning to avoid a `painful' situation. \subsection{Simulation 1: A Sensor Locking onto a Target} The first simulation investigates how an intity locates a target using \vp{p^+} points. The intity has directional sensors such that when a sensor is pointed directly at a target (within $1\degree$) it generates a \vp{p^+} point. There are multiple sensors that span an arc of $90\degree$.
The action of the intity is to move ahead a unit distance and then decide to turn left $\Theta$ degrees, right $\Theta$ degrees or continue straight ahead. The specific actions are: \begin{enumerate} \item If \ensuremath{\Delta} \ < -5 and the prior action was move left, then move right. If it was move right then move left. If it was move straight then move left or right, each with a probability of 0.5. \item If \ensuremath{\Delta} \ > 5 then continue moving left/straight/right as before. \item Otherwise move left with a probability of 0.5, right with a probability of 0.25 or straight with a probability of 0.25. The reason for the bias to the left is to ensure that the intity's sensors sweep the entire plane of the simulation space in a reasonable number of steps. \end{enumerate} The number of \vp{p^+} points needs to increase as the intity points more directly at the target, reaching a maximum when it is heading directly towards the target. As it turns into the target, \ensuremath{\Sigma} \ increases and \ensuremath{\Delta} \ is positive so that it continues its current action (turning or going straight). As it turns away from the target, \ensuremath{\Sigma} \ decreases and \ensuremath{\Delta} \ goes negative so the intity changes direction. In this way it can home in on the target. Packing sensors more densely towards the center of the $90\degree$ arc creates the gradient necessary to achieve the desired result (figure~\ref{figGradient}).
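The three decision rules above can be sketched as follows (an illustrative Python analogue of the simulation's steering logic, not V5 code; the thresholds and probabilities are those given in the rules):

```python
import random

# Illustrative sketch of the intity's three steering rules.
def next_action(delta, prior):
    """delta: current Delta value; prior: 'left', 'right' or 'straight'."""
    if delta < -5:                     # conditions worsening: change course
        if prior == "left":
            return "right"
        if prior == "right":
            return "left"
        return random.choice(["left", "right"])   # was straight: pick a side
    if delta > 5:                      # conditions improving: keep going
        return prior
    # Otherwise: biased random walk, left-heavy so the sensors sweep the plane.
    return random.choices(["left", "right", "straight"],
                          weights=[0.5, 0.25, 0.25])[0]

print(next_action(8, "left"))    # improving while turning left: keep turning left
```

With only these rules and the sensor gradient, no explicit bearing to the target is ever computed; the intity homes in purely by following the sign of \ensuremath{\Delta}.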
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.6] \draw [domain=-45:45] plot ({3*cos(\x)}, {3*sin(\x)}); \draw [domain=-45:45] plot ({1*cos(\x)}, {1*sin(\x)}); \node[] at (1.5,0) {\tiny{$90\degree$}}; \draw (0,0)[->] -- (-45:3.5); \draw (0,0)[->] -- (45:3.5); \foreach \i in {-4,-1,1,4} { \draw (\i:3) -- (\i:3.25); } \foreach \i in {-10,-7.5,-5,-2.5,0,2.5,5,7.5,10} { \draw (\i:3) -- (\i:3.25); } \foreach \i in {-15,-20,-25,15,20,25} { \draw (\i:3) -- (\i:3.25); } \foreach \i in {-35,35} { \draw (\i:3) -- (\i:3.25); } \end{tikzpicture} \caption{Sensor density increases towards center}\label{figGradient} \end{figure} \par The three example runs below show the action of the intity based on various starting positions with respect to the target. The small dot indicates the starting point. The larger dot is the target. The initial bearing is $0\degree$, due east. Runs (a) and (b) show the bias to move to the left. All three demonstrate that once the target is detected the intity moves to the target. \begin{figure}[H] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \include*{v5demo1a-run1} \end{subfigure}\hfill% \begin{subfigure}[b]{0.45\textwidth} \centering \include*{v5demo1a-run2} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \include*{v5demo1a-run3} \end{subfigure} \caption{Sample runs with differing start/end positions} \end{figure} \subsection{Simulation 2: Learning to Avoid a `Painful' Wall} The second simulation has an intity enclosed within a square pen. The intity is free to roam randomly within the pen. If the intity hits any of the sides of the pen then it receives a painful jolt. The intity's sensors register the pain by inserting \vp{p^-} points into the context. The intity has additional sensors that create normal (i.e. neither + nor -) sensory points whenever it is in close proximity to any of the four sides. All of these sensed points are inserted into the intity's processing context.
With each processing step, the null key set is evaluated. Any result is additionally added into the context. If \ensuremath{\Sigma} \ goes negative (from \vp{p^-} points) then the intity immediately reverses direction (i.e. when it senses pain it reverses). \par In the two runs below, the intity starts in the middle of the pen facing north. Both figures below show the intity's path over 5000 moves. There is no learning in the first run (figure~\ref{v5demo2-1}) and the intity repeatedly hits the boundaries. In the second run (figure~\ref{v5demo2-2}) it learns by binding the sensory points (proximity to a boundary) with pain points whenever \ensuremath{\Delta} \ goes negative. The intity then recognizes pain before experiencing it and can reverse itself before hitting a boundary. \begin{figure}[H] \centering \begin{subfigure}[b]{0.45\textwidth} \include*{v5demo2-1} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \include*{v5demo2-2} \end{subfigure} \caption{Sample runs with and without learning} \end{figure} \section{Thoughts, Language and Meaning}\label{secThoughtsLanMean} The CEM has now been introduced, and many examples using the V5 engine have been presented. Mazes have been solved, songs have been sung. A Turing machine has been implemented, demonstrating that, at least in principle, any computable function can be calculated with CEM/V5. None of the examples presented so far would be considered difficult AI problems. None of them are novel and all have been `solved' with many different AI and non-AI paradigms. \par The second half of this paper is devoted to a more challenging and fundamental problem in AI: what are \textit{thoughts} and how do thoughts relate to language and meaning? This section defines thoughts within the CEM framework. Section~\ref{secThought2Lan} demonstrates a sequence for translating thoughts to language by taking a single thought and generating both English and French output sentences.
The converse problem of converting a string of words to a thought is covered in section~\ref{secLan2Thought}. Examples showing how \textit{meaning} arises from thought patterns are given in section~\ref{secconsofthought}. \subsection{Thoughts}\label{secThoughts} \begin{definition}{} A \textbf{thought}\label{def-t} is a point, \vp{t}, at the base of a multi-branch is-a tree (is-a twines) where the separate branches of the tree are the components of the thought. \end{definition} At its simplest, the thought of `x' (represented as point \vp{x}) would be point \vp{x} is-a twined to a thought point (\vp{x}<\vp{t}). More complex thoughts have multiple branches with additional points that relate the branches to each other. For example, the thought corresponding to the sentence `Sue walked her dog' is a (thought) point with two is-a branches. The first would be the point representing Sue, the second would be the point representing her dog: \vp{pSue}<\vp{t} and \vp{pDog}<\vp{t}. Additional is-a twines would relate the Sue point (\vp{pSue}) to her dog point (\vp{pDog}) as the subject and object of the verb walk: \vp{subWalk}<\vp{pSue} and \vp{objWalk}<\vp{pDog}. But these last two twines are not quite correct. \vp{pSue} is not always the subject of \vp{walk} and \vp{pDog} is not always the object. They are subject/object only for this thought. Or, to restate, only in the context of this thought. So these two twines are more properly written as \vp{subWalk}<\vp{pSue}|\vp{t} and \vp{objWalk}<\vp{pDog}|\vp{t}. \par Representing a thought as a collection of twines can be difficult to visualize, especially for complicated thoughts. However, showing a thought graphically as a \textit{tree}\label{def-tree} makes it easier to understand the structure of a thought. In a thought \textit{tree}, the nodes represent points and the branches represent is-a twines. To further simplify, specific \textit{contextual} restrictions on a twine are not shown (e.g. \vp{subWalk}<\vp{pSue} and \vp{subWalk}<\vp{pSue}|\vp{t} would have identical tree representations). A more compact \textit{fractional}\label{def-fractional} notation may also be used when representing is-a relationships. Figure~\ref{figTreeFrac} shows both the tree and fractional representations of the thought `Sue walked her dog'. \begin{figure}[H] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t$ [.\vp{pSue} \vp{subWalk} ] [.\vp{pDog} \vp{objWalk} ] ] \end{tikzpicture} \caption{Tree representation} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering $\frac{\begin{matrix} \frac{subWalk}{pSue} & \frac{objWalk}{pDog}\\ \end{matrix}}{t}$ \caption{Fractional representation} \end{subfigure} \caption{Tree and fractional representations of the thought `Sue walked her dog'}\label{figTreeFrac} \end{figure} Thoughts are points so it is possible to include one thought within another. Consider the thought `The man I knew from high school said that Fred ate the cookie'. This thought can be decomposed into two sub-thoughts within a third. Figure~\ref{figEmbedT} shows \vp{t1}, the main thought, linking the subject `man I knew from high school' (\vp{t2}) to the object `Fred ate the cookie' (\vp{t3}).
\begin{figure}[H] \begin{subfigure}[b]{0.30\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_1$ [.\vp{pMan} \vp{subSaid} $t_2$ ] [.$t_3$ \vp{objSaid} ] ] \end{tikzpicture} \caption{Main thought} \end{subfigure} \begin{subfigure}[b]{0.30\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_2$ [.\vp{pMe} \vp{subKnow} ] [.\vp{pMan} \vp{objKnow} ] [.\vp{highschool} \vp{from} ] ] \end{tikzpicture} \caption{Man I knew from high school} \end{subfigure} \begin{subfigure}[b]{0.30\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_3$ [.\vp{pFred} \vp{subAte} ] [.\vp{pCookie} \vp{objAte} ] ] \end{tikzpicture} \caption{Fred ate the cookie} \end{subfigure} \caption{Embedding thoughts within a thought}\label{figEmbedT} \end{figure} \subsection{Meaning of a Thought} The \textit{meaning} of a thought is defined as \textit{the subsequent actions taken by an intity as a consequence of the thought}. A single thought may have different meanings for different people. An individual's interpretation of a thought may change over time. Consider the thought shown in figure~\ref{figJackJill}. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_x$ [.$s_1$ \vp{Jack} \vp{Jill} $went\_L$ ] [.$s_2$ $went\_R$ $up\_R$ \vp{hill} ] ] \end{tikzpicture} \caption{Thought for `Jack and Jill went up the hill'}\label{figJackJill} \end{figure} What is the meaning to the reader? No prior mention was made as to who Jack and Jill are. There was no prior mention of hills in this paper. Yet each reader will interpret the sentence and have subsequent thoughts, for example: \begin{clist} \item Think that it's just an example to define the author's point. There is no meaning per se. \item Recall the old nursery rhyme and maybe continue it in their mind (`to fetch a pail of water'). \item Mentally picture two children jointly holding a bucket walking up a hill towards a well.
\end{clist} Adding a thought point to the context (or the PS for V5) effectively adds all of the is-a points of the thought to the context. These points as well as any other points in the context (sensory, control and internal) can be used to recognize patterns. The value of a pattern can be another thought point, the starting point of a sequence, a long term goal point, etc. The constant processing of sensory input and thoughts leading to other (new) thoughts results in a chaotic stream of points flowing through the intity. \section{Converting Thoughts to Language}\label{secThought2Lan} The conversion of a thought to a sequence of words requires bindings representing information for: \begin{clist} \item words corresponding to the various points comprising the thought (e.g. `John' for point \vp{nounJohn}); \item rules for converting a point to one or more words (e.g. the ordering of adjectives before a noun); \item rules for the overall structure of the resulting sentence (e.g. active versus passive voice); \item a sequence to accomplish the task given the above information; \item and the thought point to be converted to a sequence of words. \end{clist} The example below demonstrates one way this can be done. It is self-contained with all the bindings necessary to convert the thought representing `John has the big red ball' into an appropriate sequence of words reflecting the thought. \par Everything in the CEM is contextual. First, an English sentence is generated. Then a few more bindings are added to demonstrate contextual flexibility. The addition of point \vp{french} and associated bindings results in the thought generating a French language sentence. \par The example begins with bindings defining the value for point \vp{label} in various contexts corresponding to different points in the thought. Line~\ref{line:ttllabels} declares output labels in various contexts. For example, the value of \vp{label} given the context of \vp{nounJohn} is ``John''.
\begin{lstlisting}
twine noun<nounJohn,nounBall
twine adj<adjRed|english,adjBig
twine daLabel>"the" ; label>"John"|nounJohn ; twine label>"He"|pnounJohn ;(*\label{line:ttllabels}*)
label>"big"|adjBig ; label>"red"|adjRed ; label>"ball"|nounBall ; label>"has"|verbHas
\end{lstlisting} \begin{exDesc} The code in lines \ref{line:ttlsetup} through \ref{line:ttlsetup2} contains instructions to create the thought in figure~\ref{figJohnHasBall} of `John has the big red ball'. Line~\ref{line:ttlsetup} inserts instructions and arguments into the PS to create twines corresponding to the tree below. The branches of the tree are is-a twines. The nodes are points. Points of the form `$\#number$' are internal points created with the opNEW opcode. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.\vp{base} [.\#2c8 \vp{sub} \vp{nounJohn} ] [.\#2cc \vp{obj} \vp{defArt} \vp{adjRed} \vp{adjBig} \vp{nounBall} ] [.\#2d0 \vp{verbHas} ]] \end{tikzpicture} \caption{Tree representation of `John has the big red ball'}\label{figJohnHasBall} \end{figure} \end{exDesc} \begin{lstlisting}[firstnumber=5]
trace opTWISA
ps opNEW,rNEW,base,opTWISA,sub,rNEW,opTWISA,nounJohn,rNEW,opTWISA,(*\label{line:ttlsetup}*)
opNEW,rNEW,base,opTWISA,obj,rNEW,opTWISA,defArt,rNEW,opTWISA,adjRed,
rNEW,opTWISA,adjBig,rNEW,opTWISA,nounBall,rNEW,opTWISA,opNEW,rNEW,
base,opTWISA,verbHas,rNEW,opTWISA
run(*\label{line:ttlsetup2}*)
(*\small \textbf{opTWISA: \#44: [base.i rCTP(1)] = \#2e8}*)
(*\small \textbf{opTWISA: \#45: [\#2e8.i rCTP(1)] = sub}*)
(*\small \textbf{opTWISA: \#46: [\#2e8.i rCTP(1)] = nounJohn}*)
(*\small \textbf{opTWISA: \#47: [base.i rCTP(1)] = \#2ec}*)
(*\small \textbf{opTWISA: \#48: [\#2ec.i rCTP(1)] = obj}*)
(*\small \textbf{opTWISA: \#49: [\#2ec.i rCTP(1)] = defArt}*)
(*\small \textbf{opTWISA: \#50: [\#2ec.i rCTP(1)] = adjRed}*)
(*\small \textbf{opTWISA: \#51: [\#2ec.i rCTP(1)] = adjBig}*)
(*\small \textbf{opTWISA: \#52: [\#2ec.i rCTP(1)] = nounBall}*)
(*\small \textbf{opTWISA: \#53: [base.i rCTP(1)] = \#2f0}*)
(*\small \textbf{opTWISA: \#54: [\#2f0.i rCTP(1)] = verbHas}*)
\end{lstlisting} \begin{exDesc} The next several groupings define bindings to be used by the opRASM\footnote{See section~\ref{secASReduce} for more detail on the opRASM instruction.} instruction. Line~\ref{line:defartadj} states that a definite article (\vp{defArt}) followed by an adjective (\vp{adj}) should trigger the adding of the label of the article (\vp{daLabel.v}) to the print queue (opADDPQ). The reference to the definite article should be removed (-\vp{defArt}) with no replacement (the value list ends with the \vp{null} point). Note that the \vp{asPH} point is automatically appended to the AS with the opISATOAS instruction and is used to mark the end of a series of points. \end{exDesc} \begin{lstlisting}[firstnumber=21]
bind +10 [-defArt/adj] daLabel.v,opADDPQ,null(*\label{line:defartadj}*)
bind +10 [adj/-defArt] daLabel.v,opADDPQ,null
bind +10 [-defArt/noun] daLabel.v,opADDPQ,null
bind +10 [noun/-defArt] daLabel.v,opADDPQ,null
bind +10 [-defArt/asPH] daLabel.v,opADDPQ,null
bind +5 [-adj/noun] eoa,@label.v,adj.v,opEVAL,rEVAL,opADDPQ,null
bind +5 [noun/-adj] eoa,@label.v,adj.v,opEVAL,rEVAL,opADDPQ,null
bind +5 [-adj/asPH] eoa,@label.v,adj.v,opEVAL,rEVAL,opADDPQ,null
bind +2 [-noun/asPH] eoa,@label.v,noun.v,opEVAL,rEVAL,opADDPQ,null
bind +2 [-noun/adjpost] eoa,@label.v,noun.v,opEVAL,rEVAL,opADDPQ,null
bind +2 [adjpost/-noun] eoa,@label.v,noun.v,opEVAL,rEVAL,opADDPQ,null
bind [-adjpost/asPH] eoa,@label.v,adjpost.v,opEVAL,rEVAL,opADDPQ,null
bind [-sub/asPH] null
bind [-obj/asPH] null
\end{lstlisting} \begin{exDesc} The following twines define the sequence that converts a thought into a series of words. Line~\ref{line:speakseq} evaluates [\vp{speak}] to get the `next' step of the sequence.
The binding on line~\ref{line:speakbind} defines [\vp{speak}] in the context of the additional points (\vp{sub}, \vp{verbHas} and \vp{obj}) to have the value \vp{lsr.v} (line~\ref{line:speaklsr}). This clears the PQ (opCLRPQ), pushes the value of the subject onto the PS (\vp{sub.v}), runs the opISATOAS instruction to insert all is-a points of the subject into the AS and then runs the AS reduction instruction (opRASM). This results in the words representing the subject being appended to the PQ. The word for \vp{verbHas} is appended and then the words for the object. The last step of the sequence outputs the contents of the PQ to the console. \end{exDesc} \begin{lstlisting}[firstnumber=35]
twine lsa>opPSISAS,speak,eoa,opEVAL,rEVAL(*\label{line:speakseq}*)
twine lsr>opCLRPQ,sub.v,opISATOAS,lsr2.v(*\label{line:speaklsr}*)
twine lsr2>opRASM,lsr3,lsr3.v
twine lsr3>@label.v,verbHas,eoa,opEVAL,rEVAL,opADDPQ,lsr4.v
twine lsr4>obj.v,opISATOAS,lsr5.v
twine lsr5>opRASM,lsr6.v
twine lsr6>opOUTPQ
bind [speak sub verbHas obj] lsr.v(*\label{line:speakbind}*)
\end{lstlisting} \begin{exDesc} Starting up the sequence is done by adding the base thought (\vp{base}), the language (\vp{english}) and the starting point for the sequence (\vp{lsa.v}) to the PS. The run command starts the execution and the sentence corresponding to the thought is output. \end{exDesc} \begin{lstlisting}[firstnumber=43]
ps base,english,lsa.v
run
(*\small \textbf{PQ(3): "John" "has" "the" "big" "red" "ball"}*)
\end{lstlisting} \begin{exDesc} The remainder of the example demonstrates how a few additional bindings can be used to convert the same thought into French. Twines with the additional context point (\vp{french}) are given for the French equivalents of several word/points (lines starting at \ref{line:ttlfrenchwords}). The twine on line~\ref{line:ttlfrenchred} specifies that in the context of \vp{french} the \vp{adjRed} point is-a \vp{adjpost}.
This, in combination with the binding on line~\ref{line:ttlfrenchred2}, ensures that the word for the color red appears after the noun as is typical for color adjectives in French. \end{exDesc} \begin{lstlisting}[firstnumber=46]
twine adjpost<adjRed|french(*\label{line:ttlfrenchred}*)
twine daLabel>"la"|french(*\label{line:ttlfrenchwords}*)
twine label>"grande"|adjBig,french
twine label>"rouge"|adjRed,french
twine label>"balle"|nounBall,french
twine label>"a"|verbHas,french
bind +10 [-defArt/adjpost] daLabel.v,opADDPQ,null(*\label{line:ttlfrenchred2}*)
bind +10 [adjpost/-defArt] daLabel.v,opADDPQ,null
\end{lstlisting} \begin{exDesc} Finally, rerun the sequence but this time with \vp{french} in the context. \end{exDesc} \begin{lstlisting}[firstnumber=54]
ps base,french,lsa.v
run
(*\small \textbf{PQ(5): "John" "a" "la" "grande" "balle" "rouge"}*)
\end{lstlisting} The key points of this example are: \begin{enumerate} \item The thought is the starting point. A thought is fundamentally independent of language. \item A generating sequence is determined from the overall structure and content of the thought (line~\ref{line:speakbind}). Different sequences may be evaluated for different modes of expression (active voice versus passive voice) or any other contextual conditions. This implies that any thought can be expressed as many different sentences. \item The words for each element of the sentence are determined contextually. For example, an additional twine (\vp{label}>`Johnny'|\vp{nounJohn},\vp{diminutive}) specifies that `Johnny' instead of `John' should be output when the point \vp{diminutive} is in the context. \end{enumerate} \section{Converting Language to Thoughts}\label{secLan2Thought} Language is the medium used to communicate a thought from one intity (the speaker) to another (the listener) where the speaker and listener have different internal point structures.
A simple example is communicating a reference to an object (a particular red ball) that both the speaker and listener are familiar with. The speaker has the thought \vp{ball_s}<\vp{t_s} where \vp{ball_s} is the speaker's internal point for the ball and \vp{t_s} is the speaker's corresponding thought. The corresponding thought for the listener is \vp{ball_l}<\vp{t_l} where \vp{ball_l} is the listener's internal point for the (same) ball and \vp{t_l} is a listener's thought point. How does the speaker communicate the thought \vp{ball_l}<\vp{t_l} to the listener without having access to the listener's point \vp{ball_l}? \subsection{Language to Thought with Surrogates} The answer is that the speaker uses attributes of \vp{ball_s} to describe it in words. In this example \vp{colorRed_s}<\vp{ball_s} and \vp{shapeBall_s}<\vp{ball_s} result in the phrase `red ball'. Upon hearing this phrase, the listener converts the spoken/heard word `red' to \vp{colorRed_l} and the word `ball' to \vp{shapeBall_l} and uses these points to locate/find/deduce \vp{ball_l}. \par A new type of point is needed for this process to work in the CEM environment. \begin{definition}{} A \textbf{surrogate}\label{def-surrogate}\label{def-s} is a point (\vp{s_x}), a stand-in for another to-be-determined point. \end{definition} In figure~\ref{figXmitRedBall} the speaker's thought (\vp{t_s}) is converted to the words `red ball'. The listener, upon hearing, creates a thought (\vp{t_l}) with the surrogate point \vp{s_l}. Surrogate \vp{s_l} has two qualifying attributes linked via is-a twines (\vp{colorRed_l} and \vp{shapeBall_l}). The listener searches prior thoughts for a point having the same two points as is-a twines. A match is found with thought \vp{t'_l} with point \vp{ball_l} so surrogate \vp{s_l} is bound to point \vp{ball_l}. 
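The listener's search can be sketched in a few lines of Python (an illustrative analogue, not CEM code; the point names are hypothetical stand-ins for internal points, and plain attribute sets stand in for is-a twines):

```python
# Illustrative sketch: resolve a surrogate by searching backward through
# prior thoughts for a point whose is-a attributes cover the surrogate's.
prior_thoughts = [
    {"point": "ball_l",  "attrs": {"colorRed_l", "shapeBall_l"}},
    {"point": "apple_l", "attrs": {"colorRed_l", "shapeRound_l"}},
]

def resolve(surrogate_attrs):
    """Return the first (most recent) point whose attributes cover those
    of the surrogate, or None if no prior thought matches."""
    for thought in reversed(prior_thoughts):
        if surrogate_attrs <= thought["attrs"]:
            return thought["point"]
    return None

print(resolve({"colorRed_l", "shapeBall_l"}))   # the surrogate binds to ball_l
```

Note that the search runs from the most recent thought backward, so a surrogate described only as `red' would bind to the most recently mentioned red point.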
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.\vp{t_s} [.\vp{ball_s} \vp{colorRed_s} \vp{shapeBall_s} ] ] \node[anchor=west,text width=4cm] (note1) at (2.5,.5) { $\Longrightarrow$ }; \node[anchor=west,text width=5cm] (note1) at (2,1) { \small{`red ball'} }; \begin{scope}[xshift=6cm] \Tree [.\vp{t_l} [.\vp{s_l} \vp{colorRed_l} \vp{shapeBall_l} ] ] \end{scope} \node[anchor=west,text width=4cm] (note1) at (7.5,1) { + }; \begin{scope}[xshift=10cm] \Tree [.$t'_l$ [.\vp{ball_l} \vp{colorRed_l} \vp{shapeBall_l} ] ] \end{scope} \node[anchor=west,text width=5cm] (note2) at (12,1) { $\Longrightarrow$ \small{\vp{s_l} $\equiv$ \vp{ball_l}} }; \end{tikzpicture} \caption{Using language to communicate `red ball' from speaker to listener}\label{figXmitRedBall} \end{figure} \subsection{Resolving Surrogates} While the issue of resolving a surrogate to its corresponding point is an area of ongoing research, the simplest approach is to search backward through prior thoughts looking for a point with is-as covering the is-a attributes of the surrogate. For example, a surrogate (\vp{s_x}) having the twines \vp{rare}<\vp{s_x}, \vp{juicy}<\vp{s_x} and \vp{steak}<\vp{s_x} (rare juicy steak) would be resolved by locating another point with at least those same attributes. \par Surrogates will have is-a attributes that are not to be included in the searching. These would be syntactic attributes local to the current thought. The inclusion or exclusion of these attributes can be handled with additional points in the is-a twines. \par The is-a twines off of a surrogate constitute a logical \textit{and} condition. A `rare juicy steak' must be rare and juicy and a steak. Logical \textit{or} conditions (`chocolate or strawberry milkshake') can be easily accommodated with the addition of thought-specific twines.
In figure~\ref{figredblueball} the point \vp{csflavor} is twined to both \vp{chocolate} and \vp{strawberry} (\vp{csflavor}<\vp{chocolate}|\vp{t_2}, \vp{csflavor}<\vp{strawberry}|\vp{t_2}) within the context of the thought \vp{t_2}. The twine \vp{csflavor}<\vp{surrogate} links the surrogate to this new point. Now either a chocolate or strawberry milkshake would match the \vp{csflavor} attribute. A similar strategy is shown in figure~\ref{figredblueball} with the `red [and] blue ball' versus `red or blue ball'. \begin{figure}[H] \begin{subfigure}[b]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.t1 [.\vp{s1} \vp{red} \vp{blue} \vp{ball} ] ] \end{tikzpicture} \caption{Representation of `red blue ball'} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.t2 [.\vp{s2} \vp{rbPoint} \vp{ball} ] ] \node[anchor=west,text width=4cm] (note1) at (2,1) { where \vp{rbPoint}<\vp{red}|\vp{t2} and \vp{rbPoint}<\vp{blue}|\vp{t2} }; \end{tikzpicture} \caption{Representation of `red or blue ball'} \end{subfigure} \caption{The difference between `red blue ball' and `red or blue ball'}\label{figredblueball} \end{figure} \par The handling of definite versus indefinite articles may be as simple as resolving an indefinite surrogate to a new point rather than performing any type of search. With `You will meet a tall dark stranger' the surrogate for `a tall dark stranger' can be replaced with just a new point. Nested clauses (`the fish that got away') require searching for prior conditional thoughts. Finally, in many instances, a surrogate may never be resolved. In `Jack and Jill went up the hill', the reader has no idea who either Jack or Jill are. \subsection{Grounded versus Non-Grounded Chiral Points} The task of converting a sequence of words into a thought is primarily the process of converting those words into surrogates, all is-a twined to a thought point. 
\begin{definition}{} A \textbf{grounded}\label{def-grounded} point is a point that has an is-a twine to an external sensory point.\footnote{See \cite{cogprints3106} for more on the grounding problem.} \end{definition} The first step is to distinguish grounded points from non-grounded points. This is done with an is-a twine to the \vp{grounded} point (\vp{grounded}<\vp{p}). Nouns and adjectives are grounded. \begin{definition}{} Points that are not grounded are considered \textbf{chiral}\label{def-chiral}\footnote{The word \textit{chiral} is related to handedness (left hand, right hand).}. \end{definition} Verbs, articles, adverbs and prepositions are not grounded. These points have a related left hand point and right hand point (\vp{p\_L} and \vp{p\_R}). For example the non-grounded point \vp{verbWalk} has left and right hand counterparts ($verbWalk\_L$ and $verbWalk\_R$). The reason for this is explained in section~\ref{secconsofthought}. \par Prior examples of thoughts have not been consistent in how verbs are represented in thoughts. From this point forward, a verb will not be represented with its own is-a branch on a thought tree. The verb will be split into its left and right hand parts. The two trees in figure~\ref{fignewoldrep} show the difference. \begin{figure}[H] \begin{subfigure}[b]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.\vp{t} [.\vp{s_1} \vp{John} \vp{sub} ] [.\vp{threw} \vp{verb} ] [.\vp{s_2} \vp{ball} \vp{obj} ] ] \end{tikzpicture} \caption{Old representation} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.\vp{t'} [.\vp{s_1} \vp{John} $threw\_L$ ] [.\vp{s_2} \vp{ball} \vp{threw}\_R ]] \end{tikzpicture} \caption{New representation} \end{subfigure} \caption{New and old representations of a thought}\label{fignewoldrep} \end{figure} Note that with the new representation there is no explicit subject or object. 
These have been replaced with \vp{verb}\_L (subject side) and \vp{verb}\_R points (object side) respectively. Section~\ref{secconsofthought} will demonstrate how these new points are used. \subsection{Overview of the Language to Thought (LTT\label{def-LTT}) Process}\label{secLTT} Converting a series of words into a thought is achieved with the opLUPARSE instruction. The execution of this instruction consists of several phases. A very simple sentence, `little boy threw ball', will be used as an example in the description of this process. \begin{enumerate} \item\label{lttphase1} The first phase is to load the AS with points from the PS. This is done on a point-by-point basis until the \vp{eoa} point is detected in the PS. For each point, a determination is made as to whether or not the point is grounded. If it is grounded then a new surrogate point is created, the point taken from the PS is is-a twined to the surrogate and the surrogate is appended to the AS. If the point pulled from the PS is not grounded then the corresponding left hand and right hand chiral points are appended to the AS, not the original point. As an example, if the PS contains the points $\begin{Bmatrix}little & boy & threw & ball & eoa\end{Bmatrix}$ with \vp{little}, \vp{boy} and \vp{ball} being grounded then the AS would contain five points: $\begin{Bmatrix} \frac{little}{s_1} & \frac{boy}{s_2} & threw\_L & threw\_R & \frac{ball}{s_3} \end{Bmatrix}$. \item\label{lttphase2} The next phase is to look at all combinations of contiguous surrogates. For each combination, temporarily add the surrogates (with is-a points) to the PS and evaluate the binding [\vp{surAction}]. If the evaluation fails, go on to the next contiguous pair. If the evaluation is successful then the result should be either the opLUJOIN or opLULINK points/opcodes. If it is one of these two points then execute the instruction for the pair of points.
In this example, the first two points would be combined with the opLUJOIN instruction/point. The AS would then be $\begin{Bmatrix} \frac{little \; boy}{s_1} & threw\_L & threw\_R & \frac{ball}{s_3} \end{Bmatrix}$. \item\label{lttphase3} This phase looks to join left and right hand chiral points to adjacent surrogates. If a surrogate is followed with a left hand point then the left hand point is is-a twined to the surrogate. Similarly, if a right hand point is immediately followed with a surrogate then it is is-a twined to that surrogate unless the surrogate has already been is-a twined with a left hand point. If a right hand point is immediately followed with a left hand point then remove the left hand point from the AS. The AS after this phase contains $\begin{Bmatrix} \frac{little \; boy \; threw\_L}{s_1} & \frac{ball \; threw\_R}{s_3} \end{Bmatrix}$. \item If there are only surrogate points in the AS then continue with phase \ref{lttphase5}. If there are other (chiral) points then continue with phase \ref{lttphase2} unless phases \ref{lttphase2} and \ref{lttphase3} have already been repeated with no actions taken. In this case, remove any remaining chiral points from the AS and continue with phase \ref{lttphase5}. In our example, continue with phase \ref{lttphase5}. \item\label{lttphase4} Perform one final check for contiguous surrogates (similar to phase \ref{lttphase2}). \item\label{lttphase5} In the final phase, is-a twine each remaining surrogate point in the AS to the current thought point (register rT) and remove all points from the AS. This example ends with the thought shown in figure~\ref{figlittleboy}. The thought point (\vp{t_x}) has two surrogate points is-a twined. Surrogate \vp{s_1} representing a point described by \vp{little} and \vp{boy}, and surrogate \vp{s_3} representing a point described by \vp{ball}.
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_x$ [.$s_1$ \vp{little} \vp{boy} $threw\_L$ ] [.$s_3$ \vp{ball} $threw\_R$ ] ] \end{tikzpicture} \caption{Resulting thought tree from the sentence `little boy threw ball'}\label{figlittleboy} \end{figure} \end{enumerate} Below is a more complicated example with the sentence `The little boy threw the big red ball on the table to John':
\begin{lstlisting}
twine grounded<boy,ball,John,little,red,big,table
twine obj<boy,ball,John,objSurface
twine adjObj<little,red,big
twine objSurface<table
bind [surAction adjObj obj] opLUJOIN
bind [surAction obj objSurface on_R] opLULINK
ps the,little,boy,threw,the,big,red,ball,on,the,table,to,John,eoa
run opLUPARSE
\end{lstlisting}
The resulting thought is shown by the tree in figure~\ref{figlittleboyball}. The surrogate points created in the execution of the opLUPARSE instruction are shown as `sur\textit{n}?' where \textit{n} is a unique suffix for each surrogate. Note that the tree includes only surrogates 1, 3, 6 and 7. The other surrogates (2, 4, and 5) were merged with the opJOIN instruction. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up ] \Tree [.t [.\vp{sur1?} \vp{little} \vp{boy} \vp{threw}\_L ] [.\vp{sur3?} \vp{big} \vp{red} \vp{ball} \vp{threw}\_R [.\vp{sur6?} \vp{table} \vp{on}\_R \vp{to}\_L ] ] [.\vp{sur7?} \vp{John} \vp{to}\_R ] ] \end{tikzpicture} \caption{Excerpted parsing tree of `The little boy threw the big red ball on the table to John'}\label{figlittleboyball} \end{figure} \subsection{Limitations of this Approach} The parsing algorithm as just described is limited in its ability to parse natural language. This is intentional, as the next section (\ref{secLanLearn}) will describe how an intity can autonomously learn the bindings (rules) necessary to accomplish this level of linguistic sophistication. \par Linguistic parsing using the opRASM instruction does not have these restrictions.
See appendix~\ref{secParseopRASM} for several examples of parsing using opRASM. Merging this parsing sophistication into the LTT process is another area of ongoing research. \section{How a Language is Learned}\label{secLanLearn} How a natural language is learned is the topic of this section. A simplified subset of English consisting of nouns, adjectives, prepositions and verbs is used. Only simple noun phrases are considered; no nested clauses (e.g. `The man \textit{John saw at the park}'). Verb tenses are not considered. Even with these simplifications, language learning is a daunting task. Consider what a toddler acquires to understand this limited subset of English: \begin{clist} \item recognize and discriminate sounds into different points; \item learn to recognize visual objects (e.g. form a pattern that recognizes apples); \item learn the meaning of a word (the grounding problem) by associating sounds with physical objects (e.g. associate the spoken or written word `apple' with the visual pattern for an apple); \item learn the syntax of the language (e.g. `John ate the cookie' is not the same as `The cookie ate John'); \item learn the meaning of a thought (e.g. what does it mean to throw something, what are the consequences of throwing something). \end{clist} The parsing process used in this section is the same as that described in section~\ref{secLTT}. The process requires that certain knowledge, in the form of bindings, be available to: \begin{enumerate} \setlength\itemsep{-0.4em} \item recognize words (i.e. patterns that recognize words from spoken sounds); \item recognize grounded words (i.e. points that have is-a twines to physical descriptions); \item combine contiguous surrogate points in the AS using bindings of the form [$surAction$ $s_1$ $s_2$]; \item use the context of the resulting thought (all the is-a points linked to that thought) to find patterns resulting in consequential thoughts and/or actions.
\end{enumerate} The following four sections address these four points. In all four cases the `learning' is just a form of pattern learning as discussed in section~\ref{subseclearnpat}, albeit with different sets of pattern data. It should also be noted that although these four cases of learning are presented in a logical order, the actual learning need not be sequential. \subsection{Learning to Recognize Words} Spoken words enter the context as sensory input. Given sufficient repetition, the recognition process described in section~\ref{subseclearnpat} creates patterns that associate specific words with internal points. In this way, words are recognized. Embedded patterns (and points) are created distinguishing spoken words from other sounds, i.e. words have an is-a twine of the form \vp{spokenByPerson}<\vp{recognizedSound}. \par Visual objects are recognized with the same learning process. The only difference between learning sounds and learning visual objects is the source of the sensory points. All recognized visual points have a common embedded pattern that is recognized as point \vp{visualObj} (i.e. all visual points are is-a twined to \vp{visualObj}) and that is-a twine \vp{grounded}<\vp{visualObj}\footnote{This would be a predefined twine within the intity.} exists (i.e. all recognized visual points are grounded). \subsection{Grounding: Associating Words with Visual Objects and Attributes} \par Associating a spoken word point with its visual object point is again achieved with the pattern recognition algorithm. For the association to occur, both the word and object sensory points must reside in the PS concurrently over multiple instances (e.g. the sounds for the word `apple' and the visual image of an apple).
Given a word that is recognized with the pattern/binding [\vp{w_1} \vp{w_2} \ensuremath{\dots} \vp{w_n}]=\vp{word} and a visual object that is recognized with the binding [\vp{v_1} \vp{v_2} \ensuremath{\dots} \vp{v_{n'}}]=\vp{visObj}, then, if the word is spoken and the corresponding object is seen at the same time, the context would contain \{\vp{w_1} \vp{w_2} \ensuremath{\dots} \vp{w_n} \vp{v_1} \vp{v_2} \ensuremath{\dots} \vp{v_{n'}}\} (in addition to other points). With enough repetitions, a new pattern would be detected: [\vp{w_1} \vp{w_2} \ensuremath{\dots} \vp{w_n} \vp{v_1} \vp{v_2} \ensuremath{\dots} \vp{v_{n'}}]=\vp{wvpoint}. However, the points comprising \vp{wvpoint} have embedded sub-patterns for both the word and visual objects resulting in the twines \vp{word}:\vp{wvpoint} and \vp{visObj}:\vp{wvpoint} (section~\ref{secTPwP}). \par Now if the PS contains $\begin{Bmatrix} \vp{w_1} \ \vp{w_2} \ensuremath{\dots} \vp{w_n} \end{Bmatrix}$ then \vp{word} is recognized resulting in \vp{word}.v being added to the PS (section~\ref{secRecPat}). The evaluation of [\vp{word}.v] results in \vp{wvpoint} and since \vp{visualObj}<\vp{wvpoint} and \vp{grounded}<\vp{visualObj} the resulting point is grounded. \par Only words associated with visual objects would be grounded. Any other recognized words would have the is-a twine \vp{spokenByPerson}<\vp{otherWord}, but not be grounded. \subsection{Combining Surrogate Points via Join and Link} Once the AS is loaded with points (grounded and chiral) then only two operations are used to combine them. The V5 opJOIN instruction is used to merge two surrogate points into one surrogate by combining the is-a points of each (e.g. combine an adjective and noun). The opLULINK instruction is used to is-a twine one surrogate to another (e.g. link a prepositional phrase to a noun phrase).
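The distinction between the two operations can be illustrated with a short Python simulation (the dict-of-sets representation of a surrogate and the function names are assumptions of this sketch; they are not V5 constructs):

```python
# Illustrative model of the two surrogate-combining operations.
# A surrogate is modeled as a dict holding its is-a points ("isa") and
# any surrogates twined beneath it ("links"); this representation is an
# assumption of the sketch, not how V5 stores points.

def op_join(s1, s2):
    """Merge two surrogates into one by combining their is-a points
    (e.g. combining an adjective surrogate with a noun surrogate)."""
    return {"isa": s1["isa"] | s2["isa"], "links": s1["links"] + s2["links"]}

def op_lulink(head, dependent):
    """is-a twine one surrogate to another (e.g. attach a prepositional
    phrase surrogate to a noun-phrase surrogate)."""
    return {"isa": set(head["isa"]), "links": head["links"] + [dependent]}

red = {"isa": {"red"}, "links": []}
apple = {"isa": {"apple"}, "links": []}
on_table = {"isa": {"on_R", "table"}, "links": []}

red_apple = op_join(red, apple)          # single surrogate {red, apple}
phrase = op_lulink(red_apple, on_table)  # `red apple on the table'
print(sorted(phrase["isa"]))             # ['apple', 'red']
print(len(phrase["links"]))              # 1
```

After the join, the `red apple' surrogate resolves on both points, while the prepositional phrase remains a separate surrogate linked beneath it.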
This section describes how an intity learns to join/link two surrogates together as in: \begin{equation*} \begin{Bmatrix}\frac{big}{s_1} \ \frac{red}{s_2} \ \frac{apple}{s_3}\end{Bmatrix} \Longrightarrow \begin{Bmatrix}\frac{big \; red \; apple}{s_1}\end{Bmatrix} \end{equation*} with bindings of the form [\vp{surAction} \vp{p_1} \vp{p_2} \ensuremath{\dots}] = opLUJOIN. Initially an intity has no [\vp{surAction}] bindings. During the LTT process (section \ref{secLTT}), for each pair of contiguous surrogate points, the intity attempts to evaluate [\vp{surAction}]. If it fails, it looks at recent thoughts for points similar to the two surrogates in question. For example, if the two surrogates are $\frac{big}{s_1}$ and $\frac{red}{s_2}$ then look through past thoughts (\vp{t} points) for is-a points having either the \vp{big} point or \vp{red} point. If the intity had recently noticed or thought of `big red apple' then there would be a thought with the branch $\frac{big \; red \; apple}{p_x}$. Both the \vp{big} point and the \vp{red} point have a common parent (in this case \vp{p_x}). The intity would then proceed to join the two surrogates in the AS as part of the LTT process. Additionally, the intity would save all the is-a points of the two LTT surrogates in a buffer for a future recognition run. When this buffer reaches a certain number of entries or a certain amount of time has passed, the pattern learning process (section \ref{subseclearnpat}) runs. If a pattern is detected then the recognized points are used to create a binding of the form [\vp{surAction} \vp{r_1} \vp{r_2} \ensuremath{\dots} \vp{r_n}]=opLUJOIN where \vp{r_i} are the recognized points. In this way the intity learns to join adjectives and nouns to form a single surrogate.
\par As an example, if the surrogate points $\frac{red}{s_1}$ and $\frac{apple}{s_2}$ (corresponding to the grounded words `red' and `apple') are repeatedly encountered in the AS and repeatedly combined into a single surrogate then multiple sets of \{\vp{red},\vp{apple},opLUJOIN\} would be appended to the pattern learning buffer. During the pattern learning phase this set of points would be recognized as a pattern and the binding [\vp{surAction} \vp{red} \vp{apple}]=\vp{opLUJOIN} would be created. \par Success requires that the intity has a (previous) thought matching the spoken words. In other words, it must have noticed a `big red ball' prior to hearing the words `big', `red' and `ball'. As the intity learns more about the world it develops more sophisticated is-a twines and the patterns recognized for the surrogate action become more generalized. \par Learning when to apply the opLULINK operation is handled in a similar fashion. In this case the reference thought is not a single point but two points with one an is-a of the other. The learning processes are identical; the only difference is the use of the opLULINK instruction instead of the opJOIN instruction. \par The phrase `ball on table' maps into two surrogate points in the AS: \{$\frac{ball}{s_1},\frac{on \ table}{s_2}$\}. If the evaluation of [$surAction$] in the context of the two surrogates fails then a search of prior thoughts is performed looking for a match on the is-a points. Assuming the ball is perceived to be on the table then the thought $\frac{\frac{ball \ \frac{on \ p_{table}}{p_x}}{p_{ball}}}{t}$ exists. Matches would be found between the \vp{ball} and \vp{table} resulting in the set \{\vp{ball},\vp{table},\vp{on},\vp{opLULINK}\}. With enough repetitions a pattern would be found resulting in the binding [\vp{surAction} \vp{ball} \vp{table} \vp{on}] = \vp{opLULINK}.
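The buffer-and-threshold learning just described can be sketched in Python (the buffer format, the fixed repetition threshold, and the opcode strings are assumptions of this illustrative sketch, not the actual V5 pattern learner of section~\ref{subseclearnpat}):

```python
from collections import Counter

# Sketch of how [surAction ...] bindings could emerge from repetition.
# Each buffer entry pairs the set of points observed when two contiguous
# surrogates were successfully combined with the opcode that combined
# them. Point sets that recur at least `threshold` times become bindings.

def learn_sur_actions(buffer, threshold=3):
    counts = Counter(frozenset(points) for points, _ in buffer)
    actions = {frozenset(points): op for points, op in buffer}
    return {key: actions[key] for key, n in counts.items() if n >= threshold}

buffer = [({"red", "apple"}, "opLUJOIN")] * 3 \
       + [({"ball", "table", "on"}, "opLULINK")] * 3 \
       + [({"green", "sky"}, "opLUJOIN")]          # one-off: no pattern

bindings = learn_sur_actions(buffer)
print(bindings[frozenset({"red", "apple"})])       # opLUJOIN
print(frozenset({"green", "sky"}) in bindings)     # False
```

Only the recurring point sets survive, mirroring how the intity learns `[\vp{surAction} \vp{red} \vp{apple}]' but forms no binding for a pairing seen once.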
\subsection{Meaning / Consequences of a Thought}\label{secconsofthought} The \textit{meaning} of a thought is defined as the effect that thought has on subsequent actions and thoughts. How those subsequent actions/thoughts arise from a given thought is presented in this section. A general overview of the steps required is: \begin{indent1} \begin{itemize} \item Look at pairs of thoughts that were both created within a specific time interval. \item Create a set of points that relate the before and after points from the two thoughts and save these sets in a reserved pattern learning buffer. \item After a certain number of sets have been created (or a specified amount of time has passed) perform the pattern learning process (section~\ref{subseclearnpat}). \item If a pattern is found, use the matched points to create another twine of the form [\vp{cons}.v \vp{b_1} \vp{b_2} \ensuremath{\dots}]=\vp{consequences}, where \vp{b_x} are trigger points from the initial (before) thought and \vp{consequences} are a series of points to create a consequential thought or initiate a consequential action/sequence. \end{itemize} \end{indent1} The encoding of the before and after points is the key to making this a simple process of pattern learning. \subsubsection{Action Verbs} Figure~\ref{fig2thoughtscommon} shows two thoughts. The first corresponds to `John threw the ball to Bob'. The second (subsequent) thought corresponds to `Bob has the ball'. These two thoughts are related by two pairs of surrogate points. Surrogates \vp{s_2} and \vp{s_5} both resolve to `the ball' and surrogates \vp{s_3} and \vp{s_4} resolve to `Bob'. For each of these pairs of points, a binding is created with a new link point as the value. For `Bob' this becomes [\vp{to}\_R \vp{has}\_L]=\vp{link1}. For the `ball' it is [\vp{threw}\_R \vp{has}\_R]=\vp{link2}. Additional twines are created so that given either \vp{link1} or \vp{link2} the cause and effect points can be determined.
This is reflected in lines~\ref{line:caex1a} through \ref{line:caex1b} in the example below. An additional link point is created for the surrogate point \vp{s_1}. Since this surrogate point is not related to any point in the second thought, the binding contains all of the surrogate is-a twine points. Again, an additional twine is created for just the \textit{cause} since there is no corresponding effect side. Lines~\ref{line:caex1c} and \ref{line:caex1c2} show this third link. The pattern data generated by this pair of thoughts would be $\begin{Bmatrix} \vp{link1} \ \vp{link2} \ \vp{link3} \end{Bmatrix}$. \par Now consider a similar pair of thoughts: `Mary threw the bone to Fido' and `Fido has the bone'. In this case `bone' and `Fido' are the common points. The links generated for this pair of thoughts would be \vp{link1}, \vp{link2} and \vp{link4}. The links \vp{link1} and \vp{link2} would be identical to the first pair because the binding key sets [\vp{to}\_R \vp{has}\_L] and [\vp{threw}\_R \vp{has}\_R] are identical. But `John' is not the same as `Mary' so while [\vp{John} \vp{threw}\_L]=\vp{link3}, [\vp{Mary} \vp{threw}\_L]=\vp{link4}. The pattern data for this pair of thoughts would be $\begin{Bmatrix} \vp{link1} \ \vp{link2} \ \vp{link4} \end{Bmatrix}$. \par Given a sufficient number of thought pairs, the standard pattern matching algorithm would find the pattern \{\vp{link1},\vp{link2}\}. But rather than bind the pattern to a recognition point, a consequence would be created from the cause/effect is-a points of the pattern links. In this example (line~\ref{line:caex1d}) the trigger points are $to\_R$ and $threw\_R$ and the consequence is to create a new thought relating a \textit{value} to an \textit{attribute}.
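The link-point encoding above can be sketched in a few lines of Python (an illustrative simulation; representing a thought as a set of (point, chiral-role) pairs is an assumption of this sketch):

```python
# Sketch of how before/after thought pairs yield recurring "link" keys.
# A thought is modeled as a set of (point, chiral-role) pairs; when a
# point appears in both thoughts, the link key contains only the two
# roles, so the specific noun drops out and the pattern can recur.

def links_for_pair(before, after):
    links = set()
    for point_b, role_b in before:
        for point_a, role_a in after:
            if point_b == point_a:            # common point in both thoughts
                links.add((role_b, role_a))   # e.g. (threw_R, has_R)
    return links

# `John threw the ball to Bob' => `Bob has the ball'
pair1 = links_for_pair({("John", "threw_L"), ("ball", "threw_R"), ("Bob", "to_R")},
                       {("Bob", "has_L"), ("ball", "has_R")})
# `Mary threw the bone to Fido' => `Fido has the bone'
pair2 = links_for_pair({("Mary", "threw_L"), ("bone", "threw_R"), ("Fido", "to_R")},
                       {("Fido", "has_L"), ("bone", "has_R")})

# The recurring link set is the candidate trigger for a consequence.
pattern = pair1 & pair2
print(sorted(pattern))   # [('threw_R', 'has_R'), ('to_R', 'has_L')]
```

Points unique to one pair (the link3/link4 cases) fail to recur, so only the role-level links survive as the trigger pattern for the consequence.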
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up] \centering \Tree [.$t_{m1}$ [.$s_1$ \vp{John} $threw\_L$ ] [.$s_2$ \node(ball1){\vp{ball}}; $threw\_R$ ] [.$s_3$ $to\_R$ \node(bob1){\vp{Bob}}; ] ] \node[anchor=west,text width=4cm] (note1) at (4,.5) { $\Longrightarrow$ }; \begin{scope}[xshift=8cm] \Tree [.$t_{m2}$ [.$s_4$ \node(bob2){\vp{Bob}}; $has\_L$ ] [.$s_5$ \node(ball2){\vp{ball}}; $has\_R$ ] ] \end{scope} \draw[semithick, <-> ] (bob1) to [bend left=90] (bob2); \draw[semithick, <-> ] (ball1) to [bend left=90] (ball2); \end{tikzpicture} \caption{Two thoughts related by common points ($m1 < m2$)}\label{fig2thoughtscommon} \end{figure} \vspace{-15pt} \centerline{(Note: in the above figure the arrows linking Bob and ball point to the surrogate nodes, not to leaf points.)}
\begin{lstlisting}
bind [to_R has_L] link1(*\label{line:caex1a}*)
twine to_R<link1|cause ; has_L<link1|effect
bind [threw_R has_R] link2
twine threw_R<link2|cause ; has_R<link2|effect(*\label{line:caex1b}*)
bind [threw_L John] link3(*\label{line:caex1c}*)
twine threw_L,John<link3|cause(*\label{line:caex1c2}*)
twine cons>to_R.v,has_L,eoa,opSURISA,threw_R.v,has_R,eoa,opSURISA | to_R,threw_R(*\label{line:caex1d}*)
\end{lstlisting}
\begin{exDesc} The consequence defined on line~\ref{line:caex1d} is tested with the thought representing the sentence `Jill threw the keychain to Helen' (line~\ref{line:caex1e}). The thought \vp{t1} is added to the PS (context). Then \vp{cons.v} is added and evaluated. The results are the twines shown below (lines~\ref{line:exHHB1} through \ref{line:exHHB2}) corresponding to figure~\ref{figConsKeychainHelen}; i.e. Helen has the keychain.
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_2$ [.$s_{2b4}$ \vp{Helen} $has\_L$ ] [.$s_{2b8}$ \vp{keychain} $has\_R$ ] ] \end{tikzpicture} \caption{Consequence of throwing a keychain to Helen}\label{figConsKeychainHelen} \end{figure} \end{exDesc}
\begin{lstlisting}[firstnumber=9]
twine Jill,keychain,Helen<t1(*\label{line:caex1e}*)
twine threw_L<Jill ; threw_R<keychain ; to_R<Helen
ps t1,opPSISAS,cons.v
trace bind
run
(*\small \textbf{bind: \#48: [\#2b4?.i] = Helen}\label{line:exHHB1}*)
(*\small \textbf{bind: \#49: [\#2b4?.i] = has\_L}*)
(*\small \textbf{bind: \#50: [*T*(2).i] = \#2b4?}*)
(*\small \textbf{bind: \#51: [\#2b8?.i] = keychain}*)
(*\small \textbf{bind: \#52: [\#2b8?.i] = has\_R}*)
(*\small \textbf{bind: \#53: [*T*(2).i] = \#2b8?}\label{line:exHHB2}*)
\end{lstlisting}
\begin{exDesc} The next example shows how a consequence that is not based solely on linked nodes can be derived. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up] \centering \Tree [.$t_{m1}$ [.$s_1$ \vp{John} $threw\_L$ ] [.$s_2$ \node(ball1){\vp{ball}}; $threw\_R$ ] [.$s_3$ $to\_R$ \node(bob1){\vp{Bob}}; ] ] \node[anchor=west,text width=4cm] (note1) at (4,.5) { $\Longrightarrow$ }; \begin{scope}[xshift=8cm] \Tree [.$t_{m3}$ [.$s_6$ \node(ball2){\vp{ball}}; $flew\_L$ ] [.$s_7$ $through\_R$ \vp{air} ] ] \end{scope} \draw[semithick, <-> ] (ball1) to [bend left=90] (ball2); \end{tikzpicture} \caption{Example of two thoughts related by a common point ($m1 < m3$)} \end{figure} \end{exDesc} The first thought is identical to that of the previous example. The second thought is `The ball flew through the air'. These two thoughts are inter-linked only once, with the ball. This creates link point \vp{linka}. The other points are created from the other nodes: \vp{link3} would be the same as in the above example; \vp{linkc} and \vp{linkd} would be different points.
The pattern data from this pair of thoughts is $\begin{Bmatrix} \vp{linka}, \ \vp{link3}, \ \vp{linkc}, \ \vp{linkd} \end{Bmatrix}$. Other similar pairs of thoughts (`Harry threw the stick to his dog' and `The stick flew through the air') would generate pattern data duplicating \vp{linka} and \vp{linkd} but other link points would vary. The pattern would be detected and the consequence (line~\ref{line:caex2a}) would be created.
\begin{lstlisting}[firstnumber=1]
bind [threw_R flew_L] linka
twine threw_R<linka|cause ; flew_L<linka|effect
bind [threw_L John] link3
twine threw_L,John<link3|cause
bind [to_R Bob] linkc
twine to_R,Bob<linkc|cause
bind [through_R air] linkd
twine through_R,air<linkd|effect
twine cons>threw_R.v,flew_L,eoa,opSURISA,through_R,air,eoa,opSURISA | threw_R(*\label{line:caex2a}*)
\end{lstlisting}
\begin{exDesc} A new thought corresponding to `Harry threw the rock' (thought \vp{t2} on line~\ref{line:caex2b}) can now be tested. The resulting (consequential) thought, `the rock flew through the air', is shown in figure~\ref{figConsThrowRock} and in the bindings starting at line~\ref{line:caex2c}. \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_3$ [.$s_{2e8}$ \vp{rock} $flew\_L$ ] [.$s_{2ec}$ \vp{air} $through\_R$ ] ] \end{tikzpicture} \caption{Consequence of throwing a rock}\label{figConsThrowRock} \end{figure} \end{exDesc}
\begin{lstlisting}[firstnumber=11]
twine Harry,rock<t2 ; threw_L<Harry ; threw_R<rock(*\label{line:caex2b}*)
ps t2,opPSISAS,cons.v
trace bind
run
(*\small \textbf{bind: \#71: [\#2e8?.i] = rock}\label{line:caex2c}*)
(*\small \textbf{bind: \#72: [\#2e8?.i] = flew\_L}*)
(*\small \textbf{bind: \#73: [*T*(3).i] = \#2e8?}*)
(*\small \textbf{bind: \#74: [\#2ec?.i] = through\_R}*)
(*\small \textbf{bind: \#75: [\#2ec?.i] = air}*)
(*\small \textbf{bind: \#76: [*T*(3).i] = \#2ec?}*)
\end{lstlisting}
\subsubsection{Copula (To Be)} In our simplified language, the sentence `\vp{x} is \vp{y}' (e.g.
`the ball is red') is very similar to the noun phrase `the red ball' in that the thoughts generated by each would reduce to the same tree structure as shown in figure~\ref{fig2RedBalls}. \begin{figure}[H] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_1$ [.$s_1$ \vp{red} \vp{ball} ] ] \end{tikzpicture} \caption{`The red ball'} \end{subfigure} \begin{subfigure}[b]{0.6\textwidth} \centering \begin{tikzpicture}[scale=0.8, grow'=up] \Tree [.$t_2$ [.$s_2$ \vp{ball} $is\_L$ ] [.$s_3$ \vp{red} $is\_R$ ] ] \node[anchor=west,text width=4cm] (note1) at (4,.5) { $\Longrightarrow$ }; \begin{scope}[xshift=8cm] \Tree [.$t_2$ [.$s_2$ \vp{ball} $is\_L$ \vp{red} $is\_R$ ] ] \end{scope} \end{tikzpicture} \caption{Merging of surrogates in `\vp{x} is \vp{y}' thought} \end{subfigure} \caption{Similarity between `the red ball' and `the ball is red'}\label{fig2RedBalls} \end{figure} \par There is a subtle semantic difference between `the red ball' and `the ball is red'. In the first phrase, a single surrogate has both the `red' and `ball' is-a links. Resolution of the surrogate would require matching on both points. In the second phrase, the subject surrogate only has the \vp{ball} point. Resolution requires only that the target point also have the \vp{ball} point. After resolution, the additional \vp{red} point would be twined. There is an unresolved timing issue with the phrase `The ball is red': what happens if the \vp{ball} surrogate is not resolved when the \vp{red} point is is-a twined? \section{Conclusions} This paper introduced the contextual evaluation model for representing knowledge. The model is conceptually simple, consisting of only four elements: the point, the key set, the binding and the context. Contextual evaluation is the only operation. A single simple algorithm is used for learning patterns. Despite the simplicity of the model, it is shown to be Turing complete.
More importantly, many examples demonstrate how the model performs a range of artificial intelligence tasks from the simple (learning a maze) to the difficult (learning a natural language). It is the hope of the author that this work will interest others in investigating the potential of the contextual evaluation model. \par The major contributions of this paper are: \begin{itemize} \item The CEM - the simplicity of the model and its ability to integrate internal, sensory and control data (points) into a contextual framework of facts, patterns and sequences. \item V5 - proof that the CEM is viable and that with relatively few primitive instructions ($\approx$ 25) much can be accomplished. \item The incorporation of time - how time naturally fits into the CEM model. \item Patterns as bindings - how the CEM binding is used for pattern matching; how points of different modalities (e.g. sound, smell, vision, internal) can be incorporated into a single pattern and how focus points can be used to implement focus and direct \textit{attention}. \item The \textit{thought} - defined as a collection of is-a twines linked to a base thought point; using sequences to convert thoughts to natural language utterances; converting natural language utterances to thoughts. \item Meaning - the \textit{meaning} of a thought, not as anything inherent in the thought but as the subsequent actions of an intity as a consequence of the thought. \item Motivation - the \vp{p^+} and \vp{p^-} points to represent good/pleasure and bad/pain and the effect these points have on the operation of the V5 engine. \item Natural Language Acquisition - that each of the steps in autonomous language acquisition is accomplished with the simple tools of this model. \end{itemize} \subsection{Major Unanswered Questions} The CEM provides a concise methodology for handling many disparate AI situations. The CEM and V5 are works in progress and much remains to be resolved: \begin{itemize} \item How are sense points encoded?
The CEM assumes that all senses are encoded such that pattern learning and pattern recognition are possible using the techniques discussed in section~\ref{secPatterns}. Could it be that a neural-network pre-processor is required to convert raw sensory data into a point based format? \item How would the V5 engine scale to perform useful tasks? Despite decades of experience working with an earlier contextual engine, the effort required to scale up the V5 engine is unknown. \item Would the language parsing and generation techniques scale up to handle a more complete language subset? This paper describes only the simplest forms of language parsing and generation for English. Can additional rules (bindings) be developed to handle complex English? Would similar rules or completely different rules be required for the majority of languages where syntax is based not on word position but on word endings? \item How are surrogate points resolved? Only the simplest rules for surrogate resolution were given in this paper. How should surrogates that are qualified with multiple points and thoughts be resolved? When should surrogates be resolved? Once resolved, should they continue to be re-resolved? \item Does the CEM reflect, at some level, how brains actually operate? And if so, then how did it evolve? What can be inferred from the similarities between viewing the E\textsubscript{C} engine as a set of rules (bindings) operating within a constantly changing context and a living cell having a set of rules (DNA/RNA) operating within a constantly changing environment (context)? \item How are sequences learned? The methods described in section~\ref{secconsofthought} for learning consequences of a thought would be one approach, albeit slow and inefficient for any sequence consisting of more than a few points. Another possible approach is to have a basic sequence scaffolding or framework on which points of a sequence could be twined.
\end{itemize} \par As with everything else in science and technology, the ideas presented in this paper stand on the shoulders of many others in a diverse range of AI endeavors. The following sections of the conclusion point out some of the many areas of prior research utilized in developing the CEM and V5 engine. \subsection{Other Models of Knowledge Representation} Knowledge representation (KR) is the representation of information for solving non-trivial problems with a capability similar to that of a human. First order logic has been a key component of KR systems since the late 1950s with the IPL/GPS\cite{gps} programs. IPL introduced the concept of the \textit{list}. McCarthy expanded on that concept, added ideas from lambda calculus and developed LISP\cite{lisp,lisp2}. \par First order logic, while mathematically rigorous, was found to be difficult to apply to many real world situations. Other KR models moved from strictly declarative structures to formats allowing for procedural and frame-based representations\cite{planner,wiki:Prolog,frame,kl-one}. Larger, ongoing KR projects have expanded on the idea of semantic webs\cite{wiki:Cyc,cycSyntax,owl}. \subsection{Comparison of the CEM Context with Other Contextual Models} \par Intensional logic is a group of formal systems allowing expressions or statements whose values depend on hidden contexts. It was originally developed to help understand the contextual nature of natural languages. In this logic, the extension of an expression is its value in a specific context. The intension of an expression is the function that maps each context to the value of the expression within that context\cite{wadge}. \par Lucid is a programming language developed in this formalism\cite{lucid}. Early versions of the language supported only time and space as the context. Current versions permit the users to define their own dimensions.
In Lucid, the context and the expressions are separate, while in the CEM, the context consists of the same points as the other components of the model. The expressions within Lucid are based on variables, functions, constants, and operations. These can be assembled into programs. The CEM has no explicit variables, functions, or constants, just bindings. This is not to say that the equivalent of variables, functions and constants cannot be implemented with the CEM, but that there is nothing intrinsic to the model implementing these concepts. There is no concept of a program within the CEM, yet sophisticated programming can be achieved through contextual sequences. \par McCarthy and later Guha\cite{guha} attempted to define context through the $ist(c,p)$ relation that asserts that $p$ is true in the context of $c$. In this model, \enquote{Contexts are abstract objects. We don't offer a definition}.\cite{mccarthy} The propositions, p's, are expressions based on predicate logic. As an example, McCarthy gives the following: \begin{equation} ist(context-of(`Sherlock Holmes Stories'), `Holmes is a detective') \end{equation} that states, in the context of the Sherlock Holmes stories, the character Holmes is a detective. In another context, Holmes might be a short order cook or somebody's dog. \par Lenat further extends this thinking in the Cyc project by defining contexts within a 12-dimensional space. As with McCarthy and Guha, Cyc facts and propositions are stated within a context. There does not appear to be any overlap between context and proposition. A comparison of context models based on (or derived from) the McCarthy model can be found in Akman\cite{akman}. Another review of contextual models can be found in Serafini\cite{serafini}. \subsection{V5 Language and Engine} V5 has its roots in the V4 language\cite{patent:6470490, v4Ref}. V4 was started in the mid-1990s and is currently used as a production language for web-based data analysis.
V4 is based on points, bindings and contextual evaluations. It differs from V5 in several significant ways: \begin{enumerate} \item In V4, every point belongs to a \textit{dimension}. Dimensions are typed and supported types include integer, floating point, logical, string, date-time and dictionary. A point is specified as \textit{dimension:value} as in date:23-Apr-2015. \item V4 supports \textit{modules} that perform typical language functions such as string manipulation, mathematical and statistical analysis and input/output. \item The V4 context is `stacked' meaning that a new stack \textit{frame} is created with each execution frame. Points inserted into a frame take precedence over points in prior frames. Context frames are removed when the execution frame is exited. Recursive binding definitions are thus supported. The factorial function for integers greater than zero could be defined as shown in (\ref{equV4Fact})\footnote{With some syntactic sugaring, points on the NId dimension (named identifier dimension) can be written without the dimension prefix (e.g. NId:factorial $\equiv$ factorial). Similarly, integers can be written without the Int dimension prefix (e.g. Int:1 $\equiv$ 1). A dimension with an asterisk suffix references the value of that dimension in the context. Arithmetic expressions are enclosed in braces. If the evaluation of a key set fails and the key set is followed with a comma and another point then that point is taken as the value (e.g. evaluating [factorial 0],1 returns 1).}. \begin{equation}\label{equV4Fact}\text{[factorial Int:>0] = \{Int* * [factorial \{Int* - 1\}],1\}}\end{equation} \item V4 may be programmed much like any other functional language. The binding in figure~\ref{figHelloWorld}, when evaluated, generates HTML to display `Hello World'. 
\begin{figure}[H] \centering \begin{subfigure}[b]{0.4\textwidth} \begin{lstlisting} [HelloWorld] = Do(XML::html XML::body Echo(XML::h1 "Hello World") ) \end{lstlisting} \caption{V4 code to generate `Hello World' HTML page} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \begin{lstlisting}[numbers=none] <html> <body> <h1>Hello World</h1> </body> </html> \end{lstlisting} \caption{HTML generated by [HelloWorld] binding} \end{subfigure} \caption{Hello World example}\label{figHelloWorld} \end{figure} \end{enumerate} What V4 has demonstrated over the years is that an efficient algorithm for contextual evaluations exists and that the binding/contextual evaluation paradigm is a practical tool for real-world problems. The author is not aware of any work on contextual sequences other than his own\cite{hansen:isic:98}. \subsection{Language, Thought and Meaning} In this paper, a \textit{thought} is defined in section~\ref{secThoughts} as a point with is-a twines linked to it. While the example thoughts presented throughout this paper all had points with English labels, the thought itself is not tied to any language. Several sections of this paper are devoted to mapping from a thought to a natural language utterance and from an utterance to a thought. This raises the question, `Is language necessary for thought?' There are two philosophical schools on this issue. Lingualism, most notably the Sapir-Whorf hypothesis, claims that language, to a greater or lesser degree, determines thoughts\cite{wiki:Linguistic_relativity}. Another school, the `language of thought hypothesis' (LOTH)\cite{fodor,wiki:Language_of_thought_hypothesis}, claims that thought is independent of language. The author tends more towards the LOTH school, not because of any philosophical reasons but because of empirical studies showing that people with aphasia (an inability to use language) still possess other mental faculties\cite{fedorenko}. 
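The stacked-context evaluation described in item 3 above, together with the factorial binding in (\ref{equV4Fact}), can be mimicked in a short Python sketch. The class and function names here are illustrative only and are not part of V4 itself; this is a rough analogue of the frame-stacking semantics, not the V4 engine:

```python
# Hypothetical sketch of V4-style stacked contextual evaluation.
# Points in newer frames shadow those in older frames, and a frame
# is removed when its execution frame exits.

class Context:
    """A stack of frames mapping dimensions to values."""
    def __init__(self):
        self.frames = [{}]

    def push(self, points):
        self.frames.append(dict(points))

    def pop(self):
        self.frames.pop()

    def lookup(self, dimension):
        # Newest frame takes precedence, as in the V4 context stack.
        for frame in reversed(self.frames):
            if dimension in frame:
                return frame[dimension]
        raise KeyError(dimension)

def factorial_binding(ctx):
    """Analogue of [factorial Int:>0] = {Int* * [factorial {Int* - 1}],1}."""
    n = ctx.lookup("Int")
    if n <= 0:                     # the ',1' fallback when the key set fails
        return 1
    ctx.push({"Int": n - 1})       # new frame for the recursive evaluation
    try:
        inner = factorial_binding(ctx)
    finally:
        ctx.pop()                  # frame removed on exit
    return n * inner

ctx = Context()
ctx.push({"Int": 5})
print(factorial_binding(ctx))      # 120
```

The `try/finally` is just Python bookkeeping for the "frames are removed when the execution frame is exited" rule; V4 does this implicitly.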
\par There are any number of theories related to the \textit{meaning} of language\cite{wiki:Meaning-philosophy-of-language}. In this paper \textit{meaning} arises from the consequences of the thought. A thought, in and of itself, does not have meaning. Meaning is an ongoing process, not a static attribute or quality. Searle in his book \textit{The Language of Thought} makes a somewhat similar argument\cite[p. 120]{searle}: \blockquote{\ensuremath{\dots} I shall argue that for a large number of cases the notion of literal meaning of a sentence only has application relative to a set of background assumptions, and furthermore these background assumptions are not all and could not all be realized in the semantic structure of the sentence in the way that presupposition and indexically dependent elements of the sentence's truth conditions are realized in the semantic structure of the sentence.} \par The language learning process presented in this paper differs from many of the existing models\cite{frank,wintner} in several respects. First, it assumes no a priori knowledge and no external `guidance'. Secondly, it spans the learning process from distinguishing words from other sounds to learning the meanings of utterances (as mapped to thoughts). And thirdly, it does so in a logical, easily explained way using no special techniques other than bindings and pattern learning. \subsection{Motivation} The point types \vp{p^+} and \vp{p^-} are used to differentiate good/pleasure and bad/pain points from other points. These +/- points impact the operation of an intity in such a way as to maximize \ensuremath{\Sigma}, thus favoring good over bad. Given that the intity reacts appropriately to the +/- points, does the intity \textit{feel} the points? Others have asked this very question. For Dennett it is the notion of qualia\cite{dennett}. For Chalmers it is the discussion of philosophical zombies\cite{wiki:Philosophical_zombie}. 
This question is best left for the philosophers. \par Emotions are an important component of human-level intelligence. Thirty years ago Minsky recognized the importance of emotions: \enquote{The question is not whether intelligent machines can have any emotions but whether machines can be intelligent without any emotions.}\cite[p.163]{minsky} Picard writes in her book \textit{Affective Computing}\cite{picard}: \enquote{The latest scientific findings indicate that emotions play an essential role in rational decision making, perception, learning, and a variety of other cognitive functions}. Do the +/- points have any relationship to emotions? Could \vp{p^+} points unrelated to senses (i.e. not associated with a sensory experience) be related to the feelings of happiness or contentment? Similarly, could \vp{p^-} points unrelated to senses be the root of fear and/or anxiety? Recall that the intity is constantly matching patterns in the PS. What if the PS is suddenly swamped with a novel set of points and is unable to recognize anything in the set? Would it momentarily \textit{freeze up} in a way that is similar to our reaction of surprise? \vspace{1cm} \par This paper is all about models: modeling knowledge using points and bindings, modeling thoughts as trees of is-a twines, modeling language as a medium for communicating thoughts, modeling meaning as a process, modeling motivation with \vp{p^{+/-}} points. Are these models valid? Might they reflect how we actually think? To paraphrase George Box\cite{box}, all models are wrong, but some are useful. My hope is that some will find this paper useful. \bibliographystyle{acm}
\section{Hybrid scheme renormalization} \label{app:ren} \subsection{Definition of scheme} \label{app:def} As has been described in the main text, the hybrid scheme renormalization includes two parts: \begin{itemize} \item For $z\le z_S$, we form the ratio of bare matrix elements~\cite{Orginos:2017kos}, \begin{align}\label{eq:ratios} \frac{\tilde h(z,P^z,a)}{\tilde h(z,0,a)}\,, \end{align} which has a well-defined continuum limit and is renormalization group (RG) invariant. \item For $z\ge z_S$, the renormalized matrix element is \begin{align} e^{\delta m(a) |z-z_S|}{\tilde h(z,P^z,a) \over \tilde h(z_S,0,a)} \,, \end{align} which is equal to the ratio in \eq{ratios} at $z=z_S$. To determine $\delta m(a)$ we use the additive renormalization constant, $c_Q(a)=\delta m(a)$, which is obtained in Ref.~\cite{Bazavov:2018wmo} from the analysis of the free energy of a static quark, $F_Q(T)$, at non-zero temperature $T$ with the normalization condition in \eq{pot}. Recently $F_Q$ has been calculated using one step of HYP smearing \cite{Petreczky:2021mef}, and it was found that HYP smearing does not affect the temperature dependence of $F_Q(T)$, but only shifts it by an additive constant. Therefore, we have $F_Q^{B,1}(T)+\delta m(a)=F_Q^{B,0}(T)+c_Q(a)$ with superscripts 0 and 1 referring to the number of HYP smearing steps in the bare free energy of the static quark. Using the lattice results for $F_Q^{B,0}(T)$ and $F_Q^{B,1}(T)$ obtained on $N_{\tau}=12$ lattices and temperatures corresponding to $a=0.04$ fm and $a=0.06$ fm (where cutoff effects can be neglected), as well as the values of $c_Q$ from Table X of Ref.~\cite{Bazavov:2018wmo} for $\beta=7.825$ ($a=0.04$ fm) and $\beta=7.373$ ($a=0.06$ fm), we obtain $\delta m(a)$. The results are $a\delta m(a=0.06{\rm\ fm})=0.1586(8)$ and $a\delta m(a=0.04{\rm\ fm})=0.1508(12)$. 
\end{itemize} First of all, to test how well the subtraction of $\delta m(a)$ can remove the linear divergences in $\tilde h(z,P^z,a)$, we construct the ratio in \eq{mfit}, \begin{align}\label{eq:ratio} \tilde R(z,z_0,a) &\equiv e^{\delta m(a) (z-z_0)} {\tilde h(z,0,a)\over \tilde h(z_0,0,a)}\,, \end{align} where $z_0=0.24$ fm for both lattice spacings. According to \eq{renorm}, the renormalization factor $Z_O(a)$ cancels out in the ratio. Therefore, if $\delta m(a)$ includes all the linear divergences, then $\tilde R(z,z_0,a)$ should have a well-defined continuum limit. \begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{R0} \includegraphics[width=0.8\columnwidth]{R} \caption{Upper panel: ratios of bare lattice matrix elements without the Wilson-line mass subtraction. Lower panel: the ratio in \eq{ratio} with Wilson-line mass subtraction. The red and blue points are for $a=0.04$ fm and $0.06$ fm. The red and blue bands are interpolations of the points, and the gray band is the continuum extrapolation of them with $a^2$-dependence.} \label{fig:ratio} \end{figure} Our lattice results for the above ratio with $z_0=0.24$ fm are shown in \fig{ratio}. As one can see, the differences between the ratios at $a=0.04$ fm and $0.06$ fm are at the sub-percent level, which clearly shows that the linear divergences have been sufficiently subtracted by $\delta m(a)$. Therefore, the ratio in \eq{ratio} has a continuum limit \begin{align} \lim_{a\to0} \tilde R(z,z_0,a) &= \tilde R(z,z_0)\,, \end{align} which is RG invariant. Our next step is to match the lattice subtraction scheme to ${\overline{\mathrm{MS}}}$. When $z,z_0 \ll \Lambda_{\rm QCD}^{-1}$, the ${\overline{\mathrm{MS}}}$ matrix element $\tilde h^{{\overline{\mathrm{MS}}}}(z,0,\mu)$ has an OPE that goes as \begin{align}\label{eq:ope} \tilde h^{{\overline{\mathrm{MS}}}}(z,0,\mu) &= e^{-m^{{\overline{\mathrm{MS}}}}_0|z|} \left[C_0(z^2\mu^2) \right.\nonumber\\ &\left. 
+ z^2 C_2(z^2\mu^2) \langle P| O_{\rm tw4}(\mu) |P\rangle + \ldots\right]\,, \end{align} where $m^{{\overline{\mathrm{MS}}}}_0$ is the ${\cal O}(\Lambda_{\rm QCD})$ renormalon ambiguity from the Wilson line self-energy renormalization~\cite{Braun:2018brg}, $O_{\rm tw4}(\mu)$ is a twist-four operator (for example, $\bar{\psi} D^2 \psi$ or $g\bar{\psi} \sigma_{\mu\nu}F^{\mu\nu} \psi$), $C_0$ and $C_2$ are perturbative coefficient functions, and ``$\ldots$'' denotes contributions at higher twists. Since $P^z=0$, $C_0$ is the only Wilson coefficient that contributes at leading-twist. The leading-twist contribution is proportional to $\langle P|\bar{\psi}\gamma^t \psi|P\rangle/(2P^t)$ which is trivially one due to vector current conservation. Since $\tilde h^{{\overline{\mathrm{MS}}}}(z,0,\mu)$ is multiplicatively renormalizable, both $C_0(z^2\mu^2)$ and $C_2(z^2\mu^2) \langle P| O_{\rm tw4}(\mu) |P\rangle$ must satisfy RG equations with the same anomalous dimension, which is known to next-to-next-to-next-to-leading order (N$^3$LO)~\cite{Braun:2020ymy}. Due to the ambiguity in summing the perturbative series in $C_0(z^2\mu^2)$, there are $O(\Lambda_{\rm QCD}^{2n})$ IR renormalons in the leading-twist contribution that should be cancelled by those from higher-twist condensates, along with the ${\cal O}(\Lambda_{\rm QCD})$ UV renormalon to be cancelled by $m^{{\overline{\mathrm{MS}}}}_0$~\cite{Braun:2018brg,Beneke:1994sw}. Both the UV and IR renormalon contributions cannot be well defined unless one specifies how to sum the perturbative series in $C_0(z^2\mu^2)$ to all orders, which, however, is unknown as $C_0(z^2\mu^2)$ has been calculated to only NNLO so far~\cite{Li:2020xml}. Note that $m^{{\overline{\mathrm{MS}}}}_0$ is analogous to the mass renormalization in heavy-quark effective theory (HQET)~\cite{Beneke:1994sw}, which is of UV origin and cannot be attributed to any short-distance condensate. 
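The cancellation of the linear divergence in the subtracted ratio of \eq{ratio} can be illustrated with a toy numerical model. The functional form of the continuum part below is purely illustrative; only the quoted $a\,\delta m(a)$ values are from the text, converted here (our convention) to fm$^{-1}$:

```python
# Toy check that exp(dm*(z-z0)) * h(z)/h(z0) removes the linear divergence.
# The bare matrix element is modeled as a smooth continuum function times
# the divergent factor exp(-dm(a)*z); this is an illustrative assumption.
import math

def h_bare(z, dm):
    """Toy bare matrix element: linear divergence times a smooth part."""
    continuum = math.exp(-0.4 * z) * (1.0 + 0.1 * z * z)
    return math.exp(-dm * z) * continuum

def R(z, z0, dm):
    """Subtracted ratio of Eq. (ratio): exp(dm (z - z0)) h(z) / h(z0)."""
    return math.exp(dm * (z - z0)) * h_bare(z, dm) / h_bare(z0, dm)

dm_006 = 0.1586 / 0.06   # a*dm = 0.1586 at a = 0.06 fm  ->  dm in fm^-1
dm_004 = 0.1508 / 0.04   # a*dm = 0.1508 at a = 0.04 fm  ->  dm in fm^-1

z, z0 = 0.6, 0.24        # fm
r1 = R(z, z0, dm_006)
r2 = R(z, z0, dm_004)
print(r1, r2)            # identical: the divergent factor cancels exactly
```

In this toy the divergence cancels by construction; on the lattice the residual $a$-dependence of the ratio (sub-percent, as stated above) is what tests the subtraction.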
Instead, it appears as a residual mass term in the HQET Lagrangian and exists in $\tilde h^{{\overline{\mathrm{MS}}}}(z,0,\mu)$ at all $z$, i.e., \begin{align}\label{eq:ms} \tilde h^{{\overline{\mathrm{MS}}}}(z,0,\mu) &= e^{-m^{{\overline{\mathrm{MS}}}}_0|z|}\tilde h^{{\overline{\mathrm{MS}}}}_0(z,0,\mu)\,, \end{align} where $\tilde h^{{\overline{\mathrm{MS}}}}_0(z,0,\mu)$ at short distance reduces to the OPE series in the brackets of \eq{ope}. The renormalons have been studied extensively for the Polyakov loop and plaquette in lattice QCD~\cite{Bauer:2011ws,Bali:2013pla,Bali:2014fea,Bali:2014sja,Bali:2014gha}. In lattice perturbation theory, one has to compute the perturbative series to very high orders of $\alpha_s$ in order to see the renormalon effects. Nevertheless, in the ${\overline{\mathrm{MS}}}$ scheme, the OPE, with the Wilson coefficient at a few loop orders and the condensate term, turns out to be successful in describing the static potential at short distance up to $\sim0.25$ fm~\cite{Pineda:2002se}. One explanation is that $\alpha_s$ in the ${\overline{\mathrm{MS}}}$ scheme is larger than that in lattice perturbation theory, so the renormalon effect, which is of ${\cal O}(\alpha_s^n)$ with $n\sim (2\pi)/(\beta_0\alpha_s)$, becomes significant at lower orders. This situation is similar to the OPE in QCD sum rules~\cite{Shifman:1978bx,Shifman:1978by,Novikov:1984ac,Novikov:1984rf,David:1985xj}, which works well in phenomenology. The reason behind such success is probably a proper choice of the renormalization scale $\mu$ so that $\alpha_s(\mu)$ is small enough for the perturbative series to converge, while the $\mu$-dependent effects in the condensate remain insignificant as they should be of the same magnitude as the highest order in the truncated perturbative series~\cite{Novikov:1984rf,David:1985xj}. 
Therefore, we approximate \eq{ope} as \begin{align}\label{eq:renormalon} \tilde h^{{\overline{\mathrm{MS}}}}(z,0,\mu) &\approx e^{-m^{{\overline{\mathrm{MS}}}}_0(\mu)|z|} \left[C_0^{\rm FO}(z^2\mu^2) + \Lambda(\mu) z^2 \right]\,, \end{align} where ``FO'' stands for fixed order, $\Lambda(\mu)$ is a parameter of ${\cal O}(\Lambda_{\rm QCD}^2)$, and we ignore the higher power corrections by working at not too large $z$. The $\mu$ dependence of the parameters $m^{{\overline{\mathrm{MS}}}}_0$ and $\Lambda$ is understandable because this approximation is valid for a small window of $\mu$, and they also depend on the perturbative orders in $C_0^{\rm FO}$ if the latter does not converge fast. Note that although the model in \eq{renormalon} is not guaranteed to satisfy the RG equation for $\tilde h^{{\overline{\mathrm{MS}}}}(z,0,\mu)$, we argue that within the range of $\mu$ where it can describe the physical results, the $\mu$-dependence in the power correction term, which is already suppressed, is weak and can be ignored. Based on the above approximation, we fit our lattice results of the ratio in \eq{ratio} to the following \textit{ansatz}, \begin{align}\label{eq:ansatz} \tilde R(z,z_0) &= e^{-\bar{m}_0(\mu)(z-z_0)} \frac{C_0^{\rm FO}(z^2\mu^2) + \Lambda(\mu) z^2}{C_0^{\rm FO}(z_0^2\mu^2) + \Lambda(\mu) z_0^2}\,, \end{align} where the mass shift \begin{align} \bar{m}_0(\mu) &= -m_0 + m^{{\overline{\mathrm{MS}}}}_0(\mu)\,, \end{align} cancels the lattice scheme dependence of $m_0$ in \eq{mass} and introduces the renormalon ambiguity of the ${\overline{\mathrm{MS}}}$ scheme. Effectively, $\bar{m}_0$ matches the hybrid-scheme matrix elements at $z\ge z_S$ to the ratio of $\tilde h_0^{{\overline{\mathrm{MS}}}}$ as \begin{align}\label{eq:matching} \lim_{a\to0} e^{ (\delta m(a)+\bar{m}_0(\mu))(z-z_S)}{\tilde h(z, P^z, a)\over \tilde h(z_S,0, a)} &= {\tilde h^{{\overline{\mathrm{MS}}}}_0(z,P^z,\mu)\over \tilde h^{{\overline{\mathrm{MS}}}}_0(z_S,0,\mu)}\,. 
\end{align} Moreover, since the \textit{ansatz} in \eq{ansatz} can describe the short-distance matrix elements well, we can correct the $\Lambda z^2$ term in $\tilde h^{{\overline{\mathrm{MS}}}}_0(z,0,\mu)$ at $z\le z_S$ as \begin{align} {\tilde h^{{\overline{\mathrm{MS}}}}_0(z,0,\mu)} \frac{C_0^{\rm FO}(z^2\mu^2)}{C_0^{\rm FO}(z^2\mu^2) + \Lambda(\mu) z^2}\,, \end{align} which is equivalent to replacing ${\tilde h^{{\overline{\mathrm{MS}}}}_0(z,0,\mu)}$ by the perturbative $C_0$, as in \eq{hybren}. Eventually, the continuum limit of the matched matrix element in \eq{hybren} is \begin{align}\label{eq:fullhyb} \tilde h(z, z_S,P^z,\mu) &= {\tilde h^{{\overline{\mathrm{MS}}}}_0(z,P^z,\mu)\over C_0^{\rm FO}(z^2\mu^2)} \theta(z_S-|z|) \nonumber\\ &\qquad + { \tilde h^{{\overline{\mathrm{MS}}}}_0(z,P^z,\mu)\over C_0^{\rm FO}(z^2_S\mu^2)} \theta(|z|-z_S)\,, \end{align} which is different from ${\overline{\mathrm{MS}}}$ through a perturbative matching for all $z$ as long as $z_S\ll \Lambda_{\rm QCD}^{-1}$. Therefore, the qPDF defined as FT of $\tilde h(z, z_S,P^z)$ is still factorizable. Note that $\bar{m}_0(\mu)$ introduces the ambiguity $m^{{\overline{\mathrm{MS}}}}_0(\mu)$ to the matched matrix elements. Nevertheless, we argue that $C^{\rm FO}_0(\mu^2 z^2)$ at NNLO is different from a particular summation prescription by ${\cal O}(\alpha_s^3)$ contributions, which cannot be smaller than the ambiguity in $m^{{\overline{\mathrm{MS}}}}_0(\mu)$ as the latter reflects the uncertainty in summing divergent perturbative series at sufficiently high orders. Therefore, we can attribute the renormalon ambiguity in $\bar{m}_0(\mu)$ to higher loop-order effects, and estimate the latter by varying $\mu$ by a factor of $\sqrt{2}$ and $1/\sqrt{2}$. The range of $\mu$ we vary from cannot be too large. If $\mu$ is too small, then $\alpha_s(\mu)$ becomes too large; if $\mu$ is too large, then we need to resum the large $\ln(z^2\mu^2)$ in $C_0(z^2\mu^2)$. 
In both cases the perturbative series converges slowly. In our analysis, we scan $\mu$ within $[0.9, 2.0]$ GeV for $C_0^{\rm NLO}$ and $[1.4, 3.2]$ GeV for $C_0^{\rm NNLO}$ to study the scale dependence and the uncertainty from the renormalon ambiguity. \subsection{Fitting of $\bar{m}_0$ and $\Lambda(\mu)$} \label{app:m0} Currently, the Wilson coefficient $C_0(\mu^2z^2)$ is known to NNLO~\cite{Chen:2020ody,Li:2020xml} and its anomalous dimension has been calculated at three-loop order~\cite{Braun:2020ymy}, \begin{align} &C_0\big(\mu^2z^2,\alpha_s(\mu)\big) = 1+ a_s\left(2 L+\frac{10}{3}\right)\nonumber\\ & +\! a_s^2 \left[\frac{13}{2} L^2 \!+\! \frac{1461\!+\!28 \pi ^2}{54} L \!+\! \frac{38127 \!-\! 824 \pi^2 \!-\! 4032 \zeta (3)}{648}\right]\nonumber\\ & + a_s^3\left[ \frac{143}{6} L^3 + \Big(\frac{6127}{36}+\frac{91 \pi ^2}{27}\Big) L^2 \right. \nonumber\\ & \left.\qquad + \frac{690939+760 \pi ^4-8976 \pi^2-94068 \zeta (3)}{972}L + 400 \right]\nonumber\\ & + O(a_s^4)\,, \end{align} where $a_s=\alpha_s/(2\pi)$, $L=\ln(\mu^2 z^2/b_0^2)$, and $b_0=2e^{-\gamma_E}$. The factor $400$ in the last square bracket is a simple guess, obtained by assuming that the constant part of the perturbative correction grows as a geometric series in the order of $a_s$. We also consider the RG improved (RGI) Wilson coefficient~\cite{Gao:2021hxl} \begin{align} C_0^{\rm RGI}\big(\mu^2,z^2\big) &= C_0\big(1,\alpha_s(b_0/z)\big)\\ &\quad \times \exp\Big[\int_{b_0/z}^{\mu} {d\alpha_s(\mu')}\ {\gamma_{\cal O}(\alpha(\mu'))\over \beta(\alpha_s(\mu'))}\Big]\,,\nonumber \end{align} where $\gamma_{\cal O}$ is the anomalous dimension of the operator $O_{\Gamma}(z,\mu)$, and $\beta(\alpha_s(\mu))=d\alpha_s(\mu) / d\ln \mu^2$. In this way, we can first factor out the evolution factor in \eq{renormalon} as it must be satisfied by the full matrix element $\tilde h^{{\overline{\mathrm{MS}}}}(z,0,\mu)$, and therefore construct the ratio $\tilde R(z,z_0)$ in an explicitly $\mu$-independent way. 
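The fixed-order series above can be transcribed directly into code. The sketch below evaluates $C_0$ at NLO and NNLO (the N$^3$LO piece is omitted since its constant term is a guess); the conversion of $z$ from fm to GeV$^{-1}$ via $\hbar c = 0.1973$ GeV\,fm is our own bookkeeping convention here, not prescribed by the text:

```python
# Fixed-order Wilson coefficient C0(z^2 mu^2) at NLO/NNLO, transcribed
# from the series above with a_s = alpha_s/(2 pi), L = ln(mu^2 z^2 / b0^2),
# b0 = 2 exp(-gamma_E).
import math

ZETA3 = 1.2020569031595943
GAMMA_E = 0.5772156649015329
HBARC = 0.1973                                # GeV*fm (unit conversion)

def C0(z_fm, mu_gev, alpha_s, order=2):
    """Fixed-order C0; z in fm, mu in GeV; order=1 (NLO) or 2 (NNLO)."""
    b0 = 2.0 * math.exp(-GAMMA_E)
    z = z_fm / HBARC                          # z in GeV^-1
    L = math.log(mu_gev**2 * z**2 / b0**2)
    a_s = alpha_s / (2.0 * math.pi)
    c = 1.0 + a_s * (2.0 * L + 10.0 / 3.0)    # NLO
    if order >= 2:                            # NNLO
        c += a_s**2 * (6.5 * L**2
                       + (1461.0 + 28.0 * math.pi**2) / 54.0 * L
                       + (38127.0 - 824.0 * math.pi**2
                          - 4032.0 * ZETA3) / 648.0)
    return c

# alpha_s(2 GeV) = 0.293 as quoted below for these ensembles
print(C0(0.24, 2.0, 0.293, order=1), C0(0.24, 2.0, 0.293, order=2))
```

At $z=0.24$ fm and $\mu=2$ GeV both corrections are positive and sizable, consistent with the slow convergence discussed above.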
\begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{c0} \caption{The fixed-order and RGI Wilson coefficients $C_0(z^2\mu^2)$ up to N$^3$LO.} \label{fig:c0} \end{figure} We compare $C_0$ and $C_0^{\rm RGI}$ at NLO, NNLO and N$^3$LO at $\mu=2.0$ GeV in \fig{c0}. The strong coupling constants at each perturbative order are defined by the corresponding $\Lambda_{\rm QCD}^{{\overline{\mathrm{MS}}}}$ with one-, two- and three-loop $\beta$ functions and $n_f=3$, which are fixed by matching to $\alpha_s(\mu=2\ {\rm GeV})=0.293$. The latter is obtained from $\Lambda_{\rm QCD}^{{\overline{\mathrm{MS}}}}=332$ MeV with five-loop $\beta$-function and $n_f=3$, as has been calculated using the same lattice ensembles~\cite{Petreczky:2020tky}. As one can see, at $z > 0.2$ fm the RGI Wilson coefficients start to deviate significantly from the fixed-order ones, which is mainly due to the large value of $\alpha_s$ in the RGI Wilson coefficients as we evolve from $\mu$ to $1/z$. This indicates that at $z>0.2$ fm, the scale uncertainty in the perturbative series is significant due to the enhancement of non-perturbative effects, and to use the OPE we should work at very short distances ($z<0.2$ fm). However, there will not be enough room for varying $z$ to satisfy $z\gg a$ so that discretization effects are suppressed. Therefore, in our analysis we loosen our requirement for very small $z$ by only using the \textit{ansatz} in \eq{ansatz} and not considering the RGI Wilson coefficients. 
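A minimal sketch of the two-parameter fit to the \textit{ansatz} in \eq{ansatz}, using synthetic data and a simple grid search; all numerical values and the stand-in Wilson coefficient below are illustrative, not our lattice data or the actual fitting procedure:

```python
# Toy fit of m0bar and Lambda in
#   R(z,z0) = exp(-m0bar (z-z0)) [C0(z)+Lam z^2] / [C0(z0)+Lam z0^2]
# by brute-force grid search on synthetic data.
import math

def ansatz(z, z0, m0bar, Lam, C0):
    num = C0(z) + Lam * z * z
    den = C0(z0) + Lam * z0 * z0
    return math.exp(-m0bar * (z - z0)) * num / den

def fit(zs, z0, data, C0):
    """Scan m0bar in [-1, 1] and Lam in [-2, 2]; return the chi^2 minimum."""
    best = None
    for i in range(81):
        for j in range(81):
            m0bar = -1.0 + 0.025 * i
            Lam = -2.0 + 0.05 * j
            chi2 = sum((ansatz(z, z0, m0bar, Lam, C0) - d) ** 2
                       for z, d in zip(zs, data))
            if best is None or chi2 < best[0]:
                best = (chi2, m0bar, Lam)
    return best[1], best[2]

C0_toy = lambda z: 1.0 + 0.1 * math.log(1.0 + z)   # stand-in Wilson coeff.
z0 = 0.24
zs = [0.28, 0.32, 0.36, 0.40]                      # fit window, as in text
truth = (0.25, 0.5)                                # parameters behind data
data = [ansatz(z, z0, *truth, C0_toy) for z in zs]
print(fit(zs, z0, data, C0_toy))
```

A grid search is used only to keep the sketch dependency-free; in practice one would use a standard least-squares minimizer.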
\begin{figure}[t] \centering \subfloat[]{ \centering \includegraphics[width=0.4\textwidth]{meff} \label{fig:meff0} }\quad \subfloat[]{ \centering \includegraphics[width=0.41\textwidth]{m2eff} \label{fig:meff2} } \caption{\label{fig:meff} Effective mass $\bar{m}_0^{\rm eff}(z)$ (a) and its slope $\bar{m}_2^{\rm eff}(z)$ (b) vs $z$.} \end{figure} In \fig{meff0}, we plot an effective mass $\bar{m}_0^{\rm eff}(z)$ which is defined as \begin{align} \bar{m}_0^{\rm eff}(z)(z-z_0) &\equiv -\ln {\tilde h(z,0,a)\over \tilde h(z_0,0,a)} + \ln {C_0^{\rm NNLO}(z^2\mu^2)\over C_0^{\rm NNLO}(z^2_0\mu^2)}\,, \end{align} where $\mu=2.0$ GeV. If the twist-four condensate is negligible, then we should expect a plateau in $z$, but \fig{meff0} shows that it has an almost constant nonzero slope for $z$ from $0.24$ fm up to $1.0$ fm. In \fig{meff2} we plot its slope \begin{align} \bar{m}_2^{\rm eff}(z) &= {\bar{m}_0^{\rm eff}(z) - \bar{m}_0^{\rm eff}(z-a)\over a}\,, \end{align} which is consistent with being constant for a wide range of $z$. This suggests that there is considerable quadratic $z$-dependence from the twist-four condensate, as included in the \textit{ansatz} in \eq{ansatz}. \begin{figure*}[t] \centering \subfloat[]{ \centering \includegraphics[width=0.45\textwidth]{m0-NLO} \label{fig:m0-nlo} }\quad \subfloat[]{ \centering \includegraphics[width=0.45\textwidth]{lambda-NLO} \label{fig:lambda-nlo} }\\ \subfloat[]{ \centering \includegraphics[width=0.45\textwidth]{m0-NNLO} \label{fig:m0-nnlo} }\quad \subfloat[]{ \centering \includegraphics[width=0.45\textwidth]{lambda-NNLO} \label{fig:lambda-nnlo} } \caption{\label{fig:mfit} Results for $\bar{m}_0(\mu)$ (a) and $\Lambda(\mu)$ (b) fitted from $\tilde R(z,z_0)$ for $z_0<z<z_{\rm max}$ and $z_0=0.24$ fm, with NLO and NNLO Wilson coefficients at various values of $\mu$.} \end{figure*} Our results for $\bar{m}_0$ and $\Lambda$ fitted from $\tilde R(z,z_0)$ for $z_0<z<z_{\rm max}$ with $z_0=0.24$ fm are shown in \fig{mfit}. 
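The effective-mass construction above can be checked on a toy model: if the matrix element carries a quadratic term in the exponent, the slope $\bar{m}_2^{\rm eff}$ comes out exactly constant, mimicking the plateau seen in the figure. All inputs below are hypothetical stand-ins:

```python
# Effective mass m0_eff(z)*(z-z0) = -ln[h(z)/h(z0)] + ln[C0(z)/C0(z0)]
# and its discrete slope m2_eff, evaluated on a toy matrix element.
import math

def m0_eff(h, C0, z, z0):
    return (-math.log(h(z) / h(z0))
            + math.log(C0(z) / C0(z0))) / (z - z0)

def m2_eff(h, C0, z, z0, a):
    return (m0_eff(h, C0, z, z0) - m0_eff(h, C0, z - a, z0)) / a

# Toy input: h(z) = C0(z) exp(-m0 z - m2 z^2)  =>  m2_eff(z) = m2 exactly.
m0_true, m2_true = 0.3, 0.15
C0_toy = lambda z: 1.0 + 0.05 * math.log(1.0 + z)
h_toy = lambda z: C0_toy(z) * math.exp(-m0_true * z - m2_true * z * z)

z0, a = 0.24, 0.04
vals = [m2_eff(h_toy, C0_toy, z, z0, a) for z in (0.4, 0.6, 0.8)]
print(vals)   # each equals m2_true: a quadratic term gives a flat slope
```

This is only the forward check that the definitions behave as claimed; the figure shows the same flatness emerging from the lattice data.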
As one can see, the two parameters remain constant in $z_{\rm max}$ up to around $0.5$ fm within a small window of $\mu$, though the fitted values differ between the NLO and NNLO Wilson coefficients. At larger $z$, the higher-twist and $\alpha_s\ln(z^2\mu^2)$ effects become significant, which can no longer be described by the simple \textit{ansatz} in \eq{ansatz}. In this work, we use $\tilde R(z,z_0)$ at $0.24$ fm $<z<0.4$ fm to fit the parameters at all $\mu$ as input for the hybrid scheme renormalization and matching. To estimate the uncertainty from the choice of $\mu$, we will match the qPDFs obtained at different $\mu$ to the corresponding PDFs, and then evolve the final results to $\mu=2.0$ GeV for comparison. \section{Fourier transform (FT)} \label{app:ft} The qPDF is defined as the FT of $\tilde h(z,z_S,P^z)$ or $\tilde h(\lambda,\lambda_S,P^z)$, \begin{align} \tilde f(x, z_S,P^z) &= \int{d\lambda\over 2\pi}\ e^{ix\lambda} \tilde h(\lambda,\lambda_S,P^z)\,. \end{align} Since $\tilde h(\lambda,\lambda_S,P^z)$ is perturbatively matched from the ${\overline{\mathrm{MS}}}$ scheme, the factorization formula should still be valid for the corresponding qPDF $\tilde f(x, z_S,P^z)$~\cite{Ji:2020brr}. Therefore, we should integrate over all $z$ in the FT to obtain the $x$-dependence of the qPDF. However, due to finite lattice size effects, a worsening signal-to-noise ratio and other systematics at large $z$, we have to truncate $\tilde h(z,z_S,P^z)$ at $z=z_L$ and extrapolate to $z\to\infty$ to complete the FT. As a result, the small-$x$ ($x\lesssim 1/\lambda_L$) region is the most sensitive to the extrapolation model, and the corresponding systematic uncertainty cannot be well controlled. On the other hand, the reliability of the $x\gtrsim 1/\lambda_L$ region depends on the premises that $\tilde h(z)$ is small at $z=z_L$ and exhibits an exponential decay when $z_L$ is large enough. 
The first condition is easy to understand, as a truncated FT will lead to an unphysical oscillation in the $x$-space with amplitude proportional to $|\tilde h(z_L)|$, while the exponential decay guarantees that the FT converges fast and the qPDF at $x\gtrsim 1/\lambda_L$ has very little dependence on the specific model used in the extrapolation. In this section, we first derive that the equal-time correlator in a hadron state does exhibit an exponential decay at large distances, then we demonstrate that including this constraint in the extrapolation will lead to a reliable FT in the moderate-to-large $x$ region. Finally, we perform the extrapolated FT on our lattice results. \subsection{Matrix elements at large $z$} \label{app:exp} To begin with, let us consider a current-current correlation in the vacuum, $\langle \Omega| J_5(x) J_5(0)| \Omega\rangle$, where $J_5=\bar{q}\gamma_5 q$ and $x^2<0$. If we ignore the existence of zero modes and only consider gapped vacuum excitations, then \begin{align} &\langle \Omega| J_5(x) J_5(0)| \Omega\rangle\nonumber\\ &= \sum_n \int {d^3k_n\over (2\pi)^3 2E_{k_n}} \langle \Omega| J_5(x)|n\rangle \langle n| J_5(0)| \Omega\rangle\nonumber\\ &= \sum_n\int {d^3k_n\over (2\pi)^3 2E_{k_n}} \langle \Omega| J_5(0)|n\rangle \langle n| J_5(0)| \Omega\rangle e^{-ix\cdot k_n}\nonumber\\ &= \sum_n |Z_n|^2\int {d^4k_n\over (2\pi)^4} {e^{-ix\cdot k_n}\over k^2_n -m_n^2 +i0}\nonumber\\ &= -{i\over 4\pi^2}\sum_n |Z_n|^2 {m_n\over \sqrt{-x^2}}K_1(m_n\sqrt{-x^2})\,. \end{align} where $Z_n$ is the overlap between the operator $J_5(x)$ and the intermediate state $|n\rangle$. Here $m_n$ is the mass of the intermediate state particle, and $K_1$ is the modified Bessel function of the second kind. 
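The large-distance behavior of the Bessel factor above can be verified numerically. The sketch below evaluates $K_1$ from its integral representation $K_1(x)=\int_0^\infty e^{-x\cosh t}\cosh t\,dt$ with a plain trapezoid rule (pure Python, no special-function library), and compares $\frac{m}{r}K_1(mr)$ against the asymptotic form quoted in the next equation:

```python
# Check that (m/r) K1(m r) -> sqrt(pi/2) sqrt(m) r^(-3/2) exp(-m r)
# at large m*r, using a quadrature evaluation of K1.
import math

def K1(x, n=4000, tmax=12.0):
    """K_1(x) = ∫_0^∞ exp(-x cosh t) cosh t dt, simple trapezoid rule."""
    dt = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return s * dt

m, r = 4.0, 10.0                # m*r = 40, well into the asymptotic region
lhs = (m / r) * K1(m * r)
rhs = math.sqrt(math.pi / 2.0) * math.sqrt(m) * r ** -1.5 * math.exp(-m * r)
print(lhs, rhs)                 # agree at the percent level for m*r = 40
```

The residual $\sim 1\%$ difference is the known $3/(8mr)$ subleading correction to the Bessel asymptotics, so the exponential decay law dominates exactly as used in the text.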
Then, since \begin{align} \lim_{|x|\to\infty} {m_n\over \sqrt{-x^2}}K_1(m_n\sqrt{-x^2}) &= \sqrt{{\pi\over2}} {\sqrt{m_n}\over |x|^{3\over2}}e^{-m_n|x|}\, , \end{align} the correlation function should be dominated by the exponential decay of the lowest-lying state that overlaps with $J_5(x)$. When the external state is a static hadron, it has also been shown that the spacelike correlations exhibit an exponential decay at large distance~\cite{Burkardt:1994pw}. We are interested in equal-time quark bilinear correlators in a boosted hadron state, which can be expressed in terms of the product of two ``heavy-light'' currents~\cite{Ji:2017oey,Green:2017xeu}, where the ``heavy quark'' $h_{\hat{x}}$ is an auxiliary field defined along the $\hat{x}$ direction, similar to that in HQET. Let us choose the external state to be a pion. According to Lorentz covariance, we can decompose the correlation as \begin{align} &\langle \pi(p)| \bar{q}(x)\gamma^\mu h_{\hat{x}}(x) \bar{h}_{\hat{x}}(0) q(0)| \pi(p) \rangle \nonumber\\ &= p^\mu f_p(p\cdot x, x^2) + x^\mu f_x(p\cdot x, x^2)\,, \end{align} where the scalar functions $f_{p,x}(p\cdot x, x^2)$ are analytic functions of $p\cdot x$ and $x^2$. We can select the index $\mu$ such that $x^\mu=0$. For example, we can choose $\mu=z$ when $x^\mu=(t,0,0,0)$ or $\mu=t$ when $x^\mu=(0,0,0,z)$. The HQET corresponds to the timelike case, as \begin{align} p^z f_p(p\cdot x, x^2) &= \sum_n\int {d^3k_n\over (2\pi)^3 2E_{k_n}} e^{-ix\cdot (k_n-m_Qv-p)}\nonumber\\ &\quad \times \langle \pi(p)| \bar{q}\Gamma h_v|n\rangle \langle n| \bar{h}_v q| \pi(p)\rangle \,, \end{align} where $h_v$ is the effective heavy-quark field moving with velocity $v^\mu$ and related to the QCD heavy quark $Q$ by the projection \begin{align} h_v(x)&=e^{im_Qv\cdot x} {1+\slashed v\over 2}Q(x)\,. 
\end{align} The lowest intermediate state $|H(v)\rangle$ is a heavy-light meson with mass $m_H=m_Q + \bar{\Lambda}$ and momentum $k^\mu = m_H v^\mu$, where $m_Q$ is the heavy quark pole mass, and $\bar{\Lambda}$ can be interpreted as the mass of the constituent light quark or binding energy. Both $\bar{\Lambda}$ and $m_Q$ have ${\cal O}(\Lambda_{\rm QCD})$ renormalon ambiguities which cancel against each other. In the $\Lambda_{\rm QCD}/m_Q\to0$ limit, $\bar{\Lambda}$ should be independent of the heavy quark mass, but can depend on the light quark mass. The matrix element $\langle \pi(p)| \bar{q}\Gamma h_v|H(v)\rangle$ is given by the transition form factors~\cite{Falk:1990yz}, \begin{align} &\langle \pi(p)| \bar{q}\Gamma h_v|H(v)\rangle \nonumber\\ &= -\mathrm{Tr}\left\{\gamma_5\left[f_1(v\cdot p) + f_2(v\cdot p) {\slashed p\over v\cdot p}\right] \Gamma {\cal M}(v)\right\}\,, \end{align} where the form factors $f_1$ and $f_2$ only depend on $v\cdot p$ in HQET, and the projection operator ${\cal M}(v)$ depends on the spin of the heavy-light meson $H(v)$, \begin{align} {\cal M}(v) &= {1+\slashed v\over 2} \Bigg\{\begin{array}{cc} - \gamma_5\,, & {\rm for}\ J^P=0^-\,, \\ \slashed \epsilon\,, & {\rm for}\ J^P=1^-\,, \end{array} \end{align} with $\epsilon^\mu$ being the polarization vector for vector mesons. Therefore, \begin{align} \langle \pi(p)| \bar{q}\gamma^\mu h_v|H(v)\rangle &= 2f_1(v\cdot p)v^\mu + 2f_2(v\cdot p)p^\mu\,,\\ \langle \pi(p)| \bar{q} h_v|H(v)\rangle &= 2f_1(v\cdot p)+ 2f_2(v\cdot p)\,. \end{align} Then, the correlation function becomes \begin{align}\label{eq:transit} & p^z f_p(p\cdot x, x^2) \approx 4m_Q^2\sum_n\int {d^3\vec{v}\over (2\pi)^3 2\sqrt{1+\vec{v}^2}}\nonumber\\ &\qquad \times e^{-i\left(\bar{\Lambda}\sqrt{1+\vec{v}^2}-p^0\right)x^0 } (f_1+f_2)(f_1 v^z + f_2 p^z)\,. 
\end{align} Note that when $x^0\to\infty$, $x^0 \bar{\Lambda}\sqrt{1+\vec{v}^2}\ge x^0\bar{\Lambda}$ constitutes a large phase, so the integrand is quickly oscillating and should be suppressed. To have a naive estimate, let us assume $f_1$ and $f_2$ are constant in $v\cdot p$, and the remaining integral is simply \begin{align} &\int {d^3\vec{v}\over (2\pi)^3 2\sqrt{1+\vec{v}^2}} e^{-i\left(\bar{\Lambda}\sqrt{1+\vec{v}^2}-p^0\right)x^0 } \nonumber\\ &\qquad = {1\over 4\pi^2} K_1 \left(\bar{\Lambda}\sqrt{-x_0^2}\right){e^{ip^0x^0}\over \bar{\Lambda} \sqrt{-x_0^2}}\nonumber\\ &\qquad ={1\over 4\pi^2} K_1 \left(\bar{\Lambda}\sqrt{-x^2}\right){e^{ip\cdot x}\over \bar{\Lambda} \sqrt{-x^2}}\,, \end{align} where we first obtained the result for imaginary $x^0$ and then analytically continued back to the real axis. Then, using Lorentz invariance and analyticity, we can obtain the result for $x^2<0$, which corresponds to the equal-time correlator that we calculate in this work. At large separation, we have \begin{align}\label{eq:largez} \lim_{|x|\to\infty}f_p(p\cdot x, x^2) &\propto m_Q^2 {e^{-\bar{\Lambda}|x|}\over (\bar{\Lambda}|x|)^{3\over2}} e^{ip\cdot x}\,, \end{align} which also exhibits an exponential behavior with decay constant $\bar{\Lambda}$. Moreover, the correlation also includes a phase $e^{ip\cdot x}$ which becomes $\cos(p\cdot x)$ in the case of the valence quark distribution. Another important takeaway is that $\bar{\Lambda}$ is a Lorentz-invariant quantity and should be independent of the external momentum. However, it must be pointed out that the conclusion in \eq{largez} is based on a rather crude approximation that $f_1$ and $f_2$ are constant in $v\cdot p$. In practice, the transition form factors could have a pole at the mass of a heavy-light meson created by the current $\bar{q}\gamma^\mu h_v$ or $\bar{h}_v q$, which is different from $m_H$ for the intermediate state $|H(v)\rangle$. 
As a result, the binding energy $\bar{\Lambda}$ would also be different. If we take this into account in \eq{transit}, then the result will exhibit a more complicated asymptotic behavior at large distance, \begin{align}\label{eq:largez2} \lim_{|x|\to\infty}f_p(p\cdot x, x^2) &\propto {e^{-\bar{\Lambda}|x|}\over |x|^{d}}\ g[p\cdot x, \cos(p\cdot x), \sin(p\cdot x)]\,, \end{align} where the decay constant $\bar{\Lambda}$ should vary among the different binding energies for the heavy-light mesons, which is similar to the observation in Ref.~\cite{Burkardt:1994pw}, and $g$ is a function that can have both oscillating and non-oscillating dependence on $p\cdot x$. For large enough $|x|$, the exponential decay should suppress the correlation and make it or its extremes decrease monotonically in magnitude.\\ Note that after we match the hybrid scheme matrix elements to ${\overline{\mathrm{MS}}}$, the renormalon ambiguity in the Wilson line mass, $ m_0^{{\overline{\mathrm{MS}}}}$, is subtracted out, so the matched result should exhibit an asymptotic behavior that goes as $e^{-(\bar{\Lambda}-m_0^{{\overline{\mathrm{MS}}}})|z|}$ at large $z$. Therefore, the sign of $(\bar{\Lambda}-m_0^{{\overline{\mathrm{MS}}}})$ becomes crucial in determining whether it is exponentially decaying or growing. In QCD sum rule calculations, the result is $\bar{\Lambda} = 0.4-0.6$ GeV from phenomenology, while $m^{\overline{\mathrm{MS}}}_0$ is expected to be $0.1-0.2$ GeV~\cite{Beneke:1994sw}, so $\bar{\Lambda}-m^{\overline{\mathrm{MS}}}_0=0.2-0.5$ GeV. Since the quarks have heavier-than-physical masses in our lattice calculation, one should expect a larger $\bar{\Lambda}$, so it is very likely that $\bar{\Lambda}-m^{\overline{\mathrm{MS}}}_0$ still remains positive. In any case, this can always be put to the test on the $P^z=0$ matrix elements since $\bar{\Lambda}-m^{\overline{\mathrm{MS}}}_0$ is a Lorentz-invariant quantity. 
\subsection{Extrapolation and FT} \label{app:ext} If $z_L$ is large enough for the correlation $\tilde h(z)$ to reach the asymptotic region, then an extrapolation that encodes the exponential decay behavior we derived in \app{exp} should lead to a reliable FT for moderate-to-large $x$. To be more precise, there is a rigorous upper bound for the uncertainty of the FT, which decreases with $x$. To prove the above statement, let us consider extrapolation based on the general model \begin{align} \tilde h(\lambda) &= e^{-c|\lambda-\lambda_L|}g(\lambda)\,, \end{align} where $g(\lambda_L)= \tilde h(\lambda_L)$, and $c=m_{\rm eff}/P^z$ with $m_{\rm eff}$ being the effective mass for the exponential decay. Motivated by QCD sum rule results, we expect $m_{\rm eff}\sim 0.2-0.5$ GeV, which can be larger since we have used heavier-than-physical quark masses. Therefore, for $P^z\sim 2.0$ GeV in the current work, we should have $c\sim 0.10-0.25$ or higher. Now let us compare two extrapolations $\tilde h_1$ and $\tilde h_2$ with different $g_1$ and $g_2$. The difference between the two extrapolations, \begin{align} \delta \tilde h(\lambda) &\equiv \tilde h_1(\lambda) - \tilde h_2(\lambda)\,, \end{align} should satisfy $\delta \tilde h(\lambda_L)=0$ and $\delta \tilde h(\infty)=0$. The difference in the FT with extrapolation is therefore \begin{align} \delta \tilde f(x) &= \int_{\lambda_L}^\infty {d\lambda\over \pi} \delta \tilde h(\lambda) \cos(x\lambda)\,. \end{align} If we can approximate $\delta \tilde h(\lambda)$ as a flat curve within one period of the oscillatory function $\cos(x\lambda)$, then the integral in that region vanishes. This condition can be satisfied if $|\delta \tilde h'(\lambda)| \ll x$, which should be reached very quickly due to the exponential suppression at large $\lambda$.
For each $x$, there should be a minimal integer $N_x$ which satisfies $|\delta \tilde h'(\lambda_L+N_x 2\pi/x)| \ll x$, so that we can approximate $\delta \tilde f(x)$ as \begin{align}\label{eq:df} \delta \tilde f(x) &\approx \int_{\lambda_L}^{\lambda_L+N_x {2\pi\over x}} {d\lambda\over \pi} \delta \tilde h(\lambda) \cos(x\lambda)\,. \end{align} Since $\delta \tilde h(\lambda_L)=0$ and $\delta \tilde h(\infty)=0$, there must be at least one extremum of $\delta \tilde h(\lambda)$ for $\lambda_L<\lambda <\infty$, so we have the inequality \begin{align}\label{eq:bound} |\delta \tilde f(x)| & < \int_{\lambda_L}^{\lambda_L+N_x {2\pi\over x}}{d\lambda\over \pi}\ |\delta \tilde h(\lambda)| |\cos(x\lambda)| \nonumber\\ & < N_x|\delta \tilde h(\lambda)|_{\rm max} \int_{\lambda_L}^{\lambda_L+ {2\pi\over x}} {d\lambda\over \pi} |\cos(x\lambda)|\nonumber\\ &= {4N_x|\delta \tilde h(\lambda)|_{\rm max}\over \pi x} \lesssim {4N_x|\tilde h(\lambda_L)|\over \pi x} \,. \end{align} According to our estimate of $\bar{\Lambda} - m_0^{{\overline{\mathrm{MS}}}}$, $c\gtrsim 0.1$ at $P^z\sim 2$ GeV, so \begin{align} e^{-cN_x (2\pi)/x} \lesssim e^{-0.6 N_x/x} \,, \end{align} and $N_x\sim {\cal O}(1)$ should be sufficient to satisfy $|\delta \tilde h'(\lambda_L+N_x 2\pi/x)| \ll x$ with $0<x<1$. Therefore, in \eq{bound} we demonstrate that there is an upper bound for the model uncertainty in the FT with exponential extrapolation, which decreases with $x$. The error is also proportional to $|\delta \tilde h(\lambda)|_{\rm max}$, which can be much smaller than $|\tilde h(\lambda_L)|$, itself already close to zero. If $\tilde h(\lambda_L)=0.1$, $|\delta \tilde h(\lambda)|_{\rm max}=0.05$, and $N_x=1$, then we have \begin{align} |\delta \tilde f(x)| < {0.07\over x} \,, \end{align} which is less than 0.15 at $x=0.5$ and around 15\% of the central value of the qPDF as we obtain below.
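The bound in \eq{bound} can be illustrated numerically with two toy extrapolations that agree at $\lambda_L$ and share the decay rate $c$. A sketch (the model shapes and parameter values are illustrative, not fits to our data):

```python
import numpy as np
from scipy.integrate import quad

lam_L, c, h_L = 10.0, 0.15, 0.1          # truncation point, decay rate, h(lam_L)

# two exponentially decaying extrapolations that agree at lam_L
h1 = lambda lam: h_L * np.exp(-c * (lam - lam_L))
h2 = lambda lam: h_L * np.exp(-c * (lam - lam_L)) * (lam_L / lam) ** 1.5

def delta_f(x):
    """delta f(x) = int_{lam_L}^inf dlam/pi [h1 - h2](lam) cos(x lam)."""
    val, _ = quad(lambda lam: (h1(lam) - h2(lam)) / np.pi,
                  lam_L, np.inf, weight='cos', wvar=x)
    return val

for x in (0.2, 0.5, 0.8):
    bound = 4 * h_L / (np.pi * x)        # eq. (bound) with N_x = 1, |dh|_max <= h(lam_L)
    assert abs(delta_f(x)) < bound
```

In practice the oscillatory cancellation makes $|\delta\tilde f(x)|$ far below the bound, consistent with the remark that \eq{bound} is conservative.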
It is worth pointing out that our estimate of the upper bound in \eq{bound} can be highly overestimated, as $\delta \tilde h(\lambda)$ has an oscillation from $\cos(\lambda)$ and $\sin(\lambda)$ which are out of phase with $\cos(x\lambda)$ for $0<x<1$, and $|\delta \tilde h(\lambda)|_{\rm max}$ could be much smaller than $|\tilde h(\lambda_L)|$ and attained only at a sharp peak within $\lambda_L < \lambda < \lambda_L + N_x 2\pi/x$. Therefore, the FT with exponential extrapolation is under control for moderate and large $x$. When $\tilde h(\lambda_L)$ is small enough, the model uncertainty from the extrapolation can be controlled to be much smaller than the other systematic uncertainties which are about $10\%-20\%$ in this work. It is worth comparing to the extrapolation error when the correlation function decreases algebraically as $1/|\lambda|^{d}$, which corresponds to the generic model \begin{align} \tilde h(\lambda) &= \left({\lambda_L\over \lambda}\right)^d g(\lambda)\,. \end{align} Suppose we truncate at $\lambda_L=10$, then \begin{align} \label{eq:algebraic} \left({\lambda_L\over \lambda_L + N_x 2\pi/x}\right)^d & \sim (1+0.6N_x/x)^{-d}\,. \end{align} The power $d$ is related to the small-$x$ behavior of the PDF. If we parameterize the PDF as $\sim x^a(1-x)^b$, then with LO matching one can derive that $d={\rm min}\{1+a, 1+b\}$~\cite{Ji:2020brr}, which is ${\cal O}(1)$ empirically. Therefore, it will take $N_x\gg 1$ for the factor in \eq{algebraic} to decrease sufficiently to satisfy the condition $|\delta \tilde h'(\lambda_L+N_x2\pi/x)| \ll x$. As a result, the uncertainty in the FT is orders of magnitude larger than that of the extrapolation with exponential decay.\\ To test our claim of controlled FT error with exponential decay, we choose a particular model \begin{align}\label{eq:extest} \tilde h(\lambda) &= \tilde h(\lambda_L) \left({\lambda_L\over \lambda}\right)^de^{- c|\lambda - \lambda_L|}\,.
\end{align} Suppose that the extrapolation is done at $\lambda_L=10$ with $\tilde h(\lambda_L)=0.15$, and the parameters $c$ and $d$ are fitted with errors $\delta c$ and $\delta d$, then we analytically FT the extrapolated result to the $x$-space, and calculate its error using \begin{align} \delta \tilde f(x,c,d) &= \sqrt{\left({\partial \tilde f \over \partial c}\right)^2\delta c^2 + \left({\partial \tilde f \over \partial d}\right)^2\delta d^2}\,. \end{align} \begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{error} \caption{Estimate of error in the FT with extrapolation using the model in \eq{extest}.} \label{fig:error} \end{figure} In \fig{error}, we plot the extrapolation error against $x$. We have chosen different central values of the parameters $c$ and $d$ and fairly large uncertainties in them. The parameter $d$ cannot have a large negative value, otherwise it would make $\tilde h(\lambda)$ grow beyond $\lambda_L$. In most of the scenarios considered, the error is $\lesssim 0.1$ for $x>0.1$. As we shall see below, the actual extrapolation error is much smaller than this estimate and thus negligible when compared to the other systematic errors.\\ In the following, we perform the extrapolation with four different models. The extrapolation is carried out on each bootstrap sample via a least-squares fit. For each $P^z$, we truncate $\tilde h(z)$ at the largest $z$, $z_{>0}$, where the central value of $\tilde h(z)$ remains positive, and choose $z_{\rm max}=\{z_{>0}-2a, z_{>0}-a, z_{>0}\}$ to estimate the truncation error. The range of $z$ used to fit the parameters is $z_{\rm min} \le z \le z_{\rm max}$ where $z_{\rm min}$ satisfies $\tilde h(z_{\rm min})< 0.2$. The continuity condition between data and model is imposed at the midpoint of the fit range, namely $z_L$, which is listed in Table~\ref{tab:zL}.
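The error propagation above can be sketched numerically by Fourier transforming the tail model of \eq{extest} and taking finite-difference derivatives with respect to $c$ and $d$ (the parameter values below are illustrative, not our fit results):

```python
import numpy as np
from scipy.integrate import quad

lam_L, h_L = 10.0, 0.15                  # matching the test setup in the text

def f_ext(x, c, d):
    """FT of the extrapolated tail, model of eq. (extest)."""
    val, _ = quad(lambda lam: h_L * (lam_L / lam) ** d
                  * np.exp(-c * (lam - lam_L)) / np.pi,
                  lam_L, np.inf, weight='cos', wvar=x)
    return val

def delta_f_ext(x, c, d, dc, dd, eps=1e-4):
    """Gaussian error propagation with finite-difference partials in c and d."""
    dfdc = (f_ext(x, c + eps, d) - f_ext(x, c - eps, d)) / (2 * eps)
    dfdd = (f_ext(x, c, d + eps) - f_ext(x, c, d - eps)) / (2 * eps)
    return np.hypot(dfdc * dc, dfdd * dd)
```

For example, `delta_f_ext(0.5, 0.2, 1.0, 0.05, 0.5)` gives the tail's contribution to the FT error at $x=0.5$ for assumed fit errors $\delta c=0.05$, $\delta d=0.5$.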
The extrapolation models are: \begin{table} \centering \begin{tabular} {|>{\centering\arraybackslash} p{1.0cm}|>{\centering\arraybackslash} p{3.0cm}|>{\centering\arraybackslash} p{3.0cm}|} \hline & \multicolumn{2}{c|}{$z_L/a$}\\ \hline $n_z$ & $a=0.04$ fm & $a=0.06$ fm \\ \hline $1$ & $\{29, 30, 31\}$ & N/A \\ \hline $2$ & $\{26, 27, 28\}$ & $\{19, 20, 21\}$\\ \hline $3$ & $\{19, 20, 21\}$ & $\{16, 17, 18\}$\\ \hline $4$ & $\{24, 25, 26\}$ & $\{14, 15, 16\}$\\ \hline $5$ & $\{21, 22, 23\}$ & $\{15, 16, 17\}$ \\ \hline \end{tabular} \caption{Choices of $z_L$ for the extrapolations.} \label{tab:zL} \end{table} \paragraph*{Exponential decay model,} or ``model-exp''. The model for extrapolation is \begin{align}\label{eq:exponential} A {e^{-m_{\rm eff}|z|}\over |\lambda|^d}\,. \end{align} We have tried to fit $m_{\rm eff}$ from the same range of $z$ for $P^z=0$ matrix elements with a similar form, $A e^{-m_{\rm eff}|z|}/|z|^d$, and found that $m_{\rm eff}$ is around $0.1$ GeV, about the same scale as the phenomenological estimate. For the $P^z\neq0$ matrix elements, we do not fix $m_{\rm eff}$, but constrain it with a prior $m_{\rm eff} \ge m_{\rm min}$. To test the dependence on this prior condition, we have set $m_{\rm min}=\{0, 0.1, 0.2\}$ GeV. Besides, we also impose $A>0$ and $d>0$ to ensure that the extrapolated result is positive and decreases in $\lambda$. \paragraph*{Power-law decay model,} or ``model-pow''. The model is defined by setting $m_{\rm eff}=0$ in model-exp. As the $P^z\to\infty$ limit of model-exp, model-pow can be used to give a coarse estimate of the significance of higher-twist effects, although its FT error is not well under control as we discussed above. We impose the conditions $A, d >0$ so that the fitted results decrease to zero as $\lambda\to\infty$. \paragraph*{Two-parameter model with exponential decay,} or ``model-2p-exp''. 
As we can see from \fig{hybme}, the matrix elements at $\lambda_L\sim 6-10$ do not show a clear exponential decay, although they can be fitted by the latter with $\chi^2/d.o.f < 1$ due to the large errors. This may indicate that there is oscillation in $\tilde h(\lambda)$. To incorporate such dependence, we ignore the higher-twist contributions and assume that the qPDF is parameterized as \begin{align} f_v(x;a,b)&={\Gamma(2+a+b)\over \Gamma(1+a)\Gamma(1+b)} |x|^a(1-|x|)^b\nonumber\\ &\qquad \times \theta(|x|)\theta(1-|x|)\,. \end{align} By doing an inverse FT into the $\lambda$-space, the asymptotic form of $\tilde h_{\rm 2p}(\lambda)$ at large $\lambda$ reads, \begin{align}\label{eq:asym} \tilde h_{\rm 2p}(\lambda) & = A\ {\rm Re}\left[\frac{\Gamma(1+a)}{(-i|\lambda|)^{a+1}}+e^{i\lambda}\frac{\Gamma(1+b)}{(i|\lambda|)^{b+1}}\right]\,. \end{align} Then we multiply $\tilde h_{\rm 2p}(\lambda)$ with an exponential decay factor as our model for extrapolation, \begin{align}\label{eq:2pexp} \tilde h_{\text{2p-exp}} &=\tilde h_{\rm 2p}(\lambda) e^{-m_{\rm eff}(z-z_L)}\,. \end{align} \paragraph*{Two-parameter model,} or ``model-2p''. Again, we ignore the exponential decay and use $\tilde h_{\rm 2p}$ as the extrapolation model, which can help us estimate the significance of higher-twist effects.\\ \begin{figure*}[htb] \centering \includegraphics[width=0.33\textwidth]{qPDF_zL_pz3} \includegraphics[width=0.32\textwidth]{qPDF_zL_pz4} \includegraphics[width=0.32\textwidth]{qPDF_zL_pz5} \caption{FT with different $z_L$ for model-exp extrapolation (with prior $m_{\rm eff}>0.1$ GeV) of the NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$ at $z_S=0.24$ fm.} \label{fig:zLs} \end{figure*} In \fig{zLs} we compare the FT with different $z_L$ for extrapolation with model-exp and condition $m_{\rm eff}>0.1$ GeV. Except for very small $x$, the results are consistent, and those at smaller $z_L$ have smaller errors because the error of the matrix element grows with $z$. 
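The asymptotic form \eq{asym} follows from the endpoint (Watson's lemma) expansion of the FT of $x^a(1-x)^b$, which can be checked numerically. A sketch with illustrative exponents (the agreement improves like $1/\lambda$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a, b = 0.5, 3.0                          # illustrative small- and large-x exponents
A = gamma(2 + a + b) / (gamma(1 + a) * gamma(1 + b))

def h_exact(lam):
    """Re int_0^1 dx f_v(x) e^{i lam x} with f_v = A x^a (1-x)^b."""
    val, _ = quad(lambda x: A * x**a * (1 - x)**b, 0, 1,
                  weight='cos', wvar=lam)
    return val

def h_asym(lam):
    """Endpoint asymptotics of eq. (asym)."""
    term = (gamma(1 + a) / (-1j * lam)**(a + 1)
            + np.exp(1j * lam) * gamma(1 + b) / (1j * lam)**(b + 1))
    return A * term.real

lam = 100.0
assert abs(h_asym(lam) / h_exact(lam) - 1) < 0.15   # O(1/lam) accuracy
```

The two endpoint terms come from $x\to0$ and $x\to1$ respectively; the $x\to1$ term carries the oscillating phase $e^{i\lambda}$ responsible for the non-monotonic behavior discussed above.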
Therefore, for the rest of our analysis, we simply use the largest $z_L$ for each $P^z$. \begin{figure*}[htb] \centering \includegraphics[width=0.33\textwidth]{ext-pz3} \includegraphics[width=0.32\textwidth]{ext-pz4} \includegraphics[width=0.32\textwidth]{ext-pz5} \caption{Extrapolation with different models for the NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$. At $P^z=1.94$ GeV, we have added the comparison with the 2p-exp and 2p models.} \label{fig:extrapolation} \end{figure*} In \fig{extrapolation} we show the extrapolations with different models, which have noticeable differences at $\lambda > \lambda_L$. In \fig{models} we compare the FT with different extrapolation models as well as with the discrete FT (DFT). As we can see, the DFT introduces unphysical oscillation in the qPDF which is due to the truncation of $\tilde h(\lambda)$ at $\lambda_L$. In contrast, the extrapolations are free of such oscillation, and different models yield consistent qPDFs at moderate and large $x$, though they differ significantly at small $x$. We notice that the qPDF from model-2p extrapolation still has slight oscillations despite its agreement with the others, because the extrapolated result decays too slowly at $\lambda > \lambda_L$. As expected, the models with exponential decay lead to regular qPDFs at $x=0$, whereas model-pow and model-2p give divergent qPDFs as $x\to0$. \begin{figure*}[htb] \centering \includegraphics[width=0.33\textwidth]{qPDF_model_pz3} \includegraphics[width=0.32\textwidth]{qPDF_model_pz4} \includegraphics[width=0.32\textwidth]{qPDF_model_pz5} \caption{Comparison of DFT and FT with different extrapolation models for the NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$ at $z_S=0.24$ fm. At $P^z=1.94$ GeV, we have added the comparison with the 2p-exp and 2p models.} \label{fig:models} \end{figure*} Based on the above results, we use model-exp with $m_{\rm eff}>0.1$ GeV for the FT in our following analysis.
To have a coarse estimate of the uncertainties from extrapolation model and higher-twist contributions, we look into the difference between final PDFs matched from qPDFs with model-exp and model-pow extrapolations. Recall that although the hybrid-scheme matrix elements $\tilde h(\lambda,\lambda_S,P^z)$ should be RG invariant, they can still depend on $\mu$ due to the fixed-order Wilson coefficients used in the matching between lattice and ${\overline{\mathrm{MS}}}$ schemes. In \fig{nlovsnnlo}, we compare the qPDFs which are FTs of $\tilde h(\lambda,\lambda_S,P^z,\mu,a)$ obtained at $a=0.04$ fm with $C_0^{\rm NLO}$ and $C_0^{\rm NNLO}$. We choose $\mu=1.0$ GeV for $C_0^{\rm NLO}$ and $\mu=2.0$ GeV for $C_0^{\rm NNLO}$, as the \textit{ansatz} in \eq{ansatz} appears to best describe the lattice matrix elements at these scales according to \fig{mfit}. The results are almost identical to each other, which shows that the renormalon-inspired model with fixed-order Wilson coefficient can indeed describe the data within a specific window of $\mu$. At NLO, smaller $\mu$ is favored as $\alpha_s(\mu)$ is larger so that the renormalon effects become important at lower orders. In \fig{mu} we show the $\mu$-dependence of the qPDFs from NLO- and NNLO-matched $\tilde h(\lambda,\lambda_S,P^z,\mu,a)$. As one can see, the results have mild dependence on $\mu$ which becomes more significant at lower scales. Therefore, the uncertainty from scale variation will also be larger in this region. \begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{qPDF_nlo_vs_nlo_pz4} \caption{Comparison of the qPDF with model-exp extrapolation (with $m_{\rm eff}>0.1$ GeV) of the NLO- and NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$ at $z_S=0.24$ fm and $z_L=26a$.
The choices of $\mu$ are based on where the renormalon model best describes the matrix elements.} \label{fig:nlovsnnlo} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{qPDF_mu_nlo_pz4} \includegraphics[width=0.8\columnwidth]{qPDF_mu_nnlo_pz4} \caption{Comparison of the qPDF at different $\mu$ with model-exp extrapolation of the NLO- and NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$.} \label{fig:mu} \end{figure} \section{Perturbative matching} \label{app:match} In this section we perform the perturbative matching to the qPDF. Recall that \eq{fact} relates the qPDF to the PDF, \begin{align}\label{eq:fact2} f_v(x, \mu)=& \int_{-\infty}^{\infty} \frac{dy}{|y|} \ C^{-1}\!\left(\frac{x}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) \tilde f_v(y,z_S,P^z) \nonumber\\ &\qquad + {\cal O}\Big(\frac{\Lambda_{\text{QCD}}^2}{(xP^z)^2},\frac{\Lambda_{\text{QCD}}^2}{((1-x)P^z)^2}\Big)\,. \end{align} The matching kernel $C$ can be expanded to $O(\alpha_s)$ as \begin{align} &C\!\left(\frac{x}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) \nonumber\\ &= \delta\left(\frac{x}{y} - 1\right) + \alpha_s C^{(1)}\!\left(\frac{x}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) \nonumber\\ & \quad + \alpha_s^2 C^{(2)}\!\left(\frac{x}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) + {\cal O}(\alpha_s^3)\,. 
\end{align} The inverse matching kernel $C^{-1}$ can be obtained by solving \begin{align} \int{dz\over |z|} C^{-1}\!\left(\frac{x}{z}, \frac{\mu}{zP^z},|z|\lambda_S\right)C\!\left(\frac{z}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) &= \delta\big({x\over y}-1\big) \end{align} order by order in $\alpha_s$~\cite{Zhao:2021xxx}, and the result is \begin{align}\label{eq:invmatch} &C^{-1}\!\left(\frac{x}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) \nonumber\\ &= \delta\left(\frac{x}{y} - 1\right) - \alpha_s C^{(1)}\!\left(\frac{x}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) \nonumber\\ & \quad + \alpha_s^2 \int{dz\over |z|} C^{(1)}\!\left(\frac{x}{z}, \frac{\mu}{zP^z},|z|\lambda_S\right)C^{(1)}\!\left(\frac{z}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) \nonumber\\ & \quad - \alpha_s^2 C^{(2)}\!\left(\frac{x}{y}, \frac{\mu}{yP^z},|y|\lambda_S\right) + {\cal O}(\alpha_s^3)\,. \end{align} It has been shown in Ref.~\cite{Zhao:2021xxx} that the inverse matching coefficient satisfies the correct RG and $P^z$-evolution equations. \subsection{Numerical implementation of matching} Since in the asymptotic regions, \begin{align} \lim_{y\to\infty}C\Big({x\over y}\Big) \to {\rm\ finite}\,,\quad \quad \lim_{y\to0}C\Big({x\over y}\Big) \propto {y^2\over x^2}\,, \end{align} and \begin{align} C\left({x\over y}\right) &\equiv C_r\left({x\over y}\right) - \delta\left({x\over y}-1\right) \int_{-\infty}^\infty dy'\ C_r(y') \end{align} is a plus function (where ``$r$'' denotes the $x\neq y$ part) that regulates the singularity at $y=x$, the convolution integral in \eq{fact2} is convergent and insensitive to the cutoffs for $y\to 0, x, \infty$, as long as the qPDF is integrable. Therefore, we are able to evaluate the integral numerically within a finite range of $y$ with a target precision.
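The discretization of the convolution as a matrix-vector product can be sketched as follows, with a smooth mock kernel standing in for the actual matching coefficient (which is a plus function and requires the $\delta y$ cutoff discussed below). The grid spacing and mock shapes are ours:

```python
import numpy as np
from scipy.integrate import quad

dy = 0.005                               # the analysis uses dy = 0.001; coarser here
y = np.arange(-2.0 + dy / 2, 2.0, dy)    # midpoint grid, avoids y = 0

def kernel(r):
    """Smooth mock stand-in for the inverse matching coefficient."""
    return np.exp(-np.minimum(r * r, 700.0))

# discretize f(x_i) = int dy/|y| K(x_i/y) ftilde(y) as a matrix-vector product
X, Y = np.meshgrid(y, y, indexing='ij')
M = kernel(X / Y) * dy / np.abs(Y)

ftilde = np.exp(-4 * (y - 0.5)**2)       # mock qPDF
f_mat = M @ ftilde

# cross-check one x value against direct numerical integration
i = int(np.argmin(np.abs(y - 0.5)))
f_int, _ = quad(lambda yy: kernel(y[i] / yy) * np.exp(-4 * (yy - 0.5)**2) / abs(yy),
                -2.0, 2.0, points=[0.0], limit=200)
assert abs(f_mat[i] - f_int) < 5e-3 * abs(f_int)
```

Once `M` is built, matching every bootstrap sample is a single matrix multiplication, which is the speed-up exploited below.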
\begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{int_vs_mat} \caption{Comparison of matrix multiplication to direct numerical integration for the NLO matching correction to one qPDF sample.} \label{fig:matrix} \end{figure} The numerical integration in \eq{fact} is time-consuming, especially when we have to perform the matching for the qPDF on each bootstrap sample. Therefore, to speed up the matching procedure, we discretize the integral in \eq{fact} and reexpress it as multiplication of a matching matrix and the qPDF vector. In our implementation, our integration domain is $-2.0< y < 2.0$ discretized with a step size $\delta y = 0.001$. Since the qPDF falls very close to zero at $|y|=2.0$, the corresponding uncertainty is negligible, as we have verified by varying the truncation point. Since the matching coefficient is a plus function, the step size $\delta y$ also serves as a soft cutoff for the singularity near $|x/y|=1$ in the plus functions. To test how well the matrix multiplication can reproduce the exact numerical integration, we compare the NLO corrections to the qPDF from one bootstrap sample using the two methods in \fig{matrix}. With our current step size, the results are almost identical for $x$ as small as $0.01$. Moreover, to test the reliability of our inverse matching coefficient, which is obtained through expansion in $\alpha_s$, we compare it to direct matrix inversion. To be specific, we construct a square matching matrix $C$ in $x$ and $y$ with $x,y\in [-2,2]$, which is asymmetric but has dominant diagonal elements, and then invert it to obtain the inverse matching matrix $C^{-1}$. At small $\alpha_s$, the matrix $C$ can be schematically expressed as \begin{align} C & = {\cal I} + {\cal E}\,, \end{align} where ${\cal I}$ is an identity matrix, whereas ${\cal E}$ is ${\cal O}(\alpha_s)$, so that its inverse can be expanded as \begin{align}\label{eq:iter} C^{-1} &= {\cal I} - {\cal E} + {\cal E}^2 - {\cal E}^3 + \ldots\,.
\end{align} \begin{figure}[htb] \centering \subfloat[]{ \centering \includegraphics[width=0.8\columnwidth]{inv_vs_as_exp2} \label{fig:matinv1} } \subfloat[]{ \centering \includegraphics[width=0.8\columnwidth]{inv_vs_as_exp} \label{fig:matinv2} } \caption{(a) Comparison of the NLO matching correction to the qPDF with matrix inversion and the expansion in \eq{iter} to order $n$. (b) Comparison of NLO and NNLO matching corrections to the qPDF from direct matrix inversion and the $\alpha_s$-expansion in \eq{invmatch}.} \label{fig:inv} \end{figure} In \fig{matinv1} we first test the convergence of the solution in \eq{iter} for the NLO matching matrix. By expanding the solution to order $n$, we calculate the NLO matching correction to a qPDF sample, and then compare it to the result from direct matrix inversion. Since our main purpose is to compare the two inversion methods, we increase the step size to $\delta y=0.01$ to reduce the computing time, regardless of the accuracy of numerical integration. We find that by increasing $n$, the expansion method gradually converges to direct inversion, as expected. Of course, in perturbation theory, we should calculate the matching coefficient to $n$-loop accuracy for consistency, for $\alpha_s$ is the actual power-counting parameter. In \fig{matinv2} we compare the NLO and NNLO matching corrections to a qPDF sample using direct matrix inversion and the $\alpha_s$-expansion methods. The results are basically consistent with each other for almost the entire range of $x\in(0,1)$, except for small deviations. This is because direct matrix inversion includes all-order terms in $\alpha_s$, and the deviations reflect the size of higher-order effects, whose smallness shows that the perturbation series is convergent. With our current two-loop accuracy, we adopt the $\alpha_s$-expansion method.
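The convergence of the expansion \eq{iter} can be reproduced with a small mock matrix (a sketch; the random ${\cal E}$ is a stand-in for the actual matching matrix, and the sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha_s = 50, 0.3

# mock matching matrix C = I + E, with E = O(alpha_s) off the identity
E = alpha_s * rng.normal(scale=0.1, size=(n, n))
C = np.eye(n) + E

def neumann_inverse(E, order):
    """C^{-1} ~ I - E + E^2 - ... truncated at the given order, cf. eq. (iter)."""
    inv = np.eye(len(E))
    term = np.eye(len(E))
    for _ in range(order):
        term = -term @ E
        inv = inv + term
    return inv

direct = np.linalg.inv(C)
err = {k: np.max(np.abs(neumann_inverse(E, k) - direct)) for k in (1, 2, 4, 8)}
assert err[8] < err[2] < err[1]          # truncation error shrinks with order
assert np.allclose(neumann_inverse(E, 12) @ C, np.eye(n), atol=1e-5)
```

The series converges because the spectral radius of ${\cal E}$ is well below one, mirroring the dominance of the diagonal in the actual matching matrix.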
\subsection{Perturbative convergence} In \fig{nnlo} we show the matched results for the PDF from the qPDF obtained from model-exp extrapolation (with $m_{\rm eff}>0.1$ GeV) of the NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$. As one can see, the NNLO correction is generally smaller than the NLO correction for moderate $x$, which indicates good perturbative convergence. Near the end-point regions, the NLO and NNLO corrections become larger than 50\%, which suggests that higher-order corrections or resummation effects become important. \begin{figure*}[htb] \centering \includegraphics[width=0.32\textwidth]{convergence0} \includegraphics[width=0.32\textwidth]{convergence} \includegraphics[width=0.32\textwidth]{convergence1} \includegraphics[width=0.32\textwidth]{lo_ratio0} \includegraphics[width=0.32\textwidth]{lo_ratio} \includegraphics[width=0.32\textwidth]{lo_ratio1} \caption{Upper row: the PDFs from NLO and NNLO matching corrections are compared to the qPDF (or LO PDF), which is obtained from model-exp (with $m_{\rm eff}>0.1$ GeV) extrapolation of the NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$. Lower row: the ratio of NLO and NNLO corrections to the qPDF.} \label{fig:nnlo} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{scale_variation} \caption{Comparison of the PDFs at different $\mu$ obtained from the NLO- and NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$.} \label{fig:scale_var} \end{figure} To see whether the NNLO matching reduces the uncertainty from scale variation, we match qPDFs at different $\mu$ to the corresponding PDFs, and then use DGLAP equation to evolve the results to $\mu=2.0$ GeV. We use NLO matching coefficient and LO DGLAP evolution kernel for the qPDF obtained from the NLO-matched $\tilde h(\lambda,\lambda_s,P^z,\mu,a)$, and NNLO matching coefficient and NLO DGLAP evolution kernel for the qPDF obtained from the NNLO-matched $\tilde h(\lambda,\lambda_s,P^z,\mu,a)$. 
The NLO DGLAP evolution formula takes the following form, \begin{align} f_v(x,\mu) &= f_v(x,\mu_0) \\ &+ {\alpha_s(\mu_0)t\over 2\pi} \int_x^1{dy\over |y|}P_{qq}^{(0)}\left({x\over y}\right) f_v(y,\mu_0) \nonumber\\ &+ \left({\alpha_s(\mu_0)t\over 2\pi}\right)^2 \int_x^1{dy\over |y|}\left[P_{qq}^{V(1)} + {1\over2}P_{qq}^{(0)}\otimes P_{qq}^{(0)} \right.\nonumber\\ &\qquad\qquad\qquad\left. - {\beta_0\over2}P_{qq}^{(0)}\right]\left({x\over y}\right) f_v(y,\mu_0)\,, \end{align} where $t=\ln(\mu^2/\mu_0^2)$, $\beta_0=(11C_A-2n_f)/6$, $P_{qq}^{(0)}$ is the LO splitting kernel, and $P_{qq}^{V(1)}$ is the NLO splitting kernel~\cite{Curci:1980uw} for the valence quark PDF. \begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{scale_variation2} \caption{Comparison of the PDFs obtained from NNLO matching of the qPDFs at different $\mu$ and NLO DGLAP evolution to $\mu=2.0$ GeV.} \label{fig:scale_var2} \end{figure} Since there are only a few common $\mu$ values for the NLO- and NNLO-matched $\tilde h(\lambda,\lambda_s,P^z,\mu,a)$, we choose $\mu=1.4$ and $2.0$ GeV for our comparison. In \fig{scale_var} we show the scale variation of the PDFs from NLO and NNLO matching, where only the central values are plotted for our purpose. As one can see, the NNLO matching correction significantly reduces the uncertainty for $x\lesssim 0.4$ at NLO, while for $x\gtrsim 0.4$ the NNLO uncertainty band is still about half that of the NLO case. Therefore, the NNLO matching does indeed improve the perturbation theory uncertainty. Finally, for the NNLO matching we vary $\mu=2.0$ GeV by a factor of $\sqrt{2}$ and $1/\sqrt{2}$, and then use the NLO DGLAP equation to evolve the matched results to $\mu=2.0$ GeV, whose central values are shown in \fig{scale_var2}.
As one can see, there is virtually no difference between choosing $\mu=2.0$ and $2.8$ GeV as the factorization scale, but the lower choice of $\mu=1.4$ GeV does introduce larger uncertainty mainly because $\alpha_s$ becomes too large. Nevertheless, such uncertainty is still quite small compared to the other systematics.\\ \subsection{Dependence on $P^z$, $a$ and extrapolation model} \label{app:pzdependence} \begin{figure*}[htb] \centering \includegraphics[width=0.32\textwidth]{pz_dependence} \includegraphics[width=0.32\textwidth]{all_pz_f} \includegraphics[width=0.34\textwidth]{all_pz_xf} \caption{The PDFs from NNLO matching of the qPDFs at different $P^z$, which are obtained from model-exp extrapolation of the NNLO-matched $\tilde h(\lambda, \lambda_S, P^z,\mu,a)$.} \label{fig:pzs} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.32\textwidth]{a04_pz3_exp_vs_pow} \includegraphics[width=0.32\textwidth]{a04_pz4_exp_vs_pow} \includegraphics[width=0.32\textwidth]{a04_pz5_exp_vs_pow} \caption{Comparison of the final results from qPDFs obtained by different model extrapolations for the FT.} \label{fig:exp_vs_pow} \end{figure*} In \fig{pzs} we show the $P^z$-dependence of the PDF with NNLO matching correction. We find that despite the considerable differences between the qPDFs at $P^z\le 1.45$ GeV and those at $P^z\ge 1.94$ GeV, the matching corrections bring the final results closer, which shows the effectiveness of LaMET. Note that the matching drives the qPDF closer to the smaller $x$ region, so the error bands of the PDFs also shrink after matching as they receive contributions from the larger-$x$ region. Moreover, we find that the PDFs start to converge at $P^z\ge 1.29$ GeV, which corresponds to a boost factor of $\sim 4$. As $P^z$ increases, the results become smaller as $x\to1$, which agrees with our expectation that large momentum suppresses the higher-twist contributions.
It is worth mentioning that both the $P^z$-dependence and matching correction appear to be small for $x$ as low as $0.05$, which hints that the power correction and resummation effects are less severe than our naive estimate through power counting. In \fig{exp_vs_pow} we compare the PDFs matched from the qPDFs with model-exp (with $m_{\rm eff}>0.1$ GeV) and model-pow extrapolations. For $a=0.04$ fm and $P^z=1.94$ GeV, we also add a comparison with the model-2p-exp and model-2p extrapolations. Despite the differences between the qPDFs at small $x$, the matched results are almost identical even at the smallest $x$ shown in the plot. Again, this is the outcome of the PDF receiving contributions from the qPDF at larger $x$ through matching, which suggests that the extrapolation error can still be under control for $x$ as small as $\sim 0.01$. Note that the result from model-2p also shows agreement, but it includes slight oscillations in the $x$-space, because the extrapolated $\tilde h(\lambda)$ decays too slowly in the coordinate space. Therefore, in the region where other systematic errors are under control, the difference between model-exp and other extrapolations is negligible, and we will use the model-exp extrapolation to obtain the final results. \section{Final results} \label{app:final} The central value of our final result is obtained from the qPDF at $a=0.04$ fm, $z_S=0.24$ fm, $z_L=0.92$ fm, $\mu=2.0$ GeV and $P^z=2.42$ GeV with exponential extrapolation ($m_{\rm eff}>0.1$ GeV) and NNLO matching. The error from variation of the factorization scale is obtained by repeating the same procedure for $\mu=1.4$ and $2.8$ GeV and evolving the matched results to $\mu=2.0$ GeV with the NLO DGLAP equation, as shown in \fig{scale_var2}, where we let the error band cover all the data sets from the three different factorization scales.
In order to obtain a target precision of 10\%, we aim to control the relative ${\cal O}(\alpha_s^3)$ matching correction at $\mu=2.0$ GeV to be smaller than 5\%. By assuming that the perturbation series grows geometrically, this means that the relative NLO correction should be less than $\sqrt[3]{5\%}\approx 37\%$ and the relative NNLO correction less than $(5\%)^{2/3}\approx 14\%$. Comparing to \fig{nnlo}, this means that we should exclude the regions $x<0.03$ and $ x > 0.88$. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{pow_correction0} \includegraphics[width=0.8\columnwidth]{pow_correction} \caption{Estimate of the size of power correction $\alpha(x)/P_z^2$ (upper panel) and its relative size to the qPDF $\tilde f_v(x, P^z=2.42\ {\rm GeV})$ (lower panel).} \label{fig:power} \end{figure} To estimate the size of the power corrections, we fit the PDFs obtained at $a=0.04$ fm, $P^z=\{1.45,1.94,2.42\}$ GeV and $a=0.06$ fm, $P^z=\{1.72,2.15\}$ GeV to the \textit{ansatz} $f_v(x) + \alpha(x) / P_z^2$ for each fixed $x$, and show the size of the power correction term in \fig{power}. At $P^z=2.42$ GeV, we find that the absolute value of the power correction diverges at very small $x$, as expected, but its relative size $ \alpha(x) / [P_z^2 f_v(x)]$ remains finite because the PDF also diverges. On the contrary, $ \alpha(x) / [P_z^2 f_v(x)]$ starts to grow as $x\to1$. According to our estimate, $ \alpha(x) / [P_z^2 f_v(x)] \lesssim 0.1$ for $0.01<x<0.80$ and $ \alpha(x) / [P_z^2 f_v(x)] \lesssim 0.05$ for $0.01<x<0.70$. According to \fig{exp_vs_pow}, the qPDF from power-law extrapolation leads to an almost identical PDF after the matching correction for $x$ as small as 0.01. Our explanation is that the matching correction drives the qPDF to smaller $x$, so the PDF at a given $x$ receives contributions from the larger-$x$ region of the qPDF which has less $P^z$ dependence.
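At fixed $x$, the \textit{ansatz} $f_v(x)+\alpha(x)/P_z^2$ is linear in $1/P_z^2$, so the fit reduces to a straight-line fit. A minimal sketch with synthetic data (the lattice values themselves are not reproduced here):

```python
import numpy as np

Pz = np.array([1.45, 1.72, 1.94, 2.15, 2.42])   # GeV, momenta used in the fits

def fit_power_correction(values, Pz):
    """Least-squares fit of values(Pz) = f_v + alpha / Pz^2 at one fixed x."""
    alpha, f_v = np.polyfit(1.0 / Pz**2, values, 1)
    return f_v, alpha

# synthetic closure test: recover known (f_v, alpha)
f_true, a_true = 0.8, -0.3
f_fit, a_fit = fit_power_correction(f_true + a_true / Pz**2, Pz)
assert abs(f_fit - f_true) < 1e-8 and abs(a_fit - a_true) < 1e-8
```

The intercept of the line is the $P^z\to\infty$ value $f_v(x)$ and the slope is the power-correction coefficient $\alpha(x)$.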
Although there are logarithms of $\mu/(xP^z)$ in the matching coefficient which become large at small $x$, they are always multiplied by the DGLAP splitting function, which when convoluted with the qPDF always drives the result to smaller $x$, thus the perturbative correction remains small even at $x=0.03$. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{final_uncertainty} \caption{Statistical and scale-variation uncertainty of the PDF obtained from the qPDF at $a=0.04$ fm and $P^z=2.42$ GeV.} \label{fig:sigma} \end{figure} In \fig{sigma} we show the uncertainty of the PDF, $\delta f_v(x)/f_v(x)$, where $\delta f_v(x)$ includes both statistical and scale-variation errors. The uncertainty is $\le 20\%$ for $0.01\le x \le 0.93$, as $x=0.01$ is the smallest $x$ that we show in the plot, and $\le 10\%$ for $0.08\le x \le 0.45$. Therefore, by combining the estimates of power correction, higher-order perturbative correction, statistical and scale-variation errors, we determine the PDF at $0.03\lesssim x\lesssim 0.80$ with $\le 20\%$ uncertainty and at $0.08\lesssim x \lesssim 0.45$ with $\le 10\%$ uncertainty, which is shown in \fig{comp}. \newpage
\section{Introduction} \label{intro} Relativistic collimated outflows have been extensively observed in active galactic nuclei (AGN) and X-ray binaries (XRB). Gamma-ray bursts (GRB) are also believed to be connected to ultrarelativistic and collimated outflows to overcome the ``compactness problem'' (e.g. Piran 1999) and to explain the achromatic afterglow breaks (Rhoads 1997; Sari et al. 1999). It is also believed that all these sources are powered by accretion of matter onto a compact object. The widely accepted mechanism for jet acceleration and collimation in the context of AGN and XRB jets is that of magnetic driving. According to this paradigm, magnetic fields anchored to a rotating object can launch an outflow. The rotating object can be a star (Weber \& Davis 1967; Mestel 1968), a pulsar (Michel 1969; Goldreich \& Julian 1970), an accretion disk (Bisnovatyi-Kogan \& Ruzmaikin 1976; Blandford 1976; Lovelace 1976; Blandford \& Payne 1982) or a rotating black hole (Blandford \& Znajek 1977). The material is accelerated thermally up to the sonic point and centrifugally until the Alfv\'en point, defined as the point where the flow speed equals the Alfv\'en speed. Beyond the Alfv\'en point the inertia of matter does not allow corotation of the magnetic field. As a result, the magnetic field lines bend, developing a strong toroidal component. Further out the flow passes through the fast magnetosonic point, beyond which most of the energy of the flow remains in the form of Poynting flux in the case of relativistic outflows (Michel 1969; Goldreich \& Julian 1970; Sakurai 1985; Beskin et al. 1998). Further acceleration of the flow is not straightforward within ideal MHD. It can be shown, for example, that a radial flow is not accelerated after the fast point (e.g. Beskin 1998). This is a result of the fact that the magnetic pressure and tension terms of the Lorentz force almost cancel each other (Begelman \& Li 1994).
A limited degree of acceleration of the flow is possible if it has a decollimating shape (i.e. the magnetic field diverges faster than radial; Li et al. 1992; Begelman \& Li 1994). Magnetized jets suffer from a number of instabilities. Interaction with the environment causes instability of the Kelvin-Helmholtz type, while kink instability causes internal rearrangement of the field configuration. Here we focus on the kink instability, since it internally dissipates magnetic energy associated with the Poynting flux. As demonstrated elsewhere (Drenkhahn 2002; Drenkhahn \& Spruit 2002; Spruit \& Drenkhahn 2003), such internal energy dissipation directly leads to acceleration of the flow. Dissipation steepens the radial decrease of magnetic pressure, thereby lifting the cancellation between the outward pressure force and the inward magnetic tension, and allowing the magnetic pressure gradient to accelerate the flow. \subsection{``AC'' versus ``DC'' outflows} While dissipation of magnetic energy can thus happen through kink instability in an initially axisymmetric (``DC'') flow, it can also happen more directly by reconnection in the outflow generated by a non-axisymmetric rotator (``AC'' flow). The two cases behave differently in terms of the acceleration profile, and the location and amount of radiation produced by the dissipation process. A nonaxisymmetric rotator produces a ``striped'' outflow (as in the case of a pulsar wind) with reconnectable changes of direction of the field embedded in the flow, and energy release independent of the opening angle of the jet. In the DC case, where energy release is instead mediated by an instability, the rate of energy release is limited by the time it takes an Alfv\'en wave to travel across the width of the jet. This makes it a sensitive function of the jet opening angle. The ``AC'' case has been studied in detail by Drenkhahn (2002), Drenkhahn and Spruit (2002), with application to Gamma-ray bursts.
In the case of AGN and XRB, on the other hand, the collimated jet is arguably best understood if the field in the inner disk is of uniform polarity, resulting in an initially axisymmetric flow. Another difference is the lower bulk Lorentz factors in the AGN/XRB case, resulting in faster energy release (in units of the dynamical time of the central engine). The purpose of this paper is to explore the consequences of magnetic dissipation by internal instability in such axisymmetric (or DC) cases, and its observational signatures. We also apply the calculations to the GRB case, where we compare the results with the AC case studied before. \subsection{Energy release and field decay by the instability} We limit ourselves to a flow with constant opening angle. That is, we leave aside the collimation process. Kink instability is modeled by adding a sink term to the induction equation to account for the non-ideal MHD effects arising from it. Linear stability theory of kink instability yields a growth time of the order $t_{\rm k}=r\theta /v_{\rm A,\phi}$ where $r$ is the radius of the jet, $\theta$ its opening angle and $v_{\rm A,\phi}$ the Alfv\'en speed based on the azimuthal component $B_\phi$ of the field. This is independent of the poloidal field component, at least for the so-called internal kink modes (which do not disturb the outer boundary of the field, cf.\ Bateman 1978 for details). For stability analysis and numerical simulations with astrophysical applications see, e.g. Begelman (1998); Appl et al. (2000); Lery et al. (2000); Ouyed et al. (2003); Nakamura \& Meier (2004). Based on linear theory, we would predict that the poloidal field component can be ignored for the rate of energy release. It is not clear, however, that the poloidal component can be ignored for the nonlinear development of the instability, which is what actually determines the energy release. 
As a way to explore the effect of possible nonlinear stabilization by a poloidal component, we compare two cases in the calculations: one with energy release and field decay given by the Alfv\'en time across the jet ($t_{\rm k}$ above), and one in which this rate is assumed to be reduced by the poloidal component. This mainly affects the early phases of the acceleration of the flow beyond the light cylinder. We find that kink instability has time to grow in the AGN and XRB cases, dissipating energy in the toroidal component of the magnetic field while accelerating the flow at the same time. The dissipation of magnetic energy is almost complete and fast in the case of AGN jets, so that on parsec scales the flow has become kinetic energy dominated, in agreement with current interpretations of the observations (e.g. Sikora et al. 2005, where the possible effects of magnetic dissipation are also discussed briefly). The DC model with kink instability also produces significant flow acceleration in the GRB case, but conversion of the Poynting flux is less effective than in the AC model in this case. The structure of the paper is as follows. In Sec.~2 we discuss MHD instabilities in jets and focus on the kink instability and its growth rate. The model is described in Sec.~3, including the assumptions, the dynamical equations and the parameters at the base of the flow. In Sec.~4, we apply the model to the case of AGN jets and GRBs, while the last two sections present the discussion and conclusions. \section{The kink instability} \label{kink} Magnetized outflows are subject to a variety of instabilities. These can be classified as pressure driven, Kelvin-Helmholtz and current driven instabilities (see, e.g., Kadomtsev 1966; Bateman 1978). Pressure driven instabilities (Kersal\'e et al. 2000; Longaretti 2003) are related to the interplay between the gas pressure and the curvature of magnetic field lines.
They are relevant close to the launching region of the outflows and may be important as long as the outflow is still subsonic. Kelvin-Helmholtz (KH) instabilities (Ray 1981; Ferrari et al. 1981; Bodo et al. 1989; Hardee \& Rosen 1999) arise from velocity gradients in the flow and may be important in the shearing layer between the outflow and the external medium. KH instabilities have been extensively studied and become strongest in the region beyond the Alfv\'en point but still within the fast magnetosonic point. Current driven (CD; Eichler 1993; Spruit et al. 1997; Begelman 1998; Lyubarskii 1999; Appl et al. 2000) instabilities have received much less attention but are the most relevant ones for Poynting-flux dominated outflows, since they can convert the bulk Poynting flux into radiation and kinetic energy of the flow (for the role of CD instabilities in an electromagnetic model for GRBs see Lyutikov \& Blandford 2003). Among the CD instabilities, the $m=1$ kink instability is generally the most effective. In this work, we focus on the effect of the kink instability on the dynamics of these outflows. \subsection{The growth rate of the instability} While magnetized outflows can be accelerated ``centrifugally'' by large scale poloidal fields (Blandford \& Payne 1982; Sakurai 1985, 1987), at the radius of the light-cylinder inertial forces become significant and the magnetic field cannot force corotation. At this radius the strengths of the toroidal and the poloidal components are comparable. Further out, the induction equation dictates that, within ideal MHD, the toroidal component dominates over the poloidal one, since the strength of the former scales as $1/r$ while that of the latter as $1/r^2$. Such a strongly wound-up magnetic configuration is known, however, to be highly unstable to the kink $m=1$ mode from tokamak experiments (see, e.g., Bateman 1978).
Linear stability analysis has shown that the growth time of the instability is given by the Alfv\'en crossing time across the outflow in a frame comoving with it (Begelman 1998; Appl et al. 2000). The study of the non-linear evolution of the instability demands three dimensional relativistic MHD simulations over many decades of radii and it is, therefore, not surprising that the issue is not settled. Lery et al. (2000) and Baty \& Keppens (2002) argued in favor of the dynamical importance of the instability in reorganizing the magnetic configuration inside the jet. It has been argued, however, that the jet creates a ``backbone'' of strong poloidal field which slows down the development of instabilities (Ouyed et al. 2003; Nakamura \& Meier 2004). In view of these works and since the growth rate of the instability is important for this study, we consider two alternatives for the non-linear stage of the instability. In the first case, the instability proceeds at the Alfv\'en crossing time across the outflow (as suggested by linear stability analysis) and rearranges the magnetic field configuration to a more chaotic one. In this case the instability time scale is given by the expression (in the central engine frame) \begin{equation} t_{\rm{k}}=\frac{r\theta \gamma}{v_{\rm{A,\phi}}}. \label{kink1} \end{equation} We will refer to this as the ``fast kink'' case. For the second case, we reduce the dissipation rate by a suitable (but arbitrary) function of the poloidal-to-toroidal field ratio. \begin{equation} t_{\rm{k}}=\frac{r\theta \gamma}{v_{\rm{A,\phi}}}e^{B^{\rm{co}}_p/B^{\rm{co}}_\phi}. \end{equation} We will refer to this as the ``slow kink'' case. This recipe is meant only as a means to explore the possible effect that the poloidal field component could have on the net acceleration of the flow, if it affects the dissipation rate, and is not meant to be quantitative. 
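As an illustration of the two prescriptions, the sketch below evaluates the fast-kink growth time and the ad hoc slow-kink suppression (a Python sketch; the parameter values are ours and purely illustrative, not taken from the paper):

```python
import math

def t_kink(r, theta, gamma, v_a_phi, bp_over_bphi=0.0, slow=False):
    """Kink growth time in the central engine frame.

    Implements the fast-kink expression r * theta * gamma / v_A,phi;
    with slow=True the ad hoc exp(B_p/B_phi) suppression factor is applied.
    """
    t = r * theta * gamma / v_a_phi
    if slow:
        t *= math.exp(bp_over_bphi)
    return t

# Illustrative numbers: r = 1e16 cm, 10 deg opening angle, gamma = 10,
# Alfven speed half the speed of light, comoving B_p/B_phi = 0.5.
C = 2.998e10  # speed of light [cm/s]
t_fast = t_kink(1.0e16, math.radians(10.0), 10.0, 0.5 * C)
t_slow = t_kink(1.0e16, math.radians(10.0), 10.0, 0.5 * C, 0.5, slow=True)
print(t_slow / t_fast)   # exp(0.5) ~ 1.65: the poloidal field delays the kink
```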
Numerical simulations of the instability would be needed to determine which of these prescriptions (if any) for the growth time scale comes close to describing its non-linear development (see also Section 5). In the last expressions, $B^{\rm{co}}_\phi$, $B^{\rm{co}}_p$, are the toroidal and the poloidal components of the magnetic field as measured by an observer comoving with the flow, $\theta$ is the jet opening angle, $\gamma$ is the bulk Lorentz factor of the flow and $v_{\rm{A,\phi}}$ is the $\phi$ component of the Alfv\'en speed given by \begin{equation} v_{\rm{A,}\phi}=c\frac{u_{\rm{A},\phi}}{\sqrt{1+u_{\rm{A,}\phi}^2+u_{\rm{A,}p}^2}}, \qquad u_{\rm{A,}\phi}=\frac{B_\phi^{\rm{co}}}{(4\pi w)^{1/2}}. \label{alfvenspeed} \end{equation} Here, $w$ is the internal enthalpy, to be defined below. \section{The model} A magnetically launched outflow passes through three characteristic points, where the speed of the flow equals the speed of the slow mode, the poloidal Alfv\'en wave and the fast mode; these are called the slow magnetosonic, the Alfv\'en and the fast magnetosonic points, respectively. For flows where the energy density of the magnetic field dominates that of matter, the Alfv\'en point lies very close to the light-cylinder \begin{equation} R_L=c/\Omega, \end{equation} where $\Omega$ is the angular velocity of the foot point (e.g. Camenzind 1986). At the Alfv\'en radius most of the centrifugal acceleration has already taken place and the magnetic field cannot force corotation of matter. At this location, the toroidal and the poloidal components of the magnetic field are comparable in magnitude. Further out, the flow passes through the fast magnetosonic point at a distance of a few $R_L$ (Sakurai 1985; Li et al. 1992; Beskin et al. 1998).
At the location of the fast magnetosonic point the four-velocity of the flow equals $\sim \mu^{1/3}$, where $\mu$ is the Michel magnetization parameter (i.e., the energy flux per unit rest mass; Michel 1969). For Poynting-flux dominated flows (i.e., $\mu\gg 1$), most of the energy is still in magnetic form at this point, since the ratio of magnetic to matter energy flux is $\sim \mu^{2/3}$. There is thus a choice between flows with high Lorentz factors (but inefficient conversion of Poynting flux to kinetic energy), or efficient conversion at the price of low terminal Lorentz factors. Better conversion within ideal MHD appears to be hard to achieve except by decollimation of the flow (Li et al. 1992; Begelman \& Li 1994; Bogovalov 2001; Daigne \& Drenkhahn 2002; but see claims to the contrary by Vlahakis \& K\"onigl 2003a,b; Fendt \& Ouyed 2004). Even with such decollimation, the additional acceleration is rather modest (Begelman \& Li 1994; Daigne \& Drenkhahn 2002). We set the initial conditions of our calculation at the fast magnetosonic point $r_0$. To make the problem tractable we make a number of simplifying assumptions. First, we limit ourselves to a {\it radial}, {\it steady} flow. Evidently, this approach does not allow us to explore the important issue of jet collimation (see, however, Section 5.1). Furthermore, the flow is assumed {\it one-dimensional} by ignoring the structure of the jet in the $\theta$ direction. Also, we ignore the azimuthal component of the velocity. This component is not dynamically important beyond the fast magnetosonic point (e.g. Goldreich \& Julian 1970) and can be neglected in the dynamical equations. On the other hand, the poloidal component (taken to be radial for simplicity) still has to be taken into account when modeling the effect of the kink instability, since it influences its growth timescale [see Eqs.~(1), (2)].
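The Michel scalings just quoted are simple powers of $\mu$ and are easy to evaluate (a quick numerical illustration; the value of $\mu$ is arbitrary):

```python
# At the fast point the four-velocity is mu**(1/3) and the Poynting-to-kinetic
# flux ratio is mu**(2/3); the illustrative value mu = 1000 is ours.
mu = 1.0e3                       # energy flux per unit rest mass

u_fast = mu ** (1.0 / 3.0)       # four-velocity at the fast point
sigma_fast = mu ** (2.0 / 3.0)   # Poynting-to-kinetic ratio there

print(u_fast, sigma_fast)        # ~10 and ~100: fast, yet Poynting dominated
# Consistency: the total energy per unit rest mass gamma*(1+sigma) ~ mu,
# and a fraction sigma/(1+sigma) ~ 99% of it is still magnetic.
```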
These simplifying assumptions minimize the number of the free parameters of the model, allowing us to study the effect of each on the jet dynamics, as will become clear in the next sections. \subsection{Dynamical equations} To determine the characteristics of the flow as a function of radius, one needs the conservation equations for mass, energy and angular momentum. These equations can be brought in the form (if, for the moment, we neglect radiative losses; e.g. Lyutikov 2001; Drenkhahn 2002) \begin{equation} \partial_rr^2\rho u=0, \label{massc} \end{equation} \begin{equation} \partial_rr^2\Big(w\gamma u+\frac{\beta B_\phi^2}{4\pi}\Big)=0, \label{energyc} \end{equation} \begin{equation} \partial_r r^2\Big(wu^2+p+\frac{(1+\beta^2)B_\phi^2}{8\pi}\Big)=2rp, \label{momentumc} \end{equation} where $w=\rho c^2+e+p$ is the proper enthalpy density, $e$ and $p$ are the internal energy and pressure respectively and $u=\gamma \beta$ is the radial four-velocity. We still need to assume an equation of state that will provide a relation between the pressure $p$ and the internal energy. Assuming an ideal gas, we take $p=(\gamma_a-1)e$, where $\gamma_a$ is the adiabatic index. Mass conservation (\ref{massc}) can be integrated to yield mass flux per sterad \begin{equation} \dot{M}=r^2u\rho c, \end{equation} while energy conservation gives the total luminosity per sterad \begin{equation} L=wr^2\gamma u c+\frac{\beta c(rB_\phi)^2}{4\pi}. \label{lum} \end{equation} The first term of the last expression corresponds to the kinetic energy flux and the second to the Poynting flux. A key quantity is the ``magnetic content'' of the flow which we will refer to as the magnetization parameter $\sigma$, defined as the ratio of the radial Poynting to matter energy flux \begin{equation} \sigma=\frac{L_{\rm pf}}{L_{\rm{kin}}}=\frac{B_\phi^2}{4\pi \gamma^2w}. 
\label{sigma} \end{equation} For a flow to reach large asymptotic Lorentz factors (observations indicate $\gamma \sim 10-20$ for quasars, and theoretical arguments arising from the ``compactness problem'' constrain $\gamma \simmore 100$ for GRBs; e.g. Piran 1999), it must start with a high energy to mass ratio. Within the fireball model for GRBs (Paczy\'nski 1986; Goodman 1986) this means that $e\gg \rho c^2$. In this work, we focus on the opposite limit where most of the energy is initially stored in the magnetic field ($\sigma_0\gg 1$), while we treat the flow as cold (i.e., $e\simless \rho c^2$). Obviously, there can exist an intermediate regime of ``magnetized fireball'' models where both $e/\rho c^2\gg 1$ and $\sigma_0\gg 1$ at the base of the flow. Finally, the strength of the radial component is given by flux conservation as \begin{equation} B_r=B_{r,0}\Big(\frac{r_0}{r}\Big)^2. \label{br} \end{equation} For a flow that is moving radially with a bulk Lorentz factor $\gamma$, the expressions that relate the comoving components of the magnetic field to those measured in the central engine frame are \begin{equation} B^{co}_r=B_{r} \label{brlabco} \end{equation} and \begin{equation} B^{co}_\phi=\frac{B_\phi}{\gamma}. \label{bphilabco} \end{equation} \subsection{Modeling the kink instability} The set of equations presented in the previous section is not complete. One more equation is needed to determine the problem at hand, namely the induction equation. For ideal MHD, the induction equation yields $\partial_r \beta rB_\phi=0$ and can be integrated to give the scaling $B_\phi \propto 1/r$ for relativistic flows. One can immediately see that the Poynting-flux term in equation (\ref{lum}) is then approximately constant and no further acceleration of the flow is possible within ideal MHD for a radial flow. This is a result of the fact that the magnetic pressure and tension terms of the Lorentz force almost cancel each other (Begelman \& Li 1994).
We argue, however, that when the toroidal component of the magnetic field becomes dynamically dominant the kink instability sets in. The instability draws its energy from $B_\phi^2$ on the instability growth time scale. This effect can be crudely modeled by the addition of a sink term on the right hand side of the induction equation, following Drenkhahn (2002), Drenkhahn \& Spruit (2002) \begin{equation} \partial_r \beta rB_\phi=-\frac{rB_\phi}{ct_{\rm{k}}}. \label{induction} \end{equation} The kink instability time scale is given by expression (2) or (1), depending on whether the poloidal component of the magnetic field is assumed to have a stabilizing effect. When the instability sets in, $B_\phi$ drops faster than $1/r$ and acceleration of the flow is possible at the expense of its magnetic energy. \subsection{Radiative losses} The dynamical equations (\ref{energyc}) and (\ref{momentumc}) are derived under the assumption that no energy or momentum escapes from the outflow. This is accurate when the instability releases energy in the optically thick region of the flow. On the other hand, in the optically thin regime energy and momentum may be transferred into radiation that escapes and does not interact with matter. Let $\Lambda$ be the emissivity of the medium in the comoving frame, that is, the energy that is radiated away per unit time and per unit volume. If the emission is isotropic in the comoving frame, the energy and momentum Eqs.~(\ref{energyc}), (\ref{momentumc}) including the radiative loss terms are (K\"onigl \& Granot 2002) \begin{equation} \label{energyc2} \partial_r r^2 \left( w \gamma u + \frac{\beta B_\phi^2}{4\pi} \right) = - r^2 \gamma\frac{\Lambda}{c} \ , \end{equation} \begin{equation} \label{momentumc2} \partial_r r^2 \left( w u^2 + p + \left(1+\beta^2\right) \frac{B_\phi^2}{8\pi} \right) = 2rp - r^2 u\frac{\Lambda}{c}. \end{equation} The importance of the cooling term depends on the cooling time scale.
If it is short compared to the expansion time scale, the matter stays cold during the dissipation process. In this limit, all the dissipated energy is locally radiated away. The dissipative processes that appear in the non-linear stage of the instability are poorly understood. It could be the case that the released energy leads to fast moving particles (i.e. electrons and ions) and/or to Alfv\'en turbulence (Thompson 1994). Synchrotron emission is a plausible fast cooling process for the electrons. It is particularly effective in our model, because the magnetic field strengths are high in a Poynting flux dominated outflow. Ions, however, are much less efficient radiators because of their larger masses. The form of the cooling term we assume here is \begin{equation} \label{eq:Lambda} \Lambda = \kappa \frac{ecu}{r} \end{equation} where $\kappa$ is an adjustable cooling length parameter. The cooling length is the distance by which the matter travels outward while the internal energy $e$ is lost. When $\kappa\gg 1$ the cooling length is very short, only a small fraction of the expansion length scale $r$, and thus qualifies for the description of a fast cooling flow. This, in more physical terms, corresponds to the case where most of the energy is dissipated to fast moving (and therefore fast cooling) electrons. On the other hand, setting $\kappa\ll 1$, the cooling length is much longer than the expansion length and most of the energy stays in the flow, leading to more efficient adiabatic expansion. This is the case when the dissipated energy is mostly shared among the ions. \subsection{Initial conditions, model parameters} The characteristics of the flow are determined when a number of quantities are specified at the fast magnetosonic point $r_0$, which is taken to be $\sim$ a few times the light cylinder radius (Sakurai 1985; Begelman et al. 1994; Beskin et al. 1998), or, expressed in terms of the gravitational radius $r_g=GM/c^2$ of the central engine, $r_0\sim 100r_g$.
These quantities are the initial magnetization $\sigma_0$, the luminosity $L$, the opening angle $\theta$, the ratio $B_{r,0}/B_{\phi,0}$ and the cooling length scale $\kappa$. The quantities one has to solve for so as to determine the characteristics of the flow are $\rho$, $e$, $u$ and $B_\phi$ as functions of radius $r$. This is done by integrating numerically the mass, energy and momentum conservation equations and the modified induction equation. The parameters of the model determine the initial values of $\rho$, $e$, $u$ and $B_\phi$ at $r_0$. The initial four-velocity for our calculations is assumed to be \begin{equation} u_0=\mu^{1/3}=\sqrt{\sigma_0}, \end{equation} in accordance with previous studies (Michel 1969; Goldreich \& Julian 1970; Camenzind 1986; Beskin et al. 1998) which show that at the fast point the ratio of Poynting to kinetic flux is $\mu^{2/3}$. The flow is assumed to be cold at $r_0$, i.e. $e=0$, and using the previous expression with Eqs.~(\ref{lum}), (\ref{sigma}) one finds for $\rho_0$ and $B_{\phi, 0}$ \begin{equation} \rho_0=\frac{L}{r_0^2c^3\sqrt{\sigma_0(\sigma_0+1)^3}} \end{equation} and \begin{equation} B_{\phi,0}=\frac{1}{r_0}\Big(\frac{\sigma_0}{\sigma_0+1}\Big)^{1/4}\sqrt{\frac{4\pi L}{c}}. \end{equation} The role of the different model parameters becomes clear in the next section, where the model is applied to the case of AGN jets and GRBs. Out of the free parameters of the model, $\sigma_0$ and $\theta$ are of special importance. The magnetization $\sigma_0$ determines the ``magnetic dominance'' of the flow, i.e., the speed of the flow at the fast magnetosonic point and at a large distance from the central engine. On the other hand, the opening angle $\theta$ is directly related to the growth rate of the instability [see Eqs. (1), (2)]. The instability has enough time to grow if it is faster than the expansion time $r/c$.
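The initial state is thus fixed by $(L,\sigma_0,r_0)$ alone. A short numerical sketch (Python, Gaussian units; the helper name is ours, and the $B_{\phi,0}$ normalization is obtained by requiring $L_{\rm pf}/L_{\rm kin}=\sigma_0$ at $r_0$) can be used to verify that the cold-flow expressions indeed reproduce the assumed flux ratio:

```python
import math

C = 2.998e10  # speed of light [cm/s]

def fast_point_state(L, sigma0, r0):
    """Cold-flow state at the fast point from (L, sigma0, r0).

    L is the luminosity per steradian; B_phi,0 is normalized so that
    L_pf / L_kin = sigma0 at r0 (Gaussian units).
    """
    u0 = math.sqrt(sigma0)
    rho0 = L / (r0**2 * C**3 * math.sqrt(sigma0 * (sigma0 + 1.0)**3))
    b_phi0 = (sigma0 / (sigma0 + 1.0))**0.25 \
        * math.sqrt(4.0 * math.pi * L / C) / r0
    return u0, rho0, b_phi0

# Consistency check with illustrative numbers: reconstruct both flux terms.
L, sigma0, r0 = 1.0e45, 3.0, 1.0e14
u0, rho0, b0 = fast_point_state(L, sigma0, r0)
gamma0 = math.sqrt(1.0 + sigma0)
beta0 = u0 / gamma0
L_kin = rho0 * C**2 * gamma0 * u0 * r0**2 * C   # w = rho c^2 for a cold flow
L_pf = beta0 * C * (r0 * b0)**2 / (4.0 * math.pi)
print(L_pf / L_kin)          # ~3.0 = sigma0
print((L_kin + L_pf) / L)    # ~1.0: the two terms add up to L
```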
The ratio of the two time scales at the base of the flow is (using prescription (1) for the time scale of the kink instability) \begin{equation} t_k/t_{exp}=\frac{\theta \gamma_0c}{v_{A,\phi}}\simeq \frac{\theta \sqrt{\sigma_0}c}{v_{A,\phi}}. \label{ratio} \end{equation} If $t_k/t_{exp}\gg 1$, the kink instability does not have enough time to grow and the evolution is close to that predicted by ideal MHD (where not much acceleration takes place). On the other hand, if $t_k/t_{exp}\ll 1$, the instability grows for many e-foldings and turns almost all the magnetic energy in the flow into radiation and kinetic flux. Keeping the opening angle fixed, this happens much more efficiently in lower $\sigma_0$ flows (provided that $\sigma_0\simmore 1$, so that the Alfv\'en speed is a significant fraction of the speed of light). We return to this point in the next sections. \section{Applications} Although at first sight different, jets in AGNs (and microquasars) and GRBs probably have central engines of similar characteristics. AGN jets are launched in the inner regions of magnetized accretion disks (Blandford \& Payne 1982), or draw their power from magnetic fields that are threading the ergosphere of a rotating black hole (Blandford \& Znajek 1977). In the case of GRBs, the same central engine may be at work, or the energy is tapped from a millisecond magnetar (Usov 1992; Klu\'zniak \& Ruderman 1998; Spruit 1999). In all of these situations, strong magnetic fields play an important role and most of the energy is released in the form of Poynting flux. \footnote{ An exception is the possibility of creation of a fireball by neutrino-antineutrino annihilation at the poles of a hyperaccreting compact object (Jaroszy\'nski 1993; Mochkovitch et al. 1993), an idea applied to long bursts within the collapsar scenario (Woosley 1993; MacFadyen \& Woosley 1999) and short bursts within the binary merger scenario (Blinnikov et al. 1984; Eichler et al. 1989; Janka et al. 1999; Aloy et al.
2005)} All the above scenarios may give rise to magnetized outflows, whose evolution depends, to a large extent, on the dominance of the magnetic energy, i.e. on the ratio of the Poynting flux to the matter energy flux at the base of the flow. By varying this ratio, one can apply the model to jets in both the cases of GRBs and AGNs. \subsection{AGN jets} Relativistic jets are commonly observed in AGNs to have bulk Lorentz factors in the range $\gamma\sim 10-20$. Such terminal Lorentz factors can be achieved for a ratio $\sigma_0$ of Poynting to matter energy flux of the order of several at the fast magnetosonic point $r_0$. The location of the fast point is most likely at a few light cylinder radii (e.g. Sakurai 1985; Camenzind 1986) and is taken to be $100r_g$. Indeed, $\sigma_0$ is a very important parameter of the flow. Its effect on the acceleration of the flow is clearly seen in Fig. 1, where the bulk Lorentz factor is plotted as a function of radius $r$ for different $\sigma_0$. The rest of the parameters have the values $\theta=10^o$, $B_{r,0}/B_{\phi,0}=0.5$, while the energy released by the instability is assumed to be locally radiated away (this is done by taking the ``cooling length'' parameter $\kappa\gg 1$). The results do not depend on the luminosity $L$ of the flow in the case of AGN jets, while $r_0$ sets the scale of the problem (since it is the only length scale), which means that the results can be trivially rescaled in the case of a different choice of $r_0$. The solid lines in Fig. 1 correspond to the case where Eq.~(1) is used for the timescale of the kink instability (i.e., the fast kink case) and the dashed lines to the case where the instability is slowed down by the poloidal component of the magnetic field and the time scale is given by Eq. (2) (i.e., the slow kink case).
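The parameter choices above can be motivated by the time-scale ratio of Eq.~(\ref{ratio}). Taking $v_{A,\phi}\simeq c$ for a Poynting-dominated flow (a simplification made only for this estimate; the $\sigma_0$ values and opening angle are those quoted in the text):

```python
import math

def growth_ratio(theta, sigma0):
    """Order-of-magnitude estimate of t_k / t_exp ~ theta * sqrt(sigma0),
    using gamma_0 ~ sqrt(sigma0) and v_A,phi ~ c (Poynting-dominated limit)."""
    return theta * math.sqrt(sigma0)

theta = math.radians(10.0)               # opening angle used in Fig. 1
for sigma0 in (5.0, 8.0, 10.0, 25.0):    # the sigma_0 values of Fig. 1
    r = growth_ratio(theta, sigma0)
    print(f"sigma0 = {sigma0:4.1f}: t_k/t_exp ~ {r:.2f}")
# All ratios come out below unity, so the kink has time to grow in each case.
```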
From Fig.~\ref{fig1}, one can see that the instability acts quickly and accelerates the flow within 1-2 orders of magnitude in distance from the location of the fast magnetosonic point. The acceleration is faster in the ``fast kink'' case and much more gradual in the ``slow kink'' one. This is due to the fact that close to the base of the flow the ratio $B_{r}^{co}/B_{\phi}^{co}=\gamma B_{r}/B_{\phi}\sim 1$ and the instability is slowed down [see Eq. (2)]. Further out, however, the toroidal component of the field also dominates in the frame comoving with the flow and the instability proceeds faster. At larger distances, practically all the magnetic energy has been dissipated and the terminal Lorentz factors are very similar in the slow and fast kink cases. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=270]{4107fig1.ps}} \caption[] {The bulk Lorentz factor of the flow as a function of radius for different values of $\sigma_0$. The black, red, blue and green curves correspond to $\sigma_0=5$, 8, 10 and 25 respectively. The solid curves correspond to the case where Eq.~(1) is used for the instability growth time scale (fast kink) and the dashed to the one where Eq.~(2) is used (slow kink case). \label{fig1} } \end{figure} The acceleration of the flow and the terminal Lorentz factor depend also on what fraction of the instability-released energy is radiated away. If the dissipative processes that appear in the non-linear regime of the evolution of the instability lead to fast moving electrons, then it is easy to check that they will radiate away most of this energy through synchrotron (and/or inverse Compton) radiation on a time scale much shorter than the expansion timescale. If, on the other hand, most of the energy is dissipated to the ions, then most of it stays in the system as internal energy and accelerates the flow further. To keep this study fairly general, we have calculated the bulk Lorentz factor of the flow in the two extreme cases.
In the ``fast cooling'' case, all the released energy is radiated away very efficiently, while in the ``slow cooling'' case, the energy is assumed to stay in the flow (practically this means that we set the cooling length parameter $\kappa\ll 1$). In Fig.~\ref{fig2}, the bulk Lorentz factor of the flow is plotted for $\sigma_0=8$. The red curves correspond to the ``fast cooling'' case and the black to the ``slow cooling'' one. The asymptotic bulk Lorentz factor differs substantially in these two cases, showing that a large fraction of the energy of the flow can in principle be radiated away due to the instability-related dissipative processes. Furthermore, the acceleration of the flow depends on the jet opening angle and is faster for narrower jets (see green curves in Fig.~\ref{fig2}). This is expected, since for a narrower opening angle, the Alfv\'en crossing time across the jet is shorter and so is the instability growth timescale. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=270]{4107fig2.ps}} \caption[] {The dependence of the bulk Lorentz factor on the cooling efficiency of the flow and the jet opening angle. The solid curves correspond to the fast kink case and the dashed to the slow kink one. The black, red and green curves correspond to fast cooling, slow cooling and a jet opening angle of $6^o$ respectively. \label{fig2} } \end{figure} Another quantity of special interest is the Poynting to matter energy flux ratio $\sigma$ as a function of radius. While the flow is initially moderately Poynting flux dominated, $\sigma$ drops rapidly as a function of distance and the flow is matter-dominated at distances $r\simmore 10^3r_g$, independently of the prescription for the instability or the cooling timescales. Far enough from the fast magnetosonic point, practically all the magnetic energy has been transferred to the matter, and the bulk Lorentz factor saturates.
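The steepened decay of $B_\phi$ behind this saturation can be illustrated with a toy integration of the modified induction equation, Eq.~(\ref{induction}), at fixed $\gamma$ and Alfv\'en speed (crude simplifications made only to expose the scaling; all numbers are illustrative):

```python
import math

C = 2.998e10                      # speed of light [cm/s]
theta, gamma = math.radians(10.0), 3.0
v_a = 0.5 * C                     # fixed Alfven speed (illustration only)

r0, b0 = 1.0e14, 1.0e4            # starting radius [cm] and field [G]
r, f = r0, r0 * b0                # f = beta * r * B_phi, with beta ~ 1

while r < 10.0 * r0:              # Euler-integrate d f / dr = -f / (c t_k)
    dr = r / 1000.0               # logarithmic step
    t_k = r * theta * gamma / v_a # fast-kink time scale
    f -= dr * f / (C * t_k)
    r += dr

b_ideal = b0 * r0 / r             # ideal-MHD 1/r scaling
b_kink = f / r                    # with the sink term: steeper than 1/r
print(b_kink < b_ideal)           # True: dissipation steepens the decay
```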
The thick lines in Fig.~\ref{fig3} show the ratio of the radial to the toroidal components of the magnetic field. This ratio drops rapidly as a function of distance, showing that $B_\phi\gg B_r$. So, despite the fact that the instability grows quickly from the toroidal component of the magnetic field, this component still dominates over the radial one.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=270]{4107fig3.ps}}
\caption[] {The dependence of the magnetization parameter on the radius for the different prescriptions of radiative cooling and the instability timescale. Notice that the flow becomes matter-dominated at distances greater than $\sim 10r_0$. The thick lines show the ratio of the radial to the toroidal components of the magnetic field in the flow. The dominant component is clearly the toroidal one. \label{fig3} }
\end{figure}
Having solved for the dynamics of the flow predicted by our modeling of the kink instability, we turn to the implications of these findings for observations of AGN jets. One highly debated issue is whether the AGN jets are Poynting-flux dominated on pc and kpc scales or not. The case-dependent arguments are reviewed in Sikora et al. (2005), where it is shown that there is no strong observational reason to assume Poynting-flux dominated jets on scales larger than a few pc and that the observed emission on these scales can be understood as energy dissipated in shocks internally in the flow (Sikora et al. 1994; Spada et al. 2001) or due to interaction of the flow with the external medium. Our model predicts that most of the energy is in the form of kinetic flux at distances say $\simmore 10^3-10^4 r_g\simeq 10^{17}-10^{18}m_9$ cm, where $m_9$ is the black-hole mass in units of $10^9$ solar masses. So, on pc scales the magnetic fields are dynamically insignificant, in agreement with observations. Further information on the dynamics of AGN jets comes from the shortest variability timescale in the optical and gamma-ray bands in blazars.
This timescale can be as short as a few days, indicating that most of the non-thermal radiation comes from a compact region of size $R\simless 10^{17}$ cm (the so-called blazar zone). On the other hand, polarimetry measurements of the variable optical, infrared and mm radiation are consistent with a toroidal magnetic field geometry on sub-pc scales (e.g. Impey et al. 1991; Gabuzda \& Sitko 1994; Nartallo et al. 1998). Since most of the magnetic energy is dissipated on these scales, it is quite probable that the observed radiation is the result of the instability-released energy, provided that the dissipative processes lead to wide enough particle distributions (see also Sikora et al. 2005). However, one cannot exclude the possibility that, within this model, the ``blazar zone'' emission is a result of internal shocks. On scales of $10^{17}$ cm, the magnetization parameter of the flow is of the order of unity and it is interesting to study the outcome of internal shocks of moderately magnetized plasma. The rich blazar phenomenology may indicate that both these mechanisms (i.e. magnetic dissipation and internal shocks) are at work. Further constraints on where the acceleration of the flow takes place come from the lack of bulk-flow Comptonization features in the soft X-rays. This indicates that $\gamma\simless 10$ at $\sim 10^3 r_g$ (Begelman \& Sikora 1987; Moderski et al. 2003). This shows that the acceleration process is still going on at these distances. In view of our results, this could in principle rule out the ``fast kink'' case since the acceleration appears to be too fast and $\gamma\sim 10$ already at $\sim 300 r_g$ or so. At this point, however, the uncertainties in the model are too high to make a strong statement on this issue. If, for example, the fast point is located at a factor of, say, $\sim 3$ larger distance, our results are compatible with the lack of soft X-ray features. 
Numerical simulations of the instability are needed so that these issues can be settled (see also discussion in Sect. 5).
\subsection{Gamma-ray bursts}
The analysis we follow so as to apply the model to GRBs is very similar to that described in the previous sections. The only new ingredient that has to be added is related to the very high luminosities that characterize the GRB jets. As a result, the inner part of the flow is optically thick to electron scattering and matter and radiation are closely coupled. At the photospheric radius the optical depth drops to unity and further out the flow is optically thin. So, the high luminosity introduces a new length scale to the problem that has to be treated in a special way described in the next section.
\subsubsection{Below and above the photosphere}
At the photosphere, the equation of state changes from one dominated by radiation to one dominated by the gas pressure. To connect the two, the radiation emitted at the photosphere has to be taken into account. The amount of energy involved can be substantial, and appears as an (approximate) black body component in the GRB spectrum. It depends on the temperature of the photosphere, which satisfies $kT\ll m_e c^2$ for all parameter values used, so that pairs can be neglected. The photosphere is then simply defined as (e.g. Abramowicz et al. 1991)
\begin{equation}
\int_{r_{\rm{ph}}}^\infty (1-\beta)\gamma \kappa_{es}\rho \rm{d}r\equiv 1,
\label{rph}
\end{equation}
where $\kappa_{es}$ is the electron scattering opacity. The dynamics of the flow depend on the location of the photosphere since above the photosphere all the dissipated energy can in principle be radiated away while this is not possible at large optical depths. To solve Eq.~(\ref{rph}) for $r_{\rm{ph}}$, we have followed an iterative method.
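Locating the photospheric radius from Eq.~(\ref{rph}) amounts to a one-dimensional root search for the radius at which the remaining optical depth equals unity. The fragment below is a minimal numerical sketch of such a search, not the actual implementation: the constant bulk Lorentz factor, the $\rho\propto 1/r^2$ density profile and all normalisations are illustrative stand-ins, whereas in the model the profiles follow from integrating the dynamical equations of the flow.

```python
# Sketch of the photosphere search: find r_ph with
#   tau(r_ph) = int_{r_ph}^infty (1 - beta) * gamma * kappa_es * rho dr = 1.
# Hypothetical flow profiles are used in place of the actual MHD solution.
import math

KAPPA_ES = 0.4          # electron-scattering opacity [cm^2 g^-1]
GAMMA = 100.0           # illustrative: constant bulk Lorentz factor
BETA = math.sqrt(1.0 - 1.0 / GAMMA**2)
RHO_NORM = 5.0e13       # illustrative normalisation rho0 * r0^2 [g cm^-1]
R_MAX = 1.0e16          # outer integration radius [cm]

def rho(r):
    """Illustrative comoving density of a conical wind, rho ~ 1/r^2."""
    return RHO_NORM / r**2

def tau(r_ph, n_steps=4000):
    """Optical depth from r_ph outward (trapezoidal rule on a log grid)."""
    lo, hi = math.log(r_ph), math.log(R_MAX)
    dx = (hi - lo) / n_steps
    # integrand in x = ln r carries a Jacobian factor r
    f = lambda x: (1.0 - BETA) * GAMMA * KAPPA_ES * rho(math.exp(x)) * math.exp(x)
    return sum(0.5 * (f(lo + i * dx) + f(lo + (i + 1) * dx)) * dx
               for i in range(n_steps))

def find_photosphere(tol=0.01):
    """Iterate on r_ph until |tau - 1| < tol, as in the text."""
    lo, hi = 1.0e8, R_MAX           # bracket: tau(lo) > 1 > tau(hi)
    while True:
        mid = math.sqrt(lo * hi)    # geometric mean for a bracket spanning decades
        t = tau(mid)
        if abs(t - 1.0) < tol:
            return mid
        lo, hi = (mid, hi) if t > 1.0 else (lo, mid)

r_ph = find_photosphere()           # ~1e11 cm for these illustrative numbers
```

The geometric-mean bisection is used because the bracket spans eight decades in radius; it mirrors the guess-and-correct iteration with a $\sim 0.01$ threshold described in the text.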
First, we guess a value for $r_{\rm{ph}}$ and integrate the dynamical equations assuming no radiative losses below the photosphere and fast cooling above it. Then we calculate the optical depth $\tau$ from $r_{\rm{ph}}$ to $\infty$ and, if $\tau$ differs from unity by more than a threshold value ($\sim 0.01$ in these calculations), a new guess for $r_{\rm{ph}}$ is made. The procedure continues until the definition (\ref{rph}) is satisfied. At the photosphere one has to subtract the energy and momentum carried away by the decoupled radiation. To calculate these quantities one needs the temperature at the photosphere. The dimensionless temperature $\theta=kT/(m_e c^2)$ in the optically thick region is given by the solution of
\begin{equation}
\label{eq:theta}
e = 3 \frac{m_\mathrm{e}}{m_\mathrm{p}} \rho c^2 \theta + \frac{8\pi^5}{15} \frac{m_\mathrm{e}c^2}{\lambda_e^3} \theta^4
\end{equation}
where $\lambda_e$ is the electron Compton wave length. The two terms in Eq.~(\ref{eq:theta}) correspond to the matter and the radiation energy density, respectively. From this solution, we have also found that the internal energy is always dominated by radiation (for parameters relevant for GRBs), so we take $\gamma_a=4/3$ below the photosphere. At the photosphere we calculate the temperature $\theta_\mathrm{ph}$ and subtract the radiation energy density of a black body
\begin{equation}
\label{eq:ebb}
e_\mathrm{bb} = \frac{8\pi^5}{15} \frac{m_\mathrm{e}c^2}{\lambda_e^3} \theta_\mathrm{ph}^4
\end{equation}
from the total energy density: $e \equiv e - e_\mathrm{bb}$. The integration proceeds with an adiabatic index of $\gamma_a = 5/3$. The temperature $\theta_\mathrm{ph}$ is the temperature of the emitted black-body radiation which has a luminosity per sterad of
\begin{equation}
\label{eq:Lbb}
L_\mathrm{ph} = r_\mathrm{ph}^2 \frac{4}{3} e_\mathrm{bb} u_\mathrm{ph} \gamma_\mathrm{ph} c \quad \mbox{for~} r\ge r_\mathrm{ph}.
\end{equation} The integration continues until large distances from the source (taken as $10^{16}$ cm, where the afterglow phase starts). There, the radiative luminosity is determined by \begin{equation} \label{eq:Lnt} L_\mathrm{rad} = L - L_\mathrm{pf} - L_\mathrm{mat} \ . \end{equation} This means that the radiative luminosity is the sum of the photospheric luminosity plus the component coming from the instability-released energy above the photosphere. The role of the photospheric component and its connection to the observed spectral peaks of the GRB prompt emission in internal shock and slow dissipation models (like this one) has been studied in a number of recent papers (Ryde 2005; Rees \& M\'esz\'aros 2005; Pe'er et al. 2005). \subsubsection{Results} Following the procedure described in the previous section, we have calculated the bulk Lorentz factor of the flow for different values of $\sigma_0$ at the base of the flow and for the two prescriptions for the timescale of the kink instability [Eqs.~(1), (2)]. The fast magnetosonic point is set to $R_0=10^8$ cm, the jet opening angle to $\theta=10^o$, the initial ratio $B_{r,0}/B_{\phi,0}=0.5$ and the luminosity of the flow $L=10^{51}\quad \rm{erg}/\rm{sec\cdot sterad}$. The results are given in Fig.~\ref{fig4}, where it is shown that the flow reaches terminal Lorentz factors $\gamma\simmore 100$ for $\sigma_0 \simmore 30$. The solid curves correspond to the case where the timescale for the kink instability is given by Eq.~(1) (fast kink case) and the dashed to the case where Eq.~(2) is used for the timescale of the growth of the instability (slow kink case). Notice that the initial acceleration of the flow differs in the two cases, being much faster in the fast kink case. 
This is expected since this case is characterized by rapid dissipation of magnetic energy and acceleration from the base of the flow, while in the slow kink case the non-negligible poloidal component of the magnetic field close to $r_0$ slows down the instability. The terminal Lorentz factors are, however, similar in the two cases. Notice also that there is a discontinuity in the slope of the curves $\gamma(r)$ at the location of the photosphere which is a result of our simplistic approach (for details see previous section).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=270]{4107fig4.ps}}
\caption[] {The bulk Lorentz factor of the flow for different $\sigma_0$ and for the fast and slow kink case. Notice that larger values for $\sigma_0$ result in faster outflows. \label{fig4} }
\end{figure}
A second key parameter of the model is the opening angle of the jet. For smaller opening angles, the instability timescale becomes shorter and the flow is accelerated faster and to higher terminal Lorentz factors as is shown in Fig.~\ref{fig5}. This implies that for smaller opening angles, more magnetic energy is dissipated and the flow is less strongly magnetized at large distances. This is clearly shown in Fig.~\ref{fig6}, where the Poynting to matter energy flux ratio is plotted as a function of radius $r$ (compare the thin curve with the thick black dashed curves). Notice that the $\sigma(r)$ curves are discontinuous at the location of the photosphere. This is caused by our simplified treatment at the location of the photosphere, where we subtract the energy density of the radiation field (see previous section) and reduce the internal enthalpy of the flow, increasing the ratio of Poynting to matter energy flux. More detailed radiative transfer models of the transition from optically thick to optically thin conditions predict a rather sharp transition, which indicates that our simple approach does not introduce large errors.
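The photospheric matching described in the previous section requires inverting the matter-plus-radiation equation of state for the dimensionless temperature $\theta$ and subtracting the black-body energy density $e_{\rm bb}$. A minimal numerical sketch of this step is given below; the photospheric values of $e$ and $\rho$ are purely illustrative placeholders rather than output of the actual flow solution.

```python
# Sketch of the photospheric step: invert
#   e = 3 (m_e/m_p) rho c^2 theta + (8 pi^5 / 15) (m_e c^2 / lambda_e^3) theta^4
# for theta = kT/(m_e c^2), then subtract the black-body energy density e_bb.
# The photospheric inputs e_ph and rho_ph below are illustrative placeholders.
import math

C = 3.0e10                       # speed of light [cm s^-1]
ME_C2 = 8.187e-7                 # electron rest energy [erg]
ME_OVER_MP = 1.0 / 1836.15
LAMBDA_E = 2.426e-10             # electron Compton wavelength [cm]
A_RAD = (8.0 * math.pi**5 / 15.0) * ME_C2 / LAMBDA_E**3   # ~9.4e24 erg cm^-3

def energy_density(theta, rho_val):
    """Matter plus radiation internal energy density at temperature theta."""
    return 3.0 * ME_OVER_MP * rho_val * C**2 * theta + A_RAD * theta**4

def solve_theta(e, rho_val, tol=1e-12):
    """Bisection for theta; energy_density is monotonically increasing."""
    lo, hi = 0.0, 1.0            # kT << m_e c^2, so theta << 1 is expected
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if energy_density(mid, rho_val) < e:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative photospheric conditions (hypothetical numbers):
e_ph, rho_ph = 1.0e16, 1.0e-8    # [erg cm^-3], [g cm^-3]
theta_ph = solve_theta(e_ph, rho_ph)
e_bb = A_RAD * theta_ph**4       # black-body energy density carried away
e_after = e_ph - e_bb            # internal energy retained by the flow
```

For these illustrative inputs the radiation term dominates the internal energy, consistent with adopting $\gamma_a=4/3$ below the photosphere, and almost all of $e$ is subtracted at the matching radius.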
In Figs.~5 and 6 we have also plotted (see blue curves) the bulk Lorentz factor and the magnetization $\sigma$ as functions of $r$ for the ``typical values'' of the parameters of the model proposed by Drenkhahn \& Spruit (2002; the ``AC'' flow). In the context of that model the magnetic field lines change direction on small scales and magnetic reconnection dissipates magnetic energy and accelerates the flow. Notice that the non-axisymmetric model predicts more gradual acceleration and rather higher terminal Lorentz factors (for the same initial magnetization of the flow) than the current model. Furthermore, it is characterized by efficient dissipation of the Poynting flux, resulting in negligible magnetization sufficiently far from the central engine (at least in the case where the non-decayable axisymmetric component is negligible).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=270]{4107fig5.ps}}
\caption[] {The bulk Lorentz factor of the flow for different jet opening angles $\theta$. For smaller opening angles, the terminal Lorentz factors of the flow become larger because of more efficient dissipation of the magnetic energy. The blue curve corresponds to the non-axisymmetric case studied by Drenkhahn \& Spruit (2002). \label{fig5} }
\end{figure}
One important point deduced from Fig.~\ref{fig6} is that, for $\sigma_0\simmore 100$, the flow remains Poynting-flux dominated even at large distances away from the source where deceleration of the outflow because of its interaction with the interstellar medium or the stellar wind is expected, which means that the instability is not fast enough to convert most of the magnetic energy into bulk motion of matter. Afterglow observations can in principle probe the magnetic content of the ejecta through early observations of the reverse shock emission (Fan et al. 2002; Zhang et al. 2003; Kumar \& Panaitescu 2003).
Modeling of the forward and reverse shock emission in cases where quick follow-ups were possible suggests the existence of frozen-in magnetic fields in the ejecta (Kumar \& Panaitescu 2003) that are dynamically important, with $\sigma \simmore 0.1$ (Zhang \& Kobayashi 2005). Rapid follow-ups in the X-rays, UV and optical are now possible thanks to the Swift satellite and ground based telescopes and can test our model, which predicts a magnetization parameter of the order of unity for the outflowing material in the afterglow region. The XRT instrument on board Swift has already provided several early X-ray afterglows (e.g. Tagliaferri et al. 2005; Campana et al. 2005; Burrows et al. 2005). Many of these observations indicate a slow fading component at times $\sim 10^2-10^4$ sec after the GRB trigger (Nousek et al. 2005; Zhang et al. 2005; Panaitescu et al. 2005) which may be expected from the deceleration of ejecta with $\sigma \simmore 1$ (Zhang \& Kobayashi 2005; Zhang et al. 2005) in agreement with our model predictions.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=270]{4107fig6.ps}}
\caption[] {The magnetization $\sigma(r)$ of the flow as a function of distance for different $\sigma_0$ and jet opening angles. Keeping $\theta=10^o$, the jet is still magnetically dominated at large distance from the source for $\sigma_0\simmore 100$. Smaller jet opening angles lead to lower values of $\sigma_\infty$. The blue curve corresponds to the non-axisymmetric case studied by Drenkhahn \& Spruit (2002). The discontinuity at the location of the photospheric radius is a result of the subtraction of the radiation energy density from the internal energy of the flow. \label{fig6}}
\end{figure}
The ratio $\sigma$ is even higher for the range of distances $r\sim 10^{13}-10^{15}$ cm where internal shocks are expected to happen in the internal shock scenario for GRBs (Rees \& M\'esz\'aros 1994; Piran 1999) and is expected to reduce their radiative efficiency.
However, allowing for non-ideal MHD effects in the shocked region, Fan et al. (2004) show that the radiative efficiency of $\sigma\sim 1$ plasma may not be much lower than in the $\sigma=0$ case. On the other hand, since the efficiency of internal shocks to convert kinetic energy into gamma rays is already low (typically of the order of a few percent; Panaitescu et al. 1999; Kumar 1999) and observations indicate much higher radiative efficiency (Panaitescu \& Kumar 2002; Lloyd \& Zhang 2004), we investigate the possibility that the energy released by the instability powers the prompt gamma-ray emission. In Fig.~\ref{fig7}, we plot the radiative efficiency, defined as the radiated luminosity $L_{\rm{rad}}$ divided by the flow luminosity $L$, for different values of $\sigma_0$ and $\theta$. Fixing the angle $\theta$ to $10^o$, one can see that the radiative efficiency peaks at $\sim 16$\% for $\sigma_0\sim 100$. For smaller values of $\sigma_0$, most of the magnetic energy is dissipated below the photosphere and is lost to adiabatic expansion, resulting in lower radiative efficiencies. For larger values of $\sigma_0$, the flow remains magnetically dominated at all radii, keeping the radiative efficiency lower. Furthermore, the ``slow kink'' case has rather higher efficiencies because dissipation happens at larger radii and therefore in optically thin environments. So, the model can have large radiative efficiencies for $\sigma_0\sim 10-500$. Notice that one also needs $\sigma_0\simmore 30$ to overcome the ``compactness problem'' (e.g. Piran 1999). Fixing $\sigma_0=100$, one can now calculate the radiative efficiency for different opening angles of the flow. Smaller opening angles result in more magnetic energy dissipated (by shortening the instability timescale) and therefore in smaller values of Poynting to matter flux at large distances. This also means that more energy is radiated away.
Although very model dependent, the opening angles of the GRB jets can be estimated from the achromatic breaks of the afterglow lightcurves (Rhoads 1997, Sari et al. 1999). For $\theta\sim 6^o$ (a value typically inferred), the efficiency is quite high and of the order of 20\%. In Fig.~7, the radiative efficiency of the non-axisymmetric model (Drenkhahn \& Spruit 2002) is also shown for different values of $\sigma_0$. The non-axisymmetric model can have a higher radiative efficiency which is close to $\sim50\%$ for $\sigma_0\simmore 100$ (see also Giannios \& Spruit 2005 for a more detailed study of the spectra expected from this model).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=270]{4107fig7.ps}}
\caption[] {The radiative efficiency of the flow, defined as the ratio of the radiated luminosity over the luminosity of the flow for different $\sigma_0$ and $\theta$. The black and red stars correspond to the fast kink and slow kink cases respectively. For opening angles of $\sim 6^o$ (in accordance with the values deduced by achromatic breaks of the afterglows) the efficiency reaches values of $\sim 20$\%. The circles correspond to the non-axisymmetric case studied by Drenkhahn \& Spruit (2002). \label{fig7} }
\end{figure}
\section{Discussion}
This work suggests that the kink instability plays a significant role in the dynamics of magnetized outflows. The instability sets in once the toroidal component of the magnetic field becomes dominant and draws its energy from $B_\phi$ on a short time scale. The energy dissipated by the instability accelerates the flow and turns it into a kinetic-flux-dominated flow for AGN jets at distances $\simmore 1000r_g$ and into a moderately magnetized flow for GRB jets in the afterglow region. If the dissipated magnetic energy is transferred to fast-moving electrons with a wide enough energy distribution, then it can power the blazar zone emission and the prompt GRB emission with high radiative efficiency.
These results have been compared with those predicted by other dissipative models (Coroniti 1990; Spruit et al. 2001; Lyubarsky \& Kirk 2001; Drenkhahn 2002; Drenkhahn \& Spruit 2002). According to these models, if the magnetic field lines change direction on small scales, magnetic energy can be dissipated through reconnection processes. Drenkhahn (2002) and Drenkhahn \& Spruit (2002) applied this idea to GRB outflows and showed that efficient acceleration and radiation (with efficiency as high as 50\%) are possible. In the context of this model, most of the magnetic energy is dissipated, resulting in kinetic flux dominated flows at large distances where the flow starts to be decelerated by the external medium. On the other hand, our model predicts moderately magnetized ejecta in this region. Since the initial phase of the afterglow emission depends on the magnetic content of the ejecta (e.g. Lyutikov 2005), these models make different predictions about this phase and can be tested against observations. This study assumes a radial flow and although this allowed us to minimize the number of free parameters and clarify the role of each of them, it nevertheless leaves a number of issues unsettled. Two important issues are those of jet collimation and of the non-linear evolution of the kink instability. We discuss these issues in the next subsections.
\subsection{Collimation}
The collimation of MHD outflows is usually believed to take place in the trans-Alfv\'enic region because of the ``hoop stress'' exerted by the toroidal component of the magnetic field. One issue that arises is whether the same mechanism is at work in the case where the kink instability sets in and reduces the strength of $B_\phi$. Our one dimensional approach cannot settle this question; 2-D calculations would be needed if the instability is parametrized as in the present models.
Time dependent, 3-D simulations will be needed if the effects of the instability are to be included realistically, since the relevant ones are nonaxisymmetric. Collimation of the flow can be achieved by its interaction with the environment. This may be the collapsing star in the context of gamma-ray bursts (Woosley 1993) or a large scale poloidal field in the case of AGN jets (Spruit et al. 1997). Another interesting possibility is that small scale toroidal fields (probably a result of the development of the instability) can lead to flow collimation under certain conditions (Li 2002).
\subsection{The non-linear evolution of the instability}
The linear evolution of the kink instability is rather well understood and has been studied by a number of authors by linearizing the MHD equations (e.g. Begelman 1998; Appl et al. 2000), showing that the instability grows on the Alfv\'en crossing time across the jet. The non-linear evolution of the instability is an issue that cannot be solved with analytical tools, and 3-dimensional RMHD simulations that cover many decades of radii are needed to settle it. Preliminary numerical investigations have been done (e.g. Lery et al. 2000; Ouyed et al. 2003; Nakamura \& Meier 2004) which indicate that the kink instability is an internal mode that does not disrupt the jet. On the other hand, whether it is able to rearrange the magnetic field configuration internally in the flow on the short timescale implied by linear stability analysis is still not clear. Some intuition on this issue can be gained from this study. We have tried two different prescriptions for the instability growth time scale, the second of which accounts for its possible slowing down because of a strong poloidal ``backbone'' in the core of the jet (Ouyed et al. 2003). A non-negligible poloidal component can slow down the initial growth of the instability; eventually it grows in a conical jet.
This occurs because, as the jet expands, $B_\phi$ and $B_p$ scale as $1/r$ and $1/r^2$ respectively so as to satisfy the induction equation. This means that the toroidal component dominates the poloidal one at some point and the instability sets in. A study that assumes a cylindrical jet, on the other hand, will not deal with the $B_\phi \gg B_p$ situation. Since the observed jets do expand laterally (despite their strong collimation) by many orders in radius from their launching region to their termination shock, we believe that it is important for numerical investigations of the role of the kink instability to allow for jet expansion in order to reveal the characteristics of the non-linear development of the instability.
\subsection{More realistic models}
The limitations of the calculations presented here are obvious from the parameterizations used. One may wonder to what extent these can be overcome in numerical simulations. Since the most relevant instabilities are nonaxisymmetric, such simulations have to be 3-dimensional. The computational expense of 3D MHD simulations puts strong limitations on the kind of calculations that can be done, and the realism of the conclusions that can be drawn from them. An astrophysical jet operates over many decades in length scale, with different physics dominating at different distances from the source. For reasons of computational feasibility, the 3D simulations that have been done so far use only a small range in distance, or a cylindrical geometry (e.g. Nakamura et al. 2001; Ouyed et al. 2003; Nakamura \& Meier 2004). In the first case, the range of distance is too narrow to follow the consequences of 3-D instabilities effectively. In the second case, the effect of instability is limited by the boundaries. It is well known that kink instability can saturate into a finite amplitude, helical equilibrium when confined in a cylinder (in the astrophysical context see e.g. K\"onigl and Choudhuri 1986; Lyubarskii 1999).
But a computational cylinder tailored to the size of the source covers a negligible range in length scales perpendicular to the axis, compared with an actual jet. If, instead, the simulations were done in a spherical or conical geometry, the continued expansion of the flow would stretch these helical configurations perpendicular to the axis, immediately making them unstable again. This is the rationale for our assumption that dissipation by instability will be a process that persists for a large distance along the jet. It may be possible to make numerical progress in, say, a conical geometry, but limitations due to the finite range in length scales and time scales that can be achieved will remain serious. For this reason, it is important to isolate physical effects that cannot (yet) be included realistically in simulations, and explore them in more approximate models like the ones we have presented here.
\section{Conclusions}
The standard scenario for jet launching, acceleration and collimation involves large scale magnetic fields anchored to a rotating object (e.g. Blandford \& Payne 1982; Sakurai 1985). The flow passes through three critical points, i.e. the slow, the Alfv\'en and the fast point. At the fast point the ratio of Poynting to matter energy flux is much larger than unity in the case of relativistic jets (Michel 1967; Camenzind 1986; Beskin et al. 1998) while further acceleration of the flow appears hard to achieve within ideal MHD except if the flow is decollimated (Li et al. 1992; Begelman \& Li 1994). In this work, we study how this picture is modified when one takes into account the fastest growing current driven instability, i.e. the $m=1$ mode kink instability. We have modeled the instability by modifying the induction equation to account for non-ideal MHD processes and solving the relativistic MHD equations in the case of a radial flow.
The instability is driven by $B_\phi$, dissipates Poynting flux and has been shown to be an efficient mechanism to accelerate the flow. The key parameter of the model is the ratio $\sigma_0$ of the Poynting to matter energy flux at the base of the flow. A large part of the AGN jet phenomenology can be understood in the context of this model for $\sigma_0\sim$ several. On sub-pc scales the flow is Poynting-flux dominated with $B_\phi\gg B_r$. The flow is shown to be accelerated fast and to become matter dominated already at $\sim$pc scales, while it reaches terminal bulk Lorentz factors of a few tens. The emission at the blazar zone can be a result of either internal shocks that take place in an unsteady flow, where fast shells catch up with slower ones, converting a small fraction of the bulk kinetic energy of the flow into radiation (Rees \& M\'esz\'aros 1994; Spada et al. 2001), or a direct manifestation of the energy released by the instability. Within the same model, we propose that GRBs are a result of more Poynting flux dominated outflows with $\sigma_0\sim$100. For these values of $\sigma_0$ the flow reaches terminal bulk Lorentz factors of the order of a few to several hundreds, while it remains moderately magnetized (i.e. $\sigma_\infty\sim 1$) in the afterglow region. Although there is evidence for magnetized ejecta from afterglow modeling (e.g. Kumar \& Panaitescu 2003; Zhang \& Kobayashi 2005), more results are anticipated from early afterglow follow-ups that can test the model. In the internal shock scenario for the prompt GRB emission, the shells collide at typical distances of $10^{13}-10^{15}$ cm, where the flow is moderately Poynting-flux dominated. On the other hand, internal shock and Poynting-flux models exclude each other somewhat. If a strong magnetic field is added to an internally-shocked outflow, the radiative efficiency is further reduced with respect to that expected from the collision of unmagnetized shells (e.g. Fan et al. 2004).
At the same time, dissipation in a predominantly magnetic outflow by instability (DC model) or internal reconnection (AC model) can produce radiation naturally at very high efficiency (up to 50\%). \begin{acknowledgements} Giannios acknowledges support from the EU FP5 Research Training Network ``Gamma Ray Bursts: An Enigma and a Tool.'' \end{acknowledgements}
\section{Introduction}
Face detection is the most important pre-processing step for many facial analysis tasks such as landmark detection\cite{koestinger2011annotated,zhu2012face}, face alignment \cite{zhang2016joint,xiong2013supervised,ren2014face}, face recognition \cite{ranjan2017hyperface}, face synthesis \cite{wang2018high,di2017gp}, etc. The accuracy of face detection systems has a direct impact on these tasks and hence, the success of face detection is of crucial importance. Various challenges such as variations in pose, scale, illumination changes, variety of facial expressions, occlusion, \etc, have to be addressed while building face detection algorithms. The success of the Viola-Jones face detector \cite{viola2001rapid} enabled widespread usage of face detection in a variety of consumer devices and security systems. Current state-of-the-art face detectors achieve impressive detection rates on a variety of datasets that contain many challenges. The success of these systems can be attributed to two key steps: (i) advancements in the field of deep learning which has had a direct impact on many facial analysis tasks including face detection, and (ii) dataset collection efforts led by different researchers in the community. Moreover, improvements in detection algorithms have almost always been followed by publication of more challenging datasets and vice versa. Such synchronous advancements in both steps have led to even more rapid progress in the field.
\begin{figure*}[ht!]
\begin{center} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/rain/rain_00906_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/snow/snow_00181_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/haze/haze_02447_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/gaussian/gaussian_00538_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/illumination/illumination_00090_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/lens/lens_00013_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/distractors/Cake_02316.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/rain/rain_01218_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/snow/snow_01077_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/haze/haze_02465_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/gaussian/gaussian_03236_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/illumination/illumination_03323_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/lens/lens_03238_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/distractors/Sheep_01949.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/rain/rain_03267_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/snow/snow_02048_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/haze/haze_02855_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/gaussian/gaussian_01731_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/illumination/illumination_03348_gt.jpg} 
\includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/lens/lens_02619_gt.jpg} \includegraphics[width=0.12\linewidth]{figures/samples/face_box_annotation/distractors/Vague_resion_02075.jpg}\\ \hskip 15pt Rain \hskip 40pt Snow \hskip 40pt Haze \hskip 40pt Blur \hskip 30pt Illumination \hskip 5pt Lens impediments \hskip 10pt Distractors \end{center} \vskip -15pt \caption{Sample annotated images from the proposed UFDD dataset. The dataset is constructed specifically to capture seven different conditions.} \label{fig:samples} \end{figure*} Various methods have been proposed in the literature that attempt to address different aspects of the face detection problem. Some of the initial work involved the design of robust hand-crafted representations \cite{viola2004robust,sung1998example,brubaker2008design,li2014efficient,mathias2014face,yang2014aggregate,chen2014joint} and the use of powerful machine learning algorithms. This was followed by methods \cite{zhu2012face,yan2014face} that exploited structural dependencies in the face. Benchmark datasets such as FDDB \cite{jain2010fddb}, AFW \cite{zhu2012face}, Multi-PIE \cite{gross2010multi} and PASCAL FACE~\cite{everingham2015pascal} enabled the initial progress in face detection research. However, these datasets capture limited variations in scale, pose, occlusion and illumination. In order to address the aforementioned issues, Yang \etal \cite{yang2016wider} presented a large scale face dataset with rich annotations, called WIDER FACE. They explicitly attempt to capture different variations and demonstrate, through detailed experiments, that a significant gap exists between the accuracy of existing detectors and the expected performance. The performance gap is especially large in the case of tiny faces. Researchers in the community quickly adopted this new dataset and incorporated advances in CNN-based learning for improving the detection performance.
Some of the initial works involved cascaded and multi-task CNN-based networks \cite{zhang2016joint}, followed by the use of object detection approaches~\cite{zhu2017cms,jiang2017face} for face detection. Most recent research on face detection has focused on improving the performance of small face detection \cite{hu2017finding,zhang2017s3fd,najibi2017ssh}. Some of these approaches employ feature maps from different conv layers \cite{hu2017finding,yang2017face} to build multiple detectors that are robust to scale variations, while others \cite{najibi2017ssh,zhang2017s3fd} have attempted to develop new anchor design strategies to overcome some of the issues faced by anchor-based methods such as Faster-RCNN \cite{ren2015faster}. Although WIDER FACE \cite{yang2016wider} attempts to capture a variety of conditions during the dataset collection process, there still exist several practical considerations, such as weather-based degradations, different types of blur and distractor images, which have not been explicitly captured by existing face datasets. These conditions are particularly important for a variety of applications such as biometric surveillance, maritime surveillance, and long-range surveillance, where detection accuracy is of critical importance. Based on this observation, we explore the next set of challenges that require focused attention from the face detection community, and in this attempt, we present a new Unconstrained Face Detection Dataset (UFDD) involving a richer set of challenges. Specifically, this new dataset contains a total of 6,425 images with 10,897 face annotations and it involves the following key degradations or conditions: (1) \emph{Rain}, (2) \emph{Snow}, (3) \emph{Haze}, (4) \emph{Lens impediments}\footnote{By lens impediments, we mean obscurants that appear between the object and camera lens. A few examples are water droplets on the lens, lens dirt, window panes, \etc.
}, (5) \emph{Blur}, (6) \emph{Illumination variations}, and (7) \emph{Distractors}. Fig. \ref{fig:samples} shows sample images from the dataset with different variations and their corresponding annotations. For benchmarking existing face detectors, we define two protocols: (1) \textbf{Internal} and (2) \textbf{External}. In the \enquote{internal} protocol, the dataset is divided into $10$ splits and a $10$-fold cross-validation is performed. In the \enquote{external} protocol, the face detectors are trained on another face dataset or a synthetically created face dataset and tested on the real-world dataset. For creating the synthetic dataset, the above-listed degradations and conditions are artificially simulated on images from the WIDER FACE dataset. Details of the dataset collection/annotation process and synthetic dataset creation are explained in Section \ref{sec:dataset}. Through various experiments, we demonstrate that existing algorithms are far from optimal in terms of their detection performance. A detailed analysis is performed by studying the performance of recent face detectors on this newly proposed dataset. The analysis includes a separate study of the effect of each condition on the performance. In particular, we benchmark four representative algorithms, namely Faster R-CNN \cite{ren2015faster}, SSH \cite{najibi2017ssh}, HR-ER \cite{hu2017finding} and S3FD \cite{zhang2017s3fd}, using the external protocol and analyze the failure cases of these algorithms. We hope that this detailed analysis will provide deep insights for the design of new algorithms to address these newly identified challenges. \begin{table*}[ht!] \centering \caption{Comparison of different datasets. ('\checkmark': Contained in DB. '-': Not contained or mentioned in the paper.)
} \label{tab:summary} \vskip -10pt \resizebox{1\linewidth}{!}{% \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline & \#Images & \#Annotations & \multicolumn{8}{c|}{Properties} \\ \hline & & & Source & Rain & Snow & Haze & Illumination & Blur & Lens impediments & Distractors \\ \hline AFW \cite{zhu2012face} & 205 & 473 & Flickr & - & - & - & - & - & - & - \\ \hline PASCAL FACE \cite{everingham2015pascal} & 851 & 1341 & PASCAL-VOC & - & - & - & - & - & - & - \\ \hline FDDB \cite{jain2010fddb} & 2,845 & 5,171 & Yahoo & - & - & - & - & \checkmark & - & - \\ \hline MALF \cite{faceevaluation15} & 5,250 & 11,900 & Flickr, www & - & - & - & - & - & - & - \\ \hline IJB-C \cite{mazeiarpa-IJB-C} & 130K & 300K & www & - & - & - & \checkmark & - & - & \checkmark \\ \hline WIDER FACE \cite{yang2016wider} & 32,303 & 393,703 & www & \checkmark & - & - & \checkmark & \checkmark & - & - \\ \hline UCCS \cite{gunther2017unconstrained} & - & 75,738 & camera & - & \checkmark & - & \checkmark & \checkmark & - & - \\ \hline UFDD (Proposed) & 6,424 & 10,895 & www & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline \end{tabular} } \end{table*} \section{Related Work} \label{sec:relatedwork} \noindent\textbf{Face detection approaches. }Initial research \cite{viola2004robust,sung1998example,brubaker2008design,li2014efficient,mathias2014face,yang2014aggregate,chen2014joint} on face detection was based on robust hand-crafted representations and involved training of powerful machine learning classifiers. Later approaches such as \cite{zhu2012face,yan2014face} utilized structural dependencies present in faces and modeled them using elastic deformation structures. Recently, the success of CNN-based methods in different computer vision tasks such as object recognition \cite{simonyan2014very,he2016deep} and detection \cite{redmon2017yolo9000,ren2015faster,liu2016ssd} has inspired several face detection approaches. 
Early work on CNN-based face detection involved cascaded architectures \cite{zhang2016joint,li2015convolutional,qin2016joint,yang2015facial} and multi-task training of correlated tasks \cite{ranjan2015deep,ranjan2017hyperface,sindagi2017cnn}. Although these approaches were able to obtain impressive detection rates on datasets like Pascal-Faces \cite{yan2014face} and FDDB \cite{jain2010fddb}, the introduction of the WIDER dataset \cite{yang2016wider} demonstrated the lack of robustness of these methods to different factors such as large variations in scale, pose and occlusion. A significant performance gap was observed, especially in the case of smaller faces. Hence, most of the recent work involves the design of novel strategies to build detectors that are especially robust to scale variations. Some methods employ feature maps from multiple layers, similar to \cite{zhu2017cms,hu2017finding}, while other methods develop new anchor design strategies \cite{zhang2017s,najibi2017ssh}. Zhu \etal \cite{zhu2017cms} employed the Faster-RCNN framework and fused features from multiple conv layers to build robustness to scale variations. Additionally, they encoded context to provide additional information to the classifier. Hu \etal \cite{hu2017finding} trained multiple detectors to cater to different scales and employed image pyramids for performing the inference. Najibi \etal \cite{najibi2017ssh} and Zhang \etal \cite{zhang2017s} proposed single shot detectors that provided significant improvements while maintaining good computational efficiency. While the detector of Najibi \etal \cite{najibi2017ssh} is based on the region proposal network of Faster-RCNN \cite{ren2015faster}, that of Zhang \etal \cite{zhang2017s} is based on the SSD detector \cite{liu2016ssd}, where additional conv layers are converted from VGG-16's fully connected layers.
To address the overlap issue of anchor-based techniques, they also introduced new anchor design strategies to ensure increased overlap between anchor boxes and ground-truth faces of smaller sizes during the training process. \\ \noindent\textbf{Face detection datasets. } As discussed, face detection is an extensively studied problem. Several datasets such as AFW \cite{zhu2012face}, FDDB \cite{jain2010fddb}, PASCAL FACE \cite{everingham2015pascal}, \etc, have been constructed specifically for face detection. The AFW dataset \cite{zhu2012face} consists of 205 images collected from Flickr and has 473 face annotations. Additionally, the authors provide facial landmark and pose labels for each face. The PASCAL FACE dataset \cite{everingham2015pascal} has a total of 851 images, which are a subset of PASCAL VOC, and has a total of 1,341 annotations. These datasets contain only a few hundred images and have limited variations in face appearance. Jain and Miller \cite{jain2010fddb} collected a relatively larger dataset that consists of 2,845 images with 5,171 annotations. Although the authors explicitly attempt to capture a wide range of difficulties including occlusions, the images are collected from the Yahoo! website and mostly belong to celebrities, due to which the dataset has some inherent bias. The AFLW dataset \cite{koestinger2011annotated} presented a large-scale collection of face images collected from the web, with large variations in appearance, pose, expression, ethnicity, age, gender, \etc. The dataset consists of a total of 25,000 face annotations. However, the AFLW dataset does not have occlusion and pose labels. The IJB-A dataset \cite{klare2015pushing} is constructed for face detection and recognition. It contains 24,327 images with 49,759 face annotations. The recently introduced IJB-C dataset \cite{mazeiarpa-IJB-C} is an extension of IJB-A with about 138,000 face images, 11,000 face videos, and 10,000 non-face images.
The MALF dataset \cite{faceevaluation15} is a large dataset with 5,250 images annotated with multiple facial attributes and it is specifically constructed for fine-grained evaluation. More recently, Yang \etal \cite{yang2016wider} presented a very large scale dataset called WIDER FACE with large variations in scale, pose and occlusion. While this dataset demonstrated that existing face detectors performed poorly, especially on smaller scale faces, recent CNN-based face detectors \cite{najibi2017ssh,zhang2017s} have incorporated robustness to scale variations and have achieved impressive performance. Additionally, these datasets do not focus on specifically capturing weather-based degradations such as snow, rain, haze, \etc. Gunther \etal \cite{gunther2017unconstrained} recently presented an unconstrained dataset for face detection and recognition, where the authors do attempt to capture weather-based degradations; however, these are limited to a smaller set of conditions such as sunny and snowy days. Several conditions such as haze and rain are not captured. In contrast, the proposed dataset in this work captures a much larger set of variations with a large set of images in each condition. Additionally, we also include a large set of distractor images, which are largely ignored by the existing datasets. Table \ref{tab:summary} gives a summary of different datasets in comparison with the proposed dataset. \section{Dataset} \label{sec:dataset} To the best of our knowledge, UFDD is among the first datasets that explicitly captures variations in different weather conditions and other degradations such as lens impediments, motion blur and focus blur. Additionally, the dataset also consists of a large set of distractor images, which are largely ignored by the existing datasets, where every image almost necessarily has at least one face annotation. These images either contain non-human faces such as animal faces or no faces at all.
The presence of distractor images is especially important to measure the performance of a face detector in rejecting non-face images and to study the false positive rate of an algorithm. Some existing datasets capture a few of these conditions separately. For instance, the UCCS dataset \cite{gunther2017unconstrained} contains sunny, snow and blur images, however, other degradations such as haze and rain are missing. Moreover, these images are collected from a single location using a surveillance camera. In contrast, the proposed dataset is collected from the Internet and hence, it is more diverse. As a result, this dataset can be used to evaluate the generalization ability of different face detectors on a diverse set of conditions. Similar to FDDB \cite{jain2010fddb}, we define two separate protocols to evaluate face detection performance on the proposed dataset: (1) \textbf{Internal} and (2) \textbf{External}. In the \enquote{internal} protocol, the dataset is divided into $10$-splits and a $10$-fold cross-validation is performed. In the \enquote{external} protocol, the face detectors are trained on another face dataset or on a synthetically created face dataset and tested on the real-world dataset. In this work, we use the WIDER FACE dataset as another training dataset to create the synthetic dataset. \subsection{Data Collection and Annotation} \noindent\textbf{Collection and Annotation. } Images in the proposed dataset are collected from different sources on the web such as Google, Bing, Yahoo, Creative commons search, Pixabay, Pixels, Wikimedia commons, Flickr, Unsplash, Vimeo and Baidu. Images are searched using various keywords such as \enquote{rain + faces}, \enquote{snow + faces}, \enquote{rain + crowd}, \enquote{dark + crowd} \etc. Images are collected in such a way that the dataset captures a total of seven conditions and degradations. 
These conditions are chosen based on the observation that they are entirely plausible in a variety of applications such as video surveillance and maritime surveillance. In section \ref{sec:evaluation}, we present a detailed analysis of the effect of these conditions separately on face detection performance. Wherever possible, we ensured a uniform distribution of different conditions so that the dataset has minimal bias towards any particular condition. Table~\ref{tab:distribution} shows the distribution of number of images per condition. \begin{table}[t!] \centering \caption{Distribution of images in the UFDD dataset.} \label{tab:distribution} \vskip -10pt \small \resizebox{1\linewidth}{!}{% \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Condition & Rain & Snow & Haze & Blur & Illumination & \begin{tabular}[c]{@{}l@{}}Lens\\ impediments\end{tabular} & Distractors \\ \hline \#Images & 628 & 680 & 442 & 517 & 612 & 95 & 3450 \\ \hline \end{tabular} } \end{table} After collection, the dataset is cleaned to remove near-duplicate images using \cite{hash}. After dataset cleaning, the images are resized to have a width of $1024$ while preserving its original aspect ratio. These resized images are used for annotation and evaluation. For annotations, these images are uploaded to Amazon mechanical turk (AMT) and each image is assigned to around $5$ to $9$ AMT workers. The workers are asked to annotate all recognizable faces in the image. Once the annotation is complete, the labels are then cleaned and consolidated using the procedure outlined by Taborsky \etal in \cite{taborsky2015annotating}. The consolidation process is viewed as a clustering problem. For each image, annotations from all workers are converted to a list of sets, where each set represents annotations of a particular face in the image by different workers. For instance, if there are $n$ faces in an image, then there would be $n$ sets in the final list. 
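The clustering of worker annotations into per-face sets, together with the overlap-based merging, pruning, and averaging steps detailed next, can be sketched as follows. This is a hypothetical minimal implementation, not the authors' code: the box format, helper names, and the greedy first-box matching are illustrative assumptions; only the $0.3$ overlap threshold and the two-box minimum are taken from the procedure described in this section.

```python
# Hypothetical sketch of the annotation-consolidation step: worker boxes are
# greedily grouped into per-face sets using an IoU overlap threshold of 0.3,
# sets with fewer than 2 boxes are pruned, and each surviving set is reduced
# to its average box. Box format assumed here: (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def consolidate(annotations_per_worker, thresh=0.3, min_votes=2):
    """annotations_per_worker: list of box lists, one list per AMT worker."""
    # Initialize the sets from the first worker's annotations.
    sets = [[box] for box in annotations_per_worker[0]]
    for worker_boxes in annotations_per_worker[1:]:
        for box in worker_boxes:
            best = max(sets, key=lambda s: iou(s[0], box), default=None)
            if best is not None and iou(best[0], box) > thresh:
                best.append(box)   # merge into the overlapping face set
            else:
                sets.append([box])  # otherwise start a new face set
    # Prune erroneous sets and average the remaining boxes.
    ground_truth = []
    for s in sets:
        if len(s) >= min_votes:
            n = len(s)
            ground_truth.append(tuple(sum(b[i] for b in s) / n for i in range(4)))
    return ground_truth
```

For example, two workers annotating the same face with slightly shifted boxes collapse into one averaged ground-truth box, while a stray single-worker box is discarded by the pruning step.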
First, the sets in the list are initialized with the annotations created by the first worker. Annotations from other workers are then added to those sets if the overlap between them is greater than $0.3$; otherwise, a new set is created. After processing all annotations, the list consists of sets of overlapping annotations, ideally corresponding to each face in the image. Finally, a pruning step is carried out to remove erroneous annotations, where sets are removed from the final list if they do not contain at least 2 boxes. For each remaining set in the list, the average bounding box is computed and used as the ground-truth for the image. \section{Evaluation, Benchmarking and Analysis} \label{sec:evaluation} \subsection{Methods used for evaluation} We evaluate the following recent face detection approaches on the proposed UFDD dataset. \noindent\textbf{Faster-RCNN. } Faster-RCNN \cite{ren2015faster} is among the first end-to-end CNN-based object detection methods and it consists of a region proposal network (RPN) and a region classification network (RCN). The RPN is based on the VGG-16 \cite{simonyan2014very} architecture and produces candidate regions which are agnostic to object class. These candidate regions are further processed by the RCN, which pools features from the final conv layer of VGG-16 and forwards them through a set of fully connected layers to produce the final object class and bounding box. Since most face detectors are based on anchor boxes and Faster-RCNN was the first method to propose anchor boxes, this method was chosen as the baseline approach. For the purpose of evaluation, we used an open-source implementation \cite{pyfaster} specifically implemented for face detection, which is based on the original Faster-RCNN source code. \noindent\textbf{HR-ER. }Hu \etal \cite{hu2017finding} specifically addressed the problem of large variations in scale found in the WIDER FACE dataset by designing scale-specific detectors based on ResNet-101 \cite{he2016deep}.
Each scale-specific detector is a conv layer which processes features extracted from earlier layers and produces a spatial heat map that indicates the detection confidence at every location. Image pyramids are used during training and inference. Additionally, balanced sampling and hard negative mining are employed during training to effectively learn difficult samples. For the purpose of evaluation, we used the implementation provided by the authors. \noindent\textbf{SSH. } Najibi \etal \cite{najibi2017ssh} presented a single stage headless (SSH) face detector, which is primarily based on the RPN of Faster-RCNN. In contrast to Faster-RCNN, SSH consists of multiple detectors placed on top of different conv layers of VGG-16 to explicitly address scale variations. Each detector is designed with an additional context processing module that incorporates surrounding context by increasing the receptive field of the network. In contrast to earlier multi-scale detection work \cite{cai2016unified}, the authors fuse features from different layers before using them for detection. Also, similar to \cite{shrivastava2016training}, they use online hard example mining to boost the detection performance. For the purpose of evaluation, we used the implementation provided by the authors. \noindent\textbf{S3FD. } Similar to \cite{najibi2017ssh}, Zhang \etal \cite{zhang2017s} proposed the single shot scale-invariant face detector (S3FD), where they presented new anchor design strategies to overcome the issues of anchor-based techniques for small object detection. The authors propose a max-out background technique to address the issue of the high false positive rate on small faces. S3FD is based on the popular object detection framework called single shot detector (SSD) \cite{liu2016ssd}, where they use VGG-16 as the base network. Similar to earlier approaches on face detection and object detection, S3FD uses hard negative mining to improve the detection accuracy.
For the purpose of evaluation, we used the implementation provided by the authors. \begin{figure}[htp!] \begin{center} \includegraphics[width=.55\linewidth, height=0.55\linewidth]{figures/result-05102018-UFDD-texv15/fig2/pretrained_pr_curve.png} \end{center} \vskip -15pt \caption{Evaluation results of different algorithms that are pre-trained on WIDER FACE, on the proposed UFDD dataset.} \label{fig:pretrained_pr_curve} \end{figure} \subsection{Evaluation and Analysis} For the purpose of analysis, the aforementioned methods are evaluated in two different scenarios: \noindent(i) Using pre-trained models: As argued by Yang \etal in \cite{yang2016wider}, most of the recent methods (including the ones described above) use WIDER FACE as a source dataset, as it is a significantly large dataset that captures large variations in different factors and conditions. Based on this argument, we directly evaluate these recent methods, which are pre-trained on WIDER FACE, on the proposed UFDD dataset. Fig. \ref{fig:pretrained_pr_curve} shows the precision-recall curves corresponding to various methods, pre-trained on the WIDER FACE training set \cite{yang2016wider}, on the proposed UFDD dataset. Contrary to the suggestions made by the authors of \cite{yang2016wider}, it can be observed that WIDER FACE is not necessarily effective as a source training set, especially for conditions such as rain, snow, haze, blur and distractors that are captured by the UFDD dataset. The poor performance of state-of-the-art detectors in such cases highlights the need for a dataset that explicitly captures them. Additionally, this argument calls for an improvement in the design of algorithms and networks in order to capture these kinds of variations and, hence, improve the robustness of the detectors. \begin{figure}[htp!]
\begin{center} \includegraphics[width=0.25\linewidth]{figures/samples/synthetic/1.jpg} \includegraphics[width=0.25\linewidth]{figures/samples/synthetic/2.jpg} \includegraphics[width=0.25\linewidth]{figures/samples/synthetic/3.jpg} \vskip +2pt \includegraphics[width=0.25\linewidth]{figures/samples/synthetic/4.jpg} \includegraphics[width=0.25\linewidth]{figures/samples/synthetic/5.jpg} \includegraphics[width=0.25\linewidth]{figures/samples/synthetic/6.jpg} \end{center} \vskip -15pt \caption{Sample annotated images from the synthetic WIDER FACE dataset. Left to right and top to bottom: Rain, snow, motion blur, Gaussian blur, illumination, lens impediments.} \label{fig:samples_synthetic} \end{figure} \noindent(ii) Using the synthetic dataset for fine-tuning: Under ideal conditions, one would want to train the networks on large scale datasets. However, conditions such as rain, snow, and haze occur relatively infrequently, due to which the availability of such images on the web is limited. A potential solution to this issue is to synthesize these conditions and simulate images containing these less frequently occurring conditions. Since WIDER FACE \cite{yang2016wider} is the largest face dataset containing occlusions, different scale variations, and difficult poses, we use this dataset to produce the synthetic dataset that contains variations such as rain, snow, lens impediments and blur\footnote{Transmission maps are required to synthesize hazy images \cite{he2011single,ren2016single}. Since transmission maps are not available in the considered datasets, we are unable to synthesize the corresponding hazy images.}. Fig. \ref{fig:samples_synthetic} illustrates sample images from the synthetic dataset. In the following, we discuss the details of the synthesis procedure for different conditions.
\noindent \textbf{Rain:} Following \cite{synth-rain}, 15 large rain masks are synthesized, which are blended with the images in WIDER FACE \cite{yang2016wider} to synthesize rainy images. In particular, these 15 masks combine three Gaussian noise levels ($16$, $32$, and $48$) with five rotation angles ($70^\circ, 80^\circ, 90^\circ, 100^\circ$ and $110^\circ$). \noindent \textbf{Snow:} Following the procedure discussed in \cite{synth-snow}, 3 large snow masks are synthesized, with different resize ratios and numbers of mixtures: $(75\%, 9), (100\%, 12)$ and $(133\%, 16)$. Applying the same rotation scheme as for the rain masks then yields 15 types of snow masks. We randomly crop a mask and blend it with the original image. \noindent \textbf{Blur:} Both focus blur and motion blur are used to synthesize the blurry images. In particular, 3 levels of focus blur kernels ($\alpha, 1.5\alpha$ and $2\alpha$, where $\alpha =$ image height $/ 640$) are leveraged to synthesize images with focus blur, and 3 levels of motion blur kernels ($5\beta, 10\beta$ and $15\beta$, where $\beta =$ image height $/ 640$) are used to synthesize images with motion blur. We use random motion angles in the range of $[0^\circ , 180^\circ]$. \noindent \textbf{Illumination:} Pixel intensity values of the original images are modified to make these images brighter or darker. To make these images brighter, we change the image intensity from $[0, 255\gamma]$ to $[0, 255]$, where $\gamma = 0.6\delta + 0.4$ and $\delta$ is a random number in $(0, 1)$. To make these images darker, we add Poisson noise using \cite{matpoisson} to approximate the shot noise caused by high ISO sensitivity, as discussed in \cite{noise-iso}, and change the image intensity from $[0, 255]$ to $[0, 255\gamma]$, where $\gamma$ is the same as above.
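As an illustration, the illumination adjustment just described can be sketched in a few lines of NumPy. This is a hypothetical reimplementation, not the authors' code; the function names and the fixed random seed are assumptions, while the $\gamma = 0.6\delta + 0.4$ mapping and the Poisson shot-noise approximation follow the description above.

```python
# Minimal sketch of the illumination synthesis (illustrative, not the
# authors' code). gamma = 0.6*delta + 0.4 with delta ~ U(0, 1), so
# gamma lies in (0.4, 1): brightening stretches intensities from
# [0, 255*gamma] to [0, 255]; darkening adds Poisson (shot) noise and
# compresses intensities from [0, 255] to [0, 255*gamma].
import numpy as np

def brighten(img, delta):
    """Stretch intensities from [0, 255*gamma] to [0, 255]."""
    gamma = 0.6 * delta + 0.4
    out = np.rint(img.astype(np.float64) / gamma)
    return np.clip(out, 0, 255).astype(np.uint8)

def darken(img, delta, rng=None):
    """Add Poisson shot noise, then compress [0, 255] to [0, 255*gamma]."""
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed chosen here for reproducibility
    gamma = 0.6 * delta + 0.4
    noisy = rng.poisson(img.astype(np.float64))  # approximates high-ISO shot noise
    out = np.rint(np.clip(noisy, 0, 255) * gamma)
    return out.astype(np.uint8)
```

With $\delta = 0$ (hence $\gamma = 0.4$), a mid-gray intensity of 100 is stretched to 250 by the brightening step, while the darkening step caps all intensities at $255\gamma \approx 102$.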
\noindent \textbf{Lens impediments:} Seven different lens impediment masks are downloaded from the web and are blended with the images by the procedure discussed in \cite{synth-lens}. The masks have an almost uniform background to preserve the contrast of the images. To increase the number of masks, we rotate, combine and crop each mask to be similar in size to the image. Then, these augmented masks are blended with the images with an opacity in the range of $[0.5, 1]$. This synthetic dataset is then used as a source training dataset to fine-tune the existing state-of-the-art face detectors discussed above. Fig.~\ref{fig:synthetic_pr_curve} shows the precision-recall curves corresponding to different methods (trained on the proposed synthetic dataset) evaluated on the proposed UFDD dataset. Table \ref{tab:map_source} shows the mean average precision (mAP) corresponding to different methods that are trained on the original WIDER FACE training set and the synthetic dataset. It can be observed that there are considerable improvements in the detection performance when the networks are trained on the synthesized dataset. This also demonstrates the limitations of existing large scale face detection datasets, where many real-world conditions such as rain and haze are not considered. \begin{figure}[htp!] \begin{center} \includegraphics[width=.55\linewidth, height=0.55\linewidth]{figures/result-05102018-UFDD-texv15/fig4/synthetic_pr_curve.png} \end{center} \vskip -15pt \caption{Evaluation results of different algorithms on the proposed UFDD dataset. Note that the face detectors are trained on the synthetic WIDER FACE dataset.} \label{fig:synthetic_pr_curve} \end{figure} \begin{table}[htp!]
\centering \caption{The mAP scores using different training sets.} \label{tab:map_source} \vskip -10pt \resizebox{1\linewidth}{!}{% \begin{tabular}{|l|c|c|} \hline Training set & Original WIDER FACE & Synthetic WIDER FACE \\ \hline Faster-RCNN \cite{ren2015faster} & 0.521 & 0.541 \\ \hline SSH \cite{najibi2017ssh} & 0.695 & 0.731 \\ \hline S3FD \cite{zhang2017s} & 0.725 & - \\ \hline HR-ER \cite{hu2017finding} & 0.742 & 0.762 \\ \hline \end{tabular} } \end{table} \begin{figure*}[ht!] \begin{center} \centering\rotatebox{90}{Annotation} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/gt/rain/rain_01218.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/gt/snow/snow_02048.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/gt/haze/haze_02465.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/gt/gaussian/gaussian_03236.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/gt/illumination/illumination_00090.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/gt/lens/lens_00013.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/gt/distractor/Sheep_01949.jpg} \rotatebox{90}{Faster-RCNN \hskip 15pt} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/Faster-RCNN/rain/rain_01218.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/Faster-RCNN/snow/snow_02048.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/Faster-RCNN/haze/haze_02465.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/Faster-RCNN/gaussian/gaussian_03236.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/Faster-RCNN/illumination/illumination_00090.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/Faster-RCNN/lens/lens_00013.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/Faster-RCNN/distractor/Sheep_01949.jpg}
\rotatebox{90}{SSH} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/SSH/rain/rain_01218.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/SSH/snow/snow_02048.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/SSH/haze/haze_02465.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/SSH/gaussian/gaussian_03236.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/SSH/illumination/illumination_00090.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/SSH/lens/lens_00013.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/SSH/distractor/Sheep_01949.jpg} \rotatebox{90}{S3FD} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/S3FD/rain/rain_01218.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/S3FD/snow/snow_02048.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/S3FD/haze/haze_02465.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/S3FD/gaussian/gaussian_03236.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/S3FD/illumination/illumination_00090.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/S3FD/lens/lens_00013.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/S3FD/distractor/Sheep_01949.jpg} \rotatebox{90}{HR-ER} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/HR-ER/rain/rain_01218.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/HR-ER/snow/snow_02048.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/HR-ER/haze/haze_02465.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/HR-ER/gaussian/gaussian_03236.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/HR-ER/illumination/illumination_00090.jpg} 
\includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/HR-ER/lens/lens_00013.jpg} \includegraphics[width=0.13\linewidth]{figures/samples/badfd-result/HR-ER/distractor/Sheep_01949.jpg} \\ \hskip 15pt Rain \hskip 45pt Snow \hskip 45pt Haze \hskip 45pt Blur \hskip 30pt Illumination \hskip 10pt Lens impediments \hskip 10pt Distractors \hskip 15pt \end{center} \vskip -14pt \caption{Sample face detection results on the proposed UFDD dataset.} \label{fig:fail_detect} \end{figure*} \begin{figure*}[t] \centering \begin{minipage}{.3\textwidth} \centering \includegraphics[width=.8\linewidth, height=0.7\linewidth]{figures//result-05102018-UFDD-texv15//fig6//pr_curve_UFDD_rain.png} \captionsetup{labelformat=empty} \captionsetup{justification=centering} \vskip -8pt \caption*{Rain} \end{minipage} \begin{minipage}{.3\textwidth} \centering \includegraphics[width=.8\linewidth, height=0.7\linewidth]{figures//result-05102018-UFDD-texv15//fig6//pr_curve_UFDD_snow.png} \captionsetup{labelformat=empty} \captionsetup{justification=centering} \vskip -8pt \caption*{Snow} \end{minipage} \begin{minipage}{.3\textwidth} \centering \includegraphics[width=.8\linewidth, height=0.7\linewidth]{figures//result-05102018-UFDD-texv15//fig6//pr_curve_UFDD_haze.png} \captionsetup{labelformat=empty} \captionsetup{justification=centering} \vskip -8pt \caption*{Haze} \end{minipage} \begin{minipage}{.3\textwidth} \centering \includegraphics[width=.8\linewidth, height=0.7\linewidth]{figures//result-05102018-UFDD-texv15//fig6//pr_curve_UFDD_blur.png} \captionsetup{labelformat=empty} \captionsetup{justification=centering} \vskip -8pt \caption*{Blur} \end{minipage} \begin{minipage}{.3\textwidth} \centering \includegraphics[width=.8\linewidth, height=0.7\linewidth]{figures//result-05102018-UFDD-texv15//fig6//pr_curve_UFDD_illumination.png} \captionsetup{labelformat=empty} \captionsetup{justification=centering} \vskip -8pt \caption*{Illumination} \end{minipage} \begin{minipage}{.3\textwidth} \centering 
\includegraphics[width=.8\linewidth, height=0.7\linewidth]{figures//result-05102018-UFDD-texv15//fig6//pr_curve_UFDD_lens.png} \captionsetup{labelformat=empty} \captionsetup{justification=centering} \vskip -8pt \caption*{Lens impediments} \end{minipage} \vskip -10pt\caption{Cohort Analysis: Individual precision-recall curves of different face detection algorithms on the proposed UFDD dataset. Note that the face detectors are pre-trained on the WIDER FACE dataset.} \label{fig:over} \vskip -18pt \label{fig:prcurves_cohort} \end{figure*} \subsection{Cohort Analysis} In this section, we individually analyze the effect of different conditions such as rain, haze, \etc on the performance of recent state-of-the-art face detection methods\footnote{All four methods use WIDER FACE as the source training set and these pre-trained models are evaluated on the UFDD dataset.}. Results of this study (precision-recall curves) are presented in Fig. \ref{fig:prcurves_cohort}. Detection results on a sample image for all four benchmark methods are shown in Fig. \ref{fig:fail_detect}. It can be clearly observed from these figures that all the degradations hinder the performance of the recent state-of-the-art detectors. These degradations introduce different kinds of artifacts in the feature maps, thereby resulting in a slightly modified representation as compared to the original representation. Since the existing methods are trained on datasets that do not necessarily contain a large number of images with these conditions, such methods do not generalize well to new conditions. Performance drops are observed under all degradations, although to different degrees. Among all degradations, the presence of haze and lens impediments has a relatively larger impact, which is probably because these conditions severely degrade the image; the problem is further aggravated by the fact that the WIDER FACE dataset does not contain many hazy or lens-impediment images.
A surprising observation is that the HR-ER method \cite{hu2017finding} consistently performs better than more recent methods such as SSH \cite{najibi2017ssh} and S3FD \cite{zhang2017s3fd}. This is especially important considering the fact that SSH and S3FD perform better on the WIDER FACE dataset as compared to HR-ER. Based on this observation, we may conclude that HR-ER has better generalization ability as compared to the other detectors. In the following, we discuss the results for each condition in detail. \begin{table}[t!] \centering \caption{The mAP scores corresponding to different detectors on the UFDD dataset with and without distractors and their differences.} \label{tab:map_dist} \vskip -8pt \resizebox{0.8\linewidth}{!}{% \begin{tabular}{|l|c|c|c|} \hline Method & UFDD & UFDD without distractors & Difference \\ \hline Faster-RCNN \cite{ren2015faster} & 0.521 & 0.564 & 0.043 \\ \hline SSH \cite{najibi2017ssh} & 0.695 & 0.725 & 0.030 \\ \hline S3FD \cite{zhang2017s3fd} & 0.725 & 0.761 & 0.036 \\ \hline HR-ER \cite{hu2017finding} & 0.742 & 0.767 & 0.025 \\ \hline \end{tabular} } \end{table} \noindent \textbf{Rain:} The presence of rain streaks alters the high frequency components in an image, thus changing the filter responses. This results in degradation of visual quality and poor detection performance \cite{zhang2017rain}. The problem is further exacerbated when images containing occluded faces are degraded with rain streaks. \noindent \textbf{Snow:} Similar to rain, the presence of snow also degrades the performance of face detection since it blocks certain parts of the face (as shown in Fig. \ref{fig:fail_detect}). However, the degradation observed due to snow is comparatively higher, which could be due to the fact that the presence of snow results in larger degrees of occlusion as compared to those caused by rain.
\noindent \textbf{Haze:} Haze, caused by the absorption or reflection of light by floating particles in the air, results in low image contrast, affecting the visibility of faces in images. In addition to causing serious degradation of image quality, haze causes a significant drop in face detection performance. As shown in the third column in Fig. \ref{fig:fail_detect}, the faces are less visible and tend to be darker due to the presence of haze. It can be observed that haze causes relatively more degradation in performance compared to rain and snow. \noindent \textbf{Blur:} Blur, caused either by camera shake or by depth of field, results in loss of crucial high frequency details in an image. This loss of information results in considerable difficulties for face detection. Since existing face detectors are trained on datasets containing sharp and high-quality images, the representations learned by these detectors are not robust to blurry images. \noindent \textbf{Illumination:} Extreme illumination conditions such as excessive brightness or darkness affect the visibility of faces. It can be observed from Fig. \ref{fig:fail_detect} that all four methods are unable to detect the faces in images with extreme illumination conditions. \noindent \textbf{Lens impediments:} Lens impediments, caused by the presence of dirt particles or water droplets on the camera lens, introduce sudden discontinuities in frequencies and hence, large variations in focus in the captured images. As shown in the last column in Fig. \ref{fig:fail_detect}, the presence of water droplets results in regions in the image that have different focus. This results either in false detections or missed detections. \noindent \textbf{Distractors:} Distractors are images that do not contain human faces.
Examples of distractor images are those containing hand regions, animal faces, \etc These images contain regions that can easily be confused with faces and hence result in a high false positive rate. It can be observed from Table \ref{tab:map_dist} and Fig.~\ref{fig:distractors} that the detection accuracies drop drastically in the presence of distractor images. A similar observation can be made from the last column in Fig. \ref{fig:fail_detect}. \begin{figure}[t!] \begin{center} \includegraphics[width=.5\linewidth, height=0.5\linewidth]{figures/result-05102018-UFDD-texv15/fig7/pretrained_pr_curve_with_and_without_distractor.png} \end{center} \vskip -20pt \caption{Evaluation results of different algorithms on the proposed UFDD dataset with and without distractors.} \label{fig:distractors} \end{figure} \section{Conclusion} \label{sec:conclusion} We identified the next set of challenges that plague the face detection task. While existing datasets capture large variations in different factors, these newly identified conditions are largely ignored. To overcome this, we collected a new UFDD dataset that specifically captures different image degradations due to weather conditions such as rain, snow, haze, \etc and blur-based degradations. In addition, the dataset also consists of several images known as distractors that contain non-human faces and objects. We benchmarked recent state-of-the-art face detection algorithms on this newly proposed dataset and demonstrated a significant gap in their performance. Additionally, in order to provide insight for the design of future algorithms, we also presented a detailed cohort analysis that studies the effect of different conditions on the detection performance. \section*{Acknowledgments} {\footnotesize{This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R\&D Contract No. 2014-14071600012.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.}} {\small \bibliographystyle{ieee}
\section{Introduction} \label{intro} Reaction control is an essential task in the chemical process industry; it can stabilize unstable (but profitable) operating states, and, in a multiobjective context, can enhance reactor performance while satisfying safety and environmental constraints. Non-steady state (e.g. periodic) operation may result in better average productivity or selectivity than steady state operation (as suggested in the early 1960s in the pioneering work of F. Horn (see Ref.~\cite{n-s-reactor} and references therein)). At a much finer scale, the nonlinear characteristics of heterogeneous catalytic reactions on single-crystal catalysts could be exploited to improve overall yield and selectivity through spatiotemporally resolved actuation \cite{addressing,twists}. Micro-lithography, by designing the shape of reactive domains as well as prescribing the geometry and statistics of heterogeneous inclusions, provides a different avenue of ``talking" to the local dynamics of catalytic reactions (e.g. Ref.~\cite{chaos}). The phenomenology of pattern formation on micro-composite catalytic surfaces can be much richer than that in uniform media. There have been extensive experimental and computational studies in this area, both from our group \cite{pwork1,pwork2,pwork3,pwork4,pwork5,pwork6,pwork7,pwork8,pwork9,pwork10} and in the work of others \cite{pwork11,pwork12,pwork13}. For the low pressure CO oxidation on Pt(110), the heterogeneities deposited onto the catalytic Pt surface can be either inert, such as TiO$_2$, or may consist of different active catalysts for the reaction, e.g. metals like Rh and Pd. Such fields of heterogeneous inclusions affect the reaction dynamics and the formation of patterns on the catalytic Pt surface largely through their interfaces. The heterogeneity in such composite catalysts is solely spatial; their geometry is determined upon construction and does not change with time. 
In more recent studies, spatiotemporal forcing (using a galvanometer mirror-manipulated focused laser beam) was applied during CO oxidation on a Pt(110) single-crystal \cite{addressing}. Pulses and fronts, the basic building blocks of patterns, could be formed, guided and destroyed with an addressable laser beam; the addressability function was also used to enhance catalytic reactivity in Ref.~\cite{twists}. From the implementation of chemical ``logical gates" (e.g. Ref.~\cite{Showalter}) to the recent spatiotemporal control of morphogenetic patterns in Drosophila embryos \cite{Lucchetta}, and from drop formation control in microfluidics \cite{Joanicot} to the guidance of matter waves in Bose-Einstein condensates \cite{PKEV}, guiding pulses and fronts in complex geometries is an essential task in spatiotemporal pattern control. What makes it increasingly possible is the combination of spatiotemporally finely resolved sensing with equally finely resolved actuation. Whether the pattern-forming system is naturally photosensitive, like the chemical waves in Ref.~\cite{Sakurai}, or rendered photosensitive, like the neurons in Ref.~\cite{Lima} through genetic manipulation, technological developments in finely resolved optical actuation techniques (e.g. see Ref.~\cite{Grier}) are rapidly and radically changing the experimental exploration and control of spatiotemporal pattern formation. In this paper we explore the combined effects of geometry and of spatiotemporally resolved addressability. In particular, we explore the influence of inert boundary geometry on the dynamics of propagating pulses for CO oxidation on a micro-composite Pt(110) surface, and show how these dynamics can be altered using an addressable laser beam. Our geometry of choice for pulse propagation studies is a Y-shaped junction structure, in which sharp boundary curvature changes (corners) can effectively dictate pulse propagation.
We systematically explore the effect of varying the junction geometry on reactive pulse propagation; a related rhomb geometry is also studied with qualitatively similar results. We then show how one can {\it actively} alter the phenomena dictated by geometry through the use of single-shot spatiotemporally localized laser heating. The computational predictions are validated by experimental observations of reactive pulses visualized through Reflection Anisotropy Microscopy (RAM) \cite{RAM}. \section{Modeling} In our simulations we use the three-variable Krischer-Eiswirth-Ertl reaction-diffusion model for CO oxidation on Pt(110) with a surface phase transition described in Ref.~\cite{model}. This surface reaction follows a Langmuir-Hinshelwood mechanism: \[CO+* \leftrightharpoons CO_{ads}\] \[2*+O_2 \rightarrow 2O_{ads}\] \[CO_{ads}+O_{ads} \rightarrow 2*+CO_2\uparrow \] accompanied by a $1\times2\rightarrow1\times1$ phase transition of the Pt(110) surface due to CO adsorption. When the coverage of CO lies between 0.2 and 0.5, the fraction of $1\times1$ surface increases monotonically as the CO coverage increases. The sticking coefficient of oxygen is $50\%$ higher on the $1\times1$ surface as compared to the $1\times2$ surface. The $1\times1$ phase favors oxygen adsorption, which leads to reactive consumption of CO. This can lead to oscillatory behavior of the reaction, and also allows the formation of propagating reaction pulses. The equations for this kinetic model are \[\dot{u}=k_us_up_{co}\left(1-\left(\frac{u}{u_s}\right)^3\right)-k_1u-k_2uv+D_u\nabla^2u\] \[\dot{v}=k_vp_{o_2}(ws_{v_1}+(1-w)s_{v_2})\left(1-\frac{u}{u_s}-\frac{v}{v_s}\right)^2-k_2uv\] \[\dot{w}=k_3(f(u)-w) \] where by $u$, $v$ and $w$ we denote the surface coverage of CO and O, and the surface fraction of the $1\times1$ phase, respectively. The adsorption rate constants for CO and O$_2$, $k_u$ and $k_v$ respectively, are taken to be constant within the temperature range considered here.
The rate constants $k_1$, $k_2$ and $k_3$ for the desorption, reaction and surface phase transition are given by the Arrhenius formula $k_i=k_i^0\exp(-E_i/RT)$, where $T$ is the temperature of the single crystal. The function $f(u)$ is fit to experimental data to give the rate of surface phase transition as a function of $u$, the CO surface coverage, as follows: \[f(u)=\left\{\begin{array}{ccc}0&\mbox{for}&u\leq0.2\\ \frac{u^3-1.05u^2+0.3u-0.026}{-0.0135}&\mbox{for}&0.2<u<0.5\\ 1&\mbox{for}&u\geq0.5\end{array}\right.\] For simplicity, the diffusion of CO is taken to be isotropic and no-flux boundary conditions are used in our simulations. To reflect the influence of laser heating on the dynamics of the reaction, we approximate the local temperature increase caused by a laser spot through a local Gaussian temperature field. The heat generated by the reaction ($\sim1mW$) can be neglected compared to the power of the laser beam ($\sim1W$). Since the diffusivity of adsorbed CO ($\sim10^{-8}cm^2/s$) is much smaller than the thermal diffusivity constant of Pt ($\sim10^{-1}cm^2/s$), we can assume the local Gaussian temperature field is established (resp. vanishes) instantaneously as the laser beam is applied (resp. removed) \cite{dragging,Cisternas}. In our simulations, we use the commercial Finite Element package FEMLAB to compute the time-dependent evolution of reactive pulses in 2D Y-junction geometries; our meshes typically consisted of $\sim 12,000$ linear elements. \section{Computational results} \subsection{Pulse propagation in a Y-junction} The geometry of the Y-junction structure is described through the parameters $W$, $w$, $h$, $\alpha$, and $\theta$ (see the first snapshot in Fig.~\ref{y_propagation}(a)).
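The local (reaction-only) part of the model, the Arrhenius temperature dependence of $k_1$, $k_2$ and $k_3$, and the instantaneously established Gaussian laser-heating field can be sketched in a few lines of code. The sketch below is illustrative only: the parameter dictionary entries (prefactors, activation energies, sticking coefficients, partial pressures) are placeholders, not the constants used in our FEMLAB simulations, and the CO diffusion term $D_u\nabla^2u$ is left to the spatial solver.

```python
import math

def f(u):
    """Piecewise cubic fit for the 1x1 surface fraction as a function of CO coverage u."""
    if u <= 0.2:
        return 0.0
    if u >= 0.5:
        return 1.0
    return (u**3 - 1.05 * u**2 + 0.3 * u - 0.026) / (-0.0135)

def arrhenius(k0, E, T, R=8.314):
    """Arrhenius rate constant k_i = k_i^0 exp(-E_i/(R T)); E in J/mol, T in K."""
    return k0 * math.exp(-E / (R * T))

def laser_T(x, y, T0, dT, x0, y0, sigma):
    """Base temperature T0 plus a Gaussian laser-heating field of amplitude dT
    centered at (x0, y0); assumed to switch on/off instantaneously."""
    return T0 + dT * math.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

def kinetics(u, v, w, T, p):
    """Reaction-only right-hand sides (du/dt, dv/dt, dw/dt).
    The CO diffusion term D_u * laplacian(u) would be added to du by the
    spatial discretization; p holds placeholder model constants."""
    k1 = arrhenius(p["k1_0"], p["E1"], T)
    k2 = arrhenius(p["k2_0"], p["E2"], T)
    k3 = arrhenius(p["k3_0"], p["E3"], T)
    du = p["ku"] * p["su"] * p["pco"] * (1.0 - (u / p["us"])**3) - k1 * u - k2 * u * v
    vacant = 1.0 - u / p["us"] - v / p["vs"]  # fraction of free adsorption sites
    dv = p["kv"] * p["po2"] * (w * p["sv1"] + (1.0 - w) * p["sv2"]) * vacant**2 - k2 * u * v
    dw = k3 * (f(u) - w)
    return du, dv, dw
```

One quick check of the fit: $f(u)$ is continuous at both break points, with $f(0.2)=0$, $f(0.35)=0.5$ and $f(0.5)=1$, consistent with the monotonic growth of the $1\times1$ fraction for CO coverages between 0.2 and 0.5.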
\begin{figure} \centering \includegraphics[width=0.9\columnwidth]{y_none1.eps} \includegraphics[width=0.9\columnwidth]{y_lower1.eps} \includegraphics[width=0.9\columnwidth]{y_both1.eps} \includegraphics[width=0.9\columnwidth]{y_upper1.eps} \caption{(Color online) Pulse propagation in different Y-junction structures. By choosing appropriate geometric parameters, we are able to dictate the pulse transmission patterns. (a) $h=5$, $\theta=\pi$, (b) $h=3.7$, $\theta=\pi$, (c) $h=3.5$, $\theta=\pi$, (d) $h=3.5$, $\theta=\frac{3}{4}\pi$. Other parameters: $W=5$, $w=1.93$, $\alpha=\frac{7}{9}\pi$, $T=535.5K$, $P_{CO}=4.95\times10^{-5}mbar$, $P_{O_2}=2.0\times10^{-4}mbar$.} \label{y_propagation} \end{figure} When a reactive pulse reaches the corners of the Y-junction that are denoted by the two angles $\alpha$ and $\theta$, the pulse front starts becoming convex. The local travelling speed of the pulse decreases due to this convex curvature \cite{K_v1,K_v2}. When the local curvature required to ``go around" one of the corners exceeds a certain limit, the pulse loses stability, ``decollates" from the boundary, and disappears \cite{K_v3}. With appropriate angles $\alpha$ and $\theta$, by adjusting the position of the ``prow" to the exit (i.e. adjusting $h$), we can force the pulse to choose none, one, or both of the channels to propagate in (Fig.~\ref{y_propagation}). The behavior of the pulse also depends strongly on the selected partial pressures of CO and O$_2$. Under reaction conditions for Fig.~\ref{y_propagation}, decreasing the amount of CO in the gas phase stabilizes the pulse. Below some critical value for $P_{CO}$ the pulses will propagate through both channels, essentially independent of the selected geometry. \subsection{Quantifying the geometry-induced instability in reactive pulse propagation} We now begin a quantitative exploration of the geometry-induced instability of pulse propagation in our model system. 
Junctions between different linear ``corridors" are an important building block for complex geometries. The two geometrical parameters characterizing a junction are the width of the channel $W$ and the angle $\theta$ at the corner (Fig.~\ref{y_structure}). When the reaction conditions are fixed, depending on the choice of $W$ and $\theta$, the pulse may or may not be able to pass the corner, as shown in Fig.~\ref{Y_Pco}. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{y_structure1.eps} \caption{Geometry of a corner structure, $r=\frac{W}{\sin(\pi-\theta)}$.} \label{y_structure} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Y_Pco.eps} \caption{(Color online) Critical angle $\theta_c$ for a pulse to turn around a corner, as a function of dimensionless channel width $W$. For fixed CO pressure, pulses in a geometry with $\theta$ and $W$ chosen ``above" the solid lines shown are able to turn around the corner. The solid lines, fitted to data points, are included to guide the eye. The dashed and dotted lines are given by $r_c=\frac{W}{\sin(\pi-\theta_c)}$ for different $r_c$. The unit length of $W$ corresponds to a real length of 3.7${\mu}m$. $T=535.5K$, $P_{O_2}=2.0\times10^{-4}mbar$.} \label{Y_Pco} \end{figure} When a pulse attempts to turn around a corner, if $W$ is small, the entire pulse becomes curved as a circular arc due to the influence of the no-flux boundaries, and the curvature of the pulse is given by $\frac{1}{r}=\frac{\sin(\pi-\theta)}{W}$. If this geometry-determined curvature becomes larger than a critical value $\frac{1}{r_c}$, a constant determined by the dynamics of the system and the reaction conditions, the pulse becomes unstable and disappears~\cite{K_v3}. This disappearance occurs dynamically through a ``decollation" mechanism \cite{2Dring}.
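In this narrow-channel regime the curvature condition yields an explicit pass/fail criterion, and a closed-form critical angle $\theta_c=\pi-\arcsin(W/r_c)$. The short sketch below encodes this purely geometric check; it assumes the obtuse corner angles and small $W$ of the arc regime, with $r_c$ supplied as a known constant set by the reaction conditions.

```python
import math

def pulse_curvature(W, theta):
    """Geometry-imposed curvature 1/r = sin(pi - theta)/W of the arc-shaped
    pulse in a narrow channel of width W meeting a corner of angle theta."""
    return math.sin(math.pi - theta) / W

def passes_corner(W, theta, r_c):
    """A pulse survives the corner only if the imposed curvature stays
    at or below the critical curvature 1/r_c."""
    return pulse_curvature(W, theta) <= 1.0 / r_c

def critical_angle(W, r_c):
    """Critical corner angle theta_c from sin(pi - theta_c) = W/r_c.
    For W > r_c the imposed curvature can never reach 1/r_c, so every
    (obtuse) angle is passable and no critical angle exists."""
    if W > r_c:
        return None
    return math.pi - math.asin(W / r_c)
```

For example, with $W=1$ and $r_c=2$ the critical angle is $5\pi/6$; narrowing the channel to $W=0.5$ pushes $\theta_c$ up toward $\pi$, consistent with the trend of the solid lines in Fig.~\ref{Y_Pco}.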
When $W$ is large, the pulse only curves locally, close to the corner, and further increase in $W$ has little influence on this high local curvature of the pulse. When $W$ is above some critical value (about 6 for the curves in Fig.~\ref{Y_Pco}), this local curvature of the pulse reaches a minimum and becomes independent of $W$ and only a function of $\theta$. If this minimum value of local curvature is still larger than $\frac{1}{r_c}$, the pulse will decollate from the corner and fail to pass through. We can thus rationalize the existence, for each given set of reaction conditions, of a critical angle below which the pulse cannot propagate around the corner for any $W$. Another geometric structure with features similar to the Y-junction is the rhomb constriction, which arises in microcomposite TiO$_2$/Pt checkerboards (Fig.~\ref{rhomb}). \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{rhomb_structure1a.eps} \includegraphics[width=0.9\columnwidth]{rhomb_structure1b.eps} \caption{(a) A snapshot showing a microdesigned TiO$_2$/Pt checkerboard. The inactive TiO$_2$ rhombs appear black while pulses (little arcs in the first snapshot) are propagating on the light-gray Pt(110) surface. (b) A blow-up of the checkerboard structure. To its right, a geometry (symmetric around the centerline) we used in the simulations of the rhomb constriction is depicted.} \label{rhomb} \end{figure} The critical angles $\theta_c$ (for the Y-junction) and $\phi_c$ (for the rhomb constriction) that allow the pulse to propagate through the structure are plotted as a function of the width $W$, as in Fig.~\ref{Y_rhomb_snapshots}, under the same reaction conditions. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Y_rhomb_snapshots.eps} \caption{(Color online) Critical angle for pulse turning around the corner as a function of channel width $W$ for the rhomb constriction and the Y-junction structure.
For the rhomb constriction structure, we plot $\phi_c/2+\pi/2$ vs $W$ instead of $\phi_c$ vs $W$. For rhomb constriction (Y-junction) structures with values of $W$ and $\phi/2+\pi/2$ ($\theta$) chosen ``above" the corresponding curve, pulses are able to turn around the corner. The solid lines, fitted to data points, are included to guide the eye. Snapshots of a pulse propagating in the rhomb constriction structure at points $a$ ($2.7$, $0.88$) and $b$ ($4.0$, $0.88$) are shown in the insets. $T=535.5K$, $P_{CO}=4.95\times10^{-5}mbar$, $P_{O_2}=2.0\times10^{-4}mbar$.} \label{Y_rhomb_snapshots} \end{figure} The y-axis in Fig.~\ref{Y_rhomb_snapshots} for the rhomb constriction structure is chosen to be $(\phi/2 + \pi/2)/\pi$, so that the two critical curves correspond (they are almost identical) when $W$ is small ($<1$). The non-monotonic decrease of $\phi_c$ for the rhomb constriction structure can be rationalized by a simple geometric observation: as $W$ increases, the front and the back of the pulse approach each other more and more closely at the corner (see the insets in Fig.~\ref{Y_rhomb_snapshots}); this can lead to a different mechanism for the decollation of the pulse. \subsection{Pulse manipulation with local laser heating} \subsubsection{Using the laser to assist pulse propagation} The local increase of temperature caused by a short laser ``burst" accelerates the local desorption of CO and may thus assist the propagation of an oxygen pulse. We can therefore use local laser heating to assist pulse propagation around corners that would be too sharp for the pulse to go through at isothermal conditions. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{y_laser_help1.eps} \caption{(Color online) Using local laser heating to assist pulse propagation around corners.
Snapshots (a)-(d) show the coverage of oxygen; corresponding instantaneous temperature fields before, after ((a') and (c') respectively) as well as during the laser shot ((b')) are plotted in the second row. Laser heating is turned on for a total of 0.4s (from t=5.0s to 5.4s) and centered at the lower junction corner; the maximum temperature increase there is $3K$. (a) t=4.5s, (b) t=5.0s, (c) t=5.5s, (d) t=8.0s. $T=535.5K$, $P_{CO}=4.95\times10^{-5}mbar$, $P_{O_2}=2.0\times10^{-4}mbar$. } \label{laser_help} \end{figure} In Fig.~\ref{laser_help}, a local increase of temperature (see the high temperature field close to the corner in Fig.~\ref{laser_help}(b')) reignites the CO-poisoned surface, and reattaches the decollated oxygen pulse back towards the corner (close to the corner in Figs.~\ref{laser_help}(c) and \ref{laser_help}(d)). This result has been confirmed experimentally as we will see below. \subsubsection{Using the laser to prevent pulse propagation} Laser heating can also be used to eliminate a pulse, as shown in Fig.~\ref{laser_prevent}. First, we apply local laser heating for a short time to locally ignite the CO-poisoned catalytic surface far ahead of the pulse without creating a new pulse (Figs.~\ref{laser_prevent}(b) and \ref{laser_prevent}(c)). \begin{figure}[htp] \centering \includegraphics[width=0.9\columnwidth]{y_laser_preventa1.eps} \includegraphics[width=0.9\columnwidth]{y_laser_preventb1.eps} \caption{(Color online) Using local laser heating to prevent pulse propagation. Similar to Fig.~\ref{laser_help}, instantaneous temperature fields ((a')-(c')) are plotted below the corresponding oxygen coverage ((a)-(c)). The local laser heating is turned on for a total of 0.4s (from t=1.5s to 1.9s) and centered in the middle of the lower channel entrance with a maximum temperature increase of $4.5K$. (a) t=1.0s, (b) t=1.5s, (c) t=2.0s, (d) t=7.0s, (e) t=10.0s, (f) t=12.0s.
$T=535.5K$, $P_{CO}=4.95\times10^{-5}mbar$, $P_{O_2}=2.0\times10^{-4}mbar$.} \label{laser_prevent} \end{figure} The local surface slowly reverts to the quenched steady state after the laser heating is turned off (as in Fig.~\ref{laser_prevent}(d)). There exists a refractory period before this area of the surface can be reignited, because the dynamics of the Pt phase-reconstruction are slow. If a propagating pulse passes through this area before the local surface recovers, the pulse may not propagate through this area and ``evaporates" (see Figs.~\ref{laser_prevent}(e) and \ref{laser_prevent}(f)). \section{Experiments} \subsection{Experimental setup} In earlier experiments, propagating chemical waves had to be visualized in the presence of inert Ti boundaries \cite{exp1,exp2}. These measurements were all performed using a photoemission electron microscope (PEEM), which has a sufficient lateral resolution and images the changes of the work function due to the adsorbed species. To achieve its high resolution, the sample has to be at a distance of 4 mm from the electron cathode lens, severely restricting any access to the surface and, in particular, preventing laser beam addressing. At first glance, ellipso-microscopy for surface imaging (EMSI) seems to be a more appropriate choice, since it leaves the whole surface totally open to additional experiments \cite{RAM}. But EMSI relies on differences in adsorbed layer thickness. A microstructured surface features boundaries, laterally confining the chemical waves, that are made of Ti or similar metals and are several hundred ${\AA}$ tall. The signals from those boundaries would easily saturate the image, leaving no contrast to observe the reactive pattern formation. A different imaging method was therefore necessary, one that provided good contrast for the observation of the reaction dynamics, but which was preferably insensitive to the Ti microstructures.
Additionally, a sufficiently large working distance between the imaging instrument and the sample was necessary to allow for local addressing of surface activity by means of focused laser light. Reflection Anisotropy Microscopy (RAM) conveniently combines the required properties. RAM has been extensively used to study CO oxidation on platinum. The first setup basically consisted of a classical ellipsometric configuration under almost normal incidence \cite{RAM}. This setup was later improved by utilizing the intrinsic properties of a Foster prism and working under exactly normal incidence \cite{exp3}. However, the spatial resolution of RAM ($30{\mu}m$) was insufficient to image the relatively weak and thin reaction pulses presented in this paper. In order to improve the spatial resolution significantly, a new instrument was designed and built. Figure~\ref{exp_setup} shows a sketch of the experimental setup. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{exp_setup.eps} \caption{Schematic image of the experimental setup (not to scale). For details see text.} \label{exp_setup} \end{figure} The Pt single crystal is located in a UHV chamber. As a light source for RAM, the $488nm$ line of an $Ar^+$ laser is used. The laser light is collimated and afterwards polarized by means of a Glan-Thompson prism. The dielectric film within the polarizing beam splitter cube reflects about 99.9\% of the incoming vertically pre-polarized light onto the Pt sample. During reflection at the Pt(110) surface, the polarization may change depending on the coverage-dependent surface reconstruction \cite{exp3}. The light propagates back to the beam splitter. The component with retained vertical polarization is reflected back to the light source. The other component, which acquires a parallel polarization due to interactions with the anisotropic Pt(110) surface, is transmitted and used for imaging of the crystal. The light has to pass through a UHV window twice.
This window is specially designed to minimize stress-induced birefringence. The spatial resolution of this new setup is $8{\mu}m$. Thus, even thin pulses close to the limit of stability can now be observed with RAM. Through a second UHV window, the light of an additional $Ar^+$ laser ($514nm$) can be focused onto the Pt surface. The position of the laser spot can be controlled via two computer-controlled galvanometer mirrors. In order to remove unwanted $514nm$ stray light from the RAM imaging path, a dielectric filter is used. \subsection{Experimental results} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{river_delta_laser.eps} \caption{Time-lapse images (with an interval of 3s between consecutive snapshots) showing a dying oxygen-rich pulse ``rescued" by a laser shot centered at $B$. The arrows indicate the propagation direction of the pulses. $T=455K$, $P_{O_2}=2\times10^{-4}mbar$, $P_{CO}=6.42\times10^{-5}mbar$. (a) No laser shot is applied. The pulse decollates at $B$ and enters the upper right channel. (b) A laser shot is applied for $50ms$ with a power of $500mW$ when the lower end of the pulse reaches point $B$. The pulse does not decollate at $B$, and splits into two at $C$ entering both channels.} \label{river_delta} \end{figure} In the experiments, we were able to guide pulse propagation through different local laser heating protocols. We can assist the propagation of a pulse in the direction originally prohibited by the boundary curvature-induced instability through a short-duration laser shot (Fig.~\ref{river_delta}); alternatively, we can apply excessive heating that actually destroys the pulses (Fig.~\ref{crossroads}). In Fig.~\ref{river_delta}, an oxygen-rich pulse is first ignited at the upper left channel and propagates toward the right. It is able to turn around corner $A$, but it decollates at corner $B$ unless it is assisted by a local laser shot.
After a laser shot is applied at the lower end of the pulse at $B$, the local temperature increase reduces the CO coverage of the locally CO-poisoned surface, facilitating oxygen adsorption, so that the pulse is able to pass around corner $B$ (Fig.~\ref{river_delta}(b)). If excessive heating is applied to a propagating CO-rich pulse, removing enough CO from the local surface, the pulse can be locally destroyed. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{crossroads_laser1.eps} \caption{Guidance of CO-rich pulses in a labyrinthine structure. The circles indicate the location where the laser shot is applied. The laser shot duration is 1s with a power of $230mW$, which destroys any part of the pulse within the yellow circle. $T= 451K, P_{o_2}= 2\times10^{-4} mbar$, $P_{co}=5.50\times10^{-5} mbar$. (a) No laser shot is applied and the pulse enters all three channels. (b) A laser shot is applied when the lower end of the pulse is at corner $A$ of the crossing. The pulse only enters the upper channel. (c) A laser shot is applied when the upper end of the pulse passes corner $B$ of the crossing. The pulse is thus prevented from entering the channel on the right.} \label{crossroads} \end{figure} In Fig.~\ref{crossroads}(b), we apply the laser heating for 1s when the lower end of the pulse reaches corner $A$ of the crossing. This erases the lower part of the pulse within the yellow circle and prevents the pulse from entering the right and lower channels. In Fig.~\ref{crossroads}(c), we wait until the {\it upper end} of the pulse passes corner $B$ of the crossing before applying the laser heating. Only pulse propagation into the right channel is thus obstructed. \section{Summary and Conclusions} Spontaneous pattern formation, and the dynamics of the resulting patterns in reacting systems, can be controlled at several levels.
By designing the geometry of the catalytic domain and using a galvanometer-mirror-manipulated laser beam, we have demonstrated here, through experiment and computation, that the propagation of pulses in complex geometries can be guided, facilitated or forbidden in real time. Sudden and intense boundary curvature changes can lead to a fundamental ``decollation'' instability; on the other hand, laser-induced heating can either assist or prohibit pulse propagation depending on its intensity and location. The combination of spatially fine-grained sensing with spatially fine-grained actuation opens up a wide array of possibilities for manipulating physical processes in complex microgeometries. Here we studied heterogeneous reacting systems; similar tools can be used to implement spatiotemporal networks in {\it homogeneous} reacting excitable media (e.g. Ref.~\cite{tinsley}) or electrochemically reacting systems (e.g. Ref.~\cite{Kiss}). Beyond chemically reacting systems, such tools are becoming increasingly useful, and used, in applications as diverse as droplet formation and mixing in microfluidics~\cite{Ismagilov}, directing cell migration~\cite{Jiang-Whitesides} or manipulating coherent matter-waves~\cite{Oberthaler}. {\bf Acknowledgements} This work was partially supported by an NSF/ITR grant and by AFOSR (IGK, LQ); LQ gratefully acknowledges the support of a PPPL Fellowship. The authors also wish to acknowledge the experimental observations of rhomb constrictions in Michael Pollmann's thesis at the FHI.
\section{Conclusion} We present the first large-scale densely-annotated material segmentation dataset, which can be used to train or evaluate indoor and outdoor scene parsing models. \footnote{Our data is available at https://github.com/apple/ml-dms-dataset.} We propose a benchmark on 46 kinds of materials. Our data can be a foundation for algorithms which utilize material type, make use of physical properties for simulation, or draw on functional properties for planning and human-computer interaction. We look forward to expanding the number of materials, finding new methods to reach even better full-scene material segmentation, and combining the point-wise annotations of MINC~\cite{minc} with our data in future work. {\bf Acknowledgements.} We thank Allison Vanderby, Hillary Strickland, Laura Snarr, Mya Exum, Subhash Sudan, Sneha Deshpande, and Doris Guo for their help with acquiring data; Richard Gass, Daniel Kurz and Selim Ben Himane for their support. \section{Data Collection} \label{sec:dataset} \text{DMS}\xspace is a set of dense polygon annotations of 52 material classes across 44,560 images, which are a subset of OpenImages~\cite{openimages2}. We followed a four-step process. First, a set of labels was defined. Next, a large set of images was studied by people and algorithms to select images for annotation. Then, the selected images were fully segmented and labeled by a human annotator. Finally, each segmented image was relabeled by multiple people and a final label map was created by fusing all labels. The last three steps were repeated multiple times. \subsection{Material Labels} \label{sec:dataset1} We chose to predefine a label set, which is the approach of COCO-Stuff~\cite{cocostuff}. This encourages annotators to create consistent labels suitable for machine learning.
We instructed annotators to assign \matlabel{not on list} to recognized materials which do not fit in any category and \matlabel{I cannot tell} to unknown and unrecognizable surfaces (\emph{e.g.}, watermarks and under-/over-saturated pixels). We defined a label set based on appearance, which is the approach of OpenSurfaces~\cite{opensurfaces}. A label can represent a solid substance (\emph{e.g.}, wood), a distinctive arrangement of substances (\emph{e.g.}, brickwork), a liquid (\emph{e.g.}, water) or a useful non-material (\emph{e.g.}, sky). We used 35 labels from OpenSurfaces and \matlabel{asphalt} from~\cite{schwartz2019recognizing}. We added 2 fine-grained people and animal categories (\matlabel{bone} and \matlabel{animal skin}). We introduced 3 labels for workplaces (\matlabel{ceiling tile}, \matlabel{whiteboard} and \matlabel{fiberglass wool}), 6 for indoor scenes (\matlabel{artwork}, \matlabel{clutter}, \matlabel{non-water liquid}, \matlabel{soap}, \matlabel{pearl} and \matlabel{gemstone}) and 4 for outdoors (\matlabel{sand}, \matlabel{snow}, \matlabel{ice} and \matlabel{tree wood}). \matlabel{Artwork} identifies an imitative surface which is photographic or fine art---affording further analysis by Materials In Paintings~\cite{van2021materials}. \matlabel{Clutter} is a region of visually indistinguishable manufactured stuff (typically a mixture of metal, plastic and paper) which occurs in trash piles. Lastly, we defined a label called \matlabel{engineered stone} for artificial surfaces which imitate stone, which includes untextured and laminated solid surfaces. See Figure~\ref{fig:matlabels} for an example of each label. \subsection{Image Selection} \label{sec:dataset2} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/all_occurrences.pdf} \caption{{\bf Image diversity.} We plot number of categories ($y$-axis) vs.
occurrence in images (log-scale $x$-axis) of Places365 scene type (a), COCO objects (b), and SUN attributes (c). Our dataset ({\it blue}) is larger, more diverse and more balanced across categories (higher slope) compared to the largest segmentation dataset ({\it orange}).} \label{fig:occurrences} \end{figure} Bell \emph{et al.}~\cite{minc} found that a balanced set of material labels can achieve nearly the same performance as a 9x larger imbalanced set. Since we collect dense annotations, we cannot directly balance classes. Instead, we searched 191k images for rare materials and assumed common materials would co-occur. Furthermore, we ran Detectron~\cite{detectron} to detect COCO~\cite{coco} objects, and Places365~\cite{places365} to classify scenes and recognize SUN~\cite{sunattributes} attributes. EXIF information was used to infer country. These detections were used to select images of underrepresented scenes, objects and countries. Figure~\ref{fig:occurrences} compares the diversity of the 45k images in \text{DMS}\xspace to the 19k images in OpenSurfaces by a plot of the number of categories, $y$, which have at least $x$ occurrences. Occurrences of scene type, object and SUN attribute are plotted. Note that the $x$-axis is on a logarithmic scale. We find our dataset is more diverse, having more classes present in greater amounts (more than can be explained by the 2.24x difference in image count). We balance the distribution of skin appearance in \text{DMS}\xspace so that algorithms trained with our data perform well on all kinds of skin~\cite{gendershades}. We use Fitzpatrick~\cite{fitzpatrick} skin type to categorize skin into 3 groups, inspired by an approach used by~\cite{towardsfairerdatasets}. We ran the DLIB~\cite{dlib} face detector and labeled a subset of the faces.
Our 157 manual annotations were used to calibrate a preexisting face attribute predictor (trained on a different dataset) which was then used to predict skin types for the rest of \text{DMS}\xspace. We found that the ratio of the largest group to the smallest was 9.4. Next, we selected images which would increase the most underrepresented skin type group and found this reduced the ratio to 2.2. We calibrated the same detector for OpenSurfaces faces and measured its ratio as 10.4. According to the findings of~\cite{gendershades}, we expect skin classifiers trained on OpenSurfaces would underperform on dark skin. Table~\ref{tab:skintypes} shows the distribution of skin types. We used Places365 scene type detection to select outdoor images but we found this did not lead to outdoor materials. We took two steps to address this. First, we annotated our images with one of nine \emph{photographic types} which distinguish outdoor from indoor from unreal images. Table~\ref{tab:photographic_type} shows the annotated types. Next, we used these labels to select outdoor scenes and underrepresented viewpoints. This was effective---growing the dataset by 17\% more than doubled 9 kinds of outdoor materials: \matlabel{ice} (3x), \matlabel{sand} (4.4x), \matlabel{sky} (8x), \matlabel{snow} (9.5x), \matlabel{soil} (3x), \matlabel{natural stone} (2.4x), \matlabel{water} (2.5x), \matlabel{tree wood} (2.3x) and \matlabel{asphalt} (9.2x). \begin{table}[t] \centering \caption{{\bf Skin types.} We report estimated occurrences. 
Our dataset has 12x more occurrences of the smallest group and 4.8x more fair representation by ratio.} \label{tab:skintypes} \begin{tabular}{@{}lrr@{}}\toprule & OpenSurfaces & \text{DMS}\xspace (Ours)\\\midrule Type I-II (light) & 2,332 & 4,535\\ Type III-IV (medium) & 3,889 & 9,776\\ Type V-VI (dark) & 375 & 5,899\\\midrule Ratio of largest to smallest group & 10.37\,:\,1 & 2.16\,:\,1\\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{{\bf Photographic types.} Our data contains indoor views ({\it top}), outdoor views ({\it middle}), and close-up and unusual views ({\it bottom}).} \label{tab:photographic_type} \begin{minipage}[c]{0.69\linewidth} \begin{tabular}{@{}lp{3mm}r@{}}\toprule Photographic Type & & Images \\\midrule An area with visible enclosure & & 16,013 \\ A collection of indoor things & & 6,064 \\ A tightly cropped indoor thing & & 2,634 \\\midrule A ground-level view of reachable outdoor things & & 3,127 \\ A tightly cropped outdoor thing & & 1,196 \\ Distant unreachable outdoor things & & 971 \\\midrule A real surface without context & & 847 \\ Not a real photo & & 805 \\ An obstructed or distorted view & & 204 \\ \bottomrule \end{tabular} \end{minipage} \begin{minipage}[c]{0.275\linewidth} \includegraphics[height=10.9ex]{figures/example_photographic_types/8_22591_crop.jpg}\hfill \includegraphics[height=10.9ex]{figures/example_photographic_types/7_22711_crop.jpg}\hfill \includegraphics[height=10.9ex]{figures/example_photographic_types/6_228861_crop.jpg} \includegraphics[height=10.9ex]{figures/example_photographic_types/9_232645_crop.jpg}\hfill \includegraphics[height=10.9ex]{figures/example_photographic_types/5_228134_crop.jpg}\hfill \includegraphics[height=10.9ex]{figures/example_photographic_types/4_193852_crop.jpg} \includegraphics[height=10.9ex]{figures/example_photographic_types/2_230376_crop.jpg}\hfill \includegraphics[height=10.9ex]{figures/example_photographic_types/1_46126_crop.jpg}\hfill 
\includegraphics[height=10.9ex]{figures/example_photographic_types/3_57208_crop.jpg} \end{minipage} \end{table} \subsection{Segmentation and Instances} \label{sec:dataset3} Images were given to annotators for polygon segmentation of the entire image. We instructed annotators to segment parts larger than a fingertip, ignore gaps smaller than a finger, and to follow material boundaries tightly while ignoring geometry and shadow boundaries. Following~\cite{opensurfaces}, annotators were instructed to segment glass and mirror surfaces rather than the covered or reflected surfaces. Unreal elements such as borders and watermarks were segmented separately. Images with objectionable content (\emph{e.g.}, violence) were not annotated. Annotators segmented resized images, with a median longest edge of 1024 pixels, creating over 3.2 million segments (counting only those larger than 100 pixels) with a mean of 72 segments per image. The created segments are detailed---wires, jewelry, teeth, eyebrows, shoe soles, wheel rims, door hinges, clasps, buttons and latches are some of the small and thin materials segmented separately. See Figure~\ref{fig:teaser} and Figure~\ref{fig:fusedlabels} for examples of detailed segmentations. We defined a material instance as materials of the same type from the same manufacturing source. For example, a wooden cabinet should be segmented separately from a wood floor, but the planks making up a single-source floor would be one instance. \text{DMS}\xspace is the first large-scale densely segmented dataset to have detailed material instances. \begin{table}[t] \centering \caption{{\bf Annotator agreement rates.} High rates indicate consistent label assignment.
Low rates indicate disagreement, confusion or unstructured error.} \label{tab:agreement} \begin{tabular}{@{}llp{3mm}llp{3mm}llp{3mm}ll@{}}\toprule Hair & 0.95 & & Glass & 0.80 & & Wood & 0.67 & & Non-clear plastic & 0.60\\ Skin & 0.93 & & Paper & 0.76 & & Tree wood & 0.66 & & Leather & 0.53\\ Foliage & 0.86 & & Carpet/rug & 0.73 & & Tile & 0.66 & & Cardboard & 0.53\\ Sky & 0.86 & & Nat. stone & 0.72 & & Metal & 0.65 & & Artwork & 0.51\\ Food & 0.84 & & Ceramic & 0.70 & & Paint/plaster & 0.62 & & Clear plastic & 0.50\\ Fabric/cloth & 0.82 & & Mirror & 0.68 & & Rubber & 0.61 & & Concrete & 0.45\\ \bottomrule \end{tabular} \end{table} \subsection{Labeling} The annotator who segmented an image also assigned labels based on their judgment and our instruction. We found that surfaces coated with another material or colored by absorbing ink required clarification. Appearance-changing coatings were labeled \matlabel{paint} while clear or appearance-enhancing coatings (\emph{e.g.}, varnish, cosmetics, sheer hosiery) were labeled as the underlying material. Small amounts of ink (\emph{e.g.}, printed text) were disregarded. Some surfaces imitate the appearance of other materials (\emph{e.g.}, laminate). High-quality imitations were labeled as the imitated material and low-quality imitations as the real material. Our instructions were refined in each iteration and incorrect labels from early iterations were corrected. Some cases needed special instruction. We instructed annotators to label electronic displays as \matlabel{glass} and vinyl projection screens as \matlabel{not on list}. Uncovered artwork or photographs were to be labeled \matlabel{artwork} while glass-covered art should be labeled \matlabel{glass}. In ambiguous cases, we assume framed artwork has a glass cover. \matlabel{Sky} includes day sky, night sky and aerial phenomena (\emph{e.g.}, clouds, stars, moon, and sun).
We collected more opinions by presenting a segmentation, after removing labels, to a different annotator who relabeled the segments. The relabeling annotator could fix bad segments by adjusting polygons or assign special labels to indicate a segment does not follow boundaries or is made of multiple material types. We collected 98,526 opinions across 44,560 images consisting of 8.2 million segment labels (counting only segments larger than 100 pixels). We studied label agreement by counting how often a segment's label matched the pixel-wise dominant label assigned by a different annotator. We found an agreement rate of 0.675. In cases of agreement, 8.9\% were unrecognizable (\matlabel{I cannot tell}) and 0.6\% were \matlabel{not on list}. Table~\ref{tab:agreement} shows the agreement rate for classes larger than the median number of segments per class. Among the largest classes the most agreed-upon labels are \matlabel{hair}, \matlabel{skin}, \matlabel{foliage}, \matlabel{sky}, and \matlabel{food}. We only analyze the largest classes since unstructured error (\emph{e.g.}, misclicks) can overwhelm the statistics of small classes, which are up to 2,720 times smaller. \begin{figure}[t] \includegraphics[height=12.1ex]{figures/fusedlabel_examples/fused_202582_crop.png}\hfill \includegraphics[height=12.1ex]{figures/fusedlabel_examples/fused_138031.png}\hfill \includegraphics[height=12.1ex]{figures/fusedlabel_examples/fused_219628_crop.png}\hfill \includegraphics[height=12.1ex]{figures/fusedlabel_examples/fused_41296.png}\hfill \includegraphics[height=12.1ex]{figures/fusedlabel_examples/fused_53567.png}\hfill \includegraphics[height=12.1ex]{figures/fusedlabel_examples/fused_56373_crop.png} \caption{{\bf Fused labels.} We show segmentation quality and variety of scenes, activities and materials ({\it left to right:} building exterior, workplace, road, swimming pool, shop, dining room). See Table~\ref{tab:fusedcount} for color legend.
Black pixels are unlabeled (no consensus).} \label{fig:fusedlabels} \end{figure} \begin{table}[t] \centering \caption{{\bf Material occurrence in images.} We report the number of images in which a label occurs. The colors are used for visualizations.} \label{tab:fusedcount} \begin{tabular}{@{}p{3.1mm}lrp{3mm}p{3.1mm}lrp{3mm}p{3.1mm}lr@{}}\toprule \cellcolor{matcolor29} & Paint/plaster & 39,323 & & \cellcolor{matcolor38} & Sky & 3,306 & & \cellcolor{matcolor8} & Chalkboard & 668 \\ \cellcolor{matcolor13} & Fabric/cloth & 31,489 & & \cellcolor{matcolor27} & Mirror & 3,242 & & \cellcolor{matcolor56} & Asphalt & 474 \\ \cellcolor{matcolor34} & Non-clear plas& 30,506 & & \cellcolor{matcolor4} & Cardboard & 3,150 & & \cellcolor{matcolor15} & Fire & 412 \\ \cellcolor{matcolor26} & Metal & 30,504 & & \cellcolor{matcolor17} & Food & 2,908 & & \cellcolor{matcolor19} & Gemstone & 369 \\ \cellcolor{matcolor20} & Glass & 28,934 & & \cellcolor{matcolor10} & Concrete & 2,853 & & \cellcolor{matcolor42} & Sponge & 326 \\ \cellcolor{matcolor52} & Wood & 24,248 & & \cellcolor{matcolor6} & Ceiling tile & 2,524 & & \cellcolor{matcolor12} & Eng. stone & 299 \\ \cellcolor{matcolor30} & Paper & 20,763 & & \cellcolor{matcolor43} & Natural stone & 2,076 & & \cellcolor{matcolor25} & Liquid & 294 \\ \cellcolor{matcolor37} & Skin & 18,524 & & \cellcolor{matcolor48} & Water & 2,063 & & \cellcolor{matcolor31} & Pearl & 282 \\ \cellcolor{matcolor21} & Hair & 17,766 & & \cellcolor{matcolor53} & Tree wood & 2,026 & & \cellcolor{matcolor11} & Cork & 273 \\ \cellcolor{matcolor16} & Foliage & 11,384 & & \cellcolor{matcolor51} & Wicker & 1,895 & & \cellcolor{matcolor36} & Sand & 272 \\ \cellcolor{matcolor46} & Tile & 10,173 & & \cellcolor{matcolor41} & Soil/mud & 1,855 & & \cellcolor{matcolor39} & Snow & 191 \\ \cellcolor{matcolor5} & Carpet/rug & 9,516 & & \cellcolor{matcolor44} & Pol. 
stone & 1,831 & & \cellcolor{matcolor40} & Soap & 154 \\ \cellcolor{matcolor7} & Ceramic & 8,314 & & \cellcolor{matcolor3} & Brickwork & 1,654 & & \cellcolor{matcolor9} & Clutter & 128 \\ \cellcolor{matcolor35} & Rubber & 7,811 & & \cellcolor{matcolor18} & Fur & 1,567 & & \cellcolor{matcolor23} & Ice & 96 \\ \cellcolor{matcolor24} & Leather & 7,354 & & \cellcolor{matcolor50} & Whiteboard & 1,171 & & \cellcolor{matcolor45} & Styrofoam & 88 \\ \cellcolor{matcolor33} & Clear plastic & 6,431 & & \cellcolor{matcolor49} & Wax & 1,107 & & \cellcolor{matcolor14} & Fiberglass wool & 33 \\ \cellcolor{matcolor32} & Artwork & 4,344 & & \cellcolor{matcolor47} & Wallpaper & 1,076 \\ \cellcolor{matcolor2} & Bone/horn & 3,751 & & \cellcolor{matcolor1} & Animal skin & 1,007 \\ \bottomrule \end{tabular} \end{table} \subsection{Label Fusion} Each annotator's segments were rendered to create a label map. Label maps were inspected for correctness and we fixed incorrect labels in 1,803 images. Next, we created a single \emph{fused label map} for each image. First, we combined label maps pixel-wise by taking the strict majority label. Next, we overlaid manual corrections and reassigned non-semantic labels (\emph{e.g.}, \matlabel{I cannot tell}) to \matlabel{no label}. The fused maps have a mean labeled area fraction of 0.784. For comparison, we created fused label maps for OpenSurfaces and found its density is 0.210. \text{DMS}\xspace is 2.3x larger and 3.7x denser, which is 8.4x more labeled area. Compared to the 3M points in MINC~\cite{minc}, \text{DMS}\xspace has 3.2M fused segments, which carry more information about shape, boundary and co-occurrences. While MINC annotations span 10x more images, point annotations cannot evaluate segmentation boundaries for scene parsing tasks. Example fused maps and class occurrences are shown in Figure~\ref{fig:fusedlabels} and Table~\ref{tab:fusedcount}.
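The fusion step above is a pixel-wise strict-majority vote: a pixel keeps a label only when more than half of the annotators agree, and otherwise remains unlabeled (rendered black in the figures). A minimal sketch of this rule, where the array conventions and the \texttt{NO\_LABEL} sentinel are our own assumptions rather than the released pipeline (which also overlays manual corrections), is:

```python
import numpy as np

NO_LABEL = 0  # assumed sentinel for "no consensus" pixels (rendered black)

def fuse_label_maps(label_maps):
    """Fuse per-annotator label maps of shape (H, W) by strict majority.

    A pixel keeps a label only if more than half of the annotators
    chose it; otherwise it stays unlabeled.
    """
    stack = np.stack(label_maps)                # (num_annotators, H, W)
    n = stack.shape[0]
    fused = np.full(stack.shape[1:], NO_LABEL, dtype=stack.dtype)
    for label in np.unique(stack):              # fine for tens of classes
        if label == NO_LABEL:
            continue
        votes = (stack == label).sum(axis=0)    # per-pixel vote count
        fused[votes * 2 > n] = label            # strict majority: votes > n/2
    return fused
```

The mean labeled area fraction reported above then corresponds to the fraction of fused pixels not equal to the sentinel.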
The smallest class appears in 33 images whereas the largest class, \matlabel{paint}, appears in 39,323 images, which is 88\% of the images. \captionsetup[subfigure]{labelformat=empty} \begin{figure}[t] \subfloat[Asphalt]{\includegraphics[height=7.5ex]{figures/class_examples/56_asphalt_small.jpg}}\hfill \subfloat[Bone]{\includegraphics[height=7.5ex]{figures/class_examples/2_bone.jpg}}\hfill \subfloat[Brick]{\includegraphics[height=7.5ex]{figures/class_examples/3_brick_small.jpg}}\hfill \subfloat[Eng. stone]{\includegraphics[height=7.5ex]{figures/class_examples/12_engineeredstone_crop.jpg}}\hfill \subfloat[Fabric]{\includegraphics[height=7.5ex]{figures/class_examples/13_fabric_small.jpg}}\hfill \subfloat[Carpet]{\includegraphics[height=7.5ex]{figures/class_examples/5_carpet_small.jpg}}\hfill \subfloat[Ceiling tile]{\includegraphics[height=7.5ex]{figures/class_examples/6_ceilingtile.jpg}}\hfill \subfloat[Ceramic]{\includegraphics[height=7.5ex]{figures/class_examples/7_ceramic.jpg}}\hfill \subfloat[Wax]{\includegraphics[height=7.5ex]{figures/class_examples/49_wax_crop.jpg}} \subfloat[Wallpaper]{\includegraphics[height=7.5ex]{figures/class_examples/47_wallpaper_small.jpg}}\hfill \subfloat[Clear plastic]{\includegraphics[height=7.5ex]{figures/class_examples/33_clearplastic_small.jpg}}\hfill \subfloat[Plastic]{\includegraphics[height=7.5ex]{figures/class_examples/34_plastic_small.jpg}}\hfill \subfloat[Concrete]{\includegraphics[height=7.5ex]{figures/class_examples/10_concrete_small.jpg}}\hfill \subfloat[Artwork]{\includegraphics[height=7.5ex]{figures/class_examples/32_photograph.jpg}}\hfill \subfloat[Cardboard]{\includegraphics[height=7.5ex]{figures/class_examples/4_cardboard_small.jpg}}\hfill \subfloat[Chalkboard]{\includegraphics[height=7.5ex]{figures/class_examples/8_chalkboard.jpg}}\hfill \subfloat[Fiberglass]{\includegraphics[height=7.5ex]{figures/class_examples/14_fiberglass.jpg}}\hfill 
\subfloat[Rubber]{\includegraphics[height=7.5ex]{figures/class_examples/35_rubber_masked_small.jpg}} \subfloat[Fur]{\includegraphics[height=7.5ex]{figures/class_examples/18_fur_small.jpg}}\hfill \subfloat[Foliage]{\includegraphics[height=7.5ex]{figures/class_examples/16_foliage_small.jpg}}\hfill \subfloat[Food]{\includegraphics[height=7.5ex]{figures/class_examples/17_food_small.jpg}}\hfill \subfloat[Hair]{\includegraphics[height=7.5ex]{figures/class_examples/21_hair.jpg}}\hfill \subfloat[Cork]{\includegraphics[height=7.5ex]{figures/class_examples/11_cork.jpg}}\hfill \subfloat[Fire]{\includegraphics[height=7.5ex]{figures/class_examples/15_fire_small.jpg}}\hfill \subfloat[Gemstone]{\includegraphics[height=7.5ex]{figures/class_examples/19_gemstone.jpg}}\hfill \subfloat[Glass]{\includegraphics[height=7.5ex]{figures/class_examples/20_glass_small.jpg}}\hfill \subfloat[Ice]{\includegraphics[height=7.5ex]{figures/class_examples/23_ice_small.jpg}} \subfloat[Paper]{\includegraphics[height=7.3ex]{figures/class_examples/30_paper_small.jpg}}\hfill \subfloat[Leather]{\includegraphics[height=7.3ex]{figures/class_examples/24_leather_small.jpg}}\hfill \subfloat[Liquid]{\includegraphics[height=7.3ex]{figures/class_examples/25_liquid.jpg}}\hfill \subfloat[Metal]{\includegraphics[height=7.3ex]{figures/class_examples/26_metal_small.jpg}}\hfill \subfloat[Mirror]{\includegraphics[height=7.3ex]{figures/class_examples/27_mirror_small.jpg}}\hfill \subfloat[Paint]{\includegraphics[height=7.3ex]{figures/class_examples/29_paint_masked_small.jpg}}\hfill \subfloat[Pearl]{\includegraphics[height=7.3ex]{figures/class_examples/31_pearl_crop.jpg}}\hfill \subfloat[Sponge]{\includegraphics[height=7.3ex]{figures/class_examples/42_sponge.jpg}} \subfloat[Soap]{\includegraphics[height=7.5ex]{figures/class_examples/40_soap_crop.jpg}}\hfill \subfloat[Clutter]{\includegraphics[height=7.5ex]{figures/class_examples/9_clutter_crop.jpg}}\hfill 
\subfloat[Wicker]{\includegraphics[height=7.5ex]{figures/class_examples/51_wicker_small.jpg}}\hfill \subfloat[Snow]{\includegraphics[height=7.5ex]{figures/class_examples/39_snow_small.jpg}}\hfill \subfloat[Sand]{\includegraphics[height=7.5ex]{figures/class_examples/36_sand_small.jpg}}\hfill \subfloat[Skin]{\includegraphics[height=7.5ex]{figures/class_examples/37_skin.jpg}}\hfill \subfloat[Sky]{\includegraphics[height=7.5ex]{figures/class_examples/38_sky_small.jpg}}\hfill \subfloat[Soil]{\includegraphics[height=7.5ex]{figures/class_examples/41_soil_small.jpg}} \subfloat[Nat. stone]{\includegraphics[height=7.5ex]{figures/class_examples/43_stone_small.jpg}}\hfill \subfloat[Pol. stone]{\includegraphics[height=7.5ex]{figures/class_examples/44_polishedstone_crop.jpg}}\hfill \subfloat[Styrofoam]{\includegraphics[height=7.5ex]{figures/class_examples/45_styrofoam_crop.jpg}}\hfill \subfloat[Tile]{\includegraphics[height=7.5ex]{figures/class_examples/46_tile_crop_small.jpg}}\hfill \subfloat[Water]{\includegraphics[height=7.5ex]{figures/class_examples/48_water_crop_small.jpg}}\hfill \subfloat[Whiteboard]{\includegraphics[height=7.5ex]{figures/class_examples/50_whiteboard_small.jpg}}\hfill \subfloat[Wood]{\includegraphics[height=7.5ex]{figures/class_examples/52_wood_small.jpg}}\hfill \subfloat[Tree wood]{\includegraphics[height=7.5ex]{figures/class_examples/53_treewood_small.jpg}}\hfill \subfloat[Animal skin]{\includegraphics[height=7.5ex]{figures/class_examples/1_hide_small.jpg}} \caption{{\bf Material labels.} For each label we show a cut-out example.} \label{fig:matlabels} \end{figure} \section{Discussion and Conclusion} \label{sec:conclusion} {\bf Dense Annotation.} Prior works~\cite{opensurfaces,minc,schwartz2019recognizing} instruct annotators to locate and segment regions made of a given material. Our approach is different. We instruct annotators to segment and label the entire image. 
This approach collects different data because annotators address all surfaces---not just those which are readily recognized. We hypothesize this creates a more difficult dataset, and propose that this approach is necessary for evaluation of scene parsing, which predicts all pixels. {\bf Real vs. Synthetic.} Synthetic data has achieved high levels of realism (\emph{e.g.}, Hypersim~\cite{hypersim}) and may be a valuable generator of training data. We opted to label real photos because models trained on synthetic data need a real evaluation dataset to confirm the domain gap from synthetic to real has been bridged. {\bf Privacy.} Material predictions can be personal. Knowing a limb is not made of skin reveals a prosthetic. The amount of body hair reveals one aspect of appearance. Precious materials in a home reveal socio-economic status. Clothing material indicates degree of nakedness. Care is needed if material segmentation is tied to identity. Limiting predicted materials to only those needed by an application, or separating personal materials from identity, are two ways, among many possible ways, to strengthen privacy and protect personal information. \section{Experiments} \label{sec:experiments} First, we investigate the impact of our data on training deep learning models with a cross-dataset comparison (Section~\ref{sec:crossdataset}). Then, we compare the impact of skin type distributions on fairness of skin recognition (Section~\ref{sec:skin}). Next, we establish a material segmentation benchmark for 46 kinds of materials (Section~\ref{sec:baseline}). Finally, we show predictions on real world images (Section~\ref{sec:real}). {\bf Splits.} We created train, validation and test splits for our data by assigning images according to material occurrence. The smallest classes are assigned a ratio of 1\,:\,1\,:\,1, which increases to 2.5\,:\,1\,:\,1 for the largest.
An image assignment impacts the ratio of multiple classes, so small classes are assigned first. There are 24,255 training images, 10,139 validation images and 10,166 test images. \subsection{Cross-Dataset Comparison} \label{sec:crossdataset} Does training with our data lead to a better model? This experiment compares a model fit to our data against two baselines fit to OpenSurfaces data---the strongest published model~\cite{upernet} and a model with the same architecture as ours. There are two sources of data. The first is OpenSurfaces data with the splits and 25 labels proposed by~\cite{upernet}. The second is comparable \text{DMS}\xspace training and validation data (\cite{upernet} does not define a test split) created by translating our labels to match~\cite{upernet}. The evaluation set, which we call Avg-Val, is made of both parts---the validation sets of OpenSurfaces and \text{DMS}\xspace, called OS-Val and \text{DMS}\xspace-Val, respectively---weighted equally. For evaluation of our data we fit models to \text{DMS}\xspace training data and choose the model that performs best on \text{DMS}\xspace-Val. This model, which we call \text{DMS}\xspace-25, is a ResNet-50 architecture~\cite{he2016deep} with dilated convolutions~\cite{chen2017deeplab,yu2015multi} as the encoder, and the Pyramid Pooling Module from PSPNet~\cite{zhao2017pyramid} as the decoder. The first baseline (Table~\ref{tab:results1}, row 2) is UPerNet~\cite{upernet}, a multitask scene parsing model which uses cross-domain knowledge to boost material segmentation performance. The second baseline (Table~\ref{tab:results1}, row 3), called OS-25, has the same architecture as \text{DMS}\xspace-25 but is fit to OpenSurfaces training data. Table~\ref{tab:results1} shows the results. We report per-pixel accuracy (Acc), mean class accuracy (mAcc), mean intersection-over-union (mIoU) and $\Delta$, the absolute difference in a metric across \text{DMS}\xspace-Val and OS-Val. A low $\Delta$ indicates a model is more consistent across datasets.
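The three reported metrics can all be derived from a per-class confusion matrix accumulated over validation pixels. As a hedged sketch (the function name and matrix convention are ours, not part of the benchmark code):

```python
import numpy as np

def segmentation_metrics(conf):
    """Return (Acc, mAcc, mIoU) from a (C, C) confusion matrix.

    conf[i, j] counts pixels of ground-truth class i predicted as class j;
    assumes every class occurs at least once in the ground truth.
    """
    conf = conf.astype(float)
    tp = np.diag(conf)                                    # true positives
    acc = tp.sum() / conf.sum()                           # per-pixel accuracy
    per_class_acc = tp / conf.sum(axis=1)                 # recall per class
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)
    return acc, per_class_acc.mean(), iou.mean()
```

Avg-Val is then the equal-weight mean of a metric computed separately on \text{DMS}\xspace-Val and OS-Val, and $\Delta$ is the absolute difference between the two values.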
We find that fitting a model to \text{DMS}\xspace training data leads to higher performance and a lower $\Delta$ on all metrics. We also report the metrics on each validation set and find that both baselines underperform on \text{DMS}\xspace-Val. We find that \text{DMS}\xspace-25 performs 0.01 lower on OS-Val mAcc compared to a model trained on OpenSurfaces data. This may be due to differences in annotation and image variety. We use our photographic type labels to investigate the larger performance gaps on \text{DMS}\xspace-Val. Why do models trained with OpenSurfaces underperform on our validation images? In Table~\ref{tab:results2} we report per-pixel accuracy of \text{DMS}\xspace-25, UPerNet, and OS-25 across nine categories. We find that \text{DMS}\xspace-25 performs consistently across categories, with the lowest performing category (unreal images) 0.071 below the highest performing category (images of enclosed areas). UPerNet shows lower performance across all categories, with a drop of 0.426 from images of enclosed areas to images of distant outdoor things, and OS-25 shows similar behavior, with a drop of 0.407. We observe that both UPerNet and OS-25 have low performance on outdoor images and images without any context. This study shows that photographic types can improve our understanding of how material segmentation models perform in different settings. These results also justify our decision to collect outdoor images and images of different photographic types. \begin{table}[t] \centering \caption{{\bf Training data evaluation.} We compare segmentation of 25 materials with our training data ({\it row 1}) to OpenSurfaces data with two kinds of models ({\it rows 2 and 3}). Avg-Val is the equally-weighted combination of the validation sets of each dataset, \text{DMS}\xspace-Val and OS-Val. $\Delta$ is the difference in a metric across datasets.
A convnet fit to our data achieves higher performance and is more consistent across datasets.} \label{tab:results1} \begin{tabular}{@{}lrlrlrcccccc@{}}\toprule Training data & & Model & & Metric & & {Avg-Val }$\uparrow$ & $\Delta\downarrow$ & {\text{DMS}\xspace-Val }$\uparrow$ & {OS-Val }$\uparrow$\\\midrule & & & & Acc & & {\bf 0.777} & {\bf 0.047} & 0.753 & 0.800\\ \text{DMS}\xspace (Ours) & & DMS-25 & & mAcc & & {\bf 0.689} & {\bf 0.006} & 0.686 & 0.692\\ & & & & mIoU & & {\bf 0.500} & {\bf 0.014} & 0.507 & 0.493\\\midrule & & & & Acc & & 0.682 & 0.310 & 0.527 & 0.837\\ OpenSurfaces~\cite{opensurfaces} & & UPerNet~\cite{upernet} & & mAcc & & 0.486 & 0.274 & 0.349 & 0.623\\ & & & & mIoU & & 0.379 & 0.298 & 0.230 & 0.528\\\midrule & & & & Acc & & 0.705 & 0.231 & 0.589 & 0.820\\ OpenSurfaces~\cite{opensurfaces} & & OS-25 & & mAcc & & 0.606 & 0.193 & 0.509 & 0.702\\ & & & & mIoU & & 0.416 & 0.199 & 0.316 & 0.515\\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{{\bf Performance analysis with photographic types.} A model fit to our data, \text{DMS}\xspace-25 ({\it Table~\ref{tab:results1}, row 1}), performs well on all photographic types whereas two models fit to OpenSurfaces, UPerNet and OS-25 ({\it Table~\ref{tab:results1}, rows 2-3}) have low performance outdoors ({\it middle}) and on surfaces without any context ({\it row 7}).} \label{tab:results2} \begin{tabular}{@{}lccccc@{}}\toprule \multicolumn{1}{l}{Photographic Type} & \multicolumn{5}{c}{Per-Pixel Accuracy}\\ \cmidrule{2-6} & \text{DMS}\xspace-25 (Ours) & & UPerNet~\cite{upernet} & & OS-25\\\midrule An area with visible enclosure & 0.756 & & 0.615 & & 0.632\\ A collection of indoor things & 0.752 & & 0.546 & & 0.622\\ A tightly cropped indoor thing & 0.710 & & 0.441 & & 0.561\\\midrule A view of reachable outdoor things & 0.750 & & 0.265 & & 0.388\\ A tightly cropped outdoor thing & 0.731 & & 0.221 & & 0.359\\ Distant unreachable outdoor things & 0.736 & & 0.189 & & 
0.225\\\midrule A real surface without context & 0.691 & & 0.222 & & 0.348\\ Not a real photo & 0.685 & & 0.528 & & 0.551\\ An obstructed or distorted view & 0.729 & & 0.370 & & 0.496\\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{{\bf Test set results.} We report metrics for our model, \text{DMS}\xspace-46. 17 materials, in italics, are new---not predicted by prior general-purpose models~\cite{minc,upernet,schwartz2019recognizing}.} \label{tab:OM46_per_class_testset} \begin{tabular}{@{}lccp{3mm}lccp{3mm}lcc@{}}\toprule Category & Acc & IoU & & Category & Acc & IoU & & Category & Acc & IoU\\\midrule Sky & 0.962 & 0.892 & & \textit{Chalkboard} & 0.712 & 0.548 & & \textit{Artwork} & 0.454 & 0.301 \\ Fur & 0.910 & 0.707 & & Paint/plaster & 0.694 & 0.632 & & Mirror & 0.452 & 0.278 \\ Foliage & 0.902 & 0.761 & & Wicker & 0.674 & 0.460 & & \textit{Sand} & 0.444 & 0.340 \\ Skin & 0.886 & 0.640 & & Natural stone & 0.665 & 0.436 & & \textit{Ice} & 0.440 & 0.362 \\ Hair & 0.881 & 0.673 & & Glass & 0.653 & 0.483 & & \textit{Tree wood} & 0.428 & 0.261 \\ Food & 0.868 & 0.668 & & Asphalt & 0.628 & 0.442 & & Pol. stone & 0.379 & 0.236 \\ \textit{Ceiling tile} & 0.867 & 0.611 & & Leather & 0.615 & 0.373 & & \textit{Clear plastic} & 0.360 & 0.222 \\ Water & 0.866 & 0.712 & & \textit{Snow} & 0.610 & 0.465 & & Rubber & 0.255 & 0.163 \\ Carpet/rug & 0.849 & 0.592 & & Concrete & 0.603 & 0.304 & & \textit{Clutter} & 0.182 & 0.152 \\ \textit{Whiteboard} & 0.838 & 0.506 & & Metal & 0.575 & 0.303 & & \textit{Fire} & 0.176 & 0.147 \\ Fabric/cloth & 0.801 & 0.692 & & \textit{Wax} & 0.573 & 0.371 & & \textit{Gemstone} & 0.116 & 0.096 \\ Wood & 0.797 & 0.635 & & Cardboard & 0.570 & 0.363 & & Eng. 
stone & 0.088 & 0.071 \\ Ceramic & 0.757 & 0.427 & & Wallpaper & 0.544 & 0.329 & & \textit{Cork} & 0.082 & 0.066 \\ Brickwork & 0.746 & 0.491 & & \textit{Non-clear plastic} & 0.519 & 0.321 & & \textit{Bone/horn} & 0.074 & 0.070 \\ Paper & 0.729 & 0.508 & & Soil/mud & 0.511 & 0.332 \\ Tile & 0.722 & 0.550 & & \textit{Animal skin} & 0.472 & 0.308 \\ \bottomrule \end{tabular} \end{table} \subsection{Recognition of Different Skin Types} \label{sec:skin} Models trained on face datasets composed of unbalanced skin types exhibit classification disparities~\cite{gendershades}. Does this impact skin recognition? Without any corrections for skin type imbalance we find that \text{DMS}\xspace-25 has a 3\% accuracy gap among different skin types on \text{DMS}\xspace-val (Type I-II: 0.933, Type III-IV: 0.924, Type V-VI: 0.903) while OS-25 has a larger gap of 13.3\% (Type I-II: 0.627, Type III-IV: 0.571, Type V-VI: 0.494). This confirms that skin type imbalance impacts skin recognition. Our contribution lies in providing more data for all skin types (Table~\ref{tab:skintypes}), which makes it easier for practitioners to create fair models. \subsection{A Material Segmentation Benchmark} \label{sec:baseline} It is common practice to select large categories and combine smaller ones (our smallest occurs in only 12 training images) for a benchmark. Yet, we cannot know {\it a priori} how much training data is sufficient to learn a category. We choose to be guided by the validation data. We fit many models to all 52 categories then inspect the results to determine which categories can be reliably learned. We select ResNet50~\cite{he2016deep} with dilated convolutions~\cite{chen2017deeplab,yu2015multi} as the encoder, and Pyramid Pooling Module from PSPNet~\cite{zhao2017pyramid} as the decoder. We choose this architecture because it has been shown to be effective for scene parsing~\cite{zhao2017pyramid,zhou2019semantic}. 
Our best model, which we call \text{DMS}\xspace-52, predicts 52 materials with per-pixel accuracy 0.735, mean class accuracy 0.535 and mIoU 0.392 on \text{DMS}\xspace-val. We inspected a few of the strongest fitted \text{DMS}\xspace-52 models and found that 6 categories consistently stood out as underperforming---having 0 accuracy in some cases and, at best, not much higher than chance. Those categories are \matlabel{non-water liquid}, \matlabel{fiberglass}, \matlabel{sponge}, \matlabel{pearl}, \matlabel{soap} and \matlabel{styrofoam}, which occur in 129, 12, 149, 129, 58 and 33 training images, respectively. Guided by this discovery we select the other 46 material labels for a benchmark. We train a model, called \text{DMS}\xspace-46, to predict the selected categories, with the same architecture as DMS-52. We use a batch size of 64 and a stochastic gradient descent optimizer with a 1e-3 base learning rate and 1e-4 weight decay. We use ImageNet pretraining~\cite{zhou2017scene,zhou2019semantic} to initialize the encoder weights, and scale the learning rate for the encoder by 0.25. We update the learning rate with a cosine annealing schedule with warm restart~\cite{loshchilov2016sgdr} every 30 epochs for 60 epochs. Because the classes are imbalanced we use weighted symmetric cross entropy~\cite{wang2019symmetric}, computed across \text{DMS}\xspace training images, as the loss function, which gives more weight to classes with fewer ground truth pixels. We apply stochastic transformations for data augmentation (scale, horizontal and vertical flips, color jitter, Gaussian noise, Gaussian blur, rotation and crop), scale inputs into [0, 1], and normalize with mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225] from ImageNet~\cite{deng2009imagenet}. The training tensor has height and width of 512.
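The learning-rate schedule described above (cosine annealing with warm restart every 30 epochs~\cite{loshchilov2016sgdr}, base rate 1e-3, encoder rate scaled by 0.25) can be sketched in a few lines; the minimum rate of 0 is our simplifying assumption, as the text does not state one:

```python
import math

# Cosine annealing with warm restarts: the rate decays from base_lr toward
# min_lr over each 30-epoch cycle, then snaps back to base_lr at the restart.
def lr_at(epoch: int, base_lr: float = 1e-3, cycle: int = 30, min_lr: float = 0.0) -> float:
    t = epoch % cycle  # position within the current cycle; a restart resets it
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t / cycle))

decoder_lr = lr_at(0)         # 1e-3 at the start of each cycle
encoder_lr = 0.25 * lr_at(0)  # pretrained encoder uses a 0.25x rate
```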
\text{DMS}\xspace-46 predicts 46 materials with per-pixel accuracy 0.731/0.729, mean class accuracy 0.598/0.585 and mIoU 0.435/0.420 on \text{DMS}\xspace-val/\text{DMS}\xspace-test respectively. We report the test set per-class accuracy and IoU in Table~\ref{tab:OM46_per_class_testset}. We find that \matlabel{sky}, \matlabel{fur}, \matlabel{foliage}, \matlabel{skin} and \matlabel{hair} have the highest recognition rates, similar to the findings of~\cite{minc}. 17 materials do not appear in any prior large-scale material benchmarks. Among these new materials we report high recognition rates for \matlabel{ceiling tile}, \matlabel{whiteboard} and \matlabel{chalkboard}. To our knowledge, \text{DMS}\xspace-46 is the first material segmentation model evaluated on large-scale dense segmentations and predicts more classes than any general-purpose model. \begin{figure}[t] \includegraphics[height=13.0ex]{figures/predictions_om46/44.jpg}\hfill \includegraphics[height=13.0ex]{figures/predictions_om46/56.jpg}\hfill \includegraphics[height=13.0ex]{figures/predictions_om46/12.jpg} \includegraphics[height=14.4ex]{figures/predictions_om46/21.jpg}\hfill \includegraphics[height=14.4ex]{figures/predictions_om46/49.jpg}\hfill \includegraphics[height=14.4ex]{figures/predictions_om46/5.jpg}\hfill \includegraphics[height=14.4ex]{figures/predictions_om46/0.jpg} \caption{{\bf Real-world examples.} Our model, \text{DMS}\xspace-46, predicts 46 kinds of indoor and outdoor materials. See Table~\ref{tab:fusedcount} for color legend.} \label{fig:examples} \end{figure} \subsection{Real-World Examples} \label{sec:real} In Figure~\ref{fig:examples} we demonstrate \text{DMS}\xspace-46 on indoor and outdoor photos from daily life. 
Our model recognizes and localizes \matlabel{food} on \matlabel{ceramic} plates, workplace materials (\matlabel{whiteboard} and \matlabel{ceiling tile}), ground cover materials (\matlabel{soil}, \matlabel{stone}, \matlabel{foliage} and \matlabel{snow}), unprocessed \matlabel{tree wood}, and \matlabel{fire} on a \matlabel{wax} candle. {\bf A Failure Case.} The last image is a failure case where our model is confused by decorative tile artwork. We also see opportunities for further improving boundaries and localizing small surfaces. \section{Introduction} A goal of computer vision is to develop the cognitive ability to plan manipulation of something and predict how it will respond to stimuli. This is informed by the properties of what something is made of. Those properties can be discovered by segmenting a photograph into recognized materials. Material recognition can be understood through the science of material perception starting with Adelson's~\cite{thingsstuff} proposal to divide the world into \emph{things} (countable objects) and \emph{stuff} (materials). Adelson argued stuff is important because of its ubiquity in everyday life. Ritchie \emph{et al.}~\cite{ritchie2021material} describe material perception in two parts. The first part is categorical recognition of what something is made of. The second part is recognizing material properties (\emph{e.g.}, glossy, flexible, sound absorbent, sticky) which tells us how something will feel or how it will interact with other objects. While Schwartz \emph{et al.}~\cite{schwartz2019recognizing} proposed to recognize properties from local image patches we follow Bell \emph{et al.}~\cite{minc} who segmented images by recognizing material classes. Deep learning-based material recognition builds on some key developments. Sharan \emph{et al.}~\cite{sharan2013recognizing} showed that people can recognize 10 kinds of materials in the wild~\cite{fmd} with 85\% accuracy.
Bell \emph{et al.}~\cite{opensurfaces}, following~\cite{labelme}, built an efficient annotation tool to create a large-scale material database from crowds and Internet photos. Next, Bell \emph{et al.}~\cite{minc} introduced large-scale training data and a deep learning approach leading to material segmentation as a building-block for haptics, material assignment, robotic navigation, acoustic simulation and context-aware mixed reality~\cite{gao2016deep,park2018photoshape,schissler2017acoustic,zhao2017fully,brandao2016material,chen2020context}. Xiao \emph{et al.}~\cite{upernet} introduced a multi-task scene parsing model which endows a photograph with a rich prediction of scene type, objects, object parts, materials and textures. Despite widespread adoption of material segmentation, a lack of large-scale data means evaluation rests on the only large-scale segmentation dataset, OpenSurfaces~\cite{opensurfaces}. We find there is room for improvement and propose the Dense Material Segmentation dataset (DMS) which has 3.2 million segments across 44k densely annotated images, and show empirically that our data leads to models which further close the gap between computer vision and human perception. \begin{figure}[t] \includegraphics[width=\linewidth]{figures/teaser3.pdf} \caption{{\bf Densely annotated materials.} Our annotations are full-scene, highly detailed and enable prediction of 46 kinds of materials.} \label{fig:teaser} \end{figure} There are goals to consider for a material dataset. First, we need a general-purpose set of material labels. We want to mimic human perception so we choose distinguishable materials even if they are of the same type. For example, we separate clear from opaque plastic rather than have a single label for all plastics. We define fine-grained labels which have useful properties, physical or otherwise. 
For example, a painted whiteboard surface has utility not found in a \matlabel{paint} label---it is appropriate for writing, cleaning and virtual content display. These functional properties come from how the material is applied rather than its physical structure. Ultimately we choose a set of 52 labels based on prior work and useful materials we found in photographs (details in Section~\ref{sec:dataset1}). Following~\cite{schwartz2019recognizing}, we also want indoor and outdoor scenes. Counter-intuitively, this could be unnecessary. Material is recognizable regardless of where it occurs in the world, and deep learning methods aim to create a model which generalizes to unseen cases. Thus, an indoor residential dataset~\cite{opensurfaces} could be sufficient. We find this is not the case. In Section~\ref{sec:crossdataset} we show that a model trained on~\cite{opensurfaces} performs worse on outdoor scenes. This is a key finding which impacts all algorithms which use~\cite{opensurfaces} for training. We also show that a model trained on our dataset is consistent across indoor and outdoor scenes. We want our database to support many scene parsing tasks so we need broad coverage of objects and scene attributes (which include activities, \emph{e.g.}, eating). In Section~\ref{sec:dataset2} we show that we achieve better coverage compared to~\cite{opensurfaces}. We propose nine kinds of photographic types which distinguish different viewpoints and circumstances. Our motivation was to quantitatively evaluate cases where we had observed poor performance. This data can reveal new insights on how a model performs. We find that a state-of-the-art model underperforms in some settings whereas a model fit to our data performs well on all nine types. Our final goal is to have diversity in skin types. Skin is associated with race and ethnicity so it is crucial to have fair representation across different types of skin.
We compare our skin type data to OpenSurfaces~\cite{opensurfaces} in Section~\ref{sec:dataset2} and show our data has practical benefits for training in Section~\ref{sec:skin}. The paper is organized as follows. In Section~\ref{sec:related} we review datasets. In Section~\ref{sec:dataset} we describe how we collected data to achieve our goals. In Section~\ref{sec:experiments} we compare our dataset to state-of-the-art data and a state-of-the-art model, study the impact of skin types on training, propose a material segmentation benchmark, and demonstrate material segmentation on real world photos. In summary, our contributions are: \begin{itemize}[topsep=3pt,itemsep=0pt] \item We introduce \text{DMS}\xspace, a large-scale densely-annotated material segmentation dataset and show it is diverse with extensive analysis (Section~\ref{sec:dataset}). \item We advance fairness toward skin types in material datasets (Section~\ref{sec:dataset2}). \item We introduce photographic types which reveal new insights on prior work and show that a model fit to our data performs better across datasets and viewpoints compared to the state-of-the-art (Section~\ref{sec:crossdataset}). \item We propose a new large-scale indoor and outdoor material segmentation benchmark of 46 materials and present a baseline result (Section~\ref{sec:baseline}). \end{itemize} \section{Related Work} \label{sec:related} {\bf Material Segmentation Datasets.} The largest dataset is OpenSurfaces~\cite{opensurfaces} which collected richly annotated polygons of residential indoor surfaces on 19k images, including 37 kinds of materials. The largest material recognition dataset is the Materials in Context Database~\cite{minc} which is 3M point annotations of 23 kinds of materials across 437k images. This data enables material segmentation by CNN and a dense CRF tuned on OpenSurfaces segments. 
The Local Materials Database~\cite{schwartz2019recognizing} collected segmentations, with the goal of studying materials using only local patches, of 16 kinds of materials across 5,845 images sourced from existing datasets. The Light-Field Material Dataset~\cite{wang20164d} is 1,200 4D indoor and outdoor images of 12 kinds of materials. The Multi-Illumination dataset~\cite{murmann2019dataset} captured 1,016 indoor scenes under 25 lighting conditions and annotated the images with 35 kinds of materials. Table~\ref{tab:datasets} lists the largest datasets. \begin{table}[t] \centering \caption{{\bf Large-scale datasets.} We propose a dataset with 23x more segments, more classes and 2.3x more images as the largest segment-annotated dataset.} \label{tab:datasets} \begin{tabular}{@{}lrlcrrl@{}}\toprule Dataset & & Annotation & Classes & Images & & Scenes\\\midrule OpenSurfaces~\cite{opensurfaces} & & 137k segments & 37 & 19,447 & & Indoor residential\\ Materials in Context~\cite{minc} & & 3M points & 23 & 436,749 & & Home interior \& exterior\\ Local Materials~\cite{schwartz2019recognizing} & & 9.4k segments & 16 & 5,845 & & Indoor \& outdoor\\ \text{DMS}\xspace (Ours) & & 3.2M segments & 52 & 44,560 & & Indoor \& outdoor\\ \bottomrule \end{tabular} \end{table} Materials have appeared in purpose-built datasets. The Ground Terrain in Outdoor Scenes (GTOS) database~\cite{gtos} and GTOS-mobile~\cite{gtos2} are 30k images of hundreds of instances of 40 kinds of ground materials and 81 videos of 31 kinds of ground materials, respectively. The Materials in Paintings dataset~\cite{van2021materials} is bounding box annotations and extracted segmentations on 19k paintings of 15 kinds of materials depicted by artists, partly distinguished into 50 fine-grained categories. COCO-Stuff~\cite{cocostuff} is segmentations of 91 kinds of stuff on 164k COCO~\cite{coco} images. 
While this is a source of material data, it is not a general-purpose material dataset because important surfaces (\emph{e.g.}, objects labeled in COCO) are not assigned material labels. ClearGrasp~\cite{cleargrasp} is a dataset of 50k synthetic and 286 real RGB-D images of glass objects built for robotic manipulation of transparent objects. The Glass Detection Dataset~\cite{mei2020don} is 3,916 indoor and outdoor images of segmented glass surfaces. The Mirror Segmentation Dataset~\cite{yang2019my} is 4,018 images with segmented mirror surfaces across indoor and outdoor scenes. Fashionpedia~\cite{fashionpedia} is a database of segmented clothing images of which 10k are annotated with fashion attributes which include fine-grained clothing materials. Figaro~\cite{figaro} is 840 images of people with segmented hair distinguished into 7 kinds of hairstyles. {\bf Categorical Material Names.} Bell \emph{et al.}~\cite{opensurfaces} created a set of names by asking annotators to enter free-form labels which were merged into a list of material names. This approach is based on the appearance of surfaces as perceived by the annotators. Schwartz \emph{et al.}~\cite{schwartz2019recognizing} created a three-level hierarchy of material names where materials are organized by their physical properties. Some categories were added for materials which could not be placed in the hierarchy. In practice, both approaches resulted in a similar set of entry-level~\cite{ordonez2013large} names which also closely agree with prior studies of categorical materials in Internet images~\cite{fmd,hu2011toward}. \section{Dataset Details} In this section we supplement Section~\ref{sec:dataset} of the main paper. In Table~\ref{tab:matarea} we list names used in annotation tools. For brevity, names in the main paper are shortened and ``Photograph/painting'' is called \matlabel{artwork}.
We also report the number of images in which a material occurs and total area, the sum over all images of the fraction of pixels covered by a material. In Table~\ref{tab:fusedpixelcount} we show the number of annotated pixels for each class. This count is according to the resized images which are smaller than the original images. { \begin{longtable}{@{}lrrrrcrrrr@{}} \caption{{\bf Material occurrence.} We report the number of images and total area (in units of image proportion, rounded).} \label{tab:matarea}\\ \toprule \endfirsthead \caption{continued from previous page}\\ \endhead & \multicolumn{4}{c}{Image Count} & \phantom{\;} & \multicolumn{4}{c}{Total Area}\\ \cmidrule{2-5} \cmidrule{7-10} & All & Train & Val & Test & & All & Train & Val & Test\\\midrule Animal skin & 1,007 & 479 & 260 & 268 & & 34 & 14 & 8 & 11\\ Bone/teeth/horn & 3,751 & 2,084 & 858 & 809 & & 4 & 2 & 1 & 2\\ Brickwork & 1,654 & 862 & 388 & 404 & & 204 & 113 & 46 & 44\\ Cardboard & 3,150 & 1,773 & 681 & 696 & & 133 & 73 & 30 & 30\\ Carpet/rug & 9,516 & 5,470 & 2,073 & 1,973 & & 985 & 567 & 208 & 209\\ Ceiling tile & 2,524 & 1,460 & 529 & 535 & & 299 & 173 & 65 & 61\\ Ceramic & 8,314 & 4,608 & 1,854 & 1,852 & & 260 & 135 & 69 & 56\\ Chalkboard/blackboard\;\; & 668 & 332 & 166 & 170 & & 68 & 34 & 16 & 19\\ Clutter & 128 & 41 & 43 & 44 & & 12 & 3 & 5 & 5\\ Concrete & 2,853 & 1,381 & 731 & 741 & & 400 & 186 & 109 & 105\\ Cork/corkboard & 273 & 122 & 78 & 73 & & 9 & 4 & 2 & 3\\ Engineered stone & 299 & 134 & 81 & 84 & & 18 & 8 & 5 & 5\\ Fabric/cloth & 31,489 & 17,727 & 6,875 & 6,887 & & 4,799 & 2,732 & 1,038 & 1,030\\ Fiberglass wool & 33 & 12 & 9 & 12 & & 3 & 1 & 1 & 1\\ Fire & 412 & 184 & 110 & 118 & & 12 & 5 & 4 & 3\\ Foliage & 11,384 & 5,902 & 2,714 & 2,768 & & 1,377 & 640 & 372 & 364\\ Food & 2,908 & 1,553 & 687 & 668 & & 287 & 126 & 82 & 79\\ Fur & 1,567 & 761 & 398 & 408 & & 206 & 95 & 55 & 55\\ Gemstone/quartz & 369 & 165 & 99 & 105 & & 10 & 5 & 2 & 3\\ Glass & 28,934 & 16,142 & 6,378 & 
6,414 & & 2,159 & 1,192 & 488 & 479\\ Hair & 17,766 & 10,076 & 3,823 & 3,867 & & 336 & 190 & 74 & 72\\ Ice & 96 & 31 & 32 & 33 & & 27 & 10 & 8 & 8\\ Leather & 7,354 & 4,146 & 1,609 & 1,599 & & 210 & 118 & 50 & 42\\ Liquid, non-water & 294 & 129 & 83 & 82 & & 9 & 2 & 4 & 3\\ Metal & 30,504 & 16,917 & 6,801 & 6,786 & & 805 & 427 & 187 & 190\\ Mirror & 3,242 & 1,871 & 684 & 687 & & 315 & 176 & 67 & 72\\ Paint/plaster/enamel & 39,323 & 21,765 & 8,773 & 8,785 & & 10,965 & 6,073 & 2,434 & 2,458\\ Paper & 20,763 & 11,692 & 4,592 & 4,479 & & 883 & 485 & 200 & 199\\ Pearl & 282 & 129 & 77 & 76 & & 0 & 0 & 0 & 0\\ Photograph/painting & 4,344 & 2,435 & 976 & 933 & & 174 & 90 & 41 & 43\\ Plastic, clear & 6,431 & 3,583 & 1,425 & 1,423 & & 129 & 69 & 28 & 31\\ Plastic, non-clear & 30,506 & 17,154 & 6,662 & 6,690 & & 1,278 & 708 & 282 & 288\\ Rubber/latex & 7,811 & 4,244 & 1,788 & 1,779 & & 65 & 32 & 17 & 16\\ Sand & 272 & 110 & 76 & 86 & & 70 & 24 & 20 & 26\\ Skin/lips & 18,524 & 10,444 & 4,014 & 4,066 & & 509 & 287 & 113 & 108\\ Sky & 3,306 & 1,447 & 911 & 948 & & 1,020 & 435 & 286 & 298\\ Snow & 191 & 70 & 60 & 61 & & 57 & 19 & 20 & 18\\ Soap & 154 & 58 & 50 & 46 & & 0 & 0 & 0 & 0\\ Soil/mud & 1,855 & 860 & 495 & 500 & & 165 & 73 & 42 & 51\\ Sponge & 326 & 149 & 89 & 88 & & 1 & 1 & 0 & 0\\ Stone, natural & 2,076 & 962 & 569 & 545 & & 355 & 156 & 102 & 98\\ Stone, polished & 1,831 & 993 & 435 & 403 & & 187 & 97 & 46 & 44\\ Styrofoam & 88 & 33 & 27 & 28 & & 2 & 1 & 0 & 1\\ Tile & 10,173 & 5,722 & 2,206 & 2,245 & & 1,490 & 845 & 321 & 323\\ Wallpaper & 1,076 & 577 & 252 & 247 & & 233 & 127 & 56 & 49\\ Water & 2,063 & 959 & 552 & 552 & & 564 & 260 & 156 & 149\\ Wax & 1,107 & 578 & 260 & 269 & & 7 & 3 & 2 & 2\\ Whiteboard & 1,171 & 642 & 265 & 264 & & 111 & 60 & 24 & 27\\ Wicker & 1,895 & 1,031 & 438 & 426 & & 75 & 35 & 22 & 18\\ Wood & 24,248 & 13,496 & 5,309 & 5,443 & & 3,608 & 2,006 & 802 & 800\\ Wood, tree & 2,026 & 929 & 561 & 536 & & 72 & 30 & 19 & 22\\ Asphalt & 474 & 211 & 
132 & 131 & & 73 & 35 & 17 & 22\\ \bottomrule \end{longtable} } \begin{table}[t] \centering \caption{{\bf Material occurrence in pixels.} We report the number of pixels covered by each label according to the resized images used by annotation tools.} \label{tab:fusedpixelcount} \begin{tabular}{@{}lrp{3mm}lr@{}}\toprule Animal skin & 22,995,883 & & Paint/plaster/enamel & 7,796,144,397\\ Bone/teeth/horn & 3,050,548 & & Paper & 628,009,751\\ Brickwork & 145,410,237 & & Pearl & 411,455\\ Cardboard & 93,881,191 & & Photograph/painting & 123,296,052\\ Carpet/rug & 707,147,207 & & Plastic, clear & 93,002,805\\ Ceiling tile & 216,289,692 & & Plastic, non-clear & 906,618,216\\ Ceramic & 185,191,692 & & Rubber/latex & 45,644,757\\ Chalkboard/blackboard\;\; & 48,346,203 & & Sand & 47,860,125\\ Clutter & 8,845,550 & & Skin/lips & 359,727,474\\ Concrete & 283,303,562 & & Sky & 702,864,398\\ Cork/corkboard & 6,468,131 & & Snow & 40,936,881\\ Engineered stone & 13,140,139 & & Soap & 265,782\\ Fabric/cloth & 3,408,488,743 & & Soil/mud & 114,322,155\\ Fiberglass wool & 1,874,005 & & Sponge & 1,075,671\\ Fire & 7,965,989 & & Stone, natural & 253,271,347\\ Foliage & 961,103,715 & & Stone, polished & 134,425,626\\ Food & 192,755,372 & & Styrofoam & 1,552,343\\ Fur & 145,359,760 & & Tile & 1,068,909,615\\ Gemstone/quartz & 7,273,649 & & Wallpaper & 168,289,772\\ Glass & 1,535,538,311 & & Water & 390,040,955\\ Hair & 238,600,730 & & Wax & 4,791,692\\ Ice & 18,308,742 & & Whiteboard & 80,692,711\\ Leather & 149,122,712 & & Wicker & 50,066,493\\ Liquid, non-water & 5,861,652 & & Wood & 2,584,799,129\\ Metal & 573,827,793 & & Wood, tree & 50,922,547\\ Mirror & 224,631,105 & & Asphalt & 51,218,822\\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{{\bf Case resolution.} For some cases we provided additional instruction, which we summarize here.} \label{tab:labeldesc} \begin{tabular}{@{}lrl@{}}\toprule Case & & Resolution\\\midrule Skin with sparse hair & & 
\matlabel{Skin} for people; \matlabel{animal skin} for animals.\\ Coat of hair (\emph{e.g.}, horse) & & \matlabel{Fur}.\\ Smoothed stone & & \matlabel{Polished stone}.\\ Laminated paper & & \matlabel{Clear plastic}.\\ Sauces & & \matlabel{Food} on food; \matlabel{non-water liquid} during preparation.\\ Chandelier prisms & & \matlabel{Gemstone} or \matlabel{glass} based on appearance.\\ Seasoned or blued metal & & \matlabel{Metal}.\\ Metal patina & & \matlabel{Metal}.\\ Printed text & & The underlying material.\\ Mirror-like finishes & & \matlabel{Mirror} if sole purpose is to reflect; the material otherwise.\\ Wrapped items & & The material of the wrap.\\ Electronic display & & \matlabel{Glass}.\\ Glass-top surface & & \matlabel{Glass}.\\ Thatch & & \matlabel{Wicker}.\\ Stained wood & & \matlabel{Wood}.\\ Projection screen & & \matlabel{Not on list}.\\ Vinyl & & The closest of \matlabel{non-clear plastic}, \matlabel{rubber} or \matlabel{leather}.\\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{{\bf Objects and functional spaces.} We report the number of images for the largest classes of detected objects ({\it top}) and estimated scene functions ({\it bottom}).} \label{tab:topk} \begin{tabular}{@{}lrrrrp{3mm}lrrrr@{}}\toprule & All & Train & Val & Test & & & All & Train & Val & Test\\\midrule person & 19,966 & 11,219 & 4,303 & 4,426 & & tie & 1,398 & 802 & 280 & 314\\ chair & 17,617 & 9,987 & 3,826 & 3,780 & & bench & 1,196 & 671 & 244 & 277\\ dining table & 8,086 & 4,511 & 1,765 & 1,806 & & keyboard & 1,192 & 648 & 272 & 272\\ bottle & 5,964 & 3,320 & 1,313 & 1,325 & & cell phone & 1,121 & 629 & 269 & 222\\ cup & 5,656 & 3,136 & 1,248 & 1,265 & & mouse & 939 & 516 & 199 & 224\\ potted plant & 5,078 & 2,762 & 1,122 & 1,191 & & refrigerator & 834 & 504 & 161 & 168\\ book & 4,384 & 2,465 & 976 & 939 & & backpack & 739 & 420 & 154 & 165\\ tv & 4,303 & 2,411 & 947 & 942 & & oven & 737 & 399 & 173 & 165\\ laptop & 3,076 &
1,737 & 664 & 675 & & remote & 718 & 403 & 166 & 148\\ bowl & 2,900 & 1,579 & 636 & 682 & & dog & 692 & 369 & 162 & 160\\ couch & 2,846 & 1,614 & 628 & 602 & & cat & 685 & 344 & 162 & 178\\ vase & 2,790 & 1,551 & 626 & 609 & & toilet & 677 & 383 & 144 & 149\\ bed & 2,357 & 1,348 & 524 & 482 & & knife & 579 & 335 & 123 & 120\\ sink & 1,747 & 949 & 395 & 402 & & car & 542 & 292 & 128 & 121\\ handbag & 1,617 & 906 & 366 & 345 & & boat & 524 & 227 & 136 & 161\\ wine glass & 1,473 & 797 & 332 & 343 & & suitcase & 510 & 310 & 94 & 106\\ clock & 1,452 & 814 & 294 & 343 & & spoon & 477 & 258 & 106 & 112\\\midrule working & 14,343 & 8,032 & 3,124 & 3,166 & & swimming & 868 & 397 & 240 & 230\\ reading & 14,039 & 7,931 & 3,118 & 2,970 & & sports & 824 & 442 & 181 & 198\\ socializing & 8,545 & 4,869 & 1,794 & 1,873 & & using tools & 686 & 369 & 149 & 167\\ congregating & 7,317 & 4,129 & 1,559 & 1,620 & & praying & 649 & 363 & 144 & 138\\ eating & 5,862 & 3,217 & 1,294 & 1,345 & & touring & 626 & 283 & 159 & 180\\ shopping & 2,419 & 1,325 & 563 & 526 & & waiting in line & 593 & 362 & 118 & 113\\ studying & 2,070 & 1,147 & 459 & 463 & & exercise & 574 & 329 & 106 & 137\\ competing & 1,960 & 1,085 & 410 & 458 & & diving & 556 & 275 & 163 & 117\\ spectating & 1,489 & 845 & 305 & 335 & & bathing & 524 & 288 & 120 & 115\\ training & 1,335 & 744 & 295 & 295 & & research & 451 & 251 & 92 & 108\\ transporting & 1,153 & 587 & 268 & 297 & & cleaning & 445 & 247 & 94 & 104\\ boating & 876 & 371 & 235 & 267 & & driving & 404 & 199 & 92 & 113\\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{{\bf Judgments.} We report the number of unique opinions (\emph{i.e.}, label maps) collected for images.} \label{tab:judgments} \begin{tabular}{@{}ccr@{}}\toprule Label Map Count & & Images\\\midrule 1 & & 1,245\\ 2 & & 35,039\\ 3 & & 7,459\\ 4 & & 122\\ 5 & & 867\\ \bottomrule \end{tabular} \end{table} \begin{figure}[t]
\includegraphics[height=13.5ex]{figures/fusedlabel_examples/fused_231048.png}\hfill \includegraphics[height=13.5ex]{figures/fusedlabel_examples/fused_197290.png}\hfill \includegraphics[height=13.5ex]{figures/fusedlabel_examples/fused_197794.png}\hfill \includegraphics[height=13.5ex]{figures/fusedlabel_examples/fused_57588.png}\hfill \includegraphics[height=13.5ex]{figures/density_representative_openmaterials.png}\hfill \crule{0.15ex}{13.5ex}\hfill \includegraphics[height=13.5ex]{figures/density_representative_opensurfaces.png} \caption{{\bf Fused material labels.} {\it Left to right:} van, sports, aerial photo, conference and dining area. The 5th image has a label density close to the mean density of \text{DMS}\xspace. The rightmost image is a fused label map from OpenSurfaces with a label density close to the mean density of OpenSurfaces. See Table~\ref{tab:fusedcount} for color legend.} \label{fig:morefusedlabels} \end{figure} We found that asking annotators to label all surfaces required extensive instruction. Our training document grew to include clarifications for rare and uncommon cases. In Table~\ref{tab:labeldesc} we summarize how we choose to resolve cases. In Table~\ref{tab:topk} we report the number of images in which an object class is detected by~\cite{detectron}, and the number of images which are predicted by~\cite{places365} to have scene elements for an activity. There are 80 object classes and 30 functional scene attributes. For brevity, we report only the largest classes. For most images we collected two unique opinions for labels. In Table~\ref{tab:judgments} we report the number of images with a given number of opinions. In Figure~\ref{fig:morefusedlabels} we expand on Figure~\ref{fig:fusedlabels} by showing more fused label maps and we show a fused label map from \text{DMS}\xspace and OpenSurfaces which are representative of the mean density of the respective datasets. 
\section{Skin Type Experiment} In Section~\ref{sec:skin}, we compared skin accuracies for three skin groups: Type I-II, Type III-IV, and Type V-VI. In order to compute accuracy we have to assign ground truth pixels to a group. We do this for images which contain detections of only one skin group. However, there are also images in which multiple skin groups co-occur and images in which no skin group was detected. We do not evaluate on these two scenarios to avoid assigning groups incorrectly. \section{Benchmark Experiment Details} In this section we include more details on training our material segmentation benchmark model, DMS-46, from Section~\ref{sec:baseline} of the main paper. All the models are trained on NVIDIA Tesla V100 GPUs with 32 GB of memory. \subsection{Data Augmentation} In this section we give details of the data augmentation we apply in training. We apply the following transformations, in order: {\bf Scale.} We first scale the input image so that its shortest dimension is 512, matching the training image size of height 512 and width 512. Then we randomly scale this dimension by a ratio chosen uniformly from [1, 2, 3, 4]. {\bf Horizontal Flip.} We apply a random horizontal flip with probability 0.5. {\bf Vertical Flip.} We apply a random vertical flip with probability 0.5. {\bf Color Jitter.} We apply color jitter with probability 0.9, using torchvision\footnote{https://pytorch.org/vision/} ColorJitter with brightness 0.4, contrast 0.4, saturation 0.4, and hue 0.1. {\bf Gaussian Blur or Gaussian Noise.} We apply this transformation with probability 0.5, selecting Gaussian blur or Gaussian noise with equal chance. We use a kernel size of 3 for Gaussian blur, with a standard deviation chosen uniformly in [0.1, 2.0]. Gaussian noise has a mean of 0 and a standard deviation of 3 across all pixels. {\bf Rotation.} We apply a random rotation in [-45, 45] degrees with probability 0.5.
We fill the area outside the rotated color image with 0 and the rotated segmentation map with an ignore value; the loss calculation ignores those pixels. {\bf Crop.} Finally, we randomly crop a subregion of height 512 and width 512 to feed into the neural network. \subsection{Loss Function} We use weighted symmetric cross entropy~\cite{wang2019symmetric} as the loss function for DMS-46. The weight ${W_{i}}$ for each material class $i$ out of $N$ classes is calculated as a function of its pixel-count frequency ${F_{i}}$~\cite{paszke2016enet}, as in Equation~\ref{eq:weighted}.\begin{equation} \label{eq:weighted} W_{i} = \frac{1} {\log \left(1.02 + \frac{F_{i}} {\sum_{j=1}^{N}F_{j}}\right)} \end{equation} The constant 1.02 is introduced in~\cite{paszke2016enet} to restrict the class weights to the range [1, 50] as the class frequency approaches 0. The weights we use for DMS-46 are presented in Table~\ref{tab:allnames}. Symmetric cross entropy (SCE)~\cite{wang2019symmetric} is composed of a regular cross entropy (CE) and a reverse cross entropy (RCE) to avoid overfitting to noisy labels. Given the target distribution $P$ and the predicted distribution $Q$, Equation~\ref{eq:sce} shows each part of the SCE loss function. We choose $\alpha=1$ and $\beta=0.5$ for the weighting coefficients.
\begin{equation} \label{eq:sce} L_{SCE} = \alpha L_{CE} + \beta L_{RCE} = \alpha \left(- \sum P \log Q \right) + \beta \left(- \sum Q \log P \right) \end{equation} \begin{table}[t] \centering \caption{{\bf Class weights.} We show the class weights we applied in the loss function for DMS-46.} \label{tab:allnames} \begin{tabular}{@{}lrp{3mm}lrp{3mm}lr@{}}\toprule Label & Weight & & Label & Weight & & Label & Weight \\\midrule Bone & 50.259 & & Whiteboard & 43.585 & & Hair & 33.870 \\ Wax & 50.140 & & Clear plastic & 42.709 & & Water & 30.402 \\ Clutter & 50.136 & & Soil & 42.585 & & Skin & 29.049 \\ Cork & 49.995 & & Cardboard & 42.482 & & Sky & 24.133 \\ Fire & 49.945 & & Artwork & 40.905 & & Metal & 23.981 \\ Gemstone & 49.826 & & Fur & 40.427 & & Paper & 22.447 \\ Engineered stone & 49.459 & & Pol. stone & 40.226 & & Carpet & 20.422 \\ Ice & 49.163 & & Brickwork & 38.979 & & Foliage & 19.325 \\ Animal skin & 48.646 & & Leather & 38.715 & & Non-clear plastic & 17.986 \\ Snow & 47.972 & & Food & 38.368 & & Tile & 15.895 \\ Sand & 47.603 & & Wallpaper & 37.854 & & Glass & 12.555 \\ Tree wood & 46.759 & & Ceramic & 37.201 & & Wood & 8.388 \\ Rubber & 46.672 & & Nat. stone & 35.919 & & Fabric & 6.596 \\ Wicker & 46.465 & & Mirror & 34.651 & & Paint & 3.415 \\ Chalkboard & 46.462 & & Ceiling tile & 34.617 \\ Asphalt & 46.447 & & Concrete & 34.095 \\ \bottomrule \end{tabular} \end{table} \subsection{Model Architecture Implementation} We select ResNet50~\cite{he2016deep} with dilated convolutions~\cite{chen2017deeplab,yu2015multi} as the encoder, and the Pyramid Pooling Module from PSPNet~\cite{zhao2017pyramid} as the decoder. We choose this architecture because it has been shown to be effective for scene parsing~\cite{zhao2017pyramid,zhou2019semantic}.
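As a rough, illustrative sketch (not the actual DMS-46 training code), the class weighting of Equation~\ref{eq:weighted} and the symmetric cross entropy of Equation~\ref{eq:sce} can be written in a few lines; the distributions \texttt{p} and \texttt{q} below stand for placeholder per-pixel probability vectors:

```python
import math

def class_weights(pixel_counts):
    # Equation (weighted): W_i = 1 / log(1.02 + F_i / sum_j F_j).
    # The 1.02 offset caps the weight of very rare classes near
    # 1/log(1.02) ~ 50.5, while dominant classes approach 1/log(2.02) ~ 1.4.
    total = sum(pixel_counts)
    return [1.0 / math.log(1.02 + f / total) for f in pixel_counts]

def symmetric_cross_entropy(p, q, alpha=1.0, beta=0.5, eps=1e-7):
    # Equation (sce): L_SCE = alpha * CE(P, Q) + beta * RCE(P, Q).
    # Probabilities are clamped by eps so log(0) never occurs when the
    # target P is a hard one-hot label.
    ce = -sum(pi * math.log(max(qi, eps)) for pi, qi in zip(p, q))
    rce = -sum(qi * math.log(max(pi, eps)) for pi, qi in zip(p, q))
    return alpha * ce + beta * rce
```

In the real model these quantities are evaluated per pixel over the material classes, with the per-class weights of Table~\ref{tab:allnames} scaling each class's contribution to the loss.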
We use a publicly-available implementation of the ResNet50dilated architecture, with weights pre-trained on an ImageNet task, from~\cite{zhou2017scene,zhou2019semantic}\footnote{https://github.com/CSAILVision/semantic-segmentation-pytorch}, under a BSD 3-Clause License. \subsection{Material Class Selection For Benchmark} In Section~\ref{sec:baseline} we reported empirically finding that six material categories (\matlabel{non-water liquid}, \matlabel{fiberglass}, \matlabel{sponge}, \matlabel{pearl}, \matlabel{soap} and \matlabel{styrofoam}) fail consistently across models. We present the three top candidate models which led us to this conclusion. Each is the best-fitted model, according to DMS-val, from a comprehensive hyper-parameter search over learning rate, learning rate scheduler, and optimizer. The first model, called DMS-52, is the best model overall and is introduced in the main paper; we report its per-class performance in Table~\ref{tab:first}. The second model, called DMS-52 variant A, has the same architecture as DMS-52 and uses all of OpenSurfaces data as additional training data. We report the per-class performance of DMS-52A in Table~\ref{tab:second}. The third model, called DMS-52 variant B, has a ResNet101 architecture and uses OpenSurfaces data as additional training data. We report the per-class performance of DMS-52B in Table~\ref{tab:third}. Across DMS-52, DMS-52A and DMS-52B the same six material classes are the worst-performing categories. Based on these findings we selected the other 46 categories for a benchmark and leave these six to future work.
\begin{table}[t] \centering \caption{{\bf DMS-Val results for DMS-52.} Results are sorted by accuracy.} \label{tab:first} \begin{tabular}{@{}lccp{3mm}lccp{3mm}lcc@{}}\toprule & Acc & IoU & & & Acc & IoU & & & Acc & IoU\\\midrule Sky & 0.937 & 0.891 & & Glass & 0.703 & 0.489 & & Animal skin & 0.396 & 0.268 \\ Fur & 0.913 & 0.694 & & Paper & 0.686 & 0.496 & & Rubber & 0.345 & 0.240 \\ Foliage & 0.897 & 0.769 & & Leather & 0.676 & 0.397 & & Pol. stone & 0.332 & 0.236 \\ Ceiling tile & 0.890 & 0.679 & & Nat. stone & 0.634 & 0.447 & & Tree wood & 0.327 & 0.224 \\ Hair & 0.885 & 0.673 & & Wax & 0.626 & 0.430 & & Ice & 0.320 & 0.284 \\ Food & 0.882 & 0.689 & & Wicker & 0.622 & 0.432 & & Bone & 0.213 & 0.178 \\ Water & 0.881 & 0.695 & & Wallpaper & 0.603 & 0.397 & & Clutter & 0.209 & 0.186 \\ Skin & 0.876 & 0.647 & & Concrete & 0.579 & 0.333 & & Gemstone & 0.127 & 0.077 \\ Carpet & 0.855 & 0.582 & & Soil & 0.578 & 0.376 & & Cork & 0.115 & 0.102 \\ Fire & 0.821 & 0.621 & & Cardboard & 0.571 & 0.340 & & Eng. 
stone & 0.096 & 0.069 \\ Wood & 0.801 & 0.657 & & Non-clear plastic & 0.562 & 0.322 & & {\bf Sponge} & 0.051 & 0.050 \\ Fabric & 0.787 & 0.690 & & Asphalt & 0.560 & 0.386 & & {\bf Liquid} & 0.048 & 0.044 \\ Brickwork & 0.785 & 0.514 & & Metal & 0.548 & 0.305 & & {\bf Fiberglass} & 0.034 & 0.034 \\ Whiteboard & 0.771 & 0.508 & & Sand & 0.548 & 0.407 & & {\bf Styrofoam} & 0.003 & 0.003 \\ Tile & 0.752 & 0.564 & & Snow & 0.495 & 0.414 & & {\bf Pearl} & 0.000 & 0.000 \\ Chalkboard & 0.747 & 0.616 & & Clear plastic & 0.441 & 0.254 & & {\bf Soap} & 0.000 & 0.000 \\ Ceramic & 0.746 & 0.482 & & Mirror & 0.423 & 0.297 \\ Paint & 0.707 & 0.640 & & Artwork & 0.407 & 0.271 \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{{\bf DMS-Val results for DMS-52A.} Results are sorted by accuracy.} \label{tab:second} \begin{tabular}{@{}lccp{3mm}lccp{3mm}lcc@{}}\toprule & Acc & IoU & & & Acc & IoU & & & Acc & IoU\\\midrule Sky & 0.946 & 0.889 & & Leather & 0.695 & 0.407 & & Clear plastic & 0.405 & 0.255 \\ Fur & 0.921 & 0.692 & & Paint & 0.680 & 0.625 & & Rubber & 0.367 & 0.240 \\ Foliage & 0.912 & 0.768 & & Wicker & 0.670 & 0.436 & & Tree wood & 0.358 & 0.221 \\ Ceiling tile & 0.886 & 0.686 & & Concrete & 0.646 & 0.347 & & Wax & 0.327 & 0.246 \\ Hair & 0.883 & 0.677 & & Soil & 0.635 & 0.385 & & Ice & 0.230 & 0.228 \\ Water & 0.883 & 0.707 & & Fire & 0.626 & 0.570 & & Eng. stone & 0.207 & 0.108 \\ Skin & 0.877 & 0.636 & & Nat. 
stone & 0.620 & 0.439 & & Clutter & 0.204 & 0.185 \\ Food & 0.875 & 0.688 & & Wallpaper & 0.600 & 0.417 & & Bone & 0.167 & 0.139 \\ Carpet & 0.830 & 0.614 & & Asphalt & 0.599 & 0.401 & & Cork & 0.126 & 0.112 \\ Wood & 0.821 & 0.654 & & Cardboard & 0.586 & 0.362 & & Gemstone & 0.087 & 0.057 \\ Fabric & 0.801 & 0.700 & & Snow & 0.584 & 0.484 & & {\bf Sponge} & 0.066 & 0.060 \\ Whiteboard & 0.801 & 0.515 & & Non-clear plastic & 0.555 & 0.319 & & {\bf Fiberglass} & 0.029 & 0.029 \\ Brickwork & 0.789 & 0.496 & & Metal & 0.548 & 0.289 & & {\bf Liquid} & 0.009 & 0.009 \\ Ceramic & 0.772 & 0.471 & & Animal skin & 0.517 & 0.272 & & {\bf Pearl} & 0.000 & 0.000 \\ Tile & 0.745 & 0.576 & & Pol. stone & 0.489 & 0.254 & & {\bf Soap} & 0.000 & 0.000 \\ Chalkboard & 0.744 & 0.593 & & Sand & 0.463 & 0.389 & & {\bf Styrofoam} & 0.000 & 0.000 \\ Paper & 0.718 & 0.509 & & Artwork & 0.445 & 0.294 \\ Glass & 0.696 & 0.502 & & Mirror & 0.434 & 0.308 \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \caption{{\bf DMS-Val results for DMS-52B.} Results are sorted by accuracy.} \label{tab:third} \begin{tabular}{@{}lccp{3mm}lccp{3mm}lcc@{}}\toprule & Acc & IoU & & & Acc & IoU & & & Acc & IoU\\\midrule Sky & 0.943 & 0.865 & & Glass & 0.690 & 0.488 & & Tree wood & 0.352 & 0.257 \\ Foliage & 0.905 & 0.776 & & Nat. stone & 0.685 & 0.402 & & Rubber & 0.310 & 0.265 \\ Hair & 0.891 & 0.687 & & Wicker & 0.684 & 0.454 & & Animal skin & 0.301 & 0.254 \\ Water & 0.889 & 0.655 & & Paper & 0.681 & 0.510 & & Ice & 0.239 & 0.232 \\ Food & 0.862 & 0.687 & & Wallpaper & 0.651 & 0.384 & & Bone & 0.206 & 0.177 \\ Skin & 0.861 & 0.675 & & Leather & 0.603 & 0.431 & & Wax & 0.202 & 0.166 \\ Ceiling tile & 0.858 & 0.673 & & Snow & 0.593 & 0.507 & & Eng. 
stone & 0.198 & 0.106 \\ Carpet & 0.847 & 0.566 & & Concrete & 0.587 & 0.316 & & Cork & 0.192 & 0.134 \\ Fur & 0.829 & 0.720 & & Metal & 0.553 & 0.300 & & Clutter & 0.131 & 0.113 \\ Wood & 0.820 & 0.642 & & Soil & 0.542 & 0.337 & & Gemstone & 0.095 & 0.082 \\ Fabric & 0.789 & 0.701 & & Non-clear plastic & 0.540 & 0.344 & & {\bf Liquid} & 0.029 & 0.022 \\ Whiteboard & 0.752 & 0.539 & & Asphalt & 0.536 & 0.369 & & {\bf Fiberglass} & 0.017 & 0.016 \\ Fire & 0.739 & 0.654 & & Cardboard & 0.529 & 0.367 & & {\bf Sponge} & 0.003 & 0.003 \\ Ceramic & 0.737 & 0.499 & & Sand & 0.498 & 0.407 & & {\bf Pearl} & 0.000 & 0.000 \\ Brickwork & 0.734 & 0.501 & & Pol. stone & 0.459 & 0.238 & & {\bf Soap} & 0.000 & 0.000 \\ Chalkboard & 0.733 & 0.634 & & Artwork & 0.438 & 0.276 & & {\bf Styrofoam} & 0.000 & 0.000 \\ Paint & 0.705 & 0.633 & & Clear plastic & 0.392 & 0.251 \\ Tile & 0.704 & 0.535 & & Mirror & 0.358 & 0.265 \\ \bottomrule \end{tabular} \end{table} \subsection{More Real-World Examples} We show more DMS-46 predictions on real world images in Figure~\ref{fig:examples2}. \begin{figure}[t] \includegraphics[height=14.7ex]{figures/predictions_om46/1.jpg}\hfill \includegraphics[height=14.7ex]{figures/predictions_om46/14.jpg}\hfill \includegraphics[height=14.7ex]{figures/predictions_om46/41.jpg}\hfill \includegraphics[height=14.7ex]{figures/predictions_om46/10.jpg} \includegraphics[height=13.0ex]{figures/predictions_om46/40.jpg}\hfill \includegraphics[height=13.0ex]{figures/predictions_om46/2.jpg}\hfill \includegraphics[height=13.0ex]{figures/predictions_om46/3.jpg} \includegraphics[height=13.55ex]{figures/predictions_om46/24.jpg}\hfill \includegraphics[height=13.55ex]{figures/predictions_om46/32.jpg}\hfill \includegraphics[height=13.55ex]{figures/predictions_om46/15_masked.jpg}\hfill \includegraphics[height=13.55ex]{figures/predictions_om46/51.jpg} \caption{{\bf Real-world examples.} Our model, \text{DMS}\xspace-46, predicts 46 kinds of indoor and outdoor materials. 
See Table~\ref{tab:fusedcount} for color legend.} \label{fig:examples2} \end{figure} \section{Image Credits} Photos in the paper and supplemental are used with permission. We thank the following Flickr users for sharing their photos with a CC-BY-2.0\footnote{https://creativecommons.org/licenses/by/2.0/} license. Some photos in the main paper were modified to remove logos or faces, or were scaled, masked, or cropped. Image credits: Random Retail, Ross Harmes, Amazing Almonds, Jonathan Hetzel, Patrick Lentz, Colleen Benelli, Jannes Pockele, FaceMePLS, Michael Button, samuelrodgers752, Ron Cogswell, David Costa, Janet McKnight, Jennifer, Adam Bartlett, www.toprq.com/iphone, Seth Goodman, Municipalidad Antofagasta, Tom Hughes-Croucher, Travis Grathwell, Associated Fabrication, Tjeerd Wiersma, mike.benedetti, Frédéric BISSON, Wendy Cutler, with wind, Barry Badcock, Joel Kramer, Gwydion M. Williams, Andreas Kontokanis, Jim Winstead, Mike Mozart, Keith Cooper, Kurman Communications, Inc., Paragon Apartments, Pedro Ribeiro Simões, jojo nicdao, Gobierno Cholula, David Becker, Emmanuel DYAN, Ewen Roberts, Supermac1961, fugzu, Erik (HASH) Hersman, Eugene Kim, Bernt Rostad, andrechinn, Geología Valdivia, peapod labs, Alex Indigo, Turol Jones, un artista de cojones, Blake Patterson, cavenderamy, tapetenpics, DLSimaging, Andy / Andrew Fogg, Scott, Justin Ruckman, espring4224, objectivised, Li-Ji, Bruno Kussler Marques, and BurnAway.
\section{Introduction} It has long been known that ``branes'', i.e.~massive objects with a number of worldvolume and transverse directions, play a crucial role in string theory and M-theory. Historically, the first example of a brane other than a string was the eleven-dimensional supermembrane \cite{Bergshoeff:1987cm}. An important class of branes consists of the Dirichlet branes or, shortly, D-branes of ten-dimensional superstring theory \cite{Polchinski:1995mt}. These branes are non-perturbative in the sense that their brane tension scales with the inverse of the string coupling constant. D-branes played a decisive role in the calculation of the entropy of a certain class of black holes \cite{Strominger:1996sh}. Branes also play a central role in the AdS/CFT correspondence \cite{Maldacena:1997re} and the brane-world scenario \cite{Randall:1999ee}. Much information about branes in string theory and/or M-theory can be obtained by studying the low-energy approximation of these theories, which is a supergravity theory that realizes the gauging of a specific supersymmetry algebra. For instance, the mere fact that eleven-dimensional supergravity contains a 3-form potential is already indicative of the fact that M-theory contains a membrane, since 3-forms naturally couple to membranes. The fact that this membrane is actually a supermembrane which breaks half of the supersymmetry follows from the construction of a kappa-symmetric supermembrane action \cite{Bergshoeff:1987cm}. The occurrence of an eleven-dimensional supermembrane can also be deduced from the presence of a 2-form central charge in the eleven-dimensional super-Poincar\'e algebra \cite{de Azcarraga:1989gm}. Given their relevance, it is important to classify the branes of string theory and/or M-theory. One way to do this is to scan the possible $(p+1)$-forms in supergravity and verify whether they may couple to a supersymmetric brane by investigating the corresponding kappa-symmetric worldvolume action.
In the case of $D$-dimensional supergravity with maximal supersymmetry such an investigation has been done for all $(p+1)$-forms with $0 \le p \le D-4$. One finds that to each $(p+1)$-form potential there corresponds precisely one half-supersymmetric $p$-brane. In the case that the potential transforms according to a certain representation of the U-duality group one finds as many half-supersymmetric branes as the dimension of that U-duality representation. One may wonder, given the above result, what more information about branes can be extracted from the low-energy supergravity theory. The reason why more information can be extracted is that our knowledge about the general structure of a supergravity theory has improved considerably in recent years. Until a few years ago most of our knowledge about the $(p+1)$-forms of supergravity was restricted to the ones with $0\le p \le D-4$. Note that all such $(p+1)$-forms describe physical degrees of freedom of the supergravity multiplet and that some potentials are related to each other by electromagnetic duality\,\footnote{ In general a $(p+1)$-form potential in $D$ dimensions is dual to a $(D-p-3)$-form potential.}. A common feature of these potentials is that they all couple to a brane whose number of transverse directions is three or more. Such branes approach flat Minkowski spacetime asymptotically and have a finite energy density. We will call such branes ``standard'' branes. In this work we will focus on the branes that have fewer than three transverse directions and compare them with the standard branes. These so-called ``non-standard'' branes couple to $(p+1)$-form potentials with $p=D-3$, $p=D-2$ or $p=D-1$. A special class is formed by the $(D-2)$-form potentials of supergravity. These potentials are special in the sense that they are dual to 0-form potentials, or scalars, but the duality relations do not imply that the number of $(D-2)$-form potentials is equal to the number of scalars.
The $(D-2)$-form potentials couple to so-called ``defect branes'', i.e.~branes with two transverse directions. In four dimensions they occur as cosmic strings \cite{Greene:1989ya} while in ten dimensions they are the seven-branes \cite{Gibbons:1995vg} that underlie F-theory \cite{Vafa:1996xn}. Defect branes differ from standard branes in the sense that they are not asymptotically flat and cannot be given finite energy unless one takes several of them in the presence of an orientifold. Another noteworthy feature is that the number of $(D-2)$-form potentials is not equal to the number of half-supersymmetric $(D-3)$-branes \cite{Bergshoeff:2011se}. This result is based on an analysis of the Wess-Zumino (WZ) terms in the world-volume action of a single $(D-3)$-brane, see, e.g., \cite{Bergshoeff:2010xc}. Based on U-duality arguments we know that, in those cases in which a gauge-invariant WZ term consistent with world-volume supersymmetry can be constructed, a kappa-symmetric worldvolume action exists. Furthermore, we expect that configurations of $(D-3)$-branes with finite energy can be constructed using the same techniques as in ten dimensions. It is natural to extend the discussion of the non-standard $(D-3)$-branes to the non-standard branes with one and zero transverse directions. Such branes are called ``domain walls'' and ``space-filling branes'', respectively. Domain walls play an important role in the AdS/CFT correspondence since they describe the renormalization group flow of the boundary conformal field theory. Space-filling branes are used in string theory to define strings with sixteen supercharges.
Domain walls and space-filling branes are even more special than the defect branes in the sense that they couple to potentials that do not describe any physical degree of freedom in the corresponding supergravity theory\,\footnote{Note that the $(D-1)$-form potentials that couple to domain walls are dual to an integration constant such as a gauge coupling constant or a mass parameter.}. Much less was known about these $(D-1)$-form and $D$-form potentials because, unlike the $(p+1)$-form potentials with $0\le p\le D-4$, their existence does not follow from the representation theory of the supersymmetry algebra. One of the remarkable developments in our knowledge of supergravity in recent years is the full classification of all $(D-1)$-form and $D$-form potentials that can be added to maximal supergravity. This has been achieved using three different techniques. By an explicit verification of the supersymmetry algebra it was shown that IIA and IIB supergravity allow such potentials, and a classification, including the U-duality representations in the case of IIB supergravity, was given \cite{Bergshoeff:2005ac}. Although in principle possible, it is very impractical to extend this method to all lower dimensions. Fortunately, it turns out that a full classification for all dimensions $3\le D\le 11$ can be given \cite{Riccioni:2007au,Bergshoeff:2007qi} making use of the properties of the very extended Kac-Moody algebra $E_{11}$ \cite{West:2001as}. Remarkably, a full classification, including all dimensions lower than ten, has independently been given using the so-called embedding tensor technique \cite{deWit:2008ta}. Given the $(p+1)$-forms and their U-duality representations, the next question to answer is how many components of these U-duality representations correspond to half-supersymmetric $p$-branes. For the standard branes the answer is simple: each component of the U-duality representation corresponds to a half-supersymmetric brane.
However, for the half-supersymmetric non-standard branes the answer is less clear. By demanding that a gauge-invariant WZ term, consistent with worldvolume supersymmetry, can be constructed, the half-supersymmetric non-standard branes of maximal supergravity have been classified in our earlier work \cite{Bergshoeff:2010xc,Bergshoeff:2011qk,Bergshoeff:2012ex}. An alternative derivation, based upon the counting of the real roots of the very extended Kac-Moody algebra $E_{11}$, has been given in \cite{Kleinschmidt:2011vu}. It is the purpose of this work to give a simple and elegant group-theoretical explanation of why the ``WZ method'' of \cite{Bergshoeff:2010xc,Bergshoeff:2011qk,Bergshoeff:2012ex} and the ``real-root method'' of \cite{Kleinschmidt:2011vu} give the same result. In general, given a supergravity theory with a $(p+1)$-form potential in a specific U-duality representation, the half-supersymmetric branes resulting from the WZ-term analysis correspond to the weights that can be chosen as highest weights of that U-duality representation\,\footnote{If, for given $p$, there are several irreducible U-duality representations, the half-supersymmetric branes belong to the highest-dimensional representation.}. A U-duality representation typically has weights of different lengths, and the weights that can be chosen as highest weights are those of maximum length. This simple observation leads to a way of counting the half-supersymmetric branes by counting the longest weights of the corresponding U-duality representation. As will be explained in more detail in the conclusions, the longest weights of the U-duality representation of a field corresponding to a brane precisely correspond to the real roots of $E_{11}$. The above ``longest-weight rule'' explains several properties of the standard and non-standard branes we already mentioned.
It turns out that all $(p+1)$-forms that couple to the standard branes only occur in U-duality representations in which all weights have equal length and hence are longest weights. This explains why for standard branes each component of the U-duality representation corresponds to a half-supersymmetric brane. The U-duality representations corresponding to $(p+1)$-forms that couple to the non-standard branes are different: they have weights of different lengths and only the longest weights correspond to the half-supersymmetric non-standard branes. Such representations have the defining property that they contain more than one so-called {\sl dominant weight}, a notion that we will explain in the main text of this work. An interesting special case is formed by the $(p+1)$-forms that couple to the defect branes. These $(p+1)$-forms are always in the adjoint representation of the U-duality group $G$. These representations have the property that all weights are longest weights except for the zero weights corresponding to the Cartan generators. This explains the result of \cite{Bergshoeff:2011se} that out of the $\text{dim}\, G$ $(p+1)$-forms that couple to the defect branes only $\text{dim}\, G - \text{rank}\, G$ components couple to half-supersymmetric defect branes. For instance, IIB supergravity has three 8-form potentials that transform as the ${\bf 3}$ of SL(2,$\mathbb{R}$). Only two of them couple to a half-supersymmetric defect brane: the D7-brane and its S-dual. There is a second crucial difference between standard and non-standard branes: while for standard branes there is a one-to-one relation between half-supersymmetric branes and the BPS conditions they satisfy, in the case of non-standard branes this relation is many-to-one, i.e.~several non-standard branes may satisfy the same BPS condition~\cite{Bergshoeff:2011se,Bergshoeff:2012pm,Bergshoeff:2012jb}. This implies that, unlike the standard branes, the non-standard branes may form bound states that satisfy the same BPS condition.
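As a minimal illustration of this counting (using standard $sl(2)$ weight conventions rather than anything specific to the supergravity analysis), one can make the IIB example fully explicit:

```latex
% The 8-form potentials of IIB transform in the adjoint, the ${\bf 3}$ of
% SL(2,$\mathbb{R}$), whose weights are $\{\alpha, 0, -\alpha\}$. The two nonzero
% weights are the longest ones and correspond to the half-supersymmetric
% D7-brane and its S-dual, while the zero weight, associated with the Cartan
% generator, does not correspond to a brane:
\begin{equation*}
\#\,\text{half-supersymmetric defect branes} \, = \,
\text{dim}\, G - \text{rank}\, G \, = \, 3 - 1 \, = \, 2 \ .
\end{equation*}
```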
In this work we will explain why this property holds from a purely group-theoretical point of view by comparing the U-duality representations of the $(p+1)$-forms with the $R$-symmetry representations of the central charges in the supersymmetry algebra. In this way we are able to derive the explicit degeneracies of the different BPS conditions, i.e.~how many branes satisfy the same BPS condition. In this work we will also point out a third difference between the behaviour of the standard and non-standard branes, which concerns the brane orbits. Given a half-supersymmetric brane one can consider its orbit under the action of the U-duality symmetry group. All half-supersymmetric branes in maximal supergravity define highest-weight orbits. These highest-weight orbits are single-charge orbits. In the case of standard branes it has been shown that, if not all longest weights can be reached from the initial configuration by an infinitesimal transformation of the group $G$ (that is, a transformation generated by the corresponding Lie algebra $g$), one can consider a two-charge state that is the sum of the initial state and a state that cannot be reached from it. One can then compute the orbit of this two-charge configuration. In case the single-charge and two-charge configurations do not fill up the full U-duality representation, one continues to consider three-charge configurations, etc. This procedure can be iterated until one has a configuration from which all the weights can be reached \cite{Lu:1997bg}. In \cite{Bergshoeff:2012ex} we applied this method to compute the single-charge orbits for all the non-standard branes. In this work we will show how the multi-charge orbits of the non-standard branes can be calculated as well. A crucial difference with the standard-brane orbits will be the existence of half-supersymmetric multi-charge orbits.
We will furthermore show how the different standard and non-standard brane orbits can be characterized in terms of invariants of the U-duality group \cite{Ferrara:1997ci}. This work is organized as follows. In section 2 we show the relation between half-supersymmetric branes and the longest weights of the U-duality representation of the $(p+1)$-forms that couple to these branes. In particular, we will clarify the longest-weight rule mentioned earlier and use it to explain the number of standard and non-standard $p$-branes as compared to the number of U-duality components of the $(p+1)$-form potentials. Next, in section 3 we focus on a second difference between standard and non-standard branes, which concerns the supersymmetry properties. More precisely, we discuss the relation between the BPS conditions and the central charges in the supersymmetry algebra and calculate the degeneracies of the different BPS conditions. We will show that, unlike the standard branes, different non-standard branes may satisfy the same BPS condition. Finally, in section 4 we discuss the difference between the standard-brane and non-standard-brane orbits. We first review the standard-brane orbits and next show how to compute the orbits of the non-standard branes, including the multi-charge orbits. We furthermore give the U-duality invariant that characterizes the different orbits. Our conclusions are presented in the last section. \section{Weights of half-supersymmetric branes} In this section we will show that the potentials associated to standard branes belong to irreducible representations with only one dominant weight, which is the highest weight of the representation, while the potentials associated to non-standard branes belong to irreducible representations with more than one dominant weight.
If a representation contains more than one dominant weight, each dominant weight other than the highest weight defines a sub-representation whose weights are shorter than the highest weight, while if a representation has only one dominant weight, this means that all the weights have the same length. We will show that all half-supersymmetric branes correspond to the longest weights in the irreducible representation of the potential. In particular, this explains why the number of standard branes always coincides with the dimension of the corresponding representation, while the number of non-standard branes is less than the dimension of the corresponding representation. In order to make all these statements clear, we will give in the first part of this section a review of the Lie algebra tools that are needed to understand them\,\footnote{For a pedagogical introduction to Lie algebras, see e.g. \cite{Cahn:1985wk}.}. In the second part of this section we will proceed with identifying the branes with the longest weights within each irreducible representation in any dimension. The simple Lie algebra $sl(2)$ is the prototype of any simple finite-dimensional Lie algebra. The generators of $sl(2)$ are the Cartan generator $L_3$ and the creation and annihilation operators $L_+$ and $L_-$. The commutator between the Cartan generator $L_3$ and the $L_\pm$ generators is given by \begin{equation} [ L_3 , L_\pm ] = \pm L_\pm \quad . 
\end{equation} Similarly, for any simple Lie algebra $g$ of dimension $d$ and with Cartan subalgebra $h$ of dimension $r$, the $d-r$ generators which are not Cartan can be split into $(d-r)/2$ creation operators $E_\alpha$ and $(d-r)/2$ annihilation operators $E_{-\alpha}$ obeying the commutation relations \begin{equation} [ H , E_{\pm \alpha} ] = \pm \alpha (H) E_{\pm \alpha} \end{equation} with the Cartan generators $H \in h$, where the roots $\pm \alpha (H)$ are linear functions of $H$\,\footnote{\label{positiverootfootnote}One defines $\alpha (H)$ as the {\it positive} roots, and their opposite as the {\it negative} ones. Clearly, this definition corresponds to the choice of which operators are creation operators and which ones are annihilation operators. We will make this more clear later.}. Moreover, for every $E_\alpha$ there is a corresponding $H_\alpha$ such that the root $\alpha (H)$ is proportional to the Cartan-Killing form $(H_\alpha , H)$. Thus, the Cartan-Killing form induces a scalar product $< \alpha ,\beta >$ in the space of roots, which is proportional to $(H_\alpha , H_\beta )$. One can then associate to each root a vector in an $r$-dimensional vector space with Euclidean signature. One then defines the {\it simple} roots $\alpha_1 , ..., \alpha_r$ as those such that all the other positive roots (see footnote~$^{\ref{positiverootfootnote}}$) can be obtained as positive sums of them. We consider as a simple example the roots of $sl(3)$, which are drawn in Fig.~\ref{rootssl3}. The simple roots are $\alpha_1$ and $\alpha_2$, while the other positive root is their sum $ \alpha_1+ \alpha_2$. Actually, in the diagram any pair of roots that form an angle of $2\pi /3$ can be chosen as simple roots. 
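For concreteness, these statements can be verified explicitly by writing the simple roots in an orthonormal basis, with a conventional normalization of the root lengths:

```latex
% Take $\alpha_1 = (1,0)$ and $\alpha_2 = (-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2})$. Then
\begin{equation*}
< \alpha_1 , \alpha_1 > \, = \, < \alpha_2 , \alpha_2 > \, = \, 1 \ , \qquad
< \alpha_1 , \alpha_2 > \, = \, -\tfrac{1}{2} \ ,
\end{equation*}
% so the two simple roots have equal length and form an angle $\theta$ with
% $\cos \theta = -\tfrac{1}{2}$, i.e.~$\theta = 2\pi/3$, while
\begin{equation*}
< \alpha_1 + \alpha_2 , \alpha_1 + \alpha_2 > \, = \, 1 \ ,
\end{equation*}
% confirming that all six roots of $sl(3)$ have the same length.
```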
The choice made in Fig.~\ref{rootssl3} defines the operators $E_{\alpha_1}$, $E_{\alpha_2}$ and $E_{\alpha_1 +\alpha_2}$ as ``creation'' operators, and correspondingly $E_{-\alpha_1}$, $E_{-\alpha_2}$ and $E_{-\alpha_1 -\alpha_2}$ are ``annihilation'' operators. \begin{figure} \centering \scalebox{1} { \begin{pspicture}(0,-3.8992147)(9.81,3.9364839) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(4.8,-0.08921477)(3.38,2.3507853) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(4.8,-0.08921477)(1.98,-0.08921477) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(4.8,-0.08921477)(6.22,2.3507853) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(4.8,-0.08921477)(3.38,-2.5292149) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(4.8,-0.08921477)(7.62,-0.08921477) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(4.8,-0.08921477)(6.22,-2.5292149) \usefont{T1}{ppl}{m}{n} \rput{55.888874}(0.58888847,-3.6979384){\rput(3.7434375,-1.2742147){\large $-\alpha_{1}-\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{60.39754}(3.8652787,-3.9893463){\rput(5.3234377,1.3457853){\large $\alpha_{1}+\alpha_{2}$}} \psline[linewidth=0.02cm](1.0,-0.08921477)(8.6,-0.08921477) \psline[linewidth=0.02cm](4.8,2.3107853)(4.8,-2.4892147) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](4.8,-3.8892148)(4.8,-1.6892148) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](4.8,1.9107852)(4.8,3.5107853) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](0.0,-0.08921477)(1.8,-0.08921477) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](8.0,-0.08921477)(9.8,-0.08921477) \usefont{T1}{ppl}{m}{n} \rput(6.7434373,0.10578523){\large $\alpha_{1}$} 
\usefont{T1}{ppl}{m}{n} \rput{-56.242115}(3.9211056,4.189283){\rput(5.8434377,-1.5542147){\large $-\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput(3.1634376,0.12578523){\large $-\alpha_{1}$} \usefont{T1}{ppl}{m}{n} \rput{-59.27895}(0.5316899,4.2266393){\rput(3.9434376,1.6657852){\large $\alpha_{2}$}} \pscircle[linewidth=0.02,dimen=outer,fillstyle=solid](4.8,-0.08921477){0.2} \end{pspicture} } \caption{The roots of the Lie algebra $sl(3)$. The roots are painted in red because they are the six longest weights of the ${\bf 8}$. In general, for any $sl(3)$ representation, we paint in red the longest weights of the representation. \label{rootssl3}} \end{figure} In $sl(2)$, irreducible representations are labelled by $j_{\rm max}$ (which takes integer or half-integer positive values), which is the eigenvalue of $L_3$ with eigenvector ${\bf j_{\rm max}}$ annihilated by $L_+$. Acting with $L_-$, one lowers the $L_3$ eigenvalue by 1. Proceeding this way, one can lower the eigenvalue down to $-j_{\rm max}$, whose corresponding eigenvector ${\bf - j_{\rm max}} $ is annihilated by $L_-$. This altogether forms a representation of dimension $2j_{\rm max} +1$. Analogously, in a generic simple Lie algebra $g$, irreducible representations are labelled by eigenstates of the Cartan generators (i.e.~weight vectors) ${\bf W_{\rm max} }$ of eigenvalue (weight) $W_{\rm max}(H)$, such that $E_{\alpha_i} {\bf W_{\rm max} } =0$ for all simple roots $\alpha_i$\,\footnote{This implies that $E_{\alpha} {\bf W_{\rm max} }$ vanishes for all positive roots $\alpha$.}. Such weights are called {\it highest weights}. Acting with $E_{-\alpha_i}$ on ${\bf W_{\rm max} }$, one either gets zero or a weight vector ${\bf W_{\rm max} - \alpha_i }$ of eigenvalue $W_{\rm max}(H) - \alpha_i (H)$. One can then keep acting with $E_{-\alpha_i}$ until one finds a $q_i$ such that \begin{equation} ( E_{-\alpha_i} )^{q_i+1} {\bf W_{\rm max} } =0 \ .
\end{equation} Exactly as for the roots, for every weight vector ${\bf W } $ there is a corresponding Cartan generator $H_W$ such that the weight $W (H)$ is proportional to the Cartan-Killing form $(H_W , H)$. Thus, one can define a scalar product $<W, \alpha >$ between the weight and the roots, and draw the weight on the $r$-dimensional vector space of the roots. In terms of the scalar product, $q_i$ is then given by the relation \begin{equation} q_i = \frac{2 <W_{\rm max} , \alpha_i > }{<\alpha_i , \alpha_i >}\quad ,\label{qihighestweight} \end{equation} where clearly the $q_i$'s must be non-negative for consistency. For a generic weight vector (not a highest-weight vector) ${\bf W}$, one can similarly obtain $m_i -p_i$, such that \begin{equation} ( E_{-\alpha_i} )^{m_i+1} {\bf W } = ( E_{\alpha_i} )^{p_i+1} {\bf W } =0 \end{equation} with non-negative $m_i$ and $p_i$, from the relation \begin{equation} m_i - p_i = \frac{2 <W , \alpha_i > }{<\alpha_i , \alpha_i >}\quad .\label{Dynkinlabelsanyweight} \end{equation} \begin{figure} \centering \scalebox{1} { \begin{pspicture}(0,-4.8498244)(10.414945,4.8573036) \psline[linewidth=0.02cm](1.9865161,1.2114505)(8.386516,1.2114505) \psline[linewidth=0.02cm](5.1865163,3.4114504)(5.1865163,-2.7885494) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](5.1865163,-4.1885495)(5.1865163,-1.9885495) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](5.1865163,2.6114504)(5.1865163,4.2114506) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](0.98651606,1.2114505)(2.7865162,1.2114505) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](7.786516,1.2114505)(9.586516,1.2114505) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(5.1865163,1.2114505)(5.1865163,-2.1885495) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(5.174273,1.2014505)(8.118759,2.9014504) 
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(5.198759,1.2014505)(2.254273,2.9014504) \usefont{T1}{ppl}{m}{n} \rput{31.270283}(2.1499527,-3.0707326){\rput(6.5310473,2.3214505){$\frac{2}{3}\alpha_{1}+\frac{1}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{90.29318}(4.4618526,-5.387741){\rput(4.8810472,-0.45854953){$-\frac{1}{3}\alpha_{1}-\frac{2}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-32.20001}(-0.63939524,2.3961024){\rput(3.8010473,2.3214505){$-\frac{1}{3}\alpha_{1}+\frac{1}{3}\alpha_{2}$}} \end{pspicture} } \caption{\label{theweightsofthe3} The weights of the ${\bf 3}$ of $sl(3)$. All the weights have the same length and we paint them in red.} \end{figure} The quantities ${2 <W , \alpha_i > }/{<\alpha_i , \alpha_i >}$ are in general called {\it Dynkin labels}, and one denotes the representation in terms of the Dynkin labels of the highest weight as $\boxed{q_1 \ q_2 \ ...\ q_r }$. If $q_i \neq 0$, this means that $W_{\rm max} - \alpha_i$ is a weight, and one then obtains its Dynkin labels by subtracting from each $q_j$ the corresponding element of the $i$-th row of the Cartan matrix $A$: \begin{equation} A_{ij} = 2 \frac{<\alpha_i , \alpha_j> }{<\alpha_j , \alpha_j >}\quad . \end{equation} One can then read from eq. \eqref{Dynkinlabelsanyweight} which $m_j$'s are different from zero (using the fact that $p_j = \delta_{ij}$ because the weight was obtained by subtracting $\alpha_i$ from the highest weight), and correspondingly one can construct the weight $W_{\rm max} -\alpha_i -\alpha_j$, whose Dynkin labels are obtained by subtracting the $j$-th row of the Cartan matrix from the previous ones. The full representation is constructed by iterating this procedure, that is, by repeatedly subtracting simple roots. One can show that one can never act on a weight with a raising operator in a direction different from the one the weight comes from without annihilating it.
This means that at each stage one knows the value of all $p_j$'s because one knows how each weight is related to the previous ones. Thus, given the Dynkin labels of the highest weight, there is a simple iterative procedure to reconstruct all the weights of the representation. The representation is complete when one obtains a weight such that all $m_i$'s are zero. Such a weight is called the {\it lowest weight}. \begin{figure} \centering \scalebox{1} { \begin{pspicture}(0,-5.1296544)(11.604319,4.1296754) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(5.7536855,-0.08980105)(2.9377182,1.5571133) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(5.7536855,-0.08980105)(8.577701,1.5432748) \psline[linewidth=0.02cm](8.754422,-0.08931123)(2.5544224,-0.08931123) \psline[linewidth=0.02cm](5.743871,-4.089789)(5.754422,2.1106887) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](5.7679167,3.3101814)(5.762519,1.1101881) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](5.7448525,-3.4897902)(5.7409267,-5.0897856) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](9.953667,-0.10550432)(8.153672,-0.10108778) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](2.6344223,-0.08931123)(1.8544223,-0.08931123) \psdots[dotsize=0.12,dotangle=179.85942](5.7536855,-0.08980105) \usefont{T1}{ppl}{m}{n} \rput{89.93101}(7.127371,-3.6461434){\rput(5.3589535,1.7606888){$\frac{1}{3}\alpha_{1}+\frac{2}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-32.132645}(0.008459474,2.4391885){\rput(4.2089534,1.2206888){$-\frac{2}{3}\alpha_{1}+\frac{2}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-30.994379}(1.5553126,3.8991597){\rput(7.7789536,-0.83931124){$\frac{1}{3}\alpha_{1}-\frac{1}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{31.240772}(0.12812975,-2.0884645){\rput(3.7689536,-0.7993112){$-\frac{2}{3}\alpha_{1}-\frac{1}{3}\alpha_{2}$}}
\usefont{T1}{ppl}{m}{n} \rput{90.00887}(3.624694,-7.134318){\rput(5.3489537,-1.7393112){$-\frac{2}{3}\alpha_{1}-\frac{4}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{30.055645}(1.5019294,-3.4245937){\rput(7.0989537,1.1006888){$\frac{4}{3}\alpha_{1}+\frac{2}{3}\alpha_{2}$}} \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(5.7536855,-0.08980105)(5.7456865,-3.3497913) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(5.7536855,-0.08980105)(7.1716695,-0.91328275) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(5.7536855,-0.08980105)(4.331678,-0.90631443) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(5.7536855,-0.08980105)(5.7577095,1.550194) \end{pspicture} } \caption{\label{the6ofsl3} The weights of the {\bf 6} of $sl(3)$. We have painted in red the three longest weights. } \end{figure} It is instructive to consider the simple example of $sl(3)$, whose Cartan matrix is \begin{equation} \begin{pmatrix} 2 & -1\\ -1 & 2 \end{pmatrix} \quad , \end{equation} as can be deduced from Fig.~\ref{rootssl3}. The lowest-dimensional representation is the ${\bf 3}$, whose highest weight is denoted by the Dynkin labels $\boxed{1 \ 0}$. Writing $W_{\rm max}^{ \bf 3}$ as a linear combination of the simple roots and using eq. \eqref{qihighestweight} with $q_1=1$ and $q_2=0$, one derives \begin{equation} W_{\rm max}^{ \bf 3} = \frac{2}{3} \alpha_1 + \frac{1}{3} \alpha_2 \quad . \end{equation} From the fact that $q_1 =1$ and $q_2 =0$ one obtains the weight $W_{\rm max }^{\bf 3} - \alpha_1 $, whose Dynkin labels are $\boxed{ -1 \ 1}$. We know that $p_1 =1$, which implies $m_1=0$, and $p_2 =0$, which implies $m_2 =1$. We can then write the weight $W_{\rm max }^{\bf 3} - \alpha_1 -\alpha_2$, with Dynkin labels $\boxed{0 \ -1}$. We know that this weight has $p_1=0$ and $p_2 =1$, which imply that all $m_i$'s are zero.
This is the lowest weight. All the weights of the representation are drawn in Fig.~\ref{theweightsofthe3}. As another example, we consider the adjoint of $sl(3)$, whose highest weight has Dynkin labels $\boxed{1 \ 1 }$, which gives \begin{equation} W_{\rm max}^{\bf 8} = \alpha_1 + \alpha_2 \quad . \end{equation} Using the technique that we have just reviewed, one obtains all the weights of this representation, which are the roots of Fig.~\ref{rootssl3}. In general, it can happen that the Dynkin labels of a weight are all non-negative. In such a case one calls the corresponding weight a {\it dominant weight}. The highest weight is clearly a dominant weight, but the converse is not necessarily true: there can be dominant weights that are not highest weights. We consider as an example the ${\bf 6}$ of $sl(3)$. The Dynkin labels of the highest weight are $\boxed{2 \ 0}$, corresponding to \begin{equation} W_{\rm max}^{\bf 6} = \frac{4}{3} \alpha_1 + \frac{2}{3} \alpha_2 \quad . \end{equation} The weights of the representation are shown in Fig.~\ref{the6ofsl3}.
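As an aside, the iterative procedure reviewed above is easy to automate. The following sketch (our own notation, not part of the original construction) takes the Dynkin labels of a highest weight and the $sl(3)$ Cartan matrix, and generates the set of distinct weights by subtracting rows of the Cartan matrix whenever $m_i = p_i + (\text{$i$-th Dynkin label}) > 0$; weight multiplicities are not tracked:

```python
from collections import deque

# Cartan matrix of sl(3) (assumed example; any simple Lie algebra works)
A = ((2, -1), (-1, 2))

def shift(w, row, sign):
    """Add (sign=+1) or subtract (sign=-1) a simple root, i.e. a row of
    the Cartan matrix, to a weight given by its Dynkin labels."""
    return tuple(a + sign * b for a, b in zip(w, row))

def weights_of_irrep(highest, cartan=A):
    """Set of distinct weights (as Dynkin labels) of the irreducible
    representation with the given highest weight, built by the iterative
    procedure described in the text.  Multiplicities are not tracked."""
    weights = {tuple(highest)}
    queue = deque(weights)
    while queue:
        w = queue.popleft()
        for i, row in enumerate(cartan):
            # p_i: how far the alpha_i-string extends upwards from w
            p, up = 0, shift(w, row, +1)
            while up in weights:
                p, up = p + 1, shift(up, row, +1)
            if p + w[i] > 0:                      # m_i = p_i + q_i > 0
                down = shift(w, row, -1)
                if down not in weights:
                    weights.add(down)
                    queue.append(down)
    return weights
```

For instance, `weights_of_irrep((1, 0))` returns the three weights `{(1, 0), (-1, 1), (0, -1)}` of the ${\bf 3}$, while the ${\bf 8}$ and the ${\bf 15}$ give 7 and 12 distinct weights, consistent with the weight $\boxed{0 \ 0}$ having multiplicity 2 and with the three doubled short weights of the ${\bf 15}$.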
\begin{figure} \centering \scalebox{.8} { \begin{pspicture}(0,-6.7350965)(14.601648,5.7372804) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(7.00721,0.20367658)(7.008521,-1.4363229) \psdots[dotsize=0.12,dotangle=180.04582](7.00721,0.20367658) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(7.00721,0.20367658)(9.82852,-1.4340677) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(7.00721,0.20367658)(5.5904727,-3.8774576) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(7.00721,0.20367658)(11.246552,1.0270672) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(7.00721,0.20367658)(8.426554,1.0248119) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(7.00721,0.20367658)(4.188522,-1.4385781) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(7.00721,0.20367658)(9.824601,3.4659307) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(7.00721,0.20367658)(5.586554,1.0225407) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0]{->}(7.00721,0.20367658)(7.0046024,3.4636755) \usefont{T1}{ppl}{m}{n} \rput{67.83036}(1.1150942,-6.823015){\rput(5.601501,-2.5664835){$-\frac{4}{3}\alpha_{1}-\frac{5}{3}\alpha_{2}$}} \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(7.00721,0.20367658)(2.766555,1.0202855) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(7.00721,0.20367658)(4.184603,3.4614203) \psline[linewidth=0.06cm,arrowsize=0.05291667cm 4.0,arrowlength=1.4,arrowinset=0.0,linecolor=red]{->}(7.00721,0.20367658)(8.430472,-3.8751864) \usefont{T1}{ppl}{m}{n} 
\rput{-30.23159}(-0.22843747,2.5097709){\rput(4.501501,1.6935165){$-\frac{1}{3}\alpha_{1}+\frac{1}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-26.833107}(1.2118564,3.9958003){\rput(8.951501,-0.5264835){$\frac{2}{3}\alpha_{1}-\frac{2}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-89.22752}(9.334733,5.256912){\rput(7.301501,-2.0864835){$-\frac{1}{3}\alpha_{1}-\frac{2}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{31.781218}(2.2211142,-4.6066647){\rput(9.171501,1.6135166){$\frac{2}{3}\alpha_{1}+\frac{1}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-68.029816}(7.627771,6.0977616){\rput(8.301501,-2.5864835){$-\frac{1}{3}\alpha_{1}-\frac{5}{3}\alpha_{2}$}} \psline[linewidth=0.02cm](12.207208,0.20783512)(1.6072114,0.19935809) \psline[linewidth=0.02cm](7.0104084,-3.796322)(7.003851,4.403675) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](7.002571,6.0036745)(7.0043306,3.8036754) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](7.0100884,-3.3963223)(7.0113683,-4.9963217) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](13.4072075,0.20879479)(11.607208,0.20735529) \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](2.0072112,0.19967797)(0.20721175,0.19823848) \psdots[dotsize=0.12,dotangle=180.04582](7.00721,0.20367658) \usefont{T1}{ppl}{m}{n} \rput{-49.04286}(-0.7599882,4.80948){\rput(4.8615007,3.2535164){$-\frac{1}{3}\alpha_{1}+\frac{4}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{90.81805}(8.754715,-4.7151318){\rput(6.671501,1.9735166){$\frac{2}{3}\alpha_{1}+\frac{4}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{11.196235}(0.42397848,-1.9700599){\rput(10.231501,1.1935165){$\frac{5}{3}\alpha_{1}+\frac{1}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{48.894444}(5.491604,-5.7242293){\rput(9.011501,3.1935165){$\frac{5}{3}\alpha_{1}+\frac{4}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-10.659738}(-0.10541875,0.8254959){\rput(4.3415008,0.99351656){$-\frac{4}{3}\alpha_{1}+\frac{1}{3}\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} 
\rput{28.546108}(0.39093283,-2.501239){\rput(5.081501,-0.46648347){$-\frac{4}{3}\alpha_{1}-\frac{2}{3}\alpha_{2}$}} \end{pspicture} } \caption{\label{the15ofsl3} The weights of the ${\bf 15}$ of $sl(3)$. The three shortest weights have multiplicity 2. We paint in red the 6 longest weights. } \end{figure} The weight $W_{\rm max}^{\bf 6} - \alpha_1 = \frac{1}{3}\alpha_1 + \frac{2}{3} \alpha_2$ has Dynkin labels $\boxed{0 \ 1}$ and is thus a dominant weight. If one considered this as a highest weight, one would obtain the sub-representation ${\bf \overline{3}}$, which corresponds to the black weights in the figure. The black weights are shorter than the red ones (in particular, one can notice that the difference of the squared lengths is equal to the squared length of the roots). This result is completely general: dominant weights other than the highest weight give rise to sub-representations whose weights are shorter than the highest weight. Only representations with one dominant weight (i.e.~the highest weight) have all weights of the same length. The case of the adjoint representation is actually a particular case of a representation with more than one dominant weight. Indeed, the Cartan generators, whose Dynkin labels are all zero, are a degenerate case of a dominant weight. As an additional example we consider the ${\bf 15}$, whose weights are shown in Fig.~\ref{the15ofsl3}. The Dynkin labels of the highest weight are $\boxed{ 2 \ 1}$, giving \begin{equation} W_{\rm max}^{\bf 15} = \frac{5}{3} \alpha_1 + \frac{4}{3} \alpha_2 \quad. \end{equation} The dominant weight $\frac{2}{3} \alpha_1 + \frac{4}{3}\alpha_2$ has Dynkin labels $\boxed{ 0 \ 2 }$, while the dominant weight $\frac{2}{3}\alpha_1 + \frac{1}{3} \alpha_2$ (with multiplicity 2) has Dynkin labels $\boxed{1 \ 0}$. As is clear from the figure, there are 6 long weights, 3 medium weights and 6 (3 with multiplicity 2) short weights.
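The length comparisons quoted above can be checked directly. In the sketch below (our own notation; we normalise the roots to squared length 2) the squared length of a weight with Dynkin labels $d$ is $d^T A^{-1} d$, with $A$ the $sl(3)$ Cartan matrix:

```python
from fractions import Fraction as F

# Inverse of the sl(3) Cartan matrix: A^(-1) = (1/3) [[2, 1], [1, 2]]
AINV = ((F(2, 3), F(1, 3)),
        (F(1, 3), F(2, 3)))

def length_sq(d):
    """Squared length <W, W> = d^T A^(-1) d of a weight with Dynkin
    labels d, with the roots normalised to squared length 2."""
    c = [sum(AINV[i][j] * d[j] for j in range(2)) for i in range(2)]
    return d[0] * c[0] + d[1] * c[1]

# Dominant weights of the 6 and of the 15 discussed in the text
six = [(2, 0), (0, 1)]
fifteen = [(2, 1), (0, 2), (1, 0)]
```

One finds squared lengths $8/3$ and $2/3$ for the dominant weights of the ${\bf 6}$, and $14/3$, $8/3$ and $2/3$ for those of the ${\bf 15}$: successive dominant weights differ by 2, the squared length of the roots, as stated above.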
\begin{figure} \centering \scalebox{1} { \begin{pspicture}(-1,-2.3322396)(2.281979,2.3322396) \psline[linewidth=0.02cm](1.9767709,-0.06015625)(0.75677085,-1.8601563) \psline[linewidth=0.02cm](0.17677084,1.8598437)(1.5367708,0.09984375) \usefont{T1}{ppl}{m}{n} \rput(0.4919271,-2.0251563){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{0 -1}} \usefont{T1}{ppl}{m}{n} \rput(0.40786457,1.9748437){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{1 0}} \usefont{T1}{ppl}{m}{n} \rput(1.695677,-0.02515625){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-1 1}} \end{pspicture} } \scalebox{1} { \begin{pspicture}(0,-2.3848958)(3.2985418,2.3848958) \psline[linewidth=0.02cm](0.76927084,2.1046875)(2.2892709,0.1446875) \psline[linewidth=0.02cm](2.3692708,-0.0153125)(1.1492709,-1.8153125) \usefont{T1}{ppl}{m}{n} \rput(0.9927083,2.0196874){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{1}$}} \usefont{T1}{ppl}{m}{n} \rput(2.1927083,0.0196875){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{2}$}} \usefont{T1}{ppl}{m}{n} \rput(0.9927083,-1.9803125){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{3}$}} \end{pspicture} } \caption{\label{3ofsl3weightsandcomponents} The Dynkin labels and the components of the ${\bf 3}$ of $sl(3)$. Note that the black lines in the left part of the Figure connect to given entries of the boxes. This indicates which root is subtracted from a box when going down the black line. In general, we paint in red all the Dynkin labels and components that are associated to the longest weights of an irreducible representation. In this case all the weights have the same length (see Fig.~\ref{theweightsofthe3}). 
} \end{figure} In order to determine the relation between the weights of a representation and the half-supersymmetric branes associated to the corresponding potential, it is instructive to consider the special case of $sl(n)$ algebras, where there is a natural action of the creation and annihilation operators $E_{\pm \alpha}$ and of the Cartan generators $H_{\alpha}$ on the fundamental representation in terms of components. Denoting by $M$ the index of the fundamental representation, the $n-1$ generators associated to the simple roots $E_{\alpha_i}$, $i=1,...,n-1$ are the upper-triangular matrices $(T_i{}^{i+1})_M{}^N$ whose entries are 1 for $M=i$, $N=i+1$, and zero otherwise, while the Cartan generators $H_{\alpha_i}$ are diagonal matrices $( T_i{}^i )_M{}^N$ whose entries are $1/2$ for $M=N=i$, $-1/2$ for $M=N=i+1$ and zero otherwise. The annihilation operators $E_{-\alpha_i}$ are equal to $E_{\alpha_i}^\dagger$. In $sl(n)$, adding $\alpha_i$ to $\alpha_j$ gives a root only if $i=j\pm 1$, and the root $\alpha_i + \alpha_{i+1}$ corresponds to the generator $E_{\alpha_i + \alpha_{i+1}}=[ E_{\alpha_i}, E_{\alpha_{i+1}} ]$. Realising the algebra in terms of $n\times n $ matrices as above, this leads to the matrix multiplication \begin{equation} (T_i{}^{i+1})_M{}^N (T_{i+1}{}^{i+2})_N{}^P = (T_i{}^{i+2})_M{}^P \ , \end{equation} which is the upper-triangular matrix whose entries are 1 for $M=i$, $P=i+2$, and zero otherwise. This generalises to all the positive roots: the sum of $k$ simple roots $\alpha_{i_1},\alpha_{i_2}, \alpha_{i_3},...,\alpha_{i_k}$, with $i_1 \leq i_2 \leq i_3 \leq...\leq i_k$, is a root only if $i_2 = i_1+1, i_3= i_1+2,...,i_k=i_1+k-1$, and the corresponding generator is the upper-triangular matrix $( T_{i_1}{}^{i_1 +k})_M{}^N$ whose entries are 1 for $M=i_1$, $N=i_1+k$, and zero otherwise. The whole set of positive roots thus gives all the possible real strictly upper-triangular $n\times n$ matrices.
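The matrix realisation just described is straightforward to verify explicitly. The following sketch (our own notation) builds the $sl(3)$ generators as $3\times 3$ matrices and checks the commutation relations quoted above:

```python
import numpy as np

def T(i, j, n=3):
    """The sl(n) matrix (T_i^j)_M^N: entry 1 for M = i, N = j
    (1-based indices), zero otherwise."""
    m = np.zeros((n, n))
    m[i - 1, j - 1] = 1.0
    return m

def comm(a, b):
    """Matrix commutator [a, b]."""
    return a @ b - b @ a

# Simple-root generators of sl(3) and the Cartan generator H_{alpha_1}
E1 = T(1, 2)                        # E_{alpha_1}
E2 = T(2, 3)                        # E_{alpha_2}
H1 = np.diag([0.5, -0.5, 0.0])      # H_{alpha_1}

# [E_{alpha_1}, E_{alpha_2}] = E_{alpha_1 + alpha_2} = T_1^3
E12 = comm(E1, E2)
```

Indeed `E12` equals `T(1, 3)` (the product `E1 @ E2` already gives it, since `E2 @ E1` vanishes), and `comm(H1, E1)` returns `E1`, i.e. $\alpha_1 (H_{\alpha_1}) = 1$.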
\begin{figure} \centering \scalebox{1} { \begin{pspicture}(-2,-4.3261456)(2.6788542,4.3261456) \psline[linewidth=0.02cm](0.27364585,-2.1803124)(0.9136458,-3.9003124) \psline[linewidth=0.02cm](2.3936458,-2.1803124)(1.5536458,-3.8603125) \psline[linewidth=0.02cm](1.0336459,-0.1403125)(1.8936459,-1.8603125) \psline[linewidth=0.02cm](1.4536458,-0.1003125)(0.7736458,-1.8603125) \psline[linewidth=0.02cm](0.11364584,1.8796875)(1.0136459,0.1596875) \psline[linewidth=0.02cm](2.4336457,1.9196875)(1.4736458,0.1796875) \psline[linewidth=0.02cm](0.9736458,3.8396876)(1.9136459,2.1196876) \psline[linewidth=0.02cm](1.5136459,3.8596876)(0.73364586,2.1596875) \usefont{T1}{ppl}{m}{n} \rput(1.1991146,3.9746876){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{1 1}} \usefont{T1}{ppl}{m}{n} \rput(1.1725521,-4.0253124){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-1 -1}} \usefont{T1}{ppl}{m}{n} \rput(2.0925522,-2.0253124){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-2 1}} \usefont{T1}{ppl}{m}{n} \rput(0.47739583,-2.0253124){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{1 -2}} \usefont{T1}{ppl}{m}{n} \rput(0.48723957,1.9746875){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{2 -1}} \usefont{T1}{ppl}{m}{n} \rput(2.0908334,1.9746875){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-1 2}} \usefont{T1}{ppl}{m}{n} \rput(1.2144271,-0.0253125){\large \psframebox[linewidth=0.02,fillstyle=solid]{0 0}} \end{pspicture} } \scalebox{1} { \begin{pspicture}(0,-4.384896)(8.618542,4.384896) \psline[linewidth=0.02cm](3.1292708,-2.1953125)(4.349271,-3.7553124) \psline[linewidth=0.02cm](5.269271,-2.2553124)(4.189271,-3.8353126) \psline[linewidth=0.02cm](4.249271,-0.2353125)(5.229271,-1.7153125) \psline[linewidth=0.02cm](4.309271,-0.2753125)(3.1692708,-1.7353125) 
\psline[linewidth=0.02cm](3.1092708,1.7646875)(4.309271,0.2646875) \psline[linewidth=0.02cm](5.269271,1.7646875)(4.209271,0.2046875) \psline[linewidth=0.02cm](4.269271,3.7046876)(5.249271,2.3046875) \psline[linewidth=0.02cm](4.289271,3.6446874)(3.1092708,2.2446876) \usefont{T1}{ppl}{m}{n} \rput(4.1927085,4.0196877){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{1}^{3}$}} \usefont{T1}{ppl}{m}{n} \rput(3.1927083,2.0196874){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{1}^{2}$}} \usefont{T1}{ppl}{m}{n} \rput(5.1927085,2.0196874){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{2}^{3}$}} \usefont{T1}{ppl}{m}{n} \rput(4.2527084,0.0196875){\large \psframebox[linewidth=0.02,fillstyle=solid]{$T_{1}^{1},T_{2}^{2},T_{3}^{3}$}} \usefont{T1}{ppl}{m}{n} \rput(4.1927085,-3.9803126){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{3}^{1}$}} \usefont{T1}{ppl}{m}{n} \rput(3.1927083,-1.9803125){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{3}^{2}$}} \usefont{T1}{ppl}{m}{n} \rput(5.1927085,-1.9803125){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{2}^{1}$}} \end{pspicture} } \caption{\label{adjofsl3weightsandcomponents} The Dynkin labels and the components of the weights of the adjoint representation of $sl(3)$. The red entries correspond to the roots (which are the longest weights of the representation). } \end{figure} Consider again $sl(3)$ as an example. In components, the highest weight of the ${\bf 3}$ corresponds to the first component $T_1$ of a column 3-vector $T_M$. Acting with $E_{-\alpha_1}$ leads to $T_2$ and then acting with $E_{-\alpha_2}$ leads to $T_3$. This is summarised in Fig.~\ref{3ofsl3weightsandcomponents}. 
On the left-hand side of the figure, we write down the Dynkin labels of the weights of Fig.~\ref{theweightsofthe3}, while on the right-hand side we identify each weight with the corresponding component of $T_M$. The same construction is given in Fig.~\ref{adjofsl3weightsandcomponents} for the case of the adjoint representation. In this case the highest weight is the root $\alpha_1 + \alpha_2$ with Dynkin labels $\boxed{1 \ 1 }$ and it corresponds to the third upper-triangular matrix $T_1{}^3$, which when acting on $T_1$ gives $T_3$. The Cartan generators are associated to the weight $\boxed{0 \ 0}$; they are the tensors $T_1{}^1$, $T_2{}^2$ and $T_3{}^3$ with $T_1{}^1+ T_2{}^2+ T_3{}^3 =0$, thus giving the multiplicity 2 of the weight. We finally consider the ${\bf 6} $ and the ${\bf 15}$ in Figs.~\ref{6ofsl3weightsandcomponents} and \ref{15ofsl3weightsandcomponents}. The ${\bf 6}$ is the symmetric product ${\bf 3} \otimes_{\rm S} {\bf 3}$, leading to the symmetric tensor $T_{MN} = T_{NM}$. The highest weight corresponds to the component $T_{11}$, and by comparing Figs.~\ref{6ofsl3weightsandcomponents} and \ref{the6ofsl3} one notices that the three long weights correspond to the components $T_{11}$, $T_{22}$ and $T_{33}$, while the short weights correspond to the components $T_{12}$, $T_{13}$ and $T_{23}$. These latter components transform exactly as the components of the antisymmetric tensor $T_{[MN]}$. This antisymmetric tensor corresponds to the ${\bf \overline{3}}$, with highest weight of Dynkin labels $\boxed{0 \ 1}$, which therefore explains the presence of this weight as a dominant weight of the ${\bf 6}$. The ${\bf 15}$ corresponds to the irreducible tensor $T_{MN}^P = T_{NM}^P$ satisfying $T_{1M}^1 + T_{2M}^2 + T_{3M}^3 =0$. The highest weight corresponds to the component $T_{11}^3$, and by comparing Figs.
\ref{15ofsl3weightsandcomponents} and \ref{the15ofsl3} one can notice that the six long weights correspond to the components $T_{MM}^N$, with $M\neq N$, the medium weights correspond to the components $T_{MN}^P$ with $M$, $N$ and $P$ all different and, finally, each short weight corresponds to the components $T_{1M}^1$, $T_{2M}^2$ and $T_{3M}^3$, their sum being equal to zero, which explains the multiplicity 2 of each of these weights. The components corresponding to the medium weights transform like $T^{PP}$, which are associated to the long weights of the representation ${\bf \overline{6}}$ whose highest weight has Dynkin labels $\boxed{0 \ 2 }$. This explains the presence of this weight as dominant weight of the ${\bf 15}$. The components corresponding to the short weights transform like the tensor $T_M$ in the ${\bf 3}$. The highest weight of this representation has Dynkin labels $\boxed{1 \ 0}$. This weight occurs as dominant weight of the ${\bf 15}$ with multiplicity 2. \begin{figure} \centering \scalebox{1} { \begin{pspicture}(-1,-4.3322396)(3.2785416,4.3322396) \psline[linewidth=0.02cm](1.9367708,-2.1401563)(0.7967708,-3.8601563) \psline[linewidth=0.02cm](2.956771,-0.10015625)(1.9567708,-1.9001563) \psline[linewidth=0.02cm](0.27677083,-0.14015625)(1.5167708,-1.9401562) \psline[linewidth=0.02cm](0.39677083,3.8798437)(1.3967708,2.1398437) \psline[linewidth=0.02cm](1.4967709,1.8598437)(2.456771,0.13984375) \psline[linewidth=0.02cm](1.8367709,1.8598437)(0.75677085,0.17984375) \usefont{T1}{ppl}{m}{n} \rput(0.49020833,-4.025156){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{0 -2}} \usefont{T1}{ppl}{m}{n} \rput(1.701302,-2.0251563){\large \psframebox[linewidth=0.02,fillstyle=solid]{-1 0}} \usefont{T1}{ppl}{m}{n} \rput(2.6939583,-0.02515625){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-2 2}} \usefont{T1}{ppl}{m}{n} \rput(1.611927,1.9748437){\large \psframebox[linewidth=0.02,fillstyle=solid]{0 1}} 
\usefont{T1}{ppl}{m}{n} \rput(0.61598957,3.9748437){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{2 0}} \usefont{T1}{ppl}{m}{n} \rput(0.48223957,-0.02515625){\large \psframebox[linewidth=0.02,fillstyle=solid]{1 -1}} \end{pspicture} } \scalebox{1} { \begin{pspicture}(0,-4.384896)(4.5785418,4.384896) \psline[linewidth=0.02cm](1.1692709,-0.1753125)(2.4492707,-2.1153126) \psline[linewidth=0.02cm](2.1292708,-2.2953124)(0.9892708,-4.0153127) \psline[linewidth=0.02cm](3.4092708,-0.0553125)(2.4092708,-1.8553125) \psline[linewidth=0.02cm](1.1892709,3.9246874)(2.1892707,2.1846876) \psline[linewidth=0.02cm](2.489271,1.7046875)(3.4492707,-0.0153125) \psline[linewidth=0.02cm](2.229271,1.9046875)(1.1492709,0.2246875) \usefont{T1}{ppl}{m}{n} \rput(1.3327084,4.0196877){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{11}$}} \usefont{T1}{ppl}{m}{n} \rput(2.3327084,2.0196874){\large \psframebox[linewidth=0.02,fillstyle=solid]{$T_{12}$}} \usefont{T1}{ppl}{m}{n} \rput(1.1327083,0.0196875){\large \psframebox[linewidth=0.02,fillstyle=solid]{$T_{13}$}} \usefont{T1}{ppl}{m}{n} \rput(3.3327084,0.0196875){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{22}$}} \usefont{T1}{ppl}{m}{n} \rput(2.3327084,-1.9803125){\large \psframebox[linewidth=0.02,fillstyle=solid]{$T_{23}$}} \usefont{T1}{ppl}{m}{n} \rput(1.1327083,-3.9803126){\large \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{33}$}} \end{pspicture} } \caption{\label{6ofsl3weightsandcomponents} The Dynkin labels and the components of the ${\bf 6}$ of $sl(3)$. The red entries correspond to the longest weights in the representation (see Fig. \ref{the6ofsl3}). } \end{figure} This finishes our short review of the group-theoretical tools that are needed to understand the relation between branes and weights as expressed by the ``longest-weight rule'' given in the introduction. 
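As a concrete cross-check of the dictionary between tensor components and weights used above, the sketch below (our own notation; roots normalised to squared length 2) assigns to each component $T_{MN}$ of the ${\bf 6}$ the sum of the corresponding weights of the fundamental and verifies that the longest weights are precisely the diagonal components $T_{MM}$:

```python
from fractions import Fraction as F
from itertools import combinations_with_replacement

# Weights (Dynkin labels) of the components T_1, T_2, T_3 of the 3
fund = {1: (1, 0), 2: (-1, 1), 3: (0, -1)}

def length_sq(d):
    """<W, W> = d^T A^(-1) d for sl(3), with A^(-1) = (1/3)[[2,1],[1,2]]."""
    return F(2, 3) * (d[0] * d[0] + d[0] * d[1] + d[1] * d[1])

# The 6 is the symmetric product 3 x_S 3: weight(T_MN) = weight(M) + weight(N)
weights_6 = {(M, N): tuple(a + b for a, b in zip(fund[M], fund[N]))
             for M, N in combinations_with_replacement((1, 2, 3), 2)}

longest = max(length_sq(w) for w in weights_6.values())
long_components = {c for c, w in weights_6.items() if length_sq(w) == longest}
```

Here `long_components` comes out as `{(1, 1), (2, 2), (3, 3)}`, i.e. $T_{11}$, $T_{22}$ and $T_{33}$ with squared length $8/3$, while the off-diagonal components all have squared length $2/3$, matching Figs.~\ref{6ofsl3weightsandcomponents} and \ref{the6ofsl3}.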
We now proceed with elucidating this longest-weight rule. But first we need to know what the actual U-duality representations of the different $(p+1)$-form potentials are. The U-duality representations of the potentials that couple to the standard branes have been determined long ago. They follow from the representation theory of the supersymmetry algebra. As explained in the introduction the U-duality representations of the potentials associated to all the non-standard branes of maximal supergravity have been determined over the last few years using three different techniques: closure of the supersymmetry algebra \cite{Bergshoeff:2005ac}, using properties of $E_{11}$ \cite{Riccioni:2007au,Bergshoeff:2007qi} and applying the embedding tensor technique \cite{deWit:2008ta}. \begin{figure} \centering \scalebox{1} { \begin{pspicture}(-1,-6.299427)(3.1544793,6.299427) \psline[linewidth=0.02cm](1.0483333,-2.1535938)(1.6883334,-3.8335938) \psline[linewidth=0.02cm](1.4483334,-2.1335938)(0.66833335,-3.9135938) \psline[linewidth=0.02cm](2.9083333,-2.0935938)(2.0683334,-3.8535938) \psline[linewidth=0.02cm](1.3083333,1.8464062)(0.66833335,0.06640625) \psline[linewidth=0.02cm](1.0283333,1.8664062)(1.6883334,0.10640625) \psline[linewidth=0.02cm](2.9683332,1.9464062)(2.0683334,0.10640625) \psline[linewidth=0.02cm](0.16833334,-0.13359375)(1.0683334,-1.8535937) \psline[linewidth=0.02cm](2.1483333,-0.15359375)(1.4683334,-1.9335938) \psline[linewidth=0.02cm](1.5883334,-0.13359375)(2.4083333,-1.8135937) \psline[linewidth=0.02cm](0.38833332,3.8464062)(1.0483333,2.1064062) \psline[linewidth=0.02cm](1.9483334,3.8664062)(1.3083333,2.0864062) \psline[linewidth=0.02cm](1.5083333,3.8664062)(2.4283333,2.1864061) \psline[linewidth=0.02cm](0.24833333,-4.0535936)(0.86833334,-5.893594) \psline[linewidth=0.02cm](2.0683334,-4.1335936)(1.2283334,-5.913594) \psline[linewidth=0.02cm](0.9683333,5.9064064)(1.8683333,4.0664062) \psline[linewidth=0.02cm](1.3083333,5.8664064)(0.38833332,4.106406) 
\usefont{T1}{ppl}{m}{n} \rput(1.1631771,5.976406){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{2 1}} \usefont{T1}{ppl}{m}{n} \rput(0.43614584,-0.02359375){\psframebox[linewidth=0.02,fillstyle=solid]{-2 2}} \usefont{T1}{ppl}{m}{n} \rput(0.42864582,-4.023594){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{1 -3}} \usefont{T1}{ppl}{m}{n} \rput(1.2344271,-2.0235937){\psframebox[linewidth=0.02,fillstyle=solid]{0 -1}} \usefont{T1}{ppl}{m}{n} \rput(1.8375521,-0.02359375){\psframebox[linewidth=0.02,fillstyle=solid]{-1 1}} \usefont{T1}{ppl}{m}{n} \rput(1.1061459,-6.023594){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-1 -2}} \usefont{T1}{ppl}{m}{n} \rput(1.8420833,-4.023594){\psframebox[linewidth=0.02,fillstyle=solid]{-2 0}} \usefont{T1}{ppl}{m}{n} \rput(2.6361458,-2.0235937){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-3 2}} \usefont{T1}{ppl}{m}{n} \rput(0.6325521,3.9764063){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{3 -1}} \usefont{T1}{ppl}{m}{n} \rput(1.1609896,1.9764062){\psframebox[linewidth=0.02,fillstyle=solid]{1 0}} \usefont{T1}{ppl}{m}{n} \rput(1.7630209,3.9764063){\psframebox[linewidth=0.02,fillstyle=solid]{0 2}} \usefont{T1}{ppl}{m}{n} \rput(2.6397395,1.9764062){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-2 3}} \end{pspicture} } \scalebox{1} { \begin{pspicture}(0,-6.245052)(4.940729,6.245052) \psline[linewidth=0.02cm](2.0958333,-2.1942186)(1.4958333,-3.7942188) \psline[linewidth=0.02cm](2.0958333,-2.1942186)(2.8958333,-3.7942188) \psline[linewidth=0.02cm](3.4958334,-1.9942187)(2.6958334,-3.7942188) \psline[linewidth=0.02cm](2.6958334,3.8057814)(3.2958333,2.4057813) \psline[linewidth=0.02cm](3.2958333,1.8057812)(2.6958334,0.40578124) \psline[linewidth=0.02cm](2.6958334,-0.19421875)(3.4958334,-1.7942188) \psline[linewidth=0.02cm](1.6358334,-4.1142187)(2.2558334,-5.954219) 
\psline[linewidth=0.02cm](3.0558333,-3.9942188)(2.2158334,-5.7742186) \psline[linewidth=0.02cm](1.3558333,0.00578125)(2.2558334,-1.7142187) \psline[linewidth=0.02cm](2.7358334,-0.01421875)(2.0558333,-1.7942188) \psline[linewidth=0.02cm](2.0958333,1.9857812)(1.4558333,0.20578125) \psline[linewidth=0.02cm](2.0158334,2.0057812)(2.6758332,0.24578124) \psline[linewidth=0.02cm](1.3758334,3.9857812)(2.0358334,2.2457812) \psline[linewidth=0.02cm](2.7358334,4.005781)(2.0958333,2.2257812) \psline[linewidth=0.02cm](1.9558333,6.045781)(2.8558333,4.2057815) \psline[linewidth=0.02cm](2.2958333,6.005781)(1.3758334,4.2457814) \usefont{T1}{ppl}{m}{n} \rput(1.6203645,-3.8842187){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{33}^{2}$}} \usefont{T1}{ppl}{m}{n} \rput(1.4203646,4.1157813){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{11}^{2}$}} \usefont{T1}{ppl}{m}{n} \rput(2.2203646,-5.8842187){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{33}^{1}$}} \usefont{T1}{ppl}{m}{n} \rput(1.6203645,0.11578125){\psframebox[linewidth=0.02,fillstyle=solid]{$T_{13}^{2}$}} \usefont{T1}{ppl}{m}{n} \rput(2.1203647,-1.8842187){\psframebox[linewidth=0.02,fillstyle=solid]{$T_{M3}^{M}$}} \usefont{T1}{ppl}{m}{n} \rput(2.2203646,5.915781){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{11}^{3}$}} \usefont{T1}{ppl}{m}{n} \rput(3.4203646,-1.8842187){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{22}^{1}$}} \usefont{T1}{ppl}{m}{n} \rput(3.0203645,-3.8842187){\psframebox[linewidth=0.02,fillstyle=solid]{$T_{23}^{1}$}} \usefont{T1}{ppl}{m}{n} \rput(2.8203645,4.1157813){\psframebox[linewidth=0.02,fillstyle=solid]{$T_{12}^{3}$}} \usefont{T1}{ppl}{m}{n} \rput(2.1203647,2.1157813){\psframebox[linewidth=0.02,fillstyle=solid]{$T_{M1}^{M}$}} \usefont{T1}{ppl}{m}{n} \rput(3.4203646,2.1157813){\color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{$T_{22}^{3}$}} 
\usefont{T1}{ppl}{m}{n} \rput(2.7203646,0.11578125){\psframebox[linewidth=0.02,fillstyle=solid]{$T_{M2}^{M}$}} \end{pspicture} } \caption{\label{15ofsl3weightsandcomponents} The Dynkin labels and the components of the ${\bf 15}$ of $sl(3)$. The red entries correspond to the longest weights in the representation (see Fig. \ref{the15ofsl3}).} \end{figure} In \cite{Bergshoeff:2006gs} it was shown by an analysis of the brane effective actions that the number of non-standard branes of the IIB theory is smaller than the dimensions of the $\text{SL}(2,\mathbb{R})$ representations of the corresponding fields\,\footnote{From now on, we will always consider groups instead of algebras. An infinitesimal transformation of a field in a given representation under the group corresponds to the action of the generators of the algebra in that representation.}. In particular, there are two 7-branes associated to the 8-forms, which belong to the ${\bf 3}$, and two 9-branes associated to the 10-forms, which belong to the ${\bf 4}$\,\footnote{There is also an additional doublet of 10-forms in the IIB theory~\cite{Bergshoeff:2005ac}, but one cannot write down a kappa-symmetric brane effective action associated to it. This is in accordance with the longest-weight rule.}. This analysis was generalised to any maximal supergravity theory in any dimension~\cite{Bergshoeff:2010xc,Bergshoeff:2011qk,Bergshoeff:2012ex}, revealing that for the potentials that couple to the non-standard branes the number of branes is always less than the dimension of the U-duality representation. This is in sharp contrast with the case of standard branes, where the number of half-supersymmetric branes always equals the dimension of the corresponding U-duality representation.
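The IIB counting just quoted already illustrates the longest-weight rule; the explicit $\text{SL}(2,\mathbb{R})$ weight labels below are our own illustration:

```latex
\begin{align*}
{\bf 3}:&\quad \boxed{2}\,,\ \boxed{0}\,,\ \boxed{-2}
&&\Rightarrow\ \text{2 longest weights}\ \Rightarrow\ \text{two 7-branes}\,,\\
{\bf 4}:&\quad \boxed{3}\,,\ \boxed{1}\,,\ \boxed{-1}\,,\ \boxed{-3}
&&\Rightarrow\ \text{2 longest weights}\ \Rightarrow\ \text{two 9-branes}\, .
\end{align*}
```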
Below we will show in a few explicit examples that this corresponds to the fact that while the representations of the standard branes only contain one dominant weight (the highest weight), those of the non-standard branes always contain more than one dominant weight. In the latter case the branes correspond to the longest weights in the representation (those with the same length as the highest weight). To show how this works it is instructive to consider an explicit example, that is the non-standard branes of eight-dimensional maximal supergravity, whose global symmetry is $\text{SL}(3,\mathbb{R}) \times \text{SL}(2,\mathbb{R})$. There are 6 defect branes in the adjoint of $\text{SL}(3,\mathbb{R})$. Their corresponding 6-form potential is $A_{6,M}{}^N$, which is contracted in the Wess-Zumino term by the 5-brane charge $T^M{}_N$ with $M\neq N$ \cite{Bergshoeff:2011se}. As we have shown, these directions correspond to the roots of $\text{SL}(3,\mathbb{R})$, which clearly are the longest weights of the representation (see Figs. \ref{rootssl3} and \ref{adjofsl3weightsandcomponents}). The 6-brane charges of the domain walls are $T_{MNa}$ $(a=1,2)$ in the $({\bf 6,2})$. There are 6 domain walls, corresponding to the charges $T_{11a}$, $T_{22a}$ and $T_{33a}$ \cite{Bergshoeff:2012pm}. Looking at Figs. \ref{6ofsl3weightsandcomponents} and \ref{the6ofsl3}, we see that these components correspond to the longest weights of the ${\bf 6}$ of $\text{SL}(3,\mathbb{R})$. Finally, there are six half-supersymmetric space-filling branes with 7-brane charges $T_{MN}{}^P$ in the $({\bf 15,1})$ such that $M=N$ and $P\neq M$. From Figs.~\ref{15ofsl3weightsandcomponents} and \ref{the15ofsl3} we know that these components precisely correspond to the longest weights of the representation. We find that the above result is completely general. The defect branes in any dimension always correspond to the components of the adjoint representation associated to the roots. 
Given that the symmetry groups of maximal supergravities are all simply laced, all the roots have the same length, and thus the number of defect branes is always ${\rm dim}\,G - {\rm rank}\,G$, since the Cartan generators correspond to the dominant weight $\boxed{0\ 0\ ...\ 0}$, which occurs with multiplicity ${\rm rank}\,G$. Similarly, for all domain walls and space-filling branes one can determine all the dominant weights of the associated representations, and the number of weights that have the same length as each dominant weight. Counting the weights of the same length as the highest weight reproduces the number of half-supersymmetric branes determined in \cite{Bergshoeff:2010xc,Bergshoeff:2011qk,Bergshoeff:2012ex,Kleinschmidt:2011vu}. The result is summarised in Table \ref{dominantweightsofnonstandardbranes}. For the exceptional cases $\text{E}_{6(6)}$, $\text{E}_{7(7)}$ and $\text{E}_{8(8)}$ we label the Dynkin weights following the numbering of the nodes of the Dynkin diagrams of Fig.~\ref{Ed+1diagram}. \begin{figure}[h] \begin{center} \begin{picture}(220,70) \multiput(10,10)(40,0){2}{\circle{10}} \multiput(130,10)(40,0){3}{\circle{10}} \put(15,10){\line(1,0){30}} \put(55,10){\line(1,0){5}} \put(67,10){\line(1,0){10}} \put(85,10){\line(1,0){10}} \put(103,10){\line(1,0){10}} \put(120,10){\line(1,0){5}} \multiput(135,10)(40,0){2}{\line(1,0){30}} \put(130,50){\circle{10}} \put(130,15){\line(0,1){30}} \put(8,-8){$1$} \put(48,-8){$2$} \put(117,-8){$n-3$} \put(157,-8){$n-2$} \put(197,-8){$n-1$} \put(140,47){$n$} \end{picture} \caption{\sl The Dynkin diagrams of $E_6$, $E_7$ and $E_8$. \label{Ed+1diagram}} \end{center} \end{figure} This finishes our discussion of the relation between branes and weights. In the next section we will show that the property of the representations of non-standard branes of having more than one dominant weight naturally leads to a second difference with the standard branes, namely a degeneracy of the BPS conditions.
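Before moving on, a sanity check of the defect-brane counting ${\rm dim}\,G - {\rm rank}\,G$ (our arithmetic; the resulting numbers can be compared with the adjoint entries of Table \ref{nonstandardcentralcharge} below):

```latex
\begin{equation*}
\begin{array}{llrcl}
D=8: & \text{SL}(3,\mathbb{R})\times\text{SL}(2,\mathbb{R}): & (8-2)+(3-1) &=& 6+2\,,\\
D=7: & \text{SL}(5,\mathbb{R}): & 24-4 &=& 20\,,\\
D=6: & \text{SO}(5,5): & 45-5 &=& 40\,,\\
D=5: & \text{E}_{6(6)}: & 78-6 &=& 72\,,\\
D=4: & \text{E}_{7(7)}: & 133-7 &=& 126\,,\\
D=3: & \text{E}_{8(8)}: & 248-8 &=& 240\, .
\end{array}
\end{equation*}
```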
\begin{table}\small \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \rule[-1mm]{0mm}{1mm} $D$ & $G$ & representation & dominant weights & weights of same length\\ \hline \hline \rule[-1mm]{0mm}{1mm} 8 & $\text{SL}(3,\mathbb{R}) \times \text{SL}(2,\mathbb{R})$ & $({\bf 6,2})$ & ${\color{red}\boxed{2 \ 0} \times \boxed{1} }$ & ${\color{red}3 \times 2}$ \\ \rule[-1mm]{0mm}{1mm} & & & $\boxed{0 \ 1} \times \boxed{1} $ & $3 \times 2 $\\ \cline{3-5} \rule[-1mm]{0mm}{1mm} & & $({\bf 15,1})$ & ${\color{red}\boxed{2 \ 1} \times \boxed{0}} $ & ${\color{red}6} $\\ \rule[-1mm]{0mm}{1mm} & & & $\boxed{0 \ 2} \times \boxed{0} $ & $3 $\\ \rule[-1mm]{0mm}{1mm} & & & $2 \times \boxed{1 \ 0} \times \boxed{0} $ & $2 \times 3 $\\ \hline \rule[-1mm]{0mm}{1mm} 7 & $\text{SL}(5,\mathbb{R})$ &${\bf \overline{40}}$ & ${\color{red} \boxed{1 \ 1\ 0 \ 0}} $ & ${\color{red} 20} $\\ \rule[-1mm]{0mm}{1mm} & & & $2 \times \boxed{0 \ 0 \ 1\ 0} $ & $2 \times 10 $\\ \cline{3-5} \rule[-1mm]{0mm}{1mm} & &${\bf \overline{15}}$ & ${\color{red} \boxed{0\ 0\ 0 \ 2}} $ & ${\color{red} 5} $\\ \rule[-1mm]{0mm}{1mm} & & & $\boxed{0 \ 0 \ 1\ 0} $ & $10 $\\ \cline{3-5} \rule[-1mm]{0mm}{1mm} & &${\bf {70}}$ & ${\color{red} \boxed{2\ 0\ 0 \ 1}} $ & ${\color{red} 20} $\\ \rule[-1mm]{0mm}{1mm} & & & $\boxed{0 \ 1 \ 0\ 1} $ & $30 $\\ \rule[-1mm]{0mm}{1mm} & & & $4 \times \boxed{1 \ 0 \ 0 \ 0} $ & $4 \times 5 $\\ \hline \rule[-1mm]{0mm}{1mm} 6 & $\text{SO}(5,5)$ &${\bf {144}}$ & ${\color{red} \boxed{1 \ 0 \ 0\ 0\ 1 }} $ & ${\color{red} 80} $\\ \rule[-1mm]{0mm}{1mm} & & & $4 \times \boxed{0 \ 0 \ 0\ 1\ 0 } $ & $4 \times 16 $\\ \cline{3-5} \rule[-1mm]{0mm}{1mm} & &${\bf {320}}$ & ${\color{red} \boxed{1 \ 1\ 0 \ 0 \ 0}} $ & ${\color{red} 80} $\\ \rule[-1mm]{0mm}{1mm} & & & $2 \times \boxed{0 \ 0 \ 1 \ 0\ 0} $ & $2 \times 80 $\\ \rule[-1mm]{0mm}{1mm} & & & $8 \times \boxed{1 \ 0 \ 0 \ 0\ 0} $ & $8 \times 10 $\\ \cline{3-5} \rule[-1mm]{0mm}{1mm} & &${\bf {\overline{126}}}$ & ${\color{red} \boxed{0\ 0\ 0 \ 0 \ 2}} $ & 
${\color{red} 16} $\\ \rule[-1mm]{0mm}{1mm} & & & $\boxed{0 \ 0 \ 1\ 0 \ 0} $ & $80 $\\ \rule[-1mm]{0mm}{1mm} & & & $3 \times \boxed{1 \ 0 \ 0 \ 0 \ 0} $ & $3 \times 10 $\\ \hline \rule[-1mm]{0mm}{1mm} 5 & $\text{E}_{6(6)}$ &${\bf \overline{351}}$ & ${\color{red} \boxed{0 \ 0 \ 0\ 1\ 0 \ 0 }} $ & ${\color{red} 216} $\\ \rule[-1mm]{0mm}{1mm} & & & $5 \times \boxed{1 \ 0 \ 0\ 0\ 0 \ 0} $ & $5 \times 27 $\\ \cline{3-5} \rule[-1mm]{0mm}{1mm} & &${\bf \overline{1728}}$ & ${\color{red} \boxed{0 \ 0\ 0 \ 0 \ 1 \ 1 }} $ & ${\color{red} 432} $\\ \rule[-1mm]{0mm}{1mm} & & & $4 \times \boxed{0 \ 1 \ 0 \ 0\ 0\ 0} $ & $4 \times 216 $\\ \rule[-1mm]{0mm}{1mm} & & & $16 \times \boxed{0\ 0 \ 0 \ 0 \ 1\ 0} $ & $16 \times 27 $\\ \hline \rule[-1mm]{0mm}{1mm} 4 & $\text{E}_{7(7)}$ &${\bf {912}}$ & ${\color{red} \boxed{0\ 0 \ 0 \ 0\ 0 \ 0 \ 1 }} $ & ${\color{red} 576} $\\ \rule[-1mm]{0mm}{1mm} & & & $6 \times \boxed{1 \ 0 \ 0\ 0\ 0 \ 0} $ & $6 \times 56 $\\ \cline{3-5} \rule[-1mm]{0mm}{1mm} & &${\bf {8645}}$ & ${\color{red} \boxed{0 \ 0\ 0 \ 0 \ 1 \ 0\ 0 }} $ & ${\color{red} 2016} $\\ \rule[-1mm]{0mm}{1mm} & & & $5 \times \boxed{0 \ 1 \ 0 \ 0\ 0\ 0\ 0} $ & $5 \times 756 $\\ \rule[-1mm]{0mm}{1mm} & & & $22 \times \boxed{0\ 0\ 0 \ 0 \ 0 \ 1\ 0} $ & $22 \times 126 $\\ \rule[-1mm]{0mm}{1mm} & & & $77 \times \boxed{0\ 0\ 0 \ 0 \ 0 \ 0 \ 0} $ & $77 \times 1 $\\ \hline \rule[-1mm]{0mm}{1mm} 3 & $\text{E}_{8(8)}$ &${\bf {3875}}$ & ${\color{red} \boxed{0\ 0 \ 0 \ 0\ 0 \ 0 \ 1 \ 0 }} $ & ${\color{red} 2160} $\\ \rule[-1mm]{0mm}{1mm} & & & $7 \times \boxed{1 \ 0 \ 0\ 0\ 0 \ 0\ 0} $ & $7 \times 240 $\\ \rule[-1mm]{0mm}{1mm} & & & $35 \times \boxed{0\ 0\ 0 \ 0 \ 0 \ 0 \ 0\ 0} $ & $35 \times 1 $\\ \cline{3-5} \rule[-1mm]{0mm}{1mm} & &${\bf {147250}}$ & ${\color{red} \boxed{0 \ 0\ 0 \ 0 \ 0 \ 0\ 0\ 1 }} $ & ${\color{red} 17280} $\\ \rule[-1mm]{0mm}{1mm} & & & $6 \times \boxed{0 \ 1 \ 0 \ 0\ 0\ 0\ 0\ 0} $ & $6 \times 6720 $\\ \rule[-1mm]{0mm}{1mm} & & & $29 \times \boxed{0\ 0\ 0\ 0 \ 0 \ 0 \ 1\ 0} $ & 
$29 \times 2160 $\\ \rule[-1mm]{0mm}{1mm} & & & $111 \times \boxed{1\ 0\ 0\ 0 \ 0 \ 0 \ 0 \ 0} $ & $111 \times 240 $\\ \rule[-1mm]{0mm}{1mm} & & & $370 \times \boxed{0\ 0\ 0 \ 0 \ 0 \ 0 \ 0\ 0} $ & $370 \times 1 $\\ \hline \end{tabular} \caption{ \label{dominantweightsofnonstandardbranes} The potentials associated to the domain walls and the space-filling branes. In each dimension apart from 7 and 6, the domain walls correspond to the first line and the space-filling branes to the second. In seven dimensions, the first row corresponds to vector domain-walls, the second to tensor domain-walls and the third to space-filling branes. In six dimensions, the first row corresponds to domain walls, the second to vector space-filling branes and the third to tensor space-filling branes. In all cases, the branes correspond to the longest weights, that is the weights of the same length as the highest weight, for each representation. Their number, as well as the Dynkin labels of the highest weight, is painted in red. } \end{center} \end{table} \section{Central charges and degeneracies} In the previous section we have given a group-theoretic characterisation of the difference between standard and non-standard branes. We have seen that the potentials corresponding to standard branes belong to representations of the global symmetry group with only one dominant weight, while the potentials corresponding to non-standard branes belong to representations with more than one dominant weight. In all cases the branes are associated to the weights within the representation that have the same length as the highest weight. 
For standard branes, all weights of the representation have the same length, and therefore the number of branes equals the dimension of the representation. For non-standard branes, instead, there are sets of weights of different length, each corresponding to a different dominant weight, and the half-supersymmetric branes correspond to the weights of maximum length, i.e.~the length of the highest weight. This implies that the number of non-standard branes is less than the dimension of the representation. In \cite{Bergshoeff:2011se,Bergshoeff:2012pm,Bergshoeff:2012jb} it was shown that there is another crucial difference between standard and non-standard branes: while for standard branes there is a one-to-one relation between half-supersymmetric branes and BPS conditions, in the case of non-standard branes this relation is many-to-one, i.e.~several branes give rise to the same BPS condition. This has the important consequence that one can consider half-BPS configurations corresponding to bound states of different half-BPS branes associated to the same BPS condition. Using group-theory arguments we will show in this section why non-standard branes can have degenerate BPS conditions. The number of different BPS conditions that can be imposed on a half-supersymmetric $p$-brane is equal to the number of central charges of rank $p$. In maximally supersymmetric theories, the central charges form representations of the R-symmetry $H$, which is the maximal compact subgroup of the maximally non-compact U-duality group $G$. The representations of the central charges of various rank in any dimension are given in Table \ref{centralchargetable}. The table only contains central charges of rank up to $[D/2]$, because the charges of rank $p > [D/2]$ are equal to the charges of rank $D-p$ by Hodge duality.
This means that for instance in $D=7$ the $p=4$ charges (associated to defect branes) are in the ${\bf 10}$, the $p=5$ charges (associated to domain walls) are in the ${\bf 1+ 5}$ and the $p=6$ charges (associated to space-filling branes) are in the ${\bf 5}$\,\footnote{For $p=1$, only the charges other than the momentum operator can be dualised giving a $D-1$ charge for space-filling branes.}. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $D$&$R$-symmetry&$p=0$&$p=1$&$p=2$&$p=3$&$p=4$&$p=5$\\[.1truecm] \hline \rule[-1mm]{0mm}{6mm} IIA&${\unity}$&{\bf 1}&{\bf 1} +{\bf 1}&{\bf 1}&--&{\bf 1}&${\bf 1}$\\[.05truecm] \hline \rule[-1mm]{0mm}{6mm} IIB&SO(2)&--&{\bf 1} +{\bf 2}&--&{\bf 1}&--&${\bf 1}^+ + {\bf 2}^+$\\[.05truecm] \hline \rule[-1mm]{0mm}{6mm} 9&SO(2)&${\bf 1}+{\bf 2}$&{\bf 1}+{\bf 2}&{\bf 1}&{\bf 1}&${\bf 1}+{\bf 2}$&\\[.05truecm] \hline \rule[-1mm]{0mm}{6mm} 8&U(2)& $ 2 \times {\bf 3}$ &{\bf 1}+{\bf 3} & $2 \times {\bf 1}$& ${\bf 1} + {\bf 3}$ &${\bf 3}^+ + {\bf 3}^-$&\\[.05truecm] \hline \rule[-1mm]{0mm}{6mm} 7&Sp(4)& {\bf 10} &{\bf 1}+ {\bf 5} & ${\bf 1} +{\bf 5}$ &{\bf 10} &&\\[.05truecm] \hline \rule[-1mm]{0mm}{6mm} 6&Sp(4)$\times$Sp(4)&$({\bf 4},{\bf 4})$&$({\bf 1},{\bf 1})+({\bf 1},{\bf 1})$ & $({\bf 4},{\bf 4})$& $({\bf 10},{\bf 1})^+$ &&\\[.05truecm] \rule[-1mm]{0mm}{6mm} & & &$+({\bf 1}, {\bf 5})+({\bf 5}, {\bf 1})$ & &$+({\bf 1},{\bf 10})^-$ &&\\[.05truecm] \hline \rule[-1mm]{0mm}{6mm} 5&Sp(8)& {\bf 1} + {\bf 27}&{\bf 1}+ {\bf 27}& {\bf 36}&&&\\[.05truecm] \hline \rule[-1mm]{0mm}{6mm} 4&SU(8)& ${\bf 28} + {\bf \overline{28}}$ &{\bf 1}+ {\bf 63} & ${\bf 36}^+ + {\bf \overline{36}}^-$ &&&\\[.05truecm] \hline \rule[-1mm]{0mm}{6mm} 3&SO(16)& {\bf 120} & {\bf 1}+{\bf 135} &&&&\\[.05truecm] \hline \end{tabular} \end{center} \caption{\sl This table indicates the $R$-representations of the $p$-form central charges of $3\le D\le 10$ maximal supergravity. 
If applicable, we have also indicated the space-time duality of the central charges with a superscript $\pm$. There is always a singlet $p=1$ charge which is the momentum operator. } \label{centralchargetable} \end{table} In order to determine the half-BPS branes, one has to decompose the representation of $G$ of the brane charges $T$ in representations of $H$. For standard branes, the representations of $H$ one obtains are all contained in Table \ref{centralchargetable} for any $p$ and in any dimension. This means that for each component of the representation of $G$ of a $p$-brane charge $T$ in $D$ dimensions, there is a rank $p$ central charge $Q$ in the supersymmetry algebra. For non-standard branes the situation is different: in this case the decomposition contains the associated central charge $Q$ of the correct rank, but it also contains additional representations that are not contained in the Table. We denote these additional representations collectively by $R$. Summarising, one has schematically \begin{eqnarray} & & {\rm standard \ branes}: \ \hskip .75truecm T \rightarrow Q \,,\nonumber \\[.2truecm] & & {\rm non}\text{-}{\rm standard \ branes}: \ T \rightarrow R + Q \quad .\label{standardTQnonstandardTRQ} \end{eqnarray} As an example we consider the non-standard branes in $D=7$. The 4-brane charges are in the ${\bf 24}$ (adjoint) of $\text{SL}(5,\mathbb{R})$, which decomposes under the R-symmetry $\text{Sp}(4)$ as \begin{equation} {\bf 24 } \rightarrow {\bf 14} + {\bf 10} \quad . \end{equation} From Table \ref{centralchargetable} one notices that only the representation ${\bf 10}$ of $\text{SO}(5)$ is present as a $p=4$ (i.e.~dual $p=3$) central charge. The fact that only part of the representation goes to the representation of the central charge explains the fact that there is a degeneracy of the BPS conditions. To understand this, it is instructive to analyse the decomposition in terms of components. 
The ${\bf 24}$ of $\text{SL}(5,\mathbb{R})$ corresponds to a traceless tensor $T_M{}^N$, with $M,N =1,...,5$. This decomposes under $\text{Sp}(4)$ into a symmetric traceless tensor $R_{MN}$ in the ${\bf 14}$ and an antisymmetric tensor $Q_{MN}$ in the ${\bf 10}$, where now $M,N$ are indices in the ${\bf 5}$ of $\text{Sp}(4)$. The diagonal components of the ${\bf 24}$ (i.e.~the 4 directions along the Cartan generators), which are entirely contained in the ${\bf 14}$, are not associated to any central charge, and these are precisely the components that do not correspond to half-supersymmetric branes. The remaining 20 off-diagonal components correspond pairwise to the 10 central charges in the ${\bf 10}$, and this is the reason for the degeneracy 2. In terms of components the degeneracy arises as follows: the components $T_M{}^N$ and $T_N{}^M$ decompose as $R_{MN}+ Q_{MN}$ and $R_{MN} - Q_{MN}$, corresponding to the same central charge $Q_{MN}$ (up to a sign). As another example we consider the $D=7$ tensor 5-branes which have brane charges $T^{MN}=T^{NM}$ in the ${\bf \overline{15}}$. This decomposes under $\text{Sp}(4)$ as \begin{equation} {\bf \overline{15}} \rightarrow {\bf 14 } + {\bf 1} \quad . \end{equation} One can see from Table \ref{centralchargetable} that only the singlet corresponds to a $p=5$ (i.e.~dual $p=2$) central charge in $D=7$. In components this means that a symmetric tensor $T^{MN}$ of $\text{SL}(5,\mathbb{R})$ decomposes into a symmetric traceless tensor $R_{MN}$ and the trace part $\delta_{MN} Q$ under $\text{Sp}(4)$. All the components of $T^{MN}$ that are not diagonal have no projection on the singlet and thus are not associated to branes, while the five diagonal components, which irrespective of their $R_{MN}$ part all have the same projection on the singlet, give rise to the same BPS condition with degeneracy 5. The same analysis applies to the remaining non-standard branes in $D=7$.
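For the ${\bf 24}$ example, the pairing can be written out explicitly by lowering the upper index with the $\text{Sp}(4)\simeq \text{SO}(5)$ invariant $\delta_{MN}$ (a sketch with our normalisation):

```latex
\begin{equation*}
T_{MN} \,\equiv\, T_M{}^P \delta_{PN}\,, \qquad
R_{MN} \,=\, \tfrac{1}{2}\bigl(T_{MN}+T_{NM}\bigr)\,, \qquad
Q_{MN} \,=\, \tfrac{1}{2}\bigl(T_{MN}-T_{NM}\bigr)\,,
\end{equation*}
```

so that $T_{MN} = R_{MN} + Q_{MN}$ and $T_{NM} = R_{MN} - Q_{MN}$: every off-diagonal pair $(T_{MN},T_{NM})$ projects onto the single charge $Q_{MN}$, while the diagonal (Cartan) components lie entirely in $R_{MN}$.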
The vector domain walls correspond to 5-brane charges in the ${\bf \overline{40}}$, which decomposes as \begin{equation} {\bf \overline{40}} \rightarrow {\bf 35} + {\bf 5} \quad . \end{equation} One can see that only the ${\bf 5}$ representation is present in Table \ref{centralchargetable} as a $p=5$ (i.e.~dual $p=2$) central charge in $D=7$. In components, the charge $T_{MN,P}$, antisymmetric in $MN$ and such that $T_{[MN,P]}=0$, has a non-zero projection on the ${\bf 5}$ only if $P=M$ or $P=N$, in which case the projection takes the form $Q_{[M} \delta_{N]P}$ (ignoring the part along the ${\bf 35}$). This implies that 4 different choices of $N$ lead to the same charge $Q_M$, for fixed $M$. Therefore, each charge $Q_M$ has degeneracy 4. Finally, the space-filling branes correspond to the 6-brane charges $T_{MN}^P$ in the ${\bf 70}$ (which is symmetric in $MN$ and such that $T_{MN}^N =0$), which decomposes according to \begin{equation} {\bf 70} \rightarrow {\bf 35} + {\bf 30} + {\bf 5} \quad . \end{equation} We see that only the ${\bf 5}$ representation appears in Table \ref{centralchargetable} as a $p=6$ (i.e.~dual $p=1$) central charge. The projection of the ${\bf 70}$ on the ${\bf 5}$ of $\text{Sp}(4)$ is $T_{MN}^P \rightarrow \delta_{MN} Q^P$, and the components that have non-zero projection on the ${\bf 5}$ are the 20 components $T_{MM}^P$, with $M\neq P$, which implies a degeneracy 4 for the central charge. \begin{table}\small \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \rule[-2mm]{0mm}{1mm} $D$ & $H$ & repr.
of $G$ & decomposition under $H$ & degeneracy& \# of branes \\ \hline \hline \rule[-2mm]{0mm}{1mm} 8 & $\text{U}(2) $ & $({\bf 8,1})$ & ${\bf 5}_0 +{\color{red}{\bf 3}_0 }$ & 2 &6 \\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & & $({\bf 1,3})$& ${\bf 1}_{+2} + {\bf 1}_{-2} + {\color{red} {\bf 1}_0} $ & 2&2\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & & $({\bf 6,2})$& ${\bf 5}_{+1} +{\bf 5}_{-1} + {\color{red}{\bf 1}_{+1} + {\bf 1}_{-1}} $ & 3 & 6\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & & $({\bf 15,1})$& ${\bf 7}_0 + {\bf 5}_0 + {\color{red}{\bf 3}_0} $ & $2 $& 6\\ \hline \hline \rule[-2mm]{0mm}{1mm} 7 & $\text{Sp}(4)$ &${\bf {24}}$ & ${\bf 14} +{\color{red} \bf{10}} $ & $2 $& 20\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf \overline{40}}$ & ${\bf 35} + {\color{red} \bf{5}} $ & $4 $& 20\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf \overline{15}}$ & ${\bf 14} +{\color{red} \bf 1} $ & $5 $& 5\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {70}}$ & ${\bf 35} + {\bf 30} +{\color{red} \bf 5} $ & $4 $& 20\\ \hline \hline \rule[-2mm]{0mm}{1mm} 6 & $\text{Sp}(4)\times \text{Sp}(4)$ &${\bf {45}}$ & $({\bf 5,5}) + {\color{red}({\bf 10,1})+ ({\bf 1,10})} $ & 2 & 40 \\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {144}}$ & $({\bf 16,4})+ ({\bf 4,16}) + {\color{red}({\bf 4,4})} $ & $5 $ & 80\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {320}}$ & $({\bf 14,5})+ ({\bf 5,14}) +({\bf 35,1})+ ({\bf 1,35}) $ & 8 & 80 \\ \rule[-2mm]{0mm}{1mm} & & & $+ ({\bf 5,10})+({\bf 10,5}) + {\color{red}({\bf 5,1})+({\bf 1,5}) } $ & & \\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {\overline{126}}}$ & $({\bf 10,10}) + ({\bf 5,5}) + {\color{red}({\bf 1,1})}$ & 16 & 16 \\ \hline \hline \rule[-2mm]{0mm}{1mm} 5 & $\text{Sp}(8)$ &${\bf {78}}$ & ${\bf 42} + {\color{red}\bf 36}$ & 2 & 72\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {351}}$ & $ {\bf 315} + {\color{red}{\bf 36}}$ & 6& 216\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf \overline{1728}}$ & ${\bf 792} +{\bf 594}+{\bf 315}+{\color{red} \bf 27}$ & 16 & 432\\ \hline 
\hline \rule[-2mm]{0mm}{1mm} 4 & $\text{SU}(8)$ &${\bf {133}}$ & $ {\bf 70}+{\color{red} \bf 63}$ & 2 & 126\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {912}}$ & ${\bf 420} + {\bf \overline{420}}+ {\color{red}{\bf 36}+{\bf \overline{36}}}$ & 8 & 576\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {8645}}$ & $ {\bf 3584} + {\bf 2352}+{\bf 945}+{\bf \overline{945}}$ & 32 & 2016 \\ \rule[-2mm]{0mm}{1mm} & & & $+ {\bf 378}+{\bf \overline{378}}+ {\color{red}{\bf 63}}$ & & \\ \hline \hline \rule[-2mm]{0mm}{1mm} 3 & $\text{SO}(16)$ &${\bf {248}}$ & ${\bf 128} +{\color{red}{\bf 120}}$ & 2 &240\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {3875}}$ & ${\bf 1820} +{\bf \overline{1920}}+{\color{red}{\bf 135}}$ & $16 $ & 2160\\ \cline{3-6} \rule[-2mm]{0mm}{1mm} & &${\bf {147250}}$ & ${\bf 60060}+{\bf \overline{56320}}+{\bf 15360} $ & 128 & 17280 \\ \rule[-2mm]{0mm}{1mm} & & & $+{\bf 7020}+{\bf \overline{6435}}+{\bf \overline{1920}}+ {\color{red}{\bf 135}} $ & & \\ \hline \end{tabular} \caption{ \label{nonstandardcentralcharge} The decomposition of the representations of the non-standard branes with respect to the R-symmetry $H$ in any dimension. In each case, the representation of the central charge is painted in red. The dimension of this representation times the degeneracy gives the number of branes. } \end{center} \end{table} The result we find for the non-standard branes in $D=7$ is completely general. In any dimension the representations of the non-standard branes decompose under $H$ into the representation of the corresponding central charge plus additional representations. Moreover, only the components corresponding to half-supersymmetric branes (that as we know from the previous section are those associated to the longest weights) have non-zero projection on the representation of $H$ corresponding to the central charge. This projection occurs with a given degeneracy: a given central charge component corresponds to more branes. 
This gives the degeneracy of the BPS conditions. The general result is summarised in Table \ref{nonstandardcentralcharge}, where for any representation associated to non-standard branes in $D\leq 8$ we give the decomposition under $H$ and the multiplicity of the BPS conditions. \section{Orbits and Invariants} In this section we wish to consider the orbits of the half-supersymmetric branes under infinitesimal U-duality transformations. The orbits of the standard branes of maximal supergravity theories were derived long ago in \cite{Lu:1997bg,Ferrara:1997uz}. Here we will consider the orbits of the non-standard branes as well and point out how they differ from the orbits of the standard branes. In general, under the algebra $g$, a weight can either be transformed infinitesimally into the other weights or stay invariant. The generators that leave the weight invariant form a subalgebra of $g$, which is the stabiliser of the weight orbit. Therefore, all the half-supersymmetric branes in maximal supergravity theories define highest-weight orbits under the action of the symmetry group $G$. These highest-weight orbits are single-charge orbits. If not all the other long weights can be reached by an infinitesimal transformation, one can consider a two-charge state that is the sum of the initial state and one that cannot be reached from it. One can then compute the orbit of this 2-charge configuration. If still not all weights are reached, one continues with 3-charge configurations, and so on. This procedure is iterated until one has a configuration from which all the weights can be reached. This strategy, which was used in \cite{Lu:1997bg} to compute the orbits for all standard branes, can be applied to non-standard branes as well. All the different orbits can be characterised in terms of invariants of $G$ \cite{Ferrara:1997ci}. For instance, the charge $T_M$ of a half-BPS string in six dimensions is a lightlike vector of $\text{SO}(5,5)$.
Therefore, the orbit for a single-charge configuration corresponds to the constraint $T^2=0$, while a two-charge configuration corresponds to $T^2 \neq 0$. As we have pointed out in section 3, the representations of the standard branes decompose under $H$ entirely into the R-symmetry representations of the central charges, while for non-standard branes this decomposition gives these R-symmetry representations of the central charges plus additional representations (see Table \ref{nonstandardcentralcharge}). This implies that for standard branes the invariants of $G$ correspond to R-symmetry invariants of the central charge. These invariants characterise the amount of supersymmetry that the configuration preserves. For non-standard branes, instead, different invariants of $G$ may correspond to the same R-symmetry invariant when projected onto the central charge. This means that multiple-charge configurations of non-standard branes, corresponding to different invariants and different orbits, can preserve the same amount of supersymmetry. The aim of this section is to discuss the orbits and invariants of non-standard branes. We will first review the case of standard branes in the first subsection, while the non-standard branes will be considered in the second subsection. 
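The six-dimensional example can be made explicit in light-cone components, where the invariant metric of $\text{SO}(5,5)$ pairs $T_{+i}$ with $T_{-i}$ ($i=1,\dots,5$), so that
\begin{equation}
T^2 = 2 \sum_{i=1}^{5} T_{+i}\, T_{-i} \ .
\end{equation}
A single charge, say $T_{+1}$, is then automatically lightlike, while the two-charge configuration with both $T_{+1}$ and $T_{-1}$ turned on has $T^2 = 2\, T_{+1} T_{-1} \neq 0$.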
\begin{figure} \centering \scalebox{0.4} { \begin{pspicture}(0,-6.4306774)(8.284166,6.4306774) \psline[linewidth=0.02cm](6.579583,-3.2348435)(2.3195832,-5.7548437) \psline[linewidth=0.02cm](7.299583,-0.13484354)(5.819583,-2.8548436) \psline[linewidth=0.02cm](5.179583,2.8051565)(6.559583,0.20515646) \psline[linewidth=0.02cm](0.31958312,5.805156)(5.179583,3.2651565) \usefont{T1}{ppl}{m}{n} \rput(1.2305206,5.9551563){\huge \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{1 0 0 0}} \usefont{T1}{ppl}{m}{n} \rput(1.3667706,-6.0448437){\huge \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{0 0 0 -1}} \usefont{T1}{ppl}{m}{n} \rput(5.3667707,-3.0448434){\huge \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{0 0 -1 1}} \usefont{T1}{ppl}{m}{n} \rput(6.775833,-0.044843547){\huge \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{0 -1 1 0}} \usefont{T1}{ppl}{m}{n} \rput(5.381927,2.9551566){\huge \color{white}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{-1 1 0 0}} \usefont{T1}{ppl}{m}{n} \rput{-27.708605}(-1.6921408,2.0807898){\rput(3.3323956,4.4901567){\Large $\alpha_{1}$}} \usefont{T1}{ppl}{m}{n} \rput{-63.806404}(2.1459281,6.349024){\rput(6.1323957,1.4701564){\Large $\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{58.320164}(1.5029364,-6.0717173){\rput(6.1523957,-1.6698436){\Large $\alpha_{3}$}} \usefont{T1}{ppl}{m}{n} \rput{31.509905}(-1.9301298,-2.616664){\rput(3.6323957,-4.7098436){\Large $\alpha_{4}$}} \end{pspicture} } \scalebox{0.4} { \begin{pspicture}(0,-6.634271)(9.600729,6.634271) \psline[linewidth=0.02cm](7.6477084,-3.2434373)(3.3877084,-5.7634373) \psline[linewidth=0.02cm](8.367708,-0.1434373)(6.887708,-2.8634374) \psline[linewidth=0.02cm](6.2477083,2.7965627)(7.6277084,0.1965627) \psline[linewidth=0.02cm](1.3877083,5.7965627)(6.2477083,3.2565627) \usefont{T1}{ppl}{m}{n} \rput{31.509905}(-1.462984,-3.3696783){\rput(5.200521,-4.258437){\Large $T^{4}_{5}$}} 
\usefont{T1}{ppl}{m}{n} \rput(1.5222396,6.1265626){\huge \color{white}\psframebox[linewidth=0.04,fillstyle=solid,fillcolor=red]{$T_{1}$}} \usefont{T1}{ppl}{m}{n} \rput(6.3222394,3.1265626){\huge \color{white}\psframebox[linewidth=0.04,fillstyle=solid,fillcolor=red]{$T_{2}$}} \usefont{T1}{ppl}{m}{n} \rput(7.92224,-0.073437296){\huge \color{white}\psframebox[linewidth=0.04,fillstyle=solid,fillcolor=red]{$T_{3}$}} \usefont{T1}{ppl}{m}{n} \rput(3.1222396,-6.073437){\huge \color{white}\psframebox[linewidth=0.04,fillstyle=solid,fillcolor=red]{$T_{5}$}} \usefont{T1}{ppl}{m}{n} \rput(7.3222394,-3.0734372){\huge \color{white}\psframebox[linewidth=0.04,fillstyle=solid,fillcolor=red]{$T_{4}$}} \usefont{T1}{ppl}{m}{n} \rput{59.599396}(2.3118954,-6.9921584){\rput(7.220521,-1.4584373){\Large $T^{3}_{4}$}} \usefont{T1}{ppl}{m}{n} \rput{-61.73042}(2.0902603,7.262076){\rput(7.0805206,1.9015627){\Large $T^{2}_{3}$}} \usefont{T1}{ppl}{m}{n} \rput{-29.180836}(-1.9472893,2.4438004){\rput(3.6805208,4.9815626){\Large $T^{1}_{2}$}} \end{pspicture} } \caption{\label{the5ofsl5} The Dynkin labels and the corresponding components of the ${\bf 5}$ of $\text{SL}(5,\mathbb{R})$. On the left-hand side of the figure, we write down the simple roots that are subtracted from each weight to get the weight below. On the right-hand side, we write down the corresponding generator in components.} \end{figure} \subsection{Standard-brane orbits} We will consider $D=7$ as a prototype example. The global symmetry group is $\text{SL}(5,\mathbb{R})$, and the strings correspond to charges $T_M$ in the ${\bf 5}$. Using the same component notation as in section 2 for the $\text{SL}(3,\mathbb{R})$ case, we associate charge components to weights as in Fig.~\ref{the5ofsl5}. The representation clearly has only one dominant weight and all the weights have the same length. All the weights thus correspond to branes.
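The weight chains in Fig.~\ref{the5ofsl5} follow from the standard rule that lowering by a simple root $\alpha_i$ subtracts the $i$-th row of the $A_4$ Cartan matrix from the Dynkin labels. For instance, the first step is
\begin{equation}
(1,0,0,0) - \alpha_1: \quad (1,0,0,0)-(2,-1,0,0) = (-1,1,0,0) \ ,
\end{equation}
reproducing the second weight in the figure.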
The stabiliser of the highest weight orbit is generated by all the elements of the algebra that annihilate the highest weight. These are the Cartan generators $H_{\alpha_{2}}$, $H_{\alpha_{3}}$ and $H_{\alpha_{4}}$, all the positive root vectors and all the negative root vectors that do not contain $\alpha_{1}$. The positive root vectors that do not contain $\alpha_1$, together with the negative root vectors and the Cartan stabilisers, generate the group $\text{SL}(4,\mathbb{R})$, while the remaining positive root vectors form the ${\bf 4}$ of this algebra. The orbit is therefore~\cite{Lu:1997bg} \begin{equation} \dfrac{\text{SL}(5,\mathbb{R})}{\text{SL}(4,\mathbb{R})\ltimes T_{\bf 4}} \label{theorbitofhw5ofsl5} \quad . \end{equation} The charges $T^{MN}=-T^{NM}$ for the 0-branes are in the ${\bf \overline{10}}$. The weights in terms of Dynkin labels and components are shown in Fig.~\ref{the10barofsl5}. \begin{figure} \centering \scalebox{0.4} { \begin{pspicture}(0,-9.430677)(13.672135,9.430677) \definecolor{color35}{rgb}{0.996078431372549,0.996078431372549,0.996078431372549} \definecolor{color34b}{rgb}{0.807843137254902,0.0,0.0} \definecolor{color36b}{rgb}{0.8313725490196079,0.00784313725490196,0.00784313725490196} \definecolor{color40b}{rgb}{0.8196078431372549,0.00392156862745098,0.00392156862745098} \psline[linewidth=0.02cm](5.852396,-6.154843)(7.5723953,-8.814843) \psline[linewidth=0.02cm](0.35239577,-3.254843)(5.1323953,-5.694843) \psline[linewidth=0.02cm](8.352395,-3.194843)(6.692395,-5.7548428) \psline[linewidth=0.02cm](1.9523957,-0.2148429)(6.892396,-2.694843) \psline[linewidth=0.02cm](13.4123955,-0.2148429)(9.172396,-2.734843) \psline[linewidth=0.02cm](3.5723956,-0.2148429)(1.8923957,-2.814843) \psline[linewidth=0.02cm](0.9923958,2.805157)(2.8323956,0.18515709) \psline[linewidth=0.02cm](9.132396,2.745157)(4.352396,0.2651571) \psline[linewidth=0.02cm](6.772396,2.805157)(11.392395,0.2651571) \psline[linewidth=0.02cm](7.412396,5.825157)(2.4923956,3.225157) 
\psline[linewidth=0.02cm](5.772396,5.765157)(7.5723953,3.285157) \psline[linewidth=0.02cm](8.452395,8.825157)(6.732396,6.305157) \usefont{T1}{ppl}{m}{n} \rput(8.054896,8.955156){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color34b]{0 0 1 0}} \usefont{T1}{ppl}{m}{n} \rput(1.3542707,-3.044843){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color36b]{1 0 -1 0}} \usefont{T1}{ppl}{m}{n} \rput(7.7848954,-9.044844){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color34b]{0 -1 0 0}} \usefont{T1}{ppl}{m}{n} \rput(7.7361455,2.955157){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color40b]{1 -1 0 1}} \usefont{T1}{ppl}{m}{n} \rput(3.0661457,-0.044842906){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color34b]{1 -1 1 -1}} \usefont{T1}{ppl}{m}{n} \rput(7.908958,-3.044843){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color34b]{-1 0 1 -1}} \usefont{T1}{ppl}{m}{n} \rput(6.166771,5.9551573){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color40b]{0 1 -1 1}} \usefont{T1}{ppl}{m}{n} \rput(1.3667709,2.955157){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color34b]{0 1 0 -1}} \usefont{T1}{ppl}{m}{n} \rput(12.178959,-0.044842906){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color34b]{-1 0 0 1}} \usefont{T1}{ppl}{m}{n} \rput(6.1270833,-6.0448427){\huge \color{color35}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color34b]{-1 1 -1 0}} \usefont{T1}{ptm}{m}{n} \rput{-25.741508}(1.1330917,2.0318854){\rput(4.9728127,-1.444323){\Large $\alpha_{1}$}} \usefont{T1}{ptm}{m}{n} \rput{-54.618923}(9.14427,2.582438){\rput(7.0328126,-7.544323){\Large $\alpha_{2}$}} \usefont{T1}{ptm}{m}{n} \rput{-29.042452}(0.53184146,4.9662566){\rput(9.812813,1.4756771){\Large $\alpha_{1}$}} \usefont{T1}{ptm}{m}{n} 
\rput{-55.36516}(-0.2839217,2.491726){\rput(2.1928124,1.5356771){\Large $\alpha_{2}$}} \usefont{T1}{ptm}{m}{n} \rput{-25.25097}(2.1731179,0.9346147){\rput(3.1328125,-4.364323){\Large $\alpha_{1}$}} \usefont{T1}{ptm}{m}{n} \rput{-55.1361}(-0.39414895,7.757954){\rput(7.1928124,4.275677){\Large $\alpha_{2}$}} \usefont{T1}{ptm}{m}{n} \rput{56.056866}(-0.44232348,-7.9361887){\rput(7.1928124,-4.364323){\Large $\alpha_{3}$}} \usefont{T1}{ptm}{m}{n} \rput{30.824013}(0.79333687,-5.764919){\rput(10.812813,-1.424323){\Large $\alpha_{4}$}} \usefont{T1}{ptm}{m}{n} \rput{54.099735}(-0.57841253,-2.5143232){\rput(2.1328125,-1.804323){\Large $\alpha_{3}$}} \usefont{T1}{ptm}{m}{n} \rput{31.192688}(1.4824241,-2.7978384){\rput(5.7128124,1.2756771){\Large $\alpha_{4}$}} \usefont{T1}{ptm}{m}{n} \rput{57.646217}(9.693213,-2.5821402){\rput(7.1528125,7.535677){\Large $\alpha_{3}$}} \usefont{T1}{ptm}{m}{n} \rput{29.193901}(2.582943,-1.3653184){\rput(3.8728125,4.295677){\Large $\alpha_{4}$}} \end{pspicture} } \scalebox{0.4} { \begin{pspicture}(0,-9.724231)(14.200729,9.724468) \definecolor{color2}{rgb}{0.996078431372549,0.996078431372549,0.996078431372549} \definecolor{color1b}{rgb}{0.807843137254902,0.0,0.0} \definecolor{color3b}{rgb}{0.8196078431372549,0.00392156862745098,0.00392156862745098} \psline[linewidth=0.02cm](6.6877084,5.5923414)(1.8677083,3.3523417) \psline[linewidth=0.02cm](6.1077085,5.8123417)(7.907708,3.3323417) \psline[linewidth=0.02cm](1.3277084,2.8523417)(3.5477083,0.31234163) \psline[linewidth=0.02cm](8.227708,2.6923416)(3.8877084,0.37234163) \psline[linewidth=0.02cm](7.9877086,2.6323416)(12.287708,0.31234163) \psline[linewidth=0.02cm](12.287708,-0.40765837)(8.327708,-2.6676583) \psline[linewidth=0.02cm](3.5677083,-0.36765838)(8.187708,-2.6476583) \psline[linewidth=0.02cm](3.9077084,-0.16765839)(1.5277083,-2.6476583) \psline[linewidth=0.02cm](6.0477085,-6.327658)(7.6877084,-8.687658) \psline[linewidth=0.02cm](8.187708,-3.2476585)(6.1677084,-5.827658) 
\psline[linewidth=0.02cm](1.4477084,-3.3676584)(6.2677083,-5.787658) \psline[linewidth=0.02cm](8.787708,8.872341)(7.0677085,6.3523417) \usefont{T1}{ppl}{m}{n} \rput(8.922239,9.002341){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color1b]{$T^{45}$}} \usefont{T1}{ppl}{m}{n} \rput(6.3222394,-5.9976583){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color3b]{$T^{13}$}} \usefont{T1}{ppl}{m}{n} \rput(6.5222397,6.0023417){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color3b]{$T^{35}$}} \usefont{T1}{ppl}{m}{n} \rput(8.32224,-2.9976585){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color1b]{$T^{14}$}} \usefont{T1}{ppl}{m}{n} \rput(1.7222396,-2.9976585){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color1b]{$T^{23}$}} \usefont{T1}{ppl}{m}{n} \rput(3.7222395,0.002341618){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color1b]{$T^{24}$}} \usefont{T1}{ppl}{m}{n} \rput(8.32224,3.0023415){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color1b]{$T^{25}$}} \usefont{T1}{ppl}{m}{n} \rput(1.9222395,3.0023415){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color3b]{$T^{34}$}} \usefont{T1}{ppl}{m}{n} \rput(7.92224,-8.997659){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color1b]{$T^{12}$}} \usefont{T1}{ppl}{m}{n} \rput(12.32224,0.002341618){\huge \color{color2}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color1b]{$T^{15}$}} \usefont{T1}{ppl}{m}{n} \rput{27.923681}(1.5524259,-2.5679078){\rput(5.900521,1.8573416){\Large $T^{4}_{5}$}} \usefont{T1}{ppl}{m}{n} \rput{-52.832184}(0.017797528,2.9920733){\rput(2.9805207,1.4973416){\Large $T^{2}_{3}$}} \usefont{T1}{ppl}{m}{n} \rput{-53.010376}(-0.23808256,7.9188347){\rput(7.780521,4.2173414){\Large $T^{2}_{3}$}} \usefont{T1}{ppl}{m}{n} 
\rput{24.464548}(2.256619,-1.132641){\rput(3.7005208,4.6573415){\Large $T^{4}_{5}$}} \usefont{T1}{ppl}{m}{n} \rput{51.624146}(8.784463,-3.045522){\rput(7.5005207,7.5773416){\Large $T^{3}_{4}$}} \usefont{T1}{ppl}{m}{n} \rput{27.923681}(0.650509,-4.940226){\rput(10.220521,-1.1426584){\Large $T^{4}_{5}$}} \usefont{T1}{ppl}{m}{n} \rput{-27.623865}(1.3219821,2.7335584){\rput(6.180521,-1.3026584){\Large $T^{1}_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-28.579546}(0.6402201,5.389804){\rput(10.86052,1.4573417){\Large $T^{1}_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{51.111923}(-0.65760094,-7.0285234){\rput(6.9805207,-4.182658){\Large $T^{3}_{4}$}} \usefont{T1}{ppl}{m}{n} \rput{-24.14912}(2.187732,1.4232564){\rput(4.380521,-4.3826585){\Large $T^{1}_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{41.616947}(-0.37697127,-1.8918123){\rput(2.260521,-1.4226583){\Large $T^{3}_{4}$}} \usefont{T1}{ppl}{m}{n} \rput{-54.95811}(9.251431,2.8239539){\rput(7.300521,-7.4626584){\Large $T^{2}_{3}$}} \end{pspicture} } \caption{\label{the10barofsl5} The Dynkin labels and the components of the ${\bf \overline{10}}$ of $\text{SL}(5,\mathbb{R})$. We denote the simple roots and the corresponding generators connecting the weights as explained in the caption of Fig. \ref{the5ofsl5}.} \end{figure} Again, there is only one dominant weight and all the weights have the same length; thus they all correspond to branes. The generators that annihilate the highest weight are the Cartan generators $H_{\alpha_{1}}$, $H_{\alpha_{2}}$ and $H_{\alpha_{4}}$, all the positive root vectors and the negative root vectors that do not contain $\alpha_3$. Altogether, this generates the orbit~\cite{Lu:1997bg} \begin{equation} \dfrac{\text{SL}(5,\mathbb{R})}{\bigl(\text{SL}(3,\mathbb{R})\times \text{SL}(2,\mathbb{R})\bigr)\ltimes T^{({\bf 3,2})}}\,. \end{equation} By looking at Figs.
\ref{the5ofsl5} and \ref{the10barofsl5}, one notices that while in the case of the ${\bf 5}$ any weight can be reached starting from any other weight by the action of a given generator, in the case of the ${\bf \overline{10}}$ this is no longer true: for any weight in the representation, there are always three weights that are not connected to it by transformations in the algebra. In particular, if one considers the highest weight, one can see that the weights $\boxed{1 \ 0 \ -1 \ 0}$, $\boxed{-1 \ 1 \ -1 \ 0}$ and $\boxed{0 \ -1 \ 0 \ 0}$ in Fig.~\ref{the10barofsl5} are not connected to it by transformations in the algebra. This can easily be seen by noticing that the difference between the weight $\boxed{1 \ 0 \ -1 \ 0}$ and the highest weight is $\alpha_2 +2 \alpha_3 + \alpha_4$, which is not a root. This is also easy to understand in terms of components: the highest weight corresponds to the charge $T^{45}$, and the three charges $T^{12}$, $T^{13}$ and $T^{23}$ are not connected to it because an infinitesimal transformation cannot change both indices. One can then compute the orbit of a 2-charge configuration, which for instance we choose to be $T^{45}+ T^{12}$. In the 2-charge case, the generators that stabilise the orbit are not only the common stabilisers of both weights, but also those generators that take the two weights to the same weight with opposite signs, so that the overall transformation vanishes. This can be seen in Fig. \ref{10barhighestlowest}, which shows that the components $T^{14}$, $T^{15}$, $T^{24}$ and $T^{25}$ are connected to both the highest weight and the lowest weight by infinitesimal transformations. In general, we call such generators the ``conjunction'' stabilisers.
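A simple cross-check of the single-charge stabilisers found above is provided by counting dimensions. With $\dim \text{SL}(5,\mathbb{R})=24$, one has
\begin{equation}
\dim \dfrac{\text{SL}(5,\mathbb{R})}{\text{SL}(4,\mathbb{R})\ltimes T_{\bf 4}} = 24-(15+4) = 5 \ , \qquad \dim \dfrac{\text{SL}(5,\mathbb{R})}{\bigl(\text{SL}(3,\mathbb{R})\times \text{SL}(2,\mathbb{R})\bigr)\ltimes T^{({\bf 3,2})}} = 24-(8+3+6) = 7 \ ,
\end{equation}
the latter being the dimension of the cone of decomposable antisymmetric charges $T^{MN}=v^{[M}w^{N]}$.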
\begin{figure} \centering \scalebox{0.6} { \begin{pspicture}(0,-5.7914467)(17.505625,5.7914467) \psbezier[linewidth=0.02,fillstyle=solid,fillcolor=blue,opacity=0.5](7.06,-5.7785535)(8.326644,-5.7814465)(10.780136,-0.9977477)(10.74,0.0014464753)(10.699863,1.0006407)(9.832819,2.4410713)(8.84,2.3414464)(7.847181,2.2418215)(8.1,1.5014465)(7.7,0.78144646)(7.3,0.061446477)(5.3121037,0.87740755)(5.32,-0.058553524)(5.3278966,-0.9945146)(5.7933564,-5.7756605)(7.06,-5.7785535) \psbezier[linewidth=0.02,fillstyle=solid,fillcolor=yellow,opacity=0.5](7.14,5.7814465)(8.18,5.7814465)(10.7764635,0.756783)(10.68,-0.23855352)(10.583537,-1.23389)(9.44,-2.5985534)(8.94,-2.5385535)(8.44,-2.4785535)(8.24,-2.0585535)(7.78,-0.9985535)(7.32,0.061446477)(6.12105,-1.0707908)(5.54,-0.09855352)(4.9589505,0.8736837)(5.212335,2.547707)(5.62,3.6814466)(6.027665,4.815186)(6.1,5.7814465)(7.14,5.7814465) \psdiamond[linewidth=0.02,dimen=outer](8.12,0.021446476)(2.0,3.6) \psdots[dotsize=0.2](8.12,3.5814464) \psdots[dotsize=0.2](6.12,0.021446476) \psdots[dotsize=0.2](10.08,0.061446477) \psdots[dotsize=0.2](8.1,-3.5185535) \psdots[dotsize=0.2](9.1,-1.7785535) \psdots[dotsize=0.2](7.1,-1.7785535) \psdots[dotsize=0.2](7.14,1.8414465) \psdots[dotsize=0.2](9.08,1.8414465) \psline[linewidth=0.02cm](7.06,5.4014463)(8.14,3.5814464) \psline[linewidth=0.02cm](8.1,-3.5385535)(7.16,-5.3785534) \psdots[dotsize=0.2](7.06,5.4214463) \psdots[dotsize=0.2](7.16,-5.3785534) \usefont{T1}{ppl}{m}{n} \rput(8.832812,3.5864465){\Large $\alpha_{3}$} \usefont{T1}{ppl}{m}{n} \rput(5.8428125,1.8464465){\Large $\alpha_{3}+\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(10.402813,1.8464465){\Large $\alpha_{2}+\alpha_{3}$} \usefont{T1}{ppl}{m}{n} \rput(4.0928126,0.046446476){\Large $\alpha_{2}+\alpha_{3}+\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(11.892813,0.066446476){\Large $\alpha_{1}+\alpha_{2}+\alpha_{3}$} \usefont{T1}{ppl}{m}{n} \rput(5.1628125,-1.7935535){\Large $\alpha_{2}+2\alpha_{3}+\alpha_{4}$} \usefont{T1}{ppl}{m}{n} 
\rput(11.702813,-1.7735535){\Large $\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(10.812812,-3.5135536){\Large $\alpha_{1}+\alpha_{2}+2\alpha_{3}+\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(10.142813,-5.3535533){\Large $\alpha_{1}+2\alpha_{2}+2\alpha_{3}+\alpha_{4}$} \pscircle[linewidth=0.02,dimen=outer,fillstyle=solid,fillcolor=red](7.08,5.4014463){0.2} \pscircle[linewidth=0.02,dimen=outer,fillstyle=solid,fillcolor=red](7.16,-5.3785534){0.2} \end{pspicture} } \caption{This figure describes the weights of the ${\bf \overline{10}}$ that can be reached by an infinitesimal $\text{SL}(5,\mathbb{R})$ transformation starting from the highest weight (yellow set) and from the lowest weight (blue set). The intersection of the two sets contains the weights associated to the conjunction stabilisers. For each weight we write its distance from the highest weight in terms of simple roots. \label{10barhighestlowest}} \end{figure} The 2-charge orbit is determined as follows~\cite{Lu:1997bg}. The common stabilisers are the Cartan generators $H_{\alpha_{1}}$ and $H_{\alpha_{4}}$ and the root vectors \begin{equation} E_{\pm\alpha_{1}}\ , \ \ E_{\pm\alpha_{4}}\ ,\ \ E_{-\alpha_{2}}\ ,\ \ E_{\alpha_{3}}\ ,\ \ E_{\alpha_{3}+\alpha_{4}}\ ,\ \ E_{-\alpha_{1}-\alpha_{2}} \ , \end{equation} while the conjunction stabilisers are \begin{eqnarray} & & E_{\alpha_{1}+\alpha_{2}+\alpha_{3}}-E_{-\alpha_{2}-\alpha_{3}-\alpha_{4}} \ , \ \ E_{\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}}-E_{-\alpha_{2}-\alpha_{3}}\ ,\nonumber \\ & & E_{\alpha_{2}+\alpha_{3}+\alpha_{4}}-E_{-\alpha_{1}-\alpha_{2}-\alpha_{3}}\ , \ \ E_{\alpha_{2}+\alpha_{3}}-E_{-\alpha_{1}-\alpha_{2}-\alpha_{3}-\alpha_{4}} \ \ . 
\end{eqnarray} To extract the semisimple part we make the identifications \begin{eqnarray} && E_{\alpha_{1}+\alpha_{2}+\alpha_{3}}-E_{-\alpha_{2}-\alpha_{3}-\alpha_{4}}\rightarrow E_{\frac{\alpha_{1}-\alpha_{4}}{2}}\ ,\nonumber \\ &&E_{\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}}-E_{-\alpha_{2}-\alpha_{3}}\rightarrow E_{\frac{\alpha_{1}+\alpha_{4}}{2}}\ ,\nonumber \\ && E_{\alpha_{2}+\alpha_{3}+\alpha_{4}}-E_{-\alpha_{1}-\alpha_{2}-\alpha_{3}}\rightarrow E_{\frac{-\alpha_{1}+\alpha_{4}}{2}}\ ,\nonumber \\ && E_{\alpha_{2}+\alpha_{3}}-E_{-\alpha_{1}-\alpha_{2}-\alpha_{3}-\alpha_{4}}\rightarrow E_{-\frac{\alpha_{1}+\alpha_{4}}{2}} \ . \end{eqnarray} Defining $\beta_{1}=\frac{\alpha_{1}-\alpha_{4}}{2}$ and $\beta_{2}=\alpha_{4}$, one recognises that the conjunction stabilisers, together with the stabilising roots $\pm\alpha_{1}$ and $\pm\alpha_{4}$ and the Cartan generators $ H_{\beta_{1}}= H_{\frac{\alpha_{1}-\alpha_{4}}{2}}=H_{\alpha_{1}}-H_{\alpha_{4}}$, $H_{\beta_{2}}=H_{\alpha_{4}}$, generate the algebra $\text{SO}(2,3)$ with simple roots $\beta_{1}$ and $\beta_{2}$; the rest of the stabilising roots reorganise themselves into the representation ${\bf 4}$ of this group, with highest weight $-\alpha_{2}$. Thus, the two-charge orbit for the 1-forms in $D=7$ (coupling to 0-branes) is the 10-dimensional coset \begin{equation} \dfrac{\text{SL}(5,\mathbb{R})}{\text{SO}(2,3)\ltimes T^{\bf 4}}\,. \end{equation} Given that all the weights can be reached in any 2-charge orbit, there are no configurations with 3 charges in this case. The fact that there is only a single-charge orbit in the ${\bf 5}$, while there is also a 2-charge orbit in the ${\bf \overline{10}}$, can be understood in terms of invariants: for $T_M$ in the ${\bf 5}$ there is no non-trivial contraction with the invariant tensor $\epsilon^{M_1 ...M_5}$ that one can write, while for $T^{MN}$ one can construct \begin{equation}\label{inv} T^{MN} T^{PQ} \epsilon_{MNPQR} \quad .
\end{equation} For the highest weight orbit, for which only one component of the charge is turned on, this quantity clearly vanishes, since the five indices of the epsilon tensor cannot all be different; for the 2-charge orbit it does not: for the configuration $T^{45}+T^{12}$, the $R=3$ component is proportional to $T^{45}T^{12}$. We now discuss the amount of supersymmetry that these orbits preserve. As we discussed in section 3, the representations of $\text{SL}(5,\mathbb{R})$ for standard branes decompose under the R-symmetry $\text{SO}(5)$ as in the first line of eq. \eqref{standardTQnonstandardTRQ}, to give entirely the central charges $Q$. This means that the $\text{SL}(5,\mathbb{R})$ invariants in terms of $T$ are identical to the R-symmetry invariants in terms of the central charges $Q$. In the case of the ${\bf 5}$ of $\text{SL}(5,\mathbb{R})$ there is only one orbit, and therefore any charge in the ${\bf 5}$ corresponds to a half-BPS configuration. For the ${\bf \overline{10}}$, the invariant of $\text{SL}(5,\mathbb{R})$, given in eq.~\eqref{inv}, is mapped to the same invariant of $\text{SO}(5)$ written in terms of the central charges $Q_{MN}$, and thus the single-charge and the 2-charge orbits preserve a different amount of supersymmetry, namely 1/2 and 1/4 respectively. This analysis is completely general. For instance, in $D=4$ the 0-brane orbits are classified in terms of a quartic invariant~\cite{Ferrara:1997ci} \begin{equation} I_4 = d^{MNPQ} T_M T_N T_P T_Q \quad , \end{equation} where the index $M$ labels the ${\bf 56}$ of $\text{E}_{7(7)}$ and $d^{MNPQ}$ is the invariant tensor of $\text{E}_{7(7)}$ in the fully symmetric product of four ${\bf 56}$'s. The highest weight orbit is given by \begin{equation} I_4 = \frac{\partial I_4}{\partial T_M} = \frac{\partial^2 I_4}{\partial T_M \partial T_N} =0 \quad , \end{equation} and it preserves 16 supercharges. The 2-charge orbit is given by the constraints \begin{equation} I_4 = \frac{\partial I_4}{\partial T_M} = 0 \ , \quad \frac{\partial^2 I_4}{\partial T_M \partial T_N} \neq 0 \quad , \end{equation} and it preserves 8 supercharges.
The 3-charge orbit is given by \begin{equation} I_4 = 0 \ , \quad \frac{\partial I_4}{\partial T_M} \neq 0 \quad , \end{equation} and it preserves 4 supercharges. Finally, the case $I_4 \neq 0$ can either give a 4-charge orbit, preserving again 4 supercharges, or a dyonic orbit, preserving no supersymmetry at all. A complete summary of orbits and invariants in maximal supergravity theories can be found in \cite{Borsten:2010aa}. This concludes our discussion of the standard-brane orbits. In the next subsection we will consider the non-standard-brane orbits and the supersymmetry that they preserve. \subsection{Non-standard-brane orbits} The single-charge orbits for non-standard branes have been derived in \cite{Bergshoeff:2012ex}. Here we consider the multiple-charge orbits for non-standard branes. Again, we focus on the seven-dimensional case, and we consider in particular the tensor 5-branes, with charges $T^{MN}=T^{NM}$ in the ${\bf \overline{15}}$. The Dynkin labels of the weights, and the corresponding components, are shown in Fig.~\ref{the15barofsl5}. The five brane charges $T^{MM}$ correspond to the long weights, which are as usual painted in red in the figure. The other ten weights are exactly the weights of the ${\bf \overline{10}}$, as one can see by comparing with Fig.~\ref{the10barofsl5}. The highest weight of the ${\bf \overline{10}}$ is $\boxed{0\ 0\ 1 \ 0}$, and indeed this weight is present as a dominant weight in Fig.~\ref{the15barofsl5}.
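Indeed, this short dominant weight is reached from the highest weight of the ${\bf \overline{15}}$ in a single step: subtracting the fourth row of the $A_4$ Cartan matrix gives
\begin{equation}
(0,0,0,2)-(0,0,-1,2) = (0,0,1,0) \ ,
\end{equation}
corresponding in components to the lowering $T^{55}\rightarrow T^{45}$ shown in Fig.~\ref{the15barofsl5}.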
\begin{figure} \centering \scalebox{0.3} { \begin{pspicture}(0,-16.430677)(15.284167,16.430677) \definecolor{color189}{rgb}{0.996078431372549,0.996078431372549,0.996078431372549} \definecolor{color191b}{rgb}{0.807843137254902,0.0,0.0} \psline[linewidth=0.02cm](8.379583,7.8051558)(4.1595836,4.3251557) \usefont{T1}{ppl}{m}{n} \rput{39.154793}(5.2771573,-2.4566276){\rput(6.052396,6.210156){\Large $\alpha_{4}$}} \usefont{T1}{ppl}{m}{n} \rput{35.41034}(1.0519768,-7.1933784){\rput(11.752396,-1.9298441){\Large $\alpha_{4}$}} \psline[linewidth=0.02cm](14.559584,-0.27484417)(9.959583,-3.5148442) \psline[linewidth=0.02cm](8.099584,-12.174844)(12.939584,-15.754844) \usefont{T1}{ppl}{m}{n} \rput{-37.808403}(11.027237,3.8620942){\rput(11.112396,-14.149844){\Large $\alpha_{1}$}} \psline[linewidth=0.02cm](3.4795833,-8.134844)(8.159583,-11.734844) \psline[linewidth=0.02cm](7.4395833,-8.234844)(8.999583,-11.854844) \usefont{T1}{ppl}{m}{n} \rput{-39.194077}(7.7066684,1.7082552){\rput(6.2123957,-9.949844){\Large $\alpha_{1}$}} \usefont{T1}{ppl}{m}{n} \rput{-65.67299}(14.301894,1.7830572){\rput(8.492395,-10.169845){\Large $\alpha_{2}$}} \psline[linewidth=0.02cm](2.5595834,-4.0748444)(4.2195835,-7.774844) \psline[linewidth=0.02cm](1.8395833,-4.0548444)(9.119583,-7.6948442) \psline[linewidth=0.02cm](9.099584,-3.9948442)(8.259583,-7.774844) \usefont{T1}{ppl}{m}{n} \rput{-24.752228}(2.6807318,1.5588244){\rput(4.852396,-5.309844){\Large $\alpha_{1}$}} \usefont{T1}{ppl}{m}{n} \rput{-63.836166}(7.811928,-0.11656395){\rput(3.7723958,-6.309844){\Large $\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{76.59314}(0.89201146,-12.707747){\rput(8.452395,-5.769844){\Large $\alpha_{3}$}} \psline[linewidth=0.02cm](0.9195833,-0.27484417)(2.2795832,-3.654844) \psline[linewidth=0.02cm](3.2595832,-0.25484416)(7.7195835,-3.5348442) \psline[linewidth=0.02cm](4.8795834,-0.23484416)(3.4395833,-3.594844) \usefont{T1}{ppl}{m}{n} \rput{62.927948}(0.024187824,-4.577653){\rput(3.712396,-2.249844){\Large $\alpha_{3}$}} 
\usefont{T1}{ppl}{m}{n} \rput{-37.051865}(2.3561027,3.1729453){\rput(5.872396,-1.9098442){\Large $\alpha_{1}$}} \usefont{T1}{ppl}{m}{n} \rput{-69.30491}(3.0714183,0.46543965){\rput(1.8323958,-1.9698441){\Large $\alpha_{2}$}} \psline[linewidth=0.02cm](3.2395833,3.8051558)(1.8995833,0.18515584) \psline[linewidth=0.02cm](2.4995832,3.8251557)(4.099583,0.20515585) \psline[linewidth=0.02cm](7.6595836,3.7651558)(12.599584,0.20515585) \psline[linewidth=0.02cm](9.959583,3.7451558)(5.7395835,0.26515585) \usefont{T1}{ppl}{m}{n} \rput{67.623085}(3.1497402,-0.9611019){\rput(2.2523959,1.8901558){\Large $\alpha_{3}$}} \usefont{T1}{ppl}{m}{n} \rput{39.154793}(3.0684018,-4.365969){\rput(7.6323957,2.1501558){\Large $\alpha_{4}$}} \usefont{T1}{ppl}{m}{n} \rput{-64.33946}(0.2782237,4.344204){\rput(3.5523958,1.9701558){\Large $\alpha_{2}$}} \usefont{T1}{ppl}{m}{n} \rput{-35.568764}(1.086935,6.6904535){\rput(10.932396,1.6701559){\Large $\alpha_{1}$}} \psline[linewidth=0.02cm](4.6595836,7.8251557)(3.2595832,4.285156) \psline[linewidth=0.02cm](6.9195833,7.8051558)(8.539583,4.205156) \usefont{T1}{ppl}{m}{n} \rput{66.949005}(7.606078,0.19922464){\rput(3.6123958,5.870156){\Large $\alpha_{3}$}} \usefont{T1}{ppl}{m}{n} \rput{-67.453}(-0.37120062,11.02584){\rput(8.032395,5.810156){\Large $\alpha_{2}$}} \psline[linewidth=0.02cm](9.7195835,11.725156)(5.519583,8.305156) \psline[linewidth=0.02cm](9.039583,11.825156)(7.8395834,8.185156) \usefont{T1}{ppl}{m}{n} \rput{41.565414}(8.206853,-2.1624763){\rput(6.912396,9.750155){\Large $\alpha_{4}$}} \usefont{T1}{ppl}{m}{n} \rput{65.48303}(13.0030155,-1.8401575){\rput(7.892396,9.2101555){\Large $\alpha_{3}$}} \psline[linewidth=0.02cm](13.679584,15.805156)(9.639584,12.285156) \usefont{T1}{ppl}{m}{n} \rput{41.356895}(11.868701,-3.863497){\rput(11.012396,13.810156){\Large $\alpha_{4}$}} \usefont{T1}{ppl}{m}{n} \rput(12.634114,15.955155){\huge \color{color189}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color191b]{0 0 0 2}} \usefont{T1}{ppl}{m}{n} 
\rput(9.175834,-12.044845){\huge \psframebox[linewidth=0.02,fillstyle=solid]{0 -1 0 0}} \usefont{T1}{ppl}{m}{n} \rput(7.711927,-8.044845){\huge \psframebox[linewidth=0.02,fillstyle=solid]{-1 1 -1 0}} \usefont{T1}{ppl}{m}{n} \rput(4.5733333,-8.044845){\huge \color{color189}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color191b]{2 -2 0 0}} \usefont{T1}{ppl}{m}{n} \rput(8.702865,-3.844844){\huge \psframebox[linewidth=0.02,fillstyle=solid]{-1 0 1 -1}} \usefont{T1}{ppl}{m}{n} \rput(1.3758334,-0.044844158){\huge \color{color189}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color191b]{0 2 -2 0}} \usefont{T1}{ppl}{m}{n} \rput(4.481458,-0.044844158){\huge \psframebox[linewidth=0.02,fillstyle=solid]{1 -1 1 -1}} \usefont{T1}{ppl}{m}{n} \rput(8.751458,3.9551558){\huge \psframebox[linewidth=0.02,fillstyle=solid]{1 -1 0 1}} \usefont{T1}{ppl}{m}{n} \rput(3.0267708,3.9551558){\huge \psframebox[linewidth=0.02,fillstyle=solid]{0 1 0 -1}} \usefont{T1}{ppl}{m}{n} \rput(2.760521,-3.844844){\huge \psframebox[linewidth=0.02,fillstyle=solid]{1 0 -1 0}} \usefont{T1}{ppl}{m}{n} \rput(4.3641148,7.955156){\huge \color{color189}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color191b]{0 0 2 -2}} \usefont{T1}{ppl}{m}{n} \rput(7.3667707,7.955156){\huge \psframebox[linewidth=0.02,fillstyle=solid]{0 1 -1 1}} \usefont{T1}{ppl}{m}{n} \rput(8.645833,11.955155){\huge \psframebox[linewidth=0.02,fillstyle=solid]{0 0 1 0}} \usefont{T1}{ppl}{m}{n} \rput(13.372865,-0.044844158){\huge \psframebox[linewidth=0.02,fillstyle=solid]{-1 0 0 1}} \usefont{T1}{ppl}{m}{n} \rput(13.781927,-16.044844){\huge \color{color189}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color191b]{-2 0 0 0}} \end{pspicture} } \scalebox{0.3} { \begin{pspicture}(0,-16.488802)(16.22073,16.488802) \definecolor{color283}{rgb}{0.996078431372549,0.996078431372549,0.996078431372549} \definecolor{color285b}{rgb}{0.807843137254902,0.0,0.0} 
\definecolor{color27}{rgb}{0.00392156862745098,0.00392156862745098,0.00392156862745098} \definecolor{color41b}{rgb}{0.7254901960784313,0.0,0.0} \psline[linewidth=0.02cm](7.6077085,7.751094)(3.3877084,4.271094) \usefont{T1}{ppl}{m}{n} \rput{39.154793}(5.0696917,-1.9813921){\rput(5.280521,6.1560936){\Large $T_5^4$}} \psline[linewidth=0.02cm](2.8277082,-4.148906)(8.387709,-7.668906) \usefont{T1}{ppl}{m}{n} \rput{35.41034}(0.9237371,-6.7797995){\rput(11.040521,-1.9239062){\Large $T_5^4$}} \psline[linewidth=0.02cm](13.847709,-0.26890624)(9.247708,-3.5089064) \psline[linewidth=0.02cm](9.467709,-12.188907)(14.307709,-15.768907) \usefont{T1}{ppl}{m}{n} \rput{-37.808403}(11.323074,4.697834){\rput(12.480521,-14.163906){\Large $T_2^1$}} \psline[linewidth=0.02cm](4.927708,-8.128906)(9.607708,-11.728907) \psline[linewidth=0.02cm](7.947708,-8.188907)(9.507709,-11.808907) \usefont{T1}{ppl}{m}{n} \rput{-39.194077}(8.02873,2.6247327){\rput(7.660521,-9.943906){\Large $T_2^1$}} \usefont{T1}{ppl}{m}{n} \rput{-65.67299}(14.558841,2.27308){\rput(9.000521,-10.123906){\Large $T_3^2$}} \psline[linewidth=0.02cm](2.8349233,-4.031556)(4.7579303,-7.6019406) \psline[linewidth=0.02cm](9.087708,-3.9489062)(8.247708,-7.728906) \usefont{T1}{ppl}{m}{n} \rput{-30.78627}(3.6775713,2.0713391){\rput(5.5605206,-5.623906){\Large $T_2^1$}} \usefont{T1}{ppl}{m}{n} \rput{-59.692616}(7.447431,0.60089105){\rput(4.207341,-6.170141){\Large $T_3^2$}} \usefont{T1}{ppl}{m}{n} \rput{76.59314}(0.9275766,-12.660911){\rput(8.440521,-5.723906){\Large $T_4^3$}} \psline[linewidth=0.02cm](1.4277084,-0.18890625)(2.7877083,-3.5689063) \psline[linewidth=0.02cm](4.427708,-0.20890625)(8.887709,-3.4889061) \psline[linewidth=0.02cm](4.847708,-0.12890625)(3.4077084,-3.4889061) \usefont{T1}{ppl}{m}{n} \rput{62.927948}(0.10115026,-4.4915457){\rput(3.6805208,-2.1439064){\Large $T_4^3$}} \usefont{T1}{ppl}{m}{n} \rput{-37.051865}(2.5642788,3.8860598){\rput(7.0405207,-1.8639063){\Large $T_2^1$}} \usefont{T1}{ppl}{m}{n} 
\rput{-69.30491}(3.319582,0.99634534){\rput(2.3405209,-1.8839062){\Large $T_3^2$}} \psline[linewidth=0.02cm](3.2277083,3.8510938)(1.8877083,0.23109375) \psline[linewidth=0.02cm](3.0277083,3.8910937)(4.6277084,0.27109376) \psline[linewidth=0.02cm](8.867708,3.8110938)(13.807709,0.25109375) \psline[linewidth=0.02cm](9.387709,3.7510939)(5.1677084,0.27109376) \usefont{T1}{ppl}{m}{n} \rput{67.623085}(3.1848648,-0.92167175){\rput(2.2405207,1.9360938){\Large $T_4^3$}} \usefont{T1}{ppl}{m}{n} \rput{39.154793}(2.9437325,-4.003544){\rput(7.0605206,2.1560938){\Large $T_5^4$}} \usefont{T1}{ppl}{m}{n} \rput{-64.33946}(0.5182157,4.8576274){\rput(4.0805206,2.0360937){\Large $T_3^2$}} \usefont{T1}{ppl}{m}{n} \rput{-35.568764}(1.2856282,7.401767){\rput(12.140521,1.7160938){\Large $T_2^1$}} \psline[linewidth=0.02cm](4.6477084,7.8710938)(3.2477083,4.331094) \psline[linewidth=0.02cm](7.5077085,7.9110937)(9.127708,4.311094) \usefont{T1}{ppl}{m}{n} \rput{-67.453}(-0.10642798,11.634328){\rput(8.620521,5.916094){\Large $T_3^2$}} \usefont{T1}{ppl}{m}{n} \rput{66.949005}(7.6411233,0.23810239){\rput(3.6005208,5.916094){\Large $T_4^3$}} \psline[linewidth=0.02cm](9.187708,11.811093)(4.9877086,8.391094) \psline[linewidth=0.02cm](9.027708,11.871094)(7.8277082,8.231093) \usefont{T1}{ppl}{m}{n} \rput{41.565414}(8.129944,-1.7879512){\rput(6.380521,9.836094){\Large $T_5^4$}} \usefont{T1}{ppl}{m}{n} \rput{65.48303}(13.037864,-1.8024778){\rput(7.880521,9.256094){\Large $T_4^3$}} \psline[linewidth=0.02cm](13.667708,15.851093)(9.627708,12.331094) \usefont{T1}{ppl}{m}{n} \rput{41.356895}(11.896092,-3.8441942){\rput(11.000521,13.856093){\Large $T_5^4$}} \usefont{T1}{ppl}{m}{n} \rput(13.23224,16.001093){\huge \color{color283}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color285b]{$T^{55}$}} \usefont{T1}{ppl}{m}{n} \rput(9.632239,-11.998906){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{12}$}} \usefont{T1}{ppl}{m}{n} \rput(8.19224,-7.998906){\huge 
\psframebox[linewidth=0.02,fillstyle=solid]{$T^{13}$}} \usefont{T1}{ppl}{m}{n} \rput(5.0322394,-7.998906){\huge \color{color283}\psframebox[linewidth=0.02,linecolor=color27,fillstyle=solid,fillcolor=color285b]{$T^{22}$}} \usefont{T1}{ppl}{m}{n} \rput(9.03224,-3.7989063){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{14}$}} \usefont{T1}{ppl}{m}{n} \rput(1.8322396,0.00109375){\huge \color{color283}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color285b]{$T^{33}$}} \usefont{T1}{ppl}{m}{n} \rput(4.8322396,0.00109375){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{24}$}} \usefont{T1}{ppl}{m}{n} \rput(9.23224,4.001094){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{25}$}} \usefont{T1}{ppl}{m}{n} \rput(3.4322395,4.001094){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{34}$}} \usefont{T1}{ppl}{m}{n} \rput(3.2322395,-3.7989063){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{23}$}} \usefont{T1}{ppl}{m}{n} \rput(4.8322396,8.001094){\huge \color{color283}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color41b]{$T^{44}$}} \usefont{T1}{ppl}{m}{n} \rput(7.8322396,8.001094){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{35}$}} \usefont{T1}{ppl}{m}{n} \rput(9.23224,12.001094){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{45}$}} \usefont{T1}{ppl}{m}{n} \rput(13.832239,0.00109375){\huge \psframebox[linewidth=0.02,fillstyle=solid]{$T^{15}$}} \usefont{T1}{ppl}{m}{n} \rput(14.23224,-15.998906){\huge \color{color283}\psframebox[linewidth=0.02,fillstyle=solid,fillcolor=color285b]{$T^{11}$}} \end{pspicture} } \caption{\label{the15barofsl5} The Dynkin labels and the components of the ${\bf \overline{15}}$ of $\text{SL}(5,\mathbb{R })$. We denote the simple roots and the corresponding generators connecting the weights as explained in the caption of Fig. \ref{the5ofsl5}.} \end{figure} One can compute the orbits exactly as for the standard branes. 
The highest-weight orbit is the same as the highest-weight orbit of the ${\bf 5}$, which is given in eq.~\eqref{theorbitofhw5ofsl5}. The fact that these two highest-weight orbits coincide is not surprising since the weights that can be reached from the highest weight of the ${\bf \overline{15}}$ form the ${\bf \overline{5}}$ of $\text{SL}(5,\mathbb{R})$. In components, this means that infinitesimally one can only transform one of the two indices. \begin{figure} \centering \scalebox{0.7} { \begin{pspicture}(0,-10.076092)(19.423656,10.076092) \psbezier[linewidth=0.02,fillstyle=solid,fillcolor=yellow,opacity=0.4](13.744595,0.8460917)(13.524594,0.38609168)(8.904594,-6.9339085)(8.564594,-7.313908)(8.224594,-7.693908)(7.2500324,-7.3801284)(7.4845943,-6.693908)(7.719156,-6.007688)(12.291435,0.95848805)(12.764594,1.4460917)(13.237753,1.9336953)(13.964594,1.3060917)(13.744595,0.8460917) \psbezier[linewidth=0.02,fillstyle=solid,fillcolor=green,opacity=0.4](9.624594,-5.4739084)(8.951031,-5.5923204)(5.9921303,-1.4422174)(6.144594,-0.45390832)(6.297058,0.5344007)(12.008481,4.393749)(12.624594,3.6460917)(13.240707,2.8984342)(7.884594,-0.27390832)(7.844594,-0.87390834)(7.804594,-1.4739083)(9.024594,-3.0339084)(9.484594,-3.7539084)(9.944594,-4.4739084)(10.298158,-5.3554964)(9.624594,-5.4739084) \psbezier[linewidth=0.02,fillstyle=solid,fillcolor=purple,opacity=0.4](5.104594,1.9260917)(5.5045943,2.4260917)(6.046441,3.3110914)(6.904594,3.9060917)(7.7627473,4.501092)(11.104594,6.226092)(11.504594,5.5660915)(11.904594,4.9060917)(8.664594,3.4860916)(7.9245944,2.9460917)(7.184594,2.4060917)(6.220324,1.485968)(6.284594,1.0660917)(6.348864,0.64621544)(7.6049094,-0.45083404)(8.524594,-1.0139083)(9.444279,-1.5769826)(11.744595,-2.2139084)(11.204595,-3.1139083)(10.664594,-4.0139084)(7.744594,-2.0739083)(7.244594,-1.8139083)(6.744594,-1.5539083)(5.5845942,-0.2539083)(5.144594,0.3060917)(4.704594,0.86609167)(4.704594,1.4260917)(5.104594,1.9260917) 
\psbezier[linewidth=0.02,fillstyle=solid,fillcolor=blue,opacity=0.4](10.004594,7.766092)(10.644439,7.092736)(7.704594,3.8660917)(7.784594,3.3660917)(7.864594,2.8660917)(8.595051,2.4245338)(9.444594,1.8460916)(10.294138,1.2676495)(11.864594,0.06609169)(12.404594,-0.3139083)(12.944594,-0.69390833)(12.184594,-1.5939083)(11.744595,-1.3339083)(11.304594,-1.0739083)(6.0930114,2.4480517)(6.124594,3.0660918)(6.156177,3.6841316)(6.9845943,4.826092)(7.4245944,5.4060917)(7.864594,5.9860916)(9.364749,8.439447)(10.004594,7.766092) \psbezier[linewidth=0.02,fillstyle=solid,fillcolor=orange,opacity=0.4](7.904594,9.626092)(8.804594,10.066092)(11.304594,5.726092)(11.864594,4.746092)(12.424594,3.7660916)(14.264594,1.2660917)(13.584594,0.68609166)(12.904594,0.106091686)(11.884594,1.7660917)(11.444594,2.4660916)(11.004594,3.1660917)(8.444594,7.0460916)(8.304594,7.306092)(8.164594,7.5660915)(7.0045943,9.186091)(7.904594,9.626092) \psdiamond[linewidth=0.02,dimen=outer](9.424594,1.1660917)(3.9,6.0) \psline[linewidth=0.02cm](9.424594,7.1460915)(8.044594,9.166092) \psline[linewidth=0.02cm](9.424594,-4.793908)(8.144594,-6.793908) \psdots[dotsize=0.2](5.5245943,1.1460917) \psdots[dotsize=0.2](9.264594,1.1460917) \psdots[dotsize=0.2](13.284595,1.1460917) \psdots[dotsize=0.2](6.864594,3.1660917) \psdots[dotsize=0.2](8.144594,5.1460915) \psdots[dotsize=0.2](10.684594,5.166092) \psdots[dotsize=0.2](9.404594,7.1460915) \psdots[dotsize=0.2](12.004594,3.1460917) \psdots[dotsize=0.2](9.404594,-4.833908) \psdots[dotsize=0.2](6.824594,-0.8539083) \psdots[dotsize=0.2](8.124594,-2.8539083) \psdots[dotsize=0.2](10.724594,-2.8539083) \psdots[dotsize=0.2](12.004594,-0.8339083) \psdots[dotsize=0.2](8.164594,-6.813908) \psdots[dotsize=0.2](8.044594,9.146091) \usefont{T1}{ppl}{m}{n} \rput(10.029125,7.1560917){$\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(7.4191256,5.1560917){$2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(11.989125,5.1760917){$\alpha_{3}+\alpha_{4}$} \usefont{T1}{ppl}{m}{n} 
\rput(13.269125,3.1560917){$\alpha_{2}+\alpha_{3}+\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(5.9591255,3.1760917){$\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(4.1091253,1.1560917){$2\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(9.579125,1.4160917){$\alpha_{2}+\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(15.209125,1.1560917){$\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(5.1891255,-0.8239083){$\alpha_{2}+2\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(6.2791254,-2.8239083){$2\alpha_{2}+2\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(13.959126,-0.8239083){$\alpha_{1}+\alpha_{2}+\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(12.809125,-2.8839083){$\alpha_{1}+\alpha_{2}+2\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(11.619125,-4.8839083){$\alpha_{1}+2\alpha_{2}+2\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ppl}{m}{n} \rput(10.649125,-6.7639084){$2\alpha_{1}+2\alpha_{2}+2\alpha_{3}+2\alpha_{4}$} \usefont{T1}{ptm}{m}{n} \rput(8.129437,-6.8039083){\color{red}\pscirclebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{5}} \usefont{T1}{ptm}{m}{n} \rput(5.5110006,1.1160917){\color{red}\pscirclebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{3}} \usefont{T1}{ptm}{m}{n} \rput(8.128032,5.1360917){\color{red}\pscirclebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{2}} \usefont{T1}{ptm}{m}{n} \rput(8.022719,9.136091){\color{red}\pscirclebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{1}} \usefont{T1}{ptm}{m}{n} \rput(8.120219,-2.8439083){\color{red}\pscirclebox[linewidth=0.02,fillstyle=solid,fillcolor=red]{4}} \end{pspicture} } \caption{\label{fivecolourfigure}This figure describes the sets of weights of the ${\bf \overline{15}}$ that can be reached by an infinitesimal $\text{SL}(5,\mathbb{R})$ transformation starting from any of the five longest weights. Sets of weights associated to different longest weights correspond to different colours. 
As can be seen from the figure, any short weight belongs to two sets, which means that one associates to it a 2-conjunction stabiliser. For each weight we write its distance from the highest weight in terms of simple roots. } \end{figure} We now consider the multiple-charge orbits of the ${\bf \overline{15}}$. In Fig.~\ref{fivecolourfigure} we show all the weights that can be reached by an infinitesimal $\text{SL}(5,\mathbb{R})$ transformation starting from any of the five long weights. We note that one can never reach a long weight starting from another one\,\footnote{Again, this is easy to understand in components because one cannot rotate for instance $T^{11}$ to any $T^{MM}$ with $M \neq 1$ with infinitesimal transformations.}. Each short weight $T^{MN}$, $M\neq N$, is connected infinitesimally to two long weights, $T^{MM}$ and $T^{NN}$. This implies that there is a conjunction stabiliser in the orbit of the bound state containing the branes corresponding to the charges $T^{MM}$ and $T^{NN}$. There are no $n$-conjunction stabilisers with $n>2$ because there are no weights that are connected to $n$ long weights for $n>2$. The 2-charge orbit is \begin{equation} \dfrac{\text{SL}(5,\mathbb{R})}{(\text{SL}(3,\mathbb{R})\times \text{SO}(1))\ltimes( T^{\bf 3}\times T^{\bf 3})} \ , \end{equation} the three-charge orbit is \begin{equation} \dfrac{\text{SL}(5,\mathbb{R})}{(\text{SL}(2,\mathbb{R})\times \text{SO}(3))\ltimes T^{\bf(3,2)}} \ , \end{equation} while the four and five-charge orbits are respectively \begin{equation} \dfrac{\text{SL}(5,\mathbb{R})}{\text{SO}(3)\ltimes T^{\bf 4}} \end{equation} and \begin{equation} \dfrac{\text{SL}(5,\mathbb{R})}{\text{SO}(5)}.
\end{equation} In particular, one can see that the stabilisers of the 5-charge orbit are the generators $E_\alpha - E_{-\alpha}$ for all the positive roots of $\text{SL}(5,\mathbb{R})$, and the Cartan generators and the simple-root vectors of $\text{SO}(5)$ can be written as \begin{eqnarray} &&H_{\beta_{1}}=iE_{\alpha_{2}+\alpha_{3}}-iE_{-\alpha_{2}-\alpha_{3}}-iE_{\alpha_{3}+\alpha_{4}} +iE_{-\alpha_{3}-\alpha_{4}}\,,\nonumber \\[.2truecm] &&H_{\beta_{2}}=2iE_{\alpha_{3}+\alpha_{4}}-2iE_{-\alpha_{3}-\alpha_{4}}\,,\nonumber \\[.2truecm] & &E_{\beta_{1}}=\frac{1}{4}\bigl(E_{\alpha_{2}}-E_{-\alpha_{2}}-iE_{\alpha_{2}+\alpha_{3}+\alpha_{4}}+ iE_{-\alpha_{2}-\alpha_{3}-\alpha_{4}}+E_{\alpha_{4}}-E_{-\alpha_{4}}-iE_{\alpha_{3}}+iE_{-\alpha_{3}}\bigr) \ ,\nonumber\\ && E_{\beta_{2}}=\frac{\sqrt{2}}{4}\bigl(iE_{\alpha_{1}+\alpha_{2}}-iE_{-\alpha_{1}-\alpha_{2}}-E_{\alpha_{1}+\alpha_{2} +\alpha_{3}+\alpha_{4}}+E_{-\alpha_{1}-\alpha_{2}-\alpha_{3}-\alpha_{4}}\bigr)\,, \end{eqnarray} where $\beta_1$ and $\beta_2$ are the simple roots of $\text{SO}(5)$. We next analyse the invariants. The highest-weight (single-charge) orbit is defined by the constraint \begin{equation} T^{M_1 N_1} T^{M_2 N_2} \epsilon_{N_1 N_2 ...N_5}=0 \quad . \end{equation} When this quantity is instead non-vanishing, but \begin{equation} T^{M_1 N_1} T^{M_2 N_2} T^{M_3 N_3} \epsilon_{N_1 N_2 ...N_5}=0 \quad ,\label{2chargeorbit15} \end{equation} one obtains the 2-charge orbit. Proceeding this way, one arrives at a five-charge orbit, for which the quantity \begin{equation} T^{M_1 N_1} T^{M_2 N_2} T^{M_3 N_3} T^{M_4 N_4} T^{M_5 N_5} \epsilon_{N_1 N_2 ...N_5} \end{equation} is non-vanishing. We finally consider the supersymmetry. 
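Before doing so, note that the rank conditions encoded by this chain of invariants can be checked numerically. A minimal sketch (numpy; the symmetric charge matrices below are illustrative examples, not configurations taken from the text):

```python
import numpy as np
from itertools import permutations

# Five-dimensional Levi-Civita symbol, built from permutation parities.
eps = np.zeros((5,) * 5)
for perm in permutations(range(5)):
    parity, p = 1, list(perm)
    for i in range(5):
        for j in range(i + 1, 5):
            if p[i] > p[j]:
                parity = -parity
    eps[perm] = parity

def contract(T, k):
    """k-fold contraction T^{M_1 N_1} ... T^{M_k N_k} eps_{N_1 ... N_k ...}.

    Each step contracts one free epsilon index; by total antisymmetry
    the order of the contracted slots is irrelevant for the
    vanishing/non-vanishing question studied here."""
    out = eps
    for _ in range(k):
        out = np.tensordot(T, out, axes=([1], [out.ndim - 1]))
    return out

# Rank-1 charge (single-charge orbit): the 2-fold contraction vanishes.
v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
assert np.allclose(contract(np.outer(v, v), 2), 0)

# Rank-2 charge (2-charge orbit): 2-fold contraction survives, 3-fold vanishes.
T2 = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])
assert not np.allclose(contract(T2, 2), 0)
assert np.allclose(contract(T2, 3), 0)

# Rank-5 charge (5-charge orbit): even the 5-fold contraction is non-zero.
assert not np.allclose(contract(np.eye(5), 5), 0)
```

A rank-$k$ symmetric charge thus leaves the $k$-fold contraction non-vanishing while annihilating the $(k+1)$-fold one, matching the orbit constraints above.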
The projection of the brane charge $T^{MN}$ on the singlet central charge $Q$ is (see section 3) \begin{equation} T^{MN} \rightarrow \delta^{MN} Q \ , \end{equation} which means that all the different constraints on the charges that define the five different orbits are all projected on the same $\text{SO}(5)$ epsilon symbol. This means that all these brane configurations preserve the same amount of supersymmetry. The 7D example discussed above can be generalised to other representations and other dimensions. In general we expect that if different brane orbits correspond to invariants that lead to the same central charge constraints when projected on the R-symmetry, these brane configurations all preserve the same amount of supersymmetry. From Table \ref{nonstandardcentralcharge} one can determine all these configurations in general. We hope to report on this in more detail in the near future. \section{Conclusions} In this work we studied several properties of branes in string theory with 32 supercharges from a purely group-theoretical point of view. We contrasted the branes with three or more transverse directions, which we called ``standard'' branes, with the branes which have two or less transverse directions, which we denominated ``non-standard'' branes. More specifically, we called them ``defect'' branes (two transverse directions), domain walls (one transverse direction) and space-filling branes (no transverse direction). We focussed on three distinct brane properties. First, we showed that the half-super\-sym\-me\-tric branes, both standard and non-standard ones, always correspond to the longest weights of the U-duality representation these branes belong to. It turns out that the standard branes always occur in U-duality representations where all weights are longest weights. This explains why for standard branes the dimension of the U-duality representation equals the number of half-supersymmetric branes. 
In contrast, the non-standard branes always occur in U-duality representations with different lengths of weights. This is why the number of half-supersymmetric non-standard branes is always less than the dimension of the U-duality representation to which they belong. Using this simple group-theoretical characterization we calculated the number of half-supersymmetric non-standard branes, reproducing the results of \cite{Bergshoeff:2011qk,Bergshoeff:2012ex,Kleinschmidt:2011vu}. For defect branes the number is given by ${\rm dim }\,G - {\rm rank}\, G$ where $G$ is the U-duality group. The answer for the domain walls and space-filling branes can be found in Table \ref{dominantweightsofnonstandardbranes}. We next studied the BPS properties of the standard and non-standard branes. Using a decomposition of the U-duality representation of the brane charges into representations of the R-symmetry of the central charges we found a second crucial difference between standard and non-standard branes. Whereas for standard branes for each BPS condition there is a unique brane, we find that different non-standard branes may satisfy the same BPS condition. We calculated the degeneracy of these BPS conditions for all non-standard branes in different dimensions. The result can be found in Table \ref{nonstandardcentralcharge}. We finally discussed the standard and non-standard brane orbits. Our results on the multi-charge non-standard-brane orbits are new. We discussed the invariants that characterize these orbits and found that for non-standard branes different invariants may project onto the same central charge showing that different brane configurations may preserve the same supersymmetry. In our discussion the length of the weights of the representations of the U-duality group $G$ played an important role. In particular, the longest weights were associated to the half-supersymmetric branes. 
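As a concrete instance of this counting, consider again the ${\bf \overline{15}}$ of $\text{SL}(5,\mathbb{R})$ analysed in Section 4.2: the symmetric charge $T^{MN}$ decomposes as
\begin{equation*}
\underbrace{\,5\,}_{\text{long weights}\ T^{MM}}+\underbrace{\,10\,}_{\text{short weights}\ T^{MN},\ M<N}\,=\,15\,=\,\text{dim}\,{\bf \overline{15}}\ ,
\end{equation*}
so only $5$ of the $15$ components correspond to half-supersymmetric branes.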
In \cite{Kleinschmidt:2011vu} the same counting of the half-supersymmetric branes was obtained using a different method, based upon the counting of the real roots of the very extended Kac-Moody algebra $E_{11}$. Considering the longest weights of the U-duality representations can indeed be translated to taking the real roots within the very extended Kac-Moody algebra $E_{11}$. A relation between the squared length of the $E_{11}$ roots and the squared length of the weights of the U-duality representations was given in the Appendix of \cite{Riccioni:2008jz}, based on the analysis of \cite{West:2004kb}. The relation consists in writing down the expression of $\alpha^2$ for an $E_{11}$ root and decomposing it in terms of the weights of the subalgebra $\text{SL}(D,\mathbb{R}) \times E_{11-D}$, where $E_{11-D}$ is the U-duality group $G$. One restricts the attention to the form fields, i.e.~to the completely antisymmetric representations of $\text{SL}(D,\mathbb{R})$. All these representations have only one dominant weight, which means that all the components give the same contribution to $\alpha^2$. On the other hand, the representations of $E_{11-D}$ are decomposed in longest weights, next-to-longest weights, etc. The difference between the squared length of the longest weights and the next-to-longest weights is equal to 2, which is the squared length of the roots, as noticed in section 2\,\footnote{Here we have normalised the squared length of the simple roots to 2 for simplicity.}. On the other hand, the roots of $E_{11}$ have squared length $\alpha^2 =2,0,-2,-4,...$. 
This implies that the relation between the lengths of the weights of the U-duality representation and the lengths of the roots of the very extended Kac-Moody algebra $E_{11-D}$ is as follows: \begin{equation}\label{relation} \begin{array}{ccc} \text{weights of}\ G && \text{roots of}\ E_{11-D}\\[.3truecm] \text{longest}&& \alpha^2=2\\ \text{next-to-longest}&&\alpha^2=0\\ \text{next-to-next-to-longest}&&\alpha^2=\text{-}2\\ \vdots&& \vdots\end{array} \end{equation} This relation holds for the highest-dimensional representation. For a given form, smaller representations, whose highest weight coincides with one of the dominant weights of the highest-dimensional representation, also occur. For these fields the value of $\alpha^2$ is given by eq. \eqref{relation} where one has to pick the dominant weight of the highest-dimensional representation that has the same length as the highest weight of the lower-dimensional representation. These fields therefore have $\alpha^2<2$ and are not associated to branes. Knowing that the longest weights of the U-duality representation correspond to the half-supersymmetric branes, it is natural to consider also the interpretation of the shorter weights. The first ones to consider are the next-to-longest weights, corresponding to the $\alpha^2=0$ roots of $E_{11}$. In the case of the 7-branes of IIB, the short weight is the Cartan of $\text{SL}(2,\mathbb{R})$, and a charge in the Cartan corresponds to a bound state of the D7-brane and its S-dual. This can be easily understood in terms of invariants. Given the charge $T^{\alpha\beta}$ in the ${\bf 3}$, the orbits are defined by the value of the invariant \begin{equation} T^{\alpha \beta} T^{\gamma \delta} \epsilon_{\alpha \gamma} \epsilon_{\beta \delta} \quad . \end{equation} This quantity is vanishing for a highest-weight orbit, i.e. 
a single-charge orbit, corresponding to the charge $T^{11}$ or $T^{22}$, while it is non-vanishing for the charge $T^{11} + T^{22}$, corresponding to a bound state, as well as for the charge $T^{12}$, which thus belongs to the same orbit as the bound state. A similar conclusion can be reached for the case of the ${\bf \overline{15}}$ of $\text{SL}(5,\mathbb{R})$ analysed in Section 4.2. Any charge $T^{MN}$, with $M \neq N$, satisfies the constraint \eqref{2chargeorbit15}, and thus corresponds to a 2-charge state. One could reach similar conclusions in all the other cases. It would be interesting to compare such an analysis with the work of \cite{West:2004st}, \cite{Englert:2004it} or with the more recent work of \cite{Cook:2011ir}. One of the results of our investigations is that lower-dimensional string theory contains many non-standard branes, many more than the standard ones. It is natural to ask whether there are any applications of these branes. For a recent application in the context of black holes, see \cite{deBoer:2012ma}. As shown in \cite{Bergshoeff:2011mh,Bergshoeff:2011ee,Bergshoeff:2012ex}, the branes of the ten-dimensional theory satisfy generalised wrapping rules when compactified on tori. In the case of the fundamental string, these wrapping rules are a manifestation of the stringy doubled geometry discussed in \cite{Hull:2004in}. It would be interesting to see whether a similar geometric interpretation can be given for the wrapping rules of the other branes, as well as for the branes, among those listed in this paper, that do not follow from any wrapping rule from the branes of the ten-dimensional theory. It is natural to extend our work to the branes of half-maximal supergravities or the supergravity theories with even less supersymmetry. The branes of the half-maximal supergravities have been obtained in \cite{Bergshoeff:2012jb} using the so-called `light-cone rule' derived in \cite{Bergshoeff:2011zk}.
We expect that this rule can be translated to general group-theoretic properties that can also be determined for the more complicated U-duality symmetries that occur in even less supersymmetric theories, exactly as we did for the case of maximally non-compact groups in this paper. We hope to report on progress in this direction in the near future. \vskip 1.5cm \section*{Acknowledgements} E.A.B. wishes to thank the University of Rome ``La Sapienza'' and INFN Sezione di Roma, where part of this work was done, for their hospitality. F.R. would like to thank A. Marrani for discussions. \vskip 1.5cm
\section{Introduction} \setcounter{equation}{0} \noindent In the era of big data, survey sampling remains one of the most important data collection vehicles for many fields of scientific investigations. Population health research, social and economic studies such as inequality measures and other policy related issues focus on a particular finite population, and the design-based framework with complex survey data is well suited for these inferential problems. Regression analysis and estimating equations with survey data have become a standard tool for statistical inference (Wu and Thompson, 2020). Empirical likelihood (EL), first proposed by Owen (1988) for independent samples, has been adapted successfully for survey data analysis through the pseudo EL approach (Chen and Sitter, 1999; Wu and Rao, 2006). Zhong and Rao (2000) studied EL inferences on the population mean under stratified sampling. Finite population parameters defined through the so-called census estimating equations and the related inferential procedures have been discussed by Chen and Kim (2014) and Zhao et al. (2022) through the sample EL approach as well as the pseudo EL approach (Zhao and Wu, 2019). For parameters defined through U-statistics, jackknife EL can be used to reduce the computational complexities (Chen and Tabri, 2021). Nuisance parameters of finite dimension are typically handled through profiling; see, for instance, Berger and Torres (2016), Oguz-Alper and Berger (2016), Zhao {\em et al.} (2022), among others. Statistical inference in the presence of nuisance functionals, i.e., infinite-dimensional nuisance parameters, is an important problem, especially in social and economic studies. The most commonly used strategy is to use a two-step procedure where a consistent nonparametric estimator for the nuisance functional is constructed first and then used in the second step as a plug-in estimator for inferences on the main parameters of interest. Zhao et al.
(2020) is among the first to discuss the design-based two-step EL method and the generalized method of moments (GMM) method for complex survey data in the presence of nuisance functionals. The two-step survey weighted estimating equations (SWEE) approach discussed by Zhao et al. (2020), however, has two major limitations. First, the maximum EL or GMM estimators are sensitive to the plug-in estimator for the nuisance functional and do not achieve the semiparametric efficiency bound. Second, the two-step EL ratio statistic does not lead to the nonparametric version of the Wilks' theorem even for simple random sampling. Applications of the results require tedious evaluation of the limiting distributions and design-based variance estimation and therefore are very difficult. There has been a well developed statistical and econometric literature with non-survey data on semiparametric efficiency bounds and Wilks' theorem for semiparametric models; see, for instance, Newey (1990), Chen et al. (2008), Cattaneo (2010), Ackerberg et al. (2014), Frazier and Renault (2017), Chernozhukov et al. (2018), Bravo et al. (2020), Matsushita and Otsu (2020), Chernozhukov et al. (2022), among others. Unfortunately, these model-based efficient analytical procedures do not apply directly to complex survey data for design-based inference on finite population parameters. This article presents an augmented two-step survey weighted estimating equations approach with nuisance functionals and complex survey data. The proposed methods are formulated through the generalized empirical likelihood (GEL, Newey and Smith, 2004; Parente and Smith, 2011) and represent a major advance over the usual two-step SWEE approach as discussed in Zhao et al. (2020). 
The GEL methods cover a large class of estimators as special cases, including the EL estimators (Owen 1988; Qin and Lawless 1994; Chen and Sitter 1999; and Zhao et al., 2022), the continuous updating estimators (CU, Hansen et al., 1996), and the exponential tilting estimators (ET, Kitamura and Stutzer, 1997; and Imbens et al., 1998). Under our proposed methods, the second-step augmented estimating functions obey the Neyman orthogonality condition (Chernozhukov et al., 2018) and automatically handle the impact of the first-step plug-in estimator, and the resulting estimators of the main parameters of interest are invariant to the first step method for the plug-in estimator for the nuisance functional. Our methods are bias-corrected for the main parameters of interest in the sense that the nonparametric Wilks' theorem with standard chi-square limiting distributions holds under commonly used survey designs, and the maximum GEL estimators achieve the semiparametric efficiency bound. Our results are established under the design-based framework for complex survey data, and our setting is very general, allowing the estimating equations system to be over-identified, the estimating functions to be nonsmooth, and the plug-in estimator of the nuisance functional to be slower than root-$n$-consistent. In other words, our results allow the nuisance functional to be estimated through any consistent nonparametric procedures in the first step, including the nonparametric series-based method (Newey, 1994b; Chen, 2007). These features have theoretical and practical importance since the estimating equations under study can be semiparametric and encompass a large class of econometric and statistical models. Our proposed methods have immediate applications to inequality measures widely used in social and economic studies. Popular income inequality measures, such as the Lorenz curve, income shares and the Gini index, all involve nuisance functionals.
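To illustrate with income shares: the share of total income held by the poorest $100p\%$ depends on the population $p$-quantile, which is itself unknown and must be estimated. A minimal design-weighted sketch (the function names and the plug-in quantile below are illustrative choices, not the estimators developed in this paper):

```python
import numpy as np

def weighted_quantile(z, d, p):
    """Plug-in estimator of the population p-quantile: the smallest
    observed value whose design-weighted CDF reaches p."""
    order = np.argsort(z)
    z, d = z[order], d[order]
    cdf = np.cumsum(d) / d.sum()
    return z[np.searchsorted(cdf, p)]

def income_share(z, d, p):
    """Estimated share of total income held by units at or below the
    p-quantile; the quantile plays the role of the nuisance functional."""
    q = weighted_quantile(z, d, p)
    return float((d * z * (z <= q)).sum() / (d * z).sum())

# Toy data: incomes 1..10 with equal design weights d_i = 1/pi_i;
# the bottom half holds 15/55 of the total income.
z = np.arange(1.0, 11.0)
d = np.ones(10)
share = income_share(z, d, 0.5)
```

Replacing the plug-in quantile changes the estimated share, which is exactly the sensitivity to the first-step estimator that the augmented approach is designed to remove.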
The measurement and analysis of income inequality have been well documented in the econometric literature; see, for instance, Atkinson (1970), Beach and Davidson (1983), Davidson and Duclos (2000), among others. Income data are usually collected through complex surveys. The design-based approach to estimation and inference for income inequality measures, with the focus on a particular finite population, has been addressed by several authors; see, for instance, Nyg\aa rd and Sandstr\"{o}m (1989), Zheng (2002), Bhattacharya (2007), Goga and Ruiz-Gazen (2014), Zhao et al. (2020), among others. Our proposed augmented two-step SWEE approach provides a powerful inference tool for this important topic in statistics and econometrics. The rest of the paper is organized as follows. In Section \ref{sec.method}, we first describe the basic setup and the conventional two-step method of Zhao et al. (2020), and then present our proposed augmented two-step method with the GEL approach. In Section \ref{sec.thm}, we examine the theoretical properties of the proposed point estimators and general hypothesis test problems. In Section \ref{sec.exam}, we discuss general procedures with illustrative examples of the construction of the augmentation terms. Complex survey designs and asymptotic variance estimation are discussed in Section \ref{sec.design}. Results from simulation studies are reported in Section \ref{sec.sim}, and an application to income share using the New York City Social Indicators Survey data is presented in Section \ref{sec.data}. Some concluding remarks are given in Section \ref{sec.dis}. Technical details and proofs of the main theoretical results are presented in Appendices A and B. \section{Proposed Methods}\label{sec.method} \setcounter{equation}{0} \subsection{Preliminaries} \label{sec.pre} \noindent Consider a survey population $\mathcal {U}_{\mbox{\tiny N}}=\{1,\cdots,N\}$ with $N$ labelled units.
Let $Z\in\mathsf {R}^{d_z}$ be a $d_z$-dimensional vector of variables, and let $Z_i$ be the value of $Z$ associated with the $i$th unit. Denote by $\mathcal {F}_{\mbox{\tiny N}}=(Z_1,\cdots, Z_{\mbox{\tiny N}})$ the full set of vectors for the finite population. Let $\mathcal{S}$ be the set of $n$ sampled units selected from $\mathcal {U}_{\mbox{\tiny N}}$ by a probability survey design. For asymptotic development, we assume there is a sequence of finite populations and a sequence of survey samples which allow $n$ and $N$ to go to infinity; see Fuller (2009) for further details. The sample size $n$ could be a random number under certain sampling designs. Let $\pi_i = {\rm Pr}(i\in \mathcal{S})$ and $\pi_{ij} = {\rm Pr}(i,j \in \mathcal{S})$ be the first and second order inclusion probabilities. A detailed discussion on the probability space induced by the survey design is given in Section \ref{subsec.ident}. Let $\Theta \subseteq \mathsf {R}^{p}$ be the parameter space and assume it is a compact set. Let $\Psi$ be the space for the nuisance functional and assume it is a linear subspace of the space of square integrable functions with respect to $Z$. Consider a vector of $r$ real-valued functions $g(Z,\theta,\varphi)$ with a known form up to the unknown parameters of interest $\theta\in\Theta$ and the nuisance functional $\varphi\in\Psi$. The main assumption on $\mathcal {F}_{\mbox{\tiny N}}$ is that for some $\theta_{\mbox{\tiny N}}\in\Theta$, \begin{equation} \label{pee} U_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})=\dfrac{1}{N}\sum\limits_{i=1}^Ng(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})= 0, \end{equation} where $\varphi_{\mbox{\tiny N}}=\varphi_{\mbox{\tiny N}}(\cdot,\theta_{\mbox{\tiny N}})\in\Psi$ is the true value of the nuisance functional with the given finite population.
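For a case with no nuisance functional, the census estimating equations and their survey-weighted sample analogue can be made concrete. A minimal sketch (the Horvitz--Thompson weighted ratio is a standard textbook special case, not the paper's general setting; function names are illustrative): with $g(Z,\theta)=y-\theta x$ for $Z=(y,x)$, the root of $\sum_{i\in\mathcal{S}}\pi_i^{-1}g(Z_i,\theta)=0$ is the weighted ratio.

```python
import numpy as np

def solve_swee_ratio(y, x, pi):
    """Solve the survey-weighted estimating equation
        sum_{i in S} g(Z_i, theta) / pi_i = 0,  g(Z, theta) = y - theta * x,
    whose root is the Horvitz-Thompson weighted ratio estimator."""
    d = 1.0 / np.asarray(pi)
    return float((d * y).sum() / (d * x).sum())

# Toy sample: two units, each with inclusion probability 0.5.
theta_hat = solve_swee_ratio(np.array([2.0, 4.0]), np.array([1.0, 2.0]),
                             np.array([0.5, 0.5]))  # equals 2.0
```

In the general setting of \eqref{pee} the estimating function also carries the nuisance functional $\varphi$, which is where the plug-in step enters.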
We assume that the census estimating equations (\ref{pee}) may be an over-identified system in the sense that $r \geq p$, and that the estimating functions $g(Z,\theta,\varphi)$ can be non-smooth in $\theta$ and/or $\varphi$. As in Chen et al. (2003) and Zhao et al. (2020), the nuisance functional $\varphi\in\Psi$ is allowed to depend on the parameters $\theta$ and the population data on $Z$. For ease of presentation, we use the notation $(\theta,\varphi)\equiv(\theta,\varphi(\cdot,\theta))$, $(\theta,\varphi_{\mbox{\tiny N}})\equiv(\theta,\varphi_{\mbox{\tiny N}}(\cdot,\theta))$, and $(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})\equiv(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}}(\cdot,\theta_{\mbox{\tiny N}}))$. The empirical likelihood (EL) of Owen (1988) is a popular tool for effectively combining available auxiliary information and parameters of interest through a system of estimating equations (Qin and Lawless, 1994). Assume that we have at hand a suitable ``plug-in'' estimator $\hat{\varphi}$ for $\varphi_{\mbox{\tiny N}}$. Let $(p_1,\cdots,p_n)$ be the discrete probability measure assigned to the sampled units in $\mathcal{S}$. For any $\theta\in\Theta$ and the given $\hat{\varphi}$, the two-step survey-weighted EL ratio statistic is defined as (Zhao et al., 2020) \[ L_{\mbox{\tiny N}}(\theta,\hat\varphi)=\sup\left\{\prod_{i\in \mathcal{S}} (np_i) \; \Big| \; p_i\geq0,\sum_{i\in \mathcal{S}}p_i=1, \sum_{i\in \mathcal{S}}p_i \big[\pi_i^{-1}g(Z_i,\theta,\hat\varphi)\big] = 0\right\} \,. \] Note that the survey weights $\pi_i^{-1}$ are part of the parameter constraints.
Using the standard Lagrange multiplier method, we can rewrite the EL ratio statistic as $L_{\mbox{\tiny N}}(\theta,\lambda,\hat\varphi) = \prod_{i\in \mathcal{S}} \{1+\lambda^{\top}\pi_i^{-1}g(Z_i,\theta,\hat\varphi)\}^{-1}$, where the Lagrange multiplier $\lambda=\lambda(\theta, \hat\varphi)$ is the solution to $ \sum_{i\in \mathcal{S}}\pi_i^{-1}g(Z_i,\theta,\hat\varphi)\{1+\lambda^{\top}\pi_i^{-1}g(Z_i,\theta,\hat\varphi)\}^{-1} = 0 $ with the given $\theta$ and $\hat\varphi$. The two-step maximum EL estimator $\hat{\theta}_{\scriptscriptstyle EL}$ for $\theta_{\mbox{\tiny N}}$ is given by \[ \hat\theta_{\scriptscriptstyle EL}=\arg\min_{\theta\in\Theta}\sup_{\lambda\in\hat{\Lambda}_{\mbox{\tiny N},g}(\theta,\hat\varphi)}l_{\mbox{\tiny N}}(\theta,\lambda,\hat\varphi) \,, \] where $l_{\mbox{\tiny N}}(\theta,\lambda,\varphi)=-\log L_{\mbox{\tiny N}}(\theta,\lambda,\varphi)$ and $\hat{\Lambda}_{\mbox{\tiny N},g}(\theta,\hat\varphi) = \{\lambda: \lambda^{\top}\pi_i^{-1}g(Z_i,\theta,\hat\varphi)> -1, i\in \mathcal{S}\}$ for the given $\theta$ and $\hat\varphi$. The estimator $\hat{\theta}_{\scriptscriptstyle EL}$ is also called the maximum sample EL estimator by Zhao et al. (2020). Suppose that $\Psi$ is a vector space of functions endowed with the sup-norm metric $\|\varphi\|_{\Psi}=\sup_{\theta}\|\varphi(\cdot,\theta)\|_{\infty}=\sup_{\theta}\sup_{{z}}|\varphi({z},\theta)| \,. $ Define $\Theta(\delta)=\{\theta: \, \theta\in\Theta, \, \|\theta-\theta_{\mbox{\tiny N}}\|\leq\delta\}$ and $\Psi(\delta)=\{\varphi: \, \varphi\in\Psi, \; \|\varphi-\varphi_{\mbox{\tiny N}}\|_{\Psi}\leq\delta\}$. Throughout the paper, we use $E(\cdot \mid \mathcal{F}_{\mbox{\tiny N}})$ and ${\rm Var}(\cdot \mid \mathcal{F}_{\mbox{\tiny N}})$ to denote the expectation and variance with respect to the design probability space, which will be discussed in detail in Section \ref{subsec.ident}.
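For a fixed $\theta$ and $\hat\varphi$, the inner maximization reduces to solving the Lagrange multiplier equation displayed above. The Python sketch below solves it by a damped Newton iteration, with synthetic values standing in for $\pi_i^{-1}g(Z_i,\theta,\hat\varphi)$; the function name and damping scheme are our own illustrative choices, not part of any survey software.

```python
import numpy as np

# Minimal sketch: solve sum_i w_i / (1 + lambda' w_i) = 0 for the
# survey-weighted EL, where w_i = pi_i^{-1} g(Z_i, theta, phi_hat).
def el_lagrange(wg, n_iter=100, tol=1e-10):
    """wg: (n, r) array whose i-th row is pi_i^{-1} g_i.  Returns lambda."""
    n, r = wg.shape
    lam = np.zeros(r)
    for _ in range(n_iter):
        denom = 1.0 + wg @ lam                     # 1 + lambda' w_i
        grad = (wg / denom[:, None]).sum(axis=0)   # the estimating equation
        hess = -(wg / denom[:, None] ** 2).T @ wg  # its Jacobian in lambda
        step = np.linalg.solve(hess, grad)
        # damped Newton: keep every 1 + lambda' w_i strictly positive
        t = 1.0
        while np.min(1.0 + wg @ (lam - t * step)) <= 1e-8:
            t *= 0.5
        lam = lam - t * step
        if np.linalg.norm(grad) < tol:
            break
    return lam

# synthetic stand-ins for pi_i^{-1} g(Z_i, theta, phi_hat)
rng = np.random.default_rng(0)
wg = 0.05 * rng.normal(size=(200, 2)) + 0.01
lam = el_lagrange(wg)
```

The damping keeps every $1+\lambda^{\top}\pi_i^{-1}g_i$ strictly positive, which is exactly the constraint defining the set $\hat{\Lambda}_{\mbox{\tiny N},g}(\theta,\hat\varphi)$.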
Let $n_{\scriptscriptstyle B} = E(n \mid \mathcal{F}_{\mbox{\tiny N}})$ be the expected sample size under the sampling design. Let $\|A\|=\{{\rm trace}(A^{\top}A)\}^{1/2}$ and $A^{\otimes 2}=AA^{\top}$ for any matrix or vector $A$. We use $\stackrel{{\cal L}}{\rightarrow}$ to denote convergence in distribution. The asymptotic properties of $\hat{\theta}_{\scriptscriptstyle EL}$ under the design-based framework were investigated by Zhao et al. (2020) under the regularity conditions presented in Appendix A. In particular, Condition A2 states that there exists a vector-valued function $U(\theta,\varphi)$ such that $\sup_{\theta \in \Theta,\varphi\in\Psi(\delta_{\mbox{\tiny N}})}\|U_{\mbox{\tiny N}}(\theta,\varphi) - U(\theta,\varphi)\|=o(1)$ with $\delta_{\mbox{\tiny N}}=o(1)$; Condition A4 indicates that for any $(\theta,\varphi)\in\Theta(\delta)\times\Psi(\delta)$, the ordinary derivative $\Gamma_1(\theta,\varphi)$ in $\theta$ of the limiting function $U(\theta,\varphi)$ exists and satisfies $\Gamma_1(\theta,\varphi)(\bar{\theta}-\theta)=\lim_{t\rightarrow 0}[U(\theta+t(\bar{\theta}-\theta),\varphi(\cdot,\theta+t(\bar{\theta}-\theta))) -U(\theta,\varphi(\cdot,\theta))]/t$ for $\bar{\theta}\in\Theta$; and Condition A5 requires that for any $\theta\in \Theta(\delta)$, the limiting function $U(\theta,\varphi)$ is pathwise differentiable at $\varphi\in \Psi(\delta)$ in the direction $[\bar{\varphi}-\varphi]$ in the sense that the limit $D(\theta,\varphi)[\bar{\varphi}-\varphi]=\lim_{t\rightarrow 0}[U(\theta,\varphi(\cdot,\theta)+t(\bar{\varphi}(\cdot,\theta)-\varphi(\cdot,\theta))) -U(\theta,\varphi(\cdot,\theta))]/t$ exists for $\{\varphi+t(\bar{\varphi}-\varphi):t\in[0,1]\}\subset \Psi$.
Moreover, Condition A6 specifies that the pathwise derivative $D(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})[\hat{\varphi}-\varphi_{\mbox{\tiny N}}]$ is of the following form: \begin{eqnarray} \label{first-step} D(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})[\hat{\varphi}-\varphi_{\mbox{\tiny N}}]=\dfrac{1}{N}\sum_{i\in \mathcal{S}}\pi_i^{-1}\Xi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})+o_p(n_{\scriptscriptstyle B}^{-1/2}), \end{eqnarray} where $\Xi(Z,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$ has finite fourth population moments and $\sum_{i\in \mathcal{S}}\pi_i^{-1}\Xi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$ is asymptotically normally distributed with mean zero and variance-covariance matrix of the order $O(n_{\scriptscriptstyle B}^{-1}N^2)$. The following results were established by Zhao et al. (2020). \begin{proposition} \label{prop1} Under the regularity conditions A1--A8 specified in Appendix A and as $N \rightarrow \infty$, \begin{itemize} \item [(a)] $ n_{\scriptscriptstyle B}^{1/2}(\hat{\theta}_{\scriptscriptstyle EL}-\theta_{\mbox{\tiny N}}) \stackrel{{\cal L}}{\rightarrow} N(0,V_1), $ where $ V_1=\Sigma_1\Gamma_1^{\top}W_1^{-1}\Omega W_1^{-1}\Gamma_1 \Sigma_1, \label{V1} $ $\Sigma_1=(\Gamma_1^{\top}W_1^{-1}\Gamma_1)^{-1}$, $\Gamma_1=\Gamma_1(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$, $ \Omega= (n_{\scriptscriptstyle B}/ N^2){\rm Var}\{\sum_{i\in \mathcal{S}}\pi_i^{-1}[g(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})+\Xi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})]\mid \mathcal{F}_{\mbox{\tiny N}}\} $, and $W_1= (n_{\scriptscriptstyle B}/ N^2)\sum_{i=1}^{N}\pi_i^{-1}g(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})^{\otimes 2}$.
\item [(b)] $-2\log L_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},\hat\varphi)\stackrel{{\cal L}}{\rightarrow}\delta_1\chi_{1}^{2}+\cdots+\delta_r\chi_{r}^{2}$, where the $\chi_{j}^{2}$'s are independent $\chi^2$ random variables with one degree of freedom and the weights $\delta_j$ are the eigenvalues of $W_1^{-1}\Omega$. \end{itemize} \end{proposition} Proposition \ref{prop1} shows that for complex survey data the Wilks' theorem breaks down with the two-step EL approach even under simple random sampling. When using the two-step EL ratio statistic $-2\log L_{\mbox{\tiny N}}(\theta,\hat\varphi)$ to construct confidence regions or conduct hypothesis tests on $\theta_{\mbox{\tiny N}}$, one needs to approximate the distribution of a weighted $\chi^2$ random variable, and finding the weights $\delta_j$ requires estimation of the matrix $W_1$ and the design-based variance-covariance matrix $\Omega$. The last component is especially cumbersome for complex surveys. A bootstrap calibration procedure could be an option, but the method is computationally intensive and theoretical justifications are not available for general survey designs. Moreover, inferences based on the two-step EL approach do not use information on the main parameters and on the nuisance functionals simultaneously and therefore are not efficient, which motivates the research presented in the current paper. For an in-depth discussion on weighted chi-squared statistics, see Rao and Scott (1981). \subsection{An augmented survey weighted estimating equations approach} \label{sec.gel} \subsubsection{Neyman orthogonal score} \label{sec.orthogonal-score} We first investigate the key condition for restoring Wilks' phenomenon in two-step survey weighted EL inferences. We refer to $\pi_i^{-1}\Xi(Z_i,\theta,\varphi)$ defined in (\ref{first-step}) as the first-step survey weighted influence function (FSSWIF). It follows from the arguments of Zhao et al.
(2020) that \begin{equation} \label{swee-taylor} \frac{1}{N}\sum\limits_{i\in \mathcal{S}}\pi_i^{-1}g(Z_i,\theta_{\mbox{\tiny N}},\hat{\varphi})=\frac{1}{N}\sum\limits_{i\in \mathcal{S}}\pi_i^{-1}\{g(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})+\Xi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})\}+o_p(n_{\scriptscriptstyle B}^{-1/2}). \end{equation} Therefore, we conclude from (\ref{swee-taylor}) and the arguments of Zhao et al. (2020) that the two-step survey weighted EL ratio statistic satisfies a nonparametric version of Wilks' theorem if the FSSWIF at $(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$ is zero, or equivalently $\Xi(Z,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})=0$. This motivates us to propose an augmented survey weighted estimating equations approach to mitigate the impact of the plug-in estimator $\hat{\varphi}$ of the nuisance functional in the usual two-step survey weighted estimating equations through a suitable augmentation term for the main estimating functions. Motivated by Chernozhukov et al. (2018) and Chernozhukov et al. (2022), we define the augmented estimating functions as \begin{eqnarray}\label{nee} \psi(Z,\theta,\varphi)=g(Z,\theta,\varphi)+\Xi(Z,\theta,\varphi). \end{eqnarray} With the given finite population $\mathcal {F}_{\mbox{\tiny N}}=(Z_1,\cdots, Z_{\mbox{\tiny N}})$, we define the following augmented population (census) estimating functions \begin{equation} \label{pee-new} \mathbb{U}_{\mbox{\tiny N}}(\theta,\varphi)=\dfrac{1}{N}\sum\limits_{i=1}^N\psi(Z_i,\theta,\varphi). \end{equation} It follows from the original population estimating equations given in (\ref{pee}) and Condition A3(ii) in Appendix A that $\mathbb{U}_{\mbox{\tiny N}}(\theta,\varphi) = 0$ has a unique root at $(\theta, \varphi) = (\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$.
We analogously impose some conditions on the augmented population estimating functions: (i) there exists a vector-valued function $\mathbb{U}(\theta,\varphi)$ such that $\sup_{(\theta,\varphi)\in\Theta\times\Psi(\delta_{\mbox{\tiny N}})}\|\mathbb{U}_{\mbox{\tiny N}}(\theta,\varphi)-\mathbb{U}(\theta,\varphi)\|=o(1)$ for all sequences of positive numbers $\{\delta_{\mbox{\tiny N}}\}$ with $\delta_{\mbox{\tiny N}} = o(1)$; (ii) for any $\theta\in \Theta(\delta)$, the limiting function $\mathbb{U}(\theta,\varphi)$ is pathwise differentiable at $\varphi\in \Psi(\delta)$ in the direction $[\bar{\varphi}-\varphi]$ in the sense that $\mathbb{D}(\theta,\varphi)[\bar{\varphi}-\varphi]=\lim_{t\rightarrow 0}[\mathbb{U}(\theta,\varphi(\cdot,\theta)+t(\bar{\varphi}(\cdot,\theta)-\varphi(\cdot,\theta))) -\mathbb{U}(\theta,\varphi(\cdot,\theta))]/t$ exists for $\{\varphi+t(\bar{\varphi}-\varphi):t\in[0,1]\}\subset \Psi$. The augmented population estimating functions have the orthogonality property in the sense that \begin{equation} \label{orthg-eq} \mathbb{D}(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})[\varphi-\varphi_{\mbox{\tiny N}}]=0,\,\,\,\mbox{for all}\,\,\,\varphi\in\Psi. \end{equation} Given the set of sampled units $\mathcal{S}$ and the set of survey weights $\{\pi_i^{-1},i\in\mathcal{S}\}$, the augmented survey weighted estimating functions are defined as \begin{equation} \label{swee-ad} \hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\varphi)=\frac{1}{N}\sum\limits_{i\in \mathcal{S}}\pi_i^{-1}\psi(Z_i,\theta,\varphi). \end{equation} It is clear that $E\{\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\varphi)\mid\mathcal{F}_{\mbox{\tiny N}}\}=\mathbb{U}_{\mbox{\tiny N}}(\theta,\varphi)$ for any $(\theta,\varphi)\in\Theta\times\Psi$.
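A toy example may clarify what the orthogonality property (\ref{orthg-eq}) buys. In the classical mean-estimation problem with a regression nuisance, the correction term $\Xi(Z,\theta,\varphi)=Y-\varphi(X)$ cancels the first-order dependence of the score on $\varphi$ entirely. The Python sketch below (hypothetical data and nuisance; a miniature of the general construction, not the paper's estimator) perturbs the nuisance and compares how far the raw and augmented scores move.

```python
import numpy as np

# Toy illustration of Neyman orthogonality (hypothetical data throughout).
# Target: theta = E(Y).  The raw score uses a plug-in regression
# phi(X) ~ E(Y|X):   g(Z, theta, phi)  = phi(X) - theta,
# and the classical correction is Xi(Z, theta, phi) = Y - phi(X), so the
# augmented score psi = g + Xi = Y - theta no longer involves phi at all:
# first-order sensitivity to the nuisance is removed by construction.
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(1.0, 1.0, size=n)
y = 2.0 * x + rng.normal(size=n)
phi_true = 2.0 * x            # true regression function E(Y|X)
phi_bad = 1.8 * x             # deliberately perturbed nuisance estimate

def score_g(theta, phi):      # raw (non-orthogonal) estimating function
    return np.mean(phi - theta)

def score_psi(theta, phi):    # augmented (orthogonal) estimating function
    return np.mean((phi - theta) + (y - phi))

theta0 = y.mean()
shift_g = abs(score_g(theta0, phi_bad) - score_g(theta0, phi_true))
shift_psi = abs(score_psi(theta0, phi_bad) - score_psi(theta0, phi_true))
```

Perturbing the nuisance moves the raw score by an $O(1)$ amount but leaves the augmented score essentially unchanged, which is the invariance the general theory formalizes.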
The orthogonality property in (\ref{orthg-eq}) implies that, modulo some regularity conditions, the following invariance property holds: \begin{equation} \label{swee-invariance} \hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},\hat{\varphi})=\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}}) +o_p(n_{\scriptscriptstyle B}^{-1/2}). \end{equation} In this sense, the augmented estimating functions defined in (\ref{nee}) are also referred to as a Neyman orthogonal score (Chernozhukov et al., 2018; Chernozhukov et al., 2022). \subsubsection{Generalized empirical likelihood} For scenarios where $r=p$, a design-based estimator of $\theta_{\mbox{\tiny N}}$ may be obtained by solving $\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\hat{\varphi})=0$. The resulting estimator for $\theta_{\mbox{\tiny N}}$ is bias-corrected due to the invariance property (\ref{swee-invariance}). In other words, the estimation of the nuisance functional has no impact asymptotically on the estimation of the main parameters of interest. Note that the discussions of Binder (1983) and Godambe and Thompson (1986) on survey weighted estimating equations based inferences do not apply to the augmented estimating equations proposed here. For general cases where $r\geq p$, we consider the generalized empirical likelihood (GEL) approach. GEL has a well-known dual representation that facilitates computations and analysis of higher-order properties (Newey and Smith, 2004). Let $\rho(v)$ be a concave function of the scalar $v\in \mathcal{V}$ (an open interval containing zero); let $\rho_j(v)=\partial^j\rho(v)/\partial v^j$ and $\rho_j=\rho_j(0)$ for $j=0,1,2,\ldots$.
Following Newey and Smith (2004), we impose a normalization constraint on $\rho(v)$ such that $\rho_1=\rho_2=-1.$ Define the re-centred GEL objective function as \begin{equation*} \hat{P}_{\mbox{\tiny N}}(\theta,\eta,\varphi) =\sum\limits_{i\in \mathcal{S}}\big\{\rho\big(\eta^{\top}\pi_i^{-1}\psi(Z_i,\theta,\varphi)\big)-\rho_0\big\}, \end{equation*} where $\eta$ is an $r$-vector of ``pseudo parameters'' related to the Lagrange multipliers. Given the first-step plug-in estimator $\hat{\varphi}$ for the nuisance functional $\varphi_{\mbox{\tiny N}}$, a class of augmented design-based two-step GEL estimators for $\theta_{\mbox{\tiny N}}$ can be defined as the solution to the following saddle-point problem \begin{equation} \label{gele} \hat{\theta}_{\scriptscriptstyle GEL}=\arg\inf_{\theta\in\Theta} \sup_{\eta\in\hat{\Lambda}_{\mbox{\tiny N},\psi}(\theta,\hat{\varphi})}\hat{P}_{\mbox{\tiny N}}(\theta, \eta,\hat{\varphi}), \end{equation} where $\hat{\Lambda}_{\mbox{\tiny N},\psi}(\theta,\varphi)=\{\eta:\eta^{\top}\pi_i^{-1} \psi(Z_i,\theta,\varphi)\in\mathcal {V}, i\in\mathcal{S}\}$. For nonsmooth estimating functions, the augmented design-based two-step GEL estimators $\hat{\theta}_{\scriptscriptstyle GEL}$ are no longer required to be defined by (\ref{gele}) but instead satisfy \[ \hat{P}_{\mbox{\tiny N}}(\hat{\theta}_{\scriptscriptstyle GEL},\hat{\eta}_{\scriptscriptstyle GEL},\hat{\varphi}) \leq\inf_{\theta\in\Theta}\sup_{\eta\in\hat{\Lambda}_{\mbox{\tiny N},\psi}(\theta,\hat{\varphi})} \hat{P}_{\mbox{\tiny N}}(\theta,\eta,\hat{\varphi})+o_p(1), \] where $\hat{\eta}_{\scriptscriptstyle GEL}=\eta(\hat\theta_{\scriptscriptstyle GEL},\hat{\varphi})$ and $\eta(\theta,\varphi)=\arg\max_{\eta\in\hat{\Lambda}_{\mbox{\tiny N},\psi}(\theta,\varphi)} \hat{P}_{\mbox{\tiny N}}(\theta,\eta,\varphi)$. Specific choices of the function $\rho(\cdot)$ for the GEL estimators lead to specific types of estimators.
The EL estimator is obtained by taking $\rho(v)=\log(1-v)$ and $\mathcal {V}=(-\infty,1)$; the ET estimator is constructed by setting $\rho(v)=-\exp(v)$. The CU estimator is defined as $$ \hat{\theta}_{\scriptscriptstyle CUE}=\mathop{\arg\min}_{\theta} n_{\scriptscriptstyle B}\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\hat{\varphi})^{\top} \big\{\hat{W}_{\mbox{\tiny N}}(\theta,\hat{\varphi})\big\}^{-1} \hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\hat{\varphi}), $$ where $\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\varphi)$ is defined in (\ref{swee-ad}) and $\hat{W}_{\mbox{\tiny N}}(\theta,\varphi)= n_{\scriptscriptstyle B}N^{-2}\sum_{i\in \mathcal{S}}\pi_i^{-2}\psi(Z_i,\theta,\varphi)^{\otimes 2}$. Using the arguments of Newey and Smith (2004), we can show that $\hat{\theta}_{\scriptscriptstyle CUE}=\hat{\theta}_{\scriptscriptstyle GEL}$ if $\rho(\cdot)$ is quadratic. A dual representation of the augmented design-based two-step GEL estimators is described in detail in the supplementary material. Let $\tilde{\theta}$ be an initial design-consistent estimator for $\theta_{\mbox{\tiny N}}$. Then the augmented design-based two-step GMM estimator of $\theta_{\mbox{\tiny N}}$ is obtained as $$ \hat{\theta}_{\scriptscriptstyle GMM}=\mathop{\arg\min}_{\theta} \hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\hat{\varphi})^{\top} \{\hat{W}_{\mbox{\tiny N}}(\tilde{\theta},\hat{\varphi})\}^{-1} \hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\hat{\varphi}). $$ Detailed discussion on the regular design-based two-step GMM estimator can be found in Zhao et al. (2020).
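As a concrete illustration of the CU objective, consider a toy over-identified problem with $r=2$ moment conditions and a scalar mean $\theta$. The Python sketch below uses hypothetical data, equal design weights in place of $\pi_i^{-1}$, and a crude grid minimization; a real implementation would plug in the augmented functions $\psi$, the actual survey weights, and a proper optimizer.

```python
import numpy as np

# Minimal sketch of the CU objective: r = 2 moment conditions and p = 1
# parameter, psi(Z, theta) = (Y1 - theta, Y2 - theta) for a common mean.
# Data and the equal design weights d_i = 1/pi_i are hypothetical; N is
# proxied by the sum of the design weights.
rng = np.random.default_rng(2)
n = 400
theta_true = 3.0
y = np.column_stack([theta_true + rng.normal(size=n),
                     theta_true + 0.5 * rng.normal(size=n)])
d = np.ones(n)              # design weights 1/pi_i (equal here)
N_hat = d.sum()             # stand-in for the population size N

def cue_objective(theta):
    psi = y - theta                           # (n, r) moment contributions
    wpsi = d[:, None] * psi                   # pi_i^{-1} psi_i
    u = wpsi.sum(axis=0) / N_hat              # survey-weighted U_N(theta)
    w = (n / N_hat**2) * wpsi.T @ wpsi        # W_N(theta): sum of d_i^2 psi psi'
    return n * u @ np.linalg.solve(w, u)

# crude grid minimization over a plausible range for theta
grid = np.linspace(2.0, 4.0, 2001)
theta_cue = grid[int(np.argmin([cue_objective(t) for t in grid]))]
```

Because the second moment condition is less noisy, the CU criterion implicitly down-weights the first, which is how GEL-type estimators gain efficiency in over-identified systems.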
The maximum GEL-based estimators for the empirical probabilities $(p_1,\cdots,p_n)$ are given by \begin{eqnarray}\label{gel-ep} \hat{p}_i=\dfrac{ \rho_1\big(\hat{\eta}_{\scriptscriptstyle GEL}^{\top}\pi_i^{-1}\psi(Z_i,\hat{\theta}_{\scriptscriptstyle GEL}, \hat{\varphi})\big)}{\sum\limits_{j\in\mathcal{S}}\rho_1\big(\hat{\eta}_{\scriptscriptstyle GEL}^{\top}\pi_j^{-1} \psi(Z_j,\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})\big)},~~ i \in\mathcal{S} \,, \end{eqnarray} which satisfy the sample moment condition $\sum_{i\in\mathcal{S}}\hat{p}_i\psi(Z_i,\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})=0$. The invariance property (\ref{swee-invariance}), together with some regularity conditions, implies that $ n_{\scriptscriptstyle B}^{1/2}\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},\hat{\varphi})\stackrel{{\cal L}}{\rightarrow} N(0,\Omega), $ where $ \Omega= (n_{\scriptscriptstyle B}/ N^2){\rm Var}\{\sum_{i\in \mathcal{S}}\pi_i^{-1}[g(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})+\Xi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})]\mid \mathcal{F}_{\mbox{\tiny N}}\} $. Under single-stage PPS sampling with replacement or single-stage PPS sampling without replacement with negligible sampling fractions, we have that $\Omega=W_2$, where $ W_2= (n_{\scriptscriptstyle B}/ N^2)\sum_{i=1}^{N}\pi_i^{-1}\psi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})^{\otimes 2}.$ This, coupled with the fact that $\|\hat{W}_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},\hat{\varphi}) - W_2\|=o_p(1)$, intuitively implies that the Wilks' phenomenon is restored in the augmented design-based two-step GEL inferences. More details can be found in Sections \ref{sec.thm} and \ref{cee-approach}. \section{Main Results} \label{sec.thm} \setcounter{equation}{0} \noindent We now present the main results on the proposed methods.
We first present theorems regarding the consistency and efficiency of the augmented design-based two-step GEL estimators $\hat{\theta}_{\scriptscriptstyle GEL}$. We then discuss the construction of confidence regions and general hypothesis testing problems on $\theta_{\mbox{\tiny N}}$ based on the GEL ratio statistic. The following regularity conditions are used for the establishment of the main results. \begin{itemize} \item [B1.] There exists a vector-valued function $\mathbb{U}(\theta,\varphi)$ such that $\sup_{(\theta,\varphi)\in\Theta\times\Psi(\delta_{\mbox{\tiny N}})}\|\mathbb{U}_{\mbox{\tiny N}}(\theta,\varphi)-\mathbb{U}(\theta,\varphi)\|=o(1)$ for all sequences of positive numbers $\{\delta_{\mbox{\tiny N}}\}$ with $\delta_{\mbox{\tiny N}} = o(1)$, and $\mathbb{U}(\theta,\varphi)$ satisfies the following conditions: \begin{itemize} \item [(i)] The ordinary derivative $\Gamma_2(\theta,\varphi)$ of $\mathbb{U}(\theta,\varphi)$ with respect to $\theta$ exists for $\theta\in\Theta(\delta)$, and is continuous at $\theta=\theta_{\mbox{\tiny N}}$; the matrix $\Gamma_2(\theta,\varphi)$ has full column rank $p$; \item [(ii)] There exists a unique $\theta_0\in\Theta$ such that $\mathbb{U}(\theta_0,\varphi_0)=0$, where $\varphi_0=\varphi_0(\cdot,\theta_0)\in\Psi$; \item [(iii)] For any $\theta\in\Theta$, $\mathbb{U}(\theta,\varphi)$ is continuous (with respect to the metric $\|\cdot\|_{\Psi}$) in $\varphi$ at $\varphi = \varphi_0$. \end{itemize} \item [B2.]
The augmented estimating functions $\psi(Z,\theta,\varphi)$ defined in (\ref{nee}) satisfy the following conditions: \begin{itemize} \item[(i)] $\max_{i\in \mathcal{S}}\sup_{\theta\in\Theta,\varphi\in\Psi}\|\psi(Z_i,\theta,\varphi)\|=o_p(n_{\scriptscriptstyle B}^{1/\alpha})$ for some $\alpha>2$; \item[(ii)] For any sequence $c_{\mbox{\tiny N}}=O(N^{-\kappa})$ with $\kappa\in(1/4,1/2]$, $$\sup_{(\theta,\varphi)\in\Theta\times\Psi} \dfrac{1}{N}\sum_{i=1}^{N}\|\psi(Z_i,\theta,\varphi)-\psi(Z_i,\theta+c_{\mbox{\tiny N}},\varphi+c_{\mbox{\tiny N}})\|= O(|c_{\mbox{\tiny N}}|) \, .$$ \end{itemize} \item[B3.] (i) For any sequence of positive numbers $\{\delta_{\mbox{\tiny N}}\}$ with $\delta_{\mbox{\tiny N}}=o(1)$, $$\sup_{(\theta,\varphi),(\theta',\varphi') \in \Theta(\delta_{\mbox{\tiny N}})\times\Psi(\delta_{\mbox{\tiny N}})} \| \mathbb{U}_{\mbox{\tiny N}}(\theta,\varphi) - \mathbb{U}(\theta,\varphi)-[\mathbb{U}_{\mbox{\tiny N}}(\theta',\varphi') - \mathbb{U}(\theta',\varphi')] \| = o(N^{-1/2});$$ (ii) For all $\delta>0$ and some positive constant $c$, $$ \sup_{(\theta,\varphi),(\theta',\varphi') \in \Theta(\delta)\times\Psi(\delta)} {\rm Var}\Big\{[\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\varphi)-\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta',\varphi')]\mid \mathcal {F}_{\mbox{\tiny N}}\Big\}\leq cn_{\scriptscriptstyle B}^{-1}|\delta| \,. $$ \item [B4.] For all $(\theta, \varphi), (\theta', \varphi')\in\Theta(\delta_{\mbox{\tiny N}})\times\Psi(\delta_{\mbox{\tiny N}})$ with $\delta_{\mbox{\tiny N}}=o(1)$, $\|\mathbb{U}(\theta,\varphi)-\mathbb{U}(\theta,\varphi')\|\leq c\|\varphi-\varphi'\|_{\Psi}^2$ for some constant $c\ge0$. \end{itemize} Condition B1 states that the limiting function of the augmented population estimating equations $\mathbb{U}_{\mbox{\tiny N}}(\theta,\varphi)$ defined in (\ref{pee-new}) exists with certain smoothness properties. 
Condition B2(i) is commonly adopted in the literature on EL inference with estimating equations, while condition B2(ii) gives a bound on the variation of the estimating functions. Condition B3(i) restricts the class of moment functions under study by requiring that the empirical process $\{\mathbb{U}_{\mbox{\tiny N}}(\theta,\varphi) - \mathbb{U}(\theta,\varphi):\theta\in\Theta,\varphi\in\Psi\}$ is asymptotically stochastically equicontinuous, which can easily be verified under the model-based framework. Condition B3(ii) is on the correlation between two Horvitz-Thompson estimators at two close points of $(\theta,\varphi)$. As discussed in the proof of Theorem \ref{thm2} below, Condition B4 is key to guaranteeing the invariance property (\ref{swee-invariance}). Moreover, if the orthogonality equation (\ref{orthg-eq}) holds, then $\|\mathbb{U}(\theta_{\mbox{\tiny N}},\varphi)-\mathbb{U}(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})-\mathbb{D}(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})[\varphi-\varphi_{\mbox{\tiny N}}]\|=\|\mathbb{U}(\theta_{\mbox{\tiny N}},\varphi)-\mathbb{U}(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})\|\leq c\|\varphi-\varphi_{\mbox{\tiny N}}\|_{\Psi}^2$, which is a commonly used condition in the literature on two-step semiparametric inference; see, e.g., Chen et al. (2003) and Chen (2007). Condition B4, together with condition A6 presented in Appendix A, implies that it suffices for the first-step plug-in estimator $\hat{\varphi}$ to attain a rate of convergence faster than $n_{\scriptscriptstyle B}^{-1/4}$. \subsection{Consistency and efficiency} \noindent We first study the design consistency and asymptotic normality of the proposed augmented design-based two-step GEL estimators. The main results are presented in the following two theorems. Regularity conditions A1--A8 were used by Zhao et al. (2020) and are listed in Appendix A. \begin{theorem} \label{thm1} Suppose that $\hat{\varphi}=\varphi_{\mbox{\tiny N}}+o_p(1)$, and that conditions A1, A7--A8, B1--B2 hold.
Then the proposed augmented design-based two-step GEL estimator is design-consistent for $\theta_{\mbox{\tiny N}}$ in the sense that $ \lim_{N \rightarrow \infty}{\rm Pr}\{\|\hat{\theta}_{\scriptscriptstyle GEL} - \theta_{\mbox{\tiny N}} \| > \epsilon \mid \mathcal {F}_{\mbox{\tiny N}}\} = 0$ for any $\epsilon >0$. \end{theorem} \begin{theorem} \label{thm2} Suppose that conditions A1, A6--A8, B1 and B3--B4 hold, and that $\hat{\theta}_{\scriptscriptstyle GEL}=\theta_{\mbox{\tiny N}}+o_p(1)$. Then, as $N \rightarrow \infty$, $$ n_{\scriptscriptstyle B}^{1/2}(\hat{\theta}_{\scriptscriptstyle GEL}-\theta_{\mbox{\tiny N}})\stackrel{{\cal L}}{\rightarrow}N(0,V_2), $$ where $ V_2=\Sigma_2\Gamma_2^{\top}W_2^{-1}\Omega W_2^{-1}\Gamma_2 \Sigma_2 \,, $ $\Sigma_2=(\Gamma_2^{\top}W_2^{-1}\Gamma_2)^{-1}$, $\Gamma_2=\Gamma_2(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$, $ W_2= (n_{\scriptscriptstyle B}/ N^2)\sum_{i=1}^{N}\pi_i^{-1}\psi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})^{\otimes 2},$ with $\Omega$ defined in Proposition \ref{prop1}. \end{theorem} \begin{corollary} \label{cor1} Suppose that the assumptions for Theorem \ref{thm2} hold. Under single-stage PPS sampling with replacement or single-stage PPS sampling without replacement with negligible sampling fractions, the asymptotic variance-covariance matrix $V_2=\Sigma_2$. \end{corollary} \begin{remark} One important observation from the results presented in Theorem \ref{thm2} is that the limiting distribution of our proposed estimator of $\theta_{\mbox{\tiny N}}$ based on the augmented estimating equations is invariant to the first-step estimator of the nuisance functional. This leads to the earlier statement that the proposed augmented two-step GEL estimators are less sensitive to the estimation of nuisance functionals. By combining the results from Corollary \ref{cor1} with the arguments of Ackerberg et al.
(2014, Lemma 1), we conclude that the proposed estimators also achieve the semiparametric efficiency bound under the survey designs specified in Corollary \ref{cor1}. As discussed further in Section 5, the estimator $\hat{\theta}_{\scriptscriptstyle GEL}$ together with its standard errors can be used as bases for statistical inferences. Note that if the nuisance functional $\varphi_{\mbox{\tiny N}}$ does not depend on the parameter of interest $\theta_{\mbox{\tiny N}}$ and the estimating equations (\ref{pee}) are just-identified (i.e., $r=p$), then the proposed augmented two-step GEL estimators have the same limit distribution as the two-step EL estimator proposed in Zhao et al. (2020). However, Zhao et al.'s (2020) estimator does not satisfy an invariance property similar to that stated in (\ref{swee-invariance}). \end{remark} \subsection{Hypothesis testing} \noindent We next consider the GEL ratio based confidence regions and hypothesis tests on $\theta_{\mbox{\tiny N}}$. The GEL ratio statistic for $\theta_{\mbox{\tiny N}}$ is defined as \begin{equation*} \label{sel-test1} {\rm T}_{\mbox{\tiny N}}(\theta) = -2\{\hat{P}_{\mbox{\tiny N}}(\hat{\theta}_{\scriptscriptstyle GEL},\hat{\eta}_{\scriptscriptstyle GEL},\hat{\varphi})- \hat{P}_{\mbox{\tiny N}}(\theta,\eta_{\theta},\hat{\varphi})\} \end{equation*} for the given $\theta$, where $\hat{\eta}_{\scriptscriptstyle GEL}=\eta(\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})$ and $\eta_{\theta}=\eta(\theta,\hat{\varphi})$. The asymptotic distribution of ${\rm T}_{\mbox{\tiny N}}(\theta)$ at $\theta = \theta_{\mbox{\tiny N}}$ is given in the following theorem. \begin{theorem}\label{thm3} Suppose that the assumptions for Theorem \ref{thm2} hold.
Then, as $N \rightarrow \infty$, \begin{eqnarray*} {\rm T}_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}}) \; \stackrel{{\cal L}}{\rightarrow} \; Q^{\top} \Delta Q, \end{eqnarray*} where $Q\sim N(0, I_{r})$, $ \Delta =\Omega^{1/2} W_2^{-1} \Gamma_2 (\Gamma_2^{\top}W_2^{-1}\Gamma_2)^{-1} \Gamma_2^{\top}W_2^{-1}\Omega^{1/2} $, and $I_{r}$ is the $r\times r$ identity matrix. \end{theorem} \begin{corollary} \label{cor2} Suppose that the assumptions for Theorem \ref{thm3} hold. Under single-stage PPS sampling with replacement or single-stage PPS sampling without replacement with negligible sampling fractions, we have $ {\rm T}_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}}) \; \stackrel{{\cal L}}{\rightarrow} \; \chi_p^2 $ as $N \rightarrow \infty$. \end{corollary} \begin{remark} Theorem \ref{thm3} indicates that, under general unequal probability sampling designs, the proposed augmented design-based two-step GEL ratio statistic converges in distribution to a weighted chi-square random variable, with the weights independent of the first-step estimation of the nuisance functional. More importantly, Corollary \ref{cor2} shows that the proposed GEL ratio statistics satisfy a nonparametric version of the Wilks' theorem under commonly used single-stage unequal probability sampling designs. Note that the Wilks' theorem breaks down in the two-step survey weighted EL approach proposed in Zhao et al. (2020) even under simple random sampling. \end{remark} The standard Wilks' phenomenon with the proposed two-step survey weighted GEL provides a convenient way to construct confidence regions for $\theta_{\mbox{\tiny N}}$ defined via the population estimating equations (\ref{pee}) or test the hypothesis $H_0$: $\theta_{\mbox{\tiny N}} = \theta_0$ with a pre-specified $\theta_0$.
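Under a general design, the critical value of ${\rm T}_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}})$ in Theorem \ref{thm3} can be calibrated by Monte Carlo once estimates of $\Omega$, $W_2$ and $\Gamma_2$ are available. The Python sketch below uses hypothetical matrices in place of such estimates; it also checks numerically that when $\Omega=W_2$, as under the designs of Corollary \ref{cor2}, the matrix $\Delta$ is idempotent with trace $p$, so that $Q^{\top}\Delta Q\sim\chi^2_p$.

```python
import numpy as np

# Sketch: Monte Carlo calibration of the limiting law Q' Delta Q of
# Theorem 3.  The matrices Omega, W2, Gamma2 below are hypothetical toy
# values standing in for design-based estimates.
rng = np.random.default_rng(4)
r, p = 3, 2
A = rng.normal(size=(r, r)); Omega = A @ A.T     # toy Omega (pos. def.)
B = rng.normal(size=(r, r)); W2 = B @ B.T        # toy W2 (pos. def.)
Gamma2 = rng.normal(size=(r, p))                 # toy Jacobian, rank p

def delta_matrix(Omega, W2, Gamma2):
    """Delta = Omega^{1/2} W2^{-1} G (G' W2^{-1} G)^{-1} G' W2^{-1} Omega^{1/2};
    a Cholesky factor serves as the square root (distributionally equivalent)."""
    L = np.linalg.cholesky(Omega)
    Winv = np.linalg.inv(W2)
    M = Winv @ Gamma2 @ np.linalg.inv(Gamma2.T @ Winv @ Gamma2) @ Gamma2.T @ Winv
    return L.T @ M @ L

Delta = delta_matrix(Omega, W2, Gamma2)
Q = rng.normal(size=(100_000, r))
draws = np.einsum('ij,jk,ik->i', Q, Delta, Q)    # samples of Q' Delta Q
crit95 = np.quantile(draws, 0.95)                # simulated critical value
```

When $\Omega \neq W_2$ the simulated quantile replaces $\chi^2_{p,\alpha}$; when $\Omega = W_2$ the two calibrations coincide asymptotically.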
The $(1-\alpha)100\%$ confidence region for $\theta_{\mbox{\tiny N}}$ can be constructed as \[ \mathcal {C}_\alpha = \Big\{\theta \mid -2\Big[\hat{P}_{\mbox{\tiny N}}(\hat{\theta}_{\scriptscriptstyle GEL},\hat{\eta}_{\scriptscriptstyle GEL},\hat{\varphi})- \hat{P}_{\mbox{\tiny N}}\Big(\theta,\eta(\theta,\hat{\varphi}),\hat{\varphi}\Big)\Big]\leq \chi^2_{p,\alpha}\Big\} \,, \] where $\chi^2_{p,\alpha}$ satisfies ${\rm Pr}(\chi_p^2\geq \chi^2_{p,\alpha})=\alpha$. The empirical results from simulation studies presented in Section \ref{sec.sim} provide strong evidence that the standard Wilks' theorem is also a good approximation for stratified sampling and cluster sampling. Auxiliary population information is often available from different sources. Incorporating such information into survey data analysis often leads to efficiency gains in estimation and hypothesis testing problems. Auxiliary information can often be formulated through a set of population estimating equations as \begin{equation} \label{pee-side} \mathfrak{U}_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}})=\dfrac{1}{N}\sum\limits_{i=1}^N q(Z_i,\theta_{\mbox{\tiny N}})= 0, \end{equation} where $q(Z,\theta)$ is a known $s$-vector of estimating functions. Under the proposed GEL framework, the side information in the form of (\ref{pee-side}) can easily be incorporated into the inferential problems. A general hypothesis test problem on the unknown parameters $\theta_{\mbox{\tiny N}}$ can often be posed as $H_0$: $R(\theta_{\mbox{\tiny N}}) = 0$, where $R(\theta)$ is a $k\times 1$ vector of functions, with $k\leq p$. We are interested in developing tests for the general parametric hypotheses of the form $R(\theta_{\mbox{\tiny N}}) = 0$ under the proposed GEL inferential framework. Let $\Theta^{\scriptscriptstyle R} = \big\{ \theta \mid \theta \in \Theta \; {\rm and} \; R(\theta) = 0\big\}$ be the restricted parameter space under $H_0$.
Write the combined estimating functions as $\phi(Z,\theta,\varphi)=(\psi(Z,\theta,\varphi)^{\top},q(Z,\theta)^{\top})^{\top}$. We define the ``restricted'' maximum GEL estimator as \begin{equation} \label{rgele} \hat{\theta}_{\scriptscriptstyle GEL}^{\scriptscriptstyle R}=\arg\inf_{\theta\in\Theta^{\scriptscriptstyle R}}\sup_{\nu\in\hat{\Lambda}_{\mbox{\tiny N},\phi}(\theta,\hat{\varphi})} \hat{P}_{\mbox{\tiny N}}^{\scriptscriptstyle R}(\theta,\nu,\hat{\varphi}), \end{equation} where $\hat{P}_{\mbox{\tiny N}}^{\scriptscriptstyle R}(\theta,\nu,\varphi)=\sum_{i\in\mathcal{S}}(\rho(\nu^{\top}\pi_i^{-1}\phi(Z_i,\theta,\varphi))-\rho_0)$, $\nu$ is an $(r+s)$-vector of auxiliary parameters and $\hat{\Lambda}_{\mbox{\tiny N},\phi}(\theta,\varphi) = \{\nu:\nu^{\top}\pi_i^{-1}\phi(Z_i,\theta,\varphi)\in\mathcal {V}, i\in\mathcal{S}\}$. The GEL ratio statistic for testing $H_0$: $R(\theta_{\mbox{\tiny N}}) = 0$ is given by \begin{equation} \label{sel-test2} {\rm T}_{\mbox{\tiny N}}^{\scriptscriptstyle R}(\theta) = -2\{\hat{P}_{\mbox{\tiny N}}(\hat{\theta}_{\scriptscriptstyle GEL},\hat{\nu}_{\scriptscriptstyle GEL},\hat{\varphi})- \hat{P}_{\mbox{\tiny N}}^{\scriptscriptstyle R}(\hat{\theta}_{\scriptscriptstyle GEL}^{\scriptscriptstyle R},\hat{\nu}_{\scriptscriptstyle GEL}^{\scriptscriptstyle R},\hat{\varphi})\}, \end{equation} where $\hat{\nu}_{\scriptscriptstyle GEL}^{\scriptscriptstyle R}=\nu^{\scriptscriptstyle R}(\hat{\theta}_{\scriptscriptstyle GEL}^{\scriptscriptstyle R},\hat{\varphi})$ and $\nu^{\scriptscriptstyle R}(\theta,\varphi) =\arg\max_{\nu\in\hat{\Lambda}_{\mbox{\tiny N},\phi}(\theta,\varphi)} \hat{P}_{\mbox{\tiny N}}^{\scriptscriptstyle R}(\theta,\nu,\varphi)$. Let \ \ $\mathscr{U}_{\mbox{\tiny N}}(\theta,\varphi) =\sum_{i=1}^{N}\phi(Z_i,\theta,\varphi)/N$, \ \ $\hat{\mathfrak{U}}_{\mbox{\tiny N}}(\theta)=\sum_{i\in\mathcal{S}}\pi_i^{-1} q(Z_i,\theta)/N$, \ \ and \ \ $\Phi(\theta)=\partial R(\theta)/\partial\theta^{\top}$.
The following additional regularity conditions are used to investigate the asymptotic properties of the estimator $\hat{\theta}_{\scriptscriptstyle GEL}^{\scriptscriptstyle R}$ defined in (\ref{rgele}) and the test statistic ${\rm T}_{\mbox{\tiny N}}^{\scriptscriptstyle R}(\theta)$ defined in (\ref{sel-test2}). \begin{itemize} \item [B5.] The finite population parameter vector $\theta_{\mbox{\tiny N}} \in \Theta$ is the unique solution to $\mathscr{U}_{\mbox{\tiny N}}(\theta,\varphi_{\mbox{\tiny N}})$ $=0$. \item [B6.] (i) There exists a function $\mathfrak{U}(\theta)$ such that $\sup_{\theta\in\Theta}\|\mathfrak{U}_{\mbox{\tiny N}}(\theta)-\mathfrak{U}(\theta)\|=o(1)$; (ii) for all $\theta\in\Theta$, the ordinary derivative of $\mathfrak{U}(\theta)$ with respect to $\theta$, denoted as $H(\theta)$, exists and has full column rank $p$. \item[B7.] (i) $\max_{i\in \mathcal{S}}\sup_{\theta\in\Theta}\|q(Z_i,\theta)\|=o_p(n_{\scriptscriptstyle B}^{1/\alpha})$, where $\alpha$ is as defined in condition B2(i); (ii) For any sequence $c_{\mbox{\tiny N}}=O(N^{-\kappa})$ with $\kappa\in(1/4,1/2]$, $$\sup_{\theta\in\Theta} \dfrac{1}{N}\sum_{i=1}^{N}\|q(Z_i,\theta)-q(Z_i,\theta+c_{\mbox{\tiny N}})\|= O(|c_{\mbox{\tiny N}}|) \, ;$$ (iii) For any sequence of positive numbers $\{\delta_{\mbox{\tiny N}}\}$ with $\delta_{\mbox{\tiny N}}=o(1)$, $$\sup_{\theta, \theta' \in \Theta(\delta_{\mbox{\tiny N}})} \|\mathfrak{U}_{\mbox{\tiny N}}(\theta) - \mathfrak{U}(\theta)-[\mathfrak{U}_{\mbox{\tiny N}}(\theta') - \mathfrak{U}(\theta')] \| = o(N^{-1/2});$$ (iv) For all $\delta>0$ and some positive constant $c$, $$ \sup_{\theta, \theta' \in \Theta(\delta)} {\rm Var}\Big\{[\hat{\mathfrak{U}}_{\mbox{\tiny N}}(\theta)-\hat{\mathfrak{U}}_{\mbox{\tiny N}}(\theta')]\mid \mathcal {F}_{\mbox{\tiny N}}\Big\}\leq cn_{\scriptscriptstyle B}^{-1}|\delta|. $$ \end{itemize} \begin{theorem} \label{thm4} Suppose that the assumptions for Theorem \ref{thm3} and the conditions B5-B7 hold. 
Then, as $N \rightarrow \infty$, $$ n_{\scriptscriptstyle B}^{1/2}\big(\hat{\theta}_{\scriptscriptstyle GEL}^{\scriptscriptstyle R}-\theta_{\mbox{\tiny N}}\big) \;\; \stackrel{{\cal L}}{\rightarrow} \;\; N(0,V^{\scriptscriptstyle R}), $$ where $ V^{\scriptscriptstyle R}=\mathscr{C}^{\scriptscriptstyle R}\Pi^{\top} \mathscr{W}^{-1} \Omega^{\scriptscriptstyle R} \mathscr{W}^{-1} \Pi \mathscr{C}^{\scriptscriptstyle R}, $ with $\mathscr{C}^{\scriptscriptstyle R}=\Sigma^{\scriptscriptstyle R}-\Sigma^{\scriptscriptstyle R} \Phi^{\top}(\Phi\Sigma^{\scriptscriptstyle R} \Phi^{\top})^{-1}\Phi\Sigma^{\scriptscriptstyle R}$, $\Sigma^{\scriptscriptstyle R}=[\Pi^{\top}\mathscr{W}^{-1}\Pi]^{-1}$, $\Pi=(\Gamma_2^{\top},H^{\top})^{\top}$, $\Gamma_2$ is given in Theorem \ref{thm2}, $H=H(\theta_{\mbox{\tiny N}})$, $\Phi=\Phi(\theta_{\mbox{\tiny N}})$, $\mathscr{W}= (n_{\scriptscriptstyle B}/ N^2)\sum_{i=1}^{N}\pi_i^{-1}\phi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})^{\otimes 2},$ and $ \Omega^{\scriptscriptstyle R}= n_{\scriptscriptstyle B}N^{-2}{\rm Var}\{\sum_{i\in \mathcal{S}}\pi_i^{-1}\phi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})\mid \mathcal{F}_{\mbox{\tiny N}}\}. $ \end{theorem} \begin{theorem} \label{thm5} Suppose that the assumptions for Theorem \ref{thm4} hold. 
Then, as $N \rightarrow \infty$, \begin{eqnarray*} {\rm T}_{\mbox{\tiny N}}^{\scriptscriptstyle R}(\theta_{\mbox{\tiny N}}) \;\; \stackrel{{\cal L}}{\rightarrow} \;\; \mathcal{Q}^{\top} \Delta^{\scriptscriptstyle R} \mathcal{Q}, \end{eqnarray*} where $\mathcal{Q}\sim N(0, I_{r+s})$, $ \Delta^{\scriptscriptstyle R} = (\Omega^{\scriptscriptstyle R})^{1/2}[\mathscr{P}^{\scriptscriptstyle R}-\mathscr{S}_{\psi} \mathscr{P}\mathscr{S}_{\psi}^{\top}](\Omega^{\scriptscriptstyle R})^{1/2}$, $\mathscr{S}_{\psi}=(I_r,0)^{\top}$ is an $(r+s)\times r$ matrix, $\mathscr{P}^{\scriptscriptstyle R}=\mathscr{W}^{-1}-\mathscr{W}^{-1}\Pi \mathscr{C}^{\scriptscriptstyle R}\Pi^{\top}\mathscr{W}^{-1}$, and $\mathscr{P}=W_2^{-1}-W_2^{-1}\Gamma_2\Sigma_2\Gamma_2^{\top}W_2^{-1}$. \end{theorem} \begin{corollary} \label{cor3} Suppose that the assumptions for Theorem \ref{thm5} hold. Under single-stage PPS sampling with replacement or single-stage PPS sampling without replacement with negligible sampling fractions, we have $V^{\scriptscriptstyle R}=\mathscr{C}^{\scriptscriptstyle R}$ and $ {\rm T}_{\mbox{\tiny N}}^{\scriptscriptstyle R}(\theta_{\mbox{\tiny N}}) \;\; \stackrel{{\cal L}}{\rightarrow} \;\; \chi_{s+k}^2 $ as $N \rightarrow \infty$. \end{corollary} The standard chi-square limiting distribution presented in Corollary \ref{cor3} under the commonly used survey designs provides a convenient tool for conducting general hypothesis tests and the construction of confidence regions for a subvector, say $\theta_{1\mbox{\tiny N}}$, of $\theta_{\mbox{\tiny N}}$ consisting of $k$ elements. Let $\theta = (\theta_1^{\top}, \theta_2^{\top} )^{\top}$ be the partition of $\theta_{\mbox{\tiny N}}$ with the first $k$ components corresponding to $\theta_{1\mbox{\tiny N}}$. 
A $(1-\alpha)$-level confidence region for $\theta_{1\mbox{\tiny N}}$ using the proposed GEL ratio statistic is given by \[ \mathcal {C}_\alpha^{\scriptscriptstyle R} = \bigg\{\theta_1 \mid -2\Big[\hat{P}_{\mbox{\tiny N}}(\hat{\theta}_{\scriptscriptstyle GEL},\hat{\eta}_{\scriptscriptstyle GEL},\hat{\varphi}) - \hat{P}_{\mbox{\tiny N}}\big(\tilde{\theta}(\theta_1),\eta(\tilde{\theta}(\theta_1),\hat{\varphi}),\hat{\varphi}\big)\Big] \leq \chi^2_{k,\alpha}\bigg\} \,, \] where $\tilde{\theta}(\theta_1) = (\theta_1^{\top},\hat{\theta}_2(\theta_1)^{\top})^{\top}$ and $\hat{\theta}_2(\theta_1)=\arg\inf_{\theta_2} \sup_{\eta\in\hat{\Lambda}_{\mbox{\tiny N},\psi}((\theta_1,\theta_2),\hat{\varphi})}\hat{P}_{\mbox{\tiny N}}((\theta_1,\theta_2), \eta,\hat{\varphi})$ for the given $\theta_1$. \section{Derivations of Augmentation Terms} \label{sec.exam} \setcounter{equation}{0} \noindent The augmentation term $\Xi$ specified in (\ref{nee}) plays the most crucial role in the proposed methods and needs to be derived for the particular nuisance functionals involved. In this section, we first discuss general procedures for identifying $\Xi$, and then illustrate the methods using three examples related to income inequality measures widely used in economic studies. \subsection{The general identification condition} \label{subsec.ident} \noindent We first discuss the general identification condition of $\Xi$ using a superpopulation-based approach. To facilitate the technical arguments, we introduce the notion of probability spaces associated with the sampling design and the superpopulation model. We assume that the components of $\mathcal {F}_{\mbox{\tiny N}}=(Z_1,\cdots, Z_{\mbox{\tiny N}})\in\mathsf {R}^{d_z\times N}$ are an independent and identically distributed sample from a superpopulation model over a probability space $(\Omega, \mathscr{A}, \mathbb{P}_m)$, with common distribution function $F_0$.
The values $Z_i$ can be viewed as a mapping $\Omega\mapsto\mathsf {R}^{d_z}$, and we can write $Z_i$ as $Z_i(\omega)$ with $\omega\in\Omega$. Denote a $d_x$-dimensional component of $Z_i$ as $X_i $ with $X_i\in \mathsf {R}_{+}^{d_x}$ and $1\leq d_x\leq d_z$. Suppose that $\mathbf{X}^{N}=(X_1,\cdots, X_{\mbox{\tiny N}})\in\mathsf {R}_{+}^{d_x\times N}$ contains all the variables used for the sampling design. With the given sampling design, denote by $\mathbf{S}_{\mbox{\tiny N}}=\{\mathcal{S}:\mathcal{S}\subset\mathcal {U}_{\mbox{\tiny N}}\}$ the set of all possible samples. The smallest $\sigma$-algebra containing all the sets of $\mathbf{S}_{\mbox{\tiny N}}$ is denoted as $\mathsf{C}_{\mbox{\tiny N}}$ and is called the sigma-algebra generated by $\mathbf{S}_{\mbox{\tiny N}}$. Following Rubin-Bleuer and Schiopu-Kratina (2005), the sampling design is characterized by a function $P$: $\mathsf{C}_{\mbox{\tiny N}}\times\mathsf {R}_{+}^{d_x\times N}\rightarrow[0,1]$ such that (i) for all $\mathcal{S}$ in $\mathbf{S}_{\mbox{\tiny N}}$, $P(\mathcal{S},\cdot)$ is Borel-measurable in $\mathsf {R}_{+}^{d_x}$; (ii) for $\mathbf{X}^{N}\in\mathsf {R}_{+}^{d_x\times N}$, $P(\cdot,\mathbf{X}^{N})$ is a probability measure on $\mathsf{C}_{\mbox{\tiny N}}$. For each $\omega\in\Omega$ and $B\subset\mathbf{S}_{\mbox{\tiny N}}$, define $\mathbb{P}_d(B,\omega)=\sum_{s\in B}P(s, \mathbf{X}^{N}(\omega))$. We call the triple $(\mathbf{S}_{\mbox{\tiny N}},\mathsf{C}_{\mbox{\tiny N}}, \mathbb{P}_d)$ a design probability space. 
The product probability space that includes the super-population and the design space is defined as $(\mathbf{S}_{\mbox{\tiny N}}\times\Omega,\mathsf{C}_{\mbox{\tiny N}}\times\mathscr{A}, \mathbb{P}_{d,m})$, in which the probability measure $\mathbb{P}_{d,m}$ defined on rectangles $\{s\}\times A\in\mathsf{C}_{\mbox{\tiny N}}\times \mathscr{A}$ has the value $$\mathbb{P}_{d,m}(\{s\}\times A)=\int_{A}P(s,\mathbf{X}^{N}(\omega)){\rm d}\mathbb{P}_m(\omega)=\int_{A}\mathbb{P}_d(\{s\},\mathbf{X}^{N}(\omega)){\rm d}\mathbb{P}_m(\omega).$$ In what follows, we use $\mathbb{E}_m\{\cdot\}$ to denote the expectation with respect to the probability space $(\Omega, \mathscr{A}, \mathbb{P}_m)$ and $\mathbb{E}_{d,m}\{\cdot\}$ to represent the expectation with respect to the product probability space $(\mathbf{S}_{\mbox{\tiny N}}\times\Omega,\mathsf{C}_{\mbox{\tiny N}}\times\mathscr{A}, \mathbb{P}_{d,m})$. For any $(\theta, \varphi)\in\Theta\times\Psi$, we have \begin{eqnarray*} \mathbb{E}_{d,m}\Big\{\dfrac{1}{N}\sum_{i\in\mathcal{S}}\pi_{i}^{-1}g(Z_i,\theta,\varphi)\Big\}=\mathbb{E}_m\{g(Z,\theta,\varphi)\},\\ \mathbb{E}_{d,m}\Big\{\dfrac{1}{N}\sum_{i\in\mathcal{S}}\pi_{i}^{-1}\Xi(Z_i,\theta,\varphi)\Big\}=\mathbb{E}_m\{\Xi(Z,\theta,\varphi)\}. \end{eqnarray*} Let $\theta_0\in\Theta$ and $\varphi_0\in\Psi$ be the superpopulation versions of the parameter of interest $\theta_{\mbox{\tiny N}}$ and the nuisance functions $\varphi_{\mbox{\tiny N}}$, respectively. Then, in terms of the probability space $(\Omega, \mathscr{A}, \mathbb{P}_m)$, we have $\theta_{\mbox{\tiny N}}\stackrel{\mathbb{P}_m}{\longrightarrow}\theta_0$ and $\varphi_{\mbox{\tiny N}}\stackrel{\mathbb{P}_m}{\longrightarrow}\varphi_0$. Here $``\stackrel{\mathbb{P}_m}{\longrightarrow}"$ denotes convergence in probability with respect to the probability space $(\Omega, \mathscr{A}, \mathbb{P}_m)$.
By the identification of the super-population model, we further have that $\mathbb{E}_m\{g(Z,\theta_0,\varphi_0)\}=0$ and $\mathbb{E}_m\{\Xi(Z,\theta_0,\varphi_0)\}=0$. Let $\mathscr{F}=\{F\}$ be a general family of distributions of $Z$ and $\varphi(\cdot)$ be a mapping $\mathscr{F}\mapsto\mathsf {R}^{{\rm dim}(\varphi)}$. \ Suppose that \ $\hat{\varphi}\stackrel{\mathbb{P}_{d,m}}{\longrightarrow}\varphi(F)$ \ if the distribution of $Z$ is $F \in \mathscr {F}$, where $``\stackrel{\mathbb{P}_{d,m}}{\longrightarrow}"$ denotes convergence in probability with respect to the product probability space $(\mathbf{S}_{\mbox{\tiny N}}\times\Omega,\mathsf{C}_{\mbox{\tiny N}}\times\mathscr{A}, \mathbb{P}_{d,m})$. Let $\{F_{\alpha}: F_{\alpha}\in\mathscr{F}\}$ be a one-dimensional subfamily of $\mathscr{F}$. Following Newey (1994), the path $\{F_{\alpha}: \alpha\in(-\varepsilon,\varepsilon)\subset \mathsf {R}, \varepsilon >0, F_{\alpha}\in\mathscr{F}\}$ is assumed to be regular and satisfies the following mean-squared differentiability condition \[\lim_{\alpha\rightarrow 0} \int\left[{\alpha}^{-1}(dF_{\alpha}^{1/2} - dF_0^{1/2}) - \dfrac{1}{2}\mathfrak{F}(z)dF_0^{1/2}\right]^2dz = 0,\] where $dF_{\alpha}$ is the density of $F_{\alpha}$, and $\mathfrak{F}(z)=\partial\ln (dF_{\alpha})/\partial \alpha\,\big|_{\alpha=0}$ is the corresponding score function satisfying $\mathbb{E}_m\{\mathfrak{F}(Z)\}=0$ and $\mathbb{E}_m\{\mathfrak{F}(Z)^2\}<\infty$. Define the functional \[\mu(F)= \mathbb{E}_{m}\{g(Z, \theta_0, \varphi(F))\}. \] We assume that $\mu: \mathscr {F} \mapsto\mathsf {R}^r$ is differentiable at $F_0$ in the sense of Van der Vaart (1991). Then, under certain regularity conditions, the function $\Xi(Z, \theta_0, \varphi(F_0))$ to be used as the augmentation term is uniquely determined by \begin{eqnarray}\label{con-influ} \dfrac{\partial \mu(F_{\alpha})}{\partial \alpha}\bigg|_{\alpha = 0} = \mathbb{E}_m\{\Xi(Z, \theta_0, \varphi(F_0))\mathfrak{F}(Z)\}.
\end{eqnarray} Equation (\ref{con-influ}) is useful for deriving the expression for the function $\Xi$ when $\varphi_{\mbox{\tiny N}}=\varphi_{\mbox{\tiny N}}(\cdot,\theta_{\mbox{\tiny N}})$ is the finite population version of a regression function or a density. The function $\Xi(Z, \theta_0, \varphi(F_0))$ is called the influence function of $\mu(F_0)$. In the model-based context, the explicit or numerical computation of the influence function has been discussed extensively in the literature; see, for example, Bickel et al. (1993), Newey (1994a, 1994b), Chen et al. (2003), Chen (2007), Ichimura and Newey (2022), Bravo et al. (2020), Chernozhukov et al. (2022) and references therein. These model-based approaches can be readily extended to problems involving complex survey data. \subsection{Census estimating equation based approach} \label{cee-approach} \noindent We next consider cases where the nuisance functional $\varphi_{\mbox{\tiny N}}$ can be explicitly defined via the following census estimating equations \begin{equation} \label{pee-nui} \mathscr{T}_{\mbox{\tiny N}}(\varphi_{\mbox{\tiny N}})=\dfrac{1}{N}\sum_{i=1}^{N}\mathfrak{T}(Z_i,\varphi_{\mbox{\tiny N}})=0. \end{equation} We assume that the equation system (\ref{pee-nui}) for defining the function $\varphi_{\mbox{\tiny N}}$ is possibly over-identified, i.e., dim($\mathfrak{T}$) $\geq$ dim($\varphi$).
Given the set of sampled units $\mathcal{S}$ and survey weights $\{\pi_i^{-1},i\in\mathcal{S}\}$, the design-based GEL estimator for $\varphi_{\mbox{\tiny N}}$ can be obtained as \begin{equation*} \label{gel-nui} \hat{\varphi}_{\scriptscriptstyle GEL}=\arg\inf_{\varphi\in\Psi}\sup_{\vartheta\in\hat{\Lambda}_{\mbox{\tiny N},\mathfrak{T}}(\varphi)} \hat{P}_{\mbox{\tiny N}}(\varphi,\vartheta), \end{equation*} where $\hat{P}_{\mbox{\tiny N}}(\varphi,\vartheta)=\sum_{i\in \mathcal{S}}\{\rho(\vartheta^{\top}\pi_i^{-1}\mathfrak{T}(Z_i, \varphi))-\rho_0\}, $ and $\hat{\Lambda}_{\mbox{\tiny N},\mathfrak{T}}(\varphi)=\{\vartheta:\vartheta^{\top}\pi_i^{-1} \mathfrak{T}(Z_i,\varphi)\in\mathcal {V}, i\in\mathcal{S}\}$. By applying Theorem \ref{thm2}, we have that \begin{equation*} \label{nui-influ} \hat{\varphi}_{\scriptscriptstyle GEL}-\varphi_{\mbox{\tiny N}}=-\mathbb{K}(\varphi_{\mbox{\tiny N}})\mathbb{H}(\varphi_{\mbox{\tiny N}})^{\top} \mathbb{W}(\varphi_{\mbox{\tiny N}})^{-1}\dfrac{1}{N}\sum_{i\in\mathcal{S}}\pi_{i}^{-1}\mathfrak{T}(Z_i,\varphi_{\mbox{\tiny N}})+o_p(n_{\scriptscriptstyle B}^{-1/2}), \end{equation*} where $\mathbb{K}(\varphi)=[\mathbb{H}(\varphi)^{\top}\mathbb{W}(\varphi)^{-1}\mathbb{H}(\varphi)]^{-1}$, $\mathbb{H}(\varphi)=\partial \mathscr{T}(\varphi)/\partial \varphi^{\top}$ with $\mathscr{T}(\varphi)$ satisfying $$\sup_{\varphi\in\Psi}\|\mathscr{T}_{\mbox{\tiny N}}(\varphi)-\mathscr{T}(\varphi)\|=o(1),$$ and $\mathbb{W}(\varphi)=(n_{\scriptscriptstyle B}/ N^2)\sum_{i=1}^{N}\pi_i^{-1}\mathfrak{T}(Z_i,\varphi)^{\otimes 2}$. The augmentation term is therefore given by $$ \Xi(Z,\theta,\varphi)=-D(\theta,\varphi) \mathbb{K}(\varphi)\mathbb{H}(\varphi)^{\top}\mathbb{W}(\varphi)^{-1}\mathfrak{T}(Z,\varphi), $$ where the derivative $D(\theta,\varphi)$ is defined in Condition A5 presented in Appendix A.
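As a toy just-identified instance of this recipe (all data simulated, chosen for illustration only): take $Z=(X,Y)$, the nuisance $\varphi_{\mbox{\tiny N}}$ defined by $\mathfrak{T}(Z,\varphi)=X-\varphi$, and $g(Z,\theta,\varphi)=Y-\theta\varphi$. Then $D(\theta,\varphi)=-\theta$ and $\mathbb{H}=-1$, so the augmentation reduces to $\Xi=-\theta(X-\varphi)$ and the augmented function collapses to $\psi(Z,\theta)=Y-\theta X$, the familiar linearized estimating function for a ratio, which is free of the nuisance, so plugging in $\hat{\varphi}$ has no first-order effect:

```python
import numpy as np

# Toy just-identified case: nuisance phi = population mean of X via
# T(Z, phi) = X - phi; parameter theta via g(Z, theta, phi) = Y - theta*phi.
# Here D = -theta and H = -1, so Xi = -theta * (X - phi) and the augmented
# estimating function is psi(Z, theta) = Y - theta * X.
rng = np.random.default_rng(2)
N = 1000
x = rng.uniform(1.0, 2.0, N)
y = 3.0 * x + rng.normal(0.0, 0.1, N)           # simulated finite population

idx = rng.choice(N, size=200, replace=False)    # SRS without replacement
w = np.full(200, N / 200.0)                     # Horvitz-Thompson weights

def psi(xs, ys, theta):
    return ys - theta * xs                      # augmented estimating function

# Solving sum(w * psi) = 0 gives the ratio of Horvitz-Thompson totals:
theta_hat = np.sum(w * y[idx]) / np.sum(w * x[idx])
```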
When $\varphi_{\mbox{\tiny N}}$ is just-identified by (\ref{pee-nui}), i.e., dim($\mathfrak{T}$) = dim($\varphi$), the result is simplified to $\Xi(Z,\theta,\varphi) = -D(\theta,\varphi) \mathbb{H}(\varphi)^{-1}\mathfrak{T}(Z,\varphi)$. It is straightforward to show that the augmented estimating functions $\psi(Z,\theta,\varphi)=g(Z,\theta,\varphi)+\Xi(Z,\theta,\varphi)$ satisfy the following invariance property: \begin{equation*} \label{pels2.2} \frac{1}{N}\sum\limits_{i\in \mathcal{S}}\pi_i^{-1}\psi(Z_i,\theta_{\mbox{\tiny N}},\hat{\varphi}_{\scriptscriptstyle GEL})=\frac{1}{N}\sum\limits_{i\in \mathcal{S}}\pi_i^{-1}\psi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}}) +o_p(n_{\scriptscriptstyle B}^{-1/2}). \end{equation*} This observation intuitively justifies the standard chi-square limiting distributions of augmented two-step GEL ratio statistics presented in the paper under commonly used survey designs. \subsection{Illustrative examples} \noindent We now apply the general results to three examples: the Gini coefficient, Lorenz curves and quantile shares, all involving a nuisance functional. These three examples have important implications for the theory and practice of inequality measurement in economics. The Gini coefficient, also called the Gini index, measures the degree of inequality in income distributions, the Lorenz curve depicts concentration and inequality in the distribution of resources and in size distributions, while the quantile share is used to detect perturbations at different levels of a distribution (Beach and Davidson, 1983). If the variable under study represents income, then the quantile shares are also called income shares. \vspace{3mm} \noindent {\bf Example 1 (Gini Coefficient). } \ Let $Z\in\mathsf {R}$ be a nonnegative random variable on a probability space $(\Omega, \mathscr{A}, \mathbb{P}_m)$. The cumulative distribution function of $Z$ is $F_0(z)=\mathbb{P}_m(Z\leq z)$.
The general family of Gini coefficients (Nyg\aa rd and Sandstr\"{o}m, 1989) is defined as $$\theta_0=\dfrac{1}{\mu_0}\int_{0}^{\infty}\psi\{F_0(z)\}zdF_0(z),$$ where $\mu_0=\mathbb{E}_{m}[Z]$, $\psi$ is a bounded and continuous function. Here, $\mathbb{E}_m[\cdot]$ denotes the expectation taken with respect to the probability measure $\mathbb{P}_m$. For the original Gini coefficient, $\psi\{u\} = 2u-1$. The nuisance functional in this case is the cumulative distribution function $F_0(z)$. Let $\mathcal {F}_{\mbox{\tiny N}}=(Z_1,\cdots,Z_{\mbox{\tiny N}})\in\mathsf {R}^{N}$ be a finite population from $\mathbb{P}_m$. The finite population distribution function is given by $F_{\mbox{\tiny N}}(z)=N^{-1}\sum_{i=1}^{N}I(Z_i\leq z)$, where $I(\cdot)$ is the indicator function, and the finite population mean is $\mu_{\mbox{\tiny N}}=N^{-1}\sum_{i=1}^{N}Z_i$. Then, the finite population Gini coefficient is defined as $\theta_{\mbox{\tiny N}}= N^{-1}\sum_{i=1}^{N}\mu_{\mbox{\tiny N}}^{-1}\psi\{F_{\mbox{\tiny N}}(Z_i)\}Z_i$, which satisfies \[ U_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},F_{\mbox{\tiny N}})= \dfrac{1}{N} \sum_{i=1}^Ng(Z_i,\theta_{\mbox{\tiny N}},F_{\mbox{\tiny N}}(Z_i))= 0 \,, \] where $g(Z,\theta,F)=\psi\{F\}Z-\theta Z$. Denote $U(\theta,F) = \mathbb{E}_m[\psi\{F\}Z-\theta Z]$. Standard empirical process methods can be used to show that $U_{\mbox{\tiny N}}(\theta,F)$ converges uniformly in $(\theta,F)$ to $U(\theta,F)$. The pathwise derivative of $U(\theta,F_{\mbox{\tiny N}})$ in the direction $F-F_{\mbox{\tiny N}}$ has the form $D(\theta,F_{\mbox{\tiny N}})[F-F_{\mbox{\tiny N}}(Z)]=\mathbb{E}_m[\psi'\{F_{\mbox{\tiny N}}(Z)\}Z\{F-F_{\mbox{\tiny N}}(Z)\}]$, where $\psi'\{u\} = \partial\psi\{u\}/\partial u$. 
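For a toy finite population, the plug-in defined by the census estimating equation above can be computed directly. The Python sketch below (with made-up data) evaluates the design-weighted Gini and checks that the weighted estimating function is exactly zero at the solution; note that with the ``$\leq$'' convention for $F_{\mbox{\tiny N}}$, the finite population value for $\{1,2,3,4\}$ is $0.5$, which exceeds the pairwise-difference Gini ($0.25$) by exactly $1/N$ when the values are distinct.

```python
import numpy as np

def weighted_gini(z, w):
    """Design-weighted plug-in for the finite-population Gini, solving
    sum_i w_i (psi(F_hat(Z_i)) Z_i - theta Z_i) = 0 with psi(u) = 2u - 1."""
    F_hat = np.array([np.sum(w * (z <= t)) for t in z]) / np.sum(w)
    return np.sum(w * (2.0 * F_hat - 1.0) * z) / np.sum(w * z)

# Census case (all weights one): recovers theta_N itself.
z = np.array([1.0, 2.0, 3.0, 4.0])
w = np.ones(4)
theta = weighted_gini(z, w)

# The weighted estimating function vanishes at the solution:
F_hat = np.array([np.sum(w * (z <= t)) for t in z]) / np.sum(w)
residual = np.sum(w * ((2.0 * F_hat - 1.0) * z - theta * z))
```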
Given the set of sampled units $\mathcal{S}$ and first order inclusion probabilities $\pi_i$, the survey weighted estimator of $F_{\mbox{\tiny N}}(z)$ is obtained by $\hat{F}_{\mbox{\tiny N}}(z) = \hat{N}^{-1}\sum_{i\in \mathcal{S}}\pi_i^{-1}I(Z_i\leq z),$ where $\hat{N}=\sum_{i\in \mathcal{S}}\pi_i^{-1}$. It can be shown that \[ D(\theta_{\mbox{\tiny N}},F_{\mbox{\tiny N}})[\hat{F}_{\mbox{\tiny N}}(Z)-F_{\mbox{\tiny N}}(Z)] = \frac{1}{N} \sum_{i\in \mathcal{S}}\pi_i^{-1}\Xi(Z_i, F_{\mbox{\tiny N}})+o_p(n_{\scriptscriptstyle B}^{-1/2}) \,, \] where $\Xi(Z_i,F_{\mbox{\tiny N}})=\mathbb{E}_m[Z\psi'\{F_{\mbox{\tiny N}}(Z)\}\{I(Z\geq Z_i)-F_{\mbox{\tiny N}}(Z)\}]$, which is used to construct the augmentation term. For the original Gini coefficient, we have $\Xi(Z_i,F_{\mbox{\tiny N}})=2\mathbb{E}_m[Z\{I(Z\geq Z_i)-F_{\mbox{\tiny N}}(Z)\}]$. \vspace{3mm} \noindent {\bf Example 2 (Lorenz Curves). } \ Assume that $F_0(z)$ is differentiable and $f(z)$ is its density function. For a given $\tau\in[0,1]$, the Lorenz curve of $\mathbb{P}_m$ is defined as \[ \theta_0(\tau) = \frac{1}{\mu_0}\int_{0}^{\xi_0(\tau)}zdF_0(z) \,, \] where $\xi_0(\tau) = F_0^{-1}(\tau)=\inf\{z:F_0(z)\ge\tau\}$, which is a nuisance functional. The finite population Lorenz curve is defined as $\theta_{\mbox{\tiny N}}(\tau)= N^{-1}\sum_{i=1}^{N}\mu_{\mbox{\tiny N}}^{-1}Z_iI\{Z_i\leq\xi_{\mbox{\tiny N}}(\tau)\}$, where $\xi_{\mbox{\tiny N}}(\tau)=F_{\mbox{\tiny N}}^{-1}(\tau)=\inf\{z:F_{\mbox{\tiny N}}(z)\ge\tau\}$, the $\tau$th finite population quantile. Note that $\theta_{\mbox{\tiny N}}(\tau)$ is the solution to \begin{eqnarray*} U_{\mbox{\tiny N}}(\theta,\xi_{\mbox{\tiny N}}(\tau)) =\frac{1}{N}\sum_{i=1}^Ng(Z_i,\theta,\xi_{\mbox{\tiny N}}(\tau)) = 0, \end{eqnarray*} where $ g(Z,\theta,\xi)=Z\{I(Z\leq \xi)-\theta\}. $ Denote $U(\theta,\xi) = \mathbb{E}_m[Z\{I(Z\leq \xi)-\theta\}]$.
It can be shown that $U_{\mbox{\tiny N}}(\theta,\xi)$ converges uniformly in $(\theta,\xi)$ to $U(\theta,\xi)$, and that the pathwise derivative of $U(\theta,\xi_{\mbox{\tiny N}})$ in direction $\xi-\xi_{\mbox{\tiny N}}$ is of the form $ D(\theta,\xi_{\mbox{\tiny N}}(\tau))[\xi-\xi_{\mbox{\tiny N}}(\tau)]=\xi_{\mbox{\tiny N}}(\tau)f(\xi_{\mbox{\tiny N}}(\tau))[\xi-\xi_{\mbox{\tiny N}}(\tau)] \,. $ The survey weighted estimator of $\xi_{\mbox{\tiny N}}(\tau)$ is given by $\hat{\xi}(\tau)=\hat{F}^{-1}_{\mbox{\tiny N}}(\tau)=\inf\{z:\hat{F}_{\mbox{\tiny N}}(z)\ge\tau\}$. Using the Bahadur representation established in Chen and Wu (2002), we obtain $$ D(\theta_{\mbox{\tiny N}},\xi_{\mbox{\tiny N}}(\tau))[\hat{\xi}(\tau)-\xi_{\mbox{\tiny N}}(\tau)] = \dfrac{1}{N}\sum_{i\in \mathcal{S}}\pi_i^{-1}\Xi(Z_i,\xi_{\mbox{\tiny N}}(\tau))+o_p(n_{\scriptscriptstyle B}^{-1/2}), $$ where $\Xi(Z,\xi)=-\xi\{I(Z\leq \xi)- \tau\}$. The GEL-based estimation and inference for $\theta_{\mbox{\tiny N}}(\tau)$ can consequently be conducted using the augmented estimating function $ \psi(Z,\theta,\xi)=g(Z,\theta,\xi)+\Xi(Z,\xi). $ \vspace{3mm} \noindent {\bf Example 3 (Quantile Shares). } \ For two fixed quantile levels $\tau_1,\tau_2\in[0,1]$ with $\tau_1\leq\tau_2$, the quantile share of $\mathbb{P}_m$ is defined as $$ \theta_0(\tau_1,\tau_2)=\theta_0(\tau_2)-\theta_0(\tau_1) \,, $$ where $\theta_0(\tau)$ is defined in Example 2. If $Z$ is an income variable, the income share $\theta_0(\tau_1,\tau_2)$ is the percentage of total income shared by the population allocated to the income interval $[\xi_0(\tau_1), \; \xi_0(\tau_2)]$. 
The finite population quantile share is defined as $\theta_{\mbox{\tiny N}}(\tau_1,\tau_2)=N^{-1}\sum_{i=1}^{N}\mu_{\mbox{\tiny N}}^{-1}Z_iI\{\xi_{\mbox{\tiny N}}(\tau_1)<Z_i\leq\xi_{\mbox{\tiny N}}(\tau_2)\},$ which is obtained by solving the census estimating equation \begin{eqnarray*} U_{\mbox{\tiny N}}(\theta,\xi_1,\xi_2) = \frac{1}{N}\sum_{i=1}^Ng(Z_i,\theta,\xi_1,\xi_2) = 0, \end{eqnarray*} with $ g(Z,\theta,\xi_1,\xi_2)=Z\{I(\xi_1<Z\leq \xi_2)-\theta\}. $ Using the same arguments given in Example 2 for the Lorenz curve, we obtain the following augmented estimating function $ \psi(Z,\theta,\xi_1,\xi_2)=g(Z,\theta,\xi_1,\xi_2)+\Xi(Z,\xi_1,\xi_2), $ where $\Xi(Z,\xi_1,\xi_2)=-\xi_2\{I(Z\leq \xi_2)- \tau_2\}+\xi_1\{I(Z\leq \xi_1)- \tau_1\}$. \section{Survey Designs and Variance Estimation} \label{sec.design} \setcounter{equation}{0} \noindent We give detailed illustrations of how our results can be readily applied to a class of complex survey designs, along with discussions on design-based variance estimation of the proposed efficient GEL estimators. It follows from Theorem \ref{thm2} that the point estimator $\hat{\theta}_{\scriptscriptstyle GEL}$ and its estimated standard errors can be used to construct Wald-type confidence regions. However, estimation of the asymptotic design-based variance-covariance matrix $V_2$ of $n_{\scriptscriptstyle B}^{1/2}(\hat{\theta}_{\scriptscriptstyle GEL}-\theta_{\mbox{\tiny N}})$ is not straightforward for a general unequal probability survey design. We consider three commonly used survey designs: single-stage unequal probability sampling, stratified sampling and cluster sampling. General discussions on variance estimation for complex survey designs can be found in Wu and Thompson (2020).
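Before turning to specific designs, the point estimators of Examples 2 and 3 can be sketched in a few lines of Python; the data and weights below are illustrative, and the augmentation terms $\Xi$ derived above enter only the inference step, not the point estimates computed here.

```python
import numpy as np

def weighted_quantile(z, w, tau):
    """xi_hat(tau) = inf{ t : F_hat(t) >= tau } with survey-weighted F_hat."""
    order = np.argsort(z)
    zs, ws = z[order], w[order]
    F = np.cumsum(ws) / ws.sum()
    return zs[np.searchsorted(F, tau)]   # first index with F >= tau

def lorenz(z, w, tau):
    """Design-weighted Lorenz ordinate (Example 2)."""
    xi = weighted_quantile(z, w, tau)
    return np.sum(w * z * (z <= xi)) / np.sum(w * z)

def quantile_share(z, w, tau1, tau2):
    """Quantile share (Example 3): the share of the total held between
    the tau1 and tau2 quantiles, as a difference of Lorenz ordinates."""
    return lorenz(z, w, tau2) - lorenz(z, w, tau1)

z = np.array([1.0, 2.0, 3.0, 4.0])       # toy population, census weights
w = np.ones(4)
L_half = lorenz(z, w, 0.5)               # bottom half holds 3/10 of the total
top_share = quantile_share(z, w, 0.5, 1.0)
```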
\subsection{Single-stage unequal probability sampling} \label{sec.single} If the survey design is single-stage PPS sampling with replacement or single-stage PPS sampling without replacement with negligible sampling fractions, we have the following approximation formula for the variance-covariance matrix $\Omega$: \begin{eqnarray*} \Omega= \dfrac{n_{\scriptscriptstyle B}}{N^2}{\rm Var}\Big\{\sum_{i\in \mathcal{S}}\pi_i^{-1}\psi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})\mid \mathcal{F}_{\mbox{\tiny N}}\Big\} =\dfrac{n_{\scriptscriptstyle B}}{N^2}\sum_{i=1}^N\pi_i^{-1}\psi(Z_i,\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})^{\otimes 2}+o(1). \end{eqnarray*} Consequently, a design-based consistent estimator of $\Omega$ in the survey designs mentioned above can be obtained by \begin{eqnarray*} \hat{\Omega}=\dfrac{n}{N^2}\sum_{i\in\mathcal{S}}\Big[\pi_i^{-1}\psi(Z_i,\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi}) -N n^{-1}\hat{\mathbb{U}}_{\mbox{\tiny N}}(\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})\Big]^{\otimes 2}, \end{eqnarray*} where $\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\varphi)= N^{-1}\sum_{i\in \mathcal{S}}\pi_i^{-1}\psi(Z_i,\theta,\varphi)$. However, for general survey designs, estimating $\Omega$ requires second order inclusion probabilities $\pi_{ij}={\rm Pr}(i,j\in\mathcal{S})$, which may not be available. Approximate variance formulas not involving the $\pi_{ij}$ are often used in practice; see Haziza {\em et al.} (2008) for further discussion. 
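The centered estimator $\hat{\Omega}$ above transcribes directly into code; in the sketch below, `psi_vals` is assumed to hold the augmented estimating functions evaluated at the sampled units, and the toy check exploits the fact that under equal inclusion probabilities $\pi_i = n/N$ the estimator reduces to the ordinary sample variance (divisor $n$) of the estimating functions.

```python
import numpy as np

def omega_hat(psi_vals, pi, N):
    """Centered design-based estimator of Omega for single-stage designs:
    (n / N^2) * sum_i [ pi_i^{-1} psi_i - (N/n) * U_hat ] outer-squared."""
    n = len(pi)
    w_psi = psi_vals / pi[:, None]           # pi_i^{-1} psi(Z_i, ., .)
    U_hat = w_psi.sum(axis=0) / N            # Horvitz-Thompson average
    dev = w_psi - (N / n) * U_hat            # centered contributions
    return (n / N**2) * dev.T @ dev

# Toy check with equal probabilities pi_i = n/N (n = 4, N = 12):
psi_vals = np.array([[1.0], [2.0], [4.0], [7.0]])
pi = np.full(4, 4.0 / 12.0)
Omega_est = omega_hat(psi_vals, pi, N=12)
```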
For single-stage PPS sampling with non-negligible sampling fractions, we can estimate $\Omega$ by the H\'{a}jek variance estimator \[ \hat{\Omega} = \dfrac{n}{N^2}\sum\limits_{i\in \mathcal{S}} c_i\big [\pi_i^{-1}\psi(Z_i,\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})-\hat{\mathscr{B}}\big]^{\otimes 2}\,, \] where \[ \hat{\mathscr{B}} = \Big \{\sum_{i\in\mathcal{S}}c_i\pi_i^{-1}\psi(Z_i,\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})\Big\}/\sum_{i\in\mathcal{S}}c_i \;\;\; {\rm and} \;\;\; c_i = \{n(1-\pi_i)\}/(n-1) \,. \] Simulation results reported in Haziza {\em et al.} (2008) showed that the approximate variance estimator has good finite sample performance for commonly used single-stage survey designs. We now discuss design consistent estimators of the weight matrix $W_2$ and the derivative $\Gamma_2$ under a general single-stage sampling design. The weight matrix $W_2$ can be consistently estimated by \begin{eqnarray*}\label{hatw2} \hat{W}_{\mbox{\tiny N}}=\dfrac{n}{N^2}\sum_{i\in\mathcal{S}}\pi_i^{-2}\psi(Z_i,\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})^{\otimes 2}. \end{eqnarray*} If the function $\psi(Z,\theta,\varphi)$ is differentiable in $\theta$, it is easy to see that $$\hat{\Gamma}_2=\dfrac{1}{N}\sum_{i\in\mathcal{S}}\pi_i^{-1} \dfrac{\partial\psi(Z_i,\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})}{\partial\theta^{\top}}$$ is a design consistent estimator for $\Gamma_2$. For non-smooth functions, we employ the method of random perturbation proposed in Chen and Liao (2015). Denote by $\mathscr{V}$ a large enough compact set in $\mathsf{R}^{p}$ and define \begin{eqnarray*}\label{randper} \begin{split} \mathcal {D}_{\mbox{\tiny N},\theta}(\mathscr{V},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi}) =\sqrt{N}\hat{\mathbb{U}}_{\mbox{\tiny N}}(\hat{\theta}_{\scriptscriptstyle GEL}+N^{-1/2}\mathscr{V},\hat{\varphi}) -\sqrt{N}\hat{\mathbb{U}}_{\mbox{\tiny N}}(\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi}).
\end{split} \end{eqnarray*} Under the conditions presented in Appendix A, $\{\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\varphi)-\mathbb{U}(\theta,\varphi), N=1,2, \cdots\}$ is stochastically equicontinuous. This, together with the differentiability of the limiting function $\mathbb{U}(\theta,\varphi)$ with respect to $\theta$, implies that \begin{eqnarray}\label{approximate-ls} \begin{split} \mathcal {D}_{\mbox{\tiny N},\theta}(\mathscr{V},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi}) =\Gamma_2(\tilde{\theta},\hat{\varphi})\mathscr{V}+o_p(1), \end{split} \end{eqnarray} where $\tilde{\theta}$ is on the line segment between $\hat{\theta}_{\scriptscriptstyle GEL}$ and $\hat{\theta}_{\scriptscriptstyle GEL}+N^{-1/2}\mathscr{V}$. Motivated by the expression (\ref{approximate-ls}), we propose the following resampling procedure based on least squares. \begin{itemize} \item[1.] Generate independent and identically distributed random samples, $\{\mathscr{V}_{b}: b = 1,\cdots, B\}$, from some known multivariate distribution with mean zero and variance $I_{p}$. \item[2.] Compute $\mathcal {D}_{\mbox{\tiny N},\theta}(\mathscr{V}_{b},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})$ for $b = 1,\cdots, B$. \item[3.] Calculate \begin{eqnarray}\label{resample-ls} \hat{\Gamma}_{2,j}=\left(\dfrac{1}{B}\sum_{b=1}^{B} \mathscr{V}_{b}\mathscr{V}_{b}^{\top}\right)^{-1}\dfrac{1}{B}\sum_{b=1}^{B} \mathcal {D}_{j\mbox{\tiny N},\theta}(\mathscr{V}_{b},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})\mathscr{V}_{b}, \end{eqnarray} for $j=1,\cdots,r$, where $\mathcal {D}_{j\mbox{\tiny N},\theta}$ denotes the $j$th coordinate of $\mathcal {D}_{\mbox{\tiny N},\theta}$. The value of $\Gamma_2$ is then estimated by $\hat{\Gamma}_2$ with $\hat{\Gamma}_2=(\hat{\Gamma}_{2,1},\cdots,\hat{\Gamma}_{2,r})^{\top}$.
\end{itemize} Note that $\hat{\Gamma}_{2,j}$ in (\ref{resample-ls}) is the least squares estimate from regressing $\mathcal {D}_{j\mbox{\tiny N},\theta}(\mathscr{V},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})$ over $\mathscr{V}$ based on (\ref{approximate-ls}). Denote by $E_{\mathscr{V}}[\cdot]$ the expectation with respect to $\mathscr{V}$. The following theorem presents the consistency of the resampling estimator $\hat{\Gamma}_2$. \begin{theorem} \label{thm6} Suppose that (i) $\mathscr{V}$ is a random vector with mean zero and variance $I_{p}$, independent of the survey sample $\mathcal{S}$; (ii) for all sequences $\delta_{\mbox{\tiny N}}=o(1)$, $$ \sup_{(\theta,\varphi)\in\Theta(\delta_{\mbox{\tiny N}})\times\Psi(\delta_{\mbox{\tiny N}})} \|\Gamma_2(\theta,\varphi)-\Gamma_2(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})\|=o(1) $$ and $$ \sup_{(\theta,\varphi)\in\Theta(\delta_{\mbox{\tiny N}})\times\Psi(\delta_{\mbox{\tiny N}})} \|E_{\mathscr{V}}[\mathfrak{D}_{\mbox{\tiny N},\mathscr{V}}(\mathscr{V}, \theta+N^{-1/2}\mathscr{V},\varphi)]\| =o_p(N^{-1/2}),$$ where $\mathfrak{D}_{\mbox{\tiny N},\mathscr{V}}(\mathscr{V},\theta,\varphi) =[\hat{\mathbb{U}}_{\mbox{\tiny N}}(\theta,\varphi)- \mathbb{U}(\theta,\varphi)] \mathscr{V}^{\top}$. Then $\hat{\Gamma}_2 \stackrel{p}{\rightarrow}\Gamma_2$. \end{theorem} Consequently, the variance-covariance matrix of $n_{\scriptscriptstyle B}^{1/2}(\hat{\theta}_{\scriptscriptstyle GEL}-\theta_{\mbox{\tiny N}})$ can be consistently estimated by $ \hat{V}_2=(\hat{\Gamma}_2^{\top}\hat{W}_{\mbox{\tiny N}}^{-1}\hat{\Gamma}_2)^{-1} \hat{\Gamma}_2^{\top}\hat{W}_{\mbox{\tiny N}}^{-1}\hat{\Omega} \hat{W}_{\mbox{\tiny N}}^{-1}\hat{\Gamma}_2 (\hat{\Gamma}_2^{\top}\hat{W}_{\mbox{\tiny N}}^{-1}\hat{\Gamma}_2)^{-1}. $ \subsection{Stratified sampling} \label{sec.stra} Suppose that the finite population $\mathcal {U}_{\mbox{\tiny N}}$ is divided into $H$ strata indexed by $h=1,\cdots,H$.
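Returning to the least-squares resampling estimator of $\Gamma_2$, steps 1--3 can be sketched as follows. The example deliberately uses the smooth function $\psi(z,\theta)=z-\theta$ with $p=1$ and equal inclusion probabilities, for which the slope recovered by the regression is known: $\hat{\Gamma}_2 = -\hat{N}/N = -1$.

```python
import numpy as np

# Sketch of the resampling estimator of Gamma_2 (p = 1 for simplicity).
rng = np.random.default_rng(3)
N, n, B = 5000, 100, 200
z = rng.normal(10.0, 2.0, n)                 # hypothetical sampled values
pi = np.full(n, n / N)                       # equal inclusion probabilities

def U_hat(theta):
    return np.sum((z - theta) / pi) / N      # HT estimating function

theta_hat = np.sum(z / pi) / np.sum(1.0 / pi)    # Hajek mean; U_hat ~ 0 here

V = rng.standard_normal(B)                   # step 1: i.i.d. perturbations
D = np.array([np.sqrt(N) * (U_hat(theta_hat + v / np.sqrt(N)) - U_hat(theta_hat))
              for v in V])                   # step 2: perturbed differences
Gamma2_hat = np.sum(V * D) / np.sum(V * V)   # step 3: least-squares slope
```

The same three steps apply verbatim to non-smooth $\psi$, which is the case the procedure is designed for.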
Let $N=\sum_{h=1}^{H}N_h$ be the overall population size where $N_h$ is the population size of the $h$th stratum. Let $(hi)$ be the index for unit $i$ in stratum $h$. The parameter of interest $\theta_{\mbox{\tiny N}}$ is defined through the following stratified population estimating equations \begin{equation} \label{str-un} U_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})=\dfrac{1}{N} \sum_{h=1}^H\sum_{i=1}^{N_h}g(Z_{hi},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})=0. \end{equation} Assume that the stratum sample $\mathcal{S}_h$ of size $n_h$ is selected with first order inclusion probabilities $\{\pi_{hi},i\in \mathcal{S}_h\}$, $h=1,\cdots,H$, independent across different strata. Let $n=\sum_{h=1}^{H}n_h$ be the overall size of the stratified sample. Assume that a design consistent estimator for $\varphi_{\mbox{\tiny N}}$, denoted by $\hat{\varphi}$, can be obtained in advance by using the stratified samples. The following two regularity conditions are imposed on the population estimating functions defined in (\ref{str-un}) and the first-step estimator $\hat{\varphi}$: \begin{itemize} \item [C1.] There exists a function $U(\theta,\varphi)$ such that $\sup_{(\theta,\varphi)\in\Theta\times\Psi}\|U_{\mbox{\tiny N}}(\theta,\varphi)-U(\theta,\varphi)\|=o(1)$, and $U(\theta,\varphi)$ satisfies conditions A4 and A5 presented in Appendix A. \item [C2.]
The pathwise derivative $D(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})[\hat{\varphi}-\varphi_{\mbox{\tiny N}}]$ of $U(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$ is of the following form: $$D(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})[\hat{\varphi}-\varphi_{\mbox{\tiny N}}]=\dfrac{1}{N}\sum_{h=1}^H\sum_{i\in \mathcal{S}_h}\pi_{hi}^{-1}\Xi(Z_{hi},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})+o_p(n_{\scriptscriptstyle B}^{-1/2}),$$ where $\Xi(\cdot)$ satisfies the following conditions: (i) $\Xi(Z_{hi},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$ has finite fourth population moments; and (ii) $\sum_{h=1}^H\sum_{i\in \mathcal{S}_h}\pi_{hi}^{-1}\Xi(Z_{hi},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$ is asymptotically normally distributed with mean zero and variance-covariance matrix at the order $O(n_{\scriptscriptstyle B}^{-1}N^2)$. \end{itemize} Under stratified sampling designs, the efficient two-step GEL estimator for $\theta_{\mbox{\tiny N}}$ satisfying (\ref{str-un}) is defined as \begin{equation} \label{gele-str} \hat{\theta}_{\scriptscriptstyle GEL}=\arg\inf_{\theta\in\Theta} \sup_{\eta\in\hat{\Lambda}_{\mbox{\tiny N},\psi}(\theta,\hat{\varphi})}\hat{P}_{\mbox{\tiny N}}(\theta, \eta,\hat{\varphi}), \end{equation} where $\hat{P}_{\mbox{\tiny N}}(\theta,\eta,\varphi) =\sum_{h=1}^H\sum_{i\in \mathcal{S}_h}\{\rho(\eta^{\top}\pi_{hi}^{-1}\psi(Z_{hi},\theta,\varphi))-\rho_0\}$ and $\hat{\Lambda}_{\mbox{\tiny N},\psi}(\theta,\varphi)=\{\eta:\eta^{\top}\pi_{hi}^{-1} \psi(Z_{hi},\theta,\varphi)\in\mathcal {V}, i\in\mathcal{S}_h,h=1,\cdots,H\}$. We call $\hat{\theta}_{\scriptscriptstyle GEL}$ under stratified sampling the pooled GEL estimator.
It follows from Theorem 3.2 that the pooled GEL estimator $\hat{\theta}_{\scriptscriptstyle GEL}$ defined in (\ref{gele-str}) is asymptotically normally distributed with mean $\theta_{\mbox{\tiny N}}$ and variance-covariance matrix $n_{\scriptscriptstyle B}^{-1}V_2$, where $V_2$ has the same form given in Theorem 3.2 with the matrices $W_2$ and $\Omega$ replaced respectively by $ W_2= n_{\scriptscriptstyle B} N^{-2} \sum_{h=1}^H\sum_{i=1}^{N_h}g(Z_{hi},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})^{\otimes 2} $ and $$ \Omega= \dfrac{n_{\scriptscriptstyle B}}{N^2}\sum_{h=1}^H{\rm Var}\Big\{\sum_{i\in \mathcal{S}_h}\pi_{hi}^{-1}[\psi(Z_{hi},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})] \mid \mathcal{F}_{\mbox{\tiny N}}\Big\}. $$ If the stratum samples $\mathcal{S}_h$ are selected by a PPS sampling design with small sampling fractions, we can estimate $\Omega$ by \[ \hat{\Omega}=\frac{n}{N^2}\sum_{h=1}^{\scriptscriptstyle H}\sum_{i\in \mathcal{S}_h}\Big [\pi_{hi}^{-1}\psi(Z_{hi},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})-\hat{U}_{h}(\hat{\theta}_{\scriptscriptstyle GEL},\hat\varphi)\Big]^{\otimes 2}, \] where $\hat{U}_{h}(\theta,\varphi)= n_h^{-1}\sum_{i\in \mathcal{S}_h} \pi_{hi}^{-1} \psi(Z_{hi},\theta,\varphi)$. In cases where the sampling fractions are not negligible, using arguments similar to those presented in Section 5 of the main paper, we can estimate $\Omega$ by \[ \hat{\Omega} = \dfrac{n}{N^2}\sum_{h=1}^{\scriptscriptstyle H}\sum_{i\in \mathcal{S}_h} c_{hi}\big [\pi_{hi}^{-1}\psi(Z_{hi},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})-\hat{\mathscr{B}}_h\big]^{\otimes 2}\,, \] where $ \hat{\mathscr{B}}_h = \sum_{i\in\mathcal{S}_h}c_{hi}\pi_{hi}^{-1}\psi(Z_{hi},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})/\sum_{i\in\mathcal{S}_h}c_{hi}$ and $ c_{hi} = \{n_h(1-\pi_{hi})\}/\{n_h-1\}.
$ Under stratified sampling designs, the weight matrix $W_2$ and the derivative $\Gamma_2$ can be consistently estimated by using the same estimators for single-stage sampling designs, with $Z_i$, $\pi_{i}$ and $\sum_{i\in \mathcal{S}}$ respectively replaced by $Z_{hi}$, $\pi_{hi}$ and $\sum_{h=1}^{\scriptscriptstyle H}\sum_{i\in \mathcal{S}_h}$. \subsection{Cluster sampling} \label{sec.cluster} We now consider cluster sampling. Suppose that the population is divided into $K$ clusters and that the $i$th cluster has $M_i$ elements. The overall population size is $N=\sum_{i=1}^{K}M_i$. Let $Z_{(ij)}$ be the value of $Z$ for the $j$th element in the $i$th cluster. Then, the true parameter $\theta_{\mbox{\tiny N}}$ satisfies \begin{equation} \label{clst-cl} U_{\mbox{\tiny N}}(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})=\dfrac{1}{N}\sum_{i=1}^K\sum_{j=1}^{M_i}g(Z_{(ij)},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})=0. \end{equation} We consider two-stage cluster sampling designs where the first stage sample $\mathcal{S}_c$ is a set of $k$ clusters selected from the population with inclusion probabilities $\pi_{1i} = {\rm Pr}(i\in \mathcal{S}_c)$, and the second stage sample $\mathcal{S}_i$ is a set of $m_i$ ($\le M_i$) units drawn from cluster $i\in \mathcal{S}_c$ with second-stage inclusion probabilities $\pi_{j | i}={\rm Pr}(j\in \mathcal{S}_i \mid i\in \mathcal{S}_c)$. The final first order inclusion probability for selecting unit $(ij)$ is given by $\pi_{(ij)} = {\rm Pr}(i \in \mathcal{S}_c, j\in \mathcal{S}_i) = \pi_{1i}\pi_{j | i}$. A popular two-stage sampling design is the so-called self-weighting design for which $\pi_{1i} = kM_i/N$ and $\pi_{j | i} = m/M_i$ such that the final first order inclusion probabilities are the same for all units.
We assume that under two-stage cluster sampling the population estimating functions defined in (\ref{clst-cl}) and the first-step estimator $\hat{\varphi}$ satisfy the following two regularity conditions: \begin{itemize} \item [D1.] There exists a function $U(\theta,\varphi)$ such that $\sup_{(\theta,\varphi)\in\Theta\times\Psi}\|U_{\mbox{\tiny N}}(\theta,\varphi)-U(\theta,\varphi)\|=o(1)$, and $U(\theta,\varphi)$ satisfies conditions A4 and A5 presented in Appendix A. \item [D2.] The pathwise derivative $D(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})[\hat{\varphi}-\varphi_{\mbox{\tiny N}}]$ is of the following form: $$D(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})[\hat{\varphi}-\varphi_{\mbox{\tiny N}}]=\dfrac{1}{N}\sum_{i\in \mathcal{S}_c}\sum_{j\in \mathcal{S}_i} \pi_{(ij)}^{-1} \Xi(Z_{(ij)},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})+o_p(n_{\scriptscriptstyle B}^{-1/2}),$$ where $\Xi(\cdot)$ satisfies the following conditions: (i) $\Xi(Z_{(ij)},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$ has finite fourth population moments; and (ii) $\sum_{i\in \mathcal{S}_c}\sum_{j\in \mathcal{S}_i} \pi_{(ij)}^{-1} \Xi(Z_{(ij)},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$ is asymptotically normally distributed with mean zero and variance-covariance matrix at the order $O(n_{\scriptscriptstyle B}^{-1}N^2)$. \end{itemize} Under two-stage cluster sampling, the parameter $\theta_{\mbox{\tiny N}}$ is defined by (\ref{clst-cl}). The efficient two-step GEL estimator $\hat{\theta}_{\scriptscriptstyle GEL}$ is defined as in (\ref{gele-str}), but with $\hat{P}_{\mbox{\tiny N}}(\theta,\eta,\hat{\varphi})$ replaced by \begin{equation*} \hat{P}_{\mbox{\tiny N}}(\theta,\eta,\hat{\varphi}) =\sum_{i\in \mathcal{S}_c}\sum_{j\in \mathcal{S}_i}\{\rho(\eta^{\top}\pi_{(ij)}^{-1}\psi(Z_{(ij)},\theta,\hat\varphi))-\rho_0\}, \end{equation*} where $\psi(Z_{(ij)},\theta,\varphi)=g(Z_{(ij)},\theta,\varphi)+\Xi(Z_{(ij)},\theta,\varphi)$.
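For concreteness, the inner GEL objective $\hat{P}_{\mbox{\tiny N}}(\theta,\eta,\hat{\varphi})$ can be evaluated as below for a fixed $\theta$. The carrier functions $\rho$ for EL, ET and CU are the standard GEL choices and are not restated in this section, so they should be read as assumptions here; all function and variable names are ours.

```python
import numpy as np

# Carrier functions rho for common GEL members: empirical likelihood (EL),
# exponential tilting (ET) and the continuous-updating estimator (CU).
RHO = {
    "EL": lambda v: np.log(1.0 - v),      # requires v < 1
    "ET": lambda v: -np.exp(v),
    "CU": lambda v: -v - 0.5 * v ** 2,
}

def gel_objective(eta, psi_vals, pi_vals, member="EL"):
    """Inner GEL objective P_N(theta, eta, phi_hat) for fixed theta.

    psi_vals : (n, r) array with rows psi(Z_(ij), theta, phi_hat) over
               all sampled units (ij); pi_vals : (n,) final inclusion
               probabilities pi_(ij); eta : (r,) vector.
    """
    rho = RHO[member]
    v = (psi_vals / pi_vals[:, None]) @ eta   # eta^T pi^{-1} psi, per unit
    return float(np.sum(rho(v) - rho(0.0)))   # rho_0 = rho(0)
```

At $\eta=0$ the objective equals zero for every member, since each summand is $\rho(0)-\rho_0=0$; the estimator maximizes over $\eta$ and minimizes over $\theta$.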
We can show that $ n_{\scriptscriptstyle B}^{1/2}(\hat{\theta}_{\scriptscriptstyle GEL}-\theta_{\mbox{\tiny N}})\stackrel{{\cal L}}{\rightarrow}N(0,V_2), $ where $ V_2=\Sigma_2\Gamma_2^{\top}W_2^{-1}\Omega W_2^{-1}\Gamma_2 \Sigma_2 \,, \label{V1} $ $\Sigma_2=(\Gamma_2^{\top}W_2^{-1}\Gamma_2)^{-1}$, $\Gamma_2=\Gamma_2(\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})$, $\Gamma_2(\theta,\varphi)$ is the ordinary derivative of $U(\theta,\varphi)$ with respect to $\theta$, $$ W_2= \dfrac{n_{\scriptscriptstyle B}}{N^2}\sum_{i=1}^K\sum_{j=1}^{M_i}\pi_{(ij)}^{-1}\psi(Z_{(ij)},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})^{\otimes 2}\,, $$ and \begin{equation*} \label{omega-ts} \Omega =\dfrac{n_{\scriptscriptstyle B}}{N^2} {\rm Var}\Big\{\sum_{i\in \mathcal{S}_c}\sum_{j\in \mathcal{S}_i} \pi_{(ij)}^{-1} \psi(Z_{(ij)},\theta_{\mbox{\tiny N}},\varphi_{\mbox{\tiny N}})\mid \mathcal{F}_{\mbox{\tiny N}}\Big\} \,. \end{equation*} For self-weighting two-stage sampling designs, we can estimate $\Omega$ directly by \[ \hat{\Omega}=\frac{1}{k(k-1)}\sum_{i\in \mathcal{S}_c}\Big (\bar{G}_i - \bar{G}\Big )^{\otimes 2}\,, \] where $\bar{G}_i = \sum_{j\in \mathcal{S}_i}\psi(Z_{(ij)},\hat{\theta}_{\scriptscriptstyle GEL},\hat{\varphi})/m$ and $\bar{G} = \sum_{i\in \mathcal{S}_c}\bar{G}_i/k$. Design consistent estimators of the weight matrix $W_2$ and the derivative $\Gamma_2$ can be easily obtained under general two-stage cluster sampling with suitable changes in notation. The results described in Section \ref{sec.single} for single-stage unequal probability sampling and those for stratified sampling and cluster sampling can be combined for variance estimation under the more general stratified multi-stage sampling designs.
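The self-weighting variance estimator $\hat{\Omega}$ above admits a very short implementation, with one array of $\psi$ evaluations per sampled cluster; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def omega_hat_selfweighting(psi_by_cluster):
    """Variance estimator for self-weighting two-stage cluster sampling.

    psi_by_cluster : list of (m, r) arrays, one per sampled cluster,
    holding psi(Z_(ij), theta_hat, phi_hat) for its m second-stage
    units (equal m across clusters, as in the self-weighting design).
    Returns the (r, r) matrix Omega_hat = sum_i (Gbar_i - Gbar)^{x2}/(k(k-1)).
    """
    k = len(psi_by_cluster)
    Gbar_i = np.stack([c.mean(axis=0) for c in psi_by_cluster])  # (k, r)
    Gbar = Gbar_i.mean(axis=0)                                   # overall mean
    dev = Gbar_i - Gbar
    return dev.T @ dev / (k * (k - 1))
```

The estimator only requires cluster-level means of $\psi$, which is what makes the self-weighting case so convenient in practice.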
\section{Simulation Studies} \label{sec.sim} \setcounter{equation}{0} \noindent In this section, we report results from a simulation study on the finite sample performances of our proposed augmented two-step GEL estimators and the GEL ratio confidence intervals when the sample data are selected from a finite population by a probability sampling method. The finite population $\mathcal{F}_{\mbox{\tiny N}}=(Z_1,\cdots, Z_{\mbox{\tiny N}})$ of size $N$ is generated from the model $$ Z_i=X_i+\varepsilon_i,\,\,\,\,i=1,\cdots,N \, , $$ where $X_i \sim 0.25+ {\rm Weibull}(2,2)$ and $\varepsilon_i$ follows the $\chi^2$ distribution with 3 degrees of freedom. The parameter of interest is the finite population quantile share $\theta_{\mbox{\tiny N}}(\tau_1,\tau_2)$ of the population $\mathcal{F}_{\mbox{\tiny N}}$ as discussed in Section \ref{sec.exam}. We consider four scenarios for the quantile levels: $(\tau_1,\tau_2)=(0,0.25)$, $(0.25,0.5)$, $(0.5,0.75)$, $(0.75,1)$. The finite population, once generated, is held fixed and repeated simulation samples are selected from the finite population using the following four sampling methods: \vspace{1.5mm} \noindent (A) Single-stage randomized systematic PPS sampling without replacement with small sampling fractions. The finite population size and the sample size are taken to be $(N, n) = (20000, 300)$. The sampling fraction is $n/N=1.5\%$, which can be viewed as negligible. \vspace{1.5mm} \noindent (B) Single-stage Rao-Sampford PPS sampling without replacement with large sampling fractions. The finite population size and the sample size are chosen as $(N, n) = (3000, 300)$. The sampling fraction is $10\%$, which is non-negligible. \vspace{1.5mm} \noindent (C) Stratified Rao-Sampford PPS sampling. The finite population is divided into $H = 3$ strata with stratum sizes $(N_1,N_2,N_3)=(4000, 6000, 10000)$. Stratum samples of size $n_h$ are selected by the randomized systematic PPS sampling, for $h=1,2,3$. 
The stratum sample sizes are chosen as $(n_1,n_2,n_3)=(50,100,150)$. The total sample size is $n=n_1+n_2+n_3$. \vspace{1.5mm} \noindent (D) Two-stage cluster sampling. The finite population is split into $1350$ clusters, with $200$ clusters having equal cluster size $M_j=30$, $250$ clusters having size $M_j=20$, and $900$ clusters having size $M_j=10$. In the first stage sampling $k=n/5$ clusters are selected by the randomized systematic PPS sampling, and in the second stage sampling $m = 5$ units are selected within each selected cluster, independent among different clusters, by simple random sampling without replacement. The overall sample size is taken to be $n = 300$. For each of the four sampling methods, a total of 1000 simulation samples are selected from the finite population. For each selected sample, we calculate the survey weighted point estimator, and use the following six different methods to construct the $95\%$ confidence intervals for the quantile share $\theta_{\mbox{\tiny N}}(\tau_1,\tau_2)$ at each quantile level $(\tau_1,\tau_2)$: the GEL ratio confidence intervals using the standard chi-square limiting distributions for each of EL, ET, CU and GMM; the normal approximation confidence interval using the estimating equation based point estimator and a bootstrap estimate of the standard error (BC$_{n}$); and the bootstrap percentile interval with the estimating equation based point estimator (BC$_{p}$). In all simulations, the proposed method (Augmented SWEE) is compared with the method of Zhao et al. (2020) (Conventional SWEE) under an assumed standard chi-square limiting distribution for the GEL ratio statistic. The latter is based on $ g(Z,\theta,\xi_1,\xi_2)=Z\{I(\xi_1<Z\leq \xi_2)-\theta\} $ without the augmentation term, where $\xi_1$ and $\xi_2$ are the nuisance parameters with true values being the $\tau_1$th and $\tau_2$th population quantile of $\mathcal{F}_{\mbox{\tiny N}}$, respectively.
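For illustration, solving $\sum_{i\in\mathcal{S}}\pi_i^{-1}g(Z_i,\theta,\hat{\xi}_1,\hat{\xi}_2)=0$ in $\theta$ yields the closed-form survey-weighted quantile share $\hat{\theta}=\sum_{i\in\mathcal{S}}\pi_i^{-1}Z_iI(\hat{\xi}_1<Z_i\leq \hat{\xi}_2)/\sum_{i\in\mathcal{S}}\pi_i^{-1}Z_i$. A minimal sketch follows; the weighted-quantile convention used below is one of several and may differ from the exact definition in Section \ref{sec.exam}, and all names are ours.

```python
import numpy as np

def weighted_quantile(z, w, tau):
    """Left-continuous weighted quantile (one of several conventions)."""
    order = np.argsort(z)
    z, w = z[order], w[order]
    cdf = np.cumsum(w) / np.sum(w)
    return z[np.searchsorted(cdf, tau, side="left")] if tau > 0 else -np.inf

def quantile_share(z, w, tau1, tau2):
    """Share of the weighted total of z held by units between the
    tau1-th and tau2-th weighted quantiles; w plays the role of
    the design weights pi_i^{-1}."""
    xi1 = weighted_quantile(z, w, tau1)
    xi2 = weighted_quantile(z, w, tau2) if tau2 < 1 else np.inf
    mask = (z > xi1) & (z <= xi2)
    return float(np.sum(w[mask] * z[mask]) / np.sum(w * z))
```

With equal weights and $Z=(1,2,3,4)$, the four quarter shares are $(0.1,0.2,0.3,0.4)$ and sum to one, matching the partition of the total into disjoint quantile groups.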
Our proposed augmented SWEE method is based on $\psi(Z,\theta,\xi_1,\xi_2)=g(Z,\theta,\xi_1,\xi_2) -\xi_2\{I(Z\leq \xi_2)- \tau_2\}+\xi_1\{I(Z\leq \xi_1)- \tau_1\}$. \vspace{3mm} \noindent {\em Point estimation.} \ Table \ref{tab1} presents the Monte Carlo biases, standard deviations (SD) and standard errors (SE) of the survey weighted estimator of $\theta_{\mbox{\tiny N}}(\tau_1,\tau_2)$ at four different values of $(\tau_1,\tau_2)$ for each of the four sampling methods. Here, the SD is the square root of the simulated true variance of the point estimator and the SE is the square root of the bootstrap variance estimator. From Table \ref{tab1}, we have the following observations. (i) Under all the settings, the proposed augmented survey weighted estimators have negligible biases, and the values of SE are all close to the corresponding values of SD; (ii) The proposed estimator has values of SD similar to the conventional estimator without the augmentation term. \vspace{3mm} \noindent {\em Confidence intervals.} \ The finite sample performances of the six confidence intervals are evaluated using the following criteria: the average lengths (AL), the coverage probabilities (CP), the lower tail errors (LE) and the upper tail errors (UE), which are respectively computed as \[\begin{array}{lllllllllll} \mbox{AL}& =& \dfrac{1}{M}\sum\limits_{m=1}^M \Bigl\{P_{{\rm \mbox{\tiny U}}}^{(m)}(\tau_1,\tau_2) - P_{{\rm \mbox{\tiny L}}}^{(m)}(\tau_1,\tau_2)\Bigr\},\\ \mbox{CP} & =&\dfrac{1}{M}\sum\limits_{m=1}^M I\Bigl\{P_{{\rm \mbox{\tiny L}}}^{(m)}(\tau_1,\tau_2) < \theta_{\mbox{\tiny N}}(\tau_1,\tau_2) < P_{{\rm \mbox{\tiny U}}}^{(m)}(\tau_1,\tau_2)\Bigr\}, \\ \mbox{LE} & =& \dfrac{1}{M}\sum\limits_{m=1}^M I\Bigl\{\theta_{\mbox{\tiny N}}(\tau_1,\tau_2) \leq P_{{\rm \mbox{\tiny L}}}^{(m)}(\tau_1,\tau_2) \Bigr\},\\ \mbox{UE}& =&\dfrac{1}{M}\sum\limits_{m=1}^M I\Bigl\{ \theta_{\mbox{\tiny N}}(\tau_1,\tau_2) \geq P_{{\rm \mbox{\tiny U}}}^{(m)}(\tau_1,\tau_2)\Bigr\} , \end{array} \]
where $P_{{\rm \mbox{\tiny L}}}^{(m)}(\tau_1,\tau_2)$ and $P_{{\rm \mbox{\tiny U}}}^{(m)}(\tau_1,\tau_2)$ are respectively the lower and upper boundaries of the $95\%$ confidence interval computed from the $m$th simulation sample, and $M$ is the number of simulation runs. Tables \ref{tab2}--\ref{tab5} report the simulation results computed from $M=1000$ simulation runs. Using the proposed augmented estimating functions, the GEL and GMM confidence intervals have excellent performance in terms of all the criteria listed above. Both have better coverage accuracy than the normal approximation and bootstrap based confidence intervals. The proposed GEL and GMM confidence intervals have coverage probabilities which are closer to the nominal level, and have shorter lengths than those of normal approximation and bootstrap methods. Without using the augmentation term, the GEL and GMM approaches under an assumed standard chi-square limiting distribution give invalid results as coverage probabilities are completely off the target nominal value. \section{An Application} \label{sec.data} \setcounter{equation}{0} \noindent The proposed methods are further illustrated with an application to the New York City Social Indicators Survey (NYSIS). The NYSIS was a biennial survey of New York City residents conducted by the Columbia University School of Social Work. The core survey was designed to demonstrate the use of several social indicators to answer questions about inequality and wellbeing. The survey also measured the sources and extent of external supports from government, family and friends, community and religious programs, and employers. We use data from the 2002 NYSIS survey (Teitler et al., 2004), which examined the period between March and June, 2002. The original data were collected from 1501 adults through telephone interviews and a census individual weight was assigned to each survey respondent.
In this example we are interested in making statistical inference for quantile shares of the respondents' earnings in 2002. We consider a subset of the 2002 survey sample consisting of $n=956$ respondents who have positive earnings. For analytical purposes, we re-scale the survey weights by $w_i=n\tilde{w}_i/\sum_{j=1}^{n}\tilde{w}_j$ such that $\sum_{i=1}^{n}w_i=n$, where $\tilde{w}_i$ is the original weight for the $i$th survey respondent. Four different quantile shares are considered: $\theta_{\mbox{\tiny N}}(0,0.25)$, $\theta_{\mbox{\tiny N}}(0.25,0.5)$, $\theta_{\mbox{\tiny N}}(0.5,0.75)$ and $\theta_{\mbox{\tiny N}}(0.75,1)$. The point estimators and their standard errors are computed in the same way as in the simulation study described in Section \ref{sec.sim}. In line with the simulation study, we also use the same six methods to construct $95\%$ confidence intervals for quantile shares: EL, ET, CU, GMM, BC$_{n}$ and BC$_{p}$. We include both the augmented and the conventional SWEE methods for all the cases considered. The analysis results are reported in Tables \ref{tab6}--\ref{tab7}. In terms of point estimators, the two approaches produce very similar values. However, the $95\%$ confidence intervals from the conventional SWEE analysis are much wider than those from the augmented SWEE analysis. This is consistent with the theoretical results as well as results from the simulation studies. \section{Discussion} \label{sec.dis} \setcounter{equation}{0} \noindent Semiparametric modeling techniques using estimating equations combine the flexibility and robustness of nonparametric models and the interpretability of parametric models, and provide a powerful general framework for analytical use of complex survey data. In the presence of nuisance functions, however, the conventional two-step semiparametric estimation approach with simple plug-in estimators for the nuisance function is not only inefficient but also sensitive to the plug-in estimator.
Moreover, the conventional approach lacks asymptotic pivotalness and is difficult to use in practice. Our proposed augmented estimating functions tackle the weaknesses of the conventional approach in dealing with nuisance functions and complex survey data, and lead to more efficient estimation of the main parameters of interest and more desirable properties for confidence intervals and hypothesis tests. We show that the augmented two-step GEL ratio statistic is asymptotically pivotal under some commonly used survey designs, and the resulting maximum GEL estimators achieve the semiparametric efficiency bound. The inferential framework developed in this paper for design-based inferences using survey data is generally applicable to parameters defined through estimating equations in the presence of nuisance functions. The proposed methods do not follow from any work in the existing literature and are especially attractive to problems in economic studies on inequality measures. Applications of machine learning methods to the semiparametric estimation problems have received considerable attention in recent years. Under the model-based framework with independent samples, Chernozhukov et al. (2018) proposed double/debiased machine learning estimators for treatment and structural parameters; Chernozhukov et al. (2022) considered debiased and robust semiparametric GMM estimation for plug-in semiparametric estimating equations; Chang (2020) developed double/debiased machine learning estimators for difference-in-differences models. In the survey context, Dagdoug et al. (2021) proposed using random forests to construct a new class of model-assisted estimators for finite population parameters. In this paper, we show that the limiting distribution of the point estimator of the main parameters of interest is invariant to the first step plug-in estimator for the nuisance parameters under our proposed augmented approach.
How to deal with scenarios where machine learning methods are used in the first-step estimation under the current setting remains a challenging research question. The development of appropriate machine learning methods for general semiparametric models with survey data requires future investigation. \section*{Acknowledgments} \noindent The authors thank the Editor and anonymous referees for their comments and suggestions that led to major improvement of the paper. This research is supported by the scientific research fund for high-level talents of Yunnan province, the National Natural Science Foundation of China (grant NO. 12071416), Yunnan Fundamental Research Projects (grant NO. 202201AV070006), and grants from the Natural Sciences and Engineering Research Council of Canada and The Canadian Statistical Sciences Institute.
\section{Introduction} \IEEEPARstart{W}{ireless} communications systems employing multiple antennas have the advantage of increasing the overall throughput without increasing the required bandwidth. For this reason, multiple-antenna systems are at the core of several wireless communications standards such as WiFi, Long Term Evolution (LTE) and the fifth generation (5G). However, such wireless systems suffer from multiuser interference (MUI). In order to mitigate MUI, transmit processing techniques have been employed in the downlink (DL), allowing accurate recovery of the data at the receivers. In general, a precoding technique maps the symbols containing the information to the transmit antennas so that the information arrives at the receiver with reduced levels of MUI. Due to these benefits, linear \cite{mmimo,Joham2005,wence,grbd,wlrbd,rmmse} and non-linear \cite{Zu2014,Peel2005,bbprec} precoding techniques have been extensively reported in the literature. The design of effective precoders demands very accurate channel state information at the transmitter (CSIT), which is an extremely difficult task to accomplish in actual wireless systems. Hence, the transmitter typically only has access to partial or imperfect CSIT. As a result, the precoder cannot handle MUI as expected, resulting in residual MUI at the receiver. This residual MUI can severely degrade the performance of wireless systems since it scales with the transmit power employed at the base station (BS) \cite{Tse2005}. \subsection{Prior and related work} In this context, Rate Splitting (RS) has emerged as a novel approach that is capable of dealing with CSIT imperfection \cite{Clerckx2016} in an effective way. RS was initially proposed in \cite{Han1981} to deal with interference channels \cite{Carleial1978}, where independent transmitters send information to independent receivers \cite{Haghi2021}.
Since then, several studies have found that RS outperforms conventional schemes such as conventional precoding in spatial division multiple access (SDMA), power-domain Non-Orthogonal Multiple Access (NOMA) \cite{Mao2018} and even dirty paper coding (DPC) \cite{Mao2020}. Interestingly, it turns out that RS constitutes a generalized framework which has as special cases other transmission schemes such as SDMA, NOMA and multicasting \cite{Clerckx2020,Naser2020,Jaafar2020,Mao22}. The main advantage of RS is its capability to partially decode interference and partially treat interference as noise. To this end, RS splits the message of one or several users into a common message and a private message. The common message must be decoded by all users, which employ successive interference cancellation \cite{spa,mfsic,mbdf,bfidd,1bitidd}. On the other hand, the private messages are decoded only by their corresponding users. RS schemes have been shown to enhance the performance of wireless communication systems. In \cite{Yang2013} RS was extended to the broadcast channel (BC) of multiple-input single-output (MISO) systems, where it was shown that RS provides gains in terms of Degrees-of-Freedom (DoF) with respect to conventional multiuser multiple-input multiple-output (MU-MIMO) schemes under imperfect CSIT. Later in \cite{Hao2017b}, the DoF of a MIMO BC and IC was characterized. RS has eventually been shown in \cite{Piovano2017} to achieve the optimal DoF region when considering imperfect CSIT, outperforming the DoF obtained by SDMA systems, which decays in the presence of imperfect CSIT.
In \cite{Hao2017} a K-cell MISO IC has been considered and the scheme known as topological RS was presented. The topological RS scheme transmits multiple layers of common messages, so that the common messages are not decoded by all users but by groups of users. RS has been employed along with random vector quantization in \cite{Lu2018} to mitigate the effects of imperfect CSIT caused by finite feedback. In \cite{Flores2020,rsthp}, RS with common stream combining techniques has been developed in order to exploit multiple antennas at the receiver and to improve the overall sum-rate performance. A successive null-space precoder, which employs null-space basis vectors to adjust the inter-user-interference at the receivers, is proposed in \cite{Krishnamoorthy2021}. The optimization of the precoders along with the transmission of multiple common streams was considered in \cite{Mishra2022}. In \cite{Li2020}, RS with joint decoding has been explored. The authors of \cite{Li2020} devised precoding algorithms for an arbitrary number of users along with a stream selection strategy to reduce the number of precoded signals. Along with the design of the precoders, power allocation is also a fundamental part of RS-based systems. The benefits of RS are obtained only if an appropriate power allocation for the common stream is performed. However, the power allocation problem in MU-MIMO systems is an NP-hard problem \cite{Luo2008,Liu2011}, and the optimal solution can be found at the expense of an exponential growth in computational complexity. Therefore, suboptimal approaches that jointly optimize the precoder and the power allocation have been developed. Most works so far employ exhaustive search or complex optimization frameworks. These frameworks rely on the augmented WMMSE \cite{Mao2020,Maoinpress,JoudehClerckx2016,Kaulich2021,Mishra2022}, which is an extension of the WMMSE proposed in \cite{Christensen2008}.
This approach also requires an alternating optimization, which further increases the computational complexity. A simplified approach can be found in \cite{Dai2016a}, where closed-form expressions for RS-based massive MIMO systems are derived. However, this suboptimal solution is more appropriate for massive MIMO deployments. The high complexity of most optimal approaches makes them impractical to implement in large or real-time systems. For this reason, there is a strong demand for cost-effective power allocation techniques for RS systems. \subsection{Contributions} In this paper, we present novel, efficient, robust and adaptive power allocation techniques \cite{rapa} for RS-based MU-MIMO systems. In particular, we develop a robust adaptive power allocation (APA-R) strategy based on stochastic gradient learning \cite{bertsekas,jidf,smtvb,smce} and the minimization of the mean-square error (MSE) between the transmitted common and private symbols of the RS system and the received signal. We incorporate knowledge of the variance of the channel errors to cope with imperfect CSIT and adjust power levels in the presence of uncertainty. When the knowledge of the variance of the channel errors is not exploited, the proposed APA-R reduces to the proposed adaptive power allocation (APA) algorithm. An analysis of the convexity and stability of the proposed power allocation algorithms along with a study of their computational complexity and theoretical bounds relating the power allocation strategies are developed. Numerical results show that an RS system employing adaptive power allocation outperforms conventional MU-MIMO systems in terms of sum-rate under the imperfect CSIT assumption. The contributions of this work can be summarized as: \begin{itemize} \item Cost-effective APA-R and APA techniques for power allocation are proposed based on stochastic gradient recursions and knowledge of the variance of the channel errors for RS-based and standard MU-MIMO systems.
\item An analysis of convexity and stability of the proposed power allocation techniques along with a bound on the MSE of APA and APA-R and a study of their computational complexity. \item A simulation study of the proposed APA and APA-R, and the existing power allocation techniques for RS-based and standard MU-MIMO systems. \end{itemize} \subsection{Organization} The rest of this paper is organized as follows. Section II describes the mathematical model of an RS-based MU-MIMO system. In Section III the proposed APA-R technique is presented, the proposed APA approach is obtained as a particular case, and sum-rate expressions are derived. In Section IV, we present an analysis of convexity and stability of the proposed APA and APA-R techniques along with a bound on the MSE of APA and APA-R and a study of their computational complexity. Simulation results are illustrated and discussed in Section V. Finally, Section VI presents the conclusions of this work. \subsection{Notation} Column vectors are denoted by lowercase boldface letters. The vector $\mathbf{a}_{i,*}$ stands for the $i$th row of matrix $\mathbf{A}$. Matrices are denoted by uppercase boldface letters. Scalars are denoted by standard letters. The superscript $\left(\cdot\right)^{\text{T}}$ represents the transpose of a matrix, whereas the notation $\left(\cdot\right)^H$ stands for the conjugate transpose of a matrix. The operators $\lVert \cdot \rVert$ and $\mathbb{E}_x\left[\cdot\right]$ represent the Euclidean norm and the expectation w.r.t. the random variable $x$, respectively. The trace operator is given by $\text{tr}\left(\cdot\right)$. The Hadamard product is denoted by $\odot$. The operator $\text{diag}\left(\mathbf{a}\right)$ produces a diagonal matrix with the coefficients of $\mathbf{a}$ located in the main diagonal. \section{System Model} Let us consider the RS-based MU-MIMO system architecture depicted in Fig.
\ref{System Model RS}, where the BS is equipped with $N_t$ antennas, serves $K$ users and the $k$th UE is equipped with $N_k$ antennas. Let us denote by $N_r$ the total number of receive antennas. Then, $N_r=\sum_{k=1}^K N_k$. For simplicity, the message intended for the $k$th user is split into a common message and a private message. Then, the messages are encoded and modulated. The transmitter sends one common stream and a total of $M$ private streams, with $M\leq N_r$. The set $\mathcal{M}_k$ contains the $M_k$ private streams that are intended for user $k$, where $M_k\leq N_k$. It follows that $M=\sum_{k=1}^K M_k$. \begin{figure}[htb!] \begin{center} \includegraphics[scale=0.45]{RS_system_model.eps} \vspace{-1.0em} \caption{RS MU-MIMO architecture.} \label{System Model RS} \end{center} \vspace{-2em} \end{figure} The vector $\mathbf{s}^{\left(\text{RS}\right)}=\left[s_c,\mathbf{s}_p^{\text{T}}\right]^{\text{T}} \in \mathbb{C}^{M+1}$, which is assumed i.i.d. with zero mean and covariance matrix equal to the identity matrix, contains the information transmitted to all users, where $s_c$ is the common symbol and $\mathbf{s}_p=\left[\mathbf{s}_1^{\text{T}},\mathbf{s}_2^{\text{T}},\cdots,\mathbf{s}_K^{\text{T}}\right]^{\text{T}}$ contains the private symbols of all users. Specifically, the vector $\mathbf{s}_k \in \mathbb{C}^{M_k}$ contains the private streams intended for the $k$th user. The system is subject to a transmit power constraint given by $\mathbb{E}\left[\lVert\mathbf{x}^{\left(\text{RS}\right)}\rVert^2\right]\leq E_{tr}$, where $\mathbf{x}^{\left(\text{RS}\right)}\in \mathbb{C}^{N_t}$ is the transmitted vector and $E_{tr}$ denotes the total available power.
The transmitted vector can be expressed as follows: \begin{align} \mathbf{x}^{\left(\text{RS}\right)}=&\mathbf{P}^{\left(\text{RS}\right)}\mathbf{A}^{\left(\text{RS}\right)}\mathbf{s}^{\left(\text{RS}\right)}=a_c s_c \mathbf{p}_c+\sum_{m=1}^{M}a_m s_m \mathbf{p}_m, \label{Transmit Signal} \end{align} where $\mathbf{A}^{\left(\text{RS}\right)}\in \mathbb{R}^{\left(M+1\right)\times \left(M+1\right)}$ represents the power allocation matrix and $\mathbf{P}^{\left(\text{RS}\right)}=[\mathbf{p}_c,\mathbf{p}_1,\cdots ,\mathbf{p}_M] \in \mathbb{C}^{N_t \times \left(M+1\right)}$ is used to precode the vector of symbols $\mathbf{s}^{\left(\text{RS}\right)}$. Specifically, $\mathbf{A}^{\left(\text{RS}\right)}=\text{diag}\left(\mathbf{a}^{\left(\text{RS}\right)}\right)$ and $\mathbf{a}^{\left(\text{RS}\right)}=\left[a_c, a_1,\cdots,a_M\right]^{\text{T}}\in \mathbb{R}^{M+1}$, where $a_c$ denotes the power allocated to the common stream and $a_m$ allocates power to the $m$th private stream. Without loss of generality, we assume that the columns of the precoders are normalized to have unit norm. The established model leads us to the received signal described by \begin{equation} \mathbf{y}=\mathbf{H}\mathbf{P}^{\left(\text{RS}\right)}\text{diag}\left(\mathbf{a}^{\left(\text{RS}\right)}\right)\mathbf{s}^{\left(\text{RS}\right)}+\mathbf{n}, \label{Receive Signal Complete} \end{equation} where $\mathbf{n}=\left[\mathbf{n}_1^{\text{T}},\mathbf{n}_2^{\text{T}},\cdots,\mathbf{n}_K^{\text{T}}\right]^{\text{T}} \in \mathbb{C}^{N_r}$ represents the uncorrelated noise vector, which follows a complex normal distribution, i.e., $\mathbf{n}\sim \mathcal{CN}\left(\mathbf{0},\sigma_n^2\mathbf{I}\right)$. We assume that the noise and the symbols are uncorrelated, which is usually the case in real systems.
The matrix $\mathbf{H}=\left[\mathbf{H}_1^{\text{T}},\mathbf{H}_2^{\text{T}},\cdots,\mathbf{H}_K^{\text{T}}\right]^{\text{T}}\in \mathbb{C}^{N_r\times N_t}$ denotes the channel between the BS and the user terminals. Specifically, $\mathbf{n}_k$ denotes the noise affecting the $k$th user and the matrix $\mathbf{H}_k\in \mathbb{C}^{N_k\times N_t}$ represents the channel between the BS and the $k$th user. The imperfections in the channel estimate are modelled by the random matrix $\tilde{\mathbf{H}}$. Each coefficient of the $i$th row of $\tilde{\mathbf{H}}$ follows a Gaussian distribution with variance equal to $\sigma_{e,i}^2$, so that $\mathbb{E}\left[\tilde{\mathbf{h}}_{i,*}^H\tilde{\mathbf{h}}_{i,*}\right]=\sigma_{e,i}^2\mathbf{I}\quad \forall i \in\left[1,N_r\right]$. Then, the channel matrix can be expressed as $\mathbf{H}=\hat{\mathbf{H}}+\tilde{\mathbf{H}}$, where the channel estimate is given by $\hat{\mathbf{H}}$. From \eqref{Receive Signal Complete} we can obtain the received signal of user $k$, which is given by \begin{align} \mathbf{y}_k=&a_c s_c \mathbf{H}_k\mathbf{p}_c+ \mathbf{H}_k\sum_{i\in \mathcal{M}_k}a_i s_i\mathbf{p}_i+\mathbf{H}_k\sum\limits_{\substack{l=1\\l \neq k}}^{K}\sum\limits_{j\in \mathcal{M}_l}a_j s_j\mathbf{p}_j+ \mathbf{n}_k.\label{Receive Signal per user} \end{align} Note that the RS architecture contains the conventional MU-MIMO as a special case where no message is split and therefore $a_c$ is set to zero. Then, the model boils down to that of a conventional MU-MIMO system, where the received signal at the $k$th user is given by \begin{equation} \mathbf{y}_k=\mathbf{H}_k\sum_{i\in \mathcal{M}_k}a_i s_i\mathbf{p}_i+\mathbf{H}_k\sum\limits_{\substack{l=1\\l \neq k}}^{K}\sum\limits_{j\in \mathcal{M}_l}a_j s_j\mathbf{p}_j+\mathbf{n}_k.\label{Receive Signal per user convetinoal MIMO} \end{equation} In what follows, we will focus on the development of power allocation techniques that can cost-effectively compute $a_c$ and $a_m, m = 1, \ldots, M$.
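The signal model in \eqref{Transmit Signal} and \eqref{Receive Signal Complete} can be sketched numerically as follows. This is a minimal illustration with assumed dimensions ($N_t=4$, $M=N_r=2$, single-antenna users) and random unit-norm precoders; the precoder design itself is outside the scope of this model.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, M = 4, 2                      # transmit antennas and private streams (assumed)
Nr, E_tr, sigma_n2 = M, 1.0, 0.1  # single-antenna users, total power, noise power

def crandn(*shape):
    """Circularly-symmetric complex Gaussian samples with unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = crandn(Nr, Nt)                              # channel between BS and users

# P = [p_c, p_1, ..., p_M] with unit-norm columns, as assumed in the text.
P = crandn(Nt, M + 1)
P /= np.linalg.norm(P, axis=0, keepdims=True)

# a = [a_c, a_1, ..., a_M]; with unit-norm precoder columns the constraint
# E[||x||^2] = tr(P diag(a o a) P^H) reduces to sum_i a_i^2 = E_tr.
a = np.ones(M + 1)
a *= np.sqrt(E_tr / np.sum(a**2))

s = crandn(M + 1)                               # [s_c, s_1, ..., s_M]
x = P @ (a * s)                                 # transmitted vector
y = H @ x + np.sqrt(sigma_n2) * crandn(Nr)      # received signal
```

Because the precoder columns have unit norm, scaling `a` to unit Euclidean norm is all that is needed to meet the transmit power constraint in this sketch.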
\section{Proposed Power Allocation Techniques} In this section, we detail the proposed power allocation techniques. In particular, we start with the derivation of the APA-R approach and then present the APA technique as a particular case of the APA-R approach. \subsection{Robust Adaptive Power Allocation}\label{c5 section robust power allocation RS} Here, a robust adaptive power allocation algorithm, denoted as APA-R, is developed to perform power allocation in the presence of imperfect CSIT. The key idea is to incorporate knowledge about the variance of the channel uncertainty \cite{locsme,okspme} into an adaptive recursion to allocate the power among the streams. The minimization of the MSE between the received signal and the transmitted symbols is adopted as the criterion to derive the algorithm. Let us consider the model established in \eqref{Receive Signal Complete} and denote the fraction of power allocated to the common stream by the parameter $\delta$, i.e., $a_c^2=\delta E_{tr}$. It follows that the available power for the private streams is reduced to $\left(1-\delta\right)E_{tr}$. We remark that the length of $\mathbf{s}^{\left(\text{RS}\right)}$ is greater than that of $\mathbf{y}^{\left(\text{RS}\right)}$ since the common symbol is superimposed on the private symbols. Therefore, we consider the vector $\mathbf{y}'=\mathbf{T}\mathbf{y}^{\left(\text{RS}\right)}$, where $\mathbf{T}\in\mathbb{R}^{\left(M+1\right)\times M}$ is a transformation matrix employed to ensure that the dimensions of $\mathbf{s}^{\left(\text{RS}\right)}$ and $\mathbf{y}'$ match, and is given by \begin{equation} \mathbf{T}=\begin{bmatrix} 1 &1 &\cdots &1\\ 1 &0 &\cdots &0\\ 0 &1 &\cdots &0\\ \vdots &\vdots &\ddots &\vdots\\ 0 &0 &\cdots &1 \end{bmatrix}. \end{equation} All the elements in the first row of matrix $\mathbf{T}$ are equal to one in order to take into account the common symbol obtained at all receivers.
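The structure of $\mathbf{T}$ can be sketched as follows; `combining_matrix` is a helper name introduced only for this illustration.

```python
import numpy as np

def combining_matrix(M):
    """T in R^{(M+1) x M}: the first row of ones collects the common symbol
    observed at all receive antennas; the identity keeps the private part."""
    return np.vstack([np.ones((1, M)), np.eye(M)])

T = combining_matrix(3)
y = np.array([1.0, 2.0, 3.0])   # stacked per-antenna samples y_1, ..., y_M
y_prime = T @ y                 # y' = [y_1 + y_2 + y_3, y_1, y_2, y_3]
```

Applying $\mathbf{T}$ simply prepends the sum of all received samples, so $\mathbf{y}'$ has $M+1$ entries and can be compared directly with $\mathbf{s}^{\left(\text{RS}\right)}$.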
As a result we obtain the combined receive signal of all users. It follows that \begin{equation} \mathbf{y}'=\begin{bmatrix} y_c\\ y_1\\ \vdots\\ y_M \end{bmatrix}=\begin{bmatrix} \sum_{i=1}^{M} y_i\\ y_1\\ \vdots\\ y_M \end{bmatrix}, \end{equation} where the received signal at the $i$th antenna is described by \begin{equation} y_i=a_c s_c\left(\hat{\mathbf{h}}_{i,*}+\tilde{\mathbf{h}}_{i,*}\right)\mathbf{p}_c+\sum_{j=1}^{M}a_j s_j \left(\hat{\mathbf{h}}_{i,*}+\tilde{\mathbf{h}}_{i,*}\right)\mathbf{p}_j+ n_i. \end{equation} Let us now consider the proposed robust power allocation problem for imperfect CSIT scenarios. By including the error of the channel estimate, the robust power allocation problem can be formulated as the constrained optimization given by \begin{equation} \begin{gathered} \min_{\mathbf{a}} \mathbb{E}\left[\lVert\mathbf{s}^{\left(\text{RS}\right)}-\mathbf{y}'\left(\mathbf{H}\right)\rVert^2| \hat{\mathbf{H}}\right]\\ \text{s.t.}~~\text{tr}\left(\mathbf{P}^{\left(\text{RS}\right)}\text{diag}\left(\mathbf{a}^{\left(\text{RS}\right)}\odot \mathbf{a}^{\left(\text{RS}\right)}\right)\mathbf{P}^{\left(\text{RS}\right)^{H}}\right)=E_{tr},\label{Obejctive funtion robust} \end{gathered} \end{equation} which can be solved by first relaxing the constraint, using an adaptive learning recursion and then enforcing the constraint. In this work, we choose the MSE as the objective function due to its convex property and mathematical tractability, which help to find an appropriate solution through algorithms. The objective function is convex as illustrated by Fig. \ref{fig:MSECurve} and analytically shown in Section \ref{analysis}. In Fig. \ref{fig:MSECurve}, we plot the objective function using two precoders, namely the zero-forcing (ZF) and the matched filter (MF) precoders \cite{Joham2005}, where three private streams and one common stream are transmitted and the parameter $\delta$ varies with uniform power allocation across private streams. 
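The behavior illustrated by the objective-function plot can be reproduced approximately with a short Monte Carlo sketch. The matched-filter private precoders and the choice of the common precoder as the normalized sum of the private directions are assumptions of this illustration, not a prescription of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt = M = 3
E_tr, sigma_n2 = 1.0, 1.0

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = crandn(M, Nt)

# Matched-filter private precoders with unit-norm columns; the common
# precoder is taken (as an assumption) as their normalized sum.
P_p = H.conj().T
P_p /= np.linalg.norm(P_p, axis=0, keepdims=True)
p_c = P_p.sum(axis=1)
p_c /= np.linalg.norm(p_c)
P = np.column_stack([p_c, P_p])
T = np.vstack([np.ones((1, M)), np.eye(M)])   # combining matrix

def mse(delta, n_mc=4000):
    """Monte Carlo estimate of E[||s - y'||^2] for a common/private split delta,
    with uniform power allocation across the private streams."""
    a = np.concatenate([[np.sqrt(delta * E_tr)],
                        np.sqrt((1 - delta) * E_tr / M) * np.ones(M)])
    err = 0.0
    for _ in range(n_mc):
        s = crandn(M + 1)
        n = np.sqrt(sigma_n2) * crandn(M)
        err += np.sum(np.abs(s - T @ (H @ (P @ (a * s)) + n))**2)
    return err / n_mc

curve = [mse(d) for d in np.linspace(0.0, 1.0, 11)]
```

Sweeping `delta` over $[0,1]$ traces a bowl-shaped curve of the objective, consistent with the convexity discussed in the text.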
\begin{figure} \centering \includegraphics[scale=0.4]{MSECurve.eps} \vspace{-1.5em}\caption{Objective function considering a MU-MIMO system with $N_t=3$, $K=3$, and $\sigma_n^2=1$.}\vspace{-1.5em} \label{fig:MSECurve} \end{figure} To solve \eqref{Obejctive funtion robust} we need to expand the terms and evaluate the expected value. Let us consider that the square error is equal to $\varepsilon=\lVert \mathbf{s}^{\left(\text{RS}\right)}-\mathbf{y}' \rVert^2$. Then, the MSE is given by \begin{align} \mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]=&-2a_c\sum_{i=1}^M\Re{\left\{\hat{\phi}^{\left(i,c\right)}\right\}}-2\sum_{j=1}^M a_j \Re{\left\{\hat{\phi}^{\left(j,j\right)}\right\}}+2\sum_{j=1}^{M}a_j^2\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right)\nonumber\\ &+2a_c^2\left(\sum_{i=1}^M\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right)+2\sum_{i=1}^{M-1}\sum_{q=i+1}^{M}\sum_{r=1}^M a_r^2\Re\left\{\hat{\phi}^{\left(i,r\right)^*}\hat{\phi}^{\left(q,r\right)}\right\}\nonumber\\ &+\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M a_c^2\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}+M\left(1+2\sigma_n^2\right)+1,\label{mean square error APA robust} \end{align} where $\hat{\phi}^{\left(i,c\right)}=\hat{\mathbf{h}}_{i,*}\mathbf{p}_c$ and $\hat{\phi}^{\left(i,l\right)}=\hat{\mathbf{h}}_{i,*}\mathbf{p}_l$ for all $i,l \in \left[1,M\right]$. The proof of \eqref{mean square error APA robust} can be found in Appendix \ref{Appendix MSE APA-R}.
The partial derivatives of \eqref{mean square error APA robust} with respect to ${a}_c$ and ${a}_i$ are expressed by \begin{align} \frac{\partial\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_c}=& 2\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M a_c\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}-2\sum_{i=1}^M\Re{\left\{\hat{\phi}^{\left(i,c\right)}\right\}}+4a_c\left(\sum_{i=1}^M\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right),\label{gradient robust apa ac} \end{align} \begin{align} \frac{\partial\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_i}=&4a_i\sum_{l=1}^{M-1}\sum_{q=l+1}^{M} \Re\left[\hat{\phi}^{\left(l,i\right)^*}\hat{\phi}^{\left(q,i\right)}\right]-2 \Re{\left\{\hat{\phi}^{\left(i,i\right)}\right\}}+4a_i\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,i\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_i\rVert^2\right).\label{gradient robust apa ai} \end{align} The partial derivatives given by \eqref{gradient robust apa ac} and \eqref{gradient robust apa ai} represent the gradient of the MSE with respect to the power allocation coefficients. With the obtained gradient we can employ a gradient descent algorithm, which is an iterative procedure that finds a local minimum of a differentiable function. The key idea is to take small steps in the opposite direction of the gradient, since this is the direction of steepest descent. Note that the objective function is convex, so any local minimum found is also a global minimum.
Then, the recursions of the proposed APA-R technique are given by \begin{align} a_c\left[t+1\right]&=a_c\left[t\right]-\mu\frac{\partial\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_c},\nonumber\\ a_i\left[t+1\right]&=a_i\left[t\right]-\mu\frac{\partial\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_i}, \end{align} where the parameter $\mu$ represents the learning rate of the adaptive algorithm. At each iteration, the power constraint is checked. Then, the coefficients are scaled by a power scaling factor as $\mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]=\beta\mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]$, where $\beta=\sqrt{\frac{1}{\textrm{tr}\left(\textrm{diag}\left(\mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]\odot \mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]\right)\right)}}$ to ensure that the power constraint is satisfied. Algorithm \ref{algorithm RS APA} summarizes the proposed APA-R algorithm. \begin{algorithm}[t!]
\normalsize \SetAlgoLined given $\hat{\mathbf{H}}$, $\mathbf{P}$, and $\mu$\; $\mathbf{a}\left[1\right]=\mathbf{0}$\; \For{$n=2$ \KwTo $I_t$}{ $\frac{\partial\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_c}= 2\sum\limits_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M a_c\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}-2\sum\limits_{i=1}^M\Re{\left\{\hat{\phi}^{\left(i,c\right)}\right\}}+4a_c\left(\sum\limits_{i=1}^M\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right)$\; \vspace{0.1cm} $\frac{\partial\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_i}=4a_i\sum\limits_{l=1}^{M-1}\sum\limits_{q=l+1}^{M} \Re\left[\hat{\phi}^{\left(l,i\right)^*}\hat{\phi}^{\left(q,i\right)}\right]-2 \Re{\left\{\hat{\phi}^{\left(i,i\right)}\right\}}+4a_i\left(\sum\limits_{l=1}^M\lvert\hat{\phi}^{\left(l,i\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_i\rVert^2\right)$\; \vspace{0.1cm} $a_c\left[n\right]=a_c\left[n-1\right]-\mu\frac{\partial\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_c}$\; \vspace{0.1cm} $a_i\left[n\right]=a_i\left[n-1\right]-\mu\frac{\partial\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_i}$\; \vspace{0.1cm} \If{$\textrm{\rm tr}\left(\textrm{\rm diag}\left(\mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]\odot \mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]\right)\right)\neq 1$}{ \vspace{0.1cm} $\beta=\sqrt{\frac{1}{\textrm{tr}\left(\textrm{diag}\left(\mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]\odot \mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]\right)\right)}}$\; \vspace{0.1cm} $\mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]=\beta\mathbf{a}^{\left(\rm{RS}\right)}\left[n\right]$\; } } \caption{Robust Adaptive Power allocation} \label{algorithm RS APA} \end{algorithm}\vspace{-0.5em} \subsection{Adaptive Power Allocation} In this section, a simplified version of the proposed APA-R algorithm is derived. 
The main objective is to reduce the complexity of each recursion of the adaptive algorithm and avoid the burden of computing the statistical parameters of the imperfect CSIT, while reaping the benefits of RS systems. The power allocation problem can be reformulated as the constrained optimization problem given by \begin{equation} \begin{gathered} \min_{\mathbf{a}} \mathbb{E}\left[\lVert\mathbf{s}^{\left(\text{RS}\right)}-\mathbf{y}'\rVert^2\right]\\ \text{s.t.}~~\text{tr}\left(\mathbf{P}^{\left(\text{RS}\right)}\text{diag}\left(\mathbf{a}^{\left(\text{RS}\right)}\odot \mathbf{a}^{\left(\text{RS}\right)}\right)\mathbf{P}^{\left(\text{RS}\right)^{H}}\right)=E_{tr}. \end{gathered} \label{Objective function apa} \end{equation} In this case, the MSE is equivalent to \begin{align} \mathbb{E}\left[\varepsilon\right]=&-2a_c\sum_{j=1}^{M}\Re\left\{\phi^{\left(j,c\right)}\right\}-2\sum_{l=1}^M a_l\Re\left\{\phi^{\left(l,l\right)}\right\}+2\left(\sum_{l=1}^M a_c^2\lvert\phi^{\left(l,c\right)}\rvert^2+\sum_{i=1}^M\sum_{j=1}^M a_j^2\lvert\phi^{\left(i,j\right)}\rvert^2\right)\nonumber\\ &+2\sum_{i=1}^{M-1}\sum_{q=i+1}^{M}\sum_{r=1}^M a_r^2\Re\left\{\phi^{\left(i,r\right)^*}\phi^{\left(q,r\right)}\right\}+\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M a_c^2\phi^{\left(l,c\right)^*}\phi^{\left(j,c\right)}+M\left(1+2\sigma_n^2\right)+1, \label{mean square error APA RS} \end{align} where we consider that $\phi^{\left(i,c\right)}=\mathbf{h}_{i,*}\mathbf{p}_c$ and $\phi^{\left(i,l\right)}=\mathbf{h}_{i,*}\mathbf{p}_l$ for all $i,l \in \left[1,M\right]$. The proof of \eqref{mean square error APA RS} can be found in Appendix \ref{Appendix MSE APA}.
Taking the partial derivatives of \eqref{mean square error APA RS} with respect to the coefficients of $\mathbf{a}^{\left(\text{RS}\right)}$ we arrive at \begin{align} \frac{\partial\mathbb{E}\left[\varepsilon\right]}{\partial a_c}&=4a_c \sum_{i=1}^{M}\lvert\phi^{\left(i,c\right)}\rvert^2+2a_c\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M \phi^{\left(l,c\right)^*}\phi^{\left(j,c\right)}-2\sum_{q=1}^{M}\Re\left[\phi^{\left(q,c\right)}\right],\label{gradient RS perfect CSIT common stream}\\ \frac{\partial\mathbb{E}\left[\varepsilon\right]}{\partial a_i}&=4a_i\sum_{j=1}^M \lvert\phi^{\left(j,i\right)}\rvert^2+4a_i\sum_{r=1}^{M-1}\sum_{q=r+1}^{M} \Re\left[\phi^{\left(r,i\right)^*}\phi^{\left(q,i\right)}\right]-2 \Re\left[\phi^{\left(i,i\right)}\right].\label{gradient RS perfect CSIT private streams} \end{align} The power allocation coefficients are adapted using \eqref{gradient RS perfect CSIT common stream} and \eqref{gradient RS perfect CSIT private streams} in the following recursions: \begin{align} a_c\left[t+1\right]&=a_c\left[t\right]-\mu\frac{\partial\mathbb{E}\left[\varepsilon\right]}{\partial a_c},\nonumber\\ a_i\left[t+1\right]&=a_i\left[t\right]-\mu\frac{\partial\mathbb{E}\left[\varepsilon\right]}{\partial a_i}.\label{update equation for perfect CSIT} \end{align} \subsection{Sum-Rate Performance} In this section, we derive closed-form expressions to compute the sum-rate performance of the proposed algorithms. Specifically, we employ the ergodic sum-rate (ESR) as the main performance metric. Before the computation of the ESR we need to find the average power of the received signal, which is given by \begin{equation} \mathbb{E}\left[\lvert y_k\rvert^2\right]=a_c^2\lvert \mathbf{h}_k^{\textrm{T}}\mathbf{p}_c\rvert^2+\sum_{i=1}^K a_i^2\lvert \mathbf{h}_k^{\textrm{T}}\mathbf{p}_i\rvert^2+\sigma_n^2.
\end{equation} It follows that the instantaneous SINR when decoding the common symbol is given by \begin{align} \gamma_{c,k}&=\frac{a_c^2\lvert \mathbf{\hat{h}}_k^{\textrm{T}}\mathbf{p}_c\rvert^2}{\sum\limits_{i=1}^K a_i^2\lvert \mathbf{h}_k^{\textrm{T}}\mathbf{p}_i\rvert^2+\sigma_n^2}.\label{instantaneous SINR common rate} \end{align} Once the common symbol is decoded, we apply SIC to remove it from the received signal. Afterwards, we calculate the instantaneous SINR when decoding the $k$th private stream, which is given by \begin{equation} \gamma_k=\frac{a_k^2\lvert\mathbf{\hat{h}}_k^{\textrm{T}}\mathbf{p}_k\rvert^2}{\sum\limits_{\substack{i=1\\i\neq k}}^K a_i^2\lvert\mathbf{h}_k^{\textrm{T}}\mathbf{p}_i\rvert^2+\sigma_n^2}.\label{instantaneous SINR private rate} \end{equation} Considering Gaussian signaling, the instantaneous common rate can be found with the following equation: \begin{equation} R_{c,k}=\log_2\left(1+\gamma_{c,k}\right).\label{instantaneous common rate per user} \end{equation} The private rate of the $k$th stream is given by \begin{equation} R_{k}=\log_2\left(1+\gamma_{k}\right).\label{instantaneous private rate} \end{equation} Since imperfect CSIT is considered, the instantaneous rates are not achievable. For this reason, we employ the average sum-rate (ASR) to average out the effect of the error in the channel estimate. The average common rate and the average private rate are given by \begin{align} \bar{R}_{c,k}&=\mathbb{E}\left[R_{c,k}|\mathbf{\hat{H}}\right] & \bar{R}_{k}=\mathbb{E}\left[R_{k}|\mathbf{\hat{H}}\right], \end{align} respectively.
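For single-antenna users, the SINR and rate computations above can be sketched as follows. The helper name `rs_instantaneous_sum_rate` and the toy orthogonal channel are assumptions of this illustration; taking the minimum of the common rates over users anticipates the ergodic sum-rate expression used later.

```python
import numpy as np

def rs_instantaneous_sum_rate(H_hat, H, P, a, sigma_n2):
    """Instantaneous RS sum-rate for K single-antenna users.
    Columns of P are [p_c, p_1, ..., p_K]; a = [a_c, a_1, ..., a_K]."""
    K = H.shape[0]
    p_c, P_p = P[:, 0], P[:, 1:]
    a_c, a_p = a[0], a[1:]
    R_c, R_p = [], []
    for k in range(K):
        interf = sum(a_p[i]**2 * abs(H[k] @ P_p[:, i])**2 for i in range(K))
        # Common stream is decoded first, with all private streams as interference.
        R_c.append(np.log2(1 + a_c**2 * abs(H_hat[k] @ p_c)**2 / (interf + sigma_n2)))
        # After SIC removes the common stream, only other private streams interfere.
        own = a_p[k]**2 * abs(H_hat[k] @ P_p[:, k])**2
        R_p.append(np.log2(1 + own / (interf - a_p[k]**2 * abs(H[k] @ P_p[:, k])**2
                                      + sigma_n2)))
    # The common rate must be decodable by every user, hence the minimum.
    return min(R_c) + sum(R_p)

# Toy check: orthogonal channels, no common power, unit noise -> 1 bit per user.
H = np.eye(2)
P = np.array([[1/np.sqrt(2), 1.0, 0.0],
              [1/np.sqrt(2), 0.0, 1.0]])
rate = rs_instantaneous_sum_rate(H, H, P, np.array([0.0, 1.0, 1.0]), 1.0)
```

With $a_c=0$ the sketch reduces to the conventional MU-MIMO sum-rate, matching the special case noted in the system model.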
With the ASR we can obtain the ergodic sum-rate (ESR), which quantifies the performance of the system over a large number of channel realizations and is given by \begin{equation} S_r=\min_{k}\mathbb{E}\left[\bar{R}_{c,k}\right]+\sum_{k=1}^K \mathbb{E}\left[\bar{R}_k\right].\label{system ergodic sum rate} \end{equation} \section{Analysis} \label{analysis} In this section, we carry out a convexity analysis and a statistical analysis of the proposed algorithms along with an assessment of their computational complexity in terms of floating point operations (FLOPs). Moreover, we derive a bound that establishes that the proposed APA-R algorithm is superior or comparable to the proposed APA algorithm. \subsection{Convexity Analysis} In this section, we perform a convexity analysis of the optimization problem that gives rise to the proposed APA-R and APA algorithms. In order to establish convexity, we need to compute the second derivatives of $\mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]$ with respect to $a_c$ and $a_i$, and then check whether they are greater than zero \cite{bertsekas}, i.e., \begin{equation} \frac{\partial^2 \mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_c \partial a_c} >0 ~{\rm and} ~ \frac{\partial^2 \mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_i \partial a_i} >0, ~ i=1,2, \ldots, M. \end{equation} {Let us first compute $\frac{\partial^2 \mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_c \partial a_c} $ using the results in \eqref{gradient robust apa ac}: \begin{equation} \begin{split} \frac{\partial}{\partial a_c} \Bigg( \frac{\partial \mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{ \partial a_c} \Bigg) & = \frac{\partial}{\partial a_c} \Bigg(2\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M a_c\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}-2\sum_{i=1}^M\Re{\left\{\hat{\phi}^{\left(i,c\right)}\right\}}+4a_c
\left(\sum_{i=1}^M\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2 \right) \Bigg) \\ & = 2\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M \hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}+4\left(\sum_{i=1}^M\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right). \label{2ndderiv_ac} \end{split} \end{equation} Now let us compute $\frac{\partial^2 \mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{\partial a_i \partial a_i} $ using the results in \eqref{gradient robust apa ai}: \begin{equation} \begin{split} \frac{\partial}{\partial a_i} \Bigg( \frac{\partial \mathbb{E}\left[\varepsilon|\hat{\mathbf{H}}\right]}{ \partial a_i} \Bigg) & = \frac{\partial}{\partial a_i} \Bigg(4a_i\sum_{l=1}^{M-1}\sum_{q=l+1}^{M} \Re\left[\hat{\phi}^{\left(l,i\right)^*}\hat{\phi}^{\left(q,i\right)}\right]-2 \Re{\left\{\hat{\phi}^{\left(i,i\right)}\right\}}+4a_i\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,i\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_i\rVert^2\right) \Bigg) \\ & = 4\sum_{l=1}^{M-1}\sum_{q=l+1}^{M} \Re\left[\hat{\phi}^{\left(l,i\right)^*}\hat{\phi}^{\left(q,i\right)}\right]+4\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,i\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_i\rVert^2\right). \label{2ndderiv_ai} \end{split} \end{equation} Since the second derivatives in \eqref{2ndderiv_ac} and \eqref{2ndderiv_ai} are positive, the objective function associated with APA-R is strictly convex \cite{bertsekas}. The power constraint is also strictly convex and only scales the powers to be adjusted. In the case of the APA algorithm, the objective function does not employ knowledge of the error variance $\sigma_{e,i}^2$ and remains strictly convex. \subsection{Bound on the MSE of APA and APA-R} Let us now show that the proposed APA-R power allocation produces a lower MSE than that of the proposed APA power allocation.
The MSE obtained in \eqref{mean square error APA RS} assumes that the transmitter has perfect knowledge of the channel. Under this assumption, the optimal coefficients $\mathbf{a}_{o}$ that minimize the error are found. However, under imperfect CSIT the transmitter is unaware of $\tilde{\mathbf{H}}$ and the adaptive algorithm performs the power allocation by employing $\hat{\mathbf{H}}$ instead of $\mathbf{H}$. This results in a power allocation given by $\hat{\mathbf{a}}^{\left(\text{APA}\right)}$, which leads to an increase in the MSE. It follows that \begin{equation} \varepsilon\left(\mathbf{H},\mathbf{a}_o\right)\leq\varepsilon\left(\mathbf{H},\hat{\mathbf{a}}^{\left(\text{APA}\right)}\right). \end{equation} On the other hand, the robust adaptive algorithm finds the optimal $\mathbf{a}_o^{\left(\text{APA-R}\right)}$ that minimizes $\mathbb{E}\left[\varepsilon\left(\mathbf{H},\mathbf{a}\right)|\hat{\mathbf{H}}\right]$ and therefore takes into account that only partial knowledge of the channel is available. Since the coefficients $\hat{\mathbf{a}}^{\left(\text{APA}\right)}$ and $\mathbf{a}_o^{\left(\text{APA-R}\right)}$ are different, we have \begin{equation} \mathbb{E}\left[\varepsilon\left(\mathbf{H},\mathbf{a}_o^{\left(\text{APA-R}\right)}\right)|\hat{\mathbf{H}}\right]\leq\mathbb{E}\left[\varepsilon\left(\mathbf{H},\hat{\mathbf{a}}^{\left(\text{APA}\right)}\right)|\hat{\mathbf{H}}\right]. \end{equation} Note that under perfect CSIT equation \eqref{mean square error APA robust} reduces to \eqref{mean square error APA RS}. In such circumstances $\mathbf{a}_o^{\left(\text{APA}\right)}=\mathbf{a}_o^{\left(\text{APA-R}\right)}$ and therefore both algorithms are equivalent. In the following, we evaluate the difference in performance between the proposed algorithms.
Specifically, we have that $\hat{\mathbf{a}}^{\left(\text{APA}\right)}=\mathbf{a}_o^{\left(\text{APA-R}\right)}+\mathbf{a}_e$, where $\mathbf{a}_e=\left[a_{c,e},a_{1,e},\cdots,a_{M,e}\right]^{\text{T}}$ is the error produced by the assumption that the BS has perfect CSIT. Then, we have \begin{align} \mathbb{E}\left[\varepsilon^{\left(\text{APA}\right)}-\varepsilon\right.\left.^{\left(\text{APA-R}\right)}\right]&=-2a_{c,e}\sum_{i=1}^M\Re{\left\{\hat{\phi}^{\left(i,c\right)}\right\}}-2\sum_{j=1}^M a_{j,e} \Re{\left\{\hat{\phi}^{\left(j,j\right)}\right\}}+\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M a_{c,e}^2\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}\nonumber\\ &+2a_{c,e}^2\left(\sum_{i=1}^M\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right)+2\sum_{i=1}^{M-1}\sum_{q=i+1}^{M}\sum_{r=1}^M a_{r,e}^2\Re\left[\hat{\phi}^{\left(i,r\right)^*}\hat{\phi}^{\left(q,r\right)}\right]\nonumber\\ &+2\sum_{j=1}^{M}a_{j,e}^2\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right), \end{align} which is a positive quantity when $-2a_{c,e}\sum_{i=1}^M\Re{\left\{\hat{\phi}^{\left(i,c\right)}\right\}}<2a_{c,e}^2\left(\sum_{i=1}^M\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right)$ and $-2\sum_{j=1}^M a_{j,e} \Re{\left\{\hat{\phi}^{\left(j,j\right)}\right\}}<2\sum_{j=1}^{M}a_{j,e}^2\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right)$.
The inequalities hold as long as $a_{c,e}\left[\sum_{i=1}^M\left(\Re\left\{\hat\phi^{\left(i,c\right)}\right\}\right)^2+\sum_{i=1}^M\left(\Im\left\{\hat\phi^{\left(i,c\right)}\right\}\right)^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right]>\sum_{i=1}^M\Re{\left\{\phi^{\left(i,c\right)}\right\}}$ and $\sum_{j=1}^{M}a_{j,e}\left[\sum_{l=1}^M\left(\Re\left\{\hat\phi^{\left(l,j\right)}\right\}\right)^2+\sum_{l=1}^M\left(\Im\left\{\hat\phi^{\left(l,j\right)}\right\}\right)^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right]>\sum_{j=1}^M\Re{\left\{\phi^{\left(j,j\right)}\right\}}$. As the error in the power allocation coefficients grows, the left-hand side of the two last inequalities increases. This explains why the proposed APA-R performs better than the proposed APA algorithm. \subsection{Statistical Analysis} The performance of adaptive learning algorithms is usually measured in terms of their transient and steady-state behavior. These measurements provide information about the stability, the convergence rate, and the MSE achieved by the algorithm \cite{Yousef2001,Sayed2003}. Let us consider the adaptive power allocation with the update equations given by \eqref{update equation for perfect CSIT}. Expanding the terms of \eqref{update equation for perfect CSIT} for the private streams, we get \begin{align} a_j\left[t+1\right]=&a_j\left[t\right]-4\mu a_j\left[t\right]\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+2\mu\Re\left\{\phi^{\left(j,j\right)}\right\}-4\mu a_j\left[t\right]\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}\Re\left\{\phi^{\left(q,j\right)^*}\phi^{\left(r,j\right)}\right\}.\label{update recursion coeff perfect csit} \end{align} Let us define the error between the estimate of the power coefficients and the optimal parameters as follows: \begin{equation} e_{a_j}\left[t\right]=a_j\left[t\right]-a_j^{\left(o\right)},\label{error optimal estimate} \end{equation} where $a_j^{\left(o\right)}$ represents the optimal allocation for the $j$th coefficient.
By subtracting \eqref{error optimal estimate} from \eqref{update recursion coeff perfect csit}, we obtain \begin{align} e_{a_j}\left[t+1\right]=&e_{a_j}\left[t\right]-4\mu a_j\left[t\right]\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+2\mu\Re\left\{\phi^{\left(j,j\right)}\right\}-4\mu a_j\left[t\right]\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}\Re\left\{\phi^{\left(q,j\right)^*}\phi^{\left(r,j\right)}\right\}\nonumber\\ =&e_{a_j}\left[t\right]+2\mu\Re\left\{\phi^{\left(j,j\right)}\right\}-4\mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)a_j\left[t\right], \label{eq54} \end{align} where $f_{q,r}^{\left(j\right)}=\Re\left\{\left(\mathbf{h}_{q,*}\mathbf{p}_j\right)^*\left(\mathbf{h}_{r,*}\mathbf{p}_j\right)\right\}$. Expanding the terms in \eqref{eq54}, we get \begin{align} e_{a_j}\left[t+1\right]=&-4\mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)a_j^{\left(o\right)}+e_{a_j}\left[t\right]+2\mu\Re\left\{\phi^{\left(j,j\right)}\right\}\nonumber\\ &-4\mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)e_{a_j}\left[t\right].\nonumber \end{align} Rearranging the terms of the last equation, we obtain \begin{align} e_{a_j}\left[t+ \right.\left. 1\right] &=\left\{1-4\mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)\right\}e_{a_j}\left[t\right]\nonumber\\ &-4\mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)a_j^{\left(o\right)}+2\mu\Re\left\{\phi^{\left(j,j\right)}\right\}.\label{eq55} \end{align} Equation \eqref{eq55} can be rewritten as follows \begin{align} e_{a_j}\left[t+ \right.\left. 
1\right] =&\left\{1-4\mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)\right\}e_{a_j}\left[t\right]\nonumber\\ &+2\mu\left(\frac{\text{MSE}_{\textrm{min}}\left(a_j^{\left(o\right)}\right)}{a_j^{\left(o\right)}}-a_j^{\left(o\right)}\sum_{q=1}^{M-1}\sum_{r=q+1}^Mf_{q,r}^{\left(j\right)}\right),\label{eq56} \end{align} where \begin{align} \text{MSE}_{\rm min}\left(a_j^{\left(o\right)}\right)=2 a_j^{\left(o\right)}\left(a_j^{\left(o\right)}\sum_{i=1}^{M} \lvert\phi^{\left(i,j\right)}\rvert^2-\Re\left\{\phi^{\left(j,j\right)}\right\}\right.\left.+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right). \end{align} By multiplying \eqref{eq56} by $e_{a_j}\left[t+1\right]$ and taking the expected value, we obtain \begin{align} \sigma_{e_{a_j}}^2\left[t+1\right] =&\left\{1-4\mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)\right\}^2\sigma_{e_{a_j}}^2\left[t\right]\nonumber\\ &+4\mu^2\left(\frac{\text{MSE}_{\text{min}}\left(a_j^{\left(o\right)}\right)}{a_j^{\left(o\right)}}-a_j^{\left(o\right)}\sum_{q=1}^{M-1}\sum_{r=q+1}^Mf_{q,r}^{\left(j\right)}\right)^2, \end{align} where we consider that $\mathbb{E}\left[e_{a_j}\left[t\right]\right]\approx 0$. The previous equation constitutes a geometric series with geometric ratio equal to $1-4\mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)$. Then, we have \begin{equation} \left\lvert 1-4 \mu\left(\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)\right\rvert<1. \end{equation} Note that from the last equation the step size must fulfill \begin{equation} 0<\mu_j<\frac{1}{2\lambda_j}, \end{equation} with $\lambda_j=\sum_{l=1}^{M}\lvert\phi^{\left(l,j\right)}\rvert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}$.
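The bound $0<\mu_j<1/(2\lambda_j)$ can be checked numerically with a scalar version of the private-stream recursion. This is an illustrative sketch; the fixed point $a_j^{\left(o\right)}=\Re\{\phi^{(j,j)}\}/(2\lambda_j)$ used below follows from setting the gradient to zero.

```python
import numpy as np

rng = np.random.default_rng(2)
M, j = 3, 0
phi = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # phi^{(l,j)}, l = 1..M

f_sum = sum((phi[q].conjugate() * phi[r]).real
            for q in range(M - 1) for r in range(q + 1, M))
# lambda_j = sum|phi|^2 + sum_{q<r} Re{phi_q^* phi_r}
#          = (sum|phi|^2 + |sum phi|^2) / 2 > 0, so the bound is well defined.
lam = np.sum(np.abs(phi)**2) + f_sum
mu = 0.9 / (2 * lam)            # inside the stability range 0 < mu < 1/(2 lambda_j)

a = 0.0
for _ in range(300):            # geometric convergence, ratio |1 - 4 mu lambda_j| = 0.8
    a -= mu * (4 * a * lam - 2 * phi[j].real)

a_opt = phi[j].real / (2 * lam) # stationary point of the MSE in a_j
```

Choosing a step size just beyond $1/(2\lambda_j)$ makes the ratio exceed one in magnitude and the same loop diverges, which is exactly what the stability bound predicts.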
For the common power allocation coefficient we have the following recursion: \begin{align} a_c\left[t+1\right]=&a_c\left[t\right]-4\mu a_c\left[t\right]\sum_{j=1}^{M}\lvert\phi^{\left(j,c\right)}\rvert^2-2\mu\sum_{l=1}^{M}\Re\left\{\phi^{\left(l,c\right)}\right\}+2\mu a_c\left[t\right]f^{\left(c\right)}, \end{align} where $f^{\left(c\right)}=\sum_{q=1}^{M}\sum_{\substack{r=1\\r\neq q}}^{M}\phi^{\left(q,c\right)^*}\phi^{\left(r,c\right)}$. The error with respect to the optimal power allocation of the common stream is given by \begin{equation} e_c\left[i\right]=a_c{\left[i\right]}-a_c^{\left(o\right)}. \end{equation} Following a similar procedure to the one employed for the private streams, we arrive at \begin{align} e_c\left[t+1\right]=&\left\{1-4\mu\sum_{j=1}^{M}\lvert\phi^{\left(j,c\right)}\rvert^2-2\mu f^{\left(c\right)}\vphantom{\sum_{j=1}^{M}}\right\}e_c\left[t\right]-2\mu\left(2\sum_{j=1}^{M}\lvert\phi^{\left(j,c\right)}\rvert^2+\mu f^{\left(c\right)}\right)a_c^{\left(o\right)}\nonumber\\ &-2\mu\sum_{l=1}^{M}\Re\left\{\phi^{\left(l,c\right)}\right\}. \end{align} Multiplying the previous equation by $e_c\left[t+1\right]$ and taking the expected value leads us to \begin{align} \sigma_{e_c}^2\left[t+1\right]=&\left\{1-2\mu\left(2\sum_{j=1}^{M}\lvert\phi^{\left(j,c\right)}\rvert^2-f^{\left(c\right)}\right)\right\}^2\sigma_{e_c}^2\left[t\right]\nonumber\\ &+4\mu^2\left(\frac{\text{MSE}_{\text{min}}\left(a_c^{\left(o\right)}\right)}{a_c^{\left(o\right)}}+a_c^{\left(o\right)}\sum_{j=1}^{M}\Re\left\{\phi^{\left(j,c\right)}\right\}\right)^2. \end{align} It follows that the geometric ratio of the recursion is equal to $1-2\mu\left(2\sum\limits_{j=1}^{M}\lvert\phi^{\left(j,c\right)}\rvert^2-f^{\left(c\right)}\right)$. 
Then, the step size must be in the following range: \begin{equation} 0<\mu_c<\frac{1}{\lambda_c}, \end{equation} where \begin{equation} \lambda_c=2\sum_{j=1}^{M}\lvert\phi^{\left(j,c\right)}\rvert^2-f^{\left(c\right)}. \end{equation} The step size of the algorithm must be less than or equal to $\min\left(\mu_c,\mu_j\right)$ $\forall j \in \left\{1,2,\cdots,M\right\}$. The stability bounds provide useful information on the choice of the step sizes. Let us now consider the APA-R algorithm. The \textit{a posteriori} error can be expressed as follows: \begin{align} e_{a_j}\left[t+1\right]=&e_{a_j}\left[t\right]+2\mu\Re{\left\{\hat{\phi}^{\left(j,j\right)}\right\}}-4\mu\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right.\left.+\sum_{q=1}^{M-1}\sum_{r=q+1}^Mf_{q,r}^{\left(j\right)}\right)a_j\left[t\right]. \end{align} Expanding the terms of the last equation, we get \begin{align} e_{a_j}\left[t+1\right]=&\left\{1-4\mu\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right.\right.\left.\left.+\sum_{q=1}^{M-1}\sum_{r=q+1}^Mf_{q,r}^{\left(j\right)}\right)\right\}e_{a_j}\left[t\right]\nonumber\\ &-4\mu\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right.\left.+\sum_{q=1}^{M-1}\sum_{r=q+1}^Mf_{q,r}^{\left(j\right)}\right)a_j^{\left(o\right)}+2\mu\Re\left\{\hat{\phi}^{\left(j,j\right)}\right\}. \end{align} The geometric ratio of the robust APA algorithm is given by $1-4\mu\left(\sum\limits_{l=1}^M\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right.$ $\left.+\sum\limits_{q=1}^{M-1}\sum\limits_{r=q+1}^Mf_{q,r}^{\left(j\right)}\right)$. 
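A small numerical sketch (random channel estimate and precoders, hypothetical error variance $\sigma_{e,i}^2$; none of these values come from the paper) illustrates that the CSIT-error term makes the bracket of this ratio strictly larger than its perfect-CSIT counterpart, which is why APA-R admits only smaller step sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
M, Nt = 4, 4
sigma_e2 = 0.05  # hypothetical CSIT error variance

# random channel estimate (rows h_l) and precoders (columns p_j)
H_hat = (rng.standard_normal((M, Nt)) + 1j * rng.standard_normal((M, Nt))) / np.sqrt(2)
P = (rng.standard_normal((Nt, M)) + 1j * rng.standard_normal((Nt, M))) / np.sqrt(2)
Phi = H_hat @ P  # Phi[l, j] = h_l p_j

def lam_apa(j):
    """Perfect-CSIT bracket: sum_l |phi(l,j)|^2 + sum_{q<r} Re{phi(q,j)* phi(r,j)}."""
    col = Phi[:, j]
    cross = sum(np.real(np.conj(col[q]) * col[r])
                for q in range(M - 1) for r in range(q + 1, M))
    return np.sum(np.abs(col) ** 2) + cross

def lam_apar(j):
    """Robust bracket: adds the CSIT-error term M * sigma_e2 * ||p_j||^2."""
    return lam_apa(j) + M * sigma_e2 * np.linalg.norm(P[:, j]) ** 2

# the added term is positive, so the bound mu_j < 1/(2*lambda_j) is tighter for APA-R
gaps = [lam_apar(j) - lam_apa(j) for j in range(M)]
```

Since the gap equals $M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2>0$ for every stream, the robust step-size upper bound is always the smaller of the two.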
Then, we have that \begin{equation} \left\lvert1-4\mu\left(\sum_{l=1}^{M}\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}\right)\right\rvert<1. \end{equation} Therefore, the step size must satisfy the following inequality: \begin{equation} 0<\mu_j^{\left(\textrm{r}\right)}<\frac{1}{2\lambda^{\left(\textrm{r}\right)}_j}, \end{equation} where $\lambda^{\left(\textrm{r}\right)}_j=\sum_{l=1}^{M}\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2+\sum_{q=1}^{M-1}\sum_{r=q+1}^{M}f_{q,r}^{\left(j\right)}$. Following a similar procedure for the power coefficient of the common stream leads us to \begin{equation} 0<\mu_c^{\left(\textrm{r}\right)}<\frac{1}{\lambda^{\left(\textrm{r}\right)}_c}, \end{equation} where \begin{equation} \lambda_c^{\left(\text{r}\right)}=2\sum_{j=1}^{M}\lvert\hat{\phi}^{\left(j,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2-f^{\left(c\right)}. \end{equation} As in the previous case, the step size is chosen using $\min\left(\mu^{\left(\textrm{r}\right)}_c,\mu^{\left(\textrm{r}\right)}_j\right)$ $\forall j \in \left\{1,2,\cdots,M\right\}$. The variable $\lambda_c^{\left(\textrm{r}\right)}$ has an additional term given by $M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2$ when compared to $\lambda_c$ of the APA algorithm. In this sense, the upper bound on $\mu_c^{\left(\textrm{r}\right)}$ is smaller than the bound on $\mu_c$. In other words, the step size of APA-R takes smaller values than the step size of APA. \subsection{Computational Complexity} In this section, the number of FLOPs performed by the proposed algorithms is computed. For this purpose, let us consider the following results to simplify the analysis. Consider the complex vectors $\mathbf{z}_1, \mathbf{z}_2 \in \mathbb{C}^{n}$. Then, we have the following results: \begin{itemize} \item The product $\mathbf{z}_1^{\text{T}}\mathbf{z}_2$ requires $8n-2$ FLOPs. 
\item The term $\lVert \mathbf{z}_1\rVert^2$ requires $7n-2$ FLOPs. \end{itemize} The gradient in equation \eqref{gradient RS perfect CSIT private streams} requires the computation of three terms. The first term, which is given by $4 a_c \sum\limits_{i=1}^M\lvert\mathbf{h}_{i,*}\mathbf{p}_c\rvert^2$, needs a total of $8 N_tM +2M+1$ FLOPs. The evaluation of the second term results in $8N_tM+M$ FLOPs. For the last term we have a total of $\left(9M^2-9M+2\right)/2$ FLOPs. Considering a system where $N_t=N_r=M=n$, we have that the number of FLOPs required by the proposed adaptive algorithm is given by $\frac{41}{2}n^3+19n^2+\frac{5}{2}n+4$. The computational complexity of the gradients in \eqref{gradient robust apa ac} and \eqref{gradient robust apa ai} can be computed in a similar manner. However, in this case we have an additional term given by $4 a_c M\sigma_{e,i}^2\lVert \mathbf{p}_i\rVert^2$, which requires a total of $7N_t+2$ FLOPs. Then, the robust adaptive algorithm requires a total of $\frac{41}{2}n^3+19n^2+\frac{19}{2}n+6$ FLOPs. It is important to mention that the computational complexity presented above represents the number of FLOPs of the adaptive algorithms per iteration. In contrast, the optimal power allocation for the conventional SDMA system requires an exhaustive search over $\mathbf{A}$ with a fine grid. Given a system with $12$ streams and a grid step of $0.001$, the exhaustive search would require the evaluation of $5005000$ different power allocation matrices for each channel realization. In contrast, the adaptive approaches presented require only around $30$ iterations. Furthermore, the complexity of the exhaustive search for an RS system is even higher since the search is performed over $\mathbf{A}^{\left(\text{RS}\right)}$, which additionally contains the power allocated to the common stream. 
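The counts above can be collected into small helper functions (a sketch restating the closed-form totals given in the text for $N_t=N_r=M=n$); note that the APA-R total exceeds the APA total by exactly the $7n+2$ FLOPs of the additional robustness term:

```python
def flops_dot(n):
    """Complex inner product z1^T z2 of two length-n vectors: 8n - 2 FLOPs."""
    return 8 * n - 2

def flops_norm2(n):
    """Squared Euclidean norm of a length-n complex vector: 7n - 2 FLOPs."""
    return 7 * n - 2

def flops_apa(n):
    """Per-iteration FLOPs of the APA gradient for N_t = N_r = M = n."""
    return 41 / 2 * n**3 + 19 * n**2 + 5 / 2 * n + 4

def flops_apa_r(n):
    """Per-iteration FLOPs of the robust APA-R gradient."""
    return 41 / 2 * n**3 + 19 * n**2 + 19 / 2 * n + 6
```

For instance, $n=4$ gives $1630$ and $1660$ FLOPs per iteration, so roughly $30$ iterations of either algorithm remain far below the millions of grid evaluations needed by the exhaustive search.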
Table \ref{Computational complexity power allocation} summarizes the computational complexity of the proposed algorithms employing the big $\mathcal{O}$ notation. In Table \ref{Computational complexity power allocation}, $I_o$ denotes the number of points of the grid given a step size, $I_w$ refers to the number of iterations of the alternating procedure and $I_a$ denotes the number of iterations of the adaptive algorithms. It is worth noting that $I_o\gg I_a$. Moreover, the inner iterations employed in the WMMSE approach are much more demanding than the iterations of the proposed APA and APA-R algorithms. Fig. \ref{Complexity} shows the computational complexity in terms of FLOPs as the number of transmit antennas increases. The term CF represents the closed-form power allocation in \cite{Dai2016a}, which requires the inversion of an $N_t\times N_t$ matrix. The grid step was set to $0.01$ for the ES and the number of iterations to $30$ for the WMMSE, APA and APA-R approaches. Note that in practice the WMMSE iterates continuously until meeting a predefined accuracy; in general, it requires more than $30$ iterations. It is also important to point out that the cost per iteration of the adaptive approaches can be reduced after the first iteration. The precoders and the channel are fixed for a given transmission block. Therefore, after the first iteration, we can avoid the multiplication of the precoder by the channel matrix in the update equation. In contrast, the WMMSE must perform the whole procedure again since the precoders are updated between iterations. This is illustrated in Fig. 
\ref{ComplexityPerIteration}. \begin{table}[H] \caption{Computational complexity of the power allocation algorithms.} \begin{center} \vspace{-.3cm} \begin{tabular}{ p{4 cm} c} \hline \hline Technique & $\mathcal{O}$\\ \hline \rule{0pt}{3ex} SDMA-ES & $\mathcal{O}\left(N_t I_o^2 M^3\right)$\\ \rule{0pt}{3ex} WMMSE & $\mathcal{O}\left(I_w N_t M^3\right)$\\ \rule{0pt}{3ex} RS-ES & $\mathcal{O}\left(N_t I_o^2 (M+1)^3\right)$\\ \rule{0pt}{3ex} RS-APA & $\mathcal{O}\left(I_a N_t (M+1)^2\right)$\\ \rule{0pt}{3ex} RS-APA-R & $\mathcal{O}\left(I_a N_t (M+1)^2\right)$\\ \rule{0pt}{3ex} CF\cite{Dai2016a} & $\mathcal{O}\left( N_t^3\right)$\\ \hline\label{Computational complexity power allocation} \end{tabular} \end{center} \end{table} \vspace{-2em} \begin{figure}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[height=5.5cm, width=0.98\columnwidth]{Complexity_new.eps} \caption{Number of FLOPs required by different power allocation algorithms considering a MU-MIMO system with an increasing number of antennas.} \label{Complexity} \end{subfigure} \quad \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[height=5.6cm, width=0.98\columnwidth]{CostPerIteration.eps} \caption{Number of FLOPs required per iteration considering a MU-MIMO system with $N_t=16$ and $M=16$.} \label{ComplexityPerIteration} \end{subfigure} \caption{Computational complexity.} \end{figure} \section{Simulations}\label{c5 section simulations} In this section, the performance of the proposed APA-R and APA algorithms is assessed against existing power allocation approaches, namely, the ES, the WMMSE \cite{JoudehClerckx2016}, the closed-form approach of \cite{Dai2016a}, the power allocation obtained directly from the precoders' definition, which is denoted here as random power allocation, and the uniform power allocation (UPA) approaches. 
Unless otherwise specified, we consider an RS MU-MIMO system where the BS is equipped with four antennas and transmits data to two users, each one equipped with two antennas. The inputs are statistically independent and follow a Gaussian distribution. A flat fading Rayleigh channel, which remains fixed during the transmission of a packet, is considered. We assume additive white Gaussian noise with zero mean and unit variance, and the SNR varies with $E_{tr}$. For all the experiments, the common precoder is computed by employing an SVD of the channel matrix, i.e., $\hat{\mathbf{H}}=\mathbf{S}\mathbf{\Psi}\mathbf{V}^H$. Then we set the common precoder equal to the first column of matrix $\mathbf{V}$, i.e., $\mathbf{p}_c=\mathbf{v}_1$. In the first example, we consider the transmission under imperfect CSIT. The proposed APA-R algorithm is compared against the closed-form expression from \cite{Dai2016a} and against ES with random and UPA algorithms. The first ES approach fixes a random power allocation for the private streams and then an exhaustive search is carried out to find the best power allocation for the common stream. The second scheme considers that the power is uniformly distributed among the private streams and then performs an exhaustive search to find the optimum value for $a_c$. Fig. \ref{C5 Figure3} shows the performance obtained with a MF. Although ES obtains the best performance, it requires a great amount of computational resources. Moreover, we can see that the closed-form power allocation does not allocate power to the common message in the low SNR regime. The reason for this behavior is that this power allocation scheme was derived for massive MIMO environments, where the excess of transmit antennas suppresses the residual interference at low SNR and no common message is required. As the SNR increases, the residual interference becomes larger and the algorithm allocates power to the common stream. 
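The common-precoder construction described above reduces to a few lines of linear algebra. The sketch below (random channel estimate for illustration, dimensions matching the $N_t=4$ setup) extracts $\mathbf{p}_c=\mathbf{v}_1$ from the SVD:

```python
import numpy as np

rng = np.random.default_rng(1)
M, Nt = 4, 4  # two users with two receive antennas each, rows stacked
H_hat = (rng.standard_normal((M, Nt)) + 1j * rng.standard_normal((M, Nt))) / np.sqrt(2)

# H_hat = S * Psi * V^H; the common precoder is the first right singular vector
S, Psi, Vh = np.linalg.svd(H_hat)
p_c = Vh.conj().T[:, 0]  # v_1: unit-norm direction of the largest singular value
```

By construction $\lVert\mathbf{p}_c\rVert=1$ and $\lVert\hat{\mathbf{H}}\mathbf{p}_c\rVert$ equals the largest singular value, i.e., this choice maximizes the channel gain seen by the common stream over all unit-norm precoders.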
\begin{figure}[h] \begin{center} \includegraphics[scale=0.55]{C5_FMF_new.eps} \vspace{-1.0em} \caption{Sum-rate performance of RS-MF precoding scheme, $N_t=4$, $N_k=4$, $K=1$, and $\sigma_e^2=0.05$.} \label{C5 Figure3} \end{center} \end{figure} In the next example, the ZF precoder has been considered. The results illustrated in Fig. \ref{C5 Figure4} show that the techniques that perform ES, which are termed RS-ZF-ES+Random and RS-ZF-ES-UPA, have the best performance. However, the APA and APA-R adaptive algorithms obtain a consistent gain when compared to the conventional ZF algorithm. \begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{C5_F4.eps} \vspace{-1.0em} \caption{Sum-rate performance of RS-ZF precoding scheme, $N_t=4$, $N_k=2$, $K=2$, and $\sigma_e^2=0.1$.} \label{C5 Figure4} \end{center} \end{figure} In Fig. \ref{C5_7_D} we employed the MMSE precoder. In this case, an exhaustive search was performed to obtain the best power allocation coefficients for all streams. The technique from [27] was also considered and is denoted in the figure as RS-WMMSE. We can observe that the best performance is obtained by ES. The proposed APA and APA-R algorithms obtain a consistent gain when compared to the conventional MMSE precoder. Their performance is worse than that of ES and RS-WMMSE, but their computational complexity is much lower. \begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{C5_F7_E.eps} \vspace{-1.0em} \caption{Sum-rate performance of RS-MMSE precoding scheme, $N_t=4$, $N_k=2$, $K=2$, and $\sigma_e^2=0.2$.} \label{C5_7_D} \end{center} \end{figure} In the next example, we evaluate the performance of the proposed APA and APA-R techniques as the error in the channel estimate becomes larger. For this scenario, we consider a fixed SNR equal to 20 dB. Fig. \ref{VarErr} depicts the results of different power allocation techniques. The results show that the APA-R performs better than the APA as the variance of the error increases. 
\begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{VarErrV1.eps} \vspace{-1.0em} \caption{Sum-rate performance of RS-ZF precoding scheme, $N_t=4$, $N_k=2$, and $K=2$.} \label{VarErr} \end{center} \end{figure} Let us now consider the ESR obtained versus the number of iterations, which is shown in Fig. \ref{MSEperIteration}. The step size of the adaptive algorithms was set to $0.004$ and the SNR to $10$ dB. Fig. \ref{MSEperIteration} shows that APA and APA-R obtain better performance than WMMSE with few iterations, i.e., with reduced cost. Recall that the cost per iteration is much lower for the adaptive algorithms. \begin{figure}[h] \begin{center} \includegraphics[scale=0.55]{FigIterationsJournal10dB4.eps} \vspace{-1.0em} \caption{Sum-rate performance of RS-ZF precoding scheme, $N_t=4$, $N_k=1$, $K=4$, and $\sigma_e^2=0.2$.} \label{MSEperIteration} \end{center} \end{figure} Fig. \ref{acPow} shows the power allocated to the common stream. For this simulation, we consider the same setup as in the previous example. We can observe that the parameter $a_c$ increases with the SNR. In other words, as the MUI becomes more significant, more power is allocated to the common stream. We can also notice that the proposed APA-R algorithm allocates more power to the common stream than the APA algorithm. \begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{Power_ac.eps} \vspace{-1.5em} \caption{Power allocated to the common stream, $N_t=4$, $N_k=2$, $K=2$.} \label{acPow} \vspace{-1.5em} \end{center} \end{figure} In the last example, we consider the ZF precoder in a MU-MIMO system where the BS is equipped with $24$ transmit antennas. The information is sent to $24$ users, which are randomly distributed over the area of interest. Fig. \ref{C5 Figure6} shows the results obtained by employing the proposed APA and APA-R algorithms. 
Specifically, it can be noticed that the RS system equipped with APA-R can obtain a gain of up to $20 \%$ over random allocation and up to $50 \%$ over a conventional MU-MIMO system with random allocation. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{C5_F6_B.eps} \vspace{-1.5em} \caption{Sum-rate performance of RS-ZF precoding scheme, $N_t=24$, $N_k=1$, $K=24$, and $\sigma_e^2=0.1$.} \label{C5 Figure6} \end{center} \end{figure} \vspace{-2.15em} \section{Conclusion} In this work, adaptive power allocation techniques have been developed for RS-MIMO and conventional MU-MIMO systems. Unlike the optimal and WMMSE power allocation schemes often employed for RS-based systems, which are computationally very costly, the proposed APA and APA-R algorithms have low computational complexity and require fewer iterations for new transmission blocks, being suitable for practical systems. Numerical results have shown that the proposed power allocation algorithms, namely APA and APA-R, achieve performance close to that of exhaustive search with uniform power allocation. Furthermore, the proposed robust technique, i.e., APA-R, increases the robustness of the system against CSIT imperfections. \appendices \section{Proof of the MSE for the APA-R}\label{Appendix MSE APA-R} In what follows, the derivation of the MSE for the APA-R algorithm is detailed. 
Let us first expand the MSE, which is given by \begin{align} \mathbb{E}\left[\varepsilon\lvert\hat{\mathbf{H}}\right]=&\mathbb{E}\left[\left(\mathbf{s}^{\left(\text{RS}\right)}-\mathbf{y}'\right)^H\left(\mathbf{s}^{\left(\text{RS}\right)}-\mathbf{y}'\right)\lvert\hat{\mathbf{H}}\right]\nonumber\\ =&\underbrace{\mathbb{E}\left[\mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{s}^{\left(\text{RS}\right)}\lvert\hat{\mathbf{H}}\right]}_{T_1}-\underbrace{\mathbb{E}\left[\mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{y}'\lvert\hat{\mathbf{H}}\right]}_{T_2}-\underbrace{\mathbb{E}\left[\mathbf{y'}^H\mathbf{s}^{\left(\text{RS}\right)}\lvert\hat{\mathbf{H}}\right]}_{T_3}+\underbrace{\mathbb{E}\left[\mathbf{y'}^H\mathbf{y}'\lvert\hat{\mathbf{H}}\right]}_{T_4}.\label{mean square error terms for RS APA Robust} \end{align} The first term of \eqref{mean square error terms for RS APA Robust} is independent of $\hat{\mathbf{H}}$ and can be reduced to the following expression: \vspace{-2em} \begin{align} \mathbb{E}\left[\mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{s}^{\left(\text{RS}\right)}\right]=&\mathbb{E}\left[s_c^*s_c\right]+\mathbb{E}\left[s_1^*s_1\right]+\cdots+\mathbb{E}\left[s_M^*s_M\right]=M+1.\label{c5 term1} \end{align} The second term, given by $T_2$, requires the computation of the following term: \begin{align} \mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{y}'=& s_c^* y_c+s_1^*y_1+\cdots+s_M^*y_{M},\nonumber\\ =&s_c^*\sum_{l=1}^{M}\left[a_c s_c\left(\hat{\mathbf{h}}_{l,*}+\tilde{\mathbf{h}}_{l,*}\right)\mathbf{p}_c+\sum_{j=1}^{M}a_j s_j \left(\hat{\mathbf{h}}_{l,*}+\tilde{\mathbf{h}}_{l,*}\right)\mathbf{p}_j+ n_l\right]\nonumber\\ &+s_1^*\left[a_c s_c \left(\hat{\mathbf{h}}_{1,*}+\tilde{\mathbf{h}}_{1,*}\right)\mathbf{p}_c+\sum_{l=1}^{M}a_l s_l\left(\hat{\mathbf{h}}_{1,*}+\tilde{\mathbf{h}}_{1,*}\right)\mathbf{p}_l+n_1\right]\nonumber\\ &+\cdots+s_M^*\left[a_c s_c \left(\hat{\mathbf{h}}_{M,*}+\tilde{\mathbf{h}}_{M,*}\right)\mathbf{p}_c+\sum_{l=1}^{M}a_l 
s_l\left(\hat{\mathbf{h}}_{M,*}+\tilde{\mathbf{h}}_{M,*}\right)\mathbf{p}_l+n_M\right].\label{c5 term2 new} \end{align} By evaluating the expected value of the different terms in \eqref{c5 term2 new} we get the following quantities: \begin{align} \mathbb{E}\left[s_i^*y_i|\hat{\mathbf{H}}\right]&=\mathbb{E}\left[s_i^*\left\{a_c s_c \left(\hat{\mathbf{h}}_{i,*}+\tilde{\mathbf{h}}_{i,*}\right)\mathbf{p}_c+\sum_{l=1}^{M}a_l s_l\left(\hat{\mathbf{h}}_{i,*}+\tilde{\mathbf{h}}_{i,*}\right)\mathbf{p}_l+n_i\right\}\lvert\hat{\mathbf{H}}\right],\nonumber\\ &=a_i\hat{\phi}^{\left(i,i\right)}.\\ \mathbb{E}\left[s_c^*y_c|\hat{\mathbf{H}}\right]&=\mathbb{E}\left[s_c^*\sum_{l=1}^{M}\left\{a_c s_c\left(\hat{\mathbf{h}}_{l,*}+\tilde{\mathbf{h}}_{l,*}\right)\mathbf{p}_c+\sum_{j=1}^{M}a_j s_j \left(\hat{\mathbf{h}}_{l,*}+\tilde{\mathbf{h}}_{l,*}\right)\mathbf{p}_j+ n_l\right\}\lvert\hat{\mathbf{H}}\right],\nonumber\\ &=a_c\sum_{i=1}^M \hat{\phi}^{\left(i,c\right)}, \end{align}where $\hat{\phi}^{\left(i,q\right)}=\hat{\mathbf{h}}_{i,*}\mathbf{p}_q$ and $\hat{\phi}^{\left(i,c\right)}=\hat{\mathbf{h}}_{i,*}\mathbf{p}_c$. These expressions allow us to compute $T_2$, which is expressed by \begin{equation} \mathbb{E}\left[\mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{y}'|\hat{\mathbf{H}}\right]=a_c\sum_{i=1}^M\hat{\phi}^{\left(i,c\right)}+\sum_{j=1}^{M}a_j\hat{\phi}^{\left(j,j\right)}.\label{c5 term 2 robust} \end{equation} The third term can be calculated in a similar manner and is given by \begin{align} \mathbb{E}\left[\mathbf{y'}^H\mathbf{s}^{\left(\text{RS}\right)}\right.&\left.|\hat{\mathbf{H}}\right]=a_c\sum_{i=1}^M\hat{\phi}^{\left(i,c\right)^*}+\sum_{j=1}^{M}a_j\hat{\phi}^{\left(j,j\right)^*}.\label{c5 term 3 robust} \end{align} The last term of equation \eqref{mean square error terms for RS APA Robust} requires the computation of several quantities. 
Let us first consider the following quantity: \begin{align} \mathbf{y'}^H\mathbf{y}'=&y_c^* y_c+y_1^*y_1+\cdots+y_M^*y_M=\left(\sum_{l=1}^M y_l^*\right)\left(\sum_{j=1}^M y_j\right)+\sum_{i=1}^M y_i^*y_i. \end{align} Taking the expected value of the last equation results in \begin{align} \mathbb{E}\left[ \mathbf{y'}^H\mathbf{y}'|\hat{\mathbf{H}}\right]=&\sum_{l=1}^M\sum_{j=1}^M\mathbb{E}\left[y_l^* y_j|\hat{\mathbf{H}}\right]+\sum_{i=1}^{M}\mathbb{E}\left[y_i^* y_i|\hat{\mathbf{H}}\right],\nonumber\\ =&\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M\mathbb{E}\left[y_l^* y_j|\hat{\mathbf{H}}\right]+2\sum_{i=1}^{M}\mathbb{E}\left[y_i^* y_i|\hat{\mathbf{H}}\right],\label{c5 term4 complete} \end{align} where \begin{align} \mathbb{E}\left[y_i^* y_i\lvert\hat{\mathbf{H}}\right]=&\mathbb{E}\left[\left\lvert a_c s_c \mathbf{h}_{i,*}\mathbf{p}_c+\sum_{q=1}^{M}a_q s_q \mathbf{h}_{i,*}\mathbf{p}_q+ n_i\right\rvert^2\rvert\hat{\mathbf{H}}\right],\nonumber\\ =&\mathbb{E}\left[a_c^2\lvert s_c\rvert^2\left\lvert\hat{\phi}^{\left(i,c\right)}+\tilde{\phi}^{\left(i,c\right)}\right\rvert^2+\sum_{q=1}^{M}a_q^2\lvert s_q\rvert^2\left\lvert\hat{\phi}^{\left(i,q\right)}+\tilde{\phi}^{\left(i,q\right)}\right\rvert^2+\lvert n_i\rvert^2 \lvert \hat{\mathbf{H}}\right], \end{align} where $\tilde{\phi}^{\left(i,c\right)}=\tilde{\mathbf{h}}_{i,*}\mathbf{p}_c$ and $\tilde{\phi}^{\left(i,q\right)}=\tilde{\mathbf{h}}_{i,*}\mathbf{p}_q$. 
Expanding the terms of the last equation results in \begin{align} \mathbb{E}\left[y_i^* y_i\lvert\hat{\mathbf{H}}\right]=&\mathbb{E}\left[\sum_{q=1}^M a_q^2\lvert s_q\rvert^2\left(\lvert\hat{\phi}^{\left(i,q\right)}\rvert^2+2\Re\left[\hat{\phi}^{\left(i,q\right)^*}\tilde{\phi}^{\left(i,q\right)}\right]+\lvert\tilde{\phi}^{\left(i,q\right)}\rvert^2\right)\lvert\hat{\mathbf{H}}\right]\nonumber\\ &+\mathbb{E}\left[a_c^2\lvert s_c\rvert^2\left(\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+2\Re\left[\hat{\phi}^{\left(i,c\right)^*}\tilde{\phi}^{\left(i,c\right)}\right]+\lvert\tilde{\phi}^{\left(i,c\right)}\rvert^2\right)|\hat{\mathbf{H}}\right]+\sigma_n^2. \end{align} Since the entries of $\tilde{\mathbf{h}}_{i,*}~~\forall i$ are uncorrelated with zero mean and also independent from $\mathbf{s}^{\left(\text{RS}\right)}$, we have \begin{align} \mathbb{E}\left[y_i^* y_i|\hat{\mathbf{H}}\right]=&a_c^2\left(\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+\mathbb{E}\left[\lvert\tilde{\phi}^{\left(i,c\right)}\rvert^2\lvert\hat{\mathbf{H}}\right]\right)+\sum_{q=1}^M a_q^2\left(\lvert\hat{\phi}^{\left(i,q\right)}\rvert^2+\mathbb{E}\left[\lvert\tilde{\phi}^{\left(i,q\right)}\rvert^2\lvert\hat{\mathbf{H}}\right]\right)+\sigma_n^2.\label{c5 term4 robust part1} \end{align} Note that $\lvert\tilde{\phi}^{\left(i,c\right)}\rvert^2$ and $\lvert\tilde{\phi}^{\left(i,q\right)}\rvert^2$ are independent from $\hat{\mathbf{H}}$. 
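Because the entries of $\tilde{\mathbf{h}}_{i,*}$ are zero-mean with variance $\sigma_{e,i}^2$, these second moments collapse to $\sigma_{e,i}^2\lVert\mathbf{p}\rVert^2$, as derived next. A quick Monte Carlo sketch (hypothetical dimensions and error variance, not values from the paper) confirms the identity:

```python
import numpy as np

rng = np.random.default_rng(2)
Nt = 4
sigma_e2 = 0.1  # hypothetical CSIT error variance
p = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)

# draw many CSIT-error rows with i.i.d. complex entries of variance sigma_e2
n_trials = 200_000
h_tilde = np.sqrt(sigma_e2 / 2) * (
    rng.standard_normal((n_trials, Nt)) + 1j * rng.standard_normal((n_trials, Nt)))
phi_tilde = h_tilde @ p               # one realization of h_tilde * p per row
empirical = np.mean(np.abs(phi_tilde) ** 2)
theory = sigma_e2 * np.linalg.norm(p) ** 2
```

With 200,000 trials the empirical mean matches $\sigma_{e,i}^2\lVert\mathbf{p}\rVert^2$ to within a fraction of a percent.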
Thus, we get \begin{align} \mathbb{E}\left[\lvert\tilde{\phi}^{\left(i,c\right)}\rvert^2\right]=&\lvert p^{\left(c\right)}_1\rvert^2\mathbb{E}\left[\tilde{h}_{i,1}^*\tilde{h}_{i,1}\right]+\lvert p^{\left(c\right)}_2\rvert^2\mathbb{E}\left[\tilde{h}_{i,2}^*\tilde{h}_{i,2}\right]+\cdots+\lvert p^{\left(c\right)}_{N_t}\rvert^2\mathbb{E}\left[\tilde{h}_{i,N_t}^*\tilde{h}_{i,N_t}\right],\nonumber\\ =&\lvert p^{\left(c\right)}_1\rvert^2\sigma_{e,i}^2+\lvert p^{\left(c\right)}_2\rvert^2\sigma_{e,i}^2+\cdots+\lvert p^{\left(c\right)}_{N_t}\rvert^2\sigma_{e,i}^2,\nonumber\\ =&\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2, \end{align} and similarly $\mathbb{E}\left[\lvert\tilde{\phi}^{\left(i,q\right)}\rvert^2\right]=\sigma_{e,i}^2\lVert\mathbf{p}_q\rVert^2.$ Then, \eqref{c5 term4 robust part1} turns into \begin{align} \mathbb{E}\left[ y_i^* y_i|\hat{\mathbf{H}}\right]=&a_c^2\left(\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right)+\sum_{j=1}^{M}a_j^2\left(\lvert\hat{\phi}^{\left(i,j\right)}\rvert^2+\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right)+\sigma_n^2.\label{c5 corr received signal same ant robust} \end{align} Let us now evaluate the expected value of $y_l^*y_j$ when $l\neq j$, which results in \begin{align} \mathbb{E}\left[y_l^* y_j\lvert\hat{\mathbf{H}}\right]=&\mathbb{E}\left[\left(a_c s_c \mathbf{h}_{l,*}\mathbf{p}_c+\sum_{q=1}^{M}a_q s_q \mathbf{h}_{l,*}\mathbf{p}_q+ n_l\right)^*\right.\times\left.\left(a_c s_c \mathbf{h}_{j,*}\mathbf{p}_c+\sum_{r=1}^{M}a_r s_r \mathbf{h}_{j,*}\mathbf{p}_r+ n_j\right)\lvert\hat{\mathbf{H}}\right],\nonumber\\ =&\sum_{q=1}^M a_q^2\mathbb{E}\left[\hat{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}+ \hat{\phi}^{\left(l,q\right)^*}\tilde{\phi}^{\left(j,q\right)}+\tilde{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}+\tilde{\phi}^{\left(l,q\right)^*}\tilde{\phi}^{\left(j,q\right)}\lvert\hat{\mathbf{H}}\right].\nonumber\\ 
&+a_c^2\mathbb{E}\left[\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}+\hat{\phi}^{\left(l,c\right)^*}\tilde{\phi}^{\left(j,c\right)}+\tilde{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}+\tilde{\phi}^{\left(l,c\right)^*}\tilde{\phi}^{\left(j,c\right)}\lvert\hat{\mathbf{H}}\right]. \end{align} Note that $\tilde{\mathbf{h}}_l$ and $\tilde{\mathbf{h}}_j$ are independent with zero mean for all $l\neq j$. Thus, the last equation is reduced to \begin{align} \mathbb{E}\left[y_l^* y_j|\hat{\mathbf{H}}\right]=&a_c^2\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}+\sum_{q=1}^M a_q^2\hat{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}.\label{c5 corr signal from different ant robust} \end{align} Equation \eqref{c5 corr received signal same ant robust} allows us to compute the second term of equation \eqref{c5 term4 complete}, which is given by \begin{align} \sum_{i=1}^M\mathbb{E}\left[y_i^*y_i|\right.\left.\hat{\mathbf{H}}\right] =&\sum_{j=1}^{M}a_j^2\left(\sum_{l=1}^M\lvert\hat{\phi}^{\left(l,j\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_j\rVert^2\right)+a_c^2\left(\sum_{i=1}^M\lvert\hat{\phi}^{\left(i,c\right)}\rvert^2+M\sigma_{e,i}^2\lVert\mathbf{p}_c\rVert^2\right)+M\sigma_n^2.\label{c5 term 4 part 1 robust} \end{align} On the other hand, \eqref{c5 corr signal from different ant robust} allows us to obtain the first term of \eqref{c5 term4 complete}, which results in \begin{equation} \sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M\mathbb{E}\left[y_l^* y_j|\hat{\mathbf{H}}\right]=\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M\left(a_c^2\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}+\sum_{q=1}^M a_q^2\hat{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}\right). \end{equation} Applying the property $\hat{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}+\hat{\phi}^{\left(l,q\right)}\hat{\phi}^{\left(j,q\right)^*}=2\Re\left[\hat{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}\right]$, 
we can simplify half of the sums from the triple summation, i.e., \begin{equation} \sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M\sum_{q=1}^Ma_q^2\hat{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}=2\sum_{l=1}^{M-1}\sum_{j=l+1}^{M}\sum_{q=1}^M a_q^2\Re\left[\hat{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}\right].\label{c5 triple summation explained new} \end{equation} It follows that \begin{align} \sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M\mathbb{E}\left[y_l^* y_j|\hat{\mathbf{H}}\right]=&2\sum_{l=1}^{M-1}\sum_{j=l+1}^{M}\sum_{q=1}^M a_q^2\Re\left[\hat{\phi}^{\left(l,q\right)^*}\hat{\phi}^{\left(j,q\right)}\right]+\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M a_c^2\hat{\phi}^{\left(l,c\right)^*}\hat{\phi}^{\left(j,c\right)}.\label{c5 term4 part 2 robust} \end{align} By using \eqref{c5 term 4 part 1 robust} and \eqref{c5 term4 part 2 robust} in \eqref{c5 term4 complete} and then substituting \eqref{c5 term1}, \eqref{c5 term 2 robust}, \eqref{c5 term 3 robust}, and \eqref{c5 term4 complete} in \eqref{mean square error terms for RS APA Robust}, we can calculate the MSE, which is given by \eqref{mean square error APA robust}. This concludes the proof. \vspace{-1.5em} \section{Proof of the MSE for the APA}\label{Appendix MSE APA} Here, we describe in detail how to obtain the MSE employed in the APA algorithm. Let us first consider the MSE, which is given by \begin{align} \mathbb{E}\left[\varepsilon\right]=&{\mathbb{E}\left[\mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{s}^{\left(\text{RS}\right)}\right]}-{\mathbb{E}\left[\mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{y}'\right]}-{\mathbb{E}\left[\mathbf{y'}^H\mathbf{s}^{\left(\text{RS}\right)}\right]}+{\mathbb{E}\left[\mathbf{y'}^H\mathbf{y}'\right]}.\label{mean square error terms for RS APA} \end{align} The first term of \eqref{mean square error terms for RS APA} is computed identically to \eqref{c5 term1}. 
By taking the expected value of the second term in \eqref{mean square error terms for RS APA} and expanding the equation, we have \begin{align} \mathbb{E}\left[\mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{y}'\right]=&a_c\mathbb{E}\left[s_c^*s_c\right]\sum_{l=1}^M\mathbf{h}_{l,*}\mathbf{p}_c+\sum_{l=1}^M \mathbb{E}\left[s_c^*n_l\right]+ \sum_{l=1}^{M}\mathbf{h}_{l,*}\sum_{j=1}^{M}a_j \mathbb{E}\left[s_c^*s_j\right] \mathbf{p}_j\nonumber\\ &+a_c\sum_{l=1}^M\mathbb{E}\left[s_l^*s_c\right]\mathbf{h}_{l,*}\mathbf{p}_c+\sum_{q=1}^M\sum_{l=1}^{M}a_l \mathbb{E}\left[s_q^*s_l\right]\mathbf{h}_{q,*}\mathbf{p}_l+\sum_{l=1}^M\mathbb{E}\left[s_l^*n_l\right].\label{c5 term 2 full} \end{align} Since the symbols are uncorrelated, equation \eqref{c5 term 2 full} is reduced to \begin{align} \mathbb{E}\left[\mathbf{s}^{\left(\text{RS}\right)^H}\mathbf{y}'\right]=a_c\sum_{j=1}^{M}\mathbf{h}_{j,*}\mathbf{p}_c+\sum_{l=1}^M a_l \mathbf{h}_{l,*}\mathbf{p}_l.\label{c5 term2} \end{align} The third term of \eqref{mean square error terms for RS APA} can be computed in a similar way as the second term, which leads us to \begin{align} \mathbb{E}\left[\mathbf{y'}^H\mathbf{s}^{\left(\text{RS}\right)}\right]=a_c\sum_{j=1}^{M}\left(\mathbf{h}_{j,*}\mathbf{p}_c\right)^{*}+\sum_{l=1}^M a_l \left(\mathbf{h}_{l,*}\mathbf{p}_l\right)^{*}.\label{c5 term3} \end{align} The last term of \eqref{mean square error terms for RS APA} is equal to \begin{align} \mathbb{E}\left[ \mathbf{y'}^H\mathbf{y}'\right]=\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M\mathbb{E}\left[y_l^* y_j\right]+2\sum_{i=1}^{M}\mathbb{E}\left[y_i^* y_i\right].\label{c5 term4 APA complete} \end{align} Let us first compute the quantity given by \begin{equation} \mathbb{E}\left[y_i^* y_i\right]=a_c^2\lvert\mathbf{h}_{i,*}\mathbf{p}_c\rvert^2+\sum_{l=1}^M a_l^2\lvert\mathbf{h}_{i,*}\mathbf{p}_l\rvert^2+\sigma_n^2.\label{c5 term 1 of term4} \end{equation} Additionally, we know that \begin{align} \mathbb{E}\left[y_i^* 
y_j\right]=&\mathbb{E}\left[\left(a_c s_c \mathbf{h}_{i,*}\mathbf{p}_c+\sum_{q=1}^{M}a_q s_q \mathbf{h}_{i,*}\mathbf{p}_q+ n_i\right)^*\right.\times\left.\left(a_c s_c \mathbf{h}_{j,*}\mathbf{p}_c+\sum_{l=1}^{M}a_l s_l \mathbf{h}_{j,*}\mathbf{p}_l+ n_j\right)\right],\nonumber\\ =&a_c^2\left(\mathbf{h}_{i,*}\mathbf{p}_c\right)^*\left(\mathbf{h}_{j,*}\mathbf{p}_c\right)+\sum_{l=1}^Ma_l^2\left(\mathbf{h}_{i,*}\mathbf{p}_l\right)^*\left(\mathbf{h}_{j,*}\mathbf{p}_l\right),\label{c5 term 2 of term4} \end{align} for all $i\neq j$. From the last equation, we have \begin{align} \sum_{i=1}^M\sum\limits_{\substack{j=1\\j\neq i}}^M\mathbb{E}\left[y_i^* y_j\right]=&\sum_{i=1}^M\sum\limits_{\substack{j=1\\j\neq i}}^M\left[a_c^2\left(\mathbf{h}_{i,*}\mathbf{p}_c\right)^*\left(\mathbf{h}_{j,*}\mathbf{p}_c\right)+\sum_{l=1}^Ma_l^2\left(\mathbf{h}_{i,*}\mathbf{p}_l\right)^*\left(\mathbf{h}_{j,*}\mathbf{p}_l\right)\right],\nonumber\\ =&\sum_{i=1}^M\sum\limits_{\substack{j=1\\j\neq i}}^Ma_c^2\left(\mathbf{h}_{i,*}\mathbf{p}_c\right)^*\left(\mathbf{h}_{j,*}\mathbf{p}_c\right)+\sum_{i=1}^M\sum\limits_{\substack{j=1\\j\neq i}}^M\sum_{l=1}^Ma_l^2\left(\mathbf{h}_{i,*}\mathbf{p}_l\right)^*\left(\mathbf{h}_{j,*}\mathbf{p}_l\right),\nonumber\\ =&\sum_{i=1}^M\sum\limits_{\substack{j=1\\j\neq i}}^Ma_c^2\phi^{\left(i,c\right)^*}\phi^{\left(j,c\right)}+\sum_{i=1}^M\sum\limits_{\substack{j=1\\j\neq i}}^M\sum_{l=1}^Ma_l^2\phi^{\left(i,l\right)^*}\phi^{\left(j,l\right)},\label{c5 term 2 of term4 extended} \end{align} where we define $\phi^{\left(i,c\right)}=\mathbf{h}_{i,*}\mathbf{p}_c$ and $\phi^{\left(i,l\right)}=\mathbf{h}_{i,*}\mathbf{p}_l$ for all $i,l \in \left[1,M\right]$.
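The reduction of $\mathbb{E}\left[y_i^* y_j\right]$ for $i\neq j$, in particular the fact that the noise contribution drops out and only the $\phi$ cross terms survive, can also be checked numerically. The sketch below uses arbitrary illustrative channel rows, precoders, and gains (hypothetical values, not taken from the system model), together with uncorrelated unit-variance symbols:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
# illustrative random channel rows h_{i,*} and precoders p_l, p_c (hypothetical values)
H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
P = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
p_c = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
a = rng.uniform(0.5, 1.0, M)        # private-stream gains a_l
a_c, sigma_n = 0.8, 0.3             # common-stream gain and noise standard deviation

phi = H @ P                         # phi[i, l] = h_{i,*} p_l
phi_c = H @ p_c                     # phi_c[i]  = h_{i,*} p_c

# uncorrelated unit-variance symbols and independent noise
N = 400_000
s = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
s_c = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = sigma_n * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

# received samples y[:, i] = a_c s_c h_{i,*} p_c + sum_l a_l s_l h_{i,*} p_l + n_i
y = a_c * s_c[:, None] * phi_c[None, :] + s @ (a[None, :] * phi).T + n

i, j = 0, 1
empirical = np.mean(np.conj(y[:, i]) * y[:, j])
# for i != j the noise drops out, leaving only the phi cross terms
analytic = a_c**2 * np.conj(phi_c[i]) * phi_c[j] + np.sum(a**2 * np.conj(phi[i]) * phi[j])
assert abs(empirical - analytic) < 0.15   # agreement up to Monte Carlo error
```

The same Monte Carlo setup reproduces the diagonal expression including the $\sigma_n^2$ term when $i=j$.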
Applying the property $\phi^{\left(i,l\right)^*}\phi^{\left(j,l\right)}+\phi^{\left(i,l\right)}\phi^{\left(j,l\right)^*}=2\Re\left[\phi^{\left(i,l\right)^*}\phi^{\left(j,l\right)}\right]$, we can simplify half of the sums from the triple summation, i.e., \begin{equation} \sum_{i=1}^M\sum\limits_{\substack{j=1\\j\neq i}}^M\sum_{l=1}^Ma_l^2\phi^{\left(i,l\right)^*}\phi^{\left(j,l\right)}=2\sum_{i=1}^{M-1}\sum_{q=i+1}^{M}\sum_{r=1}^M a_r^2\Re\left[\phi^{\left(i,r\right)^*}\phi^{\left(q,r\right)}\right].\label{c5 triple summation explained} \end{equation} The final step to obtain the last term of \eqref{mean square error terms for RS APA} is to employ \eqref{c5 term 1 of term4}, \eqref{c5 term 2 of term4 extended}, and \eqref{c5 triple summation explained} to compute the following quantities: \begin{equation} \sum_{i=1}^{M}\mathbb{E}\left[y_i^* y_i\right]=\sum_{i=1}^M a_c^2\lvert\mathbf{h}_{i,*}\mathbf{p}_c\rvert^2+\sum_{i=1}^M\sum_{j=1}^M a_j^2\lvert\mathbf{h}_{i,*}\mathbf{p}_j\rvert^2+M\sigma_n^2,\label{c5 term4 a} \end{equation} \begin{align} \sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M\mathbb{E}\left[y_l^* y_j\right]=&2\sum_{i=1}^{M-1}\sum_{q=i+1}^{M}\sum_{r=1}^M a_r^2\Re\left[\phi^{\left(i,r\right)^*}\phi^{\left(q,r\right)}\right]+\sum_{l=1}^M\sum\limits_{\substack{j=1\\j\neq l}}^M a_c^2\phi^{\left(l,c\right)^*}\phi^{\left(j,c\right)}.\label{c5 term4} \end{align} By using \eqref{c5 term4 a} and \eqref{c5 term4} in \eqref{c5 term4 APA complete} and then substituting \eqref{c5 term1}, \eqref{c5 term2}, \eqref{c5 term3}, and \eqref{c5 term4 APA complete} in \eqref{mean square error terms for RS APA}, we get the MSE in \eqref{mean square error APA RS}. \vspace{-1.5em} \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} The shear viscosity $\eta$ and bulk viscosity $\zeta$ of the hot quark-gluon plasma characterize the dissipation which occurs due to nonuniform flow, such as occurs in heavy ion collisions. They have been a topic of intense study for the last two decades. Experimental results \cite{STAR:2000ekf, PHENIX:2003qra, ALICE:2011ab, ATLAS:2014ndd, ALICE:2016kpq} suggest a small shear viscosity; indeed, based on the determined values of elliptic and higher-order flow as functions of momentum and impact parameter, the best extractions of the shear viscosity are in the range $1/(4\pi)<\eta/s<2/(4\pi)$ \cite{JETSCAPE:2020mzn}. This is close to the claimed lower bound on $\eta/s$ obtained from $\mathcal{N}=4$ supersymmetric Yang-Mills theory at strong coupling, which predicts $\eta/s = 1/(4\pi)$ \cite{Policastro:2001yc}. While leading-order weak-coupling calculations \cite{Arnold:2000dr,Arnold:2003zc}, extrapolated to the physical coupling strength, suggest a larger shear viscosity $\eta/s \sim 0.5$--$1$, the next-to-leading correction to this result at a physically interesting coupling and temperature reduces the tension, implying $\eta/s \sim 0.2$ \cite{Ghiglieri:2018dib}. The size of this difference implies that the perturbative series shows poor convergence. As for the bulk viscosity, its extraction from experiments shows that it is nonzero but somewhat smaller than the shear viscosity at temperatures of order 200 MeV \cite{JETSCAPE:2020mzn}. At higher temperatures we have a leading order perturbative calculation \cite{Arnold:2006fz} which shows that, for $0.06<\alpha_s<0.3$, $\zeta/s\sim 0.02\alpha_s^2$. That is, as the theory becomes more conformal at higher temperatures, the bulk viscosity is expected to become small, but it can nevertheless play a role at lower temperatures where QCD behaves strongly nonconformally. 
We want first-principles theoretical determinations of shear and bulk viscosity, to accompany the values extracted from experiment. The temperatures achieved in real-world heavy ion collisions are in a range where perturbation theory does not appear to be applicable, and so truly nonperturbative methods are needed. Our best first-principles nonperturbative tool is lattice gauge theory, which we will pursue in this work. Like previous literature, we will work within pure SU(3) gauge theory, but one focus of our work is to develop tools which will be straightforward to extend to the theory with dynamical quarks. The pioneering works \cite{Nakamura:2004sy, Meyer:2007ic,Meyer:2007dy} established the general approach for investigating shear viscosity via unequal Euclidean-time, zero space-momentum energy-momentum tensor correlation functions. More recent studies~\cite{Astrakhantsev:2017nrs,Astrakhantsev:2018oue} have extended this work to consider a range of temperatures. However, these works used rather coarse and small lattices, meaning that cutoff effects may be severe. Recently, a lattice calculation using the gradient flow method was conducted on a $64^3\times 16$ lattice \cite{Itou:2020azb}. In that work, the shear viscosity is extracted at finite flow time, making the results difficult to interpret \cite{Altenkort:2020fgs}. The standard way to investigate transport coefficients on the lattice is through Kubo formulas, which relate these coefficients to spectral functions, which in turn are related to Euclidean correlators through analytic continuation. The biggest challenge is that the energy-momentum tensor (EMT) correlators, from which the viscosities are extracted, are extremely noisy, such that a noise-reduction technique must be employed to obtain the necessary precision. 
In references \cite{Meyer:2007dy, Astrakhantsev:2018oue} the multi-level algorithm~\cite{Luscher:2001up} was used; in this work we instead make use of the gradient flow method~\cite{Luscher:2010iy,Luscher:2013cpa,Luscher:2010we,Narayanan:2006rf} and the blocking method~\cite{Altenkort:2021jbk} which we proposed recently. In comparison to multi-level algorithms, the gradient flow approach has the advantages that it is straightforward to apply to the full theory with dynamical quarks, and it helps with the problem of operator renormalization. This paves the way for a future study in full QCD. The signal is improved further via the blocking method, by up to a factor of 7 without additional computational cost, as we demonstrate in~\cite{Altenkort:2021jbk}. With these two methods we are able to achieve high precision for the desired correlators. Our lattice setup consists of 5 large and fine lattices, of which the coarsest one ($64^3\times 16$) is already as large as the finest lattice used in previous literature. The largest and finest lattice in our study is of size $144^3\times 36$ at $\beta=7.544$ ($a=0.0117~\mathrm{fm}$). With our setup, including such a fine lattice, the continuum extrapolation is well-behaved and, thanks to the large temporal extents of the underlying lattices, the results of the spectral reconstruction will be more reliable. In the following we start with the definition of the EMT under gradient flow and explain how the shear and bulk viscosities can be obtained from the EMT correlators. In Sec. III we give the lattice setup used in this study. Sec. IV is devoted to the non-perturbative renormalization of the EMT correlators. After a brief discussion of the temperature correction and tree-level improvement in Sec. V, we continue with the continuum and flow-time extrapolations in Sec. VI. In Sec. VII we focus on the extraction of the viscosities via spectral analysis and provide our estimates for them.
The conclusion is given in Sec. VIII. \section{Transport, energy momentum tensor, and gradient flow} \label{sec:gradflow} The fundamental object of our study is the energy-momentum tensor $T_{\mu\nu}$, defined as the Noether current of 4-translation symmetry (or equivalently as the variation of the action with respect to the spacetime metric). Shear viscosity is the response of $T_{ij}$ to shear flow, under which the traceless part of $\partial_i v_j$ is nonzero. Shear flow also couples to the energy-momentum tensor, so the Kubo relation describing the shear viscosity involves a correlation function of two traceless energy-momentum tensors: \begin{align} \label{kubo1} \eta(T) & =\lim_{\omega\rightarrow 0} \frac{\rho_{\rm{shear}}(\omega,T)}{\omega}, \\ \nonumber \rho_{\rm{shear}}(\omega,T) & = \frac{1}{10} \int \mathrm{d}^3 x \, \mathrm{d}t \, e^{i\omega t} \left\langle \left[ \pi_{ij}(x,t) \,,\, \pi_{ij}(0,0) \right] \right\rangle \,, \\ \pi_{ij} & = T_{ij} - \frac{1}{3} \delta_{ij} T_{kk} \,. \end{align} Similarly, bulk viscosity is the response of the trace of the energy-momentum tensor to a divergent fluid flow, which also couples to the trace of the energy-momentum tensor: \begin{align} \label{kubo2} \zeta(T) & = \frac{1}{9} \lim_{\omega \to 0} \frac{\rho_{\rm{bulk}}(\omega,T)}{\omega} , \\ \rho_{\rm{bulk}}(\omega,T) & = \int \mathrm{d}^3 x \, \mathrm{d}t \, e^{i\omega t} \left\langle \left[ T_{\mu\mu}(x,t) \,,\, T_{\nu\nu}(0,0) \right] \right\rangle . \end{align} Our approach will be to use analyticity to relate these spectral functions to the Euclidean, time-dependent correlation (still at zero momentum or equivalently with $\int \mathrm{d}^3 x$): \begin{align} \label{Gthetatau} G(\tau)=\int_0^{\infty}\frac{\mathrm{d}\omega}{\pi}\frac{\cosh[\omega(1/2T-\tau)]}{\sinh(\omega/2T)}\rho(\omega,T). \end{align} This expression can in principle be inverted to determine the spectral function, a task we will return to in Sec.~\ref{sec:spectrum}.
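To make Eq.~(\ref{Gthetatau}) concrete, the following sketch integrates the $\cosh/\sinh$ kernel against a hypothetical model spectral function with a hydrodynamic-like transport peak; the parameters $\eta$ and $\Gamma$ are purely illustrative, not fit results. Since the kernel is even about $\tau=1/(2T)$, the resulting Euclidean correlator satisfies $G(\tau)=G(1/T-\tau)$:

```python
import numpy as np

T = 1.0                        # temperature (sets the units)
eta, Gamma = 1.0, 2.0 * T      # illustrative transport-peak parameters, not fit values

def rho(w):
    """Model spectral function: rho ~ eta*w at small w, bounded at large w."""
    return eta * w * Gamma**2 / (w**2 + Gamma**2)

def G(tau, wmax=60.0 * T, n=400_000):
    """Euclidean correlator G(tau) from rho via the cosh/sinh kernel."""
    w = np.linspace(1e-8, wmax, n)
    kernel = np.cosh(w * (0.5 / T - tau)) / np.sinh(0.5 * w / T)
    return np.sum(rho(w) * kernel) * (w[1] - w[0]) / np.pi

# the kernel is even about tau = 1/(2T), hence G(tau) = G(1/T - tau)
g1, g2 = G(0.3 / T), G(0.7 / T)
assert g1 > 0 and abs(g1 - g2) < 1e-9 * g1
```

In the actual analysis the logic runs in the opposite direction: $G(\tau)$ is measured on the lattice and $\rho(\omega,T)$ must be reconstructed by inverting this relation.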
Here $G(\tau)$ is the Euclidean function associated with the respective spectral function, that is, \begin{align} \begin{split} &G_{\rm{shear}}(\tau)=\frac{1}{10} \int \mathrm{d}^3x\ \left\langle \pi_{ij}(0,\vec{0}) \: \pi_{ij}(\tau,\vec x) \right\rangle,\\ &G_{\rm{bulk}}(\tau)=\int \mathrm{d}^3x\ \left\langle T_{\mu\mu}(0,\vec{0}) \: T_{\mu\mu}(\tau,\vec{x})\right\rangle. \label{eq:Gshearbulk} \end{split} \end{align} Our main task will be evaluating the continuum limit of these correlation functions precisely. There are two principal challenges when treating energy-momentum tensor correlations on the lattice: the correlations are very noisy, and because of the lack of continuous translation symmetry on the lattice, there is no obvious choice for the energy-momentum tensor operator. In particular, different components of $\pi_{ij}$ renormalize differently, which presents a challenge. Both problems are ameliorated if we utilize gradient flow to generate our energy-momentum operators. Gradient flow is defined as the iterative replacement of the gauge fields $A_\mu(x)$ with fields containing fewer UV fluctuations, $B_\mu(x,\tau_{\mathrm{F}})$, through the definitions \cite{Luscher:2010iy} \begin{align} \label{flow_equation} B_\nu(x,\tau_{\mathrm{F}}=0) & = A_\nu(x) \,, \nonumber \\ \dot{B}_{\mu} & = D_{\nu}G_{\nu\mu}, \nonumber \\ G_{\mu\nu} & = \partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}+[B_{\mu},B_{\nu}], \nonumber \\ D_{\mu} & = \partial_{\mu}+[B_{\mu},\cdot]. \end{align} That is, at $\tau_{\mathrm{F}}=0$ the flowed field is the nonflowed field, but the field then evolves under a covariant heat equation which iteratively removes the most UV fluctuations of the field. Using the flowed field to construct operators such as the energy-momentum tensor leads to operators with well behaved renormalization properties and improved rotational invariance.
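At leading order (free field, no gauge structure) the flow equation reduces to a heat equation, i.e., a Gaussian filter $e^{-p^2\tau_{\mathrm{F}}}$ in momentum space. A one-dimensional toy sketch of the resulting damping of UV modes, with an illustrative lattice size and flow time:

```python
import numpy as np

rng = np.random.default_rng(1)
N, a = 256, 1.0                     # toy 1D periodic lattice with spacing a
A = rng.standard_normal(N)          # stand-in field with white-noise fluctuations

p = 2.0 * np.pi * np.fft.fftfreq(N, d=a)   # lattice momenta
tauF = 2.0                                 # flow time: smearing radius sqrt(8*tauF) ~ 4a

# leading-order flow = heat equation = Gaussian filter exp(-p^2 tauF) in momentum space
B = np.fft.ifft(np.exp(-p**2 * tauF) * np.fft.fft(A)).real

Ak, Bk = np.abs(np.fft.fft(A)), np.abs(np.fft.fft(B))
assert np.isclose(Ak[0], Bk[0])            # zero mode untouched
assert Bk[N // 2] < 1e-3 * Ak[N // 2]      # cutoff mode p = pi/a strongly damped
```

The zero mode is untouched while the mode at the lattice cutoff $p=\pi/a$ is suppressed by many orders of magnitude; this is the mechanism behind both the noise reduction and the improved renormalization behavior of flowed operators.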
In terms of the gradient-flowed field, we define the gradient-flowed squared field strength operator and the traceless tensor operator as: \begin{equation} \label{E_U} \begin{split} E(\tau_{\mathrm{F}},x)&=\frac{1}{4}F^a_{\rho\sigma}(x,\tau_{\mathrm{F}})F^a_{\rho\sigma}(x,\tau_{\mathrm{F}}),\\ U_{\mu\nu}(x,\tau_{\mathrm{F}})&=F^a_{\mu\rho}(x,\tau_{\mathrm{F}})F^a_{\nu\rho}(x,\tau_{\mathrm{F}})-\delta_{\mu\nu}E(\tau_{\mathrm{F}},x). \end{split} \end{equation} On the lattice we will use the clover definition of the field strength. The energy-momentum tensor can then be written in terms of these two operators and two not-yet-known coefficients as: ~\cite{Suzuki:2013gza} \begin{equation} \label{EMT_flow} T_{\mu\nu}(\tau_{\mathrm{F}},x)=c_1(\tau_{\mathrm{F}}) U_{\mu\nu}(\tau_{\mathrm{F}},x)+4c_2(\tau_{\mathrm{F}})\delta_{\mu\nu}E(\tau_{\mathrm{F}},x). \end{equation} Here $c_1$, $c_2$ are the coefficients on the traceless and pure-trace parts of the tensor, respectively. Arguably one should perform a vacuum subtraction from $E(\tau_{\mathrm{F}},x)$, but in practice we always compute connected correlation functions, which implements such a subtraction automatically. There are two approaches to determining the coefficients $c_1,c_2(\tau_{\mathrm{F}})$. Suzuki has determined them up to 2-loop and 3-loop order in the $\overline{\mathrm{MS}}$-scheme~\cite{Suzuki:2021tlr}: \begin{equation} \label{coeff_UE} \begin{split} &c_1^{\text{(N${}^2$LO)}}(\tau_{\mathrm{F}})=\frac{1}{g^2(\mu)} \sum_{n=0}^{2} k_1^{(n)}(L(\mu,\tau_{\mathrm{F}})) \Big{[} \frac{g^2(\mu)}{(4 \pi)^2 } \Big{]}^n ,\\ &c_2^{\text{(N${}^3$LO)}}(\tau_{\mathrm{F}})=\frac{1}{g^2(\mu)} \sum_{n=1}^{4} k_2^{(n)}(L(\mu,\tau_{\mathrm{F}})) \Big{[} \frac{g^2(\mu)}{(4 \pi)^2 } \Big{]}^n, \end{split} \end{equation} where the coefficients $k_1^{(n)}$, $k_2^{(n)}$ can be found in \cite{Iritani:2018idk, Suzuki:2021tlr}. 
Here $L(\mu,\tau_{\mathrm{F}})\equiv \log(2\mu^2e^{\gamma_E}\tau_{\mathrm{F}})$ and the running coupling can be evaluated in the $\overline{\mathrm{MS}}$-scheme at scale $\mu=1/\sqrt{8\tau_{\mathrm{F}}}$~\cite{Harlander:2016vzb}. We will use this result for $c_2$, but for $c_1$ we will perform a nonperturbative renormalization on the lattice in Sec.~\ref{sec_renormalization}, based on ideas developed by Giusti and Pepe \cite{Giusti:2015daa}. According to the small-flow-time expansion~\cite{Luscher:2011bx}, any composite operator at finite flow time can be expressed as a superposition of renormalized operators with finite, flow-dependent coefficients~\cite{DelDebbio:2013zaa}. That is, one can expand our stress tensor operator in an operator product expansion, where the first term is the desired stress tensor and higher terms represent various higher dimension operators with coefficients containing positive powers of $\tau_{\mathrm{F}}$. Therefore, one expects that the correlation functions we evaluate, at separation $\tau$, correspond to the correct correlation functions, plus corrections which appear as a series expansion in $(\tau_{\mathrm{F}}/\tau^2)$. Determining the desired correlation function therefore requires an extrapolation to $\tau_{\mathrm{F}} \to 0$ to eliminate the effects of these high-dimension contaminants. Only some finite range of $\tau_{\mathrm{F}}$ values will actually be useful in this extrapolation; larger values of $\tau_{\mathrm{F}}$, such that $(\tau_{\mathrm{F}}/\tau^2)$ is not small, will be outside of the range where an extrapolation is possible. Solving Eq.~(\ref{flow_equation}) perturbatively suggests that the flow smears the gauge field with a radius $r \simeq \sqrt{8\tau_{\mathrm{F}}}$~\cite{Luscher:2010iy}.
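The $\tau_{\mathrm{F}}\to 0$ extrapolation can be sketched as a linear fit in $\tau_{\mathrm{F}}/\tau^2$ over the usable window; the correlator value, the size of the linear artifact, and the noise level below are invented for illustration:

```python
import numpy as np

tau = 0.25                            # fixed separation (illustrative units)
tauF = np.linspace(0.002, 0.006, 10)  # hypothetical usable flow-time window

# fake correlator data: true value plus an artifact linear in tauF/tau^2 plus noise
G_true, slope = 3.0, 40.0
rng = np.random.default_rng(2)
G_data = G_true + slope * (tauF / tau**2) + 0.001 * rng.standard_normal(tauF.size)

# linear fit in tauF/tau^2; the intercept is the tauF -> 0 extrapolation
slope_fit, G_extrapolated = np.polyfit(tauF / tau**2, G_data, deg=1)
assert abs(G_extrapolated - G_true) < 0.05
```

Only points with sufficiently small $\tau_{\mathrm{F}}/\tau^2$, i.e., with smearing radius $\sqrt{8\tau_{\mathrm{F}}}$ well below the separation $\tau$, should enter such a fit.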
In general this radius should be larger than one lattice spacing to suppress the lattice effects and noise, and at the same time smaller than half the lattice extent so that the flow radius does not interact with the lattice periodicity. For a specific operator there can be further constraints on the flow radius. How much flow can be applied and what Ansatz should be used for the $\tau_{\mathrm{F}}\rightarrow 0$ extrapolation will be discussed in a later section. \section{Lattice Setup} \label{sec:latt} Our lattice calculations are carried out in SU(3) Yang-Mills theory in 4-dimensional spacetime with periodic boundary conditions for all directions. We summarize the settings in Tab.~\ref{tab:lattice_setup}. The gauge configurations are generated using the standard Wilson gauge action on 5 large, fine, isotropic lattices. On each lattice we generate 10000 configurations. To ensure the gauge fields are fully thermalized the first 4000 sweeps (each consisting of one heatbath and four overrelaxation steps) are discarded. In the sampling procedure the configurations are stored after every 500 sweeps. We have confirmed that this removes the autocorrelations in the observables. All the lattices are set to the same temperature $\sim 1.5T_c$ by tuning the $\beta$ value. The scale is set via the Sommer parameter $r_0$~\cite{Sommer:1993ce} with the state-of-the-art value $r_0 T_c = 0.7457$~\cite{Francis:2015lha}. The parametrization form needed in scale setting is taken from~\cite{Francis:2015lha} with updated coefficients from~\cite{Burnier:2017bod}.
\begin{table}[htb] \centering \begin{tabular}{ccrccccc} \hline \hline $a$ (fm) & $a^{-1}$ (GeV) & $N_{\sigma}$ & $n_{\sigma}$ & $N_{\tau}$ & $\beta$ & $T/T_{c}$ & \#conf.\tabularnewline \hline 0.0262 & 7.534 & 64 & 4 & 16 & 6.8736 & 1.5104 & 10000 \tabularnewline 0.0215 & 9.187 & 80 & 4 & 20 & 7.0350 & 1.4734 & 10000 \tabularnewline 0.0178 & 11.11 & 96 & 4 & 24 & 7.1920 & 1.4848 & 10000 \tabularnewline 0.0140 & 14.14 & 120 & 6 & 30 & 7.3940 & 1.5118 & 10000 \tabularnewline 0.0117 & 16.88 & 144 & 8 & 36 & 7.5440 & 1.5042 & 10000 \tabularnewline \hline \hline \end{tabular} \caption{ $\beta$ values, lattice spacings, lattice sizes, blocking bin size $n_{\sigma}$ and number of configurations in this study. \label{tab:lattice_setup}} \end{table} We use the clover definition of the energy-momentum tensor appearing in Eq.~(\ref{E_U}). The gradient flow is a Symanzik improved version~\cite{Ramos:2015baa}. We measure the EMT correlators at 140 discrete flow times in the range $\sqrt{8\tau_{\mathrm{F}}}T\in \lbrace 0.004, \dots, 0.375\rbrace$ using an adaptive step-size method. In this method the step size is updated after each integration step such that the error in the integration does not exceed a certain tolerance \cite{Fritzsch:2013je}. The bin size used in the blocking method is given as $n_{\sigma}$ in Tab.~\ref{tab:lattice_setup}. \section{Renormalization} \label{sec_renormalization} \begin{figure}[tbh] \centerline{\includegraphics[width=0.5\textwidth]{./figures/coupling_filled_sameT_144x36.pdf}} \caption{$c_1$ measured at several higher temperatures at $\beta=7.544$ and their weighted average. The vertical bars indicate the flow depth where each $N_\tau$ choice is expected to become unreliable. } \label{fig_ZU_highT} \end{figure} In this section we describe how we determine the renormalization constants appearing in Eq.~(\ref{EMT_flow}). We determine the constant $c_1$ using a method inspired by the work of Giusti and Pepe \cite{Giusti:2015daa}. 
Namely, we observe that the enthalpy density is given by \begin{equation} \label{entropy} \langle \epsilon+P \rangle_{\tau_{\mathrm{F}}}= c_1(\tau_{\mathrm{F}}) \left\langle \frac{1}{3}U_{ii}(\tau_{\mathrm{F}})- U_{00}(\tau_{\mathrm{F}})\right\rangle , \end{equation} where ``0'' denotes the time direction. Since $\epsilon + P$ has been measured at the sub-percent level \cite{Giusti:2016iqr}, we can determine $c_1$ through the ratio $c_1(\tau_{\mathrm{F}})=\langle \epsilon+P \rangle_{\tau_{\mathrm{F}}}/\langle \frac{1}{3}U_{ii}(\tau_{\mathrm{F}})- U_{00}(\tau_{\mathrm{F}})\rangle$. Note that in principle the diagonal and off-diagonal parts of $U_{\mu\nu}(\tau_{\mathrm{F}})$ renormalize differently as they are not related by hypercubic symmetry. However, under gradient flow rotational symmetry is restored up to $a^2/\tau_{\mathrm{F}}$ corrections. Therefore the diagonal and off-diagonal parts renormalize with the same coefficient up to $a^2/\tau_{\mathrm{F}}$ corrections, which should vanish when we take the continuum limit at fixed $\tau_{\mathrm{F}}$. Unfortunately the enthalpy density is proportional to $T^4$ and therefore to $N_{\tau}^{-4}$, which leads to a poor signal-to-noise ratio for the finest lattices. We overcome this limitation by measuring $\epsilon+P$ at a range of $N_\tau$ values, not just the ones given in Tab.~\ref{tab:lattice_setup}. This is possible because the renormalization constant $c_1$ depends on the lattice spacing but not on the temperature. However, after enough gradient flow, the gradient flow radius starts to interact with the periodicity radius and the result becomes contaminated and unreliable.
A leading-order perturbative estimate of this effect is that \cite{Eller:2018yje} \begin{align} \label{Ellerparam} \frac{\langle \frac{1}{3}U_{ii}- U_{00}\rangle_{\mathrm{flowed}}}{(\frac{1}{3}U_{ii}- U_{00})_{\mathrm{true}}} & = 1 - \frac{180}{\pi^4} e^{-1/x} \left( 1 + \frac{1}{x} + \frac{1}{2x^2}\right), \nonumber \\ \mbox{with} \hspace{2em} x & = 8\tau_{\mathrm{F}} T^2 \,. \end{align} \begin{figure*}[tbh] \centerline{\includegraphics[width=0.5\textwidth]{./figures/combine_ZU.pdf}\includegraphics[width=0.5\textwidth]{./figures/combine_ZU_ratio.pdf}} \caption{Left: combined $c_1$ at different lattice spacings. Right: the ratio of $c_1/c_1(\beta=7.793)$. The error in the estimation of $c_1(\beta=7.793)$ is not included in the ratio. The temperature $T$ in the legends $aT$ and $\tau_{\mathrm{F}} T^2$ has been fixed to 1.5$T_c$. } \label{final_c1} \end{figure*} We illustrate the method, and the effect of the different $N_\tau$ choices, in Fig.~\ref{fig_ZU_highT}, which shows $c_1$ for our finest lattice at different temperatures. It can be seen that, at very small flow times, the $c_1$ measurements from different temperatures agree with each other, with smaller statistical errors for the smaller $N_\tau$ values. With increasing flow time, the higher-temperature $c_1$ values start to deviate from the lower ones. The point where Eq.~(\ref{Ellerparam}) implies a 1\% correction is marked for each $N_\tau$ value by a vertical bar, and it corresponds well with the flow time value where a given lattice starts to deviate clearly from the larger-$N_\tau$ lattices. Our final estimate for $c_1$ will be based on a weighted average of the value determined from each $N_\tau$ value we explored. The weight is determined as $1/(\sigma_{\textrm{stat}}+\sigma_{\textrm{syst}})^2$ where $\sigma_{\textrm{stat}}$ is the statistical uncertainty from the lattice data and $\sigma_{\textrm{syst}}$ is the systematic shift as determined from Eq.~(\ref{Ellerparam}).
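Both ingredients of this combination are simple to sketch numerically: the flow time at which the leading-order artifact of Eq.~(\ref{Ellerparam}) reaches 1\% for a given $N_\tau$, and the $1/(\sigma_{\textrm{stat}}+\sigma_{\textrm{syst}})^2$-weighted average. The $c_1$ values and uncertainties below are toy numbers, not our measurements:

```python
import numpy as np

def flow_artifact(tauF, T):
    """Relative finite-temperature artifact of the flowed enthalpy operator,
    leading-order parametrization from Eq. (Ellerparam)."""
    x = 8.0 * tauF * T**2
    return (180.0 / np.pi**4) * np.exp(-1.0 / x) * (1.0 + 1.0 / x + 0.5 / x**2)

def one_percent_flow_time(Ntau):
    """Flow time (in lattice units, aT = 1/Ntau) where the artifact reaches 1%."""
    grid = np.linspace(1e-4, 50.0, 500_000)
    corr = flow_artifact(grid, 1.0 / Ntau)   # monotonically increasing in tauF
    return grid[np.searchsorted(corr, 0.01)]

# smaller Ntau (higher temperature) becomes unreliable at smaller flow time
assert one_percent_flow_time(12) < one_percent_flow_time(24)

# weighted combination of c1 estimates from several Ntau (toy numbers)
c1 = np.array([0.145, 0.147, 0.146])
sigma_stat = np.array([0.001, 0.002, 0.004])
sigma_syst = np.array([0.003, 0.001, 0.0005])
w = 1.0 / (sigma_stat + sigma_syst)**2
c1_combined = np.sum(w * c1) / np.sum(w)
assert c1.min() <= c1_combined <= c1.max()
```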
The averaged $c_1$ is the black curve labeled ``combined'' in Fig.~\ref{fig_ZU_highT}. \begin{table}[ht] \centering \begin{tabular}{ccccc} \hline \hline $\beta$ & $a$[fm]($a^{-1}$[GeV]) & $N^h_\tau$ & $N^h_{\sigma}$ & $\#$ confs\\ \hline 6.8736 & 0.0262 (7.534) & 12 & 64 & 1000\\ \hline \multirow{2}{*}{7.0350} &\multirow{2}{*}{0.0215 (9.187)} & 10 & 80 & 1000\\ & & 14 & 80 & 1000\\ \hline \multirow{2}{*}{7.1920} & \multirow{2}{*}{0.0178 (11.11)} & 12 & 96 & 1000\\ & & 18 & 96 & 1000\\ \hline \multirow{2}{*}{7.3940} & \multirow{2}{*}{0.0140 (14.14)} & 10 & 120 & 1000\\ & & 16 &120 & 1000\\ \hline \multirow{3}{*}{7.5440} & \multirow{3}{*}{0.0117 (16.88)} & 12 & 140 & 1000\\ & & 18 &120 & 1000\\ & & 24 & 120 & 1000\\ \hline \multirow{3}{*}{7.7930} & \multirow{3}{*}{0.0087 (22.78)} & 12 & 144 & 500\\ & & 24 &144 & 500\\ & & 48 & 192 & 700\\ \hline \hline \end{tabular} \caption{The lattices with smaller temporal extents for the determination of $c_1$.} \label{tab:highT} \end{table} We repeat this procedure for the other lattice spacings and summarize the final $c_1$ in Fig.~\ref{final_c1}. Let us now focus on the small flow-time region, to establish how much flow time is enough to eliminate lattice spacing effects. We have added one more, still finer lattice ($\beta=7.793$, with $N_\tau=48$ when $T/T_c = 1.5$) so that we can compare to a still more continuum-like case. We can see that lattice cutoff effects are suppressed at large flow times but at small flow times they are noticeable. To see down to which flow time $c_1$ is free of lattice cutoff effects, we plot the ratio $c_1/c_1(\beta=7.793)$ in the right panel. In order to see more clearly how the different lattice spacings differ from each other, we have plotted error bars based only on the statistical errors in the coarser lattices -- that is, statistical errors in the $\beta=7.793$ lattice are treated as a common systematic error in the right plot.
The figure shows that the lattices give compatible $c_1$ values as long as the flow time is large enough; but each lattice starts to deviate at a flow time such that $\tau_{\mathrm{F}}/a^2$ becomes order-1. Specifically, in every case the deviation from continuum behavior reaches 2\% when $\tau_{\mathrm{F}} \simeq 0.4 a^2$. The deviation rapidly becomes more severe below this point. This deviation from continuum behavior indicates that the applied gradient flow is not sufficient to supply a continuum-like, well-renormalized stress tensor operator. Since the statistical precision of our EMT correlator data is typically around 2\% and since we want to keep systematic effects smaller than this, we will impose the condition $\tau_{\mathrm{F}} \geq 0.4 a^2$ when we perform the double extrapolation of shear correlators in the next section. \begin{table}[htb] \centering \begin{tabular}{ccrccccc} \hline \hline $a$ (fm) & $a^{-1}$ (GeV) & $N_{\sigma}$ & $N_{\tau}$ & $\beta$ & $T/T_{c}$ & \#conf.\tabularnewline \hline 0.0262 & 7.534 & 64 & 64 & 6.8736 & 0.3776 & 1000 \tabularnewline 0.0215 & 9.187 & 80 & 80 & 7.0350 & 0.3684 & 1000 \tabularnewline 0.0178 & 11.11 & 96 & 96 & 7.1920 & 0.3712 & 1000 \tabularnewline 0.0140 & 14.14 & 96 & 120 & 7.3940 & 0.3780 & 1000 \tabularnewline 0.0117 & 16.88 & 96 & 144 & 7.5440 & 0.3761 & 1000 \tabularnewline \hline \hline \end{tabular} \caption{ The lattices at $T<T_c$ for the study of $c_2$.} \label{tab_zeroT} \end{table} \begin{figure}[htb] \centerline{\includegraphics[width=0.5\textwidth]{./figures/c2.pdf} } \caption{$c_2$ measured at $T<T_c$ at different lattice spacings. } \label{fig_c2} \end{figure} Now we calculate $c_2$. According to Eq.~(\ref{coeff_UE}), the running coupling in the $\overline{\mathrm{MS}}$-scheme is needed. For that we first calculate the coupling in the gradient flow scheme and then convert it to the $\overline{\mathrm{MS}}$-scheme. 
In the gradient flow scheme the running coupling can be calculated as \cite{Fodor:2012td,Hasenfratz:2019hpg} \begin{align} \label{g2flow} g_{\mathrm{flow}}^2=\frac{128\pi^2}{3(N_c^2-1)}\frac{1}{1+\delta(\tau_{\mathrm{F}})}\langle \tau_{\mathrm{F}}^2 E\rangle, \end{align} where $N_c=3$ and $E$ is the energy density defined in Eq.~(\ref{E_U}). $\delta(\tau_{\mathrm{F}})$ can be found in \cite{Fodor:2012td,Hasenfratz:2019hpg} as well. Note that the energy density should be measured at zero temperature. On the lattice we take large temporal extents to suppress the thermal effects. The lattices used to study this quantity are given in Tab.~\ref{tab_zeroT}. Because of the high computational cost, the two finest lattices have smaller aspect ratios. However, based on the three coarse lattices, we have seen that finite volume effects are small compared to the statistical error of the correlators. After obtaining $\tau_{\mathrm{F}}^2E$ in the gradient flow scheme, we can relate it to the one in the $\overline{\mathrm{MS}}$-scheme \cite{Harlander:2016vzb}. This requires solving a cubic equation, whose solution gives the running coupling in the $\overline{\mathrm{MS}}$-scheme. Inserting this into Eq.~(\ref{coeff_UE}), we get the final $c_2$ shown in Fig.~\ref{fig_c2}. The errors are not visible as they are tiny and in every case much smaller than 1\%. We can see that unlike $c_1$, the difference of $c_2$ among different lattice spacings is always small. The ratio $c_2/c_2(\beta=7.544)$ is always smaller than 1\% at all flow times, suggesting that the cutoff effects can be ignored for $c_2$. \section{Large separations and noise reduction} Evaluating Eq.~(\ref{eq:Gshearbulk}) involves computing a correlator with an integral over all values of the spatial separation. To improve the signal-to-noise ratio, in practice one evaluates $\int \mathrm{d}^3 x \mathrm{d}^3 y \, \mathrm{d}t \langle T(x,\tau+t) T(y,t) \rangle$, that is, one performs an integral over the coordinates of each operator.
The correlation function is dominated by small values of coordinate difference $|x-y|$. However, the fluctuations in the correlator, and therefore the noise, are approximately separation-independent. Therefore the inclusion of large separations makes the evaluation noisy without contributing meaningfully to the signal. In Ref.\cite{Altenkort:2021jbk} we proposed a way to reduce these noise contributions. The operator of interest ($T_{\mu\mu}$ or a component of $\pi_{ij}$) is first summed over small volumes called blocks, on a single $\tau$ sheet but with a cubic space extent given in Tab.~\ref{tab:lattice_setup}. We evaluate all block-to-block correlators and then average all correlators which have the same temporal and block-center spatial separation. Finally, we examine how the correlation function varies with the space separation between blocks, replacing the large-separation, small-signal values with an asymptotic fit as described in \cite{Altenkort:2021jbk}. Each index combination of the $\langle \pi_{ij}(x,\tau) \pi_{ij}(y,0)\rangle$ correlator has a distinctive angular structure as a function of the direction of the $\vec x - \vec y$ vector. For instance, from reflection positivity we know that $\langle T_{xy}(\vec r) T_{xy}(0)\rangle < 0$ for $\vec r$ pointing along the $x$-axis or $y$-axis, but it is positive if $\vec r$ points along the $z$-axis or the line $x=y$. In contrast, the $\langle (T_{xx}-T_{yy})(\vec r) (T_{xx}-T_{yy})(0)\rangle$ correlator is positive along each lattice axis but is negative along the $x=y$ line. In our blocking procedure, certain block separations primarily sample blocks which are separated along lattice axes, while others sample the directions along lattice diagonals or other combinations. Therefore, $T_{xy}T_{xy}$-type correlators will be larger for some blocks and smaller for others, while $T_{xx}-T_{yy}$-type correlators will show the opposite trend. 
Including a single component or a subset of possible components leads to a correlation function which varies strongly with separation-direction and therefore jumps up and down as a function of the block separation. This effect goes away if we include all traceless $ij$ combinations, which is therefore obligatory. We illustrate this in Figure \ref{fig:TTjump}, which shows the $\pi\pi$ correlation function as a function of block-separation. \begin{figure}[tbh] \includegraphics[width=0.5\textwidth]{./figures/radial_dependence.pdf} \caption{Traceless spatial stress tensor (shear-channel) correlator between lattice blocks, at a fixed temporal separation, as a function of box separation. The black points contain all traceless stress tensor components, while the red data points contain only the diagonal-type contributions. Some block separations only occur along lattice axes where the diagonal-type contributions are largest, while other block separations occur along lattice diagonals where some diagonal-type contributions are negative. Hence, the red points jump around, while the black points follow a smooth curve until the statistical errors become large. \label{fig:TTjump}} \end{figure} On the lattice, the renormalization constant $c_1$ can be different for $T_{xx}-T_{yy}$-type operators than for $T_{xy}$-type operators, because the rotational symmetry which relates them in the continuum is absent on the lattice \cite{Caracciolo:1989pt,Caracciolo:1991cp,Giusti:2015daa}. Unfortunately, we only know the renormalization coefficient for the diagonal-type operators. However, for an operator generated using gradient flow, the difference between diagonal-type and off-diagonal-type renormalization constants should be suppressed by $\mathcal{O}(a^2/\tau_{\mathrm{F}})$. Therefore any effects from this difference should be removed in our fixed-$\tau_{\mathrm{F}}$ continuum limit.
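The blocking procedure itself can be sketched on a single time slice; the lattice size, block size, and the random field standing in for the summed $\pi_{ij}$ components below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
L, nb = 12, 4                        # toy spatial extent and cubic block size
B = L // nb                          # number of blocks per direction
op = rng.standard_normal((L, L, L))  # stand-in for the operator on one time slice

# sum the operator over cubic blocks of size nb^3
blocks = op.reshape(B, nb, B, nb, B, nb).sum(axis=(1, 3, 5))

# average block-to-block products over all pairs with the same (periodic) separation
corr = {}
for dx in range(B):
    for dy in range(B):
        for dz in range(B):
            shifted = np.roll(blocks, (-dx, -dy, -dz), axis=(0, 1, 2))
            r = nb * np.linalg.norm([min(d, B - d) for d in (dx, dy, dz)])
            corr.setdefault(round(r, 6), []).append(np.mean(blocks * shifted))
corr = {r: np.mean(v) for r, v in corr.items()}

assert corr[0.0] > 0                 # zero separation: positive block variance
assert len(corr) == 4                # separations 0, nb, nb*sqrt(2), nb*sqrt(3)
```

In practice the block operator contains all traceless index combinations, and the noisy large-separation tail of the averaged correlator is replaced by the fitted Ansatz described next.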
In order to remove the large-separation data and therefore its noise, it is necessary to fit the large-separation tail to a physically-motivated Ansatz. The fitted value is then used instead of the data at those separations where the block-by-block signal to noise is poor. For our Ansatz we will use the leading-order perturbative behavior of the correlation function, accounting for time periodicity, gradient flow, and our blocking procedure. In vacuum, the leading-order correlator of two field strength tensors is \begin{align} \label{vacFF} &\langle F^a_{\mu\nu}(r) F^b_{\alpha\beta}(0) \rangle = \frac{g^2 \delta_{ab}}{\pi^2 r^4} \left[ \delta_{\mu\alpha} \delta_{\nu\beta} - \delta_{\mu\beta} \delta_{\nu\alpha} \vphantom{\frac{2}{r^2}} \right. \nonumber \\ & \left. - \frac{2}{r^2} \left( r_\mu r_\alpha \delta_{\nu\beta} - r_\mu r_\beta \delta_{\nu\alpha} - r_\nu r_\alpha \delta_{\mu\beta} + r_\nu r_\beta \delta_{\mu\alpha} \right) \right]. \end{align} Applying gradient flow to a depth $\tau_{\mathrm{F}}$ modifies this expression to \cite{Eller:2018yje}: \begin{align} \label{vacFFflow} &\langle G^a_{\mu\nu}(r) G^b_{\alpha\beta}(0) \rangle_{\tau_{\mathrm{F}}} = \frac{g^2 \delta_{ab}}{\pi^2 r^4} \left[ A(r,\tau_{\mathrm{F}}) \left( \delta_{\mu\alpha} \delta_{\nu\beta} - \delta_{\mu\beta} \delta_{\nu\alpha} \right) \vphantom{\frac{2}{r^2}} \right. \nonumber \\ & \left. + \frac{B(r,\tau_{\mathrm{F}})}{r^2} \left( r_\mu r_\alpha \delta_{\nu\beta} {-} r_\mu r_\beta \delta_{\nu\alpha} {-} r_\nu r_\alpha \delta_{\mu\beta} {+} r_\nu r_\beta \delta_{\mu\alpha} \right) \right], \\ & A(r,\tau_{\mathrm{F}}) = 1 - \left(1+\frac{r^2}{8\tau_{\mathrm{F}}} \right) e^{-r^2/8\tau_{\mathrm{F}}} \,, \\ & B(r,\tau_{\mathrm{F}}) = -2 + \left[ 2 - 2 \frac{r^2}{8\tau_{\mathrm{F}}} + \left(\frac{r^2}{8\tau_{\mathrm{F}}} \right)^2\right] e^{-r^2/8\tau_{\mathrm{F}}} . 
\end{align} Note that this is a continuum, not lattice, expression; but when $\tau_{\mathrm{F}} /a^2 > 0.5$, the lattice-continuum difference for flowed correlators is small, and the use of a continuum limit at fixed flow depth based only on data which satisfies this criterion should avoid the need to include lattice spacing corrections as well. \begin{figure*}[tbh] \centerline{\includegraphics[width=0.5\textwidth]{./figures/G1G2Shear.pdf} \includegraphics[width=0.5\textwidth]{./figures/G1G2Bulk.pdf} } \caption{The temperature correction when going from $\beta_1=7.035$ to $\beta_2=7.0767$ on an $80^3\times 20$ lattice, for shear (left) and bulk (right). Each data point is found at the maximum flow time used in the flow-time extrapolation (see Sec.~\ref{sec:extrapolate}). } \label{G1G2} \end{figure*} Using these expressions, at finite $\tau_{\mathrm{F}},\tau,|\vec r|$ and with periodic boundaries in the time direction, the leading-order stress tensor correlators relevant for shear viscosity (summed over all transverse-traceless elements $\hat T_{ij} = T_{ij} - \delta_{ij} T_{kk}/3$) and for bulk viscosity are: \begin{widetext} \begin{align} \langle \hat T_{ij}(\vec r,\tau) \hat T_{ij}(0,0)\rangle_{\tau_{\mathrm{F}}} \propto \sum_{n_1,n_2 \in \mathcal{Z}} & \frac{A(r_1) A(r_2)}{r_1^4 r_2^4} + \frac{A(r_1) B(r_2) + A(r_2) B(r_1)}{2 r_1^4 r_2^4} \nonumber \\ & + \frac{B(r_1) B(r_2)}{6r_1^6 r_2^6} \left( 3 (r_1\cdot r_2)^2 + \vec{r}^2 \left[ r_1^2+ r_2^2 - 4r_1\cdot r_2 + \frac{4}{5} \vec{r}^2 \right]\right), \\ \langle T_{\mu\mu}(\vec r,\tau) T_{\nu\nu}(0,0) \rangle_{\tau_{\mathrm{F}}} \propto \sum_{n_1,n_2 \in \mathcal{Z}} & \frac{A(r_1) A(r_2)}{r_1^4 r_2^4} + \frac{A(r_1) B(r_2) + A(r_2) B(r_1)}{2r_1^4 r_2^4} \nonumber \\ & + \frac{B(r_1) B(r_2)}{6r_1^6 r_2^6} \left( 2 (r_1\cdot r_2)^2 + r_1^2 r_2^2 \right) \,, \end{align} where $r_1=(\tau+n_1 \beta,\vec r)$ and $r_2=(\tau+n_2 \beta,\vec r)$ are the 4-displacements with the temporal displacement shifted by independent
integer multiples of the inverse temperature $\beta$. \end{widetext} \section{Temperature correction and tree level improvement} \label{sec:improve} From Tab.~\ref{tab:lattice_setup} it can be seen that the temperatures are not exactly $1.5T_c$ on all lattices. This setup was adopted for historical reasons \cite{Francis:2015daa, Ding:2021ise}, and the deviations in temperature were only discovered after the correlators had been measured. The temperature differences, though small, must be accounted for when performing a continuum extrapolation. Because the temperature differences are small and the lattices are fine enough that the continuum extrapolation is not very severe, we content ourselves with evaluating the temperature dependence at the linearized level and at a single lattice spacing. We then assume that the established temperature correction also applies at the other lattice spacings. We choose to perform the linear temperature-dependence analysis on the lattice which has the largest deviation from $T=1.5T_c$, namely the $80^3\times 20$ lattice with $\beta\equiv \beta_1 = 7.035$ and $T = 1.4734 T_c$. For this lattice, we choose a second $\beta$ value, $\beta_2 = 7.0767$, corresponding to $T = 1.5501 T_c$, and we repeat our correlation function studies on this lattice. Since the renormalized correlators contain two parts, namely the renormalization constants $c_1$ or $c_2$ and the bare correlators, corrections for both parts should be considered. The renormalization constants have been determined precisely in Sec.~\ref{sec_renormalization} at the $\beta$ values listed in Tab.~\ref{tab:lattice_setup}. To obtain the constant at $\beta=7.0767$ we linearly interpolate between $\beta=7.035$ and $\beta=7.192$. We then calculate the renormalized correlators, denoted as $G_1$ and $G_2$ for the lower and higher temperature respectively, by multiplying the bare correlation functions by the squared renormalization constants.
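The construction just described, together with the linear temperature correction derived from $G_1$ and $G_2$ in the next step, can be sketched as follows. This is a hypothetical illustration; the function names, array layout, and the numerical values in the usage example are our own, not the measured constants.

```python
import numpy as np

def interp_renorm(beta, beta_lo, c_lo, beta_hi, c_hi):
    """Linear interpolation of the renormalization constant in beta."""
    t = (beta - beta_lo) / (beta_hi - beta_lo)
    return (1 - t) * c_lo + t * c_hi

def temperature_slope(G1, G2, T1, T2):
    """Slope P = (G2/G1 - 1) / (T2 - T1) from renormalized correlators
    measured at two nearby temperatures."""
    return (np.asarray(G2) / np.asarray(G1) - 1.0) / (T2 - T1)

def correct_to(G, T, T0, P):
    """Shift data measured at temperature T to T0, linearly in T:
    G(T0) ~= G(T) * (1 + P * (T0 - T))."""
    return np.asarray(G) * (1.0 + P * (T0 - T))
```

For example, with hypothetical renormalization constants $c(7.035)=1.10$ and $c(7.192)=1.20$, `interp_renorm(7.0767, 7.035, 1.10, 7.192, 1.20)` gives the interpolated constant at the second temperature.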
We then evaluate the difference, $-1+G_2(\tau) / G_1(\tau)$, as a function of $\tau$ and $\tau_{\mathrm{F}}$. We show in Fig.~\ref{G1G2} that this thermal correction is almost $\tau$ independent except at the smallest $\tau$ values (which are contaminated by lattice effects). (The figure shows $-1+G_2/G_1$ as a function of $\tau$ for the largest $\tau_{\mathrm{F}}$ value used in our extrapolation for that $\tau$ value, in order to minimize the errors.) Based on this result, we treat $-1+G_2/G_1$ as a function of $\tau_{\mathrm{F}}$ only, determining its value based on the weighted average of all the points at $\tau/a\geq 4$ at each $\tau_{\mathrm{F}}$. Fig.~\ref{G1G2} shows that the thermal corrections are relatively small, considering that the temperature difference $1.5501-1.4734=0.0767 T_c$ is significantly larger than any of the individual deviations from $1.5 T_c$ shown in Table \ref{tab:lattice_setup}. We will therefore use the determined slope $P=(G_2/G_1 - 1)/(T_2 - T_1)$, averaged over $\tau$ values, and apply it as a linearly interpolated correction to all data. For instance, data at temperature $T$ can be interpolated to the temperature $T_0$ through $G(T_0) \simeq G(T) (1+ P (T_0-T))$. Next, consider discretization effects associated with computing on a lattice rather than in continuous space. To suppress the lattice discretization effects, we apply tree level improvement to the bare correlators. Specifically, if we assume that the lattice correlation functions will deviate from the continuum ones in the same way as occurs at lowest perturbative order, then we can remove this effect by rescaling by the ratio of leading-order continuum to lattice correlation functions \cite{Gimenez:2004me,Meyer:2009vj}: \begin{equation} \label{ratio_tt} G^{\mathrm{t.l.}}(\tau T)=G_{\rm lat}(\tau T) \,\cdot\, \frac{G^{\mathrm{LO}}_{\mathrm{cont}}(\tau T) }{G^{\mathrm{LO}}_{\mathrm{lat}}(\tau T) }. 
\end{equation} The leading-order continuum correlators in shear channel and bulk channel can be found in~\cite{Meyer:2007ic,Meyer:2007dy} \begin{equation} \label{pert_cont} \begin{split} &\frac{G^{\mathrm{LO,shear}}_{\mathrm{cont}}(\tau T)}{T^5} = \frac{32d_A}{5\pi^2} \Big(f(x)-\frac{\pi^4}{72} \Big)\\ &\frac{G^{\mathrm{LO,bulk}}_{\mathrm{cont}}(\tau T)}{T^5} = \frac{484d_A}{16\pi^6}g^4 \Big(f(x)-\frac{\pi^4}{60} \Big), \end{split} \end{equation} where $x=1-2\tau T$, $f(x) = \int_0^\infty ds~ s^4 \cosh^2(x s)/\sinh^2 s$ and $d_A=8$ counting the number of gluons. The leading-order lattice correlator for clover discretization is available in~\cite{Meyer:2009vj}. For better visibility we always normalize the tree-level improved correlators with a normalization correlator $G_{\rm{norm}}$ calculated at $\tau_{\mathrm{F}}=0$, where for shear channel we use $G_{\rm{norm}}\equiv G^{\mathrm{LO,shear}}_{\mathrm{cont}}$ and for bulk we use $G^{}_{\rm{norm}}\equiv G^{\mathrm{LO,bulk}}_{\mathrm{cont}}/g^4$. After temperature corrections, tree level improvement and renormalization, in Fig.~\ref{Nt36corrs} we show the lattice correlators normalized by the free continuum correlators on $144^3\times 36$ lattice at different flow times, in both the shear and the bulk channels. We have not plotted data down to small flow times because it has large errors. We can see that as flow time increases the signal-to-noise improves. At very large flow times the signal is strongly modified by flow effects and we leave the regime where an extrapolation $\tau_{\mathrm{F}} \to 0$ can be performed. \begin{figure*}[htb] \centerline{\includegraphics[width=0.5\textwidth]{./figures/Gimp_norm_Ns144Nt36_UU.pdf} \includegraphics[width=0.5\textwidth]{./figures/Gimp_norm_Ns144Nt36_EE.pdf} } \caption{Tree-level-improved EMT correlators in the shear channel (left) and bulk channel (right) normalized by the leading-order correlator on the $144^3\times 36$ lattice at different flow times. 
(The tree-level correlator used for the normalization in the bulk channel is missing a factor of $g^4$, which explains the large ratio.)} \label{Nt36corrs} \end{figure*} \section{Double extrapolation} \label{sec:extrapolate} \begin{figure*}[htb] \centerline{\includegraphics[width=0.5\textwidth]{./figures/cont_extrap_UU_flowtime4dot1632813e-03.pdf} \includegraphics[width=0.5\textwidth]{./figures/cont_extrap_EE_flowtime4dot1632813e-03.pdf} } \caption{The continuum extrapolation of EMT correlators in the shear channel (\textit{left}) and the bulk channel (\textit{right}) at flow time $\tau_{\mathrm{F}} T^2=0.00416$, fitted using Eq.~(\ref{cont_Ansatz}). } \label{cont-extrap} \end{figure*} The double extrapolation consists of two steps: first we perform the continuum extrapolation $a \to 0$, and then we perform a flow-time-to-zero extrapolation. As we pointed out in Ref.~\cite{Altenkort:2020fgs}, this has the advantage that the continuum extrapolation eliminates terms of form $a^2 / \tau_{\mathrm{F}}$, so that the $\tau_{\mathrm{F}}$ extrapolation will consist only of positive powers. Before the continuum extrapolation, the correlators on coarse lattices have to be interpolated to the separations of the finest lattice, for details see, for example, references \cite{Altenkort:2020fgs, Altenkort:2020axj}. In the continuum extrapolation we use the Ansatz \begin{align} \label{cont_Ansatz} \frac{G^\textrm{t.l.}(N_\tau) }{G_{\rm{norm}}(N_\tau)} = m \cdot N_\tau^{-2} + b, \end{align} because the lattice action has leading discretization errors of order $a^2$. Here $m$ and $b$ are fit parameters that can be different for each temporal separation and flow time. The continuum estimates for the (normalized) correlators are given by $b \equiv G_\mathrm{cont}/G_{\rm{norm}}$. Fig.~\ref{cont-extrap} shows how well the fit Ansatz, Eq.~(\ref{cont_Ansatz}), works at an intermediate flow time $\tau_{\mathrm{F}} T^2=0.00416$.
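A minimal sketch of the fit in Eq.~(\ref{cont_Ansatz}), illustrative only: the actual analysis repeats such a fit for every temporal separation and flow time, and the function name and error treatment here are assumptions.

```python
import numpy as np

def continuum_extrapolate(Nt, G_norm, dG):
    """Weighted linear fit of G/G_norm = m * Nt**-2 + b at fixed
    tau*T and flow time; b is the continuum estimate."""
    x = np.asarray(Nt, float) ** -2
    y, dy = np.asarray(G_norm, float), np.asarray(dG, float)
    # weighted least squares in the basis (Nt**-2, 1)
    A = np.vstack([x, np.ones_like(x)]).T / dy[:, None]
    coef, *_ = np.linalg.lstsq(A, y / dy, rcond=None)
    m, b = coef
    return b, m
```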
We can see for the bulk channel that in some cases the fit is poor in the sense that $\chi^2/\mathrm{dof} > 1$. Our procedure is to enlarge the error bars by $\sqrt{\chi^2/\mathrm{dof}}$ in these cases. After the continuum extrapolation we collect the continuum estimates for each flow time and show them in grey bands in Fig.~\ref{tauF-extrap}. \begin{figure*}[htb] \centerline{\includegraphics[width=0.5\textwidth]{./figures/flowtime_extrap_UU.pdf} \includegraphics[width=0.5\textwidth]{./figures/flowtime_extrap_EE.pdf} } \caption{The $\tau_{\mathrm{F}}\rightarrow 0$ extrapolation of continuum-extrapolated EMT correlators in the shear channel (\textit{left}) and bulk channel (\textit{right}). } \label{tauF-extrap} \end{figure*} \begin{figure*}[tbh] \centerline{ \includegraphics[width=0.5\textwidth]{./figures/shear_corr_broad.pdf} \includegraphics[width=0.5\textwidth]{./figures/bulk_corr_broad.pdf} } \caption{Double-extrapolated correlators in the shear channel (\textit{left}) and bulk channel (\textit{right}). Note that $G^{\mathrm{norm}}(\tau T)$ in the bulk channel is missing a factor of $g^4$, which explains the size and possibly the slope of the resulting correlator ratio. } \label{final_corrs} \end{figure*} Now we consider the $\tau_{\mathrm{F}}\rightarrow 0$ extrapolation. To perform the extrapolation, we need to understand the functional dependence on $\tau_{\mathrm{F}}$, and we need to determine over what range of $\tau_{\mathrm{F}}$ values to perform the extrapolation. For general values of $\tau_{\mathrm{F}} / \tau^2$, the correlator is a complicated function of this ratio, in some cases even taking on a different sign than the small-$\tau_{\mathrm{F}}$ value \cite{Eller:2018yje}. 
However, if $\tau_{\mathrm{F}} / \tau^2$ is small, then as discussed near the end of Section \ref{sec:gradflow}, we expect the flowed stress tensor to be described in terms of an operator product expansion, with the leading coefficient equaling the stress tensor and with higher-dimension operators suppressed by powers of $\tau_{\mathrm{F}}$. As a result, in this regime the correlation function should approach its $\tau_{\mathrm{F}} \to 0$ limit with polynomial-in-$\tau_{\mathrm{F}}$ corrections. (We will ignore possible anomalous dimensions in this discussion.) The more fitting coefficients we use, the larger the errors in the resulting fit. Therefore we want to avoid using two extrapolation coefficients, e.g., a fit of the form $G(\tau_{\mathrm{F}}/\tau^2) = A + B \tau_{\mathrm{F}}/\tau^2 + C \tau_{\mathrm{F}}^2/\tau^4$. And if we use a data range wide enough that the $\tau_{\mathrm{F}}^2/\tau^4$ coefficient is really relevant, then there is a danger that we also need still higher-order coefficients. Therefore, we will restrict ourselves to a region where the total variation in $G(\tau_{\mathrm{F}}/\tau^2)$ appears to be at most 20\% from its extrapolated value. In this range, within the few-percent accuracy which is our goal, we expect that a linear extrapolation, e.g., $G(\tau_{\mathrm{F}}/\tau^2) = A + B \tau_{\mathrm{F}}/\tau^2$, should be sufficient. Based on our previous experience with the topological density operator \cite{Altenkort:2020axj}, we expect that a fitting range out to $\sqrt{8\tau_{\mathrm{F}}^{\mathrm{max}}} = 0.5220\tau$ should remain in this small-correction regime. We will fit a range of $\tau_{\mathrm{F}}$ from this maximum down to half this value, because the correlator becomes so noisy at smaller $\tau_{\mathrm{F}}$ that extending the range further is not helpful.
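The linear flow-time extrapolation with the fit window just described can be sketched as follows (a hypothetical implementation; names and interfaces are our own):

```python
import numpy as np

def flow_time_extrapolate(tauF, G, tau):
    """Linear tau_F -> 0 extrapolation G = A + B * tauF / tau**2,
    using the window [0.5 * tauF_max, tauF_max] defined by
    sqrt(8 * tauF_max) = 0.5220 * tau."""
    tauF, G = np.asarray(tauF, float), np.asarray(G, float)
    tauF_max = (0.5220 * tau) ** 2 / 8.0
    sel = (tauF >= 0.5 * tauF_max) & (tauF <= tauF_max)
    x = tauF[sel] / tau ** 2
    B, A = np.polyfit(x, G[sel], 1)  # slope B, intercept A
    return A, B
```

The intercept `A` is the extrapolated correlator; in practice the fit is done on the continuum-extrapolated data with its statistical errors propagated.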
In addition, to prevent lattice spacing effects of form $a^2/\tau_{\mathrm{F}}$, we restrict to values with $\tau_{\mathrm{F}}/a^2 \geq 0.4$ as already discussed. For small $\tau$ values this constraint excludes too much of the $\tau_{\mathrm{F}}$ range over which we want to extrapolate, which prevents us from determining the correlator at small temporal separations. The resulting correlators within the range $[0.5\tau_{\mathrm{F}}^{\mathrm{max}} , \tau_{\mathrm{F}}^{\mathrm{max}}]$ are shown as colored bands in Fig.~\ref{tauF-extrap}. For the extrapolation of the bulk viscosity correlators we have taken a slightly different approach, based on the work of \cite{Suzuki:2013gza, Makino:2014taa,Suzuki:2021tlr}. A recent three-loop calculation of the flow-dependence of the EMT trace suggests a finite-$\tau_{\mathrm{F}}$ fitting function of form \cite{Suzuki:2021tlr} \begin{equation} \theta(\tau_{\mathrm{F}})=\Big{(}1-c \big{(}\frac{g^2(\mu(\tau_{\mathrm{F}}))}{(4 \pi)}\big{)}^{3}\Big{)} \theta(\tau_{\mathrm{F}}=0), \label{resquenchS} \end{equation} where $c$ and $\theta(\tau_{\mathrm{F}}=0)$ are fit parameters. Since what we measured in this study is the correlators of $\theta$, we take the square root of the correlators and fit it to Eq.~(\ref{resquenchS}). The fitted curves are shown as dashed black lines in Fig.~\ref{tauF-extrap} and the extrapolated correlators are shown as colored points at $\tau_{\mathrm{F}} T^2=0$. It can be seen that the fit function is almost linear, indicating that a fit to an Ansatz linear in flow time (as used in \cite{Altenkort:2020fgs,Altenkort:2020axj}) would give similar results. \section{Spectral analysis} \label{sec:spectrum} This section is devoted to the spectral extraction from the extrapolated correlators. We first reconstruct the spectral function using $\chi^2$-fits with models based on perturbative calculations and then determine the viscosities using the Backus-Gilbert (BG) method \cite{Backus1968}. 
The spectral reconstruction performed here is mathematically ill-posed \cite{hadamard1923lectures}. One feature of this is the difficulty in quoting a robust spectral function, since uniqueness of any solution is a priori not given. In the case of the spectral analysis via fit, this issue presents itself as the difficulty in finding a global, well-determined minimum. In principle, if the ``correct'' Ansatz were known, with enough data points and without considering any noise the analysis should yield a global minimum in the $\chi^2$-plane. Without this knowledge and with noise included, however, this minimum is less well determinable, and a fit often yields $\chi^2$-values that are not very sensitive to the parameter choices. Consequently it becomes difficult to choose with confidence which solution and Ansatz give the best description. In the following we address this difficulty by augmenting our study with a spectral analysis in the form of the BG method, which does not rely on an Ansatz per se. \subsection{Spectral function from model fits} According to Eqs.~(\ref{kubo1}) and~(\ref{kubo2}), the viscosities are proportional to the slope of the spectral function at zero frequency. But the large-frequency part also contributes considerably to the correlators, and it can be computed perturbatively. For the shear channel the large-frequency part has been computed both at leading order (LO) and at next-to-leading order (NLO) \cite{Zhu:2012be}: \begin{align} \begin{split} \label{rhoLO} \rho_{\rm{shear}}^{\mathrm{LO}}(\omega)= & \frac{d_A \ \omega^4}{10\pi} \coth\Big{(}\frac{\omega}{4T}\Big{)},\\ \rho_{\rm{shear}}^{\mathrm{NLO}}(\omega)= & \rho_{\rm{shear}}^{\mathrm{LO}} (\omega) - 4 d_A \omega^4 \coth\Big{(}\frac{\omega}{4T}\Big{)} \frac{g^2(\bar{\mu})N_c}{(4\pi)^3} \\ &\times \biggl[ \frac{2}{9} + \phi^{\eta}_T(\omega) \biggr] .
\end{split} \end{align} Note that our definition of the spectral function differs from that in Ref.~\cite{Zhu:2012be} by a relative minus sign. Here $d_A = N_c^2-1=8$ is the dimension of the adjoint representation. In the region $\omega\ll \pi T$, the one-loop running coupling can be fixed via the `EQCD' renormalization point \cite{Kajantie:1997tt} \begin{align} \label{mu_opt_from_T} \ln\left( \bar{\mu}^{\mathrm{opt}(T)} \right) \equiv \ln\left( 4\pi T\right) -\gamma_{\mathrm{E}} -\frac{1}{22} \,. \end{align} \begin{figure*}[tbh] \centerline{\includegraphics[width=0.5\textwidth]{./figures/fit_shear_corr.pdf} \includegraphics[width=0.5\textwidth]{./figures/fit_shear_spf.pdf} } \caption{The comparison of fitted and lattice correlators (left) and the fitted spectral function (right) in the shear channel. In M3 the width of the Lorentzian peak $C$ has been fixed to 1. } \label{fit_shear_corr_spf} \end{figure*} \begin{figure*}[tbh] \centerline{\includegraphics[width=0.5\textwidth]{./figures/fit_bulk_corr.pdf} \includegraphics[width=0.5\textwidth]{./figures/fit_bulk_spf.pdf} } \caption{The comparison of fitted and lattice correlators (left) and the fitted spectral function (right) in the bulk channel. } \label{fit_bulk_corr_spf} \end{figure*} Using this relation the coupling is fixed to the value $g^2\left( \bar{\mu}^{\mathrm{opt}(T)}\right)=2.2346$ at $T=1.5T_c$, where we use the updated relation $T_c=1.24\Lambda_{\overline{\mathrm{MS}}}$~\cite{Francis:2015lha}. For large $\omega$, due to the lack of explicit logarithms of the renormalization scale in Eq.~(\ref{rhoLO}), a natural choice is given by $\bar{\mu}^{\mathrm{opt}(\omega)}=\omega$ \cite{Zhu:2012be}. Combining the above two conditions, a switching point for the renormalization scale is found at $\omega/T=2.146 \pi$. The dimensionless function $\phi_T^{\eta}(\omega)$ was first determined in reference \cite{Zhu:2012be}, but with a computational error that was identified in reference \cite{Vuorinen:2015wla}.
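As a cross-check, the switching point quoted above follows directly from setting $\bar{\mu}^{\mathrm{opt}(\omega)}=\omega$ equal to the scale of Eq.~(\ref{mu_opt_from_T}):

```python
import math

# Eq. (mu_opt_from_T): ln(mu_opt) = ln(4*pi*T) - gamma_E - 1/22.
# Setting mu_opt = omega (the natural large-omega choice) gives the
# switching point quoted in the text:
gamma_E = 0.5772156649015329  # Euler-Mascheroni constant
omega_over_T = 4.0 * math.pi * math.exp(-gamma_E - 1.0 / 22.0)
print(omega_over_T / math.pi)  # ~2.146, i.e. omega/T = 2.146*pi
```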
In \cite{Vuorinen:2015wla} another term from HTL resummation was introduced. Such a term only affects small frequencies, and we do not include it in our spectral analysis, as we do not expect HTL to be reliable in the non-perturbative regime of small frequencies. For the bulk channel the LO and NLO spectral functions are also available \cite{Laine:2011xm}: \begin{equation} \begin{split} \label{rho_pert} \rho_{\rm{bulk}}^{\mathrm{LO}}(\omega)= & \frac{d_A c^2_{\theta}\ \omega^4g^4}{4\pi} \coth\Big{(}\frac{\omega}{4T}\Big{)},\\ \rho_{\rm{bulk}}^{\mathrm{NLO}}(\omega)= & \rho_{\rm{bulk}}^{\mathrm{LO}} (\omega) + d_A c^2_{\theta}\ \omega^4 \coth\Big{(}\frac{\omega}{4T}\Big{)} \frac{g^6(\bar{\mu})N_c}{(4\pi)^3} \\ &\times \biggl[ \frac{22}{3} \ln\frac{\bar{\mu}^2}{\omega^2} + \frac{73}{3} + 8\, \phi^{ }_T(\omega) \biggr], \end{split} \end{equation} where $c_{\theta}\approx -b_0/2-b_1 g^2/4$, $b_0=\frac{11N_c}{3(4\pi)^2}$ and $b_1=\frac{34N_c^2}{3(4\pi)^4}$. The function $\phi_T(\omega)$ can be found in \cite{Laine:2011xm}. At LO, the running coupling cannot be fixed. For simplicity we fix it to the value at the NLO switching point. One could also fix it at another point; however, this does not affect our spectral reconstruction, since, as we shall see, a rescaling factor accounts for this uncertainty. At NLO, for $\omega \gg \pi T$ the optimized scale $\bar{\mu}$ and the running coupling can be determined \cite{Laine:2011xm} via \begin{align} \ln\left( \bar{\mu}^{\mathrm{opt}(\omega)} \right) \equiv \ln\left( \omega \right) -\frac{73}{44} \,. \label{mu_opt_from_omega} \end{align} In the opposite regime one should use Eq.~(\ref{mu_opt_from_T}). Equating Eq.~(\ref{mu_opt_from_T}) to Eq.~(\ref{mu_opt_from_omega}) leads to a switching point $\omega/T=11.276 \pi$. For an arbitrary $\omega$ the larger optimization scale from the two equations should be used. The infrared behavior of the spectral function is not known \textsl{a priori}, and must be modeled.
In previous work \cite{Altenkort:2020axj} we considered several proposed IR behaviors, generally finding that the data does not discriminate strongly between different IR Ansatz choices. In this work we will consider one model with an infrared ``peak'' and perturbative UV behavior, and two ``peak-free'' models in which the IR behavior is linear in $\omega$, the UV behavior is perturbative, and the spectral function increases continuously between them: \begin{align} \label{models} \mathrm{M1}: \frac{\rho(\omega)}{\omega T^3}= & \frac{A}{T^3}+ B\frac{\rho_{\mathrm{pert}}(\omega)}{\omega T^3},\\ \nonumber \mathrm{M2}: \frac{\rho(\omega)}{\omega T^3}= &\sqrt{\left(\frac{A}{T^3}\right)^2 +\left(B\frac{\rho_{\mathrm{pert}}(\omega)}{\omega T^3}\right)^2},\\ \nonumber \mathrm{M3}: \frac{\rho(\omega)}{\omega T^3}= &\frac{A}{T^3}\frac{C^2}{C^2+(\omega/T)^2}+B\frac{\rho_{\mathrm{pert}}(\omega)}{\omega T^3}\,. \end{align} Here $B$ is a coefficient allowing for a rescaling of the perturbative result, and $A$ is the size of the IR contribution, which determines the transport coefficient of interest. In the first model, we consider a simple sum of an IR and a UV behavior; in the second, we consider a smooth switch-over between IR and UV behavior. In the third model, the IR behavior is a Lorentzian with width parameter $C$. For simplicity, we have fixed the width parameter $C$ to unity, but we also explored other values and found a rather weak dependence of the fit quality on this choice. We will use the range of fit values for $A$ between these models as an estimate of the value and uncertainty of the viscosity, though realistically the true spectral function may look different from any of our models, and this introduces a potentially large systematic uncertainty in our final result. In addition, for the bulk-viscous channel, there is a known constant contribution arising from the dependence of $T_{\mu\mu}$ on the energy density and on the fluctuations in the system energy.
Specifically, the spectral function is known to possess a delta function at zero frequency, equal to $\rho/\omega T^3 = \pi\frac{E+P}{T^3}\frac{(3c_s^2-1)^2}{c_s^2}\delta(\frac{\omega}{T})$. Equivalently, one can subtract a $\tau$-independent constant of the corresponding size from the Euclidean correlation function. We adopt the values $\frac{E+P}{T^3}=5.098$ and $c_s^2=0.2848$, which can be calculated from \cite{Giusti:2016iqr}. For the bulk channel our fit has 2 parameters on 13 data points, leaving 11 degrees of freedom. The leading-order fit shows a poor $\chi^2$/dof, with values of 3.9, 5.4 and 6.3 for $\mathrm{M1}$, $\mathrm{M2}$ and $\mathrm{M3}$ with $C=1$. But using the NLO spectral function returns a good fit, with $\chi^2$/dof of 0.4, 0.5 and 0.6. This suggests that the NLO corrections, and in particular the running of the coupling, improve the estimate significantly and bring it close to our non-perturbative determination. The resulting $\zeta/T^3$ is $0.086(0.008)$, $0.133(0.010)$, and $0.303(0.031)$ for M1, M2, and M3($C=1$), respectively. For the shear channel we find that when using the LO spectral function the $\chi^2$/dof is 4.1, 3.99 and 3.98, and for the NLO spectral function it is 3.7, 4.8 and 3.66, respectively. This indicates that both the LO and NLO calculations fail to capture our non-perturbative results for the Euclidean correlator, and that the true form of the spectral function is more complicated than our relatively simple proposals in Eq.~(\ref{models}). As one attempt to capture possibly-missing structure, we have considered amending the UV part of the spectral function with an anomalous dimension, namely changing Eq.~(\ref{rhoLO}) by replacing $\omega^4$ with $\omega^{4+\gamma}$. With this modification we find that the $\chi^2$/dof becomes $\sim$2.0--2.1 for all models, both for the LO and the NLO spectral function.
The returned values of the viscosity are $\eta/T^3=0.84(0.14)$ and $1.10(0.14)$ for LO, and $0.77(0.16)$ and $1.09(0.15)$ for NLO, using the first two models. Model M3 with $C=1$ using NLO and an anomalous dimension returns $\eta/T^3 = 2.46(0.54)$. Using an anomalous dimension improves the fit, but a $\chi^2$/dof of 2 with 8 degrees of freedom still represents a rather poor fit. It would be interesting to explore other models for the IR behavior and to see if any such model can improve the quality of our fit. \subsection{Spectral function from Backus-Gilbert method} \begin{figure*}[tbh] \centerline{\includegraphics[width=0.5\textwidth]{./figures/resolution_shear.pdf} \includegraphics[width=0.5\textwidth]{./figures/BGM_spf_shear.pdf} } \caption{The resolution function (left) and output spectral function (right) at $\bar{\omega}=0$ in the shear channel at selected $\lambda$ values from the Backus-Gilbert analysis. } \label{reso_spf_shear} \end{figure*} The technical difficulty in performing the spectral reconstruction can be traced in part to two issues: the finite number of data points and their noise. The first implies a discretisation of the integral transform \begin{align} G(\tau)=\int_0^\infty \mathrm{d}\omega\, \rho(\omega) K(\tau,\omega) \rightsquigarrow G(\tau_i)=\sum_{n} \rho(\omega_n) \tilde{K}(\tau_i,\omega_n), \end{align} i.e.\ the underlying task is the inverse problem of finding $\rho$ at a given $\omega$, schematically written as $\rho= \sum_i\tilde{K}_i^{-1}G_i$. Consider an estimator $\hat\rho$ of the spectral function at a given $\bar\omega$, defined by (see e.g. \cite{Brandt:2015sxa,Brandt:2015aqk}) \begin{align} \label{eq_spfestimator} \hat\rho(\bar\omega)=f(\bar{\omega}) \int_0^\infty \mathrm{d}\omega \,\delta(\bar\omega,\omega)\,\rho(\omega) \,f({\omega})^{-1}, \end{align} where $f({\omega})$ is an arbitrary rescaling function and $\delta(\bar\omega,\omega)$ is a smooth function, normalised to $\int_0^\infty \mathrm{d}\omega\, \delta(\bar\omega,\omega)=1$, that may be parameterised as $\delta(\bar \omega, \omega) = \sum_i q_i(\bar \omega) K(\tau_i, \omega)$ \cite{Backus1968}.
This so-called resolution function acts as an averaging kernel that enables formulating the spectral function estimator as \begin{align} \label{eq_bgmweights} \hat{\rho}(\bar{\omega})=f(\bar{\omega})\sum_i q_i(\bar{\omega})\,G(\tau_i). \end{align} In this form it becomes clear that constructing $\hat\rho$, or by extension $\rho$, depends crucially on the number of coefficients available, i.e. the number of data points, and on their behaviour, i.e. how stable and regular the inverse is. Typically one faces a situation where the coefficients are large and highly fluctuating, requiring very precise determinations, while at the same time the matrix connecting them is nearly singular, requiring a regulator before it can be inverted safely. The noise in the data further complicates this situation, as it limits the precision with which the coefficients can be determined. Keeping this in mind, one recipe to evaluate $\hat{\rho}(\bar{\omega})$ is given by the Backus-Gilbert method (BGM) \cite{Backus1968}: construct the coefficients $q_i$ such that the width $\Gamma$, or spread, of the resolution function in $\omega$ becomes minimal, i.e. in the ideal case $\lim_{\Gamma\rightarrow 0}\hat\rho = \rho$. The solution can then be shown to be \begin{align} q_i(\bar{\omega}) &= \frac { \sum_j W^{-1}_{ij} (\bar \omega) R(\tau_j) } { \sum_{kj} R(\tau_k) W^{-1}_{kj} (\bar \omega) R(\tau_j) }, \end{align} \begin{align} \begin{split} W_{ij}(\bar \omega) &= \lambda\int_{0}^{\infty} \mathrm{d} \omega K(\tau_i,\omega) (\omega- \bar \omega)^2 K(\tau_j,\omega) \\ &\hspace{25ex}+ (1 - \lambda) S_{i j}, \\ R(\tau_i) &= \int_0^{\infty} \mathrm{d} \omega K(\tau_i,\omega)~~. \end{split} \end{align} Here we have immediately introduced a regularization scheme $W_{ij} = \lambda W_{ij}^{no\,reg.} + (1-\lambda) S_{ij} $, where $S$ is the covariance matrix of the lattice correlators and $0\leq \lambda \leq 1$ is the regularization parameter.
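For illustration, the construction above can be sketched in a few lines. The kernel, the mock spectral function and the diagonal stand-in covariance below are assumptions made for the sketch, not the data or code of this work, and the rescaling function $f$ is omitted here.

```python
import numpy as np

# Minimal Backus-Gilbert sketch of the formulas above (units T = 1).
n_tau = 16
taus = np.linspace(1.0 / n_tau, 0.5, n_tau)
omegas = np.linspace(1e-3, 20.0, 400)
domega = omegas[1] - omegas[0]

# Assumed finite-temperature kernel K(tau, omega)
K = np.cosh(omegas[None, :] * (taus[:, None] - 0.5)) / np.sinh(omegas[None, :] / 2.0)

rho_true = omegas / (1.0 + omegas**2)            # assumed smooth mock spectral function
G = K @ rho_true * domega                        # mock correlator data

S = 1e-8 * np.eye(n_tau)                         # stand-in covariance matrix
lam, wbar = 0.5, 0.0                             # regularization parameter, target frequency

# W_ij = lam * int domega K_i (omega - wbar)^2 K_j + (1 - lam) S_ij
W = lam * (K * (omegas - wbar) ** 2) @ K.T * domega + (1.0 - lam) * S
R = K.sum(axis=1) * domega                       # R(tau_i) = int domega K(tau_i, omega)
Winv_R = np.linalg.solve(W, R)
q = Winv_R / (R @ Winv_R)                        # coefficients q_i(wbar)

# Normalisation of the resolution function: int domega delta(wbar, omega) = 1
assert np.isclose(q @ R, 1.0)
rho_hat = q @ G                                  # estimator at wbar
print("rho_hat(0) =", rho_hat)
```

The quality of the resulting estimate depends on how sharply the resolution function $\delta(\bar\omega,\omega)=\sum_i q_i K(\tau_i,\omega)$ peaks around $\bar\omega$ for the chosen $\lambda$.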
Other regularization schemes, such as the Tikhonov scheme in which $S$ is replaced by the identity $\mathds{1}$, have also been used in the literature, see e.g. \cite{Astrakhantsev:2018oue}. Another recipe, where the $q_i(\bar\omega)$ are determined with a fixed input resolution function, was presented in \cite{Hansen:2019idp}. In our implementation we further consider the rescaling function $f(\bar\omega)$ \cite{Brandt:2015aqk}. It rescales the spectral function inside the integral of Eq.~(\ref{eq_spfestimator}) prior to reconstruction and may be re-introduced afterwards. The coefficients $q_i$ are changed as a result, and the procedure can be understood as a kernel transformation. In particular, divergent behaviour of the kernel, such as that at $\omega\rightarrow 0$, can be handled in this way. Additionally, certain well-established global trends of the spectral function can be built in, for example the large-frequency behaviour $\sim \omega^4$. As such the procedure can also be seen as introducing prior information and some level of model dependence. Here we consider the function $f=(\omega/T)^4/\tanh^3(\omega/4T)$, introduced to regularize the divergence at $\omega=0$ and to encode the information on the asymptotic trend. One key difficulty in the BGM, or any spectral reconstruction, is the determination of its errors, both statistical and systematic. The number of points, the rescaling function, the regularization parameter and the noise of the data all feed into the estimator result. Here, we focus on the impact of the regularization parameter $\lambda$. We also tested the impact of using different numbers of points and of different rescaling functions, but found that using the maximum number of points that yield a stable solution, together with the above-mentioned rescaling function $f$, leads to the smallest spread of the resolution function. We therefore use all available data points in this study.
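For completeness, one consistent way to make the kernel-transformation reading explicit is to absorb $f$ into the kernel: writing $\tilde\rho=\rho/f$,
\begin{align}
G(\tau)=\int_0^\infty \mathrm{d}\omega\, \tilde\rho(\omega)\,\big[f(\omega)\,K(\tau,\omega)\big],
\end{align}
so that the BGM applied with the modified kernel $\tilde K=fK$ reconstructs $\tilde\rho$, and multiplication by $f(\bar\omega)$ recovers the estimator of Eq.~(\ref{eq_bgmweights}).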
Note that $\lambda$ to some extent also controls the impact of the noise, which enters through the covariance in the regularization prescription. In choosing $\lambda$ one would like to use the value which minimizes $\Gamma(\delta(\bar\omega,\omega))$ in the frequency window of interest. In the left panel of Fig.~\ref{reso_spf_shear} we show the resolution function for a broad range of $\lambda$ in the shear channel. We see that the width is $\Gamma(\delta(\bar\omega,\omega))\sim 5 T$ except for the two smallest $\lambda$, which implies that the dependence of $\Gamma$ on $\lambda$ is weak. At the same time, when plotting the obtained spectral functions as a function of $\lambda$ in the right panel of Fig.~\ref{reso_spf_shear}, we see that the variance of the spectral function, and crucially the value of the intercept at $\bar\omega=0$, depend strongly on this parameter. Based on the discussion above, the increasing variance with $\lambda$ can be understood as insufficient regularization, while the decreasing variance with $\lambda$ but increasing width of the resolution implies that the data and coefficients cannot be combined to form sharp, localised features. Nevertheless, a robust result over a broad range in $\lambda$ implies a stable solution of the reconstruction. As such, scanning through $\lambda$ in $(0,1)$ does suggest a lower bound for the intercept and thereby the viscosity. For the shear viscosity we find $\eta/T^3\geq 0.81$ (see right panel of Fig.~\ref{reso_spf_shear}). Similarly, for the bulk viscosity we obtain $\zeta/T^3\geq 0.059$. The fit results determined in the previous section safely lie in this range. Given the strong dependence observed, one could instead imagine using a criterion for $\lambda$ based on the variance of the output spectral function rather than on the spread of the resolution function.
The Morozov discrepancy principle \cite{Morozov1984} could be used for this: it states that $\delta\hat\rho(\bar\omega)/ \hat\rho(\bar\omega)=\overline{\delta G(\tau)}/G(\tau)$, where $\overline{\delta G(\tau)}$ denotes the average correlator variance. Since we are mainly interested in $\bar\omega=0$, one could impose this condition by matching $\delta\hat\rho(0)/ \hat\rho(0)=\delta G(T/2)/G(T/2)$, as the long-$\tau$ correlator data dominates the low-$\omega$ regime of the spectral function \cite{Aarts:2005hg}. This neglects the resolution function, and the matching gives only a rough approximation to the more complicated underlying relation. Nevertheless, applying this criterion we arrive at results for $\eta/T^3$ and $\zeta/T^3$ that agree with the plateau values quoted above. \section{Conclusion} \label{sec:conclusion} We have calculated the energy-momentum tensor correlators in both the shear and the bulk channel at $1.5T_c$ in the quenched approximation on 5 large and fine lattices. To improve the signal-to-noise ratio we have applied both the gradient flow method and the blocking method. We thoroughly studied the temperature corrections and the renormalization of the operators. The correlators have been extrapolated first to the continuum limit and then to the $\tau_{\mathrm{F}}\rightarrow 0$ limit. The final correlators are used to extract the shear and bulk viscosity based on perturbative models. For the bulk channel, we find that the NLO spectral function can describe our lattice data when a transport part with an appropriate interpolation is added. For the shear channel we were unable to find a fit with $\chi^2/\mathrm{dof}$ better than 2. To further improve the fit quality, we need either a more flexible model or a better theoretical understanding of the expected spectral function.
In fitting our data, we find that the statistical errors are significantly smaller than the spread of fit values obtained from the various Ansatz choices, despite relatively little difference in their fit quality. Therefore we estimate the lowest and highest values of the viscosities by the extreme values found among the fit functions. Using $s/T^3 = 5.098$ from \cite{Giusti:2016iqr}, our shear and bulk results become: \begin{equation} \begin{split} &\eta/s = 0.15 - 0.48,\ \ T=1.5T_c,\\ &\zeta/s = 0.017 - 0.059 ,\ \ T=1.5T_c. \end{split} \label{shear_bulk_viscosity} \end{equation} The lower estimates are above the lower bounds from the Backus-Gilbert analysis. The upper bounds are based on a model which assumes that there \textsl{is} a relatively narrow feature near $\omega=0$, namely a Lorentzian-type peak with a width of $1T$. If a strongly coupled medium does not support long-lived excitations, this assumption appears unlikely and the lower limit is more likely to be correct. However, the data cannot definitively prove or disprove this theoretical prejudice. The shear viscosity we obtain in Eq.~(\ref{shear_bulk_viscosity}) is close to the hydrodynamic estimate $1<(4\pi)\eta /s<2.5$ \cite{Song:2010mg}. In our opinion, there are two pressing tasks to further improve on this work. The first is to find better models for the spectral function's behavior at low to intermediate frequencies $\omega \sim [1-5]T$. This will allow a fitting extraction which makes maximal use of the high-quality data that is now available. The second task is to extend these results to the unquenched case. This is not just a matter of performing the much more expensive unquenched simulations. It is also necessary to understand the renormalization of the more complicated unquenched stress tensor operator at the percent level, which appears to be possible but quite challenging.
Some progress in this direction has been made recently by Dalla Brida et al.\ \cite{DallaBrida:2020gux}, but precision studies including gradient flow do not yet exist. We leave these developments for future work. \section{Acknowledgments} All authors acknowledge support by the Deutsche For\-schungs\-ge\-mein\-schaft (DFG, German Research Foundation) through the CRC-TR 211 'Strong-interaction matter under extreme conditions' -- project number 315477589 -- TRR 211. AF acknowledges support by the Ministry of Science and Technology Taiwan (MOST) under grant 111-2112-M-A49-018-MY2. The computations in this work were performed on the GPU cluster at Bielefeld University using the \texttt{SIMULATeQCD} suite \cite{Altenkort:2021fqk,mazur2021}. We thank the Bielefeld HPC.NRW team for their support. \bibliographystyle{apsrev4-1}
\section{Introduction} Let $Y$ be an algebraic manifold (i.e., an irreducible smooth algebraic variety defined over $\Bbb{C}$) with $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$, where $\Omega^j_Y$ is the sheaf of regular $j$-forms. We want to understand what $Y$ is. This question was raised by J.-P. Serre for complex manifolds \cite{Se}. Since $Y$ is not compact, for any analytic or algebraic coherent sheaf ${\mathcal{F}}$ on $Y$, we have $H^3(Y, {\mathcal{F}})=0$ \cite{Siu1, Siu2, Zh1}. So $Y$ contains no complete surfaces \cite{NS, Zh1}. If $Y$ has non-constant regular functions, we know that it contains no complete curves \cite{Zh1}. Let $X$ be a smooth completion of $Y$; then the complement of $Y$ in $X$ is connected \cite{Zh1}. Suppose that the boundary $X-Y$ is of pure codimension 1 and is the support of an effective divisor $D$ with simple normal crossings. We consider the $D$-dimension of $X$ in order to understand $Y$. The notion of $D$-dimension is due to Iitaka (\cite{I1}, Lecture 3 or \cite{Uen1}, Chapter 2). It measures how many regular functions there are on $Y$. If for all integers $m> 0$ we have $H^0(X, {\mathcal{O}}_X(mD))=0$, then we define the $D$-dimension of $X$, denoted by $\kappa (D, X)$, to be $-\infty$. If $h^0(X, {\mathcal{O}}_X(mD))\geq 1$ for some $m$, choose a basis $\{f_0, f_1, \cdots, f_n\}$ of the linear space $H^0(X, {\mathcal{O}}_X(mD))$; it defines a rational map $\Phi _{mD}$ from $X$ to the projective space ${\Bbb{P}}^n$ by sending a point $x$ on $X$ to $(f_0(x), f_1(x), \cdots, f_n(x))$ in ${\Bbb{P}}^n$. Then we define $\kappa (D, X)$ to be the maximal dimension of the images of the rational maps $\Phi _{mD}$, i.e., $$ \kappa (D, X)= \max_m\{\dim (\Phi _{mD}(X))\}. $$ Let $K_X$ be the canonical divisor of $X$; then the Kodaira dimension of $X$ is the $K_X$-dimension of $X$, denoted by $\kappa(X)$, i.e., $$\kappa(X)=\kappa(K_X, X). $$ When $\kappa(D, X)=\dim X$, we say that $D$ is big.
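As a standard illustration of these notions (not needed for the arguments below), take $X={\Bbb{P}}^n$ and let $D=H$ be a hyperplane. Then $$ h^0({\Bbb{P}}^n, {\mathcal{O}}_{{\Bbb{P}}^n}(mH))={{n+m}\choose{n}}, $$ which grows like $m^n/n!$, and $\Phi_{mH}$ is the $m$-th Veronese embedding, so $\dim \Phi_{mH}(X)=n$ for every $m>0$. Hence $\kappa(H, {\Bbb{P}}^n)=n=\dim X$ and $H$ is big.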
In our case, since $D$ is effective, the $D$-dimension satisfies $\kappa(D, X)\neq -\infty$. In \cite{Zh1, Zh2}, we prove that $\kappa(D, X)$ can be 1 and that in this case there is a surjective morphism from $Y$ to a smooth affine curve $C$ such that every fibre $S$ satisfies the same vanishing condition, i.e., $H^i(S, \Omega^j_S)=0$ for all $j\geq 0$ and $i>0$. We showed that the Kodaira dimension of $X$ is $-\infty$ and that $q(X)=h^1(X, {\mathcal{O}}_X)$ can be any non-negative integer. In particular, all smooth fibres are of the same type, i.e., type (2) or type (3) open surfaces in Theorem 2.6 below \cite{Ku}. For the existence of non-affine, non-product threefolds with vanishing Hodge cohomology, see \cite{Zh2}. In \cite{Zh3}, we proved that $\kappa(D, X)\neq 2$ and that if $\kappa(D, X)=3$, then $Y$ is birational to Spec$\Gamma (Y, {\mathcal{O}}_Y )$. Furthermore, if $Y$ is regularly separable, i.e., if for any two distinct points $y_1$ and $y_2$ on $Y$ there is a regular function $f$ on $Y$ such that $f(y_1)\neq f(y_2)$, then $Y$ is affine. Regular separability implies $\kappa(D, X)=3$. We want to know whether $Y$ is affine if $H^i(Y, \Omega^j_Y)=0$ and $\kappa(D, X)=3$. If we can prove the converse, that is, that $\kappa(D, X)=3$ implies regular separability, then $Y$ is affine. It is sufficient to prove that if $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$ and $\kappa(D, X)=3$, then any non-constant regular function on $Y$ defines an affine surface. Another possible approach is to prove that $|nD|$ is base point free for some $n>0$: since $Y$ contains no complete curves, this would imply that $Y$ is affine (\cite{H2}, Chapter 2, Proposition 2.2). In this paper, we prove that when $D$ is smooth and irreducible, contains no exceptional curves and satisfies $\kappa(D, X)=3$, then $Y$ is affine. \begin{theorem}Let $Y$ be an irreducible smooth threefold contained in a smooth projective threefold $X$ such that $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$.
Suppose that the complement $D=X-Y$ is a smooth projective surface without exceptional curves and that $\kappa(D, X)=3$. Then $|nD|$ is base point free for all $n\gg 0$. \end{theorem} \begin{corollary} Under the conditions of Theorem 1.1, $Y$ is affine if and only if $\kappa(D, X)=3$. \end{corollary} The most mysterious case is $\kappa(D, X)=0$. It is hard to understand because we cannot construct a fibre space from the divisor $D$. However, in order to keep track of $Y$ and its cohomology, we have to use this boundary divisor to define the map. So we cannot apply Iitaka's fibration or Mori's construction. The situation on a normal complete surface is much better: any effective divisor on a normal complete surface has a unique Zariski decomposition, and any two divisors have an intersection number \cite{Sa2}. We can even obtain satisfactory information about the surface if we know the numerical type of the divisor \cite{Sa1}. When $\dim Y=3$ and the $D$-dimension is 0 or 3, we have neither a Zariski decomposition \cite{C} nor a good method to check whether a divisor is nef. When $\kappa(D, X)=0$ and $D$ is smooth and irreducible, we can reduce the problem to the surface case. \begin{theorem} Let $Y$ be a smooth threefold contained in a smooth projective threefold $X$ such that $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$ and the complement $D=X-Y$ is a smooth projective surface. If $Y$ has no nonconstant regular functions, then $\frac{1}{2}(c_1^2+c_2)\cdot D=\chi({\mathcal{O}}_D)\geq 0$. In particular, if the line bundle ${\mathcal{O}}_D(D)$ is not torsion, then $q=h^1(X, {\mathcal{O}}_X)=0$, $\frac{1}{2}(c_1^2+c_2)\cdot D=\chi({\mathcal{O}}_D)=0$, $\chi({\mathcal{O}}_X) >0$ and $K_X$ is not nef. \end{theorem} We organize the paper as follows. In Section 2, we present some preparation. We prove the two theorems in Section 3. Let us mention some open problems concerning the Steinness of $Y$. When the dimension of $Y$ is 2, the type (3) open surfaces in Theorem 2.6 are a mystery.
When the dimension is 3 and $\kappa(D, X)=1$, we do not know whether the non-affine, non-product example in \cite{Zh2} is Stein. Hartshorne asked the following question (\cite{H2}, page 235): remove an irreducible curve from a smooth complete surface; what is the condition for the open surface to be Stein? See \cite{N, Ued} for recent progress. Unfortunately, their approach cannot be applied to the type (3) surface because the boundary has 9 components. We can ask the three-dimensional analogue of Hartshorne's question: what is the necessary and sufficient condition for a smooth threefold to be Stein but not affine? Are the two conditions $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$ and $\kappa(D, X)\leq 1$ sufficient? The known result is that there exist algebraic Stein threefolds $Y$ with $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$, $i>0$ and $\kappa(D, X)=1$ \cite{Zh1}. However, if $\kappa(D, X)=0$, we do not know whether there exists an algebraic Stein threefold with $H^i(Y, \Omega^j_Y)=0$, whereas such a surface exists (\cite{H2}, Chapter VI, Section 3 or \cite{Ku}). \noindent {\bf{Acknowledgments}} \quad I would like to thank the following professors for helpful discussions: Steven Dale Cutkosky, Dan Edidin, N.~Mohan Kumar, Zhenbo Qin and Qi Zhang. \section{Preparation} \begin{lemma}{\bf[Goodman, Hartshorne]} Let $V$ be a scheme and $D$ be an effective Cartier divisor on $V$. Let $U=V-{\rm Supp}\, D$ and let $F$ be any coherent sheaf on $V$. Then for every $i\geq 0,$ $$\lim_{{\stackrel{\to}{n}}} H^i(V, F\otimes {\mathcal{O}}(nD)) \cong H^i(U, F|_U). $$ \end{lemma} \begin{lemma} Let $I=\{i\}$ be a direct system of indices. Let $\{{\mathcal{F}}_i, f_{ji}\}$, $\{{\mathcal{G}}_i, g_{ji}\}$ and $\{{\mathcal{H}}_i, h_{ji}\}$ be direct systems of coherent sheaves indexed by $I$ over a topological space $X$.
If for all $i\in I$, there are short exact sequences $$ 0 \longrightarrow {\mathcal{F}}_i \longrightarrow {\mathcal{G}}_i \longrightarrow {\mathcal{H}}_i \longrightarrow 0 $$ and the commutative diagram for all $j\geq i$ \[ \begin{array}{ccccccccc} 0\longrightarrow{\mathcal{F}}_i& \longrightarrow {\mathcal{G}}_i& \longrightarrow {\mathcal{H}}_i &\longrightarrow 0\\ \quad \quad \Big\downarrow\vcenter{% \rlap{$\scriptstyle{f_{ji}}$}} & \quad\Big\downarrow\vcenter{% \rlap{$\scriptstyle{g_{ji}}$}} & \quad \Big\downarrow\vcenter{% \rlap{$\scriptstyle{h_{ji}}$}} & \\ 0\longrightarrow{\mathcal{F}}_j &\longrightarrow {\mathcal{G}}_j & \longrightarrow {\mathcal{H}}_j & \longrightarrow 0, \end{array} \] then we have exact sequence $$ 0 \longrightarrow \lim_{{\stackrel{\to}{i}}} {\mathcal{F}}_i \longrightarrow \lim_{{\stackrel{\to}{i}}} {\mathcal{G}}_i \longrightarrow \lim_{{\stackrel{\to}{i}}} {\mathcal{H}}_i \longrightarrow 0. $$ \end{lemma} {\it Proof}. By the assumption, for any point $x\in X$, we have short exact sequence on stalks $$ 0 \longrightarrow ({\mathcal{F}}_i)_x \longrightarrow ({\mathcal{G}}_i)_x \longrightarrow ({\mathcal{H}}_i)_x \longrightarrow 0 $$ and the commutative diagram for all $j\geq i$ \[ \begin{array}{ccccccccc} 0\longrightarrow ({\mathcal{F}}_i)_x& \longrightarrow ({\mathcal{G}}_i)_x& \longrightarrow ({\mathcal{H}}_i)_x &\longrightarrow 0\\ \quad \quad \Big\downarrow\vcenter{% \rlap{$\scriptstyle{({f_{ji}})_x}$}} & \quad\Big\downarrow\vcenter{% \rlap{$\scriptstyle{{(g_{ji}}})_x$}} & \quad \Big\downarrow\vcenter{% \rlap{$\scriptstyle{{(h_{ji}})}_x$}} & \\ 0\longrightarrow ({{\mathcal{F}}_j})_x &\longrightarrow ({{\mathcal{G}}_j})_x & \longrightarrow ({\mathcal{H}}_j)_x & \longrightarrow 0. 
\end{array} \] By Ueno, Algebraic Geometry 2 (\cite{Uen2}, Page 10), we have the following exact sequences of abelian groups on stalks $$ 0 \longrightarrow \lim_{{\stackrel{\to}{i}}} ({{\mathcal{F}}_i})_x \longrightarrow \lim_{{\stackrel{\to}{i}}} ({{\mathcal{G}}_i})_x \longrightarrow \lim_{{\stackrel{\to}{i}}} ({{\mathcal{H}}_i})_x \longrightarrow 0. $$ Since direct limits commute with each other (\cite{Br}, page 20), we have $$ \lim_{{\stackrel{\to}{i}}} ({{\mathcal{F}}_i})_x =\lim_{{\stackrel{\to}{i}}} \lim_{{\stackrel{\to}{x\in U}}}{{\mathcal{F}}_i} =\lim_{{\stackrel{\to}{x\in U}}} \lim_{{\stackrel{\to}{i}}} {{\mathcal{F}}_i} =(\lim_{{\stackrel{\to}{i}}} {{\mathcal{F}}_i})_x. $$ The Lemma follows from $$ 0 \longrightarrow ( \lim_{{\stackrel{\to}{i}}} {\mathcal{F}}_i)_x \longrightarrow ( \lim_{{\stackrel{\to}{i}}} {\mathcal{G}}_i)_x \longrightarrow ( \lim_{{\stackrel{\to}{i}}} {\mathcal{H}}_i)_x \longrightarrow 0. $$ \begin{flushright} Q.E.D. \end{flushright} \begin{lemma} Let $Y$ be a smooth variety contained in a smooth projective variety $X$ such that the complement $X-Y$ is compact and of pure codimension 1. Let $D$ be any effective divisor with support $X-Y$. Then we have the following two exact sequences $$ 0 \longrightarrow \lim_{{\stackrel{\to}{n}}} {\mathcal{O}}_X(nD) \longrightarrow \lim_{{\stackrel{\to}{n}}} {\mathcal{O}}_X((n+1)D) \longrightarrow \lim_{{\stackrel{\to}{n}}} {\mathcal{O}}_D((n+1)D) \longrightarrow 0 $$ and for all $j>0$ $$0 \longrightarrow \lim_{{\stackrel{\to}{n}}} \Omega^j_X(nD) \longrightarrow \lim_{{\stackrel{\to}{n}}} \Omega^j_X((n+1)D) \longrightarrow \lim_{{\stackrel{\to}{n}}} \Omega^j_X((n+1)D)|_D \longrightarrow 0. $$ \end{lemma} {\it Proof}. 
For any positive integers $n$ and $m$, we have the following commutative diagram \[ \begin{array}{ccccccccc} 0\longrightarrow{\mathcal{O}}_X(nD)& {\stackrel{f}{\longrightarrow}} {\mathcal{O}}_X((n+1)D)& {\stackrel{r}{\longrightarrow}} {\mathcal{O}}_D((n+1)D) &\longrightarrow 0\\ \quad \quad \Big\downarrow\vcenter{% \rlap{$\scriptstyle{i}$}} & \quad\Big\downarrow\vcenter{% \rlap{$\scriptstyle{i}$}} & \quad \Big\downarrow\vcenter{% \rlap{$\scriptstyle{h}$}} & \\ 0\longrightarrow{\mathcal{O}}_X((n+m)D) & {\stackrel{f}{\longrightarrow}} {\mathcal{O}}_X((n+m+1)D) & {\stackrel{r}{\longrightarrow}} {\mathcal{O}}_D((n+m+1)D) & \longrightarrow 0, \end{array} \] where the map $f$ in each row is defined by the local defining function of $D$, $r$ is the restriction map, the vertical map $i$ is the natural inclusion (multiplication by $1$) and the last vertical map $h$ is defined as follows. For any nonzero element $s$ in ${\mathcal{O}}_D((n+1)D)$ (locally), there is a $t$ in ${\mathcal{O}}_X((n+1)D)$ (locally) such that $r(t)=s$. Then we define $h(s)$ to be $r(i(t))=r(t)$. Since $i\neq f$, if $t$ is not zero on $D$, then $i(t)$ does not sit in the image of ${\mathcal{O}}_X((n+m)D)$ under the map $f$. Notice that the vertical map $i$ defines the direct limit. The first exact sequence is then an immediate consequence of Lemma 2.2. The second exact sequence can be proved similarly. \begin{flushright} Q.E.D. \end{flushright} \begin{lemma} Let $Y$ be a smooth threefold with $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$. Let $X$ be a smooth completion of $Y$ such that the complement $X-Y$ is compact and of pure codimension 1. Let $D$ be any effective divisor on $X$ with support $X-Y$; then for all $j\geq 0$ and $i>0$, we have $$ \lim_{{\stackrel{\to}{n}}} H^i(D, \Omega^j_X(nD)|_D)=0. $$ \end{lemma} {\it Proof}. Since $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$, by Lemma 2.1, we have $$\lim_{{\stackrel{\to}{n}}} H^i(X, \Omega^j_X(nD))=0.
$$ Since the direct limit commutes with cohomology (\cite{H1}, Chapter III, Proposition 2.9), the Lemma follows from the short exact sequences in Lemma 2.3. \begin{flushright} Q.E.D. \end{flushright} The following lemma is not new; we already proved it in our previous paper \cite{Zh1}. Since we frequently use the argument in the proofs of the theorems, we include a proof here for completeness. \begin{lemma} Under the condition of Lemma 2.4, $H^3(X, {\mathcal{O}}_X(nD))=0$ for sufficiently large $n$. \end{lemma} {\it Proof}. From the exact sequence $$ 0 \longrightarrow {\mathcal{O}}_X(nD) \longrightarrow {\mathcal{O}}_X((n+1)D) \longrightarrow {\mathcal{O}}_D((n+1)D) \longrightarrow 0, $$ we have a surjective map from $H^3(X,{\mathcal{O}}_X(nD))$ to $H^3(X,{\mathcal{O}}_X((n+1)D))$ for all $n\geq 0$. By Lemma 2.1, we have $$ \lim_{{\stackrel{\to}{n}}} H^3(X,{\mathcal{O}}_X(nD))=0. $$ These two conditions imply the vanishing of the third cohomology for large $n$. \begin{flushright} Q.E.D. \end{flushright} \begin{theorem}{\bf [Mohan Kumar]} Let $Y$ be a smooth algebraic surface over $\Bbb{C}$ with $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$. Then $Y$ is one of the following: (1) $Y$ is affine. (2) Let $C$ be an elliptic curve and $E$ the unique nonsplit extension of ${\mathcal{O}}_C$ by itself. Let $X={\Bbb{P}}_C(E)$ and let $D$ be the canonical section; then $Y=X-D$. (3) Let $X$ be a projective rational surface with an effective divisor $D=-K$ with $D^2=0$, let ${\mathcal{O}}(D)|_D$ be nontorsion and let the dual graph of $D$ be $\tilde{D}_8$ or $\tilde{E}_8$; then $Y=X-D$. \end{theorem} In the above theorem, when $Y$ is affine, we can choose $D$ such that $D$ is ample (\cite{H2}, Theorem 4.2, page 69), so $\kappa(D, X)=2$. If $Y$ is not affine, then $\kappa(D, X)=0$ by Lemma 1.8 \cite{Ku}. \begin{theorem}{\bf [Cutkosky, Srinivas]} Let $k$ be a field of characteristic 0. Let $X$ be a normal surface, proper over $k$, and let $D$ be an effective Cartier divisor on $X$.
Then, for sufficiently large $n$, $$ h^0(X, {\mathcal{O}}_X(nD))=P(n)+\lambda(n), $$ where $P(n)$ is a quadratic polynomial and the function $\lambda(n)$ is periodic. \end{theorem} For the proof of the following theorem of Iitaka, see Lecture 3 of \cite{I1} or Theorem 8.1 of \cite{Uen1}. \begin{theorem}{\bf[Iitaka]} Let $X$ be a normal projective variety and let $D$ be an effective divisor on $X$. There exist two positive numbers $\alpha$ and $\beta$ such that for all sufficiently large $n$ we have $$ \alpha n^{\kappa(D, X)} \leq h^0(X, {\mathcal{O}}_X(nD)) \leq \beta n^{\kappa(D, X)}. $$ \end{theorem} \section{Proof of the Theorems} From now on, we fix notation as follows: $Y$ is a smooth threefold with $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$, and $X$ is a smooth completion of $Y$ such that the complement $X-Y=D$ is a smooth projective surface. {\bf Proof of Theorem 1.1.} Since $\kappa(D, X)=3$ and $D$ is effective, by Theorem 2.8 there is a constant $c>0$ such that for sufficiently large $n$, $h^0(X, {\mathcal{O}}_X(nD))> c n^3$. From the short exact sequence $$0\longrightarrow {\mathcal{O}}_X(nD) \longrightarrow {\mathcal{O}}_X((n+1)D) \longrightarrow {\mathcal{O}}_D((n+1)D) \longrightarrow 0, $$ we have $$0\longrightarrow H^0({\mathcal{O}}_X(nD)) \longrightarrow H^0({\mathcal{O}}_X((n+1)D)) \longrightarrow H^0({\mathcal{O}}_D((n+1)D)) \longrightarrow \cdots. $$ So there are infinitely many $n$ such that $h^0({\mathcal{O}}_D(nD))>0$, since $h^0(X, {\mathcal{O}}_X(nD))$ grows like $cn^3$ for sufficiently large $n$. Let $$ N=\{ m\in {\Bbb{N}} : h^0(D, {\mathcal{O}}_D(mD))>0\}. $$ Let $p$ be the greatest common divisor of the elements of $N$; then $$ h^0(D, {\mathcal{O}}_D(npD)) >0 $$ for all $n\geq 0$. Thus the line bundle ${\mathcal{O}}_D(pD)$ determines an effective divisor $G$ on $D$ such that $${\mathcal{O}}_D(pD)={\mathcal{O}}_D(G).$$ If ${\mathcal{O}}_D(D)$ is torsion, then by Lecture 3 of \cite{I1}, $h^0(D, {\mathcal{O}}_D(nD))\leq 1$ for all $n\geq 0$.
Since $D$ is big, i.e., $h^0(X, {\mathcal{O}}_X(nD))> c n^3$, ${\mathcal{O}}_D(D)$ is not torsion. So we may assume that $D|_D$ determines an effective divisor on $D$. We still denote it as $G$. By Lemma 2.4, we have for all $j\geq 0$ and $i>0$, $$ \lim_{{\stackrel{\to}{n}}} H^i(D, \Omega^j_X(nD)|_D)=0. $$ In particular, for all $j\geq 0$ and $i>0$, $$ \lim_{{\stackrel{\to}{n}}} H^i(D, \Omega^j_X(nD)|_D)= \lim_{{\stackrel{\to}{n}}} H^i(D, \Omega^j_X\otimes {\mathcal{O}}_D(nD)) $$ $$\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad =\lim_{{\stackrel{\to}{n}}} H^i(D, \Omega^j_X\otimes {\mathcal{O}}_D(nG))=0. $$ Let $S=D-G$, by Lemma 2.1, for all $i>0$ and $j\geq 0$, we have $$H^i(S, \Omega^j_X|_S)= H^i(S, \Omega^j_D|_S)=0. $$ By the first exact sequence of Lemma 2.3, we have $$ \cdot\cdot\cdot \longrightarrow H^1(X, \lim_{{\stackrel{\to}{n}}} {\mathcal{O}}_X((n+1)D)) \longrightarrow H^1(D, \lim_{{\stackrel{\to}{n}}} {\mathcal{O}}_D((n+1)D)) $$ $$ \longrightarrow H^2(X, \lim_{{\stackrel{\to}{n}}} {\mathcal{O}}_X(nD)) \longrightarrow \cdot\cdot\cdot $$ Since the direct limit commutes with cohomology \cite{H1}, Chapter III, Proposition 2.9, and by Lemma 2.1, for all $i>0$, $$\lim_{{\stackrel{\to}{n}}} H^i(X, {\mathcal{O}}_X(nD) ) =0, $$ we have $$ H^i(S, {\mathcal{O}}_S)= \lim_{{\stackrel{\to}{n}}} H^i(D, {\mathcal{O}}_D(nD) )=0. $$ By \cite{H1}, page 178, Theorem 8.17 or \cite{GrH}, page 157, ($**$), we have $$0\longrightarrow {\mathcal{O}}_D(-D) \longrightarrow \Omega^1_X|_D \longrightarrow \Omega^1_D \longrightarrow 0. $$ Tensoring with ${\mathcal{O}}_D((n+1)D)$, we have $$0\longrightarrow {\mathcal{O}}_D(nD) \longrightarrow \Omega^1_X|_D \otimes {\mathcal{O}}_D((n+1)D) \longrightarrow \Omega^1_D ((n+1)D) \longrightarrow 0. 
$$ Taking the direct limit, the corresponding long exact sequence gives $$H^i(S, \Omega^1_S)= \lim_{{\stackrel{\to}{n}}} H^i(D,\Omega^1_D ((n+1)D) ) =0.$$ Applying the same procedure to the short exact sequence $$ 0\longrightarrow \Omega^1_D(-D) \longrightarrow \Omega^2_X|_D \longrightarrow \Omega^2_D \longrightarrow 0, $$ we obtain $H^i(S, \Omega^2_S)=0$. So $S$ satisfies the same vanishing condition, i.e., for all $i>0$ and $j\geq 0$, $H^i(S, \Omega^j_S)=0$. By Theorem 2.6 in Section 2 and Lemma 1.8 of \cite{Ku}, the boundary divisor $G$ on $D$ is connected and $\kappa(G, D)=0$ or $2$. We know that there are $(-2)$-curves on the type (3) surface in Theorem 2.6. Since $D$ has no exceptional curves, $D$ is not of type (3). Let $C$ be an irreducible complete curve on $X$. If $C$ is not contained in $D$, then $C\cdot D>0$, since $Y$ has no complete curves by Lemma 5 of \cite{Zh1}. If $C$ is contained in $D$ but not in $G$, then again $$C\cdot D= C\cdot D|_D=C\cdot G >0$$ since $S$ has no complete curves by Lemma 1.1 of \cite{Ku}. If $C$ is a component of $G$, then $C\cdot G\geq 0$, since $G$ has no exceptional curves and is connected. Therefore $D$ is nef. We know that $S$ is either affine or a type (2) surface in Theorem 2.6. If $S$ is of type (2), then the boundary $D-S$ is an irreducible elliptic curve with self-intersection number 0 by Theorem 2.6. This gives $G^2=D^3=0$, which is impossible since $D$ is nef and $h^0(X, {\mathcal{O}}_X(nD))> c n^3$ (Proposition 2.61, \cite{KM}). So $S$ must be affine. Therefore for any component $C$ of $G$, $$ D^3= G^2\geq C\cdot G>0. $$ Therefore $D$ is big and nef. We will prove that the linear system $|nD|$ is base point free for sufficiently large $n$. Since $S$ is an affine surface and $G$ contains no exceptional curves, $G$ is ample, i.e., $pD|_D$ is ample on $D$. Thus $D|_D$ is ample on $D$. Let $L=D|_D$; then $$ H^1(D, {\mathcal{O}}_D(nL))= H^1(D, {\mathcal{O}}_D(nD))=0 $$ for sufficiently large $n$.
Thus for all sufficiently large $n$ we have a surjective map $$ H^1(X, {\mathcal{O}}_X((n-1)D)) \longrightarrow H^1(X, {\mathcal{O}}_X(nD)) \longrightarrow 0. $$ Since $$ \lim_{{\stackrel{\to}{n}}} H^1(X, {\mathcal{O}}_X(nD))=0, $$ we have $ H^1(X, {\mathcal{O}}_X(nD))=0$ for large $n$. Thus we have an exact sequence $$ 0 \longrightarrow H^0(X, {\mathcal{O}}_X((n-1)D)) \longrightarrow H^0(X, {\mathcal{O}}_X(nD)) \longrightarrow H^0(D, {\mathcal{O}}_D(nD)) \longrightarrow 0. $$ This implies that the linear system $|nD|$ is base point free. In fact, a point $x\in X$ is a base point of $|nD|$ if and only if $s(x)=0$ for every section $s\in H^0({\mathcal{O}}_X(nD))$, or equivalently, if and only if $x\in E$ for every effective divisor $E\in |nD|$. Suppose that $x$ is a base point of $|nD|$; then $x\in nD$, and thus $x\in D$. Since $nD|_D$ is very ample for large $n$, there is a divisor $F\in |nD|_D|$ such that $x$ is not a point of $F$. Pulling $F$ back to $|nD|$, there is a divisor $E\in |nD|$ such that $x$ is not a point of $E$. This is a contradiction. So the linear system $|nD|$ is base point free for sufficiently large $n$. \begin{flushright} Q.E.D. \end{flushright} {\bf Proof of Theorem 1.3.} In order to prove Theorem 1.3, we analyse three cases: $D^3>0$, $D^3< 0$ and $D^3=0$. \begin{proposition} Let $Y$ be a smooth threefold contained in a smooth projective threefold $X$ such that $H^0(Y, {\mathcal{O}}_Y )=\Bbb{C}$, $H^i(Y, \Omega^j_Y)=0$ for all $j\geq 0$ and $i>0$ and the complement $D=X-Y$ is a smooth projective surface. Then $D^3$ is not positive. \end{proposition} {\it Proof}. Suppose $D^3>0.$ By the Riemann-Roch formula for surfaces, we have $$ h^0({\mathcal{O}}_D(nD))-h^1({\mathcal{O}}_D(nD))+h^2({\mathcal{O}}_D(nD)) =\chi({\mathcal{O}}_D)+\frac{1}{2}n^2D^3-\frac{1}{2}nD^2K_D, $$ $$h^0({\mathcal{O}}_D(-nD))-h^1({\mathcal{O}}_D(-nD))+h^2({\mathcal{O}}_D(-nD)) =\chi({\mathcal{O}}_D)+\frac{1}{2}n^2D^3+\frac{1}{2}nD^2K_D.
$$ So we have $$ h^0({\mathcal{O}}_D(nD))+h^2({\mathcal{O}}_D(nD)) \geq \chi({\mathcal{O}}_D)+\frac{1}{2}n^2D^3-\frac{1}{2}nD^2K_D, $$ $$h^0({\mathcal{O}}_D(-nD))+h^2({\mathcal{O}}_D(-nD)) \geq\chi({\mathcal{O}}_D)+\frac{1}{2}n^2D^3+\frac{1}{2}nD^2K_D. $$ Since $D^3>0$ and $(K_D+nD|_D)+(K_D-nD|_D)=2K_D$, either $h^0({\mathcal{O}}_D(nD))\longrightarrow \infty$ or $h^0({\mathcal{O}}_D(-nD)) \longrightarrow \infty$ as $n\longrightarrow \infty$, and they cannot be big simultaneously: otherwise we would have $h^0({\mathcal{O}}_D(2K_D))=\infty$, which is absurd. (Indeed, by Serre duality $h^2({\mathcal{O}}_D(\pm nD))=h^0({\mathcal{O}}_D(K_D\mp nD|_D))$; if $h^0({\mathcal{O}}_D(nD))$ stays bounded, the first inequality forces $h^0({\mathcal{O}}_D(K_D-nD|_D))$ to be unbounded, so multiplying sections shows $h^0({\mathcal{O}}_D(K_D+nD|_D))=h^2({\mathcal{O}}_D(-nD))$ must vanish for large $n$, and then the second inequality gives $h^0({\mathcal{O}}_D(-nD))\longrightarrow\infty$.)\\ Case 1. $h^0({\mathcal{O}}_D(nD))\longrightarrow \infty$. In this case, there is an integer $n_0>0$ such that for all $n\geq n_0$, $nD|_D$ is an effective divisor on $D$. We may assume that $D|_D=G$ is effective without loss of generality. From the exact sequence $$0\longrightarrow {\mathcal{O}}_D(nG) \longrightarrow {\mathcal{O}}_D((n+1)G) \longrightarrow {\mathcal{O}}_G((n+1)G) \longrightarrow 0, $$ since $H^2({\mathcal{O}}_G((n+1)G))=0$, we have a surjective map $$ H^2({\mathcal{O}}_D(nG))\longrightarrow H^2({\mathcal{O}}_D((n+1)G) )\longrightarrow 0. $$ By Lemma 2.4, $$ \lim_{{\stackrel{\to}{n}}} H^2({\mathcal{O}}_D(nG)) =\lim_{{\stackrel{\to}{n}}} H^2({\mathcal{O}}_D(nD))=0. $$ So for all sufficiently large $n$, $H^2({\mathcal{O}}_D(nD))=0$. By the same argument, $H^2({\mathcal{O}}_X(nD))=0$ for $n\gg 0$. Since by Lemma 2.5, $H^3({\mathcal{O}}_X(nD))=0$, we have $$ \chi({\mathcal{O}}_X(nD))= h^0({\mathcal{O}}_X(nD))-h^1({\mathcal{O}}_X(nD)) =1-h^1({\mathcal{O}}_X(nD))\leq 1. $$ On the other hand, since $D^3>0$, by the Riemann-Roch formula for threefolds (\cite{Ful}, page 291), we have $$ \chi({\mathcal{O}}_X(nD))= \frac{1}{6}(nD)^3+ \frac{1}{4} c_1\cdot(nD)^2 +\frac{1}{12}(c_1^2+c_2)\cdot nD +\frac{1}{24}c_1\cdot c_2 \longrightarrow \infty $$ as $n\longrightarrow \infty .$ Therefore we get a contradiction. Thus $h^0({\mathcal{O}}_D(nD))\longrightarrow \infty$ is not a possible case. \\ Case 2.
$h^0({\mathcal{O}}_D(-nD))\longrightarrow \infty$ as $n\longrightarrow \infty .$ There is an $n_0>0$ such that for all $n\geq n_0$, $-nD|_D$ is an effective divisor on $D$. Again we may assume that $G=-D|_D$ is effective. Since $(K_X+D)|_D=K_D$ (Proposition 8.20, Chapter II, \cite{H1}), by Lemma 2.4 and Serre duality, we have $$ \lim_{{\stackrel{\to}{n}}} H^2(D, \Omega^3_X((n+1)D)|_D)= \lim_{{\stackrel{\to}{n}}} H^2(D, {\mathcal{O}}_X(K_X+(n+1)D)|_D) $$ $$\quad\quad \quad\quad\quad \quad\quad\quad \quad\quad\quad = \lim_{{\stackrel{\to}{n}}} H^2(D, {\mathcal{O}}_D(K_D+nD|_D))=0. $$ Taking duals of these vector spaces, we have (\cite{H2}, Chapter 3, Section 3) $$ \lim_{{\stackrel{\gets}{n}}} H^0(D, {\mathcal{O}}_D(-nD)) =(\lim_{{\stackrel{\to}{n}}}H^2(D, {\mathcal{O}}_D(K_D+nD)))^* =0, $$ where $*$ indicates the dual vector space. Let $A_n=H^0({\mathcal{O}}_D)$; then $A_n=\Bbb{C}$. Let $B_n=H^0(D, {\mathcal{O}}_D(-nD))$ and let $C_n$ be their quotient, so that we have a short exact sequence $$ 0\longrightarrow A_n \longrightarrow B_n \longrightarrow C_n \longrightarrow 0. $$ Since $-D|_D=G$ is effective, $B_n$ is a subspace of the vector space $B_{n'}$ if $n'> n$. The map $B_{n'}\rightarrow B_n$ is the natural restriction map. So we have the exact sequence of inverse systems $$0\longrightarrow (A_n) \longrightarrow (B_n) \longrightarrow (C_n) \longrightarrow 0. $$ By \cite{H1}, page 191, we have an injective map $$0\longrightarrow {\Bbb{C}}= \lim_{{\stackrel{\gets}{n}}} A_n \longrightarrow \lim_{{\stackrel{\gets}{n}}} B_n, $$ which contradicts $$\lim_{{\stackrel{\gets}{n}}} B_n=0. $$ Thus $h^0({\mathcal{O}}_D(-nD))\longrightarrow \infty$ as $n\longrightarrow \infty $ is also impossible. We have seen that $D^3$ is not positive. \begin{flushright} Q.E.D. \end{flushright} \begin{remark} Since $h^0(X, {\mathcal{O}}_X(nD))=1$, when $D$ is not irreducible but nef, we still have $D^3\leq 0$ (Proposition 2.61, \cite{KM}).
\end{remark} \begin{proposition} With the assumption of Proposition 3.1, $D^3$ is not negative. \end{proposition} {\it Proof}. Suppose $D^3<0.$ If $h^0({\mathcal{O}}_D(nD))=0$ for all $n\gg 0$, then we have an injective map $$ 0\longrightarrow H^1(X, {\mathcal{O}}_X(nD)) \longrightarrow H^1(X, {\mathcal{O}}_X((n+1)D)). $$ By Lemma 2.4, $$ \lim_{{\stackrel{\to}{n}}}H^1(X, {\mathcal{O}}_X(nD))=0. $$ These two facts imply $H^1(X, {\mathcal{O}}_X(nD))=0$ for all $n\gg 0$. Since $H^3(X, {\mathcal{O}}_X(nD))=0$ for all $n\gg 0$, the Euler characteristic $$\chi ({\mathcal{O}}_X(nD))= 1+h^2({\mathcal{O}}_X(nD))\geq 1. $$ On the other hand, since $D^3<0$, by the Riemann-Roch formula (\cite{Ful}, page 291), we have $$ \chi ({\mathcal{O}}_X(nD))= \frac{1}{6} n^3D^3+ \frac{1}{4} c_1\cdot(nD)^2 +\frac{1}{12}(c_1^2+c_2)\cdot nD +\frac{1}{24}c_1\cdot c_2 $$ $$ \longrightarrow -\infty \quad {\mbox{as}} \quad n\rightarrow \infty. $$ This contradicts the fact that $\chi ({\mathcal{O}}_X(nD))$ is positive. Therefore there are infinitely many $n$ such that $h^0({\mathcal{O}}_D (nD))>0$. Let $N$, $p$ be the same as in the proof of Theorem 1.1, i.e., $$ N=\{ m\in {\Bbb{N}}, h^0(D, {\mathcal{O}}_D(mD))>0\}, $$ and $p$ is the greatest common divisor of $N$. Then $h^0(D, {\mathcal{O}}_D(pD))>0$. Since $D^3\neq 0$, the line bundle ${\mathcal{O}}_D(pD)$ is not trivial and $pD|_D$ defines an effective divisor on the surface $D$. Thus we may assume that $D|_D=G$ is an effective divisor on $D$. Let $S=D-G$; by the same argument as in the proof of Theorem 1.1, $H^i(S, \Omega^j_S)=0$ for all $i>0$ and $j\geq 0$ since $$\lim_{{\stackrel{\to}{n}}} H^i(D, \Omega^j_X(nD)|_D)=0. $$ By Theorem 2.6 in Section 2 and Lemma 1.8 \cite{Ku}, $\kappa(G, D)=0$ or $2$. If $\kappa(G, D)=2$, then by Theorem 2.7, for all $n\gg 0$, $$ h^0(D, {\mathcal{O}}_D(nG))=a_2n^2+a_1n+a_0+\lambda(n), $$ where $\lambda(n)$ is periodic.
By Theorem 2.8, since $\kappa(G, D)=2$, there is a constant $c>0$ such that $ h^0(D, {\mathcal{O}}_D(nG))> cn^2$. Thus $a_2>0$. By Lemma 2.1, $$ \lim_{{\stackrel{\to}{n}}}H^0(D, {\mathcal{O}}_D(nG)) =\lim_{{\stackrel{\to}{n}}}H^0(D, {\mathcal{O}}_D(nD)) =H^0(S, {\mathcal{O}}_S) \neq 0. $$ In fact, since $S$ is affine, $h^0(S, {\mathcal{O}}_S)=\infty$. By Lemma 2.3, we have $$ 0\longrightarrow \lim_{{\stackrel{\to}{n}}} H^0(X, {\mathcal{O}}_X(nD-D)) \longrightarrow \lim_{{\stackrel{\to}{n}}} H^0(X, {\mathcal{O}}_X(nD)) $$ $$ \longrightarrow \lim_{{\stackrel{\to}{n}}} H^0(X, {\mathcal{O}}_D(nD)) \longrightarrow 0. $$ By Lemma 2.1, $$ \lim_{{\stackrel{\to}{n}}} H^0(X, {\mathcal{O}}_X(nD))=H^0(Y, {\mathcal{O}}_Y) =\Bbb{C} $$ and $$\lim_{{\stackrel{\to}{n}}} H^0(X, {\mathcal{O}}_X(nD-D))=H^0(Y, {\mathcal{O}}_X(-D)|_Y) =\Bbb{C}. $$ Thus the above exact sequence is $$ 0\longrightarrow {\Bbb{C}} \longrightarrow {\Bbb{C}} \longrightarrow H^0(S, {\mathcal{O}}_S) \longrightarrow 0. $$ This is impossible since $h^0(S, {\mathcal{O}}_S)=\infty$. So $\kappa(G, D)\neq 2$. If $\kappa(G, D)=0$, since ${\mathcal{O}}_D(D)\neq {\mathcal{O}}_D$ and $D|_D$ defines an effective divisor on $D$, by Lemmas 2.1 and 2.4 and Lemma 1.8 \cite{Ku} we have $$ \lim_{{\stackrel{\to}{n}}} H^0(D, {\mathcal{O}}_D(nD))= H^0(S, {\mathcal{O}}_S) =\Bbb{C}. $$ Again by Lemmas 2.1 and 2.3, we have $$ 0\longrightarrow \lim_{{\stackrel{\to}{n}}} H^0(X, {\mathcal{O}}_X(nD-D)) \longrightarrow \lim_{{\stackrel{\to}{n}}} H^0(X, {\mathcal{O}}_X(nD)) $$ $$ \longrightarrow \lim_{{\stackrel{\to}{n}}} H^0(D, {\mathcal{O}}_D(nD)) \longrightarrow 0, $$ which is impossible since all three direct limits are $\Bbb{C}$. Therefore $D^3$ is not negative. \begin{flushright} Q.E.D. \end{flushright} \begin{proposition} Under the condition of Proposition 3.1, $D^3=0$ and $D|_D$ is numerically equivalent to 0. \end{proposition} {\it Proof.} Let $H$ be an ample divisor on $X$.
Then the restriction $L=H|_D$ to $D$ is still ample by Proposition 4.1 \cite{H2}. We may assume that $L$ is smooth by Bertini's Theorem (Chapter 2, Theorem 8.8 \cite{H1}, or Theorem 4.21 \cite{Uen1}). Let $G$ be the divisor determined by the line bundle ${\mathcal{O}}_D(D)$. If $L\cdot G \neq 0$, then there exists an $n_0>0$ such that either the linear system $|n_0G|$ is nonempty or $|-n_0G|$ is nonempty. If $|n_0G|$ is nonempty, we may assume that $G$ is effective. Let $S=D-G$. By Lemma 2.3, we have $$\lim_{{\stackrel{\to}{n}}} H^0(D, {\mathcal{O}}_D(nD))=H^0(S, {\mathcal{O}}_S). $$ By Theorem 2.8, $$ h^0(S, {\mathcal{O}}_S)= {\mbox{dim}}_{\Bbb{C}} \lim_{{\stackrel{\to}{n}}} H^0(D, {\mathcal{O}}_D(nD))\neq 0. $$ By the first exact sequence in Lemma 2.3, we have an exact sequence $$0\longrightarrow {\Bbb{C}} \longrightarrow {\Bbb{C}} \longrightarrow H^0(S, {\mathcal{O}}_S) \longrightarrow 0. $$ This is impossible. If $|-n_0G|$ is nonempty, then we may assume that $-G$ is an effective divisor on $D$. By the same argument as in Case 2 of Proposition 3.1, we have $$ \lim_{{\stackrel{\gets}{n}}} H^0(D, {\mathcal{O}}_D(-nD)) =(\lim_{{\stackrel{\to}{n}}}H^2(D, {\mathcal{O}}_D(K_D+nD)))^* =0, $$ which is again not possible since $$ {\Bbb{C}}=\lim_{{\stackrel{\gets}{n}}} H^0(D, {\mathcal{O}}_D) \hookrightarrow \lim_{{\stackrel{\gets}{n}}} H^0(D, {\mathcal{O}}_D(-nD)). $$ We have seen that the only possible case is $L\cdot G=0$. Since $G^2=D^3=0$, by the Hodge Index Theorem, $G$ is numerically equivalent to 0. If $G=0$, then $$(\lim_{{\stackrel{\to}{n}}}H^2(D, {\mathcal{O}}_D(K_D+nD)))^* =\lim_{{\stackrel{\gets}{n}}} H^0(D, {\mathcal{O}}_D(-nD)) =\Bbb{C}, $$ which contradicts Lemma 2.4. So we have $G\equiv 0$ but $G\neq 0$. \begin{flushright} Q.E.D.
\end{flushright} \begin{proposition} Under the condition of Proposition 3.1, if ${\mathcal{O}}_D(D)$ is not torsion, then $q=h^1(X, {\mathcal{O}}_X)=0$, $\frac{1}{12}(c_1^2+c_2)\cdot D=\chi({\mathcal{O}}_D)=0$, $\chi ({\mathcal{O}}_X)>0$ and $K_X$ is not nef. \end{proposition} {\it Proof.} If there is an $n_0$ such that $h^0(D, {\mathcal{O}}_D(n_0D))\neq 0$ (or $h^0(D, {\mathcal{O}}_D(-n_0D))\neq 0$), then $\kappa(D, D|_D)\geq 0$. By \cite{I1}, pages 34--35, there are infinitely many $n$ such that $h^0(D, {\mathcal{O}}_D(nD))\neq 0$ (or $h^0(D, {\mathcal{O}}_D(-nD))\neq 0$). Then $D|_D$ (or $-D|_D$) defines an effective divisor on $D$ since ${\mathcal{O}}_D(D)$ is not torsion. By Lemma 2.3, we again obtain the impossible exact sequence $$ 0 \longrightarrow {\Bbb{C}} \longrightarrow {\Bbb{C}} \longrightarrow H^0(S, {\mathcal{O}}_S) \longrightarrow 0. $$ So $h^0(D, {\mathcal{O}}_D(nD))=h^0(D, {\mathcal{O}}_D(-nD))=0$ for all $n>0$. Since $h^0(D, {\mathcal{O}}_D(nD))=0$ for all $n>0$, from the exact sequence $$0\longrightarrow {\mathcal{O}}_X((n-1)D) \longrightarrow {\mathcal{O}}_X(nD) \longrightarrow {\mathcal{O}}_D(nD) \longrightarrow 0, $$ we have an injective map from $H^1(X, {\mathcal{O}}_X(nD))$ to $H^1(X, {\mathcal{O}}_X((n+1)D))$. Since the direct limit is 0, $H^1(X, {\mathcal{O}}_X(nD))=0$ for sufficiently large $n$. By the injectivity, $H^1(X, {\mathcal{O}}_X((n-1)D))=0$ for all $n>0$, i.e., $H^1(X, {\mathcal{O}}_X(nD))=H^1(X, {\mathcal{O}}_X)=0$. Thus for all $n > 0$, by Lemma 2.5, we have an exact sequence $$0\longrightarrow H^1(D, {\mathcal{O}}_D((n+1)D)) \longrightarrow H^2(X, {\mathcal{O}}_X(nD)) \longrightarrow H^2(X, {\mathcal{O}}_X((n+1)D)) $$ $$ \longrightarrow H^2(D, {\mathcal{O}}_D((n+1)D)) \longrightarrow 0. $$ This gives us $$ h^2( {\mathcal{O}}_X((n+1)D))- h^2( {\mathcal{O}}_X(nD))= h^2( {\mathcal{O}}_D((n+1)D)) -h^1( {\mathcal{O}}_D((n+1)D)). $$ By the Riemann-Roch formulas for surfaces and threefolds, we have $$ \frac{1}{12} (c_1^2+c_2)\cdot D = \chi ({\mathcal{O}}_D).
$$ Since for all $n\gg 0$, $h^1({\mathcal{O}}_X(nD))=h^3({\mathcal{O}}_X(nD))=0$ and $D^3=D^2\cdot K_D=0$, we have $\chi({\mathcal{O}}_X(nD))=1+ h^2( {\mathcal{O}}_X(nD))\geq 1$. So $\frac{1}{12} (c_1^2+c_2)\cdot D = \chi ({\mathcal{O}}_D)\geq 0$. If $h^1({\mathcal{O}}_D(nD))=0$ for all $n\gg 0$, then by the above exact sequence, we have $$0 \longrightarrow H^2(X, {\mathcal{O}}_X(nD)) \longrightarrow H^2(X, {\mathcal{O}}_X((n+1)D)) \longrightarrow H^2(D, {\mathcal{O}}_D((n+1)D)) \longrightarrow 0. $$ It is easy to see that for all $n\geq 0$, $h^2(X, {\mathcal{O}}_X(nD))=0$ and $h^2( {\mathcal{O}}_D((n+1)D))=h^1( {\mathcal{O}}_D((n+1)D))=0$. Thus $\chi ({\mathcal{O}}_D)=0$ and the proposition follows. So the remaining case is: for infinitely many $n>0$, $h^1({\mathcal{O}}_D(nD))>0$. We claim that $\chi ({\mathcal{O}}_D)=0$. If not, then $\chi ({\mathcal{O}}_D)>0$. Since for all $n>0$, $h^0(D, {\mathcal{O}}_D(-nD))= h^2(D, {\mathcal{O}}_D(K_D+nD))=0$, we have a surjective map from $H^2(X, {\mathcal{O}}_X(K_X+(n-1)D))$ to $H^2(X, {\mathcal{O}}_X(K_X+nD))$. By Lemma 2.4, $$\lim_{{\stackrel{\to}{n}}}H^2(X, {\mathcal{O}}_X(K_X+nD))=0. $$ So for $n\gg 0$, $H^2(X, {\mathcal{O}}_X(K_X+nD))=0$. Applying Riemann-Roch to the sheaf ${\mathcal{O}}_X(K_X+nD)$, we have $$ h^0(X, {\mathcal{O}}_X(K_X+nD))\geq (\frac{1}{12} (c_1^2+c_2)\cdot D )n>0. $$ Since $h^0(X, {\mathcal{O}}_X(nD))=1$ for all $n\geq 0$, the above inequality is not true. To see this, let $D_m$ be an effective divisor in the linear system $|K_X+ mD|$ and $D_{m+n}$ be an effective divisor in the linear system $|K_X+(m+n)D|$; then $D_m+nD$ is linearly equivalent to $D_{m+n}$. Let $f$ be the nonconstant rational function on $X$ such that div$f+D_{m+n}=D_m+nD$; then div$f+D_{m+n}-D_m=nD>0$. This gives $f\in H^0(X, {\mathcal{O}}_X(D_{m+n}-D_m))$, and $D_{m+n}-D_m$ is linearly equivalent to $nD$.
Since $h^0(X, {\mathcal{O}}_X(K_X+(m+n)D))\geq c(m+n)$, there are at least two linearly independent such $f$ in $H^0(X, {\mathcal{O}}_X(D_{m+n}-D_m))$. This contradicts $H^0(X, {\mathcal{O}}_X(D_{m+n}-D_m))= H^0(X, {\mathcal{O}}_X(nD))=\Bbb{C}$ for all $n\geq 0$. Thus $h^0(X, {\mathcal{O}}_X(K_X+nD))$ cannot be greater than $cn$ for any $c>0$. So $$ \frac{1}{12} (c_1^2+c_2)\cdot D = \chi ({\mathcal{O}}_D)=0. $$ By Riemann-Roch, $h^2(X, {\mathcal{O}}_X(nD))=h^2(X, {\mathcal{O}}_X)- h^3(X, {\mathcal{O}}_X)$ is a constant and $h^2( {\mathcal{O}}_D(nD)) =h^1( {\mathcal{O}}_D(nD))$ for all $n\geq 0$. Since $h^2(X, {\mathcal{O}}_X)- h^3(X, {\mathcal{O}}_X)\geq 0$ and $q=0$, $\chi ({\mathcal{O}}_X)\geq 1$. By a theorem of Miyaoka \cite{Mi}, $K_X$ is not nef. \begin{flushright} Q.E.D. \end{flushright} \begin{proposition} Under the condition of Proposition 3.1, if ${\mathcal{O}}_D(D)$ is torsion, then $h^1(X, {\mathcal{O}}_X(nD))$ is bounded for all $n$ and $ \frac{1}{12} (c_1^2+c_2)\cdot D = \chi ({\mathcal{O}}_D)\geq 0$. \end{proposition} {\it Proof. } Let $p$ be the least natural number such that ${\mathcal{O}}_D(pD)={\mathcal{O}}_D$. Let $G$ be the divisor defined by $D|_D$; then $\kappa (G, D)=0$. By Iitaka \cite{I1}, page 35, $h^0(D, {\mathcal{O}}_D(nD))=0$ if $p$ does not divide $n$. Lemmas 2.1--2.5 in \cite{Ku} still hold in our case, so $h^1(X, {\mathcal{O}}_X(nD))$ is bounded for all $n\geq 0$. Since $h^3(X, {\mathcal{O}}_X(nD))=0$ and $D^3=D^2\cdot K_X=0$, by Riemann-Roch, we have $$\chi ({\mathcal{O}}_X(nD))=1-h^1(X, {\mathcal{O}}_X(nD)) +h^2(X, {\mathcal{O}}_X(nD))=\frac{1}{12} (c_1^2+c_2) nD +\frac{1}{24}c_1c_2.$$ So $\frac{1}{12} (c_1^2+c_2) D\geq 0$ since $h^1(X, {\mathcal{O}}_X(nD))$ is bounded for all $n\geq 0$.
By the short exact sequence $$ 0 \longrightarrow {\mathcal{O}}_X((np-1)D) \longrightarrow {\mathcal{O}}_X(npD) \longrightarrow {\mathcal{O}}_D(npD)={\mathcal{O}}_D \longrightarrow 0, $$ we have $$\chi ({\mathcal{O}}_D)= \chi ({\mathcal{O}}_X(npD))-\chi ({\mathcal{O}}_X((np-1)D)) =\frac{1}{12} (c_1^2+c_2) D\geq 0. $$ \begin{flushright} Q.E.D. \end{flushright} We have proved Theorem 1.3. By the standard classification theory of surfaces, if ${\mathcal{O}}_D(D)$ is not torsion, $D$ might be one of the following surfaces: (1) ruled surfaces over an elliptic curve; (2) bi-elliptic surfaces; (3) abelian surfaces; (4) elliptic surfaces.
\section{Introduction} In 1975, Erd\H{o}s and Shelah~\cite{E75} defined the following generalization of classical Ramsey numbers. \begin{definition} Fix integers $p, q$ such that $p \ge 3$ and $2 \le q \le \binom p2$. A \emph{$(p, q)$-coloring} of $K_n$ is a coloring of the edges of $K_n$ such that every $p$-clique has at least $q$ distinct colors among its edges. The Erd\H{o}s-Gy\'arf\'as function $f(n, p, q)$ is the minimum number of colors such that $K_n$ has a $(p, q)$-coloring. \end{definition} \noindent We are interested in fixing $p, q$ and investigating the asymptotic behavior of $f(n, p, q)$ as $n$ tends to infinity. In particular we will be investigating $f(n, 4, 5)$. But in order to introduce the general problem, we will discuss what is known about other ``small'' pairs $(p, q)$. We start with the case where $q=2$, which is equivalent to a classical Ramsey problem. Recall that we define the Ramsey number $R_k(p)$ to be the smallest natural number $N$ such that every edge-coloring of $K_N$ using $k$ colors yields a monochromatic $p$-clique. Thus, $f(n, p, 2)$ is the smallest $k$ such that $R_k(p) > n$. The following lower bound was proved by Lefmann~\cite{L87} and the upper bound follows from the Erd\H{o}s-Szekeres ``neighborhood chasing'' argument~\cite{ES35}: \[ 2^{kp/4} \le R_k(p) \le k^{kp}. \] It follows for fixed $p\ge 3$ that \[ \Omega\rbrac{\frac{\log n}{\log \log n}} = f(n, p, 2) = O\rbrac{\log n}. \] Next we discuss $(3, 3)$-colorings. This case is easy but we would like to use it to preview $(4, 5)$-colorings. It is not difficult to see that $f(n, 3, 3) = \chi'(K_n)$, since a $(3,3)$-coloring is precisely a proper edge coloring of $K_n$, i.e.\ a decomposition of the edges into matchings. Later we will see that finding $f(n, 4, 5)$ also involves a type of decomposition problem (with additional constraints). 
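Since a $(3,3)$-coloring is exactly a proper edge coloring, the extremal colorings here are completely explicit: for even $n$, the classical round-robin (circle-method) schedule decomposes $K_n$ into $n-1$ perfect matchings. The following sketch (our illustration, not part of the paper) builds such a coloring and checks the $(3,3)$ property directly.

```python
from itertools import combinations

def round_robin_coloring(n):
    """Proper edge coloring of K_n with n-1 colors for even n (circle method).

    Vertices are 0..n-1; vertex n-1 is a fixed hub.  In round r the hub is
    paired with r, and the remaining vertices are paired as (r+k, r-k) mod
    n-1, so each round is a perfect matching and the rounds partition E(K_n).
    """
    assert n % 2 == 0, "the (n-1)-coloring exists only for even n"
    m = n - 1
    color = {}
    for r in range(m):                      # one color class per round
        color[frozenset((r, m))] = r        # hub edge
        for k in range(1, n // 2):
            a, b = (r + k) % m, (r - k) % m
            color[frozenset((a, b))] = r
    return color

# every triangle of K_6 sees 3 distinct colors, i.e. this is a (3,3)-coloring
c = round_robin_coloring(6)
assert all(len({c[frozenset(p)] for p in combinations(t, 2)}) == 3
           for t in combinations(range(6), 3))
```

Because the coloring is proper and the three edges of a triangle are pairwise adjacent, every triangle automatically receives three distinct colors.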
Using the well-known values of $\chi'(K_n)$ we get \[ f(n, 3, 3) =\chi'(K_n) =\begin{cases} & n-1, \;\;\; \mbox{ $n$ is even},\\ & n, \;\;\; \mbox{\hskip4ex $n$ is odd.} \end{cases} \] We now consider $(p, q)$-colorings where $p \ge 4$ and $q \ge 3$. It is easy to see that here we have $f(n, p, \binom p2) = \binom n2$. Thus if we examine the sequence of functions $f(n, p, 2), f(n, p, 3), \ldots,$ $f(n, p, \binom p2)$ we see it starts with at most logarithmic growth and gets larger until we see quadratic growth. Erd\H{o}s and Gy\'arf\'as~\cite{EG97} found for each $p$ the smallest value of $q$ such that $f(n, p, q)$ is at least linear in $n$ (such $q$ is called the \tbf{linear threshold}). They also found the smallest $q$ such that $f(n, p, q)$ is quadratic (the \tbf{quadratic threshold}). In particular, they showed that the linear threshold is $q = \binom p2 - p + 3$ and that the quadratic threshold is $q = \binom p2 - \lfloor p/2 \rfloor + 2$. Among several other questions posed in~\cite{EG97}, they ask the following: for fixed $p$, what is the smallest $q$ such that $f(n, p, q)$ is polynomial in $n$ (the \tbf{polynomial threshold})? They showed that the polynomial threshold for any $p$ is at most $p$, and in particular \begin{equation}\label{eqn:fnpp} f(n, p, p) \ge n^{\frac{1}{p-2}}. \end{equation} For $(4, 3)$-colorings, the following lower bound is due to Fox and Sudakov~\cite{FS08} and upper bound is due to Mubayi~\cite{M98}: \[ \Omega \rbrac{ \log n} = f(n, 4, 3) \le \exp \cbrac{ O\rbrac{\sqrt{ \log n}}} = n^{o(1)}. \] Thus, the polynomial threshold for $p=4$ is $q=4$. For $(4, 4)$-colorings, the following lower bound follows from equation~\eqref{eqn:fnpp} and upper bound is due to Mubayi~\cite{M04}: \[ n^{1/2} \le f(n, 4, 4) \le n^{1/2} \exp \cbrac{ O\rbrac{\sqrt{ \log n}}} = n^{1/2+o(1)}. \] Thus, we arrive at $(4, 5)$-colorings, which is the main focus of this paper. Of course $f(n, 4, 5) = \Omega(n)$ since $q=5$ is the linear threshold for $p=4$. 
Moreover, Erd\H{o}s and Gy\'arf\'as \cite{EG97} paid special attention to $f(n, 4, 5)$ and gave a proof that \[ \frac56 (n-1) \le f(n, 4, 5) \le n, \] although the lower bound was previously stated by Erd\H{o}s, Elekes and F\"{u}redi \cite{E81}. Since the coefficients $5/6$ and $1$ are so close, Erd\H{o}s and Gy\'arf\'as were tempted to make a guess as to what the true coefficient should be. Erd\H{o}s thought that it should be $1$, while Gy\'arf\'as thought that it should be ``closer to $5/6$''~\cite{EG97}. Our main theorem settles this disagreement: \begin{theorem}\label{thm:main} We have \[ f(n, 4, 5) = \frac 56 n + o(n). \] \end{theorem} Let us also mention that the function $f(n,p,q)$ has been extensively studied by several other researchers, see, e.g., \cite{A00, BEHK22, C19, CH18, CH20, CFLS15, FPS20, PS19, SS01, SS03}. \subsection{Proof overview} We outline the proof of Theorem~\ref{thm:main}. The lower bound was proved by Erd\H{o}s and Gy\'arf\'as~\cite{EG97}, and for the sake of completeness we will restate their proof in Section \ref{sec:process}. For the upper bound, it clearly suffices to show that for any fixed $\varepsilon > 0$ we have $f(n, 4, 5) \le \frac 56 n + \varepsilon n$ for all sufficiently large $n$. We show that there exists some randomized coloring procedure using $\frac56 n + \varepsilon n$ colors such that the probability of getting a $(4, 5)$-coloring is positive for sufficiently large $n$. We will then define a procedure using two \emph{phases}. The \emph{first phase} will (if successful) use $\frac 56 n + \frac12 \varepsilon n$ colors to color almost all the edges of $K_n$ using a randomized coloring process, and the analysis of this phase will be the main work of this paper. The \emph{second phase} will color the remaining uncolored edges using a much simpler random coloring and a fresh set of $\frac12 \varepsilon n$ colors. 
Our analysis of the first phase of the process will show that with positive probability it outputs a partial coloring with nice properties that will allow us to easily show that the second phase successfully finishes a $(4, 5)$-coloring with positive probability, which completes the proof. For the first phase we will use the differential equation method (see~\cite{BD20} for a gentle introduction) to establish dynamic concentration of our random variables. The origin of the differential equation method stems from work done at least as early as 1970 (see Kurtz \cite{Kurtz1970}), and which was developed into a very general tool by Wormald \cite{W1995, W1999} in the 1990's. Indeed, Wormald proved a ``black box'' theorem, which gives dynamic concentration so long as some relatively simple conditions hold. Warnke \cite{Warnke2020} recently gave a short proof of a somewhat stronger black box theorem. For our purposes the existing black box theorems are insufficient, but we are still able to analyze our process using fairly standard arguments that resemble previous analyses of other processes. The analysis of the second phase will be based on the Lov\'asz Local Lemma. \subsection{Tools}\label{sec:tools} We will be using the following forms of Chernoff's bound (see, e.g., \cite{JLR}). \begin{lemma}[Chernoff bound] Let $X\sim \textrm{Bin}(n,p)$ and $\mu = E(X) = np$. Then, for all $0<\delta<1$ \begin{equation}\label{Chernoff_upper} \Pr(X \ge (1+\delta) \mu) \le \exp(-\mu \delta^2/3) \end{equation} and \begin{equation}\label{Chernoff_lower} \Pr(X \le (1-\delta)\mu) \le \exp(-\mu \delta^2/2). \end{equation} \end{lemma} We will also need Freedman's inequality~\cite{F1975}, which we state next. \begin{lemma}[Freedman's inequality]\label{lem:Freedman} Let $W(i)$ be a supermartingale with $\Delta W(i) \leq D$ for all $i$, and let \newline $V(i) :=\displaystyle \sum_{k \le i} \mbox{{\bf Var}}[ \Delta W(k)| \mc{F}_{k}]$. 
Then, \[ \P\left[\exists i: V(i) \le b, W(i) - W(0) \geq \lambda \right] \leq \displaystyle \exp\left(-\frac{\lambda^2}{2(b+D\lambda) }\right). \] \end{lemma} Finally, let us state the Lov\'asz Local Lemma (LLL)~\cite{AS}. For a set of events $\mc{A}$ and a graph $G$ on vertex set $\mc{A}$, we say that $G$ is a \tbf{dependency graph} for $\mc{A}$ if each event $A \in \mc{A}$ is mutually independent of the set of events not adjacent to $A$ in $G$. \begin{lemma}[Lov\'asz Local Lemma] \label{lem:LLL} Let $\mc{A}$ be a finite set of events in a probability space $\Omega$ and let $G$ be a dependency graph for $\mc{A}$. Suppose there is an assignment $x:\mc{A} \rightarrow [0, 1)$ of real numbers to $\mc{A}$ such that for all $A \in \mc{A}$ we have \begin{equation}\label{eqn:LLLcond} \Pr(A) \le x(A) \prod_{B \in N(A)} (1-x(B)). \end{equation} Then, the probability that none of the events in $\mc{A}$ happen is \[ \Pr\rbrac{\bigcap_{A \in \mc{A}} \overline{A}} \ge \prod_{A \in \mc{A}} (1-x(A)) >0. \] \end{lemma} \subsection{Organization of the paper} In Section~\ref{sec:process} we motivate and formally define the process for the first phase of our coloring procedure. Our analysis of the process will hinge on our ability to maintain good estimates of a family of random variables which change with each step of the process. In Section~\ref{sec:variables} we define our family of variables, and in Section~\ref{sec:goodevent} we state the bounds that we intend to prove for our random variables. In Sections~\ref{sec:QY}--\ref{sec:crude} we bound the probability that any of our random variables violates the stated bounds until almost all the edges are colored. This will finish the first phase of the proof, which leaves just a few uncolored edges. Finally, in Section~\ref{sec:finishing} we show how to color such uncolored edges. \section{The coloring process}\label{sec:process} First we give some motivation.
Let us start by seeing the proof of the lower bound for Theorem~\ref{thm:main}, which was given by Erd\H{o}s and Gy\'arf\'as~\cite{EG97}. The proof will illuminate what needs to be done to achieve an asymptotically matching upper bound (and we will comment on that after the proof). \begin{theorem}[\cite{EG97}] We have \[ f(n, 4, 5) \ge \frac56 (n-1). \] \end{theorem} \begin{proof} Suppose we have a $(4, 5)$-coloring of a graph of order $n$ using a set of colors $C$. For each $c \in C$ let $G_c$ be the graph of order $n$ with only the $c$-colored edges. Note that $G_c$ can never have a connected component with more than two edges (i.e.\ all components have at most three vertices and there are no monochromatic triangles). Thus every component of $G_c$ is either $P^0$, $P^1$, or $P^2$, where $P^j$ denotes a path on $j$ edges. For $0 \le j \le 2$ let $x_j$ be the total number of components $P^j$ in all the graphs $G_c$, $c \in C$. Thus \begin{equation}\label{eqn:EGproof1} x_0 + 2x_1 + 3x_2 = n|C| \end{equation} and \begin{equation}\label{eqn:EGproof2} x_1 + 2x_2 = \binom n2. \end{equation} Note also that whenever we have a component in color $c$ with two edges on three vertices (i.e.\ a component counted by $x_2$), the third edge in that triangle must be a component counted by $x_1$. Thus, $x_1 \ge x_2$, and hence, $x_0 + \frac13 (x_1 - x_2) \ge 0$. But then, using~\eqref{eqn:EGproof1} and \eqref{eqn:EGproof2}, we get \begin{align*} |C| = \frac{x_0 + 2x_1 + 3x_2}{n} \ge \frac{x_0 + 2x_1 + 3x_2 - \rbrac{x_0 + \frac13 (x_1 - x_2)}}{n} = \frac{\frac53 (x_1 + 2x_2)}{n} = \frac56 (n-1). \end{align*} \end{proof} From the proof we can see that the only way to achieve equality would be if $x_0=0$ and $x_1=x_2.$ Erd\H{o}s~\cite{EG97} expressed doubt that any coloring could come close to that. 
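To see where the coefficient $5/6$ comes from, one can plug the extremal scenario $x_0=0$, $x_1=x_2$ back into~\eqref{eqn:EGproof1} and~\eqref{eqn:EGproof2}. A quick exact-arithmetic check (our illustration, not part of the original argument):

```python
from fractions import Fraction

def extremal_colors(n):
    # extremal scenario from the lower-bound proof: x0 = 0 and x1 = x2,
    # so x1 + 2*x2 = 3*x1 = C(n,2) by equation (2)
    x1 = x2 = Fraction(n * (n - 1), 2) / 3
    x0 = 0
    # equation (1): x0 + 2*x1 + 3*x2 = n * |C|
    return (x0 + 2 * x1 + 3 * x2) / n

assert extremal_colors(13) == Fraction(5 * (13 - 1), 6)  # |C| = 5(n-1)/6
```

Exact rationals via `fractions.Fraction` avoid any rounding; the identity $|C| = 5(n-1)/6$ holds for every $n$ in this scenario.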
Indeed, he suspected that if we have $x_0=o(n^2)$ then we must also have $x_2=o(n^2)$, i.e.\ $x_1$ dominates everything and essentially all the graphs $G_c$ are matchings with $n/2 -o(n)$ edges. Such a coloring would have $|C| = n-o(n)$. Indeed, Erd\H{o}s and Gy\'arf\'as~\cite{EG97} gave such a coloring to prove the upper bound $f(n, 4, 5) \le n.$ To prove Theorem~\ref{thm:main}, we will need to get a coloring with $x_0 = o(n^2)$ and $x_1=x_2 +o(n^2)$. In other words, for almost every $P^1$ component in some graph $G_c$, its two endpoints are also the ends of some $P^2$ component in some $G_{c'}$. Thus we are motivated to consider a process which at each step $i$ colors the edges of some triangle $T_i$ (whose edges have no colors yet), giving two of them the same color and the third one a different color. The intent is to create one $P^1$ component in a color $c$ and a $P^2$ component in another color $c'$. When a vertex $v$ is incident to an edge of some color $c$ we say $v$ has been \tbf{hit} by $c$. To ensure that our components do not accidentally become larger than intended, at each step we will have to make sure to choose $c, c'$ that have not already hit the vertices they are about to hit. There are many ways we could choose the triangle $T_i$ whose edges we will color at step $i$. We will use what seems to be the most natural (and well-studied) candidate: the \tbf{random triangle removal process} first introduced by Bollob\'as and Erd\H{o}s (see~\cite{B98, B00}). In this process one starts with $G_R(0)=K_n$ and at each step~$i$ removes the edges of one triangle chosen uniformly at random from all triangles in $G_R(i)$, stopping only when the graph becomes triangle-free. Bollob\'as and Erd\H{o}s conjectured that the number of edges remaining at the end of this process (i.e.\ edges not in the triangle packing) is $\Theta(n^{3/2})$ a.a.s.\ (\emph{asymptotically almost surely}, that is, with probability tending to one as $n \to \infty$). 
The best known estimate (both upper and lower bounds) on the number of edges remaining is $n^{3/2 + o(1)}$ by Bohman, Frieze and Lubetzky~\cite{BFL15}. We will not need the full power of their result, but for our convenience we will use a few facts they proved in their analysis of the process. For our coloring process, at each step $i$ we will choose our triangle $T_i$ uniformly at random from all triangles whose edges are all uncolored. We will then randomly choose an \tbf{orientation} for $T_i$, meaning that we choose which of the three edges will be in a $P^1$ component (meaning the other two will make a $P^2$). We will then randomly choose two ``suitable'' colors $c_i, c'_i$ to assign to the edges of $T_i$. In the end we will use a somewhat complicated rule to determine which colors are ``suitable'' here. Of course our rule must not violate the constraint for $(4, 5)$-coloring, which requires for each set of four vertices to have five different colors among its six edges. So somehow our process must prevent the creation of any set of four vertices having two repeated colors (or one color repeated twice). Suppose $T_i = \{u, u', u''\}$ is the selected triangle. We will choose the orientation such that $u'u''$ will be assigned the color $c_i'$ and the other two edges will be assigned $c_i$. In this case, we say that the triangle is oriented {\em away from $u$}. Obviously, by our previous discussion we should choose the colors such that $c_i'$ has not hit $u'$ or $u''$ and $c_i$ has not hit $u, u'$, or $u''$. Thus throughout the process our coloring will have no color components with more than two edges and our color components $P^1$ and $P^2$ will come in pairs sharing endpoints. This requirement already avoids many of the ways our coloring could violate the constraint for a $(4,5)$-coloring. For example, since our color components have at most two edges, we cannot have four vertices containing three edges of the same color. 
Thus any violation of a $(4,5)$-coloring must involve two different colors, each appearing twice in the same set of four vertices. The rule we have already described also avoids the two configurations illustrated in Figure~\ref{fig1}. \noindent \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=.5] \node (ll) at (0,0) [vert] {}; \node (lr) at (2,0) [vert] {}; \node (ur) at (2,2) [vert] {}; \node (ul) at (0,2) [vert] {}; \draw [blue] (ll) -- (lr); \draw [red] (ur) -- (ul); \draw [red] (ur) -- (lr); \draw [blue] (ul) -- (lr); \end{tikzpicture} \qquad \qquad \qquad \qquad \qquad \begin{tikzpicture}[scale=.5] \node (ll) at (0,0) [vert] {}; \node (lr) at (2,0) [vert] {}; \node (ur) at (2,2) [vert] {}; \node (ul) at (0,2) [vert] {}; \draw [blue] (ll) -- (lr); \draw [blue] (ll) -- (ul); \draw [red] (ur) -- (ul); \draw [red] (ur) -- (lr); \end{tikzpicture} \end{center} \caption{Two possible configurations that would violate a $(4,5)$-coloring.} \label{fig1} \end{figure} However, unless we impose some additional rules for choosing our colors, our process would allow the configurations depicted in Figure~\ref{fig2} that would violate the constraint for a $(4, 5)$-coloring. \begin{figure}[h!] 
\centering \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture}[scale=.5] \node (ll) at (0,0) [vert] {}; \node (lr) at (2,0) [vert] {}; \node (ur) at (2,2) [vert] {}; \node (ul) at (0,2) [vert] {}; \draw [blue] (ll) -- (lr); \draw [blue] (ul) -- (ur); \draw [red] (ul) -- (ll); \draw [red] (ur) -- (lr); \end{tikzpicture} \caption{} \label{fig:4cycle} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \begin{tikzpicture}[scale=.5] \node (ll) at (0,0) [vert] {}; \node (lr) at (2,0) [vert] {}; \node (ur) at (2,2) [vert] {}; \node (ul) at (0,2) [vert] {}; \draw [blue] (ul) -- (ur); \draw [blue] (lr) -- (ul); \draw [red] (ur) -- (lr); \draw [red] (ul) -- (ll); \end{tikzpicture} \caption{} \label{fig:violation} \end{subfigure} \caption{Two additional configurations that would violate a $(4,5)$-coloring and require extra consideration.} \label{fig2} \end{figure} We will have to impose two more rules to avoid these violations. Of course, we are still trying to use only $\frac 56 n + o(n)$ colors. We pause to comment that based on the rules we have stated so far, it is heuristically plausible that we could use the process to color almost all edges using $\frac 56 n + o(n)$ colors. Indeed, after $i \le \frac 13 \binom n2$ steps there are $i$ colored triangles, and so each vertex $v$ should be incident with about $3i/n$ of them. About $1/3$ of those triangles will get oriented in a way that means $v$ gets hit by only one color whereas $2/3$ of these triangles have $v$ getting hit by two colors. Thus the number of colors that have hit $v$ should be about \[ \frac {3i}{n} \cdot \sbrac{\frac13 \cdot 1 + \frac 23 \cdot 2} = \frac {5i}{n} \le \frac {5\cdot \frac13 \binom n2 }{n} \le \frac 56 n \] and so, heuristically, given any vertex it should be possible to choose a color that has not hit it yet (until almost all edges are colored). 
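The back-of-the-envelope count above is easy to verify exactly. The following Python sketch (our own illustration, not part of the argument) redoes the arithmetic with exact fractions:

```python
from fractions import Fraction

def colors_hitting_v(i, n):
    """Heuristic number of colors that have hit a fixed vertex v after i
    steps: v lies in about 3i/n colored triangles, of which about 1/3
    hit v with one color and 2/3 hit v with two colors."""
    triangles_at_v = Fraction(3 * i, n)
    return triangles_at_v * (Fraction(1, 3) * 1 + Fraction(2, 3) * 2)

n = 600
i_max = n * (n - 1) // 6          # i <= (1/3) * binom(n, 2)

# The count equals 5i/n for every i, and never exceeds 5n/6.
assert colors_hitting_v(i_max, n) == Fraction(5 * i_max, n)
assert colors_hitting_v(i_max, n) <= Fraction(5, 6) * n
```

At $i_{\max}$ the exact value is $\frac{5(n-1)}{6}$, just below the budget of $\frac56 n$ colors.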
However, running this simpler process might require substantial extra colors to make it into a $(4,5)$-coloring afterwards. Thus, we impose the following additional rules on our process. We will avoid the configuration in Figure \ref{fig:4cycle} (an \tbf{alternating 4-cycle}) by ``brute force'' adjustment: when we choose our colors we will simply refuse to create such a cycle (i.e.\ color choices that would create one are eliminated from consideration and we randomly choose from the remaining colors). While this rule does make the process more challenging to analyze, we will see that it does not reduce the number of choices we have for colors too significantly. To avoid the configuration in Figure \ref{fig:violation}, it is tempting to say that we will use ``brute force'' again and simply refuse to make it. However, some thought reveals that this idea is not too promising if we want to use only $\frac 56 n + o(n)$ colors. Indeed, when we have colored, say, about half of the edges, a vertex $v$ should be in some linear number of triangles oriented away from $v$. Unless our process has a rule to prevent it, we would expect to see some linear number of colors (like the colors $c, \ldots, c'$ in Figure~\ref{fig3}) appearing across from~$v$ in those triangles. None of those colors can be allowed to hit $v$, since then we would get Figure \ref{fig:violation}. However, the simpler process (without additional rules) was using very close to $\frac 56 n$ colors, and to come close to $\frac 56 n$ colors we need to make sure almost every vertex gets hit by almost every color. Thus, this proposed ``brute force'' rule is not viable.
\noindent \begin{figure}[!h] \begin{center} \begin{tikzpicture}[scale=.5] \node (lr) at (2,0) [vert] {}; \node (ur) at (2,2) [vert] {}; \node (y) at (-2,0) [vert] {}; \node (x) at (-2,2) [vert] {}; \node [label=$\ldots$] (dots) at (0,2) {}; \node (v) at (0,.5) [vert, label=below:$v$] {}; \draw [blue] (v) -- (ur); \draw [blue] (lr) -- (v); \draw [red] (ur) -- node[label=right:{$c'$}]{}(lr); \draw [cyan] (v) --(x); \draw [cyan] (v) --(y); \draw [orange] (y) -- node[label=left:{$c$}]{} (x); \end{tikzpicture} \end{center} \caption{If using a ``brute force'' adjustment to the process, there would be a linear number of colors $c, \ldots, c'$ across from $v$.} \label{fig3} \end{figure} In order to overcome this issue, we will do the following. Each vertex $v$ will have some small linear set of \tbf{special colors} $S_v$, which will be the only colors we allow to appear opposite from $v$ in triangles oriented away from $v$. To avoid the configuration in Figure \ref{fig:violation}, we will make sure that $v$ is never hit by any color in $S_v$. \subsection{First phase} In this subsection we define the random process we will use to color almost all edges. Suppose we have a set $\overline{\textsc{col}}$ of $\frac 56 n + \varepsilon n$ colors for some $0<\varepsilon<1/100$. Our coloring method has two phases and the first phase will need almost all the colors. We let $\textsc{col} \subseteq \overline{\textsc{col}}$ be some subset of $\frac 56 n + \frac{\varepsilon}{2} n$ colors. We will use colors in $\textsc{col}$ for the first phase and reserve the rest for the second phase. We will now start describing the first phase in detail, which we motivated before this subsection. First, independently for each vertex $v$ and color $k$, we put $k$ into the set $S_v$ with probability \[ s:=\frac{ \frac{\varepsilon}{2} }{\frac56 + \frac{\varepsilon}{2} }. 
\] The colors in $S_v$ will not be allowed to hit $v$, and they will be the only colors allowed to appear across from $v$ in triangles oriented away from $v$. Note that $s$ was chosen so that the number of colors we allow to hit $v$, i.e.\ $|\textsc{col} \setminus S_v|$, has expectation $\frac 56 n$. We will need the following definitions. \begin{definition} An \tbf{alternating $(uv, k)$-path} is a $u-x-y-v$ path such that edges $ux$ and $vy$ are colored the same color and edge $xy$ is colored $k$. \end{definition} \begin{definition} Let $k$ and $k'$ be colors in $\textsc{col}$. \begin{itemize} \item We say $k$ is \tbf{available at a vertex} $u$ at step $i$ if $k \notin \S_u$, and $u$ has not been hit by $k$. \item We say $k$ is \tbf{available at an edge} $uv$ at step $i$ if $uv$ is uncolored, $k$ is available at each of the vertices $u$ and $v$, and there is no alternating $(uv, k)$-path. \item We say $k'$ is \tbf{1-available at a triple} $(u, u', u'')$ at step $i$ if $k' \in \S_u$ and $k'$ is available at the edge $u'u''$. \item We say $k$ is \tbf{2-available at a triple} $(u, u', u'')$ at step $i$ if $k$ is available at the edges $uu'$ and $uu''$. We say a pair $(k, k')$ is \tbf{available at a triple} $(u, u', u'')$ at step $i$ if $k'$ is 1-available at $(u,u',u'')$ and $k$ is 2-available at $(u,u',u'')$. \end{itemize} (Note that this definition implies all edges in $uu'u''$ are uncolored. Also, the roles of $u'$ and $u''$ are interchangeable but the role of $u$ is different). \end{definition} Now, we are ready to define the process. \vspace{2ex} \begin{enumerate}[label={\bf Substep \arabic*.\!}, wide, labelindent=0pt] \noindent\begin{minipage}{.6\textwidth} \item (Initialization) Start with a complete, uncolored graph $K_n=(V,E)$ on $n$ vertices. Recall that for each $u \in V$, we have a set $\S_u$ of colors from $\textsc{col}$. Colors from $S_u$ can be assigned as the opposite color to $u$ when the triangle is oriented away from $u$ (see opposite figure). 
On the other hand, these colors are not allowed to touch~$u$. \end{minipage} \begin{minipage}{.4\textwidth} \begin{center} \begin{tikzpicture} \node (u) at (0,0) [vert,label=below:$u$] {}; \node (v) at (1,1) [vert] {}; \node (u'') at (2,1) [vert] {}; \node (x) at (-1,1) [vert] {}; \node (y) at (-2,1) [vert] {}; \draw [uncolored] (u) -- (v); \draw [uncolored] (u) -- (u''); \draw [uncolored] (u) -- (x); \draw [uncolored] (u) -- (y); \draw [blue] (v) -- node[above] {$k''\in S_u$} (u''); \draw [red] (x) -- node[above] {$k'\in S_u$} (y); \end{tikzpicture} \end{center} \end{minipage} \vspace{2ex} \noindent\begin{minipage}{.6\textwidth} \item (Triangle) At step $i$, choose an uncolored triangle $T_i$ uniformly at random from the set of uncolored triangles. \end{minipage} \begin{minipage}{.4\textwidth} \begin{center} \begin{tikzpicture} \node (x) at (0,0) [vert] {}; \node (y) at (2,0) [vert] {}; \node (v) at (1,1) [vert] {}; \draw [uncolored] (v) -- (x); \draw [uncolored] (v) -- (y); \draw [uncolored] (x) -- (y); \end{tikzpicture} \end{center} \end{minipage} \vspace{2ex} \noindent\begin{minipage}{.6\textwidth} \item (Orientation) Choose one vertex $u$ uniformly at random from the three vertices of the selected triangle $T_i$. The triangle will be oriented away from $u$. Label the other vertices $u'$ and $u''$. \end{minipage} \begin{minipage}{.4\textwidth} \begin{center} \begin{tikzpicture} \node (u') at (0,0) [vert,label=above:$u'$] {}; \node (u'') at (2,0) [vert,label=above:$u''$] {}; \node (u) at (1,1) [vert,label=above:$u$] {}; \draw [uncolored] (u) -- (u'); \draw [uncolored] (u) -- (u''); \draw [uncolored] (u') -- (u''); \end{tikzpicture} \end{center} \end{minipage} \vspace{2ex} \noindent\begin{minipage}{.6\textwidth} \item (Color the triangle) Choose a pair of colors $(k, k')$ uniformly at random from all pairs that are available at the triple $(u, u', u'')$ (or terminate if there is no such pair). Note that we can choose $k$ and $k'$ independently from each other.
More specifically, we choose $k'$ uniformly at random from all colors such that $k'\in \S_u$ and $k'$ is available at $u'u''$. Independently of the choice of $k'$, we choose $k \notin S_u$ uniformly at random from all colors such that $k$ is available at both $uu'$ and $uu''$. Color $uu'$ and $uu''$ with $k$ and $u'u''$ with $k'$. \end{minipage} \begin{minipage}{.4\textwidth} \begin{center} \begin{tikzpicture} \node (u') at (0,0) [vert,label=above:$u'$] {}; \node (u'') at (2,0) [vert,label=above:$u''$] {}; \node (u) at (1,1) [vert,label=above:$u$] {}; \draw [red] (u) -- (u'); \draw [red] (u) -- (u''); \draw [blue] (u') -- (u''); \end{tikzpicture} \end{center} \end{minipage} \vspace{2ex} \item If there are more uncolored triangles, then go back to Substep 2 and carry out step $i+1$. Otherwise, terminate. \end{enumerate} Note that there are two possible endings of the process: it could finish at Substep 4 because no pair of colors is available, or it could finish at Substep 5 because no uncolored triangles remain. Bohman, Frieze and Lubetzky \cite{BFL15} showed that the random triangle removal process a.a.s.\ does not terminate until $n^{3/2 + o(1)}$ edges remain, and so our process a.a.s.\ does not terminate at Substep 5 until the number of uncolored edges is $n^{3/2 + o(1)}$. Most of this paper is devoted to showing that a.a.s.\ our coloring process does not terminate at Substep 4 until we have colored almost all the edges. When the process terminates, we will move to the second phase of our coloring procedure, which will assign colors to the remaining uncolored edges. We note that some similar ideas were used by Guo, Patton and Warnke \cite{GPW20}. In particular they used a coloring process assigning colors one at a time where each color was chosen uniformly at random from all ``available'' colors (for some appropriate definition of ``available'').
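To make the five substeps concrete, here is a minimal Python sketch of the first phase on a small complete graph. It follows the substeps literally (including the alternating-path condition in the definition of availability), with no attempt at efficiency; all identifiers are ours and the parameters are illustrative only.

```python
import itertools
import random
from collections import defaultdict

def first_phase(n, num_colors, s, seed=0):
    """Run the first-phase coloring process on K_n.

    Returns (color, S): color maps frozenset({u, v}) -> assigned color,
    and S[v] is the random set of special colors of vertex v."""
    rng = random.Random(seed)
    colors = range(num_colors)
    # Substep 1: each color lands in S_v independently with probability s.
    S = {v: {k for k in colors if rng.random() < s} for v in range(n)}
    color = {}                # colored edges
    hit = defaultdict(set)    # hit[v] = set of colors that have hit v

    def e(u, v):
        return frozenset((u, v))

    def alt_path(u, v, k):
        # Alternating (uv, k)-path: u-x-y-v with ux, yv equally colored
        # and xy colored k.
        for x, y in itertools.permutations(range(n), 2):
            if {x, y} & {u, v}:
                continue
            c = color.get(e(u, x))
            if (c is not None and color.get(e(x, y)) == k
                    and color.get(e(y, v)) == c):
                return True
        return False

    def avail_at_vertex(k, u):
        return k not in S[u] and k not in hit[u]

    def avail_at_edge(k, u, v):
        return (e(u, v) not in color and avail_at_vertex(k, u)
                and avail_at_vertex(k, v) and not alt_path(u, v, k))

    while True:
        # Substep 2: a uniformly random uncolored triangle.
        tris = [T for T in itertools.combinations(range(n), 3)
                if all(e(a, b) not in color
                       for a, b in itertools.combinations(T, 2))]
        if not tris:
            break             # Substep 5: no uncolored triangle remains
        # Substep 3: orient away from a uniformly random vertex u.
        u, u1, u2 = rng.sample(list(rng.choice(tris)), 3)
        # Substep 4: k' in S_u available at u'u''; k available at uu', uu''.
        one = [k for k in colors if k in S[u] and avail_at_edge(k, u1, u2)]
        two = [k for k in colors if avail_at_edge(k, u, u1)
               and avail_at_edge(k, u, u2)]
        if not one or not two:
            break             # Substep 4: no available pair of colors
        kp, k = rng.choice(one), rng.choice(two)
        for a, b, c in ((u, u1, k), (u, u2, k), (u1, u2, kp)):
            color[e(a, b)] = c
            hit[a].add(c)
            hit[b].add(c)
    return color, S
```

By construction, no vertex $v$ is ever hit by a color in $S_v$, and every color class consists of vertex-disjoint components with at most two edges; both invariants can be checked directly on the output.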
\subsection{Second phase} The second phase will use the set of $\frac{\varepsilon}{2} n$ \tbf{reserved colors} $\overline{\textsc{col}} \setminus \textsc{col}$. Each edge that still needs to be colored will get one of the reserved colors chosen uniformly at random. Our analysis of the first phase will show that it produces a partial coloring that enjoys several useful ``pseudorandom'' properties (i.e.\ properties that one would expect to see in a simpler random coloring where each edge has an independent random color). These properties will allow us to argue that the remaining edges are relatively easy to color. We will use the Lov\'asz Local Lemma to show that with positive probability the resulting coloring is a $(4, 5)$-coloring and so, by the trivial probabilistic method, there exists an appropriate extension of the partial coloring we produced in the first phase to a complete $(4, 5)$-coloring. \section{System of random variables}\label{sec:variables} Our analysis of the process in the first phase will proceed by the differential equation method. As usual, we will define a family of random variables which we will \tbf{track} throughout the process, meaning that we will obtain asymptotically tight estimates which hold a.a.s. For each tracked variable there will be a deterministic function, called the \tbf{trajectory}, such that a.a.s.\ the tracked variable is asymptotically equal to its trajectory. Our family of variables will also include some for which we prove only crude upper bounds (but which we do not track). A beautiful aspect of the differential equation method is that often the trajectories of random variables can be guessed using the right intuition and heuristics. Fortunately we will see that this is the case for our process. Indeed, our family of random variables will have elementary trajectories which we can guess using heuristics. In the next subsection we describe these heuristics, and in the following subsection we define our random variables.
As we define each variable we state its trajectory. \subsection{Heuristics}\label{sec:heuristics} Before we start listing the random variables, let us go over the heuristic assumptions. We define the ``scaled time'' parameter \begin{equation}\label{eqn:tdef} t = t(i) := i/n^2. \end{equation} At each step $i$ we color three edges, so the total number of colored edges at that step is $3i = 3n^2 t$. Heuristically, the probability that an edge is colored is \[ \frac{3n^2 t}{\binom n2} \approx 6t. \] Thus, in particular we predict that in many ways the uncolored graph should resemble $G(n, p)$ with \begin{equation}\label{eqn:pdef} p=p(t) := 1 - 6t. \end{equation} We would also like a heuristic for the probability that some vertex $u$ has been hit by a color $k \notin \S_u$. In this process, $u$ should be getting hit by colors at about the same rate throughout the process. In fact, the proportion of colors in $\textsc{col} \setminus S_u$ that have hit $u$ should be about the same as the proportion $1-p$ of edges in the graph we have colored. Thus, we heuristically assume that the probability that a color $k \notin S_u$ has hit $u$ is $1-p$. Recall that \[ s= \frac{ \frac{\varepsilon}{2} }{\frac56 + \frac{\varepsilon}{2} } \] is the probability that (for some fixed color $k$ and vertex $u$) $k$ is chosen to be in $\S_u$. Note that the expected number of colors in $\textsc{col} \setminus S_u$ is $(1-s)|\textsc{col}| = \frac 56 n$ and so \[ |\textsc{col}|= \frac 56 n \cdot \frac{1}{1-s}. \] We will need to pay careful attention to alternating paths to analyze our process. Heuristically, for some uncolored edge $uv$ and a color $k$, we will assume that there is some function $r(t)$ which we treat as the probability that there is no alternating $(uv, k)$-path at time $t$. We will guess the appropriate function $r(t)$ using a Poisson heuristic. For a Poisson random variable $X$, if $\lambda = E[X]$ then $\P(X=0) = e^{-\lambda}$.
If we let $X$ be the number of alternating $(uv, k)$-paths at time $t$, then we ought to have \[ E[X] \approx n^2 \cdot (1-p)^3 \cdot \rbrac{\frac{1}{|\textsc{col}|}}^2, \] since we have about $n^2$ choices for possible vertices $x$ and $y$ in $u-x-y-v$, each of the edges $ux$, $xy$ and $yv$ is colored with probability $1-p$, $xy$ has the color $k$ with probability $1/|\textsc{col}|$, and the edges $ux$ and $yv$ have the same color with probability $1/|\textsc{col}|$ as well. Now substituting the value of $|\textsc{col}|$ gives \[ E[X] \approx n^2 \cdot (1-p)^3 \cdot \rbrac{\frac{1}{\frac 56 n \cdot \frac{1}{1-s}}}^2 = \frac{36}{25} (1-s)^2 (1-p)^3 = \frac{7776}{25} (1-s)^2 t^3. \] Consequently, we heuristically guess that \[ r(t) = \exp \cbrac{- \frac{7776}{25} (1-s)^2 t^3}. \] Note that for all $t \le 1/6$ we have \[ r(t) \ge r(1/6) = \exp \cbrac{ - \frac{36}{25}(1-s)^2} \ge \exp \cbrac{ - \frac{36}{25}} > \frac{1}{5}. \] \subsection{Variables} In this subsection, we introduce our family of variables. We start with the variables we intend to track, meaning that we will show that a.a.s.\ each of these variables stays within a relatively small interval centered around its trajectory. Formally we will use many random variables that are actually sets (not numbers), and when we say we ``track'' them we mean that we track their cardinalities. We will often abuse notation and omit absolute value signs for the cardinality of sets, i.e.\ we write $S$ to denote either the set $S$ or its cardinality. In context there should be no confusion. At the end of this subsection we will define a few more variables for which we will obtain only crude upper bounds. Roughly speaking, the differential equation method is a way to formally argue that a.a.s.\ certain conditions (bounds on random variables) are maintained as the process runs. Often the goal is to argue that the process does not fail until almost all edges are colored.
Thus, our choice of random variables will be motivated by what the process needs to keep going. In our case, the process needs two things: first it needs to be able to choose an uncolored triangle (i.e.\ the process does not terminate at Substep~5), and then it needs to have some choice of colors for that triangle that obey our coloring rules (i.e.\ the process does not terminate at Substep~4). Thus, our family of random variables will include one counting the number of uncolored triangles (see the variable $Q$ below), as well as ones counting the number of choices for colors we have for each such triangle (see variables $C^{(1)}, C^{(2)}$). For the differential equation method to work we will need a ``closed system'' of variables, meaning that if we condition on the current state of the process then the expected one-step change of any variable in our family can be (approximately) written in terms of variables in our family. Thus, our family will have to include several other variables. We start with the variables used by Bohman, Frieze and Lubetzky~\cite{BFL10} for the triangle removal process. This includes $Q$, which is clearly important, as well as another kind of variable which is necessary to make a closed system with $Q$. \begin{definition} Let $Q=Q(i)$ be the set of triangles where all three edges are uncolored at step $i$. For each $u, u'$ we let $Y_{uu'}=Y_{uu'}(i)$ be the set of vertices $u''$ such that both $uu''$ and $u'u''$ are uncolored. \end{definition} Recalling \eqref{eqn:tdef} and \eqref{eqn:pdef}, the natural heuristic guess for the trajectories (also proved formally in~\cite{BFL10}) is \[ Q(i) \approx \binom n3 p^3 \approx \frac 16 n^3 p^3 = n^3q(t) \qquad \text{and}\qquad Y_{uu'}(i) \approx np^2 =ny(t), \] where \[ q(t) := \frac16 p^3 \qquad\text{and}\qquad y(t) := p^2. \] We will call these functions $q(t), y(t)$ (i.e.\ trajectories with the power of $n$ removed) \tbf{scaled trajectories}.
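The $G(n,p)$ heuristic behind these guesses is easy to probe numerically. The following Python sketch (ours, with arbitrary illustrative parameters) samples a random graph playing the role of the uncolored graph and compares $Q$ and the average of $Y_{uu'}$ against the trajectories $n^3 q(t)$ and $n y(t)$:

```python
import itertools
import random

def sample_uncolored(n, p, seed=0):
    """G(n, p) stand-in for the uncolored graph at scaled time t = (1-p)/6."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def q_and_mean_y(adj):
    n = len(adj)
    # Q: triangles with all three edges present (uncolored).
    Q = sum(1 for u, v, w in itertools.combinations(range(n), 3)
            if v in adj[u] and w in adj[u] and w in adj[v])
    # Y_{uu'}: common neighbours of u and u' in the uncolored graph.
    pairs = list(itertools.combinations(range(n), 2))
    mean_Y = sum(len(adj[u] & adj[v]) for u, v in pairs) / len(pairs)
    return Q, mean_Y

n, p = 250, 0.5
Q, mean_Y = q_and_mean_y(sample_uncolored(n, p, seed=7))
assert abs(Q / n**3 - p**3 / 6) < 0.002     # q(t) = p^3 / 6
assert abs(mean_Y / n - p**2) < 0.01        # y(t) = p^2
```

The small discrepancies are the lower-order terms hidden by $\binom n3 \approx \frac16 n^3$ and $Y_{uu'} \approx np^2$.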
Before moving to the variables that count color choices, we briefly explain how $Q$ and the $Y$ variables form a closed system. Let $(\mc{F}_i)_{i \ge 0}$ be the ``natural filtration'' of the process. In particular, conditioning on $\mc{F}_i$ tells us exactly what our partial coloring looks like at step $i$. More formally, our probability space consists of all possible maximal sequences of steps (specifying at each step which triangle, orientation, and colors are chosen), and the partition $\mc{F}_i$ groups these sequences according to what happens on the first $i$ steps. The work of Bohman, Frieze and Lubetzky \cite{BFL10} implies that \[ E[\Delta Q(i) | \mc{F}_i] = - \sum_{uu' \in E(i)} \frac{Y_{uu'}(i)^2}{Q(i)} +O(1) \] (where $E(i)$ is the set of uncolored edges at step $i$) and \[ E[\Delta Y_{uu'}(i) | \mc{F}_i] = - \sum_{u'' \in Y_{uu'}(i)} \frac{Y_{uu''}(i) + Y_{u'u''}(i) +O(1)}{Q(i)}. \] Since the conditional expected one-step change of $Q$ and any of the $Y$ variables can be approximately written using only the variables $Q$ and $Y$, we have a closed system. However, for our coloring process we need several more variables that count color choices. We will now extend our family to include not only variables of types $\CI, \CII$ (which count choices of colors given a fixed oriented triangle), but also several more variables needed to make the system closed again. We will verify later that this system is indeed closed. The variables of types $A$ through $F$ will all count triples $(u, u', u'')$ and pairs $(k, k')$ that are available at $(u, u', u'')$. For each of these variables we fix some set of vertices and/or colors and count extensions of the fixed set. To illustrate the substructures that these variables count, we will include diagrams that use the following conventions. Closed circles represent vertices that vary (based on some constraint), and open squares represent fixed vertices.
Dashed, colored edges represent uncolored edges that have that color available at that edge; a dashed, black edge is a general uncolored edge; and a solid, colored edge is an edge with that color. For example, Figure~\ref{fig4} would indicate that we are fixing $u, u', u''$ and counting pairs $k, k'$ such that $k$ is available at $uu'$ and $uu''$ and $k'$ is available at $u'u''$. \begin{figure}[!h] \begin{center} \begin{tikzpicture} \node (u') at (0,0) [fixed,label=above:$u'$] {}; \node (u) at (1,1) [fixed,label=above:$u$] {}; \node (u'') at (2,0) [fixed,label=above:$u''$] {}; \draw [red?] (u) -- node [above] {$k$} (u'); \draw [blue?] (u') -- node [below] {$k'$} (u''); \draw [red?] (u) -- node [above] {$k$} (u''); \end{tikzpicture} \end{center} \caption{A demonstration of the diagram conventions used in this section.} \label{fig4} \end{figure} First we define the type $A$ variables, where $u'$, $u''$ and $k'$ are fixed. \begin{definition} For each edge $u'u''$ and each color $k' \notin S_{u'} \cup S_{u''}$ we define the random variable $A_{u'u'',k'}=A_{u'u'',k'}(i)$ to be the set of pairs $(u, k)$ such that $k$ is available at $uu'$ and $uu''$, and $k' \in S_u$. \end{definition} Note that technically our definition above does not assume that $k'$ is available at $u'u''$. However whenever that happens to be the case we have for all $(u, k) \in A_{u'u'',k'}$ that the color pair $(k, k')$ is available at the oriented triangle $(u, u', u'')$. \begin{figure}[h!] \begin{center} \begin{tikzpicture} \node (u') at (0,0) [fixed,label=above:$u'$] {}; \node (u) at (1,1) [vert,label=above:$u$ s.t. $k' \in S_u$] {}; \node (u'') at (2,0) [fixed,label=above:$u''$] {}; \draw [red?] (u) -- node [above] {$k$} (u'); \draw [blue?] (u') -- node [below] {} (u''); \draw [red?] 
(u) -- node [above] {$k$} (u''); \end{tikzpicture} \end{center} \caption{A depiction of the $(u,k)\in A_{u'u'',k'}$.} \label{fig5} \end{figure} Based on our heuristics we predict the following trajectory of $A_{u'u'',k'}$. First, we choose a vertex $u$ with two uncolored edges $uu'$ and $uu''$ having about $np^2$ choices. We need to make sure that $k'\in S_u$, which happens with probability $s$. The number of possible choices for $k\notin S_u \cup S_{u'}\cup S_{u''}$ is $|\textsc{col}|(1-s)^3$ and the probability that $k$ did not hit $u$, $u'$ or $u''$ in the previous steps is $p^3$. Finally, with probability $r^2$ we avoid alternating $(uu', k)$- and $(uu'', k)$-paths. Thus, \[ A_{u'u'',k'} \approx |\textsc{col}| n s (1-s)^3 p^5r^2 = \frac 56 n^2 s (1-s)^2 p^5r^2 = n^2 a(t), \] where we define the scaled trajectory \begin{equation}\label{eqn:atrajdef} a(t) := \frac 56 s (1-s)^2 p^5r^2. \end{equation} Next we define the type $B$ variables, which also fix two vertices and a color. For these variables we fix $u$, $u'$ and $k$. These are similar to the type $A$ variables but necessary due to the different roles the vertices and colors play in the process. \begin{definition} For each edge $uu'$ and each color $k \notin S_{u} \cup S_{u'}$ we define the random variable $B_{uu',k}=B_{uu',k}(i)$ to be the set of pairs $(u'', k')$ such that $k$ is available at $uu''$, $k'$ is available at $u'u''$, and $k' \in S_u$. \end{definition} \begin{figure}[h!] \begin{center} \begin{tikzpicture} \node (u) at (0,0) [fixed,label=above:$u$] {}; \node (u'') at (1,1) [vert,label=above:$u''$] {}; \node (u') at (2,0) [fixed,label=above:$u'$] {}; \draw [red?] (u) -- node [below] {} (u'); \draw [blue?] (u') -- node [above] {$k'\in S_u$} (u''); \draw [red?]
(u) -- node [above] {$k$} (u''); \end{tikzpicture} \end{center} \caption{A depiction of the $(u'',k')\in B_{uu',k}$.} \label{fig6} \end{figure} We heuristically predict that these have the same trajectory as the type $A$ variables. Indeed, the number of possible choices for $u''$ with uncolored $uu''$ and $u'u''$ is about $np^2$. The number of possible choices for $k'$ with $k'\in S_u$ and $k'\notin S_{u'}\cup S_{u''}$ is $|\textsc{col}|s(1-s)^2$ and the probability that $k\notin S_{u''}$ is $(1-s)$. Furthermore, the probability that the color $k'$ hit neither $u'$ nor $u''$ is $p^2$ and the probability that $k$ did not hit $u''$ is $p$. Avoiding alternating $(uu'',k)$- and $(u'u'',k')$-paths again has probability $r^2$. Hence, \[ B_{uu',k} \approx |\textsc{col}| n s (1-s)^3 p^5r^2 = \frac 56 n^2 s (1-s)^2 p^5r^2 = n^2 b(t), \] where the scaled trajectory is \[ b(t) := \frac 56 s (1-s)^2 p^5r^2 = a(t). \] Next we define the type $\CI{}$, $\CII{}$ and $C_{uu'u''}$ variables, which fix all of the vertices $u, u', u''$ and only count colors. \begin{definition} For each ordered triple $(u, u', u'')$ of vertices such that the edges $uu'$, $uu''$ and $u'u''$ are all uncolored, we define the random variable $\CI=\CI(i)$ to be the set of colors $k'$ such that $k'$ is 1-available at $(u, u', u'')$ at step $i$. We define the random variable $\CII=\CII(i)$ to be the set of colors $k$ such that $k$ is 2-available at $(u, u', u'')$ at step $i$. We also define $C_{uu'u''}(i)$ to be the set of pairs $(k, k')$ available at $(u, u', u'')$. In other words $C_{uu'u''}(i)$ is the Cartesian product $\CI \times \CII$. \end{definition} \begin{figure}[h!] \begin{center} \begin{tikzpicture} \node (u') at (0,0) [fixed,label=above:$u'$] {}; \node (u) at (1,1) [fixed,label=above:$u$] {}; \node (u'') at (2,0) [fixed,label=above:$u''$] {}; \draw [blue?]
(u') -- node [above] {$k'$} (u''); \end{tikzpicture} \hspace{4ex} \begin{tikzpicture} \node (u') at (0,0) [fixed,label=above:$u'$] {}; \node (u) at (1,1) [fixed,label=above:$u$] {}; \node (u'') at (2,0) [fixed,label=above:$u''$] {}; \draw [red?] (u) -- node [above] {$k$} (u'); \draw [red?] (u) -- node [above] {$k$} (u''); \end{tikzpicture} \hspace{4ex} \begin{tikzpicture} \node (u') at (0,0) [fixed,label=above:$u'$] {}; \node (u) at (1,1) [fixed,label=above:$u$] {}; \node (u'') at (2,0) [fixed,label=above:$u''$] {}; \draw [blue?] (u') -- node [above] {$k'$} (u''); \draw [red?] (u) -- node [above] {$k$} (u'); \draw [red?] (u) -- node [above] {$k$} (u''); \end{tikzpicture} \end{center} \caption{Depictions of the $k'\in\CI$, $k\in\CII$, and $(k,k')\in C_{uu'u''}$.} \end{figure} Similarly, as for the previous variables we heuristically predict that \[ \CI \approx |\textsc{col}| s (1-s)^2p^2r = \frac{5}{6} n s (1-s)p^2r = n c_1(t), \] \[ \CII \approx |\textsc{col}| (1-s)^3p^3r^2 = \frac{5}{6} n (1-s)^2p^3r^2 = n c_2(t), \] and \[ C_{uu'u''}(i) \approx \frac{25}{36}n^2 s(1-s)^3p^5r^3 = n^2 c(t), \] where the scaled trajectories are \[ c_1(t) := \frac{5}{6} s (1-s)p^2r, \quad c_2(t) := \frac{5}{6} (1-s)^2p^3r^2 \quad\text{ and }\quad c(t) := \frac{25}{36} s(1-s)^3p^5r^3 = c_1(t) c_2(t). \] Now we have type $D, E$ and $F$ variables, where one vertex and one color are fixed. \begin{definition} For each vertex $u$ and each color $k \notin \S_u$ we define the random variable $D_{u,k}=D_{u,k}(i)$ to be the set of triples $(u', u'', k')$ such that $(k, k')$ is available at $(u, u', u'')$ at step $i$. \end{definition} \begin{definition} For each vertex $u''$ and each color $k \notin \S_{u''}$ we define the random variable $E_{u'',k}=E_{u'',k}(i)$ to be the set of triples $(u, u', k')$ such that $(k, k')$ is available at $(u, u', u'')$ at step $i$. 
\end{definition} \begin{definition} For each vertex $u''$ and each color $k' \notin \S_{u''}$ we define the random variable $F_{u'',k'}=F_{u'',k'}(i)$ to be the set of triples $(u, u', k)$ such that $(k, k')$ is available at $(u, u', u'')$ at step $i$. \end{definition} Based on our heuristics we predict the following trajectories. Here, for example, we explain how to obtain the predicted trajectory of $F_{u'',k'}$. First we choose an ordered pair $u$ and $u'$ with all uncolored edges. This gives us about $n^2p^3$ choices. Next we choose a color $k$ such that $k\notin S_u\cup S_{u'}\cup S_{u''}$ yielding $|\textsc{col}|(1-s)^3$ possibilities. Now we observe that the probability that $k'\in S_u$ and $k'\notin S_{u'}$ is $s(1-s)$. Furthermore, the probability that $u$, $u'$ and $u''$ are not hit by $k$ is $p^3$, and the probability that $k'$ did not hit $u'$ is $p$. Finally, the probability of avoiding alternating paths $(uu',k)$, $(uu'',k)$ and $(u'u'', k')$ is $r^3$. Thus, $F_{u'',k'} \approx |\textsc{col}| n^2 s (1-s)^4p^7r^3$. Trajectories of $D_{u,k}$ and $E_{u'',k}$ can be derived in a similar fashion. Consequently, \[ D_{u,k}, E_{u'',k}, F_{u'',k'} \approx |\textsc{col}| n^2 s (1-s)^4p^7r^3 = \frac 56 n^3 s (1-s)^3p^7r^3 \] with the scaled trajectories \[ d(t), e(t), f(t) := \frac 56 s (1-s)^3p^7r^3. \] Finally we define our type $Z$ variables, which are useful for tracking which colors become forbidden due to alternating 4-cycles. In particular, for a fixed edge $uv$ and a color $k$, we keep track of substructures that could eventually cause $k$ to be forbidden at $uv$ due to a potential alternating 4-cycle. \begin{definition} Fix two vertices $u, v$, a color $k \notin S_u \cup S_v$ and a vector $(a_1, a_2, a_3) \in \{0, 1\}^3$ with $(a_1, a_2, a_3) \neq (1, 1, 1)$. 
We define the random variable $Z_{uv, k, a_1, a_2, a_3} = Z_{uv, k, a_1, a_2, a_3}(i)$ to be the number of triples $(x, y, k')$ where $x, y$ are vertices and $k'$ is a color satisfying the following condition. Letting $e_1:=ux$, $e_2:=xy$, $e_3:=yv$, and $k_1:=k'$, $k_2:=k, k_3:=k'$, we have for each $1 \le j \le 3$ that \begin{enumerate}[label=$\bullet$] \item if $a_j=0$ then $k_j$ is available at $e_j$, and \item if $a_j=1$ then $e_j$ is assigned the color $k_j$. \end{enumerate} \end{definition} \noindent \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue?] (y) -- node [right] {$k'$}(v); \draw [blue?] (u) -- node [left] {$k'$}(x); \draw [red?] (y) -- node [above] {$k$}(x); \end{tikzpicture} \hfill \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue?] (y) -- (v); \draw [blue] (u) -- (x); \draw [red?] (y) -- (x); \end{tikzpicture} \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue?] (y) -- (v); \draw [blue?] (u) -- (x); \draw [red] (y) -- (x); \end{tikzpicture} \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue] (y) -- (v); \draw [blue?] (u) -- (x); \draw [red?] 
(y) -- (x); \end{tikzpicture} \hfill \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue] (y) -- (v); \draw [blue?] (u) -- (x); \draw [red] (y) -- (x); \end{tikzpicture} \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue] (y) -- (v); \draw [blue] (u) -- (x); \draw [red?] (y) -- (x); \end{tikzpicture} \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue?] (y) -- (v); \draw [blue] (u) -- (x); \draw [red] (y) -- (x); \end{tikzpicture} \end{center} \caption{Depictions of the $(x,y,k')\in Z_{uv,k,a_1,a_2,a_3}$ for $(a_1,a_2,a_3)\in \{(0,0,0), (1,0,0),(0,1,0),(0,0,1),(0,1,1),(1,1,0)\}$ (respectively).} \end{figure} We anticipate the following trajectories. For example, we explain in detail how to predict $Z_{uv,k, 0,1,1}$. First we choose an ordered pair $x$ and $y$ such that $xy$ and $yv$ are already colored. For this we should have $n^2p(1-p)^2$ choices. Next we need to make sure that the color of $xy$ is $k$. This should happen with probability $1/|\textsc{col}|$. The color $k'$ is already determined by the color of $yv$ and $k'$ must be available at $ux$. In particular, $k'$ must not be in $S_u$ or $S_x$, which happens with probability $(1-s)^2$. Also $k'$ must not have already hit $u$ or $x$ before, which occurs with probability $p^2$. Finally there must not be an alternating $(ux,k')$-path, which happens with probability~$r$. 
Thus, $Z_{uv,k, 0,1,1} \approx n^2p(1-p)^2 \cdot \frac{1}{|\textsc{col}|} \cdot (1-s)^2 \cdot p^2 \cdot r$. The remaining trajectories can be obtained similarly. \begin{align*} Z_{uv,k, 0,0,0} & \approx |\textsc{col}| n^2 (1-s)^6 p^9 r^3 = \frac 56 n^3 (1-s)^5 p^9 r^3,\\ Z_{uv,k, 1,0,0} \approx Z_{uv,k, 0,1,0} \approx Z_{uv,k, 0,0,1} &\approx n^2 (1-s)^4 (1-p) p^6 r^2,\\ Z_{uv,k, 1,1,0} \approx Z_{uv,k, 1,0,1} \approx Z_{uv,k, 0,1,1} &\approx \frac{n^2}{|\textsc{col}|} (1-s)^2 (1-p)^2 p^3 r = \frac65 n (1-s)^3 (1-p)^2 p^3 r. \end{align*} Thus we define the following scaled trajectories: \[ z_0(t) := \frac 56 (1-s)^5 p^9 r^3, \quad z_1(t) := (1-s)^4 (1-p) p^6 r^2 \quad\text{ and }\quad z_2(t) := \frac65 (1-s)^3 (1-p)^2 p^3 r. \] \subsection{Derivatives of the trajectories} First, we collect all the scaled trajectories: \begin{align*} y(t) &= p^2,\\ q(t) & = \frac16 p^3,\\ a(t) = b(t) &= \frac 56 s (1-s)^2 p^5r^2,\\ c_1(t) &= \frac{5}{6} s (1-s)p^2r,\\ c_2(t) &= \frac{5}{6} (1-s)^2p^3r^2,\\ d(t) = e(t) = f(t) &= \frac 56 s (1-s)^3p^7r^3,\\ z_0(t) &= \frac 56 (1-s)^5 p^9 r^3,\\ z_1(t) &= (1-s)^4 (1-p) p^6 r^2,\\ z_2(t) & = \frac65 (1-s)^3 (1-p)^2 p^3 r. \end{align*} These functions satisfy the following system of differential equations. Each differential equation in the system naturally arises from estimating the expected one-step change in one of our random variables. The fact that our scaled trajectories satisfy this system is crucial to our analysis and will be used in our calculations. It is not hard to check (with, for example, a software such as Maple) that the system is satisfied using that $p'(t)=-6$ and $r'(t) = -\frac{648}{25}(1-s)^2(1-p)^2r$. 
We have: \begin{align} a'(t) = b'(t) &= - \frac{5ad}{2qc}- \frac{6a^2z_2}{qc} - \frac{2ay}{q},\label{eqn:abdiffeq}\\ c'_1(t) &= -\frac{5dc_1}{3qc}-\frac{3az_2c_1}{qc},\label{eqn:c1diffeq}\\ c'_2(t) &= -\frac{5dc_2}{2qc}-\frac{6az_2c_2}{qc},\label{eqn:c2diffeq}\\ d'(t) = e'(t) = f'(t) &= -\frac{20d^2}{6qc}-\frac{9az_2d}{qc}-\frac{3yd}{q},\label{eqn:defdiffeq}\\ z'_0(t) &= -\frac{5dz_0}{qc} - \frac{9az_2z_0}{qc} - \frac{3yz_0}{q},\label{eqn:z0diffeq}\\ z'_1(t) &= \frac{az_0}{qc} -\frac{10dz_1}{3qc} - \frac{6az_2z_1}{qc} - \frac{2yz_1}{q}, \label{eqn:z1diffeq}\\ z'_2(t) & = \frac{2az_1}{qc}-\frac{5dz_2}{3qc} - \frac{3az_2^2}{qc} - \frac{yz_2}{q} \label{eqn:z2diffeq}. \end{align} We will also need a crude bound on the first and second derivatives of the scaled trajectories. Note that all these functions ($a, b$, etc.) have the form $h_1(t) \exp(h_2(t))$ where $h_1$ and $h_2$ are polynomials. It is easy to see that the derivative (and second derivative) of any such function has the form $h_3(t) \exp(h_2(t))$ where $h_3(t)$ is a polynomial. In particular, the first and second derivatives are all $O(1)$ for all $0 \le t \le 1$. Thus we have: \begin{prop}\label{obs:CrudeDerivTraj} The first and second derivatives of all the scaled trajectory functions are $O(1)$. \end{prop} \subsection{Untracked variables} In addition to the random variables we already mentioned, which we will track, we will also need several random variables for which we will establish some necessary, but less precise, bounds in our analysis. In particular, when we consider the maximum one-step change in the $Z$ type variables, we could potentially lose a catastrophic number of triples through alternating paths forbidding edges in two types of pathological substructures. \begin{definition} \begin{enumerate}[label=(\roman*)] \item Fix two vertices $u,v$ and a color $k$.
We define the random variable $\Xi_{u, v, k} = \Xi_{u, v, k}(i)$ to be the number of pairs $(x,y)$ such that $ux$ has the same color as $vy$, and $xy$ has the color $k$. In other words, $\Xi_{u, v, k}$ is the number of alternating $(uv, k)$-paths. See Figure \ref{fig:xi}. \item Fix four vertices $u,u',v,v'$. We define the random variable $\Phi_{u, u', v, v'} = \Phi_{u, u', v, v'}(i)$ to be the number of pairs $(x,y)$ such that $ux$ has the same color as $u'y$ and $vx$ has the same color as $v'y$. See Figure \ref{fig:phi}. \item Fix two vertices $u,u''$ and colors $k, k''$. We define the random variable $\Psi_{u, u'', k, k''} = \Psi_{u, u'', k, k''}(i)$ to be the number of triples $(x, y, z)$ such that $ux$ has the same color as $zu''$, $xy$ has the color $k$ and $yz$ has the color $k''$. See Figure \ref{fig:psi}. \item Fix three vertices $u,v, w$. We define the random variable $\Lambda_{u,v, w} = \Lambda_{u,v, w}(i)$ to be the number of pairs $(x, y)$ such that $ux$ has the same color as $vy$, and $vx$ has the same color as $wy$. See Figure \ref{fig:lambda}. \end{enumerate} \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.24\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue] (y) -- (v); \draw [blue] (u) -- (x); \draw [red] (y) -- node [above] {$k$} (x); \end{tikzpicture} \caption{} \label{fig:xi} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (u') at (0,1) [fixed,label=above:$u'$] {}; \node (x) at (1,0) [vert,label=below:$x$] {}; \node (y) at (1,1) [vert,label=above:$y$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (v') at (2,1) [fixed,label=above:$v'$] {}; \draw [blue] (u) -- (x); \draw [blue] (u') -- (y); \draw [red] (x) -- (v); \draw [red] (v') -- (y); \end{tikzpicture} \caption{} \label{fig:phi} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \begin{tikzpicture}[scale=1] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (u'') at (2,0) [fixed,label=below:$u''$] {}; \node (x) at (0,1) [vert,label=above:$x$] {}; \node (y) at (1,1) [vert,label=above:$y$] {}; \node (z) at (2,1) [vert,label=above:$z$] {}; \draw [green] (u) -- (x); \draw [red] (x) -- node [above] {$k$}(y); \draw [blue] (y) -- node [above] {$k''$}(z); \draw [green] (z) -- (u''); \end{tikzpicture} \caption{} \label{fig:psi} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \begin{tikzpicture} \node (u) at (0,0) [fixed, label=below:$u$] {}; \node (v) at (1,0) [fixed, label=below:$v$] {}; \node (w) at (2,0) [fixed, label=below:$w$] {}; \node (x) at (.5,1) [vert, label=above:$x$] {}; \node (y) at (1.5,1) [vert, label=above:$y$] {}; \draw [red] (u) -- (x); \draw [red] (v) -- (y); \draw [blue] (v) -- (x); \draw [blue] (w) -- (y); \end{tikzpicture} \caption{} \label{fig:lambda} \end{subfigure} \caption{Depictions of the 
$(x,y)\in\Xi_{u, v, k}$,~$(x,y)\in\Phi_{u, u', v, v'}$,~$(x,y,z)\in\Psi_{u, u'', k, k''}$ and $(x,y)\in \Lambda_{u,v, w}$.} \end{figure} \end{definition} \section{The good event}\label{sec:goodevent} In this section we define the good event $\mc{E}_i$, which among other things stipulates that every uncolored triangle $(u, u', u'')$ still has plenty of available pairs of colors $(k, k')$. More specifically, $\mc{E}_i$ will stipulate that all of our tracked variables are within a small window of their respective trajectories we derived in Section \ref{sec:variables}. The event $\mc{E}_i$ will also stipulate some crude upper bounds on certain other variables. Note that in the process, we choose $\varepsilon$, which gives us $s(\varepsilon)$. Then we let \[ \delta:=10^{-7}s(1-s)^4 \] and define below all of the error functions $g_q, g_y$, etc. For any step $i'$ we let $t'=t(i')$. We formally define the \tbf{good event} $\mc{E}_i$ to be the event that for all $i' \le i$ we have the following conditions (below, functions are evaluated at $i=i'$, $t=t'$): \begin{enumerate}[label=(\Roman*)] \item \label{E:Q} we have \[ \abrac{Q - n^3 q(t)} \le n^{3} g_q, \] \item \label{E:Y} for each uncolored edge $uu'$ we have \[ \abrac{Y_{uu'} - ny(t)} \le n g_y, \] \item \label{E:A} for each uncolored edge $u'u''$ and color $k'$ we have \[ \abrac{A_{u'u'',k'} -n^2a(t)} \le n^2 g_{ab}, \] \item \label{E:B} for each uncolored edge $uu'$ and color $k$ we have \[ \abrac{B_{uu',k} -n^2 b(t)} \le n^2 g_{ab}, \] \item \label{E:C1} for each triangle $(u, u', u'')$ of uncolored edges we have \[ \abrac{\CI -nc_1(t)} \le n g_{c1}, \] \item \label{E:C2} for each triangle $(u, u', u'')$ of uncolored edges we have \[ \abrac{\CII -nc_2(t)} \le n g_{c2}, \] \item \label{E:D} for each vertex $u$ and color $k$ available at $u$ we have \[ \abrac{D_{u,k} -n^3d(t)} \le n^3 g_{def}, \] \item \label{E:E} for each vertex $u''$ and color $k$ available at $u''$ we have \[ \abrac{E_{u'',k} -n^3e(t)} \le n^3 g_{def}, \] \item
\label{E:F} for each vertex $u''$ and color $k'$ available at $u''$ we have \[ \abrac{F_{u'',k'} -n^3f(t)} \le n^3 g_{def}, \] \item \label{E:Z0} for each uncolored edge $uv$ and color $k$ we have \[ \abrac{Z_{uv,k, 0,0,0} -n^3z_0(t)} \le n^3 g_0, \] \item \label{E:Z1} for each uncolored edge $uv$ and color $k$ we have \[ \abrac{Z_{uv,k, 1,0,0} -n^2z_1(t)} \le n^2 g_1, \] \[ \abrac{Z_{uv,k, 0,1,0} -n^2z_1(t)} \le n^2 g_1, \] \[ \abrac{Z_{uv,k, 0,0,1} -n^2z_1(t)} \le n^2 g_1, \] \item \label{E:Z2} for each uncolored edge $uv$ and color $k$ we have \[ \abrac{Z_{uv,k, 1,1,0} -nz_2(t)} \le n g_2, \] \[ \abrac{Z_{uv,k, 1,0,1} -nz_2(t)} \le n g_2, \] \[ \abrac{Z_{uv,k, 0,1,1} -nz_2(t)} \le n g_2. \] \item \label{E:crude} for all $u, v, k$ we have \[ \Xi_{u, v, k} \le n^{4\delta}, \] for all $u,u', v,v'$ we have \[ \Phi_{u, u', v, v'} \le n^{4\delta}, \] for all $u, u'', k, k''$ we have \[ \Psi_{u, u'', k, k''} \le n^{4\delta}, \] and for all $u, v, w$ we have \[ \Lambda_{u, v, w} \le n^{4 \delta}. \] \end{enumerate} Recall that we define the random variable $C_{uu'u''}=C_{uu'u''}(i)$ as $\CI \times \CII$. This will count the number of pairs $(k,k')$ that are available at $(u, u', u'')$ at step $i$. In addition, we let \[ c(t) := c_1(t)c_2(t) \quad\text{and}\quad g_c:=2(c_2g_{c_1}+c_1g_{c_2}). \] Since $g_{c_1}=o(c_1)$, in the good event we get \[ C_{uu'u''}\le n(c_1+g_{c_1}) \cdot n(c_2 + g_{c_2}) = n^2(c + c_1 g_{c_2} + g_{c_1}c_2 + g_{c_1}g_{c_2}) \le n^2(c + g_{c}) \] and similarly $C_{uu'u''}\ge n^2(c - g_c)$. Thus, \[ \abrac{C_{uu'u''} -n^2c(t)} \le n^2 g_{c}. \] We let \[ i_{max}:= \frac16 n^2 \rbrac{1-n^{-\d}}, \qquad t_{max} := \frac{i_{max}}{n^2} = \frac16 \rbrac{1-n^{-\d}}, \] and note that \[ p(t_{max}) = 1-6t_{max} = n^{-\d}. \] Let \[ \kappa = \kappa(s) = 10000 s^{-1}(1-s)^{-4} \qquad \textrm{and}\qquad\omega = 100(\kappa + 1)\delta. \] Note that $\omega = 100 \delta + 1/10 = 10^{-5}s(1-s)^4 + 1/10 < 1/4$, since $s=s(\varepsilon)$ and $\varepsilon <1/100$.
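The interplay of these constants rests on the product $\kappa\delta$ being exactly $10^{-3}$ independently of $s$, which is what makes $\omega < 1/4$ automatic. A quick numeric sanity check (a sketch; the sample values of $s$ are arbitrary):

```python
# Sanity check of the constants defined above:
#   delta = 10^{-7} s (1-s)^4,   kappa = 10^4 s^{-1} (1-s)^{-4},
#   omega = 100 (kappa + 1) delta.
# Since kappa * delta = 10^{-3} regardless of s, we get
# omega = 100 delta + 1/10 < 1/4 for every 0 < s < 1.
def constants(s):
    delta = 1e-7 * s * (1 - s) ** 4
    kappa = 1e4 / (s * (1 - s) ** 4)
    omega = 100 * (kappa + 1) * delta
    return delta, kappa, omega

for s in (0.005, 0.01, 0.05):  # arbitrary small sample values of s
    delta, kappa, omega = constants(s)
    assert abs(kappa * delta - 1e-3) < 1e-12
    assert abs(omega - (100 * delta + 0.1)) < 1e-12
    assert delta < 1e-3 and omega < 0.25
```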
We define the error functions as follows: \begin{align*} g_y(t) &= n^{-1/2+\delta},\\ g_q(t) &= n^{-1+2\delta},\\ g_{ab}(t) &= n^{-\omega} p(t)^{-100 \kappa}, \\ g_{c_1}(t) &= n^{-\omega} p(t)^{-100 \kappa-2}, \\ g_{c_2}(t) &= n^{-\omega} p(t)^{-100 \kappa-1}, \\ g_{def}(t) &= n^{-\omega} p(t)^{-100 \kappa+3}, \\ g_{0}(t) &= n^{-\omega} p(t)^{-100 \kappa+5}, \\ g_{1}(t) &= n^{-\omega} p(t)^{-100 \kappa+1}, \\ g_{2}(t) &= n^{-\omega} p(t)^{-100 \kappa-1}. \end{align*} Note that $$ \frac{g_q(t)}{q(t)} = \frac{n^{-1+2\delta}}{\frac16 p(t)^3} \le 6 n^{-1+5\delta} = o(1) $$ since $p \ge n^{-\delta}$ and $\delta < 1/1000$. Furthermore, using these and that $\varepsilon/2 \le s \le 3\varepsilon/5$ and $r\ge 1/6$, we obtain $$ \frac{g_{c}}{c} = \frac{2(c_2g_{c_1}+c_1g_{c_2})}{c} = O\rbrac{\frac{p^3 \cdot n^{-\omega} p(t)^{-100 \kappa-2} + p^2 \cdot n^{-\omega} p(t)^{-100 \kappa-1}}{p^5} } = O\rbrac{n^{-\omega + (100\k+4)\d} } = o(1). $$ It is also routine to check that the error functions satisfy the following, which will be required at a crucial point in our analysis: \begin{align} & g_{ab}' - 30 \k \rbrac{p^{2} g_2 + p^{-1}g_{ab} + p^{-1} g_c} \label{eq:err-sup-ab}\\ &\qquad\qquad = n^{-\omega}\left(570\,\kappa\,{p}^{-100\,\kappa-1} -30\,\kappa\,{p}^{-100\,\kappa+1}-120\,\kappa\,{p}^{-100\,\kappa}\right) =\Omega(1), \nonumber \\ & g_{c_1}' - 30 \k (p^{-1}g_2 + p^{-3}g_{ab} + p^{-4}g_c+p^{-6}g_{def}) \label{eq:err-sup-c1}\\ &\qquad\qquad= n^{-\omega} \left( \left( 420\,\kappa+12 \right) {p}^{-100\,\kappa-3}-30\,\kappa\,{p}^{- 100\,\kappa-2} \right) =\Omega(1),\nonumber\\ &g_{c_2}' - 30 \k (p^{-1}g_2 + p^{-2}g_{ab} + p^{-3}g_c+p^{-5}g_{def}) \label{eq:err-sup-c2}\\ &\qquad\qquad= n^{-\omega}\left( 390\,\kappa+6 \right) {p}^{-100\,\kappa-2} =\Omega(1),\nonumber\\ & g_{def}' - 30 \kappa \rbrac{p^{4}g_2 + p^{-1}g_{def} + p^{2}g_{ab}+pg_c} \label{eq:err-sup-def}\\ &\qquad\qquad = n^{-\omega} \left(\left( 420\,\kappa-18 \right) {p}^{-100\,\kappa+2}-30\,\kappa\,{p}^{- 100\,\kappa+3} \right)
=\Omega(1),\nonumber\\ & g_{0}' - 30 \k \rbrac{p^{6}g_2 + pg_{def} + p^{4}g_{ab}+p^{3}g_c}\label{eq:err-sup-0}\\ &\qquad\qquad= n^{-\omega} \left(\left( 420\,\kappa-30 \right) {p}^{-100\,\kappa+4}-30\,\kappa\,{p}^{- 100\,\kappa+5} \right) =\Omega(1),\nonumber\\ & g_{1}' - 40 \k \rbrac{p^{3}g_2 + p^{-2}g_{def} + pg_{ab}+p^{-1}g_c}\label{eq:err-sup-1}\\ &\qquad\qquad= n^{-\omega} \left(\left( 440\,\kappa-6 \right) {p}^{-100\,\kappa}-80\,\kappa\,{p}^{-100\,\kappa+1}-40\,\kappa\,{p}^{-100\,\kappa+2} \right) =\Omega(1),\nonumber\\ & g_{2}' - 40 \k \rbrac{p^{-1}g_2 + p^{-5}g_{def} + p^{-2}g_{ab}+p^{-3}g_c} \label{eq:err-sup-2}\\ &\qquad\qquad= n^{-\omega} \left( 320\,\kappa+6 \right) {p}^{-100\,\kappa-2} =\Omega(1).\nonumber \end{align} Finally note that all error functions have the form $n^{-\omega} p^{-h}$ where $h \le 100\k+2$ is a constant. So the first derivative is a constant times $n^{-\omega}p^{-h-1}$ and the second derivative is a constant times $n^{-\omega}p^{-h-2}$. In particular for all $0 \le t \le 1$ the second derivative of any error function is \[ O\rbrac{n^{-\omega}p^{-100\k-4} } = O\rbrac{n^{-\omega + (100\k+4)\d} } \] and similarly for the first derivative. Thus we have: \begin{prop}\label{obs:CrudeDerivErr} The first and second derivatives of all the error functions are $O(1)$. \end{prop} \section{Some helpful bounds that hold in the good event}\label{sec:helpers} Some of these bounds will be sharp and others will be more crude upper bounds. \subsection{Sharp estimates} In order to track certain variables we will need to estimate the probabilities of the following events. \begin{enumerate}[label=$\bullet$] \item For a color $k^*$ available at $v$ at step $i$, we let $\mc{L}_{v,k^*}=\mc{L}_{v,k^*}(i)$ be the event that $v$ gets hit by $k^*$ at step~$i$. \item For a color $k^*$ available at edge $vw$, we let $\mc{M}_{vw,k^*}=\mc{M}_{vw,k^*}(i)$ be the event that $vw$ gets colored $k^*$ at step~$i$.
\item For an uncolored edge $vw$, we let $\mc{M}_{vw,\bullet}=\mc{M}_{vw,\bullet}(i)$ be the event that $vw$ gets colored any color at step~$i$. \item For any color $k$ and an uncolored edge $uv$, let $\mc{N}_{uv,k}=\mc{N}_{uv,k}(i)$ be the event that the edge $uv$ becomes part of an alternating $(uv,k)$-path at step~$i$. \end{enumerate} \begin{claim}\label{claim:LMMN} Assuming the good event $\mc{E}_i$ holds, we have: \begin{enumerate}[label=(\roman*)] \item\label{claim:LMMN:L} $\displaystyle{n^{-2} \sbrac{ \frac{5d}{6qc} - \k \rbrac{p^{-8}g_{def}+ p^{-6} g_c}} \le \P(\mc{L}_{v,k^*}) \le n^{-2} \sbrac{ \frac{5d}{6qc} + \k \rbrac{p^{-8}g_{def} + p^{-6} g_c}}}$, \item\label{claim:LMMN:M} $\displaystyle{n^{-3} \sbrac{\frac{a}{qc} - \k p^{-8} \rbrac{g_{ab} + g_c}} \le \P(\mc{M}_{vw,k^*}) \le n^{-3} \sbrac{\frac{a}{qc} + \k p^{-8} \rbrac{g_{ab} + g_c}}}$, \item\label{claim:LMMN:M2} $\displaystyle{n^{-2} \frac{y}{q} - O(n^{-5/2+4\d}) \le \P(\mc{M}_{vw,\bullet}) \le n^{-2} \frac{y}{q} + O(n^{-5/2+4\d})}$, and \item\label{claim:LMMN:N} $\displaystyle{n^{-2} \sbrac{\frac{3az_2}{qc} - \k \rbrac{p^{-3} g_2 + p^{-5}g_{ab} + p^{-5} g_c}} \le \P(\mc{N}_{uv,k}) \le n^{-2} \sbrac{\frac{3az_2}{qc} + \k \rbrac{p^{-3} g_2 + p^{-5}g_{ab} + p^{-5} g_c}}}$. \end{enumerate} \end{claim} We also state some simple bounds related to the above probabilities. These bounds easily follow from the trajectories given in Section \ref{sec:variables}, $0 \le p \le 1$, $1/5 \le r \le 1$ and the assumption that $s>0$ is sufficiently small. Therefore, the proof is omitted. \begin{claim}\label{claim:helpersloppy} We have: \[ \frac{d}{qc} \le 50p^{-1}, \qquad \frac{az_2}{qc} \le 10, \qquad \frac{a}{qc} \le 10p^{-3}, \qquad\text{and}\qquad \frac{y}{q} \le 10p^{-1}. \] \end{claim} We will use the following upper and lower bounds throughout the proof of Claim~\ref{claim:LMMN}. The following fact is easily checked, and thus we omit the proof.
\begin{fact}\label{fact:sharp-bnd} Let $x=x(n), y=y(n), z=z(n)$ with $x,y,z \in (0,1)$ and $y, z = o(1)$. Then, for sufficiently large $n$, we have $$ \frac{1+x}{(1-y)(1-z)} \le 1 + 2x + 2y + 2z \qquad \textrm{and} \qquad \frac{1-x}{(1+y)(1+z)} \ge 1 - 2x - 2y- 2z. $$ \end{fact} \begin{proof}[Proof of Claim~\ref{claim:LMMN}] We prove each statement separately. \bigskip \noindent \emph{Part~\ref{claim:LMMN:L}:} By using Fact~\ref{fact:sharp-bnd} we get \begin{align*} \P(\mc{L}_{v,k^*}) &= \frac12 \sum_{(u', u'', k')\in D_{v,k^*}} \frac{1}{3QC_{vu'u''}} + \sum_{(u, u', k') \in E_{v,k^*}} \frac{1}{3Q C_{uu'v}} + \sum_{(u, u', k)\in F_{v,k^*}} \frac{1}{3Q C_{uu'v}}\\ & \le \frac52 \cdot \frac{n^3(d + g_{def})}{3 n^3\rbrac{q - g_q} \cdot n^2 \rbrac{ c - g_c}} = n^{-2} \cdot \frac{5d}{6qc} \cdot \frac{1 + \frac{g_{def}}{d}}{\rbrac{1 - \frac{g_q}{q}} \rbrac{1 - \frac{g_c}{c }}}. \end{align*} Since $g_q/q$ and $g_c/c$ are $o(1)$, we use Fact~\ref{fact:sharp-bnd} and then $g_q/q = o(g_{def}/d)$ to obtain \begin{align*} \P(\mc{L}_{v,k^*})& \le n^{-2} \cdot \frac{5d}{6qc} \rbrac{1 + \frac{2g_{def}}{d} + \frac{2g_q}{q}+ \frac{2g_c}{c }} \le n^{-2} \cdot \frac{5d}{6qc} \rbrac{1 + \frac{4g_{def}}{d} + \frac{2g_c}{c }}\\ &= n^{-2} \rbrac{\frac{5d}{6qc}+\frac{144}{5}\frac{1}{(1-s)^3sr^3}p^{-8}g_{def}+ \frac{432}{25}\frac{1}{(1-s)^3sr^3}p^{-6}g_c}. \end{align*} Finally, since $r^{-3} \le 5^3$ and $\kappa(s) = 10^4s^{-1}(1-s)^{-4}$, we obtain the required upper bound on $\P(\mc{L}_{v,k^*})$. Using a similar calculation and the lower bound in Fact~\ref{fact:sharp-bnd} gives the lower bound.
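The final step above absorbs the explicit constants into $\kappa$ using $r^{-3} \le 5^3$; this can be confirmed numerically over the relevant ranges (a sketch; the grids below are arbitrary sample values, with $1/5 \le r \le 1$ and small $s$ as in the surrounding text):

```python
# Check that the explicit constants from Part (i) are absorbed by
# kappa(s) = 10^4 s^{-1} (1-s)^{-4} when 1/5 <= r <= 1 (so r^{-3} <= 125):
#   (144/5)  / (s (1-s)^3 r^3) <= kappa(s)   (coefficient of p^{-8} g_def)
#   (432/25) / (s (1-s)^3 r^3) <= kappa(s)   (coefficient of p^{-6} g_c)
def kappa(s):
    return 1e4 / (s * (1 - s) ** 4)

def coefficients(s, r):
    base = 1.0 / (s * (1 - s) ** 3 * r ** 3)
    return (144 / 5) * base, (432 / 25) * base

for s in [j / 1000 for j in range(1, 100)]:       # small s, as in the paper
    for r in [0.2 + j / 100 for j in range(81)]:  # 1/5 <= r <= 1
        c_def, c_c = coefficients(s, r)
        assert c_def <= kappa(s) and c_c <= kappa(s)
```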
\bigskip \noindent \emph{Part~\ref{claim:LMMN:M}:} Using similar calculations as in Part~\ref{claim:LMMN:L} yields \begin{align*} \P(\mc{M}_{vw,k^*}) &= \sum_{(u, k)\in A_{vw,k^*}} \frac{1}{3Q C_{uvw}} + \sum_{(u'', k')\in B_{vw,k^*}} \frac{1}{3QC_{vwu''}} + \sum_{(u'',k')\in B_{wv,k^*}} \frac{1}{3QC_{wvu''}}\\ & \le 3 \cdot \frac{ n^2(a + g_{ab})}{3 n^3\rbrac{q-g_q} \cdot n^2 \rbrac{ c- g_c}} = n^{-3} \frac{a}{qc} \cdot \frac{1 + \frac{g_{ab}}{a}}{\rbrac{1 - \frac{g_q}{q}} \rbrac{1 - \frac{g_c}{c }}}\\ &\le n^{-3} \cdot \frac{a}{qc} \rbrac{1 + \frac{2g_{ab}}{a} + \frac{2g_q}{q}+ \frac{2g_c}{c }} \le n^{-3} \cdot \frac{a}{qc} \rbrac{1 + \frac{4g_{ab}}{a} + \frac{2g_c}{c }}\\ &= n^{-3}\rbrac{\frac{a}{qc}+\frac{864}{25}\frac{1}{(1-s)^3 s r^3}p^{-8}g_{ab} + \frac{2592}{125}\frac{1}{(1-s)^4sr^4}p^{-8}g_c}\\ & \le n^{-3} \sbrac{\frac{a}{qc} + \k p^{-8} \rbrac{g_{ab} + g_c}}, \end{align*} where for the latter term we use the fact that $r \ge \exp \cbrac{ - \frac{36}{25}}$.
\end{align*} \bigskip \noindent \emph{Part~\ref{claim:LMMN:N}:} Finally note that \begin{align*} \P(\mc{N}_{uv,k}) &= \sum_{(x,y,k') \in Z_{uv,k, 1,0,1}} \P(\mc{M}_{xy,k} ) + \sum_{(x,y,k') \in Z_{uv,k, 0,1,1}} \P(\mc{M}_{ux,k'}) + \sum_{(x,y,k') \in Z_{uv,k, 1,1,0}} \P(\mc{M}_{vy,k'})\\ & \le 3 \cdot n \rbrac{z_2 + g_2}\cdot n^{-3} \sbrac{\frac{a}{qc} + \k \rbrac{p^{-8}g_{ab} + p^{-8} g_c}}\\ & \le n^{-2} \sbrac{\frac{3az_2}{qc} + \k \rbrac{p^{-3} g_2 + p^{-5}g_{ab} + p^{-5} g_c}}, \end{align*} where in the latter we use the fact that $g_2 \le 10z_2/11$ (since $g_2=o(z_2)$) and $z_2 \le p^3/3$. \end{proof} \subsection{Crude upper bounds} Here we will (crudely) bound probabilities of intersections of the events defined in the previous subsection. \begin{claim}\label{claim:intersections} The following holds in the good event $\mc{E}_i$. \begin{enumerate}[label=(\roman*)] \item\label{claim:intersections:LL} Fix vertices $v \neq v'$ and colors $k$ and $k'$ (where we allow $k=k'$). Then, \[ \P(\mc{L}_{v,k}(i) \cap \mc{L}_{v',k'}(i)) = O\rbrac{n^{-3 + 8\d}}. \] \item\label{claim:intersections:LM} Fix a vertex $v$, colors $k$ and $k'$ and an edge $e$. \begin{enumerate}[label=$\bullet$] \item If $k' \neq k$, then \[ \P(\mc{L}_{v,k}(i) \cap \mc{M}_{e, k'}(i)) = O\rbrac{n^{-4 + 8\d}}. \] \item If $v$ is not incident with $e$, then \[ \P(\mc{L}_{v,k}(i) \cap \mc{M}_{e, k'}(i)) \le \P(\mc{L}_{v,k}(i) \cap \mc{M}_{e, \bullet}(i)) = O\rbrac{n^{-4 + 8\d}}. \] \item If $v$ is incident with $e$, then \[ \P(\mc{L}_{v,k}(i) \cap \mc{M}_{e, k'}(i)) \le \P(\mc{L}_{v,k}(i) \cap \mc{M}_{e, \bullet}(i)) = O\rbrac{n^{-3 + 8\d}}. \] \end{enumerate} \item\label{claim:intersections:MM} Fix distinct (but possibly adjacent) edges $e$ and $e'$ and a color $k$. Then, \[ \P(\mc{M}_{e, k}(i) \cap \mc{M}_{e', \bullet}(i)) = O\rbrac{n^{-4 + 8\d}} \quad \text{ and }\quad \P(\mc{M}_{e, \bullet}(i) \cap \mc{M}_{e', \bullet}(i)) = O\rbrac{n^{-3 + 8\d}}. 
\] \item\label{claim:intersections:LN} Fix a vertex $v$, colors $k$ and $k'$ (possibly equal) and an edge $e$ (possibly incident with $v$). Then, \[ \P(\mc{L}_{v,k}(i) \cap \mc{N}_{e, k'}(i)) = O\rbrac{n^{-3 + 8\d}}. \] \item\label{claim:intersections:NN} Fix distinct (but possibly adjacent) edges $e, e'$, and colors $k, k'$ (possibly equal). Then, \[ \P(\mc{N}_{e, k} \cap \mc{N}_{e', k'}) = O\rbrac{n^{-3 + 12\d}}. \] \item\label{claim:intersections:NM} Fix edges $e, e'$ and colors $k, k'$. \begin{enumerate}[label=$\bullet$] \item If we assume nothing about $e, e'$ being distinct or nonadjacent or $k, k'$ being distinct, then, \[ \P(\mc{N}_{e, k} \cap \mc{M}_{e', k'}) \le \P(\mc{N}_{e, k} \cap \mc{M}_{e', \bullet}) = O\rbrac{n^{-3 + 8\d}}. \] \item If $k \neq k'$ and $e \neq e'$ are adjacent, then \[ \P(\mc{N}_{e, k} \cap \mc{M}_{e', k'}) = O\rbrac{n^{-4 + 8\d}}. \] \item Suppose $e=uv$ and $e'=xy$ are distinct and nonadjacent, and $k=k'$. Suppose further that it is not the case that $ux$ has the same color as $vy$ or that $uy$ has the same color as $vx$. Then, \[ \P(\mc{N}_{e, k} \cap \mc{M}_{e', k'}) = O\rbrac{n^{-4 + 8\d}}. \] \end{enumerate} \end{enumerate} \end{claim} \begin{proof} The above bounds have fairly straightforward proofs and, therefore, we omit most of them. Here we only show details for bounds in Parts \ref{claim:intersections:LL} and~\ref{claim:intersections:LN}. We start with the following observation. Fix any oriented triangle $(v, v', v'')$ and pair of colors $(k, k')$. The probability that at step $i$ we choose $(v, v', v'')$ to color, and then choose the color pair $(k, k')$ to use, is \begin{equation}\label{eqn:Pcrude} \frac{1}{3Q} \cdot \frac{1}{C_{vv'v''}} = O\rbrac{\frac{1}{n^3q \cdot n^2 c}} = O\rbrac{\frac{1}{n^3p^3 \cdot n^2 p^5}} = O\rbrac{n^{-5 + 8\d}}. 
\end{equation} \bigskip \noindent \emph{Part~\ref{claim:intersections:LL}:} For the event $\mc{L}_{v,k} \cap \mc{L}_{v',k'}$ to happen, the triangle chosen at step $i$ must contain both $v$ and $v'$ and so there are a linear number of choices for the triangle. The color pair must include $k$ so there is at most a linear number of choices for the color pair. Since each possibility occurs with probability at most $O\rbrac{n^{-5 + 8\d}}$ by \eqref{eqn:Pcrude}, we have \[ \P(\mc{L}_{v,k}(i) \cap \mc{L}_{v',k'}(i)) = O(n) \cdot O(n) \cdot O\rbrac{n^{-5 + 8\d}} = O\rbrac{n^{-3 + 8\d}}. \] \bigskip \noindent \emph{Part~\ref{claim:intersections:LN}:} Suppose $e$ is an uncolored edge at step $i$. Let $P(e, k)=P(e, k, i)$ be the set of all pairs $(e^*, k^*)$ where $e^*$ is an edge and $k^*$ is a color such that coloring $e^*$ the color $k^*$ would forbid $k$ at $e$ through an alternating 4-path. More precisely, if $e=wx$ then $P(e, k)$ is the following set: \begin{align*} &\{(yz, k): \mbox{$yz$ is not adjacent to $wx$, and $wy$ has the same color as $zx$} \} \\ &\qquad \cup \{(wy, k''): \mbox{ there exists some $z$ where $yz$ has color $k$ and $zx$ has color $k''$}\} \\ &\qquad\qquad \cup \{(xz, k''): \mbox{ there exists some $y$ where $wy$ has color $k''$ and $yz$ has color $k$}\}. \end{align*} We split the proof into cases. Due to~\eqref{eqn:Pcrude} it suffices to show that the number of choices for the triangle (containing $v$) and colors (containing $k$) is at most $O(n^2)$. In order for $\mc{N}_{e, k'}$ to happen there must be some pair $(e^*, k^*) \in P(e, k')$ such that $e^*$ gets assigned the color $k^*$. Suppose that $e^*$ is adjacent to $e$ and $k^*=k$. Since no vertex is adjacent to more than two edges of the same color, there are $O(1)$ choices for $(e^*,k^*)$ with this property. There are $O(n)$ triangles containing $e^*$, and $O(n)$ ways to choose the other color in the color pair. Now suppose that $e^*$ is adjacent to $e$ and $k^*\neq k$. 
There are $O(n)$ choices for $(e^*,k^*)$ with this property. There are $O(n)$ triangles containing $e^*$, and the color pair must consist of $k$ and $k^*$. Next assume that $e^*$ is not adjacent to $e$ and does not contain $v$. There are $O(n)$ choices for $e^*$, and once we choose one the triangle is determined. One color must be $k$ and we have $O(n)$ choices for the other color. Finally assume that $e^*$ is not adjacent to $e$ and contains $v$. There are $O(1)$ choices for $e^*$, and so $O(n)$ choices for the triangle. One color must be $k$ and we have $O(n)$ choices for the other color. This completes the proof of Part~\ref{claim:intersections:LN}. As we mentioned, the proofs for the rest of the parts of the claim are very similar and the reader can easily check them. Some of them use the bounds in \ref{E:crude}. \end{proof} \section{Variables \texorpdfstring{$Q$ and $Y$}{}}\label{sec:QY} We now begin verifying that the good event holds, starting with~\ref{E:Q} and~\ref{E:Y}. Both of the variables $Q$ and~$Y$ were tracked by Bohman, Frieze and Lubetzky in~\cite{BFL10}, so we will use a weaker form of their results. They showed that a.a.s.\ for all \[ i \le i_{0} = \frac 16 n^2 - \frac 53 n^{7/4} \log^{5/4} n \] we have $$ n^3q(t) - n^2 \log n \cdot \frac{(5-30\log p(t))^2}{p(t)} \le Q(i) \le n^3q(t) + \frac{1}{3}n^2p(t), \textrm{ and} $$ $$ |Y_{uu'}-y(t)n|\le \sqrt{n\log n} \cdot (5-30\log p(t)) \textrm{ for all } uu'. $$ These bounds are better than we need so we will loosen and simplify them. Note that as long as $\d<1/4$ we have that $i_{max} \le i_0$. Thus, using that $p(t) \ge p(t_{max}) = n^{-\d}$, the above bounds on $Q$ and $Y$ imply that for all $i \le i_{max}$ \[ \abrac{Q - n^3 q(t)} \le n^{2+2\d} \] and \[ \abrac{Y_{uu'}-y(t)n} \le n^{1/2+\d}. \] Thus, $\mc{E}_{i_{max}}$ a.a.s.\ does not fail due to conditions \ref{E:Q} or \ref{E:Y}.
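The sections that follow repeatedly invoke the differential relations \eqref{eqn:abdiffeq}--\eqref{eqn:z2diffeq}. As noted earlier, they can be verified with computer algebra; the following self-contained numeric sketch checks the same identities at a few arbitrary sample points $(p, r, s)$, using complex-step differentiation for the partial derivatives:

```python
# Numeric spot-check of the ODE system for the scaled trajectories.
# The time derivative is computed via the chain rule
#   f'(t) = (df/dp) p'(t) + (df/dr) r'(t),
# with p'(t) = -6 and r'(t) = -(648/25)(1-s)^2 (1-p)^2 r.  The partial
# derivatives are obtained by complex-step differentiation, which is exact
# to machine precision for these polynomial trajectories.
H = 1e-20

def trajectories(s):
    return {
        'y':  lambda p, r: p ** 2,
        'q':  lambda p, r: p ** 3 / 6,
        'a':  lambda p, r: (5 / 6) * s * (1 - s) ** 2 * p ** 5 * r ** 2,
        'c1': lambda p, r: (5 / 6) * s * (1 - s) * p ** 2 * r,
        'c2': lambda p, r: (5 / 6) * (1 - s) ** 2 * p ** 3 * r ** 2,
        'd':  lambda p, r: (5 / 6) * s * (1 - s) ** 3 * p ** 7 * r ** 3,
        'z0': lambda p, r: (5 / 6) * (1 - s) ** 5 * p ** 9 * r ** 3,
        'z1': lambda p, r: (1 - s) ** 4 * (1 - p) * p ** 6 * r ** 2,
        'z2': lambda p, r: (6 / 5) * (1 - s) ** 3 * (1 - p) ** 2 * p ** 3 * r,
    }

def ddt(f, p, r, s):
    dfdp = f(p + 1j * H, r).imag / H
    dfdr = f(p, r + 1j * H).imag / H
    return -6 * dfdp - (648 / 25) * (1 - s) ** 2 * (1 - p) ** 2 * r * dfdr

def residuals(p, r, s):
    # Each entry is (left-hand side) - (right-hand side) of one ODE.
    T = trajectories(s)
    y, q, a, d = T['y'](p, r), T['q'](p, r), T['a'](p, r), T['d'](p, r)
    c1, c2 = T['c1'](p, r), T['c2'](p, r)
    c = c1 * c2
    z0, z1, z2 = T['z0'](p, r), T['z1'](p, r), T['z2'](p, r)
    return [
        ddt(T['a'], p, r, s) + 5*a*d/(2*q*c) + 6*a**2*z2/(q*c) + 2*a*y/q,
        ddt(T['c1'], p, r, s) + 5*d*c1/(3*q*c) + 3*a*z2*c1/(q*c),
        ddt(T['c2'], p, r, s) + 5*d*c2/(2*q*c) + 6*a*z2*c2/(q*c),
        ddt(T['d'], p, r, s) + 20*d**2/(6*q*c) + 9*a*z2*d/(q*c) + 3*y*d/q,
        ddt(T['z0'], p, r, s) + 5*d*z0/(q*c) + 9*a*z2*z0/(q*c) + 3*y*z0/q,
        ddt(T['z1'], p, r, s) - a*z0/(q*c) + 10*d*z1/(3*q*c)
            + 6*a*z2*z1/(q*c) + 2*y*z1/q,
        ddt(T['z2'], p, r, s) - 2*a*z1/(q*c) + 5*d*z2/(3*q*c)
            + 3*a*z2**2/(q*c) + y*z2/q,
    ]

for point in [(0.7, 0.9, 0.01), (0.3, 0.5, 0.02), (0.95, 0.99, 0.005)]:
    assert all(abs(res) < 1e-9 for res in residuals(*point))
```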
\section{Variable \texorpdfstring{$A$}{}}\label{sec:AB} In this section we bound the probability that $\mc{E}_{i_{max}}$ fails due to a variable of type $A$ straying too far from its trajectory and violating Condition \ref{E:A}. Several of the sections that follow will have a very similar structure, so we will explain our reasoning carefully in this section so we can go faster in future sections. In addition, we will only show details for representatives of the four following groups of variables \[ A_{u'u'',k'} \in \{A_{u'u'',k'},B_{uu',k}\}, \qquad C^{(1)}_{uu'u''}\in\{C^{(1)}_{uu'u''}, C^{(2)}_{uu'u''}\},\qquad D_{u,k} \in \{D_{u,k}, E_{u'',k}, F_{u'',k'}\} \] and \[ Z_{uv,k,0,0,0}\in\{Z_{uv,k,0,0,0}, Z_{uv,k,1,0,0},Z_{uv,k,0,1,0},Z_{uv,k,0,0,1},Z_{uv,k,0,1,1},Z_{uv,k,1,0,1},Z_{uv,k,1,1,0}\}. \] The variables within a group require similar calculations, and in some cases have the exact same trajectory. In the case of the $Z$ variables, extending the work on the remaining types requires some routine, but tedious, additional details which we omit for readability. Each of Conditions \ref{E:A}-\ref{E:Z2} states that some random variable lies within some interval centered at its trajectory, i.e.\ it is equivalent to a statement of the form \[ X(i) \in [x_1(t), x_2(t)], \] where $X=X(i)$ is our random variable and $x_1, x_2$ are deterministic functions of $t$ (possibly depending on $n$ as well). We use the following strategy to bound the probability that $X$ leaves the interval. First we define a pair of auxiliary random variables \[ X^+(i) := \begin{cases} X(i) - x_2(t) & \text{ if $\mc{E}_{i-1}$ holds},\\ X^+(i-1) & \text{ otherwise}, \end{cases} \] and \[ X^-(i) := \begin{cases} X(i) - x_1(t) & \text{ if $\mc{E}_{i-1}$ holds},\\ X^-(i-1) & \text{ otherwise}. \end{cases} \] Note that if $\mc{E}_{i-1}$ holds but $\mc{E}_i$ fails due to $X$ leaving its interval, then we have either $X^+(i)>0$ or $X^-(i)<0$. 
To bound the probability of that event we show that $X^+$ is a supermartingale, and $X^-$ is a submartingale (showing the latter is typically very similar to the former, so we will often omit some of the details). The bound on the failure probability then follows from Freedman's inequality. We now proceed to apply the strategy described above to the variables of type $A$. We let \[ A^{\pm}_{u'u'',k'}=A^{\pm}_{u'u'',k'}(i):= \begin{cases} A_{u'u'',k'} - n^2(a(t) \pm g_{ab}(t)) & \text{ if $\mc{E}_{i-1}$ holds},\\[3pt] A^{\pm}_{u'u'',k'}(i-1) & \text{ otherwise}. \end{cases} \] To check that $A^+_{u'u'',k'}$ is a supermartingale, we must show that $\mathbb{E}[\Delta A^+_{u'u'',k'}|\mathcal F_i] \le 0$ where we define $\Delta A^+_{u'u'',k'}:= A^+_{u'u'',k'}(i+1) - A^+_{u'u'',k'}(i)$. We first deal with a trivial case. If at step $i$ we have that $\mc{E}_i$ fails, then by definition we have $\Delta A^+_{u'u'',k'}=0$ and we are done. Henceforth assume that $\mc{E}_i$ holds. We estimate the one-step change in $A_{u'u'',k'}$. This variable never increases, and each pair $(u, k) \in A_{u'u'',k'}$ can be lost in one of the following ways: \begin{enumerate}[label= $\bullet$] \item one of the vertices $u$, $u'$, $u''$ can get hit by $k$, \item one of the edges $uu'$, $uu''$ can have $k$ forbidden due to a potential alternating 4-cycle, or \item one of the edges $uu'$, $uu''$ can get colored. \end{enumerate} Thus, for each pair $(u, k) \in A_{u'u'',k'}(i)$, the probability that $(u, k) \notin A_{u'u'',k'}(i+1)$ is \[ \P\sbrac{ \bigcup_{z \in \{u,u',u''\}} \mc{L}_{z,k} \;\;\; \cup \bigcup_{e \in \{uu', uu''\}} \rbrac{ \mc{N}_{e,k} \cup \mc{M}_{e, \bullet}}} \] and so \[ \mathbb{E}[\Delta A_{u'u'',k'}|\mathcal F_i] = -\sum_{(u, k)\in A_{u'u'',k'}} \P\sbrac{ \bigcup_{z \in \{u,u',u''\}} \mc{L}_{z,k} \;\;\; \cup \bigcup_{e \in \{uu', uu''\}} \rbrac{ \mc{N}_{e,k} \cup \mc{M}_{e, \bullet}}}.
\] Now we will approximate the above probability by using the union bound with an error term as follows. Let $E_1,\dots,E_k$ be events. Then, \begin{equation}\label{prob:inequality} \sum_{i=1}^k \P(E_i) - \sum_{1\le i<j\le k} \P(E_i\cap E_j) \le \P\sbrac{\bigcup_{i=1}^k E_i} \le \sum_{i=1}^k \P(E_i). \end{equation} This together with Claim~\ref{claim:intersections} and the assumption that the good event $\mc{E}_i$ holds implies that \begin{align*} &\P\sbrac{ \bigcup_{z \in \{u,u',u''\}} \mc{L}_{z,k} \;\;\; \cup \bigcup_{e \in \{uu', uu''\}} \rbrac{ \mc{N}_{e,k} \cup \mc{M}_{e, \bullet}}}\\ &\qquad\qquad\qquad =\sum_{z \in \{u,u',u''\}} \P\rbrac{ \mc{L}_{z,k}} + \sum_{e \in \{uu', uu''\}} \sbrac{\P\rbrac{ \mc{N}_{e,k}} + \P\rbrac{ \mc{M}_{e, \bullet}}} + O(n^{-3 + 12\d}). \end{align*} Consequently, \begin{align*} \mathbb{E}[\Delta A_{u'u'',k'}|\mathcal F_i] &= -\sum_{(u, k)\in A_{u'u'',k'}} \cbrac{ \sum_{z \in \{u,u',u''\}} \P\rbrac{ \mc{L}_{z,k}} + \sum_{e \in \{uu', uu''\}} \sbrac{\P\rbrac{ \mc{N}_{e,k}} + \P\rbrac{ \mc{M}_{e, \bullet}}} + O(n^{-3 + 12\d})}. \nonumber \end{align*} We will again use the assumption that $\mc{E}_i$ holds to give deterministic upper and lower bounds on $\mathbb{E}[\Delta A_{u'u'',k'}|\mathcal F_i]$. Due to Claim~\ref{claim:LMMN} we have \begin{align} \mathbb{E}[\Delta A_{u'u'',k'}|\mathcal F_i] & \le \begin{multlined}[t] -n^2(a - g_{ab})\left\{ 3n^{-2} \sbrac{ \frac{5d}{6qc} - \k \rbrac{p^{-8}g_{def} + p^{-6} g_c}} \right. \nonumber\\ \left. + 2 n^{-2} \sbrac{\frac{3az_2}{qc} + \frac{y}{q} - \k \rbrac{p^{-3} g_2 + p^{-5}g_{ab} + p^{-5} g_c}} + O\rbrac{n^{-3 + 12\d} + n^{-5/2+4\d} }\right\} \end{multlined}\nonumber\\ & \le -(a - g_{ab})\left\{ \frac{5d}{2qc}+ \frac{6az_2}{qc} + \frac{2y}{q} - 10 \k \rbrac{p^{-3} g_2 + p^{-5}g_{ab} + p^{-6} g_c} \right\} + O(n^{-1/2 + 4\d}), \nonumber \end{align} where on the last line we used the fact that $p^{-3}g_{def} = g_{ab}$ and assumed that $\delta$ is small to simplify the big-O term.
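Identities such as $p^{-3}g_{def} = g_{ab}$ follow from the common shape $n^{-\omega}p^{-100\kappa+j}$ of the error functions, so they reduce to bookkeeping on the offsets $j$. A minimal check (offsets read off from the definitions in Section~\ref{sec:goodevent}):

```python
# Every error function of the form n^{-omega} * p^{-100*kappa + j}; the
# offsets j below are read off from the definitions of g_ab, g_{c_1},
# g_{c_2}, g_{def}, g_0, g_1, g_2 respectively.
offset = {'ab': 0, 'c1': -2, 'c2': -1, 'def': 3, '0': 5, '1': 1, '2': -1}

assert offset['def'] - 3 == offset['ab']   # p^{-3} g_def = g_ab (used above)
assert offset['1'] - 1 == offset['ab']     # p^{-1} g_1  = g_ab
assert offset['2'] == offset['c2']         # g_2 and g_{c_2} agree as powers of p
```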
Now we will take the above expression and separate the ``main terms'' from the ``first-order error term'' (the terms involving error functions $g_{ab}$, etc.) and the ``lesser-order error terms'' (in the big-O). We will be precise for the main terms and generous for error terms. By Claim~\ref{claim:helpersloppy} we get \begin{align*} g_{ab}\left(\frac{5d}{2qc}+ \frac{6az_2}{qc} + \frac{2y}{q}\right)&\le g_{ab}\left(125 p^{-1} + 60 + 20 p^{-1}\right) \le 205 g_{ab}p^{-1} \le \kappa g_{ab} p^{-1}. \end{align*} Recalling that $\k$ is large, $a(t) \le p^5$ (see \eqref{eqn:atrajdef}), and $g_{ab}(t)=o(a(t))$, we obtain \begin{align*} \mathbb{E}[\Delta A_{u'u'',k'}|\mathcal F_i] & \le - \frac{5ad}{2qc}- \frac{6a^2z_2}{qc} - \frac{2ay}{q} + 25 \k \rbrac{p^{2} g_2 + p^{-1}g_{ab} + p^{-1} g_c} + O(n^{-1/2 + 4\d}). \end{align*} Similarly, we have \begin{align} \mathbb{E}[\Delta A_{u'u'',k'}|\mathcal F_i] & \ge - \frac{5ad}{2qc}- \frac{6a^2z_2}{qc} - \frac{2ay}{q} - 25 \k \rbrac{p^{2} g_2 + p^{-1}g_{ab} + p^{-1} g_c} + O(n^{-1/2 + 4\d}).\label{eqn:DeltaAlower} \end{align} We must also estimate the one-step change in $n^2(a+g_{ab})$, i.e.\ the deterministic part of $A^{\pm}_{u'u'',k'}$. We use Taylor's theorem with the Lagrange form of the remainder: for a function $h: \mathbb R \rightarrow \mathbb R$ twice differentiable on $(x_0,x)$ and $h'$ continuous on $[x_0,x]$, we have $$ h(x) - h(x_0) = h'(x_0)(x-x_0) + h''(x^*)(x-x_0)^2/2 $$ for some $x^*\in [x_0,x]$. In our case, $x_0=i/n^2 = t, x= (i+1)/n^2 = t + n^{-2}$. Thus for some $t^* \in [t, t+n^{-2}]$ we have \begin{equation}\label{eqn:DeltaTrajA} \Delta n^2(a+g_{ab}) = a'(t) + g'_{ab}(t) + \frac{a''(t^*) + g''_{ab}(t^*)}{2n^2} = a'(t) + g'_{ab}(t) + O(n^{-2}), \end{equation} where the last expression follows from Propositions \ref{obs:CrudeDerivTraj} and \ref{obs:CrudeDerivErr}.
Putting \eqref{eqn:DeltaAlower} and \eqref{eqn:DeltaTrajA} together we have \begin{align*} \mathbb{E}[\Delta A_{u'u'',k'}^+|\mathcal F_i] & \le - \frac{5ad}{2qc}- \frac{6a^2z_2}{qc} - \frac{2ay}{q} -a' -g_{ab}' + 25 \k \rbrac{p^{2} g_2 + p^{-1}g_{ab} + p^{-1} g_c} +O(n^{-1/2 + 4\d})\\ & = -g_{ab}' + 25 \k \rbrac{p^{2} g_2 + p^{-1}g_{ab}+ p^{-1} g_c} + O(n^{-1/2 + 4\d})\\ & \le -5 \k \rbrac{p^{2} g_2 + p^{-1}g_{ab} + p^{-1} g_c} + O(n^{-1/2 + 4\d})\\ & \le -\Omega\rbrac{n^{-\omega}}, \end{align*} where the second line follows from \eqref{eqn:abdiffeq} which says $a' = - \frac{5ad}{2qc}- \frac{6a^2z_2}{qc} - \frac{2ay}{q}$, the third line follows from~\eqref{eq:err-sup-ab}, and the final line follows from our choice of the error functions. Thus $A_{u'u'',k'}^+$ is a supermartingale. The reader can check that $A_{u'u'',k'}^-$ is a submartingale using an entirely ``symmetric'' calculation (i.e.\ we repeat the above calculation with the directions of inequalities reversed and the signs of the error terms reversed) using \eqref{eqn:DeltaAlower}. We will apply Freedman's inequality from Lemma~\ref{lem:Freedman}. Our supermartingale will be $A_{u'u'',k'}^+$. First we determine a suitable value for $D$. Note that at each step $i$, the number of edges that have a color forbidden (when it was available at step $i-1$) is $O(n)$. Also, any edge has $O(1)$ colors forbidden at each step. Thus, the number of pairs $(e, k)$ such that $k$ was available at $e$ at step $i-1$ but forbidden at step $i$ is $O(n)$. But the only way for a pair $(k, k')$ that is available at a triple $(u, u', u'')$ at step $i-1$ to become forbidden at step $i$ is to forbid one of the colors $k, k'$ at one of the edges in $uu'u''$. Thus, we have $|\Delta A_{u'u'',k'}| = O(n)$.
Meanwhile we have by \eqref{eqn:DeltaTrajA} and Propositions \ref{obs:CrudeDerivTraj} and \ref{obs:CrudeDerivErr} that \[ \Delta n^2(a+g_{ab}) = a' + g_{ab}' + O(n^{-2}) = O(1) \] and so \[ |\Delta A_{u'u'',k'}^+| \le |\Delta A_{u'u'',k'}|+ |\Delta n^2(a+g_{ab})| = O(n). \] Thus, using that $|\Delta A_{u'u'',k'}^+| = O(n)$ in the good event we get \[ \mbox{{\bf Var}}[\Delta A_{u'u'',k'}^+ | \mc{F}_{k}] \le \mathbb{E}[ (\Delta A_{u'u'',k'}^+)^2 | \mc{F}_{k}] = O(n) \cdot \mathbb{E}[ |\Delta A_{u'u'',k'}^+| | \mc{F}_{k}]. \] In order to bound~$\mathbb{E}[ |\Delta A_{u'u'',k'}^+| | \mc{F}_{k}]$, first observe that \[ \mathbb{E}[ \Delta A_{u'u'',k'} | \mc{F}_{k}] = O(1) \qquad \textrm{and} \qquad- \mathbb{E}[ \Delta A_{u'u'',k'} | \mc{F}_{k}] = O(1), \] by~\eqref{eqn:DeltaAlower}, and hence \[ \mathbb{E}[ |\Delta A_{u'u'',k'}^+| | \mc{F}_{k}] \le \mathbb{E}[ |\Delta A_{u'u'',k'}| | \mc{F}_{k}] + \mathbb{E}[ |\Delta n^2(a+g_{ab}) | | \mc{F}_{k}] = O(1). \] Consequently, $\mbox{{\bf Var}}[\Delta A_{u'u'',k'}^+ | \mc{F}_{k}] = O(n)$ and for all $i \le i_{max} < \frac 16 n^2$ we have \[ V(i) = \sum_{0 \le k \le i} \mbox{{\bf Var}}[ \Delta A_{u'u'',k'}^+ | \mc{F}_{k}] = O(n^3). \] In view of the above calculations we are going to apply Freedman's inequality with $b=O(n^3)$ and $D=O(n)$. We still need to estimate the initial value $A_{u'u'',k'}^+(0)$ of our supermartingale. Note that \begin{equation}\label{eqn:AInit} A_{u'u'',k'}^+(0) = A_{u'u'',k'}(0) - n^2(a(0) + g_{ab}(0)). \end{equation} Recall that $A_{u'u'',k'}(0)$ is the number of pairs $(u, k)$ such that $(k, k')$ is available at $(u, u', u'')$ at step $0$. The only requirement here is that $k' \in S_u$, and $k \notin S_u, S_{u'}, S_{u''}$. Observe that $A_{u'u'',k'}(0)$ is a binomial random variable: $A_{u'u'',k'}(0)\sim\textrm{Bin}((n-2)|\textsc{col}|, s(1-s)^3)$.
Thus, the expected value of $A_{u'u'',k'}(0)$ is \[ \mathbb{E}[A_{u'u'',k'}(0)] = (n-2)|\textsc{col}|s(1-s)^3 = n^2 a(0) + O(n) \quad \textrm{and}\quad \mathbb{E}[A_{u'u'',k'}(0)] = \Theta(n^2) \] so an easy application of the Chernoff bounds~\eqref{Chernoff_upper} and~\eqref{Chernoff_lower} tells us that a.a.s. $|A_{u'u'',k'}(0) - n^2a(0)| \le n^{3/2}$ for all $u', u'', k'$. Returning to \eqref{eqn:AInit} we have \[ A_{u'u'',k'}^+(0) \le n^{3/2} - n^2 g_{ab}(0) = n^{3/2} - n^{2-\omega} \le -\frac12 n^{2-\omega}. \] Thus for our application of Freedman's inequality we get to use $\lambda = \frac12 n^{2-\omega}$. Freedman's inequality then gives us that the probability that $A_{u'u'',k'}^+$ becomes positive before step $i_{max}$ is at most \[ \displaystyle \exp\left(-\frac{\lambda^2}{2(b+D\lambda) }\right) = \exp\cbrac{-\Omega\rbrac{\frac{\rbrac{n^{2-\omega}}^2}{n^3 + n \cdot n^{2-\omega} }}}= \exp\cbrac{-\Omega\rbrac{ n^{1-2\omega}}}. \] Since there are $O(n^3)$ choices for $u', u'', k'$, we have by the union bound that the probability that any such choice ever sees $A_{u'u'',k'}^+$ become positive before step $i_{max}$ is at most \[ O(n^3) \cdot \exp\cbrac{-\Omega\rbrac{ n^{1-2\omega}}} = o(1). \] Similarly, one can apply Freedman's inequality to the supermartingales $-A_{u'u'',k'}^-$ to show that the probability that any of them becomes positive before step $i_{max}$ is $o(1)$. Thus, a.a.s.\ the good event $\mc{E}_{i_{max}}$ does not fail due to Condition \ref{E:A}. Handling Condition \ref{E:B} is similar, since the type $B$ variables are similar to type $A$ (in particular they even have the same trajectory).
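Before turning to the type $B$ variables, we note that the Freedman estimate above is easy to sanity-check numerically. The following sketch (an illustration only, not part of the proof; all implied constants in $b$, $D$ and $\lambda$ are set to $1$, which is an assumption made purely for this check) evaluates the tail bound $\exp(-\lambda^2/(2(b+D\lambda)))$ together with the $O(n^3)$ union-bound factor:

```python
import math

def freedman_tail(lam, b, D):
    """Freedman-type tail bound exp(-lam^2 / (2(b + D*lam)))."""
    return math.exp(-lam ** 2 / (2.0 * (b + D * lam)))

def a_variable_failure_bound(n, omega):
    """Tail bound for one supermartingale A^+, multiplied by the O(n^3)
    union-bound factor, with b = n^3, D = n and lam = n^(2-omega)/2
    (all implied constants set to 1 for illustration)."""
    lam = 0.5 * n ** (2.0 - omega)
    return n ** 3 * freedman_tail(lam, float(n) ** 3, float(n))
```

For small $\omega$ the product behaves like $n^3\exp(-\Omega(n^{1-2\omega}))$; for instance, with $\omega = 0.01$ it is already below $10^{-20}$ at $n = 10^3$.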
To demonstrate the similarity, note that for $(u'', k') \in B_{uu',k}(i)$, the probability that $(u'', k') \notin B_{uu',k}(i+1)$ is \begin{align} &\P\sbrac{ \mc{L}_{u'',k} \cup \bigcup_{z\in\{u',u''\}} \mc{L}_{z,k'}\;\;\; \cup \mc{N}_{uu'',k} \cup\mc{N}_{u'u'',k'} \cup \bigcup_{e\in\{uu'',u'u''\}} \mc{M}_{e, \bullet}}.\nonumber \end{align} And although the indices are different, there are exactly the same number of each of the events $\mc{L}_{z,k^*},$ $\mc{N}_{e,k^*},$ $\mc{M}_{e,\bullet}$, which will yield precisely the same estimates as for $A_{u'u'',k'}$. Thus, to avoid too much repetition we will not show the work for Condition \ref{E:B}. \section{Variable \texorpdfstring{$C^{(1)}$}{}}\label{sec:C1C2} In this section, we address \ref{E:C1}. Now define \[ {\CIpm}={\CIpm}(i):= \begin{cases} \CI - n ( c_1(t)\pm g_{c1} ) & \text{if $\mc{E}_{i-1}$ holds},\\[3pt] {\CIpm}(i-1) & \text{otherwise}. \end{cases} \] We demonstrate that $\CIp$ is a supermartingale. To estimate the one-step change, note that we may lose $k' \in \CI(i)$ if $u'$ or $u''$ is hit by $k'$, or if $u'u''$ becomes part of an alternating $(u'u'',k')$-path. Thus, due to~\eqref{prob:inequality} and Claim~\ref{claim:intersections}, we get \begin{align*} \mathbb{E}[\Delta \CI|\mathcal F_i] &= -\sum_{k'\in \CI} \P\sbrac{ \bigcup_{z \in \{u',u''\}} \rbrac{\mc{L}_{z,k'} \cup \mc{N}_{u'u'',k'}}}\\ &= -\sum_{k'\in \CI} \sbrac{ \sum_{z \in \{u',u''\}} \P(\mc{L}_{z,k'}) + \P(\mc{N}_{u'u'',k'}) + O(n^{-3+8\delta})}. \end{align*} Now, Claim~\ref{claim:LMMN} yields \begin{align*} \mathbb{E}[\Delta \CI|\mathcal F_i] & \le \begin{multlined}[t] -n(c_1-g_{c_1})\cbrac{ 2n^{-2} \sbrac{ \frac{5d}{6qc} - \k \rbrac{p^{-8}g_{def} + p^{-6} g_c}} \right.\\ \left.
\qquad\qquad\qquad\qquad+ n^{-2} \sbrac{\frac{3az_2}{qc} - \k \rbrac{p^{-3} g_2 + p^{-5}g_{ab} + p^{-5} g_c}}} + O(n^{-2+8\delta}) \end{multlined}\\ & \le -n^{-1}(c_1-g_{c_1}) \left[\frac{5d}{3qc}+\frac{3az_2}{qc} - 2\k(p^{-3}g_2 + p^{-5}g_{ab} + p^{-6}g_c+p^{-8}g_{def})\right] + O(n^{-2+8\delta})\\ & \le n^{-1}\left(-\frac{5dc_1}{3qc}-\frac{3az_2c_1}{qc} + 20 \k (p^{-1}g_2 + p^{-3}g_{ab}+ p^{-4}g_c+p^{-6}g_{def}) \right) + O(n^{-2+8\delta}), \end{align*} where in the last line we use bounds from Claim~\ref{claim:helpersloppy} together with $g_{c_1} = o(c_1)$ and $c_1 \le p^2$. The lower bound will follow by symmetric calculations. Now by Taylor's theorem we have $\Delta(n(c_1 + g_{c_1})) = n^{-1}(c_1' + g_{c_1}') + O(n^{-3})$. Therefore, in the good event by applying~\eqref{eqn:c1diffeq} and~\eqref{eq:err-sup-c1}, we obtain \begin{align*} \begin{split} \mathbb{E}[\Delta \CIp|\mathcal F_i] &\le n^{-1}\left(-c_1' -\frac{5dc_1}{3qc}-\frac{3az_2c_1}{qc} - g_{c_1}' \right.\\ &\left.\qquad\qquad+ 20 \k (p^{-1}g_2 + p^{-3}g_{ab} + p^{-4}g_c+p^{-6}g_{def})\vphantom{\frac{3az_2c_1}{qc}}\right) + O(n^{-2+8\delta}) \end{split}\\ & = n^{-1}\left(- g_{c_1}'+ 20 \k (p^{-1}g_2 + p^{-3}g_{ab} + p^{-4}g_c+p^{-6}g_{def})\right)+ O(n^{-2+8\delta})\\ & \le n^{-1}\left(-10 \k (p^{-1}g_2 + p^{-3}g_{ab} + p^{-4}g_c+p^{-6}g_{def})\right)+ O(n^{-2+8\delta})\\ &\le -\Omega(n^{-1-\omega}). \end{align*} Now to apply Freedman's inequality, we estimate the maximum one-step change of $\CI$. Since the number of ways to forbid a color at an edge in one step is $O(1)$, we get that $|\Delta \CI| = O(1)$, and by Propositions~\ref{obs:CrudeDerivTraj} and \ref{obs:CrudeDerivErr}, $\Delta n(c_1+g_{c_1}) = n^{-1}(c_1' + g_{c_1}') + O(n^{-3})$ yielding \[ |\Delta n(c_1+g_{c_1})| \le n^{-1}(|c_1'| + |g_{c_1}'|) + O(n^{-3}) = O(n^{-1}|c_1'|) = O(n^{-1}) \] and so \[ |\Delta \CIp| \le |\Delta \CI|+ |\Delta n(c_1+g_{c_1})| = O(1). \] Thus we let $D=O(1)$ in Freedman's inequality.
Further, we have \[ \mbox{{\bf Var}}[ \Delta \CIp | \mc{F}_{k}] \le \mathbb{E}[ (\Delta \CIp)^2 | \mc{F}_{k}] = O(1) \cdot \mathbb{E}[ |\Delta \CIp| | \mc{F}_{k}] = O(n^{-1}). \] Therefore, $V(i) = O(n)$ for all $i \le i_{max}$ and so we take $b= O(n)$. In addition, Chernoff's bound allows us to take $\lambda = \frac{1}{2}n^{1-\omega}$, and so Freedman's inequality demonstrates that the probability that $\CIp$ becomes positive before step $i_{max}$ is at most $\exp\cbrac{-\Omega\rbrac{ n^{1-2\omega}}}$, which beats the union bound over all $O(n^3)$ choices for $u$, $u'$ and $u''$. \section{Variable \texorpdfstring{$D$}{}}\label{sec:DEF} In this section, we address \ref{E:D}. Since \ref{E:E}--\ref{E:F} are very similar and these variables share the same trajectory, we will omit their calculations (see our discussion of $B$ type variables in Section~\ref{sec:AB}). Define \[ D^{\pm}_{u,k}=D^{\pm}_{u,k}(i):= \begin{cases} D_{u,k} - n^3 ( d(t)\pm g_{def} ) & \text{if $\mc{E}_{i-1}$ holds},\\[3pt] D^{\pm}_{u,k}(i-1) & \text{otherwise}. \end{cases} \] To bound the expected one-step change, note that we can lose $(u', u'', k') \in D_{u,k}$ in several ways: one of the edges could become colored, $u'$ or $u''$ could be hit by $k$ or $k'$, or one of the edges could become part of an alternating path.
Hence, using Claims~\ref{claim:intersections}, \ref{claim:LMMN} and~\ref{claim:helpersloppy} together with $g_{def}=o(d)$ and $d\le p^7$, yields \begin{align*} \mathbb{E}[\Delta D_{u,k}&|\mathcal F_i] = -\sum_{(u',u'',k')\in D_{u,k}} \P\left(\bigcup_{e \in \{uu',uu'', u'u''\}} \mc{M}_{e,\bullet} \cup \bigcup_{z \in \{u',u''\}} ((\mc{L}_{z,k}) \cup (\mc{L}_{z,k'})) \cup\bigcup_{e \in \{uu', uu''\}} \mc{N}_{e,k} \cup \mc{N}_{u'u'', k'}\right) \\ \begin{split} &=-\sum_{(u',u'',k')\in D_{u,k}} \left(\sum_{e \in \{uu',uu'',u'u''\}} \P(\mc{M}_{e,\bullet}) + \sum_{z \in \{u',u''\}} (\P(\mc{L}_{z,k})+ \P(\mc{L}_{z,k'})) \right.\\ &\left.\qquad\qquad\qquad+\sum_{e \in \{uu',uu''\}} \P(\mc{N}_{e,k}) + \P(\mc{N}_{u'u'', k'})+O(n^{-3+12\delta})\right) \end{split}\\ &\le \begin{multlined}[t] -n^{3}(d-g_{def})\left\{ 4n^{-2} \sbrac{ \frac{5d}{6qc} - \k \rbrac{p^{-8}g_{def} + p^{-6} g_c}} \right.\\ \left. \qquad\qquad+\ \ 3 n^{-2} \sbrac{\frac{3az_2}{qc} + \frac{y}{q} - \k \rbrac{p^{-3} g_2 + p^{-5}g_{ab} + p^{-5} g_c}}+ O(n^{-5/2+4\delta})\right\} + O(n^{12\delta}) \end{multlined}\\ &\le -(d-g_{def}) \sbrac{\frac{20d}{6qc}+\frac{9az_2}{qc}+\frac{3y}{q} - 7 \kappa \rbrac{p^{-3}g_2 + p^{-8}g_{def} + p^{-5}g_{ab}+p^{-6}g_c}}n + O(n^{1/2+4\delta})\\ &\le \sbrac{-\frac{20d^2}{6qc}-\frac{9az_2d}{qc}-\frac{3yd}{q} + 20 \kappa \rbrac{p^{4}g_2 + p^{-1}g_{def} + p^{2}g_{ab}+pg_c}}n + O(n^{1/2+4\delta}). \end{align*} On the other hand, by Taylor's theorem we have $\Delta(n^3 (d + g_{def})) = (d' + g_{def}')n + O(n^{-1})$.
Therefore, in the good event due to \eqref{eqn:defdiffeq} and \eqref{eq:err-sup-def}, we get \begin{align*} \mathbb{E}[\Delta D^{+}_{u,k}|\mathcal F_i] &\le \sbrac{-d'-\frac{20d^2}{6qc}-\frac{9az_2d}{qc}-\frac{3yd}{q} - g_{def}'+ 20 \kappa \rbrac{p^{4}g_2 + p^{-1}g_{def} + p^{2}g_{ab}+pg_c}}n + O(n^{1/2+4\delta})\\ &= \sbrac{- g_{def}'+ 20 \kappa \rbrac{p^{4}g_2 + p^{-1}g_{def} + p^{2}g_{ab}+pg_c}}n + O(n^{1/2+4\delta})\\ &\le\sbrac{- 10 \kappa \rbrac{p^{4}g_2 + p^{-1}g_{def} + p^{2}g_{ab}+pg_c}}n + O(n^{1/2+4\delta})\\ &\le - \Omega(n^{1-\omega}). \end{align*} As before, we apply Freedman's inequality to $D^{+}_{u,k}$ by first estimating the maximum one-step change of $D_{u,k}$. As discussed above, in a single step there are $O(n)$ pairs $(e, k^*)$ such that $k^*$ becomes forbidden at $e$, and each such pair can eliminate $O(n)$ triples from $D_{u,k}$, so the maximum one-step change is $O(n^2)$. In addition, Propositions~\ref{obs:CrudeDerivTraj} and \ref{obs:CrudeDerivErr} imply \[ |\Delta n^3(d+g_{def})| \le n(|d'| + |g_{def}'|) + O(n^{-1}) = O(n|d'|) = O(n) \] and so \[ |\Delta D_{u,k}^+| \le |\Delta D_{u,k}|+ |\Delta n^3(d+g_{def})| = O(n^2), \] so we take $D=O(n^2)$. Furthermore, \[ \mbox{{\bf Var}}[ \Delta D_{u,k}^+ | \mc{F}_{k}] \le \mathbb{E}[ (\Delta D_{u,k}^+)^2 | \mc{F}_{k}] = O(n^2) \cdot \mathbb{E}[ |\Delta D_{u,k}^+| | \mc{F}_{k}] = O(n^3) \] and so for all $i \le i_{max} < \frac 16 n^2$ we have $V(i) = O(n^5)$. Therefore, take $b= O(n^5)$ and $\lambda = \frac{1}{2}n^{3-\omega}$ to get by Freedman's inequality and the union bound a failure probability of $O(n^2)\cdot\exp(-\Omega(n^{1-2\omega})) = o(1)$. \section{Variable \texorpdfstring{$Z_0$}{}}\label{sec:Z0} In this section, we address \ref{E:Z0} by considering $Z_0$. Extending the work on the remaining variables $Z_1$ and $Z_2$ requires some similar calculations (some of which involve the bounds in \ref{E:crude}), which we omit for readability.
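Before carrying out the $Z_0$ computation, it is worth noting that the truncated inclusion--exclusion bound \eqref{prob:inequality}, which underlies every one-step estimate above and below, is elementary to verify. The following sketch (an illustration only, not part of the proof) checks both inequalities exhaustively for random events over a small uniform probability space:

```python
import itertools
import random

def check_bonferroni(num_outcomes=8, num_events=4, trials=500, seed=0):
    """Check  sum_j P(E_j) - sum_{j<l} P(E_j & E_l)
              <=  P(union_j E_j)  <=  sum_j P(E_j)
    for random events over a uniform finite probability space."""
    rng = random.Random(seed)
    outcomes = range(num_outcomes)
    prob = lambda s: len(s) / num_outcomes  # uniform measure
    for _ in range(trials):
        events = [{w for w in outcomes if rng.random() < 0.5}
                  for _ in range(num_events)]
        union = prob(set().union(*events))
        upper = sum(prob(e) for e in events)
        lower = upper - sum(prob(a & b)
                            for a, b in itertools.combinations(events, 2))
        if not (lower <= union + 1e-12 and union <= upper + 1e-12):
            return False
    return True
```

In the proof the pairwise-intersection correction is exactly what Claim~\ref{claim:intersections} controls, turning the union bound into an equality up to a lower-order error term.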
Define $$ {Z_{uv, k, 0,0,0}^\pm}={Z_{uv, k, 0,0,0}^{\pm}}(i):= \begin{cases} Z_{uv, k, 0,0,0} - n^3 ( z_0(t)\pm g_{0} ) & \text{if $\mc{E}_{i-1}$ holds},\\[3pt] Z_{uv,k,0,0,0}^\pm(i-1) & \text{otherwise}. \end{cases} $$ Notice that $Z_{uv, k, 0,0,0}$ can never increase, and we may lose a triple $(x,y,k')\in Z_{uv, k, 0,0,0}$ in several ways: one of the vertices $x,y$ is hit with the color $k$, or one of $x,y,u,v$ is hit with the color $k'$; or one of the edges $xy, ux, vy$ is colored; or one of $ux, vy, xy$ becomes part of an alternating path. Thus, by Claims~\ref{claim:intersections}, \ref{claim:LMMN} and~\ref{claim:helpersloppy} together with the bounds $g_0=o(z_0)$ and $z_0\le p^9$, we get \begin{align*} \begin{split} \mathbb{E}[\Delta Z_{uv,k,0,0,0}|&\mathcal F_i] = -\sum_{(x,y, k')\in Z_{uv,k,0,0,0}} \P\left( \bigcup_{z \in\{ x,y\}} \mc{L}_{z,k} \cup\bigcup_{z \in\{ x,y,u,v\}} \mc{L}_{z,k'} \right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.\cup \bigcup_{e \in \{xy, ux, vy\}} \mc{M}_{e,\bullet} \cup \bigcup_{e\in\{ux, vy\}} \mc{N}_{e,k'} \cup \mc{N}_{xy,k} \vphantom{\bigcup_{(x,y, k')}}\right) \end{split}\\ \begin{split} &=-\sum_{(x,y, k')\in Z_{uv,k,0,0,0}} \left( \sum_{z \in \{x,y\}} \P(\mc{L}_{z,k}) +\sum_{z \in \{x,y,u,v\}} \P(\mc{L}_{z,k'}) \right.\\ &\qquad\left.+ \sum_{e \in \{xy, ux, vy\}} \P(\mc{M}_{e,\bullet}) + \sum_{e\in\{ux, vy\}} \P(\mc{N}_{e,k'}) + \P(\mc{N}_{xy,k}) + O(n^{-3+12\delta}) \vphantom{\bigcup_{(x,y, k')}}\right) \end{split}\\ \begin{split} &\le -n^3(z_0 - g_0) \left\{6n^{-2} \sbrac{ \frac{5d}{6qc} - \k \rbrac{p^{-8}g_{def} + p^{-6} g_c}} \right.\\ &\qquad\left.+ 3 n^{-2} \sbrac{\frac{3az_2}{qc} + \frac{y}{q} - \k \rbrac{p^{-3} g_2 + p^{-5}g_{ab} + p^{-5} g_c} + O(n^{-5/2+4\delta}) \vphantom{}}\right\} + O\rbrac{n^{12\delta}} \end{split}\\ \begin{split} & = -n(z_0-g_0) \left[\frac{5d}{qc} + \frac{9az_2}{qc} + \frac{3y}{q} \right.\\ &\qquad\left.
- 9 \kappa \rbrac{p^{-3}g_2 + p^{-8}g_{def} + p^{-5}g_{ab}+p^{-6}g_c\vphantom{\frac{10d}{3qc}} } \right] + O\rbrac{n^{1/2+4\delta}} \end{split}\\ &\le n\sbrac{-\frac{5dz_0}{qc} - \frac{9az_2z_0}{qc} - \frac{3yz_0}{q} + 20 \k \rbrac{p^{6}g_2 + pg_{def} + p^{4}g_{ab}+p^{3}g_c} \vphantom{\frac{10dz_0}{3qc}}} + O\rbrac{n^{1/2+4\delta}}. \end{align*} By Taylor's theorem we have $\Delta(n^3 (z_0 + g_{0})) = (z_0' + g_{0}')n + O(n^{-1})$. Therefore, in the good event, by~\eqref{eqn:z0diffeq} and~\eqref{eq:err-sup-0}, we obtain \begin{align*} \begin{split} \mathbb{E}[\Delta Z^{+}_{uv,k,0,0,0}|\mathcal F_i] &\le \left[-z_0'-\frac{5dz_0}{qc} - \frac{9az_2z_0}{qc} - \frac{3yz_0}{q} - g_0' \right.\\ &\qquad\left.+ 20 \k \rbrac{p^{6}g_2 + pg_{def} + p^{4}g_{ab}+p^{3}g_c\vphantom{\frac{10dz_0}{3qc}} } \right]n + O\rbrac{n^{1/2+4\delta}} \\ &= \left[- g_0' + 20 \k \rbrac{p^{6}g_2 + pg_{def} + p^{4}g_{ab}+p^{3}g_c\vphantom{\frac{10dz_0}{3qc}} } \right]n + O\rbrac{n^{1/2+4\delta}}\\ &\le \left[-10 \k \rbrac{p^{6}g_2 + pg_{def} + p^{4}g_{ab}+p^{3}g_c\vphantom{\frac{10dz_0}{3qc}} } \right]n + O\rbrac{n^{1/2+4\delta}}\\ &\le -\Omega(n^{1-\omega}). \end{split} \end{align*} Consider $Z_{uv, k, 0, 0, 0}$ for some fixed edge $uv$ and color $k$. The one-step change in this random variable never has any positive contributions, and its negative contributions can come in several ways. Suppose $(x, y, k') \in Z_{uv, k, 0, 0, 0}(i)$. Then we could have $(x, y, k') \notin Z_{uv, k, 0, 0, 0}(i+1)$ for any of the following (exhaustive) reasons: \begin{enumerate}[label=(\roman*)] \item\label{z:reason1} one of the edges $ux, xy, yv$ gets colored, \item\label{z:reason2} one of the vertices $x, y$ gets hit by one of the colors $k, k'$, \item\label{z:reason3} one of the vertices $u, v$ gets hit by $k'$, \item\label{z:reason4} $k$ is forbidden at $xy$ through an alternating 4-cycle, \item\label{z:reason5} $k'$ is forbidden at $ux$ or $yv$ through an alternating 4-cycle.
\end{enumerate} Consider the triples $(x, y, k')$ that are removed from $Z_{uv, k, 0, 0, 0}$ due to~\ref{z:reason1}. Two of the vertices in $\{u,x,y,v\}$ must be in the triangle that gets colored in this step, and so the number of triples $(x, y, k')$ is at most $O(n^2)$. Reason~\ref{z:reason2} is similarly $O(n^2)$. Reason~\ref{z:reason3} is $O(n^2)$ since $k'$ must be one of the colors in the triangle getting colored. Now in~\ref{z:reason4}, we observe that for a fixed color $k$, in a single step $k$ is forbidden at $O(n)$ many edges due to potential 4-cycles. Since $xy$ would have to be one of those edges, we get $O(n^2)$. Now for~\ref{z:reason5}, observe that for a fixed color $k'$, there are at most $O(1)$ edges $ux$ adjacent to $u$ such that $k'$ is forbidden at $ux$ through a potential 4-cycle. Thus, we obtain again $O(n^2)$. We now apply Freedman's inequality. Note that \[ |\Delta n^3(z_0+g_{0})| \le n(|z_0'| + |g_{0}'|) + O(n^{-1}) = O(n|z_0'|) = O(n^{1+8\d}) \] and so \[ |\Delta Z_{uv,k,0,0,0}^+| \le |\Delta Z_{uv,k,0,0,0}|+|\Delta n^3(z_0+g_{0})| = O(n^2). \] Therefore, we let $D = O(n^2)$. In addition, \[ \mbox{{\bf Var}}[ \Delta Z_{uv,k, 0,0,0}^+ | \mc{F}_{k}] \le \mathbb{E}[ (\Delta Z_{uv,k,0,0,0}^+)^2 | \mc{F}_{k}] \le O(n^2) \cdot \mathbb{E}[ |\Delta Z_{uv,k,0,0,0}^+| | \mc{F}_{k}] = O(n^3) \] implying that $V(i) = O(n^5)$. Therefore, take $b= O(n^5)$. Using Chernoff's bound to estimate $Z_{uv,k,0,0,0}^+(0)$ allows us to set $\lambda = \frac{1}{2}n^{3-\omega}$. Thus Freedman's inequality gives us an exponentially small failure probability. \section{Bounds on \texorpdfstring{$\Xi$}{}, \texorpdfstring{$\Phi$}{}, \texorpdfstring{$\Psi$}{}, \texorpdfstring{$\Lambda$}{}}\label{sec:crude} In this section we bound the probability that the good event $\mc{E}_{i_{max}}$ fails due to Condition \ref{E:crude}. The variables we are bounding here are all similar, so we will only show the details for $\Xi_{u, v, k}$. 
First we define versions of these variables that are ``frozen'' outside the good event $\mc{E}_{i-1}$: \begin{equation*} \Xi_{u, v, k}^*(i) := \begin{cases} \Xi_{u, v, k}(i) & \mbox{ if $\mc{E}_{i-1}$ holds}, \vspace{2ex}\\ \Xi_{u, v, k}^*(i-1) & \mbox{ otherwise}. \end{cases} \end{equation*} Note that we have $\Delta \Xi_{u, v, k}(i) =O(1)$ and therefore $\Delta \Xi_{u, v, k}^*(i)=O(1)$. We bound the probability that $\Delta \Xi_{u, v, k}(i) \neq 0$ as follows. First we bound the number of ``predecessors'' (see Figure~\ref{fig10} below), i.e.\ triples $(x, y, k')$ which are not in $\Xi_{u, v, k}(i-1)$ but which could become an element of $\Xi_{u, v, k}(i)$. On the first row in Figure~\ref{fig10} below we have ``single-edge predecessors'' that only need one edge colored in order to become part of $\Xi_{u, v, k}(i)$. On the second row we see ``double-edge predecessors'' which need two edges colored simultaneously (of course, for two edges to get colored in one step they would need to be adjacent). \begin{figure}[h!] \begin{minipage}{\textwidth} \begin{center} \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue] (y) -- (v); \draw [blue?] (u) -- (x); \draw [red] (y) -- (x); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue] (y) -- (v); \draw [blue] (u) -- (x); \draw [red?]
(y) -- (x); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue?] (y) -- (v); \draw [blue] (u) -- (x); \draw [red] (y) -- (x); \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}{\textwidth} \begin{center} \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue] (y) -- (v); \draw [blue?] (u) -- (x); \draw [red?] (y) -- (x); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[scale=.8] \node (u) at (0,0) [fixed,label=below:$u$] {}; \node (v) at (2,0) [fixed,label=below:$v$] {}; \node (y) at (2,2) [vert,label=above:$y$] {}; \node (x) at (0,2) [vert,label=above:$x$] {}; \draw [uncolored] (u) -- (v); \draw [blue?] (y) -- (v); \draw [blue] (u) -- (x); \draw [red?] (y) -- (x); \end{tikzpicture} \end{center} \end{minipage} \caption{Depictions of ``single-edge predecessors'' (on the first row) and ``double-edge predecessors'' (on the second row) of $\Xi_{u, v, k}(i)$.} \label{fig10} \end{figure} Note that the number of single-edge predecessors is $O(n)$ since for each fixed $k'$ there is a constant number of choices for $x, y$. For a single-edge predecessor triple $(x, y, k')$ to become part of $\Xi_{u, v, k}(i)$, a particular edge needs to get a particular color, which has probability $O\rbrac{n^{-3+3\d}}$ in the good event. The number of double-edge predecessors is $O(n^2)$, and for one of them to become part of $\Xi_{u, v, k}(i)$ we need to color a particular triangle using a particular pair of colors, which has probability $O\rbrac{n^{-5+8\d}}$ in the good event.
Thus, in the good event we have \[ \P\rbrac{\Delta \Xi_{u, v, k}(i) \neq 0} = O(n) \cdot O\rbrac{n^{-3+3\d}} + O(n^2) \cdot O\rbrac{n^{-5+8\d}} = O\rbrac{n^{-2+3\d}}. \] Of course this implies $\P\rbrac{\Delta \Xi_{u, v, k}^*(i) \neq 0} = O\rbrac{n^{-2+3\d}}$ as well. Thus, the final value $\Xi_{u, v, k}^*(i_{max})$ is stochastically dominated by $K X$ for some constant $K$, where $X \sim \textrm{Bin}(i_{max}, Kn^{-2+3\d})$. An easy application of Chernoff shows that \[ \P\rbrac{K X > n^{4\d}} \le \exp\cbrac{-\Omega(n^{4\d})}. \] Since there are only a polynomial number of variables $\Xi_{u, v, k}$, the union bound shows that a.a.s. none of them exceed $n^{4\d}$. \section{Finishing the coloring}\label{sec:finishing} In this section we describe Phase 2 of our coloring procedure. We assume that Phase 1 has terminated successfully (i.e.\ the event $\mc{E}_{i_{max}}$ holds). In Phase 2 we will assign to each uncolored edge a uniform random color from the $\varepsilon n/2$ colors in $\overline{\textsc{col}} \setminus \textsc{col}$. We will use the Lov\'asz Local Lemma (Lemma~\ref{lem:LLL}) to show there is a positive probability that none of the following ``bad'' events occur in Phase 2. Here when we say ``uncolored edges'' we mean edges that were not colored in Phase 1. Define the following events: \begin{enumerate}[label=$\bullet$] \item For two adjacent uncolored edges $e_1, e_2$ let $B_1(e_1, e_2)$ be the event that both edges get the same color. \item For any 4-cycle of uncolored edges $e_1, e_2, e_3, e_4$, let $B_2(e_1, e_2, e_3, e_4)$ be the event that this 4-cycle becomes alternating (i.e.\ $e_1$ gets the same color as $e_3$, and $e_2$ gets the same color as $e_4$). \item For any 4-cycle of edges $e_1, e_2, e_3, e_4$ such that $e_1$ and $e_3$ are uncolored and $e_2$ and $e_4$ were given the same color in Phase 1, let $B_3(e_1,e_3)$ be the event that this 4-cycle becomes alternating (i.e.\ $e_1$ gets the same color as $e_3$).
\end{enumerate} Let $\mc{B}$ be the family of all bad events of types $B_1, B_2, B_3$ described above. Note that if none of the events in $\mc{B}$ happens, then Phase 2 gives us a $(4, 5)$-coloring. Toward describing our dependency graph we claim the following: \begin{claim} Fix any event $B \in \mc{B}$ (of any type $B_1$, $B_2$ or $B_3$). Among the other events in $\mc{B}$, $B$ is mutually independent with all but at most \begin{enumerate}[label=$\bullet$] \item $O(n^{1-\d})$ events of type $B_1$, \item $O(n^{2-2\d})$ events of type $B_2$, and \item $O(n^{1-\d})$ events of type $B_3$. \end{enumerate} \end{claim} \begin{proof} Every event in $\mc{B}$ involves some set of uncolored edges and the colors they get in Phase 2. Any such event $B$ is mutually independent of the set of all events $B'$ that do not involve any of the same edges as $B$. So, for each type (i.e.\ type $B_1, B_2,$ or $B_3$) we bound the number of $B'$ of that type sharing an edge with $B$. We show that any fixed uncolored edge $e_1$ is in $O(n^{1-\d})$ events of the form $B_1(e_1, e_2)$. Indeed, this will follow from bounding the number of uncolored edges at a vertex. Bohman, Frieze and Lubetzky~\cite{BFL15} proved that in the triangle removal process the degree of each vertex is a.a.s.\ $(1+o(1))np$ as long as we have, say, $p \ge n^{-1/3}$ (the power of $n$ could be any constant larger than $-1/2$). In our analysis we are requiring the stronger condition $p \ge n^{-\d}$ (which is the value of $p$ at step $i_{max}$ when we stopped the Phase 1 process). Thus, at the end of Phase 1 each vertex is incident with $O(n^{1-\d})$ uncolored edges. Next we show that any fixed uncolored edge $e_1$ is in $O(n^{2-2\d})$ events of the form $B_2(e_1, e_2, e_3, e_4)$. Indeed, given $e_1$ and our bound on degrees, there are $O(n^{1-\d})$ choices for $e_2$ and then $O(n^{1-\d})$ choices for $e_3$, which determines at most one choice for $e_4$ and we are done.
Finally, we show that any fixed uncolored edge $e_1$ is in $O(n^{1-\d})$ events of the form $B_3(e_1, e_3)$. We know that at the end of Phase 1 we have \[ \sum_{k \in \textsc{col}} Z_{e_1, k, 1, 0, 1} = O\rbrac{|\textsc{col}| \cdot n^{1-3\d}} = O\rbrac{ n^{2-3\d}}. \] Each event $B_3(e_1, e_3)$ is counted in the above sum once for every color $k$ available at $e_3$. So we estimate the number of colors available at an edge. Say $u', u''$ are the endpoints of $e_3$. We know that \[ \sum_{u \in V} \CI = \Theta\rbrac{n \cdot n^{1-2\d}} = \Theta\rbrac{ n^{2-2\d}}. \] For each color $k'$ available at $e_3$, the sum above counts $k'$ once for every vertex $u$ such that $k' \in S_u$. An easy application of the Chernoff bound gives us that a.a.s. for every color $k'$ there are $(1+o(1)) ns = \Theta(n)$ vertices $u$ such that $k' \in S_u$. Thus the number of colors available at $e_3$ is $\Theta\rbrac{ n^{1-2\d}}$. Thus, the number of edges $e_3$ such that we have a bad event $B(e_1, e_3)$ is \[ O\rbrac{ \frac{n^{2-3\d}}{n^{1-2\d}}} = O\rbrac{ n^{1-\d}}, \] as required. \end{proof} To apply the Local Lemma we must assign to each bad event $B \in \mc{B}$ a number $x_B \in [0, 1)$. To all the events of type $B_j$ we assign the number $x_j$ ($j=1,2,3$), where \[ x_1:= \frac{10}{\varepsilon n}, \qquad x_2:= \frac{10}{(\varepsilon n)^2}, \qquad x_3:= \frac{10}{\varepsilon n}. \] We check the condition \eqref{eqn:LLLcond} of the Local Lemma. Since Phase 2 uses the set $\overline{\textsc{col}} \setminus \textsc{col}$ of $\varepsilon n/2$ colors, the probability of any $B_1$ event is $2/(\varepsilon n)$, which is smaller than \[ x_1 (1-x_1)^{O(n^{1-\d})} (1-x_2)^{O(n^{2-2\d})} (1-x_3)^{O(n^{1-\d})} = (1+o(1)) x_1. \] The probability of any $B_2$ event is $4/(\varepsilon n)^2$, which is smaller than \[ x_2 (1-x_1)^{O(n^{1-\d})} (1-x_2)^{O(n^{2-2\d})} (1-x_3)^{O(n^{1-\d})} = (1+o(1)) x_2.
\] The probability of any $B_3$ event is $2/(\varepsilon n)$, which is smaller than \[ x_3 (1-x_1)^{O(n^{1-\d})} (1-x_2)^{O(n^{2-2\d})} (1-x_3)^{O(n^{1-\d})} = (1+o(1)) x_3. \] Thus, the conditions of Lemma \ref{lem:LLL} are met and so with positive probability Phase 2 gives us a $(4, 5)$-coloring. This completes the proof of Theorem \ref{thm:main}.
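As a numerical illustration of the Local Lemma bookkeeping above, condition \eqref{eqn:LLLcond} can be checked directly. In the sketch below every implied constant in the $O(\cdot)$ dependency-degree bounds is replaced by a single constant $C$, an assumption made purely for illustration; the sample values of $n$, $\varepsilon$ and $\d$ are likewise illustrative only (in particular $\d$ is taken larger than in the proof so that the asymptotic regime is visible at moderate $n$):

```python
def lll_condition_holds(n, eps, delta, C=1.0):
    """Check p_B <= x_B * prod over dependent B' of (1 - x_{B'}) for the
    three bad-event types, with dependency-degree bounds
    d1 = d3 = C*n^(1-delta) (types B_1, B_3) and d2 = C*n^(2-2*delta)
    (type B_2)."""
    x1 = 10.0 / (eps * n)
    x2 = 10.0 / (eps * n) ** 2
    x3 = x1
    d1 = d3 = C * n ** (1.0 - delta)
    d2 = C * n ** (2.0 - 2.0 * delta)
    damping = (1 - x1) ** d1 * (1 - x2) ** d2 * (1 - x3) ** d3
    p1 = p3 = 2.0 / (eps * n)      # two adjacent edges get the same color
    p2 = 4.0 / (eps * n) ** 2      # an uncolored 4-cycle becomes alternating
    return p1 <= x1 * damping and p2 <= x2 * damping and p3 <= x3 * damping
```

Since $x_1 d_1,\ x_2 d_2,\ x_3 d_3 \to 0$ under these scalings, the damping factor is $1+o(1)$ and each $x_j$ exceeds the corresponding event probability by a factor of roughly $5$, matching the $(1+o(1))x_j$ computations above.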
\section{Introduction} Interest in the study of ternary fission arose \cite{WJS58} after the discovery of binary fission of heavy nuclei. Ternary fission is one of the oldest problems of nuclear reaction physics, yet it remains topical. Diehl and Greiner \cite{DG74} tried to explain ternary fission within the framework of the liquid drop model; they mainly calculated the potential energy for the prolate and the oblate configurations of the ternary system. Within a three-center shell model, Degheidy and Maruhn \cite{Degheidy79} generalized the phenomenological shell model based on the harmonic oscillator potential to systems with three clusters. They showed that the centers of the nuclei may be in arbitrary geometrical configurations and that the nuclei may have different masses. The experimental work of Ref.~\cite{Daniel04} was dedicated to the study of ternary fission of the $^{252}$Cf nucleus with eight $\Delta E\times E$ particle telescopes. In this experiment, mainly light charged particles (He, Be, B, etc.) were observed in coincidence with $\gamma$-emission. The experimental group FOBOS (at the Flerov Laboratory of Nuclear Reactions of the Joint Institute for Nuclear Research, Dubna, Russia) made an effort to study an unusual mode of ternary fission, collinear cluster tripartition \cite{PKVA10,PKVA11,PKVA12}. Studies of the spontaneous fission products of $^{252}$Cf in coincidence with the emitted neutrons have been performed in two missing-mass experiments \cite{PKVA10,PKVA12}. These experiments demonstrated a new mode of the ternary fission process, collinear cluster tripartition. First, in the collinear cluster tripartition (CCT) mode the masses of the fragments are comparable, and the fragments are clusters, i.e.\ they have magic mass (or charge) numbers.
Second, the collinearity of the momenta of the ternary fission fragments is proved by the fact that the two detectors registering the Sn- and Ni-like fragments are placed on opposite sides of the fissioning $^{252}$Cf source, so that the angle between them is $180^{\circ}$. The observed yield of the Ni and Sn nuclei was approximately $10^3$ times smaller than that of binary fission. The authors of Refs. \cite{Tash15,NOMT14} concluded that the middle fragment is a calcium nucleus. This mode of ternary fission differs from the usual ternary fission, which is binary fission accompanied by the emission of a light fragment (He, Li, Be, etc.) as the third (middle) nucleus in the plane perpendicular to the fission axis. Several theoretical works dedicated to this kind of ternary fission have been published \cite{TNS11,MB11,VVB12,VBO15,NOMT14,WN14,WNT15,Tash15,NTW16}. In Ref. \cite{VBO15} the ternary fission of $^{252}$Cf was studied through the potential energy surfaces for two different arrangements in a collinear configuration, and the authors concluded that true ternary fission (with nearly equal fragment masses) is energetically possible due to the minima in the fragmentation potential energy and the high $Q$-values. It was also shown that a collinear geometry with the lightest fragment between the two heavier nuclei is expected to give the highest decay probability. In our previous works \cite{Tash15,WNT15} the possible channels of true ternary fission were studied. In Ref. \cite{PKVA12} it was shown that the most probable channel of ternary fission in the $^{252}$Cf(sf) reaction is $^{70}$Ni+$^{50}$Ca+$^{132}$Sn, which was proven theoretically in Ref. \cite{Tash15}. Our experience from the previous works \cite{Tash15,NTW16} raises an interesting question: how does the tri-nuclear system evolve during its decay? In those works it was not established whether the momenta of the fission products are collinear.
In this work we study the dynamical evolution of the relative distances between the nuclei and of their velocities. The main aim of the present work is thus to study the fission dynamics of the $^{70}$Ni+$^{50}$Ca+$^{132}$Sn system, in other words to check whether the ternary fission is collinear. To obtain information about the dynamics, the equations of motion must be solved. Their solutions depend on the initial conditions, and this dependence is studied in detail in order to identify the initial conditions that lead to collinear flight of the ternary fission products. \section{The model} The theoretical model is based on the formation of the tri-nuclear system (TNS). The TNS is a system of three interacting nuclei \cite{Tash15,NTW16}, and its interaction is treated on the basis of the dinuclear system model \cite{ACNPV93,ACNPV95,NFTA05}. The stage preceding the formation of the TNS is not studied; it is assumed that the system has already formed and that any ternary fission of heavy nuclei passes through the TNS stage. \begin{figure} \includegraphics[width=0.6\textwidth]{Fig1.eps} \vspace*{-3.0cm} \caption{Position ($\textbf{R}_k$) and relative ($\textbf{R}_{ij}$) vectors of the tri-nuclear system. The point $O$ (origin) corresponds to the center of mass.} \label{graph1} \end{figure} The main task is to obtain the classical Lagrange equations of motion and to solve them. The Lagrangian is $L=T-V$, where $T=\frac{1}{2}\sum^3_{i=1}m_i\dot{\bf{R}}^2_i$ is the kinetic energy of the system and $V$ is the total interaction potential between the fragments. The following system of equations can be written from Fig. \ref{graph1}: \begin{equation} \left\{ \begin{array}{l} \textbf{R}_{12}=\textbf{R}_1-\textbf{R}_2\\ \textbf{R}_{13}=\textbf{R}_3-\textbf{R}_1\\ \textbf{R}_{23}=\textbf{R}_3-\textbf{R}_2\\ \end{array}\label{Ri-Rij} \right.
\end{equation} where $\textbf{R}_k$ ($k=1,2,3$) are the position vectors of the nuclei and the magnitude of a vector $\textbf{R}_{ij}$ is the relative distance between the $i^{th}$ and $j^{th}$ nuclei. Any fission process occurs in a single plane, so a 2D space in which the fission fragments move can be chosen. Thus any vector $\textbf{R}_i$ can be described by its $x$ and $y$ components ($R_{ix}$ and $R_{iy}$) in the Cartesian system. Correspondingly, the velocities are defined as $\upsilon_{ix}=\dot{R}_{ix}$ and $\upsilon_{iy}=\dot{R}_{iy}$; therefore, the kinetic energy can be written as \begin{eqnarray} T=\frac{1}{2}\sum_{i=1}^3m_i(\upsilon^2_{ix}+\upsilon^2_{iy}). \end{eqnarray} \subsection{The Lagrange equation of motion} In the framework of the classical Lagrange formalism, three equations of motion for the $x$ components and three for the $y$ components can be obtained: \begin{eqnarray*} \frac{d}{dt}\frac{\partial T}{\partial\upsilon_{ix}}-\frac{\partial T}{\partial R_{ix}}=-\frac{\partial V}{\partial R_{ix}}\\ \frac{d}{dt}\frac{\partial T}{\partial\upsilon_{iy}}-\frac{\partial T}{\partial R_{iy}}=-\frac{\partial V}{\partial R_{iy}} \end{eqnarray*} Since the kinetic energy does not depend on the coordinates $R_{ix}$ and $R_{iy}$, i.e. $\frac{\partial T}{\partial R_{ix}}=\frac{\partial T}{\partial R_{iy}}=0$, we obtain \begin{eqnarray} m_i\dot{\upsilon}_{ix}=-\frac{\partial V}{\partial R_{ix}}\label{EMi}\\ m_i\dot{\upsilon}_{iy}=-\frac{\partial V}{\partial R_{iy}} \end{eqnarray} The magnitude of a vector $\textbf{R}_{ij}$ is $R_{ij}=\sqrt{R^2_{ijx}+R^2_{ijy}}$. The potential energy $V$ depends only on the relative distances $R_{ij}$ (or $R_{ik}$).
So \begin{eqnarray} \frac{\partial V}{\partial R_{ix}}=\frac{\partial V}{\partial R_{ijx}}\frac{\partial R_{ijx}}{\partial R_{ix}}+\frac{\partial V}{\partial R_{ikx}}\frac{\partial R_{ikx}}{\partial R_{ix}}=\nonumber\\ =\frac{\partial V}{\partial R_{ij}}\frac{\partial R_{ij}}{\partial R_{ijx}}\frac{\partial R_{ijx}}{\partial R_{ix}}+\frac{\partial V}{\partial R_{ik}}\frac{\partial R_{ik}}{\partial R_{ikx}}\frac{\partial R_{ikx}}{\partial R_{ix}}. \label{dVdRij} \end{eqnarray} Note that $R_{ij}=R_{ji}$ and $R_{ik}=R_{ki}$. Writing Eq. (\ref{EMi}) for each nucleus and using Eq. (\ref{dVdRij}), the following equations are obtained:\\ \begin{equation} \left\{ \begin{array}{l} \displaystyle m_1\dot{\upsilon}_{1x}=-\frac{R_{12x}}{R_{12}}\frac{\partial V}{\partial R_{12}}+\frac{R_{13x}}{R_{13}}\frac{\partial V}{\partial R_{13}}\\ \displaystyle m_2\dot{\upsilon}_{2x}=~~\frac{R_{12x}}{R_{12}}\frac{\partial V}{\partial R_{12}}+\frac{R_{23x}}{R_{23}}\frac{\partial V}{\partial R_{23}}\\ \displaystyle m_3\dot{\upsilon}_{3x}=-\frac{R_{23x}}{R_{23}}\frac{\partial V}{\partial R_{23}}-\frac{R_{13x}}{R_{13}}\frac{\partial V}{\partial R_{13}} \end{array} \label{FEMx} \right. \end{equation} The relation between $R_{i}$ (or $R_{ix}$) and $R_{ij}$ (or $R_{ijx}$) is found from the system of equations (\ref{Ri-Rij}). Three symmetric equations are obtained for the $y$ components: \begin{equation} \left\{ \begin{array}{l} \displaystyle m_1\dot{\upsilon}_{1y}=-\frac{R_{12y}}{R_{12}}\frac{\partial V}{\partial R_{12}}+\frac{R_{13y}}{R_{13}}\frac{\partial V}{\partial R_{13}}\\ \displaystyle m_2\dot{\upsilon}_{2y}=~~\frac{R_{12y}}{R_{12}}\frac{\partial V}{\partial R_{12}}+\frac{R_{23y}}{R_{23}}\frac{\partial V}{\partial R_{23}}\\ \displaystyle m_3\dot{\upsilon}_{3y}=-\frac{R_{23y}}{R_{23}}\frac{\partial V}{\partial R_{23}}-\frac{R_{13y}}{R_{13}}\frac{\partial V}{\partial R_{13}} \end{array} \label{FEMy} \right.
\end{equation} Taking into account the conservation of linear momentum, $\displaystyle \sum^3_{i=1}m_i\upsilon_{ix}=0$ (since $\textbf{P}_{c.m.}=0$ for the spontaneous fission of $^{252}$Cf), one of the equations in (\ref{FEMx}) and (\ref{FEMy}) can be omitted. Thus $\upsilon_{3x}$ and $R_{3x}$ are found as \begin{equation} \left\{ \begin{array}{l} \displaystyle \upsilon_{3x}=-\frac{m_1\upsilon_{1x}+m_2\upsilon_{2x}}{m_3}\\ \displaystyle R_{3x}=-\frac{m_1R_{1x}+m_2R_{2x}}{m_3} \end{array} \label{V3R3} \right. \end{equation} Since the origin is placed at the center of mass, there is no constant term in the definition of $R_{3x}$. The equations for the $y$ components are analogous. \subsection{Derivative of the total interaction potential} It is clear from Eqs. (\ref{FEMx}), (\ref{FEMy}) and (\ref{V3R3}) that the dynamics of the motion strongly depends on the derivatives of the total interaction potential. The total interaction potential consists of two parts, Coulomb and nuclear: \begin{equation} V=V_C+V_{nuc}. \label{Vtot} \end{equation} Since there are three interacting nuclei, each part contains three terms.
The Coulomb part is the sum of the point-charge interactions, while the nuclear part is calculated with the double-folding procedure: \begin{equation} V_C(R_{12},R_{23},R_{13})=e^2\sum^3_{i<j}\frac{Z_iZ_j}{R_{ij}}, \end{equation} \begin{equation} V_{nuc}(R_{12},R_{23},R_{13})=\int{\sum^3_{i<j}\rho_i(r_i)f_{ij}(r_i,r_j)\rho_j(r_j)\mathrm{d}\bf{r}}.\label{Vnuc} \end{equation} The following formulas are used to calculate the nuclear part: \begin{equation*} \begin{array}{l} \displaystyle f_{ij}(r_i,r_j)=C\left[f_{in}+(f_{ex}-f_{in})\frac{\rho_0-(\rho_i+\rho_j)}{\rho_0}\right],\\ \displaystyle \rho_i(r_i)=\frac{\rho_0}{1+\exp[\frac{r_i-R_{0i}}{a}]},\\ \displaystyle r_1(R_{12})=\sqrt{r^2+R_{12}^2-2rR_{12}\cos\theta},\\ \displaystyle r_2=r,\\ \displaystyle r_3(R_{12},R_{23},R_{13})=\sqrt{r^2+R_{23}^2-2rR_{23}\cos\alpha},\\ \displaystyle \cos\alpha=\cos\theta\cos\beta+\sin\theta\sin\beta\sin\phi,\\ \displaystyle \cos\beta=\frac{R_{12}^2+R_{23}^2-2R_{13}^2}{2R_{12}R_{23}}. \end{array} \end{equation*} Here, $r_i$ is the radial distance of the $i^{th}$ nucleus (see Fig. \ref{graph1}); $r$, $\theta$ and $\phi$ are the variables of the spherical coordinate system; $R_{0i}=r_0A_i^{1/3}$ is the radius of the $i^{th}$ nucleus; $r_0=1.16$~fm is the radius parameter; $\rho_0=0.17$~fm$^{-3}$ is the density parameter; $a=0.54$~fm is the diffuseness parameter; $C=300$~MeV$\cdot$fm$^3$, $f_{in}=0.09$ and $f_{ex}=-2.59$ are the constants of the interaction potential; and $f_{ij}$ is the effective nucleon-nucleon force, taken from Ref. \cite{Migdalbook}.
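As a quick numerical sanity check, the Woods-Saxon density $\rho_i(r_i)$ entering the folding integral can be evaluated directly. This minimal sketch uses the parameter values quoted above ($\rho_0$, $a$, $r_0$) with an illustrative mass number $A=50$ (the Ca-like fragment) and verifies that the density falls to half its central value at the half-density radius $R_{0i}$.

```python
import math

RHO0 = 0.17    # central density parameter, fm^-3
A_DIFF = 0.54  # diffuseness parameter, fm
R0 = 1.16      # radius parameter, fm

def woods_saxon(r, mass_number):
    """Woods-Saxon (Fermi) density profile rho_i(r) used in the folding potential."""
    r0i = R0 * mass_number ** (1.0 / 3.0)  # half-density radius R_{0i}
    return RHO0 / (1.0 + math.exp((r - r0i) / A_DIFF))

# Illustrative check for A = 50:
r_half = R0 * 50 ** (1.0 / 3.0)
print(woods_saxon(r_half, 50) / RHO0)  # 0.5 at the half-density radius
print(woods_saxon(0.0, 50) / RHO0)     # ~1 at the center
```

The same profile, tabulated on an $(r,\theta,\phi)$ grid, is what the folding integral in Eq. (\ref{Vnuc}) consumes in practice.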
According to Eq. (\ref{Vtot}), the derivative of the total interaction potential consists of two terms: \begin{equation*} \begin{array}{l} \displaystyle \frac{\partial V}{\partial R_{ij}}=-e^2\frac{Z_iZ_j}{R^2_{ij}}+\frac{\partial V_{nuc}}{\partial R_{ij}},\\ \displaystyle \frac{\partial V_{nuc}}{\partial R_{ij}}=\int{(F_{12}+F_{23}+F_{13})\mathrm{d}\bf{r}},\\ \displaystyle F_{12}=\rho_2\left[f_{12}-\frac{\rho_1}{\rho_0}C(f_{ex}-f_{in})\right]\frac{\partial \rho_1}{\partial R_{ij}},\\ \displaystyle F_{23}=\rho_2\left[f_{23}-\frac{\rho_3}{\rho_0}C(f_{ex}-f_{in})\right]\frac{\partial \rho_3}{\partial R_{ij}},\\ \displaystyle F_{13}=\rho_3\left[f_{13}-\frac{\rho_1}{\rho_0}C(f_{ex}-f_{in})\right]\frac{\partial \rho_1}{\partial R_{ij}}+\\ \displaystyle +\rho_1\left[f_{13}-\frac{\rho_3}{\rho_0}C(f_{ex}-f_{in})\right]\frac{\partial \rho_3}{\partial R_{ij}},\\ \displaystyle \frac{\partial \rho_1}{\partial R_{ij}}=\frac{\rho_1(\rho_1-\rho_0)}{a\rho_0}\frac{\partial r_1}{\partial R_{ij}},\\ \displaystyle \frac{\partial \rho_3}{\partial R_{ij}}=\frac{\rho_3(\rho_3-\rho_0)}{a\rho_0}\frac{\partial r_3}{\partial R_{ij}}.\\ \end{array} \end{equation*} The derivatives $\displaystyle \frac{\partial r_1}{\partial R_{ij}}$ and $\displaystyle \frac{\partial r_3}{\partial R_{ij}}$ are calculated as follows: \begin{equation*} \begin{array}{l} \displaystyle \frac{\partial r_1}{\partial R_{12}}=\frac{R_{12}-r\cos\theta}{r_1},\\ \displaystyle \frac{\partial r_1}{\partial R_{23}}=\frac{\partial r_1}{\partial R_{13}}=0,\\ \displaystyle \frac{\partial r_3}{\partial R_{12}}=(R_{23}\cos\beta-R_{12})h(r),\\ \displaystyle \frac{\partial r_3}{\partial R_{23}}=\frac{R_{23}-r\cos\alpha}{r_3}-(R_{23}-R_{12}\cos\beta)h(r),\\ \displaystyle \frac{\partial r_3}{\partial R_{13}}=R_{13}h(r),\\ \end{array} \end{equation*} where $\displaystyle h(r)=\frac{r}{R_{12}r_3}(\cos\theta-\cot\beta\sin\theta\sin\phi)$.
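The Coulomb term of this derivative is simple enough to verify numerically. The sketch below treats the Coulomb part only, with the rounded value $e^2\approx1.44$ MeV$\cdot$fm (an assumption, not a value quoted in the text), and compares the analytic derivative $-e^2Z_iZ_j/R_{ij}^2$ with a central finite difference for the Ni-Sn pair; the folding term would be differentiated on a numerical grid in the same way.

```python
E2 = 1.44  # e^2 in MeV*fm (assumed rounded value)

def v_coulomb(zi, zj, r):
    """Point-charge Coulomb interaction between fragments i and j (MeV)."""
    return E2 * zi * zj / r

def dv_coulomb(zi, zj, r):
    """Analytic Coulomb part of dV/dR_ij (MeV/fm)."""
    return -E2 * zi * zj / r**2

# Check for the Ni (Z=28) - Sn (Z=50) pair at R_12 = 20 fm:
z_ni, z_sn, r12 = 28, 50, 20.0
h = 1e-5
numeric = (v_coulomb(z_ni, z_sn, r12 + h) - v_coulomb(z_ni, z_sn, r12 - h)) / (2 * h)
print(abs(numeric - dv_coulomb(z_ni, z_sn, r12)) < 1e-6)  # True
```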
Note that the integration in Eq. (\ref{Vnuc}) is performed in the ($x',y',z'$) system, and if $\phi=\pi/2$ then $\theta=\alpha+\beta$ (see Fig. \ref{graph1}). \section{Results of calculation} As mentioned above, the channel for the spontaneous ternary fission of the $^{252}$Cf nucleus is chosen as $^{70}$Ni+$^{50}$Ca+$^{132}$Sn. In Fig. \ref{graph1}, $^{70}$Ni is the first nucleus (placed on the left side), $^{132}$Sn is the second nucleus (placed on the right side) and $^{50}$Ca is the third one (placed in the middle). The collinearity of the momenta of the tripartition is determined by the dynamics of the middle fragment $^{50}$Ca, since the heavier fragment $^{132}$Sn separates first and then the middle fragment separates from $^{70}$Ni. This sequence of ternary fission was discussed in Ref. \cite{Tash15} and is confirmed by the solution of the dynamical equations in this work. It is instructive to examine how the total interaction potential looks as a function of $R_{3x}$ and $R_{3y}$. It is shown in Figs. \ref{graph2}-\ref{graph7} for different values of $R_{12}$ (the relative distance between the Ni and Sn nuclei). The origin (which is not shown) corresponds to the center of mass. There is a local minimum at the point $R_{3x}=-2.9$ fm, $R_{3y}=0$ fm (see Figs. \ref{graph2} and \ref{graph4}). With increasing $R_{12}$ the minimum moves to the left (towards the Ni nucleus), and starting from $R_{12}=22$ fm this minimum turns into a saddle point. \begin{figure} \includegraphics[width=0.5\textwidth]{Fig2.eps} \vspace*{-0.8cm} \caption{(Color online). The total interaction potential as a function of $R_{3x}$ and $R_{3y}$ for $R_{12}=20$ fm.} \label{graph2} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{Fig3.eps} \vspace*{-0.8cm} \caption{(Color online). Contour plot of Fig. \ref{graph2}} \label{graph3} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{Fig4.eps} \vspace*{-0.8cm} \caption{(Color online). The same as Fig.
\ref{graph2} but for $R_{12}=21$ fm.} \label{graph4} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{Fig5.eps} \vspace*{-0.8cm} \caption{(Color online). The same as Fig. \ref{graph3} but for $R_{12}=21$ fm.} \label{graph5} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{Fig6.eps} \vspace*{-0.8cm} \caption{(Color online). The same as Fig. \ref{graph3} but for $R_{12}=22$ fm.} \label{graph6} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{Fig7.eps} \vspace*{-0.8cm} \caption{(Color online). The same as Fig. \ref{graph3} but for $R_{12}=34$ fm.} \label{graph7} \end{figure} In the first case, all nuclei are initially placed on one line, i.e. $R_{1y}(t=0)=R_{2y}(t=0)=R_{3y}(t=0)=0$, since the energy of the collinear configuration in the pre-scission state is the smallest, and the $x$ coordinates of the nuclei (i.e. the relative distances between them) correspond to the local minimum in Fig. \ref{graph2}: $R_{1x}(t=0)=-12.3$ fm, $R_{2x}(t=0)=7.7$ fm, $R_{3x}(t=0)=-2.9$ fm. Both components ($x$ and $y$) of the initial velocities of the three nuclei are zero; in other words, the formation of the TNS fragments is assumed to be slow enough that the fragments start with zero (or very small) velocities. Note also that, since the nuclei start at the potential minimum, there is no net force acting on them in the equilibrium state. The results of solving the equations of motion (\ref{FEMx}) and (\ref{FEMy}) (together with (\ref{V3R3})) with the above initial conditions are shown in Fig. \ref{graph8}. From the beginning the Sn nucleus breaks away from the Ni+Ca system, and then at $t\approx13.5\times10^{-22}~s$ the Ni+Ca system decays. An important result is that the third nucleus (Ca) hardly changes its coordinate, because its velocity remains close to zero. This means that detecting the middle nucleus (Ca) is almost impossible in an experiment.
This conclusion confirms the assumption made in our previous paper \cite{Tash15}. Only this initial condition leads to collinear fission of the tri-nuclear system. \begin{figure} \includegraphics[width=0.5\textwidth]{Fig8.eps} \vspace*{-0.8cm} \caption{(Color online). $x$ components of the coordinates (upper panel) and velocities (lower panel) of the three nuclei as functions of time.} \label{graph8} \end{figure} In the second case the initial velocities of all nuclei are zero, but the middle nucleus (Ca) is displaced slightly upwards, i.e. $R_{3y}(t=0)=0.5$ fm, $R_{1x}(t=0)=-12.3$ fm, $R_{2x}(t=0)=7.7$ fm, $R_{3x}(t=0)=-2.9$ fm and $R_{1y}(t=0)=R_{2y}(t=0)=0$. The results of the calculation are shown in Fig. \ref{graph9}. It is clear from the figure that a deviation of the calcium nucleus by 0.5 fm along the $y$-axis from the fission axis is enough to produce non-collinear fission. Moreover, the sequence of the non-collinear fission is the same as that of the collinear fission: first Sn separates from Ni+Ca, then the Ni+Ca system breaks up. Interestingly, the decay time of the tri-nuclear system is $t\approx13.2\times10^{-22}~s$, almost the same as in the collinear case. \begin{figure} \includegraphics[width=0.5\textwidth]{Fig9.eps} \vspace*{-0.8cm} \caption{(Color online). Trajectories of the three decaying nuclei for $R_{3y}(t=0)=0.5$ fm and the same initial velocities as in Fig. \ref{graph8}.} \label{graph9} \end{figure} In the third case the initial locations of all nuclei are the same as in the first case, i.e. $R_{1y}(t=0)=R_{2y}(t=0)=R_{3y}(t=0)=0$, $R_{1x}(t=0)=-12.3$ fm, $R_{2x}(t=0)=7.7$ fm, $R_{3x}(t=0)=-2.9$ fm, but the initial velocity of the middle fragment is $v_{3y}(t=0)=0.1$~cm/ns, while all other initial velocities are zero: $v_{1x}(t=0)=v_{2x}(t=0)=v_{3x}(t=0)=v_{1y}(t=0)=v_{2y}(t=0)=0$. Fig. \ref{graph10} shows that if the $y$ component of the initial velocity of the Ca nucleus is nonzero, the ternary fission is non-collinear.
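The qualitative role of the initial conditions can be illustrated with a minimal planar simulation. The sketch below is an assumption-laden toy, not the full model of the text: it keeps only the point-charge Coulomb repulsion, drops the attractive nuclear folding term, and integrates the equations of motion with a velocity-Verlet scheme in units of fm, MeV and fm/$c$ (with $e^2\approx1.44$ MeV$\cdot$fm and 1 u $\approx 931.5$ MeV/$c^2$). By symmetry, a configuration started exactly on the fission axis stays collinear, while displacing the middle Ca fragment by 0.5 fm in $y$ (the second case above) yields a clearly non-collinear final state.

```python
import math

U = 931.5  # amu in MeV/c^2 (assumed)
E2 = 1.44  # e^2 in MeV*fm (assumed)

# (mass number, charge) for Ni-70, Sn-132, Ca-50
FRAGS = [(70, 28), (132, 50), (50, 20)]

def accelerations(pos):
    """Pairwise point-charge Coulomb accelerations, obeying Newton's third law."""
    acc = [[0.0, 0.0] for _ in FRAGS]
    for i in range(3):
        for j in range(i + 1, 3):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            r = math.hypot(dx, dy)
            f = E2 * FRAGS[i][1] * FRAGS[j][1] / r**2  # repulsive force, MeV/fm
            fx, fy = f * dx / r, f * dy / r
            acc[i][0] += fx / (FRAGS[i][0] * U)
            acc[i][1] += fy / (FRAGS[i][0] * U)
            acc[j][0] -= fx / (FRAGS[j][0] * U)
            acc[j][1] -= fy / (FRAGS[j][0] * U)
    return acc

def evolve(pos, steps=20000, dt=0.5):
    """Velocity-Verlet integration starting from rest; dt in fm/c."""
    vel = [[0.0, 0.0] for _ in FRAGS]
    acc = accelerations(pos)
    for _ in range(steps):
        for i in range(3):
            for k in range(2):
                pos[i][k] += vel[i][k] * dt + 0.5 * acc[i][k] * dt**2
        new_acc = accelerations(pos)
        for i in range(3):
            for k in range(2):
                vel[i][k] += 0.5 * (acc[i][k] + new_acc[i][k]) * dt
        acc = new_acc
    return pos

# Collinear start (coordinates of the first case, in fm): y stays exactly zero.
collinear = evolve([[-12.3, 0.0], [7.7, 0.0], [-2.9, 0.0]])
print(max(abs(p[1]) for p in collinear))  # 0.0

# Displacing Ca by 0.5 fm in y (second case) breaks the collinearity.
perturbed = evolve([[-12.3, 0.0], [7.7, 0.0], [-2.9, 0.5]])
print(abs(perturbed[2][1]) > 1.0)  # True
```

Because the nuclear term is omitted, the toy trajectories are not quantitative; only the symmetry argument, that a perfectly collinear start cannot develop transverse motion, carries over to the full calculation.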
It should also be noted that in this case the decay time of the TNS is $t\approx13.4\times10^{-22}~s$, almost the same as in the previous cases. A comparison of Figs. \ref{graph9} and \ref{graph10} shows that the final paths of all fragments are similar despite the different initial conditions of the second and third cases. \begin{figure} \includegraphics[width=0.5\textwidth]{Fig10.eps} \vspace*{-0.8cm} \caption{(Color online). Trajectories of the three decaying nuclei for $v_{3y}(t=0)=0.1$ cm/ns and the same initial coordinates as in Fig. \ref{graph8}.} \label{graph10} \end{figure} From the figures it can be concluded that collinear fission occurs only when all three nuclei are located on one line ($R_{iy}=0$) and the middle fragment has no $y$ component of initial velocity ($v_{3y}=0$). \section{Summary} We conclude that if in the pre-scission stage all nuclei are placed collinearly, corresponding to the minimum of the potential energy surface, and there is no net force on the third nucleus (Ca) along the $y$-axis (and the $y$ component of its initial velocity is zero), then the tri-nuclear system breaks up collinearly. This theoretical result supports the experimental observation of collinear cluster tripartition in Ref. \cite{PKVA12}: the experiment shows that collinear ternary fission can be observed, and therefore, in the framework of the TNS model, the initial conditions that lead to collinear fission do occur in nature. From the comparison of the potential energy surfaces in Figs. \ref{graph2}-\ref{graph7} it can be concluded that as $R_{12}$ (the relative distance between the Ni and Sn nuclei) increases, the minimum at the point $R_{3x}=-2.9$ fm, $R_{3y}=0$ fm in Fig. \ref{graph2} disappears and a saddle point emerges instead (see Fig. \ref{graph7}). This means that the TNS with $R_{12}$ larger than 22 fm is an unstable system. Moreover, from Figs.
\ref{graph9} and \ref{graph10} it can be concluded that non-collinear ternary fission occurs under the following initial conditions: a displacement of the middle (Ca) nucleus from the fission axis along the $y$-axis, or a nonzero $y$ component of the velocity of that nucleus. Nevertheless, it is interesting that in all cases the decay time of the TNS has nearly the same value; i.e. the decay time hardly depends on the initial conditions. This is because of the fission sequence: first the Sn nucleus separates from the Ni+Ca system, and then Ni separates from the Ca nucleus. Since the collinearity of the ternary fission depends on the initial conditions, the probability (or weight) with which each initial condition is populated remains an open question, to be studied in future investigations. \acknowledgments This work was done in the framework of Project F2-FA-F115 of the Committee for Coordination of Science and Technology Development under the Cabinet of Ministers of Uzbekistan. The authors A.K.N. and R.B.T. thank the committee for financial support. A.K.N. thanks the Russian Foundation for Basic Research for partial support.
\section{Introduction} The next-generation Very Large Array (ngVLA), with its proposed technical specifications, will extend the search and study of debris disks to much greater distances than is currently possible. The high sensitivity over the proposed wavelength range will also allow for the study of a size range of disk particles that was previously difficult (and sometimes impossible) to explore. For a given debris system, if the disk emission cannot be separated spatially from its host star, an accurate model of the stellar emission is required in order to constrain the flux contributions of the star and the disk. This is particularly true for warm/hot debris systems such as exo-zodiacal dust and exo-asteroid belts. Emission from stars over the proposed wavelength range of the ngVLA is nontrivial and generally not well-constrained \citep{cranmer}, with the Sun being the most thoroughly studied star in this regime \citep[see][for other G-type stars]{liseau15}. In the atmosphere of the ``quiet'' Sun, the submm/cm continuum radiation is due primarily to free-free emission \citep{dulk, loukitcheva}. Quiet Sun models predict a 1 mm brightness temperature ($\rm T_{B}$) of $\sim 4700$ K, or $\sim 80\%$ of the Sun's photosphere $\rm T_{B}$ \citep{dulk}. The ``active'' Sun, with strong magnetic fields, is difficult to model because there are many contributing emission mechanisms \citep[e.g., umbral oscillations and explosive events;][]{wang, wedemeyer16}. The $\rm T_{B}$ spectrum of the Sun varies significantly, with a minimum in the far-infrared/submm. This is followed by a pronounced increase in flux at mm wavelengths and very high $\rm T_{B}$ at cm wavelengths (see Fig.\,1). The Sun cannot be used as a template for all stars, however, due to the differences in stellar atmospheres and magnetic activity. This is shown in Fig.\,1 with observations of the A-type stars Sirius A, Altair, and Fomalhaut.
In contrast to the Sun, the Sirius A data and corresponding PHOENIX models show that the brightness temperature continues to decrease with increasing wavelength over the observed range \citep{white_mesas}. Together, the Sun and Sirius A show that when considering the flux from a star, it cannot be assumed that the submm/cm $\rm T_{B}$ is well represented by the photosphere's $\rm T_{B}$. It may also not be reasonable to extend the far-infrared brightness temperature to longer wavelengths, as is often done in the literature. Further still, it cannot be assumed that two stars of a given spectral type, for example two A-type stars, will have the same general $\rm T_{B}$ spectrum. This again is seen in Fig.\,1 when comparing Altair, Fomalhaut, and Sirius A and is discussed further in Sec.\,2. In this chapter, we explore how the ngVLA can be used to observe a large number of stars over the proposed wavelength range. These observations can be used to inform stellar atmosphere models (e.g., PHOENIX), which must be accurate in order to study unresolved circumstellar debris. \articlefigure{tb_plot_mm.pdf}{fig:spec}{Temperature spectrum of the Sun, Sirius A, Fomalhaut, and Altair. The y-axis is the fractional brightness temperature ($\rm T_{B}$), or the observed $\rm T_{B}$ normalized with the photosphere $\rm T_{B}$ of a given star. This allows for different stellar types to be easily compared on the same plot. The two orange curves represent models of the Sun at maximum activity (solid line) and minimum activity (dashed line) from \citet{loukitcheva}. The black points are JCMT, SMA, ALMA, and VLA observations of Sirius A (White et al.~2018) and the two black curves are PHOENIX models of Sirius A's atmosphere with a non-LTE model (solid line) and a LTE model (dashed line). The blue diamonds are \textit{Herschel} and NOEMA observations of Altair (Thureau et al.~2014, White et al. in prep.).
The red squares are ALMA and ATCA observations of Fomalhaut \citep{ricci, su16, white_fom, macgregor17}. } \section{Implications for Debris Disks} Poor characterizations of the radio flux from the host star in a debris system can non-trivially affect measurements of the occurrence and abundance of circumstellar debris. To see this, consider the following cases. Fomalhaut is an A3V star with a well-known extended debris ring at 140 au from the star and additional potential IR excess at much closer orbital distances \citep[e.g., an asteroid-like belt;][]{acke}. However, adopting different stellar models significantly changes the amount of this inferred excess \citep{su16, white_fom}. In particular, ALMA observations at 0.87 mm and 1.3 mm find a $\rm T_{B}$ of $\sim 5500$ K for the central emission. This is much lower than the optical photosphere temperature of 8650 K, and even lower than extrapolating far-IR emission. Confusingly, ATCA 7 mm observations find a $\rm T_{B}$ of $> 16000$ K \citep{ricci}, nearly $200\%$ of $\rm T_{B}$(phot), potentially showing behavior similar to the Sun (see Fig.\,1). No conclusions on a potential unresolved asteroid belt can be made until the emission of Fomalhaut itself is known and subtracted from the observed flux \citep{white_mesas, white_dis}. HD 141569 is a 5 Myr-old system that contains a B9.5V star, an extended gas and dust structure, and two M dwarf companions at large separations. In 2014, unresolved 9 mm VLA observations of HD 141569 detected a higher than expected disk flux \citep{macgregor16}. This led to an inferred spectral index that was much shallower than average for disks, with interesting implications for the grain size distribution and dust evolution models. High resolution VLA follow-up observations in 2016, however, did not detect the expected dust structure. Instead, these observations found that the recovered emission was consistent with a point source centered on the star \citep{white_141569}.
The total emission between the two semesters was also different by a factor of a few, suggesting that stellar emission, which appears to be variable, is affecting the 9 mm flux density and thus the interpretation of the dust in the system. Another case where uncharacterized stellar emission has confounded debris disk studies is with Proxima Centauri. ALMA observations of Proxima found an unresolved excess over the ``expected'' stellar emission at 1.3 mm, in two different antenna configurations \citep{anglada}. This excess was initially interpreted as emission from multiple debris components in the system. Follow-up analysis of the ALMA data found instead that the observations caught a very large, short-lived flare \citep{macgregor18}. While flaring is common in M-type stars (such as Proxima), a radio flare of $\sim1000\times$ the quiescent emission was not expected as it is nearly $10\times$ stronger than the strongest Solar flare ever detected \citep{krucker}. While Proxima is indeed an M-type star, with very different emission mechanisms from the A-type stars discussed above, the ALMA observations of Proxima highlight the importance of proper characterization of stellar emission in general. \section{Uniqueness to ngVLA Capabilities} \articlefigure{flux_dist.pdf}{fig:flux}{Estimated ngVLA Band 6 (93 GHz) flux of a given spectral type of star as a function of distance. The shaded region for each spectral type is the estimated stellar flux if the emission is best represented by a brightness temperature ranging from 75\% to 150\% of the photosphere temperature. The lower mass stars will likely have significant non-thermal emission due to, e.g., coronal activity and stellar magnetic fields (see the orange curves in Fig.\,1). This non-thermal emission can lead to significantly higher emission than shown here, making these estimates lower limits of the expected flux.
The estimated $\sigma_{\rm rms}$ from 1 hr and 10 hr on-source is given by the two horizontal dashed lines \citep{selina18}. } The ngVLA will be the only facility capable of detecting the radio emission from a wide range of debris-poor stars over the proposed wavelength range. While the ngVLA will indeed be able to resolve some nearby stars (the 0.7 mas maximum resolution corresponds to $\sim1~\rm D_{\odot}$ at a distance of $\sim15$ pc), the success of this study is not dependent on resolving capabilities. Instead, this study relies on the high sensitivity provided by the collecting area of the proposed 214 antennas. Current limitations on building a complete catalog of the radio emission of stars are largely set by the required on-source observing times. For example, ALMA observations of Sirius A highlight this issue. Sirius A is the closest A-type star to the Sun at 2.6 pc and therefore is the easiest target to observe. ALMA 3 mm observations of Sirius A only required $\sim1$ min of on-source observing time to achieve a SNR of 15 (White et al.~in prep). To observe a Sirius-like star that is 15 pc away, it would take over 18 hr of on-source observing time. This makes observing anything but the closest and brightest debris-poor stars very difficult with current facilities. In order to build a full catalog of the radio emission of stars with the ngVLA, a targeted survey approach can be adopted. The closest and brightest stars will be observed first with broad spectral coverage. With a proposed 93 GHz continuum sensitivity of $\sigma_{\rm rms} = 0.48~\mu\rm Jy~beam^{-1}$ \citep{selina18}, these sources will only require on the order of minutes of on-source time to get a useful SNR (e.g., >10). The observations will be used to inform stellar atmosphere modeling codes \cite[e.g., PHOENIX;][]{hauschildt}. The feasibility of this has already been demonstrated with Sirius A \cite{white_mesas}.
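These detectability estimates follow from two simple scalings: in the Rayleigh-Jeans limit the flux density of a stellar disk is $F_\nu = (2k_{B}T_{B}\nu^2/c^2)\,\pi R_*^2/d^2$, and the on-source time needed for a fixed SNR scales as $d^4$ (the flux falls as $d^{-2}$ and the time as the inverse square of the flux). The sketch below uses assumed representative parameters, not values from the text (an A-type star with $R_*=2\,R_\odot$ and $T_{B}=0.8$ of an 8500 K photosphere), and illustrates both the sub-$\mu$Jy level at 150 pc, comparable to the quoted 1 hr $\sigma_{\rm rms}$, and the $\sim$18 hr figure for a Sirius-like star moved from 2.6 pc to 15 pc.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.998e8          # speed of light, m/s
PC = 3.0857e16       # parsec, m
R_SUN = 6.957e8      # solar radius, m
JY = 1e-26           # jansky, W m^-2 Hz^-1

def flux_uJy(t_b, radius_rsun, dist_pc, freq_hz=93e9):
    """Rayleigh-Jeans flux density (in uJy) of a stellar disk of brightness temperature t_b."""
    omega = math.pi * (radius_rsun * R_SUN / (dist_pc * PC)) ** 2  # solid angle, sr
    return 2 * K_B * t_b * freq_hz**2 / C**2 * omega / JY * 1e6

# Illustrative (assumed) A-type star: R = 2 R_sun, T_B = 80% of an 8500 K photosphere.
print(flux_uJy(0.8 * 8500, 2.0, 10.0))   # ~1e2 uJy at 10 pc
print(flux_uJy(0.8 * 8500, 2.0, 150.0))  # ~0.5 uJy at 150 pc: near the 1-hr sigma_rms

# On-source time scales as distance^4 for fixed SNR (flux ~ d^-2, t ~ flux^-2):
t_15pc_hr = (1.0 / 60.0) * (15.0 / 2.6) ** 4  # scale the ~1 min Sirius A integration
print(round(t_15pc_hr))  # ~18 hr, matching the estimate quoted above
```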
The survey will then begin extending out to more distant sources, targeting a range of stellar types and properties (e.g., rapid rotators, metal rich/poor, etc.). Templates of stellar emission profiles will be made readily available so that they can be utilized in properly characterizing unresolved debris structures observed with the ngVLA. \textit{IRAS} and \textit{Spitzer} surveys at 70 and 100 $\mu$m found that A-type stars are the most likely to host a detectable debris component \citep{su06, thureau}. This high occurrence rate of debris makes A-type stars common targets in studies that seek to characterize debris (which in turn requires an accurate model of the star's spectrum). As can be seen in Fig.\,2, nearly all A-types within 150 pc will be detectable in 1 hr of observing time if their emission is characterized by $>75\%$ of their photosphere $\rm T_{B}$. Within 10 hr of observing time, most stars of a size and temperature greater than that of typical early G-type stars will be detectable. With current stellar number density estimates within 150 pc \citep[e.g.,][]{bovy}, this allows for hundreds of stars to be observable with the ngVLA in a reasonable amount of observing time. The scope of this project also offers a unique opportunity for adding many new calibrators to the ngVLA's observational setup. Well-constrained stellar spectral profiles of nearby stars that exhibit no detectable variability will increase the on-sky dispersion of available flux and phase calibrators. A larger number of available calibrators can help reduce the overhead time in a given observation. If these stars were indeed used as calibrators, then it would increase the amount of data available to accomplish the overall goals of this proposed project. \section{Synergies at Other Wavelengths} A broad spectral profile is needed in order to fully characterize the structure of a given stellar atmosphere. 
Similarly, a broad spectral profile of a debris disk is needed in order to accurately constrain disk properties such as morphology, mass, and grain size distribution. Current observational facilities such as ALMA, NOEMA, and the SMA can probe the submm-mm wavelength emission from debris disks. Future facilities such as the SKA may be able to probe the radio emission from debris disks at even longer wavelengths than the ngVLA. The ``gap'' between these two wavelength regimes is also under-explored in regards to stars and debris due to current sensitivity constraints, making the ngVLA a key tool for understanding the physics at work in this regime. Only by combining data from all of these facilities can we observe the full spectral profile of a star, build and test models of its emission, and use these models to study unresolved debris disks. \section{Summary} The ngVLA will provide unprecedented sensitivity over the proposed wavelength range, creating the opportunity to observe debris disks at much greater distances than currently possible. In unresolved disks, an accurate determination of the flux contributions from the disk and the star requires a well-tested model of stellar emission. Current observational facilities are only able to observe a few nearby debris-poor stars, making it difficult to test models of stellar atmospheric emission. The ngVLA will have the sensitivity required to observe significantly more stars, including essentially all A-type stars within 150 pc. These observations will be essential in order to accurately constrain the frequency and abundance of circumstellar debris.
\section{INTRODUCTION} The discovery of iron-based superconductors has opened a new window for unveiling the physics of high-temperature superconductivity beyond cuprates.\cite{Hosono,Chen,BaK} Among the various families of iron-based superconductors discovered to date, $A$Fe$_2$As$_2$ ($A$=alkali earth, alkali, and Eu; the so-called 122 system), which has the ThCr$_2$Si$_2$ structure, is the most investigated owing to the easy growth of sizable high-quality single crystals. \cite{Wang XF PRL} In this 122 system, KFe$_2$As$_2$, the end member of the Ba$_{1-x}$K$_x$Fe$_2$As$_2$ series, shows some unique properties. Firstly, superconductivity with $T_c$ $\sim$ 4 K can be realized in KFe$_2$As$_2$ without intentional doping. Secondly, very clean single crystals with residual resistivity ratios (RRR) exceeding 1000 can be obtained quite easily, which is a good starting point for studying intrinsic physical properties. It was proposed that an inter-band interaction linking the hole and electron Fermi surfaces (FS) produces an $\emph{s}_{\pm}$ pairing symmetry in most iron-based superconductors. However, angle-resolved photoemission spectroscopy (ARPES) and de Haas-van Alphen (dHvA) experiments revealed that in KFe$_2$As$_2$ the electron pockets disappear and large hole sheets centered around the $\Gamma$ point dominate the FS. \cite{Ding H, Haas Alphen} Therefore, the pairing interaction could be distinct from that of other iron-based superconductors. Nodes in the superconducting gaps have been detected by thermal conductivity, \cite{Li SY, Louis Taillefer} penetration depth, \cite{penetration depth} and NMR measurements. \cite{NMR, Yu WQ} Measurements of thermal conductivity, \cite{Wang AF, Louis Taillefer} specific heat, \cite{KNa} and penetration depth \cite{penetration depth} support a \emph{d}-wave superconducting state in KFe$_2$As$_2$.
In contrast, recent ultrahigh-resolution laser ARPES suggests a nodal \emph{s}-wave superconductor with a highly unusual FS-selective multi-gap structure.\cite{Science} Whether those nodes are imposed by symmetry or are accidental remains an open question. As a consequence, further investigation of analogous compounds would be significant for clarifying the underlying physics of the $A$Fe$_2$As$_2$ system. In the present article, we report the crystal growth and characterization of the analogous compound CsFe$_2$As$_2$. Superconductivity at 2.6 K was observed in a polycrystalline CsFe$_2$As$_2$ sample. \cite{Chu CW} But few physical properties have been reported so far because high-quality single crystals have not been available until now. The difficulty of growing sizable CsFe$_2$As$_2$ single crystals mainly lies in the extremely high chemical activity and low melting point of Cs. In the present work, we have overcome this problem by using a stainless steel sample container assembly, \cite{Hiroshi} which can be sealed mechanically in the glove box (O$_2$ content less than 1 ppm). As a result, sizable high-quality single crystals of CsFe$_2$As$_2$ were grown. The CsFe$_2$As$_2$ single crystals were characterized by X-ray diffraction (XRD), resistivity, magnetic susceptibility, and specific heat measurements. The sharp superconducting transition and pronounced specific-heat jump indicate the good quality of the single crystals. \section{EXPERIMENTAL DETAILS} High-quality CsFe$_2$As$_2$ single crystals were grown by the self-flux technique. Cs chunks and Fe and As powders were weighed according to the ratio Cs:Fe:As=6:1:6. Typically, 1.5 grams of the mixture of Fe and As powders were loaded into a 10 mm diameter alumina crucible, and freshly cut Cs pieces were placed on top of the mixture. Then the alumina crucible, with a lid, was sealed in a stainless steel container assembly.
The whole preparation process was carried out in a glove box filled with a high-purity argon atmosphere (O$_2$ content less than 1 ppm). Considering the low melting point of Cs ($T_{\rm m}$=28 $^{\circ}$C), the room temperature must be kept below 20 $^{\circ}$C. The sealed stainless steel assembly was then sealed inside an evacuated quartz tube. The quartz tube was placed in a box furnace and slowly heated up to 200 $^{\circ}$C, where it was kept for 400 minutes to allow full reaction of Cs with the mixture. Then the sample was heated up to 950 $^{\circ}$C in 10 hours, held at that temperature for 10 hours, and then slowly cooled to 550 $^{\circ}$C at a rate of 3 $^{\circ}$C/h. After cooling down to room temperature by switching off the furnace, shiny plate-like crystals could be easily picked up from the alumina crucible. The single crystals are stable in air or alcohol for several days. XRD was performed on a SmartLab-9 diffractometer (Rigaku) from 10$^{\rm o}$ to 80$^{\rm o}$ with a scanning rate of 4$^{\rm o}$ per minute. The actual chemical composition of the single crystals was determined by energy dispersive x-ray spectroscopy (EDX) mounted on a field emission scanning electron microscope (FESEM), Sirion 200. Magnetic susceptibility was measured using a Vibrating Sample Magnetometer (VSM) (Quantum Design). The direct current (dc) resistivity was measured by the conventional four-probe method using a PPMS-9T (Quantum Design). Resistivity and specific heat down to 50 mK were measured in a dilution refrigerator on the PPMS. \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{fig1.eps} \caption{(color online). (a) A typical EDX spectrum of a CsFe$_2$As$_2$ single crystal; the inset shows a photograph of a CsFe$_2$As$_2$ single crystal together with a millimeter scale.
(b) X-ray diffraction pattern of the CsFe$_2$As$_2$ single crystal.} \label{fig1} \end{figure} \section{RESULTS} The typical size of the as-grown single crystals is about 5 mm $\times$ 3 mm $\times$ 0.03 mm, as shown in the inset of Fig. 1(a). Elemental analysis was performed using EDX. A typical EDX spectrum is shown in Fig. 1(a); the obtained atomic ratio of Cs:Fe:As is roughly 20.76:40.52:38.71, consistent with the composition CsFe$_2$As$_2$ within instrumental error. Fig. 1(b) shows the single crystal XRD pattern of CsFe$_2$As$_2$. Only (00\emph{l}) reflections can be recognized, indicating that the crystal is well oriented along the \emph{c} axis. The \emph{c}-axis lattice parameter was estimated to be \emph{c}=15.13 {\AA}, consistent with a previous report on polycrystalline samples. \cite{Chu CW} \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{fig2.eps} \caption{(color online). Resistivity plotted as a function of temperature for a CsFe$_2$As$_2$ single crystal. The inset is a zoom of the resistivity around the superconducting transition.} \label{fig2} \end{figure} Fig. 2 shows the in-plane resistivity as a function of temperature for the CsFe$_2$As$_2$ single crystal. The resistivity exhibits metallic behavior in the entire temperature range between 60 mK and 300 K, resembling that of the counterpart compound KFe$_2$As$_2$. The resistivity begins to drop rapidly at 1.88 K and reaches zero at 1.8 K. The superconducting transition temperature $T_{\rm c}$ in the following text is defined as the temperature at which the resistivity reaches zero. The sharp superconducting transition, with a transition width of less than 0.1 K, indicates the high quality of the crystal. The $T_{\rm c}$ of CsFe$_2$As$_2$ is slightly lower than that reported for the polycrystalline sample.
\cite{Chu CW} The residual resistivity ratio (RRR) $\rho$(300K)/$\rho$(5K) is estimated to be 88, comparable with those of KFe$_2$As$_2$ crystals grown from FeAs flux,\cite{Li SY, Wang NL} but much smaller than those of KFe$_2$As$_2$ crystals grown from KAs flux. \cite{Wang AF, Hiroshi} \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{fig3.eps} \caption{(color online). Temperature dependence of the magnetic susceptibility of the CsFe$_2$As$_2$ single crystal collected at $H$=1 T with $H$ $\parallel$ $ab$.} \label{fig3} \end{figure} Fig. 3 shows the temperature dependence of the in-plane magnetic susceptibility of the CsFe$_2$As$_2$ single crystal under $H$ = 1 T in the normal state. The CsFe$_2$As$_2$ single crystal shows paramagnetic behavior from 300 K to 2 K, and no magnetic anomaly was observed. The magnetic susceptibility behavior of CsFe$_2$As$_2$ is similar to that of KFe$_2$As$_2$. \cite{magnetic susceptibility} \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{fig4.eps} \caption{(color online). (a) Temperature dependence of the specific heat divided by temperature, $C_{\rm p}(T)/T$, of CsFe$_2$As$_2$. The blue solid line is the fit to the data between 1.9 and 10 K (data below 6 K are shown). The fitting function is described in the text. (b) Temperature dependence of the difference in the electronic specific heat between the superconducting state and the normal state. The violet, green and blue solid lines represent fits to the experimental data using the single-band \emph{s}-wave ($\alpha$ = 1.42), single-band \emph{d}-wave ($\alpha$ = 1.93) and two-band \emph{s}-wave $\alpha$ models ($\alpha_1$ =1.21, $\alpha_2$ =3.33), respectively.} \label{fig4} \end{figure} In Fig. 4(a), we show the temperature dependence of the specific heat of CsFe$_2$As$_2$ down to 50 mK in zero magnetic field. A pronounced jump due to the superconducting transition is observed below 1.8 K, consistent with the resistivity measurement.
This indicates the high quality of the present crystals. The normal-state specific heat can be well fitted by $C_{\rm{normal}}(T)$=$\gamma_{\rm N}T$+$C_{\rm{lattice}}(T)$, where $\gamma_{\rm N}T$ and $C_{\rm{lattice}}(T)$=$\beta T^3$+$\eta T^5$ are the electron and phonon contributions, respectively.\cite{Wang AF Na} The solid line in Fig. 4(a) is the best fit to $C_{\rm p}/T$ above $T_{\rm c}$ (1.9 K to 10 K). We obtained $\gamma_{\rm N}$=184.01 mJ $\rm{mol}^{-1}$ $\rm{K}^{-2}$, $\beta$=0.466 mJ $\rm{mol}^{-1}$ $\rm{K}^{-4}$ and $\eta$=0.00474 mJ $\rm{mol}^{-1}$ $\rm{K}^{-6}$. From this value of $\beta$ and by using the formula ${\Theta}_{\rm D}$=[12$\pi^4$$k_B$$N_A$$Z$/(5$\beta$$)]^{1/3}$, where $N_{\rm A}$ is the Avogadro constant and $Z$ is the total number of atoms per formula unit, the Debye temperature (${\Theta}_{\rm D}$) is estimated to be 275 K. This value is comparable to that of KFe$_2$As$_2$. \cite{Canfield} It should be pointed out that $\gamma$$_{\rm N}$=184.01 mJ $\rm{mol}^{-1}$ $\rm{K}^{-2}$ is very large. The specific-heat jump is only about 31\% of $\gamma$$_{\rm N}$, similar to the case of KFe$_2$As$_2$. \cite{NMR, KNa, Canfield, magnetic susceptibility} Considering the sharp superconducting transition, this feature may be evidence for the existence of low-energy quasiparticle excitations. \cite{NMR} $\Delta$$C_{\rm P}$/$\gamma$$T_{\rm c}$ is estimated to be about 0.35, much smaller than the value of 1.43 expected for BCS superconductors, which could indicate an unconventional pairing symmetry. In addition, $\Delta$$C_{\rm P}$/$T_{\rm c}$ at $T_c$ = 1.8 K is estimated to be about 64 mJ mol$^{-1}$ K$^{-2}$, far above the $\Delta$$C_{\rm P}$/$T_{\rm c}$ vs. $T_c$ trend of other iron-based superconductors, except for that observed in KFe$_2$As$_2$.
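The quoted numbers can be reproduced directly from the fit parameters. The short check below (standard physical constants; $Z=5$ atoms per CsFe$_2$As$_2$ formula unit) recovers both the Debye temperature and the normalized specific-heat jump:

```python
import math

# Debye temperature from the fitted beta:
#   Theta_D = [12 pi^4 k_B N_A Z / (5 beta)]^(1/3)
k_B = 1.380649e-23   # J/K
N_A = 6.02214076e23  # 1/mol
Z = 5                # atoms per CsFe2As2 formula unit (1 Cs + 2 Fe + 2 As)
beta = 0.466e-3      # J mol^-1 K^-4 (0.466 mJ mol^-1 K^-4 from the fit)

theta_D = (12 * math.pi**4 * k_B * N_A * Z / (5 * beta)) ** (1 / 3)
print(f"Theta_D ~ {theta_D:.0f} K")  # ~275 K, as quoted

# Normalized specific-heat jump:
gamma_N = 184.01     # mJ mol^-1 K^-2
dC_over_Tc = 64.0    # mJ mol^-1 K^-2 at Tc = 1.8 K
print(f"dC_P/(gamma_N Tc) ~ {dC_over_Tc / gamma_N:.2f}")  # ~0.35, well below the BCS value 1.43
```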
In KFe$_2$As$_2$, $\Delta$$C_{\rm P}$/$T_{\rm c}$ $\approx$ 41 mJ mol$^{-1}$ K$^{-2}$ at $T_c$ = 3.1 K, \cite{BNC} which is also much above the $\Delta$$C_{\rm P}$/$T_{\rm c}$ vs. $T_c$ trend of other iron-based superconductors. This suggests that the pairing symmetry in CsFe$_2$As$_2$ could be similar to that in KFe$_2$As$_2$, but different from that of other iron-based superconductors. Fig. 4(b) shows the electronic specific heat difference between the superconducting and normal states, $\Delta$$C_{\rm p}(T)$=$C_{\rm es}$$-$${\gamma}_{\rm N}T$=$C_{\rm p}(T)$-$C_{\rm{normal}}(T)$, for which entropy conservation is confirmed to be satisfied. In the low-temperature limit, the $\emph{s}$-wave model predicts $\Delta$$C/T$$\cong$$a$$T^{-5/2}$exp$(-\Delta/{k_{\rm B}T})$$-$$\gamma_{\rm n}$, while in the clean $\emph{d}$-wave model, $C_{\rm es}$$\sim$$T^2$, which gives $\Delta$$C_{\rm p}/T$$=$$\alpha$$T$$-$${\gamma}_{\rm n}$, as has been observed in the organic superconductors $\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Br and $\kappa$-(BEDT-TTF)$_2$Cu(NCS)$_2$. \cite{d wave} Fig. 4(b) indicates that $\Delta$$C_{\rm p}/T$ cannot be described by a single-band BCS $\alpha$-model with $\alpha$=$\Delta$(0)/$k_{\rm B}$$T_{\rm c}$=1.42. \cite{a model} As a consequence, our results exclude single-band $\emph{s}$-wave pairing symmetry. As shown in Fig. 4(b), the single-band $\emph{d}$-wave and two-band \emph{s}-wave models give better fits. \cite{d wave2, Wang AF Na} The low-temperature part of the specific heat roughly follows a linear temperature dependence rather than an exponential one, so the gap symmetry is likely $\emph{d}$-wave.
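The qualitative difference between the two low-temperature forms above can be illustrated numerically. The sketch below uses arbitrary units and arbitrary values for $a$, $\alpha$, and $\Delta/k_{\rm B}$ (not fitted to the data), purely to show why an exponentially gapped $s$-wave response dies off much faster at low $T$ than the linear-in-$T$ $d$-wave response:

```python
import math

# Low-T electronic specific heat over T (arbitrary units):
#   s-wave:       a * T^(-5/2) * exp(-Delta / (k_B * T))
#   clean d-wave: C_es ~ T^2, i.e. C_es / T = alpha * T
def s_wave(T, a=1.0, delta_over_kB=2.0):
    return a * T ** -2.5 * math.exp(-delta_over_kB / T)

def d_wave(T, alpha=1.0):
    return alpha * T

for T in (0.1, 0.2, 0.4):
    print(f"T={T}: s-wave {s_wave(T):.2e}, d-wave {d_wave(T):.2e}")
# As T -> 0 the s-wave term vanishes exponentially while the d-wave term
# vanishes only linearly, so a roughly linear Delta C_p/T at low T points
# to gap nodes.
```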
The thermal conductivity measured on the same batch of CsFe$_2$As$_2$ single crystals indicates a nodal superconducting gap.\cite{LiSYC} It has been reported that there is an anomaly around 0.7 K in $C_{\rm p}/T$ of KFe$_2$As$_2$, which was attributed to an impurity contribution in the sample, \cite{Wang XF} and higher-quality crystals can almost eliminate such an anomaly. \cite{KNa} A very weak anomaly-like feature is observed below 0.55 K in our data as well, which may affect the result of the fit. This makes it difficult to determine the pairing symmetry from our fits, since the main difference between the $d$-wave and two-band $s$-wave fits lies below 0.55 K. Further measurements, such as ARPES and NMR, should be performed to elucidate the pairing symmetry. \section{SUMMARY AND CONCLUSIONS} We have successfully grown CsFe$_2$As$_2$ single crystals using a stainless steel container assembly. The sharp drop in resistivity and pronounced jump in specific heat at $T_c$ around 1.8 K indicate the high quality of the crystals. The behavior of the resistivity, magnetic susceptibility, and specific-heat data resembles that of the counterpart compound KFe$_2$As$_2$. The temperature dependence of the specific heat in the superconducting state may be explained by either a $d$-wave or a multi-band $s$-wave superconducting gap picture. More work is required to reach a consensus on the pairing symmetry in this system. \section*{Acknowledgements} This work is supported by the National Natural Science Foundation of China (Grants No. 11190021, 11174266, 51021091), the ``Strategic Priority Research Program (B)'' of the Chinese Academy of Sciences (Grant No. XDB04040100), the National Basic Research Program of China (973 Program, Grants No. 2012CB922002 and No. 2011CBA00101), and the Chinese Academy of Sciences.
\section{Introduction} \label{sec:intro} \lettrine{W}{ith} the rise of Mobility-on-Demand (MoD) services (e.g., Uber, Lyft, DiDi) and the rapid technological evolution of self-driving vehicles, we are closer than ever to having Autonomous Mobility-on-Demand (AMoD) systems. A crucial step in the proper functioning of such a service is to define pricing, rebalancing, and routing policies for the fleet of vehicles. This paper focuses on the first two issues, while the interested reader is directed to \cite{wollensteinbetech2020congestionaware} for a discussion on routing and rebalancing. Pricing policies play an important role as they modulate the inflow of customers traveling between regions in the network. As a result, the controller has the ability to choose prices such that the induced demand ensures a balanced load of customers and vehicles arriving at each location. Additionally, selecting prices enables the operator to modify demand such that the system can operate with smaller or larger fleet sizes. If we restrict a pricing policy to one that requires balancing the load at every node, we expect the solution to concentrate on balancing the network rather than choosing a set of prices that maximizes profit. To give the pricing policy more flexibility, AMoD systems can leverage rebalancing policies, i.e., send empty vehicles from regions with excess supply of vehicles to regions with excess demand for trips (see Fig.~\ref{fig:amod}), with the objective of achieving higher profit. \emph{Related Literature: } Researchers have tackled the pricing problem in two main settings: one-sided or two-sided markets, depending on whether the MoD controller has full or limited control over the supply. In particular, one-sided markets assume full control over the vehicles \cite{banerjee2015pricing, turan2019dynamic}, whereas two-sided markets consider self-interested suppliers \cite{bimpikis2019spatial, banerjee2015pricing}.
To the best of our knowledge, all these optimal pricing policies, except \cite{turan2019dynamic}, do not rebalance externally. Rather, they incentivize the supply (human drivers) to reallocate through compensations. Our model differs from \cite{turan2019dynamic}, which uses a microscopic model and Reinforcement Learning techniques, in the level of abstraction: instead of a microscopic model, we employ a macroscopic (planning) model to assess the benefits of \emph{jointly} solving the pricing and rebalancing problem over other approaches. \begin{figure}[t!] \centering \includegraphics[trim={0cm 0.5cm 0cm 4cm},clip, width=0.85\linewidth]{fig/AMOD.png} \caption{Requested taxi trips on January 15, 2015, 10:38 a.m. in NYC. Blue and orange circles represent origins and destinations, respectively. One can observe that at this time, the Financial District (south) is an attractive destination but not an attractive origin. Hence, we expect taxis to rebalance to more attractive pickup locations.} \label{fig:amod} \vspace{-0.5cm} \end{figure} The rebalancing literature has tackled the problem without the help of pricing incentives. For AMoD systems, this problem has been studied using simulation models~\cite{swaszek2019load,HoerlRuchEtAl2018,LevinKockelmanEtAl2017}, queuing-theoretical models~\cite{ZhangPavone2016,IglesiasRossiEtAl2016}, and network-flow models~\cite{PavoneSmithEtAl2012,RossiZhangEtAl2017}, and it has also been studied jointly with routing schemes~\cite{SalazarTsaoEtAl2019,wollensteinbetech2020congestionaware}. In~\cite{swaszek2019load}, the rebalancing of an AMoD system is addressed using a data-driven parametric controller which is suitable for real-time implementation. Alternatively, in~\cite{PavoneSmithEtAl2012}, the rebalancing problem is studied using a steady-state fluid model, which serves as a basis for this paper. \emph{Key contributions: } In this work we provide a theoretical framework to design optimal pricing policies for an AMoD provider.
We analyze the system in the spirit of~\cite{pavone2012}, converting the problem into profit maximization rather than operational cost minimization. Unlike existing pricing methods, we consider the destination of a customer when designing the pricing policy. This allows the fleet controller to modulate demand in such a way that the system is balanced by solely adjusting prices. Additionally, we incorporate the rebalancing policy optimization framework of~\cite{pavone2012} and formulate a \emph{joint} optimization model. We compare this joint strategy with four alternative methodologies: first, only finding optimal prices; second, only rebalancing the fleet; third, sequentially solving the rebalancing and then the pricing of the system; and fourth, jointly estimating pricing and rebalancing with a unique surge price per origin. We apply each approach to two case studies: one with simulated data, and another with real taxi data from New York City. \emph{Organization:} The paper is organized as follows. In Section \ref{sec:model} we introduce the fluid model consisting of queues of customers and vehicles at every region. In Section \ref{sec:well-poss-eq-stability}, we show that the system is well-posed and establish the existence of a load-balance equilibrium through the selection of prices. We also obtain local stability results. Next, in Section \ref{sec:optimal-pricing}, we state the problems of optimal pricing, optimal rebalancing, and the \emph{joint} formulation of these two. Then, we present case studies to assess the performance of the \emph{joint} formulation in Section \ref{sec:Experiments}. Finally, we conclude in Section \ref{sec:conclusion}. \section{Model} \label{sec:model} In this section we present a steady-state deterministic fluid model to find optimal prices in an AMoD system while ensuring service to customers.
This model is intended to serve as a relaxation of the corresponding stochastic queueing model where customers arrive according to a Poisson process and travel times are non-deterministic (usually exponentially distributed). The reason for making this relaxation is the flexibility it provides for the analysis of the system. Consider a fully-connected network $\mathcal{G}=(\mathcal{N}, \mathcal{A})$ where $\mathcal{N}$ is the set of nodes (regions) $\mathcal{N}=\{1,...,N\}$ and $\mathcal{A}=\{(i,j) : (i,j) \in \mathcal{N} \times \mathcal{N} \}$ is the set of arcs. A customer requests a ride in region $i$, receives a transportation service from the AMoD platform, and is charged a price composed of the product of a \emph{base} and a \emph{surge} price. The total price is $p_{ij} = p^0_{ij} u_{ij}$ where $p^0_{ij}$, $u_{ij}$ are the base and surge prices, respectively, for traveling from node $i$ to $j$. Throughout the paper, we will use the surge price $u_{ij}$ as our control variable, and we assume that $u_{ij}\geq 1$ as the platform is not willing to charge less than its base price. We further assume that the customers' arrival rate is a function of the surge price, namely $\lambda_{ij}(u_{ij}): \mathbb{R}_{\geq0} \mapsto \mathbb{R}_{\geq0}$ for a customer travelling from $i$ to $j$. This function is known as the \emph{willingness-to-pay} or the \emph{demand} function. Let the \emph{base demand} be $\lambda^0_{ij}=\lambda_{ij}(1)$, i.e., the demand rate of customers when the surge price is at its minimum. As in~\cite{pavone2012}, we use a queueing model for this system with two queues per region. We let $c_i(t) \in \mathbb{R}_{\geq0}$ be the number of customers at region $i$ waiting to be assigned to a vehicle, and denote by $v_i(t) \in \mathbb{R}_{\geq0}$ the number of available vehicles waiting in region $i$ at time $t$. Moreover, the AMoD provider assigns vehicles to customers located in the same region at a service rate $\mu_i$.
We assume that $\mu_i>\sum_j \lambda^0_{ij}$, meaning that the platform assigns vehicles to customers faster than the rate at which customers arrive. This assumption is required to avoid building large customer queues. For the purpose of this paper, we consider the rate vectors $\boldsymbol{\lambda}=(\lambda_{ij}; \ \forall i,j \in \mathcal{N} )$ and $\bmu=(\mu_i; \ \forall i \in \mathcal{N})$ to be invariant (we use bold notation to represent a vector containing all the variables sharing the same symbol). This allows us to analyze the steady-state solution of the system. Finally, we let $T_{ij}\in \mathbb{R}_{\geq0}$ be the travel time for a passenger to go from $i$ to $j$, which we assume to be fixed and not dependent on the routing decisions of the AMoD system (see Fig.~\ref{fig:system-diagram}). To continue with our analysis, we make the following assumptions:\\ \begin{figure}[t] \centering \includestandalone[trim={0 0 0 0},width=\linewidth]{systemDiagram} \caption{Customers traveling from $i$ to $j$ arrive at region $i$ at rate $\lambda_{ij}(u_{ij})$, and it takes $T_{ij}$ units of time to reach $j$. The AMoD provider plans a pricing policy ${\mathbf u}$ and a rebalancing policy of empty vehicles $r_{ij}$ to serve its customers such that its profit is maximized. Note this is a fluid model as opposed to a discrete event system.} \label{fig:system-diagram} \vspace{-0.5cm} \end{figure} \textbf{Assumption 1.} The function $\lambda_{ij}(\cdot)$ is monotonically decreasing $\forall i,j \in \mathcal{N}$, i.e., as the price increases, the demand rate decreases. \textbf{Assumption 2.} There exists a surge price $u^{\max}_{ij}$ for which $\lambda_{ij}(u^{\max}_{ij})=0, \quad \forall i,j \in \mathcal{N}$. \subsection{Customer Dynamics} \label{subsec:customer-dynamics} Consider a customer queue $c_i$ for each region $i\in\mathcal{N}$ in the network.
The queue dynamics are: \begin{alignat}{3} \nonumber \dot{c_i} = & \begin{dcases} \sum\nolimits_j \lambda_{ij}(u_{ij}), & \text{if } v_i=0, \\ 0, & \text{if } v_i > 0 \text{ and } c_i = 0, \\ \sum\nolimits_j \lambda_{ij}(u_{ij}) - \mu_{i} , & \text{if } v_i > 0 \text{ and } c_i > 0. \end{dcases} \end{alignat} In order to express the customer dynamics with shorter notation, we let $H(x)=\mathds{1}_{x>0}$ be an indicator function for positive values of $x$, and we use the following shorthand notation: \begin{align*} \lambda_{ij} := \lambda_{ij}(u_{ij}), \quad \lambda_i := \sum\nolimits_j \lambda_{ij}, \quad v_i := v_i(t), \quad \\ c_i := c_i(t), \quad v^i_j := v_j(t - T_{ji}), \quad c^i_j := c_j(t - T_{ji}) \end{align*} where $\lambda_i$ is the total endogenous outgoing flow from node $i$; and $c^i_j$, $v^i_j$ are the rates of customer and vehicle arrivals at time $t$ to region $i$ coming from $j$, respectively, i.e., $v^i_j$ is the rate of vehicles that departed from $j$ destined for $i$, $T_{ji}$ time units prior to the current time $t$. Then, the customer dynamics can be written in compact form as follows: \begin{align} \label{eq:customer-dynamics} \dot{c_i} = \lambda_{i} (1-H(v_i)) + (\lambda_{i} - \mu_i) H(c_i)H(v_i). \end{align} Note that as a result of using a fluid model, the variables denoting the number of customers in a region are real-valued. \subsection{Vehicle Dynamics} \label{subsec:vehicle-dynamics} The outflow rate corresponding to vehicles departing station $i$ is given by: \begin{alignat*}{3} \dot{v}_i^- = & \begin{dcases} -\lambda_i, & \text{if } v_i > 0 \text{ and } c_i = 0, \\ 0, & \text{if } v_i=0, \\ -\mu_{i} , & \text{if } v_i > 0 \text{ and } c_i > 0, \end{dcases} \end{alignat*} and more succinctly as \begin{equation} \label{eq:veh-arrival-rate} \dot{v}_i^- = - \lambda_i H(v_i) + (\lambda_i - \mu_i) H(v_i)H(c_i).
\end{equation} In addition, the rate at which customer-carrying vehicles arrive at station $i$ is given by: \begin{equation} \label{eq:veh-departure-rate} \dot{v}_i^+ = \sum_j (\lambda_{ji}H(v^i_j) - (\lambda_{ji}-\mu_j)H(v^i_j)H(c^i_j) ). \end{equation} Therefore, we can write the vehicle dynamics in compact form by adding \eqref{eq:veh-arrival-rate} and \eqref{eq:veh-departure-rate}: \begin{align} \label{eq:vehicle-dynamics} \dot{v_i} = - \lambda_{i} H(v_i) & + (\lambda_{i} - \mu_i) H(c_i)H(v_i) \\ & + \sum\limits_{j} \big(\lambda_{ji}H(v^i_j) - (\lambda_{ji} - \mu_j) H(c^i_j)H(v^i_j)\big) \nonumber. \end{align} Hence, the global system dynamics are expressed by the following differential equations: \begin{subequations} \label{eq:system-dynamics} \begin{flalign} & \hspace{-0.8cm} \dot{c_i} = \lambda_{i} (1-H(v_i)) + (\lambda_{i} - \mu_i) H(c_i)H(v_i), \label{eq:system-dynamics-customers}\\ & \hspace{-0.8cm} \dot{v_i} = - \lambda_{i} H(v_i) + (\lambda_{i} - \mu_i) H(c_i)H(v_i) \label{eq:system-dynamics-veh}\\ & \quad + \sum\limits_{j} (\lambda_{ji}H(v^i_j) - (\lambda_{ji} - \mu_j) H(c^i_j)H(v^i_j)) \nonumber, \end{flalign} \end{subequations} which describe a non-linear, time-delayed, time-invariant system with a discontinuous right-hand side. \section{Conclusion}\label{sec:conclusion} In this paper we studied how a pricing policy that accounts for origin-destination pairs can stabilize the system and reach an equilibrium that balances the loads of customers and vehicles. In addition, we formulated a profit-maximization model that selects pricing and rebalancing policies jointly. Moreover, we quantified the achievable benefits of solving the problem jointly, compared to other methodologies, using a data-driven case study of the New York City transportation network.
Our results suggest that solving the problem jointly increases the profits of the AMoD provider by up to 40\% when compared to individual strategies, 15\% when compared to sequential strategies, and 7\% when compared to a policy restricted to a unique \emph{surge} price per origin. \emph{Future Work: } This work can be extended as follows. First, we would like to provide a framework capable of handling more realistic nonlinear demand functions. Second, we would like to complement this model with real-time strategies by the use of a stochastic fluid model~\cite{sun2004perturbation}, as well as a discrete event system~\cite{cassandrasbook}, with the aim of providing stochastic and microscopic results for the joint policy. Third, we are interested in coupling this joint solution with the routing problem in \cite{wollensteinbetech2020congestionaware} in order to give an overall optimization framework for operating AMoD systems. Finally, we would like to solve the problem from a welfare-maximization perspective rather than a profit-maximization one and compare the results. \subsection{Stability of equilibrium sets} Let ${\mathbf u} \in \mathcal{U}$ and $m>m_{\mathbf u}$. Then, an equilibrium exists. We now introduce the following definition. \begin{definition} A set of solutions $\mathcal{E}_{\mathbf u}$ is \emph{locally asymptotically stable} if for any equilibrium $({\mathbf c}',{\mathbf v}')\in\mathcal{E}_{\mathbf u}$ there exists a neighborhood $\mathcal{B}_{\mathbf u}^\delta({\mathbf c}',{\mathbf v}')$ such that the evolution of the model starting at: \begin{align*} c_i(\tau)=c_i', \text{ for } \tau \in [ - \max T_{ij},0 ), \\ v_i(\tau)=v_i', \text{ for } \tau \in [ - \max T_{ij},0 ), \\ ({\mathbf c}(0), {\mathbf v}(0)) \in \mathcal{B}^\delta_{\mathbf u}({\mathbf c}', {\mathbf v}') \end{align*} has a limit which belongs to the equilibrium set.
In other words, $(\lim\limits_{t \to \infty}{\mathbf c}(t), \lim\limits_{t \to \infty}{\mathbf v}(t)) \in \mathcal{E}_{\mathbf u}$, where \begin{align*} & \mathcal{E}_{\mathbf u} := \{({\mathbf c}, {\mathbf v})|c_i=0, v_i>0 \quad \forall i \in \mathcal{N}, \quad \sum\limits_i v_i=m-m_{\mathbf u} \}, \\ & \mathcal{B}^\delta_{\mathbf u} := \{ ({\mathbf c}, {\mathbf v})|c_i \geq 0, v_i>0 \quad \forall i \in \mathcal{N}, \quad ||{\mathbf c}-{\mathbf c}', {\mathbf v}-{\mathbf v}'|| \leq \delta, \quad \sum\limits_i v_i=m-m_{\mathbf u} \}. \end{align*} \end{definition} \begin{theorem} Let ${\mathbf u} \in \mathcal{U}$ be a feasible price vector and assume $m>m_{\mathbf u}$. Then, the set of equilibria $\mathcal{E}_{\mathbf u}$ is locally asymptotically stable. \end{theorem} \begin{proof} The proof follows the same analysis as in \cite{pavone2012}; see the appendix. \end{proof} \section{Optimal Strategies} \label{sec:optimal-pricing} In this section, we present an optimization framework to find optimal prices given endogenous demand rates. This model aims to maximize the revenue of an AMoD provider while ensuring load balancing of clients and vehicles. We then turn to a model which uses a rebalancing formulation to ensure load balancing, without the need for price adjustments. Finally, we combine these two formulations into a joint model. \subsection{Optimal Pricing} \label{subsec:optimal-pricing} We are interested in finding the best pricing policy that ensures the existence of an equilibrium \eqref{eq:equilibria}. Hence, we define the feasible set of the pricing problem to be \begin{flalign*} & \mathcal{F} = \Big\{{\mathbf u} : \sum_i \big( \lambda_{ij}(u_{ij}) - \lambda_{ji}(u_{ji})\big)=0, \\[-1.2em] & \hspace{14em}\forall j \in \mathcal{N}, \ {\mathbf u} \in [1,{\mathbf u}^{\max}] \Big\}.
\end{flalign*} Then, we can define the profit maximization problem as \begin{align} \label{eq:pricing-problem} &\max\limits_{{\mathbf u} \in \mathcal{F}} \quad \sum\limits_{ij} \lambda_{ij}(u_{ij})u_{ij}p^0_{ij}-c^o_{ij}\lambda_{ij}(u_{ij})\notag \\[-0.4cm] &\hspace{3cm} -c^c(\lambda^0_{ij} - \lambda_{ij}(u_{ij})), \end{align} where $\lambda_{ij}(u_{ij})u_{ij}p^0_{ij}$ and $c^o_{ij}$ are the total revenue and the operational cost of requests from $i$ to $j$, respectively; and $c^c$ is the penalty that the AMoD service incurs when a customer exits the platform because of a high price. Note that if the functions $J_{ij}(u_{ij}):=\lambda_{ij}(u_{ij})u_{ij}p^0_{ij}-c^o_{ij}\lambda_{ij}(u_{ij})-c^c(\lambda^0_{ij} - \lambda_{ij}(u_{ij}))$ are concave in the range $[1,{\mathbf u}^{\max}]$, then the optimization problem is tractable (we maximize a concave function subject to linear equality constraints). To ensure the concavity of the cost function $J_{ij}$ we need its second derivative to satisfy \begin{equation} \label{eq:concavity-condition} \ddot{J_{ij}} \leq 0 \implies \ddot{\lambda}_{ij}(u_{ij}) \leq - \frac{2p^0_{ij}}{u_{ij}p^0_{ij}-c^o_{ij}+c^c} \dot{\lambda}_{ij}(u_{ij}). \end{equation} Recall that by Assumption $1$ ($\lambda_{ij}$ is monotonically decreasing) $\dot{\lambda}_{ij}<0$. Hence, for any linear demand function, the problem becomes tractable. \subsection{Optimal Rebalancing} We use the planning rebalancing model developed in~\cite{pavone2012}. In this setting, we aim to find a static rebalancing policy that reaches an equilibrium. Let the rebalancing flow be $r_{ij}$, that is, the rate at which empty vehicles flow from $i$ to $j$.
To solve the problem we use the following Linear Program (LP), which minimizes the empty travel time and seeks to equate the inflow and outflow of vehicles at each region by using $N^2$ variables \begin{subequations} \label{eq:rebalancing-problem} \begin{flalign} \min_{{\mathbf r} \geq 0}& \quad \sum_{ij} T_{ij}r_{ij} \label{eq:rebalancing-problem-obj} \\ &\text{s.t. } \sum\limits_{i} \big(\lambda^0_{ij} + r_{ij} -\lambda^0_{ji}- r_{ji}\big) = 0, \hspace{2mm} \forall j \in \mathcal{N} \label{eq:rebalancing-problem-cnstrs}. \end{flalign} \end{subequations} Notice that in this case we use $\lambda^0_{ij}$ instead of $\lambda_{ij}(u_{ij})$ as we do not consider the possibility of decreasing the demand by using price incentives. This LP is always feasible, as one can always choose $r_{ij}=\lambda^0_{ji}>0$ for all $i,j \in \mathcal{N}$, which satisfies the set of constraints \eqref{eq:rebalancing-problem-cnstrs}. All the results presented in Section \ref{sec:well-poss-eq-stability} hold for this problem as well and are studied in~\cite{pavone2012}. \subsection{Joint Pricing and Rebalancing} \label{subsec:joint-pricing-rebalancing} We are interested in choosing the best policy by leveraging the different decisions that mobility-on-demand providers face. In particular, we would like to jointly optimize the pricing, rebalancing, and fleet sizing problem.
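As an illustration, the rebalancing LP \eqref{eq:rebalancing-problem} is small enough to solve with an off-the-shelf solver. The sketch below builds the $N^2$-variable LP for a toy three-region instance (the demand rates and travel times are hypothetical, not data from this work) and checks that the optimal flows balance every region.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-region instance; all numbers are illustrative.
N = 3
lam0 = np.array([[0.0, 2.0, 1.0],
                 [0.5, 0.0, 1.5],
                 [1.0, 0.5, 0.0]])   # base demand rates lambda^0_ij
T = np.array([[0.0, 10.0, 20.0],
              [10.0, 0.0, 15.0],
              [20.0, 15.0, 0.0]])    # travel times T_ij

# Objective: minimize sum_ij T_ij r_ij over the N^2 flattened variables r_ij.
c = T.flatten()

# Balance at region j: sum_i (r_ij - r_ji) = sum_i (lambda0_ji - lambda0_ij).
A_eq = np.zeros((N, N * N))
b_eq = np.zeros(N)
for j in range(N):
    for i in range(N):
        A_eq[j, i * N + j] += 1.0    # rebalancing inflow  r_ij
        A_eq[j, j * N + i] -= 1.0    # rebalancing outflow r_ji
    b_eq[j] = lam0[j, :].sum() - lam0[:, j].sum()

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, None)] * (N * N), method="highs")
r_opt = res.x.reshape(N, N)
# After rebalancing, total inflow equals total outflow at every region.
imbalance = (lam0 + r_opt).sum(axis=0) - (lam0 + r_opt).sum(axis=1)
```

On this instance the solver routes empty vehicles toward the region with excess demand, mirroring the feasibility argument above.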
Then, we can write the planning optimization problem as \begin{subequations} \label{eq:problem-full} \begin{align} \max\limits_{{\mathbf u}, {\mathbf r}, m} & \hspace{2mm} \sum\limits_{ij} \lambda_{ij}(u_{ij})u_{ij}p^0_{ij}-c^o_{ij}\lambda_{ij}(u_{ij}) \label{eq:problem-full-obj} \\[-0.4cm] &\hspace{1cm} -c^c(\lambda^0_{ij} -\lambda_{ij}(u_{ij})) - c^r(r_{ij}T_{ij}) -c^f m \notag \\[0.2cm] \text{s.t.} & \hspace{2mm}\sum\limits_{i} \big(\lambda_{ij}(u_{ij})+r_{ij}-\lambda_{ji}(u_{ji})-r_{ji}\big) = 0, \label{eq:problem-full-eq}\\[-.5cm] & \hspace{5.8cm}\forall j \in \mathcal{N} \notag\\ & \hspace{2mm} \sum\limits_{ij} T_{ij}(\lambda_{ij}(u_{ij})+r_{ij}) \leq m, \\ & \hspace{2mm} {\mathbf u} \in [1,{\mathbf u}^{\max}], \end{align} \end{subequations} where $c^r$ and $c^f$ are the cost of rebalancing and the cost of owning and maintaining a vehicle per unit time, respectively. Note that to ensure that solving \eqref{eq:problem-full} reaches a global maximum, we must validate that \eqref{eq:concavity-condition} holds for ${\mathbf u} \in [1,{\mathbf u}^{\max}]$. Note that this problem, if solvable, yields a solution with higher profits than the individual formulations of pricing \eqref{eq:pricing-problem} and rebalancing \eqref{eq:rebalancing-problem}, or the sequential approach of solving first the rebalancing problem and then adjusting prices. This happens because the problem \emph{jointly} solves for ${\mathbf u}$ and ${\mathbf r}$ while simultaneously considering the full objective of the profit maximization problem \eqref{eq:problem-full-obj}. \section{Well posedness, Equilibrium and Stability} \label{sec:well-poss-eq-stability} Similar to \cite{pavone2012}, we say that the system \eqref{eq:system-dynamics} is \emph{well posed} if two conditions are satisfied: \textit{(i)} for any initial condition, there exists a solution of the differential equations in \eqref{eq:system-dynamics}, and \textit{(ii)} the number of vehicles in the system remains invariant over time.
In order to analyze the model, we use the framework of Filippov solutions \cite{filippov2013differential}. Let us now give a proposition for the well-posedness of the system: \begin{proposition}[Well-posedness of fluid model] \label{prop:well-posedness} \textit{ \begin{enumerate} \item For every initial condition in the fluid model represented in \eqref{eq:system-dynamics}, there exist continuous functions $c_i(t): \mathbb{R}_{\geq 0} \mapsto \mathbb{R}_{\geq 0}$ and $v_i(t):\mathbb{R}_{\geq 0} \mapsto \mathbb{R}_{\geq 0}, \forall i \in \mathcal{N}$, satisfying the system of equations in the Filippov sense. \item For all $t>0$, the total number of vehicles is invariant and equal to $m=\sum_{i\in \mathcal{N}} v_i(0)$. \end{enumerate} } \end{proposition} \begin{pf} For the first claim, we use the framework developed by \cite{haddad1981monotone}. In particular, we check that all assumptions and conditions of \cite[Thm II-1]{haddad1981monotone} are satisfied. This theorem ensures the existence of Filippov solutions to time-delayed differential equations with discontinuous right-hand sides. To prove the second claim, we study the dynamics of the vehicles in the system, which we separate into two categories: vehicles that are in transit, $v_{ij}(t)$, and vehicles at a specific region, $v_i(t)$. For the vehicles queued at $i$ we know their dynamics are as in \eqref{eq:system-dynamics-veh}. For the vehicles in transit, we let the total be \begin{align*} v_{ij}(t)=\hspace{-3mm}\int\limits_{t-T_{ij}}^t \hspace{-2mm} \lambda_{ij}H(v_i(\tau)) + (\lambda_{ij} - \mu_i) H(c_i(\tau))H(v_i(\tau)) \ d\tau, \end{align*} and their dynamics are \begin{align*} \dot{v}_{ij}(t) = \lambda_{ij}H(v_i) + (\lambda_{ij} - \mu_i) H(c_i)H(v_i) & \\ - (\lambda_{ij}H(v^j_i) + (\lambda_{ij} - \mu_i) &H(c^j_i)H(v^j_i)).
\end{align*} Moreover, the total number of vehicles in the system is $m(t) = \sum_i v_i(t) + \sum_{ij} v_{ij}(t)$, with dynamics \begin{subequations}\label{eq:fleet-size-dynamics} \begin{align} \dot{m}(t)& = \sum\nolimits_i \dot{v}_i(t)+ \sum\nolimits_{ij} \dot{v}_{ij}(t) \label{eq:fleet-size-dynamics-0},\\ & = \sum\nolimits_{i} \big(-\lambda_{i} H(v_i) + (\lambda_{i} - \mu_i) H(c_i)H(v_i) \label{eq:fleet-size-dynamics-1} \\ & \hspace{-1.4em} + \sum\nolimits_{j} \lambda_{ji}H(v^i_j) - (\lambda_{ji} - \mu_j) H(c^i_j)H(v^i_j)\big) + \sum\nolimits_{ij} \dot{v}_{ij}, \notag \\ &= \sum\nolimits_{ij} -\lambda_{ij} H(v_i) + (\lambda_{ij} - \mu_i) H(c_i)H(v_i) \label{eq:fleet-size-dynamics-2} \\ & \hspace{-1.4em} + \sum\nolimits_{ij} \lambda_{ji}H(v^i_j) - (\lambda_{ji} - \mu_j) H(c^i_j)H(v^i_j) + \sum\nolimits_{ij} \dot{v}_{ij}, \notag \\ & = 0. \end{align} \end{subequations} Note that to obtain the above result we have expanded the first sum term in \eqref{eq:fleet-size-dynamics-0} using \eqref{eq:system-dynamics-veh}, rearranged terms, and found that $-\sum_i \dot{v}_i(t) = \sum_{ij} \dot{v}_{ij}(t) \implies \dot{m}=0$, which implies that the fleet size remains invariant over time. \qed{} \end{pf} \subsection{Equilibria} We say that the system is in equilibrium if customer queues (and therefore, waiting times) do not grow unbounded. We show the existence of an equilibrium in the fluid model \eqref{eq:system-dynamics} when we control the prices of every origin-destination pair. Additionally, we show that by having the ability to control the prices, one can find multiple equilibria for a desired fleet size, giving AMoD managers the flexibility to operate the system at different levels.
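The invariance argument above can be illustrated numerically. The sketch below is a minimal forward-Euler discretization of the vehicle flows (all rates are hypothetical; customer queues are ignored by taking $H(c_i)=0$, and travel times are integer multiples of the step), tracking idle and in-transit vehicles and checking that the fleet size $m$ stays constant.

```python
import numpy as np

# Minimal discrete-time sketch of the vehicle flows in the fluid model.
# Vehicles leave region i toward j at rate lambda_ij whenever v_i > 0.
N = 3
lam = np.array([[0.0, 1.0, 0.5],
                [0.5, 0.0, 1.0],
                [1.0, 0.5, 0.0]])          # trip request rates lambda_ij
T = np.array([[1, 3, 5],
              [3, 1, 4],
              [5, 4, 1]])                  # travel times, in time steps
dt = 0.01

v = np.array([5.0, 5.0, 5.0])              # idle vehicles per region
pipe = np.zeros((N, N, int(T.max())))      # in-transit vehicles, by steps left
m0 = v.sum()

for _ in range(2000):
    dep = lam * dt * (v > 0)[:, None]      # departures i -> j this step
    np.fill_diagonal(dep, 0.0)
    arr = pipe[:, :, 0].copy()             # flows completing their trip
    pipe[:, :, :-1] = pipe[:, :, 1:]       # everyone gets one step closer
    pipe[:, :, -1] = 0.0
    for i in range(N):
        for j in range(N):
            if i != j:
                pipe[i, j, T[i, j] - 1] += dep[i, j]
    v = v - dep.sum(axis=1) + arr.sum(axis=0)

m_final = v.sum() + pipe.sum()             # idle + in-transit vehicles
```

Because every departure is routed into the transit pipeline and every arrival returns to a region queue, the total mass is conserved by construction, mirroring $\dot{m}=0$.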
\begin{theorem}[Existence of equilibria] \label{thm:existance-eq} Let $\mathcal{U}$ be a set of prices ${\mathbf u}$, such that when ${\mathbf u} \in \mathcal{U}$: \begin{equation} \label{eq:equilibria} \sum\limits_{j} \lambda_{ij}(u_{ij}) - \lambda_{ji}(u_{ji}) = 0, \quad \forall i \in \mathcal{N}, \end{equation} and let \begin{equation} \label{eq:min-vehicles} m_{{\mathbf u}} := \sum\limits_{ij} T_{ij}\lambda_{ij}(u_{ij}). \end{equation} Then, if ${\mathbf u} \in \mathcal{U}$ and $m > m_{\mathbf u}$, an equilibrium exists with ${\mathbf c} = 0$ and ${\mathbf v} > 0$. Otherwise no equilibrium exists. \end{theorem} \begin{pf} Set $\dot{c}_i = 0 $ and $\dot{v}_i = 0 $ for all $i \in \mathcal{N}$. Then by using the customer system dynamics in \eqref{eq:system-dynamics-customers}, we have: \begin{equation} \label{eq:equlibria-dynamics-customers} \lambda_{i} = \lambda_{i}H(v_i) - (\lambda_{i} - \mu_i) H(c_i)H(v_i), \end{equation} and since $\lambda_i < \mu_i$, the above equation has a solution only if $c_i=0$ and $v_i>0$ for all $i \in \mathcal{N}$. Setting $\dot{v}_i = 0 $, and using the vehicle dynamics in \eqref{eq:system-dynamics-veh} we have \begin{align} \label{eq:equlibria-dynamics-veh} 0 = - & \lambda_{i} H(v_i) + (\lambda_{i} - \mu_i) H(c_i)H(v_i) \notag \\ & + \sum\limits_{j} \lambda_{ji}H(v^i_j) - (\lambda_{ji} - \mu_j) H(c^i_j)H(v^i_j), \end{align} which combined with equation \eqref{eq:equlibria-dynamics-customers} and the fact that ${\mathbf c}=0$ implies that \begin{equation} \label{eq:equilibria-Hv} 0 = - \lambda_{i} + \sum\limits_{j} \lambda_{ji}H(v_j). \end{equation} To arrive at \eqref{eq:equilibria-Hv}, we used the fact that in the stationary equilibrium $v_i$ and $c_i$ are constants and hence, there is no dependence on $t-T_{ij}$. Recall that for every equilibrium solution, we require ${\mathbf v} > 0$ and thus $H(v_i)=1, \ \forall i \in \mathcal{N}$.
Therefore, a necessary condition for the existence of equilibria is that the prices ${\mathbf u}$ are chosen such that \begin{align*} 0 = - \lambda_{i} + \sum\limits_{j} \lambda_{ji}, \quad \forall i \in \mathcal{N}, \end{align*} which proves the first statement. We now want to verify that the fleet size is large enough to maintain an equilibrium flow. Recall the fleet size dynamics $\dot{m}(t)$ when ${\mathbf c}=0$ and ${\mathbf v}>0$ in \eqref{eq:fleet-size-dynamics}. Observe that to satisfy ${\mathbf v}>0$, one needs a fleet size of at least $\sum\limits_{ij} T_{ij}\lambda_{ij}(u_{ij})$ vehicles, which is the definition of $m_{{\mathbf u}}$. This, combined with the invariance of the fleet size ($\dot{m}=0$), proves the claim. Conversely, if $m < m_{{\mathbf u}}$, no equilibrium exists. \qed{} \end{pf} \begin{lemma}[Existence of an equilibrium] The set $\mathcal{U}$ is never empty, hence, at least one equilibrium exists. \end{lemma} \begin{pf} We use the fact that there exists a price $u^{\max}_{ij}$ for which $\lambda_{ij}(u^{\max}_{ij}) = 0$ for all $i,j \in \mathcal{N}$. Then, setting ${\mathbf u} = {\mathbf u}^{\max}$ implies that an equilibrium exists. This strategy means that we are not providing service to any request; nevertheless, the equilibrium exists as no customer queues form and the fleet size remains invariant. \qed{} \end{pf} \begin{lemma}[Infinite number of equilibria] If there is a positive demand tour in the graph, then there exists an infinite number of price vectors ${\mathbf u}$ which can steer the system to an equilibrium point. \end{lemma} \begin{pf} Assume that there exists at least one Eulerian tour (or \texttt{cycle}) in the graph for which $\lambda^0_{ij} > 0$ for all $ (i,j) \in \texttt{cycle}$. Then, let $\boldsymbol{\lambda}^{\texttt{cycle}}=\{\lambda^0_{ij}\ |\ (i,j) \in \texttt{cycle}\}$ and the minimum rate on that tour be $\lambda^{\texttt{cycle}}_{\min} = \min \{\lambda^0_{ij}\}_{(i,j) \in \texttt{cycle}}$.
Then, by setting $u_{ij}=u^{\max}_{ij}$ (so that $\lambda_{ij}(u_{ij})=0$) for all $(i,j) \not \in \texttt{cycle}$, we can express the equilibrium condition as \begin{equation} \sum\limits_{j:(i,j) \in \texttt{cycle}} \hspace{-1.5em} \lambda_{ij}(u_{ij}) - \lambda_{ji}(u_{ji}) = 0, \ \ \forall i:(i,j) \in \texttt{cycle}. \end{equation} Now, we use the fact that $\lambda_{ij}(u_{ij})$ is a monotonically decreasing function and we focus on $(i,j) \in \texttt{cycle}$. Hence, for every pair with $\lambda^0_{ij}>\lambda^{\texttt{cycle}}_{\min}$ we can find a $u_{ij}$ such that $\lambda_{ij}(u_{ij}) = \lambda^{\texttt{cycle}}_{\min}$, which yields an equilibrium. Then, extending this argument to higher prices on the tour, we can obtain an equilibrium with a tour demand rate of any value in the range $(0 ,\lambda^{\texttt{cycle}}_{\min})$. \qed{} \end{pf} These two lemmata imply that by incorporating an origin-destination pricing strategy, we can operate a mobility-on-demand service at equilibrium for any demand rate and with any fleet size. \begin{corollary}[Minimum number of vehicles in equilibria] To operate at an equilibrium induced by a policy ${\mathbf u} \in \mathcal{U}$, the fleet size must satisfy \begin{align*} m > \underline{m} := \min_{{\mathbf u} \in \mathcal{U}} m_{{\mathbf u}}, \end{align*} where $m_{{\mathbf u}} := \sum\limits_{ij}T_{ij}\lambda_{ij}(u_{ij})$. \end{corollary} \begin{pf} This result follows directly from the last argument in the proof of Theorem \ref{thm:existance-eq}. \qed{} \end{pf} \subsection{Stability} \label{subsec:stability} In this subsection we study the local stability of the equilibria presented above. Such an analysis is relevant, for example, when a disruptive change happens to the system, either because of an increase in customers or a decrease in the availability of vehicles. Let ${\mathbf u}\in\mathcal{U}$ and assume $m>m_{{\mathbf u}}$.
Then, we define the set of equilibria as \begin{flalign} & \Upsilon_{{\mathbf u}} := \{({\mathbf c}, {\mathbf v}) \in \mathbb{R}^{2N} \ | \ c_i=0, v_i>0, \ \ \forall i \in \mathcal{N},\notag \\ & \hspace{10em} \text{ and} \sum_{i} v_{i}=m-m_{{\mathbf u}} \}. \end{flalign} \begin{definition}[Locally asymptotically stable] \label{def:local-asyp} We say that a set of equilibria $\Upsilon_{{\mathbf u}}$ is locally asymptotically stable if for any equilibrium $(\underline{{\mathbf c}}, \underline{{\mathbf v}})\in \Upsilon_{{\mathbf u}}$, there exists a neighborhood $\mathcal{B}_{{\mathbf u}}^\delta(\underline{{\mathbf c}}, \underline{{\mathbf v}})$ such that every evolution of the model \eqref{eq:system-dynamics} starting at $({\mathbf c}(\tau), {\mathbf v}(\tau))=(\underline{{\mathbf c}},\underline{{\mathbf v}})$, and with $({\mathbf c}(0), {\mathbf v}(0))\in \mathcal{B}_{{\mathbf u}}^\delta(\underline{{\mathbf c}}, \underline{{\mathbf v}})$, has a limit which belongs to the equilibrium set $\Upsilon_{{\mathbf u}}$, i.e., $(\lim_{t\xrightarrow{} +\infty} {\mathbf c}(t), \lim_{t\xrightarrow{} +\infty} {\mathbf v}(t) )\in \Upsilon_{{\mathbf u}}$, where $\tau \in [ -\max_{i,j}T_{ij},0)$ and \begin{flalign} &\mathcal{B}_{{\mathbf u}}^\delta(\underline{{\mathbf c}}, \underline{{\mathbf v}}) :=\{ ({\mathbf c}, {\mathbf v}) \in \mathbb{R}^{2N} \ | \ c_i>0, v_i=\underline{v_i}, \ \forall i \in \mathcal{N} , \notag \\ & \hspace{10em} \text{ and } ||({\mathbf c}-\underline{{\mathbf c}}, 0)|| < \delta \}. \end{flalign} \end{definition} \begin{theorem}[Stability of the equilibria] \label{thm:stability-eq} Let ${\mathbf u} \in \mathcal{U}$ and assume $m>m_{{\mathbf u}}$; then, the set of equilibria $\Upsilon_{{\mathbf u}}$ is locally asymptotically stable.
\end{theorem} \begin{pf} Provided in the Appendix. \qed{} \end{pf} \section{Experiments} \label{sec:Experiments} We carry out two case studies to assess the benefits of solving the joint problem of pricing and rebalancing over other approaches. Our first experiment uses a fictitious transportation network to analyze sensitivities with respect to the network size. The second one consists of a data-driven case study using historical data from New York City. We report empirical results of the achievable profit of the AMoD system when solving the problem using the different methodologies presented in Table~\ref{tab:policies}. \begin{table}[h] \centering \caption{Different policies evaluated to plan the operation of an AMoD system.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{ | l | l l | } \hline \textbf{Policy} & \textbf{Type} & \textbf{Formulation}\\ \hline $\mathcal{P}_{ij}$ & Individual & \eqref{eq:pricing-problem} \\ $\mathcal{R}_{ij}$ & Individual & \eqref{eq:rebalancing-problem} \\ $\mathcal{R}_{ij}\rightarrow\mathcal{P}_{ij}$ & Sequential & \eqref{eq:rebalancing-problem} then \eqref{eq:pricing-problem}\\ $\mathcal{P}_{i}+\mathcal{R}_{ij}$ & \begin{tabular}{@{}c@{}}Joint with fixed \\[-.5em] price by origin\end{tabular} & \begin{tabular}{@{}c@{}} \eqref{eq:problem-full} with $u_{ij}=u_{ik}$\\[-.4em] $\forall i,j,k \in \mathcal{N}$ \end{tabular} \\ $\mathcal{P}_{ij}+\mathcal{R}_{ij}$ & Joint & \eqref{eq:problem-full} \\ \hline \end{tabular} \label{tab:policies} \vspace{-0.2cm} \end{table} We begin with the individual policies $\mathcal{P}_{ij}$ and $\mathcal{R}_{ij}$ to examine the equilibrium under a pricing-only or rebalancing-only strategy. We then turn to a \emph{sequential} approach $\mathcal{R}_{ij}\rightarrow\mathcal{P}_{ij}$ to solve the problem. Our motivation for this methodology comes from the fact that many corporations tend to have separate pricing and rebalancing departments, which would result in solving the joint problem sequentially.
Note that the sequential policy $\mathcal{P}_{ij}\rightarrow\mathcal{R}_{ij}$ is not included because once the pricing problem is solved, the system is at equilibrium and the rebalancing problem becomes trivial (i.e., ${\mathbf r}=0$). Finally, the \emph{joint with fixed prices by origin} policy $\mathcal{P}_{i}+\mathcal{R}_{ij}$ is motivated by the fact that current MoD services only use the origin (not the destination) when setting surge prices (price multipliers)~\cite{chen2015peeking,cohen2016using}. Note that in order to have a tractable solution for formulations \eqref{eq:pricing-problem} and \eqref{eq:problem-full} we require a demand function satisfying \eqref{eq:concavity-condition}. To achieve this, we assume a linear demand (willingness-to-pay) function; specifically, we let \begin{equation}\label{eq:linear-demand-function} \lambda_{ij}(u_{ij})=\frac{\lambda^0_{ij}}{u^{\max}_{ij}-1}(u^{\max}_{ij}-u_{ij}), \end{equation} where we set $u^{\max}_{ij}=4$ as suggested in~\cite{cohen2016using}. Hence, by using this linear demand function, we get a tractable Quadratic Program (QP) with linear constraints. Arguably, linear demand functions may not be as accurate as desired for realistic implementations of this model. However, using linear functions allows us to recover a global maximum solution to the problem and assess the potential benefits that \emph{joint} policies may achieve compared to other strategies. For both experiments we let the \emph{base} price be proportional to the travel time using $p^0_{ij}=0.5T_{ij}$, where $\$0.5$ is the average price a user pays in dollars per minute of taxi ride reported in~\cite{taxi_fares}. Additionally, we let the operation and rebalancing cost per kilometer be $c^o=c^r=\$0.72$ as suggested in \cite{BOSCH201876}; the lost customer cost $c^c$ is equal to $\$5$, and we set the regularizer parameter on the fleet size to be $c^f=\$1\times10^{-10}$.
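To make the parameterization concrete, the snippet below evaluates the linear demand function \eqref{eq:linear-demand-function} and the resulting per-pair profit term for one origin-destination pair (the base rate and the 12-minute trip duration are illustrative, not taken from the data), and confirms numerically that the profit term is concave in $u$, as required for tractability.

```python
# Linear willingness-to-pay function (u_max = 4, as in the experiments);
# the base rate lambda0 and the 12-minute trip duration are illustrative.
u_max = 4.0
lam0 = 2.0                 # base demand rate lambda^0_ij
p0 = 0.5 * 12.0            # base price: $0.5 per minute of a 12-minute trip
c_o, c_c = 0.72, 5.0       # operating cost and lost-customer penalty

def lam(u):
    # lambda_ij(u) = lambda0 / (u_max - 1) * (u_max - u)
    return lam0 / (u_max - 1.0) * (u_max - u)

def J(u):
    # per-pair profit term of the pricing objective
    return lam(u) * u * p0 - c_o * lam(u) - c_c * (lam0 - lam(u))

# Central second difference: negative for a concave function.
h = 0.1
d2 = J(2.0 + h) - 2.0 * J(2.0) + J(2.0 - h)
```

At $u=1$ the full base demand is served, and at $u=u^{\max}$ demand vanishes; the quadratic revenue term makes $J$ concave, so the resulting program is a QP.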
\subsection{Uniform Demand} We compare the solution of the different methods for a network with random uniform demands. For each instance, we let the \emph{base} demand be $\lambda_{ij}^0\sim U(0, 4)$ and the travel time between regions be $T_{ij}\sim U(0, 40)$. Then, we solve the problem for networks with up to $60$ regions. Figure~\ref{fig:price-absolute} shows the value of the cost function \eqref{eq:problem-full-obj} for each methodology. Moreover, in Figure~\ref{fig:price-relative} we observe the relative deviation in profits for the solution of each strategy against the joint pricing and rebalancing solution. We see that as the number of regions increases, the deviation converges to a stable value. To explain this phenomenon, we define the \emph{potential} of region $i$ to be the load balance deviation when no pricing or rebalancing policy is applied, namely, $\zeta_{i} = \sum_j \lambda_{ij}^0-\lambda_{ji}^0$. Then, since we draw samples from the same uniform distribution to assign all $\lambda_{ij}^0 \ \forall i,j \in \mathcal{N}$, the expected value of $\zeta_i$ is equal to zero for all $i$. Hence, this convergence behavior is simply a direct implication of the law of large numbers. Note that, for the same reason, the relative deviation of the individual pricing policy $\mathcal{P}_{ij}$ converges to zero.
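The law-of-large-numbers argument can be checked with a quick Monte Carlo sketch, using the same uniform setup as above (the trial counts and seed are arbitrary): the mean absolute potential, normalized by the total demand, shrinks as the number of regions grows.

```python
import numpy as np

# Monte Carlo check: with i.i.d. uniform base demands, the potential
# zeta_i = sum_j (lambda0_ij - lambda0_ji) has zero mean, so the load
# imbalance relative to the total demand vanishes as N grows.
rng = np.random.default_rng(0)

def mean_abs_potential(N, trials=200):
    vals = []
    for _ in range(trials):
        lam0 = rng.uniform(0.0, 4.0, size=(N, N))
        np.fill_diagonal(lam0, 0.0)
        zeta = lam0.sum(axis=1) - lam0.sum(axis=0)   # out-flow minus in-flow
        vals.append(np.abs(zeta).mean() / lam0.sum())
    return float(np.mean(vals))

small, large = mean_abs_potential(5), mean_abs_potential(40)
```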
\begin{figure}[t] \centering \begin{subfigure}{.41\linewidth} \centering \includegraphics[trim={0 0cm 0cm 0}, width=1\linewidth]{fig/profit_abs.pdf} \vspace{-0.7cm} \caption{} \label{fig:price-absolute} \end{subfigure} \begin{subfigure}{.57\linewidth} \centering \includegraphics[trim={0cm 0cm 0cm 0}, width=1\linewidth]{fig/profit_relative.pdf} \vspace{-0.7cm} \caption{} \label{fig:price-relative} \end{subfigure} \caption{Objective function value under different numbers of zones and AMoD strategies. (a) Shows absolute values of \eqref{eq:problem-full-obj} when different policies are implemented, while (b) plots the relative difference between the joint solution and the others. } \vspace{-0.4cm} \end{figure} \subsection{New York City Case Study} We perform a case study of New York City using the data available at~\cite{nyc-trip-data}. Specifically, we analyze the data set of \emph{High Volume For-Hire Vehicle Trip Records} of November 2019~\cite{nyc-trip-data}. In order to analyze stable distributions of trips in the network, we filter the data to consider only working days (Monday to Friday). Then, we focus on four time slots: Morning Peak (AM) from 7:00-10:00 hrs, Noon (MD) from 12:00-15:00 hrs, Afternoon Peak (PM) from 17:00-20:00 hrs and Night (NT) from 00:00-3:00 hrs. For every time window in November 2019, we collect data on origin-destination pairs and travel times of every trip. Then, we compute the average hourly demand and travel times, and we use these values to perform our analysis and test the different solutions. Table~\ref{tab:policy-comparison} shows the deviation in profits (in percentage terms) between the different approaches and the joint formulation. As a reminder, Table~\ref{tab:policies} summarizes all policy definitions. We observe that the joint method outperforms all the other methods by margins ranging from $5\%$ to $40\%$, highlighting the benefit of solving this problem using a joint strategy.
In particular, we observe that each of the individual strategies performs on average worse than the strategies that optimize both pricing and rebalancing. Also, it is relevant to stress the $5\%$ deviation of the policy with \emph{fixed surge price by origin}, as it confirms the relevance of considering the destination when pricing. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig/prices/hist_NYC.pdf} \vspace{-0.7cm} \caption{Distribution of prices ${\mathbf u}^*$ for different policies at different time slots} \vspace{-0.2cm} \label{fig:histogram-prices} \end{figure} \begin{table}[t] \centering \caption{Relative deviation in percentage of each policy compared to the joint strategy $\mathcal{P}_{ij}+\mathcal{R}_{ij}$ for different time slots} \input{tables/table} \label{tab:policy-comparison} \vspace{-0.4cm} \end{table} To better understand the different approaches, we generate plots of the pricing distribution and trend. Figure~\ref{fig:histogram-prices} shows histograms comparing the value of the solution ${\mathbf u}$ for the individual pricing policy and the joint strategy. As expected, we observe the distribution of the individual approach to have a higher variance than that of the joint policy. This happens because of the hard constraint to reach an equilibrium: when no rebalancing is considered, as in $\mathcal{P}_{ij}$, the policy must choose prices to ensure ${\mathbf u} \in \mathcal{F}$. In contrast, when solving the joint problem, the solution leverages both rebalancing and pricing, giving the pricing decision more flexibility to pick values that maximize profits. Figure~\ref{fig:prices-policy} plots prices against $\zeta_i$ (the \emph{potential} of origin region $i$). Recall that the parameter $\zeta_i$ indicates excess demand for positive values (i.e., more customers than vehicles) and excess supply for negative values (i.e., more vehicles than customers). For the problem with unique surge prices we plot values by origin.
For the joint problem we plot the demand-weighted average price by origin. Just as we would expect, there is a positive trend between these variables. The algorithm lowers prices when there is excess supply to incentivize users to request rides, and increases the price when there is excess demand. Note that the pattern is stronger for busier times (AM and PM). \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig/prices/prices_NYC_all.pdf} \vspace{-0.7cm} \caption{$\zeta_i$ indicates excess demand for positive values and excess supply for negative values. For the $\mathcal{P}_{i}+\mathcal{R}_{ij}$ case we plot prices by origin. For the joint problem $\mathcal{P}_{ij}+\mathcal{R}_{ij}$, we plot the demand-weighted average price per origin.} \vspace{-0.1cm} \label{fig:prices-policy} \end{figure} Finally, we quantify how relevant the pricing component is relative to the rebalancing component when balancing the load of the system. Letting ${\mathbf r}^*$ and ${\mathbf u}^*$ be the solution of \eqref{eq:problem-full}, we define load dispersion metrics as follows: $\bar{\zeta}_0 = \frac{1}{N}\sum_i |(\sum_j \lambda_{ij}^0-\lambda_{ji}^0)|$ when nothing is applied, $\bar{\zeta}_{\mathbf r} = \frac{1}{N}\sum_i |(\sum_j \lambda_{ij}^0+r_{ij}-\lambda_{ji}^0-r_{ji})|$ when the rebalancing component is applied, and $\bar{\zeta}_{\mathbf u} = \frac{1}{N}\sum_i |(\sum_j \lambda_{ij}(u_{ij})-\lambda_{ji}(u_{ji}))|$ when the pricing component (but no rebalancing) is applied. Note that we do not define $\bar{\zeta}_{{\mathbf u},{\mathbf r}}$ as the result would be zero given that the system is at equilibrium by \eqref{eq:problem-full-eq}. Table~\ref{tab:optimal-mean-variance} shows this dispersion metric for the different time slots considered. Interestingly, we see that the pricing component of the policy reduces this metric in all cases, showing its importance for load balancing the system.
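The dispersion metrics above can be sketched on a toy two-region instance (the flows and the price response below are hand-picked for illustration, not optimizer output): rebalancing alone drives the imbalance to zero, while a pricing adjustment also reduces it.

```python
import numpy as np

def dispersion(flow):
    # (1/N) sum_i | sum_j (flow_ij - flow_ji) | : mean absolute load imbalance
    zeta = flow.sum(axis=1) - flow.sum(axis=0)
    return float(np.abs(zeta).mean())

# Hand-picked example: region 0 sends out more trips than it receives.
lam0 = np.array([[0.0, 3.0],
                 [1.0, 0.0]])
zeta_0 = dispersion(lam0)                    # nothing applied

r = np.array([[0.0, 0.0],
              [2.0, 0.0]])                   # empty vehicles sent from 1 to 0
zeta_r = dispersion(lam0 + r)                # rebalancing applied

lam_u = np.array([[0.0, 2.0],                # higher price trims the 0 -> 1 flow
                  [1.0, 0.0]])
zeta_u = dispersion(lam_u)                   # pricing applied
```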
\begin{table}[t] \centering \caption{Dispersion on the average absolute value of potentials when components of the joint policy ${\mathbf u}^*$ and ${\mathbf r}^*$ are applied.} \input{tables/table2} \label{tab:optimal-mean-variance} \vspace{-0.4cm} \end{table} \section*{Appendix} \subsection*{Proof of Theorem \ref{thm:stability-eq}} We start by showing that $v_i(\tau)>c_i(\tau)$ for $\tau\in[-\max_{i,j}T_{ij}, t)$, which will serve as a key element of the analysis. To do so, we first assume $v_i(\tau)>0$ for all $i\in\mathcal{N}$, and observe that the system dynamics \eqref{eq:system-dynamics} at time $t$ are \begin{subequations} \begin{flalign} \dot{c_i}(t) & = (\lambda_{i} - \mu_i) H(c_i), \label{eq:system-dynamics-customer-t}\\ \dot{v_i}(t) &= - \lambda_{i} + (\lambda_{i} - \mu_i) H(c_i) + \sum\limits_{j} (\lambda_{ji} \label{eq:system-dynamics-veh-t} \\[-12pt] & \hspace{3em} - (\lambda_{ji} - \mu_j) H(c^i_j)) \nonumber, \\ &= (\lambda_{i} - \mu_i) H(c_i) - \sum\limits_{j} (\lambda_{ji} - \mu_j) H(c^i_j), \label{eq:system-dynamics-veh-t-2}\\ & \geq (\lambda_{i} - \mu_i) H(c_i),\label{eq:system-dynamics-veh-t-3} \\ & = \dot{c}_i(t), \label{eq:system-dynamics-veh-t-4} \end{flalign} \end{subequations} where all the $H(v_i)$ in \eqref{eq:system-dynamics} are replaced with $1$ since we assume that $v_i(\tau)>0$. The step \eqref{eq:system-dynamics-veh-t-2} is due to the fact that the system is at equilibrium, i.e. $\sum\limits_{j} \lambda_{ji} - \lambda_{i}=0,\ \forall i \in \mathcal{N}$, and the step \eqref{eq:system-dynamics-veh-t-3} is a result of $\mu_j>\lambda_{ji}$, which means that $\lambda_{ji}-\mu_j<0$. Given that $\dot{v}_i(t)\geq\dot{c}_i(t)$ and the fact that at the starting point ${\mathbf v} > {\mathbf c}$ (i.e., $\underline{{\mathbf v}} > \mathbf{0}$), we conclude that $v_i(\tau) > c_i(\tau)$ for $\tau \in [-\max_{i,j} T_{ij}, t)$ and $i \in \mathcal{N}$.
Two important consequences of this result are that $c_i$ always reaches $0$ before its corresponding $v_i$, and that the vehicle time derivative $\dot{v}_i$ is greater than or equal to $0$ after $c_i$ has reached $0$. This follows by observing that all the terms in \eqref{eq:system-dynamics-veh} are nonnegative when $c_i=0$. Now we are in a position to show that ${\mathbf v}(t)>0$ for $t\geq0$. We do this by combining the second consequence in the previous paragraph with the assumption that the initial state of the system is $(\mathbf{0},\underline{{\mathbf v}})$ and the fact that $\dot{v}_i(\tau)>\dot{c}_i(\tau)$. Thus, since ${\mathbf v}(t)>0$ for $t\geq0$, we have that $\dot{c}_i(t)=(\lambda_i-\mu_i)H(c_i)\leq 0$, which implies that $c_i$ will be $0$ for $t\geq T^{\max}$, where $T^{\max}:=\max_{i} \{c_i(0)/(\mu_i-\lambda_i)\}_{i\in \mathcal{N}}$, and hence that $\lim_{t\xrightarrow{} +\infty} {\mathbf c}(t)=\mathbf{0}$ since both $\dot{c}_i$ and $c_i$ will be equal to $0$ for all $i\in \mathcal{N}$. To show that $\lim_{t\xrightarrow{} +\infty} {\mathbf v}(t)=\mathbf{v}$ we use the fact that $c_i=\dot{c}_i = 0$ for $t>0$ and insert this into the vehicle dynamics in \eqref{eq:system-dynamics-veh}, obtaining $\dot{v}_i(t)=-\lambda_i H(v_i)+\sum_j ( \lambda_{ji} H(v^i_j)-(\lambda_{ji}-\mu_{j})H(c^i_j)H(v^i_j))$. Since $c_i(t)=0$, we observe that after $T^{\max}+\max_{i,j} T_{ij}$ time units $H(c^i_j)$ will be equal to zero and therefore $\dot{v}_i(t)=0$ for $t>T^{\max}+\max_{i,j} T_{ij}$. Moreover, since $\dot{v}_i(t)=0$, the $\lim_{t\xrightarrow{}\infty} v_i(t)$ exists and can be retrieved using $v_i(t) = v_i(0) + \int_0^t \dot{v}_i(s) ds \geq v_i(0) + \int_0^t \dot{c}_i(s) ds = v_i(0) + c_i(t)-c_i(0)$. Given that we showed that $v_i(0)>c_i(0)$, we conclude that $\lim_{t\xrightarrow{}\infty} v_i(t)>0$. The property $\sum_i \lim_{t\xrightarrow{}\infty} v_i(t) = m - m_{{\mathbf u}}$ follows directly from Proposition \ref{prop:well-posedness}.
Finally, to characterize the ball $\mathcal{B}^\delta_{{\mathbf u}}$ we set $\psi_i:=\underline{v_i}\sin(\pi/4)$ and $\psi^{\min}:=\min_{i} \psi_i$ (see Fig. \ref{fig:psi}). Then, for $\delta=\psi^{\min}$ any path of the system \eqref{eq:system-dynamics} starting at $({\mathbf c}(\tau), {\mathbf v}(\tau))=(\underline{{\mathbf c}},\underline{{\mathbf v}})$ for $\tau\in[-\max_{i,j}T_{ij},0)$ and satisfying $({\mathbf c}(0), {\mathbf v}(0))\in \mathcal{B}_{{\mathbf u}}^\delta(\underline{{\mathbf c}}, \underline{{\mathbf v}})$ has a limit which belongs to the equilibrium set $\Upsilon_{{\mathbf u}}$. \qed{} \begin{figure}[t] \centering \includestandalone[width=0.75\linewidth]{proof_fig} \caption{Sketch of a variable of the initial solution $(\underline{{\mathbf c}},\underline{{\mathbf v}})$ along with its neighborhood $\mathcal{B}^\delta_{\mathbf u}$. Shaded in grey is the feasible region ($c_i<v_i$).} \label{fig:psi} \end{figure}
\section{Introduction} \label{introduction} Quantum chromodynamics (QCD) predicts a phase transition from a hadron gas (HG) phase to a quark gluon plasma (QGP) phase as thermodynamic parameters such as the temperature ($T$) and/or the baryon chemical potential ($\mu_{B}$) are varied~\cite{Fodor:2004nz}. Lattice QCD calculations indicate that the chiral and de-confinement phase transitions are a smooth crossover along the temperature axis, i.e. at $\mu_{B}$ = 0, while various other models predict that the phase transition becomes first order at high baryon density~\cite{Halasz:1998qr}. The existence of the QCD critical point is thus expected, as the first order phase transition line should end somewhere at finite $\mu_{B}$ and $T$. In order to study the properties of the QGP in these experiments, it is important to choose observables that are sensitive to the properties of the medium in its early stage. It has been proposed that the shapes of the event-by-event net-charge distributions are sensitive to the presence of the critical point, as they are related to the conserved number susceptibilities of the system and hence to the correlation length~\cite{Gavai:2010zn}. Additionally, the shape of the emission source function can also provide signals for a second-order phase transition or proximity to the QCD critical point~\cite{Csorgo:2005it}. Two-pion correlation measurements provide important information about the space-time evolution of the particle-emitting source in the collision. An emitting system which undergoes a strong first order phase transition is expected to exhibit a much larger space-time extent than would be expected if the system had remained in the hadronic phase throughout the collision process. The PHENIX detector at RHIC has explored the above possibilities in the recent Beam Energy Scan (BES) program of RHIC. 
During 2010 and 2011, RHIC provided \mbox{Au$+$Au}\xspace collisions to PHENIX at \mbox{$\sqrt{s_{_{NN}}}$}\xspace = 200 GeV, 62.4 GeV, 39 GeV, 27 GeV, 19.6 GeV, and 7.7 GeV. PHENIX also recorded Cu+Cu collisions at \mbox{$\sqrt{s_{_{NN}}}$}\xspace = 200 GeV during 2005. Results from PHENIX covering net-charge fluctuations and two-pion interferometry measurements are discussed here. \section{Net-charge Fluctuations} PHENIX has measured the distributions of net-charge multiplicity ($N = N^{+} - N^{-}$) and their various moments (mean $\mu ={\langle N\rangle}$, variance $\sigma^2 = {\langle(N-\mu)^2\rangle}$, skewness $S = \frac{\langle(N-\mu)^3\rangle}{\sigma^3}$, and kurtosis $\kappa =\frac{\langle(N-\mu)^4\rangle}{\sigma^4} -3$) at several beam energies~\cite{Adare:2015aqk}. The charged hadrons selected for this analysis cover transverse momenta (\mbox{$p_T$}\xspace) between 0.3 and 2.0 GeV/$c$ and the pseudorapidity range $|\eta|\leq 0.35$. Figure~\ref{fig1} shows the efficiency-corrected $\mu/\sigma^2$, $S\sigma$, $\kappa\sigma^2$, and $S\sigma^3/\mu$ as a function of \mbox{$\sqrt{s_{_{NN}}}$}\xspace for the most central (0-5\%) \mbox{Au$+$Au}\xspace collisions. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.30]{Fig1.eps} \caption{(Color online) The ratios of cumulants of net-charge distributions (a) $\mu/\sigma^{2}$, (b) $S\sigma$, (c) $\kappa\sigma^{2}$, and (d) $S\sigma^{3}/\mu$, after efficiency corrections, for most central (0-5\%) \mbox{Au$+$Au}\xspace collisions. The statistical and systematic errors are shown by bars and caps, respectively. Triangles represent the efficiency-corrected cumulant ratios extracted from NBD fits to the positively and negatively charged particle distributions~\cite{Adare:2015aqk}.} \label{fig1} \end{center} \end{figure} In Fig.~\ref{fig1}, triangles represent the efficiency-corrected cumulant ratios extracted from NBD fits to the positively and negatively charged particle distributions. 
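As an illustration of these definitions (a standalone sketch, not the PHENIX analysis code, which relies on NBD fits and efficiency corrections), the four cumulant ratios can be estimated directly from an event-by-event net-charge sample. The Skellam toy input with Poisson means $a$ and $b$ is our own choice; it has the known ratios $\mu/\sigma^2 = S\sigma = (a-b)/(a+b)$ and $\kappa\sigma^2 = S\sigma^3/\mu = 1$, which makes it a convenient check:

```python
import numpy as np

def cumulant_ratios(net_charge):
    """Sample estimates of mu/sigma^2, S*sigma, kappa*sigma^2 and
    S*sigma^3/mu for an event-by-event net-charge sample N = N+ - N-."""
    n = np.asarray(net_charge, dtype=float)
    mu = n.mean()
    d = n - mu
    var = (d**2).mean()                   # sigma^2
    sigma = np.sqrt(var)
    S = (d**3).mean() / sigma**3          # skewness
    kappa = (d**4).mean() / sigma**4 - 3  # (excess) kurtosis
    return mu / var, S * sigma, kappa * var, S * sigma**3 / mu

# Toy input: a Skellam-distributed net charge with Poisson means a and b.
rng = np.random.default_rng(0)
a, b = 12.0, 8.0
sample = rng.poisson(a, 1_000_000) - rng.poisson(b, 1_000_000)
ratios = cumulant_ratios(sample)  # close to (0.2, 0.2, 1.0, 1.0) here
```

For the Skellam toy the cumulants are $\kappa_n = a + (-1)^n b$, so the four ratios reduce to $\kappa_1/\kappa_2$, $\kappa_3/\kappa_2$, $\kappa_4/\kappa_2$, and $\kappa_3/\kappa_1$; this is the same baseline against which net-charge measurements are commonly compared.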
The $\kappa\sigma^{2}$ values are positive and constant within the statistical and systematic uncertainties at all collision energies, as shown in Fig.~\ref{fig1}. By comparing these measurements with lattice calculations, the freeze-out temperature ($T_{f}$) and baryon chemical potential ($\mu_{B}$) are also extracted at freeze-out. Figure~\ref{fig2} shows the variation of $\mu_{B}$ as a function of \mbox{$\sqrt{s_{_{NN}}}$}\xspace. The extracted $\mu_{B}$ values are comparable to the $\mu_{B}$ values obtained from the particle-ratio analysis given in Ref.~\cite{Cleymans:2005xv}. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.27]{Fig2.eps} \caption{(Color online) Chemical freeze-out parameter ($\mu_{B}$), extracted from the PHENIX higher-moments analysis, as a function of center-of-mass energy (\mbox{$\sqrt{s_{_{NN}}}$}\xspace), shown as red solid points~\cite{Adare:2015aqk}. The dashed line shows the parametrization given in Ref.~\cite{Cleymans:2005xv} and the other experimental data are from Ref.~\cite{Cleymans:2005xv} and references therein.} \label{fig2} \end{center} \end{figure} \section{Two-pion interferometry} PHENIX has performed measurements of two-pion correlations in Cu+Cu collisions at \mbox{$\sqrt{s_{_{NN}}}$}\xspace = 200 GeV and Au+Au collisions at \mbox{$\sqrt{s_{_{NN}}}$}\xspace = 39, 62.4 and 200 GeV~\cite{Adare:2014qvs}. Figure~\ref{fig3} shows the two-pion correlation functions as a function of the components of the momentum difference ($\bf{q}$) between the particles in a pair for several \mbox{$\sqrt{s_{_{NN}}}$}\xspace. These correlation functions are fitted with a function which incorporates the Bose-Einstein enhancement and the Coulomb interaction between the pairs, to extract the HBT radii ($R_{side}$, $R_{out}$, and $R_{long}$). 
The quantities $R^{2}_{out} - R^{2}_{side}$ and $(R_{side} - \sqrt{2}\bar{R})/R_{long}$ (see Ref.~\cite{Bhalerao:2005mm} for the definition of $\bar{R}$), which are related to the emission duration and the medium expansion velocity, respectively, are shown in Fig.~\ref{fig4} for pair transverse mass $m_{T}$ = 0.26 GeV/$c^{2}$ to reduce the effect of position-momentum correlations. The PHENIX results are also compared with STAR results for \mbox{$\sqrt{s_{_{NN}}}$}\xspace = 7-200 GeV and ALICE results at the LHC for \mbox{$\sqrt{s_{_{NN}}}$}\xspace = 2.76 TeV. A maximum is observed as a function of \mbox{$\sqrt{s_{_{NN}}}$}\xspace in $R^{2}_{out} - R^{2}_{side}$~(Fig.~\ref{fig4}(a)), with a complementary minimum in $(R_{side} - \sqrt{2}\bar{R})/R_{long}$ (Fig.~\ref{fig4}(b)). Non-monotonic behavior over a small range in \mbox{$\sqrt{s_{_{NN}}}$}\xspace may point to a softening of the equation of state that may coincide with the QCD critical point. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.32]{Fig3.eps} \caption{(Color online) Correlation functions of two-pion pairs ($\pi^{+}\pi^{+}$ and $\pi^{-}\pi^{-}$) for 0-10\% central \mbox{Au$+$Au}\xspace (left) and Cu+Cu (right) collisions for pion pair transverse momenta ($\langle{k_T}\rangle$) = 0.53 GeV/c and for several \mbox{$\sqrt{s_{_{NN}}}$}\xspace. The curves represent fits to the correlation function~\cite{Adare:2014qvs}. } \label{fig3} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.40]{Fig4.eps} \caption{(Color online) The $\mbox{$\sqrt{s_{_{NN}}}$}\xspace$ dependence of (a) $R^{2}_{out} - R^{2}_{side}$ and (b) $(R_{side} -\sqrt{2}\bar{R})/R_{long}$. The PHENIX and STAR data points represent the results from fits to the $m_{T}$ dependence of the combined data sets~\cite{Adare:2014qvs}. } \label{fig4} \end{center} \end{figure} \section{Summary} PHENIX results for net-charge fluctuations and two-pion interferometry as a function of beam energy are presented. 
The net-charge fluctuation measurements do not give a clear indication of the presence of the QCD critical point, though the $\mu_{B}$ values extracted by combining lattice calculations with the PHENIX data are found to be consistent with previously extracted baryon chemical potentials. A non-monotonic behavior is observed in the quantities related to the emission duration and the medium expansion velocity, which hints at a softening of the equation of state. Further, more detailed studies are required for a clear picture of the QCD phase diagram.
\section{INTRODUCTION} An important contribution to the total visible light emitted by massive, X-ray galaxy clusters does not come from the galaxies themselves \citep{zwicky51,arnaboldi04,lm04,feldmeieretal04a,feldmeieretal04b, westetal95, zibettietal05,gonzalezetal05,mihosetal05,kricketal06,kb07}. This so-called {\it intracluster light} (ICL) is attributed to intracluster stars, low surface brightness stars located outside galaxies. Intracluster stars have been directly observed \citep{fergusonetal98,arnaboldietal03,gal-yametal03,gerhardetal05}. These observations reveal that the intracluster stellar population is diverse. It is mostly composed of stars with masses of order $1\msun$, ages up to $10^{10}{\rm yr}$, and metallicities $\rm[Fe/H]$ between $-2$ and 0 \citep{williamsetal07}, but also includes red, old stars \citep{kricketal06}, AGB stars \citep{durrelletal02}, planetary nebulae \citep{arnaboldietal96, feldmeieretal03,arnaboldietal04,feldmeieretal04a}, and novae and supernovae \citep{tk77,ctw77,uson91,vilchez-gomez94,sk94,gal-yametal03,nso05}. The fraction of stars that contribute to the ICL increases with the mass of the cluster and with the density of the environment: from loose groups ($<2\%$, \citealt{castroetal03}), to Virgo-like clusters ($10\%$, \citealt{feldmeieretal04a,zibettietal05}), to rich clusters ($\sim20\%$ or higher, \citealt{tf95,feldmeieretal04b,kb07}). In the cores of dense and rich clusters (like Coma), the local ICL fraction can be as high as 50\% \citep{bernsteinetal95}. Several models have been suggested to explain the origin of intracluster stars. A comprehensive review of these various processes is provided by \citet{tf11}. Essentially, these models fall into four categories: intracluster star formation, ejection, disruption of individual galaxies, or galactic interactions. Some models for in-situ formation of intracluster stars have been suggested. 
Gas-rich galaxies moving through the hot intracluster medium (ICM) might experience ram-pressure stripping. The gas extracted from the galaxies will form dense gaseous streams, which under the right conditions can fragment to form stars \citep{ft92,sunetal10}. This could explain the galactic tails that are observed in some galaxies such as ESO~137-001 and ESO~137-002. \citet{hatchetal08} observed a halo of diffuse UV intergalactic light surrounding a radio galaxy (the Spiderweb Galaxy), providing evidence for in-situ star formation outside galaxies. Also, tidal stripping could provide another mechanism for extracting cold gas from galaxies. In the recent simulations of \citet{pssd10}, up to 30\% of intracluster stars formed in cold gas clouds stripped from substructures infalling into the cluster center. Supernova explosions inside close binary systems \citep{blaauw61} can produce high-velocity stars, with velocities of several hundred $\rm km/s$. Some of these stars could have enough kinetic energy to escape the gravitational potential of the parent galaxy and reach the intracluster space \citep{tf09}. However, as \citet{tf11} argue, this is not an efficient mechanism for populating the intracluster space with stars. Even if one assumes that every supernova produces a high-velocity star with enough velocity to escape, the total number of intracluster stars produced by this mechanism is simply too small. Three- or four-body encounters in dense stellar systems \citep{pra67} could also produce high-velocity runaway stars, and recent simulations suggest that these stars could reach velocities as high as $400\,{\rm km\,s^{-1}}$ \citep{ggpz10}. A galaxy can get disrupted if it loses some of its gravitational binding energy. A gas-rich galaxy can become unbound by losing a significant fraction of its gas. This could be caused by ram-pressure stripping, or by a galactic wind powered by SNe explosions or an AGN. 
Another possible mechanism is the merger of two galaxies which both host a central supermassive black hole. This will likely lead to the formation of a single galaxy with a central binary black hole. If the binding energy of this binary black hole exceeds the binding energy of the galaxy, the gravitational energy extracted from the black holes will disrupt the galaxy \citep{tf11}. While these various processes might contribute to some of the observed ICL, it is generally accepted that most intracluster stars were formed inside galaxies, and were later dispersed into the intracluster space by galaxy interactions taking place during the evolution of the clusters. This likely results from tidal stripping or tidal destruction of galaxies during close encounters (\citealt{weiletal97,gw98,gnedin03,willmanetal04,feldmeieretal04a, rmmb06,cwk07, pbz07}, \citealt{paperI}, hereafter Paper I, \citealt{ymvdb09, wj09,rudicketal09,pssd10}), though an important contribution could also be provided by stars ejected during galactic mergers \citep{muranteetal07}. The ICL tends to be more concentrated than the galactic light \citep{aguerrietal05}, which is interpreted as evidence for the role of galaxy collisions in the origin of the ICL \citep{zibettietal05}. Notice that the higher rate of galaxy collisions and higher ICM pressure found in the central regions of the clusters would tend to increase the efficiency of most of the processes discussed above, the exceptions being ejection of high-velocity stars (unaffected) and SNe-driven galactic winds (possibly inhibited). Several analytical and numerical studies of the origin and evolution of the ICL have been performed. 
These include studies based on analytical modeling of galaxy formation and disruption \citep{pbz07}, N-body simulations of large-scale structure formation combined with an analytical prescription for galaxy formation, dispersion, and merging \citep{napolitanoetal03,rmmb06,hbt08,rmmb11}, and hydrodynamical simulations \citep{willmanetal04,slrp05,muranteetal04,muranteetal07,pssd10,dmb10}. In these numerical studies, there is always a trade-off between having good resolution and good statistics. \citet{napolitanoetal03,willmanetal04,slrp05}, and \citet{rmmb06} simulate either one cluster or a few clusters, so even though these clusters are simulated with high resolution, they might not be representative of the whole cluster population. At the other extreme, \citet{muranteetal04,muranteetal07} simulate a very large cosmological volume, containing a statistically significant sample of clusters, but cannot resolve the scale of dwarf galaxies. In this work, we use an algorithm which combines large-scale cosmological simulations with a semi-analytical treatment of mergers and tidal disruption. This enables us to achieve good statistics while resolving the processes responsible for destroying dwarf galaxies. In this paper, we focus on the relative importance of the tidal destruction and merger processes and their role in the evolution of the cluster luminosities, and do not consider the properties of the ICM. In this case, a full hydrodynamical simulation is not required, and we choose instead to combine an N-body simulation with a subgrid treatment of processes at galactic scales. We use a high-resolution N-body simulation of large-scale structure formation, as in \citet{hbt08} and \citet{rmmb11}, but with a different and complementary approach for galaxy formation, mergers, and tidal destruction, as described in Paper~I (see \S~2.1 below). 
Our goals are to determine (1) the fraction of galaxies of various masses destroyed by tides and mergers during the formation and evolution of the clusters, (2) the contribution of tidal destruction to the ICL, and (3) the brightness profile of the ICL resulting from tidal destruction. \section{THE NUMERICAL METHOD} \subsection{N-body Simulation} Simulating the formation and destruction of dwarf galaxies in a cosmological context is quite challenging, because of the large dynamical range involved. To get statistically significant results, we need to simulate a volume of the universe large enough to contain several rich clusters. To estimate this volume, we use the cluster mass function of \citet{bc93}, \begin{equation} n_c(>\!M)\simeq4\times10^{-5}\left({M\over M^*}\right)^{-1} e^{-M/M^*}h^3{\rm Mpc}^{-3}\,, \end{equation} \noindent where $M^*\simeq1.8\times10^{14}h^{-1}\msun$. Using $h=0.704$ and $M=10^{14}\msun$, we get $n_c=2.41\times10^{-5}{\rm Mpc}^{-3}$. For a cubic volume of size $100\,{\rm Mpc}$, this gives 24 clusters more massive than $M=10^{14}\msun$, which is probably sufficient to get good statistics. In a $\Lambda$CDM universe with $\Omega_0=0.268$, a $(100\,{\rm Mpc})^3$ volume contains a mass of $M_{\rm tot}=3.69\times10^{16}\msun$. If we take the minimum mass of a dwarf galaxy to be $M_{\rm dw}=10^9\msun$, we get $M_{\rm tot}/M_{\rm dw}=3.69\times10^7$. Let us assume that we perform an N-body simulation with equal-mass particles, and that it takes a minimum of 100 particles per galaxy to properly resolve processes such as galaxy mergers and tidal destruction; we would then need 3.69~billion particles. This is comparable to some of the largest N-body simulations ever performed to date, and would require an enormous investment in human and computer resources. 
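The resolution estimate above is simple enough to verify explicitly; the short check below just reproduces the quoted numbers (all inputs are values stated in the text):

```python
import math

h = 0.704
M_star = 1.8e14 / h            # Msun
M = 1.0e14                     # Msun

# Cluster mass function above, evaluated at M = 1e14 Msun
n_c = 4e-5 * (M / M_star) ** -1 * math.exp(-M / M_star) * h**3  # Mpc^-3

V = 100.0**3                   # (100 Mpc)^3 box volume
clusters = n_c * V             # ~24 clusters above 1e14 Msun

# Particle count needed for 100 particles per 1e9 Msun dwarf galaxy
M_tot = 3.69e16                # Msun in the box
M_dw = 1.0e9                   # Msun
particles_needed = 100 * M_tot / M_dw   # ~3.7e9 particles
```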
The hydrodynamical simulations of \citet{muranteetal07} use three different kinds of particles (dark matter, gas, and stars) with masses $6.57\times10^9\msun$, $9.86\times10^8\msun$, and $4.95\times10^8\msun$, respectively. The gravity-only simulations of \citet{rmmb11} use particles of mass $5\times10^8\msun$, while our own simulation uses particles of mass $2.75\times10^8\msun$. \citet{hbt08} used the results of the {\sl Millennium Simulation\/} \citep{springeletal05}, with particles of mass $1.18\times10^9\msun$. None of these simulations can resolve the internal structure of dwarf galaxies and properly simulate the destruction of these galaxies by mergers and tides. To solve this problem, \citet{muranteetal07} use a group finder (SKID, see \citealt{stadel01}) to determine whether particles belong to galaxies or are located in the intracluster space. \citet{rmmb11} used instead a standard ``zoom-in'' technique. They first performed a relatively low-resolution simulation, selected a subset of massive clusters at redshift $z=0$, and ran the simulation a second time with more resolution inside the regions where these clusters formed. \citet{pssd10} used the same approach, by selecting a sample of 16 clusters from the {\sl Millennium Simulation\/} and resimulating them with higher resolution. This approach provides very high resolution at reasonable computational cost. However, only a few clusters are being simulated at that high resolution. \citet{hbt08} use a semi-analytical model to describe galaxy formation and disruption. In this paper, we use an approach that we first introduced in Paper~I. We represent each galaxy in the system (regardless of its mass) using {\it one single particle}. In this approach, the merger and tidal destruction of galaxies cannot be directly simulated, but instead are treated in the algorithm as subgrid physics. 
When two particles representing galaxies come close to each other, we can calculate the gravitational potential energy between them. We can also calculate the tidal field caused by one galaxy at the location of the other. With these, we can set rules that dictate when mergers and tidal destruction take place. This is fairly crude compared to an actual simulation of the mergers and tidal destruction events, {\it but is expected to make statistically correct predictions for a large number of events.\/} The main advantage of this approach is that it does not rely on zoom-ins, and thus enables us to simulate a larger number of clusters at high resolution. This approach was developed and tested on isolated clusters (Paper~I). In this paper, we apply the same approach to a cosmological simulation, which enables us to follow the formation and evolution of a statistically significant number of clusters. We consider a concordance $\Lambda$CDM model with $\Omega_0=0.268$, $\lambda_0=0.732$, and $h=0.704$. We perform a high-resolution simulation in a $(100\,{\rm Mpc})^3$ comoving box with periodic boundary conditions, using a Particle-Mesh (PM) algorithm with $512^3$ particles and a $1024^3$ mesh. The total mass in the box is $M_{\rm tot}=3.686\times10^{16}\msun$ and the mass per particle is $M_{\rm part}=2.747\times10^{8}\msun$. The length resolution is $97.7\,\rm kpc$ comoving. We assume that dwarf galaxies of mass $M_{\min}=2\times10^9\msun$ form inside cells where the density is 200 times the mean density. Each galaxy is represented by a ``galaxy particle.'' These particles are treated like PM particles, but have the ability to form, merge, and get tidally destroyed. The treatment of these processes by the algorithm is described in the following sections. 
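For bookkeeping, the discretization parameters quoted above follow directly from the box size and the particle and mesh counts (a sanity check, not simulation code):

```python
box = 100.0                     # Mpc, comoving box size
M_tot = 3.686e16                # Msun, total mass in the box

M_part = M_tot / 512**3         # ~2.747e8 Msun per particle
cell_kpc = box / 1024 * 1000.0  # ~97.7 kpc comoving per PM cell

# Minimum galaxy mass expressed in dark matter particles
M_min = 2e9                     # Msun
particles_per_dwarf = M_min / M_part    # ~7 particles
```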
\subsection{Formation of Dwarf Galaxies} To include galaxy formation in our N-body simulations, we assume that dwarf galaxies of mass $M=M_{\min}$ form by monolithic collapse, while more massive galaxies form by the merger of smaller galaxies. Therefore, in our code implementation, the formation of massive galaxies by mergers is handled by the merging module, and the galaxy formation module only handles the formation of galaxies of mass $M=M_{\min}$. The computational volume is divided into $1024^3$ PM cells. We assume that dwarf galaxies form in cells where the density $\rho_X^{\phantom1}$ of background matter exceeds a threshold density $\rho_{\rm GF}^{\phantom1}=\Delta_c\bar\rho(z)$, where $\bar\rho(z)$ is the mean density of the universe at redshift $z$, and $\Delta_c=200$. We use the density of the background matter, and not the total density, because the matter already locked up in galaxies is unavailable to form new galaxies, except by mergers. We assume that in each cell that satisfies this criterion, there is a probability $P$ of forming a dwarf galaxy of mass $M_{\min}$ during a time interval $\Delta t$, given by $P=\Psi\Delta t$, where $\Psi$ is a galaxy formation rate. The number of galaxies created during a timestep is therefore \begin{equation} N_{\rm gal}=\Psi N_{\rm cell}\Delta t\,, \label{GF} \end{equation} \noindent where $N_{\rm cell}$ is the number of cells that satisfy the criterion. We adjust the value of $\Psi$ by requiring that the galaxy luminosity function at $z=0$ be consistent with observations (see \S3.1 below). We select a subset of $N_{\rm gal}$ cells randomly among the $N_{\rm cell}$ cells that satisfy the criterion $\rho_X^{\phantom1}>\rho_{\rm GF}^{\phantom1}$, and we create a galaxy of mass $M_{\min}$ in each of these cells. 
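The cell-selection step just described can be sketched as follows. This is a schematic stand-in for the actual code of Paper~I; the function name, the toy density field, and the lowered threshold `delta_c=5` (used only so the small random field contains eligible cells) are our own:

```python
import numpy as np

def spawn_dwarfs(rho_bg, rho_mean, psi, dt, delta_c=200.0, rng=None):
    """Pick the cells that form a dwarf galaxy during one timestep.

    rho_bg : 3-D array of background-matter density, one value per PM cell
    psi    : galaxy-formation rate per eligible cell per unit time
    Returns the flat indices of the randomly selected cells.
    """
    if rng is None:
        rng = np.random.default_rng()
    eligible = np.flatnonzero(rho_bg > delta_c * rho_mean)
    # N_gal = Psi * N_cell * dt, capped at the number of eligible cells
    n_gal = min(int(round(psi * eligible.size * dt)), eligible.size)
    return rng.choice(eligible, size=n_gal, replace=False)

# Toy usage on a small lognormal density field
rng = np.random.default_rng(1)
rho = rng.lognormal(mean=0.0, sigma=2.0, size=(16, 16, 16))
cells = spawn_dwarfs(rho, rho.mean(), psi=0.05, dt=1.0, delta_c=5.0, rng=rng)
```

Sampling without replacement guarantees that at most one galaxy forms per cell per timestep, which matches the one-galaxy-per-selected-cell prescription above.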
To do so, we consider a Gaussian density profile: \begin{equation} \rho(r)=\rho_{\rm GF}^{\phantom1}e^{-r^2/2w^2}\,, \end{equation} \noindent where $r$ is the distance from the center of the cell, and the width $w$ is defined by $(2\pi)^{3/2}\rho_{\rm GF}^{\phantom1}w^3=M_{\min}$. This profile contains a total mass $M_{\min}$. We identify all dark matter particles located within a distance $r=4w$ from the center of the cell. We then remove from each particle a mass $\Delta m=Ce^{-r^2/2w^2}$, where the constant $C$ is adjusted such that the total mass removed is equal to $M_{\min}$. Instead of locating the newborn galaxy in the exact center of the cell, we calculate the center-of-mass position and velocity of the matter that was removed from dark matter particles, and these become the position and velocity of the galaxy, respectively. This ensures that mass and momentum are conserved. The initial radius of the galaxy is set to $s=(3M_{\min}/4\pi\rho_{\rm GF}^{\phantom1})^{1/3}$, the radius of a uniform sphere with $M=M_{\min}$ and $\rho=\rho_{\rm GF}^{\phantom1}$. For the simulation presented in this paper, we used a minimum mass $M_{\min}=2\times10^9\msun$, which corresponds to the mass of 7 dark matter particles. The corresponding filter width is $w=25.8\,{\rm kpc}$, or 0.2645 PM cells. \subsection{The Subgrid Physics} Our treatment of subgrid physics is presented in Paper I, and we refer the reader to that paper for details. Here we briefly summarize the method used. At each timestep, we identify all pairs of galaxies that are sufficiently close that the center of one galaxy is inside the other galaxy, that is, $r_{ij}<\max(s_i,s_j)$, where $r_{ij}$ is the separation between galaxies $i$ and $j$, and $s_i$, $s_j$ are their radii. 
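As a quick numerical check of the galaxy-formation kernel quoted above ($w=25.8\,{\rm kpc}$, or $0.2645$ PM cells), using the box's comoving mean density for $\bar\rho$ (an assumption that holds in comoving units):

```python
import math

M_min = 2e9                          # Msun
rho_mean = 3.686e16 / 100.0**3       # Msun / Mpc^3, comoving mean density
rho_GF = 200.0 * rho_mean            # formation threshold density

# Width from (2*pi)^(3/2) * rho_GF * w^3 = M_min
w = (M_min / ((2 * math.pi) ** 1.5 * rho_GF)) ** (1 / 3)   # Mpc
w_kpc = 1000.0 * w                   # ~25.8 kpc
w_cells = w / (100.0 / 1024)         # ~0.2645 PM cells

# Initial galaxy radius s = (3 M_min / (4 pi rho_GF))^(1/3)
s = (3 * M_min / (4 * math.pi * rho_GF)) ** (1 / 3)        # ~0.040 Mpc
```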
For each pair, we calculate the total energy of the two galaxies in the center-of-mass frame, \begin{equation} E_{ij}=K_i+K_j+W_{ij}+U_i+U_j\,, \label{energy} \end{equation} \noindent where $K_i$ and $K_j$ are the kinetic energies of galaxies $i$ and $j$ in the center-of-mass frame, $U_i$ and $U_j$ are the internal energies of the galaxies, and $W_{ij}=-Gm_im_j/r_{ij}$ is the gravitational potential energy of the galaxy pair. The internal energies depend on the masses and radii of the galaxies. They are given by \begin{equation} U_i=-{\zeta Gm_i^2\over2s_i}\,, \end{equation} \noindent where $m_i$ and $s_i$ are the mass and radius of galaxy $i$, respectively. The {\it geometric factor} $\zeta$ depends on the density profile of the galaxy, but does not vary much for any reasonable profile, so, as in Paper~I, we use $\zeta=1$, which is the correct value for a truncated isothermal sphere in virial equilibrium. If $E_{ij}<0$, the merger criterion is satisfied and the two galaxies merge, to form a new galaxy $k$. The mass, position, velocity, and radius of that galaxy are initialized using: \begin{eqnarray} m_k&=&m_i+m_j\,,\\ {\bf r}_k&=&{\bf r}_{ij}\,,\\ {\bf v}_k&=&{\bf v}_{ij}\,,\\ s_k&=&-{\zeta Gm_k^2\over2E_{ij}}={\zeta Gm_k^2\over2|E_{ij}|}\,, \end{eqnarray} \noindent where ${\bf r}_{ij}$ and ${\bf v}_{ij}$ are the center-of-mass position and velocity, respectively. These equations ensure conservation of mass, momentum, and energy during mergers. Tidal disruption is trickier. When a galaxy is disrupted by the tidal field of a more massive one, the inner part of the galaxy might survive, while the outer part gets stripped. Some of the stripped material might then escape the system, or might get accreted by the massive galaxy. We simplify the problem by using an all-or-nothing approach. 
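The merger test and remnant initialization above can be sketched in a few lines. This is our own schematic, not the Paper~I implementation; $G$ is expressed in ${\rm Mpc\,(km/s)^2}\,M_\odot^{-1}$ units and the kinetic term uses the reduced-mass identity $K_i+K_j=\tfrac12\mu v_{\rm rel}^2$:

```python
G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun

def try_merge(m_i, m_j, r_ij, v_rel, s_i, s_j, zeta=1.0):
    """Pairwise merger criterion: merge if the total energy E_ij < 0.

    r_ij is the separation [Mpc] and v_rel the relative speed [km/s].
    Returns (merged, remnant mass, remnant radius)."""
    mu = m_i * m_j / (m_i + m_j)
    K = 0.5 * mu * v_rel**2                  # K_i + K_j in the CoM frame
    W = -G * m_i * m_j / r_ij
    U = -zeta * G * (m_i**2 / (2 * s_i) + m_j**2 / (2 * s_j))
    E = K + W + U
    if E >= 0:
        return False, None, None
    m_k = m_i + m_j
    s_k = zeta * G * m_k**2 / (2 * abs(E))   # radius of the merged galaxy
    return True, m_k, s_k

# A slow, close pair of dwarfs merges; the same pair at high speed does not.
merged, m_k, s_k = try_merge(2e9, 2e9, 0.03, 10.0, 0.04, 0.04)
fast, _, _ = try_merge(2e9, 2e9, 0.03, 500.0, 0.04, 0.04)
```

Since the remnant radius is set from $|E_{ij}|$, a marginally bound pair produces a puffed-up remnant, while a strongly bound pair produces a compact one, which is how the prescription conserves energy.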
We identify pairs of galaxies which are sufficiently close that the separation between their {\it edges\/} is smaller than the radius of the larger galaxy, that is, $r_{ij}-s_i-s_j<\max(s_i,s_j)$. Let us assume that galaxy $i$ is the more massive of the pair. We compare the tidal field of galaxy $i$ at the location of galaxy $j$ with the gravitational field of galaxy $j$ itself. If we estimate that the tidal field is strong enough to unbind more than 50\% of the mass of galaxy $j$, then the tidal disruption criterion is satisfied, and we assume that galaxy $j$ is totally destroyed. Otherwise, galaxy $j$ survives the encounter. In the case of a tidal destruction, we also check the merger criterion [eq.~(\ref{energy})]. If that criterion is also satisfied, we assume that galaxy $j$ is destroyed by the tidal field of galaxy $i$, and that the fragments then accrete onto galaxy $i$. Numerically, this is treated as a merger. Galaxy particles being destroyed and not reaccreted are flagged, to indicate that they no longer represent galaxies but rather tidal fragments. They remain in the simulation, but are ignored during subsequent encounters. This enables us to track the motion of tidal fragments, and eventually determine in which cluster they end up (see \S~3.3 below). As we argue in Paper~I, our treatment of mergers and tidal disruption would be too simplistic to describe individual events, but can be used to describe the net, collective effect of tens of thousands of galactic encounters. \section{RESULTS} \subsection{Galaxy Luminosity Function and Stellar Mass Function} \begin{figure} \begin{center} \includegraphics[width=6in]{fig01.eps} \caption{Top panel: Luminosity function of galaxies at $z=0$. Dotted histogram: central galaxies only; solid histogram: all galaxies; solid curve: Schechter luminosity function with $\phi^*=1.61\times10^{-2}h^3{\rm Mpc}^{-3}$, $L^*=9.64\times10^9h^{-2}\lsun$ and $\alpha=-1.21$. 
Bottom panel: Stellar mass function of galaxies at $z=0$. Histogram: all galaxies; solid curve: Stellar mass function of \citet{bgd08} (their eqs.~[2]-[3]).} \label{histo2} \end{center} \end{figure} The simulation produces 79,751 galaxies with masses in the range $2\times10^9\msun-7.47\times10^{13}\msun$, including 251 objects with mass $M>10^{12}\msun$. Clearly, such massive objects cannot be individual galaxies. We interpret them as individual subhalos hosting several galaxies, more specifically a {\it central galaxy\/} and one or several {\it satellite galaxies}. With this in mind, we can now calculate the luminosity function of galaxies. To convert masses into luminosities, we use the conditional luminosity function of \citet{ymvdb03}. The average number of galaxies in the luminosity interval $[L-dL/2,L+dL/2]$ hosted by a halo of mass $M$ is given by \begin{equation} \phi(L|M)={\tilde\phi^*\over\tilde L^*} \left({L\over\tilde L^*}\right)^{\tilde\alpha}e^{-L/\tilde L^*}\,, \label{philm} \end{equation} \noindent where $\tilde\phi^*$, $\tilde L^*$, and $\tilde\alpha$ are functions of the halo mass. The average mass-to-light ratio of halos is approximated by the following fitting formula: \begin{equation} {M\over L}={1\over2}\left({M\over L}\right)_0 \left[\left({M\over M_1}\right)^{-\beta} +\left({M\over M_1}\right)^{\gamma^{\phantom0}_1}\right]\,. \label{ml1} \end{equation} \noindent If a halo hosts more than one galaxy, the mean luminosity of the most luminous galaxy is given by \begin{equation} \bar L_c(M)=\tilde\phi^*\tilde L^*\Gamma(\tilde\alpha+2,L_1/\tilde L^*)\,, \label{lc} \end{equation} \noindent where $\Gamma$ is the incomplete Gamma function, and $L_1$ is defined by \begin{equation} \tilde\phi^*\Gamma(\tilde\alpha+1,L_1/\tilde L^*)=1\,. \end{equation} \noindent Note that there are more recent studies of the halo mass function and stellar mass function, from which the mass-to-light ratio can be calculated \citep{tinkeretal08,bcw10}. 
But these studies are limited to halos of mass $M>10^{11}\msun$. The $M/L$ ratio of \citet{ymvdb03} covers the range $M=10^9h^{-1}\msun-10^{14}h^{-1}\msun$, which is appropriate for our work. We used their model M1, defined by $M_1=10^{11.27}h^{-1}\msun$, $(M/L)_0=134h\msun/\lsun$, $\beta=0.77$, $\gamma^{\phantom0}_1=0.32$. Using equation~(\ref{lc}), we calculated the central luminosity $\bar L_c$ of each galaxy particle in the simulation. The resulting luminosity function is shown by the dotted histogram in the top panel of Figure~\ref{histo2}. For comparison, the solid curve shows a Schechter luminosity function with $\phi^*=1.61\times10^{-2}h^3{\rm Mpc}^{-3}$, $L^*=9.64\times10^9h^{-2}\lsun$ and $\alpha=-1.21$ \citep{ymvdb03}. There is an excellent agreement at high luminosities, $L>10^{10}\lsun$. At lower luminosities, our simulated luminosity function is systematically below the Schechter function, except in the lowest bin where our results exceed the Schechter function by an order of magnitude. However, if we interpret galaxy particles of mass $M>10^{12}\msun$ as actually being halos containing several galaxies, then equation~(\ref{lc}) underestimates the total luminosity of these objects. By integrating equation~(\ref{philm}), we can calculate the contribution of satellite galaxies in each luminosity bin, and include it in the luminosity function. The result is shown by the solid histogram in the top panel of Figure~\ref{histo2}. There is now a good agreement with the Schechter function at all luminosities $L>10^{8.5}\lsun$, while there is still a mismatch at lower luminosities. We attribute this to the discreteness of the algorithm, in which all galaxy masses are multiples of $M_{\min}$. In particular, the three lowest luminosity bins in Figure~\ref{histo2} contain, respectively, galaxies of mass $M_{\min}$, of mass $2M_{\min}$, and of masses $3M_{\min}$ and $4M_{\min}$. 
We can understand the large excess in the lowest luminosity bin by noting that galaxies of mass $M_{\min}$ are allowed to form directly, while more massive galaxies must be built up through a series of mergers. We find this agreement quite remarkable. There is nothing in our galaxy formation algorithm that guarantees a priori that the final luminosity function would even resemble a Schechter function. The model was never tuned to obtain this result, and there is not much that {\it could\/} be tuned. We use a density threshold of $200\bar\rho$ for identifying collapsed objects, which is a standard value based on theoretical arguments.\footnote{This is an approximation to $18\pi^2$, the exact value for a spherical collapse in an $\Omega_0=1$ universe.} The geometric factor $\zeta$ entering into the merger criterion is a free parameter, but its value is close to unity for any reasonable density profile. We argue that the features seen in the simulated luminosity function at low luminosities are caused by the discreteness of the galaxy masses. Hence, using a different value of $M_{\min}$ would most likely move these features to different luminosities without affecting the high end of the luminosity function. The only real free parameter in our model is the galaxy formation rate $\Psi$. Changing that parameter might improve the fit at low luminosities, at the cost of worsening it at high luminosities. Overall, we find that our model provides a satisfactory fit to the Schechter luminosity function. In particular, it correctly predicts the value of the luminosity break $L_*$, though that value (or the corresponding mass $M_*$) is not built into the model. Since the $M$ vs $L$ relation is non-linear, when two galaxies merge, the merger remnant does not have the total luminosity of the two progenitors. This simply reflects the fact that galactic evolution is much more than just a series of mergers.
The luminosity of galaxies is affected by processes such as star formation and evolution, accretion, galactic winds, AGN activity, and so on. Since these processes are not included in our model, we cannot predict the evolution of the luminosities of galaxies during and between mergers. This is why we calculate the values of the luminosity using the observed $M$ vs. $L$ relation. Once we have the luminosities, we can easily estimate the stellar masses. We use the relation given by \citet{belletal03} for g-band luminosities: $\log_{10}(M_{\rm st}/L)=-2.61+0.298\log_{10}M_{\rm st}h^2$, where the stellar mass $M_{\rm st}$ and luminosity $L$ are in solar units. For a cosmological model with $h=0.704$, this reduces to $M_{\rm st}=0.000142L^{1.425}$. The resulting galactic stellar mass function is shown by the histogram in the bottom panel of Figure~\ref{histo2}. For comparison, we show the observed stellar mass function of \citet{bgd08}, which covers the stellar mass range $M_{\rm st}=10^8-10^{12}\msun$. There is an excellent agreement over the range of masses covered by the observations, except possibly at the highest mass bin. \subsection{Global Properties} Figure~\ref{destruct} shows the cumulative number of mergers, of tidal destruction events with the tidal fragments dispersed in the intracluster space, and of tidal destruction events followed by accretion of the fragments onto the massive galaxy (these cases are also counted as mergers). The number of events of all types increases roughly exponentially with redshift. The delay between the start of merger events ($z=5.9$) and tidal destruction events ($z=4.8$) reflects the time it takes to build galaxies of sufficiently different masses, an essential condition for tidal destruction. At redshifts $z>1$, more than 90\% of tidal destruction events result in the fragments being dispersed into the intracluster space, and therefore contributing to the ICL.
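The closed-form luminosity-to-stellar-mass conversion quoted here can be wrapped in a short helper (a sketch; valid for g-band luminosities and $h=0.704$ only):

```python
def stellar_mass(L):
    """Stellar mass [Msun] from g-band luminosity [Lsun]: Mst = 0.000142 L^1.425."""
    return 0.000142 * L**1.425

def luminosity_from_mstar(Mst):
    """Inverse relation: L [Lsun] from stellar mass [Msun]."""
    return (Mst / 0.000142)**(1.0 / 1.425)
```

Because the exponent exceeds unity, doubling the mass raises the luminosity by only a factor $2^{1/1.425}\approx1.63$: the nonlinearity invoked in \S~3.1 to explain why a merger remnant is less luminous than the sum of its progenitors.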
After $z=1$, accretion of tidal fragments by the massive galaxy becomes more common, and dominates after $z=0.5$. \begin{figure} \begin{center} \includegraphics[width=6.5in]{fig02.eps} \caption{ Cumulative number of mergers (solid line), tidal destruction events, with fragments dispersed in the intracluster space (dotted line), and tidal destruction events followed by accretion of the fragments onto the massive galaxy (dashed line), versus redshift.} \label{destruct} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.5in]{fig03.eps} \caption{ Encounters resulting in tidal destruction, with dispersion of the fragments. $m_1$ and $m_2$ are the mass of the lower- and higher-mass galaxies, respectively. Dashed lines indicate mass ratios.} \label{tides_mass} \end{center} \end{figure} Figure~\ref{tides_mass} shows the mass of the galaxies involved in tidal destruction followed by dispersion, with $m_1$ being the mass of the galaxy being destroyed, and $m_2$ being the mass of the other, more massive galaxy. The most striking feature is that tidal destruction is not limited to dwarf galaxies, but covers more than three orders of magnitude in mass. The most massive galaxy destroyed had a total mass $m_1=1.29\times10^{13}\msun$, and was destroyed by a galaxy of mass $m_2=2.20\times10^{13}\msun$ at redshift $z=0.48$. Figure~\ref{merge_mass} shows a similar plot, for all merger events (direct mergers, and tidal destruction followed by accretion). There are noticeable differences. First, the top left corner of the plot is populated, showing that encounters with very large mass ratios ($m_2/m_1>2500$) never result in dispersion, but can result in mergers. Second, there is a clear ``desert'' in Figure~\ref{merge_mass} at masses $m_1>3\times10^{11}\msun$ (or $M_{\rm st}>5\times10^9\msun$), between mass ratios $m_2/m_1=1$ and 10, which is not found in Figure~\ref{tides_mass}.
To investigate this, we focus on encounters with $m_1>3\times10^{11}\msun$, and combine the two figures. The results are shown in Figure~\ref{encounters_himass}, where we use different symbols for direct mergers, tidal destruction followed by dispersion, and tidal destruction followed by accretion. There is a remarkable separation between the three processes. Direct mergers occur at low mass ratios, $m_2/m_1<1.2$, dispersion occurs mostly at intermediate ratios, $1.2<m_2/m_1<10$, and accretion occurs mostly at high mass ratios, $m_2/m_1>10$. To explain these results, let us consider what happens, in the simulation, during an encounter between a galaxy of mass $m_1$ and size $s_1$ and a galaxy of mass $m_2>m_1$ and size $s_2$, separated by a distance $r_{12}$. The tidal disruption and merger criteria used in our model will determine the outcome of this encounter. The gravitational field that binds the first galaxy is of order $E\sim Gm_1/s_1^2$, while the tidal field of the second galaxy at the location of the first one is of order $T\sim Gm_2s_1/r_{12}^3$. The ratio of these fields, $T/E$, is then proportional to $(m_2/m_1)(s_1/r_{12})^3$. Since $s_1<r_{12}$, it takes a certain minimum mass ratio for tidal destruction to occur. Hence, encounters between galaxies of comparable masses can only result in either mergers (solid circles in Fig.~\ref{encounters_himass}) or nothing. But if the mass ratio is sufficiently large for the tidal disruption criterion to be satisfied, the lower-mass galaxy will be destroyed, and the merger criterion will determine whether the fragments are dispersed, or accreted by the higher-mass galaxy. Unlike the tidal disruption criterion, the merger criterion depends on the velocity of the galaxies [$K$-terms in eq.~(\ref{energy})]. High-mass galaxies tend to be located in massive clusters.
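Both criteria can be sketched numerically: the tidal ratio $T/E$ follows the scaling just derived, and the merger criterion uses the approximate form written out below as equation~(\ref{energy2}). The size--mass scaling, velocity dispersion, and $\zeta=1$ used in the example are illustrative assumptions, not values taken from the simulation.

```python
G = 4.30e-6  # gravitational constant in kpc (km/s)^2 / Msun

def size(m):
    """Assumed galaxy size [kpc]; a placeholder m^(1/3) scaling."""
    return 50.0 * (m / 1.0e12)**(1.0 / 3.0)

def tidal_ratio(m1, m2, r12):
    """Order-of-magnitude ratio T/E of the tidal field of galaxy 2
    to the field binding galaxy 1: (m2/m1) (s1/r12)^3."""
    return (m2 / m1) * (size(m1) / r12)**3

def orbital_energy(m1, m2, r12, sigma, zeta=1.0):
    """E12 of eq. (energy2): kinetic terms with a common cluster velocity
    dispersion sigma [km/s], minus the mutual and internal binding terms.
    The merger criterion is E12 < 0."""
    return (0.5 * m1 * sigma**2 + 0.5 * m2 * sigma**2
            - G * m1 * m2 / r12
            - zeta * G * m1**2 / size(m1)
            - zeta * G * m2**2 / size(m2))
```

With $m_1=10^{11}\,\msun$, $r_{12}=100$ kpc, and $\sigma=700$ km/s, $E_{12}$ is positive for a comparable-mass companion but turns negative once $m_2$ reaches a few $\times10^{13}\,\msun$, while $T/E$ grows linearly with $m_2$, reproducing the qualitative trend of Figure~\ref{encounters_himass}.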
Their velocities are usually not determined by their own properties or those of their immediate neighbors, but rather by the overall properties of the cluster in which these galaxies are located. In particular, we expect the velocity of a galaxy orbiting inside a cluster to be of order the velocity dispersion at its location, independently of its mass or the masses of its neighbors. We can then rewrite equation~(\ref{energy}) as \begin{equation} E_{12}\approx{m_1\sigma^2\over2}+{m_2\sigma^2\over2}-{Gm_1m_2\over r_{12}} -{\zeta Gm_1^2\over s_1}-{\zeta Gm_2^2\over s_2}\,, \label{energy2} \end{equation} \noindent where $\sigma$ is the local velocity dispersion. If $m_1$ and $m_2$ are small, the kinetic energy terms will dominate, and the criterion will fail ($E_{12}>0$). The galaxies will simply pass by each other without merging, and if the tidal disruption criterion is satisfied (notice that it does not depend on velocities), the lower-mass galaxy will be destroyed and the fragments will be dispersed (crosses in Fig.~\ref{encounters_himass}). If we then keep $m_1$ constant and increase $m_2$, the second, third, and fifth terms in equation~(\ref{energy2}) will increase in amplitude. With two of these terms being negative, and the last one being quadratic in $m_2$, $E_{12}$ will decrease, and for high enough values of $m_2$, the criterion $E_{12}<0$ will be satisfied. The galaxies will merge, and since increasing $m_2$ while keeping $m_1$ constant also favors the tidal disruption criterion, the outcome will be tidal destruction followed by reaccretion of the fragments (open circles in Fig.~\ref{encounters_himass}). \begin{figure} \begin{center} \includegraphics[width=6.5in]{fig04.eps} \caption{ Encounters resulting in mergers (including tidal destruction followed by accretion). $m_1$ and $m_2$ are the mass of the lower- and higher-mass galaxies, respectively.
Dashed lines indicate mass ratios.} \label{merge_mass} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.5in]{fig05.eps} \caption{ Encounters involving galaxies with masses $m>3\times10^{11}\msun$. $m_1$ and $m_2$ are the mass of the lower- and higher-mass galaxies, respectively. Solid circles: direct mergers; crosses: tidal destruction with fragments dispersed into the intracluster space; open circles: tidal destruction followed by accretion. Dashed lines indicate mass ratios.} \label{encounters_himass} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.5in]{fig06.eps} \caption{ Properties of tidally destroyed galaxies which contribute to the ICL, versus mass. Top panel: number of galaxies in each mass bin; middle panel: total mass in each mass bin; bottom panel: total luminosity in each mass bin.} \label{destruct_histo} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.5in]{fig07.eps} \caption{ Properties of tidally destroyed galaxies which contribute to the ICL, versus stellar mass. Top panel: number of galaxies in each mass bin; middle panel: total stellar mass in each mass bin; bottom panel: total luminosity in each mass bin.} \label{destruct_histo_ms2} \end{center} \end{figure} Figures~\ref{destruct_histo} and~\ref{destruct_histo_ms2} show the properties of the galaxies that are tidally destroyed, with fragments dispersed into the intracluster space, as functions of the galaxies' total mass~$m_1$ (Fig.~\ref{destruct_histo}) and stellar mass~$M_{\rm st}$ (Fig.~\ref{destruct_histo_ms2}). The top panels show the number of galaxies destroyed in different mass bins. Most galaxies destroyed are low-mass galaxies: 89.8\% of galaxies destroyed have masses $m_1<10^{10}\msun$, $M_{\rm st}<2.8\times10^6\msun$, while only 0.16\% have masses $m_1>10^{12}\msun$, $M_{\rm st}>3\times10^{10}\msun$. The middle panels show the total mass and total stellar mass in each mass bin.
The contribution of low-mass galaxies is very small. 42.1\% of the mass in tidal fragments comes from galaxies in the range $m_1=10^{11}-10^{12}\msun$ and $M_{\rm st}=6\times10^8-3\times10^{10}\msun$. The peak at $m_1=M_{\min}=2\times10^9\msun$ results from the fact that there are too many galaxies of that mass to start with, as we explained in \S~3.1. Using equation~(\ref{ml1}), we calculated the total luminosity in each mass bin. The resulting distribution is plotted in the bottom panels of Figures~\ref{destruct_histo} and~\ref{destruct_histo_ms2}. 57.9\% of the ICL comes from galaxies in the range $m_1=10^{11}-10^{12}\msun$, $M_{\rm st}=6\times10^8-3\times10^{10}\msun$, 30.6\% from massive, $m_1>10^{12}\msun$, $M_{\rm st}>3\times10^{10}\msun$ galaxies, and only 11.5\% from low-mass, $m_1<10^{11}\msun$, $M_{\rm st}<6\times10^8\msun$ galaxies, including 1.0\% from $m_1<10^{10}\msun$, $M_{\rm st}<2.8\times10^6\msun$ galaxies. \citet{willmanetal04} also found that intermediate-mass galaxies were an important contributor to the ICL. \subsection{Cluster Analysis} \subsubsection{Intracluster Light Fraction} We identify clusters using a standard friends-of-friends algorithm (FOF) with a linking length equal to 0.25 times the mean particle spacing (corresponding to $48.8\,{\rm kpc}$ comoving). This algorithm identifies the dark matter particles, galaxies, and tidal fragments that belong to each cluster. The term {\it tidal fragment\/} refers here to galaxies that have been flagged as being tidally destroyed, with their fragments dispersed into the intracluster space. At $z=0$, we find 18 massive clusters, with masses $M_{\rm cl}>10^{14}\msun$. For each galaxy and tidal fragment located in these clusters, we calculate the luminosity using equation~(\ref{ml1}). By adding these luminosities, we get the total galaxy luminosity $L_{\rm gal}$ and the total intracluster luminosity $L_{\rm ICL}$ for each cluster. The properties of the clusters are listed in Table~\ref{table_cl}. 
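A minimal version of such a friends-of-friends group finder can be sketched as follows (brute-force $O(N^2)$ pair search and no periodic boundary conditions, so suitable only as an illustration; a tree or grid search would be needed at the particle counts of the simulation):

```python
import numpy as np

def friends_of_friends(pos, linking_length):
    """Label particles by FOF group: particles closer than the linking
    length are friends, and a group is the transitive closure of the
    friendship relation.  pos: (N, 3) positions; returns one integer
    label per particle."""
    n = len(pos)
    parent = np.arange(n)  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # All pairs within the linking length (brute force).
    d2 = ((pos[:, None, :] - pos[None, :, :])**2).sum(axis=-1)
    for i, j in zip(*np.where(d2 < linking_length**2)):
        if i < j:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[rj] = ri

    roots = np.array([find(i) for i in range(n)])
    return np.unique(roots, return_inverse=True)[1]
```

Here the linking length would be $0.25$ times the mean particle spacing, i.e. $48.8$ kpc comoving.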
$M_{\rm cl}$, $M_{\rm gal}$, and $M_{\rm tid}$ are the total mass of the cluster, the mass in galaxies, and the mass in tidal fragments, respectively. $L_{\rm gal}$ and $L_{\rm ICL}$ are the galaxy and intracluster luminosity, respectively, and $f_{\rm ICL}\equiv L_{\rm ICL}/(L_{\rm gal}+L_{\rm ICL})$ is the fraction of intracluster light. The values of $f_{\rm ICL}$ vary from 1\% to 58\%, while observed values vary from 1.6\% to 50\% (see Table~12 of Paper~I). Our simulations therefore reproduce the range of observed values of $f_{\rm ICL}$. However, we only have 4 clusters (out of 18) with $f_{\rm ICL}<20\%$, while such low values are more common among observed clusters. \citet{rmmb11} simulated 6 clusters, and found values of $f_{\rm ICL}$ in the range $9\%-36\%$. Most of their simulated clusters have $f_{\rm ICL}<20\%$\footnote{Each cluster in the simulations of \citet{rmmb11} has several values of $f_{\rm ICL}$ because they experiment with various techniques for calculating that quantity.}. \citet{hbt08} report median values of $f_{\rm ICL}=20\%$ for halos with masses $M_{\rm cl}\sim10^{13}h^{-1}\msun$ and $f_{\rm ICL}=30\%$ for halos with masses $M_{\rm cl}\sim10^{15}h^{-1}\msun$. Visual inspection of their Figure~6 indicates values of $f_{\rm ICL}$ in the range $10\%-50\%$ for the mass range $M_{\rm cl}>10^{14}\msun$ we consider. Overall, there is a broad agreement between the values of $f_{\rm ICL}$ obtained by us, by \citet{rmmb11}, by \citet{hbt08}, and the observed values. The ranges of values are very wide. In \S~3.3.3 below, we investigate the physical origin of this, and argue that it is a consequence of the hierarchical formation of clusters. 
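The bookkeeping behind $f_{\rm ICL}$ is a one-line ratio, which can be checked directly against the rows of Table~\ref{table_cl}:

```python
def icl_fraction(L_gal, L_icl):
    """Intracluster light fraction f_ICL = L_ICL / (L_gal + L_ICL).
    Any common luminosity unit cancels out."""
    return L_icl / (L_gal + L_icl)
```

For cluster C01, $L_{\rm gal}=3.887$ and $L_{\rm ICL}=4.207$ (in units of $10^{11}\,\lsun$) give $f_{\rm ICL}\approx52\%$, the value listed in the table.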
\begin{deluxetable}{lcccccc} \tablecolumns{7} \tablecaption{Properties of Massive Clusters} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{$M_{\rm cl}$ $[10^{14}\msun]$} & \colhead{$M_{\rm gal}$ $[10^{14}\msun]$} & \colhead{$M_{\rm tid}$ $[10^{14}\msun]$} & \colhead{$L_{\rm gal}$ $[10^{11}\lsun]$} & \colhead{$L_{\rm ICL}$ $[10^{11}\lsun]$} & \colhead{$f_{\rm ICL}$ $[\%]$} } \startdata C01 & 12.16 & 0.869 & 0.539 & 3.887 & 4.207 & 52 \cr C02 & 10.28 & 0.914 & 0.313 & 3.289 & 2.151 & 40 \cr C03 & 5.62 & 0.573 & 0.110 & 2.804 & 0.879 & 24 \cr C04 & 5.40 & 0.369 & 0.223 & 2.156 & 1.866 & 46 \cr C05 & 3.34 & 0.228 & 0.169 & 1.071 & 1.504 & 58 \cr C06 & 2.98 & 0.315 & 0.037 & 1.916 & 0.256 & 12 \cr C07 & 2.52 & 0.204 & 0.096 & 1.279 & 0.813 & 39 \cr C08 & 2.02 & 0.192 & 0.045 & 1.204 & 0.401 & 25 \cr C09 & 1.94 & 0.187 & 0.041 & 1.236 & 0.317 & 20 \cr C10 & 1.71 & 0.130 & 0.072 & 1.019 & 0.668 & 40 \cr C11 & 1.63 & 0.146 & 0.040 & 1.077 & 0.351 & 25 \cr C12 & 1.36 & 0.094 & 0.066 & 0.719 & 0.644 & 47 \cr C13 & 1.23 & 0.124 & 0.021 & 0.795 & 0.162 & 17 \cr C14 & 1.21 & 0.105 & 0.032 & 0.828 & 0.277 & 25 \cr C15 & 1.13 & 0.103 & 0.033 & 0.669 & 0.312 & 32 \cr C16 & 1.11 & 0.117 & 0.003 & 0.745 & 0.006 & 1 \cr C17 & 1.09 & 0.108 & 0.012 & 0.702 & 0.087 & 11 \cr C18 & 1.06 & 0.162 & 0.053 & 1.234 & 0.497 & 29 \cr \enddata \label{table_cl} \end{deluxetable} \begin{figure} \begin{center} \includegraphics[width=7in]{fig08.eps} \caption{ Top left panel: fraction of cluster mass inside galaxies (solid circles) and inside tidal fragments (open circles) versus cluster mass. Top right panel: galaxy stellar mass (solid circles) and intracluster stellar mass (open circles) versus cluster mass. Bottom left panel: galaxy luminosity (solid circles) and intracluster luminosity (open circles) versus cluster mass. 
Bottom right panel: intracluster light fraction versus cluster mass.} \label{clusters} \end{center} \end{figure} Figure~\ref{clusters} shows the dependencies of cluster properties on the total mass of the cluster. The top left panel shows the mass fraction in galaxies and tidal fragments. There is a lot of scatter, but overall the mass fraction is around 0.08 for galaxies and 0.03 for tidal fragments, with $M_{\rm gal}>M_{\rm tid}$ for all clusters. The top right panel shows the stellar masses $M_{\rm st,gal}$ and $M_{\rm st,ICS}$ in galaxies and intracluster stars, and the bottom left panel shows the luminosities $L_{\rm gal}$ and $L_{\rm ICL}$. All these quantities increase roughly linearly with $M_{\rm cl}$. Most of the light comes from the galaxies, with two notable exceptions: clusters C01 (the most massive one) and C05. The bottom right panel shows the intracluster light fraction $f_{\rm ICL}$. There is no obvious correlation with cluster mass, except for the fact that massive clusters tend to have large values of $f_{\rm ICL}$: 4 of the 5 most massive clusters have $f_{\rm ICL}\geq40\%$, while only 2 of the 13 least massive ones have values of $f_{\rm ICL}$ this high. Some studies have found no significant dependence of $f_{\rm ICL}$ on cluster mass \citep{dmb10,pssd10,rmmb11}, while others found that $f_{\rm ICL}$ tends to increase with $M_{\rm cl}$ \citep{pbz07,muranteetal07,hbt08}. Our results are somewhat intermediate. We do not find very massive clusters with low $f_{\rm ICL}$, but we do find some less-massive clusters with high $f_{\rm ICL}$. \subsubsection{Cluster Evolution} \begin{figure} \begin{center} \includegraphics[width=7in]{fig09.eps} \caption{ Time-evolution of the galaxy luminosity (top panels, solid curves), intracluster luminosity (top panels, dotted curves), and intracluster light fraction $f_{\rm ICL}$ (bottom panels), for clusters C01, C04, C08, and C14.
$t_0$ is the present age of the universe.} \label{evol4} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7in]{fig10.eps} \caption{ Merger tree for cluster C04. The area of each circle is proportional to the mass of the cluster. The numbers next to some clusters indicate the value of $f_{\rm ICL}$ in percentage (top), and the relative contribution of that cluster to the mass of the merger remnant, also in percentage (bottom). The dotted box indicates a major merger, where the most massive progenitor contributes only 38\% of the final mass of the merger remnant. Redshifts are shown on the left. For clarity, only clusters with $20,000$ particles or more are shown.} \label{mergertree} \end{center} \end{figure} We used our FOF algorithm to build cluster catalogs at various redshifts, and combined these catalogs to build merger trees for all 18 massive clusters found at $z=0$. We then traced the ancestry of each cluster back in time, following the most massive progenitor. Figure~\ref{evol4} shows the evolution of the galaxy luminosity $L_{\rm gal}$, intracluster luminosity $L_{\rm ICL}$, and intracluster light fraction $f_{\rm ICL}$, for a subset of 4 clusters. $L_{\rm ICL}$ increases with time as galaxies are tidally destroyed. It could only decrease if tidal fragments were ejected from the clusters, but this never seems to happen. $L_{\rm gal}$ is affected by several processes. Galaxy formation increases $L_{\rm gal}$, while tidal destruction followed by dispersal decreases it. Galaxy merger events, and tidal destruction followed by accretion, both replace two galaxies by one with the same total mass. Since $L$ does not vary linearly with $M$, the new galaxy does not have the total luminosity of its two progenitors, as we explained in \S3.1. Overall, the values of $f_{\rm ICL}$ tend to increase with time, in agreement with the results of \cite{rmmb11}.
However, there can be sudden drops in the value of $f_{\rm ICL}$, such as the one seen at $t/t_0=0.6$ in cluster C04. These drops are caused by major cluster mergers. When two clusters of comparable masses, but with very different values of $f_{\rm ICL}$, merge, the merger remnant will have a value of $f_{\rm ICL}$ that is intermediate between the values for the progenitors. If the progenitor with the largest value of $f_{\rm ICL}$ was identified as the main progenitor, then the net effect of the merger is to decrease $f_{\rm ICL}$ for that cluster. To illustrate this, we show in Figure~\ref{mergertree} the merger tree for cluster C04. Though the cluster was built mostly through a series of minor mergers (in which one progenitor provides 80\% or more of the mass), we see that a major merger took place between redshifts $z=0.70$ and $0.49$. The main progenitor has a value $f_{\rm ICL}=0.46$, but provides only 38\% of the mass. The next three progenitors all have lower values of $f_{\rm ICL}$, and together provide 41\% of the mass. As a result, the value of $f_{\rm ICL}$ drops from 0.46 to 0.29. A similar thing happens at $t/t_0=0.95$ for cluster C08. In the cases of clusters C01 and C14, we found that the late drop in $f_{\rm ICL}$ was not caused by a major merger, but rather by a sudden increase in galaxy formation. These results are consistent with the results of \citet{rmmb11}. Their simulations show that $f_{\rm ICL}$ does not increase at a constant rate, and does not always vary monotonically.
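These drops follow from simple luminosity bookkeeping. If the galaxy and intracluster luminosities of merging clusters simply add (ignoring any new tidal destruction triggered by the merger itself, an assumption of this sketch), the remnant's $f_{\rm ICL}$ is a luminosity-weighted mean of the progenitors' values and therefore always lies between their extremes:

```python
def merge_icl_fractions(progenitors):
    """progenitors: list of (L_gal, L_ICL) pairs, one per merging cluster.
    Returns f_ICL of the merger remnant, assuming the luminosities add."""
    L_gal = sum(p[0] for p in progenitors)
    L_icl = sum(p[1] for p in progenitors)
    return L_icl / (L_gal + L_icl)
```

A main progenitor with $f_{\rm ICL}=46\%$ merging with comparable-mass progenitors of lower $f_{\rm ICL}$ thus ends up at an intermediate value, as seen for cluster C04.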
\subsubsection{The Range of Intracluster Light Fractions} \begin{deluxetable}{lcrccccccc} \tablecolumns{10} \tablecaption{Main Ancestry of Massive Clusters} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{$f_{\rm ICL}^{\phantom1}\ [\%]$} & \colhead{$z=0.10$} & \colhead{0.20} & \colhead{0.31} & \colhead{0.49} & \colhead{0.70} & \colhead{1.05} & \colhead{1.41} & \colhead{2.06} } \startdata C01 & 52 & 90 & 90 & 86 & 80 & 57 & {\bf[35]} & 79 & {\bf[22]} \cr C02 & 40 & 89 & 92 & 87 & 78 & 84 & {\bf[35]} & 80 & {\bf[31]} \cr C04 & 46 & 90 & 92 & 82 & 70 & {\bf[38]} & 82 & 61 & {\bf[19]} \cr C05 & 58 & 92 & 93 & 90 & 81 & 87 & {\bf[46]} & {\bf[32]} & {\bf[29]} \cr C07 & 39 & 79 & 91 & 88 & 75 & 83 & 70 & {\bf[33]} & 69 \cr C08 & 25 & 64 & 90 & 92 & 87 & 57 & 80 & {\bf[45]} & 71 \cr C10 & 40 & 91 & 91 & 73 & {\bf[48]} & 53 & 67 & 66 & {\bf[43]} \cr C12 & 47 & 90 & 92 & 83 & {\bf[45]} & {\bf[48]} & 67 & 63 & \nodata \cr C14 & 25 & 90 & 87 & 85 & {\bf[41]} & 75 & {\bf[47]} & 51 & 71 \cr \hline C03 & 24 & 88 & 90 & 88 & 63 & 80 & 53 & 75 & {\bf[18]} \cr C06 & 12 & 89 & 90 & 70 & 81 & 84 & 64 & 77 & 55 \cr C09 & 20 & 89 & 52 & 90 & 80 & 85 & 63 & 73 & {\bf[25]} \cr C11 & 25 & 89 & 87 & 56 & 84 & 83 & 70 & 73 & {\bf[26]} \cr C13 & 17 & 92 & 73 & 85 & 81 & 76 & 75 & 73 & 57 \cr C15 & 32 & 92 & 87 & 91 & 84 & 69 & 77 & 69 & {\bf[27]} \cr C16 & 1 & 89 & 91 & 86 & 85 & 82 & 71 & 76 & 74 \cr C17 & 11 & 86 & 90 & 89 & 81 & 84 & 69 & 74 & {\bf[30]} \cr C18 & 29 & 87 & 69 & 91 & 88 & 68 & 83 & 52 & 54 \cr \enddata \label{table_ancestry} \end{deluxetable} Once the merger trees of clusters are built, we can investigate the origin of the very wide range in the values of $f_{\rm ICL}^{\phantom1}$ (from 1\% to 58\% in our simulation, and a comparable range in the observed values). For each cluster, we started at redshift $z=0$, and followed the ancestry back in time along the main progenitors. The results are shown in Table~\ref{table_ancestry}. 
For each cluster, we indicate, at each redshift $z>0$, the contribution in percentage of that cluster to the mass of the merger remnant at the next redshift (the reader will recognize, for cluster C04, the numbers plotted along the central line in Fig.~\ref{mergertree}). We indicate in boldface and square brackets the major mergers, when less than 50\% of the mass of the merger remnant comes from the main progenitor. We also separated the clusters into two groups (top and bottom). The clusters in the top group all experienced a major merger at intermediate redshift $z<1.41$. The clusters in the bottom group experienced no major merger, or a major merger at high redshift. There is a striking correlation between the presence of major mergers at intermediate redshifts and the final value of $f_{\rm ICL}^{\phantom1}$. All clusters in the top group have $f_{\rm ICL}^{\phantom1}\geq25\%$, and the seven highest values of $f_{\rm ICL}^{\phantom1}$ are found in that group; all clusters in the bottom group have $f_{\rm ICL}^{\phantom1}\leq32\%$, and the six lowest values of $f_{\rm ICL}^{\phantom1}$ are found in that group. Clearly, major mergers at intermediate redshift drive the evolution of $f_{\rm ICL}^{\phantom1}$ and determine its final value at $z=0$. While minor mergers will tend to leave clusters essentially undisturbed, major mergers can have dramatic effects. In particular, the merger of two clusters of comparable masses, which contain comparable numbers of galaxies, can lead to a sudden increase in the number density of galaxies. Since the frequency of encounters scales like the square of that number density, we expect a significant increase in the rate of encounters immediately after a major merger, and that rate might remain high all the way to $z=0$. This will result in a large number of tidal destruction events, and a correspondingly high value of $f_{\rm ICL}^{\phantom1}$.
Notice that major mergers of clusters at $z=2.06$ do not have much effect, because the clusters do not contain many galaxies at that time. This result is not at odds with the conclusion of the previous section. A major merger between clusters of comparable masses and different values of $f_{\rm ICL}^{\phantom1}$ can lead to a sudden drop in the value of $f_{\rm ICL}^{\phantom1}$, depending on which progenitor is identified as the main one. But this sudden drop can be more than compensated by the increased rate of encounters after the merger, which can last all the way to the present. Cluster C04 provides a good illustration of this. $f_{\rm ICL}^{\phantom1}$ drops from 46\% to 29\% during the major merger, but is back at 46\% by $z=0$. \subsubsection{Projected Luminosity Distribution} \begin{figure} \begin{center} \includegraphics[width=6in]{fig11.eps} \caption{Projected maps of some clusters. The greyscale shows the projected surface density of dark matter (shades are separated by 0.3~dex). Yellow and orange dots show galaxies and tidal fragments, respectively. All panels are $8\,{\rm Mpc}\times8\,{\rm Mpc}$.} \label{maps} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.5in]{fig12.eps} \caption{Projected luminosity maps of some clusters. The greyscale shows the projected luminosity density of galaxies (left panel for each cluster) and intracluster light (right panel for each cluster). Shades are separated by 0.5~dex, and for each cluster, galaxy luminosity and ICL are plotted using the same greyscale. All panels are $8\,{\rm Mpc}\times8\,{\rm Mpc}$.} \label{maps_lum} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.5in]{fig13.eps} \caption{Luminosity in projected radial bins for some clusters.
Solid and dotted lines show the galactic luminosity and intracluster luminosity, respectively.} \label{maps_reduce} \end{center} \end{figure} Figure~\ref{maps} shows the structure of some of the clusters. The greyscale images show the projected surface density of dark matter. The yellow and orange dots show the galaxies and tidal fragments, respectively. Some clusters, like C05 and C13, are relaxed objects with a well-defined core where the density of dark matter and the number density of galaxies and tidal fragments all peak. Objects like C09 and C10 are clearly pairs of clusters undergoing mergers, and each one has two separate cores. Interestingly, in the case of C09, one core has a large number of tidal fragments, while the other core has very few. In all cases, we notice that the tidal fragments are much more centrally concentrated than the galaxies. We calculated separately the galaxy and intracluster luminosities for all clusters. The results are shown in Figure~\ref{maps_lum}, for the same subset of clusters as in Figure~\ref{maps}. It is clear that, within each core, the intracluster light is more concentrated than the galactic light. We selected all clusters that have a single dominant core. For each cluster, we calculated the position of the projected center-of-light, both for the galactic and ICL components. We then calculated the luminosity of each component in projected radial bins of width $320\,{\rm kpc}$. We show some of the results in Figure~\ref{maps_reduce}. The luminosity in the central bin tends to be dominated by a bright central galaxy\footnote{As we explained in \S~3.1, that ``galaxy'' is actually several galaxies represented by one particle.}. Cluster C11 has two bright galaxies located on opposite sides of the center, which explains the low luminosity of the central bin. The profiles show large variations due to the clumpiness of the projected luminosity distributions, but there are some clear trends.
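The profile construction just described (projected center-of-light, then per-component luminosity summed in projected radial bins of width $320$ kpc) can be sketched as follows; the positions and luminosities passed in are of course placeholders:

```python
import numpy as np

def radial_luminosity_profile(xy, L, bin_width=320.0, n_bins=10):
    """Luminosity in projected radial bins around the center of light.
    xy: (N, 2) projected positions [kpc]; L: (N,) luminosities [Lsun].
    Returns (bin_edges, luminosity_per_bin)."""
    center = np.average(xy, axis=0, weights=L)   # projected center of light
    r = np.hypot(*(xy - center).T)               # projected radii
    edges = bin_width * np.arange(n_bins + 1)
    profile, _ = np.histogram(r, bins=edges, weights=L)
    return edges, profile
```

Calling this once with the galaxy luminosities and once with the tidal-fragment luminosities of a cluster yields the two histograms of Figure~\ref{maps_reduce}.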
For the massive clusters C02 and C05, if we exclude the central bin, the histograms tend to be flat for the galactic luminosity and decreasing for the intracluster luminosity, indicating that the latter is more centrally concentrated. This is even clearer for the lower-mass clusters C11, C13, C15, and C17. Both the galactic luminosity and intracluster luminosity decrease with radius, but the length scale of the intracluster light is of order $0.5-0.7$ times the length scale of the galactic light. As we saw in Figure~\ref{destruct_histo}, the destruction of intermediate-mass galaxies is the primary contributor to the intracluster light. These galaxies can only be destroyed by more massive ones, and these massive galaxies tend to reside in the center of clusters. Notice that, in our algorithm, tidal fragments are treated as single particles. When a galaxy is tidally destroyed, the algorithm flags it as a tidal fragment with the same mass. Intermediate-mass tidal fragments might be subject to mass segregation, which in this case would be a spurious algorithmic effect, leading to an overestimate of the concentration of ICL in the central region. Still, if most tidal fragments are produced by interactions taking place in the center of clusters, we do expect the fragments to be found in these regions at later times, whether they are represented by one massive particle or several less massive ones. \section{DISCUSSION} Our model combines a gravitational N-body algorithm, a subgrid treatment of galaxy formation, galaxy mergers, and tidal destruction, and a fitting formula for $M/L$ vs. $M$ derived from observations. The greatest virtue of this model is its simplicity. By only including gravitational dynamics, and by using a subgrid treatment at small mass scales, we are able to perform a large cosmological simulation at relatively low computational cost. We simulate a cubic volume of $100\,{\rm Mpc}$ in size, containing 18 clusters of mass above $10^{14}\msun$.
Doing a full hydrodynamical simulation of this size, without subgrid physics, one that could resolve in detail galactic mergers and tidal destruction at the dwarf-galaxy scale, would be prohibitive. Our simulation only took a few weeks on a 16-processor computer. There are some caveats implied by the method we use. By only including gravity, and not hydrodynamics, our algorithm can describe the structure and evolution of the mass distribution in the universe, but not how this mass is converted to light. Hence, the algorithm cannot predict the luminosity of galaxies self-consistently. This is why we used the observed relation between $M/L$ and $M$ [eq.~(\ref{ml1})] to calculate the luminosity of galaxies and tidal fragments. Actually, this relation gives the {\it average} value of $M/L$. For any particular value of $M$, there is a distribution of values of $M/L$ \citep{ymvdb03}. We ignored this, and calculated the luminosity $L$ of an object of mass $M$ by using equation~(\ref{ml1}) directly. We could instead have used the actual distribution of values of $M/L$ and drawn a random value from that distribution for each object, but that would have been overkill. We formed $1,088,797$ dwarf galaxies in our simulation, and by $z=0$, each massive cluster ($M>10^{14}\msun$) contains between 311 and $7,765$ galaxies, and between 91 and $2,113$ tidal fragments. Drawing the ratios $M/L$ from a distribution instead of using the average value would hardly make any difference in the total luminosity of these components, and the inferred intracluster light fraction $f_{\rm ICL}$. Using a subgrid model for galaxy mergers and tidal destruction enables us to reach dwarf-galaxy scales while simulating a large cosmological volume, but there is also another advantage: it provides an unambiguous determination of the outcome of each close encounter (merger, tidal destruction, or simple encounter).
If the encounters were actually simulated in detail, the determination of their outcome would require detailed analysis, as some of the matter would merge, while some would be ejected, and some of that ejected matter would be reaccreted. As we argue here, and also in Paper~I, our subgrid model would not be appropriate to describe a single encounter, but can correctly describe, in a statistical way, the collective effect of a large number of encounters. In the simulation presented in this paper, $590,262$ encounters resulted in a merger, $8,314$ encounters resulted in tidal destruction followed by dispersion, and $113,132$ encounters resulted in tidal destruction followed by accretion. We have assumed that the intracluster light is caused entirely by galaxies that are tidally destroyed. Some simulations suggest that luminous matter can also be added to the intracluster space during mergers (e.g. \citealt{muranteetal07}). Since we are neglecting this effect, our values of $f_{\rm ICL}$ might be underestimated. However, we could argue that a merger with some of the matter being dispersed in the ICL, a case we do not consider, is an intermediate case between a tidal destruction with complete dispersal of the fragments and a tidal destruction with all the fragments being subsequently reaccreted, two cases we do consider. Hence, two mergers with some of the matter dispersed in the intracluster space could be equivalent to a tidal destruction with complete dispersal of the fragments plus a tidal destruction with complete reaccretion of the fragments. Statistically, the net effect might be the same. Our model has only one parameter that is truly tunable: the coefficient $\Psi$ appearing in equation~(\ref{GF}). We adjusted this value to reproduce the high-luminosity end of the luminosity function, as seen in Figure~\ref{histo2}.
Using a larger value of $\Psi$ might lead to an improvement of the luminosity function at the low-luminosity end, but could worsen the fit at the high-luminosity end. Also, a larger value of $\Psi$ would likely result in an increase in both $L_{\rm gal}$ and $L_{\rm ICL}$, while having a smaller effect on the value of $f_{\rm ICL}$. \section{SUMMARY AND CONCLUSION} We performed a numerical simulation of the formation of galaxies and clusters, the destruction of galaxies by mergers and tides, and the evolution of the galactic, extragalactic, and intracluster light, inside a cosmological volume of size $(100\,\rm Mpc)^3$, in a $\Lambda$CDM universe. Our main results are the following: \begin{itemize} \item Our simulation reproduces the observed Schechter luminosity function for luminosities $L>10^{8.5}\lsun$, up to $L=10^{11}\lsun$. We have a significant excess of galaxies at luminosities $L<10^{6.5}\lsun$, and a deficit in the range $10^{6.5}\lsun-10^{8.5}\lsun$. We attribute this to the discreteness of the galaxy masses. All galaxies in our simulation have masses that are multiples of $M_{\min}=2\times10^9\msun$, and this can affect the luminosity function at low $L$. \item The number of mergers and tidal destruction events increases exponentially with decreasing redshift, with mergers starting at $z\sim5.9$ and tidal destruction starting at $z\sim4.8$. This delay is caused by the time it takes to build up galaxies of significantly different masses. Mergers outnumber tidal destruction events by about an order of magnitude, at all redshifts up to the present. When tidal destruction occurs, dispersal of the fragments into the intracluster space dominates over reaccretion of the fragments by the larger galaxy, up to redshift $z\sim0.5$. This trend is then reversed. \item Tidal destruction is not limited to dwarf galaxies. Intermediate-mass galaxies and even high-mass galaxies are destroyed during encounters with even higher-mass galaxies.
Tidal destruction and also mergers involve galaxy pairs with all masses and all mass ratios. We found an interesting trend for encounters between high-mass galaxies ($M>3\times10^{11}\msun$, $M_{\rm st}>5\times10^9\msun$). The outcome of such an encounter seems to depend almost entirely on the mass ratio between the galaxies. Small mass ratios ($m_2/m_1<1.2$) result in mergers, intermediate mass ratios ($1.2<m_2/m_1<10$) result in tidal destruction with the fragments being dispersed into the intracluster space, and higher mass ratios result in tidal destruction, with the fragments being reaccreted by the massive galaxy. \item Most galaxies destroyed by tides are low-mass galaxies. However, the total luminosity provided by these low-mass galaxies is small. 57.9\% of the ICL comes from galaxies of intermediate masses ($M=10^{11}\msun-10^{12}\msun$, $M_{\rm st}=6\times10^8\msun-2\times10^{10}\msun$), while lower-mass galaxies provide only 11.5\% of the ICL. Essentially, the bulk of the ICL comes from galaxies of masses $m_1=10^{11}\msun-10^{12}\msun$ which are tidally destroyed by galaxies of mass $m_2=(1.2-10.0)m_1$. Higher mass ratios result in reaccretion of the tidal fragments. \item The present intracluster light fraction $f_{\rm ICL}$ is in the range $1\%-58\%$. This is consistent with observations, and with simulations presented by other groups. Even though mergers outnumber tidal destruction events by an order of magnitude, the latter are sufficient to explain the observed ICL. The galaxy luminosity $L_{\rm gal}$ and intracluster luminosity $L_{\rm ICL}$ both increase with cluster mass. The intracluster light fraction $f_{\rm ICL}$ does not show any particular trend with cluster mass, except for the fact that we did not find massive clusters with low values of $f_{\rm ICL}$. \item The value of $f_{\rm ICL}$ for any particular cluster tends to increase with time. However, some clusters experience sudden drops in $f_{\rm ICL}$, which can happen at any redshift.
At early times, these sudden drops are caused by major mergers, when the cluster absorbs another cluster of comparable mass but with a smaller value of $f_{\rm ICL}$. At late times, they are caused by a sudden increase in galaxy formation not accompanied by a corresponding increase in tidal destruction. \item Several clusters are not relaxed at $z=0$, and show complex structures with multiple cores. Focusing on the clusters with a well-defined core, we found that the distribution of ICL is more concentrated than the distribution of galactic light. Most of the ICL comes from intermediate-mass galaxies destroyed by massive ones. Since these massive galaxies tend to reside in the center of clusters, this explains a relatively small extent of the ICL. \end{itemize} \acknowledgments This work benefited from stimulating discussions with J. Navarro. All calculations were performed at the Laboratoire d'astrophysique num\'erique, Universit\'e Laval. We thank the Canada Research Chair program and NSERC for support. PB acknowledges support from the FP7 ERC Starting Grant {\sl cosmoIGM\/}. HM is thankful to the Department of Physics and Astronomy, University of Victoria, for its hospitality. \clearpage
\section{Introduction} Deep Learning is widely used for many applications, from computer vision~\cite{Resnet,Alexnet} to natural language~\cite{bert,gpt3} to speech processing~\cite{2015SpeechDNN}. While the accuracy of deep learning-based approaches has improved significantly, the state-of-the-art models require substantial storage and energy at inference time. The increasing need for more intelligent IoT devices and rising security concerns require these DL models to be deployed on edge with low inference time and power consumption. Consequently, there is a lot of interest in making DNN inference more efficient. One promising technique to make DNN inference efficient is the binarization of weights of the DNN. Binarization leads to weight repetition as only two unique values get repeated in the weight tensor. Thus, only 1 bit can be used to represent a weight instead of 32 bits leading to a 32x compression. This significantly reduces memory access and data movement, reducing energy consumption and lowering runtime during DNN inference. Since memory accesses from DRAM are more expensive than ALU operations by two orders of magnitude~\cite{EIE, BNN}, reducing data movement to and from memory makes DNN inference more efficient. Additionally, multiply-accumulate operations (MACs) could be replaced with additions only by designing custom DL hardware, thereby eliminating the need for multipliers on the ASICs~\cite{courbariaux2015binaryconnect, BNN}. The other promising technique to make DNN inference efficient is by introducing sparsity to weight tensors of the DNN, known as sparse DNNs. Since anything multiplied by zero is zero, weight sparsity leads to \emph{ineffectual} multiplications. We can reduce DRAM accesses by not reading activations that would be multiplied by zero weights. This would reduce MACs and DRAM accesses, potentially resulting in lower power consumption and runtime during DNN inference. 
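The 32x storage reduction from binarization mentioned above can be checked with a short sketch: packing one sign bit per weight instead of storing 32-bit floats. This is an illustrative numpy snippet (variable names are ours), not a description of any particular deployment format:

```python
import numpy as np

# Full-precision weights (float32): 4 bytes each.
w = np.random.randn(1024).astype(np.float32)

# Binarize to the sign and pack 8 signs per byte: 1 bit per weight,
# a 32x reduction relative to float32 storage.
bits = (w >= 0).astype(np.uint8)      # 1 encodes +1, 0 encodes -1
packed = np.packbits(bits)

print(w.nbytes // packed.nbytes)      # -> 32

# Unpacking recovers the binary weights exactly.
restored = np.unpackbits(packed)[: w.size].astype(np.float32) * 2 - 1
assert np.array_equal(restored, np.where(w >= 0, 1.0, -1.0))
```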
This approach has been shown to work on general-purpose devices and ASICs~\cite{qin2020sigma, hegde2019extensor, wang2021spatten, EIE, dai2020sparsetrain, gong2020save}. Recent work~\cite{DeepCompression, jaszczur2021sparse} has demonstrated that sparse models can achieve accuracy similar to that of their dense counterparts for CNN and Transformer-based backbones. While there is a rich literature on methods to increase the accuracy of binary models~\cite{courbariaux2015binaryconnect, apple_quantization, liu2020reactnet, liu2018bi, bai2018proxquant, xu2021recu}, binary networks cannot use sparsity to improve computational efficiency during DNN inference. This is because weights and activations in SOTA binary models are either positive or negative real numbers. Therefore, DNN inference efficiency is usually achieved either by leveraging binarization or sparsity, but not both. Prior attempts to produce sparse binary networks have suffered a significant accuracy drop when compared to their binary counterparts~\cite{schiavone2020sparse}. In this paper, we demonstrate that the two techniques, namely binarization and sparsity, can be complementary (and therefore are not mutually exclusive). To illustrate this, we propose a new quantization scheme SBWN~ to create sparse binary weight networks, which achieve competitive accuracy on CIFAR10~\cite{cifar10} and ImageNet~\cite{Imagenet}. Finally, we demonstrate that using SBWN~ leads to lower runtime than its binary counterpart when leveraging both binarization and sparsity during DNN inference. We also discuss potential theoretical gains in throughput and energy consumption. Unlike Binary and Ternary Networks~\cite{courbariaux2015binaryconnect, apple_quantization, liu2020reactnet, liu2018bi, bai2018proxquant, xu2021recu, TTQ, lq_net, li2016ternary}, Signed-Binary~ uses two new quantization functions with the values of \{1, 0\} \& \{0, -1\}.
The value set of the quantization function for a given filter of a CNN is decided randomly before the training starts and remains fixed during training and inference. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figures/concept_diag.png} \caption{\textbf{Comparison of Binary, Ternary and Signed-Binary~ (ours).} We introduce signed binary weight quantization to improve DNN efficiency by leveraging weight repetition and weight sparsity.} \label{fig:concept_diag} \end{figure*} We make the following contributions in the paper: \begin{itemize} \item {We propose a new quantization scheme called Signed Binary Weight Networks~ (SBWN~), which is more efficient than binary and ternary weight networks.} \item {We demonstrate that signed-binary~ leads to accuracy comparable to binary by training ResNet models on the CIFAR10 and ImageNet datasets.} \item {We identify that during DNN inference, using binary or ternary weight networks leads to a trade-off between weight repetition and weight sparsity, thereby increasing runtime. Our method makes the exploitation of weight repetition and weight sparsity complementary, maximizing efficiency. We are the first ML work to demonstrate this and to deploy these models on real hardware.} \item {We discuss potential theoretical gains in throughput and energy consumption with respect to binary when using signed-binary~. For ResNet18, switching to signed-binary~ from binary can potentially lead to a $\sim$3x increase in throughput and a $\sim$2x decrease in energy consumption during DNN inference.} \end{itemize} \section{Background} \subsection{Quantization} \paragraph{Binary Quantization} A binary quantizer takes any real-valued number and maps it to either +1 or -1. The intention is that if specialized DL hardware could complement this quantization scheme, it would lead to replacing multiply-and-accumulates (MACs) with simple accumulate operations.
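The reduction of MACs to accumulations can be sketched in a few lines. This is a hedged numpy illustration of the idea, not any paper's implementation (function and variable names are ours):

```python
import numpy as np

def binary_quantize(w):
    """Map real-valued weights to +1/-1 (zero is sent to +1 here)."""
    return np.where(w >= 0, 1.0, -1.0)

def binary_dot(wq, x):
    """With weights in {+1, -1}, the MAC reduces to add/subtract:
    the sum of activations with positive weights minus the rest."""
    pos = wq > 0
    return x[pos].sum() - x[~pos].sum()

w = np.array([0.7, -0.2, 0.1, -0.9])
x = np.array([1.0, 2.0, 3.0, 4.0])
wq = binary_quantize(w)
# Multiplier-free result matches the ordinary dot product:
assert np.isclose(binary_dot(wq, x), wq @ x)   # 1+3 - (2+4) = -2
```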
There are various methods to improve the accuracy of these networks - by selecting the scaling factor~\cite{xnor_nets, bulat2019xnor, bulat2019improved}, changing network structure~\cite{liu2018bi, liu2020reactnet, chen2020binarized}, reducing the gradient error~\cite{bai2018proxquant} to improve STE~\cite{bengio2013estimating}, and reviving dead weights~\cite{xu2021recu}. \paragraph{Ternary Quantization} A ternary quantizer takes any real-valued number and maps it to either +1, -1, or 0. Every ternary quantization function has a threshold $\Delta$. If the latent full-precision weight is greater than $\Delta$, it is mapped to +1. If the latent full-precision weight is less than $-\Delta$, it is mapped to -1. Otherwise, it is mapped to zero. Like in the case of binary quantization, the outputs of the ternary quantizer can be scaled by multiplying them with a floating-point number $\alpha$. The scaling factor could be per convolutional layer~\cite{TTQ,li2016ternary} or per convolutional filter~\cite{apple_quantization}. $\alpha$ can be the same for both positive and negative weights~\cite{li2016ternary} or can be different for positive and negative weights~\cite{TTQ}. Choosing the best $\Delta$ is also a design decision. For example, it could be a fraction of the mean of absolute values of the weights~\cite{li2016ternary} or a fraction of the maximum of the absolute values of the weights~\cite{TTQ}. \subsection{Efficient Inference of Quantized DNNs} \paragraph{Weight Sparsity} Weight sparsity in a DNN implies the repetition of weights with a value equal to zero. The basic idea here is that since $0 \times x = 0$ for any real-valued scalar $x$, if the weight is zero, the multiplication is \emph{ineffectual} and should be skipped. We know that the weights of the DNN are fixed during inference. Therefore, if the weight value is zero, we can choose not to load activations corresponding to that weight.
This can reduce data movement, memory accesses, and MACs, thereby reducing computations and resulting in efficient DNN inference. This approach has been effective on ASICs and general-purpose devices~\cite{hegde2019extensor, qin2020sigma,wang2021spatten, dai2020sparsetrain,gong2020save, EIE}. \paragraph{Weight Repetition} Quantization of weights in a DNN leads to the same values being repeated many times in the weight tensor. This phenomenon is known as weight repetition~\cite{ucnn,sze2020efficient}. Since the weights are fixed during DNN inference, this leads to additional opportunities for optimizations. The objective is to improve efficiency during inference with respect to time and energy by exploiting the repetition of weights and reducing memory accesses~\cite{sze2020efficient}. Using weight repetition for efficiency originated in the original BNN~\cite{BNN}, which introduced the idea of filter repetition in BNNs for efficient inference. The authors highlight that the filter size bounds the number of unique filters possible in a binary setting. An example explains this: a 3x3 2D filter can only have $2^9$ unique filters in a binary network.~\cite{BNN} states that, on average, only 42\% of filters per layer are unique, which can lead to reducing the number of XNOR operations by 3x. UCNN~\cite{ucnn} was the first work demonstrating efficiency using weight repetition on ASICs. It performs efficient inference by reordering the weights, and thus the activations and operations. This reduces memory access and decreases the number of arithmetic operations required during DNN inference. For example, if the filter weights are $[a, b, a, a]$ and activations are $[w, x, y, z]$, UCNN would reorder it as $a\times(w+y+z) + b\times(x)$ for efficient inference~\cite{sze2020efficient}. SumMerge~\cite{prabhakar2021summerge} extends the idea of UCNN even further by using both weight repetition and weight sparsity for efficient DNN inference.
For example, if $b = 0$ in the previous example, SumMerge would compute $a\times(w+y+z)$ during inference. \section{Motivation and Limitations of Prior Work} We observe three trends, (1) the number of parameters in DNNs is increasing~\cite{chowdhery2022palm, dosovitskiy2020image, liu2022convnet} (2) introducing weight sparsity in a DNN can lead to no drop in accuracy~\cite{han2015deep, jaszczur2021sparse} and (3) hardware and software are becoming increasingly efficient in skipping zero weights to increase the computational efficiency of DNN inference~\cite{dai2020sparsetrain, qin2020sigma, hegde2019extensor}. These trends are not observed for binary networks, which remain dense. Thus, the biggest motivation of this paper is to combine Binary DNNs with Sparse DNNs. Since ternary has also been referred to as sparse binary~\cite{TTQ}, we analyze why ternary networks are sub-optimal sparse binary networks. The authors of Ternary Weight Networks (TWN)~\cite{li2016ternary} argued that when compared to its binary counterpart, the number of arithmetic operations in TWNs would be unchanged as arithmetic operations corresponding to zero-valued weights can be skipped during DNN inference. They claim that switching from binary to ternary increases the expressivity as the 3x3 2D filter can have $3^9$ unique filters instead of $2^9$. However, this increase in expressivity is also a drawback of ternary networks. This is because ternarization makes it exponentially hard to extract efficiency using filter repetition. Switching from binary to signed-binary~, on the contrary, results in the same number of unique 3x3 filters. This is because a filter in the \{1,0\} bucket will be a two's complement of a filter in the \{0,-1\} bucket. In addition, as signed-binary~ 3x3 filters will be sparse, it will lead to more efficient DNN inference when compared to Binary.
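The weight-repetition factorization described earlier, $a\times(w+y+z) + b\times x$, together with the zero-skipping refinement, can be sketched as follows. This is an illustrative Python sketch of the idea, not the UCNN or SumMerge implementation:

```python
from collections import defaultdict

def repetition_dot(weights, acts):
    """Dot product computed by grouping activations that share a weight
    value and skipping the zero group: for weights [a, b, a, a] this
    evaluates a*(w+y+z) + b*x, and just a*(w+y+z) when b == 0."""
    groups = defaultdict(list)
    for wv, av in zip(weights, acts):
        if wv != 0:                       # sparsity: skip zero weights
            groups[wv].append(av)
    # One multiply per unique non-zero weight value.
    return sum(wv * sum(avs) for wv, avs in groups.items())

a, b = 2.0, 3.0
acts = [1.0, 2.0, 3.0, 4.0]
print(repetition_dot([a, b, a, a], acts))    # a*(1+3+4) + b*2 = 22.0
print(repetition_dot([a, 0.0, a, a], acts))  # a*(1+3+4) = 16.0
```

With binary or signed-binary weights there are at most two non-zero groups per filter, so the number of multiplies per dot product is bounded regardless of filter size.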
Unlike Ternary, which requires at least two bits to represent one weight in a DNN, weights of binary and signed-binary~ can be represented using one bit only. The time taken for DNN inference with 1-bit, 2-bit, and full-precision weights has been evaluated~\cite{auto_gen}. They deploy ResNet18 on ARM CPUs while using different bit-widths for weights. They demonstrate that switching from one bit to two bits results in a 2x increase in runtime. Hence, signed-binary~ would be more efficient than ternary for DNN inference when using such implementations on general-purpose devices. To conclude, switching from a binary network to a ternary network results in the introduction of sparsity at the expense of weight repetition. This is not a limitation for signed-binary~ because it introduces sparsity to binary networks without compromising weight repetition. \section{Signed Binary Weight Networks~} \subsection{High Level Overview} Binary and Ternary Weight Quantization Schemes use one quantization function that takes a full-precision latent weight as input and gives the quantized weight as the output. For Binary, the value set of the quantization function is \{1,-1\}, while for Ternary, the value set of the quantization function is \{1,0,-1\}. Unlike Binary and Ternary, we use two quantization functions instead of one to quantize a convolutional layer. The value sets of the two quantization functions are \{1,0\} and \{0,-1\}. \subsection{Problem Formulation} Let the convolutional filter have an $R \times S$ kernel size and $C$ input channels. The quantization function takes latent full-precision weights $\textbf{W}$ as input and outputs the quantized weight ${\textbf{W}}^{quant}$. The quantized weight ${\textbf{W}}^{quant}$ would be the product of the sign-factor ${\beta}$ and the BitMap $\textbf{U}$.
\begin{equation} \textbf{Q}: \textbf{W} \rightarrow \textbf{W}^{quant} \end{equation} \begin{equation} {\textbf{W}}^{quant} = {\beta} \hspace{1pt} \textbf{U} \end{equation} \begin{equation} \forall \hspace{1pt} \textbf{W} \in {{\mathbb{R}}^{R \times S \times C}} \hspace{1pt},\hspace{1pt}\beta \in\{+1,-1\}\hspace{1pt},\hspace{1pt} \mathbf{U} \in\{0,1\}^{R \times S \times C} \end{equation} \subsection{Method} We now introduce the details of our signed-binary~ quantization approach - (1) how quantization functions are assigned to each filter, (2) how is signed-binary~ implemented for efficient training, and (3) define signed-binary~ quantization functions. \paragraph{Per Filter Quantization} SBWN~ is a per filter quantization scheme, i.e., the quantization function takes full-precision latent weights of one filter of a CNN as input and maps it to either $\{0,1\}^{R \times S \times C}$ or $\{0,-1\}^{R \times S \times C}$. The values of the quantization function for a given filter of a CNN are decided randomly before the training starts and never change. \paragraph{Bucketing} For a convolutional layer $i$ having $K$ filters, each filter will have a different quantization function. Since every filter of a convolutional layer is independent of every other filter, we can sort these filters into two buckets based on the values of the quantization function for each filter. In other words, we quantize the full-precision latent weights of a convolutional layer from $\mathbb{R}^{\{R \times S \times C \times K \}}$ to $\{0,1\}^{R \times S \times C \times K * P}$ and $\{0,-1\}^{R \times S \times C \times K * (1-P)}$ where P is the percentage of filters whose quantization functions have the values \{0,1\}. This is done to implement SBWN~ efficiently. \paragraph{\{0,1\} Quantization Function} If a filter $i$ is assigned to $\{0,1\}$ bucket, the sign-factor $\beta_{i}$ is equal to 1. Following~\cite{TTQ}, we use the threshold value of $\Delta=0.05\times max(|\textbf{W}|)$. 
The quantization function is defined as: \begin{equation} \textbf{W}^{ {quant }}_{i}=\left\{\begin{array}{c}0 {\hspace{3pt} if \hspace{3pt}} \textbf{W}_{i}<\Delta_{i} \\ \beta_{i} {\hspace{3pt} if \hspace{3pt}} \textbf{W}_{i}\geq\Delta_{i}\end{array}\right. \end{equation} \begin{equation} \frac{\partial L}{\partial \textbf{W}_{i}}=\left\{\begin{array}{c}\beta_{i} \times \frac{\partial L}{\partial \textbf{W}_{i}^{quant}}: \textbf{W}_{i}>\Delta_{i} \\ 1 \times \frac{\partial L}{\partial \textbf{W}_{i}^{quant}}: \textbf{W}_{i} \leq \Delta_{i}\end{array}\right. \end{equation} \begin{equation} \Delta_{i} = 0.05 \times \max (|\textbf{W}_{i}|) \end{equation} \paragraph{\{0,-1\} Quantization Function} If a filter $i$ is assigned to the $\{0,-1\}$ bucket, the sign-factor $\beta_{i}$ is -1. Following~\cite{TTQ}, we use the threshold value of $\Delta=0.05\times max(|\textbf{W}|)$. The quantization function is defined as: \begin{equation} \textbf{W}^{ {quant }}_{i}=\left\{\begin{array}{c}0 {\hspace{3pt} if \hspace{3pt}} \textbf{W}_{i}>-\Delta_{i} \\ \beta_{i} {\hspace{3pt} if \hspace{3pt}} \textbf{W}_{i}\leq-\Delta_{i}\end{array}\right. \end{equation} \begin{equation} \frac{\partial L}{\partial \textbf{W}_{i}}=\left\{\begin{array}{c} - \beta_{i} \times \frac{\partial L}{\partial \textbf{W}_{i}^{quant}}: \textbf{W}_{i}<-\Delta_{i} \\ 1 \times \frac{\partial L}{\partial \textbf{W}_{i}^{quant}}: \textbf{W}_{i} \geq -\Delta_{i}\end{array}\right. \end{equation} \begin{equation} \Delta_{i} = 0.05 \times \max (|\textbf{W}_{i}|) \end{equation} \section{{Accuracy}} \subsection{Evaluating Signed-Binary~ against Binary and Ternary Quantization\label{cifar10_sec}} \paragraph{Objective} We would like to compare the ability of signed-binary~ to perform object classification tasks against binary and ternary. We train ResNets~\cite{Resnet} on CIFAR10~\cite{cifar10} with these quantization functions under identical settings for this experiment.
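As a concrete illustration, the two per-filter quantization functions defined in the previous section can be sketched in a few lines. This is a minimal numpy sketch (names are ours); the straight-through-estimator training machinery is omitted:

```python
import numpy as np

def signed_binary_quantize(w, beta, frac=0.05):
    """Quantize one filter to beta * U with U in {0, 1}, using the
    threshold Delta = frac * max|w| described in the text.
    beta = +1 gives values in {0, 1}; beta = -1 gives values in {0, -1}."""
    delta = frac * np.abs(w).max()
    if beta == +1:
        return np.where(w >= delta, 1.0, 0.0)
    return np.where(w <= -delta, -1.0, 0.0)

w = np.array([0.9, -0.7, 0.02, -0.03, 0.4])
print(signed_binary_quantize(w, +1))  # large positive weights -> 1
print(signed_binary_quantize(w, -1))  # large negative weights -> -1
```

Each filter's output is sparse (small-magnitude weights map to zero), yet still contains at most one non-zero value, which preserves the weight-repetition property of binary filters.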
\paragraph{Setup} We acknowledge that there is a rich literature on Binary and Ternary weight networks, which contains different methods of improving top-1 accuracy by selecting the scaling factor~\cite{TTQ,lq_net}, modifying backbone architecture~\cite{liu2018bi,liu2020reactnet}, using knowledge distillation~\cite{apple_quantization, liu2020reactnet}, reducing the gradient error~\cite{bai2018proxquant}, and reviving dead weights~\cite{xu2021recu}. But, these techniques are built on top of quantization functions. For the most direct comparison of quantization schemes, we focus our evaluation on the primary form of each model. All strategies to further improve learning may be applied on top of the base model for any of these quantization schemes. Quantized weights can only belong to the set \{0, -1, 1\}. Please look at the appendix for the hyperparameters used and more details. \paragraph{Result} The top-1 validation accuracy for ResNets~\cite{Resnet} trained using signed-binary~, binary and ternary quantization function on CIFAR10 is reported in Table \ref{tab:cifar10-quant}. We find that when trained from scratch, signed-binary~ and binary achieve similar accuracy. Hence, we can conclude that signed-binary~ and binary have similar performance on the CIFAR10 object classification task. \begin{table}[h] \begin{center} \begin{tabular}{ccccc} \toprule Architecture & B & SB~ (ours) & T & FP \\ \midrule \rowcolor{gray(x11gray)} Epochs & 350 & 350 & 350 & 200 \\ \midrule ResNet20 & 90.20 & 90.05 & 90.86 & 91.99\\ ResNet32 & 91.51 & 91.55 & 92.03 & 92.90\\ ResNet44 & 91.93 & 91.98 & 92.40 & 93.30\\ ResNet56 & 92.42 & 92.52 & 92.90 & 93.63\\ ResNet110 & 92.64 & 92.68 & 93.33 & 93.83\\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Quantization scheme ablation across architecture}: Binary, Ternary, and Signed-Binary~ ResNets are trained on CIFAR10 under identical settings. 
We observe that Binary and Signed-Binary~ achieve similar top-1 validation accuracy even though signed-binary has sparse weight tensors.}\label{tab:cifar10-quant} \end{table} \subsection{Evaluation on ImageNet}\label{sec_imagenet} \paragraph{Objective} We would like to evaluate if it is possible to perform object classification tasks at scale when using Signed Binary Weight Networks~. We train signed-binary~ ResNet18 from scratch on ImageNet for this experiment. \paragraph{Setup} We train ResNet-18~\cite{Resnet} using SBWN~ on ImageNet~\cite{Imagenet}. We use standard practices to train binary networks, like (1) normalizing the input using batch-norm~\cite{BatchNorm} before convolution instead of after convolution~\cite{xnor_nets} and (2) not quantizing the first and the last layers~\cite{TTQ, apple_quantization, li2016ternary, bai2018proxquant}. We use a first-order polynomial learning-rate annealing schedule with the Adam optimizer~\cite{kingma2014adam} and PReLU~\cite{he2015delving, Relu}. We use an FFCV data loader~\cite{leclerc2022ffcv} with simple augmentations - Random Resize Crop to $ 224 \times 224$, Random Horizontal Flipping, and Color Jitter with (brightness, contrast, saturation, hue) set as (0.4, 0.4, 0.4, 0). We decrease the learning rate from $2.0 \times 10^{-4}$ to $2.0 \times 10^{-8}$ over 320 epochs of training; we do not use weight decay, and an effective batch size of 256 is used to train the model. \paragraph{Result} Baseline ResNet18 trained using vanilla SBWN~ gives us 61.94\% top-1 validation accuracy on ImageNet. Adding simple techniques like distillation~\cite{hinton2015distilling} increases the top-1 accuracy to 64.6\%.
Thus, we can conclude that SBWN~ can train DNN models on complex tasks like ImageNet and can reach competitive accuracy compared to its binary counterpart without any architectural modifications, scaling factors, or jointly optimizing for information loss and quantization error~\cite{Qin:cvpr20} built for binary networks. These ideas complement this work and can be extended and used in conjunction with signed-binary~ networks to improve the accuracy further. The critical observation is that signed-binary~ provides higher efficiency while retaining competitive performance at scale. \begin{table}[h] \begin{center} \begin{tabular}{cccc} \toprule Method & BW & Top-1 Acc \\ \midrule SBWN~ (ours) & SB~ & 61.94 \\ SBWN~ + Distillation (ours) & SB~ & 64.62 \\ \midrule BWN~\cite{xnor_nets} & B & 60.8 \\ TWN~\cite{li2016ternary} & T & 61.8 \\ ABC-Net~\cite{lin2017towards} & B & 62.8 \\ BWNH~\cite{hu2018hashing} & B & 64.3 \\ DSQ~\cite{gong2019differentiable} & B & 63.7 \\ \textcolor{coolgrey}{LS$\dagger$~\cite{apple_quantization}} & \textcolor{coolgrey}{B} & \textcolor{coolgrey}{66.1} \\ \textcolor{coolgrey}{IR-Net*~\cite{Qin:cvpr20}} & \textcolor{coolgrey}{B} & \textcolor{coolgrey}{66.5} \\ \midrule Full Precision & 32 & 69.6 \\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Evaluation on ImageNet using ResNet18}: We train Signed-Binary~ ResNet18 from Scratch on ImageNet to achieve competitive accuracy when compared to Binary and Ternary Weight Networks. Legend: SB~ = Signed-Binary~, B = Binary, T = Ternary, and BW = Bit-Width of Weights. $\dagger$: architecture modifications, including more skip connections and distillation to improve performance. * performs joint optimization for information retention, which can be extended to any low-bit quantization method.} \label{tab:imagenet-quant} \end{table} \subsection{{Additional Ablations}} We perform ablation on value assignment percentage in Table~\ref{tab:p-value}. 
We find that randomly assigning the value set to the quantization function of a filter leads to the highest validation accuracy for the CIFAR10 dataset. We see an improvement of 1.2\% and 6\% top-1 accuracy with respect to the naive $\{1,0\}$ quantization function used in the literature. In addition, we also ablate on (1) non-linearity, (2) threshold used in signed-binary~ quantization functions, and (3) batch size by training ResNet20~\cite{Resnet} on the CIFAR10 dataset~\cite{cifar10}. Please take a look at Table~\ref{tab:delta} for ablation on delta (the setup and other ablations are reported in the appendix). We observe that PReLU works best for our method, and our method is not sensitive to the choice of threshold $\Delta$. \begin{table} \begin{center} \begin{tabular}{ccc} \toprule \multicolumn{3}{c}{{\textbf{\textbf{ResNet20 CIFAR10}}}}\\ \midrule \% \{0,1\} filters & \% \{0,-1\} filters & Val Acc (top-1) \\ \midrule 0 & 1 & 88.84 \\ 0.25 & 0.75 & 89.32 \\ 0.5 & 0.5 & {90.05} \\ 0.75 & 0.25 & 89.30 \\ 1 & 0 & 89.07 \\ \midrule \multicolumn{3}{c}{\textbf{ResNet18 ImageNet}}\\ \midrule \% \{0,1\} filters & \% \{0,-1\} filters & Val Acc (top-1) \\ \midrule 1 & 0 & 55.23 \\ 0.5 & 0.5 & 61.94 \\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Ablation of value assignment percentage in Signed Binary Weight Networks~}: We do an ablation to the percentage of filters of a convolutional layer assigned a quantization function of values \{0,1\} and \{0,-1\}. The setup for this ablation is the same as for Tables~\ref{tab:cifar10-quant} and~\ref{tab:imagenet-quant}. We observe that randomly assigning the value set to the quantization function of a filter leads to the highest validation accuracy. This experiment also tells us that switching from naive \{1,0\} quantization function to our method improves the top-1 validation accuracy by 6\% on ImageNet.}\label{tab:p_table} \label{tab:p-value} \end{table} \begin{table}[h!] 
\begin{center} \begin{tabular}{cc} \toprule \textbf{Delta} ($\Delta$) & \textbf{Accuracy}\\ \midrule $0.01*max(|\textbf{W}|)$ & 90.09\\ $0.05*max(|\textbf{W}|)$ & 90.05\\ $0.1*max(|\textbf{W}|)$ & 89.95\\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Ablation on threshold function ($\Delta$)}: We train ResNet20 on CIFAR10 and ablate different values of $\Delta$, finding that our method is not sensitive to this threshold.} \label{tab:delta} \end{table} \section{{Efficiency}} \subsection{Time taken for DNN Inference} \begin{figure*}[h!] \centering \includegraphics[width=0.8\textwidth]{figures/speedup.png} \caption{\textbf{Speedup over Binary Quantized ResNet18 on Intel CPU}: {Signed-Binary~ is designed for efficiency, whereas binary and ternary make suboptimal use of weight repetition and/or weight sparsity.} We find that signed-binary~ performs best for every convolutional layer across all quantization schemes when the software leverages both weight repetition and weight sparsity.} \label{fig:speed} \end{figure*} \paragraph{Overview} We demonstrate that signed-binary is more efficient than binary and ternary networks when leveraging both weight sparsity and weight repetition during DNN inference. To do this, we deploy quantized ResNet18 models on Intel CPUs and measure the actual time taken during inference. \paragraph{Background} To understand efficiency during DNN inference, we need to understand how weight sparsity and weight repetition affect it. For weight sparsity, we can take advantage of the emerging technology of sparse tensor operations (spMM), which skip zeros at inference time to speed up computation. For weight repetition, the weight values are repeated in the weight tensor due to quantization; we can use this repetition to reduce arithmetic operations and data movement.
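To make the interplay concrete, the following toy dot product is an illustrative sketch in NumPy (not the SumMerge implementation): a signed-binary~ filter holds values in $\{0, s\}$ with a single nonzero value $s=\pm1$, so weight repetition lets us factor $s$ out of the accumulation, while weight sparsity lets us skip the zero positions entirely.

```python
import numpy as np

def dense_dot(w, x):
    # Baseline: one multiply-accumulate per weight.
    return float(np.dot(w, x))

def signed_binary_dot(w, x):
    """Toy dot product for one signed-binary filter.

    The filter holds values in {0, s} with a single nonzero value s
    (either +1 or -1).  Sparsity: only the nonzero positions are visited.
    Repetition: the shared value s is factored out of the accumulation,
    so the whole filter costs one multiplication plus a few additions.
    """
    nz = np.flatnonzero(w)           # sparsity: indices of nonzero weights
    if nz.size == 0:
        return 0.0
    s = float(w[nz[0]])              # repetition: the single shared value
    return s * float(x[nz].sum())    # factor s out of the sum

w = np.array([0, -1, 0, -1, 0, 0, -1, 0], dtype=np.int8)  # a {0,-1} filter
x = np.arange(8, dtype=np.float64)
assert signed_binary_dot(w, x) == dense_dot(w, x)          # both give -10.0
```

A binary filter can factor out its value but cannot skip anything; a ternary filter has two distinct nonzero values to group; signed-binary gets both benefits at once, which is the trade-off the measurements below quantify.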
{During DNN inference, a traditional binary network maximizes weight repetition while being unable to exploit weight sparsity, whereas a ternary network introduces weight sparsity at the expense of weight repetition. Signed-Binary~ handles this trade-off better: it makes the two techniques complementary and maximizes efficiency.} \paragraph{Setup} Since industry-standard libraries like Intel's oneDNN do not support these weight quantization schemes, we use SumMerge~\cite{prabhakar2021summerge}, a recently published framework for inference of quantized and sparse DNNs on Intel CPUs, and follow its test methodology (details in the appendix). All DNN inference experiments are subject to identical test environments and methodologies. The same experiment can also be performed on an ASIC~\cite{ucnn}. \paragraph{Method} We use SumMerge for DNN inference in two configurations: (1) with sparsity support turned OFF and (2) with sparsity support turned ON. When sparsity support is turned OFF, the software does not treat zero weights differently from non-zero weights and only relies on weight repetition. When sparsity support is turned ON, the software also skips computation on zero weights. We deploy quantized ResNet18 on Intel CPUs. We report per-layer speedup and total speedup for different quantization schemes over binary quantization. \paragraph{Result} We observe that signed-binary~ is the most efficient of the three schemes: it is 1.26x faster than binary and 1.75x faster than ternary when sparsity support is turned on. \paragraph{Explanation} (1) \emph{When sparsity support is turned {off}}: In this scenario, the software relies only on repeating values within the weight tensor for speedup. Because binary and signed-binary~ have two unique values per convolutional filter, they take a similar time for DNN inference.
Ternary is much slower as it has three unique values per convolutional filter, which makes it exponentially harder to extract efficiency from weight repetition. \\ (2) \emph{When sparsity support is turned {on}}: In this scenario, the software not only exploits the repeating values in the weight tensor but also skips computations on zero weights to improve the runtime. Here we observe that ternary is slower than binary because the reduction in work due to sparsity cannot compensate for the exponential decrease in weight repetition. Our method, on the other hand, does not suffer from this problem: it exploits both weight repetition and weight sparsity to the fullest and is the most efficient. \begin{figure}[ht] \centering \includegraphics[width=1\columnwidth]{figures/sparsity.png} \caption{\textbf{Distribution of Quantized Weights in signed-binary~ ResNet18 trained on ImageNet with Distillation}: While the weight distribution of signed-binary~ convolutional layers looks similar to ternary, the positive and negative weights are present in different filters. } \label{fig:distribution} \end{figure} \subsection{Sparsity and Further Analysis} \paragraph{Measuring Sparsity} We estimate the percentage of zero weights in signed-binary~ networks by counting the number of quantized weights with zero values and dividing it by the total number of quantized weights. We find that ResNet18 with 64.6\% top-1 on ImageNet has 69\% sparsity (see Figure~\ref{fig:distribution}), while ResNet20 with 90.05\% accuracy on CIFAR10 has 60\% sparsity. \paragraph{Throughput} {Theoretically, the throughput can be increased by $\sim$3x for ResNet18 when switching from binary to signed-binary~ due to the reported sparsity. This is because, ideally, when we eliminate ineffectual computations resulting from zero weights, we can eliminate the time associated with them to increase throughput~\cite{joel_sparseloop}.
However, since support for unstructured sparsity is an emerging technology, the speedup we observe on real hardware is in the range of 1.26x-1.75x. This is comparable to the speedup due to unstructured sparsity on general-purpose devices in recent papers~\cite{gong2020save, hegde2019extensor}.} \paragraph{Energy Reduction} To estimate the energy reduction due to the unstructured sparsity of DNNs, we use the cycle-level microarchitectural simulator~\cite{STONNE21} of the sparsity-supporting ASIC~\cite{qin2020sigma}. We take the publicly released code and use it under the default configuration (details in the appendix). We observe that increasing the unstructured weight sparsity from 0\% to 69\% leads to a $\sim$2x reduction in energy during DNN inference. Thus, switching from binary to signed-binary~ would significantly reduce ASICs' power consumption. {We acknowledge that the same experiment can also be conducted using Sparseloop~\cite{wu2021sparseloop}.} \subsection{Signed-Binary vs Binary with comparable non-zero parameters} We would like to compare binary with signed-binary~ when the DNN has the same number of non-zero parameters. Signed-Binary~ ResNet trained on CIFAR10 has slightly greater than 50\% sparsity. If we reduce the total number of parameters of the binary ResNet by half, the resulting model has a comparable number of non-zero weights to signed-binary~ ResNet. This is done by reducing depth (see Table~\ref{tab:depth}) and width (see Table~\ref{tab:width}). We train these models under identical conditions (setup details in the appendix). To clarify, rows 1 \& 2 of Table~\ref{tab:half_param} have the same number of total parameters, while rows 1 \& 3 have a comparable number of non-zero parameters. Thus, signed-binary~ leads to higher accuracy than binary when both methods have a comparable number of effectual operations. \begin{table}[h!]
\begin{subtable}[t]{0.45\textwidth} \begin{center} \begin{tabular}{cccc} \toprule Quant & \# Parameters & Depth & Acc \\ \midrule SB~ & 0.46M & 32 & 91.55\%\\ B & 0.46M & 32 & 91.22\%\\ B & 0.27M & 20 & 90.16\%\\ \bottomrule \end{tabular} \caption{\textbf{Reducing the number of parameters by reducing depth}: We observe that the accuracy of binary is 1.3\% lower than signed-binary~ with comparable non-zero weights.} \label{tab:depth} \end{center} \end{subtable} \hspace{\fill} \begin{subtable}[t]{0.45\textwidth} \begin{center} \begin{tabular}{cccc} \toprule Quant & \# Parameters & Width & Acc \\ \midrule SB~ & 0.27M & $1\times$ & 90.05\%\\ B & 0.27M & $1\times$ & 90.20\%\\ B & 0.14M & $\lceil{0.7\times}\rceil$ & 88.5\%\\ \bottomrule \end{tabular} \caption{\textbf{Reducing the number of parameters by reducing width}: We observe that the accuracy of binary is 1.7\% lower than signed-binary~ with comparable non-zero weights.} \label{tab:width} \end{center} \end{subtable} \caption{\textbf{Binary vs. Signed-Binary with comparable non-zero weights}: We observe that Signed-Binary~ achieves higher accuracy when compared to binary with comparable effectual operations. B - Binary, SB~ - Signed-Binary~}\label{tab:half_param} \end{table} \section{Limitations and Discussion} Like all binary weight quantization methods, our method suffers from a significant accuracy drop on ImageNet. Our work will be helpful in scenarios where the time and energy required for inference are critical. In addition, the scope of this work is limited to weight quantization and weight sparsity. In the future, we would like to extend this work to Signed-Binary Neural Networks, address the challenge of combining sparsity in weights and activations with binarization, and create a software system to demonstrate the speedup. Finally, scaling factors for our method are an unexplored area that might lead to even higher accuracy on challenging datasets like ImageNet.
\section{Conclusion} {This paper introduces a new weight quantization scheme called Signed-Binary, which combines sparse networks with binary networks. Results on ImageNet and CIFAR10 illustrate that the signed-binary~ method achieves accuracy comparable to binary. Signed-Binary enables modern hardware and software to increase inference efficiency by better exploiting weight repetition (i.e., repeating values in the weight tensor) and weight sparsity (i.e., skipping computations on zero weights). We demonstrate that signed-binary~ is more efficient than binary and ternary by performing an ablation study on weight sparsity and weight repetition while deploying these models on Intel CPUs. Finally, we discuss efficiency gains and find that switching from binary to signed-binary~ can lead to a $\sim$3x increase in throughput and a $\sim$2x decrease in energy consumption on ASICs for ResNet18.} \section{Appendix} \subsection{Experiment Setup} \paragraph{CIFAR10} The data loader pipeline consists of simple augmentations - padding by 4 pixels on each side, a random crop to $32\times 32$, a random horizontal flip with probability 0.5, and normalization. We train from scratch for 350 epochs and use the Adam optimizer~\cite{kingma2014adam}. We start with an initial learning rate of 0.01 and reduce it by a factor of 10 at epochs 150, 200, and 320. For an apples-to-apples comparison with binary and ternary, we sweep over batch sizes \{16, 32, 64, 128, 256\} (see Table~\ref{tab:batch_size}) and activation functions (ReLU, PReLU, TanH) and report the best top-1 validation accuracy. For the ablations on (1) value assignment percentage and (2) comparison with binary networks with comparable effectual operations, we set the batch size to 32 and the activation function to PReLU. \paragraph{Deploying on CPUs} We use SumMerge~\cite{prabhakar2021summerge} for this task. We run all experiments on an Intel Xeon Gold 6226 CPU.
To match the test environment of the authors of~\cite{prabhakar2021summerge} as closely as possible, we disable simultaneous multi-threading and dynamic frequency scaling, and enable 2MB huge pages. The test methodology is exactly the same as used by the authors of~\cite{prabhakar2021summerge}, i.e., each experiment is run 50 times when the machine is unloaded and the values for the run with the lowest execution time are reported. All arithmetic operations are in floating-point. All DNN inference experiments are subject to identical test environments and methodology. \paragraph{ASIC} We use STONNE~\cite{STONNE21}, a cycle-level microarchitectural simulator for the DNN inference accelerator SIGMA~\cite{qin2020sigma}, for this experiment. We use the docker image released by the authors of~\cite{STONNE21}. We use the standard configuration of SIGMA with 256 multiplier switches, 256 read ports in SDMemory and 256 write ports in SDMemory. The reduction network is set to ASNETWORK and the memory controller is set to SIGMA\_SPARSE\_GEMM. We use the SimulatedConv2d function in the PyTorch frontend version of STONNE. For a given convolutional layer, we run STONNE twice, once with 0\% sparsity and once with 69\% sparsity. We calculate the reduction in energy consumption by dividing the energy of the dense convolutional layer by the energy of the sparse convolutional layer. Since the precision (or bit-width) of the weights is a parameter of SIGMA, the reduction in energy due to sparsity when compared to the dense model is not a function of the precision (or bit-width) of the weights of the DNN. \subsection{Additional Ablation} We perform ablations on (1) batch size and (2) non-linearity on the CIFAR10~\cite{cifar10} dataset and report the numbers in Tables~\ref{tab:batch_size} and~\ref{tab:nonlin}, respectively. The setup is the same as mentioned above.
We observe that for our method there is a drop in accuracy at higher batch sizes, that PReLU~\cite{Relu} works best, and that the method is not sensitive to the choice of $\Delta$. In addition, we perform an ablation on the value assignment percentage for ResNet18 trained on ImageNet (Table~\ref{tab:imagenet_p}). \begin{table}[h!] \begin{subtable}[t]{0.45\textwidth} \begin{center} \begin{tabular}{cc} \toprule Batch Size & Accuracy (Top-1)\\ \midrule 16 & 89.44\\ 32 & 90.05\\ 64 & 89.62\\ 128 & 89.59\\ 256 & 88.51\\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Ablation on Batch Size}: The setup is identical across batch sizes and the non-linearity used is PReLU. We observe a decrease in accuracy when a high batch size of 256 is used.} \label{tab:batch_size} \end{subtable} \hspace{\fill} \begin{subtable}[t]{0.45\textwidth} \begin{center} \begin{tabular}{cc} \toprule Non-Linearity & Accuracy (Top-1)\\ \midrule ReLU & 88.64\\ PReLU & 90.05\\ TanH & 88.75\\ LReLU & 89.22\\ \bottomrule \end{tabular} \end{center} \vspace{5pt} \caption{\textbf{Ablation on Non-Linearity}: The setup is identical across non-linearities and the batch size used is 32. We observe that PReLU works best for our method.} \label{tab:nonlin} \end{subtable} \end{table} \begin{table}[h!] \begin{center} \begin{tabular}{ccc} \toprule \%\{0,1\} filters & \%\{0,-1\} filters & Acc \\ \midrule 1 & 0 & 55.23\\ 0.5 & 0.5 & 61.94\\ 0.25 & 0.75 & 62.29\\ 0.75 & 0.25 & 62.04 \\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Ablation on value assignment percentage for ResNet18 on ImageNet}: The setup is identical across different percentages. We observe a significant accuracy drop when using \{1,0\} only.} \label{tab:imagenet_p} \end{table} \newpage \subsection{Datasets} Licenses of the ImageNet~\cite{Imagenet} and CIFAR10~\cite{cifar10} datasets used in this paper are listed in Table~\ref{tab:my_ldatasetabel}. Every accuracy reported in this paper is on the validation set of the respective dataset. \begin{table}[h!]
\begin{center} \begin{tabular}{ccc} \toprule Dataset & License & Source\\ \midrule ImageNet & Non-Commercial & \href{https://image-net.org/challenges/LSVRC/2012/}{ILSVRC2012}\\ CIFAR10 & N/A & \href{https://www.cs.toronto.edu/~kriz/cifar.html}{CIFAR} \\ \bottomrule \end{tabular} \end{center} \caption{\textbf{Dataset with Licenses}: License and source of the datasets used.} \label{tab:my_ldatasetabel} \end{table} ImageNet and CIFAR10 are standard, publicly used datasets. Since they do not own their images, they do not have a release license. Individual images may have their own copyrights, but ImageNet provides stipulations for using the dataset (non-commercial use only). We do not recommend using the resulting signed-binary~ models trained on these datasets for any commercial use.{\looseness=-1}
\section*{Acknowledgements} \noindent We express our gratitude to our colleagues of the \mbox{LHCb}\xspace collaboration. The authors would like to thank G.~Cavoto, M.~Ferro-Luzzi, G.~Graziani, M.~Nebot, M.~Schiller, A.~Pich, V.~Vagnoni and G.~Wilkinson for interesting discussions. We acknowledge support from INFN (Italy), MinECo and GVA (Spain). \section{Spin precession and time evolution equations} \label{app:A} The time evolution of the spin-polarization vector for a particle with charge $q$ in an electromagnetic field, as a function of the proper time $\tau$, is given by the Thomas-Bargmann-Michel-Telegdi (T-BMT) equation~\cite{Thomas:1926dy,Thomas:1927yu,Bargmann:1959gz}, \begin{equation} \label{eq:TBMTCov} \frac{da^\mu}{d\tau} = \frac{g \mu_B}{\hbar} \left[ F^{\mu\nu}a_\nu + \frac{1}{c^2} \left( a_\alpha F^{\alpha\beta} u_\beta \right) u^\mu \right] - \frac{1}{c^2} \left( a_\alpha \dot{u}^\alpha \right) u^\mu - \frac{d \mu_B}{\hbar} \left[ F^{*\mu\nu}a_\nu + \frac{1}{c^2} \left( a_\alpha F^{*\alpha\beta} u_\beta \right) u^\mu \right] , \end{equation} where $F^{\mu\nu}$ is the electromagnetic tensor, $a^\mu = (a^0,\mathbf a)$ is the spin 4-pseudovector, and $p^\mu = m u ^\mu = \left(E/c,\mathbf p \right)$ is the momentum 4-vector. For homogeneous fields, the velocity derivative is given by the Lorentz force, \begin{equation} \dot{u}^\mu \equiv \frac{du^\mu}{d\tau} = \frac{q}{mc} F^{\mu\nu} u_\nu . \end{equation} In the rest frame of the particle, $a^\mu=(0, \mathbf s)$, $p^\mu=(mc,\bm 0)$, where $\mathbf s$ is the non-relativistic spin-polarization vector. Therefore, in any frame $a^\mu p_\mu=0$ and $a_\mu a^\mu=-{\mathbf s}^2$. 
In a frame comoving with respect to the particle rest frame where the particle has velocity $\bm \beta = \mathbf p / m \gamma $, \mbox{\itshape e.g.}\xspace the laboratory frame, $a^\mu$ is given by~\cite{Jackson:1998nia,Leader2011} \begin{equation} \mathbf a = \mathbf s + \frac{\gamma^2}{\gamma+1} (\bm \beta \cdot \mathbf s) \bm \beta~,~~ a^0 = \bm \beta \cdot \mathbf a = \gamma(\bm \beta \cdot \mathbf s) , \label{eq:SpinLab} \end{equation} where the components of the momentum 4-vector are $p^0 = \gamma m c^2$ and $\mathbf p =\gamma m \bm \beta c$. Substituting in the covariant Eq.(\ref{eq:TBMTCov}), the spin precession equation is~\cite{Jackson:1998nia,Leader2011,Fukuyama:2013ioa,Silenko:2014uca}, \begin{equation} \label{eq:TBMTgeneral} \frac{d \mathbf s}{ d t} = \mathbf s \times \bm \Omega ~, ~~~ \bm \Omega = \bm \Omega_{\rm MDM} + \bm \Omega_{\rm EDM} + \bm \Omega_{\rm TH} , \end{equation} where $t$ is the time in the laboratory frame, and the precession angular velocity vector $\bm \Omega$ has been split into three contributions, \begin{equation} \label{eq:OMEGAgeneral} \bm \Omega_{\rm MDM} = \frac{g \mu_B}{\hbar} \left( \mathbf B - \frac{\gamma}{\gamma+1}(\bm \beta \cdot \mathbf B)\bm \beta - \bm \beta \times \mathbf E\right) , \end{equation} \begin{equation} \nonumber \bm \Omega_{\rm EDM} = \frac{d \mu_B}{\hbar} \left( \mathbf E - \frac{\gamma}{\gamma+1}(\bm \beta \cdot \mathbf E)\bm \beta + \bm \beta \times \mathbf B\right) , \end{equation} \begin{equation} \nonumber \bm \Omega_{\rm TH} = \frac{\gamma^2}{\gamma+1} \bm \beta \times \frac{d \bm \beta}{d t} = \frac{q}{mc} \left[ \left( \frac{1}{\gamma} - 1 \right) \mathbf B + \frac{\gamma}{\gamma + 1} (\bm \beta \cdot \mathbf B)\bm \beta - \left( \frac{1}{\gamma+1} -1 \right) \bm \beta \times \mathbf E \right], \end{equation} corresponding to the MDM, EDM and Thomas precession. The electric and magnetic fields, $\mathbf E$ and $\mathbf B$, respectively, are expressed in the laboratory frame. 
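As an illustrative special case (not spelled out in the text, but following directly from the expressions above), for a neutral particle moving perpendicular to a purely magnetic field, \mbox{\itshape i.e.}\xspace $q=0$, $\mathbf E = 0$ and $\bm \beta \cdot \mathbf B = 0$, the three contributions reduce to
\begin{equation*}
\bm \Omega_{\rm MDM} = \frac{g \mu_B}{\hbar} \, \mathbf B , \qquad
\bm \Omega_{\rm EDM} = \frac{d \mu_B}{\hbar} \, \bm \beta \times \mathbf B , \qquad
\bm \Omega_{\rm TH} = 0 ,
\end{equation*}
so the MDM drives a precession about $\mathbf B$, while the EDM term, orthogonal to both $\bm \beta$ and $\mathbf B$, tilts the precession axis out of that plane.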
For a neutral particle ($q=0$) the Thomas precession term, arising from Lorentz forces, does not contribute and we obtain the classical equation, $d \mathbf s/ d \tau = \bm \mu \times \mathbf B^* + \bm \delta \times \mathbf E^*$, where $\mathbf E^*$ and $\mathbf B^*$ are the external fields in the rest frame of the particle~\cite{Jackson:1998nia}. Equations~(\ref{eq:TBMTgeneral}) and~(\ref{eq:OMEGAgeneral}) can be generalized to account for field gradient effects as described in Refs.~\cite{Good:1962zza,Metodiev:2015gda}. \section{Asymmetry parameter $\alpha$ for quasi two-body final states in \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{{\ensuremath{\Pp}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace} decays} \label{app:B} The angular distribution for a spin \decay{1/2}{1/2\; 0} baryon decay is given by Eq.~\eqref{eq:AngDist}. The parameter $\alpha$ characterizes the parity violation in the decay and determines the sensitivity to the initial polarization. The effective $\alpha$ parameter for \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{{\ensuremath{\Kbar{}^{*0}}}\xspace ({\ensuremath{\kaon^-}}\xspace {\ensuremath{\pion^+}}\xspace) {\ensuremath{\Pp}}\xspace}, \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{\Delta^{++} \left({\ensuremath{\Pp}}\xspace {\ensuremath{\pion^+}}\xspace\right) {\ensuremath{\kaon^-}}\xspace} and \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{\Lambda(1520)\left({\ensuremath{\Pp}}\xspace {\ensuremath{\kaon^-}}\xspace\right) {\ensuremath{\pion^+}}\xspace} quasi two-body decays can be calculated using the results of the amplitude analysis of \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{{\ensuremath{\Pp}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace} decays reported in Ref.~\cite{Aitala:1999uq}. The angular distribution for those decays is determined by the helicity amplitudes.
A similar angular distribution to Eq.~\eqref{eq:AngDist} is obtained for the above quasi two-body decays when integrating over all the decay angles, except for the helicity angle of the baryon daughter of the {\ensuremath{\Lz^+_\cquark}}\xspace. The computed $\alpha$ parameters are listed in \tablename~\ref{tab:alphas}. \begin{table}[htb] \caption{Computed $\alpha$ parameters for different quasi two-body final states in \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{{\ensuremath{\Pp}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace} decays. The values for the helicity amplitudes are taken from Ref.~\cite{Aitala:1999uq}. Since no correlation matrix is provided in the article, the errors are calculated assuming no correlation among the helicity amplitude results.\label{tab:alphas}} \centering \begin{tabular}{lc} \toprule Decay & $\alpha$\\ \midrule \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{{\ensuremath{\Kbar{}^{*0}}}\xspace ({\ensuremath{\kaon^-}}\xspace {\ensuremath{\pion^+}}\xspace) {\ensuremath{\Pp}}\xspace} & $-0.545\pm0.345$\\ \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{\Delta^{++} \left({\ensuremath{\Pp}}\xspace {\ensuremath{\pion^+}}\xspace\right) {\ensuremath{\kaon^-}}\xspace} & $-0.666\pm0.298$\\ \decay{{\ensuremath{\Lz^+_\cquark}}\xspace}{\Lambda(1520)\left({\ensuremath{\Pp}}\xspace {\ensuremath{\kaon^-}}\xspace\right) {\ensuremath{\pion^+}}\xspace} & $-0.105\pm0.604$\\ \bottomrule \end{tabular} \end{table} \subsection{Spin time evolution for the {\ensuremath{\Lambda}}\xspace case} \label{app:lambda} For $\mathbf E = 0$ and $q=0$, Eqs.~(\ref{eq:TBMTgeneral}) and (\ref{eq:OMEGAgeneral}) simplify to \begin{equation}\label{eq:TBMTLHCb} \frac{d \mathbf s}{ d t} = \mathbf s \times \bm \Omega , \end{equation} \begin{equation}\label{eq:TBMTLHCb-Omega} \bm \Omega = \frac{\mu_B}{\hbar} \left[ g \left( \mathbf B -\frac{\gamma}{\gamma+1}(\bm \beta \cdot \mathbf B)\bm \beta \right) + d \bm \beta \times \mathbf B \right] , \end{equation} where $\bm 
\beta$ is the particle velocity in the laboratory frame. This system of homogeneous first-order linear differential equations can be solved analytically under the approximation that the precession of the particle depends only on the integrated magnetic field along its flight path. Given the initial condition $\mathbf s(0)=\mathbf s_0$, the time evolution of the polarization is \begin{equation}\label{eq:sAnalytical} \mathbf s (t) = (\mathbf s_0 \cdot \bm \omega) \bm \omega + \left[ \mathbf s_0 - (\mathbf s_0 \cdot \bm \omega) \bm \omega \right] \cos(\Omega t) + (\mathbf s_0 \times \bm \omega) \sin (\Omega t) , \end{equation} where $\Omega = |\bm \Omega|$ and $\bm \omega = \bm \Omega / \Omega$, with the precession angular velocity given by Eq.~(\ref{eq:TBMTLHCb-Omega}). The polarization in terms of the experimentally measured {\ensuremath{\Lambda}}\xspace flight length $l=\beta c t$, $\mathbf s (l)$, has a similar form, \begin{equation}\label{eq:sAnalytical-DL} \mathbf s (l) = (\mathbf s_0 \cdot \bm \omega') \bm \omega' + \left[ \mathbf s_0 - (\mathbf s_0 \cdot \bm \omega') \bm \omega' \right] \cos\Phi + (\mathbf s_0 \times \bm \omega') \sin\Phi , \end{equation} where $\Phi = |\bm \Phi|$ and $\bm \omega' =\bm \Phi/\Phi$. The precession angle vector is \begin{equation}\label{eq:sAnalytical2} \bm \Phi = \frac{\mu_B}{\beta \hbar c} \left[ g \left( \mathbf D -\frac{\gamma}{\gamma+1}(\bm \beta \cdot \mathbf D)\bm \beta \right) + d \bm \beta \times \mathbf D \right] , \end{equation} with $\mathbf D \approx \mathbf{\overline B} l = \int_0^l \mathbf B(\mathbf r_0 + \bm \beta l'/\beta)dl'$ the integrated magnetic field along the {\ensuremath{\Lambda}}\xspace flight path. \subsubsection{Magnetic field gradients} \label{app:lambda_BfieldGrad} The inhomogeneities of the magnetic field are not expected to introduce significant effects in the spin precession.
The spin equation of motion including first-order field gradients is derived in Ref.~\cite{Metodiev:2015gda} to be \begin{align} \bm{\Omega}_{\rm MDM} &= \frac{g\mu_B}{\hbar} \left[ \bf{B} - \frac{\gamma}{\gamma+1} (\bm{\beta}\cdot\bf{B}) \bm{\beta}\right] \nonumber\\ &+ \frac{g\mu_B}{2}\frac{1}{mc} \frac{\gamma}{\gamma+1} (\bm{\beta}\times\bm{\nabla}) \left[\bm{s}\cdot\left(\bf{B}-\frac{\gamma}{\gamma+1} \bm{\beta}(\bm{\beta}\cdot\bf{B})\right) \right]~, \nonumber\\ \bm{\Omega}_{\rm EDM} &= \frac{d\mu_B}{\hbar} \left[ \bm{\beta}\times\bf{B} \right] + \frac{d\mu_B}{2}\frac{1}{mc} \frac{\gamma}{\gamma+1} (\bm{\beta}\times\bm{\nabla})\left[\bm{s}\cdot(\bm{\beta}\times\bf{B})\right] . \end{align} In \mbox{LHCb}\xspace the ratio of the field gradient terms to the homogeneous field ones can be estimated as $$ \frac{\hbar}{2 m c}\frac{\beta \gamma}{\gamma+1} \frac{|\bm{\nabla}B|}{B}\sim 7.4 \times 10^{-16}~,$$ with $\beta\simeq 1$ and $\gamma \gg 1$, and where $|\bm{\nabla}B| = 1.14~\text{Tm}^{-1}$ and $B=1~\text{T}$ are the maximum values within the detector acceptance as extracted from the \mbox{LHCb}\xspace field mapping~\cite{LHCb-DP-2014-002,Hicheur:2007jfk}. Therefore, this effect is negligibly small at \mbox{LHCb}\xspace. \subsubsection{Spin rotations} \label{app:lambda_rotations} The variation of the {\ensuremath{\Lambda}}\xspace momentum direction in the laboratory frame results in an initial polarization vector which is not fixed to be perpendicular to the magnetic field. The relative orientation of the spin and magnetic field vectors is determined by two rotations. On one hand, the polarization vector from the equation of motion is given in the comoving rest frame reached from the laboratory frame, \ensuremath{S_{\mathrm L}}\xspace, by a pure boost. This is usually referred to as canonical frame~\cite{Leader2011}. However, the analyser, given by Eq.~\ref{eq:AngDist}, is defined in the particle helicity frame. 
The two rest frames, canonical and helicity, are related by the rotation between the \ensuremath{S_{\mathrm L}}\xspace and \ensuremath{S_{\Lz\mathrm L}}\xspace frames, defined by the {\ensuremath{\Lambda}}\xspace and \ensuremath{H}\xspace momentum directions in \ensuremath{S_{\mathrm L}}\xspace (see Fig.~\ref{fig:frames}). On the other hand, the choice of the \ensuremath{S_{\Lz\mathrm L}}\xspace frame induces a second rotation of the polarization components with respect to the \ensuremath{S_{\Lz}}\xspace frame, where the {\ensuremath{\Lambda}}\xspace longitudinal polarization is maximal, as illustrated in Fig.~\ref{fig:frames-rotations}. This is known in the literature as the Wick rotation. To avoid dilution effects, the change of the polarization has to be analysed as a function of the kinematics of the decay. For example, a longitudinally polarized {\ensuremath{\Lambda}}\xspace with polarization $s_0$ along $z$ in \ensuremath{S_{\Lz}}\xspace would have a transverse component in \ensuremath{S_{\Lz\mathrm L}}\xspace of magnitude $s_0 \sin\alpha$, with $\sin\alpha = (m_{\ensuremath{\Lambda}}\xspace / m_{\ensuremath{H}\xspace}) (p_{\ensuremath{H}\xspace}^{(\mathrm L)} / p_{\ensuremath{\Lambda}}\xspace^{(\mathrm L)}) \sin \theta$~\cite{Leader2011}. As shown in Fig.~\ref{fig:frames-rotations}, the {\ensuremath{\Lambda}}\xspace helicity angle $\theta$ and the spin direction are related to the {\ensuremath{\Lambda}}\xspace impact parameter in the laboratory~\cite{Grosnick:1989qv}. This relation can be exploited to define ensembles of {\ensuremath{\Lambda}}\xspace particles having similar initial polarization, therefore improving the sensitivity to detect the spin change. \begin{figure}[h!]
\centering { \includegraphics[width=0.85\linewidth]{./frame_rotations.pdf} } \caption{Sketch of the heavy baryon production at the primary vertex (PV) and its decay into a {\ensuremath{\Lambda}}\xspace, showing the \ensuremath{S_{\PH}}\xspace, \ensuremath{S_{\Lz}}\xspace and \ensuremath{S_{\Lz\mathrm L}}\xspace helicity frames, in the $zy$ plane in \ensuremath{S_{\mathrm L}}\xspace. Continuous (dotted-dashed) arrows represent momenta in \ensuremath{S_{\mathrm L}}\xspace (\ensuremath{S_{\PH}}\xspace) frame. The {\ensuremath{\Lambda}}\xspace polarization vector (thick arrow at the right) is aligned along the $z$ axis in \ensuremath{S_{\Lz}}\xspace (longitudinal polarization), and rotated by the Wick angle $\alpha$ with respect to $z$ in \ensuremath{S_{\Lz\mathrm L}}\xspace. The polarization state of the {\ensuremath{\Lambda}}\xspace in \ensuremath{S_{\Lz\mathrm L}}\xspace (thick arrows at the left) is correlated with its apparent production point on the $z$ plane in \ensuremath{S_{\mathrm L}}\xspace intersecting the PV. These points are shown by the short-dashed lines traced back from the {\ensuremath{\Lambda}}\xspace trajectory (intersecting the \ensuremath{H}\xspace decay point). The angle $\theta$ ($\theta_{\mathrm L}$) is formed by the {\ensuremath{\Lambda}}\xspace momentum in the \ensuremath{S_{\PH}}\xspace (\ensuremath{S_{\mathrm L}}\xspace) frame with respect to the $z$ axis in \ensuremath{S_{\PH}}\xspace. } \label{fig:frames-rotations} \end{figure} For the sensitivity studies, the rotation of the magnetic field into the \ensuremath{S_{\Lz\mathrm L}}\xspace frame and the Wick rotation are neglected. The first is expected to have a negligible impact on our study since {\ensuremath{\Lambda}}\xspace baryons have momenta largely along the $z$ axis, and the main component of the magnetic field is along the vertical direction ($B_y$), thus mostly perpendicular to the {\ensuremath{\Lambda}}\xspace motion. 
Instead, the effect of the Wick rotation is not relevant when measuring the spin change of ensembles of {\ensuremath{\Lambda}}\xspace particles having similar initial polarization. \subsection{Spin time evolution for the {\ensuremath{\Lz^+_\cquark}}\xspace and {\ensuremath{\PXi^+_\cquark}}\xspace case} \label{app:lambdac} For $\mathbf B = 0$ and $q = +1$, Eq.~\eqref{eq:OMEGAgeneral} simplifies to \begin{equation} \bm \Omega = \frac{2\mu'}{\hbar} \left( \mathbf E\times\bm\beta \right) + \frac{d\mu_B}{\hbar} \mathbf E +\frac{1}{\gamma+1} \frac{2\mu_B}{\hbar} \left( \mathbf E\times\bm \beta \right) - \frac{d\mu_B}{\hbar} \frac{\gamma}{\gamma+1} \left( \bm\beta\cdot\mathbf E \right) \bm\beta, \label{eq:TBMT_channelled} \end{equation} where \begin{equation} \mu' = \frac{g-2}{2}\frac{e\hbar}{2m c}, \label{eq:mu_prime} \end{equation} is the anomalous magnetic moment for a spin-1/2 particle. Since we are dealing with ultra-relativistic {\ensuremath{\Lz^+_\cquark}}\xspace baryons, with $\gamma \approx 437$ at $1 \ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$ energy, to first approximation the terms $\propto 1/\gamma$ are neglected. \begin{figure} \centering \includegraphics[scale=0.7]{./BendingField} \caption{Radial coordinates definition: $\rho_0$ is the radius corresponding to the minimum of the harmonic electric potential; $\rho_0'$ represents the radial equilibrium position of the electric and centrifugal potential. The red curve represents the particle trajectory inside the crystal in the presence of the radial electric field $\mathbf{E}$; $a$ is the oscillation amplitude and $\Omega$ the revolution frequency.
\label{fig: trajectory in crystals}} \end{figure} We describe the particle trajectory in a bent crystal using radial coordinates~\cite{Baryshevsky:2015zba}, as shown in Fig.~\ref{fig: trajectory in crystals}, \begin{equation} x(t) = {~\rm const.}, \hspace{1cm} y(t) = \rho(t)\cos(\Omega t), \hspace{1cm} z(t) = \rho(t)\sin(\Omega t), \end{equation} where $\Omega$ is the revolution frequency for the particle traversing the bent crystal. In our ultra-relativistic case it is well approximated by $\Omega\approx c/\rho_0$, where $\rho_0$ is the crystal curvature radius. The radius of the trajectory as a function of time is \begin{equation} \rho(t) = \rho'_0 + a \cos(\Omega_k t + \delta), \label{eq:rho(t)} \end{equation} where $a$, $\Omega_k$ and $\delta$ are the oscillation amplitude, frequency and phase, respectively; $a$ and $\delta$ depend on the particle energy and incident angle, while $\Omega_k$ depends on the crystal potential and particle energy. The radial equilibrium position $\rho'_0$ differs from the electric potential minimum position $\rho_0$, due to the centrifugal potential, avoiding periodic cancellations and therefore inducing spin precession~\cite{Kim:1982ry}. The electric potential in the crystal around the minimum can be approximated as a harmonic potential, \begin{equation} V = \frac{k}{e} \frac{\left[ \rho(t) - \rho_0 \right]^2}{2}, \label{eq:harmonic_potential} \end{equation} and the corresponding electric field is \begin{equation} E_x = 0 \hspace{1cm} E_y=-\frac{dV}{d\rho} \cos(\Omega t) \hspace{1cm} E_z = -\frac{dV}{d\rho} \sin(\Omega t), \end{equation} where the oscillation frequency of the particle around its equilibrium position $\rho'_0$ is $\Omega_k=\sqrt{kc^2/eW}$ with $W$ being the particle energy. 
Typical values for the relevant quantities are $\rho_0 \sim 30 \ensuremath{\mathrm{ \,m}}\xspace$, $\Omega\approx c/\rho_0 \sim 10^7 \ensuremath{{\mathrm{ \,Hz}}}\xspace$, $a \sim 10^{-10} \ensuremath{\mathrm{ \,m}}\xspace$, $k = 4\times 10^{17} \ensuremath{\mathrm{\,e\kern -0.1em V}}\xspace/\ensuremath{{\mathrm{ \,cm}}^2}\xspace$ for a Si crystal, yielding $\Omega_k \sim 10^{13} \ensuremath{{\mathrm{ \,Hz}}}\xspace$ for $1 \ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace$ particles. Substituting the radial coordinates and applying the ultra-relativistic approximation to Eq.~(\ref{eq:TBMT_channelled}) we obtain: \begin{align} \Omega_x &\approx \frac{2\mu'}{\hbar} (E_y\beta_z - E_z\beta_y) = \frac{2\mu'}{\hbar} \left(- \frac{dV}{d\rho} \frac{\rho\Omega}{c} \right) \nonumber\\ \Omega_y &\approx \frac{d\mu_B}{\hbar} \left[ E_y - \left( \bm\beta\cdot\mathbf E \right) \beta_y \right] = \frac{d\mu_B}{\hbar} \left\{ -\frac{dV}{d\rho}\cos(\Omega t) + \frac{dV}{d\rho} \frac{\dot{\rho}}{c^2} \left[ -\rho\Omega\sin(\Omega t) + \dot{\rho}\cos(\Omega t) \right] \right\} \nonumber\\ \Omega_z &\approx \frac{d\mu_B}{\hbar} \left[ E_z - \left( \bm\beta\cdot\mathbf E \right) \beta_z \right] = \frac{d\mu_B}{\hbar} \left\{ -\frac{dV}{d\rho}\sin(\Omega t) + \frac{dV}{d\rho} \frac{\dot{\rho}}{c^2} \left[ \rho\Omega\cos(\Omega t) + \dot{\rho}\sin(\Omega t) \right] \right\}.\label{eq:precession_vector} \end{align} In absence of EDM, \mbox{\itshape i.e.}\xspace $d=0$, the spin precession inside the bent crystal occurs in the $yz$ plane with the following spin time evolution~\cite{Baryshevsky:2015zba}, \begin{equation} \mathbf s(t) ~=~ \left\lbrace \begin{array}{l} s_x(t) = 0 \\ s_y(t) = s_0 \cos\left(\omega t\right) \\ s_z(t) = - s_0 \sin\left( \omega t\right) \end{array} \right. , \label{eq:spin_precession_zero_EDM} \end{equation} for the initial condition ${\mathbf s_0} = \left( 0, s_0, 0\right)$ and where $\omega \approx 2\mu'E(\rho'_0)/\hbar $ is the precession frequency. 
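These order-of-magnitude estimates can be reproduced directly from the quantities defined above; the following is a minimal numerical sketch using the same illustrative crystal parameters.

```python
# Order-of-magnitude check of the channeling frequencies quoted in the text.
C = 2.99792458e8          # speed of light [m/s]

rho0 = 30.0               # crystal curvature radius [m]
k    = 4e17 * 1e4         # harmonic-potential constant, eV/cm^2 -> eV/m^2
W    = 1e12               # particle energy [eV] (1 TeV)
m_Lc = 2.28646e9          # Lambda_c+ mass [eV/c^2]

gamma   = W / m_Lc                    # Lorentz factor at 1 TeV, ~437
Omega   = C / rho0                    # revolution frequency [Hz], ~1e7
Omega_k = (k * C**2 / W) ** 0.5       # channeling oscillation frequency [Hz], ~2e13

print(f"gamma   ~ {gamma:.0f}")
print(f"Omega   ~ {Omega:.1e} Hz")
print(f"Omega_k ~ {Omega_k:.1e} Hz")
```

The electron charge cancels in $\Omega_k=\sqrt{kc^2/eW}$ once both $k$ and $W$ are expressed in eV-based units, which is why it does not appear in the script.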
The spin precession angle defined in Eq.~\eqref{eq:MDM_angle} is $\Phi= \omega\overline{t}$, where $\overline{t}$ is the time needed to traverse the crystal. In presence of a non-zero EDM the spin precession is no longer confined to the $yz$ plane, generating an $s_x$ spin component otherwise not present, \begin{eqnarray} \frac{ds_x}{dt} & = & s_y\Omega_z - s_z\Omega_y \nonumber\\ & \approx & \frac{d\mu_B}{\hbar} \frac{dV}{d\rho} s_0 \bigg\{-\cos(\omega t)\sin(\Omega t) - \sin(\omega t)\cos(\Omega t) \nonumber\\ & & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, +\frac{\dot{\rho}\rho\Omega}{c^2} \big[ \cos(\omega t)\cos(\Omega t) - \sin(\omega t)\sin(\Omega t) \big] \nonumber\\ & & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + \left.\frac{\dot{\rho}^2}{c^2}\big[ \cos(\omega t)\sin(\Omega t) + \sin(\omega t)\cos(\Omega t) \big] \right\rbrace \nonumber\\ & = & \frac{d\mu_B}{\hbar} \frac{dV}{d\rho} s_0 \bigg\{ -\sin\left[(\omega+\Omega)t\right] + \frac{\dot{\rho}\rho\Omega}{c^2} \cos\left[(\omega+\Omega)t\right] + \frac{\dot{\rho}^2}{c^2} \sin\left[(\omega+\Omega)t\right] \bigg\}. \label{eq:sy_evolution} \end{eqnarray} To derive Eq.~(\ref{eq:sy_evolution}), EDM effects are assumed to be small compared to the MDM effects, \mbox{\itshape i.e.}\xspace $d \ll (g-2)$, and therefore $\Omega_y,\Omega_z\ll\Omega_x$. We neglect terms of order $\dot{\rho}/c$ where \begin{equation} \dot{\rho} = -a\Omega_k \sin(\Omega_k t + \delta) \sim a\Omega_k \sim 10^3 \ensuremath{\mathrm{ \,m}}\xspace/\ensuremath{\mathrm{{\,s}}}\xspace, \end{equation} since the second term of Eq.~\eqref{eq:sy_evolution} is about $ \dot{\rho}\rho\Omega/c^2 \sim \dot{\rho}/c \sim 3 \times 10^{-6}$ and the third term is about $\dot{\rho}^2/c^2 \sim 10^{-11}$. 
We demonstrate that $\Omega \ll \omega$ by requiring the electric force to be equal to the centripetal force, \begin{equation} \frac{m\gamma c^2}{\rho'_0} = eE(\rho'_0), \end{equation} and obtain $\omega \approx \frac{2\mu'}{\hbar}E(\rho'_0) \sim 10^{10} \ensuremath{{\mathrm{ \,Hz}}}\xspace \gg \Omega \sim 10^7 \ensuremath{{\mathrm{ \,Hz}}}\xspace$. Then, Eq.~\eqref{eq:sy_evolution} simplifies to \begin{equation} \frac{ds_x}{dt} = \frac{d\mu_B}{\hbar} \left(-\frac{dV}{d\rho}\right) s_0 \sin(\omega t), \label{eq:sy evolution approximated} \end{equation} and the time evolution is \begin{equation} s_x(t) = -\frac{d\mu_B}{\hbar} E(\rho'_0) \int^t_0 \sin(\omega t') dt' - \frac{d\mu_B}{\hbar} \frac{ka}{e} \int^t_0 \cos(\Omega_k t'+\delta)\sin(\omega t') dt'. \end{equation} The second integral is negligibly small since $\Omega_k \gg \omega$ and its fast oscillation averages the integral to zero. The calculation can be decomposed into two analytically integrable terms proportional to $\sin(\Omega_k t')\sin(\omega t')$ and $\cos(\Omega_k t')\sin(\omega t')$. Assuming $\Omega_k \gg \omega$, the maximum value of this integral is \begin{equation} \sim \frac{d\mu_B}{\hbar}\frac{ka}{e\Omega_k} \sim 2\frac{d}{g-2}\xi, \end{equation} where $\xi = \mu'ka/\hbar e\Omega_k\lesssim 10^{-2}$ and terms proportional to $\xi$ were neglected to derive Eq.~\eqref{eq:spin_precession_zero_EDM}~\cite{Baryshevsky:2015zba}. Finally, we obtain the time evolution of the polarization vector in presence of a non-negligible EDM, \begin{equation} \mathbf s(t) ~=~ \left\lbrace \begin{array}{l} s_x(t) \approx s_0 \dfrac{d}{g-2} \big[ \cos(\omega t) - 1 \big] \\ s_y(t) \approx s_0 \cos\left(\omega t\right) \\ s_z(t) \approx - s_0 \sin\left( \omega t\right) \end{array} \right. . 
\end{equation} \subsubsection{Electric field gradients} The equations describing the particle trajectory and its spin precession in an electromagnetic field, including first-order electromagnetic field gradients, as well as a particle EDM contributions, are derived in \cite{Metodiev:2015gda}. In absence of magnetic fields the spin precession vector $\mathbf\Omega = \mathbf\Omega_{\rm MDM} + \mathbf\Omega_{\rm EDM} + \mathbf\Omega_{\rm TH}$ is \begin{align} \mathbf\Omega_{\rm MDM} &= \frac{g\mu_B}{\hbar} \left[ \mathbf E\times \bm\beta \right] + \frac{g\mu_B}{2}\frac{1}{mc} \frac{\gamma}{\gamma+1} (\bm\beta\times\mathbf\nabla)\left[\mathbf s\cdot(\mathbf E\times\bm\beta)\right], \nonumber\\ \mathbf\Omega_{\rm EDM} &= \frac{d\mu_B}{\hbar} \left[ \mathbf E - \frac{\gamma}{\gamma+1} (\bm\beta\cdot\mathbf E) \bm\beta\right] \nonumber\\ &+ \frac{d\mu_B}{2}\frac{1}{mc} \frac{\gamma}{\gamma+1} (\bm\beta\times\mathbf\nabla) \left[\mathbf s\cdot\left(\mathbf E - \frac{\gamma}{\gamma+1} \bm\beta(\bm\beta\cdot\mathbf E)\right) \right], \end{align} with unchanged Thomas precession component. Using the harmonic potential approximation we obtain \begin{equation} \frac{d|\mathbf E|}{d\rho} = \frac{k}{e}, \end{equation} and employing the values used in this appendix, the ratio of the field gradient terms to the homogeneous field ones is estimated to be \begin{equation} \frac{\hbar\frac{d|\mathbf E|}{d\rho}}{2mc|\mathbf E|} = \frac{\hbar k \rho'_0}{2m^2\gamma c^3} \sim 2.3 \times 10^{-3} \, \frac{1}{\gamma}, \end{equation} which is negligibly small in the ultra-relativistic regime. 
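The quoted suppression factor follows from simple arithmetic with the parameter values used in this appendix; below is a quick sketch in SI units, taking $\rho'_0 \approx \rho_0 = 30$~m.

```python
# Ratio of the field-gradient terms to the homogeneous-field ones,
# hbar * k * rho0 / (2 * m^2 * gamma * c^3), evaluated times gamma.
HBAR = 1.054571817e-34    # reduced Planck constant [J s]
C    = 2.99792458e8       # speed of light [m/s]
E_CH = 1.602176634e-19    # elementary charge [C] (also J per eV)

k    = 4e17 * 1e4 * E_CH          # 4e17 eV/cm^2 -> J/m^2
rho0 = 30.0                       # curvature radius [m]
m    = 2.28646e9 * E_CH / C**2    # Lambda_c+ mass [kg]

ratio_times_gamma = HBAR * k * rho0 / (2 * m**2 * C**3)
print(f"gradient/homogeneous ~ {ratio_times_gamma:.1e} / gamma")  # ~2.3e-3/gamma
```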
When including electric field gradient effects, in absence of magnetic fields, the particle trajectory equation becomes \begin{align} mc\frac{d(\gamma\bm\beta)}{dt} &= q\mathbf E \nonumber\\ &+ \gamma^2\frac{g\mu_B}{2} \left[ \mathbf\nabla + \bm\beta\times(\bm\beta\times\mathbf\nabla) + \frac{1}{c}\bm\beta\frac{\partial}{\partial t}\right]\left[\mathbf s\cdot(\mathbf E\times\bm\beta)\right] \nonumber\\ &+ \gamma^2 \frac{d\mu_B}{2} \left[ \mathbf\nabla + \bm\beta\times(\bm\beta\times\mathbf\nabla) + \frac{1}{c}\bm\beta\frac{\partial}{\partial t}\right] \left[\mathbf s\cdot\left(\mathbf E-\frac{\gamma}{\gamma+1} \bm\beta(\bm\beta\cdot\mathbf E)\right) \right], \end{align} where the first term is the Lorentz force and the following two terms are the MDM and EDM contributions. In our experimental setup the initial spin vector is orthogonal to $\mathbf E\times\bm\beta$, hence the MDM component is negligible. The typical magnitude of the ratio between the EDM electric field gradient term and the Lorentz force contribution is $\sim d\gamma \times 10^{-3}$ which can be close to 1 for $\gamma\sim 1000$ only if $d\sim 1$, \mbox{\itshape i.e.}\xspace similar EDM and MDM magnitudes. However, we assume the EDM magnitude to be tiny with respect to the MDM one, as already assumed in the derivation of the spin equation of motion. In case of a large EDM, this term would make the spin precession frequency dependent on the spin direction. \section{Conclusions} \label{sec:conclusion} The unique possibility to search for the EDM of strange and charm baryons at LHC is discussed, based on the exploitation of large statistics of baryons with large Lorentz boost and polarization. 
The {\ensuremath{\Lambda}}\xspace strange baryons are selected from weak charm baryon decays produced in {\ensuremath{\Pp}}\xspace\proton collisions at $\approx 14$~\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace center-of-mass energy, while {\ensuremath{\Lz^+_\cquark}}\xspace and {\ensuremath{\PXi^+_\cquark}}\xspace charm baryons are produced in a fixed-target experiment to be installed in the LHC, in front of the \mbox{LHCb}\xspace detector. Signal events can be reconstructed using the \mbox{LHCb}\xspace detector in both cases. The sensitivity to the EDM and the MDM of the strange and charm baryons arises from the study of the spin precession in intense electromagnetic fields. The long-lived {\ensuremath{\Lambda}}\xspace precesses in the magnetic field of the detector tracking system. Short-lived charm baryons are channeled in a bent crystal attached to the target and the intense electric field between atomic planes induces the spin precession. Sensitivities for the {\ensuremath{\Lambda}}\xspace EDM at the level of $ 1.3 \times 10^{-18}~e\ensuremath{\mathrm{ \,cm}}\xspace$ can be achieved using a data sample corresponding to an integrated luminosity of 50 \ensuremath{\mbox{\,fb}^{-1}}\xspace to be collected during the LHC Run 3. A test of {\ensuremath{C\!PT}}\xspace symmetry can be performed by measuring the MDM of {\ensuremath{\Lambda}}\xspace and {\ensuremath{\kern 0.1em\overline{\kern -0.1em\Lambda}}}\xspace baryons with a precision of about $4\times 10^{-4}$ on the $g$ factor. The EDM of the {\ensuremath{\Lz^+_\cquark}}\xspace ({\ensuremath{\PXi^+_\cquark}}\xspace) can be searched for with a sensitivity of $1.3~(2.0)\times 10^{-17}/\sqrt{t({\rm month})}~e\ensuremath{\mathrm{ \,cm}}\xspace$ with dedicated runs or running in synergetic mode with the \mbox{LHCb}\xspace experiment, in parallel to {\ensuremath{\Pp}}\xspace\proton collisions. Both solutions have to be studied in detail using ad-hoc simulations. 
The proposed experiment would allow about two orders of magnitude improvement in the sensitivity for the {\ensuremath{\Lambda}}\xspace EDM and the first search for the charm baryon EDM, expanding the search for new physics through the EDM of fundamental particles. \section{Introduction} \label{sec:intro} The experimental searches for the electric dipole moment (EDM) of fundamental particles provide powerful probes for physics beyond the Standard Model (SM). The existence of permanent EDMs requires the violation of parity ($P$) and time reversal ($T$) symmetries and thus, relying on the validity of the {\ensuremath{C\!PT}}\xspace theorem, the violation of {\ensuremath{C\!P}}\xspace symmetry. Since EDM searches started in the fifties~\cite{Purcell:1950zz,Smith:1957ht}, there has been an intense experimental program, leading to limits on the EDM of leptons~\cite{Baron:2013eja,Bennett:2008dy,Inami:2002ah}, neutron~\cite{Afach:2015sja}, heavy atoms~\cite{Griffith:2009zz}, proton (indirect from $^{199}$Hg)~\cite{Dmitriev:2003sc}, and {\ensuremath{\Lambda}}\xspace baryon~\cite{Pondrom:1981gu}. New experiments are ongoing and others are planned, including those based on storage rings for muon~\cite{Grange:2015fou,Saito:2012zz}, proton and light nuclei~\cite{Anastassopoulos:2015ura,Pretz2015JEDI,Khriplovich:1998zq}. Comprehensive reviews on EDM experiments can be found in Refs.~\cite{Engel:2013lsa,Fukuyama:2012np,Jungmann:2013sga,Pospelov:2005pr,Semertzidis:2011zz,Semertzidis:2016wtd,Onderwater2011}. The amount of {\ensuremath{C\!P}}\xspace violation in the weak interactions of quarks is not sufficient to explain the observed imbalance between matter and antimatter in the Universe. The SM Lagrangian of strong interactions contains a {\ensuremath{C\!P}}\xspace-violating term proportional to the QCD vacuum angle $\theta$; however, no {\ensuremath{C\!P}}\xspace violation has been observed in the strong interactions. 
A stringent upper bound, $\theta \lsim 10^{-10}$, is derived from the experimental limit on the EDM of the neutron, $<3.0\times 10^{-26}~e\ensuremath{\mathrm{ \,cm}}\xspace$ (90\% C.L.)~\cite{Afach:2015sja}. This degree of tuning in the value of $\theta$ is known as the ``strong {\ensuremath{C\!P}}\xspace'' problem. Several solutions have been proposed, among which is the Peccei-Quinn mechanism~\cite{Peccei:1977hh,Weinberg:1977ma,Wilczek:1977pj} that predicts the axion as a candidate for dark matter. EDM searches of fundamental particles rely on the measurement of the spin precession angle induced by the interaction with the electromagnetic field. For unstable particles this is challenging since the precession has to take place before the decay. A solution to this problem requires large samples of high energy polarized particles traversing an intense electromagnetic field. In this work, we discuss the unique possibility to search for the EDM of the strange {\ensuremath{\Lambda}}\xspace baryon and of the charm {\ensuremath{\Lz^+_\cquark}}\xspace and {\ensuremath{\PXi^+_\cquark}}\xspace baryons at LHC. Using the experimental upper limit of the neutron EDM, the absolute value of the {\ensuremath{\Lambda}}\xspace EDM is predicted to be \mbox{$<4.4\times 10^{-26}~e\ensuremath{\mathrm{ \,cm}}\xspace$~\cite{Guo:2012vf,Atwood:1992fb,Pich:1991fq,Borasoy:2000pq}}, while the indirect constraints on the charm EDM are weaker, $\lsim 4.4\times 10^{-17}~e\ensuremath{\mathrm{ \,cm}}\xspace$~\cite{Sala:2013osa}. Any experimental observation of an EDM would indicate a new source of {\ensuremath{C\!P}}\xspace violation from physics beyond the SM. The EDM of the long-lived {\ensuremath{\Lambda}}\xspace baryon was measured to be $<1.5 \times 10^{-16}~e\ensuremath{\mathrm{ \,cm}}\xspace$ (95\% C.L.) in a fixed-target experiment at Fermilab~\cite{Pondrom:1981gu}. 
No experimental measurements exist for short-lived charm baryons since a negligibly small spin precession would be induced by the magnetic fields used in current particle detectors. By studying the spin precession of polarized {\ensuremath{\Lambda}}\xspace baryons, originating from weak charm baryon decays, it is possible to extract the EDM. We show that an improvement of the present limit of about two orders of magnitude is within reach of the \mbox{LHCb}\xspace experiment. The measurement of the magnetic dipole moment (MDM) of {\ensuremath{\Lambda}}\xspace and {\ensuremath{\kern 0.1em\overline{\kern -0.1em\Lambda}}}\xspace baryons would allow a test of {\ensuremath{C\!PT}}\xspace symmetry at the per mille level. A similar test has been performed for the proton~\cite{DiSciacca:2013hya}, electron~\cite{VanDyck:1987ay}, and muon~\cite{Bennett:2004pv}, and a new experiment for the proton is planned~\cite{Ulmer:2013rra}. We propose to search for the EDM of short-lived charm baryons produced by interaction of the 7\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace \mbox{LHC}\xspace proton beam on a fixed target and channeled in a bent crystal in front of the \mbox{LHCb}\xspace detector. A sizeable spin precession angle for the short-lived {\ensuremath{\Lz^+_\cquark}}\xspace and {\ensuremath{\PXi^+_\cquark}}\xspace baryons would be possible by exploiting the intense electromagnetic field between crystal atomic planes. The charm baryon decays can be reconstructed using the \mbox{LHCb}\xspace detector. From one-month dedicated runs, sensitivities at the level of $10^{-17}~e\ensuremath{\mathrm{ \,cm}}\xspace$ can be reached. This research would extend the physics program of the proposed experiment~\cite{Baryshevsky:2016cul,Burmistrov:2194564} for the measurement of charm baryon MDMs. 
\section{EDM experiment concept} \label{sec:method} The magnetic and electric dipole moments of a spin-1/2 particle are given (in Gaussian units) by $\bm{\mu} = g \mu_B {\mathbf s}/2$ and \mbox{$\bm{ \delta} = d \mu_B {\mathbf s}/2$}, respectively, where $\mathbf{s}$ is the spin-polarization vector\footnote{The spin-polarization vector is defined such that $\mathbf s = 2 \langle \mathbf S \rangle / \hbar$, where $\mathbf S$ is the spin operator. } and $\mu_B=e \hbar / (2 m c)$ is the particle magneton, with $m$ its mass. The $g$ and $d$ dimensionless factors are also referred to as the gyromagnetic and gyroelectric ratios. The interaction of magnetic and electric dipole moments with external electromagnetic fields causes the change of the particle spin direction. The experimental setup to measure this effect relies on three main elements: i) a source of polarized particles whose direction and polarization degree are known; ii) an intense electromagnetic field able to induce a sizable spin precession angle during the lifetime of the particle; iii) the detector to measure the final polarization vector by analysing the angular distribution of the particle decays. \subsection{{\ensuremath{\Lambda}}\xspace and {\ensuremath{\kern 0.1em\overline{\kern -0.1em\Lambda}}}\xspace case} \label{sec:method_lambda} A large number of {\ensuremath{\Lambda}}\xspace baryons is produced directly from the \mbox{LHC}\xspace\ {\ensuremath{\Pp}}\xspace\proton collisions via strong interactions. The initial polarization direction is perpendicular to the production plane, defined by the proton beam and {\ensuremath{\Lambda}}\xspace momentum directions, due to parity conservation. The level of polarization increases with the transverse momentum with respect to the beam direction. Thus, a significant initial polarization could be achieved by selecting events within specific kinematic regions~\cite{Heller:1978ty}. 
In contrast, weak decays of heavy baryons (charm and beauty), mostly produced in the forward/backward directions at \mbox{LHC}\xspace, can induce large longitudinal polarization due to parity violation. For example, the decay of unpolarized {\ensuremath{\Lz^+_\cquark}}\xspace baryons to the ${\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace$ final state~\cite{Link:2005ft} produces {\ensuremath{\Lambda}}\xspace baryons with longitudinal polarization $\approx -90\%$, since the decay asymmetry parameter is $\alpha_{{\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace} = -0.91 \pm 0.15$~\cite{Olive:2016xmw}. Another example is the ${\ensuremath{\Lz^0_\bquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace$ decay where {\ensuremath{\Lambda}}\xspace baryons are produced almost 100\% longitudinally polarized~\cite{Aaij:2013oxa,Aad:2014iba}. The spin-polarization vector $\mathbf s$ of an ensemble of {\ensuremath{\Lambda}}\xspace baryons can be analysed through the angular distribution of the ${\ensuremath{\Lambda}}\xspace\ensuremath{\rightarrow}\xspace {\ensuremath{p}}\xspace {\ensuremath{\pion^-}}\xspace$ decay~\cite{Lee:1957qs,Richman1984}, \begin{equation} \label{eq:AngDist} \frac{dN}{d\Omega'} \propto 1 + \alpha \mathbf s \cdot \hat{\mathbf k} ~, \end{equation} where $\alpha = 0.642 \pm 0.013$~\cite{Olive:2016xmw} is the decay asymmetry parameter. {\ensuremath{C\!P}}\xspace invariance in the {\ensuremath{\Lambda}}\xspace decay implies $\alpha = -\overline{\alpha}$, where $\overline\alpha$ is the decay parameter of the charge-conjugate decay. 
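Since Eq.~(\ref{eq:AngDist}) implies $\langle \hat{k}_i \rangle = \alpha s_i/3$, each polarization component can be estimated from the mean of the corresponding direction cosine of the proton. The following toy Monte Carlo sketches this for the longitudinal component; the value chosen for $s_z$ is purely illustrative.

```python
import random

# Toy polarimetry: sample cos(theta') from dN ~ 1 + alpha * s_z * cos(theta')
# and recover s_z via <cos(theta')> = alpha * s_z / 3 (method of moments).
random.seed(12345)

alpha = 0.642      # Lambda -> p pi- decay asymmetry parameter
s_z   = 0.9        # assumed longitudinal polarization (illustrative)
N     = 200_000

samples = []
while len(samples) < N:
    c = random.uniform(-1.0, 1.0)
    # accept-reject against the angular density, normalized to its maximum
    if random.uniform(0.0, 1.0) < (1.0 + alpha * s_z * c) / (1.0 + abs(alpha * s_z)):
        samples.append(c)

s_z_est = 3.0 * (sum(samples) / N) / alpha
print(f"estimated s_z = {s_z_est:.3f}")   # close to the input 0.9
```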
The unit vector $\hat {\mathbf k} = (\sin\ensuremath{\theta' \xspace}\cos\ensuremath{\phi' \xspace}, \sin\ensuremath{\theta' \xspace}\sin\ensuremath{\phi' \xspace}, \cos\ensuremath{\theta' \xspace})$ indicates the momentum direction of the proton in the {\ensuremath{\Lambda}}\xspace helicity frame, with $\Omega' = (\ensuremath{\theta' \xspace},\ensuremath{\phi' \xspace})$ the corresponding solid angle, as illustrated in (Left) Fig.~\ref{fig:frames}. We can consider the {\ensuremath{\Lambda}}\xspace momentum either in the heavy hadron helicity frame, \ensuremath{S_{\PH}}\xspace, shown in (Center) Fig.~\ref{fig:frames}, or in the laboratory frame, \ensuremath{S_{\mathrm L}}\xspace, defined in (Right) Fig.~\ref{fig:frames}. This offers two possible options for the {\ensuremath{\Lambda}}\xspace helicity frame, as seen from the \ensuremath{S_{\PH}}\xspace or the \ensuremath{S_{\mathrm L}}\xspace frames and referred to as \ensuremath{S_{\Lz}}\xspace or \ensuremath{S_{\Lz\mathrm L}}\xspace, respectively, the latter sketched in (Left) Fig.~\ref{fig:frames}. \begin{figure}[htb] \centering { \includegraphics[width=0.31\linewidth]{./frame_LambdaL.pdf} } { \includegraphics[width=0.31\linewidth]{./frame_H.pdf} } { \includegraphics[width=0.31\linewidth]{./frame_L.pdf} } \caption{(Left) {\ensuremath{\Lambda}}\xspace helicity frame (\ensuremath{S_{\Lz\mathrm L}}\xspace), (Center) heavy baryon (\ensuremath{S_{\PH}}\xspace), and (Right) laboratory frame (\ensuremath{S_{\mathrm L}}\xspace). The {\ensuremath{\Lambda}}\xspace and proton angles, $(\ensuremath{\theta' \xspace},\ensuremath{\phi' \xspace})$ and $(\theta,\phi)$ are defined in the \ensuremath{S_{\Lz\mathrm L}}\xspace and the \ensuremath{S_{\PH}}\xspace frames, respectively. 
The $z$ axis in \ensuremath{S_{\Lz\mathrm L}}\xspace is defined by the {\ensuremath{\Lambda}}\xspace momentum in \ensuremath{S_{\mathrm L}}\xspace, and the $x$ axis is along the normal to the {\ensuremath{\Lambda}}\xspace production plane, defined by the {\ensuremath{\Lambda}}\xspace and \ensuremath{H}\xspace momenta in \ensuremath{S_{\mathrm L}}\xspace frame. The $z$ axis in \ensuremath{S_{\PH}}\xspace is given by the heavy hadron momentum in \ensuremath{S_{\mathrm L}}\xspace, and the $x$ axis is parallel to the normal to its production plane. The proton beam momentum is taken along the $z$ axis and the vertical direction by the $y$ axis in the \ensuremath{S_{\mathrm L}}\xspace frame. } \label{fig:frames} \end{figure} The dynamics of the spin vector in presence of external electromagnetic fields is given by the T-BMT equation~\cite{Thomas:1926dy,Thomas:1927yu,Bargmann:1959gz} (see Appendix~\ref{app:A}). For a neutral particle in a magnetic field $\mathbf B$ in the laboratory with negligible field gradient effects, the general solution as a function of the {\ensuremath{\Lambda}}\xspace flight length $l$ is described in Sec.~\ref{app:lambda}. For the particular case of {\ensuremath{\Lambda}}\xspace and \ensuremath{H}\xspace baryons flying along the $z$ axis in \ensuremath{S_{\mathrm L}}\xspace frame, an initial longitudinal polarization $s_0$, \mbox{\itshape i.e.}\xspace $\mathbf s_0=(0,0,s_0)$, and $\mathbf B = (0,B_y,0)$, the solution is \begin{equation} \label{eq:sSimpleCase} \mathbf s ~=~ \left\lbrace \begin{array}{l} s_x = - s_{0} \sin\Phi \\ s_y = - s_{0} \dfrac{d \beta }{g} \sin\Phi \\ s_z = s_{0} \cos\Phi \\ \end{array} \right. \text{,~~~where~} {\Phi = \frac{D_y\mu_B}{\beta \hbar c} \sqrt{d^2 \beta^2 + g^2} ~~ \approx ~~ \frac{g D_y \mu_B}{\beta \hbar c} }~, \end{equation} with $D_y\equiv D_y(l) = \int_0^l B_y dl'$ the integrated magnetic field along the {\ensuremath{\Lambda}}\xspace flight path. 
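The size of the precession angle can be cross-checked numerically: writing $\Phi \approx g D_y \mu_B/(\beta\hbar c)$ in SI units as $\Phi = g e D_y/(2 m_{{\ensuremath{\Lambda}}\xspace} c \beta)$ and taking the measured moment $\mu_{{\ensuremath{\Lambda}}\xspace} = -0.613\,\mu_N$, so that $|g| = 2|\mu_{{\ensuremath{\Lambda}}\xspace}|/\mu_N \times m_{{\ensuremath{\Lambda}}\xspace}/m_p \approx 1.46$, an integrated field of $4~\mathrm{T\,m}$ gives a precession angle close to $\pi/4$. A sketch, assuming $\beta \approx 1$:

```python
import math

# Lambda spin-precession angle in an integrated dipole field D_y = 4 T m:
# Phi = |g| * e * D_y / (2 * m_Lambda * c * beta), the SI form of g*D_y*mu_B/(beta*hbar*c).
E_CH = 1.602176634e-19      # elementary charge [C]
C    = 2.99792458e8         # speed of light [m/s]
KG_PER_EV = 1.78266192e-36  # mass conversion, eV/c^2 -> kg

m_L_ev = 1.115683e9                  # Lambda mass [eV/c^2]
m_L    = m_L_ev * KG_PER_EV          # Lambda mass [kg]
m_p_ev = 0.93827208e9                # proton mass [eV/c^2]
mu_L   = -0.613                      # Lambda magnetic moment in nuclear magnetons

g    = 2.0 * mu_L * (m_L_ev / m_p_ev)   # gyromagnetic factor in Lambda magnetons
D_y  = 4.0                              # integrated field [T m]
beta = 1.0                              # ultra-relativistic approximation

Phi = abs(g) * E_CH * D_y / (2.0 * m_L * C * beta)
print(f"Phi_max ~ {Phi:.3f} rad (pi/4 = {math.pi/4:.3f})")
```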
The polarization vector precesses in the $xz$ plane, normal to the magnetic field, with the precession angle $\Phi$ proportional to the gyromagnetic factor of the particle. The presence of an EDM introduces a non-zero $s_y$ component perpendicular to the precession plane of the MDM, otherwise not present. At LHCb, with a tracking dipole magnet providing an integrated field $D_y \approx \pm 4~\mathrm{T m}$~\cite{LHCb-DP-2014-002}, the maximum precession angle for particles traversing the entire magnetic field region is $\Phi_{\rm max} \approx \pm \pi/4$, which allows about 70\% of the maximum $s_y$ component to be reached. Moreover, a test of {\ensuremath{C\!PT}}\xspace symmetry can be performed by comparing the $g$ and $-\bar g$ factors for {\ensuremath{\Lambda}}\xspace and {\ensuremath{\kern 0.1em\overline{\kern -0.1em\Lambda}}}\xspace baryons, respectively, which precess in opposite directions as $g$ and $d$ change sign from particle to antiparticle. Contrary to past fixed-target EDM experiments, where the momentum direction in the laboratory frame was fixed and perpendicular to the magnetic field~\cite{Pondrom:1981gu,Schachinger:1978qs}, in this case the {\ensuremath{\Lambda}}\xspace momentum direction varies, since the particles are produced in heavy baryon decays. As a consequence, the polarization vector is not fixed to be perpendicular to the magnetic field and the signature of the EDM becomes the variation of the $s_y$ component of the polarization vector before and after the magnetic field. To avoid the dilution introduced by the rotation of the {\ensuremath{\Lambda}}\xspace production plane, the change of the polarization has to be determined separately for ensembles of {\ensuremath{\Lambda}}\xspace baryons with similar initial polarization, selected according to the kinematics of the decay. 
In particular, the projection of the {\ensuremath{\Lambda}}\xspace trajectory in the $xy$ plane in \ensuremath{S_{\mathrm L}}\xspace at the $z$ position of the \ensuremath{H}\xspace production vertex can be used to select events with similar polarization, as discussed in Sec.~\ref{app:lambda_rotations}. \subsection{{\ensuremath{\Lz^+_\cquark}}\xspace and {\ensuremath{\PXi^+_\cquark}}\xspace case} \label{sec:method_lambdac} The {\ensuremath{\Lz^+_\cquark}}\xspace and the {\ensuremath{\PXi^+_\cquark}}\xspace baryon EDM can be extracted by measuring the precession of the polarization vector of channeled particles in a bent crystal. There, a positively-charged particle channeled between atomic planes moves along a curved path under the action of the intense electric field between crystal planes. In the instantaneous rest frame of the particle the electromagnetic field causes the spin rotation. The signature of the EDM is a polarization component perpendicular to the initial baryon momentum and polarization vector, otherwise not present, similarly to the case of the {\ensuremath{\Lambda}}\xspace baryon. The phenomenon of spin precession of positively-charged particles channeled in a bent crystal was first observed by the E761 collaboration, which measured the MDM of the strange \PSigmap baryon~\cite{Chen:1992wx}. The possibility to measure the MDM of short-lived charm baryons using channeling in bent crystals, in the momentum range of hundreds of \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace, is discussed in Refs.~\cite{Baublis:1994ku,Samsonov:1996ah}. The feasibility of the measurement at \mbox{LHC}\xspace\ energies is studied in Ref.~\cite{Baryshevsky:2016cul} and offers clear advantages with respect to lower beam energies since the estimated number of produced charm baryons that are channeled in the crystal is proportional to $\gamma^{3/2}$, where $\gamma$ is the Lorentz factor of the particles. 
Charm baryons produced by interaction of protons on a fixed target, \mbox{\itshape e.g.}\xspace a tungsten target, are polarized perpendicularly to the production plane due to parity conservation in strong interactions~\cite{Jacob:1959at}. The production plane $xz$, shown in (Left) Fig.~\ref{fig:Lc_ProdBending}, is determined by the proton and the charm baryon momenta; the latter defines the $z$ axis. The initial polarization vector $\mathbf s_0 =(0,s_0,0)$ is perpendicular to the production plane, along the $y$ axis. To induce spin rotation, the crystal is bent in the $yz$ plane. The intense electric field $\mathbf E$ between the crystal planes, which deflects positively-charged particles, transforms into a strong electromagnetic field $\mathbf{E^*}\approx\gamma \mathbf{E}$, $\mathbf{B^*}\approx-\gamma \bm\beta\times\mathbf{E} /c$ in the particle rest frame and induces the spin precession, as described in detail in Refs.~\cite{Baryshevsky:2015zba,Kim:1982ry} and illustrated in (Right) Fig.~\ref{fig:Lc_ProdBending}. The crystal bending angle is defined as $\theta_C=L/\rho_0$, where $L$ is the circular arc of the crystal and $\rho_0$ the curvature radius. The precession angle $\Phi$ is defined as the angle between the polarization vector and the $y$ axis, as shown in (Right) Fig.~\ref{fig:Lc_ProdBending}. In the limit of large boost with Lorentz factor $\gamma \gg 1$, the precession angle in the $yz$ plane induced by the MDM is~\cite{Lyuboshits:1979qw} \begin{equation} \Phi \approx \frac{g-2}{2}\gamma \theta_C, \label{eq:MDM_angle} \end{equation} where $g$ is the gyromagnetic factor. \begin{figure}[H] \centering { \includegraphics[width=0.35\linewidth]{./Lc_prod.pdf} } \hspace{1cm} { \includegraphics[width=0.55\linewidth]{./CrystalPlane.pdf} } \caption{(Left) Production plane of the {\ensuremath{\Lz^+_\cquark}}\xspace baryon defined by the proton and the {\ensuremath{\Lz^+_\cquark}}\xspace momenta. 
The initial polarization vector $\mathbf s_0$ is perpendicular to the production plane, along the $y$ axis, due to parity conservation in strong interactions. (Right) Deflection of the baryon trajectory and spin precession in the $yz$ and $xy$ plane induced by the MDM and the EDM, respectively. The red (dashed) arrows indicate the (magnified) $s_x$ spin component proportional to the particle EDM. $\Phi$ is the MDM precession angle and $\theta_C$ is the crystal bending angle.} \label{fig:Lc_ProdBending} \end{figure} In presence of a non-zero EDM, the spin precession is no longer confined to the $yz$ plane, generating an $s_x$ component proportional to the particle EDM, represented by the red (dashed) arrows in (Right) Fig.~\ref{fig:Lc_ProdBending}. The integration of the equation of motion in presence of EDM is described in Appendix~\ref{app:A}, as well as the approximations used to solve the equations analytically. The polarization vector after channeling through the crystal is \begin{equation} \mathbf s ~=~ \left\lbrace \begin{array}{l} s_{x} \approx s_0 \dfrac{d}{g-2} (\cos{\Phi}-1) \\ s_{y} \approx s_{0} \cos\Phi \\ s_{z} \approx s_{0} \sin\Phi \end{array} \right. , \label{eq:EDM_LcPol} \end{equation} where $\Phi$ is given by Eq.~(\ref{eq:MDM_angle}). The polarization can be determined, as in the case of the {\ensuremath{\Lambda}}\xspace EDM described in Sec.~\ref{sec:method_lambda}, by studying the angular distribution of the final state particles. The angular distribution of non-channeled particles allows the initial polarization along the $y$ axis to be determined; comparing it with the final polarization allows the gyromagnetic and gyroelectric factors to be extracted. The same method applies to both {\ensuremath{\Lz^+_\cquark}}\xspace and {\ensuremath{\PXi^+_\cquark}}\xspace baryons. 
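For illustration, Eq.~(\ref{eq:EDM_LcPol}) can be evaluated for a set of placeholder parameters; none of the values below are measured inputs, and $g$, $d$ and $\theta_C$ in particular are purely illustrative.

```python
import math

# Polarization after channeling, with Phi = (g-2)/2 * gamma * theta_C.
s0      = 0.6          # initial transverse polarization (placeholder)
g       = 1.8          # gyromagnetic factor (placeholder)
d       = 1e-3         # gyroelectric factor (placeholder, d << |g-2|)
gamma   = 437.0        # Lorentz factor at 1 TeV
theta_C = 14e-3        # crystal bending angle [rad] (placeholder)

Phi = 0.5 * (g - 2.0) * gamma * theta_C
s_x = s0 * d / (g - 2.0) * (math.cos(Phi) - 1.0)   # EDM-induced component
s_y = s0 * math.cos(Phi)
s_z = s0 * math.sin(Phi)
print(f"Phi = {Phi:.3f} rad, s = ({s_x:.2e}, {s_y:.3f}, {s_z:.3f})")
```

As expected from the equations, the $yz$ components preserve the initial magnitude while the EDM-induced $s_x$ component stays suppressed by $d/(g-2)$.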
For {\ensuremath{\Lz^+_\cquark}}\xspace decaying to two-body final states such as $p{\ensuremath{\kaon^{*0}}}\xspace$, ${\ensuremath{\PDelta}}\xspace^{++}{\ensuremath{\kaon^-}}\xspace$, ${\ensuremath{\Lambda}}\xspace(1520){\ensuremath{\pion^+}}\xspace$ and ${\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace$, the angular distribution is described by Eq.~(\ref{eq:AngDist}), where $\alpha$ is a parity-violating coefficient depending on the final state, $\hat{\mathbf k}$ the direction of the final-state baryon in the {\ensuremath{\Lz^+_\cquark}}\xspace helicity frame, and ${\bf s}$ the {\ensuremath{\Lz^+_\cquark}}\xspace polarization vector. In the case of the ${\ensuremath{\Lz^+_\cquark}}\xspace\rightarrow {\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace$ decay, the $\alpha$ parameter is measured to be $\alpha_{{\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace} = -0.91\pm0.15$~\cite{Olive:2016xmw}. For other {\ensuremath{\Lz^+_\cquark}}\xspace decays no measurements are available, but an effective $\alpha$ parameter can be calculated from a Dalitz plot analysis of ${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Pp}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace$ decays~\cite{Aitala:1999uq}, as discussed in Appendix~\ref{app:B} and summarized in Table~\ref{tab:alphas}. Eventually, a Dalitz plot analysis would provide the ultimate sensitivity to the EDM measurement. The initial polarization $s_0$ of {\ensuremath{\Lz^+_\cquark}}\xspace particles produced from the interaction of 7\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace protons on a fixed target has not been measured.
However, a measurement of {\ensuremath{\Lz^+_\cquark}}\xspace polarization from 40-70\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace neutrons on a carbon target gives $s_0=0.5\pm0.2$~\cite{Szwed:1981rr}, and a measurement from the interaction of 230\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace~{\ensuremath{\pion^-}}\xspace on a copper target yields $s_0=-0.65^{+0.22}_{-0.18}$~\cite{Jezabek:1992ke}. \section{Sensitivity studies} \label{sec:sensitivity} \subsection{{\ensuremath{\Lambda}}\xspace and {\ensuremath{\kern 0.1em\overline{\kern -0.1em\Lambda}}}\xspace case} \label{sec:sensivity_lambda} To identify the most copious {\ensuremath{\Lambda}}\xspace production channels from heavy baryons, we consider decays containing only charged particles in the final state, with at least one originating from the heavy baryon decay vertex. No other long-lived particles besides the {\ensuremath{\Lambda}}\xspace baryon, except an intermediate \ensuremath{\PXi^-}\xspace baryon decaying into the ${\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^-}}\xspace$ final state, are considered. These conditions are required to reconstruct the production and the decay vertex of the {\ensuremath{\Lambda}}\xspace particle and eventually exploit this information in the event reconstruction.
The number of {\ensuremath{\Lambda}}\xspace particles produced can be estimated as \begin{equation} N_{\ensuremath{\Lambda}}\xspace=2 \mathcal{L} \sigma_{\ensuremath{\quark\quarkbar}}\xspace f({\ensuremath{\mathrm{q}}}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{H}\xspace)\ensuremath{\mathcal{B}} (\ensuremath{H}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\Lambda}}\xspace X')\ensuremath{\mathcal{B}} ({\ensuremath{\Lambda}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{p}}\xspace{\ensuremath{\pion^-}}\xspace)\ensuremath{\mathcal{B}} (X'\ensuremath{\rightarrow}\xspace\mathrm{charged}) , \end{equation} where $\mathcal{L}$ is the total integrated luminosity, $\sigma_{\ensuremath{\quark\quarkbar}}\xspace$ (${\ensuremath{\mathrm{q}}}\xspace={\ensuremath{\mathrm{c}}}\xspace,{\ensuremath{\mathrm{b}}}\xspace$) are the heavy quark production cross sections from {\ensuremath{p}}\xspace\pr collisions at $\sqrt{s}=13$\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace~\cite{Aaij:2015bpa,FONLLWEB,Aaij:2010gn,Aaij:2015rla}, and $f$ is the fragmentation fraction into the heavy baryon \ensuremath{H}\xspace~\cite{Lisovyi:2015uqa,Gladilin:2014tba,Amhis:2014hma,Galanti:2015pqa}. All branching fractions \ensuremath{\mathcal{B}} are taken from Ref.~\cite{Olive:2016xmw}, and where they are given relative to other decays, all the known decay modes are assumed to sum to the total width. In Table~\ref{tab:LambdaChannels} the dominant production channels and the estimated yields are summarised. Overall, there are about $1.5\times 10^{11}$ {\ensuremath{\Lambda}}\xspace baryons per \ensuremath{\mbox{\,fb}^{-1}}\xspace produced directly from heavy baryon decays (referred to hereafter as short-lived, or SL, events), and $3.8\times 10^{11}$ from charm baryons decaying through an intermediate \ensuremath{\PXi^-}\xspace particle (long-lived, or LL, events).
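The yield formula above can be encoded directly; the sketch below is an illustration added here (not from the original text) and uses placeholder values for the cross section, fragmentation fraction and branching fractions, so only the structure and the luminosity scaling, not the numbers, are meaningful.

```python
def n_lambda(lumi_fb, sigma_qq_ub, frag, br_h_to_lambda, br_lambda_ppi, br_x_charged):
    """Number of Lambda baryons produced, following the yield formula above.

    lumi_fb is in fb^-1 and sigma_qq_ub in microbarn; since 1 fb^-1 x 1 ub
    corresponds to 1e9 interactions, the conversion factor is 1e9.
    """
    return (2.0 * lumi_fb * sigma_qq_ub * 1e9
            * frag * br_h_to_lambda * br_lambda_ppi * br_x_charged)

# Placeholder inputs (hypothetical, for illustration only).
n = n_lambda(1.0, 2000.0, 0.05, 0.02, 0.64, 1.0)   # ~2.6e9 Lambda per fb^-1
```

The factor of 2 accounts for both quark and antiquark hadronization, as in the formula above; the yield is linear in the integrated luminosity.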
The yield of {\ensuremath{\Lambda}}\xspace baryons experimentally available can then be evaluated as $N_{\ensuremath{\Lambda}}\xspace^{\rm reco} = \epsilon_{\rm geo} \epsilon_{\rm trigger} \epsilon_{\rm reco} N_{\ensuremath{\Lambda}}\xspace$, where $\epsilon_{\rm geo}$, $\epsilon_{\rm trigger}$ and $\epsilon_{\rm reco}$ are the geometric, trigger and reconstruction efficiencies of the detector system. \begin{table}[htb] \centering \caption{Dominant {\ensuremath{\Lambda}}\xspace production mechanisms from heavy baryon decays and estimated yields produced per \ensuremath{\mbox{\,fb}^{-1}}\xspace at $\sqrt{s}=13$\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace, shown separately for SL and LL topologies. The {\ensuremath{\Lambda}}\xspace baryons from \ensuremath{\PXi^-}\xspace decays, produced promptly in the {\ensuremath{p}}\xspace\pr collisions, are given in terms of the unmeasured production cross section. } \label{tab:LambdaChannels} \renewcommand{\arraystretch}{1.1} \begin{tabular}{lc lc} \toprule SL events & $N_{{\ensuremath{\Lambda}}\xspace}/\ensuremath{\mbox{\,fb}^{-1}}\xspace~(\times 10^{10})$ & LL events, $\ensuremath{\PXi^-}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^-}}\xspace$ & $N_{{\ensuremath{\Lambda}}\xspace}/\ensuremath{\mbox{\,fb}^{-1}}\xspace~(\times 10^{10})$ \\ \midrule ${\ensuremath{\PXi^0_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace$ & 7.7 & ${\ensuremath{\PXi^0_\cquark}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\PXi^-}\xspace{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$ & 23.6 \\ ${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$ & 3.3 & ${\ensuremath{\PXi^0_\cquark}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\PXi^-}\xspace{\ensuremath{\pion^+}}\xspace$ & 7.1 \\
${\ensuremath{\PXi^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace\pip$ & 2.0 & ${\ensuremath{\PXi^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\PXi^-}\xspace{\ensuremath{\pion^+}}\xspace\pip$ & 6.1 \\ ${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace$ & 1.3 & ${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\PXi^-}\xspace{\ensuremath{\kaon^+}}\xspace{\ensuremath{\pion^+}}\xspace$ & 0.6 \\ ${\ensuremath{\PXi^0_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\kaon^+}}\xspace{\ensuremath{\kaon^-}}\xspace$ (no $\phi$) & 0.2 & ${\ensuremath{\PXi^0_\cquark}}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{\PXi^-}\xspace{\ensuremath{\kaon^+}}\xspace$ & 0.2 \\ ${\ensuremath{\PXi^0_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace\phi({\ensuremath{\kaon^+}}\xspace{\ensuremath{\kaon^-}}\xspace)$ & 0.1 & Prompt $\ensuremath{\PXi^-}\xspace$ & $0.13\times\sigma_{{\ensuremath{p}}\xspace\pr\ensuremath{\rightarrow}\xspace\ensuremath{\PXi^-}\xspace}~[\mu \rm b]$ \\ \bottomrule \end{tabular} \end{table} The geometric efficiency for the SL topology has been estimated using a Monte Carlo simulation of ${\ensuremath{p}}\xspace\pr$ collisions at $\sqrt{s}=13$\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace and the decay of heavy hadrons, using the Pythia~\cite{Sjostrand:2006za} and EvtGen~\cite{Lange:2001uf} standalone toolkits, together with a simplified geometrical model of the LHCb detector~\cite{LHCb-DP-2014-002}. Tracking devices upstream of the dipole magnet (VErtex LOcator and Tracker Turicensis) and downstream of the magnet (T stations) are modelled as having a rectangular shape.
The height and width of the tracking layers at each position along the beam axis are determined by the detector angular acceptance, between 10 \ensuremath{\mathrm{ \,mrad}}\xspace and 250 \ensuremath{\mathrm{ \,mrad}}\xspace (300 \ensuremath{\mathrm{ \,mrad}}\xspace) in the vertical (horizontal) direction, as illustrated in (Left) Fig.~\ref{fig:detectorDiagram}. Particle trajectories are approximated by straight lines defined by the momentum directions. \begin{figure}[htb] \centering { \includegraphics[width=0.48\linewidth]{./DetectorDiagram.pdf} } { \includegraphics[width=0.48\linewidth]{./third_mig.pdf} } \caption{(Left) Sketch of the simplified geometry of the LHCb tracking system in the $yz$ plane. The crosswise lines represent the angular acceptance. The tracking layers and the limits of the R$_1$ and R$_2$ regions are shown as solid and dotted thick lines, respectively. The magnet is divided into three regions by thin dotted lines. A simulated ${\ensuremath{\Lz^+_\cquark}}\xspace \ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace (p {\ensuremath{\pion^-}}\xspace){\ensuremath{\pion^+}}\xspace$ decay with corresponding ${\ensuremath{\pion^+}}\xspace$ (green), ${\ensuremath{\pion^-}}\xspace$ (blue) and $p$ (red) tracks is overlaid. (Right) Decay products from {\ensuremath{\Lambda}}\xspace baryons decaying in the last region of the magnet, M$_3$. } \label{fig:detectorDiagram} \end{figure} Table~\ref{tab:LambdaGeoEfficiencies} summarizes the geometric efficiencies for {\ensuremath{\Lambda}}\xspace baryons decaying in different regions of the detector volume, for three different SL topologies. Region R$_1$ is defined such that the $z$ position of the {\ensuremath{\Lambda}}\xspace decay vertex is in the range [0-40]\ensuremath{\mathrm{ \,cm}}\xspace from the collision point and the decay products are within the detector acceptance. Events in the R$_2$ region have a {\ensuremath{\Lambda}}\xspace decay $z$ position in the range [40-800]\ensuremath{\mathrm{ \,cm}}\xspace.
Charged particles produced together with the {\ensuremath{\Lambda}}\xspace baryon are required to be within the VELO and T1-T3, or the VELO and TT acceptances, to ensure a precise reconstruction of the {\ensuremath{\Lambda}}\xspace origin vertex. Events in the R$_1$ region provide the measurement of the initial {\ensuremath{\Lambda}}\xspace polarization vector; events in the R$_2$ region allow one to determine the polarization as a function of the {\ensuremath{\Lambda}}\xspace decay length in the magnetic field region. Among the latter, {\ensuremath{\Lambda}}\xspace baryons decaying towards the end of the magnet (M$_3$ region in Table~\ref{tab:LambdaGeoEfficiencies}) provide most of the sensitivity to the EDM and MDM. These events are sketched in (Right) Fig.~\ref{fig:detectorDiagram}. The total geometric efficiency for the R$_1$ and R$_2$ regions is about 16\%, with small differences among SL topologies, and about $2.4\times 10^{10}$ {\ensuremath{\Lambda}}\xspace baryons per \ensuremath{\mbox{\,fb}^{-1}}\xspace can be reconstructed.
\begin{table}[htb] \centering \caption{Geometric efficiencies (in \%) for {\ensuremath{\Lambda}}\xspace baryons decaying in different regions of the \mbox{LHCb}\xspace detector, for several charm baryon decays produced at $\sqrt{s}=13$\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace.} \label{tab:LambdaGeoEfficiencies} \renewcommand{\arraystretch}{1.1} \begin{tabular}{l ccccc} \toprule Region & R$_1$ & R$_2$ & M$_1$ & M$_2$ & M$_3$ \\ {\ensuremath{\Lambda}}\xspace decay vertex $z$ position (cm) & [0-40] & [40-800] & [280-450] & [450-610] &[610-780] \\ \midrule ${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$ & 4.7 & 10.5 & 1.3 & 0.7 & 0.3 \\ ${\ensuremath{\PXi^0_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace$ & 5.2 & 12.2 & 1.7 & 1.0 & 0.6 \\ ${\ensuremath{\PXi^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Lambda}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace\pip$ & 5.3 & 11.9 & 1.6 & 0.9 & 0.4 \\ \bottomrule \end{tabular} \end{table} To assess the EDM sensitivity, pseudo-experiments have been generated using a simplified detector geometry that includes an approximate \mbox{LHCb}\xspace magnetic field mapping~\cite{LHCb-DP-2014-002,Hicheur:2007jfk}. The angular distribution and spin dynamics have been simulated using Eq.~(\ref{eq:AngDist}) and the general solution as a function of the {\ensuremath{\Lambda}}\xspace flight length described in Sec.~\ref{app:lambda}, respectively. For this study an initial polarization vector $\mathbf s_0 = (0,0,s_0)$, with $s_0$ varying between 20\% and 100\%, and factors $g=-1.458$~\cite{Olive:2016xmw} and $d=0$, were used. Each generated sample was fitted using an unbinned maximum likelihood method with $d$, $g$ and $\mathbf s_0$ (or $\alpha\mathbf s_0$) as free parameters.
The $d$-factor uncertainty scales with the number of events $N_{\ensuremath{\Lambda}}\xspace^{\rm reco}$ and the initial longitudinal polarization $s_0$ as $\sigma_d \propto 1/(s_0 \sqrt{N_{\ensuremath{\Lambda}}\xspace^{\rm reco}} )$. The sensitivity saturates at large values of $s_0$, as shown in (Left) Fig.~\ref{fig:Lambda_sensitivity}, which partially relaxes the requirements on the initial polarization. Similarly, (Right) Fig.~\ref{fig:Lambda_sensitivity} shows the expected sensitivity on the EDM as a function of the integrated luminosity, summing together SL and LL events, assuming a global trigger and reconstruction efficiency $\epsilon_{\rm trigger} \epsilon_{\rm reco}$ of 1\% (improved \mbox{LHCb}\xspace software-based trigger and tracking for the upgrade detector~\cite{LHCb-TDR-016,LHCb-TDR-015}) or 0.2\% (current detector~\cite{LHCb-DP-2014-002}), where the efficiency estimates are based on an educated guess. An equivalent sensitivity is obtained for the gyromagnetic factor. Therefore, with 8~\ensuremath{\mbox{\,fb}^{-1}}\xspace a sensitivity $\sigma_d \approx 1.5\times 10^{-3}$ could be achieved (current detector), to be compared to the present limit, $1.7\times 10^{-2}$~\cite{Pondrom:1981gu}. With 50~\ensuremath{\mbox{\,fb}^{-1}}\xspace (upgraded detector) the sensitivity on the gyroelectric factor can reach $\approx 3\times 10^{-4}$. The reconstruction of long-lived {\ensuremath{\Lambda}}\xspace baryons decaying inside and after the magnet represents a challenge for the \mbox{LHCb}\xspace experiment, introducing significant backgrounds and a limited resolution on the measurement of the {\ensuremath{\Lambda}}\xspace momentum and decay point. Events can be reconstructed by exploiting the kinematics of exclusive decays and the determination of the production and the decay vertex of the {\ensuremath{\Lambda}}\xspace. According to simulation studies, even with relatively poor resolutions the EDM and MDM measurements do not degrade significantly.
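Since $\sigma_d\propto 1/\sqrt{N_{\ensuremath{\Lambda}}\xspace^{\rm reco}}$ and $N_{\ensuremath{\Lambda}}\xspace^{\rm reco}$ grows linearly with the product of integrated luminosity and efficiency, the two sensitivity figures quoted above can be related by a simple scaling. The short check below is an illustration added here, starting from the quoted $\sigma_d\approx1.5\times10^{-3}$ at 8\ensuremath{\mbox{\,fb}^{-1}}\xspace with 0.2\% efficiency.

```python
import math

def scale_sigma_d(sigma_ref, lumi_ref, eff_ref, lumi, eff):
    # sigma_d ~ 1/sqrt(N), with N proportional to luminosity x efficiency.
    return sigma_ref * math.sqrt((lumi_ref * eff_ref) / (lumi * eff))

# Reference point quoted in the text: 1.5e-3 at 8 fb^-1 and 0.2% efficiency.
sigma_upgrade = scale_sigma_d(1.5e-3, 8.0, 0.002, 50.0, 0.01)
# ~2.7e-4, consistent with the ~3e-4 quoted for 50 fb^-1 at 1% efficiency.
```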
\begin{figure}[htb] \centering { \includegraphics[width=0.48\linewidth]{./sigd_depSZ.pdf} } { \includegraphics[width=0.48\linewidth]{./sigd_reach.pdf} } \caption{(Left) Dependence of the $d$ uncertainty on the initial polarization for $N_{\ensuremath{\Lambda}}\xspace^{\rm reco}=10^6$ events, and (Right) as a function of the integrated luminosity, assuming reconstruction efficiencies of 0.2\% and 1\%.} \label{fig:Lambda_sensitivity} \end{figure} \subsection{{\ensuremath{\Lz^+_\cquark}}\xspace and {\ensuremath{\PXi^+_\cquark}}\xspace case} \label{sec:sensivity_lambdac} We propose to search for charm baryon EDMs in a dedicated fixed-target experiment at the LHC, to be installed in front of the \mbox{LHCb}\xspace detector, as close as possible to the VELO detector. The target should be attached to the crystal to maximize the yield of short-lived charm baryons to be channeled. The rate of {\ensuremath{\Lz^+_\cquark}}\xspace baryons produced with 7\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace protons on a fixed target can be estimated as \begin{equation} \frac{dN_{\ensuremath{\Lz^+_\cquark}}\xspace}{dt} = \frac{F}{A}\sigma({\ensuremath{\Pp}}\xspace\proton\rightarrow{\ensuremath{\Lz^+_\cquark}}\xspace X) N_T , \end{equation} where $F$ is the proton rate, $A$ the beam transverse area, $N_T$ the number of target nucleons, and $\sigma({\ensuremath{\Pp}}\xspace\proton\rightarrow{\ensuremath{\Lz^+_\cquark}}\xspace X)$ is the cross-section for {\ensuremath{\Lz^+_\cquark}}\xspace production in {\ensuremath{\Pp}}\xspace\proton interactions at a center-of-mass energy $\sqrt{s}=114.6\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$. The number of target nucleons is $N_T=N_A\rho A T A_N/A_T$, where $N_A$ is the Avogadro number, $\rho$ ($T$) is the target density (thickness), and $A_T$ ($A_N$) is the atomic mass (atomic mass number).
The rate of {\ensuremath{\Lz^+_\cquark}}\xspace particles channeled in the bent crystal and reconstructed in the \mbox{LHCb}\xspace detector is estimated as \begin{equation} \label{eq:NLcReco} \frac{dN_{\ensuremath{\Lz^+_\cquark}}\xspace^{\rm reco}}{dt} = \frac{dN_{\ensuremath{\Lz^+_\cquark}}\xspace}{dt} \ensuremath{\mathcal{B}} ({\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace f)\ensuremath{\varepsilon_{\rm CH}}\xspace\ensuremath{\varepsilon_{\rm DF}}({\ensuremath{\Lz^+_\cquark}}\xspace)\ensuremath{\varepsilon_{\rm det}} \end{equation} where each quantity and the corresponding estimated value is defined in Table~\ref{tab:quantities}. \begin{table}[!h] \caption{Definitions and estimated values of the relevant quantities for charm baryon EDM and MDM sensitivity studies, for a tungsten (W) target.\label{tab:quantities}} \makebox[\textwidth] {\begin{tabular}{lccc} \toprule Definition & Quantity & Value & Unit\\ \midrule Proton flux on target & $F$ & $5\times 10^{8}$ & proton/\ensuremath{\mathrm{{\,s}}}\xspace\\ \midrule Avogadro number & $N_A$ & $6.022\times 10^{23}$ & atoms/mol\\ Target density (W) & $\rho$ & $19.25$ & g/$\ensuremath{\mathrm{ \,cm}}\xspace^3$\\ Target thickness & $T$ & 0.5 & \ensuremath{\mathrm{ \,cm}}\xspace\\ Atomic mass (W) & $A_T$ & 183.84 & g/mol\\ Atomic mass number (W) & $A_N$ & 183.84 &\\ \midrule {\ensuremath{\Pp}}\xspace\proton cross-section to {\ensuremath{\Lz^+_\cquark}}\xspace & $\sigma({\ensuremath{\Pp}}\xspace\proton\ensuremath{\rightarrow}\xspace{\ensuremath{\Lz^+_\cquark}}\xspace X)$ & 18.2 & \ensuremath{{\mathrm{ \,\upmu b}}}\xspace \\ Branching fraction~\cite{Olive:2016xmw} & \ensuremath{\mathcal{B}} (\ensuremath{\Lc\to \Deltares^{++}\Km}\xspace) & $1.09\%$ & \\ & \ensuremath{\mathcal{B}} (\ensuremath{\Lc\to\Lz(p\pim)\pip}\xspace) & $0.83\%$ & \\ {\ensuremath{\Lz^+_\cquark}}\xspace boost & $\gamma$ & $10^{3}$\\ \midrule Crystal length & $L$ & 10 & \ensuremath{\mathrm{ \,cm}}\xspace\\ Crystal radius & $\rho_0$ & 10 
& \ensuremath{\mathrm{ \,m}}\xspace\\ \midrule Channeling efficiency & \ensuremath{\varepsilon_{\rm CH}}\xspace & $10^{-3}$\\ Decay flight efficiency & \ensuremath{\varepsilon_{\rm DF}}({\ensuremath{\Lz^+_\cquark}}\xspace) & $19\%$\\ & \ensuremath{\varepsilon_{\rm DF}}({\ensuremath{\PXi^+_\cquark}}\xspace) & $47\%$\\ Detector efficiency & \ensuremath{\varepsilon_{\rm det}}(\ensuremath{\Lc\to pK^-\pi^+}\xspace) & $5.4\%$\\ & \ensuremath{\varepsilon_{\rm det}}(\ensuremath{\Lc\to\Lz(p\pim)\pip}\xspace) & $10^{-3}$\\ \midrule {\ensuremath{\Lz^+_\cquark}}\xspace polarization & $s_0$ & 0.6\\ $\alpha$ parameter & $\alpha_{{\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace}$ & $-0.91$\\ & $\alpha_{{\ensuremath{\PDelta}}\xspace^{++}{\ensuremath{\kaon^-}}\xspace}$ & $-0.67$\\ MDM anomaly & $(g-2)/2$ & 0.3\\ \bottomrule \end{tabular}} \end{table} A 6.5\ensuremath{\mathrm{\,Te\kern -0.1em V}}\xspace proton beam was extracted from the LHC beam halo by channeling protons in bent crystals~\cite{Scandale:2016krl}. A beam with an intensity of $5\times 10^8~\text{proton/\ensuremath{\mathrm{{\,s}}}\xspace}$, to be directed on a fixed target, is attainable with this technique~\cite{Lansberg:2012wj}. An alternative experimental setup to be considered is a target-crystal system positioned in the vacuum pipe of the LHC, where collisions with protons of the beam halo can be reached at comparable rates. Both solutions should be studied very accurately to be compliant with machine protection and safety requirements. Recent results from the UA9 collaboration~\cite{Scandale:2016krl}, on crystal collimation tests, demonstrated that a similar setup is technically viable and can be installed successfully in the LHC.
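As a cross-check added here for illustration, the channeled-and-reconstructed {\ensuremath{\Lz^+_\cquark}}\xspace rates of Eq.~(\ref{eq:NLcReco}) can be reproduced from the inputs of Table~\ref{tab:quantities}; note that the beam transverse area $A$ cancels between $F/A$ and $N_T$.

```python
# Inputs taken from Table (tab:quantities): tungsten target, 7 TeV proton beam.
N_A    = 6.022e23    # Avogadro number, atoms/mol
rho    = 19.25       # target density, g/cm^3
T      = 0.5         # target thickness, cm
A_T    = 183.84      # atomic mass, g/mol
A_N    = 183.84      # atomic mass number
F      = 5e8         # proton rate, protons/s
sigma  = 18.2e-30    # Lc production cross section, cm^2 (18.2 microbarn)

# Production rate: the transverse beam area cancels in (F/A) * N_T.
rate_prod = F * sigma * N_A * rho * T * A_N / A_T       # ~5.3e4 Lc/s

# Reconstructed rates, Eq. (eq:NLcReco), with the Table efficiencies.
rate_dk  = rate_prod * 0.0109 * 1e-3 * 0.19 * 0.054     # Lc -> Delta++ K-
rate_lpi = rate_prod * 0.0083 * 1e-3 * 0.19 * 1e-3      # Lc -> Lambda pi+
```

Multiplying by 3600 reproduces the $21.2~\ensuremath{{\mathrm{ \,h^{-1}}}}\xspace$ and $0.3~\ensuremath{{\mathrm{ \,h^{-1}}}}\xspace$ rates quoted later in the text.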
Fixed-target collision events can be recorded in short dedicated runs or in parallel to the {\ensuremath{\Pp}}\xspace\proton data taking, if the background caused by the insertion of a fixed target in the beam halo is negligible with respect to {\ensuremath{\Pp}}\xspace\proton collisions. Both solutions have to be studied in detail using ad-hoc simulations. The {\ensuremath{\Lz^+_\cquark}}\xspace cross section can be estimated from the total charm production cross section measured by the PHENIX experiment in proton-proton collisions at $\sqrt{s} = 200\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$~\cite{Adare:2006hc}, $\sigma_{c\overline{c}} = \left(567 \pm 57_{\rm stat.} \pm 193_{\rm syst.}\right) \ensuremath{{\mathrm{ \,\upmu b}}}\xspace$, rescaled to $\sqrt{s} = 114.6 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$ assuming a linear dependence on $\sqrt{s}$. By applying the {\ensuremath{\Lz^+_\cquark}}\xspace fragmentation function used in Ref. \cite{Adare:2006hc}, $\sigma_{{\ensuremath{\Lz^+_\cquark}}\xspace}/\sigma_{c\overline{c}}\approx 5.6\%$, compatible with theoretical predictions \cite{Kniehl:2005de}, the {\ensuremath{\Lz^+_\cquark}}\xspace cross section is $\sigma_{{\ensuremath{\Lz^+_\cquark}}\xspace} \approx 18.2 \ensuremath{{\mathrm{ \,\upmu b}}}\xspace$. The channeling efficiency in silicon crystals, including both channeling angular acceptance and dechanneling effects, is estimated to be $\ensuremath{\varepsilon_{\rm CH}}\xspace\approx 10^{-3}$~\cite{Biryukov1997}, while the fraction of {\ensuremath{\Lz^+_\cquark}}\xspace baryons decaying after the crystal is $\ensuremath{\varepsilon_{\rm DF}}({\ensuremath{\Lz^+_\cquark}}\xspace)\approx 19\%$, for $\gamma = 1000$ and 10\ensuremath{\mathrm{ \,cm}}\xspace crystal length. 
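The two numerical inputs just derived can be checked with a few lines (an illustration added here; the {\ensuremath{\Lz^+_\cquark}}\xspace lifetime $c\tau\approx59.9~\mu\mathrm{m}$ is a PDG value assumed for the decay-flight estimate, which models the surviving fraction as exponential in the crystal length).

```python
import math

# PHENIX ccbar cross section rescaled linearly in sqrt(s), times f(c -> Lc).
sigma_ccbar_200 = 567.0                              # microbarn at sqrt(s) = 200 GeV
sigma_lc = sigma_ccbar_200 * 114.6 / 200.0 * 0.056   # ~18.2 microbarn

# Fraction of Lc decaying after a 10 cm crystal for gamma = 1000,
# assuming an exponential decay law with c*tau ~ 59.9 um (assumed PDG input).
ctau_cm = 59.9e-4
eps_df = math.exp(-10.0 / (1000.0 * ctau_cm))        # ~0.19
```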
The geometrical acceptance for ${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Pp}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace$ decays into the \mbox{LHCb}\xspace detector is $\ensuremath{\varepsilon_{\rm geo}}\xspace \approx 25\%$ according to simulation studies. For {\ensuremath{\Lz^+_\cquark}}\xspace to {\ensuremath{\Lambda}}\xspace decays, \mbox{\itshape e.g.}\xspace \ensuremath{\Lc\to\Lz(p\pim)\pip}\xspace, the geometrical efficiency is reduced by about a factor of 50, since most {\ensuremath{\Lambda}}\xspace baryons decay after the detector tracking volume. The \mbox{LHCb}\xspace software-based trigger for the upgrade detector~\cite{LHCb-TDR-016} is expected to have an efficiency for charm hadrons comparable to the current high level trigger~\cite{LHCb-DP-2014-002}, \mbox{\itshape i.e.}\xspace $\ensuremath{\varepsilon_{\rm trigger}}\xspace \approx 80\%$. A specific trigger scheme for the fixed-target experiment can be adopted to enhance the trigger efficiency for {\ensuremath{\Lz^+_\cquark}}\xspace decays to close to $100\%$. For example, a trigger based on the energy loss in an instrumented silicon crystal was used in the E761 experiment to enhance the rate of reconstructed channeled \PSigmap baryons~\cite{Chen:1992wx}. The tracking efficiency is estimated to be $70\%$ per track, leading to an efficiency $\ensuremath{\varepsilon_{\rm track}}\xspace \approx 34\%$ for a {\ensuremath{\Lz^+_\cquark}}\xspace decay with three charged particles.
The detector reconstruction efficiency, $\ensuremath{\varepsilon_{\rm det}} = \ensuremath{\varepsilon_{\rm geo}}\xspace\ensuremath{\varepsilon_{\rm trigger}}\xspace\ensuremath{\varepsilon_{\rm track}}\xspace$, is estimated to be \begin{align} \ensuremath{\varepsilon_{\rm det}}({\ensuremath{\Pp}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace) &\approx 5.4\times10^{-2} \hspace{1cm} \text{ for } \ensuremath{\Lc\to pK^-\pi^+}\xspace , \nonumber\\ \ensuremath{\varepsilon_{\rm det}}({\ensuremath{\Lambda}}\xspace{\ensuremath{\pion^+}}\xspace) &\approx 1.0\times10^{-3} \hspace{1cm} \text{ for } \ensuremath{\Lc\to\Lz(p\pim)\pip}\xspace. \end{align} The initial {\ensuremath{\Lz^+_\cquark}}\xspace polarization will eventually be measured using non-channeled {\ensuremath{\Lz^+_\cquark}}\xspace particles. Few {\ensuremath{\Lz^+_\cquark}}\xspace decay asymmetry parameters are known; the only one relevant for our experiment is that associated with ${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace\Lambda(p\pi^-)\pi^+$, $\alpha_{\Lambda{\ensuremath{\pion^+}}\xspace} = -0.91\pm 0.15$~\cite{Olive:2016xmw}. Asymmetry parameters for different {\ensuremath{\Lz^+_\cquark}}\xspace decays can be measured precisely at \mbox{LHCb}\xspace in the future. At present, they can be computed from existing ${\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Pp}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace$ amplitude analysis results~\cite{Aitala:1999uq} (see Appendix~\ref{app:B}), yielding $\alpha_{{\ensuremath{\PDelta}}\xspace^{++}{\ensuremath{\kaon^-}}\xspace} = -0.67 \pm 0.30$ for the \ensuremath{\Lc\to \Deltares^{++}\Km}\xspace decay. For the sensitivity studies we assume $s_0=0.6$ and $(g-2)/2 = 0.3$, according to experimental results and available theoretical predictions, respectively, quoted in Ref.~\cite{Samsonov:1996ah}.
The $g-2$ and $d$ values can be derived from Eq.~\eqref{eq:EDM_LcPol} as \begin{eqnarray} &g-2& \approx \frac{2}{\gamma\theta_C}\arccos{\left(\frac{A_y}{\alpha s_0}\right)} \approx \frac{2}{\gamma\theta_C}\arcsin{\left(\frac{A_z}{\alpha s_0}\right)} ,\\ &d & \approx \frac{(g-2)A_x}{\alpha s_0 \left[ \cos \Phi -1 \right]}, \end{eqnarray} where the quantity $A_{x,y,z}=\alpha s_{x,y,z}$ is measured from a fit to the angular distribution of the decay products. The main contribution to the statistical uncertainty on $g$ and $d$, in the limit $\gamma \gg 1$, can be estimated as \begin{eqnarray} &\sigma_{g}& \approx \frac{2}{\alpha s_0 \gamma \theta_C }\frac{1}{\sqrt{N_{\ensuremath{\Lz^+_\cquark}}\xspace^{\rm reco}}}, \\ &\sigma_d& \approx \frac{g-2}{\alpha s_0\left[ \cos\Phi -1 \right]}\frac{1}{\sqrt{N_{\ensuremath{\Lz^+_\cquark}}\xspace^{\rm reco}}}, \label{eq:EDM_stat_uncertainty} \end{eqnarray} where $N_{\ensuremath{\Lz^+_\cquark}}\xspace^{\rm reco}$ is the number of channeled and reconstructed {\ensuremath{\Lz^+_\cquark}}\xspace, as given in Eq.~(\ref{eq:NLcReco}), and $\Phi \approx 3 \ensuremath{\mathrm{ \,rad}}\xspace$ is the precession angle defined in Eq.~(\ref{eq:MDM_angle}) estimated using the quantities reported in Table~\ref{tab:quantities}. The estimate assumes negligibly small uncertainties on $\theta_C$, $\gamma$ and the initial {\ensuremath{\Lz^+_\cquark}}\xspace polarization, $s_0$, the latter to be measured with large samples of non-channeled {\ensuremath{\Lz^+_\cquark}}\xspace decays. 
Given the estimated quantities reported in Table~\ref{tab:quantities}, we obtain \begin{align} \frac{dN_{\ensuremath{\Lz^+_\cquark}}\xspace^{\rm reco}}{dt} & \approx 5.9 \times 10^{-3}~\ensuremath{{\mathrm{ \,s^{-1}}}}\xspace = 21.2~\ensuremath{{\mathrm{ \,h^{-1}}}}\xspace \hspace{0.8cm} \text{for }\ensuremath{\Lc\to \Deltares^{++}\Km}\xspace, \nonumber\\ \frac{dN_{\ensuremath{\Lz^+_\cquark}}\xspace^{\rm reco}}{dt} & \approx 8.3 \times 10^{-5}~\ensuremath{{\mathrm{ \,s^{-1}}}}\xspace = 0.3~\ensuremath{{\mathrm{ \,h^{-1}}}}\xspace \hspace{1cm} \text{for }\ensuremath{\Lc\to\Lz(p\pim)\pip}\xspace. \end{align} To reach a sensitivity of $\sigma_d=0.01$, corresponding to a {\ensuremath{\Lz^+_\cquark}}\xspace EDM of $\delta = 2.1 \times 10^{-17} e \ensuremath{\mathrm{ \,cm}}\xspace$, we need, by inverting Eq.~\eqref{eq:EDM_stat_uncertainty}, $5.6\times 10^3$ \ensuremath{\Lc\to \Deltares^{++}\Km}\xspace or $3.0\times 10^3$ \ensuremath{\Lc\to\Lz(p\pim)\pip}\xspace events, recorded during a data-taking time $t$ of \begin{align} t &= 265 {~\rm h} = 11 {~\rm days} \hspace{4.35cm} \text{for }\ensuremath{\Lc\to \Deltares^{++}\Km}\xspace, \nonumber\\ t &= 1.0\times 10^{4}{~\rm h} \approx 420 {~\rm days} \approx 1.2 {~\rm years} \hspace{1cm} \text{for }\ensuremath{\Lc\to\Lz(p\pim)\pip}\xspace. \end{align} Therefore, a measurement of the {\ensuremath{\Lz^+_\cquark}}\xspace EDM in quasi two-body decays is feasible at \mbox{LHCb}\xspace, while it is challenging for {\ensuremath{\Lz^+_\cquark}}\xspace decays to {\ensuremath{\Lambda}}\xspace final states.
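The event counts and data-taking times just quoted follow from inverting Eq.~(\ref{eq:EDM_stat_uncertainty}); the sketch below (added here for illustration) reproduces them from the Table~\ref{tab:quantities} inputs and the hourly rates given above.

```python
import math

def n_required(sigma_d_target, g, alpha, s0, phi):
    """Events needed for a target sigma_d, inverting Eq. (eq:EDM_stat_uncertainty)."""
    return ((g - 2.0) / (abs(alpha) * s0 * abs(math.cos(phi) - 1.0)
                         * sigma_d_target)) ** 2

phi = 3.0  # rad, precession angle for (g-2)/2 = 0.3, gamma = 1000, theta_C = 10 mrad

n_dk  = n_required(0.01, 2.6, -0.67, 0.6, phi)   # ~5.6e3 for Lc -> Delta++ K-
n_lpi = n_required(0.01, 2.6, -0.91, 0.6, phi)   # ~3.0e3 for Lc -> Lambda pi+

t_dk  = n_dk / 21.2    # hours at 21.2 events/h  -> ~265 h ~ 11 days
t_lpi = n_lpi / 0.3    # hours at 0.3 events/h   -> ~1.0e4 h ~ 1.2 years
```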
Considering only \ensuremath{\Lc\to \Deltares^{++}\Km}\xspace events, the uncertainties scale as \begin{eqnarray} \sigma_g \approx 4.0\times 10^{-3}\frac{1}{\sqrt{t({\rm month})}}~,~~ \sigma_d \approx 6.1\times 10^{-3}\frac{1}{\sqrt{t({\rm month})}}, \end{eqnarray} corresponding to \begin{eqnarray} \sigma_{\mu} \approx 4.2 \times 10^{-27} \textrm{erg/G} \frac{1}{\sqrt{t({\rm month})}}~,~~ \sigma_{\delta} \approx 1.3 \times 10^{-17} e \ensuremath{\mathrm{ \,cm}}\xspace \frac{1}{\sqrt{t({\rm month})}}, \end{eqnarray} where the time $t$ of the data-taking period is expressed in months. The dependence of the sensitivity to the {\ensuremath{\Lz^+_\cquark}}\xspace EDM and MDM on the number of incident protons on the target is shown in Fig.~\ref{fig:Lambdac_sensitivity}. \begin{figure} \centering \includegraphics[scale=0.36]{./Lc_sigd.pdf} \includegraphics[scale=0.36]{./Lc_sigg.pdf} \caption{Dependence of the (Left) $d$ and (Right) $g$ uncertainties for the {\ensuremath{\Lz^+_\cquark}}\xspace baryon, reconstructed in the ${\ensuremath{\PDelta}}\xspace^{++}{\ensuremath{\kaon^-}}\xspace$ final state, on the number of protons on target. One month of data taking corresponds to $1.3\times 10^{15}$ incident protons (dashed line), according to the estimated quantities listed in Table~\ref{tab:quantities}.
\label{fig:Lambdac_sensitivity}} \end{figure} Estimating the {\ensuremath{\PXi^+_\cquark}}\xspace baryon production and the absolute ${\ensuremath{\PXi^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Pp}}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace$ branching fraction as described in Sec.~\ref{sec:method_lambda}, we obtain the ratio \begin{equation} \frac{\sigma_{\ensuremath{\PXi^+_\cquark}}\xspace\ensuremath{\mathcal{B}} ({\ensuremath{\PXi^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace pK^-\pi^+)}{\sigma_{\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\mathcal{B}} ({\ensuremath{\Lz^+_\cquark}}\xspace\ensuremath{\rightarrow}\xspace pK^-\pi^+)} \approx 18 \%, \end{equation} while the fraction of {\ensuremath{\PXi^+_\cquark}}\xspace baryons decaying after the crystal is $\ensuremath{\varepsilon_{\rm DF}}({\ensuremath{\PXi^+_\cquark}}\xspace) \approx 47\%$. Assuming decay asymmetry parameters and an initial polarization similar to those of the {\ensuremath{\Lz^+_\cquark}}\xspace baryon, the expected statistical uncertainty on the {\ensuremath{\PXi^+_\cquark}}\xspace MDM and EDM is \begin{eqnarray} \sigma_{\mu} \approx 6.3 \times 10^{-27} \textrm{erg/G} \frac{1}{\sqrt{t({\rm month})}}~,~~ \sigma_{\delta} \approx 2.0 \times 10^{-17} e \ensuremath{\mathrm{ \,cm}}\xspace \frac{1}{\sqrt{t({\rm month})}}. \end{eqnarray}
\section{Introduction} A subset $B$ of a group $G$ is called a {\em difference basis} for a subset $A\subset G$ if each element $a\in A$ can be written as $a=xy^{-1}$ for some $x,y\in B$. The smallest cardinality of a difference basis for $A$ is called the {\em difference size} of $A$ and is denoted by $\Delta[A]$. For example, the set $\{0,1,4,6\}$ is a difference basis for the interval $A=[-6,6]\cap\IZ$ witnessing that $\Delta[A]\le 4$. The difference size is subadditive in the sense that $\Delta[A\cup B]<\Delta[A]+\Delta[B]$ for any non-empty finite subsets $A,B$ of a group $G$ (see Proposition~\ref{p:adic}). The definition of a difference basis $B$ for a set $A$ in a group $G$ implies that $|A|\le |B|^2$ and hence $\Delta[A]\ge \sqrt{|A|}$. The fraction $$\eth[A]:=\frac{\Delta[A]}{\sqrt{|A|}}\ge1$$is called the {\em difference characteristic} of $A$. The difference characteristic is submultiplicative in the sense that $\eth[G]\le \eth[H]\cdot\eth[G/H]$ for any normal subgroup $H$ of a finite group $G$, see \cite[1.1]{BGN}. In this paper we are interested in evaluating the difference characteristics of finite Abelian groups. In fact, this problem has been studied in the literature. A fundamental result in this area is due to Kozma and Lev \cite{KL}, who proved (using the classification of finite simple groups) that each finite group $G$ has difference characteristic $\eth[G]\le\frac{4}{\sqrt{3}}\approx 2.3094$. For finite cyclic groups $G$ the upper bound $\frac4{\sqrt{3}}$ can be lowered to $\eth[G]\le \frac32$ (and even to $\eth[G]<\frac2{\sqrt{3}}$ if $|G|\ge 2\cdot 10^{15}$), see \cite{BG}. In this paper we continue investigations started in \cite{BG} and shall give some lower and upper bounds for the difference characteristics of finite Abelian groups. In some cases (for example, for Abelian $p$-groups with $p\ge11$) our upper bounds are better than the general upper bound $\frac4{\sqrt{3}}$ of Kozma and Lev. 
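The introductory example is small enough to verify mechanically; a minimal computational check (assuming nothing beyond the definitions above):

```python
import math

def difference_set(B):
    """All differences x - y for x, y in B (the group operation on Z)."""
    return {x - y for x in B for y in B}

A = set(range(-6, 7))   # the interval [-6, 6] in Z, |A| = 13
B = [0, 1, 4, 6]        # the claimed difference basis, |B| = 4

covers = A <= difference_set(B)              # is B a difference basis for A?
lower_bound = math.ceil(math.sqrt(len(A)))   # Delta[A] >= sqrt(|A|)
```

Since $\lceil\sqrt{13}\rceil=4=|B|$, the witness is optimal, i.e. $\Delta[A]=4$ for this interval.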
In particular, in Theorem~\ref{t:u-Abp} we prove that for any prime number $p\ge 11$, any finite Abelian $p$-group $G$ has difference characteristic $$\eth[G]\le \frac{\sqrt{p}-1}{\sqrt{p}-3}\cdot\sup_{k\in\IN}\eth[C_{p^k}]< \frac{\sqrt{p}-1}{\sqrt{p}-3}\cdot\sqrt{2}.$$ These results are obtained exploiting a structure of a Galois ring on the groups $C_{p^k}^r$. Here $C_n=\{z\in\IC:z^n=1\}$ is the cyclic group of order $n$. The group $C_n$ is isomorphic to the additive group of the ring $\IZ_n=\IZ/n\IZ$. \section{Known results} In this section we recall some known results on difference bases in finite groups. The following fundamental fact was proved by Kozma and Lev \cite{KL}. \begin{theorem}[Kozma, Lev]\label{t:KL} Each finite group $G$ has difference characteristic $\eth[G]\le\frac{4}{\sqrt{3}}$. \end{theorem} For a real number $x$ we put $$\lceil x\rceil=\min\{n\in\IZ:n\ge x\}\mbox{ and }\lfloor x\rfloor=\max\{n\in\IZ:n\le x\}.$$ The following proposition is proved in \cite[1.1]{BGN}. \begin{proposition}\label{p:BGN} Let $G$ be a finite group. Then \begin{enumerate} \item $ \frac{1+\sqrt{4|G|-3}}2\le \Delta[G]\le \big\lceil\frac{|G|+1}2\big\rceil$, \item $\Delta[G]\le \Delta[H]\cdot \Delta[G/H]$ and $\eth[G]\le\eth[H]\cdot\eth[G/H]$ for any normal subgroup $H\subset G$; \item $\Delta[G]\le |H|+|G/H|-1$ for any subgroup $H\subset G$; \end{enumerate} \end{proposition} Finite groups $G$ with $\Delta[G]=\big\lceil\frac{|G|+1}2\big\rceil$ were characterized in \cite{BGN} as follows. \begin{theorem}[Banakh, Gavrylkiv, Nykyforchyn] \label{upbound} For a finite group $G$ \begin{enumerate} \item[(i)] $\Delta[G]=\big\lceil\frac{|G|+1}2\big\rceil>\frac{|G|}2$ if and only if $G$ is isomorphic to one of the groups:\newline $C_1$, $C_2$, $C_3$, $C_4$, $C_2\times C_2$, $C_5$, $D_6$, $(C_2)^3$; \item[(ii)] $\Delta[G]=\frac{|G|}2$ if and only if $G$ is isomorphic to one of the groups: $C_6$, $C_8$, $C_4\times C_2$, $D_8$, $Q_8$. 
\end{enumerate} \end{theorem} In this theorem by $D_{2n}$ we denote the dihedral group of cardinality $2n$ and by $Q_8$ the 8-element group of quaternion units. In \cite{BGN} the difference size $\Delta[G]$ was calculated for all groups $G$ of cardinality $|G|\le 13$. \begin{table}[ht] \caption{Difference sizes of groups of order $\le13$}\label{tab:BGN} \begin{tabular}{|c|c|c|c|cc|cc|ccccc|} \hline $G$:& $C_2$& $C_3$ & $C_5$&$C_4$ &$C_2{\times}C_2$ &$C_6$& $D_6$ & $C_8$ &$C_2{\times}C_4$ & $D_8$ & $Q_8$& $(C_2)^3$\\ \hline $\Delta[G]$&2&2&3&3&3&3&4&4&4&4&4&5\\ \hline \hline $G$:&$C_{7}$& $C_{11}$& $C_{13}$ &$C_9$&$C_3{\times}C_3$ &$C_{10}$& $D_{10}$ & $C_{12}$ &$C_2{\times}C_6$ &$D_{12}$& $A_4$ & $C_3{\rtimes} C_4$\\ \hline $\Delta[G]$&3&4&4&4&4&4&4&4&5&5&5&5\\ \hline \end{tabular} \end{table} An important role in evaluating the difference sizes of cyclic groups is due to difference sizes of the order-intervals $[1,n]\cap\IZ$ in the additive group $\IZ$ of integer numbers. For a natural number $n\in\IN$ by $\Delta[n]$ we denote the difference size of the order interval $[1,n]\cap\IZ$ and by $\eth[n]:=\frac{\Delta[n]}{\sqrt{n}}$ its difference characteristic. The asymptotics of the sequence $(\eth[n])_{n=1}^\infty$ was studied by R\'edei and R\'enyi \cite{RR}, Leech \cite{Leech} and Golay \cite{Golay} who eventually proved that $$\sqrt{2+\tfrac4{3\pi}}< \sqrt{2+\max_{0<\varphi<2\pi}\tfrac{2\sin(\varphi)}{\varphi+\pi}}\le \lim_{n\to\infty}\eth[n]=\inf_{n\in\IN}\eth[n]\le \eth[6166]=\frac{128}{\sqrt{6166}}<\eth[6]=\sqrt{\tfrac{8}3}.$$ In \cite{BG} the difference sizes of the order-intervals $[1,n]\cap\IZ$ were applied to give upper bounds for the difference sizes of finite cyclic groups.
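For cyclic groups, the difference sizes listed in Table~\ref{tab:BGN} can be reproduced by exhaustive search; a brute-force sketch (illustrative only, feasible just for small $n$, and exploiting the translation-invariance of difference bases to fix $0\in B$):

```python
from itertools import combinations

def delta_cyclic(n):
    """Difference size of the cyclic group C_n = Z/nZ, by exhaustive search.
    Since B and B - b have the same difference set, we may assume 0 in B."""
    full = set(range(n))
    for size in range(1, n + 1):
        for rest in combinations(range(1, n), size - 1):
            B = (0,) + rest
            if {(x - y) % n for x in B for y in B} == full:
                return size
```

For instance, `delta_cyclic(13)` returns $4=q+1$ with $q=3$, in accordance with the Singer-type equality $\Delta[C_{q^2+q+1}]=q+1$ quoted below.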
\begin{proposition}\label{p:c<n} For every $n\in\IN$ we get the upper bound $\Delta[C_n]\le\Delta\big[\lceil\frac{n-1}2\rceil\big]$, which implies that $$\limsup_{n\to\infty}\eth[C_n]\le\frac1{\sqrt{2}}\inf_{n\in\IN}\eth[n]\le\frac{64}{\sqrt{3083}}<\frac{2}{\sqrt{3}}.$$ \end{proposition} The following facts on the difference sizes of cyclic groups were proved in \cite{BG}. \begin{theorem}[Banakh, Gavrylkiv]\label{t:cyclic} For any $n\in\IN$ the cyclic group $C_n$ has the difference characteristic: \begin{enumerate} \item $\eth[C_n]\le\eth[C_4]=\frac32$; \item $\eth[C_n]\le\eth[C_2]=\eth[C_8]=\sqrt{2}$ if $n\ne 4$; \item $\eth[C_n]\le\frac{12}{\sqrt{73}}<\sqrt{2}$ if $n\ge 9$; \item $\eth[C_n]\le\frac{24}{\sqrt{293}}<\frac{12}{\sqrt{73}}$ if $n\ge 9$ and $n\ne 292$; \item $\eth[C_n]<\frac2{\sqrt{3}}$ if $n\ge 2\cdot 10^{15}$. \end{enumerate} \end{theorem} For some special numbers $n$ we have more precise upper bounds for $\Delta[C_n]$. We recall that a number $q$ is a {\em prime power} if $q=p^k$ for some prime number $p$ and some $k\in\IN$. The following theorem was derived in \cite{BG} from the classical results of Singer \cite{Singer}, Bose, Chowla \cite{Bose}, \cite{Chowla} and Ruzsa \cite{Rusza}. \begin{theorem} Let $p$ be a prime number and $q$ be a prime power. Then \begin{enumerate} \item $\Delta[C_{q^2+q+1}]=q+1$; \item $\Delta[C_{q^2-1}]\le q-1+\Delta[C_{q-1}]\le q-1+\frac{3}2\sqrt{q-1}$; \item $\Delta[C_{p^2-p}]\le p-3+\Delta[C_{p}]+\Delta[C_{p-1}]\le p-3+\frac32(\sqrt{p}+\sqrt{p-1})$. \end{enumerate} \end{theorem} The following table of difference sizes of cyclic groups $C_n$ for $n\le 100$ is taken from \cite{BG}. \begin{table}[ht] \caption{Difference sizes and characteristics of cyclic groups $C_n$ for $n\le100$}\label{tab:cycl} \begin{tabular}{|c|c|c||c|c|c||c|c|c||c|c|c|} \hline $n$ & \!$\Delta[C_n]$\! & $\eth[C_n]$&$n$ & \!$\Delta[C_n]$\! & $\eth[C_n]$&$n$ & \!$\Delta[C_n]$\! & $\eth[C_n]$&$n$ & \!$\Delta[C_n]$\!
& $\eth[C_n]$\\ \hline 1 & 1 & 1 &26 & 6 & 1.1766...\!\! & 51 & 8 & 1.1202...\!\!& 76 & 10 & 1.1470...\\ 2 & 2 &1.4142... &27 & 6 & 1.1547...\!\! & 52 & 9 & 1.2480...\!\!& 77 & 10 & 1.1396...\!\!\\ 3 & 2 &1.1547... &28 & 6 & 1.1338...\!\! & 53 & 9 & 1.2362...\!\!& 78 & 10 & 1.1322...\!\!\\ 4 & 3 &1.5 &29 & 7 & 1.2998...\!\! & 54 & 9 & 1.2247...\!\!& 79 & 10 & 1.1250...\!\!\\ 5 & 3 &1.3416... &30 & 7 & 1.2780...\!\! & 55 & 9 & 1.2135...\!\!& 80 & 11 & 1.2298...\!\!\\ 6 & 3 & 1.2247... &31 & 6 & 1.0776...\!\! & 56 & 9 & 1.2026...\!\!& 81 & 11 & 1.2222...\!\!\\ 7 & 3 & 1.1338... &32 & 7 & 1.2374...\!\! & 57 & 8 & 1.0596...\!\!& 82 & 11 & 1.2147...\!\!\\ 8 & 4 & 1.4142... &33 & 7 & 1.2185...\!\! & 58 & 9 & 1.1817...\!\!& 83 & 11 & 1.2074...\!\!\\ 9 & 4 & 1.3333... &34 & 7 & 1.2004...\!\! & 59 & 9 & 1.1717...\!\!& 84 & 11 & 1.2001...\!\!\\ 10 & 4 & 1.2649... &35 & 7 & 1.1832...\!\! & 60 & 9 & 1.1618...\!\!& 85 & 11 & 1.1931...\!\!\\ 11 & 4 & 1.2060... &36 & 7 & 1.1666...\!\! & 61 & 9 & 1.1523...\!\!& 86 & 11 & 1.1861...\!\!\\ 12 & 4 & 1.1547... &37 & 7 & 1.1507...\!\! & 62 & 9 & 1.1430...\!\!& 87 & 11 & 1.1793...\!\!\\ 13 & 4 & 1.1094... &38 & 8 & 1.2977...\!\! & 63 & 9 & 1.1338...\!\!& 88 & 11 & 1.1726...\!\!\\ 14 & 5 & 1.3363... &39 & 7 & 1.1208...\!\! & 64 & 9 & 1.125\!\!& 89 & 11 & 1.1659...\!\!\\ 15 & 5 & 1.2909... &40 & 8 & 1.2649...\!\! & 65 & 9 & 1.1163...\!\!& 90 & 11 & 1.1595...\!\!\\ 16 & 5 & 1.25 &41 & 8 & 1.2493...\!\! & 66 & 10 & 1.2309...\!\!& 91 & 10 & 1.0482...\!\!\\ 17 & 5 & 1.2126... &42 & 8 & 1.2344...\!\! & 67 & 10 & 1.2216...\!\!& 92 & 11 & 1.1468...\!\!\\ 18 & 5 & 1.1785... &43 & 8 & 1.2199...\!\! & 68 & 10 & 1.2126...\!\!& 93 & 12 & 1.2443...\!\!\\ 19 & 5 & 1.1470... &44 & 8 & 1.2060...\!\! & 69 & 10 & 1.2038...\!\!& 94 & 12 & 1.2377...\!\!\\ 20 & 6 & 1.3416... &45 & 8 & 1.1925...\!\! & 70 & 10 & 1.1952...\!\!& 95 & 12 & 1.2311...\!\!\\ 21 & 5 & 1.0910... &46 & 8 & 1.1795...\!\! 
& 71 & 10 & 1.1867...\!\!& 96 & 12 & 1.2247...\!\!\\ 22 & 6 & 1.2792... &47 & 8 & 1.1669...\!\! & 72 & 10 & 1.1785...\!\!& 97 & 12 & 1.2184...\!\!\\ 23 & 6 & 1.2510... &48 & 8 & 1.1547...\!\! & 73 & 9 & 1.0533...\!\!& 98 & 12 & 1.2121...\!\!\\ 24 & 6 & 1.2247... &49 & 8 & 1.1428...\!\! & 74 & 10 & 1.1624...\!\!& 99 & 12 & 1.2060...\!\!\\ 25 & 6 & 1.2 &50 & 8 & 1.1313...\!\! & 75 & 10 & 1.1547...\!\!& 100 & 12 & 1.2\\ \hline \end{tabular} \end{table} \section{A lower bound for the difference size} In this section we prove a simple lower bound for the difference size of an arbitrary finite set in a group. This lower bound improves the lower bound given in Proposition~\ref{p:BGN}(1). For a group $G$ by $1_G$ we denote the unique idempotent of $G$. \begin{theorem}\label{t:lower} Each finite subset $A$ of a group $G$ has difference size $$\Delta[A]\ge \frac{1+\sqrt{4|A_{>2}|+8|A_2|+1}}2\ge\frac{1+\sqrt{4|A_{>1}|+1} }2,$$ where $A_{>2}=\{a\in A:a^{-1}\ne a\}$, $A_2=\{a\in A:a^{-1}=a\ne 1_G\}$ and $A_{>1}=\{a\in A:a\ne 1_G\}$. \end{theorem} \begin{proof} Take a difference basis $B\subset G$ for the set $A$ of cardinality $|B|=\Delta[A]$ and consider the map $\xi:B\times B\to G$, $\xi:(x,y)\mapsto xy^{-1}$. Observe that for the unit $1_G$ of the group $G$ the preimage $\xi^{-1}(1_G)$ coincides with the diagonal $\{(x,y)\in B\times B:x=y\}$ of the square $B\times B$ and hence has cardinality $|\xi^{-1}(1_G)|=|B|$. Observe also that for any element $g\in A_2=\{a\in A:a^{-1}=a\ne 1_G\}$ and any $(x,y)\in \xi^{-1}(g)$, we get $yx^{-1}=(xy^{-1})^{-1}=g^{-1}=g$, so that $(y,x)\in\xi^{-1}(g)$ as well, which implies that $|\xi^{-1}(g)|\ge 2$. Then $$|B|^2=|B\times B|\ge|\xi^{-1}(1_G)|+\sum_{g\in A_2}|\xi^{-1}(g)|+\sum_{g\in A_{>2}}|\xi^{-1}(g)|\ge |B|+2|A_2|+|A_{>2}|$$and hence $$\Delta[A]=|B|\ge \frac{1+\sqrt{1+4|A_{>2}|+8|A_2|}}2\ge\frac{1+\sqrt{1+4|A_{>1}|}}2$$ as $A_{>2}\cup A_2=A_{>1}$.
\end{proof} \begin{corollary}\label{c:lower} Each finite group $G$ has difference size $\Delta[G]\ge \frac{1+\sqrt{4|G|+4|G_2|-3}}2$, where $G_2=\{g\in G:g^{-1}=g\ne 1_G\}$ is the set of elements of order 2 in $G$. \end{corollary} \section{The subadditivity and submultiplicativity of the difference size} In this section we prove two properties of the difference size called the subadditivity and the submultiplicativity. \begin{proposition}\label{p:adic} For any non-empty finite subsets $A,B$ of a group $G$ we get $\Delta[A\cup B]\le\Delta[A]+\Delta[B]-1$. \end{proposition} \begin{proof} Given non-empty sets $A,B\subset G$, find difference bases $D_A$ and $D_B$ for the sets $A,B$ of cardinality $|D_A|=\Delta[A]$ and $|D_B|=\Delta[B]$. Taking any point $d\in D_A$ and replacing $D_A$ by its shift $D_Ad^{-1}$, we can assume that the unit $1_G$ of the group $G$ belongs to $D_A$. By the same reason, we can assume that $1_G\in D_B$. The union $D=D_A\cup D_B$ is a difference basis for $A\cup B$, witnessing that $$\Delta[A\cup B]\le |D|\le |D_A|+|D_B|-1=\Delta[A]+\Delta[B]-1.$$ \end{proof} \begin{proposition}\label{p:multic} Let $h:G\to H$ be a surjective homomorphism of groups with finite kernel $K$. For any non-empty finite subset $A\subset H$ we get $\Delta[h^{-1}(A)]\le \Delta[A]\cdot\Delta[K]$. \end{proposition} \begin{proof} Given a non-empty finite subset $A\subset H$, find a difference basis $D_A$ for the set $A$ of cardinality $|D_A|=\Delta[A]$. Also fix a difference basis $D_K$ for the kernel $K\subset G$ of cardinality $|D_K|=\Delta[K]$. Fix any subset $B\subset G$ such that $|B|=|D_A|$ and $|h^{-1}(x)\cap B|=1$ for any $x\in D_A$. We claim that the set $C=BD_K$ is a difference basis for $h^{-1}(A)$. Since $D_A$ is a difference basis for the set $A$, for any $a\in h^{-1}(A)$ there are elements $a_1,a_2\in D_A$ such that $h(a)=a_1a_2^{-1}$. Then $a=b_1b_2^{-1}k$ for some $b_1,b_2\in B$, $k\in K$. 
The normality of the subgroup $K$ in $G$ implies that $a=b_1b_2^{-1}k=b_1k'b_2^{-1}$ for some $k'\in K$. Taking into account that $D_K$ is a difference basis for the kernel $K$, find elements $k_1,k_2\in D_K$ such that $k'=k_1k_2^{-1}$. Then $$a=b_1b_2^{-1}k=b_1k'b_2^{-1}=b_1k_1k_2^{-1}b_2^{-1}=(b_1k_1)(b_2k_2)^{-1}\in CC^{-1}$$ and $$\Delta[h^{-1}(A)]\le |C|\le |D_A|\cdot|D_K|=\Delta[A]\cdot\Delta[K].$$ \end{proof} \section{Difference bases in rings} In this section we construct difference bases for subsets of finite rings. All rings considered in this section have a unit. For a ring $R$ by $U(R)$ we denote the multiplicative group of invertible elements in $R$. An element $x$ of a ring $R$ is called {\em invertible} if there exists an element $x^{-1}\in R$ such that $xx^{-1}=x^{-1}x=1$. The group $U(R)$ is called {\em the group of units} of the ring $R$. The {\em characteristic} of a finite ring $R$ is the smallest natural number $n$ such that $nx=0$ for every $x\in R$. A non-empty subset $I$ of a ring $R$ is called an {\em ideal} in $R$ if $I\ne R$, $I-I\subset I$ and $IR\cup RI\subset I$. The spectrum $\Spec(R)$ of a ring is the set of all maximal ideals of $R$. For any maximal ideal $I$ of a commutative ring $R$ the quotient ring $R/I$ is a field. A ring $R$ is called {\em local} if it contains a unique maximal ideal, which is denoted by $I_{\mathfrak m}$. The quotient ring $R/I_{\mathfrak m}$ is a field called the {\em residue field} of the local ring $R$. By \cite[Ch.6]{BF}, for every prime number $p$ and natural numbers $k,r$ there exists a unique local ring $\GR(p^k,r)$ called the {\em Galois ring} of characteristic $p^k$ whose additive group is isomorphic to the group $(C_{p^k})^r$, whose maximal ideal $I_{\mathfrak m}$ coincides with the principal ideal $pR$ generated by $p\cdot 1$, and whose residue field $\GR(p^k,r)/I_{\mathfrak m}$ contains $p^r$ elements.
The Galois ring $\GR(p^k,r)$ can be constructed as the quotient ring $\IZ[x]/(p^k,f(x))$ of the ring $\IZ[x]$ of polynomials with integer coefficients by the ideal $(p^k,f(x))$ generated by the constant $p^k$ and a carefully selected monic polynomial $f\in\IZ[x]$ of degree $r$, which is irreducible over the field $\IZ/p\IZ$, see \cite[6.1]{BF}. For $k=1$ the Galois ring $\GR(p^k,r)$ is a field, and for $r=1$ the Galois ring $\GR(p^k,r)$ is isomorphic to the ring $\IZ/p^k\IZ$. The following description of the multiplicative group of a Galois ring is taken from Theorem 6.1.7 of the book \cite{BF}. \begin{theorem}\label{t:R*} Let $p$ be a prime number and $k,r$ be natural numbers. The multiplicative group $U(R)$ of a Galois ring $R:=\GR(p^k,r)$ is isomorphic to: $$\begin{cases} C_{p^r-1}\times C_{p^{k-1}}^r&\mbox{if either $p$ is odd or $p=2$ and $k\le 2$},\\ C_{2^r-1}\times C_2\times C_{2^{k-2}}\times C_{2^{k-1}}^{r-1}&\mbox{if $p=2$ and $k\ge 3$}. \end{cases} $$ \end{theorem} The following theorem is our principal tool for evaluating the difference sizes of finite Abelian groups of odd order. \begin{theorem}\label{t:xx} Let $R$ be a finite commutative ring with unit such that $1+1\in U(R)$, and let $h:G\to R\times R$ be a surjective homomorphism from a group $G$ onto the additive group of the ring $R\times R$. Let $K=h^{-1}(0,0)$ be the kernel of the homomorphism $h$. Then $$\Delta[G]\le \Delta[K]\cdot |R|-|\Spec(R)|+\sum_{I\in\Spec(R)}\Delta[h^{-1}(I\times R)].$$If the ring $R$ is local, then $\Delta[G]\le \Delta[K]\cdot |R|-1+\Delta[h^{-1}(I_{\mathfrak m}\times R)].$ \end{theorem} \begin{proof} First we observe that $R=U(R)\cup\bigcup_{I\in\Spec(R)}I$. Indeed, if an element $x\in R$ is not invertible, then the set $xR=\{xy:y\in R\}$ is an ideal in $R$, contained in some maximal ideal $I\in\Spec(R)$. This implies that $R=U(R)\cup\bigcup_{I\in\Spec(R)}I$ and hence $G=h^{-1}(U(R)\times R)\cup\bigcup_{I\in\Spec(R)}h^{-1}(I\times R)$.
\begin{lemma}\label{l:xx} The set $B=\{(x,x^2):x\in R\}$ is a difference basis for the set $U(R)\times R$ in the additive group $R\times R$. \end{lemma} \begin{proof} Given any pair $(a,b)\in U(R)\times R$, we should find two elements $x,y\in R$ such that $(a,b)=(x-y,x^2-y^2)$. Solving this system of equations in the ring $R$, we get the solution $$ \begin{cases} x=2^{-1}(ba^{-1}+a)\\ y=2^{-1}(ba^{-1}-a). \end{cases} $$ \end{proof} By Lemma~\ref{l:xx}, the set $B=\{(x,x^2):x\in R\}$ is a difference basis for the set $U(R)\times R$ in $R\times R$. So, $\Delta[U(R)\times R]\le|B|=|R|$. By Proposition~\ref{p:multic}, $$\Delta[h^{-1}(U(R)\times R)]\le\Delta[K]\cdot\Delta[U(R)\times R]\le\Delta[K]\cdot |R|$$and by Proposition~\ref{p:adic}, $$\Delta[G]\le \Delta[h^{-1}(U(R)\times R)]+\sum_{I\in\Spec(R)}(\Delta[h^{-1}(I\times R)]-1)\le \Delta[K]\cdot |R|-|\Spec(R)|+\sum_{I\in\Spec(R)}\Delta[h^{-1}(I\times R)]. $$ \end{proof} Our next theorem will be applied for evaluating the difference characteristics of Abelian 2-groups. This theorem exploits the structure of a (non)associative ring. By a {\em (non)associative ring} we understand an Abelian group $R$ endowed with a binary operation $\circ:R\times R\to R$ which is distributive in the sense that $x\circ(y+z)=x\circ y+x\circ z$ and $(x+y)\circ z=x\circ z+y\circ z$ for all $x,y,z\in R$. A (non)associative ring $R$ is called a {\em ring} if its binary operation $\circ$ is associative. In the opposite case $R$ is called a {\em non-associative ring}. For any (non)associative ring $R$ the product $R\times R$, endowed with the binary operation $$(x,y)\star(x',y')=(x+x',y+y'+x\circ x'),$$is a group. The inverse element to $(x,y)$ in this group is $(-x,-y+x\circ x)$. The product $R\times R$ endowed with this group operation will be denoted by $R\star R$. The group $R\star R$ is commutative if and only if the binary operation $\circ$ on $R$ is commutative.
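Returning briefly to Lemma~\ref{l:xx}: it can be confirmed by exhaustive search in the rings $\IZ_n$ with $2$ invertible; a small computational check (illustrative only, not part of the proof):

```python
def units(n):
    """The group of units U(Z_n)."""
    return [x for x in range(n) if any((x * y) % n == 1 for y in range(n))]

def quadratic_basis_covers(n):
    """Check that B = {(x, x^2) : x in Z_n} is a difference basis for
    U(Z_n) x Z_n in the additive group Z_n x Z_n (Lemma l:xx)."""
    diffs = {((x - y) % n, (x * x - y * y) % n)
             for x in range(n) for y in range(n)}
    return all((a, b) in diffs for a in units(n) for b in range(n))
```

The check succeeds for every odd $n$ (where $2\in U(\IZ_n)$) and fails, e.g., for $n=4$, showing that the invertibility of $2$ is essential.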
For a (non)associative ring $R$ let $U(R)$ be the set of all elements $a\in R$ such that the map $R\to R$, $x\mapsto x\circ a$, is bijective. The following theorem was known for semifields, see \cite[4.1]{PSZ}. \begin{theorem}\label{t:RR} For any finite (non)associative ring $R$ the set $B=\{(x,x\circ x):x\in R\}$ is a difference basis for the set $U(R)\times R$ in the group $R\star R$. \end{theorem} \begin{proof} Given any pair $(a,b)\in U(R)\times R$, we need to find elements $x,y\in R$ such that $(a,b)=(x,x\circ x)\star(y,y\circ y)^{-1}$. The definition of the group operation $\star$ implies that $(y,y\circ y)^{-1}=(-y,0)$. Then the equality $(a,b)=(x,x\circ x)\star(y,y\circ y)^{-1}$ turns into $(a,b)=(x,x\circ x)\star(-y,0)=(x-y,x\circ x-x\circ y)=(x-y,x\circ(x-y))$. Since $a\in U(R)$, there exists an element $x\in R$ such that $x\circ a=b$. Let $y=x-a$ and observe that the pair $(x,y)$ has the required property: $$(x,x\circ x)\star(y,y\circ y)^{-1}=(x-y,x\circ (x-y))=(a,x\circ a)=(a,b).$$ \end{proof} Theorem~\ref{t:RR} suggests the problem of detecting the structure of the group $R\star R$ for various rings $R$. For Galois rings $GR(p^k,r)$ this problem is answered in the following two theorems. \begin{theorem}\label{t:RstarR} Let $p$ be a prime number, and $k,r$ be natural numbers. For the Galois ring $R:=GR(p^k,r)$ the group $R\star R$ is isomorphic to $$ \begin{cases} C_{p^k}^{r}\times C_{p^k}^r&\mbox{ if \ $p\ge 3$},\\ C_{2^{k+1}}^r\times C_{2^{k-1}}^r&\mbox{ if \ $p=2$}. \end{cases} $$ \end{theorem} \begin{proof} First observe that the commutativity of the Galois ring $R$ implies the commutativity of the group $R\star R$. To determine the structure of the group $R\star R$ we shall calculate the orders of its elements. Let us recall that the {\em order} of an element $x$ in an Abelian group $G$ is the smallest number $n\in\IN$ such that $nx=0$. Let us fix an element $(x,y)\in R\times R$ and evaluate its order in the group $R\star R$. 
By induction it can be shown that for every $s\in\IN$ we get $(x,y)^s=(sx,sy+\frac{s(s-1)}2x^2)$. If $p\ge 3$, then $(x,y)^{p^k}=\big(p^kx,p^ky+p^k\frac{p^k-1}2x^2\big)=(0,0)$ as $p^k\cdot z=0$ for each element $z\in R$. On the other hand, $(x,y)^{p^{k-1}}=(0,0)$ if and only if $p^{k-1}x=p^{k-1}y=0$ if and only if $(x,y)\in pR\times pR$, which implies that the set of elements of order $\le p^{k-1}$ has cardinality $p^{2(k-1)r}$. It remains to observe that up to an isomorphism, $C_{p^k}^{2r}$ is the unique Abelian $p$-group of cardinality $p^{2kr}$ that contains $p^{2kr}$ elements of order $\le p^{k}$ and $p^{2kr}-p^{2(k-1)r}$ elements of order $p^k$. \smallskip Next, assume that $p=2$. In this case $(x,y)^{2^{k+1}}=(2^{k+1}x,2^{k+1}y+2^k(2^{k+1}-1)x^2)=(0,0)$, which means that each element of the group $R\star R$ has order $\le 2^{k+1}$. Observe that $(x,y)^{2^k}=(2^kx,2^ky+2^{k-1}(2^k-1)x^2)=(0,2^{k-1}(2^k-1)x^2)$, which implies that $(x,y)^{2^k}\ne (0,0)$ if and only if $x^2\notin I_{\mathfrak m}=2R$ if and only if $x\in U(R)$. This means that the 2-group $R\star R$ has exactly $|U(R)\times R|=(2^{kr}-2^{(k-1)r})2^{kr}=2^{(2k-1)r}(2^r-1)$ elements of order $2^{k+1}$. Next, for any $i\in\{1,\dots,k\}$, we calculate the number of elements of order $>2^{k-i}$ in $R\star R$. Observe that an element $(x,y)\in R\star R$ has order $>2^{k-i}$ if and only if $(2^{k-i}x,2^{k-i}y+2^{k-i-1}(2^{k-i}-1)x^2)\ne (0,0)$ if and only if either $x\notin 2^{i}R$ or $x\in 2^{i}R$ and $y\notin 2^{i}R$. So, the set of elements of order $>2^{k-i}$ coincides with $\big((R\setminus 2^{i}R)\times R\big)\cup \big(2^{i}R\times (R\setminus 2^{i}R)\big)$ and hence has cardinality $$ |R\setminus 2^{i}R|\cdot |R|+|2^{i}R|\cdot|R\setminus 2^{i}R|=(|R|-|2^{i}R|)\cdot(|R|+|2^{i}R|)=|R|^2-|2^{i}R|^2= 2^{2kr}-2^{2(k-i)r}= 2^{2(k-i)r}(2^{2ir}-1). $$ This information is sufficient to detect the isomorphic type of the group $R\star R$.
By \cite[4.2.6]{Rob}, the Abelian 2-group $R\star R$ is isomorphic to the product $H=\prod_{i=1}^{k+1}C_{2^{i}}^{m_i}$ for some numbers $m_1,\dots, m_{k+1}\in\{0\}\cup\IN$. Observe that the group $H$ contains $(2^{(k+1)m_{k+1}}-2^{km_{k+1}})\cdot \prod_{i=1}^k2^{im_i}=2^{km_{k+1}}(2^{m_{k+1}}-1)\cdot\prod_{i=1}^k2^{im_i}$ elements of order $2^{k+1}$. Taking into account that the group $R\star R$ contains $2^{(2k-1)r}(2^{r}-1)$ elements of order $2^{k+1}$, we conclude that $m_{k+1}=r$. Next, observe that the group $H$ contains exactly \begin{multline*} |C_{2^{k+1}}^{m_{k+1}}\times C_{2^k}^{m_k}-C_{2^{k-1}}^{m_{k+1}}\times C_{2^{k-1}}^{m_k}|\cdot\prod_{i=1}^{k-1}|C_{2^i}^{m_i}|=\\ = (2^{(k+1)r+km_k}-2^{(k-1)(r+m_k)})\cdot\prod_{i=1}^{k-1}2^{im_i}= 2^{(k-1)(r+m_k)}(2^{2r+m_k}-1)\cdot\prod_{i=1}^{k-1}2^{im_i} \end{multline*} elements of order $\ge 2^k$. Taking into account that the group $R\star R$ contains exactly $2^{2(k-1)r}(2^{2r}-1)$ elements of order $\ge 2^k$, we conclude that $m_k=0$. The group $H$ contains exactly \begin{multline*} |C_{2^{k+1}}^{m_{k+1}}\times C_{2^k}^{m_k}\times C_{2^{k-1}}^{m_{k-1}}-C_{2^{k-2}}^{m_{k+1}}\times C_{2^{k-2}}^{m_k}\times C_{2^{k-2}}^{m_{k-1}}|\cdot\prod_{i=1}^{k-2}|C_{2^i}^{m_i}|=\\ = (2^{(k+1)r+(k-1)m_{k-1}}-2^{(k-2)(r+m_{k-1})}) \cdot\prod_{i=1}^{k-2}2^{im_i}= 2^{(k-2)(r+m_{k-1})}(2^{3r+m_{k-1}}-1)\cdot\prod_{i=1}^{k-2}2^{im_i} \end{multline*} elements of order $\ge 2^{k-1}$. Taking into account that the group $R\star R$ contains exactly $2^{2(k-2)r}(2^{4r}-1)$ elements of order $\ge 2^{k-1}$, we conclude that $m_{k-1}=r$. Taking into account that $|C_{2^{k+1}}^{m_{k+1}}\times C_{2^{k-1}}^{m_{k-1}}|=|C_{2^{k+1}}^r\times C_{2^{k-1}}^r|=2^{2kr}=|R\star R|$, we conclude that $m_i=0$ for $i<k-1$ and hence the group $R\star R$ is isomorphic to $C_{2^{k+1}}^{r}\times C_{2^{k-1}}^{r}$.
\end{proof} \section{Evaluating the difference characteristics of Abelian $p$-groups} In this section we shall evaluate the difference characteristics of finite Abelian $p$-groups for an odd prime number $p$. We recall that a group $G$ is called a {\em $p$-group} if each element $x\in G$ generates a finite cyclic group of order $p^k$ for some $k\in\IN$. A finite group $G$ is a $p$-group if and only if $|G|=p^k$ for some $k\in\IN$. It is well-known that each Abelian $p$-group $G$ is isomorphic to the product $\prod_{i=1}^rC_{p^{k_i}}$ of cyclic $p$-groups. The number $r$ of cyclic groups in this decomposition is denoted by $r(G)$ and called the {\em rank} of $G$. Applying Theorem~\ref{t:xx} to the Galois ring $R:=\GR(p^k,r)$ and taking into account that its additive group is isomorphic to $C_{p^k}^r$ and $pR$ coincides with the maximal ideal of $R$, we get the following corollary. \begin{corollary}\label{c:hxx} Let $p$ be an odd prime number, $k,r$ be natural numbers. Let $h:G\to C_{p^k}^{2r}$ be a surjective homomorphism and $K$ be its kernel. Then $$\Delta[G]\le\Delta[K]\cdot p^{kr}+\Delta[h^{-1}(C_{p^{k-1}}^r\times C_{p^k}^r)]-1$$and $$\eth[G]\le\eth[K]+\frac1{\sqrt{p^r}}\cdot\eth[h^{-1}(C_{p^{k-1}}^r\times C_{p^k}^r)]-\frac1{\sqrt{|G|}}\,.$$ \end{corollary} This corollary implies the following recursive upper bound for difference characteristics of finite Abelian $p$-groups. \begin{theorem}\label{t:Abp} Let $p$ be an odd prime number, $k_1,\dots,k_m$ be natural numbers, and $k,r$ be natural numbers such that $2r\le m$ and $k\le\min\limits_{1\le i\le 2r}k_i$. 
Then $$\Delta\Big[\prod_{i=1}^mC_{p^{k_i}}\Big]\le \Delta\Big[\prod_{i=1}^{2r}C_{p^{k_i-k}}\times \prod_{i=2r+1}^mC_{p^{k_i}}\Big]\cdot p^{kr}+\Delta\Big[\prod_{i=1}^r C_{p^{k_i-1}}\times \prod_{i=r+1}^mC_{p^{k_i}}\Big]-1$$and $$\eth\big[\prod_{i=1}^mC_{p^{k_i}}\big]\le \eth\big[\prod_{i=1}^{2r}C_{p^{k_i-k}}\times \prod_{i=2r+1}^mC_{p^{k_i}}\big]+\frac1{\sqrt{p^r}}\cdot\eth\big[\prod_{i=1}^r C_{p^{k_i-1}}\times \prod_{i=r+1}^mC_{p^{k_i}}\big]-\prod_{i=1}^m\frac1{\sqrt{p^{k_i}}}.$$ \end{theorem} The recursive formulas from the preceding theorem will be used in the following upper bound for the difference characteristic of finite Abelian $p$-group. \begin{theorem}\label{t:u-Abp} For any prime number $p\ge 11$, any finite Abelian $p$-group $G$ has difference characteristic $$\eth[G]\le \frac{\sqrt{p}-1}{\sqrt{p}-3}\cdot\sup_{k\in\IN}\eth[C_{p^k}]\le \frac{\sqrt{p}-1}{\sqrt{p}-3}\cdot\frac{24}{\sqrt{293}}. $$ \end{theorem} \begin{proof} For a prime number $p$ and a natural number $r$ let $\Ab_p^r$ be the class of Abelian $p$-groups of rank $r$. Let also $\Ab_{p}^{<r}=\bigcup_{n<r}\Ab_{p}^n$ and $\Ab_{p}^{\le r}=\bigcup_{n\le r}\Ab_{p}^n$. Let $\Ab_{p}:=\bigcup_{r\in\IN}\Ab_{p}^r$ be the family of finite Abelian $p$-groups. For a class $\C$ of finite groups we put $\eth[\C]:=\sup_{G\in\C}\eth[G]$. By Theorem~\ref{t:KL}, $\eth[\C]\le\frac4{\sqrt{3}}$. 
\begin{lemma}\label{l:Ab} For any odd prime number $p$ and any natural number $r$ we get the upper bound $$\eth[\Ab_p^{r}]\le \eth[\Ab_{p}^{<r}]+\frac1{\sqrt{p^{\lfloor r/2\rfloor}}}\cdot \eth[\Ab_p^{\le r}]\le \eth[\Ab_{p}^{<r}]+\frac1{\sqrt{p^{\lfloor r/2\rfloor}}}\cdot \eth[\Ab_p].$$ Consequently, $$\eth[\Ab_p^r]\le\eth[\Ab_p^1]+\eth[\Ab_p]\cdot\sum_{i=2}^r\frac1{\sqrt{p^{\lfloor i/2\rfloor}}}\le\eth[\Ab_p^1]+\eth[\Ab_p]\cdot\sum_{i=1}^{\lfloor r/2\rfloor}\frac2{\sqrt{p^i}}.$$ \end{lemma} \begin{proof} Any group $G\in\Ab_p^{r}$ is isomorphic to the product $\prod_{i=1}^{r}C_{p^{k_i}}$ for a unique non-decreasing sequence $(k_i)_{i=1}^{r}$ of natural numbers. Let $k=k_1$ and $m=\lfloor\frac{r}2\rfloor$. Since $k_1-k=0$, the group $\prod_{i=1}^{2m}C_{p^{k_i-k}}\times\prod_{i=2m+1}^rC_{p^{k_i}}$ has rank $<r$. Applying Theorem~\ref{t:Abp}, we conclude that $$ \begin{aligned} \eth[G]&=\eth\big[\prod_{i=1}^rC_{p^{k_i}}\big]< \eth\big[\prod_{i=1}^{2m}C_{p^{k_i-k}}\times \prod_{i=2m+1}^rC_{p^{k_i}}\big]+\frac1{\sqrt{p^m}}\cdot \eth\big[\prod_{i=1}^m C_{p^{k_i-1}}\times \prod_{i=m+1}^rC_{p^{k_i}}\big]\le\\ &\le \eth[\Ab_p^{<r}]+\frac1{\sqrt{p^m}} \cdot\eth[\Ab_{p}^{\le r}]\le \eth[\Ab_p^{<r}]+\frac1{\sqrt{p^m}} \cdot\eth[\Ab_{p}]. \end{aligned} $$ \end{proof} Lemma~\ref{l:Ab} implies that $$\eth[\Ab_p]\le \eth[\Ab_p^1]+\eth[\Ab_p]\cdot\sum_{i=1}^\infty\frac2{\sqrt{p^i}}=\eth[\Ab_p^1] +\eth[\Ab_p]\cdot \frac{2}{\sqrt{p}-1}$$and after transformations $$\eth[\Ab_p]\le\eth[\Ab_p^1]\cdot\Big(1-\frac2{\sqrt{p}-1}\Big)^{-1}= \eth[\Ab_p^1]\cdot\frac{\sqrt{p}-1}{\sqrt{p}-3}= \frac{\sqrt{p}-1}{\sqrt{p}-3}\cdot\sup_{k\in\IN}\eth[C_{p^k}]\le \frac{\sqrt{p}-1}{\sqrt{p}-3}\cdot\frac{24}{\sqrt{293}}\;.$$ In the last inequality we use the upper bound $\sup_{k\in\IN}\eth[C_{p^k}]\le\frac{24}{\sqrt{293}}$ from Theorem~\ref{t:cyclic}.
\end{proof} Theorem~\ref{t:Abp} implies: \begin{corollary} For any odd prime number $p$ and natural numbers $k,n$ the groups $C_{p^k}^{2n}$ and $C_{p^k}^{2n+1}$ have difference characteristics $$\eth[C_{p^k}^{2n}]\le 1-\frac1{p^{kn}}+\frac1{\sqrt{p^n}}\cdot\eth[C_{p^{k-1}}^n\times C_{p^k}^n]<1+\frac1{\sqrt{p^n}}\cdot\eth[\Ab_p]$$and $$\eth[C_{p^k}^{2n+1}]\le \eth[C_{p^k}]-\frac1{\sqrt{p^{k(2n+1)}}}+\frac1{\sqrt{p^n}}\cdot\eth[C_{p^{k-1}}^n\times C_{p^k}^{n+1}]<\eth[C_{p^k}]+\frac1{\sqrt{p^n}}\cdot\eth[\Ab_p].$$ \end{corollary} \section{Evaluating the difference characteristics of $2$-groups} In this section we elaborate tools for evaluating the difference characteristics of 2-groups. The following corollary is a counterpart of Corollary~\ref{c:hxx}. \begin{corollary} Let $k,r$ be natural numbers. Let $h:G\to C_{2^{k+1}}^{r}\times C_{2^{k-1}}^r$ be a surjective homomorphism and $K$ be its kernel. Then $$\Delta[G]\le\Delta[K]\cdot 2^{kr}+\Delta[h^{-1}(C_{2^{k}}^r\times C_{2^{k-1}}^r)]-1$$and $$\eth[G]\le\eth[K]+\frac1{\sqrt{2^r}}\cdot \eth[h^{-1}(C_{2^{k}}^r\times C_{2^{k-1}}^r)]-\frac1{\sqrt{|G|}}\,.$$ \end{corollary} \begin{proof} Consider the Galois ring $R:=\GR(2^k,r)$, whose additive group is isomorphic to $C_{2^k}^r$. Its maximal ideal $I_{\mathfrak m}$ coincides with the subgroup $2R$ of $R$, which consists of elements of order $\le 2^{k-1}$ in $R$. The subset $2R\times R$ of the group $R\star R$ has cardinality $2^{(2k-1)r}$ and consists of elements of order $\le 2^{k}$ in $R\star R$. By Theorem~\ref{t:RstarR}, the group $R\star R$ is isomorphic to $C_{2^{k+1}}^r\times C_{2^{k-1}}^r$. It is easy to see that $C_{2^k}^r\times C_{2^{k-1}}^r$ is the unique subgroup of cardinality $2^{(2k-1)r}$ consisting of elements of order $\le 2^{k}$. Therefore, the group $C_{2^{k+1}}^r\times C_{2^{k-1}}^r$ can be identified with the group $R\star R$ and its subgroup $C_{2^k}^r\times C_{2^{k-1}}^r$ with the subgroup $2R\times R$ of the group $R\star R$.
By Theorem~\ref{t:RR}, the set $B=\{(x,x^2):x\in R\}$ is a difference basis for the set $U(R)\times R$ in the group $R\star R$. So, $\Delta[U(R)\times R]\le|R|=2^{kr}$. By Propositions~\ref{p:adic} and \ref{p:multic}, $$\Delta[G]\le\Delta[K]\cdot\Delta[U(R)\times R]-1+\Delta[h^{-1}(I_{\mathfrak m}\times R)]\le\Delta[K]\cdot |R|-1+\Delta[h^{-1}(2R\times R)]$$ and hence $$ \eth[G]\le\frac{\Delta[K]\cdot |R|}{\sqrt{|K|\cdot|R|^2}}-\frac1{\sqrt{|G|}}+\frac{\Delta[h^{-1}(2R\times R)]}{\sqrt{|K|\cdot|2R|\cdot|R|\cdot|R/2R|}}=\eth[K]-\frac1{\sqrt{|G|}}+\frac1{\sqrt{2^r}}\cdot\eth[h^{-1}(2R\times R)]. $$ \end{proof} This corollary implies the following recursive upper bound for difference characteristics of finite Abelian $2$-groups. \begin{theorem}\label{t:Ab2} Let $k_1,\dots,k_m$ be natural numbers, and $k,r$ be natural numbers such that $2r\le m$, $k+1\le\min\limits_{1\le i\le r}k_i$ and $k-1\le\min\limits_{r<i\le 2r}k_i$. Then $$\Delta\Big[\prod_{i=1}^mC_{2^{k_i}}\Big]\le \Delta\Big[\prod_{i=1}^{r}C_{2^{k_i-k-1}}\times \prod_{i=r+1}^{2r}C_{2^{k_i-k+1}}\times \prod_{i=2r+1}^mC_{2^{k_i}}\Big]\cdot 2^{kr}+\Delta\Big[\prod_{i=1}^r C_{2^{k_i-1}}\times \prod_{i=r+1}^mC_{2^{k_i}}\Big]-1$$and $$\eth\Big[\prod_{i=1}^mC_{2^{k_i}}\Big]\le \eth\Big[\prod_{i=1}^{r}C_{2^{k_i-k-1}}\times \prod_{i=r+1}^{2r}C_{2^{k_i-k+1}}\times \prod_{i=2r+1}^mC_{2^{k_i}}\Big]+\frac1{\sqrt{2^r}}\cdot\eth \Big[\prod_{i=1}^r C_{2^{k_i-1}}\times \prod_{i=r+1}^mC_{2^{k_i}}\Big]-\prod_{i=1}^m\frac1{\sqrt{2^{k_i}}}.$$ \end{theorem} Now we shall evaluate the difference characteristics of the 2-groups $C_{2^n}^r$.
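The constructions above can be sanity-checked in the smallest nontrivial case $R=\GR(2^2,1)=\IZ_4$; a small computational check of Theorems~\ref{t:RR} and \ref{t:RstarR} (illustrative only, not part of the proofs):

```python
from collections import Counter

N = 4  # R = Z_4 = GR(2^2, 1), so k = 2 and r = 1

def star(a, b):
    """Group operation on R * R: (x,y) * (x',y') = (x+x', y+y'+x*x')."""
    (x, y), (u, v) = a, b
    return ((x + u) % N, (y + v + x * u) % N)

def inverse(a):
    """The inverse of (x, y) in R * R is (-x, -y + x^2)."""
    x, y = a
    return ((-x) % N, (-y + x * x) % N)

def order(a):
    """Order of the element a in the group R * R."""
    g, m = a, 1
    while g != (0, 0):
        g, m = star(g, a), m + 1
    return m

elements = [(x, y) for x in range(N) for y in range(N)]
orders = Counter(order(a) for a in elements)

# Theorem t:RR: B = {(x, x*x)} is a difference basis for U(R) x R in R * R.
B = [(x, (x * x) % N) for x in range(N)]
diffs = {star(b1, inverse(b2)) for b1 in B for b2 in B}
units = [1, 3]  # U(Z_4)
covered = all((a, b) in diffs for a in units for b in range(N))
```

The order statistics come out as one element of order $1$, three of order $2$, four of order $4$ and eight of order $8$, exactly those of $C_{2^{k+1}}\times C_{2^{k-1}}=C_8\times C_2$, in agreement with Theorem~\ref{t:RstarR}.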
\begin{proposition}\label{p:Boolean} For any $n\in\IN$ the groups $C_2^{2n}$ and $C_2^{2n+1}$ have difference sizes $$\frac{1+\sqrt{2^{2n+3}-7}}2\le \Delta[C_2^{2n}]<2^{n+1}\mbox{ \ and \ }\frac{1+\sqrt{2^{2n+4}-7}}2\le \Delta[C_2^{2n+1}]<3\cdot 2^{n}$$and difference characteristics $$\sqrt{2}<\frac{1+\sqrt{2^{2n+3}-7}}{2^{n+1}}\le \eth[C_2^{2n}]<2\mbox{ \ and \ }\sqrt{2}<\frac{1+\sqrt{2^{2n+4}-7}}{\sqrt{2}\cdot 2^{n+1}}\le \eth[C_2^{2n+1}]<\frac{3}{\sqrt{2}}.$$ \end{proposition} \begin{proof} The lower bound $\frac{1+\sqrt{8|G|-7}}2\le \Delta[G]$ follows from Theorem~\ref{t:lower}. \smallskip The upper bound will be derived from Proposition~\ref{p:BGN}(3), which implies that $$\Delta[C_2^{2n}]<|C_2^n|+|C_2^n|=2\cdot 2^n$$ and $$\Delta[C_2^{2n+1}]<|C_2^n|+|C_2^{n+1}|=3\cdot 2^n.$$ These upper bounds imply $$\eth[C_2^{2n}]<2\mbox{ \ and \ }\eth[C_2^{2n+1}]<\frac{3}{\sqrt{2}}.$$ \end{proof} \begin{proposition}\label{p:C4n} For any $n\in\IN$ the group $C_4^n$ has the difference characteristic $$\eth[C_4^n]\le 1+\frac1{\sqrt{2^n}}\cdot\eth[C_2^n]-\frac1{2^n}< 1-\frac1{2^n}+\frac3{\sqrt{2^{n+1}}}.$$ \end{proposition} \begin{proof} Consider the numbers $k_1=\dots=k_n=2$ and $k_{n+1}=\dots=k_{2n}=0$, so that $\prod_{i=1}^{2n}C_{2^{k_i}}\cong C_4^n$. By Theorem~\ref{t:Ab2} applied with $k=1$ and $r=n$, $$\eth[C_4^n]=\eth\Big[\prod_{i=1}^{2n}C_{2^{k_i}}\Big]\le\eth[C_1]-\frac1{2^n}+ \frac1{\sqrt{2^n}}\eth\Big[\prod_{i=1}^nC_{2^{k_i-1}}\times\prod_{i=n+1}^{2n}C_{2^{k_i}}\Big]= 1-\frac1{2^n}+\frac1{\sqrt{2^n}}\eth[C_{2}^n]<1-\frac1{2^n}+\frac{3}{\sqrt{2^{n+1}}}. $$In the last inequality we used the upper bound $\eth[C_2^n]<\frac3{\sqrt{2}}$, proved in Proposition~\ref{p:Boolean}.
\end{proof} Theorem~\ref{t:Ab2}, Proposition~\ref{p:C4n} and Theorem~\ref{t:KL} imply: \begin{corollary} For any $k,n\in\IN$ the groups $C_{2^k}^{2n}$ and $C_{2^k}^{2n+1}$ have the difference characteristics $$\eth[C_{2^k}^{2n}]< \eth[C_4^n]-\frac1{2^{kn}}+\frac1{\sqrt{2^n}}\eth[C_{2^{k-1}}^n\times C_{2^k}^n]<1+\frac1{\sqrt{2^n}}\Big(\frac3{\sqrt{2}}+\frac4{\sqrt{3}}\Big)$$and $$\eth[C_{2^k}^{2n+1}]<\eth[C_4^n\times C_{2^k}]+\frac1{\sqrt{2^n}}\eth[C_{2^{k-1}}^n\times C_{2^k}^{n+1}]< \eth[C_{2^k}]\cdot\Big(1+\frac3{\sqrt{2^{n+1}}}\Big)+ \frac4{\sqrt{3\cdot 2^n}}. $$ \end{corollary} \section{The difference characteristic of the groups $R\times U(R)$} In this section we obtain an upper bound for the difference characteristics of the groups $R\times U(R)$, which are products of the additive group of a ring $R$ and the multiplicative group $U(R)$ of its units. \begin{theorem}\label{t:RUR} For any finite ring $R$ and its multiplicative group $U(R)$ of units the set $B=\{(x,x):x\in U(R)\}$ is a difference base for the set $A=\{(x,y)\in R\times U(R):\exists z\in U(R)\;\; (y-1)z=x\}$ in the group $R\times U(R)$. If the ring $R$ is local with maximal ideal $I_{\mathfrak m}$ and the residue field $F=R/I_{\mathfrak m}$, then $$\Delta[R\times U(R)]\le|U(R)|-2+\Delta[I_{\mathfrak m}\times U(R)]+\Delta[R\times(1+I_{\mathfrak m})]\le |U(R)|-2+\frac4{\sqrt{3}}\frac{|R|}{|F|}\big(\sqrt{|F|-1}+\sqrt{|F|}\big)$$and $$\eth[R\times U(R)]< \sqrt{1-\frac1{|F|}}+\frac{4}{\sqrt{3}}\Big(\frac1{\sqrt{|F|}}+\frac1{\sqrt{|F|-1}}\Big). $$ \end{theorem} \begin{proof} Given a pair $(x,y)\in A$, we must find two elements $a,b\in U(R)$ such that $(a-b,ab^{-1})=(x,y)$. Since $(x,y)\in A$, there exists $z\in U(R)$ such that $(y-1)z=x$. Then for the pair $(a,b):=(yz,z)$ we get the required equality $(a-b,ab^{-1})=(yz-z,yzz^{-1})=((y-1)z,y)=(x,y)$. Now assume that the ring $R$ is local and consider its (unique) maximal ideal $I_{\mathfrak m}$.
Let $\pi:R\to R/I_{\mathfrak m}$ be the homomorphism of $R$ onto its residue field $F:=R/I_{\mathfrak m}$. It follows that $|R|=|F|\cdot|I_{\mathfrak m}|$. The maximality of the ideal $I_{\mathfrak m}$ guarantees that $U(R)=R\setminus I_{\mathfrak m}$ and $1+I_{\mathfrak m}=\pi^{-1}(\pi(1))$ is a multiplicative subgroup of $U(R)$. We claim that $(I_{\mathfrak m}\times U(R))\cup(R\times (1+I_{\mathfrak m}))\cup A=R\times U(R)$. Indeed, if a pair $(x,y)\in R\times U(R)$ does not belong to $(I_{\mathfrak m}\times U(R))\cup(R\times(1+I_{\mathfrak m}))$, then $x\in U(R)$ and $y\notin 1+I_{\mathfrak m}$. It follows that $1-y\notin I_{\mathfrak m}$ and hence the element $1-y$ is invertible, so we can find $z=(1-y)^{-1}x\in U(R)$ and conclude that $(x,y)\in A$. Theorem~\ref{t:KL} and the subadditivity of the difference size proved in Proposition~\ref{p:BGN}(3) guarantee that $$ \begin{aligned} \Delta&[R\times U(R)]\le \Delta[A]+\Delta[I_{\mathfrak m}\times U(R)]+\Delta[R\times (1+I_{\mathfrak m})]-2\le\\ &\le |U(R)|-2+\frac4{\sqrt{3}}(\sqrt{|I_{\mathfrak m}|\cdot |U(R)|}+\sqrt{|R|\cdot|I_{\mathfrak m}|})= |U(R)|-2+\frac4{\sqrt{3}}\sqrt{|I_{\mathfrak m}|} (\sqrt{|R|-|I_{\mathfrak m}|}+\sqrt{|R|})=\\ &=|U(R)|-2+\frac4{\sqrt{3}}|I_{\mathfrak m}| \big(\sqrt{|R/I_{\mathfrak m}|-1}+\sqrt{|R/I_{\mathfrak m}|}\big)\le|U(R)|-2+\frac4{\sqrt{3}}\frac{|R|}{|F|} \big(\sqrt{|F|-1}+\sqrt{|F|}\big). \end{aligned} $$ Dividing $\Delta[R\times U(R)]$ by $\sqrt{|R\times U(R)|}=\sqrt{|R|(|R|-|I_{\mathfrak m}|)}=|R|\sqrt{1-\frac1{|F|}}$, we get the required upper bound for the difference characteristic $$ \eth[R\times U(R)]<\sqrt{\frac{|U(R)|}{|R|}}+\frac4{\sqrt{3}} \frac1{\sqrt{|F|^2-|F|}}(\sqrt{|F|-1}+\sqrt{|F|})= \sqrt{1-\frac1{|F|}}+\frac4{\sqrt{3}}\Big(\frac1{\sqrt{|F|}}+\frac1{\sqrt{|F|-1}}\Big). $$ \end{proof} Combining Theorem~\ref{t:RUR} with Theorem~\ref{t:R*} describing the structure of the multiplicative groups of the Galois rings $\GR(p^k,r)$, we get the following two corollaries.
\begin{corollary} Let $p$ be a prime number and $k,r$ be natural numbers such that either $p\ge 3$ or $p=2$ and $k\le 2$. The group $G=C_{p^k}^r\times C_{p^{k-1}}^r\times C_{p^r-1}$ has difference characteristic $$\eth[G]< \sqrt{1-\frac{1}{p^r}}+\frac4{\sqrt{3}}\Big(\frac1{\sqrt{p^r}}+\frac1{\sqrt{p^r-1}}\Big)=1+O\big(\tfrac1{p^{r/2}}\big).$$ \end{corollary} \begin{corollary} For any natural numbers $r$ and $k\ge 3$ the group $$G:=C_{2^k}^r\times C_{2^{k-1}}^{r-1}\times C_{2^{k-2}}\times C_2\times C_{2^r-1}$$ has difference characteristic $$\eth[G]< \sqrt{1-\frac{1}{2^r}}+\frac4{\sqrt{3}}\Big(\frac1{\sqrt{2^r}}+\frac1{\sqrt{2^r-1}}\Big)= 1+O\big(\tfrac1{2^{r/2}}\big).$$ \end{corollary} \section{The results of computer calculations} In Table~\ref{tab:abel} we present the results of computer calculations of the difference sizes of all non-cyclic Abelian groups $G$ of order $12\le|G|<96$. In this table $$lb[G]:=\left\lceil\frac{1+\sqrt{4|G|+4|G_2|-3}}2\,\right\rceil$$is the lower bound given in Corollary~\ref{c:lower}. \begin{table}[ht] \caption{Difference sizes of non-cyclic Abelian groups $G$ of order $12\le|G|<96$}\label{tab:abel} {\small \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $G\phantom{\big|^|\!\!\!}$& $(C_2)^2\times C_3$&$C_2\times C_8$ &$(C_4)^2$&$(C_2)^2\times C_4$&$(C_2)^4$& $C_2\times (C_3)^2$& $(C_2)^2\times C_{5}$\\ \hline $lb[G]$ & 5 &5&5 &6&6& 5 &6\\ $\Delta[G]$ & 5&5 &6 &6&6 & 5&6\\ $\eth[G]$& 1.4433... & 1.25 & 1.5 & 1.5 & 1.5 & 1.1785... & 1.3416...\\ \hline \hline $G\phantom{\big|^|\!\!\!}$ & $C_2{\times}C_3{\times}C_4$& $(C_2)^3\times C_3$ &$(C_5)^2$ &$C_3\times C_9$&$(C_3)^3$& $(C_2)^2\times C_{7}$&$C_2\times C_{16}$ \\ \hline $lb[G]$ & 6 & 6 & 6 &6&6 &6&7\\ $\Delta[G]$ & 6 & 6 & 6&6 &6&6&7\\ $\eth[G]$ & 1.2247... & 1.2247... & 1.2 & 1.1547... & 1.1547...& 1.1338...
& 1.2374...\\ \hline \hline $G\phantom{\big|^|\!\!\!}$& $C_4\times C_8$ & $(C_2)^2\times C_8$ & $C_2\times (C_{4})^2$ & $(C_2)^3\times C_{4}$&$(C_2)^5$& $(C_6)^2$&$(C_2)^2\times C_{9}$ \\ \hline $lb[G]$ & 7 & 7 & 7 & 8 &9 &7&7\\ $\Delta[G]$ & 7 & 7 & 8 & 8 &10&7&7 \\ $\eth[G]$ & 1.2374... & 1.2374... & 1.4142... & 1.4142... & 1.7677... & 1.1666...& 1.1666...\\ \hline \hline $G\phantom{\big|^|\!\!\!}$ & $(C_3)^2\times C_{4}$ & $(C_2)^3\times C_{5}$&$C_2{\times}C_4{\times}C_5$& $(C_2)^2\times C_{11}$ & $(C_3)^2\times C_{5}$& $\!\!C_2{\times}C_3{\times}C_8\!\!\!$& $C_3\times(C_4)^2$\\ \hline $lb[G]$ & 7 & 8& 7 & 8 & 8& 8&8\\ $\Delta[G]$ & 7 & 8& 8 & 8 & 8& 8&8\\ $\eth[G]$ & 1.1666... & 1.2649... & 1.2649... & 1.2060... & 1.1925... & 1.1547... & 1.1547...\\ \hline \hline $G\phantom{\big|^|\!\!\!}$& $\!\!(C_2)^2{\times}C_3{\times}C_4\!\!$&$(C_2)^4\times C_3$& $(C_7)^2$&$C_2\times (C_5)^2$& $(C_2)^2\times C_{13}$& $C_6\times C_{9}$& $C_2\times (C_3)^3$\\ \hline $lb[G]$ & 8 &9& 8&8 & 8& 8 & 8\\ $\Delta[G]$ & 9 &10& 9&8 & 9& 9 & 9 \\ $\eth[G]$ & 1.2990... & 1.4433... & 1.2857... & 1.1313...& 1.2480... & 1.2247... & 1.2247...\\ \hline \hline $G\phantom{\big|^|\!\!\!}$ & $C_2{\times}C_4{\times}C_7$ & $(C_2)^3\times C_{7}$ & $\!\!(C_2)^2{\times}C_3{\times}C_{5}\!\!\!$ & $(C_3)^2\times C_{7}$ & $C_2\times C_{32}$ & $C_4\times C_{16}$& $\!\!C_2{\times}C_4{\times}C_8\!\!\!$ \\ \hline $lb[G]$ & 9 & 9 & 9 & 9 & 9 & 9& 9 \\ $\Delta[G]$ & 9 & 10 & 9 & 9 & 10 & 10 & 10 \\ $\eth[G]$ & 1.2026... & 1.3363... & 1.1618... & 1.1338...
& 1.25 & 1.25 & 1.25\\ \hline \hline $G\phantom{\big|^|\!\!\!}$& $(C_2)^2{\times} C_{16}$ & $(C_8)^2$ & $(C_4)^3$ & $(C_2)^3\times C_{8}$ &$(C_2)^2{\times} (C_{4})^2\!\!$& $(C_2)^4\times C_{4}$ &$(C_2)^6$ \\ \hline $lb[G]$ & 9 & 9 & 9& 10 & 10 &11&12\\ $\Delta[G]$ & 10 & 10& 11& 11 & 12&12&14 \\ $\eth[G]$ & 1.25 & 1.25 & 1.375 & 1.375 & 1.5 & 1.5 & 1.75\\ \hline \hline $G\phantom{\big|^|\!\!\!}$& $(C_2)^2{\times} C_{17}$ & $C_2{\times}C_4{\times}C_9$& $(C_3)^2\times C_{8}$& $\!\!C_2{\times}(C_3)^2{\times}C_4\!\!\!$ & $(C_2)^3{\times}(C_3)^2\!\!$ & $(C_2)^3\times C_{9}$& $C_3\times (C_{5})^2$ \\ \hline $lb[G]$& 9 & 10 & 9 & 10 & 10 & 10 & 10\\ $\Delta[G]$& 10 & 10 & 10 & 10 & 11 & 11 & 10 \\ $\eth[G]$ & 1.2126... & 1.1785... & 1.1785... & 1.1785... & 1.2963... & 1.2963... & 1.1547...\\ \hline \hline $G\phantom{\big|^|\!\!\!}$& $(C_2)^2{\times} C_{19}$ & $C_2{\times}C_8{\times}C_5$&$(C_4)^2\times C_{5}$& $\!(C_2)^2{\times}C_4{\times}C_5\!\!$& $(C_2)^4\times C_{5}$& $(C_9)^2$& $(C_3)^4$\\ \hline $lb[G]$ & 10 & 10 & 10 & 10 &11&10 &10\\ $\Delta[G]$ & 11 & 11 & 11 & 12 &12&11 &12\\ $\eth[G]$ & 1.2617... & 1.2298... & 1.2298...& 1.3416... & 1.3416... & 1.2222... & 1.3333...\\ \hline \hline $G\phantom{\big|^|\!\!\!}$& $(C_3)^2\times C_9$ & $C_3\times C_{27}$ & $\!(C_2)^2{\times}C_3{\times}C_7\!\!\!$ & $(C_2)^3\times C_{11}$ & $C_2{\times}C_4{\times}C_{11}\!\!$ & $\!C_2{\times}(C_3)^2{\times}C_{5}\!\!\!$ & $(C_2)^2{\times}C_{23}$\\ \hline $lb[G]$& 10 & 10 & 10 & 11 & 10 & 10 & 11 \\ $\Delta[G]$& 11 & 11 & 11 & 12 & 12 & 11 & 12 \\ $\eth[G]$& 1.2222... & 1.2222... & 1.2001... & 1.2792... & 1.2792... & 1.1595...
& 1.2510...\\ \hline \end{tabular} } \end{table} \section{Acknowledgment} The authors would like to express their sincere thanks to Oleg Verbitsky who turned their attention to the theory of difference sets and their relation with difference bases, to Orest Artemovych for consultations on Galois rings, to Alex Ravsky for valuable discussions on difference sizes of product groups, and to MathOverflow users Seva and Jeremy Rickard for valuable comments.
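The entries of Table~\ref{tab:abel} can be re-checked independently by exhaustive search. The following Python sketch is an added illustration (the helper name `difference_size` is ours, not from the paper): it computes the difference size of a small Abelian group given as a product of cyclic factors.

```python
from itertools import combinations, product

def difference_size(orders):
    """Smallest size of a difference basis B (a set with B - B = G)
    for G = Z_{n_1} x ... x Z_{n_k}, found by exhaustive search.
    Only feasible for very small groups, but enough to re-check
    individual entries of the table."""
    G = list(product(*(range(n) for n in orders)))
    def sub(a, b):
        return tuple((x - y) % n for x, y, n in zip(a, b, orders))
    for k in range(1, len(G) + 1):
        # A k-set yields at most k(k-1)+1 differences, so skip hopeless sizes.
        if k * (k - 1) + 1 < len(G):
            continue
        for B in combinations(G, k):
            if len({sub(a, b) for a in B for b in B}) == len(G):
                return k
    return len(G)

print(difference_size([2, 2, 3]))  # 5 = Delta[(C_2)^2 x C_3], as in the table
print(difference_size([4, 4]))     # 6 = Delta[(C_4)^2], as in the table
```

Groups of order beyond roughly 20 already make this search slow; the calculations reported in the table for orders up to 96 require a more careful pruning of the search tree.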
\section{Introduction} \end{singlespace} \begin{singlespace} \noindent Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a simple basic classical Lie superalgebra over $\mathbb{C}$ and let $e\in\mathfrak{g}_{\bar{0}}$ be nilpotent. This paper forms part of a project to investigate the centralizer $\mathfrak{g}^{e}=\{x\in\mathfrak{g}:[x,e]=0\}$ of $e$ in $\mathfrak{g}$ and the centre $\mathfrak{z}(\mathfrak{g}^{e})=\{x\in\mathfrak{g}^{e}:[x,y]=0\text{ for all }y\in\mathfrak{g}^{e}\}$ of this centralizer. There has been a lot of research on the centralizer and the centre of the centralizer of a nilpotent element in the case of simple Lie algebras, but much less is known in the case of Lie superalgebras. The aim of this paper is to bring the level of understanding for Lie superalgebras closer to that for Lie algebras. More precisely, the present paper is planned to be the first of two papers in which we calculate bases of $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$. In this paper, we consider the case when $\mathfrak{g}$ is an exceptional Lie superalgebra, while the second paper will deal with the remaining basic classical Lie superalgebras $\mathfrak{sl}(m|n)$ and $\mathfrak{osp}(m|2n)$. \end{singlespace} \begin{singlespace} In the non-super case, work on $\mathfrak{g}^{e}$ dates back to 1966, when Springer \cite{Springer1966} worked with a simple algebraic group $G$ and considered the centralizer $G^{u}$ of a unipotent element $u\in G$. Many mathematicians have studied $G^{u}$ for different types of $G$ since then; the reader is referred to the introduction of \cite{Lawther2008} for an overview of other research on $G^{e}$ and $\mathfrak{g}^{e}$. Jantzen gave an explicit account of the structure of $\mathfrak{g}^{e}$ for classical Lie algebras $\mathfrak{g}$ in \cite{Jantzen2004a}. Seitz \cite{Seitz2004} pointed out that the dimension of $Z(G^{u})$ is of considerable interest.
In \cite{Lawther2008}, Lawther\textendash Testerman studied the centralizer $G^{u}$, especially its centre $Z(G^{u})$, and determined the dimension of the Lie algebra of $Z(G^{u})$ over a field of characteristic $0$ or a good prime. Using a $G$-equivariant homeomorphism, Lawther\textendash Testerman worked with a nilpotent element $e\in\mathrm{Lie}(G)$ rather than $u$. The study of the centre $\mathfrak{z}(\mathfrak{g}^{e})$ for classical Lie algebras $\mathfrak{g}$ over a field of characteristic $0$ was undertaken by Yakimova in \cite{Yakimova2009}, and Lawther\textendash Testerman \cite{Lawther2008} made use of this work to deal with the classical cases. \end{singlespace} Centralizers of nilpotent elements $e$ in Lie superalgebras $\mathfrak{g}$ for the case where $\mathfrak{g}=\mathfrak{gl}(m|n)$ were determined in \cite{Wang2009} over a field of prime characteristic. In \cite{Hoyt2012}, Hoyt claimed that the construction is identical in characteristic zero and further described the construction of $\mathfrak{g}^{e}$ for $\mathfrak{g}=\mathfrak{osp}(m|2n)$. The dimension of $\mathfrak{z}(\mathfrak{g}^{e})$ for exceptional Lie superalgebras remains unknown, and we attempt to shed some light on it here. In the rest of this introduction, we give a more detailed survey of our results. \begin{singlespace} In the present paper, let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a finite-dimensional simple Lie superalgebra of type $D(2,1;\alpha)$, $G(3)$ or $F(4)$ over $\mathbb{C}$. Let $G$ be the simply connected semisimple algebraic group over $\mathbb{C}$ such that $\mathrm{Lie}(G)=\mathfrak{g}_{\bar{0}}$. Then there is a representation $\rho:G\rightarrow\mathrm{GL}(\mathfrak{g}_{\bar{1}})$ such that $\mathrm{d}\rho:\mathrm{Lie}(G)\rightarrow\mathfrak{gl}(\mathfrak{g}_{\bar{1}})$ determines the adjoint action of $\mathfrak{g}_{\bar{0}}$ on $\mathfrak{g}_{\bar{1}}$. Note that $G$ is given explicitly in Table \ref{tab:G}.
\end{singlespace} \begin{table}[H] \begin{singlespace} \noindent \begin{centering} \begin{tabular}{|c||c|} \hline $\mathfrak{g}$ & $G$\tabularnewline \hline \hline $D(2,1;\alpha)$ & $\mathrm{SL}_{2}(\mathbb{C})\times\mathrm{SL}_{2}(\mathbb{C})\times\mathrm{SL}_{2}(\mathbb{C})$\tabularnewline \hline \hline $G(3)$ & $\mathrm{SL}_{2}(\mathbb{C})\times G_{2}$\tabularnewline \hline \hline $F(4)$ & $\mathrm{SL}_{2}(\mathbb{C})\times\mathrm{Spin}_{7}(\mathbb{C})$\tabularnewline \hline \end{tabular} \par\end{centering} \end{singlespace} \caption{\label{tab:G}Algebraic group $G$} \end{table} Let $e\in\mathfrak{g}_{\bar{0}}$ be nilpotent; we investigate the centralizer $\mathfrak{g}^{e}$ of $e$ in $\mathfrak{g}$, especially its centre $\mathfrak{z}(\mathfrak{g}^{e})$. In particular, we give bases for $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$ in Tables \ref{tab:results in D(2,1)}, \ref{tab:G(3)} and \ref{tab:F(4)}. Write $G^{e}=\{g\in G:geg^{-1}=e\}$ for the centralizer of $e$ in $G$. We also find a basis for $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\{x\in\mathfrak{z}(\mathfrak{g}^{e}):gxg^{-1}=x\text{ for all }g\in G^{e}\}$ in \S5.5 and \S6.7. Note that $e$ lies in an $\mathfrak{sl}(2)$-triple $\{e,h,f\}\subseteq\mathfrak{g}_{\bar{0}}$ by the Jacobson\textendash Morozov Theorem. We use $h$ to determine labelled Dynkin diagrams with respect to $e$. A full definition of labelled Dynkin diagrams with respect to $e$ is given in \S2.3. In contrast to the Lie algebra case, in general $e$ determines more than one labelled Dynkin diagram, depending on the choice of positive roots. Writing $\mathfrak{g}=\bigoplus_{j\in\mathbb{Z}}\mathfrak{g}(j)$ for its ad$h$-eigenspace decomposition, we can decompose $\mathfrak{g}^{e}$ into the direct sum of ad$h$-eigenspaces, i.e. $\mathfrak{g}^{e}=\bigoplus_{j\geq0}\mathfrak{g}^{e}(j)$.
We also describe the $\mathfrak{g}^{e}(0)$-module structure on each $\mathfrak{g}^{e}(j)$ for $j>0$ in Tables \ref{tab:g^e(0)-D(2,1;)}, \ref{tab:g^e(0)-G(3)} and \ref{tab:g^e(0)-F(4)}. \begin{singlespace} Our results can therefore be viewed as extensions of those obtained by Lawther and Testerman in \cite{Lawther2008} to the case of Lie superalgebras over a field of characteristic zero. They obtain four theorems as a consequence of their work. In this paper, we obtain analogues of Theorems 2\textendash 4 in \cite{Lawther2008} for exceptional Lie superalgebras. We view $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}$ as the correct replacement for $Z(G^{u})$ since $\mathrm{Lie}(Z(G^{e}))=\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}$ over a field of characteristic zero. Fix $\varDelta$ to be a labelled Dynkin diagram with respect to $e$. Let $n_{i}(\varDelta)$ be the number of nodes which have labels equal to $i$ in $\varDelta$. An interesting fact is that the choice of $\varDelta$ does not affect the following theorems, and that the labels in $\varDelta$ can only be $0$, $1$ or $2$. \end{singlespace} We first consider the case where $\varDelta$ only has even labels. \begin{thm} \begin{singlespace} \noindent Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a Lie superalgebra of type $D(2,1;\alpha)$, $G(3)$ or $F(4)$ and $e\in\mathfrak{g}_{\bar{0}}$ be nilpotent. Let $G$ be the algebraic group defined as in Table \ref{tab:G}. Assume $n_{1}(\varDelta)=0$. Then $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=n_{2}(\varDelta)=\dim\mathfrak{z}(\mathfrak{g}^{h})$. \end{singlespace} \end{thm} \begin{singlespace} \noindent In order to state Theorem 2, we define the \textit{$2$-free core of $\varDelta$} to be the sub-labelled Dynkin diagram $\varDelta_{0}$ obtained by removing all nodes with labels equal to $2$ from $\varDelta$.
\end{singlespace} \begin{thm} \begin{singlespace} Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a Lie superalgebra of type $D(2,1;\alpha)$, $G(3)$ or $F(4)$ and $e\in\mathfrak{g}_{\bar{0}}$ be nilpotent. Let $G$ be the algebraic group defined as in Table \ref{tab:G}. Let $\varDelta_{0}$ be the $2$-free core of $\varDelta$. Let $\mathfrak{g}_{0}$ be the subalgebra of $\mathfrak{g}$ generated by the root vectors corresponding to the simple roots in $\varDelta_{0}$. Then $\mathfrak{g}_{0}$ is a direct sum of simple Lie superalgebras and there exists a nilpotent orbit in $(\mathfrak{g}_{0})_{\bar{0}}$ having labelled Dynkin diagram $\varDelta_{0}$. Let $G_{0}$ be the simply connected semisimple algebraic group such that $\mathrm{Lie}(G_{0})=(\mathfrak{g}_{0})_{\bar{0}}$. Suppose $e_{0}\in(\mathfrak{g}_{0})_{\bar{0}}$ is a representative of this orbit. Then: 1. $\dim\mathfrak{g}^{e}-\dim\mathfrak{g}_{0}^{e_{0}}=n_{2}(\varDelta)$; 2. $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}-\dim\left(\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})\right)^{G_{0}^{e_{0}}}=n_{2}(\varDelta)$. \end{singlespace} \end{thm} \begin{singlespace} Our last theorem gives a more general result relating $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}$ and $\varDelta$. In this statement we use the notation for nilpotent elements as introduced later in \S4.1, \S5.1 and \S6.1. \end{singlespace} \begin{thm} \begin{singlespace} \noindent Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a Lie superalgebra of type $D(2,1;\alpha)$, $G(3)$ or $F(4)$ and $e\in\mathfrak{g}_{\bar{0}}$ be nilpotent. Let $a_{1},...,a_{l}$ be the labels in $\varDelta$.
Then \begin{equation} \dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\left\lceil \frac{1}{2}\sum_{i=1}^{l}a_{i}\right\rceil +\varepsilon\label{eq:15 theorem 3} \end{equation} \noindent where the value of $\varepsilon$ is equal to $0$ with the following exceptions: $\varepsilon=-1$ when $\mathfrak{g}=D(2,1;\alpha)$ and $e=E_{1}+E_{2}+E_{3}$, or $\mathfrak{g}=F(4)$ and $e=E+e_{(7)}$. \end{singlespace} \end{thm} This paper is organized as follows: In Section \ref{sec:preliminaries}, we recall some basic vocabulary of Lie superalgebras such as basic classical Lie superalgebras, root systems and Borel subalgebras. We also give a full definition of the labelled Dynkin diagram for a system of simple roots. In Section \ref{sec:general-method}, we recall some generalities on $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$ which help with calculations later. In Sections \ref{sec:D(2,1;)}\textendash \ref{sec:F(4)}, we recall a construction of the exceptional Lie superalgebras $\mathfrak{g}=D(2,1;\alpha)$, $G(3)$, $F(4)$ and use this to explicitly determine the centralizers $\mathfrak{g}^{e}$ and centres $\mathfrak{z}(\mathfrak{g}^{e})$ of centralizers of nilpotent even elements $e$ in $\mathfrak{g}$. We also give all possible Dynkin diagrams for $\mathfrak{g}$ and further determine the labelled Dynkin diagrams with respect to $e$. \begin{singlespace} \section{Preliminaries\label{sec:preliminaries}} \end{singlespace} \subsection{Basic classical Lie superalgebras} \begin{singlespace} \noindent Recall that a finite-dimensional simple Lie superalgebra $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ over $\mathbb{C}$ is called a basic classical Lie superalgebra if $\mathfrak{g}_{\bar{0}}$ is a reductive Lie algebra and there exists a non-degenerate even invariant supersymmetric bilinear form $(\cdotp,\cdotp)$ on $\mathfrak{g}$. Kac gives the classification of finite-dimensional complex simple Lie superalgebras in \cite[Theorem 5]{Kac1977}.
He shows that the simple basic classical Lie superalgebras that are not Lie algebras consist of the classical types, which form infinite families, and three exceptional types. In this paper, we consider the Lie superalgebras $D(2,1;\alpha)$, $G(3)$ and $F(4)$, which are called the exceptional Lie superalgebras. We will describe a construction of $D(2,1;\alpha)$, $G(3)$ and $F(4)$ in \S4.1, \S5.1 and \S6.3 respectively. \end{singlespace} \begin{singlespace} Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a finite-dimensional basic classical Lie superalgebra and $\mathfrak{h}$ be a Cartan subalgebra of $\mathfrak{g}$. The following concepts can be found in \cite[Chapter 1]{Cheng2012}. There is a root space decomposition $\mathfrak{g}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\mathfrak{h}^{*}}\mathfrak{g}_{\alpha}$ where $\mathfrak{h}=\mathfrak{g}_{0}=\{x\in\mathfrak{g}:[h,x]=0\ \text{for\ all}\ h\in\mathfrak{h}\}=\mathfrak{g}^{\mathfrak{h}}$ and $\mathfrak{g}_{\alpha}:=\{x\in\mathfrak{g}:[h,x]=\alpha(h)x\ \text{for\ all}\ h\in\mathfrak{h}\}$ are the corresponding weight spaces. Note that $\mathfrak{g}_{\alpha}$ is 1-dimensional. The set $\Phi=\{\alpha\in\mathfrak{h}^{*}:\alpha\neq0,\ \mathfrak{g}_{\alpha}\neq0\}$ is called the root system of $\mathfrak{g}$ and we say that $\mathfrak{g}_{\alpha}$ is the root space corresponding to the root $\alpha\in\Phi$. The even and odd roots are defined to be $\Phi_{\bar{0}}:=\{\alpha\in\Phi:\mathfrak{g}_{\alpha}\subseteq\mathfrak{g}_{\bar{0}}\}$ and $\Phi_{\bar{1}}:=\{\alpha\in\Phi:\mathfrak{g}_{\alpha}\subseteq\mathfrak{g}_{\bar{1}}\}$. A subset $\Phi^{+}\subseteq\Phi$ is a system of positive roots if for each root $\alpha\in\Phi$ exactly one of $\alpha$, $-\alpha$ is contained in $\Phi^{+}$; and for any two distinct roots $\alpha,\beta\in\Phi^{+}$, $\alpha+\beta\in\Phi$ implies that $\alpha+\beta\in\Phi^{+}$.
Given a system of positive roots $\Phi^{+}$, elements of $-\Phi^{+}$ form a system of negative roots. Note that $\Phi^{+}=\Phi_{\bar{0}}^{+}\cup\Phi_{\bar{1}}^{+}$. A system of simple roots $\varPi=\{\alpha_{1},...,\alpha_{l}\}\subseteq\Phi^{+}$ consists of all elements that cannot be written as the sum of two elements of $\Phi^{+}$. Note that $l$ does not depend on the choice of $\varPi$, and we call it the rank of $\mathfrak{g}$. \end{singlespace} \subsection{Dynkin diagrams\label{subsec:Dynkin-diagrams}} \begin{singlespace} \noindent Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a basic classical Lie superalgebra with a Cartan subalgebra $\mathfrak{h}\subseteq\mathfrak{g}_{\bar{0}}$. There exists a \textit{triangular decomposition} $\mathfrak{g}=\mathfrak{n}^{-}\oplus\mathfrak{h}\oplus\mathfrak{n}^{+}$ where $\mathfrak{n}^{+}$ (resp. $\mathfrak{n}^{-}$) is a subalgebra such that $[\mathfrak{h},\mathfrak{n}^{+}]\subseteq\mathfrak{n}^{+}$ (resp. $[\mathfrak{h},\mathfrak{n}^{-}]\subseteq\mathfrak{n}^{-}$) and $\dim\mathfrak{n}^{+}=\dim\mathfrak{n}^{-}$, see \cite[Section 2]{Penkov}. The solvable subalgebra $\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{n}^{+}$ is called a Borel subalgebra of $\mathfrak{g}$. We work with Borel subalgebras up to conjugacy by $G$. Note that there are in general many inequivalent conjugacy classes of Borel subalgebras and every Borel subalgebra containing $\mathfrak{h}$ determines a corresponding system of positive roots $\Phi^{+}$. Consequently $\mathfrak{b}$ determines a system of simple roots $\varPi$. Then for each conjugacy class of Borel subalgebras of $\mathfrak{g}$, a simple root system can be transformed into an equivalent one with the same Dynkin diagram under the action of the Weyl group $W$ of $\mathfrak{g}$, see \cite[Subsection 2.3]{Frappat1989}.
\end{singlespace} \begin{singlespace} We next recall the concept of the Dynkin diagram as defined for example in \cite[Section 2.2]{Frappat1989}. We know that there exists a non-degenerate even invariant supersymmetric bilinear form $(\cdotp,\cdotp)$ on $\mathfrak{g}$. One can check that $(\cdotp,\cdotp)$ restricts to a non-degenerate symmetric bilinear form on $\mathfrak{h}$. Therefore, there exists an isomorphism from $\mathfrak{h}$ to $\mathfrak{h}^{*}$ which provides a symmetric bilinear form on $\mathfrak{h}^{*}$. Then the \textit{Dynkin diagram} of a Lie superalgebra $\mathfrak{g}$ with a simple root system $\varPi$ is a graph whose vertices are labelled by $\varPi$, with $\mu_{ij}$ lines between the vertices labelled by the simple roots $\alpha_{i}$ and $\alpha_{j}$, where: \begin{equation} \mu_{ij}=\begin{cases} \vert(\alpha_{i},\alpha_{j})\vert & \text{if }(\alpha_{i},\alpha_{i})=(\alpha_{j},\alpha_{j})=0,\\ \frac{2\vert(\alpha_{i},\alpha_{j})\vert}{\min\{\vert(\alpha_{i},\alpha_{i})\vert,\vert(\alpha_{j},\alpha_{j})\vert\}} & \text{if }(\alpha_{i},\alpha_{i})(\alpha_{j},\alpha_{j})\neq0,\\ \frac{2\vert(\alpha_{i},\alpha_{j})\vert}{\min\{\vert(\alpha_{k},\alpha_{k})\vert:\alpha_{k}\in\Phi,\ (\alpha_{k},\alpha_{k})\neq0\}} & \text{if }(\alpha_{i},\alpha_{i})\neq0\text{ and }(\alpha_{j},\alpha_{j})=0. \end{cases}\label{eq:lines-=0003BC} \end{equation} We say a root $\alpha\in\Phi$ is \textit{isotropic} if $(\alpha,\alpha)=0$ and is \textit{non-isotropic} if $(\alpha,\alpha)\neq0$. We associate a white node $\ocircle$ to each even root, a grey node $\varotimes$ to each odd isotropic root and a black node $\newmoon$ to each odd non-isotropic root.
Moreover, when $\mu_{ij}>1$, we put an arrow pointing from the vertex labelled by $\alpha_{i}$ to the vertex labelled by $\alpha_{j}$ if $(\alpha_{i},\alpha_{i})(\alpha_{j},\alpha_{j})\neq0$ and $(\alpha_{i},\alpha_{i})>(\alpha_{j},\alpha_{j})$ or if $(\alpha_{i},\alpha_{i})=0,(\alpha_{j},\alpha_{j})\neq0$ and $\vert(\alpha_{j},\alpha_{j})\vert<2$, or pointing from the vertex labelled by $\alpha_{j}$ to the vertex labelled by $\alpha_{i}$ if $(\alpha_{i},\alpha_{i})=0,(\alpha_{j},\alpha_{j})\neq0$ and $\vert(\alpha_{j},\alpha_{j})\vert>2$. If the value of $\mu_{ij}$ is not a natural number, then we label the edge between the vertices corresponding to the roots $\alpha_{i}$ and $\alpha_{j}$ with $\mu_{ij}$ instead of drawing multiple lines between them. Note that the Dynkin diagram depends on $\varPi$ up to conjugacy; thus Dynkin diagrams of $\mathfrak{g}$ for different choices of simple roots can be different. \end{singlespace} \subsection{Labelled Dynkin diagrams\label{subsec:Lablled-Dynkin-diagrams}} \noindent Let $e\in\mathfrak{g}_{\bar{0}}$ be nilpotent. There exists an $\mathfrak{sl}(2)$-triple $\{e,h,f\}\subseteq\mathfrak{g}_{\bar{0}}$ by the Jacobson\textendash Morozov Theorem, see for example \cite[Theorem 3.3.1]{Collingwood1993}. An $\mathfrak{sl}(2)$-triple determines a grading on $\mathfrak{g}$ according to the eigenvalues of $\mathrm{ad}h$; thus we can decompose $\mathfrak{g}$ into its ad$h$-eigenspaces $\mathfrak{g}=\bigoplus_{j\in\mathbb{Z}}\mathfrak{g}(j)$ where $\mathfrak{g}(j)=\{x\in\mathfrak{g}:[h,x]=jx\}$. We can choose a Borel subalgebra $\mathfrak{b}\subseteq\bigoplus_{j\geq0}\mathfrak{g}(j)$ and a Cartan subalgebra $\mathfrak{h}\subseteq\mathfrak{b}$. Then we obtain the corresponding system of positive roots $\Phi^{+}$ and a system of simple roots $\varPi=\{\alpha_{1},...,\alpha_{l}\}$ which will give a Dynkin diagram of $\mathfrak{g}$.
Furthermore, for each $i=1,...,l$, note that $\mathfrak{g}_{\alpha_{i}}$ is the root space corresponding to $\alpha_{i}$ and $\mathfrak{g}_{\alpha_{i}}\subseteq\mathfrak{g}(j_{i})$ for some $j_{i}\geq0$. Hence, we have $\alpha_{i}(h)\geq0$ for $i=1,...,l$. \begin{defn} \begin{singlespace} \noindent The \textit{labelled Dynkin diagram} $\varDelta$ of $e$ determined by $\varPi$ is given by taking the Dynkin diagram of $\mathfrak{g}$ and labelling each node $\alpha$ with $\alpha(h)$. \end{singlespace} \end{defn} \begin{singlespace} \section{Generalities on $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$\label{sec:general-method}} \end{singlespace} \begin{singlespace} \noindent Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ be a basic classical Lie superalgebra and $e\in\mathfrak{g}_{\bar{0}}$ be nilpotent. In this section, we give an overview of some general methods for calculating $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$. \end{singlespace} \begin{singlespace} Note that any element $x\in\mathfrak{g}$ can be written as $x=x_{\bar{0}}+x_{\bar{1}}$ such that $x_{\bar{i}}\in\mathfrak{g}_{\bar{i}}$. For a nilpotent element $e\in\mathfrak{g}_{\bar{0}}$, if $[x,e]=0$ then $[x,e]=[x_{\bar{0}},e]+[x_{\bar{1}},e]=0$. This implies that $[x_{\bar{0}},e]=[x_{\bar{1}},e]=0$ since $[x_{\bar{0}},e]\in\mathfrak{g}_{\bar{0}}$ and $[x_{\bar{1}},e]\in\mathfrak{g}_{\bar{1}}$. Hence $\mathfrak{g}^{e}=\mathfrak{g}_{\bar{0}}^{e}\oplus\mathfrak{g}_{\bar{1}}^{e}$. For a nilpotent element $e\in\mathfrak{g}_{\bar{0}}$, recall that there exists an $\mathfrak{sl}(2)$-triple $\{e,h,f\}\subseteq\mathfrak{g}_{\bar{0}}$ as noted in \S2.3 and any two $\mathfrak{sl}(2)$-triples containing $e$ are conjugate under the action of the group $G^{e}$. We have that $\mathfrak{g}$ is a module for $\mathfrak{s}=\left\langle e,h,f\right\rangle $ via the adjoint action. 
Define $V^{\mathfrak{sl}}(d)$ to be the $(d+1)$-dimensional simple $\mathfrak{sl}(2)$-module with highest weight $d$. By the representation theory of $\mathfrak{sl}(2)$, we can decompose $\mathfrak{g}$ into a direct sum of finite-dimensional $\mathfrak{s}$-submodules $\mathfrak{g}^{i}$ and each of them is isomorphic to $V^{\mathfrak{sl}}(d_{i})$ for some $d_{i}\in\mathbb{Z}$ and $d_{i}\geq0$. The element $h$ of the $\mathfrak{sl}(2)$-triple is semisimple and the eigenvalues of $h$ on $\mathfrak{g}^{i}$ are $d_{i},d_{i}-2,...,-(d_{i}-2),-d_{i}$. The only vectors in $\mathfrak{g}^{i}$ annihilated by $e$ are the multiples of the highest weight vector, i.e. if $\mathfrak{g}^{i}$ has basis $\{x_{d_{i}}^{i},x_{d_{i}-2}^{i},...,x_{-d_{i}+2}^{i},x_{-d_{i}}^{i}\}$ for $i=1,2,...,r$, then the vectors annihilated by $e$ are $\left\langle x_{d_{i}}^{i}\right\rangle $. Thus the vectors in $\mathfrak{g}$ centralized by $e$ are $\left\langle x_{d_{1}}^{1},x_{d_{2}}^{2},...,x_{d_{r}}^{r}\right\rangle $ and they have ad$h$-eigenvalues $d_{1},d_{2},...,d_{r}$. Hence, from the $\mathrm{ad}h$-eigenspace decomposition of $\mathfrak{g}$ we determine the $\mathrm{ad}h$ eigenvalues of elements of $\mathfrak{g}^{e}$. We consider $\mathfrak{sl}(2)$ frequently in the following sections, so we fix the notation $\mathfrak{sl}(2)=\left\langle E,H,F\right\rangle $ where \[ E=\begin{pmatrix}0 & 1\\ 0 & 0 \end{pmatrix},H=\begin{pmatrix}1 & 0\\ 0 & -1 \end{pmatrix},F=\begin{pmatrix}0 & 0\\ 1 & 0 \end{pmatrix}. \] The commutator relations between basis elements for $\mathfrak{sl}(2)$ are $[H,E]=2E$, $[H,F]=-2F$ and $[E,F]=H$. Let $V$ be a two-dimensional vector space with basis $v_{1}=(1,0)^{t}$ and $v_{-1}=(0,1)^{t}$. 
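The count of irreducible summands can be checked numerically in a small case (an added illustration, not part of the paper's setup): for $\mathfrak{g}=\mathfrak{gl}(3)$ and $e$ the regular nilpotent element, $\mathfrak{g}$ decomposes as $V^{\mathfrak{sl}}(4)\oplus V^{\mathfrak{sl}}(2)\oplus V^{\mathfrak{sl}}(0)$, so $\dim\mathfrak{g}^{e}=3$. The sketch below computes $\dim\mathfrak{g}^{e}$ directly as the kernel dimension of $\mathrm{ad}\,e$, using exact rational Gaussian elimination.

```python
from fractions import Fraction

def bracket_matrix(e):
    """Matrix of ad(e): x -> e x - x e on gl(n), in the basis of
    elementary matrices E_{ij} (row-major order).
    Uses [e, E_{kl}] = sum_i e[i][k] E_{il} - sum_j e[l][j] E_{kj}."""
    n = len(e)
    idx = lambda i, j: i * n + j
    M = [[Fraction(0)] * (n * n) for _ in range(n * n)]
    for k in range(n):
        for l in range(n):
            col = idx(k, l)
            for i in range(n):
                M[idx(i, l)][col] += Fraction(e[i][k])
            for j in range(n):
                M[idx(k, j)][col] -= Fraction(e[l][j])
    return M

def rank(M):
    """Rank by exact Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

n = 3
e = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # regular nilpotent in gl(3)
dim_centralizer = n * n - rank(bracket_matrix(e))
print(dim_centralizer)  # 3: the centralizer is spanned by I, e, e^2
```

The answer $3$ agrees with the number of irreducible $\mathfrak{s}$-summands, matching the highest-weight-vector count described above.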
When describing the $\mathfrak{g}^{e}(0)$-module structure on each $\mathfrak{g}^{e}(j)$ for $j>0$, we need the following lemma: \end{singlespace} \begin{lem} \begin{singlespace} \noindent \textup{\label{lem:osp(1,2)}Let $A=A_{\bar{0}}\oplus A_{\bar{1}}$ be a Lie superalgebra where $\{u_{-2},u_{0},u_{2}\}$ is a basis of $A_{\bar{0}}$ and $\{u_{-1},u_{1}\}$ is a basis of $A_{\bar{1}}$ such that: (1) $[u_{0},u_{i}]=a_{i}u_{i}$ for $i=\pm1,\pm2$; (2) $[u_{1},u_{1}]=au_{2}$ and $[u_{-1},u_{-1}]=bu_{-2}$; (3) $[u_{2},u_{-2}]=cu_{0}$, where $a_{i},a,b,c\neq0$. Then $A$ is simple and $A\cong\mathfrak{osp}(1|2)$.} \end{singlespace} \end{lem} \begin{singlespace} \noindent \begin{proof}Let $I$ be a non-zero ideal of $A$. Then $I$ is a direct sum of $\mathrm{ad}u_{0}$-eigenspaces, so $u_{i}\in I$ for some $i$. If $i=0$, then condition (1) implies that $I=A$. If $i=\pm2$, then condition (3) implies that $u_{0}\in I$ and thus $I=A$. If $i=\pm1$, then condition (2) implies that $u_{-2}$ or $u_{2}$ lies in $I$; thus $u_{0}\in I$ and $I=A$. Therefore, $A$ is simple. By the classification theorem of simple Lie superalgebras in \cite[Theorem 5]{Kac1977}, we deduce that $A\cong\mathfrak{osp}(1|2)$.\end{proof} \end{singlespace} \begin{singlespace} We consider the representations of $\mathfrak{osp}(1|2)$ frequently in the following sections. As shown in \cite[Section 2]{Pais1975}, all finite-dimensional representations of $\mathfrak{osp}(1|2)$ are completely reducible, and the irreducible representations are constructed there. We recall that the irreducible representations of $\mathfrak{osp}(1|2)$ are parametrized by $l\in\{\frac{a}{2}:a\in\mathbb{Z}_{\geq0}\}$ and we write $V^{\mathfrak{osp}}(l)$ for the representation corresponding to $l$. Then $\dim V^{\mathfrak{osp}}(l)=4l+1$.
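For later use it is helpful to record how $V^{\mathfrak{osp}}(l)$ restricts to the even subalgebra. This is a standard fact (stated here as an aside, with $V^{\mathfrak{sl}}(d)$ the $(d+1)$-dimensional simple $\mathfrak{sl}(2)$-module as above):

```latex
% Restriction of the irreducible osp(1|2)-module of highest weight l (l >= 1/2)
% to the even part sl(2) = <u_2, u_0, u_{-2}>: the even and odd weight spaces
% each form a single sl(2)-string.
\[
V^{\mathfrak{osp}}(l)\big|_{\mathfrak{sl}(2)}
  \cong V^{\mathfrak{sl}}(2l)\oplus V^{\mathfrak{sl}}(2l-1),
\qquad
\dim V^{\mathfrak{osp}}(l)=(2l+1)+2l=4l+1.
\]
```

For $l=0$ the module is trivial, and in all cases the dimension count agrees with the formula $\dim V^{\mathfrak{osp}}(l)=4l+1$ above.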
We know that $\mathfrak{osp}(1|2)$ is $5$-dimensional with basis $\{u_{-2},u_{-1},u_{0},u_{1},u_{2}\}$. The eigenvalues of $u_{0}$ on $V^{\mathfrak{osp}}(l)$ are $l,l-\frac{1}{2},\dots,-l$. From now on we write $\mathfrak{z}=\mathfrak{z}(\mathfrak{g}^{e})$. Given $x=x_{\bar{0}}+x_{\bar{1}}\in\mathfrak{z}$, for any $y=y_{\bar{0}}+y_{\bar{1}}\in\mathfrak{g}^{e}$, we have $[x,y_{\bar{0}}]=[x_{\bar{0}},y_{\bar{0}}]+[x_{\bar{1}},y_{\bar{0}}]=0$. Since $[x_{\bar{0}},y_{\bar{0}}]\in\mathfrak{g}_{\bar{0}}^{e}$ and $[x_{\bar{1}},y_{\bar{0}}]\in\mathfrak{g}_{\bar{1}}^{e}$, we have $[x_{\bar{0}},y_{\bar{0}}]=[x_{\bar{1}},y_{\bar{0}}]=0$. Similarly we have $[x_{\bar{0}},y_{\bar{1}}]=[x_{\bar{1}},y_{\bar{1}}]=0$. Therefore, $x_{\bar{0}},x_{\bar{1}}\in\mathfrak{z}$ and thus $\mathfrak{z}=\mathfrak{z}_{\bar{0}}\oplus\mathfrak{z}_{\bar{1}}$. Moreover, we can decompose $\mathfrak{z}$ into the direct sum of ad$h$-eigenspaces in each case, i.e. $\mathfrak{z}=\bigoplus_{j\geq0}\mathfrak{z}(j)$, where the sum runs over all ad$h$-eigenvalues $j\geq0$. \end{singlespace} In \S5.5 and \S6.7, we consider the adjoint action of the group $G^{e}$ on $\mathfrak{z}$ for $\mathfrak{g}=G(3)$ and $F(4)$. According to \cite[Section 5.10]{Jantzen2004a}, we have that $G^{e}$ is the semidirect product as an algebraic group of the reductive group $C^{e}=G^{e}\cap G^{h}$ and $R^{e}$, the unipotent radical of $G^{e}$. Denote the connected component of $G^{e}$ (resp. $C^{e}$) containing the identity by $(G^{e})^{\circ}$ (resp. $(C^{e})^{\circ}$). The group $R^{e}$ is connected by \cite[Section 5.10]{Jantzen2004a}, thus we get $G^{e}/(G^{e})^{\circ}\cong C^{e}/(C^{e})^{\circ}$. Since the adjoint action of $\mathfrak{g}^{e}$ on itself is the differential of the adjoint action of $G^{e}$ on $\mathfrak{g}^{e}$, we have $\mathfrak{z}=\{x\in\mathfrak{g}^{e}:g\cdot x=x\text{ for all }g\in(G^{e})^{\circ}\}$.
Therefore, there is an action of $G^{e}/(G^{e})^{\circ}$ on $\mathfrak{z}$ and $\mathfrak{z}^{G^{e}}\subseteq\mathfrak{z}^{G^{e}/(G^{e})^{\circ}}\subseteq\mathfrak{z}$. In order to determine $\mathfrak{z}^{G^{e}}$, it suffices to consider the action of elements of $G^{e}/(G^{e})^{\circ}$ on $\mathfrak{z}$. \begin{singlespace} \section{Exceptional Lie superalgebras $D(2,1;\alpha)$\label{sec:D(2,1;)}} \end{singlespace} \begin{singlespace} \noindent In this section, we describe an explicit construction of the Lie superalgebras $\mathfrak{g}=D(2,1;\alpha)$ following \cite{M.Scheunert1976} and \cite[Section 4.2]{Musson2012}. We also give representatives of nilpotent orbits $e\in D(2,1;\alpha)_{\bar{0}}$. We use the explicit construction in \S4.1 to determine $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$. The labelled Dynkin diagrams $\varDelta$ with respect to each nilpotent element are drawn afterwards. \end{singlespace} \subsection{Structure of Lie superalgebras $D(2,1;\alpha)$\label{subsec:Structure-of-D(2,1;)}} \begin{singlespace} \noindent The Lie superalgebras $D(2,1;\alpha)$ with $\alpha\in\mathbb{C}\setminus\{0,-1\}$ form a one-parameter family of superalgebras of dimension $17$. In \cite{M.Scheunert1976}, Scheunert denotes these algebras by $\Gamma(\sigma_{1},\sigma_{2},\sigma_{3})$ where $\sigma_{1},\sigma_{2},\sigma_{3}$ are complex numbers such that $\sigma_{1}+\sigma_{2}+\sigma_{3}=0$. According to \cite[Section 4.2]{Musson2012}, the Lie superalgebra $\Gamma(\sigma_{1},\sigma_{2},\sigma_{3})$ is simple if and only if $\sigma_{i}\neq0$ for $i=1,2,3$. If there exists another triple $(\sigma_{1}^{'},\sigma_{2}^{'},\sigma_{3}^{'})$ such that $\Gamma(\sigma_{1},\sigma_{2},\sigma_{3})\cong\Gamma(\sigma_{1}^{'},\sigma_{2}^{'},\sigma_{3}^{'})$, then there must exist a permutation $\rho$ of $\{1,2,3\}$ and a nonzero complex number $c$ such that $\sigma_{i}^{'}=c\sigma_{\rho(i)}$ for $i=1,2,3$.
This implies that the $\Gamma(\sigma_{1},\sigma_{2},\sigma_{3})$ form a one-parameter family and we have that $\Gamma(\sigma_{1},\sigma_{2},\sigma_{3})=D(2,1;\alpha)$ for a specific choice of $\sigma_{1},\sigma_{2},\sigma_{3}$. For any $\alpha\in\mathbb{C}\setminus\{0,-1\}$, we have $D(2,1;\alpha)=\Gamma(1+\alpha,-1,-\alpha)\cong\Gamma(\frac{1+\alpha}{\alpha},-1,-\frac{1}{\alpha})\cong\Gamma(-\alpha,-1,1+\alpha)$. \end{singlespace} \begin{singlespace} For $i=1,2,3$, take $V_{i}$ to be a copy of $V$ where $V$ is defined in \S3. Let $\psi_{i}$ be the non-degenerate skew-symmetric bilinear form on $V_{i}$ defined by $\psi_{i}(v_{1},v_{-1})=1$. We also define a bilinear map $p_{i}:V_{i}\times V_{i}\rightarrow\mathfrak{sl}(2)$ by $p_{i}(x,y)(z)=\psi_{i}(y,z)x-\psi_{i}(z,x)y$ for $x,y,z\in V_{i}$. We can calculate that $p_{i}(v_{1},v_{1})=2E$, $p_{i}(v_{1},v_{-1})=-H$ and $p_{i}(v_{-1},v_{-1})=-2F$. By definition, $\mathfrak{g}=D(2,1;\alpha)=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$, where \[ \mathfrak{g}_{\bar{0}}=\mathfrak{sl}(2)\oplus\mathfrak{sl}(2)\oplus\mathfrak{sl}(2)\text{ and }\mathfrak{g}_{\bar{1}}=V_{1}\otimes V_{2}\otimes V_{3}. \] \end{singlespace} \begin{singlespace} \noindent Note that $\mathfrak{g}_{\bar{0}}$ is a Lie algebra thus has Lie bracket $\left[\cdot,\cdot\right]:\mathfrak{g}_{\bar{0}}\times\mathfrak{g}_{\bar{0}}\rightarrow\mathfrak{g}_{\bar{0}}$. 
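The values $p_{i}(v_{1},v_{1})=2E$, $p_{i}(v_{1},v_{-1})=-H$ and $p_{i}(v_{-1},v_{-1})=-2F$ quoted above follow mechanically from the definition of $p_{i}$; a short numerical check (an aside, not part of the argument):

```python
import numpy as np

# Basis v_1 = (1,0)^t, v_{-1} = (0,1)^t; skew form with psi(v_1, v_{-1}) = 1.
v1, vm1 = np.array([1, 0]), np.array([0, 1])
PSI = np.array([[0, 1], [-1, 0]])

def psi(x, y):
    return x @ PSI @ y

def p(x, y):
    # p(x, y)(z) = psi(y, z) x - psi(z, x) y, assembled as a 2x2 matrix
    # whose columns are the images of the basis vectors v_1, v_{-1}.
    return np.column_stack([psi(y, z) * x - psi(z, x) * y for z in (v1, vm1)])

E = np.array([[0, 1], [0, 0]])
H = np.array([[1, 0], [0, -1]])
F = np.array([[0, 0], [1, 0]])

assert np.array_equal(p(v1, v1), 2 * E)
assert np.array_equal(p(v1, vm1), -H)
assert np.array_equal(p(vm1, vm1), -2 * F)
```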
Let $x=(x_{1},x_{2},x_{3})\in\mathfrak{g}_{\bar{0}}$ and $v=v_{1}\otimes v_{2}\otimes v_{3}\in\mathfrak{g}_{\bar{1}}$. Then the bracket $\left[\cdot,\cdot\right]:\mathfrak{g}_{\bar{0}}\times\mathfrak{g}_{\bar{1}}\rightarrow\mathfrak{g}_{\bar{1}}$ is defined by $[x,v]:=x\cdot v=x_{1}v_{1}\otimes v_{2}\otimes v_{3}+v_{1}\otimes x_{2}v_{2}\otimes v_{3}+v_{1}\otimes v_{2}\otimes x_{3}v_{3}.$ According to equation (4.2.1) in \cite{Musson2012}, the bracket $\left[\cdot,\cdot\right]:\mathfrak{g}_{\bar{1}}\times\mathfrak{g}_{\bar{1}}\rightarrow\mathfrak{g}_{\bar{0}}$ is given by \begin{align*} [v_{1}\otimes v_{2}\otimes v_{3},u_{1}\otimes u_{2}\otimes u_{3}]={} & \sigma_{1}\psi_{2}(v_{2},u_{2})\psi_{3}(v_{3},u_{3})p_{1}(v_{1},u_{1})\\ & +\sigma_{2}\psi_{1}(v_{1},u_{1})\psi_{3}(v_{3},u_{3})p_{2}(v_{2},u_{2})\\ & +\sigma_{3}\psi_{1}(v_{1},u_{1})\psi_{2}(v_{2},u_{2})p_{3}(v_{3},u_{3}), \end{align*} where $v_{i},u_{i}\in V_{i}$. \end{singlespace} \begin{singlespace} Next we give a basis for $\mathfrak{g}$. We first fix the following notation: let $E_{1}=(E,0,0)$, $E_{2}=(0,E,0)$ and $E_{3}=(0,0,E)$. Similarly, we denote $F_{1}=(F,0,0)$, $F_{2}=(0,F,0)$, $F_{3}=(0,0,F)$, $H_{1}=(H,0,0)$, $H_{2}=(0,H,0)$ and $H_{3}=(0,0,H)$. Clearly, $\mathfrak{g}_{\bar{0}}$ has a basis $\{E_{i},H_{i},F_{i}:i=1,2,3\}$ and $\mathfrak{g}_{\bar{1}}$ has a basis $\{v_{i}\otimes v_{j}\otimes v_{k}:i,j,k=\pm1\}$. In the remaining subsections, we write $v_{i,j,k}$ in place of $v_{i}\otimes v_{j}\otimes v_{k}$ for $i,j,k\in\{\pm1\}$. \end{singlespace} \subsection{Root system and Dynkin diagrams for $D(2,1;\alpha)$\label{subsec:Root-system-D(2,1)}} \begin{singlespace} \noindent We follow the construction of the root system of $D(2,1;\alpha)$ given in \cite[Appendix A]{Iohara2001}.
A Lie superalgebra of type $D(2,1;\alpha)$ has root system \[ \Phi_{\bar{0}}=\{\pm2\beta_{1},\pm2\beta_{2},\pm2\beta_{3}\}\text{ and\ }\Phi_{\bar{1}}=\{i\beta_{1}+j\beta_{2}+k\beta_{3}:i,j,k=\pm1\}, \] \noindent where $\{\beta_{1},\beta_{2},\beta_{3}\}$ is an orthogonal basis such that $(\beta_{1},\beta_{1})=\frac{1}{2}$, $(\beta_{2},\beta_{2})=-\frac{1}{2}\alpha-\frac{1}{2}$ and $(\beta_{3},\beta_{3})=\frac{1}{2}\alpha$. The corresponding root vectors are listed below: \end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{longtable}[c]{|c||c||c||c|} \hline \multicolumn{1}{|c||}{Roots} & $2\beta_{i},i=1,2,3$ & $-2\beta_{i},i=1,2,3$ & $i\beta_{1}+j\beta_{2}+k\beta_{3}:i,j,k\in\{\pm1\}$\tabularnewline \hline \endfirsthead \hline Root vectors & $E_{i}$ & $F_{i}$ & $v_{i,j,k}$ \tabularnewline \hline \end{longtable} \par\end{center} \end{singlespace} \noindent We can check that all the odd roots in $D(2,1;\alpha)$ are isotropic. In the following table, we give all possible Dynkin diagrams with respect to different systems of simple roots based on \cite[Section 2.20]{Frappat1996}. 
\begin{longtable}[c]{|>{\centering}m{7cm}||>{\centering}m{5cm}|} \caption{Dynkin diagrams for $D(2,1;\alpha)$} \tabularnewline \endfirsthead \hline Simple systems $\varPi=\{\alpha_{1},\alpha_{2},\alpha_{3}\}$ & Dynkin diagrams\tabularnewline \hline \hline $\{2\beta_{1},-\beta_{1}+\beta_{2}-\beta_{3},2\beta_{3}\}$ & Figure 4.1 \includegraphics[scale=0.8]{\string"DD_TypeD_1\string".PNG}\tabularnewline \hline \hline $\{2\beta_{1},-\beta_{1}-\beta_{2}+\beta_{3},2\beta_{2}\}$ & Figure 4.2 \includegraphics[scale=0.8]{\string"DD_TypeD_2\string".PNG}\tabularnewline \hline \hline $\{2\beta_{3},\beta_{1}-\beta_{2}-\beta_{3},2\beta_{2}\}$ & Figure 4.3 \includegraphics[scale=0.8]{\string"DD_TypeD_3\string".PNG}\tabularnewline \hline \hline $\{-\beta_{1}+\beta_{2}+\beta_{3},\beta_{1}-\beta_{2}+\beta_{3},\beta_{1}+\beta_{2}-\beta_{3}\}$ & Figure 4.4 \includegraphics{\string"DD_TypeD_4\string".PNG}\tabularnewline \hline \end{longtable} \subsection{Centres of centralizers of nilpotent elements $e\in D(2,1;\alpha)$ and labelled Dynkin diagrams with respect to $e$\label{subsec:Centres-of-centralizers-D(2,1)}} \begin{singlespace} \noindent Let $\mathfrak{g}=D(2,1;\alpha)=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$. A nilpotent element $e\in\mathfrak{g}_{\bar{0}}$ is of the form $(e_{1},e_{2},e_{3})$ where $e_{i}\in\mathfrak{sl}(2)$ for $i\in\{1,2,3\}$. We know that representatives of nilpotent elements in $\mathfrak{sl}(2)$ up to conjugation by $\mathrm{SL}(2)$ are $0\ \text{and}\ E$. We give basis elements for $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$ and labelled Dynkin diagrams $\varDelta$ with respect to $e$ when $e=0,E_{1},E_{1}+E_{2},E_{1}+E_{2}+E_{3}$ in Table \ref{tab:results in D(2,1)}. Note that the cases $e=E_{2}$, $e=E_{3}$ are similar to $e=E_{1}$ and cases $e=E_{2}+E_{3}$, $e=E_{1}+E_{3}$ are similar to $e=E_{1}+E_{2}$. Hence, any other case is similar to one of above. 
The numbers in the column labelled by ``$\varDelta$'' represent labels $a_{i}$ corresponding to $\alpha_{i}$ for $i=1,2,3$ in labelled Dynkin diagram with respect to $e$. \noindent \begin{table}[H] \begin{singlespace} \noindent \begin{centering} \begin{tabular}{|>{\centering}m{1.5cm}||>{\centering}m{8cm}||>{\centering}m{1cm}||>{\centering}m{3.3cm}|} \hline $e$ & $\mathfrak{g}^{e}$ & $\mathfrak{z}(\mathfrak{g}^{e})$ & $\varDelta$ \tabularnewline \hline \hline $0$ & $\mathfrak{g}$ & $\{0\}$ & Figures 4.1, 4.2, 4.3, 4.4: All labels are zeros.\tabularnewline \hline \hline \textbf{$E_{1}$} & $\langle E_{1},E_{2},H_{2},F_{2},E_{3},H_{3},F_{3},v_{i,j,k}:j,k=\pm1\rangle$ & $\langle e\rangle$ & Figure 4.3: $0,1,0$\tabularnewline \hline \hline $E_{1}+E_{2}$ & \begin{singlespace} \noindent $\langle E_{1},E_{2},E_{3},H_{3},F_{3},v_{1,1,1},v_{1,1,-1},v_{1,-1,1}-v_{-1,1,1},v_{1,-1,-1}-v_{-1,1,-1}\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 4.1: $2,0,0$ Figure 4.3: $0,0,2$ Figure 4.4: $0,0,2$\tabularnewline \hline \hline $E_{1}+E_{2}+E_{3}$ & $\langle E_{1},E_{2},E_{3},v_{1,1,1},v_{1,1,-1}-v_{-1,1,1},v_{1,-1,1}-v_{-1,1,1}\rangle$ & $\langle e\rangle$ & Figure 4.4: $1,1,1$\tabularnewline \hline \end{tabular} \par\end{centering} \end{singlespace} \caption{\label{tab:results in D(2,1)}$\mathfrak{g}^{e}$, $\mathfrak{z}(\mathfrak{g}^{e})$ and $\varDelta$ for $\mathfrak{g}=D(2,1;\alpha)$} \end{table} \end{singlespace} \begin{singlespace} Let $V^{\mathfrak{sl}}(j)$ be an $\mathfrak{sl}(2)$-module with highest weight $j$ and $V^{\mathfrak{osp}}(j)$ be an $\mathfrak{osp}(1|2)$-module with highest weight $j$. We also describe the $\mathfrak{g}^{e}(0)$-module structure on each $\mathfrak{g}^{e}(j)$ for $j>0$ in Table \ref{tab:g^e(0)-D(2,1;)}. 
\end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{longtable}[c]{|>{\centering}m{2.5cm}|>{\centering}m{2cm}|>{\centering}m{9cm}|} \caption{\label{tab:g^e(0)-D(2,1;)}The $\mathfrak{g}^{e}(0)$-module structure on $\mathfrak{g}^{e}(j)$ for $j>0$ } \tabularnewline \endfirsthead \hline $e$ & $\mathfrak{g}^{e}(0)$ & $\mathfrak{g}^{e}(j)$ for $j>0$\tabularnewline \hline \hline $0$ & $\mathfrak{g}^{e}$ & $0$\tabularnewline \hline \hline \textbf{$E_{1}$} & $\mathfrak{sl}(2)\oplus\mathfrak{sl}(2)$ & $\mathfrak{g}^{e}(1)=V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{sl}}(1)$, $\mathfrak{g}^{e}(2)=V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{sl}}(0)$\tabularnewline \hline \hline $E_{1}+E_{2}$ & $\mathfrak{osp}(1|2)$ & $\mathfrak{g}^{e}(2)=V^{\mathfrak{osp}}(0)\oplus V^{\mathfrak{osp}}(1)$\tabularnewline \hline \hline $E_{1}+E_{2}+E_{3}$ & $\{0\}$ & $\dim\mathfrak{g}^{e}(1)=2$, $\dim\mathfrak{g}^{e}(2)=3$, $\dim\mathfrak{g}^{e}(3)=1$\tabularnewline \hline \end{longtable} \par\end{center} \end{singlespace} \begin{singlespace} Let $e=E_{1}+E_{2}$. In the remaining part of this subsection, we calculate $\mathfrak{g}^{e}$, $\mathfrak{z}(\mathfrak{g}^{e})$ and draw the labelled Dynkin diagrams with respect to $e$. We easily calculate that $\mathfrak{g}_{\bar{0}}^{e}=\left\langle E_{1},E_{2},E_{3},H_{3},F_{3}\right\rangle .$ To determine $\mathfrak{g}_{\bar{1}}^{e}$, assume $x=\sum a_{i,j,k}v_{i,j,k}\in\mathfrak{g}_{\bar{1}}^{e}$ where $a_{i,j,k}\in\mathbb{C},i,j,k\in\{\pm1\}$. By calculating $[E_{1}+E_{2},x]=0$, we obtain that $a_{1,1,k}$ are arbitrary, $a_{1,-1,k}=-a_{-1,1,k}$ for $k=\pm1$ and $a_{-1,-1,k}=0$. Hence a basis of $\mathfrak{g}_{\bar{1}}^{e}$ is $\{v_{1,1,1},v_{1,1,-1},v_{1,-1,1}-v_{-1,1,1},v_{1,-1,-1}-v_{-1,1,-1}\}$. Therefore, we have that $\dim\mathfrak{g}^{e}=5+4=9$. 
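The dimension count for $\mathfrak{g}_{\bar{1}}^{e}$ can be double-checked numerically: on $V_{1}\otimes V_{2}\otimes V_{3}$ the operator $\mathrm{ad}(E_{1}+E_{2})$ acts as $E\otimes1\otimes1+1\otimes E\otimes1$, and its kernel is indeed $4$-dimensional (an aside, not part of the argument):

```python
import numpy as np

E = np.array([[0, 1], [0, 0]])
I2 = np.eye(2, dtype=int)

# ad(E_1 + E_2) on g_1 = V_1 (x) V_2 (x) V_3, an 8-dimensional space:
# E acting in the first tensor factor plus E acting in the second.
ad_e = np.kron(E, np.kron(I2, I2)) + np.kron(I2, np.kron(E, I2))

nullity = 8 - np.linalg.matrix_rank(ad_e)
assert nullity == 4  # matches the four basis vectors of g_1^e found above
```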
By computing commutator relations between basis elements for $\mathfrak{g}^{e}(0)$, we deduce that $\mathfrak{g}^{e}(0)\cong\mathfrak{osp}(1|2)$ according to Lemma \ref{lem:osp(1,2)} where $F_{3},v_{1,-1,-1}-v_{-1,1,-1},H_{3},v_{1,-1,1}-v_{-1,1,1},E_{3}$ correspond to $u_{-2},u_{-1},u_{0},u_{1},u_{2}$ in Lemma \ref{lem:osp(1,2)}. Moreover, we obtain that $\mathfrak{g}^{e}(2)=V^{\mathfrak{osp}}(0)\oplus V^{\mathfrak{osp}}(1)$. Hence, we have that $\mathfrak{z}=\mathfrak{z}(0)\oplus\mathfrak{z}(2)\subseteq\left(\mathfrak{g}^{e}(0)\right)^{\mathfrak{g}^{e}(0)}\oplus\left(\mathfrak{g}^{e}(2)\right)^{\mathfrak{g}^{e}(0)}=\left\langle E_{1}+E_{2}\right\rangle $. We know that $e=E_{1}+E_{2}\in\mathfrak{z}$. Therefore, $\mathfrak{z}=\left\langle E_{1}+E_{2}\right\rangle $ and $\dim\mathfrak{z}=1$. Next we look for the labelled Dynkin diagrams with respect to $e$. We find an element $h=(H,H,0)$ such that $h$ belongs to an $\mathfrak{sl}(2)$-triple $\{e,h,f\}$ in $\mathfrak{g}_{\bar{0}}$. By calculating the ad$h$-eigenvalue on each root vector, we have that roots in $\mathfrak{g}(>0)$ are $\{2\beta_{1},2\beta_{2},\beta_{1}+\beta_{2}+\beta_{3},\beta_{1}+\beta_{2}-\beta_{3}\}$ and roots in $\mathfrak{g}(0)$ are $\Phi(0)=\{\pm2\beta_{3},i\beta_{1}-i\beta_{2}+k\beta_{3}:i,k=\pm1\}$. Hence, we have that $\mathfrak{g}(0)\cong\mathfrak{gl}(1\mid2)$ and there are three systems of simple roots of $\mathfrak{g}(0)$: $\varPi_{1}(0)=\{-\beta_{1}+\beta_{2}-\beta_{3},2\beta_{3}\}$, $\varPi_{2}(0)=\{2\beta_{3},\beta_{1}-\beta_{2}-\beta_{3}\}$ and $\varPi_{3}(0)=\{-\beta_{1}+\beta_{2}+\beta_{3},\beta_{1}-\beta_{2}+\beta_{3}\}$ up to conjugacy. By extending $\varPi_{i}(0)$ to $\varPi$ for $i=1,2,3$, we get three systems of positive roots $\Phi_{i}^{+}$ and simple roots $\varPi_{i}$. Therefore, there are three conjugacy classes of Borel subalgebras such that $\mathfrak{b}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Phi^{+}}\mathfrak{g}_{\alpha}\subseteq\bigoplus_{j\geq0}\mathfrak{g}(j)$. 
Hence, the systems of simple roots are: $\varPi_{1}=\{\alpha_{1}=2\beta_{1},\alpha_{2}=-\beta_{1}+\beta_{2}-\beta_{3},\alpha_{3}=2\beta_{3}\}$. We compute $\mu_{12}=1$ and $\mu_{23}=\alpha$ using Formula (\ref{eq:lines-=0003BC}). Therefore, the labelled Dynkin diagram with respect to $\varPi_{1}$ is the Dynkin diagram in Figure 4.1 with labels $2,0,0$. $\varPi_{2}=\{\alpha_{1}=2\beta_{3},\alpha_{2}=\beta_{1}-\beta_{2}-\beta_{3},\alpha_{3}=2\beta_{2}\}$. We compute that $\mu_{12}=\alpha$ and $\mu_{23}=1+\alpha$ using Formula (\ref{eq:lines-=0003BC}). Therefore, the labelled Dynkin diagram with respect to $\varPi_{2}$ is the Dynkin diagram in Figure 4.3 with labels $0,0,2$. $\varPi_{3}=\{\alpha_{1}=-\beta_{1}+\beta_{2}+\beta_{3},\alpha_{2}=\beta_{1}-\beta_{2}+\beta_{3},\alpha_{3}=\beta_{1}+\beta_{2}-\beta_{3}\}$. We compute that $\mu_{12}=\alpha,\mu_{13}=1+\alpha$ and $\mu_{23}=2$ using Formula (\ref{eq:lines-=0003BC}). Therefore, the labelled Dynkin diagram with respect to $\varPi_{3}$ is the Dynkin diagram in Figure 4.4 with labels $0,0,2$. \end{singlespace} \subsection{Analysis of results\label{subsec:Analysis-D(2,1;)}} \begin{singlespace} \noindent Note that $e=E_{1}+E_{2}$ is the only case (with $e\neq0$) in which the corresponding labelled Dynkin diagram $\varDelta$ has no label equal to $1$. For this case, we have $n_{2}(\varDelta)=1$, $\dim\mathfrak{z}(\mathfrak{g}^{e})=1$ and $\mathfrak{g}^{h}=\mathfrak{g}(0)\cong\mathfrak{gl}(1|2)$ by \S4.3. Hence, $\mathfrak{g}^{h}$ has centre of dimension $1$. Therefore, $\dim\mathfrak{z}(\mathfrak{g}^{h})=n_{2}(\varDelta)=\dim\mathfrak{z}(\mathfrak{g}^{e})$. \noindent In order to prove Theorem $2$, we only need to look at the case $e=E_{1}+E_{2}$, as the remaining cases do not have labels equal to $2$, so that the $2$-free core $\varDelta_{0}$ of $\varDelta$ is the same as $\varDelta$. For this case, we have $\dim\mathfrak{g}^{e}=9$, $\dim\mathfrak{z}(\mathfrak{g}^{e})=1$ and $\varDelta_{0}=\varDelta_{\mathfrak{g}(0)}$.
Hence, we deduce that the corresponding Lie superalgebra $\mathfrak{g}_{0}=\mathfrak{sl}(2|1)$ and the nilpotent orbit $e_{0}$ with respect to $\varDelta_{0}$ is equal to $0$. Therefore, we have that $\dim\mathfrak{g}_{0}^{e_{0}}=8$ and $\dim\mathfrak{g}^{e}-\dim\mathfrak{g}_{0}^{e_{0}}=n_{2}(\varDelta)=1$. Similarly, we have $\dim\mathfrak{z}(\mathfrak{g}^{e})-\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=n_{2}(\varDelta)=1$ because $\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=0$. \end{singlespace} \begin{singlespace} Theorem 3 for $D(2,1;\alpha)$ can be obtained immediately from Table \ref{tab:results in D(2,1)}, i.e. we have that $\dim\mathfrak{z}(\mathfrak{g}^{e})=\left\lceil \frac{1}{2}\sum_{i=1}^{3}a_{i}\right\rceil +\varepsilon$ where $\varepsilon=0$ for $e=0,E_{1},E_{1}+E_{2}$ and $\varepsilon=-1$ for $e=E_{1}+E_{2}+E_{3}$. \end{singlespace} \section{The Exceptional Lie superalgebra $G(3)$\label{sec:G(3)}} \subsection{Structure of the Lie superalgebra $G(3)$\label{subsec:Structure-of-G(3)}} Let $V_{2}=V$ where $V$ is defined in \S3. Let $G_{2}$ be the exceptional Lie algebra and $V_{7}=\left\langle e_{3},e_{2},e_{1},e_{0},e_{-1},e_{-2},e_{-3}\right\rangle $ be its $7$-dimensional module. A construction of $G(3)$ can be found in \cite[Chapter 4]{Musson2012} and we recall this construction below. Recall that $G(3)=\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ where \[ \mathfrak{g}_{\bar{0}}=\mathfrak{sl}(2)\oplus G_{2}\text{ and }\mathfrak{g}_{\bar{1}}=V_{2}\otimes V_{7}. 
\] We view $G_{2}\subseteq\mathfrak{gl}(V_{7})$ and then $\mathfrak{g}_{\bar{0}}$ has a basis $\{E,H,F,h_{1},h_{2},x_{i},y_{i}:i=1,\dots,6\}$ where \[ x_{1}=\begin{pmatrix}0 & -1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -2 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix},\ x_{2}=\begin{pmatrix}0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \] \[ y_{1}=\begin{pmatrix}0 & 0 & 0 & 0 & 0 & 0 & 0\\ -1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 2 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix},\ y_{2}=\begin{pmatrix}0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \] \[ h_{1}=\text{diag}(1,-1,2,0,-2,1,-1)\ \text{and}\ h_{2}=\text{diag}(0,1,-1,0,1,-1,0), \] and $x_{3}=[x_{1},x_{2}]$, $x_{4}=[x_{1},x_{3}]$, $x_{5}=[x_{1},x_{4}]$, $x_{6}=[x_{5},x_{2}]$. The remaining negative root vectors in the basis of $G_{2}$ can be generated by $y_{1}$ and $y_{2}$ in a similar way. A basis of $\mathfrak{g}_{\bar{1}}$ is $\{v_{i}\otimes e_{j}:i=\pm1,j=0,\pm1,\pm2,\pm3\}$. \begin{singlespace} We know that $\mathfrak{g}_{\bar{0}}$ is a Lie algebra and the bracket $\left[\cdotp,\cdotp\right]:\mathfrak{g}_{\bar{0}}\times\mathfrak{g}_{\bar{1}}\rightarrow\mathfrak{g}_{\bar{1}}$ is given by $[x+y,u\otimes w]=xu\otimes w+u\otimes yw$ for $x\in\mathfrak{sl}(2),y\in G_{2},u\in V_{2}$ and $w\in V_{7}$. The bracket $\left[\cdotp,\cdotp\right]:\mathfrak{g}_{\bar{1}}\times\mathfrak{g}_{\bar{1}}\rightarrow\mathfrak{g}_{\bar{0}}$ is given in \cite[Theorem 4.4.5]{Musson2012}. 
For $v_{i},v_{k}\in V_{2}$ and $e_{j},e_{l}\in V_{7}$, we have \begin{equation} [v_{i}\otimes e_{j},v_{k}\otimes e_{l}]=\psi_{2}(v_{i},v_{k})p_{7}(e_{j},e_{l})+\psi_{7}(e_{j},e_{l})p_{2}(v_{i},v_{k}),\label{eq:G(3)} \end{equation} where $\psi_{2}$ is a non-degenerate skew-symmetric bilinear form on $V_{2}$ such that $\psi_{2}(v_{1},v_{-1})=1$ and $p_{2}:V_{2}\times V_{2}\rightarrow\mathfrak{sl}(2)$ is given by $p_{2}(x,y)(z)=4\left(\psi_{2}(y,z)x-\psi_{2}(z,x)y\right)$. We calculate that $p_{2}(v_{1},v_{-1})=-4H$, $p_{2}(v_{1},v_{1})=8E$ and $p_{2}(v_{-1},v_{-1})=-8F$. The mappings $\psi_{7}$ and $p_{7}$ are defined in \cite[Theorem 4.4.5]{Musson2012}. We can explicitly calculate that $\psi_{7}(e_{j},e_{-j})=2$ for $j\neq0$, $\psi_{7}(e_{0},e_{0})=-1$ and $\psi_{7}(e_{i},e_{j})=0$ if $i+j\neq0$. As in \cite[Subsection 4.7.9]{Musson2012}, we can calculate that $p_{7}(e_{-3},e_{3})=-8h_{1}-12h_{2}$. According to the graded Jacobi identity, we have that \begin{align*} [x_{1},[v_{1}\otimes e_{-3},v_{-1}\otimes e_{3}]] & =[v_{1}\otimes e_{-3},[x_{1},v_{-1}\otimes e_{3}]]+[[x_{1},v_{1}\otimes e_{-3}],v_{-1}\otimes e_{3}]\\ & =[v_{1}\otimes e_{-3},0]+[v_{1}\otimes e_{-2},v_{-1}\otimes e_{3}]=p_{7}(e_{-2},e_{3}). \end{align*} Moreover, $[x_{1},[v_{1}\otimes e_{-3},v_{-1}\otimes e_{3}]]=[x_{1},-8H-8h_{1}-12h_{2}]=4x_{1}$. Therefore, we deduce that $p_{7}(e_{-2},e_{3})=4x_{1}$.
Using similar methods we obtain the explicit mapping $p_{7}:V_{7}\times V_{7}\rightarrow G_{2}$ which is given in the following table: \end{singlespace} \begin{singlespace} \noindent \begin{table}[H] \begin{singlespace} \begin{longtable}[c]{|>{\centering}m{1cm}||>{\centering}m{1.7cm}||>{\centering}m{1.5cm}||>{\centering}m{1.6cm}||>{\centering}m{1.1cm}||>{\centering}m{1.8cm}||>{\centering}m{1.7cm}||>{\centering}m{1.5cm}|} \hline & $e_{3}$ & $e_{2}$ & $e_{1}$ & $e_{0}$ & $e_{-1}$ & $e_{-2}$ & $e_{-3}$\tabularnewline \hline \hline $e_{3}$ & $0$ & $-2x_{6}$ & $2x_{5}$ & $2x_{4}$ & $-4x_{3}$ & $-4x_{1}$ & $8h_{1}+12h_{2}$\tabularnewline \hline \hline $e_{2}$ & $2x_{6}$ & $0$ & $-2x_{4}$ & $-4x_{3}$ & $12x_{2}$ & $4h_{1}+12h_{2}$ & $-4y_{1}$\tabularnewline \hline \hline $e_{1}$ & $-2x_{5}$ & $2x_{4}$ & $0$ & $4x_{1}$ & $4h_{1}$ & $12y_{2}$ & $4y_{3}$\tabularnewline \hline \hline $e_{0}$ & $-2x_{4}$ & $4x_{3}$ & $-4x_{1}$ & $0$ & $4y_{1}$ & $4y_{3}$ & $2y_{4}$\tabularnewline \hline \hline $e_{-1}$ & $4x_{3}$ & $-12x_{2}$ & $-4h_{1}$ & $-4y_{1}$ & $0$ & $-2y_{4}$ & $-2y_{5}$\tabularnewline \hline \hline $e_{-2}$ & $4x_{1}$ & $-4h_{1}-12h_{2}$ & $-12y_{2}$ & $-4y_{3}$ & $2y_{4}$ & $0$ & $-2y_{6}$\tabularnewline \hline \hline $e_{-3}$ & $-8h_{1}-12h_{2}$ & $4y_{1}$ & $-4y_{3}$ & $-2y_{4}$ & $2y_{5}$ & $2y_{6}$ & $0$\tabularnewline \hline \end{longtable} \end{singlespace} \caption{\label{tab:p7 G2}$p_{7}:V_{7}\times V_{7}\rightarrow G_{2}$} \end{table} \end{singlespace} \subsection{Root system and Dynkin diagrams for $G(3)$\label{subsec:root-system-G(3)}} \begin{singlespace} \noindent We follow the description of the root system of $\mathfrak{g}=G(3)$ given in \cite[Chapter 4]{Musson2012}. Let $\mathfrak{h}=\left\langle H,h_{1},h_{2}\right\rangle $ be the Cartan subalgebra of $\mathfrak{g}$. 
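As a consistency check on the matrices of \S5.1 (an aside, not part of the argument), one can verify numerically that $x_{1}$ and $x_{2}$ are common eigenvectors for $\mathrm{ad}h_{1}$ and $\mathrm{ad}h_{2}$, with eigenvalues forming a Cartan matrix of type $G_{2}$:

```python
import numpy as np

def comm(a, b):
    # Lie bracket [a, b] = ab - ba
    return a @ b - b @ a

# The 7x7 matrices of Section 5.1, in the basis (e_3, e_2, e_1, e_0, e_-1, e_-2, e_-3).
x1 = np.zeros((7, 7)); x1[0, 1] = -1; x1[2, 3] = 1; x1[3, 4] = -2; x1[5, 6] = 1
x2 = np.zeros((7, 7)); x2[1, 2] = 1; x2[4, 5] = -1
h1 = np.diag([1, -1, 2, 0, -2, 1, -1])
h2 = np.diag([0, 1, -1, 0, 1, -1, 0])

# The eigenvalue pattern (2, -1; -3, 2) is a Cartan matrix of type G_2.
assert np.array_equal(comm(h1, x1), 2 * x1)
assert np.array_equal(comm(h2, x1), -x1)
assert np.array_equal(comm(h1, x2), -3 * x2)
assert np.array_equal(comm(h2, x2), 2 * x2)
```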
Note that the roots of $G(3)$ can be expressed in terms of $\delta,\varepsilon_{1,}\varepsilon_{2},\varepsilon_{3}\in\mathfrak{h}^{*}$ where $\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}=0$. The root system $\Phi=\Phi_{\bar{0}}\cup\Phi_{\bar{1}}$ is given by \[ \Phi_{\bar{0}}=\{\pm2\delta,\varepsilon_{i}-\varepsilon_{j},\pm\varepsilon_{i}:1\leq i,j\leq3\}\text{ and }\Phi_{\bar{1}}=\{\pm\delta\pm\varepsilon_{i},\pm\delta:1\leq i\leq3\} \] \noindent where the bilinear form $(\cdotp,\cdotp)$ on $\mathfrak{h}^{*}$ is defined by $(\delta,\delta)=2$, $(\varepsilon_{i},\varepsilon_{j})=1-3\delta_{ij}$, and $(\delta,\varepsilon_{i})=0$. \end{singlespace} \begin{singlespace} The table below lists all roots together with corresponding root vectors: \end{singlespace} \noindent \begin{center} \begin{tabular}{|c||>{\centering}m{0.6cm}|>{\centering}m{0.6cm}||>{\centering}m{1.3cm}||>{\centering}m{0.6cm}||>{\centering}m{0.6cm}||>{\centering}m{1.3cm}||c||c||c|} \hline Roots & $2\delta$ & $\varepsilon_{1}$ & $\varepsilon_{2}-\varepsilon_{1}$ & $\varepsilon_{2}$ & $-\varepsilon_{3}$ & $\varepsilon_{1}-\varepsilon_{3}$ & $\varepsilon_{2}-\varepsilon_{3}$ & $i\delta-\varepsilon_{3}$ & $i\delta+\varepsilon_{j}$\tabularnewline \hline Root vectors & $E$ & $x_{1}$ & $x_{2}$ & $x_{3}$ & $x_{4}$ & $x_{5}$ & $x_{6}$ & $v_{i}\otimes e_{3}$ & $v_{i}\otimes e_{j}$\tabularnewline \hline \hline Roots & $-2\delta$ & $-\varepsilon_{1}$ & $\varepsilon_{1}-\varepsilon_{2}$ & $-\varepsilon_{2}$ & $\varepsilon_{3}$ & $\varepsilon_{3}-\varepsilon_{1}$ & $\varepsilon_{3}-\varepsilon_{2}$ & $i\delta+\varepsilon_{3}$ & $i\delta$\tabularnewline \hline Root vectors & $F$ & $y_{1}$ & $y_{2}$ & $y_{3}$ & $y_{4}$ & $y_{5}$ & $y_{6}$ & $v_{i}\otimes e_{-3}$ & $v_{i}\otimes e_{0}$\tabularnewline \hline \end{tabular} \par\end{center} \begin{singlespace} \noindent where $i\in\{1,-1\}$, $j\in\{2,1,-1,-2\}$ and we define $\varepsilon_{j}=-\varepsilon_{-j}$ for $j<0$. 
We further deduce that the odd roots $\pm\delta\pm\varepsilon_{i}$ are isotropic and $\pm\delta$ are non-isotropic. \end{singlespace} \begin{singlespace} The following table covers all possible Dynkin diagrams with respect to different systems of simple roots based on \cite[Section 2.19]{Frappat1996}. \end{singlespace} \begin{longtable}[c]{|>{\centering}m{6cm}||>{\centering}p{5cm}|} \caption{Dynkin diagrams for $G(3)$} \tabularnewline \endfirsthead \hline Simple systems $\varPi=\{\alpha_{1},\alpha_{2},\alpha_{3}\}$ & Dynkin diagrams\tabularnewline \hline \hline \begin{singlespace} \noindent $\{\delta+\varepsilon_{3},\varepsilon_{1},\varepsilon_{2}-\varepsilon_{1}\}$ \end{singlespace} & Figure 5.1 \noindent \centering{}\includegraphics{\string"DD_TypeG_1\string".PNG}\tabularnewline \hline \hline $\{-\delta-\varepsilon_{3},\delta-\varepsilon_{2},\varepsilon_{2}-\varepsilon_{1}\}$ & Figure 5.2 \noindent \centering{}\includegraphics{\string"DD_TypeG_2\string".PNG}\tabularnewline \hline \hline $\{\delta,-\delta+\varepsilon_{1},\varepsilon_{2}-\varepsilon_{1}\}$ & Figure 5.3 \noindent \centering{}\includegraphics{\string"DD_TypeG_3\string".PNG}\tabularnewline \hline \hline $\{\varepsilon_{1},-\delta+\varepsilon_{2},\delta-\varepsilon_{1}\}$ & Figure 5.4 \noindent \centering{}\includegraphics{\string"DD_TypeG_4\string".PNG}\tabularnewline \hline \end{longtable} \subsection{Centres of centralizers of nilpotent elements $e$ in $G(3)$ and labelled Dynkin diagrams with respect to $e$\label{subsec:centres-of-centralizers--G(3)}} \begin{singlespace} \noindent Let $e=e_{\mathfrak{sl}(2)}+e_{G_{2}}\in\mathfrak{g}_{\bar{0}}$ be nilpotent where $e_{\mathfrak{sl}(2)}\in\mathfrak{sl}(2)$ and $e_{G_{2}}\in G_{2}$. 
According to \cite[Section 11]{Lawther2008}, we know that representatives of nilpotent orbits in $\mathfrak{sl}(2)$ are $0,E$ and representatives of nilpotent orbits in $G_{2}$ are $0,x_{2},x_{1},x_{2}+x_{5},x_{1}+x_{2}$ up to the adjoint action of $G=\mathrm{SL}_{2}(\mathbb{C})\times K$ where $K$ is the Lie group of type $G_{2}$. Hence, there are in total $10$ possibilities for $e$. It is clear that $\mathfrak{sl}(2)^{E}=\langle E\rangle$ and $\mathfrak{sl}(2)^{0}=\mathfrak{sl}(2)$. We give basis elements for $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$ and the labelled Dynkin diagrams $\varDelta$ with respect to $e$ in Table \ref{tab:G(3)}. Note that the numbers in the column labelled ``$\varDelta$'' represent labels $a_{i}$ corresponding to $\alpha_{i}$ for $i=1,2,3$ in labelled Dynkin diagram with respect to $e$. \end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{longtable}[c]{|>{\centering}m{1.5cm}||>{\centering}m{6cm}||>{\centering}m{3cm}||>{\centering}m{3.5cm}|} \caption{\label{tab:G(3)}$\mathfrak{g}^{e}$, $\mathfrak{z}(\mathfrak{g}^{e})$ and $\varDelta$ for $\mathfrak{g}=G(3)$} \tabularnewline \endfirsthead \hline $e$ & $\mathfrak{g}^{e}$ & $\mathfrak{z}(\mathfrak{g}^{e})$ & $\varDelta$\tabularnewline \hline \hline $E+(x_{1}+x_{2})$ & $\langle E,x_{1}+x_{2},x_{6},v_{1}\otimes e_{3},v_{1}\otimes e_{2}+v_{-1}\otimes e_{3}\rangle$ & $\langle e,x_{6},v_{1}\otimes e_{3}\rangle$ & Figure 5.3: $1,1,2$\tabularnewline \hline \hline $E+x_{2}$ & \begin{singlespace} \noindent \centering{}$\langle E,2h_{1}+3h_{2},x_{2},y_{1},x_{3},x_{6},y_{5},x_{4},y_{4},v_{1}\otimes e_{2},v_{1}\otimes e_{-1},v_{1}\otimes e_{3},v_{1}\otimes e_{0},v_{1}\otimes e_{-3},v_{1}\otimes e_{1}-v_{-1}\otimes e_{2},v_{1}\otimes e_{-2}+v_{-1}\otimes e_{-1}\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 5.1: $0,0,1$ Figure 5.2: $0,0,1$ Figure 5.4: $0,0,1$\tabularnewline \hline \hline $E+x_{1}$ & $\langle E,x_{1},x_{5},y_{2},x_{6},y_{6},h_{1}+2h_{2},v_{1}\otimes 
e_{1},v_{1}\otimes e_{3},v_{1}\otimes e_{-2},v_{1}\otimes e_{0}-v_{-1}\otimes e_{1},v_{1}\otimes e_{2}+v_{-1}\otimes e_{3},v_{1}\otimes e_{-3}-v_{-1}\otimes e_{-2}\rangle$ & $\langle e\rangle$ & Figure 5.2: $1,0,0$ Figure 5.4: $1,0,0$\tabularnewline \hline \hline $E+(x_{2}+x_{5})$ & \begin{singlespace} \noindent \centering{}$\langle E,x_{6},x_{3},x_{4},x_{2}+x_{5},v_{1}\otimes e_{3},v_{1}\otimes e_{2},v_{1}\otimes e_{0},6v_{-1}\otimes e_{3}-v_{1}\otimes e_{-1},v_{1}\otimes e_{1}-v_{-1}\otimes e_{2}\rangle$ \end{singlespace} & $\langle e,x_{6}\rangle$ & Figure 5.4: $0,1,1$\tabularnewline \hline \hline $E$ & \begin{singlespace} \noindent \centering{}$\left\langle E\right\rangle \oplus\mathrm{G}_{2}\oplus\langle v_{1}\otimes e_{3},v_{1}\otimes e_{2},v_{1}\otimes e_{1},v_{1}\otimes e_{0},v_{1}\otimes e_{-1},v_{1}\otimes e_{-2},v_{1}\otimes e_{-3}\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 5.1: $1,0,0$\tabularnewline \hline \hline $x_{1}+x_{2}$ & \begin{singlespace} \noindent \centering{}$\langle E,H,F,x_{1}+x_{2},x_{6},v_{1}\otimes e_{3},v_{-1}\otimes e_{3}\rangle$ \end{singlespace} & $\langle e,x_{6}\rangle$ & Figure 5.3: $0,2,2$\tabularnewline \hline \hline $x_{2}$ & \begin{singlespace} \noindent \centering{}$\langle E,H,F,2h_{1}+3h_{2},x_{2},y_{1},x_{3},x_{6},y_{5},x_{4},y_{4},v_{1}\otimes e_{2},v_{-1}\otimes e_{2},v_{1}\otimes e_{-1},v_{-1}\otimes e_{-1},v_{1}\otimes e_{-3},v_{-1}\otimes e_{-3},v_{1}\otimes e_{3},v_{-1}\otimes e_{3},v_{1}\otimes e_{0},v_{-1}\otimes e_{0}\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 5.3: $0,0,1$\tabularnewline \hline \hline $x_{1}$ & $\langle E,H,F,x_{1},x_{5},y_{2},x_{6},y_{6},h_{1}+2h_{2},v_{1}\otimes e_{1},v_{-1}\otimes e_{1},v_{1}\otimes e_{3},v_{-1}\otimes e_{3},v_{1}\otimes e_{-2},v_{-1}\otimes e_{-2}\rangle$ & $\langle e\rangle$ & Figure 5.3: $0,1,0$\tabularnewline \hline \hline $x_{2}+x_{5}$ & \begin{singlespace} \noindent \centering{}$\langle E,H,F,x_{6},x_{3},x_{4},x_{2}+x_{5},v_{1}\otimes 
e_{3},v_{-1}\otimes e_{3},v_{1}\otimes e_{2},v_{-1}\otimes e_{2},v_{1}\otimes e_{0},v_{-1}\otimes e_{0}\rangle$ \end{singlespace} & $\langle e,x_{6}\rangle$ & Figure 5.3: $0,0,2$ Figure 5.4: $0,2,0$\tabularnewline \hline \hline $0$ & $\mathfrak{g}$ & $\{0\}$ & Figures 5.1, 5.2, 5.3, 5.4. All labels are zeros.\tabularnewline \hline \end{longtable} \par\end{center} \end{singlespace} \begin{singlespace} For each nilpotent element $e$, we find a semisimple element $h$ such that $h$ lies in an $\mathfrak{sl}(2)$-triple in $\mathfrak{g}_{\bar{0}}$ that contains $e$. We also calculate the $\mathfrak{g}^{e}(0)$-module structure on each $\mathfrak{g}^{e}(j)$ for $j>0$ in the table below. Let $V^{\mathfrak{sl}}(j)$ be an $\mathfrak{sl}(2)$-module with highest weight $j$ and $V^{\mathfrak{osp}}(j)$ be an $\mathfrak{osp}(1|2)$-module with highest weight $j$. Note that for $e=x_{2}$, the $\mathfrak{g}^{e}(0)$-module structure on $\mathfrak{g}^{e}(j)$ is not included as it requires the construction of $\mathfrak{osp}(3|2)$ representations. 
\end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{longtable}[c]{|>{\centering}m{2cm}||>{\centering}m{2.5cm}||>{\centering}m{2cm}||>{\centering}m{7cm}|} \caption{\label{tab:g^e(0)-G(3)}The $\mathfrak{g}^{e}(0)$-module structure on $\mathfrak{g}^{e}(j)$ for $j>0$ } \tabularnewline \endfirsthead \hline $e$ & $h$ & $\mathfrak{g}^{e}(0)$ & $\mathfrak{g}^{e}(j),j>0$\tabularnewline \hline \hline $E+(x_{1}+x_{2})$ & $H+(6h_{1}+10h_{2})$ & $0$ & $\dim\mathfrak{g}^{e}(10)=\dim\mathfrak{g}^{e}(7)=\dim\mathfrak{g}^{e}(5)=1$,$\dim\mathfrak{g}^{e}(2)=2$\tabularnewline \hline \hline $E+x_{2}$ & $H+h_{2}$ & $\mathfrak{osp}(1|2)$ & $\mathfrak{g}^{e}(1)=V^{\mathfrak{osp}}(3)$, $\mathfrak{g}^{e}(2)=V^{\mathfrak{osp}}(0)\oplus V^{\mathfrak{osp}}(1)$\tabularnewline \hline \hline $E+x_{1}$ & $H+h_{1}$ & $\mathfrak{osp}(1|2)$ & $\mathfrak{g}^{e}(1)=\mathfrak{g}^{e}(3)=V^{\mathfrak{osp}}(0)$, $\mathfrak{g}^{e}(2)=V^{\mathfrak{osp}}(0)\oplus V^{\mathfrak{osp}}(1)$\tabularnewline \hline \hline $E+(x_{2}+x_{5})$ & $H+(2h_{1}+4h_{2})$ & $0$ & $\dim\mathfrak{g}^{e}(4)=1,\dim\mathfrak{g}^{e}(3)=2,$ $\dim\mathfrak{g}^{e}(2)=4,\dim\mathfrak{g}^{e}(1)=3$\tabularnewline \hline \hline $E$ & $H$ & $G_{2}$ & $\mathfrak{g}^{e}(1)=V_{7},\mathfrak{g}^{e}(2)=\left\langle E\right\rangle $\tabularnewline \hline \hline $x_{1}+x_{2}$ & $6h_{1}+10h_{2}$ & $\mathfrak{sl}(2)$ & $\mathfrak{g}^{e}(2)=\mathfrak{g}^{e}(10)=V^{\mathfrak{sl}}(0)$, $\mathfrak{g}^{e}(6)=V^{\mathfrak{sl}}(1)$\tabularnewline \hline \hline $x_{2}$ & $h_{2}$ & $\mathfrak{osp}(3|2)$ & Omitted.\tabularnewline \hline \hline $x_{1}$ & $h_{1}$ & $\mathfrak{sl}(2)\oplus\mathfrak{sl}(2)$ & $\mathfrak{g}^{e}(1)=V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{sl}}(1),\mathfrak{g}^{e}(3)=V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{sl}}(1),$$\mathfrak{g}^{e}(2)=\left(V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{sl}}(0)\right)\oplus\left(V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{sl}}(0)\right).$\tabularnewline \hline \hline $x_{2}+x_{5}$ & 
$2h_{1}+4h_{2}$ & $\mathfrak{osp}(1|2)$ & $\mathfrak{g}^{e}(2)=V^{\mathfrak{osp}}(0)\oplus V^{\mathfrak{osp}}(1)\oplus V^{\mathfrak{osp}}(1)$, $\mathfrak{g}^{e}(4)=V^{\mathfrak{osp}}(0)$\tabularnewline \hline \hline $0$ & $0$ & $\mathfrak{g}$ & $0$\tabularnewline \hline \end{longtable} \par\end{center} \end{singlespace} \begin{singlespace} In the remaining part of this subsection, we explain the explicit calculations for finding $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$ and obtain the corresponding labelled Dynkin diagrams for the nilpotent element $E+x_{2}$. The results for the remaining cases are obtained using the same approach. When $e=E+x_{2}$, we already know that $\mathfrak{sl}(2)^{E}=\langle E\rangle$; we now work out $\mathrm{G}_{2}^{x_{2}}$. Observe that $h_{G_{2}}=\text{diag}(0,1,-1,0,1,-1,0)=h_{2}$ belongs to an $\mathfrak{sl}(2)$-triple $\{e_{G_{2}},h_{G_{2}},f_{G_{2}}\}$ in $G_{2}$ containing $e_{G_{2}}=x_{2}$. Then we can work out the ad$h_{G_{2}}$-eigenspaces with non-negative eigenvalues of $G_{2}$: \end{singlespace} \noindent \begin{center} \begin{tabular}{|c|c|c|c|} \hline Eigenvalues of $h_{G_{2}}$ & $0$ & $1$ & $2$\tabularnewline \hline \hline Eigenvectors & $h_{1},h_{2},x_{4},y_{4}$ & $y_{1},x_{3},x_{6},y_{5}$ & $x_{2}$\tabularnewline \hline \end{tabular} \par\end{center} \begin{singlespace} \noindent This demonstrates that $G_{2}^{x_{2}}\cong G_{2}^{x_{2}}(0)\oplus G_{2}^{x_{2}}(1)\oplus G_{2}^{x_{2}}(2)$. It is clear that $G_{2}^{x_{2}}(2)=\langle x_{2}\rangle$ and $G_{2}^{x_{2}}(1)=\langle y_{1},x_{3},x_{6},y_{5}\rangle$. Note that $G_{2}^{x_{2}}(0)$ has dimension $3$. Since $\left[2h_{1}+3h_{2},x_{2}\right]=0$ and $\left[x_{4},x_{2}\right]=0=\left[y_{4},x_{2}\right]$, we have that $G_{2}^{x_{2}}(0)=\langle x_{4},y_{4},2h_{1}+3h_{2}\rangle$. Therefore, we have that $\mathfrak{g}_{\bar{0}}^{e}$ has a basis $\{E,2h_{1}+3h_{2},x_{4},y_{4},x_{2},y_{1},x_{3},x_{6},y_{5}\}$. 
\end{singlespace} \begin{singlespace} Now we calculate $\mathfrak{g}_{\bar{1}}^{e}$. We look at the $\mathfrak{sl}(2)$-triple $\{e,h,f\}\subseteq\mathfrak{g}_{\bar{0}}$ and work out all non-negative ad$h$-eigenspaces in $\mathfrak{g}_{\bar{1}}$. \end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{tabular}{|>{\centering}m{3cm}||c|>{\centering}m{3cm}|>{\centering}m{4cm}|} \hline ad$h$-eigenvalues & $2$ & $1$ & $0$\tabularnewline \hline \hline ad$h$-eigenvectors & $v_{1}\otimes e_{2}$, $v_{1}\otimes e_{-1}$ & $v_{1}\otimes e_{3}$, $v_{1}\otimes e_{0}$, $v_{1}\otimes e_{-3}$ & $v_{1}\otimes e_{1}$, $v_{1}\otimes e_{-2}$, $v_{-1}\otimes e_{2}$, $v_{-1}\otimes e_{-1}$\tabularnewline \hline \end{tabular} \par\end{center} \end{singlespace} \begin{singlespace} \noindent The above table implies that $\mathfrak{g}_{\bar{1}}^{e}\cong\mathfrak{g}_{\bar{1}}^{e}(0)\oplus\mathfrak{g}_{\bar{1}}^{e}(1)\oplus\mathfrak{g}_{\bar{1}}^{e}(2)$ where $\mathfrak{g}_{\bar{1}}^{e}(2)$ has a basis $\{v_{1}\otimes e_{2},v_{1}\otimes e_{-1}\}$ and $\mathfrak{g}_{\bar{1}}^{e}(1)$ has a basis $\{v_{1}\otimes e_{3},v_{1}\otimes e_{0},v_{1}\otimes e_{-3}\}$. To determine $\mathfrak{g}_{\bar{1}}^{e}(0)$, we need to find elements of the form $x=a_{1,1}v_{1}\otimes e_{1}+a_{1,-2}v_{1}\otimes e_{-2}+a_{-1,2}v_{-1}\otimes e_{2}+a_{-1,-1}$$v_{-1}\otimes e_{-1}$ that are centralized by $e$. Then $\left[e,x\right]=0$ gives that $a_{1,1}=-a_{-1,2}$ and $a_{1,-2}=a_{-1,-1}$. Hence $\mathfrak{g}_{\bar{1}}^{e}(0)$ has a basis $\{v_{1}\otimes e_{1}-v_{-1}\otimes e_{2},v_{1}\otimes e_{-2}+v_{-1}\otimes e_{-1}\}$. Therefore, $\mathfrak{g}_{\bar{1}}^{e}$ has a basis $\{v_{1}\otimes e_{2},v_{1}\otimes e_{-1},v_{1}\otimes e_{3},v_{1}\otimes e_{0},$$v_{1}\otimes e_{-3},v_{1}\otimes e_{1}-v_{-1}\otimes e_{2},v_{1}\otimes e_{-2}+v_{-1}\otimes e_{-1}\}$. In conclusion, we have that $\dim\mathfrak{g}^{e}=9+7=16$. 
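As a cross-check, this dimension can be recovered from the $\mathfrak{osp}(1|2)$-module decomposition recorded in Table \ref{tab:g^e(0)-G(3)}, under the convention (an assumption in this check) that the irreducible $\mathfrak{osp}(1|2)$-module $V^{\mathfrak{osp}}(j)$ has dimension $2j+1$: \[ \dim\mathfrak{g}^{e}=\dim\mathfrak{osp}(1|2)+\dim V^{\mathfrak{osp}}(3)+\dim V^{\mathfrak{osp}}(0)+\dim V^{\mathfrak{osp}}(1)=5+7+1+3=16. \]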
\end{singlespace} \begin{singlespace} By computing commutator relations between basis elements for $\mathfrak{g}^{e}(0)$, we deduce that $\mathfrak{g}^{e}(0)\cong\mathfrak{osp}(1|2)$ according to Lemma \ref{lem:osp(1,2)} where $y_{4},v_{1}\otimes e_{-2}+v_{-1}\otimes e_{-1},2h_{1}+3h_{2},v_{1}\otimes e_{1}-v_{-1}\otimes e_{2},x_{4}$ correspond to $u_{-2},u_{-1},u_{0},u_{1},u_{2}$ in Lemma \ref{lem:osp(1,2)}. Moreover, we obtain that $\mathfrak{g}^{e}(1)=V^{\mathfrak{osp}}(3)$ and $\mathfrak{g}^{e}(2)=V^{\mathfrak{osp}}(0)\oplus V^{\mathfrak{osp}}(1)$. Hence, we have that \[ \mathfrak{z}=\mathfrak{z}(0)\oplus\mathfrak{z}(1)\oplus\mathfrak{z}(2)\subseteq\left(\mathfrak{g}^{e}(0)\right)^{\mathfrak{g}^{e}(0)}\oplus\left(\mathfrak{g}^{e}(1)\right)^{\mathfrak{g}^{e}(0)}\oplus\left(\mathfrak{g}^{e}(2)\right)^{\mathfrak{g}^{e}(0)}=\langle E+x_{2}\rangle. \] Note that $E+x_{2}\in\mathfrak{z}$; therefore $\mathfrak{z}=\langle E+x_{2}\rangle$ and it has dimension $1$. Next we look at the labelled Dynkin diagrams with respect to $e$. We obtain that roots in $\mathfrak{g}(>0)$ are $\{2\delta,\varepsilon_{2}-\varepsilon_{1},-\varepsilon_{1},\varepsilon_{2},\varepsilon_{2}-\varepsilon_{3},\varepsilon_{3}-\varepsilon_{1},\delta+\varepsilon_{2},\delta-\varepsilon_{1},\delta-\varepsilon_{3},\delta,\delta+\varepsilon_{3}\}$ and roots in $\mathfrak{g}(0)$ are $\Phi(0)=\{\pm\varepsilon_{3},\pm(\delta+\varepsilon_{1}),\pm(\delta-\varepsilon_{2})\}$. Hence, there are three systems of simple roots of $\mathfrak{g}(0)$: $\varPi_{1}(0)=\{-\varepsilon_{3},-\delta-\varepsilon_{1}\}$, $\varPi_{2}(0)=\{-\delta+\varepsilon_{2},\delta+\varepsilon_{1}\}$ and $\varPi_{3}(0)=\{\delta-\varepsilon_{2},-\varepsilon_{3}\}$ up to conjugacy. 
By extending $\varPi_{i}(0)$ to $\varPi$ for $i=1,2,3$, we get three systems of positive roots $\Phi_{i}^{+}$ and simple roots $\varPi_{i}$ and thus there are three conjugacy classes of Borel subalgebras satisfying $\mathfrak{b}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Phi^{+}}\mathfrak{g}_{\alpha}\subseteq\bigoplus_{j\geq0}\mathfrak{g}(j)$. Hence, the systems of simple roots are: $\varPi_{1}=\{-\varepsilon_{3},-\delta-\varepsilon_{1},\delta+\varepsilon_{3}\}$. We compute $\mu_{12}=1$, $\mu_{13}=2$ and $\mu_{23}=3$ using Formula (\ref{eq:lines-=0003BC}). Therefore, the corresponding labelled Dynkin diagram is the Dynkin diagram in Figure 5.4 with labels $0,0,1$. \end{singlespace} $\varPi_{2}=\{-\delta+\varepsilon_{2},\delta+\varepsilon_{1},\varepsilon_{3}-\varepsilon_{1}\}$. We compute $\mu_{12}=1$ and $\mu_{23}=3$ using Formula (\ref{eq:lines-=0003BC}). Therefore, the corresponding labelled Dynkin diagram is the Dynkin diagram in Figure 5.2 with labels $0,0,1$. $\varPi_{3}=\{\delta-\varepsilon_{2},-\varepsilon_{3},\varepsilon_{3}-\varepsilon_{1}\}$. We compute $\mu_{12}=1$ and $\mu_{23}=3$ using Formula (\ref{eq:lines-=0003BC}). Therefore, the corresponding labelled Dynkin diagram is the Dynkin diagram in Figure 5.1 with labels $0,0,1$. \subsection{Analysis of results\label{subsec:Analysis-of-results-G3}} \begin{singlespace} \noindent Let $\mathfrak{h}=\left\langle H,h_{1},h_{2}\right\rangle \subseteq\mathfrak{g}$. Denote a simple root system for $\mathfrak{g}^{h}$ by $\varPi_{h}$. In order to prove Theorem 1 for $G(3),$ we consider two cases in which the corresponding labelled Dynkin diagram has no label equal to $1$. They are $e=x_{1}+x_{2}$ and $e=x_{2}+x_{5}$. \end{singlespace} \begin{singlespace} When $e=x_{1}+x_{2}\in G(3)$, we have $\dim\mathfrak{z}(\mathfrak{g}^{e})=2$ and $n_{2}(\varDelta)=2$. Note that $\mathfrak{g}^{h}$ is generated by root vectors $e_{\pm2\delta}$, $e_{\pm\delta}$ and $\mathfrak{h}$. 
Hence, we can find a simple root system $\varPi_{h}=\{\delta\}$ and thus $\mathfrak{g}^{h}=\mathfrak{z}(\mathfrak{g}^{h})\oplus\mathfrak{osp}(1|2)$ according to \cite[Subsection 3.4.1]{Musson2012}. Then $\mathfrak{z}(\mathfrak{g}^{h})=\{t\in\mathfrak{h}:\delta(t)=0\}$. Hence, $\dim\mathfrak{z}(\mathfrak{g}^{h})=2=n_{2}(\varDelta)=\dim\mathfrak{z}(\mathfrak{g}^{e})$. When $e=x_{2}+x_{5}\in G(3)$, we have $\dim\mathfrak{z}(\mathfrak{g}^{e})=2$ but $n_{2}(\varDelta)=1$. Note that $\mathfrak{g}^{h}$ is generated by root vectors $e_{\pm2\delta},e_{\pm\delta},e_{\pm\varepsilon_{1}},e_{\pm(\delta+\varepsilon_{1})},e_{\pm(\delta-\varepsilon_{1})}$ and $\mathfrak{h}$, thus we can find a simple root system $\varPi_{h}=\{\varepsilon_{1},\delta-\varepsilon_{1}\}$, so that $\mathfrak{g}^{h}=\mathfrak{z}(\mathfrak{g}^{h})\oplus\mathfrak{osp}(3|2)$ according to \cite[Subsection 3.4.1]{Musson2012}. Then $\mathfrak{z}(\mathfrak{g}^{h})=\{t\in\mathfrak{h}:\varepsilon_{1}(t)=(\delta-\varepsilon_{1})(t)=0\}$. Hence, $\dim\mathfrak{z}(\mathfrak{g}^{h})=1=n_{2}(\varDelta)$ but $\dim\mathfrak{z}(\mathfrak{g}^{h})\neq\dim\mathfrak{z}(\mathfrak{g}^{e})$. We will further discuss this case in \S5.5 and complete the verification of Theorem 1 for $G(3)$. In order to prove Theorem 2 for $G(3)$, we look at the three cases below; the remaining cases do not have labels equal to $2$, so the $2$-free core $\varDelta_{0}$ of $\varDelta$ is the same as $\varDelta$. When $e=E+(x_{1}+x_{2})\in G(3)$, we have $\dim\mathfrak{g}^{e}=5$, $\dim\mathfrak{z}(\mathfrak{g}^{e})=3$ and $\varDelta$ is given in Table \ref{tab:G(3)}. From \cite[Subsection 3.4.1]{Musson2012} we have that the Lie superalgebra corresponding to $\varDelta_{0}$ is $\mathfrak{g}_{0}=\mathfrak{osp}(3|2)$. An explicit explanation of the construction of $\mathfrak{osp}(3|2)$ can be found in \cite[Section 2.3]{Musson2012}. 
We choose the representative of the nilpotent orbit $e_{0}\in\mathfrak{osp}(3|2)_{\bar{0}}$ to be \end{singlespace} \begin{singlespace} \noindent \[ \begin{pmatrix}0 & 1 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \] thus $e_{0}$ has Jordan type $(3|2)$. Hence, we obtain that $\dim\mathfrak{g}_{0}^{e_{0}}=4$ according to \cite[Subsection 3.2.2]{Hoyt2012}. Furthermore, we calculate that $\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=2$. Therefore, $\dim\mathfrak{g}^{e}-\dim\mathfrak{g}_{0}^{e_{0}}=n_{2}(\varDelta)=1$ and $\dim\mathfrak{z}(\mathfrak{g}^{e})-\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=n_{2}(\varDelta)=1$ for this case. \end{singlespace} \begin{singlespace} When $e=x_{1}+x_{2}\in G(3)$, we have $\dim\mathfrak{g}^{e}=7$, $\dim\mathfrak{z}(\mathfrak{g}^{e})=2$ and $\varDelta$ is given in Table \ref{tab:G(3)}. According to \cite[Subsection 3.4.1]{Musson2012}, we have that the Lie superalgebra corresponding to $\varDelta_{0}$ is $\mathfrak{g}_{0}=\mathfrak{osp}(1|2)$ and $e_{0}=0$. Thus we know that $\mathfrak{g}_{0}^{e_{0}}=\mathfrak{g}_{0}$ and $\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=\mathfrak{z}(\mathfrak{g}_{0})=\{0\}$. Hence, we have $\dim\mathfrak{g}_{0}^{e_{0}}=\dim\mathfrak{g}_{0}=5$ and $\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=\dim\mathfrak{z}(\mathfrak{g}_{0})=0$. Therefore, $\dim\mathfrak{g}^{e}-\dim\mathfrak{g}_{0}^{e_{0}}=n_{2}(\varDelta)=2$ and $\dim\mathfrak{z}(\mathfrak{g}^{e})-\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=n_{2}(\varDelta)=2$ for this case. When $e=x_{2}+x_{5}\in G(3)$, we have $\dim\mathfrak{g}^{e}=13$, $\dim\mathfrak{z}(\mathfrak{g}^{e})=2$ and $\varDelta$ is given in Table \ref{tab:G(3)}. From \cite[Subsection 3.4.1]{Musson2012} we have that the Lie superalgebra corresponding to $\varDelta_{0}$ is $\mathfrak{g}_{0}=\mathfrak{osp}(3|2)$ and $e_{0}=0$. 
Similar to the above case, we have $\dim\mathfrak{g}_{0}^{e_{0}}=\dim\mathfrak{g}_{0}=12$ and $\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=\dim\mathfrak{z}(\mathfrak{g}_{0})=0$. Therefore, $\dim\mathfrak{g}^{e}-\dim\mathfrak{g}_{0}^{e_{0}}=n_{2}(\varDelta)=1$ and $\dim\mathfrak{z}(\mathfrak{g}^{e})-\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=2\neq n_{2}(\varDelta)$ for this case. We will further discuss this case in \S5.5. \end{singlespace} \subsection{Adjoint action on $G(3)$\label{subsec:Adjoint-action-of-G(3)}} \begin{singlespace} Let $K$ be a simple Lie group of type $G_{2}$, then $\mathrm{Lie}(K)$ is the Lie algebra of type $G_{2}$. Let $G=\mathrm{SL_{2}(\mathbb{C})}\times K$. For a nilpotent element $e=e_{\mathfrak{sl}}+e_{G_{2}}\in\mathfrak{g}_{\bar{0}}$ where $e_{\mathfrak{sl}}\in\mathfrak{sl}(2)$ and $e_{G_{2}}\in\mathrm{Lie}(K)$, we determine $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}$ in this subsection. Write $(K^{e_{G_{2}}})^{\circ}$ for the connected component of $K^{e_{G_{2}}}$ containing the identity and let $K^{e_{G_{2}}}(0)=K^{h_{G_{2}}}\cap K^{e_{G_{2}}}$. An explicit structure of $K^{e_{G_{2}}}$ has been given in \cite[Section 11]{Lawther2008}. \end{singlespace} For $e_{\mathfrak{sl}}=0$, the centralizer in $\mathrm{\mathrm{SL_{2}(\mathbb{C})}}$ is connected. Thus it suffices to only look at the action of $K^{e}$ on $\mathfrak{z}\subseteq\mathfrak{z}_{\bar{0}}$. In this case, we know that $G^{e}/(G^{e})^{\circ}\cong K^{e}/(K^{e})^{\circ}\cong K^{e}(0)/(K^{e}(0))^{\circ}$. When $e=x_{1},x_{2},x_{1}+x_{2}$, we have that $K^{e}(0)/(K^{e}(0))^{\circ}=1$ by \cite[Section 11]{Lawther2008} and thus $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\mathfrak{z}(\mathfrak{g}^{e})$. When $e=x_{2}+x_{5}$, the component group does not centralize $x_{6}$ according to \cite[page 73]{Lawther2008}. Hence, we deduce that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\left\langle e\right\rangle $. 
Therefore, based on \S5.4, we have that $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\dim\mathfrak{z}(\mathfrak{g}^{h})=1$ and $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}-\dim\left(\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})\right)^{G_{0}^{e_{0}}}=n_{2}(\varDelta)=1$ for this case. For $e_{\mathfrak{sl}}=E$, we know that $G^{e}=(\{\pm1\}\ltimes R^{E})\times K^{e_{G_{2}}}$ where $R^{E}$ is a connected normal subgroup of $G^{e}$. When $e=E+x_{2}$, $E+x_{1}$, $E+(x_{2}+x_{5})$, we have $\mathfrak{z}\subseteq\mathfrak{z}_{\bar{0}}$. We know that $\pm1\in\mathrm{SL}_{2}(\mathbb{C})^{E}$ act trivially on $\mathfrak{z}_{\bar{0}}$ and $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{K^{e_{G_{2}}}}=\left\langle e\right\rangle $ by \cite[Section 11]{Lawther2008}. Hence, we have that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\mathfrak{z}(\mathfrak{g}^{e})$ for $e=E+x_{2}$, $E+x_{1}$ and $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\left\langle e\right\rangle $ for $e=E+(x_{2}+x_{5})$. When $e=E+(x_{1}+x_{2})$, the component group of $\mathrm{SL}_{2}(\mathbb{C})^{E}$ has order $2$ and we consider the element $g=-1\in G^{e}/(G^{e})^{\circ}$. We know that $g$ acts trivially on $\mathfrak{z}_{\bar{0}}$, thus $e,x_{6}\in\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{g}$. However, $v_{1}\otimes e_{3}\notin\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{g}$ since the action of $g$ on $v_{1}\otimes e_{3}$ sends it to $-v_{1}\otimes e_{3}$. Hence, we have that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}\subseteq\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{g}=\left\langle e,x_{6}\right\rangle $. Therefore, we have that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\left\langle e,x_{6}\right\rangle $. Based on \S5.4, we know that $G_{0}=\mathrm{O}_{3}(\mathbb{C})\times\mathrm{Sp}_{2}(\mathbb{C})$. 
By considering the element $g'=-1\in G_{0}^{e_{0}}/(G_{0}^{e_{0}})^{\circ}$, we obtain that $\left(\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})\right)^{G_{0}^{e_{0}}}\subseteq\left(\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})\right)^{g'}=\left\langle e_{0}\right\rangle $. Therefore, we deduce that $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}-\dim\left(\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})\right)^{G_{0}^{e_{0}}}=n_{2}(\varDelta)=1$ for this case. The above argument completes the proof of Theorems 1 and 2 for $G(3)$. \begin{singlespace} By combining results in Table \ref{tab:G(3)}, we have that $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\left\lceil \frac{1}{2}\sum_{i=1}^{3}a_{i}\right\rceil $ which proves the statement of Theorem 3 for $G(3)$. \end{singlespace} \begin{singlespace} \section{The Exceptional Lie superalgebra $F(4)$\label{sec:F(4)}} \end{singlespace} \subsection{$\mathfrak{so}(V,\beta)$ embedded into $\mathrm{C}(V,\beta)$\label{subsec:Clifford-algebra}} \begin{singlespace} \noindent Let $V$ be a finite-dimensional complex vector space with a basis $\{v_{i}:i=1,\dots,m\}$ and a symmetric bilinear form $\beta$ and let $\mathrm{C}(V,\beta)$ be the Clifford algebra for $(V,\beta)$. An explicit definition of the Clifford algebra can be found in \cite[Chapter 6]{Goodman2009}. Recall that for any $x,y\in V$, we have that $\{x,y\}=\beta(x,y)1$ where $\{x,y\}=xy+yx$ is the anticommutator of $x$ and $y$. Then $\mathrm{C}(V,\beta)$ is spanned by $1$ and the products $v_{i_{1}}\dots v_{i_{l}}$ for $1\leq i_{1}<i_{2}<\dots<i_{l}\leq m$. \end{singlespace} \begin{singlespace} For $u,w\in V$, we define $R_{u,w}\in\mathrm{End}(V)$ by $R_{u,w}(v)=\beta(w,v)u-\beta(u,v)w$. For any $x,y\in V$, we can check that \[ \beta(R_{u,w}(x),y)=\beta(w,x)\beta(u,y)-\beta(u,x)\beta(w,y)=-\beta(x,R_{u,w}(y)). \] Hence, we know that $R_{u,w}\in\mathfrak{so}(V,\beta)$. 
The linear transformations $R_{u,w}$ for $u,w\in V$ span $\mathfrak{so}(V,\beta)$; see \cite[Lemma 6.2.1]{Goodman2009}. In this section we only consider the case $\dim V=7$. Then there exists a decomposition $V=W\oplus\left\langle e_{0}\right\rangle \oplus W^{*}$ where $W,W^{*}$ are a pair of dual maximal isotropic subspaces of $V$ corresponding to $\beta$ and $\{e_{1},e_{2},e_{3}\}$ (resp. $\{e_{-1},e_{-2},e_{-3}\}$) is a basis for $W$ (resp. $W^{*}$). The basis is chosen such that $\beta(e_{i},e_{-j})=\delta_{ij}$ for $i,j\in\{1,2,3\}$, $\beta(e_{0},e_{0})=2$ and $\beta(e_{0},W)=\beta(e_{0},W^{*})=0$. Then \[ R_{e_{i},e_{-j}}=e_{i,j}-e_{-j,-i},R_{e_{i},e_{j}}=e_{i,-j}-e_{j,-i}\text{ for }i<j,R_{e_{-i},e_{-j}}=e_{-i,j}-e_{-j,i}\text{ for }i>j, \] \[ \text{and }R_{e_{i},e_{0}}=2e_{i,0}-e_{0,-i},\ R_{e_{-i},e_{0}}=2e_{-i,0}-e_{0,i} \] form a basis for $\mathfrak{so}(V,\beta)$ where $e_{i,j}$ is the elementary transformation which sends $e_{j}$ to $e_{i}$ and the rest of the basis vectors to $0$. Next we determine representatives of nilpotent orbits in $\mathfrak{so}(7)$ using the above notation. According to \cite[Section 1.6]{Jantzen2004a}, we know that any nilpotent orbit in $\mathfrak{so}(7)$ has Jordan type $\lambda\in\{(7),(5,1^{2}),(3^{2},1),(3,2^{2}),(3,1^{4}),(2^{2},1^{3}),(1^{7})\}$. By using the orthogonal Dynkin pyramid of $\lambda$ that is defined in \cite[Section 6]{Elashvili2005}, we are able to give a representative of each nilpotent orbit using matrices. Let us fix the corresponding representatives of each of the nilpotent orbits to be as in Table \ref{tab:nilpotent ele}. For each nilpotent element $e_{\mathfrak{so}}\in\mathfrak{so}(7)$, we also give a semisimple element $h_{\mathfrak{so}}$ such that there is an $\mathfrak{sl}(2)$-triple $\{e_{\mathfrak{so}},h_{\mathfrak{so}},f_{\mathfrak{so}}\}\subseteq\mathfrak{so}(7)$. 
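Since the above data is completely explicit, it can be verified numerically. The following is a small sketch (the variable names and the basis ordering $e_{1},e_{2},e_{3},e_{0},e_{-3},e_{-2},e_{-1}$ are our own choices, not fixed by the text) that builds the Gram matrix of $\beta$, checks that every $R_{u,w}$ satisfies $X^{T}B+BX=0$, and recovers $\dim\mathfrak{so}(7)^{e_{(3,1^{4})}}=11$ as the kernel dimension of $\operatorname{ad}e_{(3,1^{4})}$ on $\mathfrak{so}(7)$, matching Table \ref{tab:Centralizers-so7}:

```python
import numpy as np
from itertools import combinations

# Basis order e_1, e_2, e_3, e_0, e_{-3}, e_{-2}, e_{-1} (our choice).
labels = [1, 2, 3, 0, -3, -2, -1]
idx = {l: k for k, l in enumerate(labels)}

# Gram matrix of beta: beta(e_i, e_{-i}) = 1 for i = 1,2,3 and beta(e_0, e_0) = 2.
B = np.zeros((7, 7))
for i in (1, 2, 3):
    B[idx[i], idx[-i]] = B[idx[-i], idx[i]] = 1.0
B[idx[0], idx[0]] = 2.0

def e(i):
    v = np.zeros(7)
    v[idx[i]] = 1.0
    return v

def R(u, w):
    """Matrix of R_{u,w}: v |-> beta(w,v)u - beta(u,v)w (columns are images of basis vectors)."""
    M = np.zeros((7, 7))
    for j in range(7):
        v = np.eye(7)[j]
        M[:, j] = (w @ B @ v) * u - (u @ B @ v) * w
    return M

# The 21 transformations R_{e_a,e_b} for a < b all lie in so(V, beta) and span it.
basis = [R(e(a), e(b)) for a, b in combinations(labels, 2)]
assert all(np.allclose(X.T @ B + B @ X, 0) for X in basis)

# Centralizer of e_{(3,1^4)} = R_{e_1,e_0}: kernel dimension of ad(e) on the 21-dim so(7).
E314 = R(e(1), e(0))
ad = np.stack([(E314 @ X - X @ E314).ravel() for X in basis])
dim_centralizer = 21 - np.linalg.matrix_rank(ad)
assert dim_centralizer == 11  # agrees with Table tab:Centralizers-so7
```

Running the same loop with the other representatives from Table \ref{tab:nilpotent ele} reproduces the remaining centralizer dimensions $3,5,7,9,13,21$.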
\end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{longtable}[c]{|>{\centering}m{1.5cm}||>{\centering}m{5.4cm}||>{\centering}m{6.6cm}|} \caption{\label{tab:nilpotent ele}Nilpotent orbits in $\mathfrak{so}(7)$} \tabularnewline \endfirsthead \hline Jordan types & nilpotent element $e_{\mathfrak{so}}$ & semisimple element $h_{\mathfrak{so}}$\tabularnewline \hline \hline $(7)$ & $e_{(7)}=R_{e_{1},e_{-2}}+R_{e_{2},e_{-3}}+R_{e_{3},e_{0}}$ & $h_{(7)}=6R_{e_{1},e_{-1}}+4R_{e_{2},e_{-2}}+2R_{e_{3},e_{-3}}$\tabularnewline \hline \hline $(5,1^{2})$ & $e_{(5,1^{2})}=R_{e_{1},e_{-2}}+R_{e_{2},e_{0}}$ & $h_{(5,1^{2})}=4R_{e_{1},e_{-1}}+2R_{e_{2},e_{-2}}$\tabularnewline \hline \hline $(3^{2},1)$ & $e_{(3^{2},1)}=R_{e_{1},e_{-3}}+R_{e_{2},e_{3}}$ & $h_{(3^{2},1)}=2R_{e_{1},e_{-1}}+2R_{e_{2},e_{-2}}$\tabularnewline \hline \hline $(3,2^{2})$ & $e_{(3,2^{2})}=R_{e_{1},e_{0}}+R_{e_{2},e_{3}}$ & $h_{(3,2^{2})}=2R_{e_{1},e_{-1}}+R_{e_{2},e_{-2}}+R_{e_{3},e_{-3}}$\tabularnewline \hline \hline $(3,1^{4})$ & $e_{(3,1^{4})}=R_{e_{1},e_{0}}$ & $h_{(3,1^{4})}=2R_{e_{1},e_{-1}}$\tabularnewline \hline \hline $(2^{2},1^{3})$ & $e_{(2^{2},1^{3})}=R_{e_{1},e_{2}}$ & $h_{(2^{2},1^{3})}=R_{e_{1},e_{-1}}+R_{e_{2},e_{-2}}$\tabularnewline \hline \hline $(1^{7})$ & $e_{(1^{7})}=0$ & $h_{(1^{7})}=0$\tabularnewline \hline \end{longtable} \par\end{center} \end{singlespace} \begin{singlespace} For each nilpotent orbit representative $e_{\mathfrak{so}}\in\mathfrak{so}(7)$, by solving $\left[e_{\mathfrak{so}},x\right]=0$ for $x\in\mathfrak{so}(7)$ we obtain the centralizers $\mathfrak{so}(7)^{e_{\mathfrak{so}}}$ in the following table: \begin{longtable}[c]{|>{\centering}m{1.9cm}|>{\centering}m{10.3cm}||>{\centering}m{1.9cm}|} \caption{\label{tab:Centralizers-so7}$\mathfrak{so}(7)^{e_{\mathfrak{so}}}$ of nilpotent orbits $e_{\mathfrak{so}}\in\mathfrak{so}(7)$} \tabularnewline \endfirsthead \hline $e_{\mathfrak{so}}\in\mathfrak{so}(7)$ & 
$\mathfrak{so}(7)^{e_{\mathfrak{so}}}$ & $\dim\mathfrak{so}(7)^{e_{\mathfrak{so}}}$ \tabularnewline \hline \hline $e_{(7)}$ & $\langle e_{(7)},R_{e_{1},e_{0}}-2R_{e_{2},e_{3}},R_{e_{1},e_{2}}\rangle$ & $3$\tabularnewline \hline \hline $e_{(5,1^{2})}$ & $\langle e_{(5,1^{2})},R_{e_{1},e_{-3}},R_{e_{3},e_{-3}},R_{e_{1},e_{3}},R_{e_{1},e_{2}}\rangle$ & $5$\tabularnewline \hline \hline $e_{(3^{2},1)}$ & $\langle e_{(3^{2},1)},R_{e_{1},e_{-1}}-R_{e_{2},e_{-2}}+R_{e_{3},e_{-3}},R_{e_{2},e_{-3}},R_{e_{2},e_{0}},R_{e_{1},e_{0}},R_{e_{1},e_{3}},R_{e_{1},e_{2}}\rangle$ & $7$\tabularnewline \hline \hline $e_{(3,2^{2})}$ & \begin{singlespace} \noindent $\langle R_{e_{1},e_{0}},R_{e_{2},e_{3}},R_{e_{2},e_{-2}}-R_{e_{3},e_{-3}},R_{e_{2},e_{-3}},R_{e_{3},e_{-2}},2R_{e_{1},e_{-3}}+R_{e_{2},e_{0}},-2R_{e_{1},e_{-2}}+R_{e_{3},e_{0}},R_{e_{1},e_{3}},R_{e_{1},e_{2}}\rangle$ \end{singlespace} & $9$\tabularnewline \hline \hline $e_{(3,1^{4})}$ & $\langle e_{(3,1^{4})},R_{e_{2},e_{3}},R_{e_{2},e_{-3}},R_{e_{2},e_{-2}},R_{e_{3},e_{-3}},$ $R_{e_{3},e_{-2}},R_{e_{-3},e_{-2}},R_{e_{1},e_{2}},R_{e_{1},e_{3}},R_{e_{1},e_{-3}},R_{e_{1},e_{-2}}\rangle$ & $11$\tabularnewline \hline \hline $e_{(2^{2},1^{3})}$ & $\langle e_{(2^{2},1^{3})},R_{e_{1},e_{-2}},R_{e_{1},e_{-1}}-R_{e_{2},e_{-2}},R_{e_{3},e_{-3}},R_{e_{2},e_{-1}},$ $R_{e_{-3},e_{0}},R_{e_{1},e_{3}},R_{e_{1},e_{0}},R_{e_{2},e_{-3}},R_{e_{3},e_{0}},R_{e_{2},e_{3}},R_{e_{1},e_{-3}},R_{e_{2},e_{0}}\rangle$ & $13$\tabularnewline \hline \hline $e_{(1^{7})}$ & $\mathfrak{so}(7)$ & $21$\tabularnewline \hline \end{longtable} \end{singlespace} \subsection{The spin representation of Lie algebra $\mathfrak{so}(7)$\label{subsec:A-spin-representation}} \begin{singlespace} \noindent According to \cite[Lemma 6.2.2]{Goodman2009}, there exists an injective Lie algebra homomorphism $\varphi:\mathfrak{so}(7)\rightarrow C(V,\beta)$ such that $\varphi(R_{e_{i},e_{-j}})=e_{i}e_{-j}$ for $i\neq j$ and $\varphi(R_{e_{i},e_{-i}})=e_{i}e_{-i}-\frac{1}{2}$ for 
$i\neq0$. Note that $\mathrm{C}(V,\beta)$ has a basis $\{e_{1}^{\delta_{1}}e_{2}^{\delta_{2}}e_{3}^{\delta_{3}}e_{0}^{\delta_{0}}e_{-3}^{\delta_{-3}}e_{-2}^{\delta_{-2}}e_{-1}^{\delta_{-1}}:\delta_{i}=0\text{ or }1\}$. The multiplication in $\mathrm{C}(V,\beta)$ satisfies $e_{0}^{2}=1$, $e_{i}^{2}=0$ for $i\neq0$, $e_{i}e_{j}=-e_{j}e_{i}$ for $i\neq-j$ and $e_{i}e_{-i}=-e_{-i}e_{i}+1$. Consider the subalgebra $D_{0,-}$ of $\mathrm{C}(V,\beta)$ defined by $D_{0,-}=\langle e_{0}^{\delta_{0}}e_{-3}^{\delta_{-3}}e_{-2}^{\delta_{-2}}e_{-1}^{\delta_{-1}}:\delta_{i}=0\text{ or }1\rangle.$ Let us define $S=C(V,\beta)\otimes_{D_{0,-}}\langle s\rangle$ where $\langle s\rangle$ is the 1-dimensional $D_{0,-}$-module such that $e_{-i}s=0\text{ for }i=1,2,3\text{ and }e_{0}s=s.$ In fact, $S$ is an $8$-dimensional representation for $C(V,\beta)$ with basis $\{1\otimes s,e_{1}\otimes s,e_{2}\otimes s,e_{3}\otimes s,e_{1}e_{2}\otimes s,e_{1}e_{3}\otimes s,e_{2}e_{3}\otimes s,e_{1}e_{2}e_{3}\otimes s\}$. We can restrict $S$ to be a representation of $\mathfrak{so}(7)=\mathfrak{so}(V,\beta)\subseteq C(V,\beta)$ and we call it the \textit{spin representation} for $\mathfrak{so}(7)$, see \cite[Section 6.2.2]{Goodman2009}. In the remaining subsections, we write the basis for $S$ as \begin{equation} \{s,e_{1}s,e_{2}s,e_{3}s,e_{1}e_{2}s,e_{1}e_{3}s,e_{2}e_{3}s,e_{1}e_{2}e_{3}s\}.\label{eq:basis V8} \end{equation} We sometimes denote basis elements $s,-e_{1}s,e_{2}s,-e_{3}s,e_{1}e_{2}s,e_{1}e_{3}s,e_{2}e_{3}s,e_{1}e_{2}e_{3}s$ of $V_{8}$ by $v_{---}$, $v_{+--}$, $v_{-+-}$, $v_{--+}$, $v_{++-}$, $v_{+-+}$, $v_{-++}$, $v_{+++}$ respectively. \end{singlespace} \subsection{Construction of the Lie superalgebra $F(4)$\label{subsec:Construction-of-F(4)}} \begin{singlespace} \noindent In order to fully describe the construction of $F(4)$, we first let $V_{2}=V$ where $V$ is defined in \S3. 
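Before using the spin module, its defining Clifford relations can be sanity-checked numerically. Below is a minimal sketch, assuming a Jordan--Wigner matrix model of the three "creation" generators $e_{1},e_{2},e_{3}$ (this concrete realization and all names are our own choices, not the construction in the text): it checks that seven $8\times8$ matrices satisfy $\{x,y\}=\beta(x,y)1$, that the vector playing the role of $s$ is killed by $e_{-1},e_{-2},e_{-3}$, and that $e_{0}$ fixes it.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
cr = np.array([[0.0, 0.0], [1.0, 0.0]])  # one-mode creation operator
an = cr.T                                # one-mode annihilation operator

def chain(*ops):
    """Kronecker product of a list of 2x2 operators."""
    M = ops[0]
    for op in ops[1:]:
        M = np.kron(M, op)
    return M

# Jordan-Wigner realization of three fermionic modes on (C^2)^{x3}:
# e_i -> creation, e_{-i} -> annihilation, e_0 -> (a sign choice of) parity.
gens = {
    1: chain(cr, I2, I2), 2: chain(Z, cr, I2), 3: chain(Z, Z, cr),
    -1: chain(an, I2, I2), -2: chain(Z, an, I2), -3: chain(Z, Z, an),
    0: chain(Z, Z, Z),
}

def beta(i, j):
    if i == j == 0:
        return 2.0   # beta(e_0, e_0) = 2
    if i == -j and i != 0:
        return 1.0   # beta(e_i, e_{-i}) = 1
    return 0.0

# Clifford relation {x, y} = beta(x, y) * 1 on the 8-dimensional module:
for i, X in gens.items():
    for j, Y in gens.items():
        assert np.allclose(X @ Y + Y @ X, beta(i, j) * np.eye(8))

# Vacuum vector s: killed by all e_{-i} and fixed by e_0, as required of <s>.
s = np.zeros(8)
s[0] = 1.0
assert all(np.allclose(gens[-i] @ s, 0) for i in (1, 2, 3))
assert np.allclose(gens[0] @ s, s)
```

Only the algebra relations and the vacuum conditions are checked here; which signed basis vectors correspond to $v_{\pm\pm\pm}$ depends on the conventions fixed in the text.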
We define $\psi_{2}:V_{2}\times V_{2}\rightarrow\mathbb{C}$ to be a non-degenerate skew-symmetric bilinear form such that $\psi_{2}(v_{1},v_{-1})=1$. We also define $p_{2}:V_{2}\times V_{2}\rightarrow\mathfrak{sl}(2)$ by $p_{2}(x,y)(z)=3(\psi_{2}(y,z)x-\psi_{2}(z,x)y)$ for $x,y,z\in V_{2}$. We compute that $p_{2}(v_{1},v_{-1})=-3H,p_{2}(v_{1},v_{1})=6E$ and $p_{2}(v_{-1},v_{-1})=-6F$. \end{singlespace} \begin{singlespace} Next let $V_{8}$ be the spin representation of $\mathfrak{so}(7)$ with a basis shown in (\ref{eq:basis V8}). Then let $\psi_{8}:V_{8}\times V_{8}\rightarrow\mathbb{C}$ be the non-degenerate symmetric bilinear form given by \[ \psi_{8}(v_{\sigma_{1},\sigma_{2},\sigma_{3}},v_{\sigma'_{1},\sigma'_{2},\sigma'_{3}})=\prod_{i=1}^{3}\delta_{\sigma_{i},-\sigma'_{i}} \] for $\sigma_{i},\sigma_{i}'\in\{+,-\}$, e.g. we have that $\psi_{8}(v_{+++},v_{+--})=0$ and $\psi_{8}(v_{+-+},v_{-+-})=1$. Define $p_{8}:V_{8}\times V_{8}\rightarrow\mathfrak{so}(7)$ to be the antisymmetric bilinear map given explicitly on the basis elements in Table \ref{tab:p8}. Note that the values of $p_{8}(\cdotp,\cdotp)$ are calculated based on the assumption that \begin{equation} (v_{+++},v_{++-})\longmapsto R_{e_{1},e_{2}}.\label{eq:v(+++)v(++-)=00003DRe1e2} \end{equation} For example, by applying $R_{e_{3},e_{-2}}$ to both sides of (\ref{eq:v(+++)v(++-)=00003DRe1e2}) we get \[ (R_{e_{3},e_{-2}}v_{+++},v_{++-})+(v_{+++},R_{e_{3},e_{-2}}v_{++-})\longmapsto[R_{e_{3},e_{-2}},R_{e_{1},e_{2}}], \] which implies that $(v_{+++},v_{+-+})\longmapsto R_{e_{1},e_{3}}$. 
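The bracket used in this step can be confirmed in the matrix realization of \S\ref{subsec:Clifford-algebra}. The sketch below (with our own basis ordering, which is not fixed by the text) checks that $[R_{e_{3},e_{-2}},R_{e_{1},e_{2}}]=R_{e_{1},e_{3}}$:

```python
import numpy as np

# Basis order e_1, e_2, e_3, e_0, e_{-3}, e_{-2}, e_{-1} (our choice).
labels = [1, 2, 3, 0, -3, -2, -1]
idx = {l: k for k, l in enumerate(labels)}

# Gram matrix of beta: beta(e_i, e_{-i}) = 1 for i = 1,2,3 and beta(e_0, e_0) = 2.
B = np.zeros((7, 7))
for i in (1, 2, 3):
    B[idx[i], idx[-i]] = B[idx[-i], idx[i]] = 1.0
B[idx[0], idx[0]] = 2.0

def e(i):
    v = np.zeros(7)
    v[idx[i]] = 1.0
    return v

def R(u, w):
    # R_{u,w}: v |-> beta(w,v)u - beta(u,v)w, as a 7x7 matrix
    M = np.zeros((7, 7))
    for j in range(7):
        v = np.eye(7)[j]
        M[:, j] = (w @ B @ v) * u - (u @ B @ v) * w
    return M

A, C = R(e(3), e(-2)), R(e(1), e(2))
assert np.allclose(A @ C - C @ A, R(e(1), e(3)))  # [R_{e3,e-2}, R_{e1,e2}] = R_{e1,e3}
```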
\end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{longtable}[c]{|>{\centering}m{1cm}|>{\centering}m{3cm}|>{\centering}m{3cm}|>{\centering}m{3cm}|>{\centering}m{3cm}|} \caption{\label{tab:p8}$p_{8}:V_{8}\times V_{8}\rightarrow\mathfrak{so}(7)$} \tabularnewline \endfirsthead \hline & $v_{---}$ & $v_{+--}$ & $v_{-+-}$ & $v_{--+}$\tabularnewline \hline \hline $v_{---}$ & \begin{onehalfspace} \noindent \centering{}$0$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{-3},e_{-2}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{-3},e_{-1}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{-2},e_{-1}}$ \end{onehalfspace} \tabularnewline \hline \hline $v_{+--}$ & \begin{onehalfspace} \noindent \centering{}$R_{e_{-3},e_{-2}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$0$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{-3},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{-2},e_{0}}$ \end{onehalfspace} \tabularnewline \hline \hline $v_{-+-}$ & \begin{onehalfspace} \noindent \centering{}$R_{e_{-3},e_{-1}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{-3},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$0$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{-1},e_{0}}$ \end{onehalfspace} \tabularnewline \hline \hline $v_{--+}$ & \begin{onehalfspace} \noindent \centering{}$R_{e_{-2},e_{-1}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{-2},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{-1},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$0$ \end{onehalfspace} \tabularnewline \hline \hline $v_{++-}$ & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{-3},e_{0}}$ \end{onehalfspace} & 
\begin{onehalfspace} \noindent \centering{}$R_{e_{1},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{2},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{1},e_{-1}}-\frac{1}{2}R_{e_{2},e_{-2}}+\frac{1}{2}R_{e_{3},e_{-3}}$ \end{onehalfspace} \tabularnewline \hline \hline $v_{+-+}$ & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{-2},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{1},e_{-2}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{1},e_{-1}}+\frac{1}{2}R_{e_{2},e_{-2}}-\frac{1}{2}R_{e_{3},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{3},e_{-2}}$ \end{onehalfspace} \tabularnewline \hline \hline $v_{-++}$ & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{-1},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{1},e_{-1}}-\frac{1}{2}R_{e_{2},e_{-2}}-\frac{1}{2}R_{e_{3},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{2},e_{-1}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$R_{e_{3},e_{-1}}$ \end{onehalfspace} \tabularnewline \hline \hline $v_{+++}$ & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{1},e_{-1}}-\frac{1}{2}R_{e_{2},e_{-2}}-\frac{1}{2}R_{e_{3},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{1},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{2},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{3},e_{0}}$ \end{onehalfspace} \tabularnewline \hline & \begin{onehalfspace} \noindent \centering{}$v_{++-}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$v_{+-+}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$v_{-++}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$v_{+++}$ 
\end{onehalfspace} \tabularnewline \hline $v_{---}$ & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{-3},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{-2},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{-1},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{1},e_{-1}}+\frac{1}{2}R_{e_{2},e_{-2}}+\frac{1}{2}R_{e_{3},e_{-3}}$ \end{onehalfspace} \tabularnewline \hline $v_{+--}$ & \begin{onehalfspace} \noindent \centering{}$-R_{e_{1},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$R_{e_{1},e_{-2}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{1},e_{-1}}+\frac{1}{2}R_{e_{2},e_{-2}}+\frac{1}{2}R_{e_{3},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{1},e_{0}}$ \end{onehalfspace} \tabularnewline \hline $v_{-+-}$ & \begin{onehalfspace} \noindent \centering{}$R_{e_{2},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{1},e_{-1}}-\frac{1}{2}R_{e_{2},e_{-2}}+\frac{1}{2}R_{e_{3},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$R_{e_{2},e_{-1}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{2},e_{0}}$ \end{onehalfspace} \tabularnewline \hline $v_{--+}$ & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{1},e_{-1}}+\frac{1}{2}R_{e_{2},e_{-2}}-\frac{1}{2}R_{e_{3},e_{-3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$R_{e_{3},e_{-2}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{3},e_{-1}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{3},e_{0}}$ \end{onehalfspace} \tabularnewline \hline $v_{++-}$ & \begin{onehalfspace} \noindent \centering{}$0$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{1},e_{0}}$ 
\end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{2},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{1},e_{2}}$ \end{onehalfspace} \tabularnewline \hline $v_{+-+}$ & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{1},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$0$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$\frac{1}{2}R_{e_{3},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{1},e_{3}}$ \end{onehalfspace} \tabularnewline \hline $v_{-++}$ & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{2},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-\frac{1}{2}R_{e_{3},e_{0}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$0$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$-R_{e_{2},e_{3}}$ \end{onehalfspace} \tabularnewline \hline $v_{+++}$ & \begin{onehalfspace} \noindent \centering{}$R_{e_{1},e_{2}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$R_{e_{1},e_{3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$R_{e_{2},e_{3}}$ \end{onehalfspace} & \begin{onehalfspace} \noindent \centering{}$0$ \end{onehalfspace} \tabularnewline \hline \end{longtable} \par\end{center} \end{singlespace} \begin{singlespace} \noindent For example, we can read from this table that $p_{8}(v_{+++},v_{++-})=R_{e_{1},e_{2}}$. \end{singlespace} \begin{singlespace} Together with the above definitions and notation, we are now able to describe the structure of $F(4)$. Recall that the Lie superalgebra of type $F(4)=\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ where \[ \mathfrak{g}_{\bar{0}}=\mathfrak{sl}(2)\oplus\mathfrak{so}(7)\text{ and }\mathfrak{g}_{\bar{1}}=V_{2}\otimes V_{8}. 
\] \end{singlespace} \begin{singlespace} \noindent We know that $\mathfrak{g}_{\bar{0}}$ is a Lie algebra thus we have the bracket $\mathfrak{g}_{\bar{0}}\times\mathfrak{g}_{\bar{0}}\rightarrow\mathfrak{g}_{\bar{0}}$ and the bracket $\left[\cdotp,\cdotp\right]:\mathfrak{g}_{\bar{0}}\times\mathfrak{g}_{\bar{1}}\rightarrow\mathfrak{g}_{\bar{1}}$ is given by $[x+y,v_{2}\otimes v_{8}]=xv_{2}\otimes v_{8}+v_{2}\otimes yv_{8}$ for $x\in\mathfrak{sl}(2),y\in\mathfrak{so}(7)$, $v_{2}\in V_{2}$ and $v_{8}\in V_{8}$. Note that $Ev_{1}=0$, $Ev_{-1}=v_{1}$ and $Hv_{i}=iv_{i}$ for $i\in\{\pm1\}$. The following table gives the action of $\mathfrak{so}(7)$ on each basis element of $V_{8}$: \noindent \begin{table}[H] \begin{singlespace} \noindent % \begin{longtable}[c]{|c|c|c|c|c|c|c|c|c|} \hline & $s$ & $e_{1}s$ & $e_{2}s$ & $e_{3}s$ & $e_{1}e_{2}s$ & $e_{1}e_{3}s$ & $e_{2}e_{3}s$ & $e_{1}e_{2}e_{3}s$\tabularnewline \hline \hline $R_{e_{1},e_{-1}}$ & $-\frac{1}{2}s$ & $\frac{1}{2}e_{1}s$ & $-\frac{1}{2}e_{2}s$ & $-\frac{1}{2}e_{3}s$ & $\frac{1}{2}e_{1}e_{2}s$ & $\frac{1}{2}e_{1}e_{3}s$ & $-\frac{1}{2}e_{2}e_{3}s$ & $\frac{1}{2}e_{1}e_{2}e_{3}s$\tabularnewline \hline $R_{e_{1},e_{-2}}$ & $0$ & $0$ & $e_{1}s$ & $0$ & $0$ & $0$ & $e_{1}e_{3}s$ & $0$\tabularnewline \hline $R_{e_{1},e_{-3}}$ & $0$ & $0$ & $0$ & $e_{1}s$ & $0$ & $0$ & $-e_{1}e_{2}s$ & $0$\tabularnewline \hline $R_{e_{1},e_{0}}$ & $e_{1}s$ & $0$ & $-e_{1}e_{2}s$ & $-e_{1}e_{3}s$ & $0$ & $0$ & $e_{1}e_{2}e_{3}s$ & $0$\tabularnewline \hline $R_{e_{1},e_{3}}$ & $e_{1}e_{3}s$ & $0$ & $-e_{1}e_{2}e_{3}s$ & $0$ & $0$ & $0$ & $0$ & $0$\tabularnewline \hline $R_{e_{1},e_{2}}$ & $e_{1}e_{2}s$ & $0$ & $0$ & $e_{1}e_{2}e_{3}s$ & $0$ & $0$ & $0$ & $0$\tabularnewline \hline $R_{e_{2},e_{-1}}$ & $0$ & $e_{2}s$ & $0$ & $0$ & $0$ & $e_{2}e_{3}s$ & $0$ & $0$\tabularnewline \hline $R_{e_{2},e_{-2}}$ & $-\frac{1}{2}s$ & $-\frac{1}{2}e_{1}s$ & $\frac{1}{2}e_{2}s$ & $-\frac{1}{2}e_{3}s$ & $\frac{1}{2}e_{1}e_{2}s$ & $-\frac{1}{2}e_{1}e_{3}s$ 
& $\frac{1}{2}e_{2}e_{3}s$ & $\frac{1}{2}e_{1}e_{2}e_{3}s$\tabularnewline \hline $R_{e_{2},e_{-3}}$ & $0$ & $0$ & $0$ & $e_{2}s$ & $0$ & $e_{1}e_{2}s$ & $0$ & $0$\tabularnewline \hline $R_{e_{2},e_{0}}$ & $e_{2}s$ & $e_{1}e_{2}s$ & $0$ & $-e_{2}e_{3}s$ & $0$ & $-e_{1}e_{2}e_{3}s$ & $0$ & $0$\tabularnewline \hline $R_{e_{2},e_{3}}$ & $e_{2}e_{3}s$ & $e_{1}e_{2}e_{3}s$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\tabularnewline \hline $R_{e_{3},e_{-1}}$ & $0$ & $e_{3}s$ & $0$ & $0$ & $-e_{2}e_{3}s$ & $0$ & $0$ & $0$\tabularnewline \hline $R_{e_{3},e_{-2}}$ & $0$ & $0$ & $e_{3}s$ & $0$ & $e_{1}e_{3}s$ & $0$ & $0$ & $0$\tabularnewline \hline $R_{e_{3},e_{-3}}$ & $-\frac{1}{2}s$ & $-\frac{1}{2}e_{1}s$ & $-\frac{1}{2}e_{2}s$ & $\frac{1}{2}e_{3}s$ & $-\frac{1}{2}e_{1}e_{2}s$ & $\frac{1}{2}e_{1}e_{3}s$ & $\frac{1}{2}e_{2}e_{3}s$ & $\frac{1}{2}e_{1}e_{2}e_{3}s$\tabularnewline \hline $R_{e_{3},e_{0}}$ & $e_{3}s$ & $e_{1}e_{3}s$ & $e_{2}e_{3}s$ & $0$ & $e_{1}e_{2}e_{3}s$ & $0$ & $0$ & $0$\tabularnewline \hline $R_{e_{-1},e_{0}}$ & $0$ & $-s$ & $0$ & $0$ & $e_{2}s$ & $e_{3}s$ & $0$ & $-e_{2}e_{3}s$\tabularnewline \hline $R_{e_{-2},e_{0}}$ & $0$ & $0$ & $-s$ & $0$ & $-e_{1}s$ & $0$ & $e_{3}s$ & $e_{1}e_{3}s$\tabularnewline \hline $R_{e_{-3},e_{0}}$ & $0$ & $0$ & $0$ & $-s$ & $0$ & $-e_{1}s$ & $-e_{2}s$ & $-e_{1}e_{2}s$\tabularnewline \hline $R_{e_{-3},e_{-1}}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $s$ & $0$ & $-e_{2}s$\tabularnewline \hline $R_{e_{-3},e_{-2}}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $s$ & $e_{1}s$\tabularnewline \hline $R_{e_{-2},e_{-1}}$ & $0$ & $0$ & $0$ & $0$ & $s$ & $0$ & $0$ & $e_{3}s$\tabularnewline \hline \end{longtable} \end{singlespace} \caption{\label{tab:action-of-so7}The action of $\mathfrak{so}(7)$ on $V_{8}$} \end{table} \end{singlespace} \begin{singlespace} For $x_{2},y_{2}\in V_{2},x_{8},y_{8}\in V_{8}$, we define \[ \left[x_{2}\otimes x_{8},y_{2}\otimes y_{8}\right]=\psi_{2}(x_{2},y_{2})p_{8}(x_{8},y_{8})+\psi_{8}(x_{8},y_{8})p_{2}(x_{2},y_{2}). 
\] \end{singlespace} \subsection{Root system and Dynkin diagrams of $F(4)$\label{subsec:Root-system-F(4)}} \begin{singlespace} \noindent In this subsection, we use the structure of the root system of $F(4)$ given in \cite[Appendix A]{Iohara2001}. Note that roots of $F(4)$ are given by $\Phi=\Phi_{\bar{0}}\cup\Phi_{\bar{1}}$ where \[ \Phi_{\bar{0}}=\{\pm\delta,\pm\varepsilon_{i}\pm\varepsilon_{j},\pm\varepsilon_{i}:i\neq j,i,j=1,2,3\}\text{ and }\Phi_{\bar{1}}=\{\frac{1}{2}(\pm\delta\pm\varepsilon_{1}\pm\varepsilon_{2}\pm\varepsilon_{3})\} \] where $\{\delta,\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}\}$ is an orthogonal basis such that $(\delta,\delta)=-6$, $(\varepsilon_{i},\varepsilon_{j})=2$ if $i=j$ and $(\varepsilon_{i},\varepsilon_{j})=0$ otherwise. \end{singlespace} \begin{singlespace} We list all roots and the corresponding root vectors in the table below: \end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{tabular}{|>{\centering}m{1.5cm}||>{\centering}m{0.4cm}||>{\centering}m{0.5cm}||>{\centering}m{2cm}||c||>{\centering}m{1.2cm}||>{\centering}m{1cm}||>{\centering}m{3cm}|} \hline Roots & $\delta$ & $-\delta$ & $\varepsilon_{i}+\varepsilon_{j}(i<j)$ & $-\varepsilon_{i}-\varepsilon_{j}(i<j)$ & $\varepsilon_{i}-\varepsilon_{j}$ & $\pm\varepsilon_{i}$ & $\frac{1}{2}(i\delta+\sigma_{1}\varepsilon_{1}+\sigma_{2}\varepsilon_{2}+\sigma_{3}\varepsilon_{3})$\tabularnewline \hline \hline Root vectors & $E$ & $F$ & $R_{e_{i},e_{j}}$ & $R_{e_{-j},e_{-i}}$ & $R_{e_{i},e_{-j}}$ & $R_{e_{\pm i},e_{0}}$ & $v_{i}\otimes v_{\sigma_{1},\sigma_{2},\sigma_{3}},\sigma_{i}\in\{\pm\}$\tabularnewline \hline \end{tabular} \par\end{center} \end{singlespace} \begin{singlespace} The following table covers all possible Dynkin diagrams with respect to different systems of simple roots based on \cite[Section 2.18]{Frappat1996}. Moreover, we have that all odd roots in the root system of $F(4)$ are isotropic. 
\end{singlespace} \begin{longtable}[c]{|>{\centering}m{7cm}||>{\centering}m{7cm}|} \caption{Dynkin diagrams for $F(4)$} \tabularnewline \endfirsthead \hline Simple systems $\varPi=\{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\}$ & Dynkin diagrams\tabularnewline \hline \hline $\{\frac{1}{2}(\delta-\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\varepsilon_{3},\varepsilon_{2}-\varepsilon_{3},\varepsilon_{1}-\varepsilon_{2}\}$ & Figure 6.1 \noindent \centering{}\includegraphics{\string"DD_TypeF_1\string".PNG}\tabularnewline \hline \hline $\{\frac{1}{2}(-\delta+\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}),\frac{1}{2}(\delta-\varepsilon_{1}-\varepsilon_{2}+\varepsilon_{3}),\varepsilon_{2}-\varepsilon_{3},\varepsilon_{1}-\varepsilon_{2}\}$ & Figure 6.2 \noindent \centering{}\includegraphics{\string"DD_TypeF_2\string".PNG}\tabularnewline \hline \hline $\{\varepsilon_{1}-\varepsilon_{2},\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}-\varepsilon_{3}),\frac{1}{2}(-\delta+\varepsilon_{1}+\varepsilon_{2}-\varepsilon_{3}),\varepsilon_{3}\}$ & Figure 6.3 \noindent \centering{}\includegraphics[bb=0bp 0bp 130bp 86bp,scale=0.8]{\string"DD_TypeF_3\string".PNG}\tabularnewline \hline \hline $\{\frac{1}{2}(\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}),\frac{1}{2}(-\delta+\varepsilon_{1}-\varepsilon_{2}+\varepsilon_{3}),\varepsilon_{2}-\varepsilon_{3}\}$ & Figure 6.4 \noindent \centering{}\includegraphics[bb=0bp 0bp 122bp 72.7899bp,scale=0.9]{\string"DD_TypeF_4\string".PNG}\tabularnewline \hline \hline $\{\delta,\frac{1}{2}(-\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\varepsilon_{3},\varepsilon_{2}-\varepsilon_{3}\}$ & Figure 6.5 \noindent \centering{}\includegraphics{\string"DD_TypeF_5\string".PNG}\tabularnewline \hline \hline $\{\delta,\frac{1}{2}(-\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}),\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\varepsilon_{3}\}$ & Figure 6.6 
\noindent \centering{}\includegraphics{\string"DD_TypeF_6\string".PNG}\tabularnewline \hline \end{longtable} \subsection{Centres of centralizers of nilpotent elements $e\in\mathfrak{g}_{\bar{0}}$ and corresponding labelled Dynkin diagrams\label{subsec:centre-of-Centralizer-F(4)}} \begin{singlespace} \noindent Let $e=e_{\mathfrak{sl}}+e_{\mathfrak{so}}\in\mathfrak{g}_{\bar{0}}$ be nilpotent where $e_{\mathfrak{sl}}\in\mathfrak{sl}(2)$ and $e_{\mathfrak{so}}\in\mathfrak{so}(7)$. We know that there are two representatives of nilpotent orbits in $\mathfrak{sl}(2)$, i.e. $\{0,E\}$. Based on Table \ref{tab:nilpotent ele}, there are $7$ representatives of nilpotent orbits in $\mathfrak{so}(7)$. Hence, there are in total $14$ possibilities for $e$. We give basis elements for $\mathfrak{g}_{\bar{1}}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$ and list the labelled Dynkin diagrams $\varDelta$ with respect to $e$ in Table \ref{tab:F(4)}. Note that we have already calculated $\mathfrak{so}(7)^{e_{\mathfrak{so}}}$ in Table \ref{tab:Centralizers-so7} and $\mathfrak{sl}(2)^{e_{\mathfrak{sl}}}=\langle E\rangle$ for $e_{\mathfrak{sl}}=E$, $\mathfrak{sl}(2)^{e_{\mathfrak{sl}}}=\mathfrak{sl}(2)$ for $e_{\mathfrak{sl}}=0$. The numbers in the column labelled ``$\varDelta$'' represent labels $a_{i}$ corresponding to $\alpha_{i}$ for $i=1,2,3,4$ in $\varDelta$. 
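To illustrate how the entries in the column ``$\mathfrak{g}_{\bar{1}}^{e}$'' are obtained, consider the simplest case $e=E$. Using $Ev_{1}=0$ and $Ev_{-1}=v_{1}$, we have
\[
[E,v_{1}\otimes x]=Ev_{1}\otimes x=0\quad\text{and}\quad[E,v_{-1}\otimes x]=Ev_{-1}\otimes x=v_{1}\otimes x\quad\text{for all }x\in V_{8},
\]
so $\mathrm{ad}\,E$ annihilates $v_{1}\otimes V_{8}$ and is injective on $v_{-1}\otimes V_{8}$. Hence $\mathfrak{g}_{\bar{1}}^{E}=v_{1}\otimes V_{8}$, which is exactly the eight-element basis recorded in the row for $e=E$ of the table below.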
\end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{longtable}[c]{|>{\centering}m{1.5cm}|>{\centering}m{6cm}|>{\centering}m{3cm}|>{\centering}m{3.5cm}|} \caption{\label{tab:F(4)}$\mathfrak{g}^{e}$, $\mathfrak{z}(\mathfrak{g}^{e})$ and $\varDelta$ for $\mathfrak{g}=F(4)$} \tabularnewline \endfirsthead \hline $e$ & \begin{singlespace} \noindent $\mathfrak{g}_{\bar{1}}^{e}$ \end{singlespace} & $\mathfrak{z}(\mathfrak{g}^{e})$ & $\varDelta$\tabularnewline \hline \hline $E+e_{(7)}$ & \begin{singlespace} \noindent $\langle v_{1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}s-v_{-1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}s-v_{1}\otimes e_{2}e_{3}s\rangle$ \end{singlespace} & \begin{singlespace} \noindent $\langle e,v_{1}\otimes e_{1}e_{2}e_{3}s,R_{e_{1},e_{2}}\rangle$ \end{singlespace} & Figure 6.4: $1,1,1,2$\tabularnewline \hline \hline $E+e_{(5,1^{2})}$ & \begin{singlespace} \noindent $\langle v_{1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}s-v_{-1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}e_{3}s+v_{-1}\otimes e_{1}e_{2}e_{3}s\rangle$ \end{singlespace} & $\langle e,R_{e_{1},e_{2}}\rangle$ & Figure 6.3: $2,0,2,0$ Figure 6.4: $2,0,0,2$ Figure 6.5: $2,0,0,2$\tabularnewline \hline \hline $E+e_{(3^{2},1)}$ & $\langle v_{1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}s-v_{-1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{2}s,v_{-1}\otimes e_{1}e_{2}s+v_{1}\otimes e_{2}e_{3}s,v_{1}\otimes e_{1}e_{3}s\rangle$ & $\langle e,R_{e_{1},e_{2}}\rangle$ & Figure 6.3: $0,1,1,0$\tabularnewline \hline \hline $E+e_{(3,2^{2})}$ & \begin{singlespace} \noindent $\langle v_{1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{1}s-v_{-1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{2}e_{3}s-v_{-1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{3}s+v_{-1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{2}s+v_{-1}\otimes e_{1}e_{2}s\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 6.2: $1,0,0,1$ Figure 6.3:
$1,0,0,1$ Figure 6.4: $1,1,0,0$\tabularnewline \hline \hline $E+e_{(3,1^{4})}$ & \begin{singlespace} \noindent $\langle v_{1}\otimes s-v_{-1}\otimes e_{1}s,v_{1}\otimes e_{2}s+v_{-1}\otimes e_{1}e_{2}s,$ $v_{1}\otimes e_{3}s+v_{-1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{1}s,v_{1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{1}e_{2}e_{3}s,$ $v_{1}\otimes e_{2}e_{3}s-v_{-1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}s\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 6.1: $0,0,0,2$ Figure 6.2: $0,0,0,2$ Figure 6.3: $2,0,0,0$ Figure 6.4: $2,0,0,0$\tabularnewline \hline \hline $E+e_{(2^{2},1^{3})}$ & \begin{singlespace} \noindent $\langle v_{1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}s,$ $v_{1}\otimes s-v_{-1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{3}s-v_{-1}\otimes e_{1}e_{2}e_{3}s,$ $v_{1}\otimes e_{1}s,v_{1}\otimes e_{2}s,v_{1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{2}e_{3}s\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 6.1: $0,0,1,0$ Figure 6.2: $0,0,1,0$ Figure 6.3: $0,1,0,0$\tabularnewline \hline \hline $E$ & \begin{singlespace} \noindent $\langle v_{1}\otimes s,v_{1}\otimes e_{1}s,v_{1}\otimes e_{2}s,v_{1}\otimes e_{3}s,v_{1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}e_{3}s\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 6.1: $1,0,0,0$\tabularnewline \hline \hline $e_{(7)}$ & \begin{singlespace} \noindent $\langle v_{1}\otimes e_{1}s-v_{1}\otimes e_{2}e_{3}s,v_{-1}\otimes e_{1}s-v_{-1}\otimes e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}e_{3}s,v_{-1}\otimes e_{1}e_{2}e_{3}s\rangle$ \end{singlespace} & $\langle e,R_{e_{1},e_{2}}\rangle$ & Figure 6.4: $0,0,2,2$ Figure 6.5: $0,0,2,2$ Figure 6.6: $0,0,2,2$\tabularnewline \hline \hline $e_{(5,1^{2})}$ & \begin{singlespace} \noindent \centering{}$\langle v_{1}\otimes e_{1}e_{2}s,v_{-1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}e_{2}e_{3}s,v_{-1}\otimes e_{1}e_{2}e_{3}s\rangle$ \end{singlespace} & $\langle e,R_{e_{1},e_{2}}\rangle$ & Figure 6.5: $0,1,0,2$\tabularnewline
\hline \hline $e_{(3^{2},1)}$ & \begin{singlespace} \noindent \centering{}$\langle v_{1}\otimes e_{2}s,v_{-1}\otimes e_{2}s,v_{1}\otimes e_{1}e_{3}s,v_{-1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{1}e_{2}s,v_{-1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}e_{2}e_{3}s,v_{-1}\otimes e_{1}e_{2}e_{3}s\rangle$ \end{singlespace} & $\langle e,R_{e_{1},e_{2}}\rangle$ & Figure 6.3: $0,0,2,0$ Figure 6.4: $0,0,0,2$ Figure 6.5: $0,0,0,2$ Figure 6.6: $0,0,0,2$\tabularnewline \hline \hline $e_{(3,2^{2})}$ & \begin{singlespace} \noindent \centering{}$\langle v_{1}\otimes e_{1}s-v_{1}\otimes e_{2}e_{3}s,v_{-1}\otimes e_{1}s-v_{-1}\otimes e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}s,v_{-1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}e_{3}s,v_{-1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{1}e_{2}e_{3}s,v_{-1}\otimes e_{1}e_{2}e_{3}s\rangle$ \end{singlespace} & $\langle e\rangle$ & Figure 6.4: $0,0,1,0$ Figure 6.5: $0,0,1,0$ Figure 6.6: $0,0,1,0$\tabularnewline \hline \hline $e_{(3,1^{4})}$ & $\langle v_{1}\otimes e_{1}e_{2}e_{3}s,$ $v_{-1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}s,v_{-1}\otimes e_{1}s,v_{1}\otimes e_{1}e_{2}s,$ $v_{-1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{1}e_{3}s,v_{-1}\otimes e_{1}e_{3}s\rangle$ & $\langle e\rangle$ & Figure 6.5: $0,1,0,0$\tabularnewline \hline \hline $e_{(2^{2},1^{3})}$ & $\langle v_{1}\otimes e_{1}s,v_{-1}\otimes e_{1}s,v_{-1}\otimes e_{1}e_{2}e_{3}s,$ $v_{1}\otimes e_{1}e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}s,v_{-1}\otimes e_{1}e_{2}s,v_{1}\otimes e_{2}s,$ $v_{-1}\otimes e_{2}s,v_{1}\otimes e_{1}e_{3}s,v_{-1}\otimes e_{1}e_{3}s,v_{1}\otimes e_{2}e_{3}s,v_{-1}\otimes e_{2}e_{3}s\rangle$ & $\langle e\rangle$ & Figure 6.3: $0,0,1,0$ Figure 6.4: $0,0,0,1$ Figure 6.5: $0,0,0,1$ Figure 6.6: $0,0,0,1$\tabularnewline \hline \hline $0$ & \begin{singlespace} \noindent $\mathfrak{g}_{\bar{1}}$ \end{singlespace} & $\{0\}$ & Figures 6.1, 6.2, 6.3, 6.4, 6.5, 6.6: All labels are zeros\tabularnewline \hline \end{longtable} \par\end{center} \end{singlespace} \begin{singlespace} We also calculate 
the $\mathfrak{g}^{e}(0)$-module structure on each $\mathfrak{g}^{e}(j)$ for $j>0$. Recall that we denote by $V^{\mathfrak{sl}}(j)$ the $\mathfrak{sl}(2)$-module with highest weight $j$. We also let $V^{\mathfrak{osp}}(j)$ be the $\mathfrak{osp}(1|2)$-module with highest weight $j$. Let $\mathfrak{t}$ be a 1-dimensional Lie algebra and $\mathfrak{t}_{j}$ be the $\mathfrak{t}$-module such that $t\cdot a=ja$ for $t\in\mathfrak{t}$, $a\in\mathfrak{t}_{j}$. For $e=e_{(3^{2},1)}$ and $e_{(2^{2},1^{3})}$, the $\mathfrak{g}^{e}(0)$-module structure on $\mathfrak{g}^{e}(j)$ is not included as it requires representations of $\mathfrak{sl}(2|1)$ and $D(2,1;\alpha)$. \end{singlespace} \begin{singlespace} \noindent \begin{center} \begin{longtable}[c]{|>{\centering}m{2cm}|>{\centering}m{3.5cm}|>{\centering}m{8cm}|} \caption{\label{tab:g^e(0)-F(4)}The $\mathfrak{g}^{e}(0)$-module structure on $\mathfrak{g}^{e}(j)$ for $j>0$ } \tabularnewline \endfirsthead \hline $e$ & $\mathfrak{g}^{e}(0)$ & $\mathfrak{g}^{e}(j)$ for $j>0$\tabularnewline \hline \hline $E+e_{(7)}$ & $0$ & $\dim\mathfrak{g}^{e}(10)=\dim\mathfrak{g}^{e}(7)=\dim\mathfrak{g}^{e}(6)=\dim\mathfrak{g}^{e}(5)=\dim\mathfrak{g}^{e}(1)=1,\dim\mathfrak{g}^{e}(2)=2.$\tabularnewline \hline \hline $E+e_{(5,1^{2})}$ & $\mathfrak{t}$ & $\mathfrak{g}^{e}(1)=0,\mathfrak{g}^{e}(2)=\mathfrak{t}_{0}\oplus\mathfrak{t}_{-1}\oplus\mathfrak{t}_{1},$$\mathfrak{g}^{e}(4)=\mathfrak{t}_{-2}\oplus\mathfrak{t}_{-1}\oplus\mathfrak{t}_{1}\oplus\mathfrak{t}_{2},\mathfrak{g}^{e}(6)=\mathfrak{t}_{0}.$\tabularnewline \hline \hline $E+e_{(3^{2},1)}$ & $\mathfrak{t}$ & $\mathfrak{g}^{e}(1)=\mathfrak{t}_{-3}\oplus\mathfrak{t}_{-1}\oplus\mathfrak{t}_{1}\oplus\mathfrak{t}_{3}$;$\mathfrak{g}^{e}(2)=\mathfrak{t}_{-4}\oplus\mathfrak{t}_{-2}\oplus\mathfrak{t}_{0}\oplus\mathfrak{t}_{2}\oplus\mathfrak{t}_{4};$$\mathfrak{g}^{e}(3)=\mathfrak{t}_{-1}\oplus\mathfrak{t}_{1},\mathfrak{g}^{e}(4)=0.$\tabularnewline \hline \hline $E+e_{(3,2^{2})}$ &
$\mathfrak{osp}(1|2)$ & $\mathfrak{g}^{e}(1)=V^{\mathfrak{osp}}(1)\oplus V^{\mathfrak{osp}}(0),$ $\mathfrak{g}^{e}(2)=V^{\mathfrak{osp}}(1)\oplus V^{\mathfrak{osp}}(0)\oplus V^{\mathfrak{osp}}(0),$ $\mathfrak{g}^{e}(3)=V^{\mathfrak{osp}}(1).$\tabularnewline \hline \hline $E+e_{(3,1^{4})}$ & $\mathfrak{osp}(1|2)\oplus\mathfrak{osp}(1|2)$ & $\mathfrak{g}^{e}(2)=\left(V^{\mathfrak{osp}}(1)\otimes V^{\mathfrak{osp}}(1)\right)\oplus\left(V^{\mathfrak{osp}}(0)\oplus V^{\mathfrak{osp}}(0)\right)$\tabularnewline \hline \hline $E+e_{(2^{2},1^{3})}$ & $\mathfrak{sl}(2)\oplus\mathfrak{osp}(1|2)$ & $\mathfrak{g}^{e}(1)=V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{osp}}(2),$$\mathfrak{g}^{e}(2)=\left(V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{osp}}(0)\right)\oplus\left(V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{osp}}(1)\right).$\tabularnewline \hline \hline $E$ & $\mathfrak{so}(7)$ & $\mathfrak{g}^{e}(1)=V_{8}$, $\mathfrak{g}^{e}(2)=\left\langle e\right\rangle .$\tabularnewline \hline \hline $e_{(7)}$ & $\mathfrak{osp}(1|2)$ & $\mathfrak{g}^{e}(2)=\mathfrak{g}^{e}(10)=V^{\mathfrak{osp}}(0),\mathfrak{g}^{e}(6)=V^{\mathfrak{osp}}(1)$.\tabularnewline \hline \hline $e_{(5,1^{2})}$ & $\mathfrak{sl}(2)\oplus\mathfrak{t}$ & $\mathfrak{g}^{e}(2)=V^{\mathfrak{sl}}(0)\otimes\mathfrak{t}_{0},$ $\mathfrak{g}^{e}(3)=\left(V^{\mathfrak{sl}}(1)\otimes\mathfrak{t}_{-1}\right)\oplus\left(V^{\mathfrak{sl}}(1)\otimes\mathfrak{t}_{1}\right),$$\mathfrak{g}^{e}(4)=\left(V^{\mathfrak{sl}}(0)\otimes\mathfrak{t}_{-2}\right)\oplus\left(V^{\mathfrak{sl}}(0)\otimes\mathfrak{t}_{2}\right),$ $\mathfrak{g}^{e}(6)=V^{\mathfrak{sl}}(0)\otimes\mathfrak{t}_{0}.$\tabularnewline \hline \hline $e_{(3^{2},1)}$ & $\mathfrak{sl}(2|1)$ & Omitted\tabularnewline \hline \hline $e_{(3,2^{2})}$ & $\mathfrak{sl}(2)\oplus\mathfrak{osp}(1\vert2)$ & $\mathfrak{g}^{e}(1)=V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{osp}}(1),\mathfrak{g}^{e}(3)=V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{osp}}(0),$ 
$\mathfrak{g}^{e}(2)=\left(V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{osp}}(0)\right)\oplus\left(V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{osp}}(1)\right).$\tabularnewline \hline \hline $e_{(3,1^{4})}$ & $\mathfrak{sl}(2)\oplus\mathfrak{sl}(2)\oplus\mathfrak{sl}(2)$ & $\mathfrak{g}^{e}(1)=\left(V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{sl}}(1)\right)\oplus\left(V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{sl}}(0)\right),$$\mathfrak{g}^{e}(2)=\left(V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{sl}}(1)\otimes V^{\mathfrak{sl}}(1)\right)\oplus\left(V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{sl}}(0)\otimes V^{\mathfrak{sl}}(0)\right).$\tabularnewline \hline \hline $e_{(2^{2},1^{3})}$ & $D(2,1;2)$ & Omitted\tabularnewline \hline \hline $0$ & $\mathfrak{g}$ & $0$\tabularnewline \hline \end{longtable} \par\end{center} \end{singlespace} \begin{singlespace} In the remaining part of this subsection, we give explicit calculations for finding $\mathfrak{g}^{e}$ and $\mathfrak{z}(\mathfrak{g}^{e})$ and obtain the corresponding labelled Dynkin diagrams for $e=e_{(7)}$. The results for all other cases are obtained using the same approach. For $e=e_{(7)}$, a basis for $\mathfrak{so}(7)^{e_{(7)}}$ is given in Table \ref{tab:Centralizers-so7}. Hence, $\mathfrak{g}_{\bar{0}}^{e}=\mathfrak{sl}(2)\oplus\mathfrak{so}(7)^{e_{(7)}}$ and it has dimension $3+3=6$. Next we determine $\mathfrak{g}_{\bar{1}}^{e}$. 
By letting $[e_{(7)},a_{1}v_{1}\otimes s+a_{2}v_{1}\otimes e_{1}s+a_{3}v_{1}\otimes e_{2}s+a_{4}v_{1}\otimes e_{3}s+a_{5}v_{1}\otimes e_{1}e_{2}s+a_{6}v_{1}\otimes e_{1}e_{3}s+a_{7}v_{1}\otimes e_{2}e_{3}s+a_{8}v_{1}\otimes e_{1}e_{2}e_{3}s+b_{1}v_{-1}\otimes s+b_{2}v_{-1}\otimes e_{1}s+b_{3}v_{-1}\otimes e_{2}s+b_{4}v_{-1}\otimes e_{3}s+b_{5}v_{-1}\otimes e_{1}e_{2}s+b_{6}v_{-1}\otimes e_{1}e_{3}s+b_{7}v_{-1}\otimes e_{2}e_{3}s+b_{8}v_{-1}\otimes e_{1}e_{2}e_{3}s]=0$, we have that $a_{2}+a_{7}=0$, $b_{2}+b_{7}=0$ and $a_{i}=b_{i}=0$ for $i=1,3,4,5,6$. Therefore, we obtain that $\mathfrak{g}_{\bar{1}}^{e}$ has a basis $\{v_{1}\otimes e_{1}s-v_{1}\otimes e_{2}e_{3}s,v_{-1}\otimes e_{1}s-v_{-1}\otimes e_{2}e_{3}s,v_{1}\otimes e_{1}e_{2}e_{3}s,v_{-1}\otimes e_{1}e_{2}e_{3}s\}$. \end{singlespace} \begin{singlespace} \noindent According to Table \ref{tab:nilpotent ele}, there is an $\mathfrak{sl}(2)$-triple $\{e,h,f\}$ in $\mathfrak{g}_{\bar{0}}$ such that $h=h_{(7)}=\mathrm{diag}(6,4,2,0,-2,-4,-6)=6R_{e_{1},e_{-1}}+4R_{e_{2},e_{-2}}+2R_{e_{3},e_{-3}}$.
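Since $h$ is toral, $\mathrm{ad}\,h$ acts on each root vector of $\mathfrak{g}^{e}$ by the value of the corresponding root (the root vectors are listed in \S\ref{subsec:Root-system-F(4)}); as $h$ lies in the $\mathfrak{so}(7)$ factor, $\delta(h)=0$, while $\varepsilon_{1}(h)=6$, $\varepsilon_{2}(h)=4$, $\varepsilon_{3}(h)=2$. For instance,
\[
[h,R_{e_{1},e_{2}}]=(\varepsilon_{1}+\varepsilon_{2})(h)\,R_{e_{1},e_{2}}=10\,R_{e_{1},e_{2}},\qquad[h,v_{\pm1}\otimes e_{1}e_{2}e_{3}s]=\tfrac{1}{2}(\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3})(h)\,v_{\pm1}\otimes e_{1}e_{2}e_{3}s=6\,v_{\pm1}\otimes e_{1}e_{2}e_{3}s.
\]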
Then the $\mathrm{ad}h$-eigenvalues of basis elements in $\mathfrak{g}^{e}$ are shown in the following table: \end{singlespace} \begin{doublespace} \noindent \begin{center} \begin{tabular}{|c|>{\centering}m{10cm}|} \hline $\mathrm{ad}h$-eigenvalues & Basis elements in $\mathfrak{g}^{e}$ \tabularnewline \hline \hline $0$ & $E$, $H$, $F$, $v_{1}\otimes e_{1}s-v_{1}\otimes e_{2}e_{3}s$, $v_{-1}\otimes e_{1}s-v_{-1}\otimes e_{2}e_{3}s$\tabularnewline \hline $2$ & $e_{(7)}$\tabularnewline \hline $6$ & $R_{e_{1},e_{0}}-2R_{e_{2},e_{3}}$, $v_{1}\otimes e_{1}e_{2}e_{3}s$, $v_{-1}\otimes e_{1}e_{2}e_{3}s$\tabularnewline \hline $10$ & $R_{e_{1},e_{2}}$\tabularnewline \hline \end{tabular} \par\end{center} \end{doublespace} \begin{singlespace} By computing commutators in $\mathfrak{g}^{e}(0)$, we deduce that $\mathfrak{g}^{e}(0)=\mathfrak{osp}(1|2)$ by Lemma \ref{lem:osp(1,2)} where $F,v_{-1}\otimes e_{1}s-v_{-1}\otimes e_{2}e_{3}s,H,v_{1}\otimes e_{1}s-v_{1}\otimes e_{2}e_{3}s,E$ correspond to $u_{-2},u_{-1},u_{0},u_{1},u_{2}$ in Lemma \ref{lem:osp(1,2)} respectively. Moreover, both $\mathfrak{g}^{e}(2)$ and $\mathfrak{g}^{e}(10)$ are isomorphic to $V^{\mathfrak{osp}}(0)$ and $\mathfrak{g}^{e}(6)=V^{\mathfrak{osp}}(1)$. Hence, we deduce that \begin{align*} \mathfrak{z} & =\mathfrak{z}(0)\oplus\mathfrak{z}(2)\oplus\mathfrak{z}(6)\oplus\mathfrak{z}(10)\\ & \subseteq\left(\mathfrak{g}^{e}(0)\right)^{\mathfrak{g}^{e}(0)}\oplus\left(\mathfrak{g}^{e}(2)\right)^{\mathfrak{g}^{e}(0)}\oplus\left(\mathfrak{g}^{e}(6)\right)^{\mathfrak{g}^{e}(0)}\oplus\left(\mathfrak{g}^{e}(10)\right)^{\mathfrak{g}^{e}(0)}=\langle e_{(7)},R_{e_{1},e_{2}}\rangle. \end{align*} \end{singlespace} \noindent We check that $e_{(7)},R_{e_{1},e_{2}}\in\mathfrak{z}$, therefore $\mathfrak{z}=\langle e_{(7)},R_{e_{1},e_{2}}\rangle$. \begin{singlespace} Next we look at the labelled Dynkin diagrams with respect to $e$. 
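Recall that each label in a labelled Dynkin diagram is obtained by evaluating the corresponding simple root on $h$, that is, $a_{i}=\alpha_{i}(h)$. For $h=h_{(7)}$ we have
\[
\delta(h)=0,\qquad\varepsilon_{1}(h)=6,\qquad\varepsilon_{2}(h)=4,\qquad\varepsilon_{3}(h)=2,
\]
so that, for example, $(\varepsilon_{2}-\varepsilon_{3})(h)=2$ and $\frac{1}{2}(\pm\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3})(h)=0$; every label appearing in this case follows from these values.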
We obtain that roots in $\mathfrak{g}(>0)$ are $\{\varepsilon_{1}+\varepsilon_{2},\varepsilon_{1}+\varepsilon_{3},\varepsilon_{2}+\varepsilon_{3},\varepsilon_{1}-\varepsilon_{3},\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\varepsilon_{3},\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\frac{1}{2}(\pm\delta+\varepsilon_{1}+\varepsilon_{2}-\varepsilon_{3}),\frac{1}{2}(\pm\delta+\varepsilon_{1}-\varepsilon_{2}+\varepsilon_{3}),\frac{1}{2}(\pm\delta+\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3})\}$ and roots in $\mathfrak{g}(0)$ are $\Phi(0)=\{\pm\delta,\frac{1}{2}(\pm\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\frac{1}{2}(\pm\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3})\}$. Hence, there are three systems of simple roots of $\mathfrak{g}(0)$ up to conjugacy: $\varPi_{1}(0)=\{\frac{1}{2}(\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3})\}$, $\varPi_{2}(0)=\{\delta,\frac{1}{2}(-\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3})\}$ and $\varPi_{3}(0)=\{\delta,\frac{1}{2}(-\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3})\}$. By extending $\varPi_{i}(0)$ to simple root systems of $\mathfrak{g}$, we get three systems of positive roots $\Phi^{+}$ and simple roots $\varPi$ and thus there exist three conjugacy classes of Borel subalgebras that satisfy $\mathfrak{b}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Phi^{+}}\mathfrak{g}_{\alpha}\subseteq\bigoplus_{j\geq0}\mathfrak{g}(j)$. Hence, the systems of simple roots are: $\varPi_{1}=\{\alpha_{1}=\frac{1}{2}(\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\alpha_{2}=\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}),\alpha_{3}=\frac{1}{2}(-\delta+\varepsilon_{1}-\varepsilon_{2}+\varepsilon_{3}),\alpha_{4}=\varepsilon_{2}-\varepsilon_{3}\}$. We compute $\mu_{12}=3$, $\mu_{13}=2$, $\mu_{23}=1$ and $\mu_{34}=2$ using Formula (\ref{eq:lines-=0003BC}). 
Therefore, the labelled Dynkin diagram with respect to $\varPi_{1}$ is the Dynkin diagram in Figure 6.4 with labels $0,0,2,2$. $\varPi_{2}=\{\alpha_{1}=\delta,\alpha_{2}=\frac{1}{2}(-\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\alpha_{3}=\varepsilon_{3},\alpha_{4}=\varepsilon_{2}-\varepsilon_{3}\}$. We compute $\mu_{12}=3$, $\mu_{23}=1$ and $\mu_{34}=2$ using Formula (\ref{eq:lines-=0003BC}). Therefore, the labelled Dynkin diagram with respect to $\varPi_{2}$ is the Dynkin diagram in Figure 6.5 with labels $0,0,2,2$. $\varPi_{3}=\{\alpha_{1}=\delta,\alpha_{2}=\frac{1}{2}(-\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}),\alpha_{3}=\varepsilon_{1}-\varepsilon_{2},\alpha_{4}=\varepsilon_{2}-\varepsilon_{3}\}$. We compute $\mu_{12}=3$, $\mu_{23}=2$ and $\mu_{34}=1$ using Formula (\ref{eq:lines-=0003BC}). Therefore, the labelled Dynkin diagram with respect to $\varPi_{3}$ is the Dynkin diagram in Figure 6.6 with labels $0,0,2,2$. \end{singlespace} \subsection{Analysis of results} \begin{singlespace} \noindent For each nilpotent element $e\in\mathfrak{g}_{\bar{0}}$, a semisimple element $h\in\mathfrak{g}_{\bar{0}}$ is given in Table \ref{tab:nilpotent ele}. Denote a root system for $\mathfrak{g}^{h}$ by $\Phi_{h}$, i.e. $\Phi_{h}=\{\alpha\in\Phi:\alpha(h)=0\}$, and a simple root system for $\mathfrak{g}^{h}$ by $\varPi_{h}$. We also denote the labelled Dynkin diagram for $\varPi_{h}$ by $\varDelta_{h}$. \end{singlespace} \begin{singlespace} In order to prove Theorem 1 for $\mathfrak{g}$, we need to determine $\mathfrak{g}^{h}$ for each case such that $\varDelta$ has no label equal to $1$. Note that $\mathfrak{g}^{h}$ is of the form $\mathfrak{s}\oplus\bigoplus_{\alpha\in\Phi_{h}}\mathfrak{g}_{\alpha}$ where $\mathfrak{s}$ is a subalgebra of the Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$.
Then $\mathfrak{z}(\mathfrak{g}^{h})=\{t\in\mathfrak{h}:\alpha(t)=0\text{ for all }\alpha\in\Phi_{h}\}$, thus $\mathfrak{z}(\mathfrak{g}^{h})$ is a subalgebra of $\mathfrak{h}$ with dimension $\mathrm{rank}\Phi-\mathrm{rank}\Phi_{h}$. Note that $\varDelta$ has no label equal to $1$ when the nilpotent elements are $E+e_{(5,1^{2})}$, $E+e_{(3,1^{4})}$, $e_{(7)}$, $e_{(3^{2},1)}$. We take $E+e_{(5,1^{2})}$ and $e_{(3^{2},1)}$ as examples to show the explicit calculation of $\dim\mathfrak{z}(\mathfrak{g}^{h})$. The results for all other cases are obtained by the same method. When $e=E+e_{(5,1^{2})}$, we have that $\Phi_{h}=\{\pm\varepsilon_{3},\pm\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}-\varepsilon_{3}),\pm\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3})\}$. Then up to conjugacy the simple root systems are $\varPi_{h}^{1}=\{\varepsilon_{3},\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}-\varepsilon_{3})\}$ and $\varPi_{h}^{2}=\{\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}),-\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}-\varepsilon_{3})\}$. Hence $\Phi_{h}$ is of type $\mathfrak{sl}(2|1)$ and $\mathfrak{g}^{h}=\mathfrak{s}\oplus\mathfrak{sl}(2|1)$ where $\mathfrak{s}$ is a complement of $\mathfrak{h}\cap\mathfrak{sl}(2|1)$ in $\mathfrak{h}$. Note that $\mathfrak{sl}(2|1)$ has no centre, thus $\dim\mathfrak{z}(\mathfrak{g}^{h})=4-2=2=n_{2}(\varDelta)=\dim\mathfrak{z}(\mathfrak{g}^{e})$. When $e=e_{(3^{2},1)}$, we have that $\Phi_{h}=\{\pm\delta,\pm(\varepsilon_{1}-\varepsilon_{2}),\pm\varepsilon_{3},\pm\frac{1}{2}(\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\pm\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}-\varepsilon_{3}),\pm\frac{1}{2}(\delta+\varepsilon_{1}-\varepsilon_{2}+\varepsilon_{3}),\pm\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3})\}$.
Then up to conjugacy the simple root systems are $\varPi_{h}^{1}=\{\varepsilon_{1}-\varepsilon_{2},\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}-\varepsilon_{3}),\varepsilon_{3}\}$, $\varPi_{h}^{2}=\{\frac{1}{2}(\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\frac{1}{2}(\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}),\frac{1}{2}(-\delta+\varepsilon_{1}-\varepsilon_{2}+\varepsilon_{3})\}$, $\varPi_{h}^{3}=\{\delta,\frac{1}{2}(-\delta+\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}),\varepsilon_{3}\}$ and $\varPi_{h}^{4}=\{\delta,\frac{1}{2}(-\delta-\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}),\varepsilon_{1}-\varepsilon_{2}\}$. Hence $\Phi_{h}$ is of type $D(2,1;2)$ according to \S4.3 and $\mathfrak{g}^{h}=\mathfrak{s}\oplus D(2,1;2)$ where $\mathfrak{s}$ is a complement of $\mathfrak{h}\cap D(2,1;2)$ in $\mathfrak{h}$. Note that $D(2,1;2)$ has no centre, thus $\dim\mathfrak{z}(\mathfrak{g}^{h})=4-3=1=n_{2}(\varDelta)$ but $\dim\mathfrak{z}(\mathfrak{g}^{h})\neq\dim\mathfrak{z}(\mathfrak{g}^{e})$. We will further discuss this case in \S6.7. In order to prove Theorem 2 for $\mathfrak{g}$, we only need to look at cases such that $\varDelta$ has labels equal to $2$ as for the remaining cases $\mathfrak{g}_{0}=\mathfrak{g}$, $e_{0}=e$ and $n_{2}(\varDelta)=0$. These cases are $E+e_{(7)},E+e_{(5,1^{2})},E+e_{(3,1^{4})},e_{(7)},e_{(5,1^{2})},e_{(3^{2},1)}.$ We take $E+e_{(7)}$ and $e_{(3^{2},1)}$ as examples to show the explicit analysis. The results for all other cases are obtained by the same method. When $e=E+e_{(7)}$, we have that $\mathfrak{g}_{0}=D(2,1;2)$ and $e_{0}=(E,E,E)$ according to \S4.3. Therefore, we obtain that $\dim\mathfrak{g}^{e}-\dim\mathfrak{g}_{0}^{e_{0}}=7-6=1=n_{2}(\varDelta)$ but $\dim\mathfrak{z}(\mathfrak{g}^{e})-\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=3-1\neq n_{2}(\varDelta)$. We will further discuss this case in \S6.7. 
When $e=e_{(3^{2},1)}$, we have that $\mathfrak{g}_{0}=D(2,1;2)$ and $e_{0}=0$ by looking at $\varDelta_{0}$. Note that $\dim D(2,1;2)=17$. Therefore, we obtain that $\dim\mathfrak{g}^{e}-\dim\mathfrak{g}_{0}^{e_{0}}=18-17=1=n_{2}(\varDelta)$ but $\dim\mathfrak{z}(\mathfrak{g}^{e})-\dim\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})=2\neq n_{2}(\varDelta)$. We will further discuss this case in \S6.7. \end{singlespace} \subsection{Adjoint action on $F(4)$\label{subsec:Adjoint-action-of-F(4)}} \begin{singlespace} \noindent Let $G$ be the linear algebraic group $G=\mathrm{SL}_{2}(\mathbb{C})\times\mathrm{Spin}_{7}(\mathbb{C})$. In this subsection, we determine $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}$ in order to complete the proof of the theorems. Note that we only need to look at cases $e=E+e_{(7)}$, $E+e_{(5,1^{2})}$, $E+e_{(3^{2},1)}$, $e_{(7)}$, $e_{(5,1^{2})}$, $e_{(3^{2},1)}$ as for all other cases we have $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\mathfrak{z}(\mathfrak{g}^{e})=\langle e\rangle$. \end{singlespace} \begin{singlespace} For $e=e_{(7)},e_{(5,1^{2})},e_{(3^{2},1)}$, the results can be obtained via \cite[Proposition 4.2]{Lawther2008} since all calculations take place in $\mathfrak{so}(7)$. In the remaining part of this subsection, we include details for the case $e=e_{(3^{2},1)}$ as an example. When $e=e_{(3^{2},1)}$, recall that $\mathfrak{z}(\mathfrak{g}^{e})=\langle e_{(3^{2},1)},R_{e_{1},e_{2}}\rangle$. According to \cite[Theorem 6.3.5]{Goodman2009}, there exists a homomorphism $\mathrm{id}\times\pi:G\rightarrow\mathrm{SL}_{2}(\mathbb{C})\times\mathrm{SO}_{7}(\mathbb{C})$ with $\ker\pi=\{1\}\times\{\pm1\}$. Now let us denote $\mathrm{SL}_{2}(\mathbb{C})\times\mathrm{SO}_{7}(\mathbb{C})$ by $K$. We know that $\ker\pi$ acts trivially on $\mathfrak{g}_{\bar{0}}$, thus we obtain an induced action of $K$ on $\mathfrak{g}_{\bar{0}}$. 
Let $\mathfrak{z}(\mathfrak{g}^{e})_{\bar{0}}=\mathfrak{z}(\mathfrak{g}^{e})\cap\mathfrak{g}_{\bar{0}}$ and $\mathfrak{z}(\mathfrak{g}^{e})_{\bar{1}}=\mathfrak{z}(\mathfrak{g}^{e})\cap\mathfrak{g}_{\bar{1}}$; note that \[ \left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\left(\mathfrak{z}(\mathfrak{g}^{e})_{\bar{0}}\oplus\mathfrak{z}(\mathfrak{g}^{e})_{\bar{1}}\right)^{G^{e}}=\left(\mathfrak{z}(\mathfrak{g}_{\bar{0}}^{e})\oplus\mathfrak{z}(\mathfrak{g}_{\bar{1}}^{e})\right)^{G^{e}}=\left(\mathfrak{z}(\mathfrak{g}_{\bar{0}}^{e})\right)^{G^{e}}\oplus\left(\mathfrak{z}(\mathfrak{g}_{\bar{1}}^{e})\right)^{G^{e}}. \] Furthermore, we have that $(\mathfrak{z}(\mathfrak{g}_{\bar{0}}^{e}))^{G^{e}}=(\mathfrak{z}(\mathfrak{g}_{\bar{0}}^{e}))^{K^{e}}$. Thus when $\mathfrak{z}(\mathfrak{g}_{\bar{1}}^{e})=0$, it suffices to look at $(\mathfrak{z}(\mathfrak{g}_{\bar{0}}^{e}))^{K^{e}}$. It is obvious that $e\in(\mathfrak{z}(\mathfrak{g}_{\bar{0}}^{e}))^{K^{e}}$. Since $\mathrm{SL}_{2}(\mathbb{C})$ is connected, we have that $K^{e}$ is the semidirect product of the subgroup $C^{e}$ and the normal subgroup $R^{e}$. Furthermore, we recall that $K^{e}/(K^{e})^{\circ}\cong C^{e}/(C^{e})^{\circ}$. Now we have that $C^{e}\cong\left(\mathrm{O}_{1}(\mathbb{C})\times\mathrm{O}_{2}(\mathbb{C})\right)\cap\mathrm{SO}_{7}(\mathbb{C})$ where $\mathrm{O}_{1}(\mathbb{C})$ (resp. $\mathrm{O}_{2}(\mathbb{C})$) has the connected component $\mathrm{SO}_{1}(\mathbb{C})$ (resp. $\mathrm{SO}_{2}(\mathbb{C})$). Let us consider the element $g\in K^{e}/(K^{e})^{\circ}$ given by \[ g=\begin{pmatrix}0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}. \] We calculate that $g\cdot e_{(3^{2},1)}=ge_{(3^{2},1)}g^{-1}=e$ and $g\cdot R_{e_{1},e_{2}}=gR_{e_{1},e_{2}}g^{-1}=-R_{e_{1},e_{2}}$. 
Hence, we have that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{K^{e}}\subseteq\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{g}=\langle e\rangle$ and we deduce that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{K^{e}}=\langle e\rangle$ . Therefore, $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=n_{2}(\varDelta)=1$ and $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}-\dim\left(\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})\right)^{G_{0}^{e_{0}}}=n_{2}(\varDelta)=1$. When $e=e_{(7)},e_{(5,1^{2})}$, using similar arguments we obtain that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\langle e,R_{e_{1},e_{2}}\rangle$. When $e=E+e_{(7)}$, recall that $\mathfrak{z}(\mathfrak{g}^{e})=\langle E+e_{(7)},v_{1}\otimes e_{1}e_{2}e_{3},R_{e_{1},e_{2}}\rangle.$ We know that $G^{e}=\left(\{\pm1\}\ltimes R^{E}\right)\times\mathrm{Spin}_{7}(\mathbb{C})^{e_{(7)}}$ where $R^{E}$ is a connected normal subgroup of $G^{e}$. Now we take $g=-1\in\mathrm{SL}_{2}(\mathbb{C})$ such that $g\in G^{e}$ and $g\notin(G^{e})^{\circ}$. We know that $g$ acts trivially on $\mathfrak{z}(\mathfrak{g}^{e})_{\bar{0}}$, thus $e,R_{e_{1},e_{2}}\in\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{g}$. However, $v_{1}\otimes e_{1}e_{2}e_{3}\notin\mathfrak{z}(\mathfrak{g}^{e})^{g}$ since the action of $g$ on $v_{1}\otimes e_{1}e_{2}e_{3}$ sends it to $-v_{1}\otimes e_{1}e_{2}e_{3}$. Hence, we have that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}\subseteq\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{g}=\langle e,R_{e_{1},e_{2}}\rangle$. Therefore, we deduce that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\langle e,R_{e_{1},e_{2}}\rangle$ and $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}-\dim\left(\mathfrak{z}(\mathfrak{g}_{0}^{e_{0}})\right)^{G_{0}^{e_{0}}}=n_{2}(\varDelta)=1$. 
Using similar arguments we obtain that $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\langle e,R_{e_{1},e_{2}}\rangle$ for $e=E+e_{(5,1^{2})}$ and $\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\langle e\rangle$ for $e=E+e_{(3^{2},1)}$. The above argument completes the proof of Theorems 1 and 2 for $F(4)$. By combining results in Table \ref{tab:F(4)}, we have that $\dim\left(\mathfrak{z}(\mathfrak{g}^{e})\right)^{G^{e}}=\left\lceil \frac{1}{2}\sum_{i=1}^{4}a_{i}\right\rceil +\varepsilon$ where $\varepsilon=-1$ for $e=E+e_{(7)}$ and $\varepsilon=0$ for all other cases. This proves the statement of Theorem 3 for $F(4)$. \end{singlespace}
\section{Introduction} Many everyday tasks require us to interact with objects over extended periods of time, during which objects enter and leave our field of view. This may be due to our head or body movements, because we interact with them or because the objects are moving due to other causal effects. Despite this, we are still able to keep track of object identity across time, even through long term occlusion. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/unaligned_vs_aligned.png} \caption{On the left `Unaligned inputs', entities switch column between time-steps. On the right `Aligned outputs', each column contains the same object across time in its new position (these results were obtained using AlignNet).} \label{fig:main_results} \end{figure} While our interactions with the world often require us to focus on objects, when training agents we commonly use pixel inputs (e.g. images). Since we, as humans, break our observations down into objects (or entities) \cite{spelke2007core}, one natural avenue for exploration would be to use entity (or object) level inputs in our models \cite{janner2018reasoning, watters2019cobra, veerapaneni2019entity, reyes2019learning}. MONet \cite{burgess2019monet} and other \cite{greff2019multi} unsupervised segmentation models provide an exciting opportunity to train agents (and their transition models e.g. COBRA \cite{watters2019cobra}) on lower dimensional object representations rather than images. Unfortunately, current object-based transition models \cite{janner2018reasoning, watters2019cobra, veerapaneni2019entity, reyes2019learning} lack a crucial ability to understand how objects segmented in one frame correspond with those segmented in another frame. This makes it more difficult to integrate information about objects over time or even compute losses at the entity level, because the correspondence between predicted and target objects is unknown. 
In this paper we propose the AlignNet, a model capable of computing correspondence between objects across time, not just from one time-step to the next, but across long sequences. For the majority of the paper, we concentrate on fully observable environments, aligning currently observed entities with those observed in the previous time-step. However, we will also show results in a partially observable environment, aligning current inputs with an object-based memory instead. By incorporating an object-based memory, we create an inductive bias for object persistence; once a new object appears it must continue to exist even if it disappears for some time. This allows the model to deal not only with the appearance and disappearance of entities, but also with the reappearance of entities through longer term occlusion. The AlignNet has two key components: the first is a dynamics model that predicts updates in the representation of the entities aligned in the previous time-step, to match the representations of the entities received in the current one. The second is a permutation model that permutes entities at the current time-step to correspond with the order of the previously aligned entities. The dynamics model helps to bridge the difference between the representation of the entities at the current and previous time-step. \section{Why is this problem important?} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/MONet_AlignNet.png} \caption{AlignNet is used to align MONet \cite{burgess2019monet} entities across multiple time-steps.} \label{fig:monet_align_net} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/DynamicAlignNet_fig.png} \caption{\textbf{The AlignNet Core}. The LSTM acts as the dynamics model predicting where the previously aligned entities, $\tilde{X}_t$, would be at the next time-step, $\tilde{X}_{t+1, dynamics}$. 
The transformer predicts a permutation matrix, $P_{t+1}$, to align the current input, $X_{t+1}$, with the previously aligned entities, $\tilde{X}_t$. Applying the permutation matrix to the input entities gives the aligned entities at the current time-step, $\tilde{X}_{t+1}$.} \label{fig:align_net_core} \end{figure} In this section we demonstrate the need for alignment in existing entity-based models, not only for learning dynamics \cite{chang2016compositional, yi2019clevrer, riochet2020occlusion}, but also for object based planning \cite{janner2018reasoning, watters2019cobra, veerapaneni2019entity, reyes2019learning} and in robotics \cite{ferreira2019learning}. To train models we need to be able to compute losses between predictions and targets. Problems arise when both the output of our model (the prediction) and the target are sets of objects, because the correspondence between the predicted and the target objects is unknown and so we are not able to compute losses. There are two ways to solve this problem. Firstly, if the inputs and targets are fed to the model in some consistent (or canonical) order then the model can easily exploit this ordering and learn to make predictions in this same consistent (or canonical) order. Note that a canonical order suggests some higher level rules by which the objects are ordered (e.g. based on objects' locations in a scene) and a consistent order suggests that the $n^{th}$ input entity or predicted entity always corresponds to the same ``token'' or instance of that entity over time. Secondly, if the inputs and targets are not in a canonical (or consistent) order, the model cannot learn to output predictions in a canonical or consistent order. Instead, we need to know correspondences between the prediction and the targets. The first requires the alignment of the inputs and targets over time and the second requires alignment between the predictions and the targets. 
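To make this concrete, the following toy numpy sketch (illustrative only, not code from this work, with made-up entity vectors) shows why a slot-wise loss fails without correspondence: two identical sets of entities in different orders incur a large naive loss, while searching over correspondences recovers the true zero loss.

```python
import numpy as np
from itertools import permutations

# Two "sets" of three entity vectors: B contains the same entities as A,
# but shuffled into a different slot order.
A = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
B = A[[2, 0, 1]]

# A naive slot-wise MSE ignores correspondence and penalises the shuffle.
naive = np.mean((A - B) ** 2)

# Minimising over all correspondences recovers the true (zero) loss.
best = min(np.mean((A - B[list(p)]) ** 2) for p in permutations(range(3)))

print(naive, best)  # naive is large, best is 0.0
```

The brute-force search over permutations is only feasible for tiny sets; the rest of the paper concerns learning this correspondence instead.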
Many approaches are currently avoiding the alignment problem by either taking losses in pixel space (Section \ref{sec:avoid}) or using privileged information to align inputs and targets (Section \ref{sec:privileged}). A few papers, when faced with the entity alignment problem, use weak heuristics to implicitly align a set of predicted objects with a set of target objects in order to compute losses (Section \ref{sec:hack}), but this is often not treated in the main body of the paper. Creswell et al. \cite{creswell2020alignnet} and Piloto et al. \cite{piloto2020learning} are among the first to acknowledge the alignment problem, although Piloto et al. also use privileged information to align inputs and targets in their entity-based dynamics models. However, a similar problem exists in the computer vision literature and is referred to as re-identification \cite{ye2020deep, inbook}. Unlike our model, which learns without supervision, models trained to perform re-identification often require access to ground truth labels and bounding boxes \cite{zhan2020simple}. \subsection{Avoiding the alignment problem by taking losses in pixel space}\label{sec:avoid} Both Janner et al. \cite{janner2018reasoning} and Watters et al. \cite{watters2019cobra} obtain losses for their entity-based transition models by mapping from their predicted entities' features to full image scenes and taking losses in pixel space. Similarly, Riochet et al. \cite{riochet2020occlusion} learn to map from a set of objects to a segmentation and depth map and take losses in pixel space. The problem with mapping back to image space is that it is not always possible to do so, due to lack of a decoder model, and losses may be more semantically meaningful if taken in the entity representation space. Additionally, it can be computationally expensive to apply large decoder models such as MONet to decode back to image space. 
\subsection{Using privileged information to provide inputs and targets in a consistent order across time.}\label{sec:privileged} Chang et al. \cite{chang2016compositional} avoid the alignment problem by using a ground truth state space where the inputs and targets are in a consistent order and the associations between the input entities and target entities are known. Janner et al. \cite{janner2018reasoning} look at planning in object space, computing an object wise $l_2$ distance, using privileged information. They \cite{janner2018reasoning} encode objects in a consistent order by using ground truth consistently ordered masks in their perception model. Similarly, Ferreira et al. \cite{ferreira2019learning} also use ground truth masks to provide their objects in a consistent order across time and to ensure that the targets are in the same order. Yi et al. \cite{yi2019clevrer} make use of `ground-truth motion traces and event histories of each object in the videos', allowing them to use an $l_2$ loss to train their models, avoiding the alignment problem. \subsection{Attempts at computing entity-wise losses.}\label{sec:hack} Veerapaneni et al.'s \cite{veerapaneni2019entity, reyes2019learning} `entity cost' involves computing a minimum pair-wise loss between entity masks. Their approach requires entities to be mapped back to pixel (mask) space before computing their losses. Mapping back to pixels provides a strong positional bias when aligning objects since the loss between entities will only be small if they overlap significantly in pixel space. Similarly, Smith et al. \cite{smith2019modeling}, use a handcrafted method to compute losses based on intersection over union of object masks. The problem with these approaches is threefold. Firstly, they do not take advantage of lower dimensional representations that may encode objects in interesting (e.g. disentangled) ways. Secondly, they require a decoder, which may not always be available. 
Finally, when two objects become very close (or occlude one another), position is not enough to resolve which object is which; rather, we need additional information about the dynamics of the objects to resolve the alignment. One algorithm often exploited to compute correspondence between two sets of entities is the Hungarian algorithm. Given an adjacency matrix, the Hungarian algorithm solves the minimum assignment problem between two sets of entities. However, the Hungarian algorithm assumes access to an adjacency matrix populated with the costs of matching up each of the entities in one set with those in the other. The adjacency matrix may be approximated by the mean-squared-error between entity representations, but this approach is not sufficient to deal with partially observable environments or ambiguous cases where dynamics is needed to resolve the alignment, as we will show in Section \ref{sec:exp}. Finally, the Hungarian algorithm is non-differentiable and therefore we cannot pass gradients through it for training. To conclude this section, alignment may be useful not only for computing losses but also for integrating information about objects in observations across time. Consider a scene with two objects moving with constant velocity. If you know how objects correspond between frames it is easy to compute the velocity; if not, this becomes much harder. Alignment computes this correspondence across time. Solving alignment would relieve the need for privileged information and lift many of the limitations faced by these models. It is also worth noting that most of these problems are still concerned with fully observable environments and so in this paper, we too focus on fully observable environments to serve the community with a solution as soon as possible. We present some initial results in partially observable environments. 
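The mean-squared-error matching baseline discussed above can be sketched as follows (an illustrative toy with made-up entity vectors, not the code used in our experiments; for the tiny sets here a brute-force minimum assignment stands in for the Hungarian algorithm, for which one would normally use an implementation such as `scipy.optimize.linear_sum_assignment`).

```python
import numpy as np
from itertools import permutations

def mse_min_assignment(prev, curr):
    """Minimum-cost matching between two small entity sets, with pairwise
    MSE as the cost (a brute-force stand-in for the Hungarian algorithm)."""
    # cost[i, j] = MSE between previous entity i and current entity j
    cost = np.mean((prev[:, None, :] - curr[None, :, :]) ** 2, axis=-1)
    n = len(prev)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(best)  # best[i] = current-frame index matched to slot i

prev = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
curr = prev[[2, 0, 1]] + 0.01  # shuffled and slightly perturbed entities
print(mse_min_assignment(prev, curr))  # [1, 2, 0]
```

Such a matcher succeeds whenever appearance alone disambiguates the entities, but it has no notion of dynamics, which is exactly where it fails on the ambiguous tasks in Section \ref{sec:exp}.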
\section{The AlignNet Model} Given sets of object observations, $\mathcal{X}_{\tau=t} = \{x_1, x_2, x_3, ..., x_N \}_{\tau=t}$ across time, $\tau$, we would like to predict how each object at the current time-step, $x_i \in \mathcal{X}_{\tau=t+1}$, corresponds with each object in the previous time-steps, $x_j \in \mathcal{X}_{\tau \leq t}$. Here $x$ is the visual representation of an object that may change when an object moves due to lighting and other conditions. In this paper we consider aligning objects across many time-steps. We can achieve this by first looking at how to align objects across a single time-step and then recursively applying alignment across all time-steps. To begin with, we concatenate the elements in each set of objects at times $\tau=t$ and $\tau=t+1$ to obtain lists, $X_t$ and $X_{t+1}$, which have a fixed order, unlike sets: \[X_{t} = [x_1, x_2, x_3, ..., x_N ]_{t}\] \[X_{t+1} = [x_{j_1}, x_{j_2}, x_{j_3}, ..., x_{j_N} ]_{t+1}\] \subsection{Problem Setup} \label{sec:set_up} We would like to learn a function, $f$, (see Equation \ref{eqn:init}) that finds an alignment between $X_{t+1}$ and $\tilde{X}_t$, where $\tilde{X}$ is $X$ re-arranged in some consistent order (as shown on the right-hand side of Figure \ref{fig:main_results}). Initially, in this paper, we assume a fully observable environment and so objects visible at $t=0$ are the same objects visible at $t>0$; therefore, we choose to define the order of objects to be the order at $t=0$, resulting in $\tilde{X}_0=X_0$. To find an alignment it may be necessary to use information provided by previously aligned inputs, $\tilde{H}_t = [ \tilde{X}_0, \tilde{X}_1, ..., \tilde{X}_{t-1}]$, to help resolve ambiguities in alignment. 
The distribution of aligned entities at the next time-step, $\tilde{X}_{t+1}$, given the entities at the current time-step, $X_t$, and the history, $\tilde{H}_t$, can be formulated as the conditional distribution in Equation \ref{eqn:normal}, where we assume the distribution to be a Gaussian with mean given by Equation \ref{eqn:init} and variance, $\sigma^2$. \begin{equation} \label{eqn:normal} p(\tilde{X}_{t+1} | X_t, \tilde{H}_t) = \mathcal{N}(f(X_t, \tilde{H}_t), \sigma^2) \end{equation} \begin{equation} \label{eqn:init} \tilde{X}_{t+1} = f(X_t, \tilde{H}_t) \end{equation} The choice of function, $f$, is critical and should capture two things: (1) the permutation (or re-ordering) of elements in $X_t$ and (2) how object appearance may change over time; that is, $f$ must account for the dynamics of objects across time. Therefore we choose $f$ to consist of a permutation, $P$, and a linear approximation for the dynamics of the objects, allowing a vector $\tilde{\Delta}$ to capture the changes between time-steps. The function $f$ is defined in Equation \ref{eqn:simple_f} where $\tilde{\Delta}$ depends on $\tilde{H}_t$. Note that $\tilde{X}_t = P_t X_t$. \begin{equation} \label{eqn:simple_f} f(X_t, \tilde{H}_t) = P_t X_t + \tilde{\Delta}_t \end{equation} Plugging function, $f$, (Equation \ref{eqn:simple_f}) in to Equation \ref{eqn:normal} we obtain: \begin{equation} \label{eqn:normal_final} p(\tilde{X}_{t+1} | X_t, \tilde{H}_t) = \mathcal{N}(PX_t + \tilde{\Delta}_t, \sigma^2)\pi(P)\pi(\tilde{\Delta}_t) \end{equation} where $\pi(P)$ and $\pi(\tilde{\Delta}_t)$ are the prior distributions over $P$ and $\tilde{\Delta}_t$, respectively. Since there is no preference for $P$, we may choose a uniform prior over all possible permutations, therefore $\pi(P)=\pi_P$ is a constant. Again, recall that $\tilde{\Delta}_t$ depends on $\tilde{H}_t$. 
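As a minimal numpy sketch of Equation \ref{eqn:simple_f} (illustrative only; the entity representations, permutation and delta values are made up), aligning amounts to a matrix permutation followed by an additive dynamics update:

```python
import numpy as np

# Two entities with 2-dimensional representations (toy values).
X_t = np.array([[0.0, 0.0], [1.0, 1.0]])      # unaligned entities at time t
P_t = np.array([[0.0, 1.0], [1.0, 0.0]])      # hard permutation: swap the rows
delta_t = np.array([[0.1, 0.0], [0.0, 0.1]])  # per-slot dynamics update

X_aligned = P_t @ X_t         # tilde{X}_t = P_t X_t
X_pred = X_aligned + delta_t  # f(X_t, H_t) = P_t X_t + tilde{Delta}_t
print(X_pred)
```

In the model itself $P_t$ is predicted (as a soft permutation) and $\tilde{\Delta}_t$ is produced by the dynamics model rather than being fixed constants as here.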
The evidence lower bound (ELBO) for Equation \ref{eqn:normal_final} is then given by Equation \ref{eqn:elbo}: \begin{equation} \label{eqn:elbo} \begin{split} \log p(\tilde{X}_{t+1} | X_t, \tilde{H}_t) &\geq \mathbb{E}_{q(P, \tilde{\Delta}_t| X_t, \tilde{H}_t)} \log p(\tilde{X}_{t+1} | P, X_t, \tilde{\Delta}_t, \tilde{H}_t) \\ &- \mathbb{KL}[q(P|X_t, \tilde{\Delta}_t)||\pi_P] \\ &- \mathbb{KL}[q(\tilde{\Delta}_t|\tilde{H}_t)||\pi(\tilde{\Delta}_t)] \end{split} \end{equation} We choose to factor the posterior as follows, $q(P,\tilde{\Delta}_t|X_t, \tilde{H}_t) = q(P|X_t, \tilde{\Delta}_t)q(\tilde{\Delta}_t |\tilde{H}_t)$. Factorising the posterior in this way, the deltas depend only on the history, $\tilde{H}_t$, and not on the current input. This limits the information that is available to the deltas and helps to avoid trivial solutions. One trivial solution, that we avoid, would be the permutation matrix being an identity matrix and the delta accounting for all of the difference. The choice of prior, $\pi(\tilde{\Delta}_t)$ is more difficult than for $\pi(P)$. If we assume that object representations do not change very much over time we may assume a Gaussian prior, $\mathcal{N}(0, 1)$, however, this assumption may not always hold. Alternatively, we could assume a uniform distribution over all possible delta values, however, it is possible that we may obtain trivial solutions. \subsection{Implementation} The input to our model, $X_t$, is a list of representation vectors extracted from an image observation at time, $t$, using a pre-trained MONet \cite{burgess2019monet} model (see Figure \ref{fig:monet_align_net}). When extracting entities we use ground truth masks, which we shuffle to ensure that the input object representations, $X_t$, are in a random order, where we know the order only for evaluation purposes. We approximate the remaining distributions (all distribution except the priors) in Equation \ref{eqn:elbo} as follows. 
\begin{itemize} \item $q(\tilde{\Delta}_t|\tilde{H}_t)$ is approximated as a Gaussian distribution. An LSTM is used to predict the mean and standard deviation of the deltas at each time-step given the aligned input, $\tilde{X}_t$, from the current step along with the LSTM state, $\tilde{H}_t$. \item $q(P|X_t, \tilde{\Delta}_t)$ is the output of a transformer applied to both the output of the dynamics model, $\tilde{X}_t + \tilde{\Delta}_t$, and the (unaligned) inputs in the next step, $X_{t+1}$. Rather than using the final values predicted by the transformer, we use the similarity (or attention) matrix as the output. We then apply the Sinkhorn algorithm \cite{mena2018learning}, a differentiable algorithm that operates on a matrix to give a soft approximation of a permutation matrix. \item $p(\tilde{X}_{t+1} | P_t, X_t, \tilde{\Delta}_t)$ is parameterised as a Gaussian distribution, $\tilde{X}_{t+1} \sim \mathcal{N}(P_t X_t + \tilde{\Delta}_t, \sigma)$ with fixed variance, $\sigma=1$ for simplicity. \end{itemize} Throughout this paper, for additional simplicity, we use the means of the Gaussian distributions rather than samples since we found that using the means had no adverse effect on the results. When evaluating the expectation, $\mathbb{E}_{q(P, \tilde{\Delta}_t| X_t, \tilde{H}_t)} \log p(\tilde{X}_{t+1} | P_t, X_t, \tilde{\Delta}_t)$, it is important to note that we are always aligning the next step, $X_{t+1}$, with the current aligned step, $\tilde{X}_t$, and we assume that $X_0 = \tilde{X}_0$. This is equivalent to learning a $\tilde{\Delta}^*_t$ and $P^*_t$ such that $\tilde{X}_t = {P^*_t}^T X_{t+1} - \tilde{\Delta}^*_t$, under some constraints, such as, $P^*_t$ is a permutation matrix. The expectation is then given by the mean-squared difference between $\tilde{X}_t + \tilde{\Delta}_t$ and $P_{t}X_{t+1}$. The loss is given in Equation \ref{eqn:loss}; note that the first $\mathbb{KL}$ term simplifies to the entropy, $\mathbb{H}$, of the permutation matrix. 
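The Sinkhorn step described above can be sketched in numpy as follows (an illustrative approximation, not the exact implementation used here; the iteration count and toy similarity scores are assumptions). Alternate row and column normalisation drives a positive matrix towards a doubly stochastic one, i.e. a soft permutation:

```python
import numpy as np

def sinkhorn(logits, n_iters=20):
    """Sinkhorn normalisation: alternately normalise the rows and columns
    of exp(logits) so the matrix approaches a doubly stochastic (soft
    permutation) matrix."""
    m = np.exp(logits)
    for _ in range(n_iters):
        m = m / m.sum(axis=1, keepdims=True)  # rows sum to one
        m = m / m.sum(axis=0, keepdims=True)  # columns sum to one
    return m

# Toy similarity scores, standing in for the transformer's attention matrix.
logits = np.array([[5.0, 0.0], [0.0, 5.0]])
P = sinkhorn(logits)
print(P.round(3))  # close to the identity permutation
```

Because every step is differentiable, gradients can flow through the resulting soft permutation during training, unlike with the Hungarian algorithm.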
Our model is illustrated in Figure \ref{fig:align_net_core}. \begin{equation} \label{eqn:loss} loss = \|\tilde{X}_t + \tilde{\Delta}_t - P_{t}X_{t+1} \|_2^2 + \beta_1 \mathbb{H}(P_{t}) + \beta_2 \mathbb{KL}[q(\tilde{\Delta}_t|\tilde{H}_t)||\pi(\tilde{\Delta}_t)] \end{equation} \section{Experiments and Results}\label{sec:exp} We demonstrate the AlignNet's performance on 5 tasks (illustrated in Figures \ref{fig:sw}, \ref{fig:blai:perm} and \ref{fig:room}) spanning the three environments: SpriteWorld \cite{spriteworld19}, Physical Concepts \cite{piloto2018probing, piloto2020learning} and Unity Room, a 3D partially observable environment \cite{Hill2020Environmental, das2020probing, hill2020human}. Finally, we show some additional results in Unity Room (Section \ref{sec:improved_align_net}) and Physical Concepts (Section \ref{sec:improved_align_net_blai}) using a modified version of the AlignNet that incorporates memory to deal with partially observable environments. \subsection{SpriteWorld} SpriteWorld \cite{spriteworld19} is a 2D environment made up of 2D shapes with continuous colours, sizes, positions and velocities. In this paper, we use three shapes: squares, triangles and circles. When two sprites `collide' in SpriteWorld one object occludes the other and the sprites continue to move with the same velocity they had before. We begin by testing AlignNet on three tasks in the SpriteWorld, the tasks are described in Figure \ref{fig:sw}. In task (a) we test how well the AlignNet can handle up to seven objects moving with random constant velocity. While task (a) tests if AlignNet can handle many objects, it is possible that the model learns to match objects based only on their visual properties and not based on their velocities. In tasks (b) and (c) we create tasks with ambiguities, that can only be resolved if the model understands dynamics. \begin{figure}[h!] 
\centering \begin{subfigure}[c]{\textwidth} \includegraphics[width=\linewidth]{figures/const_v_w_rand_num_sprites.png} \caption{\textbf{Task (a):} The number of sprites is drawn from a uniform distribution, $U\{1, 7\}$, each sprite moves with constant random velocity and stops at the edges, becoming partially occluded. The colour and shape of each object is sampled uniformly.} \label{fig:sw:const_v} \end{subfigure} \begin{subfigure}[c]{\textwidth} \includegraphics[width=\linewidth]{figures/const_v_w_ambiguity.png} \caption{\textbf{Task (b):} Each example contains two sprites, in $50\%$ of examples sprites have the same shape and colour and in $90\%$ of examples sprites collide with constant velocity, between time-steps $t=5$ and $t=10$. Sprites stop at the edges.} \label{fig:sw:const_v_amb} \end{subfigure} \begin{subfigure}[c]{\textwidth} \includegraphics[width=\linewidth]{figures/const_v_same_shape_colour_rand_num_sprites.png} \caption{\textbf{Task (c):} The number of sprites is drawn from a uniform distribution, $U\{1, 7\}$. The sprites move with constant random velocity and stop at the edges. All sprites in each sample are the same shape and colour.} \label{fig:sw:const_v_same} \end{subfigure} \caption{Datasets generated in the SpriteWorld environment.} \label{fig:sw} \end{figure} In task (b), $45\%$ of the time, the model is presented with entities of the same shape and colour colliding. At the point where the entities collide, it would be impossible for a model that did not capture dynamics to resolve which object was which after the collision. Therefore task (b) tests whether the AlignNet is indeed learning the correct dynamics. Task (c) takes this one step further, having up to seven objects of the same shape and colour, moving with constant velocity. In Table \ref{tab:res:sw} we compare the AlignNet to results using the Hungarian algorithm. 
The Hungarian is a non-differentiable algorithm that is used to solve the minimum assignment problem in the case where an adjacency matrix is given. For the comparisons we present in this work, we compute the adjacency matrix using the mean-squared-error between all pairs of object representations. Additional visual results are shown in Figures \ref{fig:SW_task_a_results}, \ref{fig:SW_task_b_results} and \ref{fig:SW_task_c_results}. Results for the first five (of 16) time-steps for task (b) are also shown in Figure \ref{fig:main_results}. We see that the AlignNet solves all tasks well, with some errors in task (c). On inspection, we found that the model fails on task (c) in some cases where more than two objects collide and where those objects have similar trajectories. This fail case would also be hard for humans to resolve. \begin{table}[h!] \centering \begin{tabular}{l| ccc|c|c|} & \multicolumn{3}{|c|}{Sprite World Task} & Physical Concepts & Unity Room \\ & (a) & (b) & (c) & Continuity & (agent following policy) \\ \hline AlignNet accuracy & 100$\%$ & 100$\%$& 99.8$\%$& 100$\%$& 86.2$\%$ \\ Hungarian & 99.7$\%$ & 96.4$\%$ & 97.4$\%$& 98.9$\%$& 85.8$\%$ \\ \end{tabular} \caption{AlignNet performance (three significant figures) on each Task.} \label{tab:res:sw} \end{table} \subsection{Physical Concepts: Continuity} \label{sec:blai_pillars} We also demonstrate AlignNet's performance on an experiment inspired by Baillargeon et al. \cite{baillargeon1985object} that tests infants' understanding of object persistence. We use the ``object persistence'' task, demonstrated in Figure \ref{fig:blai:perm}, taken from the Physical Concepts task suite \cite{piloto2018probing} where a ball rolls behind two pillars $75\%$ of the time. While the ball is behind the pillar it cannot be seen, which tests AlignNet's ability to deal with short term occlusion. Additionally, the visual properties of the ball change as it moves, due to the effect of the different lighting conditions. 
The viewing angle is also shifted randomly during the observation while still focusing on the centre of the scene. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{figures/blai_pillars.png} \caption{Samples from the Physical Concepts: Continuity dataset. In $75\%$ of examples a ball rolls behind two pillars; in all other cases the ball rolls in front of the pillars.} \label{fig:blai:perm} \end{figure} Our model achieves $100\%$ accuracy on this task, while the Hungarian achieves $98.9\%$ accuracy. Figure \ref{fig:blai_results} is a visualisation of AlignNet's performance on the Physical Concepts task; we see that the AlignNet is able to re-assign an object to the correct slot even after occlusion. If the AlignNet had not been able to deal with occlusion it would have placed the ball in a different slot after occlusion; rather, the AlignNet knows that the ball is the same object before and after occlusion, assigning it to the same slot. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{figures/unity_room.png} \caption{Observations of an agent interacting in a simulated 3D Unity Room environment \protect\cite{Hill2020Environmental, das2020probing, hill2020human} filled with objects from 58 different classes in 3 different sizes and 10 different colours.} \label{fig:room} \end{figure} Although our models do very well on the SpriteWorld and Physical Concepts tasks, the tasks are by no means easy. Many versions of the model did not work. For example, without entropy regularisation (the second term in the loss, Equation \ref{eqn:loss}) on the permutation matrix, $P_{t}$, the accuracy would peak and then start to go down as the model found ways to exploit softer assignments to minimise the objective during training. We found that $\beta_1=0.1$ worked well.
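As a hedged illustration of this regulariser (the exact form of Equation \ref{eqn:loss} is defined earlier in the paper; the function below is our own sketch), the penalty can be computed as the mean Shannon entropy of the rows of the soft permutation matrix $P_t$, which is near zero only when every row is one-hot:

```python
import numpy as np

def row_entropy_penalty(P, eps=1e-12):
    """Mean Shannon entropy of the rows of a soft assignment matrix P.
    Adding beta_1 times this term to the loss discourages soft assignments
    that spread mass over several slots; it is (near) zero only when every
    row is one-hot. Function name and exact form are our own sketch."""
    P = np.clip(P, eps, 1.0)
    return float(-(P * np.log(P)).sum(axis=1).mean())

hard = np.eye(3)                   # a true permutation: near-zero entropy
soft = np.full((3, 3), 1.0 / 3.0)  # maximally soft rows: entropy log(3) each
beta_1 = 0.1                       # the value reported to work well
print(row_entropy_penalty(hard))   # ~0
print(row_entropy_penalty(soft))   # ~log(3)
```

Driving this penalty down pushes $P_t$ towards a hard permutation, which is consistent with the observation below that the learned solutions have very low entropy.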
Empirically, we also found that we did not need to use the $\mathbb{KL}[q(\tilde{\Delta}_t | \tilde{H}_t) || \pi(\tilde{\Delta}_t)]$ term in the loss (Equation \ref{eqn:loss}), reducing the number of hyper-parameters that need to be tuned. The AlignNet also learns very low entropy solutions, which means that hard and soft alignment are very similar. Using soft alignment allows gradients to be passed back through the model, which may be useful when using the AlignNet for downstream tasks. \subsection{Unity Room} \label{sec:unity_room} In the previous tasks, the observations have been acquired by a stationary agent observing either a 2D scene from a static view (SpriteWorld) or a 3D scene from a camera making very small movements left and right (Physical Concepts). In the Unity Room task the observations are collected by an agent following a learned policy, where the agent was trained to pick up and move specific objects through a language instruction. This means that, unlike the previous observations, these observations include examples of appearance, disappearance and re-appearance of objects as the agent switches its focus from one part of the room to another. The environment has the added complexity of containing many objects from 58 different classes in varying sizes and colours. We show a visualisation of the AlignNet's performance on the Unity Room dataset in Figure \ref{fig:room_results}. The AlignNet achieves $86.2\%$ alignment accuracy; the Hungarian algorithm achieves similar performance. While our model is able to deal with the variety of object classes, colours and sizes, and with some of the agent's motion, our model is unable (by design) to deal with partially observable environments. When designing our model, we make an explicit assumption that the objects visible at $t=0$ are visible for the next steps (Section \ref{sec:set_up}), and therefore if an object disappears the model may not be able to handle this well.
An example of this failure case is shown in Figure \ref{fig:room_results_fail}; the second slot, `Entity 2', is initially assigned a table, but once the table disappears from view at $t=7$ it is replaced by a similarly coloured object that was in a similar position before it disappeared. \begin{table}[h!] \centering \begin{tabular}{l| c | c |} & Physical Concepts Free-Form & Unity Room (agent turning) \\ \hline Memory AlignNet accuracy & 96$\%$ & 90$\%$ \\ Hungarian accuracy & 81$\%$ & 62$\%$ \\ \end{tabular} \caption{Comparing performance of the Memory AlignNet (Sections \ref{sec:improved_align_net} and \ref{sec:improved_align_net_blai}) to the Hungarian on the Physical Concepts Free-Form task (Section \ref{sec:improved_align_net_blai}) and the Unity Room task where the agent is turning (Section \ref{sec:improved_align_net}).} \label{tab:hard} \end{table} What is significant, though, is that in some cases our model \textbf{can} deal with new objects appearing; this is demonstrated by `Entity 8' of Figure \ref{fig:room_results}, where a small pink shape appears from behind the white object that the agent is interacting with at $t=2$. \subsection{Unity Room with the improved Memory AlignNet.} \label{sec:improved_align_net} For these experiments we modified the AlignNet to have a slot-wise object-based memory and to align with respect to the memory rather than with respect to the previous time-step (see Figure \ref{fig:memory_align_net}). We refer to this improved version of the AlignNet as the \textit{Memory AlignNet}. We also make the dynamics model action conditional. By incorporating an object-based memory, we create an inductive bias for object persistence; once a new object appears it must continue to exist even if it disappears for some time. This allows the model to deal not only with the appearance of new objects and the disappearance of objects, but also with the reappearance of objects.
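The object-persistence bias can be illustrated with a much-simplified slot-wise memory update (a stand-in for the slot-wise LSTM actually used in the Memory AlignNet; all names, shapes and the gating rule are our own assumptions): a slot is overwritten by its aligned entity when that entity is visible, and persists unchanged otherwise.

```python
import numpy as np

def update_memory(memory, aligned, visible):
    """Slot-wise memory update: a much-simplified stand-in for the
    Memory AlignNet's slot-wise LSTM. Each slot is treated independently;
    when its object is not visible at this step, the slot's memory
    persists unchanged, encoding the object-persistence bias."""
    return np.where(visible[:, None], aligned, memory)

memory = np.array([[1.0, 1.0], [2.0, 2.0]])   # two slots, 2-D features
aligned = np.array([[1.5, 1.5], [0.0, 0.0]])  # slot 1's object is occluded
visible = np.array([True, False])
memory = update_memory(memory, aligned, visible)
print(memory)  # slot 0 takes the new value; slot 1 persists through occlusion
```

Because an occluded slot retains its last representation, a re-appearing object can be matched back to the slot it previously occupied rather than being treated as new.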
We create a modified task in the Unity Room to exhibit many examples of appearance, disappearance and reappearance of entities (or objects). In this modified task the model receives a sequence of 12 frames in which an agent situated in a Unity Room environment turns left for a number of steps drawn from the uniform distribution, $U\{6, 9\}$, and turns back for the rest of the time-steps. This ensures that the dataset captures objects moving in and out of view. Our Memory AlignNet achieves $90\%$ accuracy on this task, demonstrating the ability to deal with longer-term occlusion over multiple frames, as shown in Figure \ref{fig:result:unity_room_spin}. The Hungarian algorithm achieves only $62\%$ accuracy; this is lower than in the previous section because in this task there are more examples of appearance and re-appearance. \begin{figure} \centering \includegraphics[width=\textwidth]{figures/room_spin_task/12850688_wid12_visuals_1.png} \caption{Visual results on the Unity Room task where an agent is turning left for a number of steps and then turning right for the remainder of the steps. The unaligned inputs to the model are shown on the left and the aligned outputs are shown on the right. These results demonstrate that the Memory AlignNet is able to deal with new objects appearing (Entity 1 @ t=3), and is able to deal with entities disappearing (Entity 4 and 5 @ t=3) and reappearing in the correct slot (@ t=11). It also shows Entity 7 persisting in the same slot across time.} \label{fig:result:unity_room_spin} \end{figure} \subsection{Physical Concepts Free-Form with the Memory AlignNet.} \label{sec:improved_align_net_blai} For these experiments we also use the Memory AlignNet (as in Section \ref{sec:improved_align_net}) and we use a more complex Physical Concepts \cite{piloto2020learning} task, shown in Figure \ref{fig:blai_free_from}. In this task the model receives a sequence of 15 frames.
Each frame contains a number of objects, including balls, containers, planks and cuboids, that interact with one another. Interactions include collisions, occlusions and containment events. Initially unseen objects may drop or roll into the agent's field of view. Both this and the variety of physical interactions make this a challenging task for the Memory AlignNet. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{figures/BLAI_free_form_task/14776773_physical_concepts_free_fom_inputs.png} \caption{Samples from the Physical Concepts Free-Form environment where objects roll, drop, collide and get occluded by other objects, including containers. This dataset exhibits objects appearing, disappearing and reappearing later \protect \cite{piloto2020learning}.} \label{fig:blai_free_from} \end{figure} While the Hungarian baseline achieves $81\%$ accuracy, our Memory AlignNet achieves $96\%$ accuracy. The Memory AlignNet does better than the Hungarian because it uses a dynamics model to deal with changes in lighting and viewing conditions and has a memory that helps the model to deal with re-appearance. The Hungarian algorithm has neither of these; it does not take into account the dynamics of the objects and does not use memory. Visual results are shown in Figure \ref{fig:result:blai_free_form_results}. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/BLAI_free_form_task/14776773_wid4_visuals_3.png} \caption{Visual results on the Physical Concepts Free-Form task. The first column shows the input at each time-step. The rest of the columns show entities 0 to 9. We see that each entity is stable across time. At $t=2$ we see a small plank (Entity 3) being occluded by the purple box (Entity 5) and then re-appearing in the correct slot at $t=3$.
We also see that the Memory AlignNet is able to keep track of the purple ball (Entity 1) rolling down the plank, even when it is partially occluded by the purple box (Entity 5) at $t=2$.} \label{fig:result:blai_free_form_results} \end{figure} \subsection{Summary of Results} The AlignNet is a differentiable model that has learned to align entities without supervision, performing at least as well as the Hungarian algorithm, a hand-crafted and non-differentiable baseline, in fully observable environments and significantly outperforming it when modified to deal with partially observable environments. We expect our model to be better than the Hungarian algorithm in several cases where there is ambiguity that can only be resolved by understanding dynamics. In fact, the SpriteWorld tasks that the Hungarian performs worst on are Tasks (b) and (c), which have the most ambiguities. Further, we see that the Hungarian algorithm fails in partially observable environments, while our Memory AlignNet performs well. \section{Related Work} \textbf{Advantages of our approach over existing approaches.} Unlike Veerapaneni et al. \cite{veerapaneni2019entity, reyes2019learning} and Smith et al. \cite{smith2019modeling}, our model performs an explicit alignment in the entity space that does not require a decoder model to map back to entities in pixel space or to the masks. Additionally, our model learns to use dynamics to resolve ambiguous cases where two objects may be visually similar and where there is occlusion. While Yi et al. \cite{yi2019clevrer} ensure that a `combination of the three attributes uniquely identifies one object', we specifically look at datasets where scenes contain many examples of objects that are the same colour and shape (see Figure \ref{fig:sw}). We have also significantly built on our earlier work, the Self-supervised Alignment Module \cite{creswell2020alignnet}, by incorporating a dynamics model.
This allows our new version of the AlignNet to deal well with moving objects and with changes in lighting conditions and view point. \textbf{How do humans identify and individuate objects?} Psychologists and cognitive scientists have tried to explain how humans keep track of objects (or entities) as they go in and out of their visual field. Pylyshyn \cite{Pylyshyn1989TheRO} proposes one explanation, which he refers to as ``sticky indices''; when a new object first appears it is assigned an index which ``sticks'' to the object as it moves, similar to tracing an object with your finger. Pylyshyn does not give an exact mechanism by which these indices ``stick'' to objects (or entities). Some works suggest that we evaluate the gap between two representations and determine whether that gap is plausible; whether it can be explained by our model of the world. If the gap can be explained then we consider these entities (or objects) to have the same ``index''. This is very similar to how the AlignNet works, using the dynamics model to predict where objects should be and using the permutation model to perform the matching. \textbf{Traditional and Deep Learning approaches to solving combinatorial problems.} Alignment is different to the minimum assignment problem encountered in combinatorics because the minimum assignment problem assumes access to a similarity or adjacency matrix. This is true both of the Hungarian algorithm -- a traditional non-differentiable solution to the minimum assignment problem -- and of deep learning approaches \cite{bello2016neural, vinyals2015pointer, milan2017data}, which operate on an adjacency (or similarity) matrix. The AlignNet does not require a pre-defined similarity matrix. Rather, the dynamics model in the AlignNet learns to account for possible differences in the object representations, allowing us to compute errors to train the AlignNet.
Additionally, we consider a more general assignment problem where there may be no match, for example, if an object appears for the first time. Andrychowicz \& Kurach \cite{andrychowicz2016learning} propose a model for sorting and merging sequences. However, their proposed method is non-differentiable, unlike the AlignNet, which is differentiable. AutoShuffleNet, proposed by Lyu et al. \cite{lyu2019autoshufflenet}, shows that deep networks are able to predict permutation matrices. However, neither of these works focuses on alignment. \textbf{Deep Learning Approaches to Object Tracking.} It is important to note that our work is significantly different to traditional object tracking, in that we focus on keeping track of pre-extracted entities without assumed access to the extraction process, while most object tracking literature focuses on tracking objects in images from which the objects are not yet extracted. Additionally, we do not assume access to ground truth bounding boxes (or labels) and train our model without supervision. An important and novel feature of the Memory AlignNet (Section \ref{sec:improved_align_net}) is its ability to deal with appearing, disappearing and re-appearing entities (or objects). He et al. \cite{he2018tracking} propose a model for tracking objects, but unlike the improved version of the AlignNet, their model cannot deal with reappearing objects because it terminates trackers when an object disappears. Additionally, He et al. \cite{he2018tracking} assume that all objects in the sequence were visible at $t=0$, meaning that the model cannot deal with new objects appearing later in the sequence. This is an assumption that we were able to relax with the improved AlignNet. Further, while the improved AlignNet is able to account for new objects, current works treat objects that are not sufficiently similar to those seen before as false detections \cite{yoon2019data}. As in the Memory AlignNet, Valmadre et al.
\cite{valmadre2017end} and Yang \& Chan \cite{yang2018learning} also incorporate memory into their object tracking models. However, they only track a single object, while we use the AlignNet to keep track of multiple objects. \textbf{Object-based reasoning in deep learning.} Some progress has been made towards object (or entity) based models both in reinforcement learning \cite{zambaldi2018relational, kulkarni2019unsupervised} and in relational reasoning over objects \cite{yi2018neural, janner2018reasoning, ferreira2019learning, reyes2019learning}. These models show promise over models trained on raw-pixel inputs, but in general these works focus on fully observable environments where objects do not come in and out of view. Several models exist that learn to extract objects (or entities) without supervision \cite{burgess2019monet, greff2019multi, nash2017multi, greff2016tagger}. However, if these models are applied to successive frames of a video, the output is a set of objects at each time-step and the correspondence between objects across time is unknown, especially in partially observable environments. This is the purpose of the AlignNet, with the Memory AlignNet able to keep track of entities in partially observable environments. \textbf{Writing objects to memory.} Learning to write discrete entities into a slot-based memory can be hard because networks often use soft addressing mechanisms \cite{graves2014neural}. One exception to this is the Neural Map \cite{parisotto2017neural}, where observations are stored in a 2D spatial map based on the agent's location in an environment; however, it stores a single representation of the observation rather than entities. In our improved AlignNet we incorporate a slot-based memory that allows us to keep track of discrete entities over time.
We achieve this by applying a slot-wise LSTM to the aligned elements at each time-step, treating each slot independently and allowing each slot to accumulate the history of a single object (when the model is trained correctly). See Figure \ref{fig:memory_align_net} for details. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/MemoryAlignNet_fig.png} \caption{\textbf{The Memory AlignNet Core}. The improved version of the AlignNet with memory. The Memory Core is a slot-wise LSTM, in which an LSTM is applied independently to each slot. The Memory Core takes the aligned entities $\tilde{X}_{t+1}$ as input, as well as the memory, $M_t$, and the action taken (if any).} \label{fig:memory_align_net} \end{figure} \section{Conclusions} The AlignNet performs very well in fully observable environments, both 2D SpriteWorld and 3D Physical Concepts: Continuity. The model is able to learn to leverage dynamics to resolve ambiguous cases. For example, when two objects with similar shapes and colours collide, the AlignNet uses the entities' (or objects') dynamics to resolve which is which. On the Physical Concepts: Continuity task we demonstrated that the model can deal with short-term occlusion, realistic lighting conditions and small changes in viewing angle. For tasks in partially observable environments, we augmented the AlignNet with a slot-based memory (Section \ref{sec:improved_align_net}), which we refer to as the Memory AlignNet. We found that the Memory AlignNet significantly outperformed baselines in both the Unity Room environment and on the Physical Concepts Free-Form data, dealing well with the appearance of new entities and the disappearance and re-appearance of entities. There is still work to be done to improve the Memory AlignNet, namely by working on the architectures of the dynamics and memory models.
By providing a solution to the alignment problem, the AlignNet opens up many new and interesting opportunities for future work using objects in reinforcement learning and other downstream tasks. \subsubsection*{Acknowledgments} We would like to acknowledge Peter Battaglia, David Barrett and Danilo Rezende for their helpful discussions and feedback. We would also like to thank Phoebe Thacker and Lucy Campbell-Gillingham for their support.
\section{Introduction} The parametric empirical Bayes estimators (Morris, 1983) are known to be a useful method producing reliable estimates of multidimensional parameters. This technique is widely used in a variety of fields such as small area estimation (Rao and Molina, 2015) and disease mapping (Lawson, 2013). Let ${\theta}_1,\ldots,{\theta}_m$ be the multiple parameters of interest, and $y_1,\ldots,y_m$ be the independent observations generated from the distribution $f_i(y_i|{\theta}_i), \ i=1,\ldots,m$. To carry out an empirical Bayes estimation, it is assumed that the parameters ${\theta}_1,\ldots,{\theta}_m$ independently follow the distribution $g({\theta}_i; {\text{\boldmath $\phi$}})$, where ${\text{\boldmath $\phi$}}$ is a vector of unknown parameters. Therefore, we obtain the two-stage model: \begin{equation}\label{model} y_i|{\theta}_i \sim f_i(y_i|{\theta}_i), \ \ \ \ \ {\theta}_i\sim g({\theta}_i;{\text{\boldmath $\phi$}}), \ \ \ \ i=1,\ldots,m, \end{equation} which are independent for $i=1,\ldots,m$. Under this setting, the posterior distribution of ${\theta}_i$ is given by $$ \pi(\theta_i |y_i;{\text{\boldmath $\phi$}})=\frac{f_i(y_i | \theta_i)g(\theta_i;{\text{\boldmath $\phi$}})}{\int f_i(y_i | \theta_i)g(\theta_i;{\text{\boldmath $\phi$}})d{\theta}_i}, \ \ \ \ i=1,\ldots,m. $$ The Bayes estimator $\widetilde{\th}_i$ of ${\theta}_i$ under squared error loss is the conditional expectation (posterior mean) of ${\theta}_i$ given $y_i$, that is \begin{equation}\label{BE} \widetilde{\th}_i\equiv {\rm E}[{\theta}_i|y_i;{\text{\boldmath $\phi$}}]=\frac{\int {\theta}_if_i(y_i | \theta_i)g(\theta_i;{\text{\boldmath $\phi$}})d{\theta}_i}{\int f_i(y_i | \theta_i)g(\theta_i;{\text{\boldmath $\phi$}})d{\theta}_i}, \ \ \ \ i=1,\ldots,m.
\end{equation} However, the Bayes estimator $\widetilde{\th}_i$ depends on unknown model parameters ${\text{\boldmath $\phi$}}$, which can be estimated from the marginal distribution of all the data $y=\{y_1,\ldots,y_m\}$, given by $$ L({\text{\boldmath $\phi$}})=\prod_{i=1}^m\int f_i(y_i | \theta_i)g(\theta_i;{\text{\boldmath $\phi$}})d{\theta}_i. $$ Using the marginal distribution of $y$, one can immediately define the maximum likelihood (ML) estimator as the maximizer of $L({\text{\boldmath $\phi$}})$. Based on the estimator ${\widehat \bphi}$, we obtain the empirical Bayes (EB) estimator of ${\theta}_i$ as ${\widehat \th}_i={\rm E}[{\theta}_i|y_i;{\widehat \bphi}]$. The variability of the EB estimator ${\widehat \th}_i$ can be measured by the integrated mean squared error (MSE) ${\rm E}[({\widehat \th}_i-{\theta}_i)^2]$, where the expectation is taken with respect to ${\theta}_i$'s and $y_i$'s following the model (\ref{model}). Since $\widetilde{\th}_i$ is the conditional expectation as given in (\ref{BE}), the MSE can be decomposed as ${\rm E}[({\widehat \th}_i-{\theta}_i)^2]=R_1+R_2$ with $R_1={\rm E}[(\widetilde{\th}_i-{\theta}_i)^2]$ and $R_2={\rm E}[({\widehat \th}_i-\widetilde{\th}_i)^2]$. The first term $R_1$ is not affected by the estimation of ${\text{\boldmath $\phi$}}$, whereas the second term $R_2$ reflects the variability of the ML estimator ${\widehat \bphi}$, so that the second term can be negligibly small when $m$ is large. However, in many applications, $m$ might be small or moderate, in which case the contribution of the second term to the MSE cannot be ignored. Hence, the EB estimator might perform poorly depending on the ML estimator ${\widehat \bphi}$. To overcome this problem, we propose to use the bootstrap averaging technique, known as ``bagging'' (Breiman, 1996) in the machine learning literature. This method produces many estimators based on bootstrap samples, and averages them to produce a stable estimator.
We adapt the bagging method to the EB estimation to improve the performances of EB estimators under small or moderate $m$. This paper is organized as follows: In Section \ref{sec:bagging}, we consider mean squared errors of EB estimators and propose a bootstrap averaging empirical Bayes (BEB) estimator for decreasing the mean squared error. In Section \ref{sec:FH} and Section \ref{sec:PG}, we apply the BEB estimators to the well-known two-stage normal hierarchical model and the Poisson-gamma model, respectively, and compare the performances of the BEB and EB estimators through simulation and empirical studies. In Section \ref{sec:conc}, we provide conclusions and discussions. \section{Bootstrap Averaging Empirical Bayes Estimators}\label{sec:bagging} As noted in the previous section, the performances of the EB estimators depend on the variability of the estimator ${\widehat \bphi}$, which cannot be ignored when $m$ is not large. To reduce the variability of the empirical Bayes estimator ${\widehat \th}_i$, we propose to average many empirical Bayes estimators with bootstrap estimates of ${\text{\boldmath $\phi$}}$ rather than computing one empirical Bayes estimator from the observation $Y=\{y_1,\ldots,y_m\}$. Specifically, letting $Y_{(b)}=\{y_1^{(b)},\ldots,y_m^{(b)}\}$ be a bootstrap sample of the original observation $Y$, we define ${\widehat \bphi}_{(b)}$ to be an estimator of ${\text{\boldmath $\phi$}}$ based on the bootstrap sample $Y_{(b)}$. Then the bootstrap averaging (bagging) empirical Bayes (BEB) estimator is given by \begin{equation}\label{BEB} {\widehat \th}_i^{\text{Boot}}=\frac1B\sum_{b=1}^B\widetilde{\th}_i(y_i,{\widehat \bphi}_{(b)}).
\end{equation} Similarly to Breiman (1996), we note that \begin{align*} \frac1B\sum_{b=1}^B\left\{\widetilde{\th}_i(y_i,{\widehat \bphi}_{(b)})-{\theta}_i\right\}^2 &=\frac1B\sum_{b=1}^B\widetilde{\th}_i(y_i,{\widehat \bphi}_{(b)})^2-2{\widehat \th}_i^{\text{Boot}}{\theta}_i+{\theta}_i^2\\ &\geq \bigg\{\frac1B\sum_{b=1}^B\widetilde{\th}_i(y_i,{\widehat \bphi}_{(b)})\bigg\}^2-2{\widehat \th}_i^{\text{Boot}}{\theta}_i+{\theta}_i^2 =({\widehat \th}_i^{\text{Boot}}-{\theta}_i)^2. \end{align*} By taking expectation with respect to the model (\ref{model}), we have $$ \frac1B\sum_{b=1}^B{\rm E}\left[\left\{\widetilde{\th}_i(y_i,{\widehat \bphi}_{(b)})-{\theta}_i\right\}^2\right] \geq {\rm E}\left[({\widehat \th}_i^{\text{Boot}}-{\theta}_i)^2\right], $$ which means that the integrated MSE of the BEB estimator (\ref{BEB}) is smaller than the bootstrap average of the integrated MSE of the EB estimator. Hence, the BEB estimator is expected to perform better than the EB estimator. The amount of improvement depends on $$ \frac1B\sum_{b=1}^B\widetilde{\th}_i(y_i,{\widehat \bphi}_{(b)})^2-\bigg\{\frac1B\sum_{b=1}^B\widetilde{\th}_i(y_i,{\widehat \bphi}_{(b)})\bigg\}^2 =\frac1B\sum_{b=1}^B\left\{\widetilde{\th}_i(y_i,{\widehat \bphi}_{(b)})-{\widehat \th}_i^{\text{Boot}}\right\}^2, $$ which is the bootstrap variance of the EB estimator; it vanishes as $m\to\infty$ but would not be negligible when $m$ is not large. Therefore, when $m$ is small or moderate, the BEB estimator would improve the performance of the EB estimator. In the subsequent sections, we investigate the performance of the BEB estimator compared with the EB estimator in two widely-used hierarchical models. \section{Two-stage normal hierarchical model}\label{sec:FH} \subsection{Model description} We first consider the two-stage normal hierarchical model to demonstrate the proposed bagging procedure.
The two-stage normal hierarchical model is described as \begin{equation}\label{FH} y_i|{\theta}_i\sim N({\theta}_i, D_i), \ \ \ \ \ \ {\theta}_i\sim N({\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}},A), \ \ \ \ i=1,\ldots,m, \end{equation} where $D_i$ is a known sampling variance, ${\text{\boldmath $x$}}_i$ and ${\text{\boldmath $\beta$}}$ are a vector of covariates and regression coefficients, respectively, and $A$ is an unknown variance. Let ${\text{\boldmath $\phi$}}=({\text{\boldmath $\beta$}}^t,A)^t$ be the vector of unknown parameters. The model (\ref{FH}) is known as the Fay-Herriot model (Fay and Herriot, 1979) in the context of small area estimation. Under the model (\ref{FH}), the Bayes estimator of ${\theta}_i$ is \begin{equation*} \widetilde{\th}_i(y_i;{\text{\boldmath $\phi$}})={\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}}+\frac{A}{A+D_i}(y_i-{\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}}), \end{equation*} which shrinks the direct estimate $y_i$ towards the regression mean ${\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}}$. Concerning the estimation of the unknown parameter ${\text{\boldmath $\phi$}}$, several estimating methods are available; we here consider the maximum likelihood estimator for presentational simplicity. Since $y_i\sim N({\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}},A+D_i)$ under the model (\ref{FH}), the maximum likelihood estimator ${\widehat \bphi}$ is defined as the minimizer of the function: \begin{equation*} Q({\text{\boldmath $\phi$}})=\sum_{i=1}^m\log(A+D_i)+\sum_{i=1}^m\frac{(y_i-{\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}})^2}{A+D_i}, \end{equation*} which is, up to an additive constant, $-2$ times the log-likelihood. Using the maximum likelihood estimator ${\widehat \bphi}$, we obtain the EB estimator of ${\theta}_i$ as $\widetilde{\th}_i(y_i;{\widehat \bphi})$.
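A minimal sketch of this EB machinery and of its bagged version for the case without covariates (${\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}}=\mu$); function names, the profiling of $\mu$ and the toy data are our own assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_ml(y, D):
    """ML estimates of (mu, A) in the marginal model y_i ~ N(mu, A + D_i).
    For fixed A the optimal mu is a precision-weighted mean, so mu is
    profiled out and Q is minimised over A alone."""
    def profile_obj(A):
        w = 1.0 / (A + D)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(np.log(A + D)) + np.sum(w * (y - mu) ** 2)
    A = minimize_scalar(profile_obj, bounds=(0.0, 100.0), method="bounded").x
    w = 1.0 / (A + D)
    return np.sum(w * y) / np.sum(w), A

def bayes(y, D, mu, A):
    """Posterior mean: shrink each y_i towards mu by the factor A/(A+D_i)."""
    return mu + A / (A + D) * (y - mu)

def beb(y, D, B=100, seed=0):
    """Bootstrap averaging EB: refit (mu, A) on each bootstrap resample of
    the m areas and average the resulting Bayes estimates of theta."""
    rng = np.random.default_rng(seed)
    m, est = len(y), np.zeros_like(y)
    for _ in range(B):
        idx = rng.integers(0, m, size=m)   # resample areas with replacement
        mu_b, A_b = fit_ml(y[idx], D[idx])
        est += bayes(y, D, mu_b, A_b)
    return est / B

rng = np.random.default_rng(1)
D = np.linspace(0.5, 1.5, 10)                   # known sampling variances
theta = rng.normal(0.0, np.sqrt(0.3), size=10)  # true effects (mu=0, A=0.3)
y = rng.normal(theta, np.sqrt(D))
mu_hat, A_hat = fit_ml(y, D)
print(bayes(y, D, mu_hat, A_hat))  # EB estimates
print(beb(y, D))                   # BEB estimates
```

Even when the ML fit on the original data returns $\widehat{A}=0$, the bootstrap refits typically do not all hit the boundary, which is the mechanism behind the improvement discussed in the simulation study below.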
\subsection{Simulation study}\label{sec:FHsim} We here evaluate the performance of the BEB estimator together with the EB estimator under the normal hierarchical model (\ref{FH}) without covariates, namely ${\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}}=\mu$. We considered $m=10, 15,\ldots,40$. For each $m$, we set $D_i$ as equally spaced points between $0.5$ and $1.5$. Concerning the true parameter values, we used $\mu=0$ and four cases for $A$, namely $A=0.1, 0.3, 0.5$ and $0.7$. The simulated data were generated from the model (\ref{FH}) in each iteration, and we computed the EB and BEB estimates of ${\theta}_i$. Based on $R=5000$ simulation runs, we calculated the simulated mean squared errors (MSE) defined as \begin{equation}\label{sim-mse} \text{MSE}=\frac{1}{mR}\sum_{i=1}^m\sum_{r=1}^R({\widehat \th}_i^{(r)}-{\theta}_i^{(r)})^2, \end{equation} where ${\widehat \th}_i^{(r)}$ is the BEB or EB estimate and ${\theta}_i^{(r)}$ is the true value of ${\theta}_i$ in the $r$th iteration. In Figure \ref{fig:FH}, we present the simulated MSE of the EB estimator as well as the three BEB estimators using $25, 50$ and $100$ bootstrap samples under various settings of $A$ and $m$. It is observed that the BEB estimator performs better than the EB estimator on the whole. In particular, the improvement is greater when $A$ is small compared with $D_i$, which often arises in practice. Moreover, as $m$ gets larger, the MSE differences get smaller since the variability of estimating ${\text{\boldmath $\phi$}}$ vanishes when $m$ is sufficiently large. We also found that the ML estimator of $A$ often produces $0$ estimates when $m$ is small, in which case the EB estimator is known to perform poorly. However, the BEB estimator can avoid this problem since it aggregates $B$ bootstrap estimators, at least some of which should have a non-zero estimate of $A$.
In fact, by investigating the cases where the ML estimator produces $0$ estimates of $A$, we found that some bootstrap estimates of $A$ were away from $0$. This would be one of the reasons why the BEB estimator performs better than the EB estimator in this setting. \begin{figure}[!htb] \hspace{-0.5cm} \includegraphics[width=15cm]{FH-sim.pdf} \caption{The simulated MSE of the BEB (bootstrap averaging empirical Bayes) estimators and the EB (empirical Bayes) estimator in the two-stage normal hierarchical model.} \label{fig:FH} \end{figure} \subsection{Example: corn data}\label{sec:corn} We next illustrate the performance of the BEB estimator by using the corn and soybean productions in 12 Iowa counties, which have been used as an example in the context of small area estimation. Specifically, we use the area-level data set given in Table 6 of Dass et al. (2012), and we here focus only on corn production for simplicity. The data set consists of $m=8$ areas with sample sizes in each area ranging from 3 to 5, and survey data of corn production $y_i$, sampling variance $D_i$ and the satellite data of corn $x_i$ as the covariate observed in each area. We considered the following hierarchical model: \begin{equation}\label{FH-corn} y_i|{\theta}_i\sim N({\theta}_i,D_i), \ \ \ \ \ {\theta}_i\sim N(\beta_0+\beta_1x_i,A), \ \ \ \ i=1,\ldots,m, \end{equation} where $\beta_0,\beta_1$ and $A$ are unknown parameters. For this data set, we computed the BEB as well as EB estimators. We used $1000$ bootstrap samples for computing the BEB estimator. In Figure \ref{fig:Corn-para}, we present the histograms of the bootstrap estimates used in the BEB estimates and the maximum likelihood (ML) estimates used in the EB estimators. We can observe that the bootstrap estimates vary depending on the bootstrap samples.
Moreover, in Table \ref{tab:corn-est}, we show the BEB and EB estimates of ${\theta}_i$; the BEB estimator produces estimates different from those of the EB estimator since the number of areas $m$ is only $8$. \begin{table}[!htb] \centering \caption{Direct estimates (DE) $y_i$, standard deviation (SD) $\sqrt{D_i}$, empirical Bayes (EB) estimates and bootstrap averaging empirical Bayes (BEB) estimates in each county. \label{tab:corn-est} } \medskip \begin{tabular}{ccccc} \hline & DE & SD & EB & BEB \\ \hline Franklin & 158.62 & 5.70 & 155.79 & 141.08 \\ Pocahontas & 102.52 & 43.41 & 102.82 & 97.48 \\ Winnebago & 112.77 & 30.55 & 119.74 & 117.34 \\ Wright & 144.30 & 54.00 & 127.86 & 124.05 \\ Webster & 117.59 & 21.30 & 109.61 & 102.42 \\ Hancock & 109.38 & 15.66 & 121.84 & 126.51 \\ Kossuth & 110.25 & 12.11 & 116.05 & 118.53 \\ Hardin & 120.05 & 36.81 & 136.97 & 137.05 \\ \hline \end{tabular} \end{table} \begin{figure}[!htb] \hspace{-0.5cm} \includegraphics[width=15cm]{Corn-para.pdf} \caption{The histograms of the bootstrap estimates of $\beta_0$ (left), $\beta_1$ (center) and $A$ (right). Each vertical line denotes the maximum likelihood estimate.} \label{fig:Corn-para} \end{figure} \section{Poisson-gamma model}\label{sec:PG} \subsection{Setup} The Poisson-gamma model (Clayton and Kaldor, 1987) is described as \begin{equation}\label{PG} z_i|{\theta}_i\sim \text{Po}(n_i{\theta}_i), \ \ \ \ \ {\theta}_i\sim \Gamma(\nu m_i,\nu), \ \ \ \ i=1,\ldots,m, \end{equation} where $m_i=\exp({\text{\boldmath $x$}}_i^t{\text{\boldmath $\beta$}})$, ${\text{\boldmath $x$}}_i$ and ${\text{\boldmath $\beta$}}$ are a vector of covariates and regression coefficients, respectively, and $\nu$ is an unknown scale parameter. This model is used as a standard method for disease mapping. Let ${\text{\boldmath $\phi$}}=({\text{\boldmath $\beta$}}^t,\nu)^t$ be the vector of unknown parameters.
Under the model (\ref{PG}), the Bayes estimator of ${\theta}_i$ is given by $$ \widetilde{\th}_i(z_i;{\text{\boldmath $\phi$}})=\frac{z_i+\nu m_i}{n_i+\nu}. $$ Since the Bayes estimator depends on the unknown ${\text{\boldmath $\phi$}}$, we need to replace ${\text{\boldmath $\phi$}}$ by its estimator. Noting that the gamma prior of ${\theta}_i$ is a conjugate prior for the mean parameter of the Poisson distribution, the marginal distribution of $z_i$ is the negative binomial distribution with probability function: $$ f_m(z_i;{\text{\boldmath $\phi$}})=\frac{\Gamma(z_i+\nu m_i)}{\Gamma(z_i+1)\Gamma(\nu m_i)}\left(\frac{n_i}{n_i+\nu}\right)^{z_i}\left(\frac{\nu}{n_i+\nu}\right)^{\nu m_i}. $$ Then the maximum likelihood estimator of ${\text{\boldmath $\phi$}}$ is defined as ${\widehat \bphi}=\text{argmax}_{{\text{\boldmath $\phi$}}} \sum_{i=1}^m \log f_m(z_i;{\text{\boldmath $\phi$}})$, which enables us to obtain the empirical Bayes estimator $\widetilde{\th}_i(z_i;{\widehat \bphi})$. \subsection{Simulation study} We next evaluated the performance of the BEB estimator under the Poisson-gamma model without covariates, described as \begin{equation}\label{PG-sim} z_i|{\theta}_i\sim \text{Po}(n_i{\theta}_i), \ \ \ {\theta}_i\sim \Gamma(\nu\mu,\nu), \ \ \ \ \ i=1,\ldots,m, \end{equation} where we set $\mu=1$ and $\nu=40, 60, 80$ and $100$. Note that $\nu$ is a scale parameter and ${\rm Var}({\theta}_i)=\mu/\nu$, so that the random-effect variance ${\rm Var}({\theta}_i)$ is a decreasing function of $\nu$. Regarding the number of areas, we considered $m=10,15,\ldots,40$. For each $m$, we set $n_i$ as rounded integers equally spaced between $10$ and $50$. Similarly to Section \ref{sec:FHsim}, using (\ref{sim-mse}) with $R=5000$ simulation runs, we calculated the MSE of the BEB estimator as well as that of the EB estimator of ${\theta}_i$.
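A minimal numerical sketch of this EB fit, covering only the no-covariate case above with $m_i=\mu$ (my own illustration, not the authors' code; the log-scale parametrisation and the optimiser are arbitrary choices):

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def negloglik(p, z, n):
    """Negative log marginal (negative binomial) likelihood of the
    no-covariate Poisson-gamma model, with m_i = mu for all areas."""
    mu, nu = np.exp(p)                        # optimise on the log scale
    a = nu * mu                               # gamma shape parameter nu * m_i
    ll = (gammaln(z + a) - gammaln(z + 1.0) - gammaln(a)
          + z * np.log(n / (n + nu)) + a * np.log(nu / (n + nu)))
    return -np.sum(ll)

def eb_poisson_gamma(z, n):
    """EB estimates (z_i + nu*mu) / (n_i + nu) with ML plug-in estimates."""
    p = minimize(negloglik, x0=np.zeros(2), args=(z, n), method="Nelder-Mead").x
    mu, nu = np.exp(p)
    return (z + nu * mu) / (n + nu), mu, nu
```

The BEB counterpart is obtained exactly as in the normal model: resample $z_i^*$ from the fitted marginal, refit, and average the resulting EB estimates over the bootstrap fits.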
The results are presented in Figure \ref{fig:PG}, which shows that the BEB estimator tends to perform better than the EB estimator. In particular, the amount of improvement is greater when $m$ is not large, as we expected. Moreover, we can also observe that the MSE difference tends to be larger as $\nu$ gets larger, which corresponds to the case where the random-effect variance gets smaller. This is consistent with the results in the normal model given in Section \ref{sec:FHsim}. \begin{figure}[!htb] \hspace{-0.5cm} \includegraphics[width=15cm]{PG-sim.pdf} \caption{The simulated MSE of the two estimators, BEB (bootstrap averaging empirical Bayes estimator) and EB (empirical Bayes estimator), in the Poisson-gamma model.} \label{fig:PG} \end{figure} \subsection{Example: Scottish lip cancer} We applied the BEB and EB methods to the famous Scottish lip cancer data over the 6 years from 1975 to 1980 in each of the $m=56$ counties of Scotland. For each county, the observed and expected numbers of cases are available, which are denoted by $z_i$ and $n_i$, respectively. Moreover, the proportion of the population employed in agriculture, fishing, or forestry is available for each county, and we used it as a covariate $\text{AFF}_i$, following Wakefield (2007). For each area, $i=1,\ldots,m$, we consider the Poisson-gamma model: \begin{equation}\label{PG-lip} z_i|{\theta}_i\sim \text{Po}(n_i{\theta}_i), \ \ \ {\theta}_i\sim \Gamma(\nu\exp(\beta_0+\beta_1\text{AFF}_i),\nu), \end{equation} where ${\theta}_i$ is the true risk of lip cancer in the $i$th area, and $\beta_0,\beta_1$ and $\nu$ are unknown parameters. For this data set, we computed the BEB as well as the EB estimates of ${\theta}_i$, where we used $1000$ bootstrap samples for computing the BEB estimator. In Figure \ref{fig:Scott-para}, we present the histograms of the bootstrap estimates used in the BEB estimates and the maximum likelihood (ML) estimates used in the EB estimators.
We can observe that the bootstrap estimates vary depending on the bootstrap samples, while the variability seems small compared with Figure \ref{fig:Corn-para}. This might come from the fact that the number of areas here is much larger than in the corn data in Section \ref{sec:corn}. Finally, in Figure \ref{fig:Scott-est}, we show the scatter plot of the percent relative difference between the BEB and EB estimates, that is, $100({\widehat \th}_i^{\text{Boot}}-{\widehat \th}_i)/{\widehat \th}_i$, against the expected number of cases $n_i$. Figure \ref{fig:Scott-est} shows that the differences get larger as $n_i$ gets smaller, since the direct estimator $y_i=z_i/n_i$ of ${\theta}_i$ is shrunk more strongly toward the regression mean $\exp(\beta_0+\beta_1\text{AFF}_i)$ in areas with small $n_i$. \begin{figure}[!htb] \hspace{-0.5cm} \includegraphics[width=15cm]{Scott-para.pdf} \caption{The histograms of the bootstrap estimates of $\beta_0$ (left), $\beta_1$ (center) and $\nu$ (right). Each vertical line denotes the maximum likelihood estimate. } \label{fig:Scott-para} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=10cm]{Scott-est.pdf} \caption{The scatter plot of the expected number of cases $n_i$ against the percent relative difference between the BEB and EB estimates of ${\theta}_i$.} \label{fig:Scott-est} \end{figure} \section{Conclusion and Discussion}\label{sec:conc} We have proposed the use of bootstrap averaging, known as ``bagging" in the context of machine learning, for improving the performance of empirical Bayes (EB) estimators. We focused on two models extensively used in practice, the two-stage normal hierarchical model and the Poisson-gamma model. In both models, the simulation studies revealed that the bootstrap averaging EB (BEB) estimator performs better than the EB estimator. In this paper, we considered typical area-level models as applications of the BEB estimator.
However, the BEB method could be extended to more general settings, for example, generalized linear mixed models. A detailed comparison in such models is left to future study. \medskip
\section{Introduction} Recently, Dimopoulos et al. \cite{DKMW1} showed that the many massive axion fields predicted by string vacua can be combined to produce a radiatively stable inflation, called Nflation, which is an interesting implementation of the assisted inflation mechanism proposed by Liddle et al. \cite{LMS1}, see also Refs. \cite{MW1, Piao02061} for many studies, and a feasible embedding of inflation in string theory. Then Easther and McAllister found \cite{EM1} that for a mass distribution following the Mar$\check{c}$enko-Pastur law, the spectral index of the scalar perturbation is always redder than that of the corresponding single field. However, this result is actually valid for any mass distribution and initial condition of the fields, as has been shown in \cite{KL1, SA1} numerically and in \cite{Piao} analytically. In addition, it was found for Nflation that the tensor-to-scalar ratio is always the same as in the single field case \cite{AL1} and that the non-Gaussianity is small \cite{SA21, BB1}, see also Refs. \cite{SL1} for relevant studies. In inflation, when the value of the field increases beyond some value, the quantum fluctuation of the field is expected to overwhelm its classical evolution. In this case, the inflaton field undergoes a kind of random walk, which leads to the production of many new regions with different energy densities. This was called eternal inflation \cite{V1983a, L1986a}. In principle, it was thought that, depending on the value of the field, there are generally three different phases in single field inflation, i.e. the eternal inflation phase, the slow roll inflation phase and the fast roll phase, which should also be valid for Nflation. However, recently it was found \cite{APQ2} that when the number of fields is large enough, the slow roll inflation phase disappears, which means there exists a large-$N$ transition for Nflation.
The reason is that, though the end value of slow roll inflation decreases as the number $N$ of fields increases, the value separating the slow roll inflation phase and the eternal inflation phase, hereafter called the eternal inflation boundary for convenience, decreases more rapidly; thus the two inevitably cross at some value of $N$, after which the slow roll inflation phase goes out of sight. This result means there is a bound on the number of fields driving slow roll Nflation. This is also consistent with recent arguments from black hole physics \cite{D07, D08}, in which there exists a gravitational cutoff, whose value equals our bound, beyond which the quantum gravity effect becomes important, see also Refs. \cite{Huang, LS1} for some similar bounds. In single field inflation, when the inflaton field is at its eternal inflation boundary the primordial density perturbation is ${\delta\rho}/\rho \sim 1$, thus it will be hardly possible for us to receive information from the eternal inflation phase, since by that time we would be swallowed by a black hole \cite{RBI}. This result is actually supported by a relation between the entropy and the total efolding number \cite{NSAEG}, in which, when ${\delta\rho}/\rho \sim 1$, the entropy per unit efolding number is less than one, which means we cannot obtain any information. Thus it is significant to examine how the above results change for Nflation, especially what occurs around its phase transition point. It can be expected that there may be more general and interesting results. In this paper, we first illustrate the phase diagram for Nflation, and then give relevant discussions. \section {Phase diagram for Nflation} In the Nflation model, inflation is driven by many massive fields. For simplicity, we assume that the masses of all fields are equal, i.e. $m_i=m$, and also $\phi_i=\phi$, which will also be assumed in the next section. Following Ref.
\cite{APQ2}, the end value of the slow roll inflation phase and the eternal inflation boundary with respect to $N$ are given by \begin{equation} \phi\simeq {M_p\over\sqrt {N}}, \label{ep}\end{equation} \begin{equation} \phi \simeq {1\over { N}^{3/4}}\sqrt{M_p^3\over m}, \label{cr}\end{equation} respectively. It can be noticed that the end value scales as ${1\over\sqrt {N}}$; it decreases more slowly with $N$ than the eternal inflation boundary, since the latter scales as ${1\over { N}^{3/4}}$. Thus when we plot the lines of the end value and the eternal inflation boundary with respect to $N$, there must be a point where these two lines cross, see Fig.1. This crossing point is \begin{equation} {N}\simeq {M_p^2\over m^2}, \label{caln}\end{equation} beyond which the slow roll inflation phase disappears. Thus here we call this point the critical point. One might expect that, after the critical point is crossed, the line denoting the eternal inflation boundary will not extend downwards any more; the line left is that denoting the end value, which still obeys Eq.(\ref{ep}), see the dashed line of Fig.1. The reason is that the calculation of the eternal inflation boundary is based on the slow roll approximation, while below the end value the slow rolling of the field is actually replaced by fast rolling; in this case the quantum fluctuation is suppressed, and thus it is hardly possible that the quantum fluctuation of the field will overwhelm its classical evolution. However, the case may not be so simple. In the next section, we will see there is an entropy bound for the number of fields, and at the critical point this bound is saturated. This means that beyond the critical point our semiclassical arguments above cannot be applied. Thus, in principle, what the diagram looks like beyond the critical point remains open. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{phase5.eps} \caption{ The $\phi-N$ phase diagram for Nflation.
The upper solid line is the eternal inflation boundary and the lower solid line is the end value of the slow roll inflation. These two lines split the region into three phases, i.e. the eternal inflation phase, the slow roll inflation phase and the fast roll phase. There is a critical point, beyond which the slow roll inflation phase disappears. }\label{xx} \end{center} \end{figure} The value of the fields at the critical point can be obtained by substituting Eq.(\ref{caln}) into either of Eqs.(\ref{ep}) and (\ref{cr}), which gives $\phi\simeq m$. This indicates that if initially $\phi< m$, no matter what ${N}$ is, slow roll inflation will not occur. The existence of slow roll inflation is important for solving the problems of standard cosmology and for generating the primordial perturbations seeding large scale structure. In the phase diagram Fig.1, we can see that the slow roll inflation phase occupies a limited region, which means that in order to make Nflation responsible for our observable universe, the relevant parameters must be chosen suitably. We assume that all masses are equal only for simplicity. For the case in which the masses are not all equal, the result is similar, as has been shown in Ref. \cite{APQ2}, in which the mass distribution following the Mar$\check{c}$enko-Pastur law \cite{EM1} is taken for the calculations. Thus the phase diagram is still Fig.1; the only slight difference is replacing $m$ with the average mass $\bar m$. It should be noted that here in the phase diagram the number $N$ of fields does not include massless scalar fields. The reason is that when the masses of fields are negligible, they will not affect the motion of the massive fields dominating the evolution of the universe, while the perturbations used to calculate the quantum jumps of the fields are those along the trajectory in field space; the massless fields only provide entropy perturbations orthogonal to the trajectory, which are thus not considered in the calculations deducing Eqs.(\ref{ep}) and (\ref{cr}).
Thus if there are some nearly massless fields and some massive fields with masses of nearly the same order, there should be a bound $N\lesssim M_p^2/{\bar m}^2$, in which only the massive fields are included in the definitions of $\bar m$ and $N$. \section{Discussion} \subsection{On primordial density perturbation at the eternal inflation boundary} In single field inflation, when the inflaton field is at its eternal inflation boundary, the primordial perturbation is ${\delta\rho}/\rho \sim 1$. The primordial density perturbation during Nflation can be calculated by using the formula of Sasaki and Stewart \cite{SS}. In the slow roll approximation, $\left(\frac{\delta\rho}{\rho}\right)^2 \sim {m^2{ N}^2\phi^4\over M_p^6}$ \cite{KL1, LR1}. The motion of the eternal inflation boundary obeys Eq.(\ref{cr}). Thus substituting Eq.(\ref{cr}) into it and cancelling the variable $\phi$, we obtain \begin{eqnarray}{\delta\rho\over \rho}\simeq \frac{1}{\sqrt{ N}},\label{a13}\end{eqnarray} where factors of order one have been neglected, as will also be done hereafter. We can see that ${\delta\rho}/{\rho}$ decreases as $N$ increases, and for each value of $N$, ${\delta\rho}/{\rho}$ is always less than one. This result is obviously different from the single field case. The reason for this result is that in single field inflation the eternal inflation boundary and the point at which the density perturbation equals one coincide; here, however, the two scale differently with $N$, one as $\sim 1/{N}^{3/4}$ and the other as $\sim 1/\sqrt{N}$. Intuitively, eternal inflation means that the quantum fluctuations of the fields lead to the production of many new regions with different energy densities; thus it seems that when we approach the eternal inflation boundary the density perturbation should be expected to be near one. In this sense our result may seem counterintuitive.
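Setting the $O(1)$ factors in Eqs.(\ref{ep}), (\ref{cr}), (\ref{caln}) and (\ref{a13}) to unity, these scaling relations can be checked numerically; the values of $M_p$ and $m$ below are purely illustrative.

```python
import numpy as np

M_p = 1.0      # Planck mass (units where M_p = 1); illustrative
m = 1e-3       # common field mass; illustrative value only

def phi_end(N):
    """End of slow-roll inflation, Eq. (ep): phi ~ M_p / sqrt(N)."""
    return M_p / np.sqrt(N)

def phi_eternal(N):
    """Eternal-inflation boundary, Eq. (cr): phi ~ N^{-3/4} (M_p^3/m)^{1/2}."""
    return np.sqrt(M_p**3 / m) * N**-0.75

def drho_over_rho(N):
    """delta rho / rho ~ m N phi^2 / M_p^3, evaluated on the eternal boundary."""
    return m * N * phi_eternal(N)**2 / M_p**3

# The two boundaries cross at the critical point N ~ (M_p/m)^2, Eq. (caln),
# where the field value is phi ~ m:
N_crit = (M_p / m)**2
assert np.isclose(phi_end(N_crit), phi_eternal(N_crit))
assert np.isclose(phi_end(N_crit), m)

# On the eternal boundary the perturbation amplitude is 1/sqrt(N), Eq. (a13):
for N in [1e2, 1e4, 1e6]:
    assert np.isclose(drho_over_rho(N), 1.0 / np.sqrt(N))
```

The boundary decreases faster with $N$ than the end value, so below $N_{\rm crit}$ the slow roll window is open and above it the window is closed.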
However, what the eternal inflation phase actually means is a phase in which the quantum fluctuation of the field overwhelms its classical evolution, which does not necessarily imply that the density perturbation is about one. Thus, unlike single field inflation, in which it is impossible for us to receive information from the eternal inflation phase since by that time a black hole would have swallowed us due to the primordial density perturbation being near one, it seems that when $N$ is large we may obtain some information from the eternal inflation phase; at least in principle we can obtain information from the boundary of the eternal inflation phase. Beyond this boundary, the fields undergo random walks, so the slow roll approximation is broken and results based on it are not robust any more. In principle, for the eternal inflation phase of Nflation we need to calculate the density perturbation in a new way to know how large it actually is, which, however, is beyond our capability here. The eternal inflation phase for a single field has been studied by using the stochastic approach \cite{AAS}. \subsection{ On entropy bound} The entropy during Nflation can be approximated by the dS entropy $S\sim {M_p^2\over H^2}$. Here we regard $S$ as the entropy at the eternal inflation boundary. Thus we have \begin{equation} S \sim {M_p^2\over H^2}\sim {M_p^4\over Nm^2\phi^2}\sim \sqrt{N}{M_p\over m}, \label{s}\end{equation} where Eq.(\ref{cr}) has been used. It is interesting to find that $S$ is proportional to $\sqrt{N}$, which means the entropy increases with the number of fields. Here the case is slightly similar to that of the entanglement entropy for a black hole, in which there seems to be a dependence of the entanglement entropy on $N$, which conflicts with the usual result for black hole entropy, since each of the fields contributes equally to the entropy \cite{D08}.
However, this problem may be solved by invoking the correct gravity cutoff $\Lambda\sim {M_p\over\sqrt{N}}$ \cite{D07}, as has been argued in Ref. \cite{D08}. If in Eq.(\ref{s}) we replace $M_p$ with the same gravity cutoff $\Lambda$, then we obtain $S\sim \sqrt{N} {\Lambda\over m}\sim {M_p\over m}$, which is just the result for a single field, i.e. $S\sim {M_p\over m}$ at the eternal inflation boundary. Thus it seems that the argument in Ref. \cite{D08} is universal for the relevant issues involving $N$ species. It can be noticed that the efolding number is ${\cal N}\sim {N\phi^2\over M_p^2}$. For initial $\phi$ at the eternal inflation boundary, where $\phi$ is given by Eq.(\ref{cr}), and for fixed $N$, i.e. along the line paralleling the $\phi$ axis in Fig.1, the $\cal N$ obtained is the total efolding number along the corresponding line in the slow roll inflation phase, hereafter called $\Delta {\cal N}$, see Fig.1. Thus with Eq.(\ref{cr}), we have $\Delta {\cal N}\sim {M_p\over m \sqrt{N}}$. Substituting it into Eq.(\ref{s}), for the eternal inflation boundary we have \begin{equation} N\cdot \Delta {\cal N}\simeq S, \label{s11}\end{equation} which is a general entropy bound including $N$, and is also our main result. It means that below the eternal inflation boundary, we have the bound $N\cdot \Delta {\cal N} \lesssim S $. This result indicates that for fixed $N$, i.e. along the line paralleling the $\phi$ axis in Fig.1, the total efolding number $\Delta {\cal N}$ of the slow roll inflation phase is bounded by $S$, while for fixed $\Delta {\cal N}$, i.e. along the line paralleling the $N$ axis in Fig.1, the number $N$ of fields is bounded by $S$; at the eternal inflation boundary, the entropy bound is saturated. There are two special cases, corresponding to the regions around the red points in Fig.1. In detail, one is that for $N=1$, i.e.
a single field, we have $\Delta {\cal N}\simeq S$ from Eq.(\ref{s11}), thus the single-field result is recovered \cite{NSAEG}. Following \cite{NSAEG}, extended to large $N$, Eq.(\ref{s11}) can actually also be deduced. By taking the derivatives of $\cal N$ and $S$ with respect to time, we have \begin{eqnarray}\frac{d{\cal N}}{dS}\simeq \frac{ M_p^2}{m^2S^2}, \label{a11} \end{eqnarray} where $S$ is a function of $\phi$, see the second relation in Eq.(\ref{s}), and thus can be used to cancel $\phi$. By integrating this equation along the line paralleling the $\phi$ axis in Fig.1, where the lower limit is the eternal inflation boundary and the upper limit is the end value of the slow roll inflation phase, and then applying the approximation $\phi_e\ll\phi$, where $\phi$ and $\phi_e$ represent the values at the eternal inflation boundary and at the end of slow roll inflation, respectively, which actually implies that $S_e\gg S$ and thus $({S_e-S})/{S_e}\simeq 1$, we have \begin{eqnarray} \Delta {\cal N}\simeq (\frac{\delta\rho}{\rho})^2S, \label{a12}\end{eqnarray} where $\left(\frac{\delta\rho}{\rho}\right)^2\sim { M_p^2\over m^2 S^2}$ has been applied, which can be obtained since both ${\delta\rho\over \rho}$ and $S$ are functions of $\phi$. This result was shown in Ref. \cite{NSAEG} for a single field; however, since Eq.(\ref{a12}) is independent of the number $N$ of fields, it is still valid for $N$ fields. For single field inflation, ${\delta\rho\over \rho}\sim 1$ only at the eternal inflation boundary, thus we always have $\Delta {\cal N}\lesssim S$ in the slow roll phase, i.e. the total efolding number is bounded by the entropy, which is saturated at the eternal inflation boundary. Note that Eq.(\ref{a12}) is an integral result in which the change of $\delta\rho/\rho $ with $\phi$, and thus with $S$, is considered, which is slightly different from that in Ref. \cite{HLW}. Thus combining Eqs.(\ref{a13}) and (\ref{a12}), we can find Eq.(\ref{s11}) again.
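Written out, combining the boundary amplitude $\delta\rho/\rho \simeq 1/\sqrt{N}$ with $\Delta {\cal N}\simeq(\delta\rho/\rho)^2 S$ gives:

```latex
\Delta {\cal N}
\;\simeq\; \Big(\frac{\delta\rho}{\rho}\Big)^{2} S
\;\simeq\; \frac{S}{N}
\qquad\Longrightarrow\qquad
N \cdot \Delta {\cal N} \;\simeq\; S .
```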
This also indicates that the result of Eq.(\ref{a13}) is reliable. The other is for $N$ near its critical point, where approximately we have $\Delta {\cal N}\simeq 1$; thus we obtain $N\simeq S$, i.e. $S$ is saturated by the number $N$ of fields. This can also be seen by combining Eq.(\ref{caln}) for the critical point with Eq.(\ref{s}), in which we find $S\simeq N $ at the critical point. Thus below the critical point, $ N\lesssim S$. From Eq.(\ref{s}), $S\sim \sqrt{N}{M_p\over m} \gtrsim N$ can be obtained. This means $N \lesssim ({M_p\over m})^2$. In Ref. \cite{D07}, it was argued that $M_p$ is renormalized in the presence of $N$ fields at scale $m$ so that $M_p^2\gtrsim Nm^2$; in other words, $N>({M_p\over m})^2$ is inconsistent. Here, if $N>({M_p\over m})^2$, then combining it with Eq.(\ref{s}), we would have $N>S$, i.e. the number $N$ of fields would be larger than the dS entropy at the critical point. This is certainly impossible, since intuitively it may be thought that there is at least one degree of freedom for each field, thus the total number of degrees of freedom of the $N$-field system, i.e. the entropy, should be at least $N$, while the dS entropy is the maximal entropy of a system. Thus we arrive at the same conclusion as Ref. \cite{D07} from a different viewpoint. This again shows the consistency of our result. \textbf{Acknowledgments} We thank Y.F. Cai for helpful discussions and comments. I.A. thanks the support of HEC, Pakistan. This work is supported in part by NSFC under Grant No: 10491306, 10521003, 10775179, 10405029, 10775180, in part by the Scientific Research Fund of GUCAS (NO.055101BM03), and in part by CAS under Grant No: KJCX3-SYW-N2.
\begin{abstract} Terry Sejnowski's 2020 paper [arXiv:2002.04806] is entitled ``The unreasonable effectiveness of deep learning in artificial intelligence". However, the paper itself doesn't attempt to answer the implied question of why Deep Convolutional Neural Networks (DCNNs) can approximate so many of the mappings that they have been trained to model. While there are detailed mathematical analyses, this short paper attempts to look at the issue differently, considering the way that these networks are used, the subset of these functions that can be achieved by training (starting from some location in the original function space), as well as the functions to which such networks will actually be applied. \end{abstract} Terry Sejnowski's paper is entitled {\em The unreasonable effectiveness of deep learning in artificial intelligence} \cite{Sejnowski:2020el}, and he compares and contrasts deep learning with Wigner \cite{Wigner:1960ue}, who marvelled at the limited number of parameters in equations. This is in contrast with modern deep learning, with its abundance of parameters and extremely high dimensional spaces. While he discusses how we should look to the brain for further optimization, in the paper he does not attempt to answer the question of {\em why} deep learning is so unreasonably effective. So why can Deep Convolutional Neural Networks (DCNNs) approximate so many of the mappings that they have been trained to model? There are detailed mathematical analyses, for example \cite{Hornik:1991kg}, \cite{Lin:2017cm}, \cite{Grohs:2019wr} (and the many references within these papers), but this short paper attempts to look at the issue differently, considering the way that these networks are used, the subset of these functions that can be achieved by training (starting from some location in the original function space), as well as the functions that in reality will be modelled. Deep neural networks have a great many parameters.
Firstly, there is the architecture of the network, constrained only by the number of inputs and outputs, and then there are the actual (trainable) parameters of the network itself. For a deep feedforward network, the number of these is set by the number of layers, and by the number of pseudo-neurons (hereafter neurons) in each layer. For DCNNs, it also depends on how the convolutions are implemented. The activation function and output function of each neuron are also parameters, though these are not usually trained. The original topology is also constrained, both by the nature of the mapping being approximated and by how the inputs and outputs for the network have been coded, but these, and the actual internal topology (number of layers, number of neurons per layer, how and where convolution is performed), are not normally optimized during network training: only the weights and biases are changed. We write $\cal{F}$ for the set of functions that the network can perform, and $f^1_{\rm init} \in \cal{F}$ for the function that is implemented by the network prior to training. Training will lead to a sequence of functions $f^i_{\rm init}$ $(i \geq 1)$ eventually converging (assuming that learning is set up so that it does converge) to some $f^{\rm final}_{\rm init} \in \cal{F}$. Clearly, the sequence of functions will depend on $f^1_{\rm init}$, on the actual topology, and on the dataset and precise learning and adaptation rules used to train the network. Networks are generally trained using a dataset $D = (D_{\rm in}, D_{\rm out})$, where $D_{\rm in} = \{d^j_{\rm in}: j = 1 \ldots t\}$, and $D_{\rm out} = \{d^j_{\rm out}: j = 1 \ldots t\}$. $D$ is thought of as sampling some underlying classification or function $f_D$. $t$ is the total number of training examples. $d^j_{\rm in}$ is a vector whose length is the size of the input layer, $N_{\rm input}$, and $d^j_{\rm out}$ is a vector whose length is the size of the output layer, $N_{\rm output}$.
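As a concrete illustration of the trainable-parameter count (weights and biases) just described, for a dense feedforward network whose first and last widths are fixed by $N_{\rm input}$ and $N_{\rm output}$; the layer sizes below are invented for the example:

```python
def count_parameters(layer_sizes):
    """Trainable parameters (weights + biases) of a dense feedforward
    network with the given layer widths [N_input, h1, ..., N_output]."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out     # weight matrix plus bias vector
    return total

# An invented example: 784 inputs, two hidden layers, 10 outputs.
print(count_parameters([784, 128, 64, 10]))   # prints 109386
```

Even this small network has over $10^5$ trainable parameters, which is the abundance of parameters referred to above.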
The aim is to be able to map other unseen inputs $d^{\rm new}_{\rm in}$ to appropriate $d^{\rm new}_{\rm out}$. For classification, there will be a $j$ such that $d^{\rm new}_{\rm out} = d^{j}_{\rm out}$, but for function approximation, $d^{\rm new}_{\rm out}$ may be novel. For a particular training dataset $D$, we will normally {\em choose} an initial topology $\tau = \tau(D)$ that is likely to be able to solve the problem: that is, for which there is likely to be an \begin{equation} f^{\rm final}_{\rm init} = f^{\rm final}_{\rm init}(\tau(D)) \label{eqn:1} \end{equation} which approximates the characteristics of $D$ within some range of error $\epsilon$\footnote{We are attempting to be general here: the nature of $D$, and the nature of $\epsilon$, will depend on the problem at hand.}. We note that if experiment shows that this is not the case, we can choose a different topology $\tau'$ until we find one for which this is true. We note that $D$ is not arbitrary, but has some real-world problem at its root. Thus (for example) we would not be trying to learn a completely random classification of $N_{\rm input}$ binary inputs (which would essentially require $2^{N_{\rm input}}$ parameters to learn precisely, or rather fewer than that if we choose a nonzero $\epsilon$). But can we say anything else about $D$, or about the classification or function that $D$ is a sample from (which is more important, since it is generalisation that we are really interested in)? In practice, $D$ is a sample from some underlying function $f_D$, mapping a set of inputs to a set of outputs, rather than a collection of arbitrary input/output pairs, and it is $f_D$ that we are trying to approximate. In \cite{Grohs:2019wr} there is some discussion of the variety of smooth functions for which deep neural networks are good approximators.
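The dataset notation above can be made concrete with a toy example; the sizes and the smooth stand-in for $f_D$ below are invented purely for illustration:

```python
import numpy as np

# Invented sizes: t examples, N_input inputs, N_output outputs.
t, N_input, N_output = 100, 8, 3
rng = np.random.default_rng(0)

D_in = rng.normal(size=(t, N_input))          # the d^j_in, j = 1..t
# A smooth stand-in for the underlying f_D (the real f_D is problem-specific):
W = rng.normal(size=(N_input, N_output))
D_out = np.tanh(D_in @ W)                     # the d^j_out, j = 1..t

assert D_in.shape == (t, N_input) and D_out.shape == (t, N_output)
```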
The function space $\cal{F}$ (which $f_D$ is a member of) can be characterised as the set of mappings \begin{equation} \cal{F} = \{f: {\mathbb R}^{N_{\rm input}} \rightarrow {\mathbb R}^{N_{\rm output}}\} \end{equation} where $N_{\rm input}$ is the number of input units and $N_{\rm output}$ is the number of output units. However, because computers are finite devices, the space is much smaller than this, more like \begin{equation} F' = \{f: (2^{64})^{N_{\rm input}} \rightarrow (2^{64})^{N_{\rm output}}\} \end{equation} (writing $(2^{64})^{N}$ for the finite grid of $N$-vectors of 64-bit values), assuming that both inputs and outputs are coded in 64 bits (as is currently likely to be the case for 64 bit floating point coded inputs). If we are successful in approximating $f_D$, this means that there is (and we have found) a $\tau$ such that $f^{\rm final}_{\rm init}(\tau)$ is close enough to $f_D \in F'$. But what sorts of mappings can the network produce? This clearly depends on the number of layers in the network, the way in which convolving layers are used, and (assuming the usual form of artificial neuron is used) the function used to compute the output from the activation of a neuron, and is the question asked in \cite{Grohs:2019wr}. Earlier we noted that $f_D$ was not arbitrary: but if we are to understand how $f^{\rm final}_{\rm init}(\tau)$ can be close enough to $f_D \in F'$ we need to refine what {\em not arbitrary} might mean. We identify two issues here: \begin{description} \item[Issue 1] the nature of likely $f_D$'s, defined by the nature of the problem from which $D$ was sampled \item[Issue 2] the actual domain of the function that $D$ was drawn from \end{description} Consider an image classification problem: for such a problem, we start with images (that is, $d^i_{\rm in}$ is a coded image, probably an $X$ by $Y$ pixel image, with each pixel coded in 8 (monochrome) or 24 (colour) bits).
These images result from incident light (from many sources, usually) reflecting from (and/or being transmitted by) points in the world, passing through the point spread function of each pixel detector at the camera, and arriving as the $X$ by $Y$ vector that we are trying to interpret. Considering Issue 1 above, there are issues of size and translation invariance in $f_D$. In addition, we would expect the same classification for an image and a slightly blurred version of that image, and we would expect the classification to remain the same under a range of illuminations. Thus the $f_D$ from which $D$ is drawn is quite tightly constrained. This effect is not limited to image classification: something similar is true for sound classification, in that small alterations in the intensity or pitch of the sound, or in the reverberation of the sound prior to transduction, would again not be expected to alter the classification of the sound. Where this becomes more difficult is in the case of categorical perception, as in figure \ref{fig:OtoQ}: each change is very small, but at some point one needs to change the classification. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\linewidth]{OtoQ.pdf} \caption{When should an image classifier say that the letter O turns into a letter Q?} \label{fig:OtoQ} \end{center} \end{figure} A related issue arises with context: see figure \ref{fig:Bto13} (from \cite{Kay:2018ww}). While the first issue may be soluble using a neural network, the second one requires the use of the context of the symbols on either side, as the central images are identical. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.3\linewidth]{BfromKayPhillips.png} \caption{When should an image classifier say that the middle symbol is B and when 13?} \label{fig:Bto13} \end{center} \end{figure} For Issue 2 above, one might at first think that any possible vector that is $X$ by $Y$ with 8 (or 24) bit elements might be a possible $d^i_{\rm in}$.
However, this is generally not the case: in general, pixel values do not change suddenly or randomly between adjacent locations. If one considered $\cal{F}$ rather than $F'$, and considered smooth functions, one might be able to restrict it by bounding local partial derivatives. The overall effect is that although $f_D$ is sampled from a function on the input space (whether $\mathbb{R}^{N_{\rm input}}$ or $ (2^{64})^{N_{\rm input}} $), many (indeed most) elements of this input space will never occur. Thus, testing $f^{\rm final}_{\rm init}(\tau)$ on randomly selected elements of the input space may be inappropriate, and lead to unexpected results as in \cite{Szegedy:2013vwa}, as well as more or less random patterns being misclassified (when they should be marked as unclassifiable). What do these issues say to {\em The unreasonable effectiveness of deep learning in artificial intelligence}? Firstly, we note that the {\em unreasonable effectiveness} is posited on the selection of an appropriate network $\tau(D)$ by the developer: while the mathematics might suggest that a single very wide hidden layer network might suffice \cite{Hornik:1991kg} the actual networks used do not look like this, primarily because appropriate generalisation is more important than being able to create a precise function. The arguments in \cite{Lin:2017cm} and \cite{Grohs:2019wr} are unaffected, but it is clear that the space of functions (particularly classifiers) that require to be approximated is smaller than might have been imagined. There is a sense in which the functions are relatively smooth, and constrained further by the application domain. Further, the neural network will always classify or approximate any input from the input domain, whether that input is possible (in terms of the actual problem input domain) or not. Sejnowski discusses some unanswered questions, specifically why it is possible to generalize from so few examples while using so many parameters. 
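A toy numerical illustration of this point (our own sketch, with a smoothed random field standing in for a natural image, since the statement above is qualitative): adjacent pixels of realistic inputs vary far less than those of uniformly random vectors, so randomly drawn test vectors sample a region of the input space that the data essentially never occupies.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_image(n=64, passes=10):
    """Crude stand-in for a natural image: repeatedly locally averaged noise."""
    img = rng.random((n, n))
    for _ in range(passes):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5
    return img

def mean_adjacent_diff(img):
    """Average absolute difference between horizontally adjacent pixels."""
    return float(np.mean(np.abs(np.diff(img, axis=1))))

d_smooth = mean_adjacent_diff(smooth_image())
d_random = mean_adjacent_diff(rng.random((64, 64)))
print(d_smooth, d_random)  # the smooth field varies far less between neighbours
```

The gap between the two numbers is the crude analogue of the bound on local partial derivatives mentioned above: the overwhelming majority of raw $X\times Y$ vectors look like the second case, which the training distribution never produces.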
Based on what we have explained, because the classifier (or function approximator) is likely to include dimensionality-reducing sections within the network (such as relatively narrow layers, or convolution layers), randomly selected inputs may lie well away from the manifold on which $f^{\rm final}_{\rm init}(\tau)$ was actually trained (and for which it therefore gives appropriate results), and so may be given inappropriate outputs. However, actual data from the same source as the training data will not suffer from this problem. Thus we conclude that the unreasonable effectiveness, while certainly real, is perhaps just a little less surprising. Do the issues above affect the reachability of an appropriate $f^{\rm final}_{\rm init}(\tau)$? Issue 2 above suggests that testing of the function should be on problem-appropriate possible inputs, rather than on inputs drawn randomly from the domain. Thus $\epsilon = \epsilon(f_D, \tau, f_{\rm init}^1)$ should not be evaluated on randomly selected elements of the input domain. This fits with many test problems, where a dataset is supplied and used for both training and testing. It is not clear that this affects reachability directly; however, the user selection of $\tau$ (and $f_{\rm init}^1$) is clearly important here. \section*{Acknowledgement} Thanks to Andrew Abel for useful comments on an earlier draft. \bibliographystyle{halpha}
\section{introduction}\label{intro} Since 2003, numerous exotic structures have been observed in experiments~\cite{Choi:2007wga, Aaij:2014jqa,Chilikin:2013tch,Chilikin:2014bkk,Ablikim:2013xfr,Ablikim:2013wzq,Ablikim:2013mio,Belle:2011aa,Adachi:2012cx,Aaij:2015tga,Aaij:2018bla}, amongst which many states cannot be accommodated within the traditional quark model. In the literature, there are many possible explanations. The most prominent ones are the molecules (loosely bound states of two hadrons), the tetraquarks (compact bound states), and the hybrids (composed of gluons and quarks). For a recent review, see Refs.~\cite{Chen:2016qju,Guo:2017jvc,Esposito:2016noz,Ali:2017jda,Liu:2019zoy}. Fully heavy tetraquark states are a topic of great interest. The interactions between the heavy quarks may be dominated by the short-range one-gluon-exchange (OGE) potential rather than the long-range potentials. Thus, they are good candidates for compact tetraquark states. Unlike a meson or a baryon, where the color configuration of the quarks is unique, i.e. $q_i\bar q_j\delta_{ij}$ or $\epsilon_{ijk}q_iq_jq_k$, the color structure of the tetraquark is much richer. For the tetraquark states, the four quarks can neutralize the color in two ways, $6_c\otimes \bar 6_c=1_c$ and $\bar 3_c\otimes 3_c=1_c$. In this work, we label the two color configurations $|(QQ)_{6_c}(\bar Q\bar Q)_{\bar 6_c}\rangle$ and $|(QQ)_{\bar 3_c}(\bar Q\bar Q)_{3_c}\rangle$ as $6_c-\bar 6_c$ and $\bar 3_c-3_c$, respectively. In Refs.~\cite{Maiani:2004vq,Ali:2011ug,Eichten:2017ffp}, the authors investigated the tetraquark states in the $\bar 3_c-3_c$ configuration. In Refs.~\cite{Chen:2016oma,Maiani:2019cwl}, the authors pointed out that the $6_c-\bar 6_c$ configuration is also very important to form the tetraquark states. The fully heavy tetraquark state is a golden system to investigate the inner color configuration of the multiquark states.
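The attraction and repulsion pattern behind these two configurations can be made explicit with a few lines of SU(3) algebra: for a quark pair coupled to the irrep $R$, $\langle\frac{\lambda_i}{2}\cdot\frac{\lambda_j}{2}\rangle=\frac{1}{2}\left[C_2(R)-2C_2(3)\right]$. A minimal sketch (our own illustration, not part of the derivation in the text):

```python
from fractions import Fraction

def casimir(p, q):
    """SU(3) quadratic Casimir for the irrep with Dynkin labels (p, q)."""
    return Fraction(p * p + q * q + p * q + 3 * p + 3 * q, 3)

C3, C3bar, C6 = casimir(1, 0), casimir(0, 1), casimir(2, 0)  # 4/3, 4/3, 10/3

def pair_factor(C_pair):
    """<lambda_i/2 . lambda_j/2> for two quarks coupled to an irrep with Casimir C_pair."""
    return Fraction(1, 2) * (C_pair - 2 * C3)

print(pair_factor(C3bar))  # -2/3: attraction inside a 3bar diquark
print(pair_factor(C6))     #  1/3: repulsion inside a 6  diquark
```

The sign flip between the two channels is the group-theoretical origin of the competition between the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ configurations discussed below.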
For the above reasons, the fully heavy tetraquark states have attracted both experimental and theoretical attention. Recently, the CMS collaboration observed $\Upsilon(1S)$ pair production and reported an indication of a $bb\bar b\bar b$ signal around $18.4$ GeV with a global significance of $3.6 \sigma$~\cite{Khachatryan:2016ydm,S. Durgut}. Later, the LHCb collaboration searched the invariant mass distribution of $\Upsilon(1S)\mu^+\mu^-$ and did not observe the tetraquark state $X_{b b\bar b\bar b}$~\cite{Aaij:2018zrb}. The tension between the CMS and LHCb data requires more experimental and theoretical studies of the fully-beauty tetraquarks. Mass spectroscopy has been a major platform to probe the dynamics of the tetraquarks. Since 1975, there have been many theoretical works on the mass spectroscopy of the fully heavy quark states~\cite{Iwasaki:1975pv,Chao:1980dv,Ader:1981db,Zouzou:1986qh,Heller:1986bt,SilvestreBrac:1992mv,SilvestreBrac:1993ry}. The existence of the fully heavy quark states is still controversial. Recent interest has followed the experimental developments of the past several years. The mass spectra have been calculated in different schemes, for instance, a diffusion Monte Carlo method~\cite{Bai:2016int}, the non-relativistic effective field theory (NREFT)~\cite{Anwar:2017toa}, the QCD sum rules~\cite{Wang:2017jtz,Wang:2018poa,Chen:2018cqz}, covariant Bethe-Salpeter equations~\cite{Heupel:2012ua}, various quark models~\cite{Lloyd:2003yc,Debastiani:2017msn, Barnea:2006sd}, and other phenomenological models~\cite{Berezhnoy:2011xy,Berezhnoy:2011xn,Karliner:2016zzc,Esposito:2018cwh,Karliner:2017qhf}. The lowest $bb\bar b\bar b$ and $cc\bar c\bar c$ states are estimated to lie in the mass ranges $18-20$ GeV and $5-7$ GeV, respectively. In contrast, the authors of Ref.~\cite{Wu:2016vtq} investigated the mass spectra of the $QQ\bar Q\bar Q$ states in the chromomagnetic interaction (CMI) model and concluded that no stable $QQ\bar Q\bar Q$ states exist.
Later, several other approaches, such as the nonrelativistic chiral quark model~\cite{Chen:2019dvd, Liu:2019zuc}, lattice QCD~\cite{Hughes:2017xie} and other models~\cite{Richard:2017vry,Czarnecki:2017vco}, also did not support the existence of bound $QQ\bar Q\bar Q$ states. To investigate the existence of the fully heavy tetraquark states, we systematically calculate the mass spectra of the $bb\bar b\bar b$, $cc\bar c\bar c$ and $bb\bar c\bar c~(cc\bar b\bar b)$ states in two non-relativistic quark models. In general, a tetraquark state should be an admixture of the two color configurations, $6_c-\bar 6_c$ and $\bar 3_c-3_c$. In this work, including the coupled-channel effects, we perform a dynamical calculation of the mass spectra of the tetraquark states and investigate the inner structures of the ground states. We organize the paper as follows. In Sec.~\ref{sec1}, we introduce the formalism used to calculate the mass spectra, including the two non-relativistic quark models, the construction of the wave functions, and the analytical expressions of the Hamiltonian matrix elements. In Sec.~\ref{sec2}, we present the numerical results and discuss the coupled-channel effects between the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ configurations. In Sec. \ref{sec3}, we compare our results with those in other models and give a brief summary.
\section{formalism}\label{sec1} \subsection{Hamiltonian} The nonrelativistic Hamiltonian of a $Q_1Q_2\bar Q_3 \bar Q_4$ tetraquark state reads \begin{eqnarray} H & =& \sum_{i=1}^{4}\frac{p_{i}^{2}}{2m_{i}}+\sum_{i<j}V_{ij}+\sum_{i}m_{i}\nonumber\\ & =&\frac{p^{2}}{2u}+V_{I}+h_{12}+h_{34} \end{eqnarray} with \begin{eqnarray} &&V_{I}=V_{13}+V_{14}+V_{23}+V_{24},\label{vi}\\ &&h_{ij} = \frac{p_{ij}^{2}}{2u_{ij}}+V_{ij}+m_{i}+m_{j},\\ &&\mathbf{p}_{ij}=\frac{m_{i}\mathbf{p}_{j}-m_{j}\mathbf{p}_{i}}{m_{i}+m_{j}}, \,\,\ u_{ij}=\frac{m_i m_j}{m_i+m_j}, \\ && m_{ij}=m_i+m_j, \,\,\ u={{m_{12}m_{34}}\over{m_{12}+m_{34}}}, \\ && \mathbf{P}_{ij}=\mathbf{p}_i+\mathbf{p}_j, \,\, \mathbf{p}=\frac{m_{34}\mathbf{P}_{12}-m_{12}\mathbf{P}_{34}}{m_{12}+m_{34}}. \end{eqnarray} where $\mathbf{p}_i$ and $m_i$ are the momentum and mass of the $i$th quark. The kinetic energy of the center-of-mass motion has been removed by the constraint $\sum^4_{i=1}\mathbf{p}_i=0$. $V_{ij}$ is the potential between the $i$th and $j$th quarks. The $u_{ij}$, $m_{ij}$, $\mathbf{p}_{ij}$, and $ \mathbf{P}_{ij}$ are the reduced mass, total mass, relative momentum, and total momentum of the $(ij)$ pair of quarks, respectively. The $u$ and $\mathbf{p}$ are the reduced mass and relative momentum between the $(12)$ and $(34)$ quark pairs. $h_{12}$ and $h_{34}$ are the internal Hamiltonians of the $(12)$ and $(34)$ quark pairs, and $V_I$ is the interaction between the two pairs. Since the heavy quark masses are large, relativistic effects are less important, and we use nonrelativistic quark models to describe the interaction between two heavy quarks.
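As a sanity check on this decomposition, note that in the centre-of-mass frame the total momentum of the $(12)$ pair equals minus that of the $(34)$ pair, so the kinetic term separates exactly into the three relative pieces above. A small numerical sketch (illustrative masses and random momenta, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)
m = np.array([5.1, 5.1, 1.8, 1.8])   # illustrative quark masses (GeV)

# random momenta in the centre-of-mass frame, sum_i p_i = 0
p = rng.normal(size=(4, 3))
p -= p.mean(axis=0)

T_direct = sum(p[i] @ p[i] / (2 * m[i]) for i in range(4))

m12, m34 = m[0] + m[1], m[2] + m[3]
u12, u34 = m[0] * m[1] / m12, m[2] * m[3] / m34
u = m12 * m34 / (m12 + m34)

p12 = (m[0] * p[1] - m[1] * p[0]) / m12   # relative momentum within the (12) pair
p34 = (m[2] * p[3] - m[3] * p[2]) / m34
P12 = p[0] + p[1]                         # equals -P34, the inter-cluster momentum

T_cluster = P12 @ P12 / (2 * u) + p12 @ p12 / (2 * u12) + p34 @ p34 / (2 * u34)
print(T_direct, T_cluster)                # agree to rounding error
```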
The quark model proposed in Ref.~\cite{Wong:2001td} contains one gluon exchange (OGE) plus a phenomenological linear confinement interaction, and the $V_{ij}$ reads \begin{eqnarray}\label{qm} &&V_{ij}(r_{ij})=\frac{\mathbf{\lambda}_{i}}{2}\frac{\mathbf{\lambda}_{j}}{2}\left(V_{\text{coul}}+V_{\text{conf}}+V_{\text{hyp}}+V_{\text{cons}}\right)\nonumber\\ &&=\frac{\lambda_{i}}{2}\frac{\lambda_{j}}{2}\left(\frac{\alpha_{s}}{r_{ij}}-\frac{3b}{4}r_{ij}-\frac{8\pi\alpha_{s}}{3m_{i}m_{j}}\mathbf{s}_{i}\cdot\mathbf{s}_{j}e^{-\tau^{2}r_{ij}^{2}}\frac{\tau^{3}}{\pi^{3/2}}+V_{\text{cons}}\right),\nonumber\\ \end{eqnarray} where $\lambda$ is the color matrix (replaced by $-\lambda^{*}$ for an antiquark). $\mathbf{s}_{i}$ is the spin operator of the $i$th quark. $r_{ij}$ is the relative position of the $i$th and $j$th quarks. $V_{\text{coul}}$, $V_{\text{conf}}$, and $V_{\text{hyp}}$ represent the OGE color Coulomb, the linear confinement, and the hyperfine interactions, respectively. The OGE interaction contains a contact hyperfine term, which would lead to a divergent hyperfine splitting; in Eq.~(\ref{qm}) this term is therefore smeared in $V_{\text{hyp}}$. The $\alpha_{s}$ is the running coupling constant of perturbative QCD, \begin{eqnarray} \alpha_{s}(Q^{2})&=&\frac{12\pi}{(33-2N_{f})\ln(A+Q^{2}/B^{2})}. \end{eqnarray} In this work, we take the square of the invariant mass of the interacting quarks as the scale $Q^{2}$. The values of the parameters are listed in Table~\ref{par}. They are determined by fitting the mass spectra of the mesons as listed in Table~\ref{meson mass}. \begin{table*} \renewcommand\arraystretch{1.5} \caption{The values of parameters in quark model I~\cite{Wong:2001td} and model II~\cite{SilvestreBrac:1996bg}.
}\label{par} \centering \setlength{\tabcolsep}{2.3mm} \begin{tabular}{c|cccccccccc} \toprule[1pt]\toprule[1pt] \multirow{2}{*}{Model I} & \multirow{2}{*}{} & \multirow{2}{*}{} & $m_{c}${[}GeV{]} & $m_{b}${[}GeV{]} & $b[\text{GeV}^{2}]$ & $\tau${[}GeV{]} & $V_{\text{cons}}${[}GeV{]} & $A$ & $B${[}GeV{]} & \tabularnewline & & & $1.776$ & $5.102$ & $0.18$ & $0.897$ & $0.62$ & $10$ & $0.31$ & \tabularnewline \midrule[1pt] \multirow{2}{*}{Model II} & $p$ & $r_{c}$ & $m_{c}${[}GeV{]} & $m_{b}${[}GeV{]} & $\kappa$ & $\kappa'$ & $\lambda[\text{GeV}^2]$ & $\Lambda${[}GeV{]} & $A[\text{GeV}^{B-1}]$ & $B$\tabularnewline & $1$ & $0$ & $1.836$ & $5\text{.}227$ & $0.5069$ & $1.8609$ & $0.1653$ & $0.8321$ & $1.6553$ & $0.2204$\tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table*} \begin{table*} \renewcommand\arraystretch{1.5} \caption{The mass spectra of the heavy quarkonia in units of MeV. The $M_{\text{ex}}$, $M^I_{th}$, and $M^{II}_{th}$ refer to the mass spectra of mesons from experiments~\cite{Tanabashi:2018oca}, in model I~\cite{Wong:2001td}, and in model II~\cite{SilvestreBrac:1996bg}, respectively. 
}\label{meson mass} \centering \setlength{\tabcolsep}{3mm} \begin{tabular}{cccc|ccccc} \toprule[1pt]\toprule[1pt] & $M_{\text{ex}}$ & $M_{th}^{I} $& $M_{th}^{II}$ & & $M_{\text{ex}}$& $M_{th}^{I}$ & $M_{th}^{II}$& \tabularnewline \midrule[1pt] $B_{c}$ & $6274.9$ & $6319.4$ & $6293.5$ & & & & \tabularnewline $\eta_{c}$ & $2983.9$ & $3056.5$ & $3006.6$ & $\eta_{b}$ & $9399.0$ & $9497.8$ & $9427.9$\tabularnewline $\eta_{c}(2S)$ & $3637.6$ & $3637.6$ & $3621.2$ & $\Upsilon(1S)$ & $9460.30$ & $9503.6$ & $9470.4$\tabularnewline $J/\psi$ & $3096.9$ & $3085.1$ & $3102.1$ & $\Upsilon(2S)$ & $10023.26$ & $9949.7$ & $10017.8$\tabularnewline $\psi(2S)$ & $3686.1$ & $3652.4$ & $3657.8$ & $\Upsilon(3S)$ & $10355.2$ & $10389.8$ & $10440.6$\tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table*} To investigate the model dependence of the mass spectrum, we also consider another nonrelativistic quark model proposed in Ref.~\cite{SilvestreBrac:1996bg}. The potential reads \begin{eqnarray} &&V_{ij}(r_{ij})=-\frac{3}{16}\sum_{i<j}\lambda_{i}\lambda_{j}\Big(-\frac{\kappa(1-\text{exp}(-r_{ij}/r_{c}))}{r_{ij}}+\lambda r_{ij}^{p}\nonumber \\ &&-\Lambda+\frac{8\pi}{3m_{i}m_{j}}\kappa'(1-\text{exp}(-r_{ij}/r_{c}))\frac{\text{exp}(-r_{ij}^{2}/r_{0}^{2})}{\pi^{3/2}r_{0}^{3}}\mathbf{s}_{i}\cdot\mathbf{s}_{j}\Big),\nonumber \\ \end{eqnarray} where $r_0=A(\frac{2m_im_j}{m_i+m_j})^{-B}$ is related to the reduced mass of the two quarks $(ij)$. In this model, all the mass information is included in the hyperfine potential, which is expected to play a more important role than that in Model I. The parameters of the potentials are listed in Table~\ref{par}. With these parameters, we calculate the mass spectra of the mesons and list them in Table~\ref{meson mass}. In this work, we concentrate on the $S$-wave tetraquark states and do not include the tensor and spin-orbital interactions in the two quark models. 
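To make the scale dependence of the coupling concrete, here is a minimal sketch of the running $\alpha_s$ above with the model-I parameters of Table~\ref{par} ($A=10$, $B=0.31$ GeV); the choice $N_f=4$ is our assumption, since the text does not quote $N_f$.

```python
import math

def alpha_s(Q2, A=10.0, B=0.31, Nf=4):
    """Running coupling of the model-I potential; Nf = 4 is our assumption."""
    return 12 * math.pi / ((33 - 2 * Nf) * math.log(A + Q2 / B**2))

# the coupling freezes at low scales and falls logarithmically at large Q^2
print(alpha_s((2 * 1.776) ** 2))  # near the c-cbar scale
print(alpha_s((2 * 5.102) ** 2))  # near the b-bbar scale
```

The constant $A$ in the logarithm keeps $\alpha_s$ finite as $Q^2\to 0$, which is what makes the parametrization usable down to quark-model scales.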
In Table~\ref{meson mass}, we notice that both models are able to reproduce the mass spectra of the heavy quarkonia. In the following, we extend the two quark models to study the fully heavy tetraquarks. \subsection{Wave function}\label{secwavefun} \begin{figure*}[htbp] \centering \includegraphics[width=0.8\textwidth]{jacb.pdf} \caption{The Jacobi coordinates in the tetraquark state. } \label{jac} \end{figure*} In a $Q_1Q_2\bar Q_3\bar Q_4$ tetraquark state, there are three sets of Jacobi coordinates, as illustrated in Fig.~\ref{jac}. Each set contains three independent Jacobi coordinates, and the sets can be transformed into one another as follows, \begin{eqnarray}\label{cor} &&\mathbf{r}_{jk}=\mathbf{r}_j-\mathbf{r}_k=\mathbf{r}+c_{jk}^{a}\mathbf{r}_{12}+c_{jk}^{b}\mathbf{r}_{34},\nonumber\\ &&\mathbf{r}=\frac{ m_{1}\mathbf{r}_{1}+m_{2}\mathbf{r}_{2}}{m_{1}+m_{2}}-\frac{m_{3}\mathbf{r}_{3}+m_{4}\mathbf{r}_{4}}{m_{3}+m_{4}},\nonumber\\ &&\mathbf{r}'=\frac{m_{1}\mathbf{r}_{1}+m_{3}\mathbf{r}_{3}}{m_{1}+m_{3}}-\frac{m_{2}\mathbf{r}_{2}+m_{4}\mathbf{r}_{4}}{m_{2}+m_{4}}\nonumber\\ &&=\frac{(m_{1}m_{4}-m_{2}m_{3})\mathbf{r}+M_{T}u_{12}\mathbf{r}_{12}+M_{T}u_{34}\mathbf{r}_{34}}{(m_{1}+m_{3})(m_{2}+m_{4})},\nonumber\\ &&\mathbf{r}''=\frac{m_{1}\mathbf{r}_{1}+m_{4}\mathbf{r}_{4}}{m_{1}+m_{4}}-\frac{m_{2}\mathbf{r}_{2}+m_{3}\mathbf{r}_{3}}{m_{2}+m_{3}}\nonumber\\ &&=\frac{(m_{1} m_{3}-m_2 m_4)\mathbf{r}+M_Tu_{12}\mathbf{r}_{12} -M_T u_{34}\mathbf{r}_{34}}{(m_1+m_4) (m_2+m_3)}, \end{eqnarray} where $M_T=\sum^4_{i=1}m_i$ is the total mass of the four quarks. The transformation coefficients $c^{a(b)}_{jk}$ are listed in Table~\ref{coe}. The superscripts $a$ and $b$ represent the quark cluster and antiquark cluster, respectively. \begin{table*} \renewcommand\arraystretch{2.0} \caption{The coefficients $c^{a(b)}_{jk}$ in Eq.
(\ref{cor}).}\label{coe} \setlength{\tabcolsep}{2.5mm} \begin{tabular}{cccccccc} \toprule[1pt]\toprule[1pt] $c_{14}^{a}$ & $c_{13}^{a}$ & $c_{23}^{a}$ & $c_{24}^{a}$ & $c_{14}^{b}$ & $c_{13}^{b}$ & $c_{23}^{b}$ & $c_{24}^{b}$\tabularnewline \midrule[1pt] $\frac{m_{2}}{m_{1}+m_{2}}$ & $\frac{m_{2}}{m_{1}+m_{2}}$ & $-\frac{m_{1}}{m_{1}+m_{2}}$ & $-\frac{m_{1}}{m_{1}+m_{2}}$ & $\frac{m_{3}}{m_{3}+m_{4}}$ & $-\frac{m_{4}}{m_{3}+m_{4}}$ & $-\frac{m_{4}}{m_{3}+m_{4}}$ & $\frac{m_{3}}{m_{3}+m_{4}}$\tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table*} To simplify the calculation, we use the first coordinate configuration to construct the wave function, taking into account the symmetry of the quarks within each cluster. The wave function of a tetraquark state is \begin{eqnarray}\label{wavefunction} &&\psi_{JJ_z}=\nonumber\\ &&\sum\left[\varphi_{n_a J_{a}}(\mathbf{r}_{12},\beta_a)\otimes\varphi_{n_b J_{b}}(\mathbf{r}_{34},\beta_b)\otimes \phi_{NL_{ab}}(\mathbf{r},\beta)\right]_{JJ_{z}},\nonumber\\ &&\varphi_{n_aJ_{a}M_{a}}=\left[\phi_{n_al_a}(\mathbf{r}_{12},\beta_a)\chi_{s_{a}}\right]_{M_{a}}^{J_{a}}\chi_{f_a}\chi_{c_a}, \end{eqnarray} where $\psi$ is the total wave function of the tetraquark state, and $\varphi$ denotes that of cluster (a) or (b). $J$ ($J_z$) is the total angular momentum (its third component) of the tetraquark state. The $\sum$ runs over all possible wave functions that can couple to the total angular momentum $J$. $n_{a(b)}$ and $N$ specify the radial dependence. The $s_{a(b)}$, $l_{a(b)}$ and $J_{a(b)}$ are the spin, orbital and total angular momenta of cluster $a$ ($b$). $L_{ab}$ is the orbital angular momentum between the two clusters. The $\chi_{s}$, $\chi_f$, and $\chi_c$ are the wave functions in the spin, flavor, and color spaces, respectively.
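The relations $\mathbf{r}_{jk}=\mathbf{r}+c^{a}_{jk}\mathbf{r}_{12}+c^{b}_{jk}\mathbf{r}_{34}$ with the coefficients of Table~\ref{coe} can be spot-checked numerically; the following sketch (our own check, with arbitrary masses and positions) verifies all four quark-antiquark separations.

```python
import numpy as np

rng = np.random.default_rng(2)
m = rng.uniform(1.0, 6.0, size=4)     # arbitrary masses
x = rng.normal(size=(4, 3))           # arbitrary positions r_1..r_4

r12 = x[0] - x[1]
r34 = x[2] - x[3]
r = (m[0]*x[0] + m[1]*x[1]) / (m[0]+m[1]) - (m[2]*x[2] + m[3]*x[3]) / (m[2]+m[3])

# coefficients of Table coe, keyed by (j, k)
ca = {(1, 4):  m[1]/(m[0]+m[1]), (1, 3):  m[1]/(m[0]+m[1]),
      (2, 3): -m[0]/(m[0]+m[1]), (2, 4): -m[0]/(m[0]+m[1])}
cb = {(1, 4):  m[2]/(m[2]+m[3]), (1, 3): -m[3]/(m[2]+m[3]),
      (2, 3): -m[3]/(m[2]+m[3]), (2, 4):  m[2]/(m[2]+m[3])}

for (j, k) in ca:
    assert np.allclose(x[j-1] - x[k-1], r + ca[(j, k)]*r12 + cb[(j, k)]*r34)
print("all four r_jk relations hold")
```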
$\phi$ is the spatial wave function, expressed in terms of the Gaussian basis~\cite{Hiyama:2003cu}, \begin{eqnarray} \phi_{n_a l_a{m_a}}(\mathbf{r}_{12},\beta_a)&=&i^{l_{a}}r_{12}^{l_{a}}\sqrt{\frac{4\pi}{(2l_{a}+1)!!}}(\frac{n_a\beta_{a}^{2}}{\pi})^{3/4}\nonumber\\ &\times&(2n_a\beta_{a}^{2})^{l_{a}/2}e^{-r_{12}^{2}\beta_{a}^{2}n_a/2}Y_{l_{a}m_{a}}(\mathbf{\Omega}_{12}).\nonumber \end{eqnarray} with $\beta_{a}$ being the oscillating parameter. In this work, we concentrate on the $S$-wave tetraquark states. Their wave functions are expanded in basis states satisfying the relation $\mathbf{l}_a+\mathbf{l}_b+\mathbf{L}_{ab}=\mathbf{0}$. The states with higher orbital excitations contribute to the ground state through the tensor or spin-orbit potentials. These contributions are higher-order effects and are neglected in this work. Thus, for the lowest $S$-wave tetraquark states, we only consider the wave functions with $l_a=l_b={L}_{ab}=0$. The wave function of the tetraquark state in Eq. (\ref{wavefunction}) is then simplified as \begin{eqnarray}\label{wf} \psi_{SS_z}&=&\sum_{\alpha,n_a,n_b,n_{ab}} \chi_{\alpha} \phi_{n_a}(\mathbf{r}_{12},\beta_a)\phi_{n_b}(\mathbf{r}_{34},\beta_b) \phi_{n_{ab}}(\mathbf{r},\beta),\nonumber\\ \chi_{\alpha} &=& \left[\chi_{s_a}\otimes \chi_{s_b}\right]^S\left[\chi_{f_a}\otimes \chi_{f_b}\right]\left[\chi_{c_a}\otimes \chi_{c_b}\right]^1, \end{eqnarray} where $S$ is the total spin of the tetraquark state and $1$ represents the color-singlet representation. For the spatial wave functions, we have omitted the orbital angular momentum labels in the Gaussian wave function $\phi$. The wave functions are constrained by the Pauli principle. The $S$-wave diquark (antidiquark) with two identical quarks (antiquarks) has two possible configurations, as listed in Table~\ref{con}.
Then, for the $cc\bar c \bar c$, $bb\bar b\bar b$, and $bb \bar c \bar c$ tetraquark states, the possible color-flavor-spin functions read \begin{itemize} \item $J^{PC}=0^{++}$ \begin{eqnarray}\label{0p} & \chi_{1}=\left[[QQ]_{\bar{3}_{c}}^{1}[\bar{Q}\bar{Q}]_{3_{c}}^{1}\right]_{1_{c}}^{0},\,\,\, \chi_{2}=\left[[QQ]_{6_{c}}^{0}[\bar{Q}\bar{Q}]_{\bar{6}_{c}}^{0}\right]_{1_{c}}^{0}. \end{eqnarray} \item $J^{PC}=1^{+-}$ \begin{eqnarray}\label{1p} \chi_{1}=\left[[QQ]_{\bar{3}_{c}}^{1}[\bar{Q}\bar{Q}]_{3_{c}}^{1}\right]_{1_{c}}^{1}. \end{eqnarray} \item $J^{PC}=2^{++}$ \begin{eqnarray} \label{2p} \chi_{1}=\left[[QQ]_{\bar{3}_{c}}^{1}[\bar{Q}\bar{Q}]_{3_{c}}^{1}\right]_{1_{c}}^{2}. \end{eqnarray} \end{itemize} where the superscripts and subscripts denote the spin and color representations, respectively. \subsection{Hamiltonian matrix elements} \begin{table} \renewcommand\arraystretch{1.5} \caption{The configurations of the diquark (antidiquark) constrained by the Pauli principle. ``S'' and ``A'' represent symmetric and antisymmetric, respectively.}\label{con} \setlength{\tabcolsep}{2.5mm} \begin{tabular}{lc|lccc} \toprule[1pt] \toprule[1pt] $J^{P}=1^{+}$ & $QQ$ & $J^{P}=0^{+}$ & $QQ$\tabularnewline \midrule[1pt] $S$-wave(L=0) & S & $S$-wave(L=0) & S \tabularnewline Flavor & S & Flavor & S\tabularnewline Spin(S=1) & S & Spin(S=0) & A \tabularnewline Color($\bar{3}_c$) & A & Color($6_c$) & S\tabularnewline \bottomrule[1pt] \bottomrule[1pt] \end{tabular} \end{table} With the wave function constructed in section~\ref{secwavefun}, we calculate the Hamiltonian matrix elements.
For the quark model I, the matrix element $\langle h_{12}\rangle$ reads \begin{eqnarray} &&\langle\chi_{\alpha_i}\phi_{n}(r_{12})\phi_{\lambda}(r_{34})\phi_{k}(r)|h_{12}|\chi_{\alpha_j}\phi_{m}(r_{12})\phi_{\nu}(r_{34})\phi_{k'}(r)\rangle\nonumber\\ &&=\delta_{\alpha_i\alpha_j}N_{\lambda,\nu}N_{k,k'}\langle\phi_{n}(r_{12},\beta_a)|h_{12}|\phi_{m}(r_{12},\beta_a)\rangle \nonumber\\ &&=\delta_{\alpha_i\alpha_j}N_{\lambda,\nu}N_{k,k'}\left(\langle T_{12}+ m_1+m_2\rangle+ \langle V_{12}\rangle \right), \end{eqnarray} with \begin{eqnarray} &&N_{k,k'}=\left(\frac{2\sqrt{kk'}}{k+k'}\right)^{3/2},\nonumber\\ &&\langle T_{12}+ m_1+m_2\rangle=N_{m,n}\left(\frac{3mn\beta_a^{2}}{2u_{12}(m+n)}+m_1+m_2\right),\nonumber\\ &&\langle V_{12}(\mathbf{r}_{12})\rangle=\langle V_{\text{coul}}\rangle+\langle V_{\text{conf}}\rangle+\langle V_{\text{hyp}}\rangle+\langle V_{\text{cons}} \rangle,\nonumber\\ &&\langle V_{\text{coul}}\rangle=I_{C}\frac{4\pi\alpha_{s}\beta_a}{(2\pi)^{3/2}}\sqrt{m+n}N_{m,n},\nonumber\\ &&\langle V_{\text{conf}}\rangle=-\frac{3}{4}I_{C}\frac{8\pi b}{(2\pi)^{3/2}\beta_a\sqrt{m+n}}N_{m,n},\nonumber\\ &&\langle V_{\text{hyp}}\rangle=-I_{CM}\frac{8\pi\alpha_{s}}{3m_{i}m_{j}}\frac{\tau^{3}}{\pi^{3/2}}\left(\frac{2\sqrt{mn}}{m+n+2\tau^{2}/\beta_a^{2}}\right)^{\frac{3}{2}},\nonumber\\ &&\langle V_{\text{cons}} \rangle=I_{C}V_{\text{cons}}N_{m,n}, \end{eqnarray} where $n,\lambda,k,m,\nu,k'$ specify the radial dependence. The $I_C$ and $I_{\text{CM}}$ are the color factor and the color-magnetic factor listed in Table~\ref{color} and Table~\ref{Icm}, respectively. $\chi_{\alpha_i}$ and $\chi_{\alpha_j}$ denote the color-flavor-spin configurations illustrated in Eq. (\ref{wf}). Since $h_{12}$ is diagonal in the color-flavor-spin space, it does not induce coupling between different $\chi_{\alpha}$ channels, and $\langle h_{12}\rangle$ is proportional to $\delta_{\alpha_i\alpha_j}$.
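The overlap factor $N_{k,k'}$ above can be checked against a direct numerical integration of the normalised $S$-wave Gaussians; a small sketch (our own check, with an illustrative oscillating parameter):

```python
import numpy as np

beta = 0.5  # illustrative oscillating parameter (GeV); any positive value works

def phi(r, n):
    """Normalised S-wave Gaussian basis function with size parameter n*beta^2."""
    return (n * beta**2 / np.pi) ** 0.75 * np.exp(-n * beta**2 * r**2 / 2)

r = np.linspace(0.0, 40.0, 40001)

def overlap(n1, n2):
    """4*pi * integral of r^2 phi_n1 phi_n2, by the trapezoidal rule."""
    f = 4 * np.pi * r**2 * phi(r, n1) * phi(r, n2)
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(r)))

for n1, n2 in [(1, 1), (1, 2), (2, 3)]:
    closed = (2 * np.sqrt(n1 * n2) / (n1 + n2)) ** 1.5
    print(n1, n2, overlap(n1, n2), closed)  # numerical and closed forms agree
```

Note that the result is independent of $\beta$, as the closed form $N_{k,k'}$ indicates.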
The derivation of $\langle h_{34}\rangle$ is similar to that of $\langle h_{12}\rangle $. \begin{table} \renewcommand\arraystretch{1.5} \caption{The color matrix element $I_C=\langle\frac{\mathbf{\lambda}_{i}}{2}\frac{\mathbf{\lambda}_{j}}{2}\rangle$ for the $(ij)$ pair of quarks. The subscripts denote the color representation of the cluster.}\label{color} \setlength{\tabcolsep}{2.5mm} \begin{tabular}{cccccccc} \toprule[1pt]\toprule[1pt] \multicolumn{6}{c}{$\langle(Q_{1}Q_{2})_{\bar 3}(\bar{Q}_{3}\bar{Q}_{4})_3|\frac{\mathbf{\lambda}_{i}}{2}\frac{\mathbf{\lambda}_{j}}{2}|(Q_{1}Q_{2})_{\bar 3}(\bar{Q}_{3}\bar{Q}_{4})_3\rangle$} \tabularnewline \midrule[1pt] $Q_{1}\bar{Q}_{3}$ & $Q_{2}\bar{Q}_{4}$ & $Q_{1}\bar{Q}_{4}$ & $Q_{2}\bar{Q}_{3}$ & $Q_{1}Q_{2}$ & $\bar{Q}_{3}\bar{Q}_{4}$\tabularnewline $-\frac{1}{3}$ & $-\frac{1}{3}$ & $-\frac{1}{3}$ & $-\frac{1}{3}$ & $-\frac{2}{3}$ & $-\frac{2}{3}$\tabularnewline \midrule[1pt] \multicolumn{6}{c}{$\langle(Q_{1}Q_{2})_{6}(\bar{Q}_{3}\bar{Q}_{4})_{\bar 6}|\frac{\mathbf{\lambda}_{i}}{2}\frac{\mathbf{\lambda}_{j}}{2}|(Q_{1}Q_{2})_{6}(\bar{Q}_{3}\bar{Q}_{4})_{\bar 6}\rangle$} \tabularnewline \midrule[1pt] $Q_{1}\bar{Q}_{3}$ & $Q_{2}\bar{Q}_{4}$ & $Q_{1}\bar{Q}_{4}$ & $Q_{2}\bar{Q}_{3}$ & $Q_{1}Q_{2}$ & $\bar{Q}_{3}\bar{Q}_{4}$\tabularnewline $-\frac{5}{6}$ & $-\frac{5}{6}$ & $-\frac{5}{6}$ & $-\frac{5}{6}$ & $\frac{1}{3}$ & $\frac{1}{3}$\tabularnewline \midrule[1pt] \multicolumn{6}{c}{$\langle(Q_{1}Q_{2})_{\bar 3}(\bar{Q}_{3}\bar{Q}_{4})_{3}|\frac{\mathbf{\lambda}_{i}}{2}\frac{\mathbf{\lambda}_{j}}{2}|(Q_{1}Q_{2})_{6}(\bar{Q}_{3}\bar{Q}_{4})_{\bar 6}\rangle$} \tabularnewline \midrule[1pt] $Q_{1}\bar{Q}_{3}$ & $Q_{2}\bar{Q}_{4}$ & $Q_{1}\bar{Q}_{4}$ & $Q_{2}\bar{Q}_{3}$ & $Q_{1}Q_{2}$ & $\bar{Q}_{3}\bar{Q}_{4}$\tabularnewline $-\frac{1}{\sqrt{2}}$ & $-\frac{1}{\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ & $0$ & $0$\tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table} \begin{table} 
\renewcommand\arraystretch{1.5} \caption{The color-magnetic factor $\langle I_{\text{CM}}\rangle_{\alpha_i \alpha_j}=\langle \chi_{\alpha_i}|\frac{\lambda_{i}}{2}\frac{\lambda_{j}}{2}\mathbf{s}_{i}\cdot \mathbf{s}_{j}|\chi_{\alpha_j}\rangle$ for the $(ij)$ quark pairs. The $\chi_{\alpha_{i,j}}$ denote the color-flavor-spin wave functions in Eqs. (\ref{0p})-(\ref{2p}).}\label{Icm} \setlength{\tabcolsep}{2.5mm} \begin{tabular}{c|ccccc} \toprule[1pt]\toprule[1pt] & \multicolumn{3}{c}{$I_{CM}^{ij}=\langle\frac{\lambda_{i}}{2}\frac{\lambda_{j}}{2}s_{i}\cdot s_{j}\rangle$}\tabularnewline \midrule[1pt] \multirow{6}{*}{$0^{++}$} & $\langle I_{CM}^{Q\bar{Q}}\rangle_{11}$ & $\langle I_{CM}^{QQ}\rangle_{11}$ & $\langle I_{CM}^{\bar{Q}\bar{Q}}\rangle_{11}$\tabularnewline & $\frac{1}{6}$ & $-\frac{1}{6}$ & $-\frac{1}{6}$\tabularnewline & $\langle I_{CM}^{Q\bar{Q}}\rangle_{22}$ & $\langle I_{CM}^{QQ}\rangle_{22}$ & $\langle I_{CM}^{\bar{Q}\bar{Q}}\rangle_{22}$\tabularnewline & $0$ & $-\frac{1}{4}$ & $-\frac{1}{4}$\tabularnewline & $\langle I_{CM}^{Q\bar{Q}}\rangle_{12}$ & $\langle I_{CM}^{QQ}\rangle_{12}$ & $\langle I_{CM}^{\bar{Q}\bar{Q}}\rangle_{12}$\tabularnewline & $\frac{\sqrt{3}}{4\sqrt{2}}$ & $0$ & $0$\tabularnewline \midrule[1pt] \multirow{2}{*}{$1^{+-}$} & $\langle I_{CM}^{Q\bar{Q}}\rangle_{11}$ & $\langle I_{CM}^{QQ}\rangle_{11}$ & $\langle I_{CM}^{\bar{Q}\bar{Q}}\rangle_{11}$\tabularnewline & $\frac{1}{12}$ & $-\frac{1}{6}$ & $-\frac{1}{6}$\tabularnewline \midrule[1pt] \multirow{2}{*}{$2^{++}$} & $\langle I_{CM}^{Q\bar{Q}}\rangle_{11}$ & $\langle I_{CM}^{QQ}\rangle_{11}$ & $\langle I_{CM}^{\bar{Q}\bar{Q}}\rangle_{11}$\tabularnewline & $-\frac{1}{12}$ & $-\frac{1}{6}$ & $-\frac{1}{6}$\tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table} Unlike $h_{12}$ and $h_{34}$, the $V_I(\mathbf{r}_{ij})$ with $i=1,2$ and $j=3,4$, which is the interaction between the diquark and antidiquark, may lead to mixing between different color-flavor-spin configurations,
i.e. $\chi_{\alpha_i}$ and $\chi_{\alpha_j}$. The $\langle V_I(\mathbf{r}_{ij})\rangle$ reads \begin{eqnarray*} &&\langle\chi_{\alpha_i}\phi_{n}(r_{12},\beta_a)\phi_{\lambda}(r_{34},\beta_b)\phi_{k}(r,\beta)|V_I(\mathbf{r}_{ij})\nonumber \\ &&|\chi_{\alpha_j}\phi_{m}(r_{12},\gamma_a)\phi_{\nu}(r_{34},\gamma_b)\phi_{k'}(r,\gamma)\rangle \nonumber \\ &&=\langle V_{\text{coul}}(\mathbf{r}_{ij})\rangle_{\alpha_i\alpha_j}+\langle V_{\text{conf}}(\mathbf{r}_{ij})\rangle_{\alpha_i\alpha_j}+\langle V_{\text{hyp}}(\mathbf{r}_{ij})\rangle_{\alpha_i\alpha_j}, \end{eqnarray*} where ${\beta_{(a,b)}}$ and $\gamma_{{(a,b)}}$ are the oscillating parameters. The explicit forms of the notations are \begin{eqnarray} &&\langle V_{\text{coul}}(\mathbf{r}_{ij})\rangle=I_C\tilde{N}_{m,n}\tilde{N}_{\lambda,\nu}\tilde{N}_{k,k'}\frac{2\alpha_{s}}{\sqrt{\pi}\sqrt{\frac{2}{k\beta^2+k'\gamma^2}+4a_{ij}^{2}}},\nonumber\\ &&\langle V_{\text{conf}}(\mathbf{r}_{ij})\rangle=I_C\tilde{N}_{m,n}\tilde{N}_{\lambda,\nu}\tilde{N}_{k,k'}(-\frac{3bz_{ij}}{\sqrt{\pi}}),\nonumber\\ &&\langle V_{\text{hyp}}(\mathbf{r}_{ij})\rangle =I_{CM}\tilde{N}_{m,n}\tilde{N}_{\lambda,\nu}{\tilde{N}_{k,k'}}\nonumber\\ &&~~~~~~~~~~~~~\times\left(-\frac{8\alpha_{s}}{3m_{i}m_{j}(4\tau_{ij}^{2}+\frac{2}{k\beta^2+k'\gamma^2})^{3/2}\sqrt{\pi}}\right), \end{eqnarray} where \begin{eqnarray} &&\tilde{N}_{m,n}=\left({2\sqrt{mn}\beta_a\gamma_a\over{m\gamma_a^2+n\beta_a^2}}\right)^{3/2},\\ &&a_{ij}^{2}=\frac{(c_{ij}^{a})^{2}}{2(m\beta_{a}^{2}+n{\gamma_a}^2)}+\frac{(c_{ij}^{b})^{2}}{2(\lambda\beta_{b}^{2}+\nu\gamma_{b}^{2})},\\ && \tau_{ij}^{2}=a_{ij}^{2}+\frac{1}{4\tau^{2}},\,\, z_{ij}^{2}=a_{ij}^{2}+\frac{1}{2(k\beta^{2}+k'\gamma^{2})}. \end{eqnarray} With the above analytical expressions, we calculate the mass spectra of the fully heavy tetraquark states $QQ\bar Q'\bar Q'$. The numerical results are given in the next section.
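Before turning to the numerical results, it may help to see the Gaussian-expansion variational method in miniature. The sketch below is our own illustration, not the four-body calculation of the text: it solves a two-body Coulomb problem, $H=-\tfrac{1}{2}\nabla^2-1/r$ in units $\hbar=\mu=e=1$ with exact ground-state energy $-1/2$, in a basis of $S$-wave Gaussians $e^{-a_i r^2}$ with geometrically spaced ranges, using the standard analytic overlap, kinetic, and Coulomb matrix elements.

```python
import numpy as np

def ground_state_energy(N, a_min=0.01, a_max=20.0):
    """Variational ground state of H = -grad^2/2 - 1/r in a Gaussian basis."""
    # geometrically spaced Gaussian range parameters
    a = a_min * (a_max / a_min) ** (np.arange(N) / max(N - 1, 1))
    ai, aj = a[:, None], a[None, :]
    p = ai + aj
    S = (np.pi / p) ** 1.5          # overlap <i|j> of unnormalised Gaussians
    T = 3.0 * ai * aj / p * S       # kinetic energy <i|-grad^2/2|j>
    V = -2.0 * np.pi / p            # Coulomb attraction <i|-1/r|j>
    # generalised eigenvalue problem H c = E S c via symmetric orthogonalisation
    w, U = np.linalg.eigh(S)
    X = U @ np.diag(w ** -0.5) @ U.T
    return float(np.linalg.eigvalsh(X @ (T + V) @ X).min())

for N in (2, 4, 6, 8, 10):
    print(N, ground_state_energy(N))  # approaches the exact -0.5 from above
```

Enlarging the basis lowers the variational energies until they saturate, which is the same convergence pattern exploited for the tetraquark states in the next section.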
\section{Numerical results}\label{sec2} The wave function of a tetraquark state is expanded in all basis states that satisfy the conditions discussed in section~\ref{secwavefun}. The number of basis functions $N^3$ is increased from the minimum required until the results converge. We take the $cc\bar c \bar c$ tetraquark state with $J^{PC}=1^{+-}$ as an example to investigate the dependence of the results on the number of basis functions. Its wave function is expanded with $N^3=1^3$, $2^3$, $3^3$, $4^3$ and $5^3$ basis functions, respectively. The corresponding eigenvalues obtained through the variational method are displayed in Fig.~\ref{convergence}. The mass spectrum becomes stable once $N^3$ reaches $2^3$. Therefore, we expand the wave functions of the tetraquark states with $2^3$ Gaussian basis functions in the following calculation. \begin{figure}[htbp] \centering \includegraphics[width=.48\textwidth]{convergence.pdf} \caption{The dependence of the mass spectrum on the number of Gaussian basis functions $N^3$. The solid and dashed lines represent the numerical results in model I and model II, respectively.} \label{convergence} \end{figure} \subsection{A tetraquark state $QQ\bar Q'\bar Q'$ with $J^{PC}=0^{++}$} A tetraquark state $QQ\bar Q'\bar Q'$ with $J^{PC}=0^{++}$ contains two color-flavor-spin configurations $\chi_1$ and $\chi_2$, as listed in Eq. (\ref{0p}).
Its wave function reads \begin{eqnarray} &&\psi_{JJ_{z}}^{II_{z}}=\sum_{\alpha_{1}}A_{\alpha_{1}}\phi_{\alpha_{1}}\chi_{1}+\sum_{\alpha_{2}}B_{\alpha_{2}}\phi_{\alpha_{2}}\chi_{2}\nonumber\\ &&=\sum_{\alpha_{1}}A_{\alpha_{1}}\phi_{\alpha_{1}}(\beta_{a},\beta_{b},\beta)|(QQ)_{\bar 3_c}(\bar Q\bar Q)_{3_c}\rangle\nonumber\\ &&+\sum_{\alpha_{2}}B_{\alpha_{2}}\phi_{\alpha_{2}}(\gamma_a,\gamma_b,\gamma)|(QQ)_{6_c}(\bar Q\bar Q)_{\bar 6_c}\rangle, \label{0++state} \end{eqnarray} where $\alpha_{1,2}=\{n_a,l_a,n_b,l_b,N,L\}$, and $\beta_{(a,b)}$ and $\gamma_{(a,b)}$ are the oscillating parameters for the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ tetraquark states, respectively. $A_{\alpha_1}$ and $B_{\alpha_2}$ are the expansion coefficients. At first, we do not consider the mixing between the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ tetraquark states and solve the Schr\"odinger equation with the variational method. We obtain their mass spectra and display them in the left half of each panel of Fig.~\ref{couple effects}. For the $cc\bar{c}\bar{c}$ and $bb\bar{b}\bar{b}$ systems, the $6_c-\bar 6_c$ states are located lower than the $\bar 3_c-3_c$ ones, as illustrated in Fig.~\ref{couple effects}. In the OGE model, the interaction between the two quarks within a color-sextet diquark is repulsive due to the color factor in Table~\ref{color}, while that within the $\bar 3_c$ one is attractive. However, the interaction between the $6_c$ diquark and $\bar 6_c$ antidiquark is attractive and much stronger than that between the $\bar 3_c$ diquark and $3_c$ antidiquark. A $6_c-\bar 6_c$ tetraquark state exists if the attraction between the diquark and antidiquark wins against the repulsion within the diquark (antidiquark). If the attractive potentials are strong enough, the $6_c-\bar 6_c$ state lies even lower than the ${\bar 3_c}-{3_c}$ one. That is what happens to the $cc\bar{c}\bar{c}$ and $bb\bar{b}\bar{b}$ tetraquark states with $J^{PC}=0^{++}$ in the two quark models.
For the $bb\bar{c}\bar{c}$ ($cc\bar b\bar b$) state, the $6_c-\bar 6_c$ state is lower in model I, while the $\bar {3}_c-3_c$ state is lower in model II. \begin{figure*}[htbp] \centering \subfigure[~$cc\bar c\bar c$]{ \begin{minipage}[t]{0.323\linewidth} \centering \includegraphics[width=1\textwidth]{diagcccc0.pdf} \end{minipage}% }% \subfigure[~$bb\bar b\bar b$]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=1\textwidth]{diagbbbb0.pdf} \end{minipage}% }% \subfigure[~$bb\bar c\bar c$ ($cc\bar b\bar b$)]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=1\textwidth]{diagbbcc0.pdf} \end{minipage} }% \centering \caption{ The mass spectrum of the $0^{++}$ tetraquark states $QQ\bar Q'\bar Q'$ without and with the coupling between the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ configurations. The blue lines and red dotted dashed lines represent the results in model I and II, respectively. In each diagram, the left and right halves show the mass spectra without and with the mixing between the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ configurations, respectively. The corresponding states are connected by the black dashed lines. } \label{couple effects} \end{figure*} \begin{table*} \renewcommand\arraystretch{1.5} \caption{The mass spectra of $cc\bar c\bar c$, $bb\bar b\bar b$, and $bb\bar c\bar c$ ($\bar b\bar b cc$) tetraquark states with $J^{PC}=0^{++}$. $\beta_{(a,b)}$ and $\gamma_{(a,b)}$ represent the oscillating parameters of the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ tetraquark states, respectively.
}\label{0++} \centering \setlength{\tabcolsep}{1mm} \begin{tabular}{l|cccc|cccc} \toprule[1pt]\toprule[1pt] $J^{PC}=0^{++}$ & Model I & M [GeV] & $\bar{3}_{c}\otimes3_{c}$ & $6_{c}\otimes\bar{6}_{c}$ & Model II & M [GeV] & $\bar{3}_{c}\otimes3_{c}$ & $6_{c}\otimes\bar{6}_{c}$ \tabularnewline \midrule[1pt] \multirow{2}{*}{$cc\bar{c}\bar{c}$} & $\beta_{a}=\beta_{b}=0.4$, $\beta=0.6$ & $6.377$ & $11\%$ & $89\%$ & $\beta_{a}=\beta_{b}=0.5$, $\beta=0.7$ & $6.371$ & $43\%$ & $57\%$\tabularnewline & $\gamma_{a}=\gamma_{b}=0.4$, $\gamma=0.7$ & $6.425$ & $89\%$ & $11\%$ & $\gamma_{a}=\gamma_{b}=0.5$, $\gamma=0.8$ & $6.483$ & $57\%$ & $43\%$\tabularnewline \midrule[1pt] \multirow{2}{*}{$bb\bar{b}\bar{b}$} & $\beta_{a}=\beta_{b}=0.7$, $\beta=0.9$ & $19.215$ & $1\%$ & $99\%$ & $\beta_{a}=\beta_{b}=0.9$, $\beta=1.1$ & $19.243$ & $17\%$ & $83\%$\tabularnewline & $\gamma_{a}=\gamma_{b}=0.7$, $\gamma=0.9$ & $19.247$ & $99\%$ & $1\%$ & $\gamma_{a}=\gamma_{b}=0.8$, $\gamma=1.2$ & $19.305$ & $83\%$ & $17\%$\tabularnewline \midrule[1pt] \multirow{2}{*}{$bb\bar{c}\bar{c}$} & $\beta_{a}=0.6,\beta_{b}=0.5,\beta=0.7$ & $12.847$ & $14\%$ & $86\%$ & $\beta_{a}=0.7,\beta_{b}=0.5,\beta=0.8$ &$12.886$ &$53\%$ &$ 47\%$ \tabularnewline & $\gamma_{a}=0.6,\gamma_{b}=0.4,\gamma=0.9$ & $12.866$ & $86\%$ & $14\%$ & $\gamma_{a}=0.7,\gamma_{b}=0.5,\gamma=0.9$ &$12.946$ &$47\%$ &$53\%$ \tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table*} In general, a tetraquark state is a mixture of the $\bar 3_c-{3_c}$ and ${6_c}-{\bar 6_c}$ states as illustrated in Eq.~(\ref{0++state}). With the couple-channel effects of the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ color configurations, we obtain the mass spectrum of the $0^{++}$ states and list them in Table~\ref{0++}. The spectra obtained with $\bar 3_c-{3_c}$ and ${6_c}-{\bar 6_c}$ mixing are given in Fig.~\ref{couple effects}. The mixing effect will pull down the lower state and raise the higher state. 
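This "pull down / push up" pattern is the generic level repulsion of two coupled channels, and can be illustrated with a toy $2\times2$ Hamiltonian. The numbers below are arbitrary (the diagonal entries merely echo the unmixed $cc\bar c\bar c$ masses of model I, and the coupling $\delta$ is invented); they only demonstrate that any nonzero off-diagonal coupling lowers the lower eigenvalue and raises the higher one:

```python
# Toy two-level illustration (arbitrary coupling, not a fit to the spectra):
# mixing two configurations always pushes the two eigenvalues apart.
import numpy as np

def mixed_levels(e1, e2, delta):
    """Eigenvalues (ascending) of the 2x2 matrix [[e1, delta], [delta, e2]]."""
    h = np.array([[e1, delta], [delta, e2]])
    return np.linalg.eigvalsh(h)

lo, hi = mixed_levels(6.377, 6.425, 0.020)  # GeV-scale toy numbers
assert lo < min(6.377, 6.425) and hi > max(6.377, 6.425)
```

Note that the trace is conserved, so the downward shift of the lower level equals the upward shift of the higher one in this two-channel toy model.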
The two quark models lead to similar mass spectra for the $cc\bar c\bar c$, $bb\bar b\bar b$, and $bb\bar c\bar c$ ($cc\bar b\bar b$) tetraquark states, with differences of up to tens of MeV. However, the proportions of the components in the two quark models are quite different. The mixing between the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ states is stronger in model II. The reasons are explained as follows. In model I and model II, we find that only the hyperfine interactions contribute to the couple-channel effects between the ${\bar 3_c}-{3_c}$ configuration and the ${6_c}-{\bar 6_c}$ one, while the contributions from the confinement and Coulomb potentials vanish. We illustrate the underlying dynamics as follows. The matrices of $h_{12}$ and $h_{34}$ are diagonal due to the orthogonality of the wave functions of different configurations. However, the $V_{\text{coul}}+V_{\text{linear}}+V_{\text{hyp}}$ in $V_{I}$, which describes the interactions between the diquark and antidiquark, may result in the couple-channel effects between different configurations. For an S-wave tetraquark state with two identical quarks (antiquarks), such as $QQ\bar Q_1\bar Q_2$ ($Q_1Q_2\bar Q\bar Q$), the spin wave functions of the different possible configurations are orthogonal, which is required by Fermi statistics. Since the OGE Coulomb and linear confinement potentials do not contain spin operators, they do not contribute to the couple-channel effects due to the orthogonality of the spin wave functions, and only the hyperfine potential contributes. That is what happens to the $QQ\bar Q'\bar Q'$ state in this work. For a tetraquark state without identical quarks and antiquarks, i.e., $Q_{1}Q_{2}\bar{Q}_{3}\bar{Q}_{4}$ ($Q_1\neq Q_2$ and $Q_3\neq Q_4$), the spin wave functions of different configurations may be the same.
The four quarks form a color singlet state and the color matrix element is \begin{eqnarray} && \Big(\sum^4_{n=1}\mathbf \lambda_n\Big)^2|\chi_{i,j} \rangle =0, \end{eqnarray} where $\chi_{i}$ and $\chi_j$ represent two different color configurations and they are the eigenvectors of $(\mathbf \lambda_1+\mathbf \lambda_2)^2$ and $(\mathbf \lambda_3+\mathbf \lambda_4)^2$. Considering their orthogonality, one obtains \begin{eqnarray} && \langle \chi_i|(\mathbf \lambda_1+\mathbf \lambda_2)^2|\chi_j \rangle =0,\nonumber \\ && \langle \chi_i|(\mathbf \lambda_3+\mathbf \lambda_4)^2|\chi_j \rangle =0. \end{eqnarray} Then the color factors of the (13), (14), (23), and (24) pairs of quarks cancel out, \begin{eqnarray} && \langle \chi_i|(\mathbf \lambda_1+\mathbf \lambda_2)\cdot(\mathbf \lambda_3+\mathbf \lambda_4)|\chi_j\rangle=0. \end{eqnarray} Moreover, if the coupling constants are the same for the four quark pairs, the contributions from the OGE Coulomb and the linear confinement potentials will cancel out completely. In model I, the contributions from the color interactions do not cancel out exactly due to the different $\alpha_s$. However, partial cancellations are still expected. In model II, the OGE Coulomb and linear confinement potentials do not depend on the masses of the interacting quarks. Thus, the couple-channel effects arising from the OGE Coulomb and linear confinement potentials cancel out. The mixing between different color-flavor-spin configurations only comes from the hyperfine potential, which is inversely proportional to the product of the interacting quark masses. Thus, the mixing in the $cc\bar{c}\bar{c}$ state is generally larger than that in the $bb\bar b\bar b$ state. In model II, all the flavor dependence is packaged into the hyperfine interaction, which is different from model I. The hyperfine interaction in model II should play a more important role than that in model I. Therefore, the couple-channel effect in model II is stronger, as illustrated in Fig.~\ref{couple effects}.
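Since the only mixing operator is the hyperfine term, whose OGE strength scales like $1/(m_im_j)$, the hierarchy of the mixing can be anticipated from the quark masses alone. A rough scaling check (the constituent masses below are typical illustrative values, not the fitted parameters of models I and II):

```python
# Rough scaling check (illustrative constituent masses, not the fitted model
# parameters): the hyperfine mixing strength scales like 1/(m_i * m_j), so it
# is strongest for cc c~c~ and weakest for bb b~b~.
m_c, m_b = 1.5, 4.8  # GeV, typical constituent values (assumption)

strength = {
    'cccc': 1.0 / (m_c * m_c),
    'bbcc': 1.0 / (m_b * m_c),  # dominant cross pairs are b-cbar
    'bbbb': 1.0 / (m_b * m_b),
}
assert strength['cccc'] > strength['bbcc'] > strength['bbbb']
```

This crude scaling already explains why the $bb\bar b\bar b$ mixing is the weakest in both models; the near-equality of the $cc\bar c\bar c$ and $bb\bar c\bar c$ mixings in model II additionally involves the $r_0$ dependence of its hyperfine term, discussed next in the text.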
In model II, since $r_0$ in the hyperfine interaction is a function of the reduced mass of the two interacting quarks, its value for $b\bar c$ is close to that for $c\bar c$. Then, the mixings in the $cc\bar c\bar c$ and $bb\bar c\bar c$ states are similar, as illustrated in Table~\ref{0++}. One may wonder whether the mixing depends on the number of expanding basis functions. For instance, when we use $2\times3^{3}$ bases to expand the wave function of the $cc\bar c\bar c$ state in model I, we find $11.4\%$ $\bar 3_c-3_c$ and $88.6\%$ $6_c-\bar 6_c$ components in the tetraquark state. These percentages change only slightly with the number of basis functions. In Ref.~\cite{Liu:2019zuc}, the authors pointed out that the state $|(QQ)_{\bar 3_c}(\bar Q \bar Q)_{3_c}\rangle$ $(Q=c,b)$ is located lower than the $|(QQ)_{6_c}(\bar Q \bar Q)_{\bar 6_c}\rangle$ state, which contradicts our results. The inconsistency was due to their use of particular wave functions. The authors used the same oscillating parameters for the $\bar 3_c-3_c$ and $6_c-\bar 6_c$ states and, moreover, took the oscillating parameters to be proportional to the reduced masses of the interacting quarks. With their wave functions, we reproduced their results. However, if we remove these two constraints on the wave functions, we find that the lowest state has a dominant ${6_c}-{\bar 6_c}$ component, as listed in Table~\ref{comparison}, and lies lower than the state in Ref.~\cite{Liu:2019zuc}. \begin{table*} \renewcommand\arraystretch{1.5} \caption{The comparison of the mass spectra of the $0^{++}$ $cc\bar c\bar c$ and $bb\bar b\bar b$ states from Ref.~\cite{Liu:2019zuc} and our results using the same quark model. In the right table, we remove the constraints on the wave functions used in Ref.~\cite{Liu:2019zuc}.
}\label{comparison} \centering \begin{tabular}{l|cccc|cccc} \toprule[1pt]\toprule[1pt] & \multicolumn{4}{c|}{Ref.~\cite{Liu:2019zuc}} & \multicolumn{4}{c}{without constraints}\tabularnewline \toprule[1pt] $J^{PC}=0^{++}$ & $w=0.325$ & M [MeV] & $\bar{3}_{c}\otimes3_{c}$ & $6_{c}\otimes\bar{6}_{c}$ & & M [MeV] & $\bar{3}_{c}\otimes3_{c}$ & $6_{c}\otimes\bar{6}_{c}$\tabularnewline \midrule[1pt] \multirow{2}{*}{$cc\bar{c}\bar{c}$}& $\beta_{a}=\beta_{b}=0.49$, $\beta=0.69$ & $6470$ & $66\%$ & $34\%$ & $\beta_{a}=\beta_{b}=0.4$, $\beta=0.6$ & $6417$ & $33\%$ & $67\%$\tabularnewline & $\gamma_{a}=\gamma_{b}=0.49$, $\gamma=0.69$ & $6559$ & $34\%$ & $66\%$ & $\gamma_{a}=\gamma_{b}=0.4$, $\gamma=0.7$ & $6509$ & $67\%$ & $33\%$\tabularnewline \midrule[1pt] \multirow{2}{*}{$bb\bar{b}\bar{b}$} & $\beta_{a}=\beta_{b}=0.88$, $\beta=1.24$ & $19268$ & $66\%$ & $34\%$ & $\beta_{a}=\beta_{b}=0.7$, $\beta=0.9$ & $19226$ & $18\%$ & $82\%$\tabularnewline & $\gamma_{a}=\gamma_{b}=0.88$, $\gamma=1.24$ & $19306$ & $34\%$ & $66\%$ & $\gamma_{a}=\gamma_{b}=0.7$, $\gamma=0.9$ & $19268$ & $82\%$ & $18\%$\tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table*} \subsection{The tetraquark states with $J^P=1^{+-}$ and $2^{++}$} Constrained by the Fermi statistics, the tetraquark states $QQ\bar Q'\bar Q'$ ($Q$ and $Q'$ may have the same flavor) with $J^P=1^{+-}$ and $2^{++}$ only contain one color component, i.e., $\bar 3_c-3_c$. We list the mass spectra of the $S$-wave states and their radial excitations in Table~\ref{j1j2}. The mass spectra in the two models are quite similar to each other. The results from model II are slightly higher than those in model I. The tetraquark states with $J^P=1^{+-}$ and $2^{++}$ have the same configurations except for the total spin. Therefore, the mass difference arises from the hyperfine potential, which is quite small compared with the OGE Coulomb and linear confinement potentials.
Thus, the mass spectra of these two kinds of states are almost the same. \begin{table*} \renewcommand\arraystretch{1.5} \caption{The mass spectra of the $cc\bar c\bar c$, $bb\bar b\bar b$ and $bb\bar c\bar c$ states with $J^{PC}=1^{+-}$ and $2^{++}$ in units of GeV. } \label{j1j2} \setlength{\tabcolsep}{2.5mm} \begin{tabular}{c|cccc|cccc} \toprule[1pt]\toprule[1pt] & Model I & $nS$ & $J^{PC}=1^{+-}$ & $J^{PC}=2^{++}$ & Model II & $nS$ & $J^{PC}=1^{+-}$ & $J^{PC}=2^{++}$\tabularnewline \midrule[1pt] $cc\bar{c}\bar{c}$ & $\beta_{a}=0.4$ &$1S$ & $6.425$ & $6.432$ & $\beta_{a}=0.5$ &$1S$& $6.450$ & $6.479$\tabularnewline & $\beta_{b}=0.4$ &$2S$ & $6.856$ & $6.864$ & $\beta_{b}=0.5$ &$2S$ & $6.894$ & $6.919$\tabularnewline & $\beta=0.6$ & $3S$ & $6.915$ & $6.919$ & $\beta=0.6$& $3S$ & $7.036$ & $7.058$\tabularnewline \midrule[1pt] $bb\bar{b}\bar{b}$ & $\beta_{a}=0.7$&$1S$ & $19.247$ & $19.249$ & $\beta_{a}=1.0$ &$1S$ & $19.311$ & $19.325$\tabularnewline & $\beta_{b}=0.7$&$2S$ & $19.594$ & $19.596$ & $\beta_{b}=1.0$ &$2S$ & $19.813$ & $19.823$\tabularnewline & $\beta=0.9$ & $3S$& $19.681$ & $19.682$ & $\beta=1.1$ &$3S$ & $20.065$ & $20.077$\tabularnewline \midrule[1pt] $bb\bar{c}\bar{c}$ & $\beta_{a}=0.7$ & $1S$ & $12.864$ & $12.868$ & $\beta_{a}=0.7$ & $1S$ & $12.924$ & $12.940$\tabularnewline & $\beta_{b}=0.5$ & $2S$ & $13.259$ & $13.262$ & $\beta_{b}=0.5$ & $2S$ & $13.321$ & $13.334$\tabularnewline & $\beta=0.7$ & $3S$ & $13.297$ & $13.299$ & $\beta=0.7$ & $3S$ & $13.364$ & $13.375$\tabularnewline \bottomrule[1pt] \bottomrule[1pt] \end{tabular} \end{table*} \subsection{Discussion} A tetraquark state can be expressed in another set of color representations as follows, \begin{eqnarray} &&|(Q_{1}Q_{2})_{\bar{3}_{c}}(\bar{Q}_{3}\bar{Q}_{4})_{3_{c}}\rangle \nonumber \\ &&=\sqrt{\frac{1}{3}}|(Q_{1}\bar{Q}_{3})_{1_{c}}(Q_{2}\bar{Q}_{4})_{1_{c}}\rangle-\sqrt{\frac{2}{3}}|(Q_{1}\bar{Q}_{3})_{8_{c}}(Q_{2}\bar{Q}_{4})_{8_{c}}\rangle \nonumber \\
&&=-\sqrt{\frac{1}{3}}|(Q_{1}\bar{Q}_{4})_{1_{c}}(Q_{2}\bar{Q}_{3})_{1_{c}}\rangle+\sqrt{\frac{2}{3}}|(Q_{1}\bar{Q}_{4})_{8_{c}}(Q_{2}\bar{Q}_{3})_{8_{c}}\rangle,\nonumber \\ &&|(Q_{1}Q_{2})_{6_{c}}(\bar{Q}_{3}\bar{Q}_{4})_{\bar{6}_{c}}\rangle \nonumber \\ &&=\sqrt{\frac{2}{3}}|(Q_{1}\bar{Q}_{3})_{1_{c}}(Q_{2}\bar{Q}_{4})_{1_{c}}\rangle+\sqrt{\frac{1}{3}}|(Q_{1}\bar{Q}_{3})_{8_{c}}(Q_{2}\bar{Q}_{4})_{8_{c}}\rangle \nonumber \\ &&=\sqrt{\frac{2}{3}}|(Q_{1}\bar{Q}_{4})_{1_{c}}(Q_{2}\bar{Q}_{3})_{1_{c}}\rangle+\sqrt{\frac{1}{3}}|(Q_{1}\bar{Q}_{4})_{8_{c}}(Q_{2}\bar{Q}_{3})_{8_{c}}\rangle. \nonumber \\ \end{eqnarray} To investigate the inner structure of the tetraquark, we calculate its proportions in this new set of color configurations and the root mean square radii of the state, which are listed in Table~\ref{proportion0++}. The ground states contain the $8_c\otimes 8_c$ configuration. In model I, the proportion of the $8_c\otimes 8_c$ configuration is considerable, which supports the interpretation of the solution as a confined state rather than a scattering state of two mesons. In model II, though the $1_c\otimes 1_c$ configuration is dominant, the root mean square radii are of the size of a nucleon. Thus, they are also unlikely to be scattering states. We also take the $cc\bar c\bar c$ state as an example to study the density distributions $r^2\rho(r)$, $r'^2\rho(r')$, $r^2_{12} \rho(r_{12})$ and $r^2_{13} \rho(r_{13})$. The $\rho(r)$ and $\rho(r_{12})$ are defined as follows, \begin{eqnarray} \rho(r)=\int|\psi(r_{12},\,r_{34},\,r)|^{2}d\vec{r}_{12}d\vec{r}_{34}d\hat{\vec{r}},\nonumber \\ \rho(r_{12})=\int|\psi(r_{12},\,r_{34},\,r)|^{2}d\vec{r}d\vec{r}_{34}d\hat{\vec{r}}_{12}. \end{eqnarray} The definitions of $\rho(r_{13})$ and $\rho(r')$ are similar. The dependence of the density distributions on the size of the basis is displayed in Fig.~\ref{dependence}.
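The recoupling coefficients above form an orthogonal $2\times2$ matrix, which can be verified directly. The snippet below (an illustration, not the code used here) also shows how a $1_c\otimes1_c$ fraction like the one in Table~\ref{proportion0++} follows approximately from the diquark-basis proportions; the estimate is exact only if the spatial wave functions of the two configurations were identical and the relative sign of the two amplitudes is taken positive, both of which are assumptions:

```python
# Check of the color recoupling above; the proportion estimate is approximate
# (identical spatial wave functions and a positive relative sign assumed).
import numpy as np

# Rows: (3bar-3, 6-6bar); columns: (1c x 1c, 8c x 8c) amplitudes.
U = np.array([[np.sqrt(1/3), -np.sqrt(2/3)],
              [np.sqrt(2/3),  np.sqrt(1/3)]])
assert np.allclose(U @ U.T, np.eye(2))  # the recoupling is orthogonal

# cc c~c~, model I: 11% 3bar-3 + 89% 6-6bar components.
a = np.array([np.sqrt(0.11), np.sqrt(0.89)])
p1c, p8c = (a @ U) ** 2  # roughly 0.93 and 0.07, close to the quoted 90%/10%
```

The residual difference from the tabulated $90\%/10\%$ comes from the non-identical spatial wave functions of the two color configurations.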
We find that the distributions are confined in space and remain stable as the number of expanding basis functions varies, which indicates that the state may be a confined state rather than a scattering state. \begin{figure*}[!tbp] \centering {\subfigure[~Density distributions in the first Jacobi coordinate.] {\includegraphics[width=0.43\textwidth]{model1.pdf} \includegraphics[width=0.43\textwidth]{model2.pdf}}} {\subfigure[~Density distributions in the second Jacobi coordinate.] {\includegraphics[width=0.43\textwidth]{c2model1.pdf} \includegraphics[width=0.43\textwidth]{c2model2.pdf}}} \caption{The dependence of the density distributions on the number of basis functions.} \label{dependence} \end{figure*} \begin{table*} \caption{The proportion of the color configurations and the root mean square radii of the $cc\bar c\bar c$, $bb\bar b\bar b$, and $bb\bar c\bar c$ ($\bar b\bar b cc$) tetraquark states with $J^{PC}=0^{++}$. $\sqrt{\langle r_{ij}^{2}\rangle}$ and $\sqrt{\langle r^{(')2}\rangle}$ are the root mean square radii corresponding to the second Jacobi coordinate in Fig.~\ref{jac}.
}\label{proportion0++} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \toprule[1pt]\toprule[1pt] $J^{PC}=0^{++}$ & \multicolumn{11}{c}{Model I}\tabularnewline \midrule[1pt] $N^{3}=2^{3}$ & After mixing & $\bar{3}_{c}\otimes3_{c}$ & $6_{c}\otimes\bar{6}_{c}$ & $1_{c}\otimes1_{c}$ & $8_{c}\otimes8_{c}$ & $\sqrt{\langle r_{12}^{2}\rangle}$ fm & $\sqrt{\langle r_{34}^{2}\rangle}$ fm & $\sqrt{\langle r^{2}\rangle}$ fm & $\sqrt{\langle r_{13}^{2}\rangle}$ fm & $\sqrt{\langle r_{24}^{2}\rangle}$ fm & $\sqrt{\langle r'^{2}\rangle}$ fm\tabularnewline \midrule[1pt] \multirow{1}{*}{$cc\bar{c}\bar{c}$} & $6.377$ & $11\%$ & $89\%$ & $90\%$ & $10\%$ & \multicolumn{2}{c|}{$0.54$} & $0.30$ & \multicolumn{2}{c|}{$0.49$} & $0.38$\tabularnewline \hline \multirow{1}{*}{$bb\bar{b}\bar{b}$} & $19.215$ & $1\%$ & $99\%$ & $75\%$ & $25\%$ & \multicolumn{2}{c|}{$0.35$} & $0.19$ & \multicolumn{2}{c|}{$0.31$} & $0.25$\tabularnewline \hline \multirow{1}{*}{$bb\bar{c}\bar{c}$} & $12.847$ & $14\%$ & $86\%$ & $92\%$ & $8\%$ & $0.39$ & $0.50$ & $0.26$ & \multicolumn{2}{c|}{$0.41$} & $0.32$\tabularnewline \midrule[1pt] \multicolumn{12}{c}{Model II}\tabularnewline \midrule[1pt] $N^{3}=2^{3}$ & After mixing & $\bar{3}_{c}\otimes3_{c}$ & $6_{c}\otimes\bar{6}_{c}$ & $1_{c}\otimes1_{c}$ & $8_{c}\otimes8_{c}$ & $\sqrt{\langle r_{12}^{2}\rangle}$ fm & $\sqrt{\langle r_{34}^{2}\rangle}$ fm & $\sqrt{\langle r^{2}\rangle}$ fm & $\sqrt{\langle r_{13}^{2}\rangle}$ fm & $\sqrt{\langle r_{24}^{2}\rangle}$ fm & $\sqrt{\langle r'^{2}\rangle}$ fm\tabularnewline \midrule[1pt] \multirow{1}{*}{$cc\bar{c}\bar{c}$} & $6.371$ & $43\%$ & $57\%$ & $97\%$ & $3\%$ & \multicolumn{2}{c|}{$0.47$} & $0.30$ & \multicolumn{2}{c|}{$0.45$} & $0.33$\tabularnewline \hline \multirow{1}{*}{$bb\bar{b}\bar{b}$} & $19.243$ & $17\%$ & $83\%$ & $94\%$ & $6\%$ & \multicolumn{2}{c|}{$0.28$} & $0.17$ & \multicolumn{2}{c|}{$0.26$} & $0.20$\tabularnewline \hline \multirow{1}{*}{$bb\bar{c}\bar{c}$} & $12.886$ & $53\%$ & $47\%$ & $93\%$ & $7\%$ & $0.32$ 
& \multicolumn{1}{c|}{$0.44$} & $0.26$ & \multicolumn{2}{c|}{$0.37$} & $0.26$\tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table*} We present the mass spectra of the tetraquark states and the mass thresholds of possible scattering states in Fig.~\ref{mass spectra}. As illustrated in this figure, the $bb\bar b \bar b$, $cc\bar c\bar c$, and $bb\bar c\bar c$ states with $J^{PC}=0^{++}$ are the lowest states. But they are still located above the corresponding meson-meson mass thresholds, which indicates that there may not exist bound states in the two quark models. \begin{figure*}[!tbp] \centering {\subfigure[~$cc\bar c\bar c$]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1\textwidth]{diagcccc.pdf} \end{minipage}% } \subfigure[~$bb\bar b\bar b$] {\includegraphics[width=0.48\textwidth]{diagbbbb.pdf}} \subfigure[~$bb\bar c\bar c(cc\bar b\bar b)$.] {\includegraphics[width=0.45\textwidth]{diagbbcc.pdf}}} \caption{The mass spectra of the $cc\bar c\bar c$, $bb\bar b\bar b$, and $bb\bar c\bar c(cc\bar b\bar b)$ tetraquark states. The blue line and red dotted dashed line represent the results in model I and II, respectively. } \label{mass spectra} \end{figure*} We also investigate the constituent quark mass dependence of the tetraquark spectra. We vary the quark mass and display the results in Fig.~\ref{massdependence}. The figure shows that both the tetraquark mass and the $\eta_Q\eta_Q$ threshold increase with the quark mass. The $QQ\bar Q\bar Q$ is always located above the mass thresholds of the $\eta_Q\eta_Q$ and no bound tetraquark states exist. \begin{figure*}[!tbp] \centering {\subfigure[The mass spectra of the tetraquark states $QQ\bar Q\bar Q$ with $J^{PC}=0^{++}$.] {\includegraphics[width=0.43\textwidth]{quarkmass.pdf}} \subfigure[The mass difference between the tetraquark states and the mass threshold of $\eta_Q\eta_Q$.] 
{\includegraphics[width=0.44\textwidth]{massdif.pdf}}} \caption{The quark mass dependence of the $0^{++}$ tetraquark states $QQ\bar Q\bar Q$ in model II. In this figure, we use $\eta_Q$ to denote the meson state $Q\bar Q$ with $J^{PC}=0^{-+}$.} \label{massdependence} \end{figure*} \section{Summary}\label{sec3} In this work, we have systematically calculated the mass spectra of the tetraquark states $cc\bar c \bar c$, $bb\bar b\bar b$, and $bb\bar c\bar c$ in two nonrelativistic quark models, which contain the OGE Coulomb, linear confinement and hyperfine potentials. A $QQ\bar Q'\bar Q'$ state ($Q$ and $Q'$ may have the same flavor) with $J^{PC}=0^{++}$ can be formed by a $6_c$ diquark and a $\bar 6_c$ antidiquark, or a $\bar 3_c$ diquark and a $3_c$ antidiquark. For the tetraquark states $cc\bar c \bar c$ and $bb\bar b\bar b$, the $6_c-\bar 6_c$ states are located lower than the $\bar 3_c-3_c$ ones due to the strong attraction between the diquark and the antidiquark. For the $bb\bar c\bar c$ ($cc\bar b\bar b$), the mass of the $6_c-\bar 6_c$ state is lower than that of the $\bar 3_c-3_c$ one in model I, while the $\bar 3_c-3_c$ one is lower in model II. Our calculation shows that the $6_c-\bar 6_c$ color configuration is important and sometimes even dominant in the formation of fully heavy tetraquark states. One should be cautious about neglecting the $6_c-\bar 6_c$ color configurations in calculating the tetraquark states. The $6_c-\bar 6_c$ configuration couples with the $\bar 3_c-3_c$ one through the interactions between the diquark and antidiquark. For a $QQ\bar Q'\bar Q'$ state, we prove that only the hyperfine potential contributes to the mixing between the two configurations, while the contributions from the OGE Coulomb and the linear confinement potentials cancel out exactly.
\begin{table*} \centering \caption{The mass spectra (in units of GeV) of the tetraquark states $cc\bar c\bar c$, $bb\bar b\bar b$, and $bb\bar c\bar c$ in different frameworks. The $M^{1}_{th}$ and $M^{2}_{th}$ are the numerical results from the quark model I and II in this work, respectively. } \label{summarize} \setlength{\tabcolsep}{1.7mm} \begin{tabular}{l|cccccccccccccc} \toprule[1pt]\toprule[1pt] & $J^{PC}$ & $M_{th}^{1}$ & $M_{th}^{2}$ &~\cite{Berezhnoy:2011xn}&~\cite{Karliner:2016zzc} &~\cite{Wu:2016vtq}&~\cite{Anwar:2017toa}&~\cite{Bai:2016int}&~\cite{Barnea:2006sd}&~\cite{Liu:2019zuc}&~\cite{Chen:2016jxd,Chen:2018cqz}\tabularnewline \midrule[1pt] & \multirow{2}{*}{$0^{++}$} & $6.377$ & $6.371$ & \multirow{2}{*}{$5.966$ } & \multirow{2}{*}{$\ensuremath{6.192\pm0.025}$} & \multirow{2}{*}{$6.001$} & \multirow{2}{*}{$...$} & \multirow{2}{*}{$...$} & \multirow{2}{*}{$6.038$} & $6.470$ & \multirow{2}{*}{$6.44 \pm 0.15$} \tabularnewline \multirow{3}{*}{$cc\bar{c}\bar{c}$} & & $6.425$ & $6.483$ & & & & & & & $6.558$ & \tabularnewline \cline{2-12} & $1^{+-}$ & $6.425$ & $6.450$ & $6.051$ & $...$ & $6.109$ & $...$ & $...$ & $6.101$ & $6.512$ & $6.37\pm0.18$ \tabularnewline & $2^{++}$ & $6.432$ & $6.479$ & $6.223$ & $...$ & $6.166$ & $...$ & $...$ & $6.172$ & $6.534$ & $6.37 \pm 0.19$ \tabularnewline \midrule[1pt] \multirow{4}{*}{$bb\bar{b}\bar{b}$} & \multirow{2}{*}{$0^{++}$} & $19.215$ & $19.243$ & \multirow{2}{*}{$18.754$} & \multirow{2}{*}{$18.826 \pm 0.025$} & \multirow{2}{*}{$18.815$} & \multirow{2}{*}{$18.72 \pm 0.02$} & \multirow{2}{*}{$18.69 \pm0.03$} & \multirow{2}{*}{$...$} & $19.268$ &\multirow{2}{*}{$18.45\pm0.15$} \tabularnewline & & $19.247$ & $19.305$ & & & & & & & $19.305$ & \tabularnewline \cline{2-12} & $1^{+-}$ & $19.247$ & $19.311$ & $18.808$ & $...$ & $18.874$ & $...$ & $...$ & $...$ & $19.285$ &$ 18.32 \pm0.17$ \tabularnewline & $2^{++}$ & $19.249$ & $19.325$ & $18.916$ & $...$ & $18.905$ & $...$ & $...$ & $...$ & $19.295$ & $ 18.32 \pm 0.17$ 
\tabularnewline \midrule[1pt] \multirow{4}{*}{$bb\bar{c}\bar{c}(cc\bar{b}\bar{b})$} & \multirow{2}{*}{$0^{++}$} & $12.847$ & $12.886$ & \multirow{2}{*}{$...$} & \multirow{2}{*}{$...$} & \multirow{2}{*}{$12.571$} & \multirow{2}{*}{$...$} & \multirow{2}{*}{$...$} & \multirow{2}{*}{$...$} & $12.935$ &\multirow{2}{*}{$...$}\tabularnewline & & $12.866$ & $12.946$ & & & & & & & $13.023$ & \tabularnewline \cline{2-12} & $1^{+-}$ & $12.864$ & $12.924$ & $...$ & $...$ & $12.638$ & $...$ & $...$ & $...$ & $12.945$ & $...$\tabularnewline & $2^{++}$ & $12.868$ & $12.940$ & $...$ & $...$ & $12.673$ & $...$ & $...$ & $...$ & $12.956$ & $...$\tabularnewline \bottomrule[1pt]\bottomrule[1pt] \end{tabular} \end{table*} In Table~\ref{summarize}, we summarize our numerical results and those from the CMI model~\cite{Berezhnoy:2011xn,Karliner:2016zzc,Wu:2016vtq}, a nonrelativistic effective field theory (NREFT) and a relativized diquark and antidiquark model~\cite{Anwar:2017toa}, a diffusion Monte-Carlo method~\cite{Bai:2016int}, a constituent quark model with the hyperspherical formalism~\cite{Barnea:2006sd}, the nonrelativistic potential model~\cite{Liu:2019zuc}, and the QCD sum rule~\cite{Chen:2016jxd,Chen:2018cqz}. In this table, we notice that the numerical results in the two nonrelativistic quark models are similar to each other. The results show that the lowest states are the ones with $J^{PC}=0^{++}$. These ground states are located about $300\sim 450$ MeV above the lowest scattering states, which indicates that there may not exist bound tetraquark states $cc\bar c \bar c$, $bb\bar b\bar b$, and $bb\bar c\bar c$ $(cc\bar b\bar b)$ in the scheme of the two nonrelativistic quark models. The parameters of the two quark models are determined by the meson spectrum. The potentials in a four-body system may be slightly different from those which are widely used in the conventional meson and baryon systems. The different confinement mechanism may lead to different spectra. 
For example, the three-body force arising from the triple-gluon vertex may be non-negligible for multi-quark systems. In contrast, this force vanishes for traditional $q\bar q$ mesons and $qqq$ baryons. The fully heavy tetraquark states can be searched for at CMS, LHCb, and Belle II. More experimental data may provide a deeper understanding of the interactions in multi-quark systems. \section*{ACKNOWLEDGMENTS} G.J. Wang is very grateful to X. Z. Weng, X. L. Chen and W. Z. Deng for very helpful discussions. We also thank Prof. Makoto Oka and Prof. Emiko Hiyama for helpful suggestions. This project is supported by the National Natural Science Foundation of China under Grants 11575008 and 11621131001, and the 973 program.
\section{Introduction} In the 1950s, Keller \cite{K1954} and Osserman \cite{O1957} independently obtained optimal conditions for the existence of a solution to the boundary blow-up problem \begin{equation}\label{ko} \left\{ \begin{aligned} \Delta u&=f(u)&&\quad\mbox{ in }\Omega,\\ u&=\infty &&\quad\mbox{ on }\partial\Omega, \end{aligned} \right. \end{equation} where $\Omega\subset \mathbb{R}^N$ is a bounded domain and $f\in C^1[0,\infty)$ is a nonnegative increasing function. The condition on the boundary $\partial\Omega$ in \eq{ko} is understood as $\lim_{x\to x_0} u(x)=\infty$ for all $x_0\in \partial\Omega$. Keller and Osserman showed that \eq{ko} has $C^2(\Omega)$ solutions if and only if \begin{equation}\label{kocond} \int_1^\infty\frac{ds}{\sqrt{F(s)}}<\infty\quad\mbox{ where }\; F(s)=\int_0^s f(t)dt. \end{equation} Interestingly, condition \eq{kocond} also appeared in other circumstances: it is related to the maximum principle for nonlinear elliptic inequalities. For instance, if $u\in C^2(\Omega)$ is nonnegative and satisfies $\Delta u\leq f(u)$ in $\Omega$, then, if $u$ vanishes at a point in $\Omega$, it must vanish everywhere in $\Omega$. We refer the reader to Vazquez \cite{V1984} and to Pucci, Serrin and Zou \cite{PS1993,PS2004,PSZ1999} for various extensions of this result. Problems related to boundary blow-up solutions have a long history and they can be traced back to at least a century ago, when Bieberbach \cite{B1916} investigated such solutions for the equation $\Delta u=e^u$ in a planar domain. Since then, many techniques have been devised to deal with such solutions (see, e.g., \cite{GRbook2008,GRbook2012,Rbk} for an account of the progress on this topic). Boundary blow-up solutions for semilinear elliptic equations with nonlinear gradient terms have only recently been investigated (see for instance \cite{AGMQ2012,CPW2013,FQS2013,MMMR2011}).
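For power nonlinearities, condition \eq{kocond} reduces to elementary exponent counting: with $f(u)=u^p$ one has $F(s)=s^{p+1}/(p+1)$, so $1/\sqrt{F(s)}\sim s^{-(p+1)/2}$ and \eq{kocond} holds exactly when $p>1$. The small helper below (an illustration, not part of the original analysis) encodes this computation:

```python
# Keller-Osserman condition for f(u) = u**p (illustrative helper):
# F(s) = s**(p+1)/(p+1), so 1/sqrt(F(s)) ~ s**(-(p+1)/2) and the integral
# over [1, inf) converges iff (p+1)/2 > 1, i.e. iff p > 1.
def keller_osserman_power(p):
    """True iff int_1^inf ds/sqrt(F(s)) < inf for f(u) = u**p."""
    return (p + 1) / 2 > 1

assert not keller_osserman_power(1.0)  # Delta u = u: no blow-up solutions
assert keller_osserman_power(3.0)      # Delta u = u**3: blow-up solutions exist
```

For $f(u)=e^u$ the integrand decays like $e^{-s/2}$, so \eq{kocond} holds as well, recovering Bieberbach's setting.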
In this paper we investigate a semilinear elliptic system featuring a mixture of power type nonlinearities and nonlinear gradient terms. More precisely, we shall be concerned with \begin{equation}\label{sys0} \left\{ \begin{aligned} \Delta u&=v^p&&\quad\mbox{ in }\Omega,\\ \Delta v&=f(|\nabla u|) &&\quad\mbox{ in }\Omega, \end{aligned} \right. \end{equation} where $\Omega\subset \mathbb{R}^N$ is either a ball centred at the origin or the whole space, $p>0$ is a real number and $f\in C^1[0,\infty)$ is a nondecreasing function such that $f(t)>0$ for all $t>0$. Our study will assume that $u$ and $v$ are positive radially symmetric solutions of \eq{sys0}. Note that we do not assume a priori any condition at the boundary for either $u$ or $v$, but this will be needed in the course of our analysis as we shall be concerned with the classification of all solutions to \eq{sys0}. If $\Omega$ is a ball, system \eq{sys0} was first considered by Diaz, Lazzo, and Schmidt in \cite{DLS2005}, in the case $p=1$ and $f(t)=t^2$. Such a choice of the exponent $p$ and the function $f$ is related to the study of the dynamics of a viscous, heat-conducting fluid. The authors in \cite{DLS2005} obtained the existence of one positive solution and, in the case of small dimensions, of one sign-changing solution that blows up at the boundary. Their study was further extended to time-dependent systems in Diaz, Rakotoson, and Schmidt \cite{DRS2007, DRS2008}. We shall first be concerned with the case where $\Omega$ is a ball. In such a situation we obtain that \eq{sys0} admits positive radially symmetric solutions $(u,v)$ such that $u$ or $v$ (or both) blow up around $\partial \Omega$ if and only if \begin{equation}\label{KOgradient} \int_{1}^\infty\frac{ds}{\Big(\displaystyle \int_0^s F(t)dt \Big)^{p/(2p+1)}} <\infty. \end{equation} This can be seen as the analogue of condition \eq{kocond} obtained by Keller \cite{K1954} and Osserman \cite{O1957} for \eq{ko}.
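For $f(t)=t^q$ the integral in \eq{KOgradient} can again be evaluated by exponent counting: $F(t)=t^{q+1}/(q+1)$ gives $\int_0^s F(t)dt\sim s^{q+2}$, so the integrand behaves like $s^{-p(q+2)/(2p+1)}$ and \eq{KOgradient} holds exactly when $p(q+2)/(2p+1)>1$, i.e. $pq>1$; the variant of the integral carrying an extra factor $s$ requires the exponent to exceed $2$, i.e. $q>2(1+1/p)$. A hypothetical helper encoding this (an illustration, not part of the paper):

```python
# Exponent counting for f(t) = t**q (illustrative): int_0^s F ~ s**(q+2),
# so the gradient-type Keller-Osserman integrand behaves like s**(-e)
# with e = p*(q+2)/(2*p+1).
def ko_exponent(p, q):
    return p * (q + 2) / (2 * p + 1)

def ko_gradient_holds(p, q):
    """True iff int_1^inf ds / (int_0^s F)**(p/(2p+1)) < inf."""
    return ko_exponent(p, q) > 1   # equivalent to p*q > 1

def ko_gradient_s_holds(p, q):
    """Same integral with an extra factor s in the numerator."""
    return ko_exponent(p, q) > 2   # equivalent to q > 2*(1 + 1/p)

assert ko_gradient_holds(2, 3) and not ko_gradient_s_holds(2, 3)
assert ko_gradient_holds(1, 7) and ko_gradient_s_holds(1, 7)
assert not ko_gradient_holds(1, 0.5)
```

These two thresholds reappear below as the boundaries of the regions in the $pq$-plane that classify the radial solutions.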
We also provide a complete classification of radially symmetric solutions in such a case. Moreover, we shall obtain (see Theorem \ref{thm2} below) that the equation $$ \Delta^2 u=f(|\nabla u|)\quad\mbox{ in }B_R $$ has (not necessarily positive) radially symmetric solutions that blow up at the boundary $\partial B_R$ if and only if $$ \int_1^\infty\frac{sds}{\displaystyle \Big(\int_0^{s}{F(t)}{dt}\Big)^{1/3}}=\infty \quad\mbox{and}\quad \int_1^\infty\frac{ds}{\displaystyle \Big(\int_0^{s}{F(t)}{dt}\Big)^{1/3}}<\infty. $$ If $f(t)=t^q$, $q\geq 1$, we are able to give the exact rate at which the components $u$ and $v$ blow up at the boundary. In such a setting we use dynamical systems tools for cooperative systems with negative divergence. Further, condition \eq{KOgradient} appears again in the study of \eq{sys0} in the case $\Omega=\mathbb{R}^N$. Again when $f$ is a pure power type nonlinearity we shall be able to precisely describe the behaviour of solutions at infinity. \section{Main results} Let us first present the analysis of system \eq{sys0} in the case where $\Omega$ is a ball. Namely, we shall first investigate the system \begin{equation}\label{sys} \left\{ \begin{aligned} \Delta u&=v^p&&\quad\mbox{ in }B_R,\\ \Delta v&=f(|\nabla u|) &&\quad\mbox{ in }B_R, \end{aligned} \right. \end{equation} where $B_R\subset \mathbb{R}^N,$ $N\geq 2$ is the open ball of radius $R>0$ centred at the origin, $p>0$ and $f\in C^1[0,\infty)$ is a nondecreasing function such that $f(t)>0$ for all $t>0$. Let $F$ be the antiderivative of $f$ that vanishes at the origin (see \eq{kocond}). 
Sometimes in this paper we shall complement the system \eq{sys} with one of the following boundary conditions: \begin{itemize} \item either $ \mbox{ $u$ and $v$ are bounded in $B_R$;} $ \item or \begin{equation}\label{cond1} u\mbox{ is bounded in $B_R$ and } \lim_{|x|\nearrow R}v(x)=\infty; \end{equation} \item or \begin{equation}\label{cond2} \lim_{|x|\nearrow R}u(x)=\lim_{|x|\nearrow R}v(x)=\infty. \end{equation} \end{itemize} From the first equation of \eq{sys} it is easy to see that the situation $\lim_{|x|\nearrow R}u(x)=\infty$ and $v$ is bounded in $B_R$ cannot occur. \begin{theorem}\label{thm1} We have: \begin{enumerate} \item[(i)] All positive radial solutions of \eq{sys} are bounded if and only if \begin{equation}\label{bounded} \int_{1}^\infty\frac{ds}{\Big(\displaystyle \int_0^s F(t)dt \Big)^{p/(2p+1)}} =\infty. \end{equation} \item[(ii)] There exists a positive radial solution $(u,v)$ of \eq{sys} satisfying \eq{cond1} if and only if \begin{equation}\label{int1} \int_{1}^\infty\frac{sds}{\Big(\displaystyle \int_0^s F(t)dt \Big)^{p/(2p+1)}} <\infty. \end{equation} \item[(iii)] There exists a positive radial solution $(u,v)$ of \eq{sys} satisfying \eq{cond2} if and only if \begin{equation}\label{int2} \int_{1}^\infty\frac{ds}{\Big(\displaystyle \int_0^s F(t)dt \Big)^{p/(2p+1)}} <\infty\quad\mbox{ and } \int_{1}^\infty\frac{sds}{\Big(\displaystyle \int_0^s F(t)dt \Big)^{p/(2p+1)}} =\infty. \end{equation} \end{enumerate} \end{theorem} By taking $f(t)=e^t$ and estimating the integrals in \eq{bounded}-\eq{int2} we find: \begin{cor}\label{corblowup2} Consider \begin{equation}\label{eqexp} \left\{ \begin{aligned} \Delta u&=v^{p}&&\quad\mbox{ in }B_R,\\ \Delta v&= e^{|\nabla u|}&&\quad\mbox{ in }B_R. \end{aligned} \right. \end{equation} Then any solution of \eq{eqexp} is either bounded or satisfies \eq{cond1}. \end{cor} We now let $f(t)=t^q$, $q\geq 1$.
From Theorem \ref{thm1} we obtain: \begin{cor}\label{corblowup3} Consider \begin{equation}\label{eqtq} \left\{ \begin{aligned} \Delta u&=v^{p}&&\quad\mbox{ in }B_R,\\ \Delta v&=|\nabla u|^{q}&&\quad\mbox{ in }B_R, \end{aligned} \right. \end{equation} where $p>0$ and $q\geq 1$. Then we have: \begin{enumerate} \item [(i)] All positive radial solutions of \eq{eqtq} are bounded if and only if $$p\leq 1 \mbox{ and } 1\leq q\leq \frac{1}{p}.$$ \item [(ii)] There exist positive radial solutions of \eq{eqtq} satisfying \eq{cond1} if and only if $$q>2\Big(1+\frac{1}{p}\Big).$$ \item [(iii)] There exist positive radial solutions of \eq{eqtq} satisfying \eq{cond2} if and only if $$\frac{1}{p}< q\leq 2 \Big(1+\frac{1}{p}\Big).$$ \end{enumerate} \end{cor} The three regions $A$, $B$ and $C$ in the $pq$-plane that correspond to the cases (i), (ii) and (iii) in Corollary \ref{corblowup3} are depicted below. \begin{figure}[!htb] \centering \includegraphics[scale=.45]{picture3.eps} \caption{The three regions described by Corollary \ref{corblowup3}} \end{figure} In the two pictures below we used MATLAB to plot the solution $(u,v)$ of system \eq{eqtq} for $q=3$ and $p=2$, $p=4$ and for various space dimensions $N$. Our next result deals with the biharmonic problem that derives from \eq{sys} by taking $p=1$. In this case we are able to deduce optimal conditions for the existence of a boundary blow-up solution. \begin{theorem}\label{thm2} Let $R>0$. The problem \begin{equation}\label{sysb} \left\{ \begin{aligned} \Delta^2 u&=f(|\nabla u|)&&\quad\mbox{ in }B_R,\\ u&=\infty&&\quad\mbox{ on }\partial B_R, \end{aligned} \right.
\end{equation} has radially symmetric solutions if and only if \begin{equation}\label{intc} \int_1^\infty\frac{sds}{\displaystyle \Big(\int_0^{s}{F(t)}{dt}\Big)^{1/3}}=\infty \quad\mbox{and}\quad \int_1^\infty\frac{ds}{\displaystyle \Big(\int_0^{s}{F(t)}{dt}\Big)^{1/3}}<\infty. \end{equation} \end{theorem} \begin{figure}[!htb] \centering \includegraphics[height=7.7cm,width=17cm]{type2.eps} \caption{The solution $u$ (left) and $v$ (right) for system \eq{eqtq} with boundary condition \eq{cond1} in the case $p=4$, $q=3$ and for various space dimensions $N=2$, $N=20$ and $N=40$. The graph of the $v$-component is restricted on the vertical axis to the interval $[0,50]$. } \label{fig2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[height=7.7cm,width=17cm]{type3.eps} \caption{The solution $u$ (left) and $v$ (right) for system \eq{eqtq} with boundary condition \eq{cond2} in the case $p=2$, $q=3$ and for various space dimensions $N=2$, $N=20$ and $N=40$ by restricting the vertical axis to the interval $[0,50]$. } \label{fig3} \end{figure} \begin{remark} (i) In Theorem \ref{thm2} we require neither $u$ nor $\Delta u$ to be positive in $B_R$. (ii) Classification of radially symmetric solutions for $\Delta ^m u=u^p$ may be found in \cite{DLS2014,LS2009a,LS2009b,LS2011}. \end{remark} We shall next be interested in the behaviour at the boundary of solutions to \eq{eqtq} that satisfy either \eq{cond1} or \eq{cond2}. Consider \begin{equation}\label{eqtq1} \left\{ \begin{aligned} \Delta u&=v^{p},\, u>0&&\quad\mbox{ in }B_R,\\ \Delta v&=|\nabla u|^{q},\, v>0&&\quad\mbox{ in }B_R,\\ v&=\infty&&\quad\mbox{ on }\partial B_R. \end{aligned} \right. \end{equation} Note first that according to Corollary \ref{corblowup3} the system \eq{eqtq1} has radially symmetric solutions if and only if $q>1/p$. Our main result regarding the behaviour of the radially symmetric solutions to \eq{eqtq1} is as follows.
\begin{theorem}\label{thm3} Assume $p,q\geq 1$, $(p,q)\neq (1,1)$ and let $(u,v)$ be a positive radially symmetric solution to \eq{eqtq1}. Then \begin{equation}\label{blowv} \lim_{|x|\nearrow R}(R-|x|)^{\frac{q+2}{pq-1}}v(x)=\Big[\frac{(1+2p)^{q}(q+2)(q+pq+1)}{(pq-1)^{2+q}}\Big]^{\frac{1}{pq-1}}. \end{equation} Also, \begin{enumerate} \item[(i)] If $q>2(1+1/p)$, then there exists $L:=\lim_{|x|\nearrow R}u(x)\in (0,\infty)$ and \begin{equation}\label{blowu0} \lim_{|x|\nearrow R}\frac{L-u(x)}{(R-|x|)^{\frac{pq-2(1+p)}{pq-1}}}=\Big[\frac{(1+2p)(q+2)^{p}(q+pq+1)^{p}}{(pq-1)^{2p+1}}\Big]^{\frac{1}{pq-1}}. \end{equation} \item[(ii)]If $q= 2(1+1/p)$ then \begin{equation}\label{blowu2} \lim_{|x|\nearrow R} \frac{u(x)}{\ln\frac{1}{R-|x|}}= \Big[\frac{(1+2p)(q+2)^{p}(q+pq+1)^{p}}{(pq-1)^{2p+1}}\Big]^{\frac{1}{pq-1}}. \end{equation} \item[(iii)] If $q< 2(1+1/p)$ then \begin{equation}\label{blowu1} \lim_{|x|\nearrow R}(R-|x|)^{\frac{2+2p-pq}{pq-1}}u(x)= \frac{pq-1}{2+2p-pq}\Big[\frac{(1+2p)(q+2)^{p}(q+pq+1)^{p}}{(pq-1)^{2p+1}}\Big]^{\frac{1}{pq-1}}. \end{equation} \end{enumerate} \end{theorem} Now we shall be interested in the system \eq{sys0} posed in the whole $\mathbb{R}^N$, namely \begin{equation}\label{sys1} \left\{ \begin{aligned} \Delta u&=v^p&&\quad\mbox{ in } \mathbb{R}^N,\\ \Delta v&=f(|\nabla u|) &&\quad\mbox{ in } \mathbb{R}^N. \end{aligned} \right. \end{equation} Our main result in this case is as follows. \begin{theorem}\label{thmrn} We have: \begin{enumerate} \item[(i)] The system \eq{sys1} has positive radially symmetric solutions if and only if $$ \int_{1}^\infty\frac{ds}{\Big(\displaystyle \int_0^s F(t)dt \Big)^{p/(2p+1)}} =\infty. $$ \item[(ii)] Assume $f(t)=t^{q}$, where $q\geq 1>p$ and $pq<1$. Let $(u,v)$ be a positive radially symmetric solution. 
If \begin{equation}\label{div} \frac{p(q^2-4)}{1-pq}\leq 2(N-1) \end{equation} then \begin{equation*} \lim_{|x|\rightarrow \infty}\frac{u(x)}{|x|^{\frac{2+2p-pq}{1-pq}}}=\Big[\frac{(2+q)(N(1-pq)+p(2+q))^{1/p}(N(1-pq)+q(2p+1))}{(1-pq)^{\frac{2p+2-pq}{p}}(2+2p-pq)^{\frac{pq-1}{p}}}\Big]^{\frac{p}{pq-1}} \end{equation*} and \begin{equation*} \lim_{|x|\rightarrow \infty}\frac{v(x)}{|x|^{\frac{q+2}{1-pq}}}=\Big[\frac{(2+q)(N(1-pq)+p(2+q))^{q}(N(1-pq)+q(2p+1))}{(1-pq)^{2+q}}\Big]^{\frac{1}{pq-1}}. \end{equation*} \end{enumerate} \end{theorem} \begin{remark} (i) Condition \eq{div} holds in particular if $q\leq 2$. (ii) It is easy to see that $(u_0,v_0)$ given by $$ \left\{ \begin{aligned} u_{0}(x)&=\Big[\frac{(2+q)(N(1-pq)+p(2+q))^{1/p}(N(1-pq)+q(2p+1))}{(1-pq)^{\frac{2p+2-pq}{p}}(2+2p-pq)^{\frac{pq-1}{p}}}\Big]^{\frac{p}{pq-1}} |x|^{\frac{2+2p-pq}{1-pq}}\\ v_{0}(x)&=\Big[\frac{(2+q)(N(1-pq)+p(2+q))^{q}(N(1-pq)+q(2p+1))}{(1-pq)^{2+q}}\Big]^{\frac{1}{pq-1}}|x|^{\frac{2+q}{1-pq}} \end{aligned} \right. $$ is a solution of \eq{sys1} with $f(t)=t^q$ that vanishes at the origin. Theorem \ref{thmrn} states that any positive radial solution $(u,v)$ of \eq{sys1} with $f(t)=t^q$ behaves like $(u_{0},v_{0})$ at infinity. \end{remark} The remainder of the paper is organised as follows. Section 3 contains a detour in dynamical systems; here we state the main tools which we use to study the asymptotic behaviour in Theorems \ref{thm3} and \ref{thmrn}. The following sections contain the proofs of our main results. \section{A detour in dynamical systems} For any points $x=(x_1,x_2,x_3)$, $y=(y_1,y_2,y_3)$ in $\mathbb{R}^{3}$ define the open ordered interval $$ [[x,y]]=\{z\in \mathbb{R}^3:x<z<y\}\subset \mathbb{R}^3, $$ where the inequalities are understood componentwise. Consider the initial value problem \begin{equation}\label{det1} \left\{ \begin{aligned} &\zeta_{t}=g(\zeta) \quad\mbox{ for } t\in \mathbb{R},\\ &\zeta(0)=\zeta_{0}, \end{aligned} \right. \end{equation} where $ g:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}$ is a $C^{1}$ function.
This implies that for any $\zeta_{0}\in \mathbb{R}^{3},$ there exists a unique solution $\zeta$ of \eq{det1} defined in a maximal time interval. We denote by $\phi(\cdot,\zeta_{0})$ the flow associated to \eq{det1}, that is, $t\longmapsto \phi(t,\zeta_{0})$ is the unique solution of \eq{det1} defined in its maximal time interval. We shall assume that the vector field $g$ is cooperative, that is, $$ \frac{\partial g_{i}}{\partial x_{j}}\geq 0 \quad \mbox{ for } 1\leq i,j\leq 3,\;\; i\neq j. $$ The following results are due to Hirsch \cite{Hirsch1989, Hirsch1990}. \begin{theorem}\label{thmdet1}{\rm (see \cite[Theorem 1]{Hirsch1990})} Any compact limit set of \eq{det1} contains an equilibrium or is a cycle. \end{theorem} \begin{de} A circuit is a finite sequence of equilibria $\zeta_{1},\zeta_{2},\dots,\zeta_{n}=\zeta_{1}$, $(n\geq 2)$ such that $W^{u}(\zeta_{i})\cap W^{s}(\zeta_{i+1})$ is non-empty, where $W^{u}$ and $W^{s}$ denote the unstable and stable manifolds, respectively. \end{de} \begin{remark} If all equilibria are hyperbolic and their stable and unstable manifolds are mutually transverse, then there cannot be any circuit. \end{remark} \begin{theorem}\label{thmdet2}{\rm (see \cite[Theorem 2]{Hirsch1990}). } Let $K\subset \mathbb{R}^{3}$ be a compact set such that: \begin{enumerate} \item [ (i) ] All equilibria in $K$ are hyperbolic and there are no circuits. \item [ (ii)] For any $T>0,$ the number of cycles in $K$ having period less than or equal to $T$ is finite. \end{enumerate} Then: \begin{enumerate} \item [ (a) ] Every limit set in $K$ is an equilibrium or cycle. \item [ (b) ] The number of cycles in $K$ is finite. \end{enumerate} \end{theorem} \begin{theorem}\label{thmdet3}{\rm (see \cite[Theorem 7]{Hirsch1989})} Let $\zeta_{1}$, $\zeta_{2}\in \mathbb{R}^{3}$ such that $\zeta_1< \zeta_2$.
If $$ {\rm div}{g}<0 \quad\mbox{ in } [[\zeta_1,\zeta_2]], $$ then \eq{det1} has no cycles in $[[\zeta_1,\zeta_2]].$ \end{theorem} \begin{de} A subset $A\subset \mathbb{R}^{3}$ is said to be positively invariant for the flow $\phi$ if $\phi(t,\zeta)\in A$ for all $\zeta \in A$ and $t\geq 0.$ $A$ is called invariant for $\phi$ if $$ \phi(t,A)= A \quad\mbox{ for all } t\geq 0, $$ that is, for any $z\in A$ and $t\geq 0$ there exists $\zeta \in A$ such that $\phi(t,\zeta)=z.$ \end{de} \medskip The following notion of chain recurrence is due to Conley \cite{C1972,C1978}. \begin{de} Let $A\subset \mathbb{R}^{3}$ be a nonempty positively invariant subset for $\phi$ and $\zeta,\zeta'\in A.$ \begin{enumerate} \item [(i)] For $\varepsilon> 0$ and $t> 0$, an $(\varepsilon,t)$-chain from $\zeta \in A$ to $\zeta'\in A$ is a sequence of points in $A$, $\zeta=\zeta_{1},\zeta_{2},\dots,\zeta_{n},\zeta_{n+1}=\zeta'$ and of times $t_{1},t_{2},\dots, t_{n}\geq t$ such that $|\phi(t_{i},\zeta_{i})-\zeta_{i+1}|< \varepsilon$ for all $1\leq i\leq n.$ \item [(ii)] A point $\zeta \in A$ is called chain recurrent if for every $\varepsilon> 0, t>0$ there is an $(\varepsilon,t)$-chain from $\zeta$ to $\zeta$ in $A.$ \item [(iii)] The set $A$ is said to be chain recurrent if every point $\zeta\in A$ is chain recurrent in $A.$ \end{enumerate} \end{de} \medskip \begin{theorem}\label{thmmst1}{\rm (see \cite[Theorem 3.3]{C1972}, \cite[Lemma 1.4]{MST1995})} If $A$ is connected and chain recurrent, then for any $\zeta,\zeta' \in A$ and any $\varepsilon, t>0$, there exists an $(\varepsilon,t)$-chain from $\zeta$ to $\zeta'.$ \end{theorem} \bigskip Consider now the initial value problem \begin{equation}\label{fd1} \left\{ \begin{aligned} &\xi_{t}=G(t,\xi) \quad\mbox{ for } t>0,\\ &\xi(0)=\xi_{0}\in \mathbb{R}^{3}, \end{aligned} \right.
\end{equation} where $G$ is a $C^{1}$ function on $(0,\infty) \times \mathbb{R}^{3}$ such that $G(t,\cdot)\rightarrow g$ as $t\rightarrow \infty$, uniformly on compact subsets of $\mathbb{R}^{3}.$ We shall say that \eq{fd1} is asymptotically autonomous with the limit problem \eq{det1}. We denote by $\Phi(\cdot,\xi_{0})$ the semiflow defined by the initial value problem \eq{fd1}. \medskip The following result will play a crucial role in the proof of Theorem \ref{thm3}. \begin{theorem}\label{thmfd2}{\rm (see \cite[Theorem 1.8]{MST1995})} Let $\xi_{0}\in \mathbb{R}^{3}$ and assume the trajectory $\Phi(t,\xi_{0})$ associated to \eq{fd1} is bounded. Then the $\omega$-limit set $\omega_{\Phi}(\xi_{0})$ has the following properties: \begin{enumerate} \item [(a)] $\omega_{\Phi}(\xi_{0})$ is nonempty, compact and connected. \item [(b)] $\omega_{\Phi}(\xi_{0})$ is invariant under the flow $\phi$ of \eq{det1}, that is, $$ \phi(t,\omega_{\Phi}(\xi_{0}))= \omega_{\Phi}(\xi_{0}) \quad\mbox{ for all } t\geq 0. $$ \item [(c)] $\omega_{\Phi}(\xi_{0})$ attracts $\Phi(t,\xi_{0})$, that is, $$ {\rm dist}(\Phi(t,\xi_{0}),\omega_{\Phi}(\xi_{0}))\rightarrow 0 \quad\mbox{ as }t\rightarrow \infty. $$ \item [(d)] $\omega_{\Phi}(\xi_{0})$ is chain recurrent for $\phi.$ \end{enumerate} \end{theorem} \medskip In particular, Theorem \ref{thmfd2}{\rm (d)} states that the invariant set consisting of two equilibria $e_{1}, e_{2}$ and a heteroclinic orbit that joins them, or a homoclinic orbit connecting an equilibrium point $e_3$ with itself, cannot be the $\omega$-limit set of an asymptotically autonomous semiflow. \section{Proof of Theorem \ref{thm1} } A useful result in proving Theorem \ref{thm1} is the following lemma. \begin{lemma}\label{lemma1.1} We have \begin{equation}\label{equiv_cond} \int_1^\infty\frac{ds}{\Big(\displaystyle \int_0^s F(t)dt \Big)^{p/(2p+1)}} <\infty\mbox{ if and only if } \int_1^\infty\frac{ds}{\Big(\displaystyle \int_0^s \sqrt{f(t)}dt \Big)^{2p/(2p+1)}} <\infty.
\end{equation} Moreover, $$ \int_1^\infty\frac{ds}{\Big(\displaystyle \int_0^{2s} {F(t)}dt \Big)^{p/(2p+1)}} \leq \int_1^\infty\frac{ds}{\Big(\displaystyle \int_0^{s}\sqrt{f(t)}dt \Big)^{2p/(2p+1)}} \leq \int_1^\infty\frac{ds}{\Big(2\displaystyle \int_0^{s} {F(t)}dt \Big)^{p/(2p+1)}}. $$ \end{lemma} \begin{proof} Let $H:[0,\infty)\to \mathbb{R}$ be given by $$ H(s)=2\int_0^s F(t)dt-\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^2\quad\mbox{ for all }s\geq 0. $$ Then $$H'(s)=2F(s)-2\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)\sqrt{f(s)}\quad\mbox{ for all }s\geq 0.$$ Therefore \begin{equation} \begin{aligned} \frac{H'(s)}{2}&=F(s)-\sqrt{f(s)}\int_0^{s}\sqrt{f(t)}{dt}=\int_0^{s}f(t){dt}-\sqrt{f(s)}\int_0^{s}\sqrt{f(t)}{dt}\\ &=\int_0^{s}\Big(f(t)-\sqrt{f(s)}\sqrt{f(t)}\Big){dt}=\int_0^{s}\sqrt{f(t)}\Big(\sqrt{f(t)}-\sqrt{f(s)}\Big){dt}\\ &\leq0 \quad\mbox{ for all }s\geq 0. \end{aligned} \end{equation} Hence, $H$ is nonincreasing, which yields $H(s)\leq H(0)=0$. This further implies $$2\int_0^{s}F(t){dt}\leq\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^{2} \quad\mbox{ for all } s\geq 0.$$ So, $$\int_1^\infty\frac{ds}{\Big(2\int_0^{s}F(t){dt}\Big)^{p/(2p+1)}}\geq \int_1^\infty\frac{ds}{\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^{2p/(2p+1)}}. $$ In order to establish the first inequality in our lemma, let $h:[0,\infty)\to \mathbb{R}$ be defined by $$ h(s)=\int_0^{2s}{F(t)}{dt}-\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^2 \quad\mbox{ for all }s\geq 0. $$ Then, \begin{equation*} \begin{aligned} \frac{h'(s)}{2}&=F(2s)-\sqrt{f(s)}\int_0^{s}\sqrt{f(t)}{dt}\\ &\geq\int_s^{2s}f(t){dt}-\sqrt{f(s)}\int_0^{s}\sqrt{f(t)}{dt}\\ &=\int_0^{s}f(s+t){dt}-\sqrt{f(s)}\int_0^{s}\sqrt{f(t)}{dt}\\ &=\int_0^{s}\Big(f(t+s)-\sqrt{f(s)}\sqrt{f(t)}\Big){dt}\\ & \geq 0 \quad\mbox{ for all }s\geq 0. \end{aligned} \end{equation*} It follows that $h$ is nondecreasing, which yields $h(s)\geq h(0)=0$ for all $s\geq 0$.
This implies \begin{equation}\label{eqg} \Big(\int_0^{2s}F(t){dt}\Big)^{p/(2p+1)}\geq\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^{2p/(2p+1)} \quad\mbox{ for all }s\geq 0. \end{equation} Therefore $$\int_1^\infty\frac{ds}{\Big(\int_0^{2s}F(t){dt}\Big)^{p/(2p+1)}}\leq\int_1^\infty\frac{ds}{\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^{2p/(2p+1)}}.$$ This concludes the proof. \end{proof} \noindent{\bf Proof of Theorem \ref{thm1}.} It is enough to prove (ii) and (iii). We shall divide our proof into three steps. \noindent{\it Step 1: Let $(u,v)$ be any positive radial solution of \eq{sys}. Then, letting $w=u'$, we have} \begin{equation}\label{eqw3} \frac{v^{p}(r)}{N}\leq w'(r)\leq v^{p}(r)\quad\mbox{ for all }0<r<R, \end{equation} and \begin{equation}\label{eqv3} \frac{f(w(r))}{N}\leq v''(r)\leq f(w(r)) \quad\mbox{ for all }0<r<R. \end{equation} Indeed, we note first that $(w,v)$ satisfies \begin{equation}\label{sys01} \left\{ \begin{aligned} &w'(r)+\frac{N-1}{r}w(r)=v^p(r)&&\quad\mbox{for all }0<r<R, \\ &v''(r)+\frac{N-1}{r}v'(r)=f(|w(r)|)&&\quad\mbox{for all }0<r<R,\\ &w(0)=v'(0)=0, v(0)>0. \end{aligned} \right. \end{equation} Integrating the first equation of \eq{sys01} we find \begin{equation}\label{eqw} w(r)=r^{1-N}\int_0^r t^{N-1}v^p(t)dt\quad\mbox{ for all }0<r<R. \end{equation} This implies that $w>0$ in $(0,R)$, so $u$ is increasing. We now integrate the second equation of \eq{sys01} to deduce \begin{equation}\label{eqv1b} v'(r)=r^{1-N}\int_0^r t^{N-1}f(w(t))dt\quad\mbox{ for all }0<r<R. \end{equation} This means $v'>0$ and $v$ is increasing on $(0,R)$. Using this fact in \eq{eqw} we find \begin{equation}\label{eqw2} w(r)\leq \frac{r}{N}v^p(r)\quad\mbox{ for all }0<r<R. \end{equation} Combining \eq{eqw2} with the first equation of \eq{sys01} we have $$ v^{p}(r)\leq w'(r)+\frac{N-1}{N}v^{p}(r) \quad\mbox{ for all }0<r<R $$ which implies \eq{eqw3}. In particular $w'>0$ in $(0,R)$, so $w$ is increasing.
Using \eq{eqv1b} we obtain \begin{equation}\label{eqv2} v'(r)\leq \frac{r}{N}f(w(r))\quad\mbox{ for all }0<r<R. \end{equation} Using \eq{eqv2} in the second equation of \eq{sys01} and also the fact that $w>0$ we deduce the estimate \eq{eqv3}. \medskip \noindent {\it Step 2: System \eq{sys} admits a positive radial solution $(u,v)$ such that $\lim_{r\nearrow R} v(r)=\infty$ if and only if } \begin{equation}\label{int01} \int_{1}^\infty\frac{ds}{\Big(\displaystyle \int_0^s F(t)dt \Big)^{p/(2p+1)}} <\infty. \end{equation} Assume first that $(u,v)$ is a positive radial solution of \eq{sys} with $\lim_{r\nearrow R} v(r)=\infty$. Using \eq{eqv3} we have $$ v''(r)\leq f(w(r)) \quad\mbox{ for all }0<r<R. $$ Multiplying the above inequality by $v'(r)$ and then integrating over $[0,r]$ we have \begin{equation*} \begin{aligned} \frac{(v'(r))^{2}}{2}&\leq \int_0^{r}v'(t)f(w(t)){dt}\\ &\leq f(w(r))\int_0^r v'(t)dt \\ &\leq f(w(r))v(r) \quad\mbox{ for all }0<r<R, \end{aligned} \end{equation*} which gives $$ v'(r)(v(r))^{-1/2}\leq C\sqrt{f(w(r))} \quad\mbox{ for all }0<r<R, \mbox{ where } C>0. $$ Multiplying the above inequality by $w'(r)$ and then using \eq{eqw3} we have $$ v'(r)\frac{v^{p-1/2}(r)}{N}\leq w'(r)v'(r)(v(r))^{-1/2}\leq Cw'(r)\sqrt{f(w(r))} $$ for all $0<r<R$. This further implies $$ \frac{1}{N}\left(\frac{v^{p+1/2}(r)}{p+1/2}\right)'\leq Cw'(r)\sqrt{f(w(r))} \quad\mbox{ for all }0<r<R. $$ Integrating over $[0,r]$ we obtain $$ v^{p+1/2}(r)-v^{p+1/2}(0)\leq C\int_{w(0)=0}^{w(r)}\sqrt{f(t)}{dt}. $$ Since $\lim_{r\nearrow R}v(r)=\infty$, we can find $\rho \in (0,R)$ such that $$ \Big(v^{p}(r)\Big)^{(2p+1)/2p}\leq C\int_0^{w(r)}\sqrt{f(t)}{dt} \quad\mbox{ for all }\rho\leq r<R. $$ Using \eq{eqw3} we obtain $$ \frac{w'(r)}{\displaystyle\Big(\int_0^{w(r)}\sqrt{f(t)}{dt}\Big)^{2p/(2p+1)}}\leq C \quad\mbox{ for all }\rho\leq r<R. 
$$ Integrating the above inequality over $[\rho,r]$ we have $$ \int_{\rho}^{r}\frac{w'(t)}{\displaystyle\Big(\int_0^{w(t)}\sqrt{f(\tau)}{d\tau}\Big)^{2p/(2p+1)}}{dt}\leq C(r-\rho)\leq Cr \quad\mbox{ for all }\rho\leq r<R. $$ By changing the variable and then letting $r\nearrow R$ one obtains \begin{equation}\label{eqww} \int_{w(\rho)}^{\infty}\frac{ds}{\displaystyle\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^{2p/(2p+1)}}\leq C(R-\rho)< \infty. \end{equation} Hence, $$ \int_{1}^{\infty}\frac{ds}{\displaystyle\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^{2p/(2p+1)}} < \infty. $$ Using Lemma \ref{lemma1.1}, this is equivalent to $$ \int_{1}^{\infty}\frac{ds}{\displaystyle\Big(\int_0^{s}{F(t)}{dt}\Big)^{p/(2p+1)}} < \infty. $$ We now assume that $f$ fulfills \eq{int01} and prove that \eq{sys} has a positive radial solution $(u,v)$ satisfying $\lim_{r\nearrow R} v(r)=\infty$. Looking for radially symmetric solutions of \eq{sys} we are led to solve \begin{equation}\label{sysr} \left\{ \begin{aligned} &u''(r)+\frac{N-1}{r}u'(r)=v^p(r),\;r>0\\ &v''(r)+\frac{N-1}{r}v'(r)=f(|u'(r)|),\;r>0\\ &u(r)>0, v(r)>0 \mbox{ for } r\geq 0. \end{aligned} \right. \end{equation} In order to obtain the local existence of a solution, it is more convenient to introduce $w=u'$. Thus, the system \eq{sysr} reads \begin{equation}\label{sysr1} \left\{ \begin{aligned} &u'(r)=w(r)\\ &w'(r)+\frac{N-1}{r}w(r)=v^p(r)\\ &v''(r)+\frac{N-1}{r}v'(r)=f(|w(r)|)\\ &w(0)=v'(0)=0, v(0)>0. \end{aligned} \right. \end{equation} By integrating twice, \eq{sysr1} is equivalent to \begin{equation}\label{sysr2} \left\{ \begin{aligned} &u(r)=u(0)+\int_0^r w(t)dt,\;r>0,\\ &w(r)=r^{1-N}\int_0^r t^{N-1}v^p(t)dt,\; r>0,\\ &v(r)=v(0)+\int_0^r t^{1-N}\int_0^t s^{N-1}f(|w(s)|)dsdt, \; r>0,\\ &u(0)>0, v(0)>0. \end{aligned} \right. \end{equation} Since $f$ is a $C^1$-function, by a standard contraction mapping principle one obtains the existence of a solution $(u,v)$ of \eq{sysr} defined in a maximal interval $[0,R_{max})$.
By Step 1, $w$ and $v$ satisfy \eq{eqw3} and \eq{eqv3}. Thus, we have \begin{equation}\label{eqwv} \left\{ \begin{aligned} &f(w(r))\leq Nv''(r) &&\quad\mbox{ for all }0<r<R_{max}, \\ & w'(r)\leq v^{p}(r) &&\quad\mbox{ for all }0<r<R_{max}. \end{aligned} \right. \end{equation} Multiplying the two inequalities in \eq{eqwv} and then integrating over $[0,r]$ we deduce $$ F(w(r))\leq Nv^p(r)v'(r) \quad\mbox{ for all }0<r<R_{max}. $$ Multiplying the above inequality by $w'(r)$ and using \eq{eqw3} one obtains \begin{equation}\label{eqwv2} w'(r)F(w(r))\leq Nv^{2p}(r)v'(r) \quad\mbox{ for all }0<r<R_{max}. \end{equation} Fix $\rho\in (0,R_{max})$ and denote $$ G(r):=\int_{\rho}^{r}{F(t)}{dt} \quad\mbox{ for all }\rho\leq r<R_{max}. $$ Integrating \eq{eqwv2} over $[\rho,r]$ and using \eq{eqw3} we have \begin{equation*} \begin{aligned} G(w(r))=\int_{\rho}^{w(r)}{F(t)}{dt}&\leq N\int_{\rho}^{r}{v^{2p}(t)v'(t)}{dt}\\ &\leq C[v^p(r)]^{(2p+1)/p}\\ &\leq C[w'(r)]^{(2p+1)/p} \quad\mbox{ for all }\rho\leq r<R_{max}. \end{aligned} \end{equation*} Hence $$ C\leq \frac{w'(r)}{\Big(G(w(r))\Big)^{p/(2p+1)}}\quad\mbox{ for all }\rho\leq r<R_{max}. $$ A further integration over $[\rho,r]$ yields $$ C(r-\rho)\leq \int_{\rho}^{r}\frac{w'(t)}{\Big(G(w(t))\Big)^{p/(2p+1)}}{dt} \quad\mbox{ for all }\rho\leq r<R_{max}. $$ By changing the variable of integration and then letting $r \nearrow R_{max}$ we have \begin{equation}\label{eqG} C(R_{max}-\rho)\leq \int_{w(\rho)}^{\infty}\frac{ds}{\Big(G(s)\Big)^{p/(2p+1)}}. \end{equation} Therefore, $$ R_{max}\leq C\int_{1}^{\infty}\frac{ds}{\displaystyle\Big(\int_0^{s}{F(t)}{dt}\Big)^{p/(2p+1)}}< \infty. $$ We have obtained a positive radial solution $(u,v)$ of \eq{sys} in $B_{R_{max}}$ satisfying $\lim_{r\nearrow R_{max}}v(r)=\infty$. Now, if $R>0$ is any positive radius, we let $$ \tilde f(t)= \lambda^{2(1+1/p)} f\Big(\frac{t}{\lambda}\Big)\quad \mbox{ for all } t\geq 0. $$ Clearly $\tilde f$ satisfies \eq{int01}.
By the above arguments there exists $(\tilde u, \tilde v)$ such that $$ \left\{ \begin{aligned} \Delta \tilde u&=\tilde v^p&&\quad\mbox{ in }B_{R_{max}},\\ \Delta \tilde v&=\tilde f(|\nabla \tilde u|) &&\quad\mbox{ in }B_{R_{max}}, \end{aligned} \right. $$ where $B_{R_{max}}$ is the maximal ball of existence. Let \begin{equation*} \left\{ \begin{aligned} u(x)&=\tilde u\Big(\frac{x}{\lambda}\Big) &&\quad\mbox{ in } B_R,\\ v(x)&=\lambda^{-2/p}\tilde v\Big(\frac{x}{\lambda}\Big) &&\quad\mbox{ in } B_R. \end{aligned} \right. \end{equation*} By taking $\lambda =R/R_{max}$, we deduce that $(u,v)$ satisfies \eq{sys} in $B_R$ together with $\lim_{|x|\nearrow R}v(x)=\infty$. \bigskip \noindent{\it Step 3: Proof of (ii) and (iii).} Assume \eq{sys} admits a positive radial solution $(u,v)$ in $B_R$ that satisfies \eq{cond1} (resp. \eq{cond2}). By Step 2 above, $f$ must satisfy \eq{int01}. From \eq{eqg} we have $$ \Big(\int_0^{2s}F(t){dt}\Big)^{p/(2p+1)}\geq\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^{2p/(2p+1)} \quad\mbox{ for all }s\geq 0. $$ Using this fact and working in the same way as we did for estimating \eq{eqww} and \eq{eqG}, there exists $\rho\in (0,R)$ such that \begin{equation}\label{eqint} \int_{w(r)}^{\infty}\frac{ds}{\displaystyle\Big(\int_0^{2s}F(t){dt}\Big)^{p/(2p+1)}}\leq \int_{w(r)}^{\infty}\frac{ds}{\displaystyle\Big(\int_0^{s}\sqrt{f(t)}{dt}\Big)^{2p/(2p+1)}}\leq C_1(R-r) \end{equation} and \begin{equation}\label{eqint2} \int_{w(r)}^{\infty}\frac{ds}{\displaystyle\Big(\int_0^{s}F(t){dt}\Big)^{p/(2p+1)}}\geq C_2(R-r) \end{equation} for all $\rho<r<R$, where $C_1,C_2>0$ are constants. Let $\Gamma: (0,\infty)\to (0,\infty)$ be defined as $$ \Gamma{(t)}=\int_{t}^{\infty}\frac{ds}{\displaystyle\Big(\int_0^{s}{F(\tau)}{d\tau}\Big)^{p/(2p+1)}}\quad\mbox{ for all }t>0. $$ Note that $\Gamma$ is decreasing and by \eq{int01} we have $\lim_{t\to \infty}\Gamma(t)=0$.
From \eq{eqint} and \eq{eqint2} we deduce $$ \Gamma{(2w(r))}\leq C_1(R-r) \quad\mbox{ and }\quad \Gamma{(w(r))}\geq C_2(R-r)\quad\mbox{ for all }\rho\leq r<R. $$ Since $\Gamma$ is decreasing, the above estimates yield \begin{equation}\label{equ} \left\{ \begin{aligned} 2w(r)&\geq \Gamma^{-1}(C_1(R-r)) &&\quad\mbox{ for all }\rho\leq r<R,\\ w(r)&\leq \Gamma^{-1}(C_2(R-r)) &&\quad\mbox{ for all }\rho\leq r<R. \end{aligned} \right. \end{equation} Let us recall that \begin{equation}\label{equmax} u(r)=u{(\rho)}+\int_{\rho}^{r}{w(t)}{dt} \quad\mbox{ for all }\rho\leq r<R. \end{equation} From \eq{equmax} and \eq{equ} we find $\lim_{r\nearrow R}u(r)=\infty$ if and only if $$ \int_{\rho}^{R}w(t){dt}=\infty $$ if and only if $$ \int_{\rho}^{R}{\Gamma^{-1}(C(R-t))}{dt}=\infty, $$ for some constant $C>0$. Hence $\lim_{r\nearrow R}u(r)=\infty$ if and only if $$ \int_0^{C(R-\rho)}{\Gamma^{-1}(\sigma)}{d\sigma}=\infty, $$ if and only if $$ \int_0^{1}{\Gamma^{-1}(\sigma)}{d\sigma}=\infty. $$ With the change of variable $t=\Gamma^{-1}(\sigma)$ we now obtain $\lim_{r\nearrow R}u(r)=\infty$ if and only if $$ \int_{1}^{\infty}\frac{s}{\Big(\int_0^{s}F(t){dt}\Big)^{p/(2p+1)}}{ds}=\infty. $$ This implies \eq{int2}. We now assume that \eq{int2} holds and show that system \eq{sys} has a positive radial solution $(u,v)$ that satisfies \eq{cond2}. We proceed as in Step 2. First we obtain the (local) existence of such a solution in a ball $B_{R_{max}}$ and then, by the same scaling argument indicated at the end of Step 2 we are able to conclude the existence of the desired solution to \eq{sys} in $B_R$ that satisfies \eq{cond2}. \section{Proof of Theorem \ref{thm2}} Let $v=\Delta u$. Then $(u,v)$ satisfies \begin{equation}\label{sysr3} \left\{ \begin{aligned} &u''(r)+\frac{N-1}{r}u'(r)=v(r) &&\quad\mbox{ for all }0<r<R,\\ &v''(r)+\frac{N-1}{r}v'(r)=f(|u'(r)|) &&\quad\mbox{ for all }0<r<R,\\ &\lim_{r\nearrow R}u(r)=\infty. \end{aligned} \right. 
\end{equation} Writing the second equation of \eq{sysr3} as $\big(r^{N-1}v'(r)\big)'=r^{N-1}f(|u'(r)|)\geq 0$ and integrating over $[0,r]$ we find $r^{N-1}v'(r)\geq 0$, so $v'\geq 0$ and $v$ is nondecreasing on $(0,R)$. Thus, there exists $$ L:=\lim_{r\nearrow R}v(r)\in \mathbb{R}\cup\{\infty\}. $$ If $L<\infty$, then $v \leq L$ for all $0<r<R$. Combining this with two integrations of the first equation of \eq{sysr3} yields $$ u(r)- u(0)\leq \frac{Lr^{2}}{2N}\quad\mbox{ for all }0<r<R. $$ This implies that $u$ is bounded on $[0,R)$, which is a contradiction. Therefore we must have $\lim_{r \nearrow R}v(r)=\infty$. In view of this fact there exists $R_{0}>0$ such that $$ v(r)>0 \quad\mbox{ for all } R_{0}\leq r<R. $$ Let $u'=w$. Then, from \eq{sysr3} we deduce \begin{equation}\label{sysr5} \left\{ \begin{aligned} &w'(r)+\frac{N-1}{r}w(r)=v(r) &&\quad\mbox{ for all }0<r<R,\\ &v''(r)+\frac{N-1}{r}v'(r)=f(|w(r)|) &&\quad\mbox{ for all }0<r<R. \end{aligned} \right. \end{equation} From the first equation of \eq{sysr5} we find \begin{equation}\label{eqa1} \Big(r^{N-1}w\Big)'=r^{N-1}v\geq 0 \quad\mbox{ for all } R_{0}\leq r<R, \end{equation} which in particular implies that $r\longmapsto r^{N-1}w(r)$ is nondecreasing on $[R_0,R)$ and so there exists $$ L_{0}:=\lim_{r\nearrow R}w(r)=\lim_{r\nearrow R}u'(r)\in \mathbb{R}\cup \{\infty\}. $$ By a similar argument as above, if $L_0$ is finite we derive that $u$ is bounded, which contradicts $\lim_{r\nearrow R}u(r)=\infty$. Hence $\lim_{r\nearrow R}w(r)=\infty$. Integrating \eq{eqa1} over $[R_{0},r]$ we obtain $$ r^{N-1}w(r)-R_{0}^{N-1}w(R_{0})=\int_{R_0}^r t^{N-1}v(t)dt\leq v(r)\int_{R_0}^r t^{N-1}dt =\frac{v(r)}{N}\Big(r^N-R_{0}^{N}\Big), $$ for all $R_0<r<R$. This yields \begin{equation}\label{eqa2} w(r)\leq \frac{v(r)r}{N}+\frac{C}{r^{N-1}} \quad\mbox{ for all }R_0<r<R, \end{equation} where $C=C(R_{0},N)>0$ is a constant.
Since $v(r)\to \infty$ as $r\nearrow R$, we may choose $R_{1}\in (R_{0},R)$ such that \begin{equation}\label{eqa3} \frac{C}{R_{1}^{N-1}}\leq \frac{v(r)r}{2N(N-1)} \quad\mbox{ for all }R_1<r<R. \end{equation} Combining \eq{eqa2} and \eq{eqa3} we obtain \begin{equation}\label{eqa4} w(r)\leq \frac{2N-1}{2N(N-1)} v(r)r \quad\mbox{ for all }R_1<r<R. \end{equation} We now use this last estimate in the first equation of \eq{sysr5} to deduce $$ \frac{v(r)}{2N} \leq w'(r)\leq v(r) \quad\mbox{ for all } R_1<r<R. $$ The same approach is now applied to the second equation of \eq{sysr5} in order to deduce $$ \frac{f(w(r))}{2N}\leq v''(r) \leq f(w(r)) \quad\mbox{ for all } R_1<r<R. $$ From now on, we follow line by line the proof of Theorem \ref{thm1} with $p=1$ to reach the required conclusion. \section{Proof of Theorem \ref{thm3}} \subsection{More properties of solutions to system \eq{eqtq1}} Let $(u,v)$ be a positive radially symmetric solution of \eq{eqtq1} in $B_R$. Letting $u'(r)=w(r)$ and $v'(r)= \psi(r)$ we have \begin{equation}\label{ds3} \left\{ \begin{aligned} &w'(r)+\frac{N-1}{r}w(r)= v^{p}(r) &&\quad\mbox{ for } 0<r<R, \\ &v'(r)= \psi(r) &&\quad\mbox{ for } 0<r<R, \\ &\psi'(r)+\frac{N-1}{r}\psi(r)= w^{q}(r) &&\quad\mbox{ for } 0<r<R,\\ &w(0)=0, v(0)=m>0,\psi(0)=0. \end{aligned} \right. \end{equation} The next result is a comparison principle between sub- and supersolutions of \eq{ds3}, which holds by virtue of the quasimonotone character of our system. \begin{lemma}\label{lemma1.2} Let $(v_{1}(r),w_{1}(r),\psi_{1}(r))$ and $(v_{2}(r),w_{2}(r),\psi_{2}(r))$ be solutions of \begin{equation}\label{com1} \left\{ \begin{aligned} &w_{1}'(r)+\frac{\theta}{r}w_{1}(r)\geq v_{1}^{p}(r) &&\quad\mbox{ for } 0<r<R_{1}^{max}, \\ &v_{1}'(r)\geq \psi_{1}(r) &&\quad\mbox{ for } 0<r<R_{1}^{max}, \\ &\psi_{1}'(r)+\frac{\theta}{r}\psi_{1}(r)\geq w_{1}^{q}(r) &&\quad\mbox{ for } 0<r<R_{1}^{max},\\ &w_{1}(0)=\mu_{1}, v_{1}(0)=m_{1},\psi_{1}(0)=\nu_{1}. \end{aligned} \right.
\end{equation} and \begin{equation}\label{com2} \left\{ \begin{aligned} &w_{2}'(r)+\frac{\theta}{r}w_{2}(r)\leq v_{2}^{p}(r) &&\quad\mbox{ for } 0<r<R_{2}^{max}, \\ &v_{2}'(r)\leq \psi_{2}(r) &&\quad\mbox{ for } 0<r<R_{2}^{max}, \\ &\psi_{2}'(r)+\frac{\theta}{r}\psi_{2}(r)\leq w_{2}^{q}(r) &&\quad\mbox{ for } 0<r<R_{2}^{max},\\ &w_{2}(0)=\mu_{2}, v_{2}(0)=m_{2},\psi_{2}(0)=\nu_{2}, \end{aligned} \right. \end{equation} where $\theta \geq 0$ and $\mu_{i},\nu_{i},m_{i}\geq 0$ $(i=1,2)$. If $$ \mu_{1}\geq \mu_{2},\;\;m_{1}\geq m_{2},\;\;\nu_{1}\geq \nu_{2},\;\; (\mu_{1},m_{1},\nu_{1})\neq (\mu_{2},m_{2},\nu_{2}), $$ then $$ w_{1}(r)>w_{2}(r),\;\; v_{1}(r)>v_{2}(r),\;\;\psi_{1}(r)>\psi_{2}(r) \quad\mbox{ for all } 0<r<\min\{R_{1}^{max},R_{2}^{max}\}. $$ \end{lemma} \begin{proof} Without losing generality we may assume that $m_{1}>m_{2}.$ Note that the first equation in \eq{com1} and \eq{com2} can be written as \begin{equation}\label{com3} \left\{ \begin{aligned} &\Big(r^{\theta}w_{1}(r)\Big)'\geq r^{\theta}v_{1}^{p}(r) &&\quad\mbox{ for }0<r<R_{1}^{max}, \\ &\Big(r^{\theta}w_{2}(r)\Big)'\leq r^{\theta}v_{2}^{p}(r) &&\quad\mbox{ for }0<r<R_{2}^{max}. \end{aligned} \right. \end{equation} Since $v_{1}(0)=m_{1}>v_{2}(0)=m_{2}$, there exists $\rho>0$ such that \begin{equation}\label{com4} v_{1}(r)> v_{2}(r) \quad\mbox{ for all } 0\leq r\leq \rho. \end{equation} Using \eq{com3} and \eq{com4} we have $$ \Big(r^{\theta}w_{1}(r)\Big)'\geq r^{\theta}v_{1}^{p}(r) > r^{\theta}v_{2}^{p}(r) \geq \Big(r^{\theta}w_{2}(r)\Big)'\quad\mbox{ for all } 0\leq r\leq \rho. $$ Hence \begin{equation}\label{com5} \Big(r^{\theta}w_{1}(r)\Big)'> \Big(r^{\theta}w_{2}(r)\Big)' \quad\mbox{ for all } 0\leq r\leq \rho. \end{equation} Integrating \eq{com5} over $[0,r]$, $r\leq \rho$, we find \begin{equation}\label{com6} w_{1}(r)> w_{2}(r) \quad\mbox{ for all } 0\leq r\leq \rho.
\end{equation} Using \eq{com6} in the third equation of \eq{com1} and \eq{com2} we get $$ \Big(r^{\theta}\psi_{1}(r)\Big)'> \Big(r^{\theta}\psi_{2}(r)\Big)' \quad\mbox{ for all } 0\leq r\leq \rho. $$ Integrating over $[0,r]$, $r\leq \rho$, we deduce that $$ \psi_{1}(r)> \psi_{2}(r) \quad\mbox{ for all } 0\leq r\leq \rho. $$ Let us denote \begin{equation}\label{com7} R:=\sup\Big\{\eta> 0 : v_{1}(r)>v_{2}(r), w_{1}(r)>w_{2}(r), \psi_{1}(r)> \psi_{2}(r) \mbox{ in }(0,\eta)\Big\}. \end{equation} First of all note that $R>0.$ Using the same arguments as above we have $$ R=\min\{R_{1}^{max}, R_{2}^{max}\}. $$ \end{proof} \begin{lemma}\label{lemma1.3} Let $(v_{1}(r),w_{1}(r),\psi_{1}(r))$ and $(v_{2}(r),w_{2}(r),\psi_{2}(r))$ be the solutions of \begin{equation}\label{com8} \left\{ \begin{aligned} &w_{1}'(r)+\frac{N-1}{r}w_{1}(r)=v_{1}^{p}(r) &&\quad\mbox{ for } 0<r<R_{1}^{max}, \\ &v_{1}'(r)=\psi_{1}(r) &&\quad\mbox{ for } 0<r<R_{1}^{max}, \\ &\psi_{1}'(r)+\frac{N-1}{r}\psi_{1}(r)=w_{1}^{q}(r) &&\quad\mbox{ for } 0<r<R_{1}^{max},\\ &v_{1}(R_{1}^{max})=\infty,\\ &w_{1}(0)=0, v_{1}(0)=m_{1}>0,\psi_{1}(0)=0 \end{aligned} \right. \end{equation} and \begin{equation}\label{com9} \left\{ \begin{aligned} &w_{2}'(r)+\frac{N-1}{r}w_{2}(r)=v_{2}^{p}(r) &&\quad\mbox{ for } 0<r<R_{2}^{max}, \\ &v_{2}'(r)=\psi_{2}(r) &&\quad\mbox{ for } 0<r<R_{2}^{max}, \\ &\psi_{2}'(r)+\frac{N-1}{r}\psi_{2}(r)=w_{2}^{q}(r) &&\quad\mbox{ for } 0<r<R_{2}^{max},\\ &v_{2}(R_{2}^{max})=\infty,\\ &w_{2}(0)=0, v_{2}(0)=m_{2}>0,\psi_{2}(0)=0. \end{aligned} \right. \end{equation} If $m_{1}>m_{2}$ then $R_{1}^{max}< R_{2}^{max}$. \end{lemma} \begin{proof} Let $\sigma> 0$ be such that \begin{equation}\label{com10} \Big(\frac{m_{2}}{m_{1}}\Big)^{\frac{pq-1}{q+2}}< \sigma< 1.
\end{equation} Since $m_{1}> m_{2}$ and $pq-1> 0$, it is always possible to choose such a $\sigma.$ Then $$ \tilde{w}(r)=\sigma^\frac{2p+1}{pq-1}w_{1}(\sigma r),\;\;\tilde{v}(r)=\sigma^\frac{q+2}{pq-1}v_{1}(\sigma r),\;\; \tilde{\psi}(r)=\sigma^\frac{pq+q+1}{pq-1}\psi_{1}(\sigma r) $$ satisfy \begin{equation}\label{com12} \left\{ \begin{aligned} &\tilde{w}'(r)+\frac{N-1}{r}\tilde{w}(r)=\tilde{v}^{p}(r)\\ &\tilde{v}'(r)=\tilde{\psi}(r)\\ &\tilde{\psi}'(r)+\frac{N-1}{r}\tilde{\psi}(r)=\tilde{w}^{q}(r),\\ &\tilde{w}(0)=0,\;\;\tilde{\psi}(0)=0,\;\;\tilde{v}(0)=\sigma^\frac{q+2}{pq-1}m_{1}> m_{2}. \end{aligned} \right. \end{equation} Using Lemma \ref{lemma1.2} for $(\tilde{v}(r),\tilde{w}(r),\tilde{\psi}(r))$ and $(v_{2}(r),w_{2}(r),\psi_{2}(r))$ it follows that $$ \tilde{v}(r)> v_{2}(r),\;\; \tilde{w}(r)> w_{2}(r),\;\; \tilde{\psi}(r)> \psi_{2}(r). $$ In particular $\tilde{v}(r)=\sigma^\frac{q+2}{pq-1}v_{1}(\sigma r)> v_{2}(r).$ Since $v_{1}(\sigma r)$ is finite as long as $\sigma r< R_{1}^{max}$, it follows that $v_{2}(r)$ is finite as long as $r< \frac{R_{1}^{max}}{\sigma}.$ Since $R_{2}^{max}$ is the maximal radius of existence for $v_{2}(r)$, it follows that $$ \frac{R_{1}^{max}}{\sigma}\leq R_{2}^{max}. $$ Using the fact that $\sigma< 1,$ we have $$ R_{1}^{max}\leq \sigma R_{2}^{max}< R_{2}^{max}, $$ which completes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm3}} We may always assume $R=1$ because if $(u,v)$ is a solution of \eq{eqtq1} in $B_{R}$ then \begin{equation}\label{ds1} \left\{ \begin{aligned} &U(x)= R^{\frac{2+2p-pq}{pq-1}}u(Rx) &&\quad\mbox{ for } x\in B_{1},\\ &V(x)= R^{\frac{2+q}{pq-1}}v(Rx) &&\quad\mbox{ for } x\in B_{1}, \end{aligned} \right. \end{equation} is a solution of \eq{eqtq1} in $B_{1}$. In the sequel we shall assume $R=1$.
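As a check on the exponents appearing in the rescaling above and in \eq{ds1}, note that differentiation raises the power of $\sigma$ by one, so each equation of \eq{com12} forces an identity between the exponents; all three amount to adding $pq-1$ to the numerator:

```latex
% Exponent bookkeeping for the rescaling: differentiation raises the
% power of sigma by one, and both sides of each equation must balance.
\begin{equation*}
\begin{aligned}
\frac{2p+1}{pq-1}+1 &= p\cdot\frac{q+2}{pq-1}
   &&\text{(first equation, } \tilde{w}'+\tfrac{N-1}{r}\tilde{w}=\tilde{v}^{p}\text{)},\\
\frac{q+2}{pq-1}+1 &= \frac{pq+q+1}{pq-1}
   &&\text{(second equation, } \tilde{v}'=\tilde{\psi}\text{)},\\
\frac{pq+q+1}{pq-1}+1 &= q\cdot\frac{2p+1}{pq-1}
   &&\text{(third equation, } \tilde{\psi}'+\tfrac{N-1}{r}\tilde{\psi}=\tilde{w}^{q}\text{)}.
\end{aligned}
\end{equation*}
```

These are the same three exponents that reappear as $\alpha$, $\beta$, $\gamma$ in \eq{dsb2}.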
Thus, letting $w=u'$ and $\psi=v'$ we have that $(v,w,\psi)$ satisfies \begin{equation}\label{ds2} \left\{ \begin{aligned} &w'(r)+\frac{N-1}{r}w(r)=v^{p}(r) &&\quad\mbox{ for } 0<r<1, \\ &v'(r)=\psi(r) &&\quad\mbox{ for } 0<r<1, \\ &\psi'(r)+\frac{N-1}{r}\psi(r)=w^{q}(r) &&\quad\mbox{ for } 0<r<1,\\ &w(0)=0, v(0)=m>0,\psi(0)=0,\\ &\lim_{r\nearrow 1}w(r)= \lim_{r\nearrow 1} v(r)=\lim_{r\nearrow 1} \psi(r)= \infty. \end{aligned} \right. \end{equation} Further, let \begin{equation}\label{eqhet1} a(r)=\frac{w(r)}{A}(1-r)^{\alpha},\;\; b(r)=\frac{v(r)}{B}(1-r)^{\beta},\;\; c(r)=\frac{\psi(r)}{C}(1-r)^{\gamma}, \end{equation} where \begin{equation}\label{dsa1} \begin{aligned} A&=\Big[\frac{(1+2p)(q+2)^{p}(q+pq+1)^{p}}{(pq-1)^{2p+1}}\Big]^{\frac{1}{pq-1}},\\ B&=\Big[\frac{(1+2p)^{q}(q+2)(q+pq+1)}{(pq-1)^{2+q}}\Big]^{\frac{1}{pq-1}},\\ C&=\Big[\frac{(1+2p)^{q}(q+2)^{pq}(q+pq+1)}{(pq-1)^{q+pq+1}}\Big]^{\frac{1}{pq-1}}, \end{aligned} \end{equation} and \begin{equation}\label{dsb2} \alpha=\frac{1+2p}{pq-1},\;\;\beta=\frac{q+2}{pq-1} ,\;\;\gamma=\frac{q+pq+1}{pq-1}. \end{equation} From \eq{ds2} we deduce that $(a,b,c)$ satisfies \begin{equation}\label{ds4} \left\{ \begin{aligned} &(1-r)\Big(a'(r)+\frac{N-1}{r}a(r)\Big)= \alpha(b^{p}(r)-a(r)) &&\quad\mbox{ for } 0<r<1, \\ &(1-r)b'(r)+\beta b(r)= \beta c(r) &&\quad\mbox{ for } 0<r<1, \\ &(1-r)\Big(c'(r)+\frac{N-1}{r}c(r)\Big)= \gamma(a^{q}(r)-c(r)) &&\quad\mbox{ for } 0<r<1,\\ &a(0)=0, b(0)=m/B>0, c(0)=0. \end{aligned} \right. \end{equation} We next introduce a new change of variable in the system \eq{ds4}, by letting $r=1-e^{-t}$ and $X(t)=a(r)$, $Y(t)=b(r)$ and $Z(t)=c(r)$ where $t=\ln(\frac{1}{1-r})$. 
Thus, \eq{ds4} yields \begin{equation}\label{ds5} \left\{ \begin{aligned} &X_{t}+\frac{N-1}{e^{t}-1}X(t)=\alpha(Y^{p}(t)-X(t)) &&\quad\mbox{ for } 0<t<\infty, \\ &Y_{t}=\beta(Z(t)-Y(t)) &&\quad\mbox{ for } 0<t<\infty, \\ &Z_{t}+\frac{N-1}{e^{t}-1}Z(t)=\gamma(X^{q}(t)-Z(t)) &&\quad\mbox{ for } 0<t<\infty,\\ &X(0)=0, Y(0)=m/B>0, Z(0)=0. \end{aligned} \right. \end{equation} \bigskip The proof of Theorem \ref{thm3} will be divided into three steps. \medskip \noindent {\it Step 1: $\xi(t)=(X(t),Y(t),Z(t))$ is bounded as $t\rightarrow \infty$. } \medskip Let us assume by contradiction that $(X(t),Y(t),Z(t))$ is not bounded. Then we claim that $Y(t)$ is unbounded. If $Y(t)$ is bounded, the first equation of \eq{ds5} would imply $$ X_{t}+\alpha X(t)\leq \alpha Y^{p}(t)\leq C \quad\mbox{ for all } t\geq 0, $$ which is equivalent to $$ \Big(X(t)e^{\alpha t}\Big)'\leq Ce^{\alpha t} \quad\mbox{ for all } t\geq 0. $$ Integrating the above inequality we easily deduce that $X(t)$ is bounded. Similarly, $Z(t)$ is bounded, which contradicts our assumption. Therefore $Y(t)$ must be unbounded. \medskip Let $\tilde{m}>m$ and $(\tilde{v},\tilde{w},\tilde{\psi})$ be the solution of \eq{ds3} with the initial conditions $$ \tilde{w}(0)=0,\;\; \tilde{v}(0)=\tilde{m},\;\;\tilde{\psi}(0)=0 $$ defined on the maximum interval $(0,\tilde{R}).$ By Lemma \ref{lemma1.3} we have $\tilde{R}<1$. Let $(\tilde{X},\tilde{Y},\tilde{Z})$ be the solution of \begin{equation}\label{nds5} \left\{ \begin{aligned} & \tilde{X}_{t}+\frac{N-1}{e^{t}-1}\tilde{X}(t)=\alpha(\tilde{Y}^{p}(t)-\tilde{X}(t)) &&\quad\mbox{ for } 0<t<\tilde{T}:=\ln\Big(\frac{1}{1-\tilde{R}}\Big), \\ & \tilde{Y}_{t}=\beta(\tilde{Z}(t)-\tilde{Y}(t)) &&\quad\mbox{ for } 0<t<\tilde T, \\ & \tilde{Z}_{t}+\frac{N-1}{e^{t}-1}\tilde{Z}(t)=\gamma(\tilde{X}^{q}(t)-\tilde{Z}(t)) &&\quad\mbox{ for } 0<t<\tilde T,\\ & \tilde{X}(0)=0,\;\; \tilde{Y}(0)=\tilde{m}/B,\;\; \tilde{Z}(0)=0, \end{aligned} \right.
\end{equation} associated to $(\tilde{v},\tilde{w},\tilde{\psi}).$ Then $(\tilde{X},\tilde{Y},\tilde{Z})$ blows up at $\tilde{T}.$ Since $Y$ is unbounded, we can choose $t_{0}>0$ such that $Y(t_{0})>\tilde{Y}(0)=\tilde{m}/B.$ Let us set $$ \hat{X}(t)=X(t+t_{0}),\;\; \hat{Y}(t)=Y(t+t_{0}),\;\; \hat{Z}(t)=Z(t+t_{0}). $$ Then, one can easily check that $(\hat{X}, \hat{Y}, \hat{Z})$ satisfies $$ \left\{ \begin{aligned} &\hat X_{t}+\frac{N-1}{e^{t}-1}\hat X(t)\geq\alpha(\hat Y^{p}(t)-\hat X(t)) &&\quad\mbox{ for } 0<t<\infty, \\ &\hat Y_{t}=\beta(\hat Z(t)-\hat Y(t)) &&\quad\mbox{ for } 0<t<\infty, \\ &\hat Z_{t}+\frac{N-1}{e^{t}-1}\hat Z(t)\geq \gamma(\hat X^{q}(t)-\hat Z(t)) &&\quad\mbox{ for } 0<t<\infty,\\ &\hat X(0)>0, \hat Y(0)>\tilde m/B, \hat Z(0)=0. \end{aligned} \right. $$ In virtue of Lemma \ref{lemma1.2} we deduce that $$ \hat{X}(t)>\tilde{X}(t),\;\; \hat{Y}(t)>\tilde{Y}(t),\;\; \hat{Z}(t)> \tilde{Z}(t) $$ which contradicts the fact that $(\tilde{X},\tilde{Y},\tilde{Z})$ blows up in finite time. Hence $\xi(t)=(X(t),Y(t),Z(t))$ is bounded as $t\rightarrow \infty$. \medskip \noindent {\it Step 2:} {\it Analysis of the autonomous system associated with \eq{ds5}.} We shall embed the autonomous system associated to \eq{ds5} in the whole $\mathbb{R}^{3}$ by considering the initial value problem \begin{equation}\label{eqna1} \left\{ \begin{aligned} &\zeta_{t}= g(\zeta) \quad\mbox{ for all } t\in \mathbb{R},\\ &\zeta(0)=\zeta_{0}\in \mathbb{R}^{3}, \end{aligned} \right. \end{equation} where $$ \zeta= \left(\begin{array}{c} X\\Y\\Z \end{array}\right) ,\;\; g(\zeta)= g\left(\begin{array}{c} X\\Y\\Z \end{array}\right)= \left(\begin{array}{c} \alpha(Y|Y|^{p-1}-X)\\\beta(Z-Y)\\\gamma(X|X|^{q-1}-Z) \end{array}\right). 
$$ Using a standard comparison result we have \begin{lemma}\label{lemmadet11} Let $(X(t),Y(t),Z(t))$ be the solution of \eq{eqna1} and $t_{0}\in \mathbb{R}.$ Then \begin{enumerate} \item [ (i) ] If $X(t_{0}),Y(t_{0}),Z(t_{0})\leq 0$ then $X(t),Y(t),Z(t)\leq 0$, for all $t\leq t_{0}$. \item [ (ii) ] If $X(t_{0}),Y(t_{0}),Z(t_{0})\geq 0$ then $X(t),Y(t),Z(t)\geq 0$, for all $t\geq t_{0}$. \end{enumerate} \end{lemma} The system \eq{eqna1} is cooperative and has negative divergence. It has exactly three equilibria, namely ${\bf0}=(0,0,0)$,\;\;${\bf1}=(1,1,1)$ and ${\bf-1}=(-1,-1,-1).$ It is easy to check that {\bf0} is asymptotically stable. The linearized matrix at {\bf1} and {\bf-1} is $$ M=\left[\begin{array}{ccc} -\alpha&\alpha p&0 \\ 0&-\beta&\beta \\ \gamma q&0&-\gamma \end{array}\right], $$ and the eigenvalues $\lambda_{i}$, $1\leq i\leq 3,$ are solutions of $$ (\lambda+\alpha)(\lambda+\beta)(\lambda+\gamma)-pq\alpha \beta \gamma= 0. $$ From the definition of $\alpha$, $\beta$ and $\gamma$ in \eq{dsb2}, we have $$ \alpha+1= p\beta,\;\; \beta+1= \gamma,\;\; \gamma+1= q\alpha. $$ Hence $(1+\alpha)(1+\beta)(1+\gamma)=(p\beta)\gamma(q\alpha)=pq\alpha\beta\gamma$, which shows that $\lambda_{1}=1$ is an eigenvalue of $M.$ Moreover, since $\lambda_{1}+\lambda_{2}+\lambda_{3}=-(\alpha+\beta+\gamma)$ and $\lambda_{1}\lambda_{2}\lambda_{3}=(pq-1)\alpha\beta\gamma$, we have $\lambda_{2}+\lambda_{3}< 0$ and $\lambda_{2}\lambda_{3}=(pq-1)\alpha\beta\gamma>0.$ Thus, ${\rm Re}(\lambda_{2})< 0$,\;\;${\rm Re}(\lambda_{3})< 0.$ So {\bf1},\;\;{\bf-1} are saddle points with two-dimensional stable manifolds. Using Lemma \ref{lemmadet11} and the fact that {\bf0} is asymptotically stable, we deduce that the system \eq{eqna1} has no circuits. By Theorems \ref{thmdet2} and \ref{thmdet3} any compact limit set of \eq{eqna1} reduces to an equilibrium point. Hence, any bounded trajectory $\phi(t,\zeta)$ converges both backward and forward in time to one of the three equilibria described above. \medskip \noindent {\it Step 3: Analysis of the non-autonomous system \eq{ds5}.} \medskip Let $\xi_{0}=(0,m/B,0)$ and denote by $\Phi(\cdot,\xi_{0})$ the semiflow associated to \eq{ds5}.
By Theorem \ref{thmfd2}, the $\omega$-limit set $\omega_{\Phi}(\xi_{0})$ is invariant under the flow $\phi$ of the autonomous system \eq{eqna1}. Thus $$ \phi(t,\omega_{\Phi}(\xi_{0}))=\omega_{\Phi}(\xi_{0}) \quad\mbox{ for all } t\geq 0. $$ Due to the group property of the flow $\phi$, the above equality is true for all $t\in \mathbb{R}$ because \begin{equation}\label{nau1} \omega_{\Phi}(\xi_{0})= \phi(0,\omega_{\Phi}(\xi_{0}))=\phi(-|t|,\phi(|t|,\omega_{\Phi}(\xi_{0})))=\phi(-|t|,\omega_{\Phi}(\xi_{0})). \end{equation} Hence, $\phi(t,\omega_{\Phi}(\xi_{0}))= \omega_{\Phi}(\xi_{0})$ for all $t\in \mathbb{R}.$ \medskip Let $z\in \omega_{\Phi}(\xi_{0})$. Since $\omega_{\Phi}(\xi_{0})$ is chain recurrent, for all $n\geq 1$ there exists a finite sequence of points in $\omega_{\Phi}(\xi_{0})$ $$ z=\zeta_1,\;\zeta_2,\;\dots,\; \zeta_{k_n},\;\zeta_{k_n+1}=z $$ and a sequence of finite times $$ t_1,\;t_2,\;\dots,\; t_{k_n}\geq n $$ such that $$ |\phi(t_i,\zeta_i)-\zeta_{i+1}|<\frac{1}{n}\quad\mbox{ for all }1\leq i\leq k_n. $$ In particular, for $i=k_n$ we find \begin{equation}\label{eqzeta} |\phi(t_{k_n},\zeta_{k_n})-z|<\frac{1}{n}\quad\mbox{ for all }n\geq 1. \end{equation} Since $\{\zeta_{k_n}\}\subset \omega_{\Phi}(\xi_{0})$ is bounded, it follows that up to a subsequence (still denoted by $\{\zeta_{k_n}\}$) we have $\zeta_{k_n}\to \zeta_0$ as $n\to \infty$, for some $\zeta_0\in \omega_{\Phi}(\xi_{0})$. By Step 2, $\phi(t,\zeta_0)\to \ell\in \{{\bf 0,1}\}$ as $t\to \infty$. Using the continuous dependence of the flow $\phi$ on the initial data we can let $n\to \infty$ in \eq{eqzeta} to deduce $z=\ell \in \{{\bf 0,1}\}$. Thus, $\omega_{\Phi}(\xi_{0})\subset \{\bf0,\bf1\}$.
Since $\omega_{\Phi}(\xi_{0})$ is connected, by Theorem \ref{thmfd2}(a) it follows that $\omega_{\Phi}(\xi_{0})=\{\bf0\}$ or $\omega_{\Phi}(\xi_{0})=\{\bf1\}.$ Assume by contradiction that $\omega_{\Phi}(\xi_{0})=\{\bf0\}.$ Then $(X(t),Y(t),Z(t))$ tends to {\bf0} as $t\rightarrow \infty.$ Thus, we may find $t_{0}>0$ such that $$ 0<X(t_{0}), Y(t_{0}), Z(t_{0})< 1. $$ Let now $\tilde{m}>m$ and $(\tilde{X}(t),\tilde{Y}(t),\tilde{Z}(t))$ be the solution of \eq{nds5}. By taking $\tilde{m}$ close to $m$ and using the continuous dependence on the initial data of solution to \eq{ds5}, we may assume $$ 0< \tilde{X}(t_{0}), \tilde{Y}(t_{0}), \tilde{Z}(t_{0})< 1. $$ A comparison principle now implies $$ 0< \tilde{X}(t), \tilde{Y}(t), \tilde{Z}(t)< 1 \quad\mbox{ for all } t\geq t_{0}. $$ But the above inequalities contradict the fact that $(\tilde{X}(t),\tilde{Y}(t),\tilde{Z}(t))$ blows up in finite time. Hence, $\xi(t)=(X(t),Y(t),Z(t))\rightarrow {\bf1}.$ \medskip Using \eq{eqhet1} it follows that \begin{equation}\label{end1} \left\{ \begin{aligned} &v(r)(1-r)^{\beta}\rightarrow B \quad\mbox{ as } r\nearrow 1,\\ &w(r)(1-r)^{\alpha}\rightarrow A \quad\mbox{ as } r\nearrow 1,\\ &\psi(r)(1-r)^{\gamma}\rightarrow C \quad\mbox{ as } r\nearrow 1. \end{aligned} \right. \end{equation} Now, the first part of equation \eq{end1} implies \eq{blowu0}. Let $\varepsilon>0$. Then there exists $\delta \in (0,1)$ such that \begin{equation}\label{end2} (1-\varepsilon)A(1-r)^{-\alpha}\leq w(r)=u'(r)\leq (1+\varepsilon)A(1-r)^{-\alpha} \quad\mbox{ for all } r\in [\delta,1). \end{equation} Assume $q>2(1+1/p)$. Thus $\alpha \in (0,1)$. By Corollary \ref{corblowup3}, $u$ is bounded and increasing on $(0,1).$ Thus there exists $L=\lim_{r\nearrow 1}u(r)$ and integrating \eq{end2} over $[r,1]$, we find $$ \frac{A}{1-\alpha}(1-\varepsilon)\leq \frac{L-u(r)}{(1-r)^{1-\alpha}}\leq \frac{A}{1-\alpha}(1+\varepsilon) \quad\mbox{ for all } r\in [\delta,1). $$ This proves part (i) of Theorem \ref{thm3}.
\medskip Assume now $q=2(1+1/p)$. Thus $\alpha=1.$ Integrating \eq{end2} over $[\delta,r]$, where $\delta<r<1$, we find $$ A(1-\varepsilon)\leq \liminf_{r\nearrow 1}\frac{u(r)}{\ln(\frac{1}{1-r})}\leq \limsup_{r\nearrow 1}\frac{u(r)}{\ln(\frac{1}{1-r})}\leq A(1+\varepsilon). $$ Letting $\varepsilon \rightarrow 0,$ we get $$ \lim_{r\nearrow 1}\frac{u(r)}{\ln(\frac{1}{1-r})}=A. $$ This proves part (ii) of Theorem \ref{thm3}. \medskip Assuming $q<2(1+1/p)$, in a similar way as before we derive the proof of part (iii) in Theorem \ref{thm3}. \section{Proof of Theorem \ref{thmrn}} As in the proof of Step 1 in Theorem \ref{thm1}, we obtain that $u'$, $v'$, $u$, $v$ are increasing and \begin{equation*} \left\{ \begin{aligned} &u'(r)=r^{1-N}\int_{0}^{r}{s^{N-1}v^{p}(s)}ds \quad\mbox{ for all } r>0,\\ &v'(r)=r^{1-N}\int_{0}^{r}{s^{N-1}(u')^{q}(s)}ds \quad\mbox{ for all } r>0. \end{aligned} \right. \end{equation*} This yields \begin{equation}\label{rn1} \frac{rv^{p}(0)}{N}\leq u'(r)\leq \frac{rv^{p}(r)}{N} \quad\mbox{ for all } r>0 \end{equation} and \begin{equation}\label{rn2} \frac{v^{pq}(0)r^{1+q}}{N^{q}(N+q)}\leq v'(r)\leq \frac{ru'^{q}(r)}{N} \quad\mbox{ for all } r>0. \end{equation} From \eq{rn1} and \eq{rn2} we deduce that $u'(r)$, $v'(r)$, $u(r)$, $v(r)$ tend to infinity as $r\rightarrow \infty$. Inspired by the change of variables introduced in \cite{HV1996} (see also \cite{BVH2010,G2012}) we define $$ X(t)= \frac{ru'(r)}{u(r)},\;\;Y(t)= \frac{rv'(r)}{v(r)},\;\;Z(t)=\frac{rv^{p}(r)}{u'(r)} \mbox{ and }W(t)=\frac{ru'^{q}(r)}{v'(r)}, $$ where $t= \ln(r)$ for $r\in (0,\infty)$. A direct calculation shows that $(X(t),Y(t),Z(t),W(t))$ satisfies \begin{equation}\label{rn3} \left\{ \begin{aligned} &X_{t}= X(Z-(N-2)-X) \quad\mbox{ for all } t\in \mathbb{R}, \\ &Y_{t}= Y(W-(N-2)-Y) \quad\mbox{ for all } t\in \mathbb{R}, \\ &Z_{t}= Z(N+pY-Z) \quad\mbox{ for all } t\in \mathbb{R}, \\ &W_{t}= W(qZ-qN+q+N-W) \quad\mbox{ for all } t\in \mathbb{R}. \end{aligned} \right.
\end{equation} By L'Hopital's rule we have $\lim_{t\rightarrow \infty}X(t)=2-N+\lim_{t\rightarrow \infty}Z(t)$. Thus, it is enough to consider the last three equations of \eq{rn3}, namely \begin{equation}\label{rn4} \left\{ \begin{aligned} &Y_{t}= Y(W-(N-2)-Y) \quad\mbox{ for all } t\in \mathbb{R}, \\ &Z_{t}= Z(N+pY-Z) \quad\mbox{ for all } t\in \mathbb{R}, \\ &W_{t}= W(qZ-qN+q+N-W) \quad\mbox{ for all } t\in \mathbb{R}. \end{aligned} \right. \end{equation} We rewrite our system as \begin{equation}\label{rn5} \zeta_{t}= g(\zeta) \end{equation} where $$ \zeta=\left(\begin{array}{c}Y(t)\\Z(t)\\W(t)\end{array}\right) \quad\mbox{ and } \quad g(\zeta)= \left(\begin{array}{c} Y(W-(N-2)-Y)\\Z(N+pY-Z)\\ W(qZ-qN+q+N-W) \end{array}\right). $$ Since the system \eq{rn5} is cooperative, the following comparison principle holds: \begin{lemma}\label{lrn1} Let $\zeta(t)= \left(\begin{array}{c}Y(t)\\Z(t)\\W(t)\end{array}\right)$ and $\tilde{\zeta}(t)= \left(\begin{array}{c}\tilde{Y}(t)\\ \tilde{Z}(t)\\ \tilde{W}(t)\end{array}\right)$ be two nonnegative solutions of \eq{rn5} such that $$ Y(t_0)\geq \tilde{Y}(t_0), \;\;\;Z(t_0)\geq \tilde{Z}(t_0),\;\;\; W(t_0)\geq \tilde{W}(t_0) $$ for some $t_0\in \mathbb{R}$. Then $$ Y(t)\geq \tilde{Y}(t), \;\;\;Z(t)\geq \tilde{Z}(t),\;\;\; W(t)\geq \tilde{W}(t) \quad\mbox{ for all }t\geq t_{0}. $$ \end{lemma} \bigskip \bigskip From \eq{rn1} and \eq{rn2} we have $Z\geq N$ and $W\geq N$. Therefore there are only two equilibria of \eq{rn4} which satisfy $Z\geq N$ and $W\geq N$, namely $$ \zeta_{1}= \left(\begin{array}{c}0\\ N\\ N+q\end{array}\right) \quad\mbox{ and } \zeta_{2}=\left(\begin{array}{c}\frac{2+q}{1-pq}\\ N+\frac{p(2+q)}{1-pq}\\ N-2+\frac{2+q}{1-pq}\end{array}\right). $$ \begin{lemma}\label{lrn2} $\zeta_{2}$ is asymptotically stable. \end{lemma} \begin{proof} The linearized matrix at $\zeta_2$ is $$ M=\left[\begin{array}{ccc}-Y_2&0&Y_2\\ pZ_2&-Z_2&0\\ 0&qW_2&-W_2\end{array}\right].
$$ The characteristic polynomial of $M$ is $$ P(\lambda)=\det(\lambda I-M)=\lambda^{3}+\alpha\lambda^{2}+\beta\lambda+(1-pq)\gamma $$ where \begin{equation*} \begin{aligned} \alpha&=Y_2+Z_2+W_2\\ \beta&=Y_2Z_2+Y_2W_2+Z_2W_2\\ \gamma&=Y_2Z_2W_2. \end{aligned} \end{equation*} Since $\alpha$, $\beta$, $\gamma>0$ and $pq<1$, we have $P(\lambda)>0$ for all $\lambda \geq 0$. If $P$ has three real roots then they are all negative, so $\zeta_2$ is asymptotically stable in this case. It remains to consider the situation where $P$ has exactly one real root. Let $\lambda_1 \in \mathbb{R}$ and $\lambda_2,\lambda_3 \in \mathbb{C}\setminus \mathbb{R}$ be the roots of $P$. We claim that ${\rm Re}(\lambda_2)={\rm Re}(\lambda_3)<0$. We need to show $P(-\alpha)=-\beta\alpha+(1-pq)\gamma<0$, that is, $\beta\alpha>(1-pq)\gamma$. Indeed, since $P(0)=(1-pq)\gamma>0$, the inequality $P(-\alpha)<0$ forces the real root to satisfy $\lambda_{1}\in(-\alpha,0)$, and then $2{\rm Re}(\lambda_{2})=-\alpha-\lambda_{1}<0$. By the AM-GM inequality we find $$ \alpha \geq 3\sqrt[3]{Y_2Z_2W_2}\quad \mbox{ and } \quad \beta \geq 3\sqrt[3]{(Y_2Z_2W_2)^{2}} $$ which yields $\alpha\beta\geq 9\gamma> (1-pq)\gamma$. Hence $\zeta_2$ is asymptotically stable. \end{proof} \begin{lemma}\label{lrn3} For all $t\in \mathbb{R}$, we have \begin{equation}\label{rn6} 0\leq Y(t)\leq \frac{2+q}{1-pq}, \end{equation} \begin{equation}\label{rn7} N\leq Z(t)\leq N+\frac{p(2+q)}{1-pq}, \end{equation} \begin{equation}\label{rn8} N+q\leq W(t)\leq N-2+\frac{2+q}{1-pq}. \end{equation} \end{lemma} \begin{proof} Since $v'(0)=0$ and $v(0)>0$ we obtain $\lim_{t\rightarrow -\infty}Y(t)= \lim_{r\rightarrow 0}\frac{rv'(r)}{v(r)}=0$. Also \begin{equation}\label{crn1} u''(r)+\frac{N-1}{r}u'(r)=v^{p}(r) \quad\mbox{ for all } r>0. \end{equation} Using L'Hopital's rule, we deduce that $$ \lim_{r\rightarrow 0}\frac{u'(r)}{r}=\lim_{r\rightarrow 0}u''(r)= u''(0) $$ which combined with \eq{crn1} yields $$ u''(0)= \frac{v^{p}(0)}{N}=\lim_{r\rightarrow 0}\frac{u'(r)}{r}. $$ Therefore \begin{equation*} \lim_{t\rightarrow -\infty}Z(t)=\lim_{r\rightarrow 0}\frac{rv^{p}(r)}{u'(r)}=N.
\end{equation*} We claim that there exists $t_{j}\rightarrow -\infty$ such that \begin{equation}\label{rn9} \left\{ \begin{aligned} &Y(t_{j})\leq Y_2,\\ &Z(t_j)\leq Z_2,\\ &W(t_j)\leq W_2. \end{aligned} \right. \end{equation} Because $\lim_{t\rightarrow -\infty}Y(t)=0$ and $\lim_{t\rightarrow -\infty}Z(t)= N$, it remains only to prove the last part of \eq{rn9}. Assume by contradiction that this is not true. Thus $W> W_2$ in $(-\infty,t_0)$ for some $t_0\in \mathbb{R}$. Then, by taking $t_0$ small enough we have $$ W_t= W(qZ-qN+q+N-W)< 0 \quad\mbox{ in }(-\infty,t_0). $$ Hence, $W$ is decreasing in the neighbourhood of $-\infty$ and so there exists $\ell= \lim_{t\rightarrow -\infty}W(t)$. Again using L'Hopital's rule we have \begin{equation*} \begin{aligned} \ell=\lim_{t\rightarrow -\infty}W(t)&=\lim_{r\rightarrow 0}\frac{ru'^{q}(r)}{v'(r)}\\ &=\lim_{r\rightarrow 0}\frac{rqu'^{q-1}(r)u''(r)+u'^{q}(r)}{v''(r)}\\ &=\lim_{r\rightarrow 0}\frac{rqu'^{q-1}(r)v^{p}(r)+(1+q-qN)u'^{q}(r)}{u'^{q}(r)-\frac{N-1}{r}v'(r)}\\ &=\lim_{t\rightarrow -\infty}\frac{qZ(t)+1+q-qN}{1-\frac{N-1}{W(t)}}\\ &=\frac{1+q}{1-\frac{N-1}{\ell}}. \end{aligned} \end{equation*} This yields $\ell=N+q<W_2$, which contradicts our assumption that $W>W_{2}$ in a neighbourhood of $-\infty$. This proves the last part of \eq{rn9}. We then apply the Comparison Lemma \ref{lrn1} on all the intervals $[t_j,\infty)$ for $j\geq 1$ to obtain the upper bound inequalities in Lemma \ref{lrn3}. In the same way we obtain the lower bound inequalities. \end{proof} \medskip Let $K=\overline{[[\zeta_{1},\zeta_{2}]]}\subset \mathbb{R}^{3}$. By Lemma \ref{lrn3} we have $\omega(\zeta)\subseteq K$. Since $\zeta_2$ is asymptotically stable, $K$ has no circuits. Also, by \eq{div} we have $$ {\rm div}\, g(\zeta)=-W+(q-2)Z+(p-2)Y+N+2-qN+q<0\quad\mbox{ in } K. $$ Using Theorems \ref{thmdet2} and \ref{thmdet3} we deduce that $\omega(\zeta)$ reduces to one of the equilibria $\zeta_1$ or $\zeta_2$.
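Although not needed for the proof, the convergence to $\zeta_{2}$ established in the next paragraph can be illustrated by integrating \eq{rn4} numerically for sample exponents with $pq<1$; the values $N=3$, $p=q=1/2$, and the forward-Euler scheme below are arbitrary illustrative choices, not part of the argument.

```python
# Numerical sketch: forward-Euler integration of the cooperative system
#   Y_t = Y(W - (N-2) - Y),  Z_t = Z(N + pY - Z),  W_t = W(qZ - qN + q + N - W)
# for sample exponents with pq < 1.  Started near the equilibrium
# zeta_1 = (0, N, N+q), the trajectory is expected to approach
# zeta_2 = ((2+q)/(1-pq), N + p(2+q)/(1-pq), N-2 + (2+q)/(1-pq)).
# All parameter values are arbitrary illustrative choices.

N, p, q = 3.0, 0.5, 0.5          # any N and exponents with pq < 1 will do
Y2 = (2 + q) / (1 - p * q)       # components of zeta_2
Z2 = N + p * Y2
W2 = N - 2 + Y2

# start slightly off zeta_1 (the plane Y = 0 is invariant, so perturb Y)
Y, Z, W = 1e-2, N, N + q
dt = 1e-3
for _ in range(80_000):          # integrate up to t = 80
    dY = Y * (W - (N - 2) - Y)
    dZ = Z * (N + p * Y - Z)
    dW = W * (q * Z - q * N + q + N - W)
    Y, Z, W = Y + dt * dY, Z + dt * dZ, W + dt * dW

print(Y, Z, W)   # should be close to (Y2, Z2, W2) = (10/3, 14/3, 13/3)
```

The same experiment with $Y(0)=0$ stays on the invariant plane and converges to $\zeta_{1}$ instead, consistent with the role of $Y>0$ in ruling out $\zeta_{1}$ below.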
If $\zeta(t)\rightarrow \zeta_1$ as $t\rightarrow \infty$ this implies in particular that $Y(t)\rightarrow 0$ as $t\rightarrow \infty$. On the other hand, from the second equation of \eq{rn3} we deduce $Y_t> 0$ in a neighbourhood of infinity, which is impossible since the positive function $Y(t)$ cannot be increasing and still tend to zero. Hence $\zeta(t)\rightarrow \zeta_2$ as $t\rightarrow \infty$, that is \begin{equation*} \begin{aligned} \lim_{t\rightarrow \infty}X(t)&= 2+\frac{p(2+q)}{1-pq},\\ \lim_{t\rightarrow \infty}Y(t)&= \frac{2+q}{1-pq},\\ \lim_{t\rightarrow \infty}Z(t)&= N+\frac{p(2+q)}{1-pq},\\ \lim_{t\rightarrow \infty}W(t)&= N-2+\frac{2+q}{1-pq}. \end{aligned} \end{equation*} Using the definitions of $X(t)$, $Y(t)$, $Z(t)$ and $W(t)$, we have \begin{equation*} \begin{aligned} \lim_{r\rightarrow \infty}\frac{v(r)}{r^{\frac{q+2}{1-pq}}}&=\lim_{t\rightarrow \infty}(Y(t)Z^{q}(t)W(t))^{\frac{1}{pq-1}}\\ &=(Y_{2}Z^{q}_{2}W_{2})^{\frac{1}{pq-1}}\\ &=\Big[\frac{(2+q)(N(1-pq)+p(2+q))^{q}(N(1-pq)+q(2p+1))}{(1-pq)^{2+q}}\Big]^{\frac{1}{pq-1}}. \end{aligned} \end{equation*} Similarly, \begin{equation*} \begin{aligned} \lim_{r\rightarrow \infty}\frac{u(r)}{r^{\frac{2+2p-pq}{1-pq}}}&=\lim_{t\rightarrow \infty}\frac{(Y(t)Z^{q}(t)W(t))^{\frac{p}{pq-1}}}{X(t)Z(t)}\\ &=\frac{(Y_{2}Z^{q}_{2}W_{2})^{\frac{p}{pq-1}}}{X_{2}Z_{2}}\\ &=\Big[\frac{(2+q)(N(1-pq)+p(2+q))^{1/p}(N(1-pq)+q(2p+1))}{(1-pq)^{\frac{2p+2-pq}{p}}(2+2p-pq)^{\frac{pq-1}{p}}}\Big]^{\frac{p}{pq-1}}. \end{aligned} \end{equation*} \noindent{\bf Acknowledgement.} This work is part of the author's PhD thesis and has been carried out with the financial support of the Research Demonstratorship Scheme offered by the School of Mathematical Sciences, University College Dublin.
\section{Introduction} Here I discuss three important ways to tackle pulse propagation in nonlinear optics. These include methods that both do and do not follow the traditional approach of using pulse envelopes. The description is taken in the 1D limit, but some discussion on including transverse effects is made. The aim is to cover the considerations relevant when modeling wideband fields, a regime not comprehensively treated in many standard texts \cite{Agrawal-NFO,Shen-PNLO,Boyd-NLO,Yariv-QE,Haus-WFOE,Siegman-Lasers}. The three ways are solving (a) Maxwell's equations, (b) directional Maxwell's equations, or (c) the standard second order wave equation. Solving Maxwell's equations is a well established approach, with a long history (e.g. finite difference time domain (FDTD); see e.g. \cite{Gilles-HV-2000jcp}), although it is computationally intensive and has generally been little used in nonlinear optics (but see e.g. \cite{Flesch-PM-1996prl,Gilles-MV-1999pre,Tyrrell-KN-2005jmo}). Practical versions of directional Maxwell's equations have appeared only recently, such as that of Kolesik et al. \cite{Kolesik-MM-2002prl,Kolesik-M-2004pre}; other approaches followed \cite{Kinsler-RN-2005pra,Kinsler-2006arXiv-fleck,Mizuta-NOY-2005pra}, the most general being \cite{Kinsler-2010pra-dblnlGpm}. However the first proposal dates back to Fleck in 1970 \cite{Fleck-1970prb}, although only as something of a remark in passing, rather than a full investigation. The most common approaches in nonlinear optics are those based on the standard second order wave equation, particularly with regard to envelope propagation and the celebrated slowly varying envelope approximation (SVEA). The SVEA allows us to convert the second order wave equation into a first order equation that can efficiently propagate narrowband pulses. Recently the SVEA has been relaxed \cite{Brabec-K-1997prl,Porras-1999pra,Kinsler-N-2003pra}, extending the use to moderate bandwidths.
However, much better approaches based on factorizing the second order wave equation also exist. An early example can be seen in \cite{Shen-PNLO}, but also most notably by Blow and Wood \cite{Blow-W-1989jqe}, and also the recent Ferrando et al. \cite{Ferrando-ZCBM-2005pre} and Genty et al. \cite{Genty-KKD-2007oe}; the most general formulation, even allowing for magnetic effects, is that of Kinsler \cite{Kinsler-2010pra-fchhg}. We can try to solve any of these equations directly, without recourse to an envelope and carrier representation. This means ensuring sufficient numerical resolution to integrate each of the field oscillations as it propagates across the simulation window. This approach is the standard one when solving Maxwell's equations (i.e. FDTD), but generally in nonlinear optics an envelope approach is used. This has a number of advantages: it imposes a direction on the modeled pulse, and it removes the fast oscillations at the centre frequency. In combination with a moving frame, it can turn a pulse of rapidly oscillating fields moving at the speed of light into a smooth, nearly-stationary waveform -- with commensurate gains in simulation speed. These benefits usually come with a restriction on the allowed bandwidth of the pulse being modeled. This paper is organized as follows: in section \ref{S-envelopes} I compare field and envelope approaches. In section \ref{S-maxwell} I consider Maxwell's equations in both field and envelope pictures, followed in section \ref{S-directional} by the same, but utilizing a directional rewriting of Maxwell's equations. Then, in section \ref{S-secondorder} I consider the role of second order wave equations, in particular using factorization methods. Finally, in section \ref{S-conclusion} I present some conclusions. Although not directly relevant to the discussion here, it is also worth noting that directional waves in unstable resonators were quantized by Brown and Dalton\cite{Brown-D-2002jmo}. 
\section{Fields vs Envelopes}\label{S-envelopes} It is often remarked upon that envelope methods work surprisingly well. However, this surprise seems to be based largely upon the SVEA equation for pulse propagation, which indeed contains many approximations (see \cite{Kinsler-2002-fcpp,Kinsler-N-2003pra}). Recently it has been shown by several groups \cite{Kolesik-MM-2002prl,Ferrando-ZCBM-2005pre,Kinsler-RN-2005pra,Mizuta-NOY-2005pra} that equations nearly identical to those generated by the SVEA can be found by assuming little more than the lack of backward-going field components. Even when revisiting Blow and Wood \cite{Blow-W-1989jqe}, we can see that their mathematics contained minimal constraints on the bandwidth of the envelope, although their specific nonlinearity model did contain such restrictions. \subsection{The definition }\label{Ss-envelopes-def} For envelope methods, the direction is imposed by the form for the carrier function, and is usually a plane wave traveling in the chosen direction. Thus the typical envelope and carrier representation of some field $Q$ is ~ \begin{eqnarray} Q(t; z) &=& A(t; z) e^{\imath\left(k_0 z-\omega_0 t\right)} + A^*(t;z) e^{-\imath\left(k_0 z-\omega_0 t\right)} , \label{eqn-Q-env0} \\ \tilde{Q}(\omega;z) &=& \tilde{A}(\omega+\omega_0; z) e^{\imath k_0 z } + \tilde{A}^*(\omega-\omega_0; z) e^{-\imath k_0 z } . \label{eqn-Q-env} \end{eqnarray} In some of the following equations, I will shorten the argument of the exponential in the carrier function with $\Xi = \imath\left(k_0 z-\omega_0 t\right)$. It is worth noting that we are not required to use carrier functions with the usual exponential form \cite{Gabor-1946jiee}, e.g. in semiconductor physics, Bloch functions are routinely used as carriers to form a basis for electron (or hole) envelope functions. Note that it is {\em approximations} that restrict the validity of envelope approaches, not the use of them in itself. 
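As a concrete illustration of the envelope-carrier representation in eqn. (\ref{eqn-Q-env0}), the following numerical sketch (with all parameter values chosen arbitrarily) constructs the real field from a Gaussian envelope at $z=0$, then recovers the envelope by demodulation and low-pass filtering:

```python
import numpy as np

# Sketch only: build the real field Q(t) = A(t) exp(-i w0 t) + c.c. at
# z = 0 from a Gaussian envelope, per the envelope-carrier representation,
# then recover the envelope by demodulating and low-pass filtering.
# All parameter values (w0, envelope width, grid) are arbitrary choices.
dt = 0.01
t = np.arange(-10.0, 10.0, dt)
w0 = 50.0                          # carrier frequency >> envelope bandwidth
A = np.exp(-t**2 / 2)              # smooth envelope
Qc = A * np.exp(-1j * w0 * t) + np.conj(A) * np.exp(1j * w0 * t)
assert np.allclose(Qc.imag, 0.0)   # the two terms sum to a real field
Q = Qc.real

# demodulate: Q exp(+i w0 t) = A + A* exp(+2i w0 t); a crude spectral
# low-pass filter then removes the term oscillating at 2 w0
D = Q * np.exp(1j * w0 * t)
Df = np.fft.fft(D)
w = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)
Df[np.abs(w) > w0] = 0.0
A_rec = np.fft.ifft(Df)

err = np.max(np.abs(A_rec - A))
print(err)                         # envelope recovered to high accuracy

# the envelope varies far more slowly than the field oscillations do
assert np.max(np.abs(np.diff(A))) * 10 < np.max(np.abs(np.diff(Q)))
```

The final assertion makes the point of the next subsection quantitative: the sample-to-sample variation of the envelope is far smaller than that of the field, so a coarser numerical grid suffices.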
This is contrary to the impression that might be gained from SVEA approaches, and even their generalizations \cite{Brabec-K-1997prl,Kinsler-N-2003pra}. The potential benefits of envelopes are not tied to restrictions on the bandwidth of the pulse being modeled. \subsection{The big advantage}\label{Ss-envelopes-advantange} Replacing real fields $E$ and $H$ with an envelope-carrier description gives us at least one clear advantage: it removes the dominant contribution to the underlying field oscillations. The resulting smoother envelope is therefore easier, and much less computationally expensive to propagate. It is the rapidity of the fastest time-domain modulation of the field or envelope which constrains our time resolution, and the rapidity of the fastest spatial modulation which constrains the spatial resolution. Note that although we usually hope that our envelope will then have a relatively slowly varying form, the mere replacement of the EM fields by envelope-carrier combinations imposes of itself no approximations whatsoever. Two processes may act to twist an initially smooth envelope into something more problematic. First, linear dispersion can add chirp, which manifests itself as a nonlinear phase ramp across the pulse. These are usually relatively smooth changes, and cause little problem. Second, there are nonlinear effects. Some, such as self phase modulation (SPM), can be relatively mild; others, such as coupling to backward propagating waves and harmonic generation, can impose significant oscillations. \subsection{Nonlinear polarization terms}\label{Ss-envelopes-nonlniearity} As mentioned above, nonlinear processes can generate oscillatory contributions to the envelope.
We can see how this occurs by considering an instantaneous third order nonlinearity, which depends on $E^3$ and has the form ~ \begin{eqnarray} E^3(t; z) &=& \left[ A(t; z) e^{\Xi} + A^*(t;z) e^{-\Xi} \right]^3 \label{eqn-envelopes-nonlinear3-a} \\ &=& A(t; z)^3 e^{+3\Xi} + 3 A(t; z)^2 A^*(t;z) e^{+\Xi} + \text{c.c} \label{eqn-envelopes-nonlinear3-b} . \end{eqnarray} Here we see a useful side-effect of the envelope-carrier representation -- that nonlinear terms can be separated into convenient components. In this example, the full $\chi^{(3)}$ nonlinearity splits into a third harmonic generation (THG) term proportional to $A^3$, and an SPM term proportional to $\left|A\right|^2 A$; along with complex conjugate counterparts (c.c). Clearly the THG term is non-resonant with the chosen carrier, and keeping such non-resonant nonlinear terms will impose significant oscillations onto our envelope as it propagates. Such oscillations break approximations relying on a relatively smooth envelope, which is why in SVEA models they are discarded; however there is no {\em a priori} requirement to do so. E.g., in the wideband Raman model of Kinsler and New \cite{Kinsler-N-2005pra,Kinsler-2005eWBRAM}, the Stokes and anti-Stokes fields appeared as sidebands on the envelope spectrum. \subsection{Multiple carriers, wideband envelopes}\label{Ss-envelopes-multi} We can generalize from a single envelope-carrier pair by using multiple envelopes with carriers at different carrier frequencies. However, each added envelope greatly increases the number of individual polarization terms resulting from a nonlinearity, so as a rule it is best to use the minimum number possible. As an exercise, just calculate the number of terms in an eqn. (\ref{eqn-envelopes-nonlinear3-a}) derived from a field defined as $E = A_1 e^{+\Xi_1} + A_2 e^{+\Xi_2} + \text{c.c}$, not $E = A_1 e^{+\Xi_1} + \text{c.c}$!
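Both the splitting in eqns. (\ref{eqn-envelopes-nonlinear3-a})--(\ref{eqn-envelopes-nonlinear3-b}) and the term-counting exercise above can be checked with a few lines of code; the sample values of $A$ and $\Xi$ below are arbitrary:

```python
import cmath
from itertools import combinations_with_replacement

# Spot-check (at one arbitrary sample point) that the cube of the real
# field separates exactly into the THG and SPM terms plus their c.c.
A = 0.7 - 0.2j                      # arbitrary complex envelope value
Xi = 1j * 2.3                       # arbitrary carrier phase i(k0 z - w0 t)
E = A * cmath.exp(Xi) + A.conjugate() * cmath.exp(-Xi)

thg = A**3 * cmath.exp(3 * Xi)               # third harmonic generation
spm = 3 * abs(A)**2 * A * cmath.exp(Xi)      # self phase modulation
assert abs(E**3 - (thg + spm + (thg + spm).conjugate())) < 1e-12

# The exercise: each distinct polarization term in E^3 corresponds to a
# multiset of three factors drawn from the summands of E.  A single
# carrier gives 2 summands; two carriers give 4.
n_two = len(list(combinations_with_replacement("ab", 3)))
n_four = len(list(combinations_with_replacement("abcd", 3)))
print(n_two, n_four)    # 4 versus 20 distinct polarization terms
```

So moving from one to two carriers multiplies the number of distinct polarization terms by five, which is the bookkeeping burden the text warns about.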
Multiple carriers work best when there are multiple narrowband fields which resonantly interact, such as in an optical parametric amplifier \cite{Shen-PNLO,Boyd-NLO} or for Raman processes \cite{Kinsler-N-2005pra,Kinsler-2005eWBRAM}. In such cases we can ruthlessly discard nonlinear polarization terms which are not perfectly in resonance with processes of our choosing. Another use for multiple carriers is for including both forward and backward propagating fields. If we use multiple carriers, and also allow wideband envelopes, then it is possible for multiple envelopes to cover the same piece of spectrum. In a continuous mathematical description this overlap will always happen, but in a discrete or numerical implementation it will depend on our parameters. This overlap is not necessarily a problem, as long as we are careful about assigning polarization terms to whichever envelope equation we choose -- we must make sure not to add the same term twice, for example. This can also lead to a non-unique description, in the case where a polarization term could equally well drive the evolution of one of two (or more) envelopes. Nevertheless, such non-uniqueness does not break our model, it just allows us a choice which we might be able to use to our advantage. In a simulation, we could try to assist our numerics by managing the envelope spectra by generating a total spectrum, and then reassigning components in the overlap region according to some smoothing procedure. The use of wideband envelopes can raise some interesting issues. For example, if the bandwidth of an envelope is greater than its carrier frequency, then the envelope will extend into negative frequencies.
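This situation is easy to reproduce numerically. In the sketch below (plain numpy, arbitrary pulse parameters; numpy's transform convention places the carrier term $e^{-\imath\omega_0 t}$ at negative frequency), the envelope bandwidth exceeds the carrier frequency, so the spectrum of the envelope-carrier product straddles zero frequency -- yet the reconstructed field remains strictly real, since the conjugate term supplies the mirror-image components.

```python
import numpy as np

N, T = 4096, 200.0
t = np.linspace(0.0, T, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

w0 = 0.5                                  # deliberately small carrier frequency
A = np.exp(-((t - 100.0) / 2.0) ** 2)     # short pulse: bandwidth exceeds w0

# The spectrum of the envelope-carrier product straddles zero frequency
Awc = np.fft.fft(A * np.exp(-1j * w0 * t))
power = np.abs(Awc) ** 2
wrong_side = power[w > 0].sum() / power.sum()   # spectral energy across zero

# ...but the reconstructed field is still strictly real
E = A * np.exp(-1j * w0 * t) + np.conj(A) * np.exp(1j * w0 * t)
```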
This is not a problem for our physical model, since we still have to reconstruct the field from the envelopes and carriers, and those negative frequency components are matched by complementary positive frequency ones from the complex conjugate of the envelope\footnote{That is, they {\em should} be so matched. If they aren't, you've done something wrong.}. Again, we might consider a spectral management scheme which swaps these unexpected components over, restoring the envelope to pure positive frequency content (and hence its conjugate to pure negative frequency content). However, we then find that at zero-frequency we have introduced a hard cutoff in the envelope spectra, and so induced unwanted oscillations in the time domain version of the envelope. In practice, while such spectral management might seem to offer advantages, it makes little difference, and adds needless complication to simulation code. I would consider it only if some unexpected interaction was generating significant spectral content near the band edge of an envelope, at a position where it would be well within the spectral range of some other envelope; and even then it might be easier to simply increase the envelope bandwidth. \subsection{Directionality}\label{Ss-envelopes-directionality} A carrier imposes a direction of propagation, and most carrier-based models silently neglect even the possibility of backward propagating fields, even though there is a coupling between them. However, Casperson\cite{Casperson-1991pra} used both forward and backward carriers to construct an envelope model with a separation of the forward and backward field components and interactions. The more recent paper of Sanborn et al. \cite{Sanborn-HD-2003josab} used the same approach. Backward traveling components, if forced onto a forward traveling envelope, will appear as non-resonant terms. If identified correctly, these can then be discarded.
See also section \ref{S-forwardbackward} for a discussion of the coupling between forward and backward waves which is induced by a nonlinearity. \subsection{Moving frames}\label{Ss-envelopes-moving} In combination with a suitable moving frame, an envelope representation can turn a pulse of rapidly oscillating fields moving at the speed of light into a smooth, nearly-stationary waveform -- with commensurate gains in simulation speed. However, we need to guarantee that all contributions from backward traveling components are removed, otherwise the envelope will contain oscillatory components moving at approximately twice the frame speed. A typical moving frame for a frame speed $v$ is defined by $t' = t - z / v $. Thus the spatial derivatives in propagation equations are altered using ~ \begin{eqnarray} \partial_{z} &=& \partial_{z'} - \frac{1}{v} \partial_t \end{eqnarray} \subsection{Estimating the computational cost}\label{Ss-envelopes-compute} Consider a wideband pulse, with a bandwidth of the order of its centre frequency $\omega_0$. In a full-field approach, this will have the fastest modulations of the field being of the order $2\omega_0$. In comparison, an envelope approach results in the fastest modulations on the envelope being of the order $\omega_0$. Simplistically we might then hope that the envelope approach allows us to halve our time and space resolutions whilst still retaining numerical accuracy. For narrow band fields the advantage is much clearer -- a bandwidth of $\omega_0/100$ might allow resolutions to be coarsened by a factor of 100. For fields of a wider bandwidth, we gain little advantage, unless we shift our carrier frequency $\omega_0$ to the centre of the spectrum, even if that is not co-incident with the dominant frequency component. A more comprehensive examination of the effects of numerical resolution has been given for the nonlinear Schr\"odinger equation by Sinkin et al. \cite{Sinkin-HZM-2003jlt}.
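The resolution argument can be checked directly: demodulating to an envelope moves the occupied spectrum down to baseband, so the highest significant frequency -- which is what fixes the usable time step -- drops from roughly $\omega_0$ plus the envelope bandwidth to the envelope bandwidth alone. A numpy sketch with arbitrary narrowband parameters:

```python
import numpy as np

N, T = 4096, 200.0
t = np.linspace(0.0, T, N, endpoint=False)
w0 = 10.0                              # carrier frequency (rad per time unit)
A = np.exp(-((t - T / 2) / 8.0) ** 2)  # slowly varying envelope
E = A * np.cos(w0 * t)                 # real field

w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

def max_occupied_freq(x, rel=1e-6):
    """Largest |frequency| holding significant spectral content."""
    X = np.abs(np.fft.fft(x))
    return np.abs(w[X > rel * X.max()]).max()

f_field = max_occupied_freq(E)   # sits near w0 plus the envelope bandwidth
f_envel = max_occupied_freq(A)   # baseband: the envelope bandwidth alone
```

For this narrowband example the envelope's occupied band is many times smaller than the field's, and the time resolution could be coarsened accordingly.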
Note that since the linear response (dispersion) of the medium can be done exactly in the frequency domain, regardless of step-size, it might seem more appropriate to focus more on the role of the nonlinear response when estimating the necessary temporal and spatial resolutions. However the accuracy of a propagation method cannot be easily evaluated whilst ignoring the dispersive propagation, because both dispersive and nonlinear effects occur simultaneously. Even if a split step method is used, they are interleaved, and their effects cannot be disentangled. \subsection{Disadvantages}\label{Ss-envelopes-disadvantage} The slight disadvantage of using envelopes is that real-valued, time-dependent fields are replaced by complex-valued envelopes. This doubles the amount of storage used during computations, and also requires the use of complex Fourier transforms rather than the faster real ones; although of course the spectra of the fields are complex in any case. In practice, the computational cost is small, although the complexity of the simulation code is increased. \section{Maxwell's equations}\label{S-maxwell} When propagating fields in free space, we use the source-free Maxwell's equations. To simplify the description we transform their time-like behaviour into frequency space. This enables us to write the convolutions required to model the linear time-response of the medium (e.g. dispersion) as multiplications. However, since the form of the nonlinear response is not simplified by this process, a convolution in frequency space appears. In frequency space, time derivatives convert to factors of $-\imath \omega$, so the equations are ~ \begin{eqnarray} \partial_z \tilde{H}_y(\omega;z) &=& - \imath \omega \tilde{\epsilon}(\omega') \star \tilde{E}_x(\omega;z) , \label{eqn-firstorder-dtE} \\ \partial_z \tilde{E}_x(\omega;z) &=& - \imath \omega \tilde{\mu}(\omega') \star \tilde{H}_y(\omega;z) .
\label{eqn-firstorder-dtH} \end{eqnarray} The ``$\star$'' denotes a convolution, ~ \begin{eqnarray} Q(\tau) \star P(t) &=& \int Q(\tau) P(t-\tau) d\tau ~ = \mathscr{F}^{-1} \left[ \tilde{Q}(\omega) \tilde{P}(\omega) \right] . ~~~~ \end{eqnarray} A rather nice way to scale these equations is to define suitable $\epsilon_n$ and $\mu_n$ corresponding to a suitably chosen refractive index; since the media of interest are usually non-magnetic, $\mu_n$ will usually be $\mu_0$. This means $c_n = 1 / \sqrt{\epsilon_n \mu_n}$, $k = \omega / c_n$, $\tilde{\epsilon}_n = \tilde{\epsilon}(\omega')/\epsilon_n$, and $\tilde{\mu}_n = \tilde{\mu}(\omega')/\mu_n$. We then define $e = \sqrt{\epsilon_n} E$ and $h = \sqrt{\mu_n} H$, which ensures $e$ and $h$ are of comparable sizes. This gives us the scaled Maxwell's equations ~ \begin{eqnarray} \partial_z \tilde{h}_y(\omega;z) &=& - \imath k \tilde{\epsilon}_n(\omega') \star \tilde{e}_x(\omega;z) , \label{eqn-firstorder-dtEE} \\ \partial_z \tilde{e}_x(\omega;z) &=& - \imath k \tilde{\mu}_n(\omega') \star \tilde{h}_y(\omega;z) . \label{eqn-firstorder-dtHH} \end{eqnarray} It is worthwhile comparing this scaling with that from the directional fields approach in section \ref{S-directional}; with the correspondences $\sqrt{\epsilon_n} \leftrightarrow \alpha_r$ and $\sqrt{\mu_n} \leftrightarrow \beta_r$. For our purposes, there are two main ways to solve Maxwell's equations: either FDTD\cite{Gilles-HV-2000jcp} or Pseudo-Spectral Spatial Domain (PSSD)\cite{Tyrrell-KN-2005jmo}. In FDTD we propagate forward in time, holding the fields $E(z), H(z)$ as a function of space. However, in nonlinear optics, it is more convenient to use PSSD, where we propagate forward in space, holding the fields $E(t), H(t)$ as a function of time. Under PSSD derivatives are calculated pseudospectrally \cite{Fornberg-PSmethods}.
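A pseudospectral time derivative is just a transform, a multiplication by the frequency factor, and a transform back. The sketch below (numpy; note that numpy's FFT sign convention makes the derivative factor $+\imath\omega$ where the text, with the opposite transform convention, has $-\imath\omega$) checks the result against the analytic derivative of an arbitrary test pulse.

```python
import numpy as np

N, T = 2048, 100.0
t = np.linspace(0.0, T, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

def dt_spectral(f):
    """Pseudospectral time derivative on a periodic grid.

    With numpy's FFT sign convention the derivative factor is +i*w;
    the text's -i*w corresponds to the opposite transform convention."""
    return np.fft.ifft(1j * w * np.fft.fft(f))

# Check against the analytic derivative of a Gaussian-windowed carrier
f = np.exp(-((t - 50.0) / 6.0) ** 2) * np.cos(3.0 * t)
df_exact = (np.exp(-((t - 50.0) / 6.0) ** 2)
            * (-3.0 * np.sin(3.0 * t)
               - 2.0 * (t - 50.0) / 36.0 * np.cos(3.0 * t)))
df_num = dt_spectral(f).real
```

For a smooth, well-resolved pulse the agreement is at machine precision -- the spectral derivative is exact for band-limited data, which is what makes the PSSD approach attractive.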
However, its most important feature is that the entire time-history (and therefore frequency content) of the pulse is known at any point in space, so applying even arbitrary dispersion incurs no extra computational penalty. In contrast, FDTD (or other temporally propagated methods) must use convolutions or time-response models for dispersion. Although spatially propagated simulations (e.g. PSSD) make it difficult to incorporate reflections properly, this is not a significant constraint as most such simulations are only interested in uni-directional propagation anyway. For example, in a 1D medium with linear dispersive properties defined by $\epsilon_r$, $\mu_r$, containing a third order $\chi^{(3)}$ nonlinearity defined by $\epsilon_c$, the equations are ~ \begin{eqnarray} \partial_z \tilde{H}_y(\omega;z) &=& - \imath \omega \tilde{\epsilon}_r(\omega') \tilde{E}_x(\omega;z) \nonumber \\ && - \imath \omega \left\{ \tilde{\epsilon}_c(\omega') . \mathscr{F} \left[ E_x(t;z)^2 \right] (\omega) \right\} \star \tilde{E}_x(\omega;z) , ~~~~ \label{eqn-pssd-dH} \\ \partial_z \tilde{E}_x(\omega;z) &=& - \imath \omega \tilde{\mu}_r(\omega') \tilde{H}_y(\omega;z) , \label{eqn-pssd-dE} \end{eqnarray} where $\mathscr{F}[Q(t)](\omega)$ denotes the Fourier transform of some function $Q(t)$. This model allows for the time-response of the nonlinearity, and is thus applicable to (weakly coupled) Raman systems as well. Note that the terms dependent on $\epsilon_r$ and $\mu_r$ are simple products.
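The bracketed nonlinear term in eqn. (\ref{eqn-pssd-dH}) mixes a pointwise product with a convolution, so the bookkeeping is worth checking. The numpy sketch below (with an arbitrary made-up response function $\tilde{\epsilon}_c$) confirms that the spectrum of $\left\{\epsilon_c \star E^2\right\} E$ equals the circular convolution of $\tilde{\epsilon}_c \mathscr{F}[E^2]$ with $\tilde{E}$, up to the $1/N$ normalization of the discrete transform.

```python
import numpy as np

N = 1024
t = np.linspace(0.0, 50.0, N, endpoint=False)
E = np.exp(-((t - 25.0) / 4.0) ** 2) * np.cos(2.0 * t)   # arbitrary test field

# An arbitrary (hypothetical) smooth nonlinear response, given in frequency space
w = 2 * np.pi * np.fft.fftfreq(N, d=t[1] - t[0])
eps_c = 1.0 / (1.0 + (w / 5.0) ** 2)

def circ_conv(X, Y):
    """Circular convolution of two spectra, via the convolution theorem."""
    return np.fft.fft(np.fft.ifft(X) * np.fft.ifft(Y)) * N

# Time domain: r(t) = (eps_c conv E^2)(t), then the pointwise product r(t) E(t)
r = np.fft.ifft(eps_c * np.fft.fft(E ** 2))
lhs = np.fft.fft(r * E)

# Frequency domain: { eps_c F[E^2] } convolved with F[E], with the DFT's 1/N
rhs = circ_conv(eps_c * np.fft.fft(E ** 2), np.fft.fft(E)) / N
```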
The linear dispersion combined with a time-dependent third order nonlinearity gives a permittivity function which would be written ~ \begin{eqnarray} \epsilon(\tau, t) &=& \epsilon_r(\tau) + \epsilon_c(\tau) \star E(t)^2 \label{eqn-epsilon-t} \\ \epsilon(\tau, t) \star E(t) &=& \epsilon_r(\tau) \star E(t) + \left\{ \epsilon_c(\tau) \star E^2 (t) \right\} E(t) \label{eqn-epsilon-t-E-t} \\ \tilde{\epsilon}(\omega') \star \tilde{E}(\omega) &=& \tilde{\epsilon}_r(\omega) \tilde{E}(\omega) \nonumber \\ && ~~ + \left\{ \tilde{\epsilon}_c(\omega') \mathscr{F} \left[ E^2 (t) \right] (\omega') \right\} \star \tilde{E}(\omega) \label{eqn-epsilon-w-E-w} . \end{eqnarray} In the case of instantaneous nonlinearity, $\tilde{\epsilon} \tilde{E} = \tilde{\epsilon}_r \tilde{E} + \epsilon_c \mathscr{F} \left[E^3\right]$. A simple and efficient way to propagate these equations is using staggered $E$ and $H$ fields, which allows us to use an Euler-like integration for each field whilst achieving second-order accuracy \cite{Yee-1966tap}. However, while the $E$ and $H$ fields necessary for a forward propagating pulse are easy to determine when $E$ and $H$ are co-incident, on a staggered grid we need to use correspondingly staggered initial conditions, or else we get a significant backward propagating component. Even with correctly staggered initial conditions, we see a small spurious backward component, the size of which depends on the time step. This backward pulse is hard to get rid of completely, but it can be filtered in the time domain when the two pulses have propagated apart far enough. Another point to consider, particularly when generating the initial conditions for very short pulses, is the zero-force condition \cite{ZFC}. This can be easily satisfied by deriving the $E$ and $H$ fields for the pulse from a suitable vector potential, rather than simply assuming a form for the $E$ field.
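The effect of the initial conditions on the backward component can be seen in a toy version of such a scheme. The sketch below (numpy; vacuum in scaled units with $c=1$ and unit impedance, arbitrary pulse parameters) steps the fields forward in $z$ with staggered Euler updates and pseudospectral time derivatives. Launching the matched pair $H=E$ leaves only a tiny residue in the backward-moving window, whereas launching with $H=0$ puts half of the pulse there.

```python
import numpy as np

N, T = 1024, 200.0
t = np.linspace(0.0, T, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

def dt_spec(f):
    # pseudospectral time derivative (numpy sign convention)
    return np.fft.ifft(1j * w * np.fft.fft(f)).real

def propagate(E, H, Z, dz):
    """Staggered Euler steps for dE/dz = -dH/dt, dH/dz = -dE/dt (c = 1)."""
    H = H - 0.5 * dz * dt_spec(E)          # half-step: stagger H to z + dz/2
    for _ in range(int(round(Z / dz))):
        E = E - dz * dt_spec(H)
        H = H - dz * dt_spec(E)
    return E

pulse = np.exp(-((t - 60.0) / 5.0) ** 2) * np.cos(2.0 * t)
Z = 40.0                                   # forward pulse ends up near t = 100

E_matched = propagate(pulse, pulse, Z, dz=0.05)          # H = E: forward only
E_unmatched = propagate(pulse, 0.0 * pulse, Z, dz=0.05)  # H = 0: half goes backward

back = (t > 10.0) & (t < 30.0)             # where a backward pulse would arrive
fwd = (t > 90.0) & (t < 110.0)
back_matched = np.abs(E_matched[back]).max()
back_unmatched = np.abs(E_unmatched[back]).max()
```

The small residue in the matched case shrinks with the step size, echoing the text's observation that the spurious backward component depends on the discretization.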
When considering the solution of these Maxwell's equations, it is useful to partly calculate the time derivative of eqn.(\ref{eqn-epsilon-t-E-t}). For dispersion and a time response $\chi^{(3)}$ this gives us three terms, ~ \begin{eqnarray} \partial_t \left( \epsilon(\tau, t) \star E(t) \right) &=& \partial_t \epsilon_r(\tau) \star E(t) \nonumber \\ && ~~ + \left( \partial_t \left\{ \epsilon_c(\tau) \star E^2 (t) \right\} \right) E(t) \nonumber \\ && ~~ ~~ + \left\{ \epsilon_c(\tau) \star E^2 (t) \right\} \left( \partial_t E(t) \right) . \label{eqn-dt-epsilon-t-E-t} \end{eqnarray} Thus we see that to solve the equations, we will need to calculate the derivatives of three terms: the usual dispersive term, the time response term, and the field. We will also need to retain the value of the time response term as well. Since the time response term contains a convolution, it is best calculated in the frequency domain, which is particularly convenient when using pseudospectral derivatives. We will need {\em two} FFT's to transform $E$ and $E^2$ into frequency space. There we construct the dispersion term and the time response term by simple multiplications, and set up arrays for the derivatives by multiplying by $-\imath \omega$. We then need {\em four} back transforms for a total of six in all: one more for the time response, and three for the time derivatives of the dispersion, time response, and field. For an instantaneous nonlinearity, we need only three FFT's: two forward transforms (for $E$ and $E^3$), and one back transform for the combined derivative. In addition to these six (or three) FFT's needed to solve the $\partial_z H$ equation, the $\partial_z E$ equation requires another two, for a total of eight (or five). \subsection{Envelopes}\label{Ss-maxwell-envelopes} Although it is not often done, we can represent Maxwell's equations using an envelope and carrier representation.
We express the fields $E$ and $H$ using ~ \begin{eqnarray} E_x(t;z) &=& A(t;z) e^{\imath\left(k_0 z-\omega_0 t\right)} + A^*(t;z) e^{-\imath\left(k_0 z-\omega_0 t\right)} , \label{eqn-E-env} \\ H_y(t;z) &=& F(t;z) e^{\imath\left(k_0 z-\omega_0 t\right)} + F^*(t;z) e^{-\imath\left(k_0 z-\omega_0 t\right)} . \label{eqn-H-env} \end{eqnarray} We insert these into the Maxwell's equations above, separate out the normal and complex conjugate (c.c.) parts, cancel the carrier exponentials present on both sides of the equations, and rearrange to leave only $\partial_z$ terms on the LHS, ~ \begin{eqnarray} \partial_z \tilde{F}(\omega;z) &=& - \imath \omega \tilde{\epsilon} (\omega') \star \tilde{A}(\omega;z) - \imath k_0 \tilde{F}(\omega;z) , \label{eqn-envel-B} \\ \partial_z \tilde{A}(\omega;z) &=& - \imath \omega \tilde{\mu} (\omega') \star \tilde{F}(\omega;z) - \imath k_0 \tilde{A}(\omega;z) . \label{eqn-envel-A} \end{eqnarray} Of course there is still much detail hidden in the permittivity $\tilde{\epsilon}$, since it contains the nonlinearity. Consequently, I do not apply this envelope definition to a general equation of motion, because how $\epsilon$ is expressed usually depends on the field, and therefore on those envelopes. Starting with eqn.(\ref{eqn-epsilon-t-E-t}), and expanding ${\epsilon}$ with terms for both (linear) dispersion $\epsilon_r$ and a time dependent $\chi^{(3)}$ nonlinearity ($\epsilon_c$) gives ~ \begin{eqnarray} \epsilon(\tau, t) \star A(t) e^{+\Xi} &+& \textrm{c.c.} \nonumber \\ &=& \epsilon_r(\tau) \star \left\{ A(t) e^{+\Xi} + A(t)^* e^{-\Xi} \right\} \nonumber \\ && + \left( \epsilon_c(\tau) \star \left\{ A(t)^2 e^{+2\Xi} \right. \right. \nonumber \\ && \left. \left.
+ 2 A(t) A(t)^* + A(t)^{*2} e^{-2\Xi} \right\} \right) \nonumber \\ && \times \left\{ A(t) e^{+\Xi} + A(t)^* e^{-\Xi} \right\} \\ \epsilon(\tau,t) \star A(t) e^{+\Xi} &=& \epsilon_r(\tau) \star A(t) e^{+\Xi} \nonumber \\ && + 2 \left\{ \epsilon_c(\tau) \star \left|A(t)\right|^2 \right\} A(t) e^{+\Xi} \nonumber \\ && + \left\{ \epsilon_c(\tau) \star A(t)^2 \right\} A(t)^* e^{+\Xi} \nonumber \\ && + \left\{ \epsilon_c(\tau) \star A(t)^2 \right\} A(t) e^{+3\Xi} \\ \tilde{\epsilon}(\omega+\omega_0) \tilde{A}(\omega) &=& \tilde{\epsilon}_r(\omega+\omega_0) \tilde{A}(\omega) \nonumber \\ && + 2 \left\{ \tilde{\epsilon}_c(\omega'+\omega_0) \mathscr{F} \left[ \left|A(t)\right|^2 \right] (\omega') \right\} \star \tilde{A}(\omega) \nonumber \\ && + \left\{ \tilde{\epsilon}_c(\omega'+\omega_0) \mathscr{F} \left[ A(t)^2 \right] (\omega') \right\} \star \tilde{A}(\omega)^* \nonumber \\ && + \left\{ \tilde{\epsilon}_c(\omega'+3\omega_0) \mathscr{F} \left[ A(t)^2 \right] (\omega') \right\} \star \tilde{A}(\omega) . \nonumber \\ \label{eqn-epsilon-envelope} \end{eqnarray} We can see in eqn. (\ref{eqn-epsilon-envelope}) that the first three of the terms (one dispersion and two SPM-like) are resonant with the chosen envelope, but the last (third harmonic generation) is not, and it modulates the envelope at $2\omega_0$, (and subsequently the propagation by $\sim2k_0$). Note the form of the second SPM-like term, which needs contributions from two $A(t)$'s and one $A(t)^*$ to have the correct frequency dependence, but the convolution is with the $A(t)^*$ and not an $A(t)$ as might be expected.
Note that in the case of instantaneous $\chi^{(3)}$, the third RHS term reduces to $\epsilon_c \left|A(t)\right|^2 A(t)$, giving ~ \begin{eqnarray} \epsilon(\tau) \star A(t) e^{+\Xi} &=& \epsilon_r(\tau) \star A(t) e^{+\Xi} + 3 \epsilon_c \left|A(t)\right|^2 A(t) e^{+\Xi} \nonumber \\ && + \epsilon_c A(t)^3 e^{+3\Xi} , \\ \tilde{\epsilon}(\omega'+\omega_0) \star \tilde{A}(\omega) &=& \tilde{\epsilon}_r(\omega+\omega_0) \tilde{A}(\omega) \nonumber \\ && + 3 \epsilon_c \mathscr{F} \left[ \left|A(t)\right|^2 A(t) \right] (\omega) \nonumber \\ && + \epsilon_c \mathscr{F} \left[ A(t)^3 \right] (\omega) . \end{eqnarray} This expression can then be substituted directly into eqns. (\ref{eqn-envel-B},\ref{eqn-envel-A}). In this formulation, we have made no ``slowly varying'' approximation like those in traditional approaches \cite{Agrawal-NFO,Shen-PNLO,Boyd-NLO,Yariv-QE,Haus-WFOE,Siegman-Lasers}, or in the variously corrected extensions\cite{Brabec-K-1997prl,Kinsler-N-2003pra}. The price we pay is having two envelopes instead of one, since now the magnetic field is explicitly retained. Also, the model still contains backward propagating components; which, with the chosen carrier functions, will impress oscillations at $2 \omega_0$ on the envelope, and oscillations of $2 k_0$ on the propagation, placing greater demands on our numerics. Unfortunately there is no way to filter these out at any point in the simulation, because their backward propagating nature can only be established by linking the time-like behaviour and space-like propagation of both $E$ and $H$ fields. We cannot always rely on only time-like behaviour to filter them out, because, e.g., both backward propagating terms (at $+k_0$ and $-\omega_0$) and third harmonic generation (at $+3k_0$ and $3\omega_0$) are equally detuned from the carrier (at $+k_0$ and $\omega_0$); although we could do so if we were in a regime where third harmonic generation were negligible.
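The factor of three in the instantaneous SPM term can be confirmed by brute force: cube a sampled field built from an arbitrary (chirped) test envelope and compare it against the four-term split into resonant and third-harmonic parts.

```python
import numpy as np

t = np.linspace(-20.0, 20.0, 2001)
w0 = 2.0
A = np.exp(-t ** 2 / 25.0) * np.exp(0.3j * t ** 2)  # arbitrary chirped envelope
Xi = -1j * w0 * t                                   # carrier phase i(k0 z - w0 t) at z = 0

E = A * np.exp(Xi) + np.conj(A) * np.exp(-Xi)       # real field
thg = A ** 3 * np.exp(3 * Xi)                       # non-resonant THG part
spm = 3 * np.abs(A) ** 2 * A * np.exp(Xi)           # resonant part: 3 |A|^2 A
E_cubed_split = thg + spm + np.conj(thg) + np.conj(spm)
```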
At the start of this subsection, we hoped that dividing out the carrier oscillations would give us a slowly varying pulse envelope, which would then enable us to coarsen our numerical resolution, and speed simulations. This is true, up to a point -- but remember the most likely reason we are using a Maxwell solver is that we want to model a wideband situation. It is the rapidity of the fastest time-domain modulation of the field or envelope which constrains our time resolution, and the rapidity of the fastest spatial modulation which constrains the spatial resolution. We can do better than these Maxwell equations approaches without having to use second order wave equations and their complicated approximations by using directional Maxwell's equations, as described in the next section. \subsection{Transverse effects} \label{S-maxwell-transverse} There are two main transverse effects likely to be of interest in pulse propagation models: mode averaging, and diffraction or off-axis propagation. Mode averaging is easy to incorporate if you assume some known transverse profile for the mode: e.g. for an optical fibre or some other waveguide. The transverse derivatives vanish, and the material properties are evaluated as an integral over the transverse dimensions, weighted by the mode function. Diffraction and off-axis propagation result from a coupling between the vector components of the $E$ and $H$ fields -- including those along the propagation direction. Thus they are much harder to understand, as compared to a paraxial model based on (e.g.) the second order wave equation, although they can be simulated easily enough in a full 4D FDTD code.
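As a concrete illustration of mode averaging (a standard waveguide calculation, not specific to the formalism here): assuming a Gaussian transverse mode $F = \exp(-r^2/w^2)$, weighting a $\chi^{(3)}$ nonlinearity by the mode profile produces an effective area $A_{\text{eff}} = \left(\int F^2 \, dA\right)^2 / \int F^4 \, dA$, which for this profile is exactly $\pi w^2$.

```python
import numpy as np

w_mode = 3.0                         # assumed Gaussian mode radius
L, M = 30.0, 601                     # transverse window and grid points
x = np.linspace(-L / 2, L / 2, M)
X, Y = np.meshgrid(x, x)
F = np.exp(-(X ** 2 + Y ** 2) / w_mode ** 2)   # transverse mode profile

dA = (x[1] - x[0]) ** 2
I2 = (F ** 2).sum() * dA             # mode normalization integral
I4 = (F ** 4).sum() * dA             # weights the nonlinear response
A_eff = I2 ** 2 / I4                 # analytically: pi * w_mode**2
```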
\section{Directional Maxwell's equations}\label{S-directional} To my knowledge, the earliest rewriting of Maxwell's equations in a directional form was by Fleck \cite{Fleck-1970prb}, who treated a dispersionless medium and plane polarized wave. However, the idea was not used beyond its brief appearance there. Fleck constructed his directional fields by combining the sum and difference of the E and H fields, weighted by the square roots of the permittivity $\epsilon$ and permeability $\mu$ respectively. The new combined fields represent the forward and backward traveling components of the total field, and we can derive first-order wave equations for these new fields. In the mid 1990's, the concept was rediscovered and used to evaluate the properties of grating structures by de Sterke, Sipe, and co-workers \cite{Sipe-PD-1994josaa,deSterke-SS-1996pre}, but not applied to pulse propagation. The work considered materials with a spatially varying refractive index, but did not incorporate material dispersion or nonlinearity. This concept of using directional fields for {\em pulse propagation} was not revisited until the work of Kolesik et al. \cite{Kolesik-MM-2002prl,Kolesik-M-2004pre}. After selecting a preferred direction, they then projected out the forward-like and backward-like parts of the propagating fields. This procedure resulted in first order wave equations for the propagation of the forward and backward field components. Subsequent work by Kinsler et al. \cite{Kinsler-RN-2005pra,Kinsler-2006arXiv-fleck}, presented a directional rewriting of Maxwell's equations using a generalized form of Fleck's construction; note also the independent work of Mizuta et al. \cite{Mizuta-NOY-2005pra}. All of these methods use the same basic concept -- use the right combination of $E$ and $H$ fields so as to create a pair of forward and backward-like fields. 
Here I follow the most general formulation that I know of, which is that of Kinsler \cite{Kinsler-2010pra-dblnlGpm}, as developed from earlier work \cite{Kinsler-RN-2005pra,Kinsler-2006arXiv-fleck}. This formulation handles the electric and magnetic properties of the propagation medium on an equal footing, incorporates the dispersive properties of the medium in a very general way, and retains all the vectorial behaviour of the fields. The result is paired first-order equations for the plane-polarized directional fields $G^\pm$ (and a longitudinal component $G^\circ$). Although complicated in the general case, these simplify greatly in the usual case(s) of transverse and/or paraxial propagation regimes. The cost of using these directional fields is that while we can efficiently remove backward propagating contributions, computing the nonlinear terms is more demanding. In contrast, the work of Kolesik et al. and Mizuta et al. is distinguished by a greater emphasis on the practical applications of directional fields. Because these new $G^\pm$ fields are directional, we can efficiently separate out the forward-going part of the field, and neglect the backward. This is an important step, because the standard Maxwell equations based approaches treated in the previous section could not easily remove the backward parts of the field, and these can cause inconvenience in numerical simulations. For example, the spurious backward component caused by imperfect initial conditions should no longer occur.
The definitions of the ${G}^{\pm}$ fields, describing the transverse properties of a plane polarized EM field, in the frequency domain are ~ \begin{eqnarray} \tilde{G}_x^{\pm} (\omega) &=& \tilde{\alpha}_r (\omega) \tilde{E}_x (\omega) \pm \tilde{\beta}_r (\omega) \tilde{H}_y (\omega) . \label{eqn-S-defs-GvectorW} \label{eqn-S-defs-Gvector} \end{eqnarray} The $\tilde{\alpha}_r$ and $\tilde{\beta}_r$ ``reference'' parameters are best chosen to closely match the medium, whilst ignoring nonlinear effects, so that $ \tilde{\alpha}_r(\omega) \tilde{\beta}_r(\omega) = 1 / c(\omega)$. That is, relevant (linear) dispersive properties of the medium are included in the reference parameters, i.e. $\tilde{\alpha}_r(\omega) = \tilde{\epsilon}_r(\omega)^{1/2}$. These parameters are defined via ~ \begin{eqnarray} \tilde{\epsilon} ~~~~ = \tilde{\epsilon}_r(\omega) + \tilde{\epsilon}_c(\omega) &=& \tilde{\alpha}_r^2(\omega) + \tilde{\alpha}_r(\omega) ~ \tilde{\alpha}_c(\omega), \label{eqn-defs-alphaX} \\ \tilde{\mu} ~~~~ = \tilde{\mu}_r(\omega) + \tilde{\mu}_c(\omega) & =& \tilde{\beta}_r^2(\omega) + \tilde{\beta}_r(\omega) ~ \tilde{\beta}_c(\omega), \label{eqn-defs-betaX} \end{eqnarray} where the correction parameters $\tilde{\epsilon}_c$ and $\tilde{\mu}_c$ represent the discrepancy between the true values and the reference. These correction terms will usually just be the nonlinearity. More generally, the smaller these correction terms are, the better the match, and the more likely it is that a description involving only ${G}^{+}$ will suffice. Note also that there are alternative ways of constructing directional ${G}^{\pm}$-like fields \cite{Kinsler-2006arXiv-fleck}.
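The directional character of these combinations is easily demonstrated in the simplest setting. Assuming vacuum-like scaled units ($\tilde{\alpha}_r = \tilde{\beta}_r = 1$, so that a forward wave has $H = +E$ and a backward one $H = -E$), the fields $G^\pm = E \pm H$ cleanly pull apart a superposition of counter-propagating pulses:

```python
import numpy as np

t = np.linspace(0.0, 100.0, 2048, endpoint=False)
fwd = np.exp(-((t - 30.0) / 4.0) ** 2) * np.cos(3.0 * t)        # forward pulse
bwd = 0.5 * np.exp(-((t - 70.0) / 6.0) ** 2) * np.cos(2.0 * t)  # backward pulse

# Forward waves carry H = +E, backward waves H = -E (scaled vacuum units)
E = fwd + bwd
H = fwd - bwd

G_plus = E + H     # alpha_r E + beta_r H, with alpha_r = beta_r = 1
G_minus = E - H
```

In a dispersive medium the same separation holds, but with the frequency-dependent weights $\tilde{\alpha}_r(\omega)$, $\tilde{\beta}_r(\omega)$ applied in the spectral domain.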
In the widely used moving frame defined by $v=1/\alpha_f \beta_f$, where $\partial_{z} Q = \partial_{z'} Q - \alpha_f \beta_f \partial_t Q$, using these $G_x^\pm$ fields gives the (non-magnetic case) propagation equation \cite{Kinsler-RN-2005pra}, ~ \begin{eqnarray} - \partial_{z'} \tilde{G}_x^{\pm} &=& \mp \imath \omega \tilde{\alpha}_r \tilde{\beta}_r \left( 1 \mp \xi \right) ~ \tilde{G}^{\pm} \nonumber \\ && ~~~~ \mp \frac{\imath \omega \tilde{\alpha}_c \tilde{\beta}_r} {2} \star \left[ \tilde{G}_x^{+} + \tilde{G}_x^{-} \right] ~~ , \label{eqn-firstorder-Gpm-comoving} \end{eqnarray} where $\xi=\alpha_f \beta_f / \tilde{\alpha}_r \tilde{\beta}_r$. Although this moving frame has no sensible limit as the frame speed tends to zero, the stationary frame case can be recovered by setting $\xi=0$ and replacing $z'$ by $z$. $G^\pm$ field simulations usually assume $G_x^-=0$, and treat only the forward traveling components of the EM field. Correctly writing down the form of nonlinear terms for eqn. (\ref{eqn-firstorder-Gpm-comoving}) requires some care, and consideration of the specific nonlinearity involved. Fortunately the task is simplified because it is simply a rewriting of the (electric) nonlinear term from Maxwell's equations with the appropriate scaling factors relating $\alpha_c$ to $\epsilon$, and $G_x^\pm$ to $E$. Wave equations with a more familiar appearance can be obtained using \begin{eqnarray} \tilde{E}^\pm(\omega) &=& \tilde{G}_x^\pm(\omega) / 2 \tilde{\alpha}_r(\omega) . \end{eqnarray} These have the units of an electric field (i.e. V/m), but actually incorporate information about the magnetic field as well. 
If we take this step, we can transform back into forward propagating ``electric fields'' $E^\pm$, and get ~ \begin{eqnarray} - \partial_{z'} \tilde{E}^{\pm} &=& \mp \imath \omega \tilde{\alpha}_r \tilde{\beta}_r \left( 1 \mp \xi \right) ~ \tilde{E}^{\pm} \nonumber \\ && ~~~~ \mp \frac{\imath \omega \tilde{\alpha}_c \tilde{\beta}_r} {2} \star \left[ \tilde{E}^{+} + \tilde{E}^{-} \right] ~~ . \label{eqn-firstorder-Epm-comoving} \end{eqnarray} An approximate forward-only wave equation can be found by setting $E^-=0$ in eqn. (\ref{eqn-firstorder-Epm-comoving}), (or $G^-=0$ in eqn. (\ref{eqn-firstorder-Gpm-comoving})). For a time response $\chi^{(3)}$ nonlinearity, this is ~ \begin{eqnarray} - \partial_{z'} \tilde{E}^+(\omega) &=& - \imath \omega \tilde{\alpha}_r \tilde{\beta}_r \left( 1 - \xi \right) ~ \tilde{E}^+(\omega) \nonumber \\ && - \imath \omega \left\{ \tilde{\beta}_r \tilde{\epsilon}_c(\omega) . \mathscr{F} \left[ E_x^+(t)^2 \right] (\omega) \right\} \star \tilde{E}^+(\omega) . \label{eqn-Ep-chi3} \end{eqnarray} Notice the similarity to eqn. (\ref{eqn-pssd-dH}), but that the field is propagated in a single first order equation, rather than two (i.e. both eqn. (\ref{eqn-pssd-dH}) and (\ref{eqn-pssd-dE})). The cost is that it only propagates {\em forwards}, but this is what we wanted. Further, the method can be implemented using fewer Fourier transforms than are required for a full Maxwell equation solver \cite{Kinsler-RN-2005pra}. The gain is that of not solving for $\partial_z E$ (eqn. (\ref{eqn-pssd-dE})), which requires a pair of FFT's if done pseudospectrally. Solving for a pulse propagating in a medium with dispersion and a time dependent (or instantaneous) third order nonlinearity therefore requires only six (or three) FFT's, as compared to eight (or five) for solving Maxwell's equations. However, in practice the speed gain can be less clear cut.
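A forward-only solver of this type is very compact. The sketch below (numpy) integrates a dispersionless, instantaneous-$\chi^{(3)}$ caricature of eqn. (\ref{eqn-Ep-chi3}) in the stationary frame, $\partial_z E^+ = -\partial_t \left( E^+ + \chi E^{+3} \right)$ in scaled units, using RK4 in $z$ with pseudospectral time derivatives; all parameter values are arbitrary. Since there is no dispersion, the third harmonic is perfectly phase matched and simply grows out of the (numerically empty) band at $3\omega_0$.

```python
import numpy as np

N, T = 2048, 100.0
t = np.linspace(0.0, T, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

def dt_spec(f):
    # pseudospectral time derivative (numpy sign convention)
    return np.fft.ifft(1j * w * np.fft.fft(f)).real

chi = 0.01                                     # arbitrary weak nonlinearity
def rhs(E):
    # dE/dz = -d/dt (E + chi E^3): forward-only, dispersionless, scaled units
    return -dt_spec(E + chi * E ** 3)

def rk4_step(E, dz):
    k1 = rhs(E)
    k2 = rhs(E + 0.5 * dz * k1)
    k3 = rhs(E + 0.5 * dz * k2)
    k4 = rhs(E + dz * k3)
    return E + dz * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

w0 = 2.0
E = np.exp(-((t - 30.0) / 6.0) ** 2) * np.cos(w0 * t)

def band_energy(E, wc, half_width=0.5):
    S = np.abs(np.fft.fft(E)) ** 2
    return S[np.abs(np.abs(w) - wc) < half_width].sum()

h3_before = band_energy(E, 3 * w0)
for _ in range(150):                           # propagate to z = 3
    E = rk4_step(E, 0.02)
h3_after = band_energy(E, 3 * w0)
```

Note that only the single field $E^+$ is stepped -- there is no companion magnetic-field equation to solve, which is where the FFT saving comes from.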
A PSSD solver moves forward one full step $dz$ in two staggered steps, one integrating the magnetic field, and one the electric field; and only the magnetic field integration needs to calculate the nonlinearity. This staggered scheme is second order accurate even though each stagger-step is only integrated using an Euler method. We can achieve nearly the same level of accuracy for the directional fields by employing a leapfrog algorithm \cite{Press-TVF-1992-numericalrecipies}. If we wish to use more accurate (and so more complicated) numerical integration algorithms (e.g. a Runge-Kutta scheme), then we only outperform the staggered (or leapfrog) PSSD schemes if the propagation step size is (greater than) twice that of the staggered PSSD. To complicate matters further, for reasons of numerical stability, we often need to tie the propagation step $dz$ to the time grid step $dt$. This means that if there are bandwidth constraints limiting our $dt$, we may not have as much freedom to adjust $dz$ as we might like. \subsubsection{Special case: $\chi^{(2)}$} In the case of a $\chi^{(2)}$ nonlinearity, two different field polarizations are coupled together, and the equations given above tend to obscure the final form the nonlinear term will take. In this case, the time-domain displacement fields $D$ in the two polarizations are ~ \begin{eqnarray} D_x &=& \epsilon_x \star E_x + 2 \epsilon_0 \chi^{(2)} E_x E_y ~~~~ = \epsilon_x \star E_x + \mathscr{N}^{(2)}_x , \\ D_y &=& \epsilon_y \star E_y + \epsilon_0 \chi^{(2)} E_x^2 ~~~~ = \epsilon_y \star E_y + \mathscr{N}^{(2)}_y . \end{eqnarray} If we assume that all of the linear response of the material (denoted above by $\epsilon_x, \epsilon_y$) is absorbed into the reference parameters $\tilde{\alpha}_r, \tilde{\beta}_r$, we need only consider the nonlinear part. Note in particular that for the $D_y$ field (i.e.
$\mathscr{N}^{(2)}_y$) this does not depend on $E_y$, meaning that the forms of the wave equations given above (aimed largely at a $\chi^{(3)}$ system) are not very useful. First, note that for a $\chi^{(3)}$ nonlinearity ~ \begin{eqnarray} \mathscr{N}^{(3)} &=& \epsilon_0 \chi^{(3)} E^3 , \\ \tilde{\mathscr{N}}^{(3)} &=& \mathscr{F}\left[\epsilon_0 \chi^{(3)} E^2\right] \star \tilde{E} \\ &=& \tilde{\alpha}_r \tilde{\alpha}_c \star \tilde{E} , \end{eqnarray} and these give a nonlinear term for the wave equations of ~ \begin{eqnarray} \frac{\imath \omega \tilde{\alpha}_c \tilde{\beta}_r} {2} \star \left[ \tilde{G}_x^{+} + \tilde{G}_x^{-} \right] &=& \imath \omega \tilde{\beta}_r . \tilde{\alpha}_r \tilde{\alpha}_c \star \tilde{E} . \end{eqnarray} By comparing these $\chi^{(3)}$ terms, we can see that in the $\chi^{(2)}$ case, the nonlinear terms in the $\tilde{G}_x^{\pm}$ and $\tilde{G}_y^{\pm}$ wave equations (see eqn. (\ref{eqn-firstorder-Gpm-comoving})) will be rewritten as follows ~ \begin{eqnarray} x: ~~~~ \frac{\imath \omega \tilde{\alpha}_c \tilde{\beta}_r} {2} \star \left[ \tilde{G}_x^{+} + \tilde{G}_x^{-} \right] &\Rightarrow& \imath \omega \tilde{\beta}_r \mathscr{F}\left[2 \epsilon_0 \chi^{(2)} E_x E_y\right] , ~~~~ \label{eqn-gpm-chi2-Gx} \\ y: ~~~~ \frac{\imath \omega \tilde{\alpha}_c \tilde{\beta}_r} {2} \star \left[ \tilde{G}_y^{+} + \tilde{G}_y^{-} \right] &\Rightarrow& \imath \omega \tilde{\beta}_r \mathscr{F}\left[\epsilon_0 \chi^{(2)} E_x^2\right] . ~~~~ \label{eqn-gpm-chi2-Gy} \end{eqnarray} These can then be put in a form containing only $\tilde{G}_x^{\pm}, \tilde{G}_y^{\pm}$ if desired, but it is simplest to reconstruct the $E_x, E_y$ directly before calculating the nonlinear terms. If a more extensive collection of the $\chi^{(2)}$ coefficients needs to be included, this procedure can be reproduced using the appropriate nonlinear field combinations.
Further, if the time-response of the nonlinearity is also important, then we can include this by replacing $\chi^{(2)} E_x E_y$ and $\chi^{(2)} E_x^2$ with appropriate convolutions: e.g. $(\chi^{(2)} \star E_y ) E_x$ and $(\chi^{(2)} \star E_x) E_x$. For the $\tilde{E}^{\pm}$-like wave eqns. (\ref{eqn-firstorder-Epm-comoving}) the nonlinear terms are ~ \begin{eqnarray} x: ~~~~ \frac{\imath \omega \tilde{\alpha}_c \tilde{\beta}_r} {2} \star \left[ \tilde{E}_x^{+} + \tilde{E}_x^{-} \right] &\Rightarrow& \frac{\imath \omega \tilde{\beta}_r}{2 \tilde{\alpha}_r} \mathscr{F}\left[2 \epsilon_0 \chi^{(2)} E_x E_y\right] , ~~~~ \label{eqn-gpm-chi2-Ex} \\ y: ~~~~ \frac{\imath \omega \tilde{\alpha}_c \tilde{\beta}_r} {2} \star \left[ \tilde{E}_y^{+} + \tilde{E}_y^{-} \right] &\Rightarrow& \frac{\imath \omega \tilde{\beta}_r}{2 \tilde{\alpha}_r} \mathscr{F}\left[\epsilon_0 \chi^{(2)} E_x^2\right] . ~~~~ \label{eqn-gpm-chi2-Ey} \end{eqnarray} Since we will want to apply the nonlinear effects in the time domain, we need to back-transform the terms in eqns. (\ref{eqn-gpm-chi2-Gx},\ref{eqn-gpm-chi2-Gy}) or eqns. (\ref{eqn-gpm-chi2-Ex},\ref{eqn-gpm-chi2-Ey}), requiring a pair of Fourier transforms in addition to those required to get the time-domain fields. If using the $\tilde{G}^{\pm}$ form, there is an additional transform, because we also need the time domain field(s) $E(t)$ -- with the $\tilde{E}^{\pm}$ form $E(t)$ can be found directly. Further simplifications can be made: e.g. in a semi-wideband limit around a central frequency $\omega_0$, we can assume the frequency dependence of the $\tilde{\alpha}$ parameters in the nonlinear terms vanishes, so that the transforms to convert from $G^\pm$ to $E$ are unnecessary. In an SVEA-like narrowband limit all these transforms vanish because (in the nonlinear terms) the frequency dependence of the $\tilde{\alpha}$ parameters vanishes and the factor of $\omega$ simply becomes $\omega_0$.
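The bookkeeping for these $\chi^{(2)}$ products can be sketched as follows; the $\imath \omega \tilde{\beta}_r$-style prefactors are deliberately left to the propagation step, since they depend on the chosen reference parameters, and the field arrays and $\chi^{(2)}$ value here are illustrative.

```python
import numpy as np

def chi2_source_terms(Ex, Ey, chi2, eps0=1.0):
    """Frequency-domain chi^(2) source products (illustrative sketch).

    Forms the time-domain products appearing in the x and y nonlinear
    terms above, then Fourier transforms them; chi2 and the field
    arrays are assumed inputs, and eps0 defaults to 1 for a
    dimensionless demonstration.
    """
    Sx = np.fft.fft(2.0 * eps0 * chi2 * Ex * Ey)   # x-polarized product
    Sy = np.fft.fft(eps0 * chi2 * Ex ** 2)         # y-polarized product
    return Sx, Sy
```

Driving with a single-frequency $E_x$ and $E_y = 0$ gives a $y$ spectrum containing only a DC (optical rectification) part and a second harmonic part, as expected for $E_x^2$.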
\subsection{Envelopes}\label{Ss-directional-envelopes} Here I have intentionally simplified the definitions to best match what is most likely to be used in practice: a forward propagating $G^+$ (or $E^+$) only model. A more complete description of $G^\pm$ envelopes, such as that in \cite{Kinsler-RN-2005pra}, would include the role of forward and backward traveling envelopes for both of $G^\pm$. We have seen that in the forward-only approximation, $G^+$ and $E^+$ follow identical equations of motion. The envelope and carrier representation of $E^+$ is ~ \begin{eqnarray} E^+(t;z) &=& C(t;z) e^{\imath\left(k_0 z-\omega_0 t\right)} + C^*(t;z) e^{-\imath\left(k_0 z-\omega_0 t\right)} . \label{eqn-Gp-env} \end{eqnarray} I do not apply this envelope definition to the general equation of motion because how $\tilde{\alpha}_c$ is expressed depends on the field and therefore on those envelopes. Now, for the case of a time response $\chi^{(3)}$ nonlinearity, we substitute eqn. (\ref{eqn-Gp-env}) into eqn. (\ref{eqn-Ep-chi3}); we then (as usual) split the normal and c.c. parts, cancel exponentials, and rearrange leaving only the $\partial_z$ terms on the left, ~ \begin{eqnarray} - \partial_{z'} \tilde{C}(\omega) &=& - \imath \omega \left( 1 - \xi \right) \tilde{\beta}_r \tilde{\alpha}_r \tilde{C}(\omega) + \imath k_0 \tilde{C}(\omega) \nonumber \\ && - \imath \omega \tilde{\beta}_r \tilde{\epsilon}_c(\omega+\omega_0) . \mathscr{F} \left[ 2 \left| C(t) \right|^2 \right] (\omega) . \tilde{C}(\omega) \nonumber \\ && ~ - \imath \omega \tilde{\beta}_r \tilde{\epsilon}_c(\omega+\omega_0) . \mathscr{F} \left[ C(t)^2 \right] (\omega) . \tilde{C}^*(\omega) \nonumber \\ && ~~ - \imath \omega \tilde{\beta}_r \tilde{\epsilon}_c(\omega+3\omega_0) . \mathscr{F} \left[ C(t)^2 \right] (\omega) .
\tilde{C}(\omega) , \nonumber \\ \label{eqn-Ap-chi3} \end{eqnarray} The first line on the RHS will mostly cancel in the narrowband case, since $\tilde{\beta}_r \tilde{\alpha}_r = 1/c(\omega)$, and $k_0 = \omega_0 / c(\omega_0)$, thus with $\delta = \omega-\omega_0$ it becomes ~ \begin{eqnarray} - \imath \left[ \frac{\omega}{c(\omega)} - k_0 \right] &=& - \imath \left[ k(\omega) - k_0 \right] \nonumber \\ &=& - \imath \left[ \left. \frac{\partial k} {\partial \omega} \right| _{\omega_0} \delta + \frac{1}{2} \left. \frac{\partial^2 k} {\partial \omega^2} \right| _{\omega_0} \delta^2 + ... \right] , ~~~~ \end{eqnarray} where in the truncated expansion on the second line we can see the expected group velocity and group velocity dispersion terms. Note that eqn. (\ref{eqn-Ap-chi3}) is directly comparable to one derived from the NEE of Brabec and Krausz \cite{Brabec-K-1997prl}, but the {\em only} approximation I have made is to discard backward propagating fields. Since the NEE makes several additional approximations, eqn. (\ref{eqn-Ap-chi3}) is {\em more accurate} and {\em less approximate}. Indeed, Brabec and Krausz were fortunate in that their chosen approximations produced a result remarkably similar to that from the less restricted directional fields approach. Note that Kolesik and Moloney \cite{Kolesik-M-2004pre} also reduced their directional wave equation to a number of special cases, including that of Brabec and Krausz. \subsection{Transverse effects}\label{Ss-directional-transverse} As for Maxwell's equations, there are two main transverse effects likely to be of interest in pulse propagation models: mode averaging, and diffraction or off-axis propagation. Mode averaging is easy to incorporate if you assume some known transverse profile for the mode: e.g. for an optical fibre or some other waveguide. The transverse derivatives vanish, and the material properties are evaluated as an integral over the transverse dimensions, weighted by the mode function.
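The mode-averaging step just described amounts to a profile-weighted integral; here is a minimal sketch, assuming a known (hypothetical) mode profile $F(x,y)$ on a uniform transverse grid.

```python
import numpy as np

def mode_average(param, mode):
    """Profile-weighted transverse average of a material parameter.

    param : 2D array of the material property on the transverse grid
    mode  : assumed (known) transverse mode profile F(x, y)
    Returns sum(param * |F|^2) / sum(|F|^2), the discrete analogue of
    the mode-weighted integral described in the text.
    """
    weight = np.abs(mode) ** 2
    return np.sum(param * weight) / np.sum(weight)
```

For a step-index-style profile the average lands between the core and cladding values, weighted towards wherever the mode energy sits.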
This is just the same as for Maxwell's equations, although we now may be averaging slightly different quantities (e.g. $\alpha_r$ rather than $\epsilon$). The work of Kolesik et al. \cite{Kolesik-MM-2002prl,Kolesik-M-2004pre} allows for transverse mode structure, that of Mizuta et al. \cite{Mizuta-NOY-2005pra} for transverse averaging over a single mode. Diffraction and off-axis propagation are again much harder to understand, because (again) they result from a coupling between the vector components of the $E$ and $H$ fields -- including those along the propagation direction. However, second order wave equations derived from the first order directional fields equations exhibit a $\nabla_\perp^2$ diffraction term which is the same as that seen in standard second order wave equations (see e.g. eqn. (\ref{exact-BKP}), in section \ref{S-secondorder}). This means that weakly transverse effects can be accurately incorporated by using a split step scheme alternating between the wave equation and a $\nabla_\perp^2$ diffraction term. Note that Kolesik et al. \cite{Kolesik-M-2004pre} had wave equations incorporating diffraction (transverse) terms. \section{Second order wave equations}\label{S-secondorder} The standard second order wave equation applies to propagation in non-magnetic materials. If we consider the case of small transverse inhomogeneities of the polarization, the three dimensional wave equation in typical notation (e.g. from \cite{Brabec-K-1997prl,Kinsler-N-2003pra}) is ~ \begin{eqnarray} \left( \partial_z^2 + \nabla_\bot^2 \right) E(\vec{r},t) - \frac{1}{c^2} \partial_t^2 \left\{ \epsilon_L(\tau) \star E(\vec{r},t) \right\} = \frac{4\pi}{c^2} \partial_t^2 P_{nl}(\vec{r},t) . 
\nonumber \\ \label{eqn-3DWE-time} \end{eqnarray} Here $\nabla_\bot^2$ is the transverse Laplace operator, $\epsilon_L(t) = (2\pi)^{-1} \int_{-\infty}^\infty d\omega \tilde{\epsilon}_L(\omega) e^{\imath \omega t}$, $\tilde{\epsilon}_L(\omega) = 1 + 4\pi \chi(\omega)$, and $\chi(\omega)$ is the linear electric susceptibility. The electric field $E$ propagates along the $z$ direction. Both $E$ and the nonlinear polarization $P_{nl}$ are polarized parallel to the $x$ axis. Because of their starting point, methods based on this second order equation are slightly more restricted than those starting from Maxwell's equations. However, the differences in practice will likely be small, especially in the usual case of non-magnetic propagation media. Most uses of eqn. (\ref{eqn-3DWE-time}), notably the slowly varying envelope approximation (SVEA), rely on using an envelope-carrier description for the fields, then expand for weak dispersion and resonant nonlinear perturbations about this carrier. This approach is discussed below in subsection \ref{Ss-secondorder-trad}. Alternatively, we can attempt to factorise the equation into a product of two first order parts, as can be done for linear waves (see e.g. \cite{Tanuiti-Nishihara-NLW}). Factorization is considerably more useful than the traditional approach, and is discussed below in subsection \ref{Ss-secondorder-factor}. \subsection{Traditional approach}\label{Ss-secondorder-trad} Unlike the other approaches discussed in this paper, the traditional approach assumes the use of an envelope-carrier description of the field. Kinsler and New \cite{Kinsler-N-2003pra,Kinsler-2002-fcpp} presented a comprehensive re-derivation of the envelope propagation equation based on the second order wave equation, which subsumes the SVEA and Brabec and Krausz's NEE \cite{Brabec-K-1997prl} as special cases. Since it is the most general, I use the Kinsler and New calculation, leaving some definitions to their paper rather than repeating them here.
Noting that $\xi$ and $\tau$ are scaled space and time variables, that $\alpha$ and $\beta$ have different meanings from the rest of this paper, and that $\hat{D}'$ contains the dispersion terms, we have ~ \begin{eqnarray} &&\partial_\xi A(\vec{r}_\bot,\xi,\tau) \nonumber \\ &=& \left( - \frac{ \alpha_0}{\beta_0} + \imath \hat{D}' \right) A(\vec{r}_\bot,\xi,\tau) + \frac{\left( \imath / 2 \beta_0^2 \right) \nabla_\bot^2} { \left( 1 + \imath \sigma \partial_\tau \right)} A(\vec{r}_\bot,\xi,\tau) \nonumber \\ &+& \frac{2 \imath \pi }{n_0^2} \frac{\left(1 + \imath \partial_\tau \right)^2} {\left( 1 + \imath \sigma \partial_\tau \right)} B(\vec{r}_\bot,\xi,\tau ; A) + \frac{ T_{R} } { 1 + \imath \sigma \partial_\tau } , \label{exact-BKP} \end{eqnarray} where ~ \begin{eqnarray} T_{R} &=& \left[ - \frac{\imath q^2 }{2} \partial_\xi^2 + \frac{\imath}{2} \left( \frac{ \alpha_0}{\beta_0} - \imath \hat{D}' \right)^2 \right] A(\vec{r}_\bot,\xi,\tau) . \label{exact-BKP-Tr} \end{eqnarray} Eqn. (\ref{exact-BKP}) {\em is exact} -- it contains no more approximations than the starting point eqn. (\ref{eqn-3DWE-time}) except for the expansion of $\epsilon$ in powers of $\omega$. If we set $T_{R}=0$, this gives us a generalized few cycle envelope (GFEA) equation, which contains the SVEA \cite{Shen-PNLO}. Brabec and Krausz's NEE can be recovered from eqn. (\ref{exact-BKP}) in the 1D case where phase and group velocities are the same (i.e. $\sigma=1$), likewise Porras's SEEA \cite{Porras-1999pra} can be identified in the diffraction term. Of course we cannot just set the $T_{R}$ term to zero without some justification, but this has already been extensively discussed, not only in \cite{Kinsler-N-2003pra}, but also in the detailed analysis \cite{Kinsler-2002-fcpp}. Now consider the complicated few-cycle correction to the polarization term in eqn. (\ref{exact-BKP}), which contains partial derivatives ($1+\imath \sigma \partial_\tau$) in the denominators.
These will need to be evaluated by Fourier transforming into the conjugate frequency space ($\Omega$). Further, the $T_{R}$ term is divided by another such term. Clearly these might, in wideband cases, result in denominators close to zero, causing the approximations to fail. This means they put a serious brake on the validity of {\em any} such approach, especially if the bandwidth of the pulse approaches the carrier frequency. Note that the best first order expansion of the few-cycle corrections to the polarization term is more general than that given by Brabec and Krausz, and contains the group to phase velocity ratio $\sigma$, i.e. ~ \begin{eqnarray} \frac{2 \imath \pi }{n_0^2} \frac{\left(1 + \imath \partial_\tau \right)^2} {\left( 1 + \imath \sigma \partial_\tau \right)} B(\xi,\tau ; A) &\approx& \frac{2 \imath \pi }{n_0^2} \left(1 + \imath \sigma \partial_\tau \right) B(\xi,\tau ; A) . ~~~~ \label{eqn-betterthanBK} \end{eqnarray} Unfortunately for the venerable SVEA based on the second order wave equation, and even its most general variant presented here, the directional fields method discussed in the previous section \ref{S-directional} has made it utterly redundant; as, indeed, has the approach in the following subsection \ref{Ss-secondorder-factor}. There is no reason to use any form of the GFEA or SVEA when we can generate equations like eqns. (\ref{eqn-firstorder-Epm-comoving},\ref{eqn-Ep-chi3}) by not only using {\em fewer} approximations, but much {\em simpler} ones than those taken by neglecting $T_R$. \subsection{Time propagated direct solution}\label{Ss-secondorder-time} It is of course possible to solve the second order wave equation by propagating it in time, either with or without the use of an envelope and carrier. This approach has been used with significant success by Scalora and co-workers (e.g. their early work \cite{Dowling-SBB-1994jap,Scalora-DBB-1994prl,Scalora-C-1994oc}).
By propagating in time, reflections are handled correctly, an important feature when treating structured materials. Generally the solution is achieved retaining the second order spatial derivatives (both in $z$ and transversely in $x,y$), but approximating the time derivatives to first-order. The approximation is made using an envelope with a well-chosen carrier frequency, and gives rise to the SVEAT, or slowly varying envelope approximation in {\em time}. \subsection{Short pulse equation (SPE)}\label{Ss-secondorder-spe} The second order wave equation can be converted into the SPE by using a multiscale expansion \cite{Schafer-W-2004pd}. First, specialize to a third-order nonlinearity (strength $p$) and only second order (ordinary) dispersion (strength $d_2$), and then rewrite the second order wave equation as ~ \begin{eqnarray} \partial_z^2 E(t;z) - \frac{1}{c_1^2} \partial_t^2 E(t;z) - d_2 E(t;z) - p \partial_t^2 E(t;z)^3 &=& 0. ~~~~ \label{eqn-2ndWave-spe-start} \end{eqnarray} We introduce the scaled co-moving frame variables $\tau = (t-z/c_1)/\sigma$ so that $\partial_t=(1/\sigma)\partial_\tau$, and $z_n = \sigma^n z$ so that $\partial_z = -(1/c_1\sigma)\partial_\tau + \sum_n \sigma^n \partial_{z_n}$; hence after simplification eqn. (\ref{eqn-2ndWave-spe-start}) becomes ~ \begin{eqnarray} - \frac{2}{c_1} \partial_\tau \partial_{z_1} E(t;z) - d_2 E(t;z) - \frac{p}{\sigma^2} \partial_\tau^2 E(t;z)^3 &=& 0. ~~~~ \label{eqn-2ndWave-spe-scaled} \end{eqnarray} Now, writing the field in multiscaled form as a power series in components $E_i$ scaled by factors of $\sigma$, we have ~ \begin{eqnarray} E(t;z) &=& \sigma E_0(\tau, z_1, z_2, ...) + \sigma^2 E_1(\tau, z_1, z_2, ...) + ... \end{eqnarray} and to leading order (each surviving term is $O(\sigma)$, since $(p/\sigma^2) \partial_\tau^2 (\sigma E_0)^3 = p \sigma \partial_\tau^2 E_0^3$), we can write eqn. (\ref{eqn-2ndWave-spe-scaled}) down as the SPE ~ \begin{eqnarray} - \frac{2}{c_1} \partial_\tau \partial_{z_1} E_0 - d_2 E_0 - p \partial_\tau^2 E_0^3 &=& 0.
~~~~ \label{eqn-2ndWave-spe} \end{eqnarray} This equation has spawned a literature all of its own, because it (like the ordinary nonlinear Schr\"odinger equation) provides a rich variety of mathematical solutions. Note that what is essentially a variant of the SPE, but specialized for HHG by generalizing the dispersion and nonlinearity, is also in use \cite{Geissler-TSSKB-1999prl}. \subsection{Factorization approach}\label{Ss-secondorder-factor} As an alternative to the traditional style of derivation discussed above, we can instead factorise the second order wave equation in a way similar to that done for linear waves (see e.g. \cite{Tanuiti-Nishihara-NLW}). This was initially suggested by Shen \cite{Shen-PNLO}, followed by Blow and Wood \cite{Blow-W-1989jqe}, and more recently revisited by Ferrando et al. \cite{Ferrando-ZCBM-2005pre} and Genty et al. \cite{Genty-KKD-2007oe}; the most general formulation, which also allows for magnetic effects, is at \cite{Kinsler-2010pra-fchhg}. Note that the work of Weston examines this kind of wave-splitting with more mathematical rigour (see e.g. \cite{Weston-1993jmp}), although without consideration of residual terms, and (at least initially) in the context of reflections and scattering. This theory was based on that from the earlier work of Beezley and Krueger \cite{Beezley-K-1985jmp} who applied wave-splitting concepts to optics. First we reduce eqn. (\ref{eqn-3DWE-time}) to the 1D strictly paraxial limit; then transform into frequency space. Here I re-use the symbol $\beta$ as the propagation wave vector to match the notation of Genty et al. \cite{Genty-KKD-2007oe}, so that $\beta(\omega) = \omega \sqrt{\epsilon_r(\omega) \mu_0}$.
The wave equation therefore is ~ \begin{eqnarray} \partial_z^2 E(t;z) - \frac{1}{c^2} \partial_t^2 E(t;z) - \mu_0 \partial_t^2 P(t;z) &=& 0, ~~~~ \\ \partial_z^2 \tilde{E}(\omega;z) + \beta(\omega)^2 \tilde{E}(\omega;z) + \mu_0 \omega^2 \tilde{P}(\omega;z) &=& 0, ~~~~ \label{eqn-2ndWave-simple} \\ \partial_z^2 \tilde{E}(\omega;z) + \beta^2(\omega) \tilde{E}(\omega;z) + \beta^2(\omega) \tilde{\mathscr{N}} \star \tilde{E}(\omega;z) &=& 0, ~~~~ \label{eqn-2ndWave} \end{eqnarray} where for a third order nonlinearity, with $\epsilon_c = \epsilon_0 \chi^{(3)}$, ~ \begin{eqnarray} \mathscr{N} &=& \mu_0 \epsilon_0 \chi^{(3)} \omega^2 E(r,t;z)^2 / \beta(\omega)^2 \\ &=& \mu_0 \epsilon_c \omega^2 E(r,t;z)^2 / \beta(\omega)^2 \\ &=& \frac{\chi^{(3)} }{n(\omega)^2} E(r,t;z)^2 . \end{eqnarray} I now briefly consider three factorization approaches: the simple method of Blow and Wood \cite{Blow-W-1989jqe}, an improved version, and finally the most rigorous approach. Although these traditionally involve an envelope-carrier decomposition introduced early in that calculation (see Blow and Wood), the step is in fact unnecessary and I omit it. \subsubsection{Simple factorization} Factorization approaches are simple in two situations: a dispersionless medium with an instantaneous nonlinearity, and a dispersive medium with no nonlinearity. In the dispersionless nonlinear case, we can factorise in the time domain. In the linear dispersive case, we can factorise in the frequency domain. In the dispersive nonlinear case, it is (usually) not possible to analytically factorise the second order wave equation. \noindent \textbf{Basic Blow and Wood: } The simplest, but least rigorous method of factorising is that of Blow and Wood \cite{Blow-W-1989jqe}. Setting aside many of the mathematical difficulties, Blow and Wood ignored the details of nonlinearity and dispersion.
Remembering that $\beta = \beta(\omega)$, and without their envelope-carrier decomposition, they had ~ \begin{eqnarray} \left[ \partial_z + \imath \beta \sqrt{ 1 + \tilde{\mathscr{N}} \star } \right] \left[ \partial_z - \imath \beta \sqrt{ 1 + \tilde{\mathscr{N}} \star } \right] \tilde{E} &=& 0 . \end{eqnarray} They then separated out the forward propagating term. The envelope equivalent of this was then expanded using a ``weak nonlinearity'' assumption with a binomial expansion, keeping only the first order corrections. \noindent \textbf{Improved Blow and Wood: } The approach of Blow and Wood ignores the mathematical difficulties due to the use of the square root in combination with the frequency-domain convolutions between the nonlinear term $\mathscr{N}$ and the field spectrum $\tilde{E}$. Fortunately, we can instead ``complete the square'' (e.g. $1+N \simeq 1+N+N^2/4 = \left(1+N/2\right)^2$), enabling us to preserve the convolutions correctly. This requires us to make a weak nonlinearity approximation, but it is one nearly identical to that used when expanding the square root in the Blow and Wood calculation. So, with the weak nonlinearity constraint ~ \begin{eqnarray} \frac{1}{2} \tilde{\mathscr{N}} \star \tilde{E} \ll 1 , \end{eqnarray} we get ~ \begin{eqnarray} \left[ \partial_z + \imath \beta \left( 1 + \frac{\tilde{\mathscr{N}} \star}{2} \right) \right] \left[ \partial_z - \imath \beta \left( 1 + \frac{\tilde{\mathscr{N}} \star}{2} \right) \right] \tilde{E} &=& 0 . \label{eqn-factored-2nd-nlLeft} \end{eqnarray} By assuming the forward-like and backward-like terms in square brackets factorise, ~ \begin{eqnarray} \left[ \partial_z \mp \imath \beta \left( 1 + \frac{\tilde{\mathscr{N}} \star}{2} \right) \right] \tilde{E} &=& 0 . \label{eqn-factored-2nd-nlLeft-fwd} \\ \partial_z \tilde{E} &=& \pm \imath \beta \tilde{E} ~~ \pm \imath \beta \frac{\tilde{\mathscr{N}} \star}{2} \tilde{E} .
\end{eqnarray} While this equation can give excellent results, it is restricted to weak nonlinearity: as we see below, it lacks the nonlinear coupling term between the forward and backward propagating fields. \subsubsection{Linear factorization} Kinsler \cite{Kinsler-2010pra-fchhg} treats this approach in detail, separating this second order equation into two first order equations, using a method based on Ferrando et al.'s \cite{Ferrando-ZCBM-2005pre} application of Green's functions. This follows early applications of factorization to nonlinear waveguides, such as that by Genty et al. \cite{Genty-KKD-2007oe} with their nonlinear envelope equation. The first step to achieving a first order wave equation containing the necessary physics but without unnecessarily complex approximations is to rewrite the wave eqn. (\ref{eqn-3DWE-time}) to emphasize those contributions that, without any coupling, would freely propagate forwards and backwards respectively. To do this, choose a specific propagation direction (e.g. along the $z$-axis), and then denote the orthogonal components (i.e. along $x$ and $y$) as transverse behaviour. The wave equation eqn. (\ref{eqn-2ndWave}) can then be written ~ \begin{eqnarray} \left[ \partial_z^2 + \frac{n^2(\omega) \omega^2}{c^2} \right] E(\omega) &=& - \mathscr{Q} . \label{eqn-factored-2nd-nlRight-start} \end{eqnarray} Here I have moved some or all of the linear response (e.g. the refractive index) out of the total polarization, and over to the LHS as $n^2(\omega)$. The remaining polarization term $\mathscr{Q}$ would then include any nonlinearity (e.g. $\tilde{\mathscr{N}} \star \tilde{E}$) or atomic response; the diffraction (i.e. $\nabla_\perp^2 E$); and indeed (if desired) even some linear terms such as the angular dependence of the refractive index.
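As an aside, the size of the term discarded in the ``complete the square'' step of the improved Blow and Wood approach above is easy to quantify; the sketch below uses a scalar stand-in $N$ for what is really a convolution operator.

```python
def completed_square_error(N):
    """Discarded piece of 1 + N ~= (1 + N/2)**2 for a scalar stand-in N.

    Expanding (1 + N/2)**2 = 1 + N + N**2/4 shows the neglected term
    is exactly N**2/4, i.e. second order in the weak nonlinearity.
    """
    return (1.0 + N / 2.0) ** 2 - (1.0 + N)

# the mismatch shrinks quadratically as the nonlinearity weakens
for N in (1e-2, 1e-3, 1e-4):
    assert abs(completed_square_error(N) - N ** 2 / 4.0) < 1e-12
```

So whenever the weak nonlinearity constraint holds, the completed square reproduces the original operator to second order in the small quantity.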
After Fourier transforming $z$ into $k$-space, where $\partial_z$ becomes $\imath k$, we have ~ \begin{eqnarray} \left[ - k^2 + \beta^2 \right] \tilde{E} &=& - \mathscr{Q} \\ \tilde{E} &=& \frac{1} {k^2 - \beta^2} \mathscr{Q} \quad = \frac{1} { \left(k+\beta\right) \left(k-\beta\right) } \mathscr{Q} \\ \tilde{E}_+ + \tilde{E}_- &=& - \frac{1}{2\beta} \left[ \frac{1} {k+\beta} - \frac{1} {k-\beta} \right] \mathscr{Q} . ~~~~ \label{eqn-factored-2nd-nlRight-total} \end{eqnarray} where $E$ is now written as a sum of both forward and backward propagating parts $\tilde{E} = \tilde{E}_+ + \tilde{E}_-$. I now split eqn. (\ref{eqn-factored-2nd-nlRight-total}) into a sum of two parts, where each half represents the propagation of the forward field $E_+$ or the backward field $E_-$, and rearrange, ~ \begin{eqnarray} \tilde{E}_\pm &=& \pm \frac{1 / 2\beta} {k \mp \beta} \mathscr{Q} \\ \left[ k \mp \beta \right] \tilde{E}_\pm &=& \pm \frac{1}{2\beta} \mathscr{Q} . \end{eqnarray} Now I transform back from $k$-space into $z$, and multiply by $\imath$, so that ~ \begin{eqnarray} \left[ \partial_z \mp \imath \beta \right] \tilde{E}_\pm &=& \pm \frac{\imath}{2\beta} \mathscr{Q} \\ \partial_z \tilde{E}_\pm &=& \pm \imath \beta \tilde{E}_\pm \pm \frac{\imath}{2\beta} \mathscr{Q} . \label{eqn-factored-2nd-nlRight-fwdbck} \end{eqnarray} If our polarization $\mathscr{Q}$ contains a nonlinearity $\beta^2 \tilde{\mathscr{N}} \star \tilde{E}$ and diffraction terms $\nabla_\perp^2 E$ we have \cite{Kinsler-2010pra-fchhg} \begin{eqnarray} \partial_z \tilde{E}_\pm &=& \pm \imath \beta \tilde{E}_\pm \pm \frac{\imath \beta}{2} \tilde{\mathscr{N}} \star \left[ \tilde{E}_+ + \tilde{E}_- \right] \pm \frac{\imath}{2\beta} \nabla_\perp^2 \left[ \tilde{E}_+ + \tilde{E}_- \right] . \quad \label{eqn-factored-2nd-nlRight-fwdbck2} \end{eqnarray} If we compare this result (i.e. eqn.
(\ref{eqn-factored-2nd-nlRight-fwdbck})) with the comparable equations for the directional fields $G^\pm$, in particular with the electric field form given in eqn. (\ref{eqn-firstorder-Epm-comoving}); we see that they are essentially identical: since $\omega \alpha_r \beta_r = \omega / c_r \leftrightarrow \beta$. A similar procedure can be applied to eqn. (\ref{eqn-factored-2nd-nlLeft}) if desired. Eqn. (\ref{eqn-factored-2nd-nlRight-fwdbck}) is almost the same as the (more approximate) eqn. (\ref{eqn-factored-2nd-nlLeft-fwd}). Since the RHS nonlinear term is a function of $(E_++E_-)$, it provides a route for coupling between the forward and backward waves; its form can be obtained from the nonlinear part of (e.g.) eqn (\ref{eqn-pssd-dH}). A specific example for the case of a time-response $\chi^{(3)}$ nonlinearity has been given in \cite{Genty-KKD-2007oe}, but in my notation it is identical to that for the rescaled directional $G^\pm$ fields (i.e. $E^\pm$), i.e. eqn. (\ref{eqn-Ep-chi3}). \subsubsection{Special case: $\chi^{(2)}$} In the case of a $\chi^{(2)}$ nonlinearity, two different field polarizations are coupled together, and the equations given above tend to obscure the final form the nonlinear term will take. First, note that the factorisation process changes the nonlinear term from $\beta^2 \tilde{\mathscr{N}} \star \tilde{E}$ into $\imath \beta \tilde{\mathscr{N}} \star \tilde{E} /2$. This means that the term itself is multiplied by a factor of just $ \imath / 2\beta$, and this transforming factor is what we need to use in the general case. For a $\chi^{(2)}$ nonlinearity, the time-domain displacement fields $D$ in the two polarizations are ~ \begin{eqnarray} D_x &=& \epsilon_x \star E_x + 2 \epsilon_0 \chi^{(2)} E_x E_y , \\ D_y &=& \epsilon_y \star E_y + \epsilon_0 \chi^{(2)} E_x^2 . \end{eqnarray} Note in particular that the nonlinear part of the $D_y$ field does not depend on $E_y$, making the wave eqn.
(\ref{eqn-factored-2nd-nlRight-fwdbck}) (aimed largely at a $\chi^{(3)}$ system) inappropriate. In any case, the $x$ nonlinear term is just $2 \epsilon_0 \chi^{(2)} E_x E_y$, and the $y$ term $\epsilon_0 \chi^{(2)} E_x^2$ so that in the pair of frequency domain wave equations (cf eqn. (\ref{eqn-2ndWave-simple})), the nonlinear terms are ~ \begin{eqnarray} x: ~~~~ ~~~~ \beta^2 \tilde{\mathscr{N}} \star \tilde{E} &\Leftrightarrow& 2 \mu_0 \epsilon_0 \omega^2 \mathscr{F}\left[\chi^{(2)} E_x E_y\right] \\ y: ~~~~ ~~~~ \beta^2 \tilde{\mathscr{N}} \star \tilde{E} &\Leftrightarrow& \mu_0 \epsilon_0 \omega^2 \mathscr{F}\left[\chi^{(2)} E_x^2\right] , \end{eqnarray} and in the factorised equations these become ~ \begin{eqnarray} x: ~~~~ ~~~~ \imath \mu_0 \epsilon_0 \frac{\omega^2}{\beta(\omega)} \mathscr{F}\left[\chi^{(2)} E_x E_y\right] \\ y: ~~~~ ~~~~ \imath \mu_0 \epsilon_0 \frac{\omega^2}{2 \beta(\omega)} \mathscr{F}\left[\chi^{(2)} E_x^2\right] . \end{eqnarray} Since we will want to apply the nonlinear effects in the time domain, we need to back-transform these nonlinear terms: ~ \begin{eqnarray} x: ~~~~ ~~~~ \mathscr{F}^{-1} \left[ \imath \mu_0 \epsilon_0 \frac{\omega^2}{\beta(\omega)} \mathscr{F}\left[\chi^{(2)} E_x E_y\right] \right] \\ y: ~~~~ ~~~~ \mathscr{F}^{-1} \left[ \imath \mu_0 \epsilon_0 \frac{\omega^2}{2 \beta(\omega)} \mathscr{F}\left[\chi^{(2)} E_x^2\right] \right] . \end{eqnarray} So we see that a true wideband approach to the nonlinearity requires a pair of Fourier transforms. In a semi-wideband limit around a central frequency $\omega_0$ we can probably assume the factor $\omega^2/\beta(\omega)$ becomes $c \omega/n(\omega_0)$. In an SVEA-like narrowband limit it would become $\omega_0^2/\beta(\omega_0) = c \omega_0 / n(\omega_0)$, and the need for Fourier transforms vanishes.
If a more extensive collection of the $\chi^{(2)}$ coefficients needs to be included, this procedure can be reproduced using the appropriate nonlinear field combinations. If the time-response of the nonlinearity is also important, then we can include this by replacing $\chi^{(2)} E_x E_y$ and $\chi^{(2)} E_x^2$ with appropriate convolutions: i.e. $(\chi^{(2)} \star E_y ) E_x$ and $(\chi^{(2)} \star E_x) E_x$. \subsubsection{Factorization and envelopes} Taking only the forward part of eqn. (\ref{eqn-factored-2nd-nlRight-fwdbck}), we replace $\tilde{E}_+(\omega) = \tilde{A}_+(\omega+\omega_0) + \tilde{A}_+^*(\omega-\omega_0)$. Since the equation is linear in the derivatives, when split into $\tilde{A}_+$ and $\tilde{A}_+^*$ parts it looks very similar, being ~ \begin{eqnarray} \partial_z \tilde{A}_\pm &=& \pm \imath \beta \tilde{A}_\pm \pm \frac{\imath \beta}{2} \tilde{\mathscr{N}} \star \left[ \tilde{A}_\pm + \tilde{A}_\mp \right] . \end{eqnarray} For the case of a time-response $\chi^{(3)}$ nonlinearity, the equation will be identical to that for the envelope version of the directional $G^\pm$ fields, i.e. eqn. (\ref{eqn-Ap-chi3}). \subsubsection{Factorized fields} An important feature of this approach is that we see that \emph{any} contribution (whether linear or not) that is included in the source term will couple the forward and backward fields together. Consider two differing factorisations of the same system; e.g. one with the loss included in $\beta$, and one with it in the source term. The one with the extra source contribution will see a corresponding extra forward backward coupling term, apparently conflicting with the fact that the two factorisations are of the same system. The resolution of this conundrum is simply that the forward and backward fields of the first factorisation ($E_{1\pm}$) are not the same as those for the second ($E_{2\pm}$); the meaning of ``forward field'' (or ``backward field'') differs between the two implementations.
This is perhaps clearer in the $G^{\pm}$ formulation (see section \ref{S-directional}), where the different factorisations would correspond to different choices of the reference parameters $\alpha_r, \beta_r$. If no further approximations have been made, when the real electric and magnetic fields are reconstructed from any factorised $E_{i\pm}$, the answers should be in agreement. \subsection{Transverse effects} \label{Ss-secondorder-transverse} In common with most treatments of pulse propagation, we can restrict ourselves to paraxial beams and incorporate transverse effects by using a split step scheme alternating between the wave equation and the $\nabla_\perp^2$ diffraction term. This is equally applicable to either the traditional or factorization approaches. However, in the factorization approach we can treat the $\nabla_\perp^2 E$ diffraction term as a ``source'' term, and, like the nonlinearity, move it to the RHS before factorising. Thus eqn. (\ref{eqn-factored-2nd-nlRight-fwdbck}) could be rewritten to include diffraction as ~ \begin{eqnarray} \partial_z \tilde{E}_\pm &=& \pm \imath \beta \tilde{E}_\pm \pm \frac{\imath \beta}{2} \tilde{\mathscr{N}} \star \left[ \tilde{E}_+ + \tilde{E}_- \right] \nonumber \\ && ~~~~ ~~~~ ~~~~ ~~~~ \pm \frac{\imath}{2\beta} \nabla_\perp^2 \left[ \tilde{E}_+ + \tilde{E}_- \right] . \label{eqn-factored-2nd-nlRight-fwdbck-diff} \end{eqnarray} \section{Forward-backward coupling}\label{S-forwardbackward} We can see in eqns.(\ref{eqn-firstorder-Gpm-comoving}, \ref{eqn-firstorder-Epm-comoving}, \ref{eqn-factored-2nd-nlRight-fwdbck}) that we simplify into a forward-only picture by dropping the part of the nonlinear polarization term due to the backward field. In situations where there is no pre-existing backward field, and where there are no interfaces to cause reflection, this is an excellent approximation that holds true in the regime of weak nonlinearity.
It is only an approximation, because the nonlinear polarization drives {\em both} the forward and backward fields, so in strongly nonlinear systems, a backward wave can be generated directly by the forward wave. The important ``weak nonlinearity'' criterion for perturbative nonlinearities to guarantee the validity of a forward-only model is \cite{Kinsler-2007josab} ~ \begin{eqnarray} \frac{1}{n_0^2} \sum_{m>1} m \chi^{(m)} E^{m-1} &\ll& 1. \end{eqnarray} On the subject of reflections from interfaces, it is worth noting that ``nonlinear'' reflections can occur even if the linear dispersion on both sides is identical -- as long as the nonlinearity changes, as in e.g. periodic poling, where its sign changes. Note that since nonlinearities are in practice very weak (e.g. $\chi^{(3)} E^{2} \sim 0.06$ at the damage threshold of fused silica), uni-directional propagation models perform very well, and the role of nonlinear reflections is generally negligible. \section{Conclusions}\label{S-conclusion} I have described three forms for the spatial propagation of optical fields: Maxwell's equations, directional fields, and second order wave equations. These forms have been described in both standard and envelope-carrier pictures. While solving Maxwell's equations remains the ``gold standard'' and most exact procedure, it is computationally demanding, and it can be difficult to set up initial conditions. These difficulties are avoided by using a directional fields approach, where we can propagate more efficiently in the usual forward-only cases. Further, envelope theories based on forward-only directional fields give equations of motion similar in form to the traditional SVEA ones based on the second order wave equation, but without requiring complicated approximations. When comparing the various approaches taken to directional fields, a number of important points stand out.
\begin{enumerate} \item The first successful attempt at deriving useful directional versions of Maxwell's equations was by Kolesik et al. \cite{Kolesik-MM-2002prl,Kolesik-M-2004pre}. \item The most flexible and complete formulation is the directional $G^\pm$ fields of Kinsler et al. \cite{Kinsler-2010pra-dblnlGpm,Kinsler-RN-2005pra}, relying only on simple combinations of Maxwell's equations to achieve a directional form. It is applicable to propagation media with {\em any} frequency-dependent electric or magnetic properties, and variant forms \cite{Kinsler-2006arXiv-fleck} can be used if required. \item The factorization style approach \cite{Blow-W-1989jqe,Ferrando-ZCBM-2005pre,Genty-KKD-2007oe,Kinsler-2010pra-fchhg} gives propagation equations for the electric field that can be simply expressed and solved without the construction of the conceptually abstract $G^\pm$ directed fields, even for media with a magnetic response \cite{Kinsler-2010pra-fchhg}. \end{enumerate} It is encouraging that these three approaches discussed in this paper (Maxwell's equations, directional $G^\pm$ fields, and factorized second order wave equations) all give essentially identical results in the case of uni-directional propagation in non-magnetic media. \section{Acknowledgments} I acknowledge a wide variety of useful discussions with G.H.C. New, S.B.P. Radnor, J.M. Dudley, and G. Genty. I also thank N. Broderick for bringing refs. \cite{Sipe-PD-1994josaa,deSterke-SS-1996pre} to my attention; and to M. Scalora for refs. \cite{Dowling-SBB-1994jap,Scalora-DBB-1994prl,Scalora-C-1994oc}.
\section{Introduction} \label{sec:introduction} A broad class of disordered soft materials, including emulsions~\cite{Becu:2006}, foams~\cite{Rouyer2008}, colloids~\cite{Mason1995,Knaebel2000}, microgels~\cite{Cloitre2000}, and star polymers~\cite{Rogers2010}, share in common several notable rheological properties. In nonlinear flows, their steady state flow curve of shear stress $\sigma$ as a function of shear rate $\ensuremath{\dot{\gamma}}$ is often fit to the form $\sigma=\sigma_{\rm y}+a\ensuremath{\dot{\gamma}}^n$ with $n<1$, corresponding to yield stress fluid behavior for $\sigma_{\rm y}> 0$ and power law fluid behavior for $\sigma_{\rm y}=0$. In the regime of linear response, under a small amplitude oscillatory shear strain, their viscoelastic storage and loss moduli, $G'(\omega)$ and $G''(\omega)$, are often in near constant ratio, with $G''/G'$ typically about $0.1$, and with both functions showing only a weak or negligible frequency dependence down to the lowest accessible frequencies. Consistent with the existence of these sluggish relaxation modes, another striking feature is that of rheological aging~\cite{Fielding:2000}, in which a sample's flow response becomes progressively more solid-like as a function of its own age $t_{\rm w}$, defined as the time since it was freshly prepared at time $t=0$, for example by loading it into a rheometer and preshearing it, before a test deformation is later applied after a waiting time $t=t_{\rm w}$. The application of a sustained shear flow will however typically halt this aging process and rejuvenate the sample to a steady state with an effective age set by the inverse flow rate $1/\ensuremath{\dot{\gamma}}$. 
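A flow-curve fit of the form $\sigma=\sigma_{\rm y}+a\ensuremath{\dot{\gamma}}^n$ is a routine nonlinear least squares problem. A minimal sketch (Python with SciPy, assumed available) on synthetic data; the parameter values, noise level and shear-rate range are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the Herschel-Bulkley form sigma = sigma_y + a*gdot**n to a
# synthetic flow curve.  sigma_y > 0 gives yield stress fluid behavior;
# sigma_y = 0 reduces to a power law fluid.  All values illustrative.
def herschel_bulkley(gdot, sigma_y, a, n):
    return sigma_y + a * gdot**n

gdot = np.logspace(-3, 1, 40)                    # imposed shear rates
sigma = herschel_bulkley(gdot, 1.0, 0.5, 0.4)    # "true" curve
rng = np.random.default_rng(0)
sigma_noisy = sigma * (1 + 0.01 * rng.normal(size=gdot.size))

popt, pcov = curve_fit(herschel_bulkley, gdot, sigma_noisy, p0=[0.5, 1.0, 0.5])
sigma_y_fit, a_fit, n_fit = popt                 # recovered parameters
```

Sampling the shear rates logarithmically, as here, is what constrains $\sigma_{\rm y}$: it is set by the low-rate plateau, while $a$ and $n$ are set by the high-rate branch.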
These shared rheological features have been attributed to the generic presence in these materials of the underlying `glassy' features of structural disorder ({\it e.g.} in a disordered packing of emulsion droplets or foam bubbles) and metastability ({\it e.g.} in the large energy barriers involved in stretching soap films, which impede droplet rearrangements). The term `soft glassy materials' has accordingly been coined to describe them~\cite{sollich1997,Fielding2014}. In the rheological literature, soft glasses are often also referred to as yield stress fluids. Recently, these have been suggested to fall into two broad categories: `simple' and `viscosity bifurcating'~\cite{Moller2009,Fielding2014} yield stress fluids. Among these, viscosity bifurcating fluids~\cite{Ragouilliaux2007,Moller2009,Fall2010,Martin2012} typically exhibit a strong time dependence (sometimes called thixotropy) in their transient rheological response. Furthermore, under a sustained applied shear flow they typically exhibit shear banding, with their steady state flow field comprising macroscopic bands of differing viscosities, with layer normals in the flow-gradient direction. This ability to support steady state shear bands is thought to stem from a non-monotonicity in the underlying constitutive curve of shear stress as a function of shear rate (as pertaining to initially homogeneous flow states). In contrast, simple yield stress fluids~\cite{Coussot2009,Ovarlez2010,Ovarlez2013} typically show much weaker thixotropy and are thought to have a monotonic constitutive curve, being thereby incapable of exhibiting shear banding as their steady response to a sustained applied shear flow (at least in the absence of concentration coupling). 
Beyond the steady state shear banding just described, recent years have seen an increasing realization that shear bands might also form quite generically in flows that involve a strong time-dependence~\cite{Fielding:2016,Moorcroft2011}, even in materials that have a purely monotonic underlying constitutive curve and are therefore incapable of supporting shear bands as their steady state response to a sustained applied shear flow of constant rate. (In fact this prediction applies not only to soft glassy materials but to complex fluids more generally~\cite{Moorcroft2013a,Fielding2014}, though we restrict our attention to soft glasses in this work.) To date, this concept has been investigated in detail in the transiently time-dependent flows of shear startup and step stress, as we now summarize. In shear startup, an initially well rested sample is subject at some time $t=t_{\rm w}$ to the switch-on of a shear rate $\ensuremath{\dot{\gamma}}$ that is held constant thereafter. Measured in response to this is the material's shear stress startup curve as a function of the time (or equivalently of the accumulated strain) since the inception of the flow. Typically, this signal rises initially linearly at early times, before displaying an overshoot after which the stress finally falls to attain its steady state value as prescribed by the material's flow curve at the given imposed shear rate. In Ref.~\cite{Moorcroft2013a}, it was suggested that the presence of this overshoot should generically predispose a material to the formation of shear bands, at least transiently, as the stress declines from its overshoot to the final steady state value. (In this steady state, the flow field may either remain banded, in a viscosity bifurcating fluid; or heal back to homogeneous flow, in a simple yield stress fluid with a monotonic underlying constitutive curve.) 
This phenomenon has indeed been widely observed: experimentally in carbopol gel~\cite{divoux2010,Divoux:2011b}, Laponite clay~\cite{Martinetal2012a,gibaud2008}, a non-Brownian fused silica suspension~\cite{Kurokawa:2015} and waxy crude oil~\cite{Dimitriou:2014}; in molecular simulations of a colloidal gel~\cite{Colombo:2014}, polymeric fluids~\cite{Mohaghehi:2016} and molecular glasses~\cite{Yunfeng:2007,Varnik:2004}; and in theoretical studies of a model foam~\cite{Kabla:2007,Barry:2010}, the soft glassy rheology and fluidity models~\cite{Moorcroft2011,Fielding2014,Lehtinen:2013}, the STZ model of amorphous elastoplastic solids~\cite{Manning:2007,Manning:2009a}, a mesoscopic model of plasticity~\cite{Jagla:2010}, and a model of polymer glasses~\cite{Fielding:2013}. In cases where the height of the stress overshoot increases as a function of the age of the sample before shearing commenced, the severity of the shear banding is predicted to increase accordingly. In a step stress experiment, an initially well rested sample is subject at some time $t=t_{\rm w}$ to the switch-on of a stress $\sigma$ that is held constant thereafter. Measured in response to this is the material's creep curve $\gamma(t)$, often reported via its time derivative $\ensuremath{\dot{\gamma}}(t)$. In soft glasses, this signal typically displays an initial regime of slow creep in which $\ensuremath{\dot{\gamma}}$ progressively decreases over time, followed (for stress values $\sigma>\sigma_{\rm y}$) by a yielding process in which $\ensuremath{\dot{\gamma}}$ increases to finally attain its value as prescribed by the steady state flow curve at the given stress.
In Ref.~\cite{Moorcroft2013a}, it was suggested that a material should be generically predisposed to the formation of shear bands during this yielding process that follows the initial regime of slow creep, during the time-interval over which the time-differentiated creep curve simultaneously curves up and slopes up as a function of time and the sample starts flowing. This phenomenon has indeed been observed: experimentally in carbopol gel~\cite{Divoux:2011a,Magnin:1990}, carbon black~\cite{Gibaud:2010,Grenard:2014} and a colloidal glass~\cite{Sentja:2015}; in particle based simulations of colloidal glasses~\cite{Chaudhuri:2013}; and in stochastic simulations of the soft glassy rheology model~\cite{Moorcroft2013a,Fielding2014}. In the shear startup and step stress protocols just described, the time-dependence is transient in nature, typically persisting for just a few strain units during the time taken to establish a final steady flow out of an initial rest state. In consequence, for a simple yield stress fluid at least, the associated shear banding is itself transient: the bands that form as the material initially yields and starts flowing then subsequently heal away to give a homogeneous final steady state. (A viscosity bifurcating fluid can instead maintain bands even in steady state, due to the non-monotonic underlying constitutive curve.) In view of this, an important question of fundamental principle is whether an imposed flow that has a {\em sustained} time-dependence can give rise to correspondingly sustained shear banding, even in a simple yield stress fluid that is unable to support banding as its ultimate steady state response to a steadily imposed shear flow of constant rate. Indeed, intuitively we might expect a square wave caricature of a large amplitude oscillatory strain to correspond to a repeating sequence of forward then reverse shear startup runs. 
In any regime in which these repeated startup events are associated with an overshoot in the signal of stress as a function of strain, we might intuitively expect shear banding in each half cycle, associated with these overshoots. Likewise, we might intuitively expect a square wave caricature of a large amplitude oscillatory stress to correspond to a repeated sequence of positive then negative step stress experiments. In any regime in which each repeated step is associated with a yielding process of the kind discussed above for the simpler protocol of a single step stress, we might intuitively expect to find shear banding associated with these yielding events in each half cycle. In what follows, we investigate this scenario by studying the response of the soft glassy rheology (SGR) model~\cite{sollich1997,Sollich1998}, in its form as extended to allow for the possibility of heterogeneous shear flows~\cite{Fielding2008}, to several different large amplitude time-periodic imposed shear flows. We consider in turn the protocols of large amplitude oscillatory shear strain (LAOStrain), large amplitude square wave strain rate, large amplitude triangle wave strain rate, large amplitude sawtooth strain rate, and large amplitude oscillatory shear stress (LAOStress). In each case, we shall demonstrate shear banding to be an important part of the flow response across a wide range of values of the amplitude $\gamma_0$ (or $\sigma_0$) and frequency $\omega$ of the imposed flow. In the limit of zero frequency $\omega\to 0$ of the imposed oscillation, our initial intuition might lead us to expect to recover a situation in which the system simply quasistatically sweeps up and down its steady state flow curve during the course of each cycle, with the flow remaining homogeneous at all times (in a simple yield stress fluid at least). 
Crucially, however -- and counterintuitively -- in the glass phase we shall find that banding persists even at the lowest frequencies accessible numerically, in a manner that furthermore appears consistent with the idea that it would persist even to the limit $\omega\to 0$, were this accessible numerically. We emphasize that this is true even for the simple yield stress fluids considered here, which have a purely monotonic underlying constitutive curve and are unable to support banding as their true steady state response to a sustained applied shear of constant rate $\ensuremath{\dot{\gamma}}$. We shall show that this arises from a repeated competition, within each cycle, between glassy aging and flow-rejuvenation: the sample ages (with its typical stress relaxation timescale $\tau$ increasing) during the weak flow phase of each cycle, then is rejuvenated during the strong flow phase (with $\tau$ decreasing). Put simply: an aging material has no fixed intrinsic stress relaxation rate $1/\tau$ compared to which we can set the driving frequency $\omega$ to be small and expect to recover steady state response. This scenario has far reaching implications for the flow behavior of aging glassy materials, in suggesting a possible generic predisposition to shear banding even in flows of arbitrarily slow time-dependence. The protocol of large amplitude oscillatory shear (LAOS)~\cite{Hyun2011} has been the focus of intense interest in the rheology community in recent years, in particular for its suggested use in `fingerprinting' complex fluids via a series of tests in which the amplitude and frequency of the imposed oscillation are separately varied. At high frequencies, a material's elastic response is probed. At low frequencies, viscous response might a priori be expected (although in the aging materials of interest here that idea should be treated with caution in view of the remarks of the previous paragraph).
Large amplitude flows probe nonlinear response, while linear viscoelastic response is recovered for small amplitudes. \begin{figure*}[!tbp] \includegraphics[width=15.0cm]{./Fig1.eps} \caption{The large amplitude time-periodic shear flows that we shall consider: a) oscillatory strain, b) square wave strain rate, c) triangle wave strain rate, d) sawtooth strain rate, and e) oscillatory stress. For each of a) to e), the top panel shows the strain (or stress) and the bottom panel shows the corresponding rate. The horizontal axis is the same in each subpanel.} \label{fig:protocols} \end{figure*} In the context of yield stress fluids, LAOS has been studied both experimentally~\cite{Yoshimura1987,Knaebel2000,Viasnoff2003,Rouyer2008,Ewoldt2010,Renou2010,Guo2011,VanderVaart2013,Koumakis2013,Poulos2013,Poulos2015} and theoretically~\cite{Yoshimura1987,Viasnoff2003,Ewoldt2010,Rogers:2011,Rogers:2012,Koumakis2013,Mendes2013,Blackwell2014,Sollich1998,Rouyer2008}. In terms of a consideration of shear banding in this protocol, however, few experiments have directly imaged the flow field across the sample, although strain localization was reported in foam in Ref.~\cite{Rouyer2008} and in concentrated suspensions in Ref.~\cite{Guo2011}. In a similar spirit, all the theoretical studies of which we are aware have simply assumed the flow to remain homogeneous, discarding upfront the possibility of banding. A central contribution of this work is to suggest that aging yield stress fluids might generically be expected to exhibit shear banding in LAOS, and furthermore that the presence of banding has a major influence on the measured bulk rheological signals. Indeed, we shall show that a system's Lissajous-Bowditch curves can differ strongly when calculated within the assumption of a purely homogeneous flow, compared with a calculation that allows bands to form.
This suggests that attempts to rheologically fingerprint a fluid without taking banding properly into account -- as is widespread in much of the existing theoretical LAOS literature -- should be treated with caution. In a previous Letter~\cite{Ranga:2016}, we announced the basic result that an aging yield stress fluid, as modeled by the soft glassy rheology model in its glass phase, can exhibit shear banding in large amplitude time-periodic shear strain protocols. That study was restricted to the model's glass phase, where its noise temperature parameter (defined below) $x<1$, presenting numerical results for the single value $x=0.3$. The present paper contains a much more detailed discussion of the results announced in Ref.~\cite{Ranga:2016}. It also extends our study to a much broader range of noise temperatures, including those above the glass point, $x>1$, where the model shows power law fluid behavior, with no yield stress. We report significant banding here too, suggesting that the scenario is applicable not only to aging yield stress fluids, but also to power law fluids with sluggish relaxation timescales. The present manuscript also gives new results for shear banding of soft glasses in large amplitude oscillatory stress. The paper is structured as follows. In Sec.~\ref{sec:protocols} we define the flow protocols to be considered. Sec.~\ref{sec:model} outlines the SGR model in which we shall perform the study, together with our simulation method and some results used to benchmark it. We then present our results: in Sec.~\ref{sec:LAOStrain} for shear banding in large amplitude oscillatory shear strain, in Sec.~\ref{sec:LAOSothers} for large amplitude square or triangular or sawtooth wave strain rates, and in Sec.~\ref{sec:LAOStress} for large amplitude oscillatory shear stress. Sec.~\ref{sec:conclusions} discusses our conclusions. 
\section{Flow Protocols} \label{sec:protocols} In this section, we define the rheological protocols to be studied throughout the paper. In each case, we shall consider a sample of fluid that is freshly prepared at some time $t=0$ and then left to age undisturbed for a waiting time $t_{\rm w}$ before the periodic flow is switched on. (We shall discuss in Sec.~\ref{sec:model} the way in which we model a freshly prepared sample in the SGR model.) For the imposed flow, we shall consider several different possible waveforms, listed as follows. For each strain-imposed waveform, the strain amplitude will be denoted $\gamma_0$, and the strain-rate amplitude $\ensuremath{\dot{\gamma}}_0$. Likewise in the stress-imposed waveform, the stress amplitude is denoted $\sigma_0$ and the amplitude of the rate of change of the stress $\dot{\sigma}_0$. \begin{itemize} \item Large amplitude oscillatory shear strain, abbreviated to LAOStrain. Here $\gamma(t)=\gamma_0\sin(\omega (t-t_{\rm w}))$. See Fig.~\ref{fig:protocols}a). \item Large amplitude square wave strain rate, in which the strain rate periodically switches between equal positive and negative values, with a switching time $\pi/\omega$. The associated strain signal is triangular. See Fig.~\ref{fig:protocols}b). \item Large amplitude triangular wave strain rate, in which the strain rate is piecewise linear and continuous in value but with repeated slope discontinuities. See Fig.~\ref{fig:protocols}c). \item Large amplitude sawtooth wave strain rate, in which the strain rate is piecewise linear with repeated discontinuities in value. See Fig.~\ref{fig:protocols}d). \item Large amplitude oscillatory shear stress, abbreviated to LAOStress. Here $\sigma=\sigma_0 \cos(\omega (t-t_{\rm w}))$. See Fig.~\ref{fig:protocols}e). 
\end{itemize} After many cycles have been performed, in any regime where significant shear banding arises, the response of the system becomes (at least to excellent approximation) invariant from cycle-to-cycle $t\to t+2\pi/\omega$, and independent of the waiting time $t_{\rm w}$ before the flow commenced. For an initial waiting time $t_{\rm w}=10.0$, this state of cycle-to-cycle invariance is typically achieved after $50$ cycles. Except where stated, all our results below are for an initial waiting time $t_{\rm w}=10.0$ and for a run in which $50$ cycles are performed before we then start taking measurements. Such results have therefore achieved cycle-to-cycle invariance. Indeed, to obtain better statistics in calculating the Lissajous-Bowditch curves, we generally average the data over the $50$th to $100$th cycles. An entirely feasible experimental protocol, however, would be to wait for the sample to become highly aged before then performing a LAOS run comprising just a few tens of cycles. Accordingly, we shall also present data for $t_{\rm w}\to\infty$ (i.e., initializing the sample in equilibrium above the glass point), performing only $50$ cycles before we then average the system's response over the next $50$ cycles. (During those $50$ cycles over which we average, a small degree of time-variation does in fact occur, during the system's slow transient evolution to the state of cycle-to-cycle invariance after $1000$ cycles.) Such data are clearly not in a state of cycle-to-cycle invariance, but do correspond to the experimentally feasible situation of an old sample subject to a few tens of LAOS cycles. At lower strain amplitudes, in the absence of shear banding, indefinite cycle-to-cycle aging is expected even after many cycles. This has been studied in detail previously~\cite{Fielding:2000} and we do not consider it further here. 
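For concreteness, the five imposed waveforms listed above can be sampled on a discrete time grid as follows (Python/NumPy). The values of $\gamma_0$, $\sigma_0$, $\omega$ and $t_{\rm w}$ are illustrative, and for comparability the rate amplitude is taken as $\ensuremath{\dot{\gamma}}_0 = \gamma_0\omega$:

```python
import numpy as np

# The five large amplitude waveforms of Fig. 1, sampled over two cycles.
# gamma0, sigma0, omega, t_w are illustrative; gdot0 = gamma0*omega is
# the strain-rate amplitude used for the rate-imposed waveforms.
gamma0, sigma0, omega, t_w = 1.0, 0.5, 0.01, 10.0
gdot0 = gamma0 * omega
t = np.linspace(t_w, t_w + 4 * np.pi / omega, 2001)
phase = omega * (t - t_w)

gamma_laos  = gamma0 * np.sin(phase)                            # a) LAOStrain
gdot_square = gdot0 * np.sign(np.cos(phase))                    # b) square wave rate
gdot_tri    = gdot0 * (2 / np.pi) * np.arcsin(np.sin(phase))    # c) triangle wave rate
gdot_saw    = gdot0 * (np.mod(phase, 2 * np.pi) / np.pi - 1.0)  # d) sawtooth rate
sigma_laos  = sigma0 * np.cos(phase)                            # e) LAOStress
```

The square wave rate switches sign every $\pi/\omega$ and integrates to a triangular strain of amplitude $\ensuremath{\dot{\gamma}}_0\pi/2\omega$, while the sawtooth rate jumps in value once per period, as in Fig.~1d).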
To seed the formation of shear bands we add a small perturbation to the initial condition, such that the effective initial sample age as a function of position $y$ across the rheometer gap of width $L_y$ is $t_{\rm w}\left[1+\epsilon\cos(2\pi y/L_y)\right]$ with $\epsilon=0.1$. In obtaining the result of Fig.~\ref{fig:transition} only, we also (in order to mitigate noise) included a toy model of flow cell curvature, by rendering the shear stress a function of position across the cell $\sigma\left[1+\kappa\cos(2\pi y/L_y)\right]$ with $\kappa=0.01$. (In true planar shear, the shear stress must be uniform across the cell, giving $\kappa=0$.) \section{Soft Glassy Rheology Model} \label{sec:model} We perform our study within the soft glassy rheology (SGR) model, which we now summarize, referring the reader to Refs.~\cite{sollich1997,Sollich1998,Fielding2008} for full details. The model considers an ensemble of elements, each of which is taken to correspond to a local mesoscopic region of a soft glassy material comprising (say) a few tens of emulsion droplets. Each element is assigned local continuum variables of shear strain $l$ and stress $kl$, with $k$ constant, which describe the elastic deformation of this region of material relative to a state of locally undeformed equilibrium. The macroscopic stress of the sample as a whole is taken to be the average over the local elemental stresses: \begin{equation} \sigma(t)=k\int dE \int dl\; l P(E,l,t). \end{equation} The elements are then taken to undergo loading and activated hopping dynamics in an energy landscape of traps, as follows. Under an imposed deformation, each element experiences a buildup of local elastic stress such that, between hops, the local intra-trap strain of each element affinely follows the macroscopic strain field, $\dot{l}=\ensuremath{\dot{\gamma}}$. These local stresses are then intermittently released by local plastic yielding events.
Each such yielding event is taken to correspond to the hopping of an element out of one trap and into another. These hopping events are modeled as being dynamically activated: an element in a trap of depth $E$ and with local shear strain $l$ is assigned a probability per unit time of yielding given by $\tau^{-1}(E,l)=\tau_0^{-1}\exp\left[-(E-\tfrac{1}{2}kl^2)/x\right]$. In this expression, the parameter $x$ is an effective mean field noise temperature, intended to model coupling with other yielding events elsewhere in the sample. Upon yielding, an element instantaneously resets its local stress to zero and selects its new energy barrier at random from a distribution $\rho(E)=\exp(-E/x_g)$. In a freshly prepared sample, we assume a distribution $P(E,l)=\rho(E)\delta(l)$, corresponding to a well rested system just quenched from a high noise temperature. This exponential `prior' distribution $\rho(E)$ confers a broad spectrum of yielding times $P(\tau)$ and results in a glass phase for $x<x_g$ in which the model exhibits rheological aging, with the typical relaxation timescale increasing linearly with the system's age $t_{\rm w}$ in the absence of flow. The application of a sustained flow however rejuvenates the sample and restores it to an effective age that is set by the inverse flow rate $1/\ensuremath{\dot{\gamma}}$. Throughout we use units in which $x_g=1$, $k=1$ and $\tau_0=1$. The steady state flow curve $\sigma(\ensuremath{\dot{\gamma}})$ of shear stress as a function of shear rate has a yield stress $\sigma_{\rm y}(x)$ for noise temperature $x<1$ in the glass phase, beyond which it rises monotonically according to $\sigma-\sigma_{\rm y}\sim \ensuremath{\dot{\gamma}}^{1-x}$. This gives simple yield stress fluid behavior, precluding steady state banding. For noise temperatures $1<x<2$ the flow curve is of power law fluid form, with $\sigma\sim\ensuremath{\dot{\gamma}}^{x-1}$.
For $x>2$, we recover a Newtonian flow curve with $\sigma\sim \ensuremath{\dot{\gamma}}$. \begin{figure}[!tbp] \includegraphics{./Fig2.eps} \caption{Results of our waiting-time Monte Carlo simulations (symbols) for the homogeneous form of the SGR model subject to large amplitude oscillatory shear strain, compared with independent results for the same quantities obtained from analytical expressions (lines). Panel (a) shows the storage $G'$ (filled symbols) and loss $G''$ (unfilled symbols) modulus for the fundamental mode; and (b) shows the residual $q$ measuring the weight in all higher modes. For each quantity, curves top to bottom are for frequency values $\omega=10^{-1} (\square),10^{-2} (\textcolor{blue}{\circ}),10^{-3} (\textcolor{red}{\triangle})$. The noise temperature $x=1.5$, above the glass transition. Number of streamlines $n=1$, number of SGR elements per streamline $m=1000$. We thank Prof. Peter Sollich for providing us with the data from the analytical expressions~\cite{Sollich1998}.} \label{fig:comp_sollich} \end{figure} So far we have described the model in its original form~\cite{sollich1997,Sollich1998}, which is spatially homogeneous and unable to account for any heterogeneous flow effects such as shear banding. In Refs.~\cite{Fielding2008,Moorcroft2013a}, we provided an extension to the model to allow for the formation of coexisting shear bands with layer normals in the flow gradient direction $y$. This adopts a 1D approach in which the velocity is confined to the flow direction $x$ and varies only in the flow-gradient direction $y$, with the $y$ coordinate discretized into $i=1 \cdots n$ streamlines of equal spacing $L_y/n$, for a sample of thickness $L_y$ between rheometer plates at $y=0,L_y$. The shear rate field is then $\ensuremath{\dot{\gamma}}_i(t)$ as the coordinate $y$ varies across the streamlines $i=1\cdots n$.
At any streamline $i$, this is related to the fluid velocity $v$ in the $x$ direction by the spatially discretized derivative $\ensuremath{\dot{\gamma}}(y)=dv(y)/dy$, i.e., $\ensuremath{\dot{\gamma}}_i=(v_{i+1}-v_{i-1})/(2L_y/n)$. Although the shear rate field does not vary in $x$, each streamline has its own sub-ensemble of $j=1\cdots m$ SGR elements, with the shear stress of the $i$th streamline defined as $\sigma_i=(k/m)\sum_j l_{ij}$. In this way, this 1D model essentially comprises a series of SGR models stacked in the $y$ direction, coupled by a 1D Stokesian force balance, which we now describe. In zero Reynolds number conditions of creeping flow, which we assume throughout, the condition of force balance imposes, in this 1D approach, that the shear stress must remain uniform across all streamlines at all times, $\sigma_i(t)=\sigma(t)$. However, suppose a hop occurs at element $ij$ when its local strain is $l=\ell$, reducing the stress on that streamline. With the model as described so far, this potentially violates force balance. To correct for this, we restore force balance by updating all elements on the same streamline $i$ according to $l\to l + \ell/m$. This ensures uniform stress across streamlines, but with an overall sample stress that is incorrectly unchanged compared with that before the hop. To ensure a properly reduced global stress after the hop, we then update all elements on all streamlines as $l \to l - \ell/mn$. The scenario of force balance just described implements the propagator implied by Stokesian balance in the single spatial dimension $y$, with translational invariance imposed in $x$. A 2D approach would instead be possible, using the 2D propagator discussed in detail in~\cite{Picard:2004} and used in 2D elasto-plastic lattice models in~\cite{Picard:2005}. (Indeed Ref.~\cite{Picard:2004} describes how its 2D propagator reduces to 1D upon integrating over the flow direction $x$.) 
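The force-balance bookkeeping just described is straightforward to check numerically. The sketch below (Python/NumPy) performs a single yielding event: an element is chosen with probability proportional to its activated hop rate $\tau^{-1}(E,l)$ defined earlier, resets its strain, and the two redistribution updates are applied. The system size and initial state are illustrative, not converged simulation data.

```python
import numpy as np

# One yielding event with force-balance bookkeeping in the spatially
# resolved SGR model.  n streamlines x m elements; hop rates follow
# tau^{-1}(E,l) = tau0^{-1} exp[-(E - 0.5*k*l^2)/x].  All parameter
# values and the initial state are illustrative.
tau0, k, x = 1.0, 1.0, 0.5
n, m = 4, 5
rng = np.random.default_rng(1)
E = rng.exponential(1.0, size=(n, m))            # trap depths from rho(E) = exp(-E)
l = np.tile(0.1 * rng.normal(size=m), (n, 1))    # stress initially uniform across streamlines

sigma_before = k * l.mean(axis=1)                # per-streamline stress (k/m)*sum_j l_ij

r = np.exp(-(E - 0.5 * k * l**2) / x) / tau0     # elemental hop rates
i0, j0 = divmod(rng.choice(n * m, p=(r / r.sum()).ravel()), m)

ell = l[i0, j0]                                  # strain carried by the yielding element
l[i0, j0] = 0.0                                  # its local stress resets to zero
E[i0, j0] = rng.exponential(1.0)                 # new trap depth drawn from rho(E)
l[i0, :] += ell / m                              # restore force balance on streamline i0
l -= ell / (m * n)                               # correct the overall sample stress

sigma_after = k * l.mean(axis=1)
```

After the event the streamline stresses remain equal to one another, and each has dropped by exactly $k\ell/mn$; in the full algorithm the waiting time to this event would also be drawn stochastically, as described below.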
We expect our 1D approach to be well suited to the problem in hand here, of studying shear bands that form with layer normals in the flow gradient direction. To account for the structure of the interface between any shear bands that form~\cite{Lu1999}, we further incorporate a small stress diffusivity between neighboring streamlines. To do so, after the hop of an element with strain $\ell$ on streamline $i$ as just described, we further adjust the strain of three randomly chosen elements on each adjacent streamline $i\pm 1$ by $\ell w(-1,+2,-1)$, with $w$ small. Our numerical simulations of this model are performed using an event-driven waiting time Monte Carlo algorithm~\cite{Bortz:1975,Gillespie:1976,Fielding2008}. In each `event', the next element to yield is selected stochastically: the probability $P_{ij}$ that the next element to yield is the $j$th particle on the $i$th streamline is $P_{ij}=r_{ij}/\sum_{ij}r_{ij}$, given an elemental hop rate $r_{ij}=\tau^{-1}(E_{ij},l_{ij})=\tau_0^{-1}\exp\left[-(E_{ij}-\tfrac{1}{2}kl_{ij}^2)/x\right]$. The time interval $dt$ to the next hop is also selected in a stochastic way: $dt=-\ln(s)/\sum_{ij}r_{ij}$, where $s$ is a random number selected from a uniform distribution between $0$ and $1$. All results reported are converged with respect to increasing the number of streamlines $n$ and the number of elements per streamline $m$. For further details of this simulation method, the reader is referred to~\cite{Fielding2008}. \begin{figure*}[!tbp] \includegraphics[width=18.0cm]{./Fig3.eps} \caption{Response of the SGR model to LAOStrain of amplitude $\gamma_0=1.59$ and frequency $\omega=0.001$ for three different noise temperatures: $x=0.3$ (top row), $x=0.7$ (middle row) and $x=1.1$ (bottom row). Sample age before shearing commenced $t_{\rm w}=10$ for $x=0.3,0.7$, and $t_{\rm w}\to\infty$ (i.e., sample initialized in equilibrium) for $x=1.1$. Data shown for cycle number $N=50$. 
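One event of this waiting-time algorithm might be sketched as follows (an illustrative Python fragment under our own variable names, with the trap depths $E_{ij}$ and strains $l_{ij}$ stored as $(n,m)$ arrays; not the authors' actual code):

```python
import numpy as np

def kmc_step(E, l, x, k=1.0, tau0=1.0, rng=None):
    """One event of the waiting-time Monte Carlo algorithm: draw the
    time to the next hop from the total rate, and pick the hopping
    element with probability proportional to its own rate."""
    if rng is None:
        rng = np.random.default_rng()
    # elemental hop rates r_ij = tau0^-1 exp[-(E_ij - k l_ij^2 / 2)/x]
    r = np.exp(-(E - 0.5 * k * l**2) / x) / tau0
    R = r.sum()
    dt = -np.log(rng.random()) / R                 # exponential waiting time
    flat = rng.choice(r.size, p=(r / R).ravel())   # element chosen with P = r/R
    i, j = np.unravel_index(flat, r.shape)
    return dt, int(i), int(j)
```

Because the event selection is rejection-free, deeply trapped (slow) elements cost no simulation effort between their rare hops, which is what makes the algorithm efficient for glassy dynamics.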
Signals show: (first column) shear stress as a function of time over a cycle, (second column) snapshot shear banded velocity profiles normalized by $V_0=\ensuremath{\dot{\gamma}}_0 L$ at three times over a cycle, (third column) degree of shear banding as a function of time over a cycle, and (fourth column) inverse effective sample age as a function of time over a cycle. Flow profiles in the second column are shown for the times indicated by the corresponding symbols in the other columns. Number of streamlines $n=100$. Number of SGR elements per streamline $m=100$. Diffusivity $w=0.1$. Toy curvature parameter $\kappa=0$. Initial heterogeneity $\epsilon=0.1$. } \label{fig:velo_prof} \end{figure*} As a check of our code, we compared the results of runs with a single streamline $n=1$, for which the flow is homogeneous by definition, with those of analytical calculations for the original homogeneous model~\cite{Sollich1998,Fielding:2000}. We did so for both small and large amplitude oscillatory strain and stress, and for the model's transient and steady state response to a shear startup and an imposed step stress. We do not show data for all these, but a sample comparison is shown in Fig.~\ref{fig:comp_sollich} for large amplitude oscillatory shear strain at a noise temperature $x=1.5$ above the glass point. Ignoring the transient behavior, the stress response to such a deformation can be written \begin{equation} \sigma (t) = \gamma_0 [ G' \sin(\omega t) + G'' \cos(\omega t) ] + \delta \sigma(t) \ , \label{eqn:moduli} \end{equation} where $G'(\omega,\gamma_0)$ and $G''(\omega,\gamma_0)$ are the storage and loss moduli that characterize the response of the system at the level of the fundamental mode, with all the higher harmonic stress contributions being measured by the residual $q(\omega,\gamma_0)$, where \begin{equation} \label{eqn:residual} q^2=\frac{\int dt [\delta \sigma(t)]^2}{\int dt [\sigma(t)]^2}.
\end{equation} To within numerical noise, we find excellent agreement between these quantities computed within our stochastic simulation and the same quantities computed from analytical expressions. \iffalse We note that for the simulations near equilibrium, that is at low $\gamma_0$, the initial trap energy distribution is taken to be equal to the equilibrium distribution $\rho(E)=exp(-E(1-1/x))/(1-1/x)$. For $\gamma_0>1$, the initial trap distribution is taken to be completely rejuvenated as $\rho(E)=exp(-E)$ for the simulations to converge faster to the steady state solution. \fi \section{Reported measures} In what follows, we shall be interested in the extent to which the response of a sample to large amplitude time-periodic shear protocols is shear banded, for different values of the amplitude and frequency of the imposed oscillation. To characterize the degree of shear banding in the sample at any time $t$, we measure the normalized spatial standard deviation of the shear rate across the flow cell \begin{equation} \label{eqn:degree_banding_time} \Delta\ensuremath{\dot{\gamma}}(t)=\frac{1}{N_0}\sqrt{\langle \ensuremath{\dot{\gamma}}^2\rangle_i-\langle\ensuremath{\dot{\gamma}}\rangle_i^2}, \end{equation} where $\langle\cdots\rangle_i$ denotes an average across streamlines. For large amplitude oscillatory shear strain, the normalization factor $N_0=\ensuremath{\dot{\gamma}}_0$. For large amplitude square/triangular/sawtooth wave strain rate, $N_0=\omega\gamma_0$. In normalizing in this way by a quantity that scales with the peak strain rate over the cycle as a whole, rather than the strain rate $\ensuremath{\dot{\gamma}}(t)$ at the given time $t$, Eqn.~\ref{eqn:degree_banding_time} in fact provides a conservative estimate of the degree of banding (while also reducing the error that can arise due to noise when the instantaneous rate $\ensuremath{\dot{\gamma}}(t)$ is used instead).
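For reference, extracting the fundamental-mode moduli of Eqn.~\ref{eqn:moduli} and the residual of Eqn.~\ref{eqn:residual} from a sampled stress cycle amounts to a Fourier projection; a minimal sketch (our own helper, assuming uniform sampling over an integer number of periods with the endpoint excluded):

```python
import numpy as np

def fundamental_moduli(t, sigma, gamma0, omega):
    """Project a steady-state stress signal sigma(t) onto the fundamental
    mode to obtain G', G'', and the higher-harmonic residual q."""
    dt = t[1] - t[0]
    T = dt * len(t)                    # total duration (integer periods)
    s, c = np.sin(omega * t), np.cos(omega * t)
    Gp = 2.0 / (gamma0 * T) * np.sum(sigma * s) * dt    # storage modulus G'
    Gpp = 2.0 / (gamma0 * T) * np.sum(sigma * c) * dt   # loss modulus G''
    dsig = sigma - gamma0 * (Gp * s + Gpp * c)          # residual delta sigma(t)
    q = np.sqrt(np.sum(dsig**2) / np.sum(sigma**2))
    return Gp, Gpp, q
```

Applied to a synthetic signal containing only the fundamental plus a third harmonic, this recovers $G'$, $G''$ and $q$ to machine precision, thanks to the discrete orthogonality of the sampled harmonics.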
In large amplitude oscillatory shear stress, the normalization factor $N_0$ is defined as the maximum shear rate observed at any point in the cycle. (Therefore, in LAOStress $N_0$ is calculated numerically, whereas in imposed-strain protocols it is known upfront.) In summarizing the response of the system over a broad range of values of the amplitude and frequency of the imposed flow, we sometimes instead report the degree of banding as defined in Eqn.~\ref{eqn:degree_banding_time}, now averaged over a cycle: \begin{equation} \label{eqn:degree_banding_average} \Delta_c\gdot=\langle\Delta\ensuremath{\dot{\gamma}}(t)\rangle_T, \end{equation} where $\langle\cdots\rangle_T$ denotes a time average over a cycle. Indeed, to reduce noise we further average $\Delta_c\gdot$ over cycles $N=50$--$100$. Typically, a value $\Delta_c\gdot>0.5$ in this cycle-averaged measure corresponds to significant banding seen in visual inspection of the velocity profiles. For large amplitude oscillatory stress, we report the degree of banding maximized over a cycle, $\Delta_m\gdot$. Finally, we shall find it useful to characterize the way in which the effective age of the sample varies as a function of time over a cycle. To do this, we define \begin{equation} \label{eqn:age} \langle 1/\tau \rangle(t) = \frac{1}{mn}\sum_{i=1}^{n} \sum_{j=1}^{m}\exp \left( - (E_{ij} - \tfrac{1}{2}k l_{ij}^2)/x \right), \end{equation} the inverse of which gives a measure of the sample's age. All our results below are presented for just a single simulation run, apart from in Fig.~\ref{fig:transition}, which averages over twenty-five runs.
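Both reported measures reduce to a few lines of array arithmetic; a sketch (our variable names), given the streamline shear rates and the per-element trap depths and strains:

```python
import numpy as np

def degree_of_banding(gdot_i, N0):
    """Instantaneous degree of banding, Eqn. (degree_banding_time):
    spatial standard deviation of the shear rate across streamlines,
    normalized by the rate scale N0."""
    return np.std(gdot_i) / N0

def inverse_age(E, l, x, k=1.0):
    """Mean inverse relaxation time across the sample, Eqn. (age),
    in units of the attempt rate 1/tau0; the Boltzmann factor matches
    the elemental hop rate used in the simulation."""
    return np.mean(np.exp(-(E - 0.5 * k * l**2) / x))
```

A homogeneous flow gives a banding measure of exactly zero, while a sample split into two equal bands of rates $0$ and $2N_0$ gives the maximal value of one.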
In Fig.~\ref{fig:velo_prof}, a complete cycle of the oscillation is shown for three different values of the noise temperature: two in the glass phase, $x=0.3$ (top row) and $x=0.7$ (middle row), and one just above the glass point, $x=1.1$ (bottom row). The amplitude $\gamma_0$ and frequency $\omega$ of the imposed oscillation are the same in each case. The origin of time is chosen to be that at which the strain rate switches from negative to positive (inset in the top left panel). For each noise temperature, we show the stress as a function of time over one cycle (first column), snapshot shear banded profiles at three different times (second column), the degree of shear banding as a function of time over the cycle (third column) and the inverse of the average stress relaxation time, which can be taken as effectively being the inverse sample age, as a function of time (fourth column). The sample age before shearing commenced $t_{\rm w}=10$ for the noise temperatures $x=0.3,0.7$ in the glass phase in the top two rows, while $t_{\rm w}\to\infty$ (corresponding to a sample initialized in equilibrium) for the noise temperature $x=1.1$ above the glass point in the bottom row. Consider the first half of the cycle, during which the strain rate is positive and the sample is straining in the forward direction. Initially, when the strain rate has only just switched from negative to positive after the end of the previous cycle, the imposed flow is weak and the sample is old and aging. This can be seen from the fact that the inverse effective sample age (fourth column) as defined by Eqn.~\ref{eqn:age} is initially small and decreasing. The associated rheological response is accordingly predominantly elastic, with the stress initially increasing approximately linearly with the accumulating strain (first column).
As the shear rate progressively increases towards its maximum positive value at the end of the first quarter cycle, the effect of the stronger shear is then to rejuvenate the sample, with $\langle 1/\tau \rangle$ increasing to a maximum. Associated with this rejuvenation is an overshoot in the stress as a function of time, with the sample then yielding into a flowing regime where the stress remains relatively constant as a function of time. As the shear rate progressively drops towards the end of the first half cycle, the stress likewise drops and the inverse age decreases (i.e., the sample ages again). The same sequence of processes then repeats in reverse, with appropriate changes in sign, during the second half of the cycle in which the strain rate is negative and the sample strains in the reverse direction. Closely associated with the stress overshoot and subsequent process of yielding during each half of the cycle is the formation of shear bands. This can be seen in the snapshot velocity profiles, $v(y)=\int_{0}^{y} \dot{\gamma}(y') dy'$, in the second column, which deviate strongly from the linear form they would have in the absence of banding. At any time $t$ we take as a measure of the degree of banding the quantity defined in Eqn.~\ref{eqn:degree_banding_time}. Our results for this quantity as a function of time over the cycle are shown in the third column of Fig.~\ref{fig:velo_prof}. As can be seen, this measure increases sharply around the time of the stress overshoot, then subsequently decays. Comparing the three rows in Fig.~\ref{fig:velo_prof}, we find the response is broadly the same in the model's glass phase, where its underlying steady state flow curve has a yield stress, and just above the glass point, where the flow curve is of power law fluid form. The lower noise temperatures however show a more pronounced alternation between aging and rejuvenation within each cycle, and a stronger stress overshoot.
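The snapshot velocity profiles shown in the second column follow from the discretized streamline shear rates by a single cumulative sum; for instance:

```python
import numpy as np

def velocity_profile(gdot_i, Ly=1.0):
    """Velocity profile v(y) = integral_0^y gdot(y') dy' from the shear
    rates on n equally spaced streamlines, with v(0) = 0 at the
    stationary plate. Returns n+1 wall-to-wall values."""
    dy = Ly / len(gdot_i)
    return np.concatenate(([0.0], np.cumsum(gdot_i) * dy))
```

A spatially uniform shear rate recovers the linear profile expected in the absence of banding; any banded state shows up as distinct slopes in different regions of the gap.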
The peak of the degree of banding over a cycle is also slightly stronger for $x<1$. \begin{figure} \centering \includegraphics[width=8cm]{./Fig4.eps} \caption{\textbf{(a)} Oscillatory shear stress response of an SGR model with noise temperature $x=0.3$ at $\gamma_0=1.59$ and $\omega=0.001$, for different weighting factors $w=0.05,0.1,0.15$ of the stress diffusivity; and \textbf{(b)} the corresponding normalized cross-correlation between local shear rate $\ensuremath{\dot{\gamma}}(y)$ and inverse age $\ensuremath{\frac{1}{\tau}}(y)$ at the time indicated by $\circ$. For $w=0.1$, \textbf{(c)} the shear rate and inverse age profiles, and \textbf{(d)} the normalized cross-correlation are shown at different times marked by the symbols in \textbf{(a)}. The other model parameters are $n=100$, $m=100$, $t_{\rm w}=10$. \label{fig:corr} } \end{figure} As seen by comparing the third and fourth columns of Fig.~\ref{fig:velo_prof}, there is a strong temporal correlation between the degree of shear banding and the inverse sample age averaged across the sample. To explore the link between these two quantities in more detail, we now examine the spatial cross-correlation between the local shear rate inside the sample and the local inverse sample age. To do this, we measure the normalized cross-correlation between the local inverse sample age $\ensuremath{\frac{1}{\tau}}(y)$ and shear rate $\ensuremath{\dot{\gamma}}(y)$ across streamlines, as shown in Fig.~\ref{fig:corr}.
The discrete cross-correlation function between $\ensuremath{\dot{\gamma}}$ and $\ensuremath{\frac{1}{\tau}}$ for streamlines $j$ apart is defined as \begin{equation} \rho_{\ensuremath{\dot{\gamma}}\ensuremath{\frac{1}{\tau}}}(j)= \begin{cases} \sum_{i=0}^{n-j-1}\left(\ensuremath{\dot{\gamma}}(i+j) - \overline{\ensuremath{\dot{\gamma}}}\right)\left(\ensuremath{\frac{1}{\tau}}(i) - \overline{\ensuremath{\frac{1}{\tau}}}\right),\ j \ge 0\\ \rho_{\ensuremath{\dot{\gamma}}\ensuremath{\frac{1}{\tau}}}(-j),\ j<0 \ , \end{cases} \end{equation} where $i$ indicates the streamline number, $n$ is the total number of streamlines, and the overline denotes the mean across the sample. The normalized cross-correlation function is given by \begin{equation} \hat{\rho}_{\ensuremath{\dot{\gamma}}\ensuremath{\frac{1}{\tau}}}(j)=\frac{1}{\sqrt{\rho_{\ensuremath{\dot{\gamma}}\gdot}(0)\rho_{\ensuremath{\frac{1}{\tau}}\tauinv}(0)}}\rho_{\ensuremath{\dot{\gamma}}\ensuremath{\frac{1}{\tau}}}(j)\ , \end{equation} where $\rho_{\ensuremath{\dot{\gamma}}\gdot}(0)$ is the autocorrelation function for $\ensuremath{\dot{\gamma}}$. Similar to the stress signal, this normalized cross-correlation function $\hat{\rho}_{\ensuremath{\dot{\gamma}}\ensuremath{\frac{1}{\tau}}}$ can then be averaged over multiple cycles to reduce the noise, and expressed as a function of distance $y/L$ rather than the streamline number. The normalized cross-correlation function allows us to explore how the spatial correlation between the inverse sample age and local shear rate depends on the weighting factor $w$ for stress diffusivity. From Fig.~\ref{fig:corr}(b), it is clear that the width of the cross-correlation $\ensuremath{\langle\hat{\rho}_{\gdot,\tauinv}(y)\rangle}$ increases with increasing diffusivity weighting factor, as is to be expected.
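In code, this normalized cross-correlation is a direct transcription of the definitions above (an illustrative sketch, with `a` and `b` standing for the $\ensuremath{\dot{\gamma}}$ and $\ensuremath{\frac{1}{\tau}}$ streamline profiles):

```python
import numpy as np

def normalized_xcorr(a, b):
    """Normalized discrete cross-correlation rho_hat(j), j = 0..n-1,
    between two streamline profiles a(i) and b(i), mean subtracted and
    normalized by the zero-lag autocorrelations."""
    a = a - a.mean()
    b = b - b.mean()
    n = len(a)
    rho = np.array([np.sum(a[j:] * b[:n - j]) for j in range(n)])
    return rho / np.sqrt(np.sum(a * a) * np.sum(b * b))
```

By the Cauchy--Schwarz inequality every lag satisfies $|\hat{\rho}(j)|\le 1$, with $\hat{\rho}(0)=1$ attained when the two profiles are identical up to an affine rescaling.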
The maximum amplitude of the correlation is highest immediately following the stress overshoot, and decreases as the shear rate changes direction, as shown in Fig.~\ref{fig:corr}(d); this can also be qualitatively inferred by comparing the $\ensuremath{\dot{\gamma}}$ and $\ensuremath{\frac{1}{\tau}}$ profiles given in Fig.~\ref{fig:corr}(c). Thus, over a LAOS cycle, the average inverse sample age can indicate shear banding, and the local inverse sample age is correlated with the region of shear banding. \begin{samepage} \begin{figure}[!htbp] \includegraphics[width=8.0cm]{./Fig5.eps} \caption{Elastic Lissajous-Bowditch curves for the homogeneous (dashed lines) and heterogeneous (solid lines) SGR model in LAOStrain for noise temperatures $x=0.3,0.7,1.1$ in panels (a) to (c) downward. Initial sample age before shearing commenced $t_{\rm w}=10$ in each case. In the heterogeneous calculations, the instantaneous degree of banding $\Delta \dot{\gamma}$ is indicated by the color-scale. The grid of values of $\gamma_0,\omega$ is the same in each panel, and indicated by crosses in Fig.~\ref{fig:phasediag}. Data averaged over 50th to 100th cycles. Heterogeneous calculations have: number of streamlines $n=25$, number of SGR elements per streamline $m=100$, diffusivity $w=0.05$, toy cell curvature $\kappa=0$, and initial heterogeneity $\epsilon=0.1$. Homogeneous calculations have $m=1000$ SGR elements.} \label{fig:ELB} \end{figure} \begin{figure}[!htbp] \includegraphics[width=8.0cm]{./Fig6.eps} \caption{Viscous Lissajous-Bowditch curves for the homogeneous (dashed lines) and heterogeneous (solid lines) SGR model in LAOStrain, for the same noise temperatures and parameter values as in Fig.~\ref{fig:ELB}. Thin dotted lines show the steady state flow curve $\sigma(\ensuremath{\dot{\gamma}})$.
} \label{fig:VLB} \end{figure} \end{samepage} A common way of visualizing the response of a viscoelastic material to an imposed large amplitude oscillatory shear strain is to plot parametrically the stress as a function of strain over the course of a cycle, to give the so-called elastic Lissajous-Bowditch (ELB) curve; or as a function of strain rate over the course of a cycle, to give the viscous Lissajous-Bowditch (VLB) curve~\cite{Ewoldt:2008}. A grid of such figures plotted for different values of the amplitude $\gamma_0$ and frequency $\omega$ of the imposed oscillation then gives a so-called Pipkin diagram, which is commonly used for rheologically fingerprinting viscoelastic fluids. Our results for Pipkin diagrams computed in the soft glassy rheology model are shown in Figs.~\ref{fig:ELB} and~\ref{fig:VLB}, in the ELB and VLB representations respectively. In each case we explore the same three noise temperatures as in Fig.~\ref{fig:velo_prof}, although now the initial sample age before shearing commenced is $t_{\rm w}=10$ throughout. The solid lines pertain to the heterogeneous model that takes shear banding into account. The dashed lines are for simulations that impose upfront a purely homogeneous flow, disallowing any possibility of shear banding. For a simple linear elastic solid, the ELB curve would comprise a straight line through the origin. In contrast, a purely viscous liquid would give an ellipse. In the SGR model, the ELB curves for low imposed strain amplitudes indeed show purely elastic response. (We do not present these here.) In contrast, for strain amplitudes $\gamma_0>1$ we see highly nonlinear ELB curves. Strongly nonlinear ELB curves have been observed experimentally in soft glassy materials in Refs.~\cite{Renou2010,Rogers:2011,Poulos2015}.
These ELB curves contain essentially the same information as discussed in the context of Fig.~\ref{fig:velo_prof} above, but with time now as a hidden parameter that increases as the curve is explored in the clockwise direction during the course of any LAOS cycle. The bottom-left to top-right sector corresponds to the positive strain-rate half of the cycle, in which the sample is straining in the forward direction. With this in mind, we now identify in the ELB curves a sequence of physical processes~\cite{Rogers:2011} corresponding to the alternating competition over the course of each cycle between glassy aging and flow rejuvenation, and between elastic and viscous response. The bottom-left of the ELB curve corresponds to the time at which the strain-rate switches from negative to positive and the sample starts being sheared in the forward direction. Initially this shear is of low rate and the sample accordingly is old and aging (for the low frequencies $\omega <1$ to which the SGR model applies), with rather elastic rheological response: the stress initially increases linearly with strain. As the shear rate then progressively increases during the first quarter cycle, the increasingly strong shearing acts to rejuvenate the sample. We then see a yielding process in which the stress goes through an overshoot as a function of strain, before declining to a flowing regime in which it remains almost constant. The same sequence of processes then repeats in reverse, during the negative straining half cycle, clockwise from top right to bottom left in the ELB curve. \begin{figure*}[!tbp] \includegraphics[width=16.0cm]{./Fig7.eps} \caption{Left panels (a,c,e): dynamic phase diagrams showing the cycle-averaged degree of banding in the heterogeneous form of the soft glassy rheology model in large amplitude oscillatory shear for $x=0.3,0.7,1.1$ respectively. Dashed lines show constant $\dot{\gamma}_0$.
The $\times$ indicate the grid of $\gamma_0,\omega$ values explored in more detail in the ELB and VLB curves of Figs.~\ref{fig:ELB} and~\ref{fig:VLB}. Initial sample age $t_{\rm w}=10.0$ for all three noise temperatures. Data averaged over 50th to 100th cycles. Right panels (b,d,f) show the counterpart discrepancy between the ELB curves calculated within the assumption of homogeneous flow, and those calculated allowing for shear banding, for the same parameters. Number of streamlines $n=25$, number of SGR elements per streamline $m=100$, diffusivity $w=0.05$, toy cell curvature $\kappa=0$, and initial heterogeneity $\epsilon=0.1$. } \label{fig:phasediag} \end{figure*} In each of the ELB curves, the colorscale shows the degree of banding $\Delta \ensuremath{\dot{\gamma}}$ at any point in the cycle. Consistent with our discussion of Fig.~\ref{fig:velo_prof} above, we find the appearance of shear banding to be closely associated with an overshoot in the signal of stress as a function of strain in the ELB curves. Typically, shear bands form as the overshoot is approached and persist for some time as the stress declines afterwards. This behavior is strongly reminiscent of transient shear banding associated with stress overshoot in the startup of shear at a constant rate~\cite{Moorcroft2011}, as summarized in Sec.~\ref{sec:introduction} above. For an ergodic viscoelastic fluid with a fixed characteristic stress relaxation time $\tau$, we expect a sequence of LAOS experiments repeated with the same amplitude $\gamma_0$ for progressively lower values of the imposed frequency $\omega$ to reveal a progression from elastic-like response in the high frequency regime $\omega\tau\gg 1$ to viscous-like response in the low frequency regime $\omega\tau\ll 1$.
Furthermore, in the limit $\omega\to 0$ we expect to recover a scenario in which the fluid repeatedly sweeps quasistatically up and down its viscous steady state flow curve as the strain rate slowly increases and decreases over the course of each cycle. A Lissajous-Bowditch curve plotted in the viscous representation of stress as a function of strain rate should then correspond to the fluid's underlying steady state flow curve. For any material in which the constitutive curve is purely monotonic, shear banding would be impossible in this quasi-static limit. Such a scenario was indeed explored in ergodic polymeric fluids in Refs.~\cite{Adams2009,Carter2015}. However, in the glass phase $x<1$ of the soft glassy rheology model we find no such progression with decreasing frequency leftwards along any row of the Pipkin grids in Figs.~\ref{fig:ELB}(a,b) and~\ref{fig:VLB}(a,b). Even at the lowest accessible frequencies we still observe strongly elastic response, in some part of the cycle at least, with the stress increasing almost linearly with strain in the ELB representation $\sigma(\gamma)$. In the viscous representation $\sigma(\ensuremath{\dot{\gamma}})$, we never find the VLB curve to approach the underlying steady state flow curve: instead, it displays markedly open loops even at the lowest frequencies accessible numerically. Furthermore, we find that strong shear banding likewise persists, despite the underlying constitutive curve being monotonic. This highly counterintuitive behavior arises from a basic competition within each cycle between glassy aging in the low shear rate phase of the cycle alternating with flow-induced rejuvenation, yielding and the associated shear banding in the high shear rate part of the cycle. Put simply: an aging material has no fixed characteristic relaxation rate $1/\tau$ against which we can set the frequency $\omega$ of the imposed oscillation to be small.
This finding has far-reaching implications for the flow of aging soft glasses, suggesting a strong predisposition to shear banding even in imposed flow protocols of arbitrarily slow time-variation~\cite{SM}. In contrast, for noise temperatures $x>1$ above the model's glass point, the underlying flow curve is of power law fluid form. In the absence of flow, true aging is absent~\cite{Fielding:2000}, although very long transients associated with sluggish relaxation timescales may nonetheless still arise. In consequence, in a sequence of LAOS experiments performed at fixed oscillation amplitude $\gamma_0$ for progressively smaller values of the imposed frequency $\omega$, the ELB and VLB curves enclose a progressively smaller area. For noise temperatures far enough above the glass point and low enough frequencies, the VLB curves eventually tend to the steady state flow curve, with no associated shear banding. However, for the noise temperature $x=1.1$ considered here, only just above the glass point, we have not been able to access low enough frequencies to see a return to purely homogeneous response. It would be interesting in future work to explore the response in the low frequency limit for noise temperatures just above the glass point. In Figs.~\ref{fig:ELB} and~\ref{fig:VLB}, we have discussed the response of the SGR model to a series of LAOStrain experiments with a set of imposed amplitude and frequency values $(\gamma_0,\omega)$ arranged on a $3\times 3$ grid. To explore more fully the regimes of amplitude and frequency in which significant banding arises, we show in the left panels (a,c,e) of Fig.~\ref{fig:phasediag} full dynamic phase diagrams, respectively for each of the three noise temperatures $x=0.3,0.7,1.1$. In any such phase diagram, each coordinate pair $(\gamma_0,\omega)$ corresponds to a LAOStrain experiment performed with that given amplitude and frequency.
Represented by the colorscale at each $(\gamma_0,\omega)$ is then the cycle-averaged degree of banding $\Delta_c \ensuremath{\dot{\gamma}}$, as defined in Eqn.~\ref{eqn:degree_banding_average}, arising in a LAOS experiment performed with that given strain amplitude and frequency. We have checked that a value $\Delta_c\ensuremath{\dot{\gamma}} > 0.5$ corresponds to strongly visually apparent banding in the flow profiles. \begin{figure}[!tbp] \includegraphics[width=7.5cm]{./Fig8.eps} \caption{Transition from non-banded to banded flow in the soft glassy rheology model at a noise temperature $x=0.3$ in a series of LAOS experiments performed at a fixed frequency $\omega=0.1$ for increasing values of the strain amplitude $\gamma_0$. Device curvature $\kappa=0.01$, sample age $t_{\rm w}=10.0$, $\epsilon=0.1$, $w=0.05$. Data averaged over 50th to 100th cycles, and over 25 separate simulation runs. $m=1600$, $n=75$.} \label{fig:transition} \end{figure} For all the noise temperatures shown, both in the glass phase and just above the glass point, we find significant banding across a substantial region of the plane of imposed strain amplitude and frequency: roughly, in the glass phase $x<1$, for strain amplitudes $\gamma_0> 1$ and strain rate amplitudes $\ensuremath{\dot{\gamma}}_0=\gamma_0\omega < \ensuremath{\dot{\gamma}}_{0{\rm c}}(x)$. (Lines of constant strain rate are shown by the dashed lines in Fig.~\ref{fig:phasediag}.) The value $\ensuremath{\dot{\gamma}}_{0{\rm c}}(x)$ of the strain rate amplitude below which significant banding is observed clearly decreases with increasing noise temperature $x$. Accordingly, the degree of banding for a given pair of values of the imposed oscillation amplitude and frequency $\gamma_0,\omega$ decreases with increasing $x$. This can be understood by appreciating that for increasing values of $x$ in the model's glass phase, we see less pronounced aging.
Indeed, true aging is eliminated in favor of long transient evolution to a sluggish steady state for $x>1$. Accordingly, the repeated aging and rejuvenation that underpins the triggering of shear banding in each cycle becomes less pronounced with increasing $x$. Inspecting again the color maps of the degree of banding as a function of imposed strain amplitude and frequency in the phase diagrams of Fig.~\ref{fig:phasediag}, we see that, at any noise temperature $x$, the transition from non-banded to banded flow, in a series of LAOS experiments performed at a fixed value of the frequency $\omega$ and progressively increasing amplitude $\gamma_0$, appears to be rather sharp. This is investigated in Fig.~\ref{fig:transition}, which indeed confirms a sharp onset of banding with increasing strain amplitude. Most theoretical studies of LAOS to date have imposed upfront a homogeneous shear flow, discarding any possibility of shear banding. However, our results in Figs.~\ref{fig:ELB} and~\ref{fig:VLB} show the danger of calculating rheological fingerprints (ELB or VLB curves) within any such assumption. In each panel of Figs.~\ref{fig:ELB} and~\ref{fig:VLB}, the solid line shows the Lissajous-Bowditch curve in a calculation that properly allows for banding, while the dashed line shows the corresponding curve in a calculation that disallows banding and imposes homogeneous flow. As can be seen, the presence of shear banding can cause a strong discrepancy between these two curves, particularly for strain amplitudes that are only just in the nonlinear regime. To explore this discrepancy further, in the right panels (b,d,f) of Fig.~\ref{fig:phasediag} we show as a color map in the plane of imposed strain amplitude and frequency the maximum difference in stress $\Delta_m \sigma$ between the homogeneous and heterogeneous calculations.
For numerical convenience, this is measured over a time interval $T/10$ following the peak in the stress signal for the heterogeneous flow, where $T$ is the time-period of the oscillation. (This is indeed the time-interval when any difference between the two signals is most pronounced.) As can be seen, for the noise temperatures $x=0.3,0.7$ in the glass phase, a strong discrepancy between the homogeneous and heterogeneous calculations is observed for imposed strain amplitudes just into the nonlinear regime $\gamma_0\gtrsim 1$. For the noise temperature $x=1.1$ above the glass point, where the model shows ergodic power law fluid behavior, this discrepancy is essentially nonexistent. (However, strong discrepancies were reported in a model of ergodic polymeric fluids in Ref.~\cite{Carter2015}.) An important message of this work is therefore to counsel caution in seeking to fingerprint complex fluids via theoretical calculations that assume homogeneous flow. \begin{figure*}[!tbp] \includegraphics[width=15.0cm]{./Fig9.eps} \caption{(a,c,e) Elastic Lissajous curves of the SGR model in LAOStrain at noise temperatures $x=0.3,0.7,1.1$ respectively. In each case the oscillation frequency $\omega=10^{-3}$, with curves shown for values of the strain amplitude $\gamma_0=1,2.51,6.31,10$. (b) The cage modulus $G_{\rm c}$ ($\blacksquare$), storage modulus $G'$ ($\bullet$) and loss modulus $G''$ ($\circ$) extracted from a family of curves, as a function of imposed strain amplitude for the same frequency $\omega=10^{-3}$. (d) Maximum stress $\sigma_{\rm max}$ ($\blacksquare$) and dynamic yield stress $\sigma_{\rm dyn}$ ($\square$). (f) Strain acquired at the stress maxima since strain reversal $\gamma_{\rm ac}$ ($\blacksquare$) as defined in the main text. Lower and upper dotted lines in (f) show $\gamma_{\rm ac}=\gamma_0$ and $\gamma_{\rm ac}=2\gamma_0$ respectively. Initial sample age $t_{\rm w}=10.0$. Data averaged over 50th to 100th cycles.
Number of streamlines $n=25$, number of SGR elements per streamline $m=100$, diffusivity $w=0.05$, toy curvature parameter $\kappa=0$, initial heterogeneity $\epsilon=0.1$. In each of (b,d,f), the color coding with respect to noise temperature matches that of (a,c,e).} \label{fig:osc_rogers} \end{figure*} Finally in this section on large amplitude oscillatory shear strain, we seek to interpret the elastic Lissajous-Bowditch (ELB) curves of the heterogeneous soft glassy rheology model within the framework of a `sequence of physical processes', as introduced by Rogers et al. in Ref.~\cite{Rogers:2011} and applied to yield stress and power law fluids in Ref.~\cite{Rogers:2012}. In particular, we shall compute the various nonlinear quantities proposed by Rogers et al. as being useful measures of the response of yielding materials in LAOStrain. With this in mind, in the left panels of Fig.~\ref{fig:osc_rogers} we show again ELB curves for our three different noise temperatures $x=0.3,0.7,1.1$, respectively in panels from top to bottom. In each case, we show results for a fixed value of the cycle frequency $\omega=10^{-3}$, for several different values of the imposed strain amplitude $\gamma_0$. For each such curve we then computed the storage and loss moduli, $G'$ and $G''$, as defined in Eqn.~\ref{eqn:moduli}. These are plotted as a function of the imposed strain amplitude $\gamma_0$ in the top right panel of Fig.~\ref{fig:osc_rogers}, by the filled and open circles respectively. The elastic modulus decreases with increasing $\gamma_0$: initially gently in the linear regime $\gamma_0\lesssim 1$, then much more rapidly in the nonlinear regime $\gamma_0 \gtrsim 1$. The loss modulus $G''$ instead initially increases with $\gamma_0$ in the linear regime, before showing a peak then subsequently decreasing in the nonlinear regime. These forms are consistent with the earlier results of Ref.~\cite{Sollich1998}. 
In the linear regime, $G'>G''$, with the reverse true in the strongly nonlinear regime. Both quantities decrease with increasing noise temperature $x$, for all values of the imposed strain amplitude $\gamma_0$. In the nonlinear regime, all the quantities shown in Fig.~\ref{fig:osc_rogers} are in a state of cycle-to-cycle invariance (to excellent approximation)~\footnote{See the supplementary material of Ref.\cite{Ranga:2016}}. In the linear regime, the values of $G'$ and $G''$ slowly age. This was studied previously~\cite{Fielding:2000} and we do not consider it further here. The storage modulus $G'$ is intended to characterize the material's elastic response. As just noted, it decreases dramatically through the nonlinear regime to become small at high values of the imposed strain amplitude $\gamma_0$. While this may be a reasonable representation of the response of the material integrated over an entire cycle, $G'$ nonetheless fails to capture the obvious region of elastic response that persists even at large imposed strain amplitudes, in the part of the ELB curves near flow reversal at $\gamma(t)=\pm\gamma_0$, where the stress $\sigma(t)$ is small. Recall the steeply sloping sections of the ELB curves in Fig.~\ref{fig:osc_rogers}a). To characterize this regime of elastic response near flow reversal, Rogers et al. defined the `cage modulus': \begin{equation} \label{eqn:cage} G_{\rm c}={\frac{d \sigma}{d \gamma}}\at[\Big]{\sigma=0}. \end{equation} Our results for this quantity, extracted from the ELB curves of Fig.~\ref{fig:osc_rogers}a),c,e), are shown in Fig~\ref{fig:osc_rogers}b). In the linear viscoelastic regime $\gamma_0\to0$, it was proved analytically in Ref.~\cite{Rogers:2011} for these yielding materials that $G_{\rm c}=G'+G''^2/G'$. We have verified that this relation is indeed satisfied for our data. 
Beyond the linear regime, the cage modulus remains almost constant across the full range of $\gamma_0$ considered, even as the storage modulus falls dramatically at large $\gamma_0$. In this way, the cage modulus is able to capture the intra-cycle elasticity observed for small stresses near strain reversal in the ELB curves, even at large values of the imposed strain amplitude $\gamma_0$. At any given imposed $\gamma_0$, the cage modulus $G_{\rm c}$ decreases with increasing noise temperature $x$. \begin{figure*}[!tbp] \includegraphics[width=15.0cm]{./Fig10.eps} \caption{Left panels (a,c,e): dynamic phase diagrams showing the cycle-averaged degree of shear banding in the heterogeneous form of the soft glassy rheology model in large amplitude square, triangle and sawtooth strain rate respectively. Dashed lines show constant $\dot{\gamma}_0$. Right panels (b,d,f) show counterpart elastic Lissajous-Bowditch curves for the homogeneous (dashed lines), and heterogeneous (solid lines) models for the grid of $\gamma_0,\omega$ values indicated by $\times$ in the left panels. In the heterogeneous calculations, the instantaneous degree of banding $\Delta \dot{\gamma}$ is indicated by the color-scale. Noise temperature $x=0.3$. Initial sample age $t_{\rm w}=10.0$. Data averaged over 50th to 100th cycles. Heterogeneous runs have: number of streamlines $n=100$, number of SGR elements per streamline $m=100$, diffusivity $w=0.1$, initial heterogeneity $\epsilon=0.1$, and toy curvature parameter $\kappa=0$. Homogeneous runs have $m=1000$ SGR elements.} \label{fig:dif_prot} \end{figure*} Another measure that is commonly discussed in relation to yield stress fluids is that of the `yield stress' itself. Indeed several different quantitative definitions are commonly used to characterize this intuitive concept~\cite{Dinkgreve:2016}. 
Broadly, the stress above which the material starts flowing is termed the static yield stress, while the stress below which it stops flowing is called the dynamic yield stress. In the SGR model, the maximum stress that can be maintained indefinitely without the material flowing with a non-zero strain rate at long times (the `static yield stress'), and the minimum stress obtained in sweeping the imposed strain rate $\ensuremath{\dot{\gamma}}\to 0$ (the `dynamic yield stress') are the same, and give a well defined `yield stress' $\sigma_{\rm y}(x)$ that is non-zero for $x<1$~\cite{Fielding:2000}. In this context of oscillatory flows, Rogers et al.~\cite{Rogers:2011} sought to obtain measures of the yield stress from the ELB curves. In particular, they defined the static yield stress to be maximum stress $\sigma_{\rm max}$ in the ELB curve, and the dynamic yield stress $\sigma_{\rm dyn}$ to be the value of the stress at the point where the strain is maximum, $\gamma(t)=\gamma_0$. We have marked these quantities on the ELB curves of Fig.~\ref{fig:osc_rogers}a,c,e) by filled and open squares respectively. Fig.~\ref{fig:osc_rogers}d) plots the same quantities (with the same symbol key) as a function of imposed strain amplitude $\gamma_0$. In the linear viscoelastic regime $\gamma_0\to 0$ the two quantities coincide and follow a linear elastic increase with $\gamma_0$. In the nonlinear regime $\gamma_0\gtrsim 1$, they start to separate, with the dynamic quantity $\sigma_{\rm dyn}$ becoming lower than the static one $\sigma_{\rm max}$. At any fixed $\gamma_0$, both $\sigma_{\rm max}$ and $\sigma_{\rm dyn}$ decrease with increasing noise temperature $x$, as expected. However there is a clear difference between the dependence of the static yield stress $\sigma_{\rm max}$ on the imposed strain amplitude $\gamma_0$ in the nonlinear regime $\gamma_0\gtrsim 1$ for noise temperatures in the glass phase and those above the glass point. In the glass phase, it is roughly constant. 
Above the glass point, it increases with increasing $\gamma_0$. Another measure commonly discussed for yield stress fluids is that of the yield strain. Several different definitions again exist. In the present context of LAOStrain we consider $\gamma_{\rm ac}$, defined as the strain acquired between the point of strain reversal (where $\gamma=-\gamma_0$) and the point of absolute maximum stress in the cycle following the strain reversal (i.e., the point shown by the filled squares in Fig.~\ref{fig:osc_rogers}a). Our results for this quantity are shown in Fig.~\ref{fig:osc_rogers}(f), with solid squares. In the linear viscoelastic regime, the ELB curve is a straight line through the origin, giving $\gamma_{\rm ac}=2 \gamma_0$. The trends reported in the SGR model in Fig.~\ref{fig:osc_rogers}d) for $\sigma_{\rm max}$ and $\sigma_{\rm dyn}$ and in Fig.~\ref{fig:osc_rogers}f) for $\gamma_{\rm ac}$ broadly resemble those reported experimentally in star polymers~\cite{Rogers:2011}, a hard sphere suspension~\cite{vanderVaart:2013}, and a colloidal gel ~\cite{Kim:2014}, though we do not attempt quantitative comparison. \section{Results: Large Amplitude Square/Triangle/Sawtooth Wave strain rate} \label{sec:LAOSothers} In the previous section, we presented the results of theoretical calculations suggesting that soft glassy materials exhibit shear banding in large amplitude oscillatory shear strain, across a broad range of values of the amplitude $\gamma_0$ and frequency $\omega$ of the imposed oscillation. In the glass phase, we showed that this effect persists even at the lowest frequencies accessible numerically, even though the model's underlying constitutive curve is purely monotonic, rendering it incapable of supporting shear bands as the true steady state response to a steadily imposed shear of constant rate. 
We interpreted this counterintuitive behavior as arising from an alternating competition within each cycle between glassy aging in the low strain rate phase, and flow-rejuvenation in the high strain rate phase. In this section, we show that same scenario also arises in other large amplitude time-periodic shear strain protocols. While being far from conclusive (we perform our calculations in just one particular model of soft glasses, for four different strain-imposed waveforms), this finding has potentially far reaching implications for the rheology of soft glasses more generally, in suggesting a rather generic predisposition to shear banding in time-varying flows of any waveform, even in the limit of an arbitrarily slow time-variation. With these remarks in mind, we consider now the protocols of large amplitude square, triangle and sawtooth wave strain rate, as sketched in Fig.~\ref{fig:protocols}b)-d). (These imposed flows are in fact the basis functions for examining the oscillatory shear stress response of materials as proposed by Klein et al.~\cite{Klein:2007}.) Corresponding to the dynamic phase diagram of the cycle averaged degree of shear banding $\Delta_c\gdot$ shown in Fig.~\ref{fig:phasediag} for oscillatory shear flow, the counterpart phase diagrams for these other three protocols are shown in the left panels of Fig.~\ref{fig:dif_prot}, for a single noise temperature in the model's glass phase. We indeed observe significant banding for a large range of values of the amplitude $\gamma_0$ and frequency $\omega$ of the imposed oscillation, for all three protocols. Perhaps surprisingly, even the quantitative degree of banding is similar in each case, and is seen over a similar region of the $\gamma_0,\omega$ plane, though with slightly less banding in the sawtooth case. In the right panels (b,d,f) of Fig.~\ref{fig:dif_prot}, we show ELB curves corresponding to the grid of $\gamma_0,\omega$ values indicated in the counterpart phase diagrams in panels (a,c,e). 
In each case, we find a sequence of physical processes similar to that described above for LAOStrain. The local degree of banding $\Delta \ensuremath{\dot{\gamma}}(t)$ is indicated by the color scale round each cycle. As can be seen, the onset of banding is again closely associated with the stress overshoot in each case, closely reminiscent of banding associated with stress overshoot in the simpler protocol of shear startup~\cite{Moorcroft2011}. As in LAOStrain, we see a significant difference between the ELB curves as calculated allowing shear bands to form (solid lines) and those for purely homogeneous shear (dashed lines), particularly for imposed strain amplitudes in the region of transition from no banding to banding. \begin{figure}[!tbp] \includegraphics{./Fig11.eps} \caption{ (a) Cage modulus $G_{\rm c}$, (b) maximum stress $\sigma_{\rm max}$ (filled symbols) and dynamic yield stress $\sigma_{\rm dyn}$ (open symbols) and (c) strain acquired between the point of strain reversal and that of maximum stress. In each case data are shown for LAOStrain ($\bullet$), square wave strain rate (\textcolor{red}{$\blacksquare$}), triangular wave strain rate (\textcolor{blue}{$\blacktriangle$}) and sawtooth strain rate (\textcolor{green}{$\blacklozenge$}). Noise temperature $x=0.3$. Initial sample age $t_{\rm w}=10.0$. Data averaged over 50th to 100th cycles. The lower and upper dotted lines in (c) show $\gamma_{\rm ac}=\gamma_0$ and $\gamma_{\rm ac}=2\gamma_0$ respectively. Number of streamlines $n=100$, number of SGR elements per streamline $m=100$, initial heterogeneity $\epsilon=0.1$, toy cell curvature $\kappa=0$, diffusivity $w=0.1$.} \label{fig:prot_rogers} \end{figure} To characterise in more detail the ELB curves of these three alternative protocols, we discuss finally the non-linear measures discussed in the context of LAOStrain in Sec.~\ref{sec:LAOStrain}. 
As seen in panel (a) of Fig.~\ref{fig:prot_rogers}, the cage modulus is approximately the same for all the four protocols. The maximum stress $\sigma_{\rm max}$ and the stress $\sigma_{\rm dyn}$ at the point of flow reversal $\gamma=\gamma_0$ are shown in panel b). The strain $\gamma_{\rm ac}$ acquired between the point of flow reversal and the point at which the stress attains its maximum value is shown in panel c). For all four protocols, in the linear regime we see essentially elastic response in which each of $\sigma_{\rm max}$, $\sigma_{\rm dyn}$ and $\gamma_{\rm ac}$ increases linearly with the strain amplitude $\gamma_0$ (though with a lower prefactor for $\sigma_{\rm dyn}$ and $\gamma_{\rm ac}$ in the case of the sawtooth wave because its imposed strain range is $0$ to $\gamma_0$, compared with $-\gamma_0$ to $\gamma_0$ for the other three protocols). \section{Results: Large Amplitude Oscillatory Stress} \label{sec:LAOStress} \begin{figure}[!tbp] \includegraphics[width=8cm]{./Fig12.eps} \caption{\textbf{Top:} Dynamic phase diagram showing shear banding in oscillatory shear stress protocol for the SGR model with a noise temperature $x=0.3$. \textbf{Bottom:} Viscous Lissajous Bowditch curves corresponding to $\times$ symbols in the top panel, with the degree of banding shown by the color scale. Initial sample age $t_{\rm w}=10.0$, data averaged over 50th to 100th cycles. Thin dotted lines show steady state flow curves $\sigma(\ensuremath{\dot{\gamma}})$. Number of streamlines $n=25$, number of SGR elements per streamline $m=100$, diffusivity $w=0.05$, initial heterogeneity $\epsilon=0.1$, toy cell curvature $\kappa=0$.} \label{fig:osc_stress} \end{figure} \begin{figure}[!tbp] \includegraphics[width=8cm]{./Fig13.eps} \caption{Response of the SGR model to LAOStress of amplitude $\sigma_0/\sigma_y-1=0.2$, frequency $\omega=0.001$ for cycle number $N=50$. 
Signals show (a) shear rate, (c) degree of shear banding, and (d) inverse effective sample age as a function of time over a cycle. (b) Snapshots of shear banded velocity profiles corresponding to the symbols in the other figures are shown. Initial sample age $t_{\rm w}=10$, number of streamlines $n=25$, number of SGR elements per streamline $m=100$, diffusivity $w=0.05$, initial heterogeneity $\epsilon=0.1$, toy curvature parameter $\kappa=0$.} \label{fig:laostress_velo} \end{figure} As summarised in Sec.~\ref{sec:introduction} above, when an initially well rested sample of soft glass of some age $t=t_{\rm w}$ is subject to the switch-on of a stress that is held constant thereafter, it initially shows a regime of slow creep in which the strain rate progressively reduces over time. For imposed stresses above the yield stress, this regime of slow creep is then followed by a yielding process in which the strain rate increases towards its final flowing state on the flow curve. During the time interval in this yielding process over which the strain rate signal simultaneously curves and slopes upwards as a function of time, the sample is predicted to be unstable to the formation of shear bands~\cite{Moorcroft2013a}. Intuitively, we might expect a large amplitude oscillatory shear stress (LAOStress) protocol loosely to correspond to a repeating sequence of positive and negative step stresses. If a yielding process arises following each of these steps in each half cycle, we might then intuitively expect to see the formation of shear bands associated with that yielding, by analogy with the banding seen in the simpler step stress protocol just described. With this in mind, we now consider finally the response of the soft glassy rheology model in LAOStress, in its glass phase $x<1$. 
In Fig.~\ref{fig:osc_stress} (top) we plot as a color-scale the degree of shear banding maximized over a cycle for a wide range of LAOStress experiments of imposed stress amplitude $\sigma_0$ and frequency $\omega$. As can be seen, significant shear banding arises across a broad region of the plane of $\sigma_0,\omega$. Banding persist even at the lowest frequencies accessible numerically (in a manner apparently consistent with it persisting to the limit of zero frequency $\omega\to 0$, were this accessible numerically), as in the strain-imposed protocols considered in previous sections, despite the model's underlying flow curve being purely monotonic, precluding banding as the true steady state response to a constant imposed shear stress. In the lower panel, we present the corresponding VLB curves for the grid of values of imposed stress amplitude $\sigma_0$ and frequency $\omega$ marked by crosses in the top panel. The time-dependent degree of shear banding is shown as a color-scale round each cycle. The results can be understood as follows. For most of the cycle the stress is below the yield stress, and the shear rate is accordingly small. Once the stress exceeds the yield stress, the sample yields and starts to flow (at low frequencies at least - at higher frequencies there is insufficient time for this to occur). Associated with this yielding process is indeed the formation of shear bands, as predicted by our intuitive argument above. Noting that the shear banding only arises in a relatively small portion of the cycle in LAOStress, we chose in our color-map in the left panels to show the degree of banding maximized over a cycle. The response of the system as a function of time round a cycle is shown in more detail in Fig.~\ref{fig:laostress_velo}. Consistent with the preceding discussion, shear bands form in time-regimes where the stress exceeds the yield stress, and the material rejuvenates and starts to flow. 
\section{Conclusions} \label{sec:conclusions} In this work, we have studied in detail the response of soft glassy materials, including both yield stress fluids and power law fluids, to large amplitude time-periodic flow protocols, in the context of the soft glassy rheology model. For each of large amplitude oscillatory shear strain, large amplitude square wave strain rate, large amplitude triangular wave strain rate, large amplitude sawtooth strain rate and large amplitude oscillatory shear stress, we find the response of the system to be significantly shear banded, for a wide range of values of the amplitude $\gamma_0$ (or $\sigma_0$) and frequency $\omega$ of the imposed oscillation. Indeed, our results (in the glass phase $x<1$ at least) suggest that in the limit $\omega\to 0$, significant banding will be present for all imposed strain amplitudes in the nonlinear regime (with a smaller range of amplitudes implicated for larger frequencies). We emphasize that this is true even though the model's underlying constitutive curve is purely monotonic, such that its steady state response to a steadily imposed shear of constant rate is incapable of supporting shear bands. We attribute this to a repeated competition, within each cycle, of glassy aging and flow rejuvenation. In the four strain-imposed protocols, the formation of shear bands in each half cycle appears closely associated with the presence of a stress overshoot in the elastic Lissajous Bowditch curve of stress as a function of strain, in close analogy to the transient shear banding associated with stress overshoot in shear startup studied previously~\cite{divoux2010,Divoux:2011b,Moorcroft2011,Moorcroft2013a}. Loosely and intuitively, therefore, we interpret LAOStrain (and the other strain-imposed protocols) in terms of a repeating series of forward and reverse shear startup flows. 
Likewise, in the stress-imposed protocol the formation of shear bands in each half cycle appears closely associated with a yielding process, just beyond the point at which the stress first exceeds the yield stress in the underlying constitutive curve. Again, this closely mirrors the transient shear banding associated with yielding following the imposition of a step stress studied previously~\cite{Divoux:2011a,Moorcroft2013a}. Loosely and intuitively, therefore, we interpret LAOStress in terms of a repeating sequence of positive and negative step stress experiments. Our results suggest a possible generic predisposition of aging glassy materials to flow in a heterogeneous, shear banded manner when subject to large amplitude time-varying flows of even arbitrarily slow time-variation. It would be very interesting to investigate this suggestion further, both experimentally and in molecular simulations of glassy systems. \begin{acknowledgments} The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement number 279365. The authors thank Prof. P. Sollich for providing data to check our code in its homogeneous flow mode in Fig.~\ref{fig:comp_sollich}. \end{acknowledgments}